Web3 has a memory problem, and we finally have a solution



Web3 has a memory problem. Not in the sense of forgetting things, but in a core architectural sense: it lacks a true memory layer.

Today's blockchains are not entirely alien to traditional computers, but they are still missing a fundamental piece inherited from computer science: a memory layer built for decentralization that can support the next iteration of the internet.

Muriel Médard is a speaker at Consensus 2025, May 14-16.

After World War II, John von Neumann laid out the architecture of modern computers. Every computer needs input and output, a CPU for control and arithmetic, and memory to store the latest version of data, along with a “bus” to retrieve and update that data in memory. Commonly known as RAM, this architecture has been the foundation of computing for decades.

In essence, web3 is a decentralized computer, a “world computer.” At the upper layers it is quite recognizable: operating systems (EVM, SVM) running across thousands of decentralized nodes, powering decentralized applications and protocols.

But dig deeper and something is missing. The essential memory layer, for storing, accessing and updating short- and long-term data, resembles neither the memory bus nor the memory unit of the von Neumann model.

Instead, it is a patchwork of different best-effort approaches, and the results are generally messy, inefficient and hard to navigate.

Here is the problem: if we are going to build a world computer that departs fundamentally from the von Neumann model, there had better be a very good reason for it. Right now, web3's memory layer is not just different; it is convoluted and inefficient. Transactions are slow. Storage is slow and expensive. Scaling to mass adoption with the current approach is nearly impossible. And that is not what decentralization was supposed to look like.

But there is another way.

Many people in this space are doing their best to work around this limitation, and we have now reached a point where the current workarounds simply cannot keep up. This is where algebraic coding comes in: it uses equations to represent data, gaining efficiency, resilience and flexibility.

The core question is this: how do we implement decentralized coding for web3?

A new memory infrastructure

That is why I made the leap from academia, where I held the NEC Professorship of Software Science and Engineering at MIT, to dedicate myself, along with a team of experts, to advancing high-performance memory for web3.

I saw something bigger: the potential to redefine how we think about computing in a decentralized world.

My team at Optimum is building decentralized memory that works like a dedicated computer. Our approach is powered by Random Linear Network Coding (RLNC), a technology developed in my MIT lab over nearly two decades. It is a proven data-coding method that maximizes throughput and resilience in high-reliability networks, from industrial systems to the internet.

Data coding is the process of converting information from one format to another for efficient storage, transmission or processing. It has been around for decades, and many iterations are in use in today's networks. RLNC is a modern data-coding approach suited specifically to decentralized computing. The scheme transforms data into coded packets for transmission across a network of nodes, ensuring high speed and efficiency.

With multiple engineering awards from major global institutions, more than 80 patents and numerous real-world deployments, RLNC is no longer just a theory. RLNC has earned significant recognition, including the 2009 IEEE Communications Society and Information Theory Society Joint Paper Award for the work “A Random Linear Network Coding Approach to Multicast.” RLNC's impact was further recognized with the 2022 IEEE Koji Kobayashi Computers and Communications Award.

RLNC is now ready for decentralized systems, enabling faster data propagation, efficient storage and real-time access, making it a key solution for web3's scalability and efficiency challenges.

Why this matters

Let's step back. Why does all this matter? Because we need memory for the world computer that is not only decentralized but also efficient, scalable and reliable.

Currently, blockchains rely on ad hoc best-effort solutions that only partially achieve what memory delivers in high-performance computing. What they lack is a unified memory layer that covers both the memory bus for data propagation and RAM for data storage and access.

The bus part of the computer should not become the bottleneck, as it does now. Let me explain.

“Gossip” is the common method for data propagation in blockchain networks. It is a peer-to-peer communication protocol in which nodes exchange information with random peers to spread data across the network. In its current implementation, it struggles at scale.

Imagine you need 10 pieces of information from neighbors who repeat what they have heard. As you start talking to them, you mostly hear new information. But as you approach nine pieces out of 10, the chance of hearing something new from a neighbor drops, making the final pieces harder and harder to obtain. Ninety percent of the time, the next thing you hear is something you already know.

This is how blockchain gossip works today: efficient at the start, but redundant and slow when trying to complete the exchange of information. You would have to be extraordinarily lucky to hear something new every time.
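The effect described above is the classic “coupon collector” problem. A minimal simulation (hypothetical helper name, Python standard library only) shows how many random gossip messages it takes, on average, to hear all 10 pieces at least once:

```python
import random

def draws_to_collect_all(n_pieces: int, rng: random.Random) -> int:
    """Count how many uniformly random messages it takes to hear
    every one of n_pieces distinct pieces at least once."""
    heard: set[int] = set()
    draws = 0
    while len(heard) < n_pieces:
        heard.add(rng.randrange(n_pieces))
        draws += 1
    return draws

rng = random.Random(42)
n = 10
trials = [draws_to_collect_all(n, rng) for _ in range(10_000)]
avg = sum(trials) / len(trials)
# Coupon-collector expectation: n * (1 + 1/2 + ... + 1/n) ~= 29.3,
# nearly 3x the ideal minimum of 10 messages.
print(f"average messages to hear all {n} pieces: {avg:.1f}")
```

Most of the overhead comes at the tail end, exactly as the neighbor analogy suggests: the last piece alone costs about n messages on average.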

With RLNC, we overcome this central scalability problem of current gossip. RLNC works as if you were always that lucky: every time you hear information, it is information that is new to you. That means much higher throughput and much lower latency. RLNC-powered gossip is our first product, which validators can deploy through a simple API call to optimize data propagation for their nodes.
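To make the idea concrete, here is a toy sketch of the general random-linear-coding technique over a small prime field, GF(257). It is an illustration, not Optimum's implementation: each coded packet is a random linear combination of all source packets, and any set of linearly independent coded packets suffices to decode, which is why no single packet is ever redundant.

```python
import random

P = 257  # small prime field GF(257); real systems typically use GF(2^8) or larger

def encode(packets, rng):
    """Produce one coded packet: a random linear combination of the
    source packets, tagged with its coefficient vector."""
    coeffs = [rng.randrange(P) for _ in packets]
    combo = [sum(c * pkt[i] for c, pkt in zip(coeffs, packets)) % P
             for i in range(len(packets[0]))]
    return coeffs, combo

def decode(coded, k):
    """Recover the k source packets from k coded packets via Gaussian
    elimination over GF(P). Returns None if the coefficient vectors
    happen to be linearly dependent (then one more packet is needed)."""
    rows = [list(c) + list(v) for c, v in coded]  # augmented matrix [C | V]
    for col in range(k):
        pivot = next((r for r in range(col, len(rows)) if rows[r][col]), None)
        if pivot is None:
            return None
        rows[col], rows[pivot] = rows[pivot], rows[col]
        inv = pow(rows[col][col], P - 2, P)  # modular inverse via Fermat
        rows[col] = [x * inv % P for x in rows[col]]
        for r in range(len(rows)):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % P for a, b in zip(rows[r], rows[col])]
    return [row[k:] for row in rows[:k]]

rng = random.Random(7)
source = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]  # 3 packets, 3 symbols each
recovered = None
while recovered is None:  # dependence is rare (~1% here); retry if it happens
    coded = [encode(source, rng) for _ in range(3)]
    recovered = decode(coded, k=3)
print(recovered == source)
```

The key property: a receiver does not need any *particular* packet, just *enough independent* ones, so gossip never wastes a transmission on a duplicate.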

Now let's look at the memory part. It helps to think of memory as dynamic storage, like RAM in a computer or, for that matter, our closet. Decentralized RAM should mimic a closet: it must be structured, reliable and consistent. An item is either there or it isn't, no half-written bits, no missing sleeves. That is atomicity. Items stay in the order in which they were placed: you might see an earlier version, but never an incorrect one. That is consistency. And, unless it is moved, everything stays put; data does not disappear. That is durability.

Instead of the closet, what do we have? Mempools are not something we maintain on computers, so why do we have them in web3? The main reason is that there is no proper memory layer. If we think of data management in blockchains like managing clothes in our closet, a mempool is like a pile of laundry on the floor: you are never quite sure what is in there, and you have to rummage.

Transaction-processing delays can be extremely high on any single chain. Take Ethereum: it needs two epochs, or 12.8 minutes, to finalize any single transaction. Without decentralized RAM, web3 relies on mempools, where transactions sit until they are processed, resulting in delays, congestion and unpredictability.
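The 12.8-minute figure follows directly from Ethereum mainnet's consensus parameters (12-second slots, 32 slots per epoch, finality after two epochs):

```python
SECONDS_PER_SLOT = 12    # Ethereum mainnet slot time
SLOTS_PER_EPOCH = 32     # slots in one epoch (6.4 minutes)
EPOCHS_TO_FINALITY = 2   # a block finalizes after two epochs

finality_seconds = SECONDS_PER_SLOT * SLOTS_PER_EPOCH * EPOCHS_TO_FINALITY
finality_minutes = finality_seconds / 60
print(f"{finality_seconds} s = {finality_minutes:.1f} min")  # 768 s = 12.8 min
```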

Full nodes store everything, bloating the system and making retrieval complex and expensive. On a computer, RAM holds what is currently needed, while less-used data moves to cold storage, perhaps in the cloud or on disk. Full nodes are like a closet stuffed with every piece of clothing you have ever worn, from infancy until now.

This is not something we do on our computers, yet it persists in web3 because storage and read/write access are not optimized. With RLNC, we are creating decentralized RAM (DeRAM) for timely state storage and updates in an economical, resilient and scalable way.

DeRAM and RLNC-powered data propagation can address web3's biggest bottlenecks by making memory faster, more efficient and more scalable. They optimize data propagation, reduce storage bloat and enable real-time access without compromising decentralization. This has long been the missing piece of the world computer, but not for much longer.


