Technology you cannot see is still evolving. In such a deep-sea field, once a disruptive innovation occurs, it brings a whole new wave.

Editor’s note: This article comes from the WeChat public account “Geek Park” (ID: geekpark), author: opposition.


“If someone had said ten years ago that PCs would have no hard disks in the future, nobody would have believed it. But in fact, many PCs today no longer have a hard disk; they have an SSD instead. I think that five years from now, PCs may not even have SSDs anymore; everything will become memory.”

This is the prediction of Fan Chenggong, CEO of the “big memory” software company MemVerge. In May this year, the company announced a new round of strategic financing worth $19 million, with investors including industry giants such as Intel and Cisco.

The “big memory” Fan Chenggong speaks of may sound unfamiliar to many people, but it is closely related to all of us. It is a concept MemVerge has proposed for a new era of computing.

What is big memory?

To understand big memory, we must first understand computer architecture. In short, data storage in a computer is tiered: frequently used data is placed in more expensive memory, while less frequently used data is placed on cheaper hardware such as hard disks and SSDs (professionals collectively refer to these devices as “storage”, as opposed to memory). The reason is simple: it makes computing more economical.

Memory is expensive because of its fast access speed; it sits close to the CPU so the CPU can read and write frequently used data. However, the media used to make memory are volatile, so memory cannot serve as permanent data storage. Storage, by contrast, is slow to access but cheap and retains data permanently, so it is suited to holding infrequently used data. When the CPU wants to access data held in storage, the storage hands the data to memory, and memory hands it to the CPU.
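To make the tiering concrete, here is a minimal sketch contrasting rough, commonly cited order-of-magnitude access latencies for each tier. The figures are illustrative assumptions, not measurements from the article.

```python
# Rough, commonly cited order-of-magnitude access latencies (illustrative assumptions).
TIER_LATENCY_NS = {
    "CPU cache": 1,                    # ~1 ns
    "DRAM (memory)": 100,              # ~100 ns
    "PMem (persistent memory)": 300,   # a few hundred ns
    "NVMe SSD": 100_000,               # ~100 µs
    "Hard disk": 10_000_000,           # ~10 ms
}

def relative_to_dram(latency_ns: int) -> float:
    """Express a tier's latency as a multiple of DRAM latency."""
    return latency_ns / TIER_LATENCY_NS["DRAM (memory)"]

if __name__ == "__main__":
    for tier, ns in TIER_LATENCY_NS.items():
        print(f"{tier:<26} ~{ns:>12,} ns  ({relative_to_dram(ns):,.2f}x DRAM)")
```

The gap of several orders of magnitude between memory and storage is exactly why data is split between expensive, fast tiers and cheap, slow ones.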

Big memory originates from the birth of a memory medium called PMem (persistent memory). Hardware built on this medium combines the advantages of memory and storage: access is fast with low latency, data is not lost when power is cut, and, compared with ordinary memory, storing the same amount of data is cheaper. In this way, PMem hardware can both serve computation and shoulder the responsibility of storage.
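In practice, persistent memory is typically exposed to applications as a file on a DAX-enabled filesystem that a program maps directly into its address space. The minimal Python sketch below illustrates the idea with the standard mmap module; the mount point /mnt/pmem0 and the file name are assumptions, and on an ordinary machine the same code simply exercises a regular page-cached file.

```python
import mmap
import os

# Assumed path: a file on a DAX-mounted persistent-memory filesystem (e.g. /mnt/pmem0).
# On a machine without PMem this falls back to an ordinary file, which is fine for illustration.
PMEM_FILE = "/mnt/pmem0/example.dat"
SIZE = 4096

fd = os.open(PMEM_FILE, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, SIZE)

# Map the file into the process address space: loads and stores now address the medium directly.
buf = mmap.mmap(fd, SIZE)
buf[0:13] = b"hello, pmem!\n"   # an ordinary memory write, no read()/write() syscalls
buf.flush()                      # ask the kernel to make the stores durable
buf.close()
os.close(fd)
```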

It is foreseeable that in future storage architectures, PMem hardware will replace memory and squeeze out much of the value of storage; MemVerge calls this the era of big memory. Fan Chenggong predicts that in the future, applications will all run in big memory, and storage may rarely be used.

Intel is a pioneer in PMem. It began experimenting as early as 20 years ago and launched its PMem-based hardware, Optane, in 2019. Intel is not a lonely early adopter: hardware companies such as Micron and Hynix are also stepping up their efforts. Because of competitive clauses or R&D cycles, Intel is currently the only player on the market, but the PMem hardware market is expected to see a round of rapid growth within two years.

MemVerge’s software serves this explosive trend. More precisely, MemVerge provides an enterprise-level solution for PMem data centers, allowing existing software to run better on the upcoming new memory architecture.

Fan Chenggong explained that, first, new hardware requires a new programming model, but rewriting existing enterprise software is hugely expensive, so a migration tool is needed as a middle layer. Second, while PMem media already provide large capacity, handling truly large-scale data still requires linking machines together, so software is needed to pool the hardware. In addition, data services built on top of storage have always been an important part of the storage industry, and they constitute an important requirement of the new architecture as well.
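As a rough illustration of the first point, the toy sketch below shows what a transparent middle layer conceptually does: it intercepts ordinary allocation requests and decides which tier backs them, so the application code itself does not change. This is a concept sketch under invented names, not MemVerge's API.

```python
class TieredAllocator:
    """Toy illustration of a transparent tiering layer: applications keep calling
    an ordinary allocate(), and the layer decides whether DRAM or PMem backs it."""

    def __init__(self, dram_capacity: int, pmem_capacity: int):
        self.capacity = {"dram": dram_capacity, "pmem": pmem_capacity}
        self.used = {"dram": 0, "pmem": 0}

    def allocate(self, size: int, hot: bool = True) -> str:
        # Hot (frequently accessed) data prefers DRAM; everything else goes to PMem.
        tier = "dram" if hot and self.used["dram"] + size <= self.capacity["dram"] else "pmem"
        if self.used[tier] + size > self.capacity[tier]:
            raise MemoryError(f"no space left in {tier}")
        self.used[tier] += size
        return tier

# The application just asks for memory; placement policy lives in the middle layer.
alloc = TieredAllocator(dram_capacity=64, pmem_capacity=512)
print(alloc.allocate(32, hot=True))    # -> "dram"
print(alloc.allocate(48, hot=True))    # -> "pmem" (DRAM would overflow)
```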

To meet these three requirements, MemVerge has released a product called Memory Machine, which provides software services for enterprises on top of Intel Optane hardware. Memory Machine is already being used by customers in the financial and AI fields.

Who are the early adopters?

For enterprises, the motivation to try new technologies often stems from pain points in the actual business, especially for a new architecture like big memory. One of MemVerge’s clients is an investment bank on Wall Street.

During daily stock-market hours, this investment bank executes trades at an average rate of 50,000 transactions per second. The transaction data must be distributed in real time to more than 200 accounts. Some of these accounts are trading-related, such as hedge funds, banks and other traders; others are system-related, such as compliance management and risk assessment.

In this business, system latency translates directly into money gained or lost, so the bank spares no effort in reducing it. Before Optane appeared, the banking industry relied on a very mature solution called publish/subscribe (Pub/Sub), whose latency with SSDs and traditional networking was on the order of hundreds of microseconds. MemVerge built an Optane-plus-RDMA solution for this bank, and with Memory Machine’s software it reduced the latency to about 3 microseconds.
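A quick back-of-the-envelope calculation shows the message volume these figures imply; the transaction rate and subscriber count come from the article, while the legacy latency is an assumed midpoint of "hundreds of microseconds".

```python
# Fan-out volume implied by the figures quoted above.
transactions_per_second = 50_000
subscriber_accounts = 200

messages_per_second = transactions_per_second * subscriber_accounts
print(f"Messages fanned out per second: {messages_per_second:,}")   # 10,000,000

# Order-of-magnitude latency comparison from the article.
legacy_latency_us = 300        # assumed midpoint of "hundreds of microseconds"
optane_rdma_latency_us = 3
print(f"Latency reduction: roughly {legacy_latency_us / optane_rdma_latency_us:.0f}x")
```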

Also in the financial sector, recovering from data-center downtime has always been a headache. Because trading is real-time and highly concurrent, it is difficult to persist data to non-volatile storage while the session is running, which practitioners describe as “hard to write to disk”. Instead, the data is held temporarily in memory and written out in one batch after the trading day ends.

For safety, only a log is written to disk during trading. If the system goes down mid-session and the data in memory is lost, recovery requires replaying the log from the last point at which data was written to disk, that is, the previous night. Such downtime recovery is often measured in hours, while trading is measured in minutes.

Storage built on PMem media can effectively solve this problem. Because all the data lives in big memory that does not lose its contents when power is cut, there is no separate write-to-disk step. Combined with fast rollback through high-frequency snapshots, data recovery after downtime can be completed in minutes.
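The sketch below illustrates the high-frequency-snapshot idea in plain Python. It is a toy model of the recovery pattern described above, not MemVerge's implementation; the snapshot interval and data structures are assumptions.

```python
import copy
import time

class SnapshottedBook:
    """Toy model of high-frequency snapshots over an in-memory order book."""

    def __init__(self, snapshot_interval_s: float = 0.5):
        self.book = {}                 # live state (would live in persistent memory)
        self.snapshots = []            # (timestamp, state) pairs
        self.interval = snapshot_interval_s
        self._last_snapshot = 0.0

    def apply_trade(self, order_id: str, quantity: int) -> None:
        self.book[order_id] = self.book.get(order_id, 0) + quantity
        now = time.monotonic()
        if now - self._last_snapshot >= self.interval:
            # A real system would snapshot cheaply (e.g. copy-on-write); deepcopy is for illustration.
            self.snapshots.append((now, copy.deepcopy(self.book)))
            self._last_snapshot = now

    def recover(self) -> dict:
        """After a crash, roll back to the most recent snapshot instead of replaying a day-old log."""
        return self.snapshots[-1][1] if self.snapshots else {}
```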

In addition, big memory can also play a role in the booming AI field. Fan Chenggong explained that in deep-learning training, when the model and its data are larger than memory, training speed becomes constrained by data-transfer speed. The same applies to industries such as film and television animation and games, and big memory can effectively relieve this bottleneck.
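A rough calculation shows why spilling from memory to SSD throttles training. The bandwidth figures below are commonly cited order-of-magnitude assumptions, not numbers from the article.

```python
# Rough, order-of-magnitude bandwidth assumptions (GB/s) for illustration.
DRAM_BANDWIDTH_GBPS = 100      # multi-channel DDR4
NVME_SSD_BANDWIDTH_GBPS = 3    # a single NVMe SSD

def pass_io_time_s(working_set_gb: float, bandwidth_gbps: float) -> float:
    """Time to stream the working set once at the given bandwidth."""
    return working_set_gb / bandwidth_gbps

working_set_gb = 500           # assumed training working set larger than DRAM
print(f"From memory: {pass_io_time_s(working_set_gb, DRAM_BANDWIDTH_GBPS):.0f} s per pass")
print(f"From SSD:    {pass_io_time_s(working_set_gb, NVME_SSD_BANDWIDTH_GBPS):.0f} s per pass")
```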

What does big memory change?

At present, MemVerge’s customers include not only financial institutions but also Internet companies such as LinkedIn and Tencent Cloud, as well as some AI companies. These companies, which demand both computing efficiency and reliability, have become the early adopters of the big-memory era.

MemVerge’s business is only a small part of the change brought by this upgrade of the storage architecture; data services and network services built on the new architecture are still a blue ocean. The upgrade also opens more room for imagination at the application level: whether a faster gaming experience can spawn new game genres, and whether faster AI computation lowers costs enough to help AI technology spread across industries.

Technologies like memory media, invisible to ordinary consumers, are still evolving. In such a deep-sea field, once a disruptive innovation occurs, it brings a whole new wave. Just as the mobile Internet was born from the invention of the capacitive touchscreen, and the wide application of AI benefited from breaking through the bottlenecks of the von Neumann architecture, the surprises that PMem-based memory will bring have not yet been revealed, and the opportunities hidden within are waiting to be unlocked.