Historically, many applications have been dominated by read transactions. In a file server, for example, roughly 80% of transactions are reads, with sizes typically ranging from 4KB to 64KB. Because of this, most hardware has been optimized for reads, with less importance placed on write transactions. Today, there are applications where the percentage of write transactions is increasing significantly. In Online Transaction Processing (OLTP) applications such as databases, for example, a high percentage of operations are read-modify-writes, resulting in roughly two reads for every write (33% writes), and the transactions are primarily random. Newer applications such as artificial intelligence, data analytics, and cloud computing are also increasingly dominated by small random writes.
Thus there is a growing need for higher-density, high-speed, non-volatile random access memory, and traditional approaches have not been adequate. Enter NVDIMMs.
There are a number of target applications that have a fundamental need for a memory solution providing the performance, latency, and endurance of DRAM along with the non-volatility of flash. Small, byte-width, random writes can now be done at the speed of memory, and there is no longer a need to stage the data onto another non-volatile medium, because NVDIMMs maintain their contents across power loss. These are the same applications that adopted flash over time to improve system performance. NVDIMMs are not a replacement for flash or SSDs, however, but a complementary technology that gives storage architects additional options for tuning system performance and cost.
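To make the byte-width persistence concrete, the following minimal C sketch maps an NVDIMM-backed file and makes a small in-place store durable. It assumes the PMDK libpmem library and a DAX filesystem mounted at /mnt/pmem0; both are illustrative choices, not requirements of NVDIMMs themselves.

```c
/* Minimal sketch: a byte-level, in-place persistent write to an NVDIMM
 * exposed as a DAX-mapped file. PMDK libpmem and the path /mnt/pmem0/example
 * are assumptions made for the example, not part of the NVDIMM standard. */
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Create (or open) a 4 KiB file on the DAX filesystem and map it. */
    char *buf = pmem_map_file("/mnt/pmem0/example", 4096,
                              PMEM_FILE_CREATE, 0666,
                              &mapped_len, &is_pmem);
    if (buf == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    /* A small random write: ordinary CPU stores at an arbitrary byte offset. */
    const char record[] = "committed-by-store";
    memcpy(buf + 100, record, sizeof(record));

    /* Flush the affected cache lines so the bytes survive power loss.
     * pmem_persist() is a user-space flush when the mapping is real
     * persistent memory; otherwise fall back to msync(). */
    if (is_pmem)
        pmem_persist(buf + 100, sizeof(record));
    else
        pmem_msync(buf + 100, sizeof(record));

    pmem_unmap(buf, mapped_len);
    return 0;
}
```

Built with -lpmem, the same pattern covers any small structure an application wants to update in place and keep across power loss, without turning the update into a block-sized I/O.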
Target Applications
- Databases: journaling, reduced recovery time, log acceleration (see the journaling sketch after this list)
- Enterprise Storage: tiering, caching, write buffering and metadata storage
- Virtualization: higher VM consolidation with greater memory density, more virtual users per system
- Big Data: fast IOP workloads, in-memory processing, checkpoint acceleration
- Cloud Computing: byte-level data processing, metadata store
- Artificial Intelligence: low latency look-up & processing, real-time processing
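As a sketch of the journaling and write-buffering items above, the following append routine shows the usual commit ordering on an NVDIMM: persist the record first, then persist the tail pointer that makes it visible after a crash. PMDK libpmem, the file path, and the 8-byte-tail log layout are illustrative assumptions, not a specific product's format.

```c
/* Sketch of NVDIMM-backed journaling / write buffering: records are appended
 * to a persistent log and only become visible once the tail has been
 * persisted, which keeps the log crash-consistent. */
#include <libpmem.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define LOG_SIZE (1u << 20)                 /* illustrative 1 MiB log region */

/* Append one record; it is durable and committed when we return 0. */
static int log_append(char *log_base, const void *data, uint64_t len)
{
    uint64_t *tail = (uint64_t *)log_base;             /* persistent tail  */
    char *dst = log_base + sizeof(*tail) + *tail;      /* next free byte   */

    if (sizeof(*tail) + *tail + len > LOG_SIZE)
        return -1;                                      /* log full        */

    memcpy(dst, data, len);                 /* 1. write the payload         */
    pmem_persist(dst, len);                 /*    ... and flush it to media */

    *tail += len;                           /* 2. only then move the tail,  */
    pmem_persist(tail, sizeof(*tail));      /*    which commits the record  */
    return 0;
}

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Map (or create) the log file on a DAX filesystem backed by an NVDIMM. */
    char *log_base = pmem_map_file("/mnt/pmem0/journal", LOG_SIZE,
                                   PMEM_FILE_CREATE, 0666,
                                   &mapped_len, &is_pmem);
    if (log_base == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    const char rec[] = "txn 42: UPDATE accounts SET ...";
    log_append(log_base, rec, sizeof(rec));

    pmem_unmap(log_base, mapped_len);
    return 0;
}
```

Because each record is persisted with cache-line flushes rather than block writes, small journal entries land at memory speed, and the backing flash or disk only sees the much less frequent de-staging traffic.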
Benchmarks and Customer Feedback
“A Hard Drive with an NVDIMM is faster than SSD” – Microsoft
“One big surprise for us was that by using NVDIMMs to buffer our writes, we could reduce the spin-time of our hard disks, which has a significant impact on power consumption.” – Storage Architect @ Hyperscaler Customer
“By targeting writes to NVDIMMs we don’t need to commit them to flash as frequently, thereby reducing the number of block writes, which in turn reduces the wear-out.” – Storage Architect @ All-Flash Array Customer
“The use-cases we have identified for NVDIMMs don’t require an enormous amount of capacity, just enough to buffer ingest for the period of time in which it might be needed. Even 128GB is actually overkill for the software we have that can leverage it. Absent the need for gigantic NVDIMMs like Intel’s hugely expensive 512GB Optane modules, we prefer to just balance the config in our BOMs as much as possible and make them a ubiquitous part in our storage-servers. The endurance is effectively unlimited and the latency is the same as normal DDR4 DRAM.” – Storage Architect @ Leading Travel Site
And What About 3D XPoint? An Industry Perspective…
From an EE Times article:
One of Intel’s largest potential customers is one of 3DXP’s biggest skeptics. The chips lack the endurance and the latency to play a significant role in server main memory, said Doug Voigt, a distinguished storage technologist at Hewlett Packard Enterprise. At about 20 microseconds in latency, 3DXP is clearly above the two-microsecond upper boundary that Voigt says is a comfort zone for main memory. It is far slower than the 200 nanoseconds of the best main memory pools in today’s servers. Even memory with latency under a microsecond “will start clogging up your pipeline fairly quickly,” he said. “This is an architectural line of thinking, and I haven’t gotten a lot of pushback on it.” Intel may be able to lower 3DXP latency somewhat as it refines its manufacturing process. But “it sounds like we’re not there yet, [so] we’ll position it more as storage rather than memory,” he said.
From a recent Micron announcement (March 16, 2021):
With immediate effect, Micron will cease development of 3D XPoint™ and shift resources to focus on accelerating market introduction of CXL-enabled memory products. Micron has now determined that there is insufficient market validation to justify the ongoing high levels of investments required to successfully commercialize 3D XPoint at scale to address the evolving memory and storage needs of its customers.