DDR3 and Server Memory Evolution
Semiconductor memory is always in a state of flux. New semiconductor memory technologies emerge, grow in popularity, and take over the lion’s share of the market. Older memory technologies hang around for a while and then slowly vanish as they’re supplanted by the new. How can you predict which technologies will succeed? Well, Marc Greenberg, Denali Software’s Director of Technical Marketing (whose tutorial on DRAM provided the graphic below), has a saying about the memory market: “Never bet against the market.” By that, he means that semiconductor vendors are always placing their bets on four major factors:
- More density
- More speed
- Less cost
- Less power
Depending on the specific application, one or two of these major factors may be more important than the others, but they’re all important factors—all of the time.
Currently, we have divided the use space for semiconductor memory into four big regions:
- SRAM serves applications that require frequent, fast data access—more speed (usually cache)
- DRAM serves applications that need large storage space—more density—for frequently changing data at a low price—less cost
- NOR Flash currently fills the niche for holding code and data that must be accessed quickly—more speed—but doesn’t change often. You see NOR Flash mostly in smaller embedded applications because larger embedded applications, computers, and servers combine hard disk drives (HDDs) or solid-state disks (SSDs) with DRAM to serve the same function.
- NAND Flash is a story all by itself. As an industry, we use NAND Flash in a wide variety of ways. We use it to hold data and code in bulk because it’s the cheapest semiconductor memory available—less cost. At the same time, NAND Flash is non-volatile, so it’s useful for retaining information through power outages. That’s why SSDs are packed with NAND Flash chips and it’s also why AgigA Tech uses NAND Flash to back up DRAM in its AGIGARAM bulletproof Non Volatile System (NVS) memory modules. Even better, NAND Flash power consumption is fairly low—less power—if the system uses the NAND Flash memories infrequently, which is exactly how they’re applied in AGIGARAM NVS memory modules.
Memory Technology Inflection Points
The immense importance of memory in a processor-centric, multicore world results in tremendous technology R&D efforts to develop semiconductor memories that improve on one or more of the four major factors listed above. Memory storage technology and memory cell design get a lot of attention. One aspect of memory design that sporadically pops up in importance is the memory interface.
For some, the memory interface isn’t nearly as glamorous as a new kind of memory cell (think phase-change memory or PCM, which has held the limelight lately) or lithographic shrinks (think 32nm heading for 2x nm). However, the memory’s interface performance plays a major role in determining how a memory performs and even how much power it consumes.
In the world of NAND Flash, ONFi (the Open NAND Flash interface) and the Toggle-Mode NAND interface are coming to the fore. We’ll leave the discussion of these competing, high-speed NAND Flash interfaces for another blog post. Today’s topic is DRAMs. For DRAMs, the hot “new” interface is DDR3, which is the third major iteration of the JEDEC interface standard for synchronous, double-data-rate (DDR) DRAM.
The original DDR (double data rate) specification appeared in June 2000 after a four-year gestation. The DDR memory interface replaced the original JEDEC SDRAM interface, which appeared in 1993. Before that, DRAM used the baroque RAS/CAS asynchronous control structure and multiplexed row/column address lines that Mostek developed for the MK4096 4-kbit DRAM in 1973 to reduce package pin count. That old RAS/CAS stuff is still there, deep inside of today’s advanced DRAMs, but it’s now buried inside of the DDR parts where you can’t see it unless you’re a DRAM chip designer.
DDR3 Memory’s Advantages
What are the advantages of DDR3 over DDR2? They go straight back to the four major factors listed at the beginning of this blog post. Compared to DDR2, DDR3 memory provides more speed and more density, operates at a lower voltage (and therefore consumes less power), and will become less expensive than DDR2 memory at some point in the coming year. In short, DDR3 memory improves on all four of the major factors relative to DDR2 memory. Bets don’t get much safer than that.
For enterprise-class systems (servers), DDR3 memory provides several specific advantages. First, it promises denser memory modules by accommodating DRAM chips as large as 8 Gbits, permitting the development of 16-Gbyte registered DIMMs. Enterprise-class server architects love denser memory modules because they’re always strapped for room inside of their server boxes. DIMMs take up space and, worse, they block air flow and make cooling more difficult inside of the enclosures. Fewer DIMMs are definitely better for air flow.
Second, DDR3 memory transfers twice as much data per internal memory-core clock as DDR2 memory, thanks to its 8n prefetch architecture (versus DDR2’s 4n). Enterprise-class server architects can use this speed in one of two ways: they can run their processors faster with faster memory, or they can deliver the same bandwidth while cutting the clock rate to the memory modules and thus cut power consumption.
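The bandwidth math is easy to check on the back of an envelope. Here’s a sketch; the specific speed grades (DDR2-800 and DDR3-1600) are illustrative examples chosen because they share the same internal DRAM-core clock, not figures from this post:

```python
# Peak theoretical bandwidth of a standard 64-bit (8-byte) DIMM:
#   bandwidth = transfers_per_second * bus_width_bytes
# DDR2 uses a 4n prefetch, DDR3 an 8n prefetch, so at the same
# internal DRAM-core clock, DDR3 moves twice the data.

BUS_WIDTH_BYTES = 8  # 64-bit data bus on a standard DIMM

def peak_bandwidth_gbps(megatransfers_per_sec):
    """Peak bandwidth in GB/s for a 64-bit-wide module."""
    return megatransfers_per_sec * BUS_WIDTH_BYTES / 1000

# Both parts below run a 200 MHz DRAM core clock:
ddr2_800 = peak_bandwidth_gbps(800)    # DDR2-800:  4n prefetch -> 800 MT/s
ddr3_1600 = peak_bandwidth_gbps(1600)  # DDR3-1600: 8n prefetch -> 1600 MT/s

print(f"DDR2-800  peak: {ddr2_800:.1f} GB/s")   # 6.4 GB/s
print(f"DDR3-1600 peak: {ddr3_1600:.1f} GB/s")  # 12.8 GB/s
```

Same DRAM cells, same core clock—double the peak bandwidth. That’s the prefetch trick in a nutshell.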
Real Power Savings
But the real power savings come from DDR3’s lower operating voltage. DDR2 memory is specified for a 1.8V supply while DDR3 memory operates at 1.5V (with 1.35V low-power DDR3L devices also on the way). Because operating power is roughly proportional to the square of the supply voltage, that 300mV drop from DDR2 to DDR3 translates into an appreciable drop in operating power: about 30% less!
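You can check the voltage-squared scaling directly. A quick sketch, using the JEDEC nominal supply rails (1.8V for DDR2, 1.5V for DDR3, and 1.35V for the low-voltage DDR3L variant):

```python
# Dynamic (switching) power scales roughly with the square of the
# supply voltage: P is proportional to V**2. Comparing DRAM
# generations at their nominal rails shows the headline savings.

def power_savings(v_old, v_new):
    """Fractional power reduction from a supply-voltage drop (P ~ V^2)."""
    return 1 - (v_new / v_old) ** 2

ddr2, ddr3, ddr3l = 1.8, 1.5, 1.35  # volts, JEDEC nominal rails

print(f"DDR2 -> DDR3:  {power_savings(ddr2, ddr3):.0%} less power")   # ~31%
print(f"DDR2 -> DDR3L: {power_savings(ddr2, ddr3l):.0%} less power")  # ~44%
```

Note this is a first-order model—I/O termination and refresh power don’t all scale with the square of the voltage—but it shows where the roughly 30% figure comes from.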
Enterprise-class server designers like lower operating power, and the reduced waste heat that comes with it. In fact, they like it a lot. That’s because data centers pay double for every excess Watt of server power. Roughly speaking, each Watt consumed by a server takes one Watt of electricity to run and another Watt to cool the server. By at least one estimate, DRAM power usage accounts for 25% to 40% of a data center’s energy costs (and can be more than 50% according to this Denali memory blog post). By another estimate, Google’s power costs were $50 million in 2006. So power reduction is very high on the server designers’ wish lists because data-center operators can easily translate reduced power consumption into monetary savings, and they’re quite aware of that sort of calculation when evaluating the total cost of ownership (TCO) of competing servers.
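That pay-double rule of thumb turns into dollars quickly. Here’s a hedged sketch; the electricity rate and the per-server DRAM wattage are illustrative assumptions of mine, not figures from this post:

```python
# Rough data-center cost model built on the rule of thumb above:
# every Watt a server draws costs a second Watt of cooling.

HOURS_PER_YEAR = 24 * 365

def annual_energy_cost(server_watts, dollars_per_kwh, cooling_overhead=1.0):
    """Yearly electricity cost; cooling_overhead=1.0 doubles each Watt."""
    total_watts = server_watts * (1 + cooling_overhead)
    return total_watts / 1000 * HOURS_PER_YEAR * dollars_per_kwh

# Illustrative assumptions: an 80W DDR2 memory subsystem per server,
# a 30% power reduction from DDR3's lower rail, $0.10 per kWh.
ddr2_memory_watts = 80.0
ddr3_memory_watts = 80.0 * 0.70
rate = 0.10  # dollars per kWh (assumed)

saved = (annual_energy_cost(ddr2_memory_watts, rate)
         - annual_energy_cost(ddr3_memory_watts, rate))
print(f"Savings per server per year: ${saved:.2f}")  # about $42 per server
```

Forty-odd dollars per server per year sounds small until you multiply it by the tens of thousands of servers in a large data center—which is exactly the TCO arithmetic the operators are doing.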
Perhaps the biggest force driving the adoption of DDR3 memory is the support of Intel and AMD. Intel’s Core i7 and AMD’s Phenom II multicore processors and chipsets presume DDR3 memory. It won’t take long before this presumption filters down to the lesser PC processors, and PC processors are the big dogs in the memory kennel. They largely drive what happens with mainstream DRAM parts. So DDR3 memory’s success is likely assured, just as DDR2’s was before it and just as DDR memory supplanted SDRAM before that. The cycle repeats, and often.
Currently, AgigA Tech offers AGIGARAM NVS modules with SDRAM and DDR2 interfaces. It doesn’t yet offer an off-the-shelf DDR3 module, but given the industry’s track record, you can safely bet that there’s a DDR3 AGIGARAM module on the road map.