Forward Insights’ NAND Flash Price Predictions

One of last week’s blog entries discussed the topic “Why are SSDs still so expensive?” and tied SSD costs firmly to NAND Flash pricing. In August of this year, Forward Insights’ founder and Principal Analyst Gregory Wong presented his NAND Flash pricing forecasts through the year 2012. Wong presented on a panel at the Flash Memory Summit held in Santa Clara, California. Here’s the chart he presented:


Forward Insights NAND Flash Pricing Chart 2007-2012


You can see that from 2007 to 2008, NAND Flash pricing was in freefall and dropped from nearly $9/Gbyte to less than $2/Gbyte. This drop had two big effects. It hammered the NAND Flash vendors and it enticed a lot of companies to jump into the SSD market with visions of even cheaper NAND Flash chips on the horizon.
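For a rough sense of how steep that drop was, here’s a quick back-of-the-envelope calculation; the dollar figures are approximate values read off the chart, not exact data points.

```python
# Approximate one-year decline in NAND Flash pricing, 2007 to 2008.
# The $/Gbyte figures are rough values read off the Forward Insights chart.
price_2007 = 9.0  # $/Gbyte, approximate
price_2008 = 2.0  # $/Gbyte, approximate

decline = 1 - price_2008 / price_2007
print(f"One-year price decline: {decline:.0%}")  # roughly 78%
```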

For now, it looks like NAND Flash pricing is on a more manageable price decline through 2012, at least according to this Forward Insights forecast. While this easing of the rate of price decline doesn’t bode well for those who look forward to large future price drops for SSDs, it provides hope for some bedrock stability for other NAND Flash applications such as AgigA Tech’s server-class, bulletproof AGIGARAM memory modules.

Sunday, December 13th, 2009 at 15:22

Early Results Show SATA 6G Performance All Over the Map

This very interesting article written by Ryan Shrout and just published by PC Perspective puts a spotlight on the performance possibilities of the new SATA 6G (also called SATA III) hard-disk drive (HDD) interface. The version of SATA now in use, SATA II, is limited to 3 Gbits/second. SATA 6G doubles that maximum transfer rate. However, that doesn’t mean you’ll necessarily see twice the performance from a SATA 6G drive. The technical analysis in the article provides performance clues, and that analysis is what makes it so interesting.

This first image from the article compares the observed average read performance from a Seagate XT SATA 6G HDD, a Seagate Barracuda SATA II HDD, and one of Intel’s X25 solid-state drives (SSD). The benchmark being used here is Simpli Software’s HDTach.


PC Perspective SATA 6 img 1


You can see that the SATA 6G drive is about 6 to 7% faster on the benchmark than the 3-Gbits/sec SATA II drive. That’s a far cry from twice as fast, strongly suggesting that the HDD interface is not the limiting factor for HDD performance, at least not in this situation. However, take a look at the performance of the Intel X25 SSD, with a SATA II interface. Its average read bandwidth is about 70% better than the Seagate SATA II HDD and 60% better than the Seagate SATA 6G HDD.

Now, the impetus for this PC Perspective article was the receipt of a very unusual SSD from Marvell. Marvell is a semiconductor vendor; unlike Intel, it doesn’t make SSDs, it makes SSD controller chips. This particular Marvell SSD, built around a Marvell SATA 6G SSD controller chip, is an engineering sample designed to help system developers evaluate SATA 6G for their systems.

According to the article, this Marvell SSD isn’t built with NAND Flash devices. It’s built with ROM devices. So you can read from it but cannot write to it. It’s a read-only SSD, which is not particularly practical if you’re building computer systems but this drive makes a good enough tool if you simply need to exercise or evaluate SATA 6G interfaces.

So how does the Marvell SATA 6G SSD fare? Here’s the graph from the PC Perspective article:


PC Perspective SATA 6 img 2


The Marvell read-only SATA 6G SSD attains a burst-read rate of just over 350 Mbytes/sec while the Intel X25 SATA II drive attains a burst-read rate of just over 260 Mbytes/sec. So the burst-read rate for the Marvell SSD is about 1/3 faster than for the Intel X25 SSD. Unfortunately, because the Marvell SSD is a read-only device, PC Perspective could not compare burst-write rates, which tend to be significantly slower for SSDs. Consequently, you might expect that the SATA 6G interface won’t be so helpful for write transactions.
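To make the “about 1/3 faster” figure concrete, here’s the arithmetic as a tiny script, using the approximate burst-read rates quoted above:

```python
# Burst-read advantage of the Marvell SATA 6G sample over the Intel X25 SATA II SSD,
# using approximate figures read off the PC Perspective graph.
marvell_burst_mbps = 350.0  # Mbytes/sec, approximate
intel_burst_mbps = 260.0    # Mbytes/sec, approximate

advantage = marvell_burst_mbps / intel_burst_mbps - 1
print(f"Marvell burst-read advantage: {advantage:.0%}")  # about 35%, i.e., roughly 1/3 faster
```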

What to conclude?

Well, first of all, PC Perspective comments that the Intel SSD appears to be close to saturating the SATA II interface, which speaks well of the Intel X25 SSD’s internal architecture. Next, the results indicate that SSDs will benefit far more from the faster SATA 6G interface than HDDs will. Finally, the analysis suggests that future SSDs designed for the faster SATA 6G interface standard will need to employ more than the 10 NAND channels used in the Intel X25 SSD to boost the internal bandwidth of the SSD architecture.

Saturday, December 12th, 2009 at 22:11

Why Are SSDs Still So Expensive?

The above question recently appeared on the Yahoo! Answers site and it’s a perfect lead-in to a further discussion of Jim Handy’s keynote at the Bell Micro SSD seminar in Milpitas, California earlier this month. The simple question on Yahoo! Answers was phrased this way:


Why are the solid state disk drives still so expensive?

They are on the market for years and still so expensive. SSD of a reasonable capacity (256GB) costs as much as $800 or more. Aren’t they going to drop the prices?


Although the question appears to have been posed by someone not closely familiar with the ins and outs of hard-disk drive (HDD) and solid-state disk (SSD) technologies, markets, and pricing, it’s a frequent question posed by many in the industry. We’ve become so accustomed to large, regular drops in price/capacity for both mechanical storage (“rotating rust”) and semiconductor memory that we’ve collectively developed a sense of entitlement. If we can’t buy it today, we think, surely the price will drop and we’ll be able to afford it soon.

However, when we compare the price/capacity of SSDs against HDDs, we’re comparing one moving target against another. Moore’s Law governs the price of SSDs because the largest cost component in an SSD is NAND Flash memory (see below). Moore’s Law has been a monster force in the semiconductor industry, pushing prices ever lower for more than four decades. However, the HDD vendors are constantly working with their own price-reduction curve, which has proven to be just as robust as Moore’s Law. By pulling a veritable menagerie of rabbits out of various technological hats, HDD vendors have dropped per-bit pricing for HDDs about as fast as semiconductor vendors have cut the price/bit of NAND Flash memory.

Take a look at this graph from Handy’s keynote:


Handy HDD SSD Cost Differential


From the gross slopes of the two curves, you can see that HDD cost/capacity has remained about 20x lower than NAND Flash memory cost/capacity throughout this decade. Note that in 2006, there was a serious downturn in the slope of the curve for NAND Flash. Extrapolating that new slope led some to predict that NAND Flash cost/Gbyte would cross over that of HDDs by 2008 or 2009. That just didn’t happen. The increased rate of price decline was economically unsupportable and caused huge turmoil among NAND Flash vendors. (For extensive analysis of this situation, see this blog entry on Denali Software’s Web site.)
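To see why the steeper post-2006 slope made a near-term crossover prediction tempting, here’s a small, purely hypothetical extrapolation sketch. Only the roughly 20x price gap comes from Handy’s chart; the annual decline rates below are illustrative assumptions, not figures from the keynote.

```python
import math

# Hypothetical crossover extrapolation: NAND Flash starts ~20x more expensive per Gbyte
# than HDD storage, but its price falls faster each year. The decline rates are
# illustrative assumptions only; the 20x starting gap is from Handy's chart.
price_gap = 20.0            # NAND $/Gbyte divided by HDD $/Gbyte
nand_annual_decline = 0.75  # hypothetical: NAND price falls 75% per year (the steep 2006-style slope)
hdd_annual_decline = 0.35   # hypothetical: HDD price falls 35% per year

# Each year the gap shrinks by the ratio of the two surviving-price factors.
gap_ratio_per_year = (1 - nand_annual_decline) / (1 - hdd_annual_decline)
years_to_crossover = math.log(1 / price_gap) / math.log(gap_ratio_per_year)
print(f"Hypothetical crossover in about {years_to_crossover:.1f} years")  # roughly 3 years
```

A roughly three-year extrapolation from 2006 is exactly how a 2008-or-2009 crossover prediction falls out; as noted above, the steep decline rate itself proved economically unsupportable, so the crossover never came.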

Now please understand, the expectation that NAND Flash cost/Gbyte would zoom past the HDD cost/Gbyte curve wasn’t just wishful thinking. NAND Flash per-bit costs did overtake and then zoom past that of DRAM, which was once the semiconductor industry’s king of cost/bit. That event happened in 2004 as shown in this slide from Handy’s keynote.


Handy NAND Flash and DRAM Costs


So the expectation that NAND Flash cost/bit would zoom past HDD cost/bit wasn’t at all far-fetched. It just didn’t happen. HDD vendors happily continued to cut the cost/bit of rotating storage, to the very great benefit of consumers and enterprise users everywhere.

Handy’s simple silicon anatomy of an SSD shows why the SSD’s cost/bit is closely tied to the cost of NAND Flash.


Silicon Anatomy of an SSD


From a silicon perspective, Handy’s illustration shows 34 key semiconductor devices in his example 64-Gbyte SSD. Two of the devices are a controller chip and a DRAM buffer. Total cost for those two devices: $6. The other 32 devices are NAND Flash chips. Total cost for those devices: $64 for 64 Gbytes of storage (not counting spare capacity). The cost of the NAND Flash devices is more than 90% of the silicon cost of an SSD. The SSD’s price is largely set by the cost of its internal NAND Flash.
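Here’s that cost split as a tiny script, using Handy’s example figures:

```python
# Silicon cost breakdown of Handy's example 64-Gbyte SSD.
controller_and_dram_cost = 6.0  # $ for the controller chip plus the DRAM buffer
nand_flash_cost = 64.0          # $ for 32 NAND Flash chips (64 Gbytes, spare capacity not counted)

total_silicon_cost = controller_and_dram_cost + nand_flash_cost
nand_share = nand_flash_cost / total_silicon_cost
print(f"NAND share of silicon cost: {nand_share:.0%}")  # about 91%
```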

That’s why SSDs aren’t likely to replace HDDs for bulk storage in the foreseeable future. As long as the HDD industry has a road map leading to higher-capacity, lower-cost/bit storage (and it does), the HDD will keep the throne as the storage-capacity king.

SSDs can beat HDDs in raw performance by one or two orders of magnitude, as measured in IOPS. There’s nothing on the HDD road map that can change that situation. For applications that can measure the value of storage speed, and there are many such applications for enterprise-class storage, SSDs provide sufficient value to justify their higher price/bit. For most consumers (people selecting laptops, for example), the choice between a 160-Gbyte HDD and a 32-Gbyte SSD at the same price is obvious. The consumer will choose more capacity (to store more music, more pictures, more video, and more movies) every time.

Now take a look at Handy’s curves for DRAM and NAND Flash cost/bit once again:


Handy NAND Flash and DRAM Costs


Note that the cost/bit of NAND Flash is now roughly 10% that of DRAM. That means that as a DRAM backup medium, NAND Flash doesn’t add that much to the cost of the DRAM it’s backing up. Unlike the comparison of NAND Flash and HDD capacity, which tilts far in favor of the HDD, NAND Flash densities are much better than DRAM bit densities and that gap is growing thanks to multi-level cell (MLC) storage. These economics are behind the idea for AgigA Tech’s AGIGARAM modules. For a small cost adder, volatile DRAM can be made bulletproof when paired with NAND Flash memory. For more detail regarding this idea, see the earlier 3-part series in this blog (here, here, and here).
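As a rough sketch of that “small cost adder” argument: only the roughly 10:1 DRAM-to-NAND price ratio comes from Handy’s chart; the absolute per-Gbyte prices and the 4-Gbyte module size below are hypothetical placeholders.

```python
# Hypothetical illustration of the cost adder for backing DRAM with an equal capacity of NAND.
# Only the ~10:1 DRAM:NAND price ratio reflects Handy's chart; the absolute numbers are made up.
dram_dollars_per_gbyte = 10.0  # hypothetical
nand_dollars_per_gbyte = 1.0   # hypothetical, ~10% of the DRAM price
module_capacity_gbytes = 4     # hypothetical DRAM module to be backed up

dram_cost = module_capacity_gbytes * dram_dollars_per_gbyte
nand_backup_cost = module_capacity_gbytes * nand_dollars_per_gbyte
print(f"Cost adder for NAND backup: {nand_backup_cost / dram_cost:.0%}")  # about 10%
```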


Saturday, December 12th, 2009 at 20:35

Hard Disk Drive (HDD) Abuse

Earlier this month, distributor Bell Micro sponsored a cross-country set of seminars on solid-state disks (SSDs) featuring storage expert Jim Handy as keynoter. Handy’s talk was so content rich that it’ll take several blog entries to deliver all of the delicious slices of insight from his presentation.

One of the interesting facets Handy discussed was the current practice of short-stroking enterprise-class hard disk drives (HDDs)—“abusing” them, as Handy explained. The idea’s pretty simple. An HDD’s average access time is determined by the average amount of time it takes to swing the arm carrying the read/write heads into position plus the average rotational latency. The fastest enterprise-class HDDs now spin at 15,000 RPM so there’s not much room for trimming there—not without having the disk platters fly apart under centrifugal stress. However, there’s something that can be done about the average seek time. Simply use fewer of the available tracks on the disk. Do that and you get faster average seek times because the arm never needs to travel very far.

You pay for that decreased seek time with lost capacity. You simply don’t use most of the tracks and therefore you discard most of an HDD’s storage capacity.
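Here’s a crude model of the access-time math. The full-stroke seek time is a hypothetical figure and real seek times don’t scale linearly with stroke length, but the sketch shows the shape of the trade-off:

```python
# Crude short-stroking model: average access time = average seek time + average rotational latency.
# The full-stroke seek figure is hypothetical and real seek curves aren't linear,
# but the sketch shows why using fewer tracks cuts the average access time.
rpm = 15000
avg_rotational_latency_ms = 0.5 * (60000.0 / rpm)  # half a revolution: 2 ms at 15,000 RPM

full_stroke_avg_seek_ms = 3.5  # hypothetical average seek across the whole platter

def avg_access_time_ms(fraction_of_tracks_used: float) -> float:
    """Approximate average access time when only a fraction of the tracks is used."""
    return full_stroke_avg_seek_ms * fraction_of_tracks_used + avg_rotational_latency_ms

for used in (1.0, 0.5, 0.2):
    print(f"Using {used:.0%} of the tracks: ~{avg_access_time_ms(used):.2f} ms average access")
```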

Handy gave the following real-world example of such HDD abuse. He described IBM’s DS8300 Turbo. It has best-in-class TPC-C specs: 123K IOPS, 16-msec latency. It gangs 512 HDDs—consisting of 73- and 146-Gbyte enterprise-class drives—into mirrored RAID arrays. The result is a storage subsystem with 53 Tbytes of actual capacity, but short-stroking the drives reduces the usable capacity to 9 Tbytes. IBM threw away 83% of the raw capacity to get those best-in-class TPC-C performance specs.
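The 83% figure follows directly from the capacities Handy quoted:

```python
# Raw capacity discarded by short-stroking in the IBM DS8300 Turbo example.
raw_capacity_tbytes = 53.0    # 512 mirrored 73- and 146-Gbyte enterprise HDDs
usable_capacity_tbytes = 9.0  # after short-stroking

discarded_fraction = 1 - usable_capacity_tbytes / raw_capacity_tbytes
print(f"Raw capacity discarded: {discarded_fraction:.0%}")  # about 83%
```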

This is yet another example of why SSD manufacturers are crowding into the Flash Zone. If IBM can afford to throw away 83% of the available capacity in a huge multi-multi-Tbyte bank of enterprise-class HDDs, then high-performance SSDs that can muster one or two orders of magnitude performance improvement relative to the “rotating rust” HDDs must be worth a lot of money to data-center architects.

And apparently, they are.

Friday, December 11th, 2009 at 04:00

Quantifying the Flash Zone

This is quite the time for Flash-based solid-state drives (SSDs)! Seagate just dropped into the market and whenever a heavyweight like Seagate drops in, there’s a big splash. We’ll cover Seagate in a later blog (you can already read all about it all over the Web) but the announcement helps lead into a discussion of the live (!) SSD seminar that distributor Bell Micro has just taken across North America. The road show landed in Milpitas earlier this month and the keynote speaker, storage analyst extraordinaire Jim Handy, did such a great job of covering the topics of interest to server designers and enterprise system architects that it will take several blog entries to cover all of the information.

For this blog entry, we’re returning to the Flash Zone, a concept described by Denali Software’s CTO Mark Gogolewski in his keynote speech—The World is Flash: A Disruption of the Memory & Storage Hierarchy—at Memcon 2009. The Flash Zone is the name put to the performance gap between DRAM and disk storage. There’s not only a gap in performance within the Flash Zone, there’s a transition from volatile memory (DRAM) to non-volatile storage (hard disk). With steep cost/bit price declines and per-device capacity growth, NAND Flash devices now easily fit into this gap and produce a new and viable layer in the overall computer memory hierarchy.

What’s new is that Jim Handy’s keynote at the Bell Micro SSD seminar put some welcome numbers on the Flash Zone that further clarify Flash’s place in the hierarchy. Here’s an image of that particular slide.


Handy Flash Zone 1


This image plots the performance and cost of the different memory hierarchy layers from first-, second-, and third-level processor cache through DRAM, disk, and tape. Because Handy’s used a log-log scale to plot everything, the graph looks nice and linear even though the reality is quite a bit messier. For a conceptual graph however, this’ll do nicely.

Note that there’s a gap in the hierarchy. That’s the Flash Zone. Here’s the same plot augmented a bit. The big red circle identifies the Flash Zone.


Handy Flash Zone 2


Also note that Handy has labeled the gap and says it’s “growing.” The gap’s growing because DRAM is getting faster, bigger, and cheaper, moving its ellipse up and to the left while HDDs are getting bigger, although not much faster, moving the HDD ellipse horizontally to the left. The result is a growing performance and bandwidth gap between DRAM and HDDs.
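To put rough numbers on the width of that gap: the access-time ballparks below are my own illustrative assumptions for circa-2009 parts, not figures from Handy’s slide.

```python
# Illustrative access-time ballparks for the 2009-era memory/storage hierarchy.
# These are rough assumed values, not numbers taken from Handy's slide.
access_time_seconds = {
    "DRAM": 60e-9,             # tens of nanoseconds
    "NAND Flash read": 50e-6,  # tens of microseconds
    "15K RPM HDD": 5e-3,       # a few milliseconds (seek plus rotation)
}

dram = access_time_seconds["DRAM"]
flash = access_time_seconds["NAND Flash read"]
hdd = access_time_seconds["15K RPM HDD"]
print(f"HDD is roughly {hdd / dram:,.0f}x slower than DRAM")          # roughly 80,000x
print(f"NAND Flash sits {flash / dram:,.0f}x slower than DRAM and "
      f"{hdd / flash:,.0f}x faster than an HDD")                      # near the middle on a log scale
```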

Flash fits into this gap very, very nicely, said Handy (and as discussed in this blog previously). Later in his keynote, he displayed this image to underscore the point.


Handy Flash Zone 3


There are currently at least three ways to fill the Flash Zone in a memory hierarchy using NAND Flash memory. The first way, the way that gets the most attention these days, is with solid-state drives (SSDs). Because they employ the same interfaces and share the same form factor with HDDs, SSDs are an easy, drop-in Flash Zone filler. They boost performance simply by being dropped into place as HDD replacements, although that may not be the best way to introduce SSDs into the hierarchy. (More about that in a later blog.)

The second way to drop NAND Flash memory into the Flash Zone is through direct- or I/O-attached drives. This is the approach advocated by Fusion-io, as discussed in that earlier AgigA Tech blog entry on the Flash Zone. Direct-attached SSDs eliminate the HDD interface and protocols, which were designed with built-in assumptions about the performance characteristics and limitations of HDDs (“rotating rust,” quipped Scott Stetzer, VP of Marketing at SSD vendor STEC). Free of those assumptions and limits, direct-attached SSDs deliver more performance than do SSDs employing HDD interfaces.

Handy showed the ways to introduce these two types of SSDs with the following slide:


SSD Attachment Alternatives


In enterprise-class server systems, SSDs with HDD interfaces typically plug into SAN racks and tie to servers over a network while direct-attached SSDs plug directly into the server over a high-speed interface (typically PCIe). Note that in smaller servers, SSDs with HDD interfaces often attach directly to the server instead.

Because he was speaking at an SSD seminar, Handy did not discuss the third way of introducing NAND Flash into the Flash Zone—the approach employed by AgigA Tech’s AGIGARAM. That approach mates the NAND Flash directly to the server’s DRAM, creating a high-bandwidth connection between the two memories. In this application, however, the NAND Flash is used for DRAM backup and power-failure bulletproofing—not necessarily for storage (although there are other possibilities to be discussed in this respect).

So far, we’ve only been able to discuss two of Handy’s 47 keynote slides. The talk contained a ton of good information for server designers and enterprise system architects. More later.

Note: Handy’s keynote was based on his company’s new report: Solid State Drives in the Enterprise – 2010.

Thursday, December 10th, 2009 at 17:49

The Problems with PCM (Phase-Change Memory)

The previous blog entry discussed work on PCM (phase-change memory) taking place in an attempt to dethrone NAND Flash memory as the king of nonvolatile semiconductor memory. If PCM technology were a slam dunk, then NAND Flash would never have been born because PCM was invented more than ten years before Flash. However, technologies do not advance at equal paces. Thomas Edison developed a practical incandescent light bulb in 1879 and it was in mass production within a few years. Nikola Tesla experimented with fluorescent light bulbs during the 1890s, but GE didn’t put fluorescent lamps into mass production until 1939, and incandescent bulbs, with their original Edison screw-in bases, are only now being phased out. It can take decades for a new technology to become production-ready.

So there must have been barriers to PCM becoming a commercial reality. The first such barrier is write current. PCM cells write bits by melting glass at 600° C. It doesn’t take much imagination to understand that there’s some appreciable amount of power required to do this, particularly at 1970-era lithographic sizes. Today, 90nm and 45nm PCM cells require much less write current than they did 40 years ago, but the amount of current is still not negligible.

Next, there are mechanical issues associated with repeatedly melting a material inside of an integrated circuit. Eventually, voids can form in the melt zone resulting in cell destruction. Fortunately, the failure related to this mechanism always occurs at write time, so the cells can be read after a write to verify that a failure has not occurred.

There are also issues associated with operating temperature. High-temperature PCM operation tends to anneal PCM bits set to the amorphous state. Numonyx says that the retention time for its PCM cells is on the order of 10 years at 85° C. However, it’s 10 hours at 125° C, 10 seconds at 165° C, and 10 microseconds at 225° C. This problem isn’t insurmountable, but it must be understood and addressed by system designers.
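For a system designer, the practical question is roughly how much retention to expect at a given operating temperature. Here’s a minimal sketch that does nothing more than log-interpolate (against inverse absolute temperature) between the Numonyx data points quoted above; it is not Numonyx’s retention model.

```python
import math

# Quoted PCM retention points: (temperature in deg C, retention time in seconds).
retention_points = [
    (85.0, 10 * 365 * 24 * 3600),  # about 10 years
    (125.0, 10 * 3600),            # 10 hours
    (165.0, 10.0),                 # 10 seconds
    (225.0, 10e-6),                # 10 microseconds
]

def estimated_retention_seconds(temp_c: float) -> float:
    """Log-linear interpolation of retention versus 1/T (in Kelvin) between the quoted points."""
    pts = sorted((1.0 / (t + 273.15), math.log10(r)) for t, r in retention_points)
    x = 1.0 / (temp_c + 273.15)
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            log_retention = y0 + (y1 - y0) * (x - x0) / (x1 - x0)
            return 10 ** log_retention
    raise ValueError("temperature outside the quoted 85-225 deg C range")

print(f"Estimated retention at 105 C: about {estimated_retention_seconds(105.0):,.0f} seconds")
```

At 105° C, for example, this crude interpolation lands in the range of weeks rather than years, which is exactly the sort of derating a designer has to plan around.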

Note that all memory technologies have similar problems. NAND Flash memory has well-understood wearout mechanisms. Because they’re well understood, system designers working with NAND Flash memory have little trouble incorporating them into their designs. Novice designers—well that’s a different story.


Saturday, December 5th, 2009 at 19:55

PCM (Phase-Change Memory) Basics and Technology Advances

Next week, Intel and Numonyx will present a paper on 3D cell-stacking developments for PCM (phase-change memory) at the IEDM conference in Baltimore, Maryland. The two companies previewed this paper in an announcement a few weeks ago (discussed in this blog here). Just before Thanksgiving, Numonyx presented a Webinar on PCM that contained some excellent background information on PCM. Four decades after its invention—when it appeared on the cover of Electronics magazine—PCM may be about to become a serious challenger to NAND Flash, the current king of nonvolatile memory technologies and the current low-cost leader among all semiconductor memories. These next few blog entries leading up to the Intel/Numonyx paper presentation will elaborate on some of the ideas presented in the pre-Thanksgiving Numonyx Webinar.

PCM manufacture involves introducing “foreign” elements (not silicon) from the periodic table into the IC-manufacturing process. Normally, this is something IC manufacturers avoid at all costs, but the material being introduced is glass—albeit something called chalcogenide (pronounced “kal-KAW-gen-ide”) glass—composed of germanium, antimony, and tellurium. The glass is pretty inert, so it apparently doesn’t present too many contamination problems that would absolutely preclude the material’s use in IC manufacturing.


Chalcogenide periodic table


If you’re an electrical engineer, it’s likely you’ve never heard of chalcogenide glass, but it’s one of the most closely studied materials with one of the highest manufacturing volumes in high tech—just not in electronics. A chalcogenide glass layer is the active component in recordable CDs and DVDs. In its crystalline form, the glass is highly reflective. In its amorphous form, the glass is not so reflective, resulting in a nice, binary, optical-storage mechanism. In an optical disk burner, laser-induced thermal heating switches the glass from one state to the other. A fast, strong laser pulse disrupts a spot of sputtered crystalline material and causes it to become amorphous, reducing its reflectivity. You can see the difference if you look closely at a written disk.

These optical differences between the crystalline and amorphous states are essential to recordable, optical-disk operation but they’re not at all relevant to PCM data storage. However, the chalcogenide glass also has measurably different resistance between the crystalline and amorphous states. The crystalline form of the glass has relatively low resistivity and the amorphous form has higher resistivity, until the glass melts. Now you’re talking memory.

You can see the differential resistivity between the crystalline and amorphous states at low “read” voltages in the figure below. At higher voltages, the glass heats and begins to soften and melt; at that point, the crystalline and amorphous V/I curves merge.



PCM read-write curve


PCM cells exploit this V/I curve, which is conceptually similar to the hysteresis curve for magnetic memories. Joule heating from the write pulse switches a small amount of chalcogenide material from amorphous to crystalline or back again, depending on the size and shape of the pulse (as shown in the following diagram). A fast, high-voltage pulse melts a spot of glass in the PCM cell; once the pulse is removed, quick cooling allows the glass to solidify in its amorphous form. A longer, less intense voltage pulse anneals the glass and puts it in the crystalline state.



PCM read-write pulse



A PCM cell is pretty simple, as shown below. The memory cell resides between a bit electrode and a word electrode. The cell itself consists of a current-limiting/heating resistor and a dot of polycrystalline chalcogenide glass. Current flows through this structure, and the amount of that current depends on the voltage impressed on the word and bit lines. When the current is high enough, a region of glass next to the resistor (the dark gray mushroom cap atop the resistor in the figure) starts to melt. The PCM chip’s read/write control circuitry controls the size, shape, and timing of the write pulse.



PCM cell



It’s the simplicity of this mechanism and of the PCM cell design that excites chip makers like Intel, Numonyx, and the other vendors chasing after the 4-decade dream of a new form of semiconductor memory.
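To make the set/reset mechanism concrete in software terms, here’s a toy model of a single PCM bit. The resistance values, threshold, and method names are all illustrative inventions; a real cell is an analog structure driven by the chip’s write circuitry, not an object with methods.

```python
# Toy model of one PCM bit. A fast, high-amplitude "reset" pulse melts the chalcogenide
# and quenches it amorphous (high resistance); a longer, gentler "set" pulse anneals it
# crystalline (low resistance). A read compares cell resistance against a threshold.
# All resistance values and the 0/1 convention here are illustrative only.

AMORPHOUS_OHMS = 1_000_000   # high-resistance (reset) state
CRYSTALLINE_OHMS = 10_000    # low-resistance (set) state
READ_THRESHOLD_OHMS = 100_000

class ToyPcmCell:
    def __init__(self) -> None:
        self.resistance = AMORPHOUS_OHMS  # assume the cell starts in the amorphous state

    def reset_pulse(self) -> None:
        """Fast, high-voltage pulse: melt the glass, then quench it into the amorphous state."""
        self.resistance = AMORPHOUS_OHMS

    def set_pulse(self) -> None:
        """Longer, lower-amplitude pulse: anneal the glass into its crystalline state."""
        self.resistance = CRYSTALLINE_OHMS

    def read(self) -> int:
        """Low-voltage read: low resistance reads as 1, high resistance as 0."""
        return 1 if self.resistance < READ_THRESHOLD_OHMS else 0

cell = ToyPcmCell()
cell.set_pulse()
assert cell.read() == 1   # crystalline, conducting
cell.reset_pulse()
assert cell.read() == 0   # amorphous, resistive
```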

However, don’t get the impression that this is a trouble-free memory poised to wipe out all existing semiconductor memories overnight. Won’t happen. If this were simple stuff, PCM would have won the semiconductor memory wars long ago. That obviously didn’t happen. Why? Tune in for the next installment.


Saturday, December 5th, 2009 at 18:53

More than Moore: SLC, MLC, and TLC NAND Flash

The planar integrated circuit was invented 50 years ago this year at Fairchild Semiconductor in Mountain View, California by Gordon Moore, Jay Last, and the brilliant team of high-tech refugees from Shockley Semiconductor. Gordon Moore then published the article in April 1965 that became the foundation of Moore’s Law, which forecasts the doubling of transistor counts on semiconductor die to a drumbeat with an 18- to 24-month cadence—see the original article published in Electronics magazine here. For most of those 50 years, ICs and Moore’s Law have essentially been restricted to a 2D world—sort of a real-world Flatland. On-chip circuits have been arrayed on a thin surface layer of the silicon die as they were on that first Fairchild IC. Five decades of circuit advances and expansion have essentially been limited to cramming more transistors per square millimeter.

With IC lithographies approaching atomic limits (we cannot pattern transistors using fractional atoms, which inevitably slows Moore’s-Law scaling), a third dimension starts looking mighty attractive. Just as cities started to build up towards the sky to fit more people and more businesses into limited downtown real estate, IC designers would dearly love to find easy ways to pack more transistors, more gates, and more bits into the same limited on-die real estate. One way to do this is to build circuits in layers. Intel and Numonyx will be discussing a new way to build nonvolatile phase-change memory (PCM) ICs using multiple layers in a few days at the IEDM conference in Baltimore, Maryland. But NAND Flash designers have already discovered another way to pack more bits into the same space by stuffing existing 2D memory cells with multiple bits. This approach also represents a way to circumvent 2D limits—to put NAND Flash bit capacity on a trajectory that is “more than Moore.”

Flash memory stores bits as charge trapped in a transistor’s floating gate, which is an isolated island of semiconductor surrounded by insulator. Electron tunneling drives the charge into the floating gate, where it *mostly* remains trapped until erased. (Sometimes, the electrons wander off by themselves or through a phenomenon called “read disturb.”) The electrons trapped in the floating FET gate act like a phantom negative voltage that prevents the transistor from conducting when read. This is the original mechanism developed for NAND Flash memory by Dr. Fujio Masuoka while working for Toshiba circa 1980. It’s a simple binary use of trapped charge. When charge is trapped in the NAND Flash cell’s floating gate, the associated transistor will not conduct when read. When there are no trapped electrons, the transistor will conduct during a read operation. You get a simple binary response to the trapped charge or lack of trapped charge.

However, there’s an essentially analog mechanism available here. The Flash memory can trap more or fewer electrons in the floating gate. The variable amount of charge can be measured by a fast A/D converter. If you store four different charge amounts on a floating gate (say, empty, one-third full, two-thirds full, and full) then you have essentially put two bits (four states) worth of information in one NAND Flash memory cell. If you can trap and measure eight levels of charge in a NAND Flash memory cell, then you have essentially put three bits worth of information into one NAND Flash memory cell. NAND Flash memories that store one bit/cell are called single-level cell (SLC) memory. Store two bits/cell and you have multi-level cell (MLC) memory. Store three bits per cell and you have triple-level cell (TLC) memory (or 3BPC—three-bit/cell—memory using Micron Technology’s terminology).
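Here’s a simplified sketch of that quantization idea: the sense circuitry measures how full the floating gate is and resolves that level into one, two, or three bits. The normalized charge scale and uniform thresholds are illustrative; real NAND senses threshold-voltage distributions with guard bands, calibration, and error correction.

```python
# Simplified SLC/MLC/TLC sensing: quantize a normalized floating-gate charge level
# (0.0 = empty, 1.0 = full) into 1, 2, or 3 bits. The uniform thresholds are purely
# illustrative; real parts resolve threshold-voltage distributions with guard bands and ECC.

def read_cell(charge_level: float, bits_per_cell: int) -> int:
    """Return the stored value (0 .. 2**bits_per_cell - 1) for a sensed charge level."""
    levels = 2 ** bits_per_cell           # SLC: 2 levels, MLC: 4, TLC (3BPC): 8
    value = int(charge_level * levels)    # uniform quantization for illustration
    return min(value, levels - 1)         # clamp a completely full gate into the top level

# The same sensed charge carries more information as the cell resolves more levels.
print(read_cell(0.40, bits_per_cell=1))  # -> 0 (SLC: one of two states)
print(read_cell(0.40, bits_per_cell=2))  # -> 1 (MLC: one of four states)
print(read_cell(0.40, bits_per_cell=3))  # -> 3 (TLC: one of eight states)
```

The flip side, as the rest of the post explains, is that resolving more, narrower charge levels takes longer and leaves less margin for charge that leaks away or gets permanently trapped.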

Great! Why not pack two or three bits worth of information into every NAND Flash memory cell and essentially boost the NAND Flash chip’s memory capacity “for free?” Well, why not?

There are a few reasons why not. First, MLC and TLC NAND Flash memory is slower than SLC NAND Flash memory. You need more time for the on-chip A/D conversion circuitry to resolve the amount of charge stored in the selected cell. Second, there are more numerous and more complex wear, reliability, and endurance issues with MLC and TLC NAND Flash memory than with SLC NAND Flash memory. One NAND Flash wearout mechanism involves permanently trapped charge, and MLC and TLC NAND Flash memories are more susceptible to such failures because the exact amount of charge trapped by the tunneling process is far more critical when storing more bits per cell. A little permanently trapped charge can really mess things up.

Choosing between SLC, MLC, and TLC memory can be tricky. The choice involves several of your design criteria including the simple and obvious one (read/write latency requirements) and the more fuzzy ones (failure rate, reliability, and cycle endurance). In short, you cannot choose using a simple cost/bit analysis.

Finally, if you’d like a painless video intro to the world of SLC, MLC, and TLC or 3BPC NAND Flash memory concepts, here’s a 5-minute video from Micron Technology:



Saturday, December 5th, 2009 at 17:34

Samsung Announces Production Ramp of 30nm NAND Flash Chips

Samsung Electronics announced today that it has started shipping production volumes of two different 32-Gbit MLC NAND Flash devices based on 30nm lithography. Two of the 3-bit MLC (multi-level cell) memory devices will initially be packaged with a 3-bit NAND Flash controller chip in 8-Gbyte microSD cards. The other new NAND Flash devices are MLC NAND Flash memories that have asynchronous DDR (double data rate) interfaces with 133-Mbits/sec transfer rates. These chips replace single-data-rate (SDR) MLC NAND Flash memories, which have a slower overall read performance of 40 Mbits/sec.

These announcements underscore the industry trend of designing the most leading-edge NAND Flash devices for the largest-volume (consumer-grade) applications. Other applications based on NAND Flash memory, such as AgigA Tech’s AGIGARAM bulletproof server RAM, can take advantage of these semiconductor developments by riding the same high-volume consumer learning curve. Otherwise, such leverage would not be available at current enterprise-class manufacturing volumes for NAND Flash chips.

Tuesday, December 1st, 2009 at 14:47

SSDs as Investment-Grade Vehicles

You know that a technology is climbing the hype curve when it appears on the bill for an MIT/Stanford VLAB (Venture Laboratory) evening meeting. That’s exactly what happened for Solid-State Drives (SSDs) on November 17 when SSDs were the technology of the evening. The event was a panel titled “SSDs: Game-Changing Technology for Better, Bigger, Faster Apps and App Dev.” The panel moderator was well-known storage analyst Tom Coughlin. Panelists included Fusion-io’s President and CTO David Flynn; Bill Watkins, Former CEO, Seagate; Mike Chenery, President, Pliant Technology; Mike Speiser, Managing Director, Sutter Hill Ventures; and Sam Pullara, Chief Technologist, Yahoo! Inc.

SSDs are one of three ways to fill the memory/storage gap called the “Flash zone,” as discussed in the previous AgigA Tech blog entry, which described Flynn’s initial panel presentation. Although SSDs are not yet a major consumer of NAND Flash memory devices, their use is growing quickly because of the speed advantages they deliver over what can be achieved with rotating mechanical storage (hard disk drives). Flynn’s talk described the ideal conditions under which I/O-attached storage (including products offered by Fusion-io) can deliver stellar storage performance as measured in IOPS. Flynn’s presentation prompted the first panel question from moderator Coughlin: “Are hard disk drives dead?”

Flynn answered first. Unsurprisingly, he said “No.” Tape hasn’t died either, said Flynn, and neither has DRAM. None of these technologies is in danger of disappearing overnight. HDDs (hard disk drives) currently enjoy a huge cost/capacity lead over any competing storage technology (excluding tape) and HDDs will only disappear when they lose that lead.

Speiser also weighed in. Tape’s huge cost/capacity lead over HDD storage is the only factor that keeps tapes alive for their ultimate use: “offline storage inside of (hollowed-out) mountains.” Tapes will outlast HDDs, added Speiser. “They’re the cockroaches of the storage industry.” Chenery, who left HDD vendor Fujitsu in 2006 to start SSD supplier Pliant, also spoke favorably about HDDs. “No one wants a mechanical drive in their computer,” said Chenery, because of the power consumption and susceptibility to physical shock. However, “they provide so much value for capacity,” he explained. “In 30 years, who knows?”

Watkins disagreed. “No one cares what’s in their PCs. Consumers think about the applications they want to run. Then they find the best hardware to fit their needs.” Watkins is more concerned by the applications that consumers will be using in five years. His conclusion: all mobility products will evolve into Flash-only use because Flash memory provides superior form factors for small, mobile end products. Meanwhile, cloud storage may obsolete large HDDs in laptops because it’s too dangerous to carry around all that valuable data in a form where it can be lost, stolen, damaged, or destroyed. Yahoo’s Pullara smiled at Watkins’ comment about cloud storage and quipped “How about unlimited storage (in Yahoo’s cloud)? Can you beat that, Google?”

“So if HDDs aren’t going away any time soon,” asked Coughlin, “why did you (Chenery) start Pliant?”

“Because no one would listen to me,” replied Chenery, who feels that SSDs are clearly going to redefine the way computer systems are architected.

Speiser jumped on the bandwagon. “We’re looking to invest in companies that have fundamentally rethought applications to back out assumptions based on spinning media.”

Pullara concurred. “Look at anti-spam in 2003,” he said. The need to maintain extensive lists of spam sources has soared since then. Maintaining those lists on slow HDDs would make it impossible to reject spam in real time, given the rising volume of spam emails.

Watkins returned the discussion to mobile applications. “The sweet spot for Flash is in the hand,” he said. SSDs must reach 100-Gbyte capacities for netbooks while enterprise applications require terabytes of data storage and corresponding changes in server architecture.

The question of data reliability and trust then arose. Flash memory has well-documented, well-understood wearout and failure mechanisms. In fact, Flash vendors have been far more open and informative about these technology issues than HDD vendors have been. As a result, people better understand Flash failure modes and are more aware of them. Chenery grinned and asked “Why would you trust your data to a flying head on a disk?” referring to the incredibly small gap between the read/write head and the spinning media. Head crashes are a well-known HDD failure mechanism. “Flash memory has its idiosyncrasies, but technology overcomes a lot of these,” said Chenery. “You manage these idiosyncrasies with appropriate controllers, software, and use models.” In the end, said Chenery, system-level designers shouldn’t trust any of the HDD or SSD vendors. They should test and verify reliability claims.

In addition, said Chenery, SSDs don’t “fall off the cliff” (fail catastrophically like HDDs). They provide deterministic, predictable performance that allows for soft failures, usually seen as a gradual capacity decrease as control firmware walls off bad blocks in the Flash memory and moves data to good blocks. Most SSDs will decline in performance over time, claimed Chenery. They must be designed specifically to not decline in performance at the subsystem level. “Getting Flash to deliver deterministic performance in a random environment is hard. It requires enormous computing power.”

Sunday, November 29th, 2009 at 15:25