New Silicon-Based Memory 5X Denser Than NAND Flash
Lucas123 writes "Researchers at Rice University said today they have been able to create a new non-volatile memory using nanocrystal wires as small as 5 nanometers wide, which could make chips five times more dense than the 27 nanometer NAND flash memory being manufactured today. The memory is also cheap because it uses silicon rather than the more expensive graphite used in previous iterations of the nanowire technology. The nanowires also allow stacking of layers to create 3-D memory that is denser still. 'The fact that they can do this in 3D makes it highly scalable. We've got memory that's made out of dirt-cheap material and it works,' a university spokesman said."
Re:Is anybody writing this down? (Score:4, Insightful)
All we ever see is a drop in the price of USB sticks in the shop, but under the surface the duck is paddling as hard as ever.
Re:Is anybody writing this down? (Score:3, Insightful)
how many do we ever actually purchase?
Some. Is that not enough to make it newsworthy?
25x more dense, not 5x more dense... (Score:5, Insightful)
The big question I have for all of these technologies is whether or not it is mass-production worthy and reliable over a normal usage life.
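As for the title's arithmetic: the 25x presumably comes from treating the quoted feature sizes as linear dimensions and squaring the ratio. A quick back-of-the-envelope, assuming density scales with feature area (which glosses over real cell layouts):

    # Rough areal-density comparison, assuming cell area scales with the
    # square of the quoted feature size. Real NAND cell layouts differ, so
    # treat this as an illustration of the headline math only.
    nand_nm = 27.0   # current NAND flash process node from the summary
    wire_nm = 5.0    # nanocrystal wire width from the summary

    linear_ratio = nand_nm / wire_nm     # ~5.4x (the "5X" headline)
    areal_ratio = linear_ratio ** 2      # ~29x (closer to the "25x" claim)
    print(f"linear: {linear_ratio:.1f}x, areal: {areal_ratio:.1f}x")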
Here they come... (Score:4, Insightful)
Best Buy and Amazon are both selling Intel's 40 GB flash drive for just under $100 this week... I'm building a server based around it and will likely post later on how that goes. Intel recently announced that they're upping the sizes, so you're likely to see the 40 GB model in the clearance bin soon.
It's here, it's ready... and when you don't have a TB of data to store they're a great choice, especially when you read much more often than you write.
Re:I wouldn't be so sure (Score:5, Insightful)
If it takes 18 months to bring a plant online, that is pretty much the limit of the market's ability to cope with surprise demand (minus any slack in existing capacity that can be wrung out). For highly predictable stuff, no big deal; the plant will be built by the time we need it. But surprises can and do happen, even for common materials, especially given the degree to which "just in time" has come to dominate the supply chain. This isn't your merchant-princes of old, sitting on warehouses piled high. Inventory that isn't flowing like shit through a goose is considered a failure, with the rare exceptions of "national security" justified stockpiles or the occasional hedge or futures position that is actually stored in kind rather than in electronic accounts somewhere.
And if you want a big SSD (Score:5, Insightful)
And if you do need a big SSD, Kingston has had a 512GB laptop SSD out since May with huge performance, and this month Toshiba and Samsung will both step up to compete and bring the price down. We're getting close to retiring mechanical media in the first tier. Intel's research shows SSD failure rates at 10% of mechanical media's. Google will probably have a whitepaper out on this issue in the next six months too.
This is essential because for server consolidation and VDI the storage bottleneck has become an impassable gate with spinning media. These SSDs are being used in shared storage devices (SANs) to deliver the IOPS required to solve this problem. Because incumbent vendors make millions from each of their racks-of-disks SANs, they're not about to migrate to inexpensive SSD, so you'll see SAN products from startups take the field here.

The surest way to get your startup bought by an old-school SAN vendor for $Billions is to put a custom derivative of OpenFiler on a dense rack of these SSDs and dish it up as block storage over the user's choice of FC, iSCSI or InfiniBand, as well as NFS and Samba file-based storage. To get the best bang for the buck, adapt the BackBlaze box [backblaze.com] for SFF SSD drives. Remember to architect for differences in drive bandwidths or you'll build in bottlenecks that will be hard to overcome later and drive business to your competitors with more forethought. Hint: when you're striping in a commit-on-write, log-based storage architecture, it's OK to oversubscribe individual drive bandwidths in your fanout to a certain multiple, because the blocking issue is latency, not bandwidth. For extra credit, implement deduplication and back the SSD storage with supercapacitors and/or an immense battery-powered write-cache RAM for nearly instantaneous, reliable write commits.
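A minimal sketch of that oversubscription point, with made-up numbers (the drive bandwidth, latency, stripe width, and front-end figure below are illustrative assumptions, not measurements from any product):

    # Toy model of a commit-on-write log stripe: a write is chopped across
    # the stripe, so commit latency is one drive round-trip plus a small
    # per-drive transfer. The front end can advertise more bandwidth than
    # the sum of the drives as long as that latency budget holds.

    def commit_latency_ms(write_kb, stripe_width, drive_mb_s, drive_latency_ms):
        """Time for one log append striped across `stripe_width` drives."""
        per_drive_kb = write_kb / stripe_width
        transfer_ms = per_drive_kb / (drive_mb_s * 1024.0) * 1000.0
        return drive_latency_ms + transfer_ms

    def oversubscription(frontend_mb_s, stripe_width, drive_mb_s):
        """How far the advertised front end exceeds raw aggregate drive bandwidth."""
        return frontend_mb_s / (stripe_width * drive_mb_s)

    # Hypothetical SFF SSDs: 250 MB/s and 0.1 ms each, 16-wide stripe,
    # 8 GB/s advertised front end.
    print(commit_latency_ms(write_kb=64, stripe_width=16,
                            drive_mb_s=250, drive_latency_ms=0.1))  # ~0.12 ms
    print(oversubscription(frontend_mb_s=8000, stripe_width=16,
                           drive_mb_s=250))                          # 2.0x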
I should probably file for a patent on that, but I won't. If you want to then let me suggest "aggregation of common architectures to create synergistic fusion catalysts for progress" as a working title.
That leaves the network bandwidth problem to solve, but I guess I can leave that for another post.
Re:It has been obvious for years. (Score:5, Insightful)
We don't just go vertical without solving the heat dissipation problem. We already have a hard time dissipating the heat off the surface area of one layer. Now imagine trying to dissipate the heat from a layer that is trapped between two other layers generating the same amount of problematic heat. Then try to figure out how to dissipate the heat from a thousand layers, just to buy 10 more years of Moore's law.
Not to be too critical... (Score:1, Insightful)
Re:Sigh... (Score:3, Insightful)
Re:It has been obvious for years. (Score:5, Insightful)
It's not as obvious as it sounds. Some things get easier if you're basically still building a 2D chip but with one extra z layer for shorter routing. It quickly gets difficult if you decide you want your 6-core chip to now be a 6-layer one-core-per-layer chip. Three or four issues come to mind.
First is heat. Volume (a cubic function) grows faster than surface area (a square function). It's hard enough as it is to manage the hotspots on a 2D chip with a heatsink and fan on its largest side. With a small number of z layers, you would at the very least need to make sure the hotspots don't stack. For a more powerful chip, you'll have more gates, and therefore more heat. You may need to dedicate large regions of the chip to some kind of heat transfer, but that comes at the price of more complicated routing around it. You may need to redesign the entire structure of motherboards and cases to accommodate heatsinks and fans on both large sides of the CPU. Unfortunately, the shortest path between any two points goes through the center, the hottest spot is also going to be the center, and the chunk of metal that most needs to carry heat out would have to go through the center too. In other words, nothing is going to scale as nicely as we'd like.
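To put rough numbers on the surface-vs-volume point (a back-of-the-envelope with an assumed per-layer power and die size, not figures for any real part):

    # Back-of-the-envelope: if each stacked layer dissipates the same power
    # but the heatsink still only covers one face of the die, the heat flux
    # through that face grows linearly with the number of layers.
    # Assumed numbers: 100 W per layer, 4 cm^2 die face.

    watts_per_layer = 100.0
    die_face_cm2 = 4.0

    for layers in (1, 2, 6, 1000):
        flux = layers * watts_per_layer / die_face_cm2  # W/cm^2 through one face
        print(f"{layers:5d} layers -> {flux:8.0f} W/cm^2")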
Second is delivering power and clock pulses everywhere. This is already a problem in 2D, despite the fact that radius (a linear function) scales slower than area and volume. There's so MUCH hardware on the chip that it's actually easier to have different parts run at different clock speeds and just translate where the parts meet, even though that means we get less speed than we could in an ideal machine. IIRC some of the benefit of the multiple clocking scheme is to reduce the heat generated, too. The more gates you add, the harder it gets to deliver a steady clock to each one, and the whole point of adding layers is to add gates and make more powerful chips. Again, this means nothing will scale as nicely as we like (it already isn't going as nicely as we'd like in 2D). And you need to solve this at the same time as the heat problems.
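A rough sense of why clock distribution gets harder as gates multiply (a toy calculation with an assumed fanout and buffer delay, not data from any real design):

    import math

    # Toy clock-tree model: with a fixed buffer fanout, tree depth grows with
    # log(gate count), and every extra buffer stage adds insertion delay that
    # has to be balanced across the chip to keep skew in check.
    # Assumptions: fanout of 4, 20 ps per buffer stage (a ~3 GHz clock period
    # is about 333 ps, for comparison).

    fanout = 4
    stage_delay_ps = 20.0

    for gates in (10**6, 10**8, 10**9):
        depth = math.ceil(math.log(gates, fanout))
        print(f"{gates:>13,d} gates -> tree depth {depth}, "
              f"insertion delay ~{depth * stage_delay_ps:.0f} ps")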
Third is an insurmountable law of physics: the speed of signals in our CPU and RAM wiring will never exceed the speed of light in vacuum. Since we're already slicing every second into 1-4 billion pieces, the amazingly high speed of light ends up meaning that signals only travel a single-digit number of centimeters of wire per clock cycle. Adding z layers in order to add more gates means adding more wire, which is more distance, which means losing cycles just waiting for signals to propagate through the chip. Oh, and with the added complexity of more layers and more gates, there's a higher number of possible paths through the chip, they're going to be different lengths, and chip designers will need to juggle it all. Again, this means things won't scale nicely. And it's not the sort of problem you can solve with longer pipelines - that actually adds more gates and more wiring. And trying to stuff more of the system into the same package as the CPU antagonizes the heat and power issues (while reducing our choices in buying and upgrading parts. Also, if GPU and main-memory performance *depend* on being inside the CPU package, replacement parts plugged into sockets on the motherboard are going to have inherent, insurmountable disadvantages).
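The single-digit-centimeters claim is easy to sanity-check (the 0.5 factor for propagation in real wiring is a loose rule of thumb I'm assuming, not a spec):

    # How far a signal can travel per clock cycle. c is exact; the wire factor
    # is an assumed fraction of c for on-chip/board signals.
    c_m_per_s = 299_792_458
    wire_factor = 0.5

    for ghz in (1, 3, 4):
        period_s = 1.0 / (ghz * 1e9)
        cm_vacuum = c_m_per_s * period_s * 100
        cm_wire = cm_vacuum * wire_factor
        print(f"{ghz} GHz: {cm_vacuum:.1f} cm in vacuum, "
              f"~{cm_wire:.1f} cm in wire per cycle")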
Re:Well that may be problematic (Score:3, Insightful)
L1 CPU caches are shamefully stuck near the laughable 20-year-old 640K meme in rarely noticed ways. Everyone's first thought is about RAM, but remember that CPUs are less friendly to change and would benefit more from something like a 128K * 5 L1 size at the new density.
Our supposedly macho CPUs have only 128K L1 sizes, with absurdly large L2 and L3 [amd.com] sizes to make up for it.
The current excuse is that cost and die-space constraints keep size improvements mostly on the L2 and L3 side. Sadly, someone tagged the article "tenyears" and we'll be dealing with different research by then, like utilizing today's 64-bit, multi-core technology to its fullest.