Technology

New Silicon-Based Memory 5X Denser Than NAND Flash

Lucas123 writes "Researchers at Rice University said today they have created a new non-volatile memory using nanocrystal wires as small as 5 nanometers wide, which could make chips five times denser than the 27-nanometer NAND flash memory being manufactured today. The memory is also cheap because it uses silicon rather than the more expensive graphite used in previous iterations of the nanowire technology. The nanowires also allow layers to be stacked into even denser 3-D memory. 'The fact that they can do this in 3D makes it highly scalable. We've got memory that's made out of dirt-cheap material and it works,' a university spokesman said."
This discussion has been archived. No new comments can be posted.

  • by TubeSteak ( 669689 ) on Tuesday August 31, 2010 @11:50PM (#33432428) Journal

    "Dirt cheap" isn't here to stay.

    Their technology requires polycrystalline silicon & the demand is increasing much faster than the supply.
    China might build more polysilicon factories, but they'll undoubtedly reserve the output for their own uses.
    This isn't a new problem, since mfgs have been complaining about shortages since 2006-ish (IIRC).

  • by symbolset ( 646467 ) on Tuesday August 31, 2010 @11:52PM (#33432436) Journal

    All of the tech we actually purchase comes out of tech published in articles like this one. Processor process technologies, bus evolutions, memory architectures, and advancements in lithography are printed here and wind up in the products you buy. Not all of the articles turn into successful technologies, but all of the successful technologies had articles, and the time spent reading about the failures is the price we pay to know about such things in advance. Most of us don't mind, because there are lessons in failures too. Did you read the top of the page where it says "News for nerds"? Are you lost?

    Digg is over here [digg.com].

  • so how wide is 5nm? (Score:5, Informative)

    by ChipMonk ( 711367 ) on Wednesday September 01, 2010 @12:58AM (#33432658) Journal
    The radius of a silicon atom is 111 to 210 picometers, depending on the measurement context. (Check Wikipedia to see what I mean.) That means 5nm is somewhere between 23 and 45 silicon atoms wide.
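    A minimal sketch (Python) that reproduces that back-of-the-envelope arithmetic: it simply divides the 5 nm wire width by the two quoted radius values. Treating one atomic radius as the per-atom spacing is the commenter's simplification, not a crystallographic calculation.

    ```python
    # Divide the 5 nm wire width by the quoted silicon atomic radius bounds.
    # Using the radius (rather than a lattice spacing) mirrors the estimate
    # in the comment above; it gives rough bounds only.
    width_pm = 5_000              # 5 nm expressed in picometers
    radius_bounds_pm = (210, 111) # quoted radius range for a silicon atom

    for r in radius_bounds_pm:
        print(f"radius {r} pm -> about {width_pm / r:.1f} atoms across")

    # radius 210 pm -> about 23.8 atoms across
    # radius 111 pm -> about 45.0 atoms across
    ```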
  • by OeLeWaPpErKe ( 412765 ) on Wednesday September 01, 2010 @03:07AM (#33433016) Homepage

    2D: anything that only has connections in two directions. Being stacked does not change its 2D-ness if the layers don't interact in a significant way (a book would not be considered 3D, nor even 2.5D, and neither would a chip structured like a book).
    2.5D: anything that has connections in three directions, but where one of the directions is severely limited in what it can connect and in which way the wires can run (e.g. wires can only go straight up, with no further structure).
    3D: true 3D means you can etch any 3D structure at all (e.g. you can implement a transistor at a 30-degree angle to another).

    The most advanced tech in silicon chips we have now is 2.5D, and these chips are still not fully 3D.

  • by Alef ( 605149 ) on Wednesday September 01, 2010 @03:55AM (#33433142)

    First is heat. Volume (a cubic function) grows faster than surface area (a square function). It's hard enough as it is to manage the hotspots on a 2D chip with a heatsink and fan on its largest side. With a small number of z layers, you would at the very least need to make sure the hotspots don't stack.

    I'm not saying your point is entirely invalid; however, heat isn't necessarily a problem if you can parallelize the computation. Rather the opposite, in fact. If you decrease clock frequency and voltage, you get a non-linear decrease in power for a linear decrease in processing power. This means two slower cores can produce the same total number of FLOPS as one fast core while using less power (meaning less heat to dissipate). As an extreme example of where this can get you, consider the human brain -- a massively parallel 3D processing structure. The brain has an estimated processing power of 38*10^15 operations per second (according to this reference [insidehpc.com]), while consuming about 20 W of power (reference [hypertextbook.com]). That is several orders of magnitude more operations per watt than current CPUs achieve.
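    To make the frequency/voltage trade-off concrete, here is a minimal sketch (Python) using the classic dynamic-power approximation P ≈ C·V²·f. The clock and voltage figures are illustrative assumptions, not measurements of any real chip.

    ```python
    # Classic dynamic-power approximation: P ~ C * V^2 * f.
    # All figures below are invented for illustration only.
    def dynamic_power(freq_ghz, volts, cap=1.0):
        return cap * volts**2 * freq_ghz

    one_fast = dynamic_power(3.0, 1.2)        # one core at 3 GHz, 1.2 V
    two_slow = 2 * dynamic_power(1.5, 0.9)    # two cores at 1.5 GHz, 0.9 V

    print(f"one fast core : {one_fast:.2f} arbitrary power units")
    print(f"two slow cores: {two_slow:.2f} arbitrary power units")
    # one fast core : 4.32
    # two slow cores: 2.43  (same total ops/s if the workload parallelizes,
    #                        at a bit over half the power)
    ```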

  • by Sycraft-fu ( 314770 ) on Wednesday September 01, 2010 @05:36AM (#33433460)

    Cache isn't a case where more is always better. What you discover is that performance is roughly a logarithmic function of the amount of cache. On that scale, 100% would be the speed you'd get if all RAM ran at cache speed, and 0% is RAM-only speed. With current designs you're already in the 95%+ range, so adding more gains you little. (A toy model of this diminishing-returns curve is sketched after this comment.)

    Now not everything works quite the same. Servers often need more cache for ideal performance so you'll find some server chips have more. In systems with a lot of physical CPUs, more cache can be important too so you see more on some of the heavy hitting CPUs like Power and Itanium.

    At any rate, you discover that the chip makers are reasonably good with the tradeoff between cache and other uses of the die, and this is demonstrable because, with normal workloads, CPUs are not memory starved. If the CPU were continually waiting on data, it would be working below peak capacity.

    In fact you can see this well with the Core i7s. There are two different kinds, the 800s and the 900s, and they run on different boards with different memory setups. The 900s feature memory that is faster by a good bit. However, for most consumer workloads, you see no performance difference at equal clocks. What that means is that the cache is being kept full by the RAM despite the slower speed, and the CPU isn't waiting. On some pro workloads you do find that the increased memory bandwidth helps; the 800s are getting bandwidth starved. More cache could also possibly fix that problem, but perhaps not as well.

    Bigger caches are fine, but only if there's a performance improvement. No matter how small transistors get, space on a CPU will always be precious, and if more cache isn't useful, those transistors can always be spent on something else.
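    As a companion to the cache comment above, here is a toy model (Python) of the diminishing-returns curve it describes, using average memory access time (AMAT = hit time + miss rate x miss penalty) and the rough rule of thumb that miss rate falls with the square root of cache size. Every constant is an assumption for illustration, not data from any real CPU.

    ```python
    # Toy AMAT model: each doubling of the cache shaves off less latency than
    # the previous one, giving the "logarithmic-looking" payoff described in
    # the comment above. Constants are invented for illustration.
    def amat(cache_kb, hit_ns=1.0, miss_penalty_ns=60.0, base_miss=0.10):
        miss_rate = base_miss / (cache_kb / 256) ** 0.5   # sqrt rule of thumb
        return hit_ns + miss_rate * miss_penalty_ns

    for kb in (256, 512, 1024, 2048, 4096, 8192):
        print(f"{kb:5d} KB cache -> AMAT ~ {amat(kb):.2f} ns")
    ```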

  • by RivenAleem ( 1590553 ) on Wednesday September 01, 2010 @06:42AM (#33433688)

    more IOPS than god

    God doesn't need any Outputs. It's all one-way traffic with him.
