Intel Doubles Capacity of Likely Flash Successor

Intel has announced a new technique that allows it to effectively double the storage capacity of a single phase-change memory cell without adding cost to the current fabrication process. "Phase-change memory differs from other solid-state memory technologies such as flash and random-access memory because it doesn't use electrons to store data. Instead, it relies on the material's own arrangement of atoms, known as its physical state. Previously, phase-change memory was designed to take advantage of only two states: one in which atoms are loosely organized (amorphous), and another where they are rigidly structured (crystalline). But in a paper presented at the International Solid-State Circuits Conference in San Francisco, researchers illustrated that there are two more distinct states that fall between amorphous and crystalline, and that these states can be used to store data."
  • I just read an article about them passing the two billion transistor mark [bbc.co.uk] for a single chip. The BBC announcement mentions that many of these transistors are used for memory (the caches, I assume). I am not a hardware expert, but I wonder if this new phase-change memory is what they are using. Highly unlikely, since this seems to be brand-new research. If not, I certainly look forward to them integrating this into their chips and dies for use in caching--they could be blowing Moore's Conjecture out of the...
    • Re: (Score:3, Informative)

      by networkBoy ( 774728 )
      PCM litho tech is not compatible with CPU litho tech.
      So I doubt this will be happening any time in the near future.
      -nB
    • by SirLoadALot ( 991302 ) on Monday February 04, 2008 @01:23PM (#22294492)
      No, processor caches are made out of SRAM, which is very fast but takes a lot of silicon space. Flash and phase-change RAM are totally different technologies. They aren't intended to be as fast as SRAM, but they are non-volatile, storing their contents without power. Phase-change or a similar technology may end up giving DRAM a run for its money if it gets fast enough, but I've never heard of an alternative to SRAM for internal processor caches.
    • The new memory is described as being as fast in reads as DRAM, which is an order of magnitude or two slower than register memory, which is what you need in a processor. Intel's adding of so much memory into the CPU interests me far more. Add enough, and the main memory is IN the processor (a la the Transputer). From there, a little reorganizing reverses the arrangement - from memory in a CPU to a processor-in-memory (PIM) architecture. Special-purpose PIM has been done before - Cray embedded a communications...
      • It's not register memory that's taking up half the die space; it's SRAM cache. There's nothing all that interesting about it. It's old technology that's just gotten tiny enough to fit a ton of it on a CPU die.
        • by imgod2u ( 812837 )
          Depends on who you are. Intel is notorious in being able to pack tons of SRAM into a small die space and this isn't just due to process shrinks. There are all sorts of flavors of SRAM, the standard taking 6 transistors but some taking as little as 1 depending on density/speed trade-offs.
  • Salt shaker please (Score:3, Insightful)

    by techpawn ( 969834 ) on Monday February 04, 2008 @01:22PM (#22294472) Journal
    "Intel Doubles Capacity of Likely Flash Successor" this from a site that had a huge Intel logo on it for how many months?

    It's neat tech, but as long as flash keeps getting bigger and cheaper we won't see its 'Successor' for a while.
    • by EmbeddedJanitor ( 597831 ) on Monday February 04, 2008 @01:35PM (#22294734)
      Even when a technology becomes shippable it tends to take quite a while for it to catch on. It is easy to make small lab batches, but reliable low-cost high-volume production takes a lot longer. NAND flash was invented in 1988 but only really got going around 2003 - 15 years later.
      • Re: (Score:3, Insightful)

        by networkBoy ( 774728 )

        It is easy to make small lab batches
        I see you don't work in an R&D lab doing PCM...

        Before I left my former job we were working on PCM.
        It was anything but easy to make in small batches in the lab. Our average yield of 100% good die was under 1 die/wafer.
        We had plenty of 50% dice, but very few fully functional ones.
        -nB
        • He chose a poor phrasing.
          He should have said something like:
          "Relative to each other, it's easier to make one functional unit in a lab than to make 1000 functional units a day"

      • Except... (Score:3, Interesting)

        by Khyber ( 864651 )
        ...the reliable low-cost high-volume production facilities already exist, as the process deviates very little from the CMOS manufacturing process. It's the same material that is used in rewritable optical media, and on top of that, it's basically just glass. Where you once needed stable unchanging silicon for memory/data storage, now we're just using different states of glass. Most of your concerns are addressed by this technology, and this is why I'm watching it very closely. Go read up a bit here. (PDF WARNING) [ovonic.com]
    • by Microlith ( 54737 ) on Monday February 04, 2008 @01:49PM (#22294938)

      as long as flash keeps getting bigger and cheaper we won't see its 'Successor' for a while.

      As I understand it, NAND flash capacity grows as the process feature size shrinks. It's also cheap because it's produced in mass quantities.

      Everything that has made flash high-capacity and cheap can be applied to PCM, only PCM has a number of advantages:
      - more durable, since it doesn't force high voltages over blocks to erase them
      - smaller cells, allowing more to be packed in the same space
      - rewritability. You don't have to erase a block to change a single byte. It's more like RAM or hard disks in that respect (see the sketch below).

      So what will likely happen is a slow transition from flash to PCM as the major flash manufacturers move their products to this technology. It'll still have the same form factor, and most people won't notice aside from an increase in capacity.

      IANAPCMEBIWNS (I am not a pcm expert but I work near some...)
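
      A minimal sketch of the erase-block point above. The class names, sizes, and byte values here are mine, purely for illustration; real devices are far more involved:

      # Sketch: why NAND flash needs an erase-before-write cycle while PCM does not.
      class FlashBlock:
          """Programming can only clear bits (1 -> 0); only a block erase sets them back to 1."""
          def __init__(self, size=4096):
              self.cells = [0xFF] * size              # erased state: all ones

          def program(self, offset, value):
              self.cells[offset] &= value             # bits can only go 1 -> 0

          def erase(self):
              self.cells = [0xFF] * len(self.cells)   # whole block at once, wearing every cell

      class PCMRegion:
          """Phase-change cells are rewritten in place, like RAM."""
          def __init__(self, size=4096):
              self.cells = [0x00] * size

          def write(self, offset, value):
              self.cells[offset] = value              # no erase cycle needed

      # Changing one byte from 0x0F to 0xF0 in flash requires an erase first,
      # since programming can only clear bits (0x0F & 0xF0 == 0x00); PCM just overwrites.
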
      • by imgod2u ( 812837 )
        I'm not sure about the density of PCM vs flash but one of the biggest advantages of flash right now is that it uses the same fabrication process as other semiconductors. It's still CMOS.

        Consequently, you have economies of scale that translate from the other microelectronics markets. More importantly, you have control, interface, encryption logic, etc. all on one monolithic chip with one fab run. Many of our chip designs use small pockets of flash memory here and there (specially available f...
        rewritability. You don't have to erase a block to change a single byte. It's more like RAM or hard disks in that respect.

        While I don't disagree with your point overall, isn't that exactly unlike a hard disk? On a hard disk, you must rewrite an entire sector to change a byte, and not only that, you must wait until the platter spins around to the right spot again in order to do it.

      • Too bad they're not dupes.
      • The story you're referencing is about advances in Single-Level Cell (SLC) NAND flash memory, which is the more expensive, longer-wear-cycle cousin of the Multi-Level Cell (MLC) flash memory that's been taking over the USB flash market. MLC is more dense, and the price/gig has dropped by about 75% since last summer. Unfortunately, MLC technology only supports about 10,000 write cycles per cell (vs. about 100,000 for SLC), so you need to use wear-leveling drivers to keep it from wearing out, but that's still good e...
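
        A toy illustration of the wear-leveling idea. Names and numbers are mine; real flash translation layers also remap logical addresses and do garbage collection:

        # Toy dynamic wear-leveling: always erase/write the least-worn block.
        class WearLeveler:
            def __init__(self, num_blocks, max_cycles=10_000):   # ~MLC endurance
                self.erase_counts = [0] * num_blocks
                self.max_cycles = max_cycles

            def pick_block(self):
                # Choose the block with the fewest erases so wear spreads evenly.
                block = min(range(len(self.erase_counts)), key=self.erase_counts.__getitem__)
                if self.erase_counts[block] >= self.max_cycles:
                    raise RuntimeError("all blocks worn out")
                self.erase_counts[block] += 1
                return block
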
  • No longer binary? (Score:1, Interesting)

    by sm62704 ( 957197 )
    Will we now have computers that do base-4 arithmetic rather than base 2? At least the memory in them? Or is this exactly what the Intel engineers are saying?

    Could this new technology be used for CPUs as well, or only memory?
    • Re: (Score:3, Informative)

      Will we now have computers that do base-4 arithmetic rather than base 2?
      A base-4 digit is the same as two base-2 digits, information-wise, so it doesn't really matter how the information is stored. If you want to store a byte in some piece of hardware, you can store it as 4 base-4 values or 8 base-2 values, and it'll all look the same to the CPU.
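
      A quick sketch of that equivalence; the example byte is arbitrary:

      # A byte read as eight base-2 digits or four base-4 digits -- same information.
      byte = 0b10011100                                          # 0x9C
      bits  = [(byte >> i) & 0b1  for i in range(7, -1, -1)]     # [1,0,0,1,1,1,0,0]
      quads = [(byte >> i) & 0b11 for i in range(6, -1, -2)]     # [2,1,3,0]

      # Reassembling either list yields the identical byte, so the CPU never
      # needs to know which encoding the memory cells used.
      assert sum(b << i for i, b in zip(range(7, -1, -1), bits))  == byte
      assert sum(q << i for i, q in zip(range(6, -1, -2), quads)) == byte
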
      • Not really. Logic circuits use binary voltage levels, so the single voltage output from the PCM representing two bits has to be converted into two separate signals sent to the CPU. So it does matter how it's stored, because you have to convert the base 4 to base 2 before sending it to the CPU. This isn't a challenge, but it does need to be done.
    • Those are good questions.

      I'm terribly out of date on this, but - in the old days with multiple chips for a memory bank, there was address-decoding circuitry that would point you to the right chips/pins. If - and I'm clueless here - the same thing is still being done but it's all been reduced to fewer chips, then this _might_ imply that you still do the address decoding as before, but you have fewer address wires to route. In other words, we used to need two cells to get four states - now we have four states in one cell...
      • by imgod2u ( 812837 )
        Each cell, being just a variable resistor, will output a certain amount of voltage. Let's assume a 1.2V supply:

        00 = 0.0V
        01 = 0.4V
        10 = 0.8V
        11 = 1.2V

        You'd then do a crude A-D conversion (it's crude and small/low-power because it only needs to handle 4 distinct voltage levels). Off the top of my head, the smallest would be two pairs of N-FET and P-FET. One will have the threshold voltage biased at 0.2V and the other biased at 1.0V using a voltage divider from VDD to GND.

        The output of the first pair of FET...
        • by imgod2u ( 812837 )
          Actually, scratch that. You'd probably want to bias the voltages coming out of the cell. Two paths, one to bias it by Vt-0.2V and the other Vt-0.4V and have them go to separate FET pairs.
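
          A behavioral model of the read-out described above. The levels and supply follow the 1.2V example in the parent comment; a real sense amplifier works on currents/resistances, and the FET biasing details are beyond this sketch:

          # Reading a 2-bit PCM cell: compare the sensed voltage against three
          # thresholds, one midway between each pair of adjacent levels.
          LEVELS = {0b00: 0.0, 0b01: 0.4, 0b10: 0.8, 0b11: 1.2}   # volts
          THRESHOLDS = (0.2, 0.6, 1.0)

          def read_cell(voltage):
              # Crude A-D conversion: count how many thresholds the voltage clears.
              return sum(voltage > t for t in THRESHOLDS)          # 0..3, i.e. two bits

          for bits, v in LEVELS.items():
              assert read_cell(v) == bits
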
    • On the very low hardware level, yes, the memory will be base 4. However, on anything but that very low level there's no reason for anything to care - it's trivial to convert between base 2 and 4, and it's a lot easier and more sensible to program a computer that works in base 2 than to make your opcodes do something sensible on 4-state things.
    • by gaelfx ( 1111115 )
      Probably not, since I doubt the elements would interact in the mathematically coherent way that transistors do. Even if you figured out a way to make sense of it, it would likely involve the use of something simpler than the element itself, thereby making it essentially the same as using transistors, at least from my understanding. But with the way that computers actually "evolve", it's unlikely that such technology would take hold in a marketable fashion in our lifetime. I mean, how do you think Microsoft is s...
    • You can label the four states however you like, such as 00/01/10/11 or 0/1/2/3 or A/B/C/D or T/A/C/O. But whatever they are, Intel would need to, at some point, convert this all back to 2 bits with states 0/1 when interfacing with external binary circuits. If they don't know how to do that, they are welcome to "Ask Slashdot".

    • An interesting piece I ran across many years ago about ternary (and other bases -- try base-e!) systems, and how they _can_ be better at some things than binary.
      http://www.americanscientist.org/template/AssetDetail/assetid/14405?&print=yes [americanscientist.org]
      • I'm partial to base i myself.
      • by imgod2u ( 812837 )
        Mathematically speaking, e is the most efficient base. Imagine a decode being a search tree and the base being the number of children per node. A decode function is simply traversing the tree: the first level of the tree is the first digit, the second level the second digit, and so on.

        To find any number N, one would need a tree of L levels and B children per node at each level. B^L must then be >= N in order to guarantee that the number N is represented in the tree. We can then take this to mean...
        • T = S*B^(ln(N)/ln(B))

          To find the base that would result in the lowest search time, take the double derivative of T with respect to B and find the roots (peak and valley where search time either maximizes or minimizes).
          First, I believe you mean roots of the _first_ derivative (not the second) to find extrema. Second, I don't believe the heuristic model, because S*B^(ln(N)/ln(B)) has no dependence on B at all: B^(ln(N)/ln(B)) is identically N.
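
          The fix, I believe, is the standard radix-economy argument: make the cost per level proportional to B, which restores the B-dependence (a sketch, with S a per-step constant):

          % Depth of a B-ary decode tree over N values: L = ln N / ln B.
          % Work per level is proportional to B (select one of B children).
          T(B) = S \, B \, \frac{\ln N}{\ln B}

          % Set the first derivative to zero:
          \frac{dT}{dB} = S \ln N \cdot \frac{\ln B - 1}{(\ln B)^2} = 0
          \quad\Longrightarrow\quad B = e \approx 2.718

          Restricted to integer bases, base 3 edges out base 2 (3/ln 3 ~= 2.73 vs. 2/ln 2 ~= 2.89), which is the ternary result the American Scientist article linked above discusses.
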
    • Re: (Score:3, Interesting)

      by imgod2u ( 812837 )
      You'd be surprised how much of your computer isn't "binary" per se. If you have a modem, I think the standard is a base-16 transmission code. Flash memory currently stores 2 bits per cell. Hell, the quad-pumped signal going from memory to processor (if you have a Core 2 or P4) isn't "binary" per se.
  • The article says that Intel has just doubled the capacity of PRAM, which is nice, but PRAM will not be commercially viable for some time to come, so the article, or at least the headline, is somewhat sensational. I guess science journalists are still journalists.

    When I am working on a design, I guess I could say that I increased the capability by an infinite amount at the moment the first prototype is verified functional.

  • by Anonymous Coward
    The principle is similar to MLC flash (multi-level cell NAND flash), which also stores 2 bits per cell by using 4 different voltage levels. The tech behind this memory is completely different, though.
  • If they found only one middle state, they could implement { true, false, file_not_found [thedailywtf.com] } enum
  • One cell of material can be switched between four states, so base-4 logic is possible. However, the article indicates that each cell will be used for 2 bits. In this arrangement, capacity is only doubled.
    • by AdrocK ( 107367 )
      If you take each cell individually, there are 4 states it can be in (2 bits' worth of states). But when you look at that in the whole array, you have a base-4 number.

      I can see using this to jam more storage onto the device, then making a simple circuit to convert the base 4 to base 2, but I don't ever see this being usable outside of storage, unless there are some sort of quaternary logic gates that I don't know about.
        Unless there are some sort of quaternary logic gates that I don't know about.

        Binary logic - yes, no.

        Quaternary logic - Yes, probably, possibly, no.

        Hmmm. 50 percent of the choices there are indeterminate; better stick with the trinary 'yes, maybe, no' model.

  • Storing the information in the physical states of atoms, while it may lead to a practical increase in storage, means the theoretical limit of storage/volume is a lot smaller. So unless we want to go one step forward and two steps backwards, this looks like a major mistake as an area to research!
    • by Bender_ ( 179208 )

      And the alternative would be...?
    • Re: (Score:3, Informative)

      by imgod2u ( 812837 )
      Not really. If you think about it, fundamentally there are just as many states a bunch of atoms can arrange themselves in as there are ways of counting electrons. You're really only bound by Planck's constant in how resistive or conductive a collection of atoms is, assuming you had a device sensitive enough to detect the variation.

      The trick is, of course, in how fast you can change those states. I would imagine electrons are much easier to move than whole atoms. I understand how read speed for PCM is faster than a transistor bu...
      • With electrons, I'm not sure how small the groups of atoms holding the charge in a circuit can get, but theoretically they could get down to tens of atoms (hell, they could get down to storing it in 2 atoms - not 1, due to quantum effects). But when you store the data in the arrangement of atoms, you instantly lose the edges, because they go crazy; then you need enough atoms to store the pattern, which I suppose won't be that many, as it's just the energy levels of the electrons changing, but for it to be a...
  • Oh look, another innovation that we will probably never see. I love new ideas, but instead of reporting on possible uses of technology science, how about actual science: stuff-you-will-see-in-2-months reports. The "LOOK - SUPERBATTERIES WITH CARBON NANOTUBES - CHARGE IN SECONDS - 5 BAZILLION HOURS USE" kinds of articles are interesting, but they are not Slashdot-worthy. In Seinfeld's world, it definitely would not be sponge-worthy.
  • Welcome our new bulging fore-headed overlords! Oh, you meant the other kind of memory...
  • Shall we call them 'amorlline' and 'crystphous'?

  • So now we're actually gonna see storage capacity measured in "gigaquads"?
