New RAM technology developed
Christopher Thomas writes "Tom's Hardware Guide had a link to this EE Times article, which describes a new type of RAM developed by Hitachi. It uses stored charge in what looks like a cleverly controlled floating gate to store data, as opposed to stored charge in capacitors in conventional DRAM. Hitachi says that it will be able to ship this in quantity reasonably soon. It looks reasonably compact, and will scale much more easily to smaller linewidths than standard capacitor-based DRAM cells. It's also faster, as you don't have the whole precharge/amplify readout cycle to deal with. "
Just a minor nit... (Score:4)
"...It's also faster, as you don't have the whole precharge/amplify readout cycle to deal with. "
This is probably not true. Precharge and amplify have less to do with the nature of the memory cell;
they are used to speed up the access time of a RAM device. Even today's fast static RAMs use
precharge and amplify circuits to speed things up. (DRAMs also use the sense amps to restore the
capacitor charge on read cycles; SRAMs usually don't need this function, but still have amps
for speed.)
The precharge puts the bit-sense lines into a known state (as opposed to an unknown state).
Once the bit cell starts driving the bit-sense line, the sense amp senses the small change from
the known state and amplifies it. If you don't precharge, you have to wait a while for the data
to stabilize. But if the bit-sense lines are precharged to a known state, as soon as a line starts
to change one way or the other, the amp kicks in and bam!
Also, in a large array (e.g., Gbit arrays), the poor little super-small transistor attached to
a bit of memory in the middle of the array has little hope of driving a big, long piece of
bit-sense wire out to the edge of the array without the help of an amplifier.
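To put rough numbers on it, here's a toy first-order RC model of bit-line settling (all
values are illustrative assumptions, not from any real part):

import math

VDD = 3.3        # supply voltage (V), assumed
RC = 2e-9        # bit-line time constant (s), assumed

def settle_time(v_start, v_target, v_detect):
    # time for v(t) = v_target + (v_start - v_target) * exp(-t / RC)
    # to reach v_detect
    return RC * math.log((v_start - v_target) / (v_detect - v_target))

# Precharged to VDD/2, differential sensing: the amp fires on a ~50 mV swing.
t_precharged = settle_time(VDD / 2, VDD, VDD / 2 + 0.05)

# No precharge, single-ended: worst case the line starts at 0 V and the data
# isn't trustworthy until it crosses the VDD/2 detection threshold.
t_unprecharged = settle_time(0.0, VDD, VDD / 2)

print(f"precharged + sense amp: {t_precharged * 1e12:.0f} ps")
print(f"no precharge          : {t_unprecharged * 1e12:.0f} ps")

With these made-up numbers the precharged, differential read is ready roughly 20x sooner,
which is the whole point of the precharge/amplify scheme.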
Good implantable technology (Score:1)
AI is interesting as an intellectual curiosity, but the
more immediate thing to look forward to is better human-computer
interaction. Much, much better HCI. Screw wearable PCs, bring
on the implants. Now with this kind of information density, it's actually
worthwhile. It would be nice to be able to remember everything I ever said, or
that was said to me. If nothing else, it would be a great help when I get into an
argument with my wife. A little further down the line, and you'll be able to
remember everything you ever saw.
And for all those out there who think we're going to evolve into a race of cyborgs:
you're crazy... it'll go MUCH further than that.
After all, once people have got decent hardware implanted in their heads, do you
think we're going to be satisfied with a 200-baud connection (human speech)?
No, we'll use the hardware in our heads to communicate with other people (through
the hardware in their heads). With sufficient communication, it stops making sense
to talk about multiple communicating processors - you end up with a single, massively
parallel computer. We will become the Borg, but not in a bad way. If you combine the
properties of humans and computers and end up with something which does not have the
best of both... then you haven't done it right. The internet will evolve from being
a global repository of all human knowledge into actually being humanity. We will be the
nodes on the network. It won't take long either. Just 150 years or so at this rate.
Of course, this is bound to cause a little friction during the transitional period.
Some people will doubtless object, and probably consider the end of humanity
as we know it to be a bad thing. I don't think the induhviduals (as Dogbert would have it)
will stand much of a chance, though; they'll be seriously out-smarted, and the
reliance which regular humanity places on computers will make them pathetically
unable to fight against those who have plugged in. The HumaNet might have to
annihilate them to protect itself. It's a bit like the old chestnut of being trapped
in a room with a potential madman - you'd best kill him first in case he's thinking
the same thing that you are. The HumaNet and humanity will essentially be different
species so the potential for distrust and misunderstanding is high.
Personally I hope that won't be necessary. The HumaNet will probably be smart
enough to protect itself without resorting to annihilation even if humanity tries to destroy it.
The HumaNet will take pleasure in letting some loose humans roam free - in the same way
that we like to think of apes still living wild and free in the jungle.
Sorry guys, I seem to be losing it.
I wonder how many MP3s you can get on that thing...
X-rays are EM radiation (Score:1)
As far as the impact on laptops goes, whatever ionization occurs is transient. Over time, with many exposures, you might kill your laptop with airport scanners, but I imagine that you'd kill yourself first. I wonder, has anyone run their laptop through a scanner while it was on? Did it crash the laptop?
PS memory: EE RDRAM, GS Embedded (Score:1)
RDRAM is high bandwidth, very high latency.
The Graphic Synthesizer has 4Mbytes of embedded DRAM for the RGBA-buffer/Z-buffer.
Embedded DRAM is high bandwidth, lower latency and on chip.
So the PS2 will have both.
Fast SRAM/DRAM Architecture 101 (Score:1)
In a RAM, the more cells you put on a bit line, the fewer levels of block
mux logic the signal has to traverse. There will be a sweet spot between
cells per bit line and the number of blocks to mux.
On most designs, there are pass-gates to partition out segments
of the bit-sense line. For DRAM the magic number is usually around
32 cells on a bit-sense line (to keep the DRAM cap bigger than
the bit-line wire cap). For SRAM it's usually a bit more. This is
basically the small-block idea. But each partition is just like
a small block of memory...
However, in modern designs, fast SRAM and DRAM sense amps are not
all that different...
For DRAMs, each bit cell is a capacitor which is charged to hold a bit.
Each bit is connected to a bit line by a pass transistor.
Since the pass transistor isn't a perfect insulator, after a while
the bit cell (being just a capacitor) tends to leak and usually takes
on whatever value is on the bit line (this is why refresh is required).
To make the bit retention time symmetric, the bit line is usually
kept at V/2 volts as much of the time as possible (although there are other
capacitor leakage paths...).
In the old days, the pass transistor was just opened and the voltage
sensed by an amplifier. The read is destructive (it discharges the
capacitor), so the bit had to be re-written after it was read. Usually the
amplifier reads and re-writes in one step. The problem with this was that
it took a while for the voltage to stabilize and cross a voltage
detection threshold (to be certain it was a 1 or a 0 stored in the
original bit cell).
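A quick charge-sharing calculation (capacitance values assumed, purely for
illustration) shows just how small the signal is:

VDD = 3.3        # volts, assumed
C_CELL = 30e-15  # cell capacitor, ~30 fF, assumed
C_BL = 300e-15   # bit-line wire + drain capacitance, assumed (10x the cell)

def bitline_after_share(v_cell, v_bl):
    # final shared voltage: total charge / total capacitance
    return (C_CELL * v_cell + C_BL * v_bl) / (C_CELL + C_BL)

v_bl = VDD / 2                       # bit line held at V/2 between accesses
v1 = bitline_after_share(VDD, v_bl)  # cell held a '1'
v0 = bitline_after_share(0.0, v_bl)  # cell held a '0'
print(f"reading '1': bit line moves {1000 * (v1 - v_bl):+.0f} mV")
print(f"reading '0': bit line moves {1000 * (v0 - v_bl):+.0f} mV")

Only ~150 mV either way -- far from a full logic swing, and the read has
emptied the cell, which is why the amp has to amplify AND write back.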
For fast modern DRAMs, a dummy row of bits is made on each DRAM which
is always charged to V/2 volts (during the precharge cycle). The dummy
row is read at the same time as the real row. The sense amp just
amplifies the difference between the dummy row's bit-sense line and the
real row's bit-sense line. This is much faster than waiting for the
capacitor to charge up the bit wire to cross a voltage threshold.
As soon as a small voltage difference occurs between the dummy bit-line
and the real bit-line, the differential sense amp just amplifies it and
feeds it back. This is about as fast as you can get!
The reason for the dummy row instead of a reference voltage is that it is
just like a real row (same performance/timing). Sensing a differential
value is more independent of manufacturing variations than a fixed voltage
threshold. Also the capacitors discharge just like the real row so that
prop-delay effects of caps charging/discharging are minimized. They
also have neat techniques for getting exactly V/2 volts into the dummy row
(charge a cap to V volts, charge an identical cap to 0 volts, and short
them together).
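Here's a minimal sketch of the dummy-row comparison (same assumed
capacitances as the charge-sharing sketch above):

VDD, C_CELL, C_BL = 3.3, 30e-15, 300e-15

def share(v_cell, v_bl=VDD / 2):
    # voltage after the pass gate connects the cell cap to the precharged line
    return (C_CELL * v_cell + C_BL * v_bl) / (C_CELL + C_BL)

v_dummy = share(VDD / 2)             # dummy cell holds V/2: line stays put
for stored, v_cell in (("1", VDD), ("0", 0.0)):
    diff = share(v_cell) - v_dummy   # what the differential amp sees
    # Only the SIGN matters -- no waiting for a full-swing threshold
    # crossing, and both lines see identical loading, so variation cancels.
    print(f"stored {stored}: {1000 * diff:+.0f} mV -> read as {int(diff > 0)}")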
For SRAMs, each bit cell is two inverters in a feedback loop. The
bit cell is connected to a bit line by a pass transistor. To write
a value, you just open the pass transistor and stuff in a big enough
signal to flip the bit. For slow SRAMs you open up the pass transistor
to read the value. Note the inherent conflict: the stronger you make
the bit cell drivers, the harder it is to flip it.
For fast SRAMs, you usually bring out 2 bit lines (bit and bit-bar).
The differential lines are hooked to a differential sense amp.
This is done for 4 reasons:
1. The bit cell transistors and power supply wires are usually sized
really small. This means they have high resistance. This effectively
means that in most dense SRAM designs, the bit cell transistors are slow
and can't even drive the bit-sense line w/o the help of the sense amp.
The inverters in an SRAM are just amplifiers. When you hang too much
load (C & R) on the output of any amp, it will stop working by either
oscillating or never reaching the voltage rails.
2. The sense amp in an SRAM feeds back the value so as to improve the SNR
in the bit cell. When the pass gate opens, all of a sudden the bit cell
inverter feedback loop sees this big load and the internal signal level
drops. The sense amp quickly feeds back the value so that the internal
signal level is restored ASAP so the SRAM won't forget! Otherwise you
have to wait until the small bit-cell transistors stabilize the feed
back loop again. This makes your memory cycle time long (because if you
didn't wait to stabilize or rewrite the cell value, multiple back-to-back
reads could erase the cell!).
3. Having a differential (rather than a single ended) way to flip a
bit cell makes write cycles faster (fast is good!).
4. Just like in DRAMs, differential detection is more tolerant of
manufacturing variations than a fixed voltage threshold.
Since the sense amp is shared, it makes more sense to have lots of small
weedy bit-transistors and 1 big sense amp than lots of big bit-transistors.
You could carry this idea forward and ask why not have lots of sense
amps, but sense amps are much bigger (say 10x the size)...
Another benefit of differential detection is that in today's CMOS technology,
the P-channel and N-channel transistors have different characteristics, which
means pull-up and pull-down strength is different (it's faster to pull down).
If you precharge to a high voltage and have transistors pull down, the N-channel
transistors can pull down faster and the access time is faster.
If you precharge to V/2, and let transistors pull up or down, the
cells that have to pull up will be slower.
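To put rough numbers on the pull-up/pull-down asymmetry (drive resistances
are assumed, not from any real process):

import math

C_BL = 300e-15   # bit-line capacitance, assumed
R_N = 5e3        # effective NMOS pull-down resistance, assumed
R_P = 10e3       # effective PMOS pull-up resistance, assumed (~2x weaker)
SWING = 0.05     # 50 mV differential swing needed, assumed
VDD = 3.3

def t_swing(r, v_start, v_target):
    # first-order RC time to move SWING volts toward v_target
    gap = abs(v_start - v_target)
    return r * C_BL * math.log(gap / (gap - SWING))

# Precharge high: every read is an NMOS pull-down from VDD.
t_high = t_swing(R_N, VDD, 0.0)
# Precharge to V/2: a stored '1' must be pulled UP by the slower PMOS path.
t_mid_worst = t_swing(R_P, VDD / 2, VDD)
print(f"precharge-high, pull-down: {t_high * 1e12:.0f} ps")
print(f"precharge-V/2, worst case: {t_mid_worst * 1e12:.0f} ps")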
To summarize: in both fast SRAM and DRAM, the differential sense amp is
used to compare two bit lines, which are usually precharged to a voltage
above the high detection threshold. A short while after the
pass gates open, the amplifier senses and amplifies the voltage difference
between the bit lines and drives the signal back onto the bit line.
Usually the logic to turn the sense amp and the pass gates on and off
is the most critical part of the design of fast RAMs (you must make sure
two pass gates aren't turned on at the same time, must wait a short
time after the pass gate is open before feeding back the sense amp,
and of course there isn't a clock to drive a state machine).
Today, there is not much difference between fast SRAM and DRAM control
logic...
Of course slow, big, async SRAMs use single-ended amps which are always
on and drawing power (which is why they usually have OE controls), have
no precharge, and have only one bit-sense wire per cell. Of course these
designs take less area, but are usually much too slow for today's 100MHz+
designs.
Given the greed-for-speed and manufacturing tolerance considerations,
I doubt that any new memory technology would do away with the
differential sense amp idea in the near future. Differential sense-amps
are a very elegant solution to a whole mess of problems...
Re:Doesn't DRAM use latches? (Score:2)
C = k * e * A / d, where:
k = dielectric constant, 3.9 for SiO2.
e = permittivity of free space, 8.85e-12 F/m.
A = area of the plates.
d = distance between the plates.
Solution? Make d really, really small. The capacitance values are only a few femtofarads (1e-15 F) anyways.
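Plugging rough numbers into C = k * e * A / d (cell dimensions are assumed,
purely for illustration):

K_SIO2 = 3.9     # dielectric constant of SiO2
E0 = 8.85e-12    # permittivity of free space, F/m

def cap_farads(area_m2, d_m):
    # parallel-plate estimate for a DRAM cell capacitor
    return K_SIO2 * E0 * area_m2 / d_m

area = (1e-6) ** 2    # assume 1 um x 1 um plates
for d_nm in (100, 10, 5):
    c = cap_farads(area, d_nm * 1e-9)
    print(f"d = {d_nm:3d} nm -> C = {c * 1e15:6.1f} fF")

Even with a ~5 nm dielectric, a planar 1 um^2 cell only reaches a few fF
(0.3 / 3.5 / 6.9 fF for the three spacings above), which is why real cells
fold the capacitor into trenches or stacks.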
http://www.ryans.dhs.org
It depends how you measure time... (Score:3)
In a DRAM, a whole row of bits (say 2K of them) is first read out at
what is called the row address by an amplifier (called a sense amp). This takes a long time.
After reading, all 2K bits are fed back on the sense amps (kind of like a cache for the row). Now,
once the row is read, you can just switch out the data for a column address really fast (say 6 ns).
But if you need a new row, you have to go back to the slow row access.
But you really don't get 6 ns column access time either. To top it off, today's SDRAMs are
pipelined (the S is for synchronous). This means you have to send in the address 2-3 clock cycles
ahead to get the data. You can still get new data every 6 ns in a burst, but you have to figure out
things a couple of clocks in advance (i.e., random access will suck).
Trac -> access time from a new row address (or SDRAM activate command), say 50-100 ns
Tcac -> access time from a column address (or SDRAM read command), say 5-20 ns
R/W times have always been slow for DRAM arrays, but DRAM architects have been pretty good at
hiding it.
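A toy model of the row/column timing above (Trac/Tcac values picked from the quoted
ranges, purely illustrative):

TRAC_NS = 60    # row access (activate) time, assumed
TCAC_NS = 6     # column access time within an open row, assumed
ROW_BITS = 2048

def access_ns(addresses, cols_per_row=ROW_BITS):
    # total time for a list of addresses; same row -> column access only,
    # since the sense amps act as a one-row cache
    total, open_row = 0, None
    for addr in addresses:
        row = addr // cols_per_row
        if row != open_row:
            total += TRAC_NS          # miss: open the new row (slow)
            open_row = row
        total += TCAC_NS              # hit or miss: column access
    return total

burst = list(range(64))                        # 64 sequential words, one row
scattered = [i * ROW_BITS for i in range(64)]  # 64 words, each a new row
print(f"sequential burst: {access_ns(burst)} ns "
      f"({access_ns(burst) / 64:.1f} ns/word)")
print(f"random rows     : {access_ns(scattered)} ns "
      f"({access_ns(scattered) / 64:.1f} ns/word)")

Bursts approach the 6 ns column rate; row misses pay ~10x that every time.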
Re:I thought floating gates were a bad thing. (Score:1)
I have always wondered why airport X-rays do not destroy laptops. Aren't the rays high-energy electrons that can leave a charge on a target?
Re:Just a minor nit... (Score:2)
This is probably not true. Precharge and amplify have less to do with the nature of the memory cell;
they are used to speed up the access time of a RAM device. Even today's fast static RAMs use
precharge and amplify circuits to speed things up.
I was referring to the sense amplifiers, which take a while to produce stable readings if I understand correctly. AFAIK precharge/amplify assisted circuits of the type you describe are faster, because they have a signal continuously driving them and so are less sensitive to noise. I could be wrong about this, but I don't think so.
Also, in a large array (e.g., Gbit arrays), the poor little super-small transistor attached to
a bit of memory in the middle of the array has little hope of driving a big, long piece of
bit-sense wire out to the edge of the array without the help of an amplifier.
If you connected hundreds or thousands of transistors into a column, this would certainly be true. However, I certainly hope that chip designers wouldn't do this. Arrange a single RAM chip into multiple banks with a column height of 16 or 32 cells. Most of your die space is still taken up by memory, and your transistors only have the capacitance of a handful of other drains to worry about. This could be driven without amplification, and would still be quite fast. You even get many free row caches (one per bank).
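For a feel of the numbers, here's the bank arithmetic for a hypothetical 64 Mbit part
(chip size and row width are my assumptions, purely for illustration):

CHIP_BITS = 64 * 2**20    # assume a 64 Mbit part
COL_HEIGHT = 32           # cells per bit line, as proposed above
ROW_WIDTH = 2048          # bits per row, assumed

bank_bits = COL_HEIGHT * ROW_WIDTH
banks = CHIP_BITS // bank_bits
print(f"{banks} banks of {COL_HEIGHT} x {ROW_WIDTH} bits")
print(f"{banks} independent 'row caches' of {ROW_WIDTH // 8} bytes each")

That works out to 1024 banks; each bit line only loads 32 drains, but you now need
per-bank decode/mux logic, which is the area cost the parent post's "sweet spot"
argument is about.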
Bubble memory (Score:3)
This is completely different from the definition that I heard. The type of "bubble memory" that I know about stores data in isolated magnetic domains (bubbles) that can be physically moved around within a crystal. High density, but serial access (though you can get around that to some degree). I'm told that it was also slow and sensitive to external magnetic fields, but other sources say that those problems were solved.
RAM/DRAM (Score:2)
Addendum: This works for conventional DRAM too. However, I gather that it isn't done (except possibly for Virtual Channel Memory). A static transistor instead of a capacitor would still give you better signal strength, though.
Re:What to use a bit of core memory for (Score:1)
Pictures! Oh, please! This is too good to keep as a secret: please share some pictures!
(I still have not found good instructions for rolling my own core memory...)
Re:Bubble memory (Score:1)
Speaking of magnetic memory, does anyone know where to buy or talk someone out of some core memory? I suppose it would be a challenge to make some from scratch and even to work with it, but it just seems like such a durable form of media.
Perhaps only a tiny assembly language program could fit, but then I could do a real core dump!
Doesn't DRAM use latches? (Score:1)
I always thought DRAM used a simple feedback circuit to store bits. How could you possibly create a capacitor small enough, maintain any reasonable charge at all, and squash millions of them into a few square millimeters?
What were those capacitance laws again?
Is this what the Playstation-2 uses? (Score:2)
This sounds like the technology that makes the
new Sony Playstation-2 fly.
What's *MOST* interesting about this IMHO is
that it allows you to integrate dense, fast
RAM onto the same chip as dense, fast logic.
With normal DRAM technology, something to do
with the fabrication process makes it hard to
get lots of logic circuitry onto the same die.
So, when the RAM and the CPU (whatever) are on
the same chip, you are suddenly not limited in
speed by the width of the bus between the two.
With two separate devices, you can't really get
more than a couple of hundred wires, each less
than a couple of inches long. With this technology,
the Playstation-2 chips are using 2560 bit-wide
busses - probably no more than a couple of
millimeters long - resulting in truly spectacular
overall memory access times.
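Back-of-the-envelope (the clock rates here are my assumptions; the
article gives only the bus width):

def gbytes_per_sec(width_bits, clock_hz):
    # peak bandwidth = bytes per transfer * transfers per second
    return width_bits / 8 * clock_hz / 1e9

print(f"2560-bit on-die bus @ 150 MHz : {gbytes_per_sec(2560, 150e6):5.1f} GB/s")
print(f"  64-bit external bus @ 100 MHz: {gbytes_per_sec(64, 100e6):5.1f} GB/s")

That's roughly 48 GB/s vs. 0.8 GB/s -- a ~60x difference from bus width alone.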
For 3D graphics, this is an extremely important
advance that permits architectural changes in
the system that are *MUCH* more significant than
the raw density, cycle time or cost figures would
suggest.
Exciting stuff.
Resistance is not futile (Score:1)
...and you'll be able to
remember everything you ever saw.
That would not necessarily be the best thing. Forgetting serves as an important psychological defense, allowing painful (destructive and self-defeating) memories to be inhibited, and irrelevant minutiae to be discarded.
And for all those out there who think we're going to evolve into a race of cyborgs:
you're crazy... it'll go MUCH further than that.
Take it to the logical conclusion - once reality is internally generated what need is there to remain a part of the external world? Start your own universe in which you are God...
We will become the Borg, but not in a bad way.
The Borg aren't necessarily a bad thing as it is. You just have to give up your freedom and individualism in order to be part of the greater collective. If your life "sucks", this might be a desirable outcome.
If you combine the properties of humans and computers and end up with something which does not have the best of both... then you haven't done it right.
Assuming that such a thing is possible, that is.
The internet will evolve from being
a global repository of all human knowledge into actually being humanity. We will be the
nodes on the network. It won't take long either. Just 150 years or so at this rate.
It will likely require a whole new generation of people raised with the teaching that this is the way to go. Revolutionary ideas often become reality by attrition (i.e., those who oppose them grow old and die).
Of course, this is bound to cause a little friction during the transitional period.
Some people will doubtless object, and probably consider the end of humanity
as we know it to be a bad thing. I don't think the induhviduals (as Dogbert would have it)
will stand much of a chance, though; they'll be seriously out-smarted, and the
reliance which regular humanity places on computers will make them pathetically
unable to fight against those who have plugged in.
Resistance is futile, eh? Intelligence does not guarantee evolutionary superiority or dominance; it's just one trait or tool at the disposal of those who possess it.
The HumaNet might have to
annihilate them to protect itself. It's a bit like the old chestnut of being trapped
in a room with a potential madman - you'd best kill him first in case he's thinking
the same thing that you are. The HumaNet and humanity will essentially be different
species so the potential for distrust and misunderstanding is high.
I would hope that the most essential part of us - our humanity - would survive in the transition. Otherwise won't we just be mindless killing machines without compassion or a desire to work towards solutions in a peaceful, non-destructive fashion?
Personally I hope that won't be necessary. The HumaNet will probably be smart
enough to protect itself without resorting to annihilation even if humanity tries to destroy it.
The HumaNet will take pleasure in letting some loose humans roam free - in the same way
that we like to think of apes still living wild and free in the jungle.
Heh.
Sorry guys, I seem to be losing it.
Maybe you are overclocking your brain?
I wonder how many MP3s you can get on that thing...
Never enough, I'm sure!
Marvin
Not exactly... but (Score:5)
they are going to use what is called a hybrid process (logic and DRAM on the same die) to make
a chip with embedded DRAM. Today's technology allows about 4 Mbytes of DRAM to be put on a chip
with the leftover space used for logic. Yes, hybrid processes are a bit less efficient than
all-logic or all-DRAM processes, but are catching up (about 1 generation behind)...
Hybrid processes are currently the state of the art and allow cool things such as embedded DRAM
(which allows really wide busses and fast access). However, the memory is still capacitors and
transistors as in standard DRAM, not this wacky new stuff (but one of these days...).
(Oh, and -yes- I know what I'm talking about here...)
Laptops and X-rays (Score:1)
I put my Powerbook through once while it was asleep: I seem to remember that it crashed pretty soon, though I'm not sure if it was immediately when I tried to wake it up, or a few minutes later. Not conclusive proof, but I could easily believe that some bits got twiddled.
David Gould
"special" RAM (Score:1)
Scientific American: In Focus:The Magnetic Attraction: May 1999 [sciam.com]
Check it out!
This has been the idea... (Score:1)
for a long time now, to replace magnetic media with EPROMs or non-volatile RAM or some other incarnation. So instead of having a 9 gig magnetic disk you have a chunk-o-semiconductors. This could replace IDE and SCSI in a matter of years, if it doesn't fizzle out. You could have a gig of RAM that stores your operating system and programs, and then use the same RAM to operate in. These systems wouldn't be limited by disk access and R/W times. Depending on the bus width and clock, you could have several gigabytes per second of data transfer. Not to mention instant-on, with your BIOS stored on the main RAM bank, sort of like a disk's boot sector. DAMN yummy.
Re:Moderators suck (Score:2)
This actually isn't terribly relevant to the topic. There was, IIRC, only a single mention of the new type of RAM developed, as a tangential point. While the post was interesting, I can see why it was rated down.
Nothing remotely amusing can be allowed past the moderators though.
For all of the whining that goes on, I have yet to see something moderated down that didn't deserve to be. I do read at -1, so I see everything.
seems relevant to me (Score:1)
- it has very low power usage
- it's non-volatile
- it has incredible storage potential
for instance, it would be capable of storing every conversation you ever had in your entire life (not the sound [yet], but a voice-recognition-based transcript of it). That kind of capability is sufficiently useful for some people to be willing to submit to brain surgery in order to obtain it.
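A back-of-the-envelope estimate (all rates assumed, order-of-magnitude only):

WORDS_PER_DAY = 16_000    # assumed words spoken + heard per day
BYTES_PER_WORD = 6        # ~5 characters plus a space
YEARS = 70

total_bytes = WORDS_PER_DAY * BYTES_PER_WORD * 365 * YEARS
print(f"~{total_bytes / 2**30:.1f} GB for a {YEARS}-year transcript")

That's only ~2.3 GB -- comfortably within a dense solid-state part.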
Most of the discussion I have seen, and the proposed uses for the chip talk about replacing flash / hard-drives so we get things like much lighter laptops, MP3 players, or digital cameras that can store a lot of pictures. Personally I think that this is very short sighted. There are much more interesting uses for this technology, and these kinds of advances will have a more profound effect on society than people realise.
Maybe if I had stated all this in the initial post it would have been less likely to be moderated down, but this all seemed too obvious to state.
Maybe I overestimated the intelligence of the moderators.
BTW I read at -1 too, and it seems that people are much more likely to get moderated down for being funny than for being boring.