Technology

An Overview of Quad Band Memory

tedgyz writes "AnandTech has a short article on a new memory technology from Via, called Quad Band Memory (QBM). Rather than using dual-channel DDR to increase bandwidth, they use phase-shifting inside the memory modules to accomplish the same goal. The end result is simpler (and presumably cheaper) motherboard designs that are backwards compatible with current DDR modules. The downside? It is currently only going to be available in a P4 chipset that Intel has not authorized."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • wow, interesting (Score:1, Interesting)

    by diablo6683 ( 556085 )
    hmm, i wonder what the commercial applications of this are :)
  • Uhhh... (Score:3, Insightful)

    by GreyWolf3000 ( 468618 ) on Wednesday September 18, 2002 @07:31PM (#4285449) Journal

    The downside? It is currently only going to be available in a P4 chipset that Intel has not authorized.

    Keyword: currently. I'm sure the technology will be available soon for plenty of other motherboards. I don't consider this much of a downside (feel free to set me straight if I'm wrong).

    • More importantly, the other downside is that QBM memory will be more expensive, probably priced a bit more like ECC RAM than normal SDRAM.
    • Intel has not authorized
      Yet. Why would Intel not want to authorise this?
      -ben-
      • politics (Score:3, Informative)

        by GunFodder ( 208805 )
        Via has not explicitly licensed the P4 bus. Via insists it has rights to the necessary patents through the purchase of Cyrix. If Intel officially approves this arrangement then they may lose some licensing sales in the future by setting a precedent.

        The whole thing is kind of silly unless Intel is making money hand over fist in the chipset market. I wonder if their motivation to discourage 3rd party chipset development is to lock down control over various platform technologies? SiS currently makes P4 chipsets but they have a poor reputation for compatibility. Via has improved their rep by dominating the Athlon market. They might have the necessary market share to take the P4 platform in directions that Intel doesn't want to go.
  • What do you want to bet that you'll be able to find more than your fair share of "QBM compliant" motherboards that do NOT play happily with a large chunk of the available "QBM" memory, and vice versa?
    • What do you want to bet that you'll be able to find more than your fair share of "QBM compliant" motherboards that do NOT play happily with a large chunk of the available "QBM" memory, and vice versa?

      The same held true for DDR boards last year, where you were pretty much only guaranteed to get registered, buffered memory modules to work. Now, pretty much any recent motherboard will accept pretty much any DDR module.

      Compatibility and compliance always suffer at the beginning of a new product release. That's why technology and product reviews are so helpful.

  • by HardCase ( 14757 ) on Wednesday September 18, 2002 @07:37PM (#4285486)
    You can see more about this on Kentron's [kentrontech.com] web site. They developed the technology, then released it, royalty free, to manufacturers.


    Given the memory manufacturers' resistance to DDR400 and the achingly slow progress of DDR2 (the module standard isn't even final yet), this technology has pretty good potential to reach production.


    -h-

  • ...how long will it take for the major chipset makers (Via, SiS) to adopt this technology? It'd be great to see this available on the Athlon platform in the near future.
    • by Anonymous Coward
      Via made this, and Via makes chipsets. Seems likely they'll make chipsets that support this.
    Athlons aren't currently suffering from a lack of memory bandwidth; instead they have some other bottlenecks. P4s, on the other hand, are not fed the memory bandwidth they need with DDR RAM, which is why they run faster with RDRAM. Technology like this will hopefully help future AMD processors, but the Athlons seem limited by other factors.
  • by io333 ( 574963 ) on Wednesday September 18, 2002 @07:41PM (#4285505)
    From the article:

    *snip*

    Here's where the difference between QBM and conventional modules comes into play; QBM modules will have a set of 8 registers (QBM-10) as well as a phase-locked loop (PLL). The purpose of the PLL is to take the incoming clock signal from the chipset and shift it by 90 degrees; this shifted signal is then fed to the second bank of the DIMM, while the first bank receives the unaltered clock directly from the chipset.

    The 8 registers then switch between which bank gets to transfer data every clock; because of the 90 degree phase shift, there is a slight delay in transferring data from the second bank but both transfers actually end up happening within a single clock cycle. The end result is that you get two DDR transfers per clock, or 4 bits of data are sampled per clock thus doubling the throughput of DDR (hence the name Quad Band Memory).


    *snip*

    QBM modules will obviously be more expensive than regular DDR modules, the question of how much remains to be answered however.

    Let's see, one PLL... damn, I don't know if I can afford the extra six cents!

    (That extra six cents, though, doesn't detract from the fact that this idea is just pure genius... with about 30,000 folks slapping their foreheads for not thinking of it first!)
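
    A rough, purely illustrative Python sketch of the timing idea in the excerpt above, as I read it: two ordinary DDR banks, one fed a clock the PLL has shifted by 90 degrees, with the module's registers handing the shared bus to whichever bank's edge arrives next. The bank and word names are made up for illustration; this is a toy model, not Kentron's actual register logic.

      # Toy model of QBM's phase-shifted bank multiplexing (illustrative only).
      # Each DDR bank transfers on both edges of its clock; bank 1's clock is
      # shifted 90 degrees, so its edges land at the quarter- and
      # three-quarter-points of the cycle. The module's registers hand the bus
      # to whichever bank owns the current quarter-clock slot.

      def qbm_transfers_per_cycle():
          """Return (phase_deg, bank, word) tuples for one clock cycle."""
          bank0_edges = [0, 180]    # unshifted clock: rising and falling edges
          bank1_edges = [90, 270]   # PLL-shifted clock: edges 90 degrees later

          schedule = []
          for phase in sorted(bank0_edges + bank1_edges):
              bank = 0 if phase in bank0_edges else 1
              word = f"bank{bank}_word@{phase}deg"   # placeholder data
              schedule.append((phase, bank, word))
          return schedule

      if __name__ == "__main__":
          for phase, bank, word in qbm_transfers_per_cycle():
              print(f"{phase:3d} deg -> bank {bank} drives the bus ({word})")
          # Four transfers per clock from two DDR banks: hence "quad band".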
    • Just wondering if anyone can talk about this sort of technology in relation to the way modem technology progressed. People tinkered with phase shifting and splitting and amplitude shifting, etc., and we went from 75 baud (whatever) to 115,000-odd. Is the same sort of thing likely/possible with memory? Physically, these are analog devices - they are just interpreted digitally - so wouldn't it be possible to use some of the same lessons learned in modem technology here?
      • Modern RAM ramps up at such a high voltage that it causes side effects that could be interpreted as signal when combined with other RAMs in phase.

        Basically, rather than multiplexing, RAM has just gotten faster.

        And as far as being analog devices, and not digital, that's not a very good assessment. While the signal that they produce is analog, they look more like sinc functions in the analog domain than sinusoids. Those kinds of functions are good for DIGITAL systems. They are defined in such a way that part of the waveform is considered unusable, and that nothing should interfere with that part of the signal.

        Compare that to modem signals, which look a good deal more like sinusoids - nice, slow, smooth curves, by comparison.

        I suppose they could be made to phase shift, and do all that sort of thing, but computers as we know them would have to be redesigned to interpret signals much, much differently. We'd have to have wavelet processors, or something like that, and everything would have to go slower.

        A good question is whether or not such slowdown would be worth it. Considering how well analog computers have done, perhaps not.

    • Let's see, one PLL... damn, I don't know if I can afford the extra six cents!

      Well, actually, it won't be that cheap. When I was in college working on my senior thesis (Fall 2001), we had an application where we needed to use a PLL in a 2.4GHz transmitter circuit. The thing was approximately 1cm x 2cm x 0.5cm, cost around $25, and was damn hard to find. Now, that piece would obviously be far too large and noisy for use on a memory chip (but maybe not)... but the point remains, a PLL that needs to operate in the hundreds of megahertz to gigahertz range AND be electrically quiet enough would, I'm sure, jack up the price more than marginally. Now, that $25 piece is all well and good for some flaky-ass college transmitter, but this chip is gonna need something that's low-noise, high-gain, and a bunch of other characteristics that WILL make it much more expensive.

      Of course, you also gotta take into account that the PLL will (most likely) be hybridized (i.e. wafer removed and built custom onto the chip) and mass-produced, both of which will tend to drive the price down.

      On the flip side, though, we've become used to buying $20 sticks of RAM, so it might seem pricey at first :-)
        I am an ASIC designer and have used PLLs in more than one of our products. A PLL adds 0.5mm x 0.5mm of silicon (even less for the latest processes), plus some cost for testing it (small compared to the massive cost of DRAM test).
        I would expect a cost increase on the motherboard side, because multiphase signals are very sensitive to timing and signal degradation. If they need better-quality materials and a more precise board manufacturing process, the MB cost will definitely increase.
        These days, silicon is cheap, wires are expensive.
    • It has been thought about for years, since a little after DDR was made standard. QBM uses a nice, clean quadruple sine, spaced out evenly in quarter-clocks, unlike the messy jagged lines that DDR2 will use. The problem has always been how to eliminate the interference. It was looking as if they were going to need double-ECC circuits built in, because the neat quad sine wave averages out to a straight line much more easily than the jagged DDR2 lines, which are made to be combinative waves offset as little as possible, to make the signal stronger than the signal chatter. Radiation (white noise) was always a problem with RAM, much more so at higher elevations (Colorado gets about 100x more RAM errors than sea level does, necessitating ECC RAM), and Kentron still hasn't explained how the PLL will work to filter out all that noise.
      • by richard-parker ( 260076 ) on Wednesday September 18, 2002 @09:42PM (#4285959)

        Radiation (white noise) was always a problem with RAM, much more so at higher elevations (Colorado gets about 100x more RAM errors than sea level does, necessitating ECC RAM)
        Actually, the increased RAM failure rate due to the greater cosmic ray intensities at higher altitudes isn't as bad as you describe.

        For example, the expected soft-fail rate of a computer memory system in Denver, Colorado is about 4 times greater than the rate expected at a city at sea level (such as New York City). Even in Leadville, Colorado (which is located at 10,151 feet) the expected failure rate is only about 13 times greater than in NYC. No location in Colorado even approaches 100x.

        For more information, see the following paper:
        J. F. Ziegler, "Terrestrial cosmic ray intensities", IBM Journal of Research and Development, Vol. 42, No. 1, 1998.
        It can be found online here [ibm.com].
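
        For a rough sense of how those two data points scale, here is a small Python sketch that checks whether a simple exponential-in-altitude model matches the figures quoted above (about 4x at Denver, roughly 1,600 m, and about 13x at Leadville, 10,151 ft / ~3,094 m). The exponential form and the resulting scale height are my own back-of-the-envelope assumption, not something taken from Ziegler's paper.

          import math

          # Relative soft-error rates (vs. sea level) quoted in the comment above.
          POINTS = [
              ("Denver, CO",    1609.0,  4.0),   # ~5,280 ft, ~4x sea level
              ("Leadville, CO", 3094.0, 13.0),   # 10,151 ft, ~13x sea level
          ]

          def implied_scale_height(altitude_m, factor):
              """Scale height h (m) if the rate scales as exp(altitude / h)."""
              return altitude_m / math.log(factor)

          if __name__ == "__main__":
              heights = []
              for name, alt, factor in POINTS:
                  h = implied_scale_height(alt, factor)
                  heights.append(h)
                  print(f"{name}: {alt:.0f} m, {factor}x -> h ~ {h:.0f} m")

              # Both points imply a scale height near ~1,200 m, so a single
              # exponential is a decent rough model of the quoted numbers.
              h = sum(heights) / len(heights)
              for alt in (0, 1000, 2000, 3000, 4000):
                  print(f"  ~{math.exp(alt / h):5.1f}x sea level at {alt} m")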
        • The Slashdot of old, where I find an interesting tidbit that I didn't know and didn't know to look for! Comments like this are why Slashdot IS what it IS.
        • Actually, the increased RAM failure rate due to the greater cosmic ray intensities at higher altitudes isn't as bad as you describe.
          For example, the expected soft-fail rate of a computer memory system in Denver, Colorado is about 4 times greater than the rate expected at a city at sea level (such as New York City). Even in Leadville, Colorado (which is located at 10,151 feet) the expected failure rate is only about 13 times greater than in NYC.
          One would think that with all that LEAD lying all over the place, cosmic ray showers wouldn't be such an issue...

          • One would think that with all that LEAD lying all over the place, cosmic ray showers wouldn't be such an issue...
            While I am sure you meant your comment to be facetious, I felt it deserved a reply due to the assumption that underlies it.

            When a cosmic ray hits the atmosphere it produces many secondary particles, and it is these particles which cause soft errors in computer memory. For computer memory the worst secondary cosmic rays are the hadrons (protons, neutrons and pions). Neutrons are particularly troublesome since they are responsible for more than half of the terrestrial soft errors. In order to affect the computer memory these neutrons have already had to pass through kilometers of atmosphere, through the building, and through the computer housing - a little lead isn't likely to stop them. In fact, surrounding your computer with lead could even make things worse. Cosmic rays are often counted with a neutron monitor, and it is not uncommon for neutron monitors to be deliberately constructed with a lead casing. The lead casing increases the neutron count by producing more neutrons as it is bombarded by cosmic rays.
  • but the CPU is still stuck at a 533MHz bus, and the Athlon (should this be available for it anytime soon) is only barely moving to a 333MHz FSB
  • If it's *that* simple to double the data rate of memory, why don't they, for example, divide the memory architecture into eight sectors and have each bit of a byte on a different sector, making 16x memory? It seems that this philosophy has no limit, as long as you have lots of sectors. What's preventing people from doing that?

    Sorta like a beowulf cluster of chips, really :).
      • Probably the same reason why Intel and AMD haven't put out 5.2 GHz chips yet, despite the fact that their current architecture supports that, or at least they're fully capable of matching the other, should one raise the bar to that point in technology. If you put out the peak of technology, then you lose a lot of potential revenue, which in turn helps keep a steady flow of cash coming in, which looks good to investors, etc. etc. Expect to see 16x memory after 8x memory, each of which will take 2.5 years to "develop" and put into motherboards and whatnot.

        Basically, it's not today's technology that limits them; it's today's financial market setup that does.
      • But that is changing. There really is no point (i.e., money) in releasing an incremental jump, since people don't need the next 20% gain to run what they want, including games.
        I bought 486s with each jump, because the games and OSs at the time really needed the jumps to run well, so hardware was behind the software power curve; now hardware is way out in front of the power curve.
        Is a jump from 1.4GHz to 2GHz going to get me to buy a new rig? No, but doubling the bandwidth might.
        • Hmm, interesting point. Do you have any insight as to why we're still at a lowly 266/533 system bus, and not at, say, a system bus that operates at the same speed as the core? I think it'd cause some slight inconsistencies with the PCI/AGP bus, but that's about it.
          • Traces on a system bus at CPU core frequencies would probably be prone to lots of noise, as well as possibly radiating lots of it (depending on their length). Electrically, you can't just jack up the clock with larger things (wires longer than a few millimeters) without having to deal with the interference.
      • The architecture might support that, in that the timings and whatnot could theoretically reach that speed without falling over, but it's still an awful lot of work to get manufacturing processes which can match it while having decent yields, and at the same time provide incremental bugfixes and improvements to reduce heat output, increase signal strength, etc.
  • In essence, they're putting additional capacity in quadrature. How clever! Memory that shares technology with NTSC color television....
    • Huh?? No... they've increased transfer rates by operating independent DDR systems with different phase clocks. No one said anything about adding capacity. The "Quad" here means "four," not anything to do with quadrature modulation of DDR bits!
  • the chipset will begin sampling in Q1-2003 and it will ship by the end of Q1-2003

    With the latest news [slashdot.org] about Intel including DRM in the next major processor release, it would be smart for AMD to grab hold of QBM memory and use it to their advantage. If AMD grabs hold of this memory and runs with it (since Intel wants to drag its feet), it will have 4.2 GB/sec of memory bandwidth. With the news [slashdot.org] about the Opteron coming out in Q1 of next year, this would be optimal for AMD.

    The combination of the AMD Opteron x86-64 with QBM533 (4.2GB/s bandwidth) would make the issue of waiting on memory less noticeable. It would be in the best interest of AMD to take QBM memory and run while Intel still drags its feet.
  • they use phase-shifting inside the memory modules to accomplish the same goal.

    Isn't phase-shifting what happened to Geordi and Ensign Ro, causing the crew of the Enterprise to think that they had died in a transporter accident?

    Or is phase-shifting what the Traveler used to send the Enterprise to the center of the galaxy?

    In any case, I don't ever recall Star Trek using phase-shifting to increase memory bandwidth. Something's amiss here.
    • Star Trek: TNG used phase shifting and subspace to fix almost every problem.

      I wouldn't be surprised if the engineers behind QBM were sitting around bullshitting about that when one of them said (jokingly), "Maybe we could use subspace to get better memory bandwidth, or phase shifting, or..."
      • How could you forget inverting the polarity? That seems to be the cure-all of the 25th century.

        I guess it wouldn't do to see Geordi running around with a roll of duct tape, giving malfunctioning gadgets a thump to get them working again.
        • How could you forget inverting the polarity?

          Unlike the first generation SDRAM, which sent a word whenever the clock signal went from low to high, DDR (standing for double data rate, or Dance Dance Revolution, or East Germany) sends a word whenever "phi" (the clock signal) goes low to high or high to low, that is, whenever it inverts. QBM adds a 90 degree phase shift, which lets it send a word 1/4 of a clock after phi rises or falls.

    • You've gotta love the writers: let's do all this walking through walls but forget about falling through the floor and being able to breathe the air. Still a good episode, though.

      Although it still pisses me off that the Feds invented a working phase cloak before everyone else but weren't allowed to (or couldn't) use it for the Dominion War... same with Garak's Changeling torture device.

      I've said it before, and I'll say it again. You've got to love the writers.

  • Because of the pin-compatible DDR interface, QBM chipsets will be able to use both regular DDR SDRAM modules as well as QBM modules.

    That's a GREAT feature. If I have 1GB of DDR RAM and only enough money to upgrade the mobo, I'd go with QBM because of this. Then, later on, the switch could be more gradual. Backwards compatibility is a good thing; just look at the PS2 and how well it sells.

    DDR333=2.7GB/sec bandwidth

    QBM667=5.3GB/sec bandwidth

    Doubling the bandwidth with small modifications to a regular DDR chip has real potential for growth.

    It seems that the only problem now is that it won't be out until the end of Q1 2003, and it will be on P4s... hopefully they won't have Palladium too.
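
    The parent's bandwidth figures follow directly from the standard 64-bit (8-byte) DIMM data bus; a quick Python sketch of the arithmetic, with the bus width being the usual DIMM assumption rather than anything stated in the comment:

      # Peak bandwidth = effective transfer rate (MT/s) x 8 bytes per transfer.
      DIMM_WIDTH_BYTES = 8  # standard 64-bit DIMM data bus

      MODULES = {
          "DDR333": 333,   # DDR: 2 transfers per 166 MHz clock
          "QBM667": 667,   # QBM: 4 transfers per 166 MHz clock
      }

      for name, mt_per_s in MODULES.items():
          gb_per_s = mt_per_s * DIMM_WIDTH_BYTES / 1000   # MB/s -> GB/s (decimal)
          print(f"{name}: {mt_per_s} MT/s x {DIMM_WIDTH_BYTES} B = {gb_per_s:.1f} GB/s")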
  • It probably will cost a buttload of money
  • Nowadays, you see all of these benchmarks on chips/chipsets/memory, and unless you're talking inSANE resolutions and color depths, AMD vs. Intel or nVidia vs. ATI really doesn't matter much. It's just personal preference.

    Memory as well - how many of you TRULY saw a difference between PC100 and PC133 DRAM? Yeah, the benchmark numbers don't lie - but again, those are JUST benchmarks. Regardless, I don't think the system is being held back by memory.

    It was my understanding that the major bottleneck of any system is the DISK. So no matter how fast your ram is, if you still have to swap to the slow-ass disk, your system will be slow.

    However, I only have a 1Gig Duron w/512M PC133, so I don't exactly follow the bleeding edge. Your mileage, of course, may vary. :)
    • > Memory as well - how many of you TRULY saw a difference between PC100 and PC133 DRAM? Yeah, the benchmark numbers don't lie - but again, those are JUST benchmarks.

      You may call it JUST a benchmark, but the extra 50 fps I got in Quake 3 enabled me to crank the resolution up, along with the detail quality. This was from PC100 cas 3 to PC133 cas 2 on a 1.2 T-Bird.
    • by be-fan ( 61476 ) on Wednesday September 18, 2002 @08:52PM (#4285767)
      Um, disk isn't a bottleneck on my system. I do lots of C++ programming, and everything gets cached in RAM after the first build. Thus, my bottleneck becomes gcc ...err... CPU and memory bandwidth :) Same thing when I'm doing 3D rendering (which had better fit in RAM or else) or playing 3D games and whatnot. I thought moving to a 4200 RPM laptop hard drive was going to be bad, after my nice 7200 RPM IDE RAID. In truth, thanks to the Linux VM, I don't notice the difference after the first half hour of using the system. However, I did notice the big boost that came with moving from a PC100 memory system to PC266, even just playing around with GUI widgets (resizing and whatnot).
    • The bottleneck is most definitely not the disk for most games anymore (the only thing I need a high-performance PC for). They basically load the level/area into memory and you play, sometimes for hours in the same area, with very little disk access until you load the next area or unless you have too little memory.

      Most server applications are definitely bottlenecked by the disk, since you serve more data than can fit into memory.
      • They basically load the level/area into memory and you play, for hours sometimes in the same area, very little disk access until you load the next area or if you have too little memory.

        What about continuous-world games such as Half-Life? The whole game is one area. Not all of us have 1 GB of RAM to keep a whole 700 MB CD-ROM disc cached, plus whatever the game needs. On every system I've tried it on, Half-Life freezes momentarily when "seamlessly" teleporting from one map to the next.

          • That, I believe, is an issue with how Half-Life was created, not the system. Some of the Half-Life mods use much larger maps and never have the "loading" issue that the original Half-Life has, and these are more detailed maps than the original. But I really know nothing about this; maybe someone in the know can add more?
    • by Jimmy_B ( 129296 ) <<gro.hmodnarmij> <ta> <mij>> on Wednesday September 18, 2002 @09:01PM (#4285799) Homepage
      Nowadays, you see all of these benchmarks on chips/chipsets/memory, and unless you're talking inSANE resolutions and color depths, AMD vs. Intel or nVidia vs. ATI really doesn't matter much. It's just personal preference.
      Resolutions and color depths have nothing to do with the chips/chipsets/memory; the component most affected by that is the video card. And if you think video cards are fast enough that choosing between nVidia and ATI is just personal preference, odds are you aren't doing anything which deserves more than an old 4-meg video card.
      It was my understanding that the major bottleneck of any system is the DISK. So no matter how fast your ram is, if you still have to swap to the slow-ass disk, your system will be slow.
      While starting up programs, yes, the disk is the bottleneck as files are loaded the first time. But for the tasks where speed *really* matters - compiling programs, long simulations, games - the speed of CPU and RAM are critical. (Disk can be made important if RAM is lacking in quantity, but with RAM prices as they are these days, that is inexcusable.)
  • FAQ about QBM (Score:4, Informative)

    by kyoko21 ( 198413 ) on Wednesday September 18, 2002 @08:16PM (#4285665)
    This is a link [kentrontech.com] to kentron's FAQ about Quad Band Memory.
  • The downside? It is currently only going to be available in a P4 chipset that Intel has not authorized.

    Come on, people! Can't you plainly see the chips are on the table here!

    This link [google.com] clearly shows how Intel has known all along!

    Open your eyes!!!! [vanshardware.com]
  • My thoughts, as posted on AnandTech's bulletin board. No one has responded there yet; maybe I can get more feedback here:

    Isn't this a technology that could be combined with dual bank motherboards? This would then provide 4x the bandwidth of standard single channel DDR.

    I'm not thinking so much for main memory, but for graphics. Dual Channel DDR2 + QBM would be a very very good thing. Especially on something like Nv30.

    Anyone know if it'd be possible to combine the technologies?

    Even if Intel doesn't embrace it, AMD should. Fast memory is irrelevant for Athlon, but Opterons (especially multi-processor Opterons) could seriously take advantage of this.

    Make a reference board, and others will follow suit.
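
    To make the multipliers in the question above explicit, here is a trivial Python sketch of the proposed stacking, assuming a 166 MHz memory clock and one 64-bit channel as the baseline; whether the two techniques would actually combine cleanly on real hardware is exactly the open question:

      # Hypothetical stacking of QBM (4 transfers/clock) with dual channels.
      BASE_CLOCK_MHZ = 166
      BYTES_PER_TRANSFER = 8          # one 64-bit channel

      CONFIGS = {
          "single-channel DDR":          (2, 1),   # transfers/clock, channels
          "single-channel QBM":          (4, 1),
          "dual-channel DDR":            (2, 2),
          "dual-channel QBM (proposed)": (4, 2),
      }

      for name, (xfers, channels) in CONFIGS.items():
          gb_s = BASE_CLOCK_MHZ * xfers * channels * BYTES_PER_TRANSFER / 1000
          print(f"{name}: ~{gb_s:.1f} GB/s")   # proposed combo is ~4x the baseline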

    • Isn't this a technology that could be combined with dual bank motherboards?

      Aw, you stole the idea I was gonna post :)

      I was also thinking of signal integrity, which the Kentron FAQ only partially addresses. As I understood it from a skim, QBM works by sending information "under the wire," so to speak. So QBM is only as good as the memory bus is slow. Therefore this technology is fundamentally limited, given that mainboard speeds tend to increase. However, given the apparent difficulties with DDR400, maybe the limitations wouldn't really be hit for a while yet.

      Go Kentron! Go VIA!
    • Even if Intel doesn't embrace it, AMD should. Fast memory is irrelevant for Athlon, but Opterons (especially multi-processor Opterons) could seriously take advantage of this.
      You forget that Opterons and all Hammers already have the memory controller integrated on the processor die. If this takes off it will have to aim at higher-budget customers who can afford motherboards with an extra memory controller attached to HyperTransport - which, of course, is exactly the crowd that prefers reliable, proven technology and doesn't want to jump on the first wagon of anything. AMD's solution is neat indeed, but the price is that every new memory technology will have to be pioneered by Intel. When Intel has paved the way and earned money on the first adopters, AMD can step in and use it more efficiently in a market with lower prices.
      Technology never goes the best way; it just takes whatever solution seems economically viable.
  • by geekd ( 14774 ) on Wednesday September 18, 2002 @08:44PM (#4285748) Homepage
    The downside? It is currently only going to be available in a P4 chipset that Intel has not authorized.

    Why is this a downside? Why should I give a rat's ass what Intel "authorizes"?

    Intel sure as hell didn't authorize my Athlon on its Abit mobo with a Via chipset.

    Is there an actual downside to not getting Intel's blessing (downside for consumers, not the company making the mobo)?

    • Yes, the downside is that nobody will manufacture motherboards with the chipset for fear of getting their arses sued.

      I'd say that's a pretty big downer for the consumer.
    • The downside is that since VIA don't make motherboards, they rely on mobo manufacturers, and the mobo makers don't want to piss off Intel, and so aren't too quick to look at using unauthorized chipsets.

      If no one makes the boards, the chipset may end up more or less stillborn.

      If VIA had started with an Athlon chipset, or didn't have this disagreement with Intel over whether or not they're allowed to make P4 chipsets, then there wouldn't be that problem.

      However, I would assume that there are obviously some manufacturers making VIA boards, so I would assume those ones would be the ones that would start making boards using this technology.
    • If Intel doesn't bless a chipset from Via and a large MB manufacturer like Asus declines to make a MB based on this chipset to avoid retaliation from Intel then I would say there is a downside for the consumer.
    • Why should I give a rat's ass what Intel "authorizes".

      Because you can't get warranty service from a company that's been sued out of existence for patent infringement.

  • What's next, CPUs that use dilithium crystals?

  • by Anonymous Coward
    http://www.theddrzone.com/news.asp?id=731

  • Why not place the PLLs on every other bank of normal DDR slots on a motherboard? Banks 0 and 2 would be normal. Banks 1 and 3 would be 90 degrees off. If you want double bandwidth, install DIMMs in pairs like on normal interleaving motherboards. Trace complexity would remain unchanged, motherboards would only need a few PLLs inlined, and we could all get twice the bandwidth using $40 DDR DIMMs... and not have to wait for new memory modules to drop in price.

    Just a thought.
  • ``only going to be available in a P4 chipset that Intel has not authorized.''
    Ladies and gentlemen, here's one fine instance of shooting oneself in the foot. It just proves how stupid Intel is, that they don't want faster memory. I mean, it goes without saying that they aren't _going_ to authorize it...
  • Interleaved memory dates back to the 286 CPUs and earlier. This sounds like they have just re-implemented it yet again at newer speeds and are marketing it as something totally new.

    Definition from a site on the net:
    Interleaved memory, which divides memory into two or four portions that process data alternately; that is, the CPU sends information to one section while another goes through a refresh cycle; a typical installation will have odd addresses on one side and even on the other (you can have word or block interleave). If memory accesses are sequential, the precharge of one will overlap the access time of the other.
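
    For anyone who hasn't run into it, the odd/even scheme in that definition is easy to sketch in Python; this is a generic illustration of two-way interleaving, not anything specific to QBM:

      # Generic two-way interleaving: even word addresses go to one bank and
      # odd addresses to the other, so sequential accesses alternate banks and
      # one bank's precharge/refresh overlaps the other's access.
      NUM_BANKS = 2

      def bank_for_address(word_address):
          """The low bit(s) of the word address select the bank."""
          return word_address % NUM_BANKS

      if __name__ == "__main__":
          for addr in range(8):
              print(f"word {addr} -> bank {bank_for_address(addr)}")
          # Sequential addresses hit banks 0, 1, 0, 1, ... so back-to-back
          # accesses never stall on the same bank's cycle time.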
  • Since when does Intel have to "Authorize" a new chip?????

  • What if this could also be applied to DDR II (QDR) memory? THAT would give some REALLY impressive bandwidth.

    The crappy side is that even if it can be applied, it's virtually guaranteed that the memory industry will take the wimp's way - first introduce DDR II, then wait a few years to introduce the dual-band DDR II. No sense in skipping a generation, that would just mean less revenue, right?

    steve
  • by Anonymous Coward
  • When I first glanced at the title of this story, I thought of memories being shifted four times.

    The first look at this Slashdot item prompted the mental image of thoughts regarding past events taking on four interpretations.

    When I first read the text above, I imagined remembering something, then remembering it again, and again, and one more time, each time different.

    The interpretation which before all others was formed in my mind after I read the information about which this post comments regarded bits of stored information being repetitively read and interpreted in different arrangements, more than three times, but less than five.



    Why, no, my name isn't Mojo Jojo, why do you ask?
