Technology

Is Rambus Destined to Return? 202

An anonymous reader pointed us to an article running over at Tom's that talks about the world of RAM and criticizes the performance of DDR. The article goes into DDR333, DDR400, and Rambus, and explains the issues at higher clock speeds.
This discussion has been archived. No new comments can be posted.

Is Rambus Destined to Return?

Comments Filter:
  • by 1155 ( 538047 ) on Friday February 15, 2002 @09:29PM (#3016268) Homepage
    Experience tells us that Rambus is faster.

    Pocketbooks tell us that DDR is better.

    Which will your wife let you decide on?
    • I'll take the Glad Bag!
  • Given the poor performance [tomshardware.com] of RDRAM, due in large part to its insanely high latency, and Rambus' dubious business practices [ebnews.com], based mainly on trying to milk patents to leech off the entire memory industry's back, why on earth should anybody give them the opportunity to make a comeback?
    • by linzeal ( 197905 )
      Is there anything else in memory architecture besides Rambus and DDR on the horizon?
    • by Dudio ( 529949 )
      The article you reference talks about RDRAM performance with the P3. The P4 presents an entirely different picture, as it fully utilizes the bandwidth of the dual-channel RDRAM architecture. See this article [tomshardware.com] for details. A brief quote from the conclusion [tomshardware.com]:
      The memory benchmarks from above show that Pentium 4 really requires the 3,200 MB/s of data bandwidth supplied by the two Rambus channels. I doubt that it will perform as well with DDR-SDRAM, unless two channels will be used.

      The business practices, of course, are a different story.
    • by Glonk ( 103787 )
      How is the parent post +4?

      He links to an obsolete article from Q3 2000 about RDRAM on the Pentium III...

      He talks about the "insanely high latency", and it's pretty obvious he's exaggerating slightly.

      RDRAM's latency, particularly with the upcoming PC1066, is far better than people give it credit for. See this AcesHardware article [aceshardware.com].

      PC1066 RDRAM latency for 128 bytes: 207 cycles
      PC800 RDRAM latency for 128 bytes: 247 cycles
      PC133 SDRAM latency for 128 bytes: 229 cycles

      Slashdot moderators: Would it kill you to check the links before going points-crazy?
    • I agree but am compelled to ask: why was latency an issue w/ RDRAM before but now RDRAM is the greatest thing since sliced bread? I'm not upset that THG has posted a correction, I just didn't see a satisfactory explanation...
  • by slithytove ( 73811 ) on Friday February 15, 2002 @09:37PM (#3016302) Homepage
    It seems to me that Rambus has offended so much of the industry that even with Intel's continued (though lately lessening) support, or perhaps especially because of Intel's support, it will fail to be adopted by the majority of motherboard manufacturers.
    Other avenues for gaining speed exist, like Nvidia's extra memory controller for the GPU in the Xbox and the higher-end nForce chipset.
    • To clarify on the second point:
      The dual 64-bit memory controllers of the Nvidia nForce northbridge allow the CPU and GPU to access the same DDR RAM at the same time and at the same speed. This doesn't help in non-3D apps, but then I haven't seen a lot of desktop apps besides Maya and games that are dying for a huge performance upgrade anyway.
      Lawrence Lessig and I agree that there is a lack of killer apps to drive the demand to drive the R&D to keep Moore's Law going. On the other hand, there are vast areas of application for embedded systems. Scientific and other CPU-hungry apps will continue to be parallelized and run on ever cheaper distributed systems.
  • common sense? (Score:2, Insightful)

    by Kargan ( 250092 )
    With the exception of the shady business practices of Rambus, I don't fully understand why Intel dropped RDRAM in the first place. In every benchmark that I saw circa 7-8 months ago, the huge amount of memory bandwidth gave Intel one of its only advantages over the corresponding AMD CPUs.

    It couldn't have just been the prices either, because Intel obviously knows they're not going to win that race.

    Anyone?
    • Re:common sense? (Score:3, Insightful)

      by Greg Lindahl ( 37568 )

      Intel had 2 reasons to bring out SDRAM based boxes:

      1) Cost. Most consumers and business desktops don't care about speed, and RDRAM costs too much extra.

      2) Speed. RDRAM looked fast because it was implemented with multiple banks. You can do the same thing with SDRAM, if you like. And that would give an apples to apples comparison.

      • Re:common sense? (Score:4, Insightful)

        by VAXman ( 96870 ) on Friday February 15, 2002 @10:29PM (#3016443)
        Speed. RDRAM looked fast because it was implemented with multiple banks. You can do the same thing with SDRAM, if you like. And that would give an apples to apples comparison.

        The whole advantage of RDRAM is high bandwidth per pin, and the fastest RDRAM has more than double the bandwidth/pin of the fastest DDR. RDRAM is very cheap to make dual-channel because it has fewer pins. It is very expensive to make a dual-channel DDR system because it requires that many more signals. The only dual-channel DDR system I know of is the upcoming ServerWorks Grand Champion chipset for the P4 Xeon, which is very high-end (and no doubt expensive).

        • Actually, there have been multiple dual-bank SDRAM chipsets. While SDRAM and DDR do use more pins than RDRAM, that isn't necessarily a huge cost, especially since they're lower speed pins than RDRAM, which turned out (in practice) to be a pain in the neck.

        • The problem with RDRAM is that it moves the extra cost for extra bandwidth from the motherboard to the memory. Motherboards are cheap, even ones with the 6-8 layers you need for dual-channel DDR. RDRAM, on the other hand, is expensive, and because of the higher frequencies the yields are lowered significantly. So RDRAM trades expensive testing and expensive parts (in terms of yield) for a little less work in motherboard design and manufacture. RDRAM is trying to answer a question that doesn't exist: when they started designing RDRAM they thought we wouldn't be able to go past 4 layers in the PCB and that we wouldn't have materials that could run at high speeds without interference. The problem is, we do. Dual-channel DDR has already been shown to work, so we have nearly equivalent bandwidth with much lower latency. Sure, the P4's prefetch helps hide latency, but not completely. An Athlon MP setup on an nForce-style chipset would be huge on performance.
        • A few years ago this "bandwidth/pin" thing would have been a killer difference. Adding pins was expensive; pins and packaging were serious problems.

          However, that has changed considerably. uBGA and similar packages achieve huge pin densities at tiny cost per pin. It is harder to get your process right with BGA, but you only have to do that at the design/setup stage; once it's right you get better quality, repeatability and yield with uBGA than you do with fine-pitch packages. Multilayer PCBs are much less of an issue too, as are fine pitches in those PCBs.

          >The only dual channel DDR system I know of is the upcoming Serverworks Grand Champion chipset for the P4 Xeon

          Intel and others are working on several Dual DDR chipsets - "Granite Bay" is supposed to be released Q3 this year.
    • "I don't fully understand why Intel dropped RDRAM in the first place"

      I think the reasons were pretty clear:

      1) RDRAM was extremely expensive compared with SDRAM

      2) The performance advantages were (and are) largely theoretical in desktop PC's

      3) DDR RAM in practice showed itself to be faster than RAMBUS.

      4) The Athlon chipsets supporting SDR/DDR combined with the cheaper costs of the AMD CPU and DDR RAM gave AMD based machines a huge cost advantage to vendors who chose to go the AMD route

      [In fact, I would've bought a Dell last year, except they only had P4s with RDRAM. That made a Dell computer not only slower but more expensive than the Micron PC with the Athlon that I eventually bought. The price difference was significant, too.]
  • by andaru ( 535590 ) <andaru2@onebox.com> on Friday February 15, 2002 @09:43PM (#3016328) Homepage
    I always thought that Rambus was even more annoying than the rest of the Lakers.

    Sorry...

    • Yeah, but with Shaq on the shelf with a bum wheel they could use some help in the post...
    • I always thought that Rambus was even more annoying than the rest of the Lakers.

      Wow, if I had mod points I wouldn't be sure whether to mod you up as funny or down as flamebait...

      I've got a friend that would consider that accusation fighting words - he's been carrying Kurt's bubble gum card in his wallet for 15+ years and reveres him as a demigod.

      Of course, I never watched him play a game (that I know of), and I couldn't care less personally (I really hope my friend doesn't read /. or he'll probably hunt me down).

      What is the world coming to...

  • by bravehamster ( 44836 ) on Friday February 15, 2002 @09:44PM (#3016331) Homepage Journal
    Seriously, ignoring pure performance considerations, RDRAM is garbage. It has to be installed in pairs, and if those pairs aren't made by the same manufacturer, I've seen motherboards refuse to boot. Heat is a serious issue, and I've burned one finger too many on those heat spreaders. I've also seen an analog cable coming from the CD-ROM get stuck between the RIMMs and melt to the heat spreader. And price is still an issue, although it's improved quite a bit recently.

    Expensive + Has to run in pairs + Runs very hot == Useless to me.
    • While Rambus memory might not be the best design in the world, your arguments for why it is garbage are retarded. The memory is interleaved and runs hot. There's nothing wrong with interleaved memory designs except the fact that you have to buy two modules rather than one. If you use memory from different manufacturers in SDRAM-based systems you can wind up with a system that doesn't boot, too; it depends on how robust your memory controller is. They run hot, but have twice the memory bandwidth of PC2100 memory. That sort of tradeoff is always inherent in a computer. You could say RDRAM is garbage because of the limited number of suppliers, or the patent issues, or maybe even the high latency. Instead you picked the fact that it is interleaved and hot.
      • by Zathrus ( 232140 ) on Friday February 15, 2002 @10:53PM (#3016509) Homepage
        No, his points are valid.

        Interleaved memory designs (interleaving on a slot basis rather than on the RAM stick itself) cause many issues. First off, you have to have more slots for equivalent upgradeability, and more slots require more layers on the motherboard due to the increased number of traces (although, admittedly, RDRAM has vastly fewer traces than SDRAM even so). It also requires more real estate on the board, which isn't debatable. Second, you start running into timing issues more often with interleaving than with standard memory clocking. Sure, as you say, it depends how robust your controller is. But, funny thing, RDRAM either has amazingly shitty controllers, or they're just vastly more prone to lockups when you have slightly differing speed memory.

        As for heat - it's not a tradeoff issue. DDR didn't double the heat of standard SDRAM, and RDRAM isn't merely twice as hot as DDR. It's absurdly hot. And heat is a major computer issue already between CPUs, chipsets, and graphics cards throwing off oodles of heat as is. I don't know of a manufacturer that has a fan blowing specifically over the RAM, but RDRAM could certainly benefit from this. Heat kills systems (more specifically, thermal changes kill systems, but you'll get faster thermal changes with hotter components), so why design a system with RDRAM that is so much hotter than the alternatives? For how little (if any) of a performance gain?

        Oh, and you claim RDRAM is twice the speed. Ok. Want to compare apples to apples? Put RDRAM in a non-interleaved system (yes, they're out there. They're even predominant) and the memory bandwidth is only slightly higher than DDR. Or compare it to an interleaved DDR system (again, they're out there). Boom. You have a DDR system with nearly as much bandwidth as RDRAM.

        And, frankly, bandwidth ain't all it's cracked up to be. Funny how DDR systems routinely spank RDRAM systems in real-world benchmarks (not pure memory benchmarks). Why? Because latency is king. Particularly if you're multitasking: you'll hit different areas of memory so much that bandwidth will make little difference compared to latency. And RDRAM has really, really miserable latency, and it gets higher as you add more sticks. So while it's great for some things (video editing/streaming, etc.), it sucks for most applications.
        • What the fuck is it with people not reading everything people post? I was saying that interleaved memory and heat are a retarded basis for disliking RDRAM. I fucking said one of the problems with RDRAM is the high latency. Issues with a system's implementation (i.e. heat) are less important than a system's performance. The guy not liking RDRAM based on heat issues and the fact that it is interleaved is a bit ridiculous when there are far worse problems with it, like the latency I originally mentioned and you decided to repeat for your own benefit. RDRAM is a crappy technology and I wouldn't buy a system using it. I used to have an HX chipset in a Sony Vaio that used EDO RAM. The HX chipset was interleaved, so like you said I had to upgrade my SIMMs in pairs from the same vendor. That is a logistical hassle and prone to errors on my part and on the part of whoever is supplying me with the chips. I don't know why someone would design a system with more heat inherent in its operational design; I wouldn't personally, as I am not a proponent of RDRAM or Rambus as a company. Just because I said the guy was picking on two minor flaws in the design of RDRAM doesn't mean I have a Rambus t-shirt and posters on my walls.
          • >I was saying that interleaved memory and heat are a retarded basis for disliking RDRAM.

            And as someone who builds systems where reliability is a major issue, I am saying that heat is a damn good reason for disliking RDRAM.

            If a component runs hot - that means it is using more power than a component which runs cool.

            Fans create Noise

            Fans are unreliable

            Heat itself lowers reliability - a good "rule of thumb" is that every 10°C rise in temperature halves the life of a component (a quick sketch of that scaling is below).

            So Rambus = larger power supply, more cooling, more heat, more noise, less reliability.

            Disclaimer - I am a bit biased - I mainly build systems which sit in racks in places like Telehouse, systems which are used for Telecomms therefore they must be seriously reliable. Heat and Fans matter more for me than they probably do for most desktop users. But the same basic principles still apply to everyone.
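            As a rough illustration of what that rule of thumb implies, here is a minimal Python sketch; the 10-year baseline is invented purely for illustration:

            # Parent's rule of thumb: every 10 C rise halves expected component life.
            def relative_life(delta_t_c):
                """Life multiplier for a temperature increase of delta_t_c (degrees C)."""
                return 0.5 ** (delta_t_c / 10.0)

            baseline_years = 10.0  # hypothetical life at the baseline temperature
            for rise in (0, 10, 20, 30):
                print("+%2d C -> %.1f years" % (rise, baseline_years * relative_life(rise)))
            # Prints roughly: +0 C -> 10.0, +10 C -> 5.0, +20 C -> 2.5, +30 C -> 1.2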
            • I think for many non-professional buyers heat is definitely not an issue in terms of the overall lifespan of the system or in terms of noise. They are, after all, buying a chip that gobbles down 70 watts and needs a heat sink so large it has to be mounted to the chassis of the system. P4 systems aren't exactly sipping electricity to begin with, and the heat generated by the RAM is marginal compared to that of the processor. You definitely wouldn't stick a P4 system in a closet with shitty ventilation. The point I was trying to make, which I think most people missed, is that RDRAM has worse flaws than heat, which is an implementation issue, not a design issue. There are more pressing design issues, like the high latency, which I think take precedence over the heat problem; the heat has more to do with the manufacturer of the RAM than with the actual design of the system.
      • It depends on how robust your memory controller is.

        While that is a true statement, it is not true that it only depends on the memory controller, which makes the thrust of the statement false. If the variance between your RIMMs is such that it goes outside the tolerances of the channel (which are measured in tens of picoseconds), there ain't shit your memory controller can do.

        They run hot but have twice the memory bandwidth of PC2100 memory.

        Unless you are only using one significant digit, that's just not true. Dual-channel PC800 RDRAM provides 3.2GB/s theoretical, while single-channel PC2100 is 2.1GB/s. 3.2 is not twice 2.1... it's just over 50% more (the arithmetic is sketched at the end of this comment).

        But even that doesn't really excuse running so hot. Dual-channel DDR (which has 25% more bandwidth than RDRAM, not that it helps the platforms that have it) isn't incredibly hot.

        Though oddly in the end I have to agree with you... Interleaved and hot are kinda silly reasons to avoid rambus. Especially interleaved, because that only -helps- . :)
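        For anyone who wants to redo that arithmetic, here is a minimal Python sketch of the theoretical peak numbers (real throughput is lower, and none of this says anything about latency):

        def peak_mb_s(width_bits, transfers_mt_s, channels=1):
            # Theoretical peak = bus width (bytes) x transfer rate (MT/s) x channels
            return (width_bits / 8.0) * transfers_mt_s * channels

        print(peak_mb_s(16, 800, channels=2))  # dual-channel PC800 RDRAM -> 3200 MB/s
        print(peak_mb_s(64, 266))              # single-channel PC2100 DDR -> ~2128 MB/s
        print(peak_mb_s(64, 266, channels=2))  # dual-channel PC2100 DDR  -> ~4256 MB/s
        # 3200 / 2128 is about 1.5, i.e. "just over 50% more", as noted above.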
  • DDR333 manufacturers are starting to get the feeling that the technology is essentially moving too fast to allow companies to reap the benefits of their investment in research and development.

    This can be a problem. You should be able to make back the money so you cover your costs. Unfortunately, you may have to have deep pockets to stay in the game for a long time.

  • It's basically just regurgitating a fact we've known ever since we first saw the design of the PIV: the PIV was designed under the assumption that Intel would be able to force the public to spend way too much on CPUs and memory simply b/c there would be no alternative, just as there hadn't been for the previous 20 years. In other words, the PIV was designed for RDRAM.

    It's not news that a CPU designed for one kind of memory doesn't perform as well when using a different kind. The PIV needs memory bandwidth desperately. How was Intel supposed to know that all those Alpha engineers would go to AMD and give the public a decent alternative? How were they to know that the public would have the option of getting better performance for less than half the money? The PIV is the last processor of an era AMD just put an end to. It's foolhardy to derive the future of a memory technology from its performance in conjunction with a misdesigned processor. If you could test an Athlon with DDR versus RDRAM, of course, the DDR would perform better. Please let's not post "news" from Tom's anymore - Tom's has just really gone to crap.

  • Much like SGI boxes for the average desktop, Rambus plays its part in a niche market. (Though I'll be damned if I'm going to stay in a job that makes this RAM a standard build.)

    When price isn't an issue, and politics not a motivator, it's amazing what ends up in the niche.

    -Slashdot is like a sewer: what you get out of it pretty much depends on what you put into it. - Updated Tom Lehrer.
  • by Anonymous Coward on Friday February 15, 2002 @10:06PM (#3016385)
    The fact is that most IC testers today support the lower-speed parallel connections. High-speed serial connections like Rambus and SERDES require very expensive mixed-signal testers with expensive and complicated load boards (the PCB between the tester itself and the chip). These high-speed serial I/Os on the memory ICs themselves are also generally much larger than on a DRAM, probably by a factor of 5. So, you don't get die savings, you don't get lower test costs, and most of all you don't have any processors whose front-side buses exploit this. Plus, you have very expensive target products in terms of motherboards to support the Rambus ram requiring tight trace routing and signal isolation, and their very limiting 28ohm max impedance (at least with the PC800 RDRAM), almost completely opposite in difficulty to DDR. So where's the advantage?

    If you also figure that the memory controllers for Rambus are configured for dual-channel operation, it becomes much clearer that the advantage is not in the memory architecture itself but in the controllers. Suppose a server board manufacturer decides to support quad-channel PC2700 at 1GBx4. That's 10.8 GB/s of potential memory bandwidth on sequential accesses! There's hope with chipsets like the Nvidia nForce 420's dual-channel DDR, but the Athlon FSB is the limiting factor there. And let's not get into the infamous first-access latency issues, which I hope they're finally addressing.

    Rambus is also notorious for poor tech support. I worked for a major silicon vendor using their core, and they never responded to our requests for minimum PLL-to-Rambus core distances. It was abjectly ridiculous, but not surprising considering that regular SDR/DDR memory interfaces outnumbered Rambus designs 100:1. Have things changed? Considering what their legal bills have been lately and an erosion of their tech support, I doubt they can afford to improve it much.
    • For those who don't know, IC testers are big ass machines used to production test chips (i.e. screen out the good chips/dice from the bad ones).

      Intel uses testers from Schlumberger [slb.com] (their reps are quick to point that out). Typically, a tester costs anywhere from 1-10 million dollars, plus they require a lot of maintenance, calibration, etc. Basically, the faster it is and the more pins you need, the more it costs.

      I've worked with some Schlumberger 'KX' testers; they're a big pain in the ass, unreliable, and badly designed: just shutting the thing off can break it!! (especially if you use the emergency off button).

      There is another choice, however: you can use 'BIST' (built-in self-test) and have the chip basically test itself :-). This allows companies to get away with using cheaper, more reliable testers.

  • by ryusen ( 245792 ) on Friday February 15, 2002 @10:08PM (#3016394) Homepage
    It depends on what systems you are working with. If you want a performance P4 system then obviously you use Rambus... and if you want an AMD system you use DDR (since there is no Rambus/Athlon chipset),
    and until there is a Rambus/Athlon chipset I don't really think we can gauge the real-world implications of it...
    Either way, I have better things to do with a few $100 than put them into a more expensive chipset/CPU/memory rig. If you have the extra money and the Rambus system gives you what you want, then more power to you. Overall, right now, you can't say either system is "the best" in every possible category.
    • If I wanted a high-performance Pentium 4 system I'd wait a little while for the Grand Champion chipset that supports two DDR channels.
      • Now that is interesting... but based on the performance of the nForce (where the memory bandwidth is twice the FSB, but performance wasn't increased that much), how much better do you think the Grand Champion is going to be unless they pump up the FSB again...
        The new P4 FSB is going to be 133x4=533,
        while dual DDR, assuming it's DDR333, would be 166x2x2=666.
        Assuming each channel is the same width (I can't remember right now), would that extra ~133MHz x 64 bits make that much of a difference? It might be interesting for overclocking, though... According to Ace's [aceshardware.com] the nForce has really good overclockability. I wonder if it's because of the extra memory bandwidth headroom...
        • The Pentium 4 FSB is 3.2GB/s; even the fastest DDR is only 2.7GB/s, so the Grand Champion uses two 1.6GB/s DDR channels to get balanced performance.

          I think the Grand Champion HE has four channels, so you'd get 6.4GB/s; that's probably only useful for workloads with a lot of DMA traffic (e.g. disk and network I/O).
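          Taking those figures at face value, here is a quick Python sketch of the balance; these are theoretical peaks only, and the 100MHz quad-pumped FSB is assumed from the 3.2GB/s number above:

          import math

          fsb_mb_s = (64 / 8) * 400  # 64-bit FSB, 100MHz quad-pumped -> 3200 MB/s

          ddr_channel_mb_s = {"PC1600": 8 * 200, "PC2100": 8 * 266, "PC2700": 8 * 333}
          for name, per_channel in ddr_channel_mb_s.items():
              needed = math.ceil(fsb_mb_s / per_channel)
              print("%s: %d MB/s per channel, %d channel(s) to cover the FSB" % (name, per_channel, needed))
          # Two PC1600 channels give exactly 3200 MB/s (the Grand Champion layout
          # described above); four such channels, as in the HE variant, give 6400 MB/s.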
  • I have a few problems with RDRAM.

    The first one is the price. It's simply more expensive than DDR Ram as far as I can see.

    Secondly, it appears to me that we are getting to the splitting-hairs, angels-dancing-on-a-needle level here with RAM. Unless it dramatically improves my boot time or my time to do things in a word processor, makes MySQL fly like a greased dolphin, gives me kickass FPS in UT, or makes my G4 fill my bowl with Fruit Loops in the morning, I'm not going to spend one cent more for RAM than I have to.

    If RDRAM is 101 dollars for 256 and DDR is 100 dollars for 256, I'm going to go with the DDR and the hardware that supports it.

    • Well, not to make Tom's or Rambus's argument for them... but just look at the benchmarks of the P4 2.6GHz/RDRAM vs 3.0GHz/DDR vs 2.6GHz/DDR. It's clear that at those clock speeds RDRAM works much better with the P4. Much better. So if you are doing -anything- where such an increase is meaningful -- which includes 3D games like UT -- then yes, you should go with the RDRAM.

      But if what you get with DDR is "good enough", then of course you should go with that, because it is cheaper.

      • Yes, but what about price/performance? I can get 768MB of PC2100 RAM for what 512MB of RDRAM costs. Which will I see better real-world performance with, 50% more RAM or 33% faster RAM? I guess it depends on your applications. I know that if I'm running a large animation in 3DS Max, the extra RAM (and not swapping) will get me done hours or days faster than the faster RAM!
  • Who gives a crap if Rambus is on the way back? All /. users know that even if the number of channels doubles, Bill Gates will just jack up the memory usage of Windows. Then everyone will just start bitching about it. Then Jon Katz will just post an article about something that has nothing to do with "News for Nerds". The cycle never ends... ... ...
  • by t0qer ( 230538 ) on Friday February 15, 2002 @10:32PM (#3016452) Homepage Journal
    About a year or so ago, Intel was doing demos of Q3A at Fry's to show off the new P4. The economy was good and I had like 4k from PTO that I acquired between jobs, so I said what the hell and dropped $1500 on a D850GB board, processor, case and 128MB of RAM.

    I gotta say, this stuff is hot. My friends have all gone off and bought GeForce3s and AMDs with DDR. I thought these new cards/systems would have benchmark scores well above my own (around 3800 with a GeForce2 GTS), but I was surprised to see they only score 1000 or so more than me.

    Out of curiosity, we put one of those GF3s in my system. Without fail I would score about 400 more, coming in around 6300 3DMarks, above my buddy's AMD 1.6GHz. My P4 is just 1.4GHz, yet even with a lower clock rate the memory bandwidth made a huge difference.

    I'm not trying to cause a ruckus here; anyone with deep enough pockets (or access to enough systems) can just as easily do the same testing I did. The bottom line, whether or not the moderators like it, is that Rambus systems do provide the absolute best possible performance in 3D gaming. It certainly was expensive when it came out, but now, with the falling prices of all RAM, it's within reach of anyone that wants that extra "oomph" in their system.

    Does anyone know of any AMD boards that use Rambus? I'm sorta curious what kind of scores those get in comparison to the Intel ones. Anyway, that's my comment.
    • by Chris Burke ( 6130 ) on Friday February 15, 2002 @10:53PM (#3016508) Homepage
      Were the AMD systems Duron-based? :)

      What about games that -don't- love the P4, like, say, -any other game- (even those based on the Q3 engine)? :)

      But no one needs to do the same testing you did. They can just look at all the tech sites. Hey, you already visited Tom's Hardware to read this article, check out who -he- thinks has the "best possible performance in 3D".

      At one time, your "best possible performance in 3D gaming" applied... That time was the year 2000, when Q3A was the most demanding benchmark anyone could cook up. It is now 2002, the world has moved on from Q3A, and the P4 lost that crown. But nice try.

      You should have said "media encoding", because then you'd have been right even today.

      As to AMD using Rambus... it'd suck. The P4 does better with RDRAM than DDR because it's a highly pipelined, highly clocked machine that craves bandwidth. The K7 is a very wide machine, and for it the worst thing that can happen is having to stall waiting for data. The latency of RDRAM would kill the K7. You'll note that the dual-channel nForce (higher theoretical bandwidth than the i850, and 2x the KT266A) doesn't outperform VIA's chipset. A likely reason is that the KT266A has lower latency, and that more than makes up for the extra bandwidth (which the K7 doesn't need).

    • Congrats on using a bad benchmark.

      Go read the hardware sites. Quake3 does ABSURDLY well on a P4 for some reason. Nobody can explain why; John Carmack doesn't even understand it, last I read. But because of this, it's not a wonderful benchmark for AMD vs. P4 comparisons. If it's all you play, then that's fine - benchmark off of that. Otherwise go look at other gaming benchmarks, like Serious Sam and Anandtech's new Unreal 2 benchmark (which is of debatable value, admittedly).

      Of course, then you might realize that people who have a clue are right, and that RDRAM costs 2-4x as much as DDR for no performance gain. Or for a performance loss in some cases.

      And no, AMD doesn't use RDRAM. Nobody's even bothered to design an AMD motherboard that uses Rambus. Partly because it makes no sense - AMD is still mostly used by people who are cost conscious, and RDRAM isn't desired in that category. Partly because it would be relatively difficult to design such a beast, due to lack of support from AMD. And partly because there's no performance advantage in the real world.

      Oh... and even in Q3... consider how much more you spent on RDRAM, a P4, and the premium on the motherboard as compared to a comparable Athlon system. Then figure that out as a percentage of system cost. Then figure out what percentage of performance you gained. I bet the former is greater than the latter.
  • I'm generally not too fond of Rambus. Why, you ask? Look at what it's done with the N64: framerate problems, jaggies... etc. I'm just fine with my SDRAM, thankyouverymuch.
  • by Anonymous Coward
    That Intel needs Smallbus, err, Rambus memory to keep on par with AMD chips.

    This is the 'return of Rambus!'?

    Please. SDRAM is the standard. DDR is entrenching into that market. Rambus? It's like the Mac - some people wonder, 'What's that?' while the techs laugh at people who have it.

    Rambus is horrible. The technology? No, the company. Not for its speed, but for its business practices, should we continue ignoring it. They are a horrible company and, despite their products, should not be dealt with as a result.

    The technology... isn't really any better or worse than SDRAM/DDR save for price. I've seen boxes refuse to boot when two different-brand yet same speed/size SDRAM chips were inserted into a computer. I've seen bits o' SDRAM cause page faults, kernel panics, etc.

    I've seen Rambus do the same. *shrug*
    • Please. SDRAM is the standard. DDR is entrenching into that market. Rambus? It's like the Mac - some people wonder, 'What's that?' while the techs laugh at people who have it.

      Maybe. But, often it's the non-techs that make technical decisions, at least in a business environment. I'm sure companies like Dell are selling plenty of systems with RAMBUS in it.
      Having a technologically superior product does not mean you'll succeed commercially. Having an inferior product does not mean you'll fail. Unfortunately.

  • by Chris Burke ( 6130 ) on Friday February 15, 2002 @10:41PM (#3016476) Homepage
    It wasn't a bad article... I mean, the facts -do- show that the p4 runs better with RDRAM, and he addresses the consequences of that quite well, and quite neutrally. For that I commend him.

    But he does misrepresent some issues - for example, signal integrity. I can say with complete assurance that Rambus is loaded with signal integrity issues, and these issues get -very bad- as the clock frequency goes up. Also, Rambus is -not-, strictly speaking, a serial bus. First, it is 16 bits wide, while pure serial would be 1. Second, the depiction of a DIMM as an unterminated stub with significant SI issues is correct, but this doesn't go away with Rambus, and this definition of "serial" fails as well. While the signals do pass through a RIMM continuously, eliminating the RIMM itself as the source of major SI problems, you still have each and every RDRAM device acting as an unterminated stub, each of which causes reflections of its own. Especially for devices with tolerances as tight as RDRAM's, this can be difficult to manage. While on balance I'd have to concede that at a given clock frequency RDRAM has the SI advantage, remember that RDRAM needs 4x the clock frequency of DDR to match bandwidth.

    Or you could have 2 channels of rambus, and only need 2x the frequency. Well, 2 channel DDR is becoming a reality. Not only does nForce support it, Sledgehammer will as well. Neither of these are Intel platforms, but I would guess that going dual-channel would be a natural step for VIA and others competing with Intel chipsets. It would especially make sense for p4, as it would more than make up the memory bandwidth disparity that currently exists.

    Speaking of nForce, another thing I take issue with is the suggestion that the nForce's DIMM-slot population problems are indicative that DDR is crippled by SI issues. I think it's more likely that this was the first chipset designed by a company whose experience lies solely with graphics cards, on which the RAM is soldered directly to the PCB. Lack of experience with the harsher SI conditions of a computer motherboard is to blame.

    Speaking of DIMM population, it's hard for me to see only having 2 DIMM's on some boards as a particularly black mark for DDR... That leaves you with 2GB per channel, the same as RAMBUS.

    So, he was right about some things and insightful on others, but the picture is -not- so clear-cut in Rambus Inc.'s favor.
    • by sigwinch ( 115375 ) on Friday February 15, 2002 @11:13PM (#3016552) Homepage
      For example, signal integrity issues. I can say with complete assurance that Rambus is loaded with signal integrity issues. These issues get -very bad- as the clock frequency goes up.
      Bad as in you have to be aware of dielectric losses in the PCB material. I remember seeing a reflectometer graph of a Rambus system where the plateaus were noticeably sloping from dielectric loss.
      Also Rambus is -not-, strictly speaking, a serial bus.
      Serial != 1 bit. Serial == takes more than one clock cycle to transfer a word.
      • Bad as in you have to be aware of dielectric losses in the PCB material.

        Which is pretty damn bad! ;)

        Serial != 1 bit. Serial == takes more than one clock cycle to transfer a word.

        Oh, shit. Well, a word in x86 land is 16 bits, so I'm in the clear, right? I mean, all the macros for 32-bit integers are all "DWORD", right? :)
    • by Perdo ( 151843 ) on Saturday February 16, 2002 @04:23AM (#3017183) Homepage Journal
      The column was done by Frank Völkel. Based upon his lack of technical documentation, I'm going to guess it's just his opinion. I doubt very seriously that Tom Pabst himself agrees with the article; Tom tends to be much more objective in his articles than Frank is.
      • That's all nice and well, but I am a bit bothered by the fact that there is controversy about the info on THG. I used to trust the articles, now I'll have to apply my own judgement (oh dear...:-).
  • QDR is coming out soon (though they are calling it something else, as I recall; no idea why - QDR is such a nice, logical name that even laymen can understand it), and it seems to be just an advance of DDR technology.

    Not to mention how far up Nvidia has managed to scale DDR RAM. Heh. I would like to see Rambus match that. :) (It never would; the latency is too high - part of the base Rambus technology.)

    Rambus would settle down nicely in a video-toaster type of appliance, but that is about it. Video editing seems to be one of its few strong suits.

    Besides, I would like to see a motherboard that is halfway cheap and can support 3-4 gigabytes of Rambus RAM. :)
  • Cost control (Score:5, Interesting)

    by Waffle Iron ( 339739 ) on Friday February 15, 2002 @10:47PM (#3016495)
    From the article:

    The days of cheap memory are over.

    They say this because of the huge expense needed to provide 512MB or more of ultra fast memory. But what if they added yet another level of "cache"?

    Put in 128MB or more of super-fast RAM (faster than today's RDRAM or DDRAM, maybe using an exotic bus) backed by gigs of cheap, easy-to-make memory (PC266 DDRAM or slower). The cheap ram is still orders of magnitude faster than a disk drive. Manage them with hardware that does page swapping similar to virtual memory.

    You could get good system performance and lower overall cost.
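    To put rough numbers on the idea, here is a minimal Python sketch of the average access cost for such a two-tier setup; the latencies and hit rates are invented purely for illustration:

    def avg_latency_ns(hit_rate, fast_ns, slow_ns):
        # Average cost when hit_rate of accesses land in the fast tier.
        return hit_rate * fast_ns + (1.0 - hit_rate) * slow_ns

    FAST_NS = 40.0          # hypothetical "super-fast" tier
    SLOW_NS = 120.0         # hypothetical cheap bulk DRAM
    DISK_NS = 8000000.0     # a disk access, for scale

    for hit in (0.90, 0.95, 0.99):
        print("hit rate %.2f -> %.0f ns average" % (hit, avg_latency_ns(hit, FAST_NS, SLOW_NS)))
    # Even at a 90% hit rate the average stays near the fast tier, and either
    # tier is orders of magnitude cheaper than falling back to disk (DISK_NS).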

    • They say this because of the huge expense needed to provide 512MB or more of ultra fast memory. But what if they added yet another level of "cache"?

      Intel's McKinley chip reportedly has 3MB of Level 3 on-chip cache. Not exactly what you are proposing, but the same basic principle.

      Slightly off-topic: there's an interesting column in Embedded Systems magazine where the author expresses concern about cosmic rays flipping bits in this cache. Apparently Intel acknowledges that this may be a problem (they have studied it). Intel claims a 'normal' user should experience this problem once every 1000 years. However, as the author points out, what if every airplane is equipped with a McKinley chip? Apparently there are 7000 planes in the air at any given moment (on average), so would that mean 7 plane crashes a year due to this problem?

      To get on topic again: your idea is interesting, but maybe we should try to avoid running monstrous applications that need ridiculous amounts of RAM.

      Having said that, I did run into a memory problem when I had to edit a 270MB text file the other day. For some reason, the 512 MB of memory in my machine wasn't enough. Emacs wouldn't load the darn file ("buffer size exceeded"), and Wordpad hung. I tried Notepad (I know, I'm nuts...), and it actually worked! The machine started thrashing like crazy, it took several minutes to scroll, but eventually I managed to do the minor edits. Yes, Linux probably would have done it without a problem, but that was just not an option, so save me your flames. And I hate vi, so don't bother...

  • It runs REALLY hot. One of our techs' machines needed an extra fan just to cool the memory!!

    But really, Rambus is not the solution; another technology will eventually arrive. Damn it, I want those quantum computers with 3D optical storage!

    How much does Tom get in kickbacks for supporting Rambu$? They are one of the poster boys for patent reform, for both consumers and patent holders.
  • The prob with the P4 is its crazy 20+ stage pipeline. Its design is almost counter-productive... I mean, it scales to really high MHz pretty easily, but all of those pipeline flushes kill it in anything that isn't a 3D game. And since most 3D games use the graphics card almost exclusively now... I kinda wonder why you would want such a crazy processor.

    The one thing it's good at, it doesn't get to do...

  • by landley ( 9786 ) on Friday February 15, 2002 @11:07PM (#3016536) Homepage
    The guy starts out the article saying that Intel's DDR implementation was crippled for political reasons. He also states that Athlons benefit from DDR more than P4.

    Then the political aspect is ignored, and he talks almost exclusively about technical reasons why Rambus might theoretically be better, using existing Intel chipsets as evidence.

    Hello? Answer the question, please? Has Intel ever come out with a non-crippled DDR chipset for the P4? How do Intel's DDR P4 chipsets compare to non-intel DDR P4 chipsets? (ARE there any non-intel P4 chipsets?)

    How much of the problem is political, and how much of it is a real technical issue?

  • Thomas Pabst (whom we all respect) posted scathing reviews not only of Rambus the company but also of Rambus the technology. If he is recanting, he should do it in person, not through a couple of stoolies. By withdrawing such a controversial statement, Tom's site is calling into question both the technical and political validity of his write-ups.

    Don't get me wrong, THG rocks and I respect Tom's advice. He knows 10x more than I do about hardware. But he should explain why this review is so opposed to the ones he wrote himself...

  • by daviddennis ( 10926 ) <david@amazing.com> on Friday February 15, 2002 @11:32PM (#3016608) Homepage
    First, it looks to me like RDRAM is still about double the cost of SDRAM, according to Tom's Hardware's own price guide.

    They have $93 for 512mb SDRAM and $175-250 for 512mb RDRAM.

    My question is this: Let's say I have a choice between 512mb of SDRAM and 256mb of RDRAM. Would the SDRAM not almost always be faster because RAM, however slow, trumps swap space every time?

    In other words, isn't the amount of memory I have more important than how fast it is?

    Many moons ago, I had an SGI O2 workstation. Tremendous memory bandwidth, but memory that cost 10x more than anything else. As a result, it could be embarrassed by lesser machines, since I couldn't afford to load it up with RAM.

    I see Intel repeating the same mistake when it decided to focus on RDRAM.

    Apple is putting L3 cache in their G4s so that the use of expensive RAM is confined to a relatively small and affordable amount. I can upgrade my PC133-equipped G4/450 dual processor to the latest 1ghz dual processor, put my 1.5gb RAM in it, and fly. That seems like a good compromise to me, maybe better than going to DDR, which I would have to buy new.

    Thoughts?

    D
    • First, it looks to me like RDRAM is still about double the cost of SDRAM, according to Tom's Hardware's own price guide.
      They have $93 for 512mb SDRAM and $175-250 for 512mb RDRAM.



      Tom's price guide is not usually the place to find the lowest prices on hardware. But it's also not entirely fair to compare RDRAM prices with normal SDRAM prices, because of the performance difference and shrinking platform availability for non-DDR SDRAM.
      If you look at pricewatch.com, though, we can find some prices like these:
      Samsung 512MB RIMM for $156 + $9 shipping from some provider called 11cb. I simply picked this as the cheapest Samsung-labeled provider, since Samsung makes some of the best RDRAM. Keep in mind that for interleaved operation you'll actually be using two RIMMs, so you might instead want to compare 2x256MB, or simply look at 2x512MB for your other RAM platforms.

      For SDRAM and DDR SDRAM I'll just pick Crucial/Micron; while they won't be picked as the high-performance provider (people would be more apt to pick Mushkin or Corsair for performance), you'll see much less flakiness than with an unlabeled generic provider.

      (Shipping not mentioned with Crucial, check their site)

      Micron 512MB PC133 CL2.5 $139 + $10 shipping from "Alpha International Business Inc."

      Crucial 512MB PC133 CL3 $139
      Crucial 512MB PC2100 CL2.5 $152

      Now for the faster DDR I'll pick the lowest reputable name-brand item, since Micron/Crucial don't offer all speeds of DDR, currently.

      Corsair 512MB PC2400 CL2 $187 + $9.74 shipping from Googlegear.com

      Mushkin 512MB PC2700 CL2.5 $211 + $9 shipping from Mushkin


      Now, I don't intend for you to read too much into this, but provided you stay with "non-crap" providers of memory, the closer you come to the performance levels of RDRAM, the less you see a price difference in favor of SDRAM.


      My question is this: Let's say I have a choice between 512mb of SDRAM and 256mb of RDRAM. Would the SDRAM not almost always be faster because RAM, however slow, trumps swap space every time?

      In other words, isn't the amount of memory I have more important than how fast it is?



      If you aren't being limited by the amount of system memory, then no. Provided that for your application at hand you don't need more memory than you currently have available, swap access differences really aren't an issue. Does it matter that you have 1GB of memory if you only use a small portion of it for something other than disk cache, when compared to 512MB of memory with much more bandwidth?
      If you don't need or can't use the bandwidth, then of course it's not overly useful. Or if you need to access more data than you can realistically ever store in memory, then there will be a point where memory bandwidth is made moot by increased disk access. It's a matter of the application and the needs of your processor.

      The Pentium 4 sees very realistic gains from using RDRAM versus DDR memory, because of how it was designed.
      At one point Intel was being embarrassed by the absurd cost of RDRAM, but times have changed. It's continued to go down in price, and DDR and normal SDRAM have recently increased in price.
    • In other words, isn't the amount of memory I have more important than how fast it is?

      No. If your software's working set is smaller than the amount of physical memory, you are better off with the faster memory. You can create software workloads that make either configuration look better.
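      A minimal Python sketch of that trade-off, with made-up latencies and a very crude paging model, just to show how either configuration can be made to look better:

      def avg_access_ns(working_set_mb, ram_mb, ram_ns, disk_ns=8000000.0):
          if working_set_mb <= ram_mb:
              return ram_ns                                # everything fits: RAM speed wins
          miss = 1.0 - float(ram_mb) / working_set_mb      # crude fraction of accesses that page in
          return (1.0 - miss) * ram_ns + miss * disk_ns

      fast_small = {"ram_mb": 256, "ram_ns": 80}   # hypothetical smaller, faster RAM
      slow_large = {"ram_mb": 512, "ram_ns": 120}  # hypothetical larger, slower RAM

      for ws in (200, 400):
          print("working set %dMB: fast/small %.0f ns, slow/large %.0f ns"
                % (ws, avg_access_ns(ws, **fast_small), avg_access_ns(ws, **slow_large)))
      # A 200MB working set fits either way, so the faster RAM wins; at 400MB the
      # smaller configuration starts paging and loses by orders of magnitude.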

    • In other words, isn't the amount of memory I have more important than how fast it is?

      I think this is probably true.

      However, as you point out, the price difference between Rambus and SDRAM is now very small. According to Sharky Extreme [sharkyextreme.com], the difference between 512 MB of SDRAM and RDRAM is about $80, and DDR RAM (PC2100) is actually more expensive than RDRAM! So if you plan to put 2GB in your machine, SDRAM is appreciably cheaper; but if you plan to do that, you probably plan on some serious hardware as well, so you'll probably spend $3000+ (you probably couldn't get a motherboard that would take 2GB of SDRAM anyway...).

      My point is that both SDRAM, DDR RAM and RDRAM have come down in price dramatically in the past year (although memory prices seem to be on the rise again). The price difference is very small when compared to the total price of the machine, so why bother? I have nothing against DDR RAM, but it'll have to win on technical merit nowadays.

      As an example, I had to specify and buy a PC for my job some weeks ago. Now, this PC will be running a very specialized application, and nothing else. No CD burning, MPEG/MP3 encoding, no image processing, and no games. I like a cool machine as much as the next guy, but I simply could not justify putting more than 512 MB in this machine. Same for the hard drive, 40GB should be more than enough. A decent monitor was a requirement however. So why save $80 on memory when we're spending $700+ on the monitor alone?

      Summarizing: if 512 MB is enough for you, why bother? If it isn't enough, you'll likely spend a lot of dough anyway, so again, why bother?

  • I've noticed that a lot of Slashdot posters hate Rambus with a passion. Why is this? I have not really been paying attention to the history of Rambus vs. DDR, other than hearing things with "Rambus sucks" or something like that in them. Could someone post an article or explain the history of Rambus and why people hate it?
    • Historically, RDRAM has been plagued by cost, which has deterred its adoption, but that isn't why the average Slashdot reader dislikes it. They might claim they think it's technically inferior (PC600 and PC800 have more latency than SDRAM, but PC1066 and PC1200 RDRAM will likely be out within the next year); I think the large majority of the hatred of RDRAM comes from Intel's and Rambus's business practices.
      Intel and Rambus were hoping to strangle the market into adopting RDRAM in order to hurt Intel's competitors, and when this failed (RDRAM's prices led people to adopt PC133 and then DDR), they attempted to obtain royalties from, or sue, developers of alternative memory technologies for patent infringement of one form or another.
    • They've relied a lot on patents, with a few shady practices. Your average Slashdot reader doesn't know anything about RAM design, though, so they tend to just follow the loud criticizers...
    • Rambus is disliked because they committed fraud.

      Fortunately, in May 2001 a Virginia jury found Rambus liable for fraud.

      Unfortunately, the fine the jury imposed on Rambus ($3.5m) was reduced to a mere fraction of the original penalty ($350,000) by state laws capping punitive damages.

      A mere slap on the wrist for a company which acted so unethically.
  • by CMiYC ( 6473 ) on Friday February 15, 2002 @11:34PM (#3016612) Homepage
    I work for a test and measurement company, and we sell logic analyzer tools for both DDR and Rambus. I service at least one site of each of the major computer manufacturers, and I can tell you none of them is even considering Rambus. In fact, I can't remember the last time someone asked me about it. The only thing I have consulted customers on is the future of DDR. If anyone were interested in Rambus, I'd at least be hearing murmurs. Keep in mind I am only looking at computer manufacturers, not the third-party Asian motherboard manufacturers; who knows what they are doing.
  • Instead of focusing on benchmarks with overclocked CPUs squaring off against one another, why not pay attention to another article [tomshardware.com] from Tom's that shows benchmarks of the P4 2.0GHz operating at factory spec on a number of different P4 boards with different memory configs (P4 + i850 + RDRAM vs P4 + SiS645/VIA P4X266/VIA P4X266A/i845 + DDR)? As you can see from that article, at least on the pre-Northwood P4s, DDR did pretty well on the non-Intel chipsets, particularly on the SiS645 when PC2700 was used.
  • Tom's Credibility (Score:1, Interesting)

    by Anonymous Coward
    After Tom's Hardware printed that article where they glossed over the PS2 and GameCube as if they were useless (and all the information about those two consoles seemed to have come straight from Microsoft) and then devoted 12 pages to details about the Xbox, I'll never really believe what they write about products again. I'm not even really sure I believe all their AMD benchmarks anymore. They were clearly bought by MS for that, so they were probably bought by AMD for those other articles.
  • If you doubt these words, take a look at Compaq's Alpha EV7, with its Rambus controller ON CHIP.
    Just because PC architecture is limited by chipsets with limited memory bus bandwidth does not mean there are no other uses for such a memory architecture.

    Peter
  • To be honest, I really don't plan on buying another desktop system ever again. If needed, I'll build one out of spare/cheap parts. Why? Because I use my laptop 99% of the time these days.

    There will NEVER be a laptop with RIMMs in it, because they run too damn hot. Unless their design drastically changes in some unknown way, this "NEVER" is a fact.

    I think there are some DDR laptop solutions in the pipe now. Yet there is the problem of slower system bus speeds on laptops, so it won't matter much until that's fixed too.
