Technology

From Rambus to DDR: Memory Explained 62

rosewood sent us linkage to an article that explains memory and more. A fairly detailed story talking about RAM in general, as well as explaining Rambus, DDR (including 1.5 and 2). Well written and worth the read. And it even features lots of diagrams (although some of the tables seem to have been designed by someone who is color blind, using white text on very bright backgrounds. Why do people do that?) Anyway, highly recommended.
This discussion has been archived. No new comments can be posted.

From Rambus to DDR: Memory Explained

  • For completeness: there are 3 separate parts to the ArsTechnica story. Here's the entire list: part 1 [arstechnica.com] covers SRAM and DRAM basics, part 2 [arstechnica.com] covers asynchronous and synchronous DRAM, and part 3 [arstechnica.com] covers DDR DRAM and the RAMBUS thingie. It's good reading indeed.

  • If you are using something that requires a lot of bandwidth (crazy serving and, say, CAD) then Rambus is crazy fast. However, in the real world - as the article says - it's latency that is king. Btw - I had a lot of [H]s in my submission so all of us OCP'ers (you are out there) can rest assured I tried.
  • Nope, it's white on bright yellow using Mozilla.
  • I am partially color blind (red-green) and I cannot believe how many people design their websites with color rather than brightness. For this particular site you can see examples of this on pages three and four, where the web designer uses white text on a bright background in some tables. My monitor is set to 1600x1200 (19") and I cannot read the text without blowing it up (zooming in) or selecting the text (which produces a nice white-on-dark-blue selection).

    This site was obviously not designed by a color-blind person; it would have to have been designed by a person who 1) has their display set to a low resolution and 2) has no color blindness whatsoever.

    Good user interface design requires not only contrasting colors, but contrasting brightness (luminance ~= brightness, chrominance ~= color). Too many sites have a dark font (navy blue, brick red, etc.) on a black background, or a light font (pink, purple) on a white background. If you run a site such as this, PLEASE consider changing it (even a slight change can make a big difference!), or at least increasing the font size.

    -Adam
  • So from now on the /. crowd will be flaming Wintelbus instead?
  • look near the bottom of page 3 [hardocp.com] and top and bottom of page 4 [hardocp.com]

    DISCLAIMER: I clicked on preview, and the links worked there ...

  • Trust me, when I'm filtering 100 Meg Wave files I wouldn't really mind some extra bandwidth either...
  • by SethD ( 42522 )
    [trying to soak in this article]
    ERROR!
    Insufficient Memory this early in the morning!

    Someone needs to write an article on the effects of caffeine on memory...of course then all the charts would probably be horrible neon colors on black backgrounds! :/

    Gotta get some more coffee... =(
  • oh come on .... those colors are at least bright/dark .... the fact that they look like something an alien threw up after too many beers is another matter .... they're not bright/bright like in the article ....
  • > CPU clock rates have experienced an exponential growth, leaving the rest of the PC components behind

    I tend to disagree. Hard drive capacity has seen exponential growth too. And hard drive bandwidth too. And memory size. I had 4MHz computers with 64KB of RAM. Now I have an 800MHz machine (x200) with 512MB of RAM (x8192).

    My hard drive went from 20MB on a 25MHz machine to 40GB (x2000) on an 800MHz one (x32). And disk bandwidth went from 500KB/s in 1991 (with good, expensive SCSI hardware) to about 20MB/s on an ATAPI disk.

    Most components of a PC have had exponential growth. But agreed, there is a problem with memory latency. And display quality. And CPU count. And noise. And size. And heat (well, one could argue that heat had exponential growth too :-) )

    Cheers,

    --fred
  • by Anonymous Coward
    Ars Technica has a better RAM guide that explains these concepts and more. It has been up for a while now. Check it out [arstechnica.com]
  • by taniwha ( 70410 ) on Tuesday November 28, 2000 @07:40AM (#596502) Homepage Journal
    they are talking about 'bandwidth-per-pin' - us chip designers can pump bandwidth by using more pins (modulo packaging, noise and power issues), and by doing interleave (the graphics accelerators I was designing back then were, from memory, running 2-clock interleaved controllers at 50MHz so data was flowing at 25MHz [96-bits wide plus the VRAM fill hardware support gave us 1.5Gb/sec fill rates - not too shabby even by modern standards :-]).

    However another thing that may not be obvious - today's 133MHz DRAMs being used in PCs are top-of-the-line - back in 1989 the fastest DRAMs were only being used in high-end servers because of the price premium.

    (some background on why Rambus is good/bad in general) I've done designs with many of these technologies (traditional async ras/cas, sdram, rambus, not DDR) over the years - the older rambus designs were certainly harder to implement with (they used more of a network protocol paradigm) but not by much. The main thing about rambus is that at some level it trades off latency for bandwidth - there are some places where this is actually a good thing - display controllers for example.

    Rambus also is a win in places where lots of concurrent transactions are available - the finer grained banking allows parallel row senses - reducing average latency, even speculative row senses for CPUs doing speculative instructions. I believe this is the main reason Intel went for rambus - they are building CPUs that are highly parallel at the low level - and can issue many overlapping memory requests at once - but they screwed up - this would have been great if they were hooking the rambus channels directly to their CPUs - but instead they are making them over the slot1 bus which forces complete serialization losing any possible advantage - AMD's slot A would have been a better choice but these buses still do a very basic serialization that's going to make obtaining almost any concurrency at the RAM channel level difficult (which is why IMHO rambus on Intel hardware sucks).
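
    (A rough sketch of the pins-times-clock arithmetic being described; the channel configurations below are illustrative assumptions, not the poster's 1989 design or figures from the article.)

      # Peak bandwidth scales with data pins * clock * transfers per clock,
      # so you can trade bus width for clock rate (or add interleaved banks).
      def peak_bandwidth_gbs(data_pins, clock_mhz, transfers_per_clock=1):
          """Peak bandwidth in GB/s for a simple synchronous memory bus."""
          bits_per_second = data_pins * clock_mhz * 1e6 * transfers_per_clock
          return bits_per_second / 8 / 1e9

      # Narrow but very fast channel (Rambus-style: 16 pins, 400MHz, two transfers/clock):
      print(peak_bandwidth_gbs(16, 400, 2))   # ~1.6 GB/s
      # Wide but slower bus (PC133 SDRAM-style: 64 pins, 133MHz, one transfer/clock):
      print(peak_bandwidth_gbs(64, 133, 1))   # ~1.06 GB/s
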

  • Am I the only person on slashdot that sees "DDR" and instantly thinks of jumping around on arrows to bad Japanese pop music?

    I love that game

  • I also say nuc-u-lear all of the time, when I intend nuclear. I suffer from a kind of dyslexia. While retardation is too strong of a word, indeed, my disability is pathological in nature. While on the other hand, your colloquial grammar and lack of manners can only be attributed to ignorance and/or demeanor. Anyway, thanks for the tip ;)
  • Despite these other people siding with Rambus, I agree with you. I was okay with USB, but FireWire? Come on. 400Mbps is only worth it if there's fewer than three devices connected. If you go over that, performance is cut drastically due to the same reason as why you shouldn't connect ANYTHING off of the same USB port as a CD-RW drive: high-speed scalable serial connections (like USB and IEEE 1394) drop drastically when you stick more devices on. Meanwhile, you can have a chain of umpteen drives on Ultra-SCSI all running at the same speed: maximum speed. If money were no object, then it'd be no contest over which drive connection to choose when it comes to speed.

    The key Rambus argument is that the 16-bit bus at 800MHz will be faster. But when PC1600 and PC2100 64-bit DDR SDRAM (effectively 200MHz and 266MHz, respectively) came along, it was no contest: DDR wins. Add the latency due to the Rambus proprietary packet structure, and Rambus becomes even more the loser. And the "serial is more effective because there's fewer wires to cause loss" argument is moot; seriously, how far is it from the RAM sockets to the chipset? A maximum of 12 inches travelled along the line. You could set up a 256-bit RAM pipeline and not worry about signal loss!

    I've already begun a boycott of Rambus. I'm not buying a P4 system until they make the DDR chipset. I'm not buying a PS2. If there was a "sucks" site for Rambus, I'd visit often. However, it's not enough to boycott Rambus, but hopefully Intel will be able to pound them into the ground when those morons of the RAM industry try to litigate.

  • So that would be a chart with light lines/text on a fairly dark background with 3 pixel high text?

    Squint, until you can barely see anything.

    What you can still see is where the majority of the information is. I can see where the lines and the text are, but not the axes, which are too faint. However, I can see the existence of a scale at the side, so the lines aren't so important. The red lines are pretty dire, but there aren't many of those. On the whole I don't think the colours are as bad as they were made out to be.

    However, it _is_ a fairly unreadable graph from any distance, or with a poor-contrast monitor, or with unfavourable ambient conditions.

    So where do _you_ think the problem is? I'd say it's the 3 pixel high text myself.

    FP
  • On win32, there is a shareware program that purports to do just that, called MemTurbo [memturbo.com]. It also claims to recover leaked or unused memory from applications.

  • I'm wondering if we could improve bandwidth and latency by going back to banked memory, perhaps interleaved.

    You would definitely be able to increase bandwidth, as long as you kept signal paths clean (difficult, but not impossible). Latency gets iffy. In practice, the main benefit would be in SMP systems or SMT systems (simultaneous multithreading; two or more threads running concurrently on a chip, while sharing some or all of the pipeline hardware).

    The reason is that interleaved memory lets you have several outstanding cache row loads in progress from different parts of the memory system. A cache miss will still most likely stall the thread that missed, but as long as other threads or other processors are accessing memory, processing can continue. A non-banked/interleaved system would have to wait for the cache row to be transferred before servicing other requests.
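
    (A minimal sketch of low-order bank interleaving, the mechanism being described above; the cache-line size and bank count are assumptions for illustration, not figures from the article.)

      # With low-order interleaving, consecutive cache lines land in different
      # banks, so several outstanding cache-line fills can have their slow row
      # accesses in flight at once instead of queueing behind each other.
      CACHE_LINE = 64   # bytes (assumed)
      NUM_BANKS = 4     # assumed interleave factor

      def bank_of(address):
          """Bank that services this address under low-order interleaving."""
          return (address // CACHE_LINE) % NUM_BANKS

      # Four back-to-back cache-line misses hit four different banks:
      for addr in range(0, 4 * CACHE_LINE, CACHE_LINE):
          print(hex(addr), "-> bank", bank_of(addr))
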
  • One of the points mentioned is that Rambus memory is poor at random access, but good for sequential access. This actually makes Rambus memory good for CRT refresh applications, since a CRT buffer DOES access sequential locations as the screen is refreshed. I wonder if any graphics card designers are using Rambus memories in their units.
  • As the article states, random accesses prove to be the largest problem, since new pages need to be opened, and the cache has to be emptied. Does there exist (and if not, is it even possible to write) a program that manages memory pages in a packing manner, similar to a defrag program for hard drives? Something that would note which addresses are generally requested in sequence, and move those memory locations onto a single page, to prevent cache misses? Seems like a simple idea, but I've not seen it before.
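
    (A toy sketch of what such a "memory defragger" might track; this is purely hypothetical -- no such tool is named in the article -- and the page size and access trace are made up for illustration.)

      from collections import Counter

      PAGE = 4096  # assumed page size in bytes

      def page_of(addr):
          return addr // PAGE

      def adjacent_page_pairs(access_trace):
          """Count how often an access to one page is immediately followed by
          an access to a different page."""
          pairs = Counter()
          for a, b in zip(access_trace, access_trace[1:]):
              pa, pb = page_of(a), page_of(b)
              if pa != pb:
                  pairs[(pa, pb)] += 1
          return pairs

      # Page pairs that show up together most often are candidates for
      # "packing" onto the same physical page to cut down on new row opens.
      trace = [0x1000, 0x9008, 0x1010, 0x9020, 0x2000]
      print(adjacent_page_pairs(trace).most_common(3))
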
  • .......the reason why RDRAM is a lot dearer than SDRAM or DDR-SDRAM is because of quite a few factors. First, it involves a completely different fab setup, while fab changes were minuscule for SDRAM and then DDR-SDRAM too. This means a huge investment in plant for the memory makers, which has to be passed on. Also, because of the way RDRAM works (it runs very hot and fast, so heat-spreaders-cum-EMI-shields are needed), the individual RAM chips cannot be fully tested till they are put on modules, so if one is fucked, the whole module is fucked and they are all wasted (whereas SDRAM and DDR-SDRAM chips can be fully tested before they are put on modules). That is one of the many reasons why RDRAM yields will always be very low, making RDRAM an expensive option. Well, that's the way it was explained to me. This problem apparently does not exist as far as Rambus memory chips in consoles are concerned, because they are set up differently.
  • Thanks, this answer is very helpful.
  • >>>1. No matter HOW they try to spin it with Perot-Esque charts, current RAMBUS designs will never have the potential speed of DDR, and probably never will. Pumping data really really fast 16-bits at a time just simply can't compete with pumping data not quite as fast (yet) 64-bits at a time. RAMBUS is like figuring out a way to run a `286 really really REALLY insanely fast, but it's still only 16-bits.

    That's ridiculous. With 16 data pins, PC800 RDRAM produces (800MHz * 2 bytes =) 1.6GB/s of bandwidth. At 133MHz, with 64 data pins, SDRAM produces (133MHz * 8 bytes =) 1.064GB/s of bandwidth. Now, if you give both the same number of data pins (64), then you have 4 channels of RDRAM for 6.4GB/s compared to one bus of SDRAM for 1.064GB/s. If you look at all the pins necessary to make the interfaces work, and give the same amount of pins to both, then you are still looking at a 3:1 ratio of bandwidth for RDRAM vs. SDRAM.

    >>>Because of these limitations, RAMBUS has been proven time and again to be slower than even CURRENT PC-133 SDRAM. Even Intel unwittingly published proof of this. DDR vs RDRAM shouldn't even be a contest.

    Where has this been proven 'time and time again'? All you've seen is Intel's butchering of Rambus's protocol. They used two channels of RDRAM, but they piped both of them through a front side bus that had lower bandwidth than a single Rambus channel. Please don't measure the technology based on Intel's crappy implementation. Maybe their Willamette stuff is better, but I haven't seen the numbers yet. When we see the Spec2k numbers for a system with a decent RDRAM implementation vs. one with SDRAM, then we'll have our answers.

    >>>2. RAMBUS is and always will be more expensive to produce than DDR or other types of SDRAM. This is why RAMBUS is trying to slap the "RAMBUS tax" on all competing memory, to artificially raise prices, and gouge the customer.

    Why is RDRAM always going to be more expensive than SDRAM or DDR? It's the exact same fabrication technology. Rambus needs a little extra die area, which accounts for the expense difference. As the memories get larger, this die space will become a smaller percentage of the overall die. The only thing that is really more expensive to make (once the market matures) is the hardware, traces, and connectors needed to operate at 800MHz+ data speeds--not the RAM itself.

    As for the rest of your stuff--most of it is probably right. I don't know how good Rambus's patent claims are--clearly they aren't frivolous, but I still think it is a crummy way to do business. I'm not going to defend their business practices, because I think they are pretty low--but the technology is sound.
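
    (The bandwidth arithmetic from the post, spelled out; these are peak figures only and say nothing about latency.)

      def peak_gbs(data_pins, effective_mhz):
          # bytes per transfer * millions of transfers per second -> GB/s
          return data_pins / 8 * effective_mhz / 1000

      print(peak_gbs(16, 800))      # one PC800 RDRAM channel:        1.6 GB/s
      print(peak_gbs(64, 133))      # one PC133 SDRAM bus:            ~1.064 GB/s
      print(4 * peak_gbs(16, 800))  # four RDRAM channels (64 pins):  6.4 GB/s
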

  • I'm not talking about the graph, but the tables .... they are white text on insanely bright background ....
  • Have a look at Vischeck and see what your site looks like to those with (among other things) red/green colour deficit.

    This sounded like a really neat service, so I went there immediately to check my own website. When the results came back, I saw stark black text on a plain white background.

    That's when I remembered that I don't keep any color information in my HTML pages themselves. I keep it locked away nice and tidy in a CSS file on my server. I guess Vischeck [vischeck.com] doesn't read style sheets.

    Oh well. Now I'm going to have to find a colorblind person and pay him to describe my own website to me.

  • Now where did I put my new 30 meg harddrive?

    Perhaps someone mistook it for a washing machine and hauled it off! ... they needed an entry for the latest washing machine races!

  • A dual channel DDR has 4x the number of data pins and traces as a dual channel Rambus. Why is everyone so unwilling to compare apples with apples? Try 4-8 channels of RDRAM vs. 2 channels of DDR, and see how your math comes out. God, this forum is ridiculous.
  • You're right. I turn off style sheets due to fuckwits like them. I assume that means I don't follow any "style=" hints at all, and that's why the tables are perfectly readable for me.
    Sorry for the confusion.

    Hmmm, it still ain't a "colour-blind" mistake.

    FP.
  • I think that this would be a good idea. We would have to break it up into different categories.

    One time I tried installing 2 video cards into my system so I could have two monitors and see more of my desktop, 2 views in games, and maybe a little more gaming power with the two. It didn't work. But running Windows 98, I didn't know how it would have responded to it. I thought that it would have totally messed with my config. I called Creative (the maker of both cards) and they didn't have a clue what I was talking about. A board like that would have been nice. I am sure that I am not the only one who has tried that!

  • by Anonymous Coward
    All you young whipper-snappers with your fancy, shmancy DDR, SDRAM, Super Fast ECC EDO Fastpage nano RAM. Back in my day, we didn't have those confusing choices, we had 1 meg SIMMs that cost $450 each and we liked it! You spoiled young-uns have too many choices...ow, my head hurts. Now where did I put my new 30 meg harddrive?

    Cool site>>> The Linux Pimp [thelinuxpimp.com]

  • ...is a hardware weblog. Everything that everyone does to their machines -- post it! Good, bad, strange things stuck in cooling fans... all of it. At the very least for learning, and at the most abstract so that we have documentation of all the screwed-up-ness that is modern-day computing.

  • DDR? Boring!

    Here is a really interesting link [lostcircuits.com] about a possible successor of DDR: EDDR.

    Warning! It is only for those who are really interested in memory, as it goes into quite a lot of detail.

    To summarise for everyone, the EDDR memory is trying to reduce latency by adding some SRAM next to the memory.
  • by Anonymous Coward
    1 Meg SIMMs? We had punch cards, and if you wanted to save a game of Wolfenstein, you had to punch it yourself, with a pin. None of your chad removers here.
  • I saw no tables with white text on very bright backgrounds. I read all 5 pages.

    As a red-green anomalous dichromat (that's colour-blind to you), who has a background in user-interface design (including presentation technologies), I can assure you that in general the easiest-to-read presentations come from the colour-blind. This is due to a heavier reliance on using luminance to achieve contrast rather than chrominance. (To me chrominance doesn't provide much contrast, so it's obvious I would use luminance instead...)
    Careful what insults you throw around: 13% of the male population have some form of colour deficiency (and 13% of females are carriers...).

    FatPhil
  • I would think that a colorblind person (like me) would be prone to add more contrast to a diagram, not less. I know that I have more trouble seeing minor color changes (bright color to white) than I have seeing major changes (white on black, or white on blue, green on black, etc.).

    So, enough with the colorblind references already ;-).
  • Gee, that's funny, I thought Samsung (20.7% of the memory market), NEC, Hitachi, Elpida (the NEC/Hitachi successor - 13.6% of the memory market), Oki and several other smaller (less than 10% of the market) companies had all agreed to pay royalties for RAMBUS's intellectual property (all types of DDR RAM, including RDRAM and DDR SDRAM).

    As far as downsizing and restructuring, RAMBUS is making money hand over fist now (doubling earnings estimates last quarter - that's earnings - making money, not shrinking their loss like many OS companies we know and love).

    RAMBUS's legal fees are now coming straight out of profits - there is no chance they will "run out of money before the cases settle", and legal fees, with three major ongoing cases, are less than 10% of earnings (notice that's earnings, not revenues).

    Just some information to enlighten all the anti-RAMBUS posts.
  • Here are some things that RAMBUS is still in DE-Nile about (and no, not the river in Egypt):

    1. No matter HOW they try to spin it with Perot-Esque charts, current RAMBUS designs will never have the potential speed of DDR, and probably never will. Pumping data really really fast 16-bits at a time just simply can't compete with pumping data not quite as fast (yet) 64-bits at a time. RAMBUS is like figuring out a way to run a `286 really really REALLY insanely fast, but it's still only 16-bits.

    Because of these limitations, RAMBUS has been proven time and again to be slower than even CURRENT PC-133 SDRAM. Even Intel unwittingly published proof of this. DDR vs RDRAM shouldn't even be a contest.

    2. RAMBUS is and always will be more expensive to produce than DDR or other types of SDRAM. This is why RAMBUS is trying to slap the "RAMBUS tax" on all competing memory, to artificially raise prices, and gouge the customer.

    3. RAMBUS's policy of suing EVERYBODY for royalties on technology that RAMBUS had no part in developing has left them completely friendless. They are a pariah in the market. Not even Microsoft has (yet) done what RAMBUS has done to their partners.

    4. RAMBUS's claims on their SDRAM patents are very thin. For one thing, they entered into a legal agreement with the JEDEC group that was developing SDRAM to disclose any relevant patents or patent applications to the group, and did not do so. JEDEC was formed to create an open INDUSTRY STANDARD, not to use one company's proprietary technology. Now that they've finally pissed off the biggest kid on the block (Micron), the validity of their spurious patent claims may finally be heard in a court.

    5. RAMBUS's threats to sue VIA, AMD, Intel, and anyone else who even THOUGHT of making a chipset that allows a CPU to talk to memory are probably the last straw... Even IF RAMBUS somehow prevailed in court on SDRAM and SDRAM chipsets, these companies (and the memory companies like Micron) WILL come up with a future memory technology that limits RAMBUS's dominance to at best a few years.

    6. RAMBUS is a company that earns all revenues from dubious IP, royalties and lawsuits. This is not a sound business model. And back once upon a time when there were engineers on the RAMBUS payroll instead of lawyers, they should have hired BETTER ones to come up with better memory.

    Just my thoughts. Check out The Register (http://www.theregister.co.uk) and Tom's Hardware (http://www.tomshardware.com) for articles backing up most of what I stated.
  • Death to RDRAM - too expensive!!
  • Have a look at Vischeck [vischeck.com] and see what your site looks like to those with (among other things) red/green colour deficit.
  • After a little research and the initial help of tak amalak, here is a little cost analysis. Assuming a dual channel DDR setup quadrupled the price of the motherboard because of additional layers/traces, and assuming you needed at least 512MB of memory, a dual channel DDR setup would still be cheaper than a dual channel RDRAM one.

    Abit KT7-RAID @ $150 * 4 = $600 ($600 motherboard?!?!)

    512MB PC2100 DDR RAM: $450

    Dual channel DDR total: $1050

    P4 dual channel Rambus MB: $200-$300

    512MB Rambus RAM: $900

    Total Rambus setup: $1100-$1200

    Dual channel Rambus = 3.2 GB/s

    Dual channel DDR = 4.2 GB/s

    Conclusion: dual channel DDR for server (512MB+) apps is a superior and cheaper solution.
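
    (Re-running the poster's numbers; the prices are his late-2000 estimates and the bandwidth figures are peak ratings, not measurements.)

      ddr_board   = 150 * 4   # KT7-class board, with his assumed 4x price for dual-channel traces
      ddr_ram     = 450       # 512MB PC2100 DDR
      rdram_board = 250       # midpoint of the $200-$300 P4 board estimate
      rdram_ram   = 900       # 512MB RDRAM

      print("dual channel DDR:   $", ddr_board + ddr_ram)       # $1050
      print("dual channel RDRAM: $", rdram_board + rdram_ram)   # $1150
      print("peak bandwidth: DDR 2 x 2.1 = 4.2 GB/s, RDRAM 2 x 1.6 = 3.2 GB/s")
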

  • The white text probably means the author never bothered to preview the tables on any browser other than IE. It's probably the result of bad HTML grammar or the use of non-standard table properties which lets the text color of the rest of the page seep into the tables. Do Mozilla and NS 6 show it as intended (black text)?
  • It's attempting to show how bandwidth, as measured by several benchmarks, correlates to real-world performance, as measured by Expendable.

    Interesting how games have become some of the most quoted benchmarks in recent years.
  • This site was obviously not designed by a color-blind person, it would have to have been designed by a person who 1) has their display set to a low resolution and 2) has no color blindness whatsoever.

    Or 3) Only checked how the page looked in Internet Explorer, where the tables in question have black text.

  • The chart on the first page of the article says that the memory bus increased only 4X from 1989 to 2000. I have to disagree. The article says that the FP SIMMs on 486s ran at 16MHz. Those SIMMs were either 8-bit SIMMs run in banks of 4, or 32-bit SIMMs. Today's DIMMs do 64 bits at 133MHz. So that would be 16 times faster, or 32 times if you count DDR. That's approximately equal to the increase in processor speed.

    The whole point of the article, that RAM latencies have not kept up, is still valid. Although even the latencies have improved 8X. Remember, another reason that we don't have higher-bandwidth memory is that it is hard to make motherboards and CPU interfaces that can handle higher clock frequencies.

    I'm wondering if we could improve bandwidth and latency by going back to banked memory, perhaps interleaved.
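
    (A quick check of the ratio being argued; the bus figures are the ones quoted in the post, not independently verified.)

      old_bus = 32 * 16e6    # 32-bit FP SIMM bank at 16MHz, in bits/s
      new_bus = 64 * 133e6   # 64-bit PC133 DIMM, in bits/s

      print(new_bus / old_bus)       # ~16.6x
      print(2 * new_bus / old_bus)   # ~33x if you count DDR's two transfers per clock
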
  • This article [tech-report.com] on Tech Report addresses DDR with the new Athlon 760 chipset.

    The hardocp article doesn't fully address overall system architecture. Although the article was interesting in its broad coverage of the memory latency/bandwidth bottleneck, I am wary of articles that don't use entire system architectures in their performance reviews.

  • by dpilot ( 134227 ) on Tuesday November 28, 2000 @06:17AM (#596536) Homepage Journal
    This and the other articles didn't really get into some of the fundamental [S][DDR-S][R][ES]DRAM limits in terms of latency, and why this is just plain a losing battle.

    At the cell level, DRAMs work by charge transfer. (That part was covered, IIRC) To write, you push some charge into the cell. To read, you share that charge and let it disturb voltages in your sensing system, and then evaluate it to a 0 or 1. If that sounds fuzzy, it's because it really is.

    Anyway, there is a transistor used as a switch to get the charge in and out of the cell. It has to be a pretty darned good switch, especially in the 'off' position. We're about to the point where we can count the electrons that tell the difference between a 0 and a 1 - somewhere around 40,000. ANY leakage at all in that transistor HURTS.

    Therefore, that transistor has to be optimized for leakage, and speed has to take a back seat. It simply takes TIME for those 40,000 electrons to get in and out of the cell. Oh, don't forget that this whole structure is optimized for size too, and there isn't any significant room to play around.

    As we keep cramming more and more bits onto chips, the transistor (the T in the 1T cell) keeps getting smaller, and even with scaling, just can't get significantly faster. This aspect of performance just plain didn't show up, in the old days, because every other part swamped it out. We're now in the era where it shows, and causes pain. In the future, it will begin to dominate performance.

    Everything else is about moving bits around, getting them to and from the cells. That stuff is where most of the improvements have been happening. (There have also been improvements in wordlines and sensing systems, to give them credit, too.)

    My personal favorite is a large-ish L3 cache on the Northbridge. There have been some indications of this happening, already. Furthermore, on the Northbridge, you can differentiate between streaming data and random data, so streams don't flush the cache. You can't do that with cache in the DRAM.
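
    (A back-of-the-envelope figure for the charge being counted above; the 40,000-electron number is the poster's, the rest is just unit conversion plus an assumed leakage current for illustration.)

      ELECTRON_CHARGE = 1.602e-19        # coulombs
      cell_charge = 40_000 * ELECTRON_CHARGE
      print(cell_charge)                 # ~6.4e-15 C, about 6.4 femtocoulombs

      # Even a (hypothetical) 1pA leak through the access transistor drains
      # that charge in a few milliseconds -- which is why the transistor is
      # tuned for low leakage rather than speed, and why DRAM needs refresh.
      leak_amps = 1e-12
      print(cell_charge / leak_amps)     # ~0.0064 s
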
  • This post is intended to be a useful programmer's guide to dealing with random access memory.

    Now as we all know, random access memory (or RAM as it's more commonly known) is memory that is NON-PERSISTENT. In other words, when you cut off power, whatever data was contained in that RAM is gone.

    There is another type of memory, called ROM, PROM, or EPROM (Erasable Programmable Read Only Memory), but we won't deal with that here. The name is self explanatory enough, really.

    There's one thing that sets good RAM apart from normal or low-quality RAM. Nowadays, you'll be hard pressed to find single input module memory RAM (SIMM). But that isn't the thing - in this day and age, that sets good RAM apart from bad RAM.

    Good RAM has built-in cyclic parity checkers. Not only can this help to prevent incorrect binary descriptors, but it also allows one to redigitize random access memory compatible dual input values - saving many hours of programming time. Most RAM these days has this, but there are a lot of "cheap" DIMMs out there which don't support it. Be wary of them.

    Thanks, that's about all I have to say about RAM. I hope that you have found this useful.

  • I see a chart with cells in mint green or yellow, with white text. The source shows cells with background colors of "CCFFCC" and "FFFF99", with the overall page text as "FFFFFF".
  • 1. No matter HOW they try to spin it with Perot-Esque charts, current RAMBUS designs will never have the potential speed of DDR, and probably never will. Pumping data really really fast 16-bits at a time just simply can't compete with pumping data not quite as fast (yet) 64-bits at a time. RAMBUS is like figuring out a way to run a `286 really really REALLY insanely fast, but it's still only 16-bits.

    Actually, we are starting to find that serial buses can be made faster than parallel buses. Look at USB replacing parallel ports for printers and scanners. Look at the upcoming IDE specs -- they're moving to serial. I believe Fibre Channel uses the SCSI command set on a serial bus, and future SCSI interfaces will also be serial.

    The fact is that it is often actually easier to pump 1 bit at a super-fast rate than to try to synchronize 64 bits at a fast rate. Think about it -- which would be easier to run at 5 GHz, a CPU the complexity of a 286, or one the complexity of a Pentium IV? Also consider the money saved by having to run fewer data lines. Just because Rambus was incompetent does not mean that the technology is necessarily bad.

  • RTFA!!(Read the fscking article)

    (OK, I only read the first three pages of it, but still :)
    I believe the author is talking about the same thing, only he calls it ESDRAM ...

  • I wonder if it would be possible to make a DIMM out of SRAM exclusively and make it DDR capable. Yes, SRAM has more transistors and therefore takes up more die space, but with smaller processes and the low prices for conventional PC100 lately I think it's feasible. Let's say that it doubles or even triples the price per MB; so what, it's worth the extra performance! PC100 at 128 megs was down to 55 dollars last I checked; I would happily pay double or triple that price to have an equivalent DDR SRAM solution. AFAIK SRAM runs at speeds of up to 400MHz; couple that with DDR in a DIMM configuration and you have 800MHz 64-bit memory! Add this to a system bus that could run synchronously with this fast memory and you would have monstrous performance gains. Back to my original question: why or why not?
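
    (The arithmetic behind the "800MHz 64-bit memory" figure above; peak numbers only, and the 400MHz SRAM clock is the poster's assumption, not a datasheet value.)

      sram_clock_mhz      = 400   # assumed SRAM clock
      bus_width_bits      = 64
      transfers_per_clock = 2     # DDR signalling

      effective_rate = sram_clock_mhz * transfers_per_clock    # 800 MT/s
      peak_gbs = bus_width_bits / 8 * effective_rate / 1000    # 6.4 GB/s
      print(effective_rate, "MT/s ->", peak_gbs, "GB/s")
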
  • You make some interesting points: especially the point about memory support for concurrency is interesting. It looks to me like better exploitation of concurrency is really the only way to deal with the CPU / motherboard bus bottleneck.

    I have only an incomplete understanding of how Rambus works. Is it fair to say that the Rambus system carries a big latency penalty compared to prevalent memory technologies, but it is a model in which increases in bandwidth can be designed in without incurring further costs in latency (i.e. we have bandwidth scalability)? Is this the strength that many designers have seen in Rambus?

  • I should have checked it out with IE before posting. I don't have the time, but I wonder what the HTML code is that makes IE use black and Netscape default to white... Does it follow an HTML standard and Netscape fails, or is it bad HTML?

    -Adam
  • Apparently, you didn't read the article because it does mention EDDR.
  • How? I can just imagine Grove walking up to my door peddling P3/4s and RDRAM. Sure. I'm building my own Duron box... exactly how is Intel going to crush me? Your call...
  • Don't be, check out these guys.. [talou.ca]
  • Is it fair to say that the Rambus system carries a big latency penalty compared to prevalent memory technologies, but it is a model in which increases in bandwidth can be designed in without incurring further costs in latency (ie. we have bandwidth scaleability)?

    It's not that big - the thing to realize is that because the outgoing address bus is multiplexed (at a ~GHz rate) you're wasting a clock or so compared with other drams to send enough of the address to start a RAS cycle in the DRAM (this used to be more like 4 clocks on older versions of Rambus - the current rdrams are basically V3 of their protocol). Because the data comes back multiplexed (16 bits at a time) the latency to first data is about the same as for a traditional wider type of dram, but the time to last data (important if you then need to send the data over a channel that can't be flow controlled - or to a CPU whose cache system can't use partial cache lines) is longer.

    Now this absolute latency is important for a system where every transaction is received by an idle memory controller - but if you have a transaction arriving while the drams are already busy, and you can start multiple transactions in parallel, mean latency may actually be lower (most modern drams can do one row sense while doing a data transfer - rdrams tend to have a finer granularity allowing more concurrency, as well as a protocol that allows things like speculative row senses and out-of-order data transfers).

    Mind you, the down side of designing a memory controller that supports this amount of concurrency is that it's HARD - really HARD, and hard to test - normally you just manage a queue of pending transactions in order - managing an out-of-order memory queue is difficult - it probably makes sense to integrate all this stuff closely with a CPU - into the out-of-order part of the CPU's memory controller.

    Is this the strength that many designers have seen in Rambus?

    I think it depends on what you use it for - the systems I've used it for have been more bandwidth sensitive (and pin sensitive) - basically they sort of fit a sweet spot as VRAMs stopped being useful - but cost has always been an issue, meaning that using them was always an iffy choice - doing chip designs usually means betting on supporting chips like DRAMs being available at a particular price a year in the future - choices are hard.

    Caveat - I've done designs with the previous 2 generations of rdrams, not the latest 'direct' ones which IMHO (from reading the data sheets, not experience) are simpler to use.
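
    (A rough model of the "time to first data" vs "time to last data" point above; every number here -- the 40ns core access, the transfer rates, the address overhead -- is a made-up illustrative value, not from any datasheet.)

      CACHE_LINE = 64     # bytes
      CORE_NS    = 40.0   # assumed row+column access time, same for both cases

      def first_and_last_data_ns(bytes_per_transfer, transfers_per_ns, addr_overhead_ns):
          first = CORE_NS + addr_overhead_ns + 1.0 / transfers_per_ns
          last  = CORE_NS + addr_overhead_ns + (CACHE_LINE / bytes_per_transfer) / transfers_per_ns
          return first, last

      # 16-bit Rambus-style channel at 800 MT/s, ~1.25ns lost multiplexing the address:
      print(first_and_last_data_ns(2, 0.8, 1.25))    # first ~42.5ns, last ~81.3ns
      # 64-bit DDR-style bus at 266 MT/s, address sent in parallel:
      print(first_and_last_data_ns(8, 0.266, 0.0))   # first ~43.8ns, last ~70.1ns
      # First-data latency is comparable; last-data is noticeably longer on the
      # narrow channel, which is the tradeoff described above.
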

  • Rambus is an example of a design tradeoff. They sacrificed data pins for the ability to ramp up the clock speed. It's not at all an unfair comparison to look at a Dual Channel DDR vs. Dual Channel Rambus. Your 4-8 Channels of RDRAM is certainly not a better comparison. Besides that, Intel will probably never make such a monstrosity, because the cost of the controller would be hideous. The superior technology is the one that gets the most done for the least cost, at the highest quality. Dual channel DDR gets more done than Dual Rambus, costs less, by the OP's competent analysis, and is of a higher quality, in that the latency is far, far less.

    The natalie portman, hot grits, and goatse.cx trolls are ridiculous, but the above post is a perfect example of high quality signal rising above the noise.
  • Dude: if I wanted a 16-bit memory bus I'd dig up a 286. Technical superiority?

    DDR companies paying royalties? Rambus is already dying. These lawsuits are their last gasp.
  • Shouldn't the industry have kicked their ass around a bit now for all their strong-arm tactics?

    Yeah, but most companies don't just die overnight. They restructure and downsize, downsize and restructure, and eventually get bought out.

    Besides, they are probably still looking for someone dumb enough to pay them...
  • by Fatal0E ( 230910 ) on Tuesday November 28, 2000 @04:49AM (#596551)
    Ars [arstechnica.com] also has a great write-up on the (pin) ins and outs of memory. Only they started at the very beginning with SRAM and stuff. They did a really great job of not only explaining the (physical) layout of memory but the theory behind every step and the technical innovations too. A lot of it was way over my head but I liked reading it anyways...
    RAM Guide: Part I DRAM and SRAM Basics [arstechnica.com]

    And one other thing...
    And it even features lots of diagrams (although some of the tables seem to have been designed by someone who is color blind, using white text on very bright backgrounds.
    Check out this graph... [hardocp.com] I have no idea what it's explaining but it's really spiffy and colorful!
    "Me Ted"
  • Perhaps I didn't read the article right (you may fire when ready if so), but they seemed to have paid no attention to a proven tech that does get a lot better performance: interleaving. I've seen little, if not nothing, on this technology, which has proven through many years of use that it can significantly increase memory performance. Anyone know if any of the major chipset players are planning interleaved solutions for the PIII, or my current tech processor of choice, the Athlon?
