AMD Technology

AMD to debut multi-core CPUs in 2005 341

Scrooge919 writes "An article on ZDNet discusses AMD's plan for the successor to Opteron -- the K9. The biggest feature will be that it contains multiple cores. The K9 is currently slated for the second half of 2005, which would be less than 3 years after the Opteron shipped."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Comment removed based on user account deletion
  • It's about time.
  • ...Since someone is bound to say the K9 will be a real dog.
  • I am glad to see chipmakers getting off their asses and making progress finally.

    I guess they are ramping up because the slump in corporate tech spending has to turn around soon, given that all the computers people bought a few years ago now need to be replaced. Whoever comes out ahead here is bound to make a lot of money.

    • I am glad to see chipmakers getting off their asses and making progress finally.

      "Finally"? They've been making steady improvements over the past twenty years. Just over the past ten years:

      - CPU frequencies have increased by 30 times.
      - Memory bandwidths have increased by 24 times.
      - CPU complexity (transistor counts) has increased by a comparable factor.

      There was a time when x86 processors were the laggards of the computing world. Now, if you compare the computing power of a modern x86 processor to competing processors
  • ... oh nevermind. Several beat me to the punch already.
  • I don't like to see that K in there in the AMD chip name. It reminds me of my old, underperforming processor, and makes me sad.
  • Multiple cores? (Score:3, Insightful)

    by adaknight ( 553954 ) <lee AT gnat DOT com> on Thursday October 16, 2003 @04:07PM (#7232989) Homepage
    So, now we'll have multiple processing units and pipelines in each core, and multiple cores. The biggest question in my head is how much of a limitation memory bandwidth will be. I just don't see how you can supply data and instructions fast enough to, say, three 3 GHz cores running on the same chip unless you have close to a thousand pins on the chip. The other question would be about cooling. :)
    • As for memory bandwidth, the nature of the Opteron already solves that: each core has its own memory controller, so for each core you add, you're adding more memory bandwidth.

      As for a thousand pins on the chip, I believe the Opterons are already near that. If you add in the extra pins for the memory controllers, you're probably looking at a minimum of 1500.
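
      Quick back-of-envelope on the parent's three-3-GHz-core worry, in C (all the numbers here are illustrative guesses, not AMD specs):

        #include <stdio.h>

        /* Back-of-envelope: what three hypothetical 3 GHz cores could demand
         * versus what one dual-channel DDR400 bus can supply. */
        int main(void)
        {
            double cores       = 3.0;
            double clock_ghz   = 3.0;  /* per-core clock (hypothetical) */
            double bytes_cycle = 8.0;  /* worst case: one 64-bit access per cycle */

            double demand = cores * clock_ghz * bytes_cycle; /* GB/s, peak */
            double supply = 6.4;       /* dual-channel DDR400: ~6.4 GB/s */

            printf("peak demand : %5.1f GB/s\n", demand);    /* 72 GB/s */
            printf("bus supply  : %5.1f GB/s\n", supply);
            printf("shortfall   : %4.0fx\n", demand / supply);
            return 0;
        }

      Even if real code only demands a tenth of that worst case, you can see why per-core controllers and big caches matter.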

      steve
      • Honestly now, it's been a very long time since data could be bussed to the CPU fast enough to take full advantage of the chip's speed. Chipmakers spend so much time convincing us that we need these insanely fast processors, when in reality, a large portion of the chip's cycles go wasted because the data simply can't get to the chip that fast. I have two main machines -- an Intel Celeron 400 and an AMD Athlon 2400+ (so like, 1997 MHz). In theory, my Athlon should be five times faster than my Celeron -- in p
      • Of course, if you do rendering, you will notice a slight bit more than 2x difference (or even 5x.)

        Some of us need the power.

      • since data could be bussed to the CPU fast enough

        Got that right.

        The whole design of systems is going in the direction where main memory will be considered as slow as disks once were.

        It will be considered as much a sin to miss the L2 cache as it is to swap.

        The chips that will be speed kings will be the ones that can afford huge fast caches.
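
        If you want to feel that sin on your own box, here's a toy C demo -- the array size is arbitrary, and the point is the gap between the two timings, not the absolute numbers:

          #include <stdio.h>
          #include <time.h>

          #define N 2048                   /* 2048x2048 doubles = 32 MB, well past L2 */

          static double a[N][N];

          int main(void)
          {
              double sum = 0.0;
              clock_t t0 = clock();
              for (int i = 0; i < N; i++)  /* row-major: walks cache lines in order */
                  for (int j = 0; j < N; j++)
                      sum += a[i][j];
              clock_t t1 = clock();
              for (int j = 0; j < N; j++)  /* column-major: nearly every access misses */
                  for (int i = 0; i < N; i++)
                      sum += a[i][j];
              clock_t t2 = clock();
              printf("row-major    %.2fs\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
              printf("column-major %.2fs  (sum=%g)\n",
                     (double)(t2 - t1) / CLOCKS_PER_SEC, sum);
              return 0;
          }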

    • Well, that's what L1 and L2 (and L3 if you have it) are for, to minimize the slowdown from RAM.
    • There are two things that AMD is working towards in this chip. The first is multiple cores, while the second is simultaneous multithreading (SMT, the same technique as Intel's Hyperthreading). The first may increase the bandwidth needs of the chip, but the latter is actually designed to reduce them, in a manner of speaking.

      SMT allows one thread to do a bit of processing for a while, until it runs out of data. It requests data from memory and then goes off to lala-land for a little while, while the other thread takes over.
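
      If anyone wants to experiment, here's a sketch (in C with pthreads) of the kind of stall-heavy workload SMT is supposed to overlap -- sizes and any speedup depend entirely on your hardware:

        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>

        #define NODES (1 << 20)  /* ~64 MB of 64-byte nodes: far bigger than any L2 */

        struct node { struct node *next; long pad[7]; };
        static struct node pool[NODES];

        /* Each hop depends on the previous load, so the core mostly sits
         * stalled on memory -- exactly the gap SMT is meant to fill. */
        static void *chase(void *start)
        {
            struct node *p = start;
            for (long i = 0; i < 4L * NODES; i++)
                p = p->next;
            return p;
        }

        int main(void)
        {
            /* Link the nodes in shuffled order so prefetching can't help. */
            long *perm = malloc(NODES * sizeof *perm);
            for (long i = 0; i < NODES; i++) perm[i] = i;
            for (long i = NODES - 1; i > 0; i--) {
                long j = rand() % (i + 1), t = perm[i];
                perm[i] = perm[j]; perm[j] = t;
            }
            for (long i = 0; i < NODES; i++)
                pool[perm[i]].next = &pool[perm[(i + 1) % NODES]];

            /* Run two stall-heavy threads at once; compare the wall-clock
             * time (e.g. under time(1)) against a single-thread run. On an
             * SMT core the pair should finish in well under twice the solo
             * time, because the stalls overlap. */
            pthread_t t1, t2;
            pthread_create(&t1, NULL, chase, &pool[perm[0]]);
            pthread_create(&t2, NULL, chase, &pool[perm[NODES / 2]]);
            pthread_join(t1, NULL);
            pthread_join(t2, NULL);
            free(perm);
            return 0;
        }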
  • Let's see... AMD missed the original launch date of their Barton-core CPUs by at least 3 months, missed the launch date of the Opteron by over 6 months, and the original launch date of the Athlon 64 by almost a year.

    If they're saying now that the chip will be 4Q 2005, when should we REALLY be expecting it to show up on store shelves? 3Q 2006? 1Q 2007, maybe? :)
      Let's see... AMD missed the original launch date of their Barton-core CPUs by at least 3 months, missed the launch date of the Opteron by over 6 months, and the original launch date of the Athlon 64 by almost a year.

      Ah, that's nothing compared to the Itanium.
      1998-->2001 if I remember correctly

  • by NerveGas ( 168686 ) on Thursday October 16, 2003 @04:09PM (#7233027)

    As the manufacturing process shrinks, and companies are able to put more transistors on a chip, the question arises: What should we use those extra transistors for?

    Now, there are several options. They could come up with a new processor design, but that takes a tremendous amount of R&D. They could just put tons of cache on the chip, but that gives diminishing returns.

    Or.... the Opterons already have a very simple I/O mechanism, namely HyperTransport. Literally all they have to do is plop down two Opteron cores, connect the HyperTransport lines, and bam: dual-core processor. I'm honestly surprised they're not doing it SOONER.

    Of course, the lines for memory controllers and the like have to be drawn out to the pins on the packaging, but that's a piece of cake.

    steve
    • The 2 on-chip cores would still share the L2 and actually have the same memory controller. That would in fact allow them to keep the packaging very similar (most of the external interface - memory and I/O - should be roughly the same)

      Furthermore, Hypertransport is *not* simple. As a point-to-point interconnect, maintaining cache coherency is not so easy ... But I think they have figured this out already.

      As the manufacturing process shrinks, and companies are able to put more transistors on a chip, the question arises: What should we use those extra transistors for?

      I will be so bold as to predict what these extra transistors will be used for.

      Most people only need so much cpu power. Yet Moore's law continues to march onward.

      Computers will get cheaper and cheaper. Like pocket calculators. I think we haven't seen how cheap computers are going to get. You think the low end cheap Linux compute
      ...companies are able to put more transistors on a chip...

      Put it this way, they both have penises.
  • As I understand it, part of the point of source-level cores (e.g. OpenCores.org [opencores.org]) is to be able to synthesize multiple cores into a single chip and have them talk amongst themselves via standard internal interfaces [vsia.com]. E.g., a chip contains a microprocessor, an implicit USB interface, and maybe has some hardware-accelerated DES encryption included as well. And OpenCores brings this capability to the common person.
  • These specifications were left in an abandoned blue police telephone booth. A car, containing two extremely life-like miniature figures, a bicycle and a couple of flat tires were nearby.
    • Memory wafer technology (believed unstable in strong time winds)
    • Irritating speech synthesis unit, known to sound like an electronic version of Bungle from Rainbow
    • Extremely loud electric motors
    • On-board RADAR and SONAR
    • Retractable energy weapon
    • 30 minute UPS

    The introduction of multiple "warp" cores introduces a cross-

  • by LWATCDR ( 28044 ) on Thursday October 16, 2003 @04:15PM (#7233115) Homepage Journal
    Folks, we really do not need to run DOS applications any more, and if we do, couldn't we emulate them? I just do not believe that the IAx86 is the best IA for the future. The idea that in 30 years we will be running some mutant 128-bit x86 chip makes my skin crawl. I guess I miss the days when new ideas were the norm for microcomputers. Remember when there was the 32032, 68020, TM990, Zilog Z8000, the 6502 family, and the 88000? How about it, Transmeta? Let's see a version of Linux that does not run on top of the translation layer. Let's get some new ideas out there; I am getting bored.
    Now that I said that, GO AMD. While it is still x86, this is one of the more interesting ideas I have seen in a while.
    • While it is still X86 this is one of the more interesting ideas I have seen for a while.

      Good, because it's also part of the Intel roadmap for the Itanium.

    • As a famous nerdy guy (no, not Bill, the other one, starts with an L) once said, what some people see as x86's weaknesses are actually some of its great strengths, and if you design a very elegant architecture and then start optimising it for the real world, you might be surprised to end up with something that looks a lot like x86.

      AMD-64 (x86-64) addresses some of the main problems of x86 (namely the small number of registers). Since virtually no-one codes in assembly anymore, and as long as the compilers
    • When one can get a useful, supported ATX motherboard that takes industry-standard parts, in the same price range.

    • I know this may be hard to hear, but let me break it to you gently...People will be running x86 100 years after I am dead. Blame IBM and Microsoft - the decision was made a few decades ago and there's nothing you or I can do about it. There's far too much software out there already built on x86.

      Back in the 80s and early 90s, the ISA actually mattered. You could get a non-negligible performance boost with a good hardware-friendly clean ISA.

      These days, the ISA just does not matter anymore. We have enoug
    • When shall we be free of the X86?
      Folks we really do not need to run DOS applications any more.


      Do you know why railroad tracks are the width they are?



      On the other hand, being optimistic for a moment, I suppose that in some hypothetical future, technology may get to a point where an OS and the vast bulk of its applications could simply be recompiled using a retargetable compiler, and everything would just seem to work?

      Technology aside, maybe also market forces might align and then this could happe
    • How about it, Transmeta? Let's see a version of Linux that does not run on top of the translation layer.

      AFAIK, the translation layer of Transmeta CPUs is a good thing, as it can optimize the code on the fly. There is a cache for translated code, so this will mostly benefit repeating stuff like scientific computing.

      However, I completely agree with your point of discarding x86. Switching to a different CPU seems like the least hardware issue, at least with Linux and BSDs. Unfortunately things are diffe

    • I think the transmeta has a lot of potential in many ways. Being able (in software) to morph the instruction stream should make it possible to build a "native" (or as native as transmeta ever is) JVM or other virtual machine. (Has this been done yet? If not, is there a good reason?)

      Even better, with a fast interchip connection network, building a cluster of these things could be very nice indeed. (There was a comment about a transmeta cluster a day or two ago that was quite interesting.)

      Even better

  • Now I know why I stopped reading ZDNet rags so long ago. They're truly trash.

    'multiple chip cores--the "brain" of the chip'

    I thought the chip was the brain of the computer? So the brain has a brain?

    Sigh...
  • by PaschalNee ( 451912 ) <pnee@nosPam.toombeola.com> on Thursday October 16, 2003 @04:18PM (#7233148) Homepage

    For those of you who live mainly in the software world (myself included) there's a very good overview of all things CPU on Arstechnica [arstechnica.com]. Detailed enough to be interesting but starts at a basic enough level.

    And remember that nothing impresses the ladies more than somebody who knows why multiple cores might be interesting.

  • Seeing The Core once was enough for me. There was a reason it was such a dog at the box office.
  • That little quote at the end of the article has me worried.
    " Designers will likely continue to increase the number of transistors on a chip by stacking them."
    What is stacking transistors in the third dimension going to do to the heat problem? Can I expect to need a cooling tower outside my apartment to handle the heat exchange?
  • K9? (Score:3, Funny)

    by DanThe1Man ( 46872 ) on Thursday October 16, 2003 @04:32PM (#7233324)
    K9 huh? Do we need a ziplock baggy when it takes a core dump?
    • K9 huh? Do we need a ziplock baggy when it takes a core dump?

      ziplock? do you save the poop you pick up? want to keep it from getting freezer burn or something?

      Just use an old grocery bag, or the little neighborhood blue poop bags, for goodness sake!
  • Nowadays the instruction decoder is such a small part of the chip, you could easily afford to put on two of them. Then, it would be easy to transition customers to a better instruction set, by supporting both simultaneously. A system based on such a chip could run programs for both, but the programs recompiled to the new instruction set would be faster because they can make better use of all the parts of the machine.

    People building for that target would naturally use the mode that produces the faster co

  • Versions of PowerPC come with 2 and 4 cores, and the PlayStation 3 is already being designed around Cell processors. It seems we'll hit the clock-speed limit, we've already hit the word-size limit at 64 bits (128 bits will never be practical), and we've shrunk the die until each wire is 5 atoms wide. The next logical step is to increase the number of cores, maybe incorporate the memory itself (maybe 128 MB of it) at full core speed on the same die, maybe incorporate the GPU on the same die, and start going vertical.

    I think for n
  • Siegfried and Roy are running scared!
  • How on earth are they gonna fit him inside of my mini tower computer?
  • by jd ( 1658 ) <imipak@ y a hoo.com> on Thursday October 16, 2003 @04:47PM (#7233496) Homepage Journal
    "Multiple cores" is meaningless, with today's microprocessors. Typically, there will be multiple execution units for common instructions. Pipelining, pre-fetch and branch prediction all increase performance by more than can be obtained by using antiquated SMP-style approaches. It's far more important to distribute the bus load over time, as that is the larger bottleneck.


    By having multiple register sets within a single core, and tagging requests/results, you can avoid the complexity of SMP entirely, while producing the effect of having multiple processors.


    If you want to go further, improve the support for internal routing of operations. Thus, if you have instructions operating on the same data, the data can be sent directly from logic element to logic element. The entire chain could then be executed as a single (albeit composite) instruction. This also eliminates the need for a CISC-to-RISC layer in the processor, as complex instructions would be mapped by routing commands and not by multiple internal fetch/execute cycles.


    By adding input/output FIFO queues to each instruction, where each node in the queue is tagged with the "virtual" processor associated with that instruction, the CPU would be limited in the number of CPUs it could look like only by the number of bits used in the tag. (E.g., an 8-bit tag gives you 256 virtual CPUs on a single die.)


    Why is this better than "true" SMP? Because 2 CPUs can't run a single thread faster than 1 CPU. Programs are generally written with single processor systems in mind, and therefore cannot run any better when the extra resources exist.


    Sub-instruction parallelism allows you to run as fast as you can fetch the instructions. Because the parallelism is merely at the bookkeeping level, there's no overhead for extra threads.


    Because the logic elements would pull off the queues, as and when they were free to do so, there's no task-switching latency.


    Because the parallelism is sub-instruction, and not at the instruction block or thread level, more of the resources get used more of the time, thus increasing CPU utilization. It also means that tasks that aren't parallel at a coarse-grain can likely get some benefit, as there may well be parallelizations that can be done at the element level.


    Because a single, larger die can carry more useful silicon than two or more separate dies. (Which is likely why AMD are using multiple cores in their K9 CPU.)


    AMD's approach is an improvement over the separate-CPU scheme, but it's nowhere near the potential an element-cluster could provide. The parallelism that can be gained is way too coarse-grained. It'll offer about the same level of improvement the move from separate 386 and 387 chips to the 486DX did, for much the same reason: reduced distances and reduced voltages allowed for faster clock rates on the same technology.


    But engineering at the right level will always produce better results than cut-and-paste construction, even if it does require more thought.
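
    For anyone who wants the tagging idea spelled out concretely, here's a toy software analogy in C -- the names and structure are mine, a thought experiment only, not a model of any real chip:

      #include <stdio.h>

      #define TAGS  4          /* a 2-bit tag -> 4 "virtual" CPUs */
      #define QLEN  8

      struct uop { int tag; int value; };

      static struct uop fifo[TAGS][QLEN];  /* one input FIFO per tag */
      static int head[TAGS], tail[TAGS];

      /* A virtual CPU issues a micro-op: pure bookkeeping, no task switch. */
      static void issue(int tag, int value)
      {
          fifo[tag][tail[tag]++ % QLEN] = (struct uop){ tag, value };
      }

      int main(void)
      {
          /* Four virtual CPUs each issue three ops. */
          for (int t = 0; t < TAGS; t++)
              for (int v = 0; v < 3; v++)
                  issue(t, v);

          /* A shared pool of logic elements drains whatever is ready,
           * regardless of which virtual CPU issued it -- so idle resources
           * get used and there is no task-switching latency. */
          int busy;
          do {
              busy = 0;
              for (int t = 0; t < TAGS; t++) {
                  if (head[t] != tail[t]) {
                      struct uop u = fifo[t][head[t]++ % QLEN];
                      printf("executed vcpu %d op %d\n", u.tag, u.value);
                      busy = 1;
                  }
              }
          } while (busy);
          return 0;
      }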

    • "Multiple cores" is meaningless, with today's microprocessors.

      Not really. It has to do with managed atoms of complexity. You create this complex thing, give it well-understood interfaces, and connect it to other identical things. There is more than just pure technology involved. There is an issue of managing complexity. One advantage of the single core/ multi core approach is a sort of conceptual "assembly line," where your cheap product is the least atom, and your more expensive parts are compositions of
  • Is it true that the AMD K9 will be man's best friend?

    Might as well start the lame jokes now, I'm guessing the engineers at AMD saw that coming long ago, too.
  • A big bulldog with AMD painted on its side marches up to a pile of P4s and urinates on them.
  • Multiple cores. Multiple Athlon / Opteron cores...

    AP Newswire, Aug 14, 2007: In a paper published in the journal Science, MIT researchers announced the achievement of a sustained nuclear fusion reaction. This stunning accomplishment was, oddly enough, purely accidental, triggered by the failure of the cooling system on MIT's new AMD-K9-based 256-node Beowulf cluster, which had gone into full operation only a week prior to the event.....

  • by Knights who say 'INT ( 708612 ) on Thursday October 16, 2003 @05:20PM (#7233837) Journal
    Why do I need -more- processing power?

    I don't do any 3D rendering, but I believe I do more processor-heavy work than the average Carlos -- esp. big numerical differential-equation and bigbigbig linear-optimization stuff in Maple -- and my discount-store K6-II still crunches the stuff faster than I could ever desire.

    The main problem with personal computers is that they use hard drives for swap space when they should be using RAM to cache the hard drives.

    If I could spend $500 on my computer right now, I'd fill it with as much memory as the architecture allows. I'd then run a ramdrive and direct many of the computer's activities there (rough sketch below).

    I mean, when a webpage opens, a banner is downloaded to my hard drive. That's just irrational. And it prolly wears the hard drive's physical mechanism faster too.

    But then again, we don't have a benchmark of ram speed, nor do we have hypemakers touting new, faster RAM. And prolly there's not too much activity in technologically improving RAM either.
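
    For what it's worth, on Linux you can fake much of this today by pointing scratch files at a RAM-backed tmpfs such as /dev/shm. A rough C sketch (the paths are assumptions, and whether /tmp really sits on disk depends on the system):

      #include <stdio.h>

      int main(void)
      {
          /* Writing here lands in RAM (tmpfs): no platter movement at all. */
          FILE *ram = fopen("/dev/shm/banner.gif", "wb");
          /* Writing here typically hits the disk -- the wear I'm
           * complaining about above. */
          FILE *disk = fopen("/tmp/banner.gif", "wb");

          if (ram)  { fputs("GIF89a", ram);  fclose(ram);  }
          if (disk) { fputs("GIF89a", disk); fclose(disk); }
          return 0;
      }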
  • This is usually how the story goes:
    1. AMD announces new processor to rock all Intel processors.
    2. Intel waits about a year, then unleashes a huge ad campaign (a la Centrino) for a previously unannounced project.
    3. AMD is once again on the backburner
    4. Skip the .....
    5. Profit for Intel.
  • Forget about all the processing you could do with multiple cores. Based on the current trend of AMD chips, you could use this baby to heat your home.

    It renders movies, it roasts meat, it's an all-in-one appliance.

    FORGET all those other processors.

    LOOK at this P4, it gets barely warm enough to melt the cheese on this burger [insert picture]. Now look at the K9: not only are you grilling that cheeseburger, but with that Texas-sized heat sink and those multiple cores, there's enough room and heat to grill

  • Sun's SPARC strategy has been centered around the idea of multiple cores and chip-level multithreading for a while (see this article [yahoo.com] for one of the latest announcements). I guess this also validates Sun's approach with SPARC. Not that it's all that unique -- I guess all chip makers have similar goals -- but sometimes it seems there's a bit of bias here, where AMD rocks and everyone else sucks.
