Posted
by
CmdrTaco
from the coming-to-a-pc-near-you dept.
Scrooge919 writes "An article on ZDNet discusses AMD's plan for the successor to Opteron -- the K9. The biggest feature will be that it contains multiple cores. The K9 is currently slated for the second half of 2005, which would be less than 3 years after the Opteron shipped."
If the first thing you think of when you see "K9" is "an obscure Klingon warship" rather than "dog" (or "crappy movie"), you, my friend, are a true geek. :)
I can just imagine the insane grin on the face of the guy who named the K1. "Long after I've moved on, they'll have to release generation nine of this thing, and then I'll have my revenge! Ha ha ha."
I am glad to see chipmakers getting off their asses and making progress finally.
I guess they are ramping up because the slump in corporate tech spending will have to turn around shortly, since all the computers people bought a few years ago now need to be replaced. Whoever comes out ahead here is bound to make a lot of money.
I am glad to see chipmakers getting off their asses and making progress finally.
"Finally"? They've been making steady improvements over the past twenty years. Just over the past ten years:
- CPU frequencies have increased by 30 times.
- Memory bandwidths have increased by 24 times.
- CPU complexity has increased by a good factor.
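Taking those multipliers at face value (they're the parent poster's figures, not verified), the implied annual growth rate is easy to back out:

```python
# Annualized growth rate implied by an N-fold improvement over `years` years.
def annual_rate(multiplier, years=10):
    return multiplier ** (1 / years) - 1

print(f"CPU frequency: ~{annual_rate(30):.1%}/year")     # 30x in 10 years
print(f"Memory bandwidth: ~{annual_rate(24):.1%}/year")  # 24x in 10 years
```

So a 30x frequency gain works out to roughly 40% per year, compounding quietly the whole time.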
There was a time when x86 processors were the laggards of the computing world. Now, if you compare the computing power of a modern x86 processor to competing processors
Everyone has now accepted that you need more complexity to get higher performance per clock.
Evidence of this: The multiple-logical-core (see Aurora, Intel HT, EPIC, Power5) trend is the next step in complexity beyond pipelining and predictive execution/caching.
Which chip would that be? The only AMD CPU with a K moniker that I can recall being slow was the original K6. Everything since then has been on par with or better than the Intel equivalent.
You're not thinking of the K6, are you? That was actually a very good processor; it was faster than the Pentium. Unfortunately for AMD, though, Intel adapted the PPro for consumer use and called it the Pentium II, and suddenly the K6 had to compete with that. Even so, the K6 still compared favourably on integer operations: e.g. the K6-2 and K6-III @ 450MHz both beat the faster-clocked Celeron 500MHz [heise.de], and the K6 isn't that far off the Pentium II [heise.de] at SPECint95.
The K6 did suck a bit at floating point though, especially c
So, now we'll have multiple processing units and pipelines in each core, and multiple cores. The biggest question in my head is how much of a limitation memory bandwidth will be. I just don't see how you can supply data and instructions fast enough to, say, three 3 GHz cores running on the same chip unless you have close to a thousand pins on the chip. The other question would be about cooling. :)
As for memory bandwidth, the nature of the Opteron already solves that: each core has its own memory controller, so for each core you add, you're adding more memory bandwidth.
As for a thousand pins on the chip, I believe the Opterons are already near that. If you add in the extra pins for the memory controllers, you're probably looking at a minimum of 1500.
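The parent's worry can be put in rough numbers. This is a back-of-envelope sketch with made-up but era-plausible assumptions (2 bytes of average memory traffic per core per cycle, and one dual-channel DDR400 controller at 6.4 GB/s), not measured figures:

```python
# Back-of-envelope: memory demand from three 3 GHz cores vs. one controller.
cores = 3
freq_hz = 3e9
bytes_per_core_cycle = 2       # assumed average traffic per core, sans caches
demand = cores * freq_hz * bytes_per_core_cycle   # bytes/second needed
supply = 6.4e9                                    # dual-channel DDR400, B/s

print(f"demand ~ {demand / 1e9:.0f} GB/s, supply ~ {supply / 1e9:.1f} GB/s")
print(f"shortfall: {demand / supply:.1f}x")       # why big caches matter
```

Even with these generous assumptions the cores want several times what one controller can deliver, which is exactly why large on-die caches and per-core memory controllers are the answer.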
Honestly now, it's been a very long time since data could be bussed to the CPU fast enough to take full advantage of the chip's speed. Chipmakers spend so much time convincing us that we need these insanely fast processors, when in reality a large portion of the chip's cycles go wasted because the data simply can't get to the chip that fast. I have two main machines -- an Intel Celeron 400 and an AMD Athlon 2400+ (so, like, 1997 MHz). In theory, my Athlon should be five times faster than my Celeron -- in p
There are two things that AMD is working towards in this chip. The first is multiple cores, while the second is on-chip multithreading (SMT, same as Intel's Hyperthreading). The first may increase the bandwidth needs of the chip, but the latter is actually designed to reduce them, in a manner of speaking.
SMT allows one thread to do a bit of processing for a while, until it runs out of data. It requests data from memory and then goes off to lala-land for a little while while the other thread takes over.
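That behaviour can be modelled in miniature (purely illustrative, nothing like real SMT hardware): two "threads" share one core, and whenever one stalls on a memory request, the other gets the cycles.

```python
# Threads are generators yielding ("work", 1) or ("stall", n) events;
# the scheduler gives each cycle to a thread that isn't waiting on memory.
def thread(name, pattern):
    for kind, n in pattern:
        yield name, kind, n

def smt(threads):
    stalled = {t: 0 for t in threads}   # stall cycles remaining per thread
    trace, live = [], list(threads)
    while live:
        for t in list(live):
            if stalled[t] > 0:
                stalled[t] -= 1         # off in la-la land, as above
                continue
            try:
                name, kind, n = next(t)
            except StopIteration:
                live.remove(t)
                continue
            if kind == "stall":
                stalled[t] = n          # memory request in flight
            else:
                trace.append(name)      # this thread used the core

    return trace

a = thread("A", [("work", 1), ("stall", 2), ("work", 1)])
b = thread("B", [("work", 1), ("work", 1), ("work", 1)])
print("".join(smt([a, b])))  # B keeps the core busy while A is stalled
```

The trace shows B's work filling the cycles A spends waiting, which is the whole point: the execution units stay busy even though neither thread alone could keep them so.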
Let's see... AMD missed the original launch date of their Barton core CPUs by at least 3 months, missed the launch date of the Opteron by over 6 months, and the original launch date of the Athlon 64 by almost a year.
If they're saying now that the chip will be 4Q 2005, when should we REALLY be expecting it to show up on store shelves? 3Q 2006? 1Q 2007, maybe? :)
Let's see... AMD missed the original launch date of their Barton core CPUs by at least 3 months, missed the launch date of the Opteron by over 6 months, and the original launch date of the Athlon 64 by almost a year.
Ah, that's nothing compared to the Itanium.
1998-->2001 if I remember correctly
As the manufacturing process shrinks and companies are able to put more transistors on a chip, the question arises: what should we use those extra transistors for?
Now, there are several options. They could come up with a new processor design, but that takes a tremendous amount of R&D. They could just put tons of cache on the chip, but that gives diminishing returns.
Or... the Opterons already have a very simple I/O mechanism, namely HyperTransport. Literally all they have to do is plop down two Opteron cores, connect the HyperTransport lines, and bam: dual-core processor. I'm honestly surprised they're not doing it SOONER.
Of course, the lines for memory controllers and the like have to be drawn out to the pins on the packaging, but that's a piece of cake.
The 2 on-chip cores would still share the L2 and actually have the same memory controller. That would in fact allow them to keep the packaging very similar (most of the external interface -- memory and I/O -- should be roughly the same).
Furthermore, Hypertransport is *not* simple. As a point-to-point interconnect, maintaining cache coherency is not so easy... But I think they have figured this out already.
As the manufacturing process shrinks and companies are able to put more transistors on a chip, the question arises: what should we use those extra transistors for?
I will be so bold as to predict what these extra transistors will be used for.
Most people only need so much CPU power. Yet Moore's law continues to march onward.
Computers will get cheaper and cheaper. Like pocket calculators. I think we haven't seen how cheap computers are going to get. You think the low-end cheap Linux compute
Yet another reason why you shouldn't look to magazine articles as gospel truth.
While the currently-available 2xx-series Opterons only support two-way processing, the 8xx series is due out shortly, and (as the model number suggests), supports up to 8-way processing.
As I understand it, part of the point of source-level cores (e.g. OpenCores.org [opencores.org]) is to be able to synthesize multiple cores into a single chip and have them talk amongst themselves via standard internal interfaces [vsia.com]. E.g. a chip contains a microprocessor, a USB interface, and maybe some hardware-accelerated DES encryption included as well. And OpenCores brings this capability to the common person.
These specifications were left in an abandoned blue police telephone booth. A car, containing two extremely life-like miniature figures, a bicycle and a couple of flat tires were nearby.
Memory wafer technology (believed unstable in strong time winds)
Irritating speech synthesis unit, known to sound like an electronic version of Bungle from Rainbow
Extremely loud electric motors
On-board RADAR and SONAR
Retractable energy weapon
30 minute UPS
The introduction of multiple "warp" cores introduces a cross-
Folks, we really do not need to run DOS applications any more. If we do, couldn't we emulate them? I just do not believe that the IAx86 is the best IA for the future. The idea that in 30 years we will be running some mutant 128-bit x86 chip makes my skin crawl. I guess I miss the days when new ideas were the norm for microcomputers. Remember when there were the 32032, 68020, TM990, Zilog Z8000, the 6502 family, and the 88000? How about it, Transmeta? Let's see a version of Linux that does not run on top of the translation layer. Let's get some new ideas out there; I am getting bored. Now that I said that, GO AMD. While it is still x86, this is one of the more interesting ideas I have seen for a while.
As a famous nerdy guy (no, not Bill -- the other one, starts with an L) once said, what some people see as x86's weaknesses are actually some of its great strengths; if you design a very elegant architecture and then start optimising it for the real world, you might be surprised to end up with something that looks a lot like x86.
AMD-64 (x86-64) addresses some of the main problems of x86 (namely the small number of registers). Since virtually no-one codes in assembly anymore, and as long as the compilers
I know this may be hard to hear, but let me break it to you gently...People will be running x86 100 years after I am dead. Blame IBM and Microsoft - the decision was made a few decades ago and there's nothing you or I can do about it. There's far too much software out there already built on x86.
Back in the 80s and early 90s, the ISA actually mattered. You could get a non-negligible performance boost with a good hardware-friendly clean ISA.
These days, the ISA just does not matter anymore. We have enoug
When shall we be free of the X86?
Folks we really do not need to run DOS applications any more.
Do you know why railroad tracks are the width they are?
On the other hand, being optimistic for a moment, I suppose that in some hypothetical future, technology may get to a point where an OS and the vast bulk of its applications could simply be recompiled using a retargetable compiler, and everything would just seem to work?
Technology aside, maybe also market forces might align and then this could happe
How about it, Transmeta? Let's see a version of Linux that does not run on top of the translation layer.
AFAIK, the translation layer of Transmeta CPUs is a good thing, as it can optimize the code on the fly. There is a cache for translated code, so this will mostly benefit repeating stuff like scientific computing.
However, I completely agree with your point of discarding x86. Switching to a different CPU seems like the least hardware issue, at least with Linux and BSDs. Unfortunately things are diffe
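The translated-code cache the parent describes can be sketched as plain memoization (a toy model; real code-morphing software is vastly more sophisticated, and the names here are made up):

```python
# Toy model of a code-translation cache: translate a "block" of guest
# instructions once, then reuse the cached translation on every later hit.
translation_cache = {}
translations = 0

def translate(block):
    """Pretend translation: expensive the first time, a dict lookup after."""
    global translations
    if block not in translation_cache:
        translations += 1                      # the costly on-the-fly step
        translation_cache[block] = f"native({block})"
    return translation_cache[block]

for _ in range(1000):                          # a tight loop, like HPC code
    translate("hot_loop_body")
print(translations)  # 1 -- the loop body was translated only once
```

This is why the cache "mostly benefits repeating stuff": the translation cost is paid once and amortized over every subsequent iteration.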
I think the Transmeta has a lot of potential in many ways. Being able (in software) to morph the instruction stream should make it possible to build a "native" (or as native as Transmeta ever is) JVM or other virtual machine. (Has this been done yet? If not, is there a good reason?)
Even better, with a fast interchip connection network, building a cluster of these things could be very nice indeed. (There was a comment about a transmeta cluster a day or two ago that was quite interesting.)
Folks we really do not need to run DOS applications any more.
Hey, I wanna play X-Com II: Terror From the Deep!
The reality, though, is that many (almost all) of those old games are better experienced in a properly configured emulator (DOSBox is OK for 16-bit stuff, though I'm kinda waiting for 32-bit) than on a real PC! So really, the biggest miss would be a few-year-old titles, not the 10-year-old titles, as they're already unplayable without tinkering on most modern systems (a bunch of games won't run e
like, i don't say this too often on slashdot, but WOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOT
I wonder if that'll get past the lameness filter. DOSBox is one of the coolest (and most practical) emulators around! And it's also a project that helps keep us from being stuck on 10-year-old platforms.
Let's see, from MW, we have these objectionable definitions for "liberal":
Main Entry: 1liberal
Pronunciation: 'li-b(&-)r&l
Function: adjective
Etymology: Middle English, from Middle French, from Latin liberalis suitable for a freeman, generous, from liber free; perhaps akin to Old English lEodan to grow, Greek eleutheros free
Date: 14th century
1 a : of, relating to, or based on the liberal arts b archaic : of or befitting a man of free birth
2 a : marked by generosity : OPENHANDED b : given or provide
This is kinda unethical. Open source ISN'T about helping a company out. AMD could help, but having gcc be 'biased' towards a company isn't the open-source philosophy. If RMS saw this, he'd have a panic attack.
However, people are welcome to write gcc backends to optimize for the K9. AMD could even write them, and if they were open source, they'd be used. You also mention 'for an efficient subset of x86', but gcc compiles for more than just x86.
Optimizing gcc for amd isn't a bad idea, but we're not l
AMD processors make sense for most people. They're cheaper and faster. Therefore no-one can make much of an argument against buying them.
But keep in mind that AMD is not a perfect company from the consumers' point of view.
AMD have expressed interest in supporting the TCPA by building hardware that only runs signed and authorized software. This can't be good for Linux, or any other OS apart from Windows.
AMD also withheld *very* important information from Linux developers regarding
For those of you who live mainly in the software world (myself included) there's a very good overview of all things CPU on Arstechnica [arstechnica.com]. Detailed enough to be interesting but starts at a basic enough level.
And remember that nothing impresses the ladies more than somebody who knows why multiple cores might be interesting
That little quote at the end of the article has me worried.
" Designers will likely continue to increase the number of transistors on a chip by stacking them." What is the density of transistors going to do for the problem of generating too much heat when they are getting layered in the third dimension? Can I begin to expect the need for a cooling tower outside my apartment to handle the job of heat exchange?
Nowadays the instruction decoder is such a small part of the chip, you could easily afford to put on two of them. Then, it would be easy to transition customers to a better instruction set, by supporting both simultaneously. A system based on such a chip could run programs for both, but the programs recompiled to the new instruction set would be faster because they can make better use of all the parts of the machine. People building for that target would naturally use the mode that produces the faster co
Versions of the PowerPC come with 2 and 4 cores. The PlayStation 3 is already being designed with Cell processors. It seems like we'll hit the clock-cycle limit; we've already hit the bits limit at 64 bits (128 bits will never be practical) and shrunk the die to where each wire is 5 atoms wide. The next logical step is to increase the number of cores, maybe incorporate the memory itself (maybe 128MB of it) at full core speed on the same die, maybe incorporate the GPU on the same die, and start going vertical.
"Multiple cores" is meaningless, with today's microprocessors. Typically, there will be multiple execution units for common instructions. Pipelining, pre-fetch and branch prediction all increase performance by more than can be obtained by using antiquated SMP-style approaches. It's far more important to distribute the bus load over time, as that is the larger bottleneck.
By having multiple register sets within a single core, and tagging requests/results, you can avoid the complexity of SMP entirely, while producing the effect of having multiple processors.
If you want to go further, improve the support for internal routing of operations. Thus, if you've got instructions operating on the same data, the data can be sent directly from logic element to logic element. The entire chain could then be executed as a single (albeit composite) instruction. This also eliminates the need for a CISC-to-RISC layer in the processor, as complex instructions would be mapped by routing commands and not by multiple internal fetch/execute cycles.
By adding input/output FIFO queues to each instruction, where each node in the queue tags the "virtual" processor associated with that instruction, the CPU would be limited in the number of CPUs it could appear to be only by the number of bits used in the tag. (E.g. an 8-bit tag gives you 256 virtual CPUs on a single die.)
Why is this better than "true" SMP? Because 2 CPUs can't run a single thread faster than 1 CPU. Programs are generally written with single processor systems in mind, and therefore cannot run any better when the extra resources exist.
Sub-instruction parallelism allows you to run as fast as you can fetch the instructions. Because the parallelism is merely at the bookkeeping level, there's no overhead for extra threads.
Because the logic elements would pull off the queues, as and when they were free to do so, there's no task-switching latency.
Because the parallelism is sub-instruction, and not at the instruction block or thread level, more of the resources get used more of the time, thus increasing CPU utilization. It also means that tasks that aren't parallel at a coarse-grain can likely get some benefit, as there may well be parallelizations that can be done at the element level.
Because a single, larger die can carry more useful silicon than two or more separate dies. (Which is likely why AMD is using multiple cores in its K9 CPU.)
AMD's approach is an improvement over the separate-CPU scheme, but it's nowhere near the potential an element cluster could provide. The parallelism that can be gained is way too coarse-grained. It'll offer about the same level of improvement the move from separate 386 and 387 chips to the 486DX did, for much the same reason: reduced distances and reduced voltages allowed for faster clock rates on the same technology.
But engineering at the right level will always produce better results than cut-and-paste construction, even if it does require more thought.
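The tagged-queue idea above can be sketched in miniature (all names hypothetical; this models only the bookkeeping, not real hardware): each operation carries a virtual-CPU tag, and free execution units pull work from one shared queue regardless of which virtual CPU issued it.

```python
from collections import deque

class TaggedOp:
    """An operation tagged with the 'virtual' CPU that issued it."""
    def __init__(self, tag, fn, *args):
        self.tag, self.fn, self.args = tag, fn, args

def run(ops, n_units=4):
    """Free execution units drain a shared queue; results group by tag."""
    queue, results = deque(ops), {}
    while queue:
        # each "cycle", every free unit pulls one pending op off the queue
        for _ in range(min(n_units, len(queue))):
            op = queue.popleft()
            results.setdefault(op.tag, []).append(op.fn(*op.args))
    return results

# six ops issued alternately by two virtual CPUs (tags 0 and 1)
ops = [TaggedOp(t % 2, lambda a, b: a + b, t, t) for t in range(6)]
print(run(ops))
```

Notice there is no task switch anywhere: the units just consume whatever is next, and the tag is the only thing separating one "processor" from another, which is the poster's claim about zero-overhead virtual CPUs.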
"Multiple cores" is meaningless, with today's microprocessors.
Not really. It has to do with managed atoms of complexity. You create this complex thing, give it well-understood interfaces, and connect it to other identical things. There is more than just pure technology involved. There is an issue of managing complexity. One advantage of the single core/ multi core approach is a sort of conceptual "assembly line," where your cheap product is the least atom, and your more expensive parts are compositions of
AP Newswire, Aug 14, 2007: In a paper published in the journal Science, MIT researchers announced the achievement of a sustained nuclear fusion reaction. This stunning accomplishment was, oddly enough, purely accidental, triggered by the failure of the cooling system on MIT's new AMD-K9-based 256-node Beowulf cluster, which had gone into full operation only a week prior to the event.....
I don't do any 3D rendering, but I believe I do more processor-heavy work than the average Carlos -- big numerical differential equation and big-big-big linear optimization stuff in Maple -- and my discount-store K6-II still crunches the stuff faster than I could ever desire.
The main problem with personal computers is that they use hard drives for memory swap space when they should be using RAM to cache the hard drives.
If I could spend $500 on my computer right now I'd fill it with as much memory as the architecture allows. I'd then run a ramdrive and direct many of the computer activities to there.
I mean, when a webpage opens, a banner is downloaded to my hard drive. That's just irrational. And it prolly wears the hard drive's physical mechanism faster too.
But then again, we don't have a benchmark of ram speed, nor do we have hypemakers touting new, faster RAM. And prolly there's not too much activity in technologically improving RAM either.
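For what it's worth, operating systems already do something like the poster's idea with the page cache: keep recently used disk blocks in RAM so the disk is only touched on a miss. A toy sketch of that policy (the class, file names, and sizes are all made up for illustration):

```python
from collections import OrderedDict

class RamCache:
    """A minimal LRU read cache: recently used 'files' stay in RAM."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.data = OrderedDict()
        self.disk_reads = 0

    def read(self, path, disk):
        if path in self.data:
            self.data.move_to_end(path)        # refresh LRU position
            return self.data[path]
        self.disk_reads += 1                   # slow path: hit the disk
        self.data[path] = disk[path]
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)      # evict least recently used
        return self.data[path]

disk = {f"banner{i}.gif": b"..." for i in range(3)}
cache = RamCache()
for _ in range(10):                            # reload the page ten times
    for p in disk:
        cache.read(p, disk)
print(cache.disk_reads)  # 3 -- each banner was read from "disk" only once
```

So the repeated banner download the poster complains about mostly wears the cache, not the platters, as long as the working set fits in RAM.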
This is usually how the story goes:
1. AMD announces new processor to rock all intel processors.
2. Intel waits about a year, then unleashes a huge ad campaign (aka Centrino) about a previously unannounced project.
3. AMD is once again on the backburner
4. Skip the.....
5. Profit for Intel.
Forget about all the processing you could do with multiple cores. Based on the current trend of AMD chips, you could use this baby to heat your home.
It renders movies, it roasts meat, it's an all-in-one appliance.
FORGET all those other processors.
LOOK at this P4, it gets barely warm enough to melt the cheese on this burger [insert picture]. Now look at the K9: not only are you grilling that cheeseburger, but with that Texas-sized heat sink and those multiple cores, there's enough room and heat to grill
Sun's SPARC strategy has been centered around the idea of multiple cores and chip-level multithreading for a while (see this article [yahoo.com] for one of the latest announcements).
I guess this also validates Sun's approach with SPARC. Not that it's all that unique -- I guess all chip makers have similar goals -- but sometimes it seems there's a bit of bias here, where AMD rocks and everyone else sucks.
Do you hear anyone referring to AMD64 as "K8"? I hear AMD64, x86_64, Opteron, Athlon64, but never K8 (maybe once or twice, to relate how similar the FX, Athlon64 and Opteron are)
So we'll call it K9 in the same way we call PentiumPro/II/III all i686. And no one will get the dog jokes, except us, and maybe 4 readers and 1 of the creators of Penny Arcade.
Maybe I'm the wrong demographic here, but I actually found yours funnier than these dog jokes. I mean, I didn't quite get why there should be jokes about dogs...
"In contrast, the RISC architecture, at the root of chips from Sun Microsystems, IBM, Hewlett-Packard and ARM, isn't proliferating at the same rate."
I was under the impression that in the core there isn't much difference between CISC and RISC these days, as described by Ars Technica [ars-technica.com] last century.
It's not. The core of an UltraSPARC or a Power4 looks a LOT more like the core of an Opteron or a P4 than it does an ARM. The CISC chips have become rather like RISC chips on the inside, while the high-end RISC chips have tended to get somewhat less "reduced" (i.e. more complex) instruction sets.
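Concretely, the convergence looks like a front-end decoder cracking a memory-operand CISC instruction into RISC-like internal micro-ops. A toy sketch of that idea (the syntax and the `t0` temporary are invented for illustration; these are not real Opteron or P4 micro-ops):

```python
# Decode a CISC-style "add [mem], reg" into load/op/store micro-ops --
# the RISC-like internal form the comment above alludes to.
def decode(insn):
    op, dst, src = insn.split(maxsplit=2)
    if dst.startswith("["):                     # memory destination: crack it
        addr = dst.strip("[],")
        return [f"load  t0, {addr}",
                f"{op}   t0, t0, {src}",
                f"store t0, {addr}"]
    return [f"{op}   {dst} {src}"]              # register form: one micro-op

for uop in decode("add [rax], rbx"):
    print(uop)
```

The register-to-register form passes through as a single micro-op, while the memory form becomes a little load/compute/store sequence, which is why the execution cores on both sides of the CISC/RISC divide can end up looking so similar.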
It's only a drag when operating in x86 Legacy Mode on an AMD64-based core. When you're operating in x86-64 Compatibility Mode or x86-64 Long Mode, you get access to sixteen 64-bit registers. Here's a graphic which explains it quite nicely: http://www.devx.com/assets/amd/5929.gif
The rest of the article explains the concepts of the AMD64 architecture. Link: http://www.devx.com/amd/Article/16018
Ironically, these deficits in the x86 -- a non-orthogonal instruction set and a paltry 4 "general purpose" registers -- have forced x86 CPU makers to develop advanced techniques.
One thing most people recognize is that CISC design allowed mem to mem streaming copies. And that has a performance advantage such that even RISC designers added similar instructions, even though it violates the LOAD/STORE doctrine.
Another issue where CISC design benefits performance is that the instruction stream is smaller. Given the gr
In all likelihood, the K9 will be based on AMD's x86-64 instruction set, the same as the Opteron and Athlon 64/FX. x86-64, as far as I know, looks pretty much like a regular CISC x86 CPU at the assembly level, just with instruction and data blocks that are twice the size (it's even natively binary-compatible with regular 32-bit x86 code).
A good comparison here is Sun's SPARC line of RISC processors. The original SPARC chips were 32-bit, then Sun introduced the UltraSPARC, a 64-bit version of the same archit
If you can run Linux on a Zilog [zilog.com], well, you can run it on anything.
Some of us need the power.
since data could be bussed to the CPU fast enough
Got that right.
The whole design of systems is going in the direction where main memory will be considered as slow as disks once were.
It will be considered as much a sin to miss the L2 cache as it is to swap.
The chips that will be speed kings will be the ones that can afford huge fast caches.
Like say, the Alpha?
Put it this way, they both have penises.
Good, because it's also part of the Intel roadmap for the Itanium.
.60 came out yesterday, with the first go at supporting protected-mode DOS programs.
Re:When shall we be free of the X86? (Score:2)
Re:When shall we be free of the X86? (Score:2)
I wonder if that'll get past the lameness filter. DOSBox is one of the coolest (and most practical) emulators around! And it's also a project that helps us avoid being stuck on 10-year-old platforms.
Re:When shall we be free of the X86? (Score:2)
Re:When shall we be free of the X86? (Score:2)
Main Entry: 1liberal
Pronunciation: 'li-b(&-)r&l
Function: adjective
Etymology: Middle English, from Middle French, from Latin liberalis suitable for a freeman, generous, from liber free; perhaps akin to Old English lEodan to grow, Greek eleutheros free
Date: 14th century
1 a : of, relating to, or based on the liberal arts b archaic : of or befitting a man of free birth
2 a : marked by generosity : OPENHANDED b : given or provide
They've been doing that since 1995 (Score:2)
No operation is directly executed anymore, it's all interpreted.
Re:GNU Community can Help AMD (Score:2)
However, people are welcome to write gcc backends to optimize for the K9. AMD could even write them, and if they were open source, they'd be used. You also mention 'for an efficient subset of x86', but gcc compiles for more than just x86.
Optimizing gcc for AMD isn't a bad idea, but we're not l
Re:GNU Community can Help AMD (Score:2)
AMD processors make sense for most people. They're cheaper and faster. Therefore no-one can make much of an argument against buying them.
But keep in mind that AMD is not a perfect company from the consumers' point of view.
AMD have expressed interest in supporting the TCPA by building hardware that only runs signed and authorized software. This can't be good for Linux, or any other OS apart from Windows.
AMD also withheld *very* important information from Linux developers regarding
That's about enough of the K9 jokes... (Score:2)
Moron reporters (Score:2)
'multiple chip cores--the "brain" of the chip'
I thought the chip was the brain of the computer? So the brain has a brain?
Sigh...
Re:Moron reporters (Score:2)
mmmm.. braaaains...
They're doing what now? (Score:4, Informative)
For those of you who live mainly in the software world (myself included) there's a very good overview of all things CPU on Arstechnica [arstechnica.com]. Detailed enough to be interesting but starts at a basic enough level.
And remember that nothing impresses the ladies more than somebody who knows why multiple cores might be interesting.
Multiple cores? (Score:2, Funny)
I'm worried about Moore's Law... (Score:2)
" Designers will likely continue to increase the number of transistors on a chip by stacking them."
What is the density of transistors going to do for the problem of generating too much heat when they are getting layered in the third dimension? Can I begin to expect the need for a cooling tower outside my apartment to handle the job of heat exchange?
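The worry above can be made concrete with a back-of-envelope estimate: heat still leaves mostly through the same footprint area, so stacking layers multiplies power density roughly linearly. The wattage and die area below are hypothetical, chosen only to show the scaling.

```python
# Back-of-envelope power-density estimate for stacked transistor layers.
# The numbers are assumptions, just to illustrate the scaling problem.
die_power_w = 90.0       # assumed power of one layer, in watts
die_area_cm2 = 1.0       # assumed die footprint, in cm^2

for layers in (1, 2, 4):
    # Heat is removed mostly through the same top surface,
    # so power per unit of cooling area scales with the layer count.
    density = layers * die_power_w / die_area_cm2
    print(f"{layers} layer(s): {density:.0f} W/cm^2")
```

Under these assumed numbers, four layers already put the footprint power density into territory where conventional air cooling struggles, which is exactly the poster's point.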
K9? (Score:3, Funny)
Re:K9? (Score:2)
ziplock? do you save the poop you pick up? want to keep it from getting freezer burn or something?
Just use an old grocery bag, or the little neighborhood blue poop bags, for goodness sake!
Re:K9? (Score:2)
Bad Instruction Set (Score:2)
People building for that target would naturally use the mode that produces the faster co
PowerPC has already done that (Score:2)
I think for n
AMD to debut Montecore CPUs in 2005 (Score:2)
K9!? (Score:2)
And a serious comment... (Score:4, Interesting)
By having multiple register sets within a single core, and tagging requests/results, you can avoid the complexity of SMP entirely, while producing the effect of having multiple processors.
If you want to go further, improve the support for internal routing of operations. Thus, if you've instructions operating on the same data, the data can be directly sent from logic element to logic element. The entire chain could then be executed as a single instruction (albeit composite). This also eliminates the need to have a CISC-to-RISC layer in the processor, as complex instructions would be mapped by routing commands and not by multiple internal fetch/execute cycles.
By adding input/output FIFO queues to each instruction, where each node in the queue is tagged with the "virtual" processor associated with that instruction, the CPU would be limited in the number of CPUs it could appear to be only by the number of bits used in the tag. (e.g.: An 8-bit tag gives you 256 virtual CPUs on a single die.)
Why is this better than "true" SMP? Because 2 CPUs can't run a single thread faster than 1 CPU. Programs are generally written with single processor systems in mind, and therefore cannot run any better when the extra resources exist.
Sub-instruction parallelism allows you to run as fast as you can fetch the instructions. Because the parallelism is merely at the bookkeeping level, there's no overhead for extra threads.
Because the logic elements would pull off the queues, as and when they were free to do so, there's no task-switching latency.
Because the parallelism is sub-instruction, and not at the instruction block or thread level, more of the resources get used more of the time, thus increasing CPU utilization. It also means that tasks that aren't parallel at a coarse-grain can likely get some benefit, as there may well be parallelizations that can be done at the element level.
Because a single, larger die can carry more useful silicon than two or more separate dies. (Which is likely why AMD are using multiple cores in their K9 CPU.)
AMD's approach is an improvement over the separate-CPU schema, but it's nowhere near the potential an element cluster could provide. The parallelism that can be gained is far too coarse-grained. It'll offer about the same level of improvement the move from separate 386 and 387 chips to the 486DX did, for much the same reason: reduced distances and reduced voltages allowed for faster clock rates on the same technology.
But engineering at the right level will always produce better results than cut-and-paste construction, even if it does require more thought.
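The tagging scheme described above can be sketched in a few lines: one shared execution element, a shared input FIFO of tagged operations, and a tiny register set per tag. All the names here (`TaggedCore`, `issue`, the `acc` register) are invented for illustration; this is a simulation of the bookkeeping, not a hardware design.

```python
from collections import deque

# Sketch of the tagged virtual-CPU idea: the tag on each queued
# operation selects which per-tag register set the shared logic
# element should use, so no task switch is ever needed.
TAG_BITS = 8
NUM_VCPUS = 1 << TAG_BITS   # an 8-bit tag gives 256 virtual CPUs

class TaggedCore:
    def __init__(self):
        # One tiny register file per virtual CPU, selected by tag.
        self.regs = {tag: {"acc": 0} for tag in range(NUM_VCPUS)}
        self.queue = deque()    # shared input FIFO of tagged ops

    def issue(self, tag, op, value):
        self.queue.append((tag, op, value))

    def run(self):
        # The shared logic element drains the queue as it becomes
        # free; the tag alone picks out the right architectural state.
        while self.queue:
            tag, op, value = self.queue.popleft()
            if op == "add":
                self.regs[tag]["acc"] += value

core = TaggedCore()
core.issue(0, "add", 5)     # virtual CPU 0
core.issue(255, "add", 7)   # virtual CPU 255
core.issue(0, "add", 1)     # virtual CPU 0 again
core.run()
print(core.regs[0]["acc"], core.regs[255]["acc"])  # -> 6 7
```

Note how operations from different virtual CPUs interleave freely in one queue with zero switching cost, which is the claimed advantage over thread-level SMP.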
Re:And a serious comment... (Score:2)
Not really. It has to do with managed atoms of complexity. You create this complex thing, give it well-understood interfaces, and connect it to other identical things. There is more than just pure technology involved. There is an issue of managing complexity. One advantage of the single core/ multi core approach is a sort of conceptual "assembly line," where your cheap product is the least atom, and your more expensive parts are compositions of
your best friend (Score:2)
Might as well start the lame jokes now, I'm guessing the engineers at AMD saw that coming long ago, too.
I can just see the advertisement now... (Score:2)
Whoa. (Score:2)
AP Newswire, Aug 14, 2007: In a paper published in the journal Science, MIT researchers announced the achievement of a sustained nuclear fusion reaction. This stunning accomplishment was, oddly enough, purely accidental, triggered by the failure of the cooling system on MIT's new AMD-K9-based 256-node Beowulf cluster, which had gone into full operation only a week prior to the event.....
I know this has been said countless times, but... (Score:3, Informative)
I don't do any 3D rendering, but I believe I do more processor-heavy work than the average Carlos - esp. big numerical differential equation and big, big, big linear optimization stuff in Maple - and my discount-store K6-II still crunches the stuff faster than I could ever desire.
The main problem with personal computers is that they use hard drives for memory swap space when they should be using RAM to cache the hard drives.
If I could spend $500 on my computer right now I'd fill it with as much memory as the architecture allows. I'd then run a ramdrive and direct many of the computer activities to there.
I mean, when a webpage opens, a banner is downloaded to my hard drive. That's just irrational. And it probably wears out the hard drive's physical mechanism faster, too.
But then again, we don't have a benchmark for RAM speed, nor do we have hypemakers touting new, faster RAM. And there's probably not much activity in technologically improving RAM either.
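The caching idea in this comment can be sketched as a toy read-through cache: keep file contents in RAM after the first read so repeat accesses never touch the disk. The `RamFileCache` class is invented for illustration, and in practice modern operating systems already do this transparently via the page cache.

```python
import os
import tempfile

# Minimal sketch of caching disk reads in RAM (illustrative only).
class RamFileCache:
    def __init__(self):
        self.cache = {}         # path -> file contents held in RAM
        self.disk_reads = 0     # how often we actually hit the disk

    def read(self, path):
        if path not in self.cache:
            self.disk_reads += 1
            with open(path, "rb") as f:
                self.cache[path] = f.read()
        return self.cache[path]

# Demo with a throwaway file standing in for a downloaded banner.
fd, path = tempfile.mkstemp()
os.write(fd, b"banner.gif bytes")
os.close(fd)

cache = RamFileCache()
for _ in range(100):
    data = cache.read(path)
os.unlink(path)
print(cache.disk_reads)  # -> 1
```

A hundred reads, one disk access: the rest are served from RAM, which is the effect the poster wants from a ramdrive.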
AMD vs. Intel (Score:2)
1. AMD announces a new processor to rock all Intel processors.
2. Intel waits about a year, then unleashes a huge ad campaign (aka Centrino) for a previously unannounced project.
3. AMD is once again on the back burner.
4. Skip the
5. Profit for Intel.
K9 (Canine) chip (Score:2)
Obligatory Silicon Zoo reference [fsu.edu]
K9? They're all the rage in Trenton, New Jersey. (Score:2)
But do you have to have a license for them? [fortunecity.com]
The real benefits (Score:2)
It renders movies, it roasts meat, it's an all-in-one appliance.
FORGET all those other processors.
LOOK at this P4, it gets barely warm enough to melt the cheese on this burger [insert picture]. Now look at the K9: not only are you grilling that cheeseburger, but with that Texas-sized heat sink and those multiple cores, there's enough room and heat to grill
AMD following Sun? (Score:2)
Re:K9 Processor? (Score:2, Funny)
As compared to AIBO, which of course is a "fake" dog. But if they put a K9 Processor in the AIBO, we have a conflicted pet that is a real fake dog.
Re:K9 Processor? (Score:2)
It amazes me that at least 5 or 6 of you thought you'd be the first to say it.
Yet AMD Marketing will never use that name. (Score:2)
So we'll call it K9 in the same way we call PentiumPro/II/III all i686. And no one will get the dog jokes, except us, and maybe 4 readers and 1 of the creators of Penny Arcade.
How sad.
Re:Shut up with the dog jokes. (Score:2)
(Or was that a bit classical for
Re:Shut up with the dog jokes. (Score:2)
Maybe I'm the wrong demographic here, but I actually found yours funnier than these dog jokes. I mean, I didn't quite get why there should be jokes about dogs...
Re:Shut up with the dog jokes. (Score:2)
Re:Well ZDnet kinda blasted RISC... (Score:2)
I was under the impression that in the core there isn't much difference between CISC and RISC these days, as described by Ars Technica [ars-technica.com] last century.
Re:Well ZDnet kinda blasted RISC... (Score:2)
Re:When will it end??? (Score:3, Informative)
The rest of the article explains the concepts of the AMD64 architecture. Link: http://www.devx.com/amd/Article/16018
Re:When will it end??? (Score:2)
One thing most people recognize is that CISC design allowed memory-to-memory streaming copies. That has such a performance advantage that even RISC designers added similar instructions, even though it violates the load/store doctrine.
Another area where CISC design benefits performance is that the instruction stream is smaller. Given the gr
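The instruction-stream point can be made concrete with a toy count for copying a block of memory: a CISC string-move instruction covers the whole copy in one opcode, while a strict load/store machine needs a load, a store, and an index/branch update per word. The per-word count of three is a stated assumption of this simplistic model, not a measurement of any real ISA.

```python
# Toy instruction-count model for copying n words of memory,
# to illustrate the "smaller instruction stream" claim. The
# counts are modeling assumptions, not real ISA measurements.
def cisc_copy_instruction_count(n_words):
    # A single memory-to-memory string move handles the whole copy.
    return 1

def risc_copy_instruction_count(n_words):
    # Strict load/store: a load, a store, and an index/branch
    # update executed for each word copied.
    return 3 * n_words

for n in (16, 1024):
    print(n, cisc_copy_instruction_count(n), risc_copy_instruction_count(n))
```

Fewer instructions fetched means less pressure on instruction bandwidth and caches, which is the performance angle the comment is driving at.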
Probably Means x86-64 (Score:2)
A good comparison here is Sun's SPARC line of RISC processors. The original SPARC chips were 32-bit, then Sun introduced the UltraSPARC, a 64-bit version of the same archit
Re:Intel--where art thou ? (Score:2)
Re:Intel--where art thou ? (Score:2)
Two words: innovation and productivity.
We'll be there soon though.
Re:Hey Geeks, stop making bad dog jokes for a sec. (Score:2)
What is the difference, then, from just having a dual-processor system?
Re:burns hot? (Score:2)
Re:AMD? 2005?!?! Who cares? G5 Available *NOW*! (Score:2)