Intrinsity Claims 2.2GHz Chip
PowerMacDaddy writes "Over at SiliconValley.com there's an article about an Austin, TX startup named Intrinsity that has unveiled a new chip combining a new logic style with conventional fab processes to achieve a 2.2GHz clock rate. The company is headed by former Texas Instruments and Apple Computer microprocessor developer Paul Nixon. The real question is: is this all FUD, will the real-world performance be part of The Megahertz Myth, or is this thing for real?"
slow (Score:1)
That sounds slow compared to Chuck Moore's new design [colorforth.com].
I don't doubt that it will work as he says since his previous designs ran at up to 800 MHz with a 0.8 micron process (see the middle of this page [ultratechnology.com]).
booooooooh-ring (Score:1)
More importantly - this sounds like a company looking for venture capital...but I get the feeling they are maybe a year too late.
Anyone want a 2GHz chip? I mean...really...
But what instruction set? (Score:1)
To sum up:
Re:But what instruction set? (Score:2)
According to the EETimes article pointed to elsewhere in this thread, the instruction set will either be MIPS or PowerPC, with the most likely nod being MIPS.
One place MIPS sees a huge market penetration is in networking equipment, especially Cisco routers. If Intrinsity can clock up to 2.2GHz without massively increasing power consumption and heat dissipation, I could see Cisco's high-end routers using the hell out of MIPS CPUs using that technology.
Marketing crap (Score:1)
Notice, the article is quietly misleading readers into thinking this chip is somehow comparable/compatible with x86 instruction sets... as if they have somehow trumped Intel to the 2.2 gig mark, the same way AMD trumped them to the 1 gig mark about a year ago.
Watch the blip, then sell short.
Hmm..... (Score:2, Insightful)
IF this does come to desktops... that is good. More competition = lower prices. But lots of issues are still unclear. What kind of packaging will this be in? Will it require a proprietary motherboard? If it does... well... I'm sensing that this won't last too long. "Intrinsity's test chip achieved faster performance using conventional methods, where other chip makers have generated chips running at 400 to 500 megahertz, or about one-fourth as fast as the Intrinsity chip." So what's this supposed to mean? Maybe they should make that clear. Is that saying that any chip over 400 or 500 MHz uses special manufacturing techniques? That would be the majority of chips... so how can that be special then?
Also: "Much of Intrinsity's work has involved making improvements to a fundamental building block for processor chips: the logic circuit. Intrinsity relies heavily on a faster but trickier type of circuit, called dynamic logic, than do conventional processors. Dynamic logic circuits can handle more complex functions with fewer steps than static logic circuits." So does this mean specialized applications/OSes? Not worrying about Linux - I know it will be ported. But if this needs a special OS, and special new (read: expensive) applications... I think it will go under.
Proves the technology is there, though, which is a good thing.
Re: Specialized apps/OSs? (Score:2)
Short answer: No.
Programs see only the chip's high-level design (its instruction set architecture); the low-level circuit implementation is hidden.
We can build 100MHz chips right now! (Score:5, Funny)
Back in the 60s, the power of a radio was measured by the number of transistors. That is, until one radio company put hundreds of useless transistors on their board and didn't even wire them up. After that, radios started getting measured on real abilities, like quality of sound. Maybe computer marketing will catch up some day and market meaningful numbers: minimum FPS in Quake 3!
-Ted
Re:We can build 100MHz chips right now! (Score:2, Funny)
neh
FUD (Score:1)
A word on "The Megahertz Myth" (Score:2, Redundant)
And honestly, just because the G4 does better on some obscure Photoshop benchmarks really doesn't make up for its lack of scalability (as compared to RISC chips like the UltraSPARC II and III) and its lack of good performance in real-world applications (as compared to AMD and Intel x86 chips). Please stop the spread of pro-Apple FUD now.
Re:A word on "The Megahertz Myth" (Score:3, Insightful)
Re:A word on "The Megahertz Myth" (Score:2, Insightful)
Great Satans! (Score:2)
Slashdotters love Intel; not so long ago these boards were full of "look at my cheap overclocked dual-Celeron system!"
The GHz myth (I'm updating it a little here) is true, and Apple makes a point. I would think average consumers would be more comfortable with an Apple link than, say, Joe Blow's homemade Linux-based benchmark tool. I'd rather refer non-techies to an Apple page than to something a bit more technical, especially if they're considering buying an Apple.
Re:Great Satans! (Score:2)
Intel (Score:1)
My guess.... probably.
Re:Intel (Score:2)
Not for the desktop, yet ... (Score:1)
There is an obvious problem that people keep forgetting: RAM speed. The RAM (and mainboard) can't supply the CPU with data fast enough. Anyone care to elaborate on this with some math and tech info, maybe some predictions on RAM vs. CPU-bus speed development?
Re:Not for the desktop, yet ... (Score:2)
There is still a latency problem, but intelligent caching and compiler design can mitigate that problem, especially if there is a bandwidth surplus available for speculative fetching.
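To illustrate the speculative-fetching idea, here is a sketch only - it uses GCC's __builtin_prefetch hint (compiler-specific, not portable), and the prefetch distance is an arbitrary assumption. While the CPU chews on the current data, the hint starts pulling the next block across the bus, so surplus bandwidth hides some of the latency.

#include <stdio.h>

#define N (1 << 20)
#define BLOCK 64   /* prefetch distance in ints; an arbitrary assumption */

static int data[N];

int main(void)
{
    long sum = 0;
    for (int i = 0; i < N; i++) {
        if (i % BLOCK == 0 && i + BLOCK < N)
            __builtin_prefetch(&data[i + BLOCK]); /* GCC/Clang hint: start fetching ahead */
        sum += data[i];  /* meanwhile, work on data that's already arrived */
    }
    printf("sum = %ld\n", sum);
    return 0;
}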
Eventually, to conquer the latency beast, we will need to move more memory closer to the CPU. Doing that is going to take moving to serial interconnects for lower pin counts, and reducing the physical footprint on the mainboard.
Unfortunately, as RAMBUS found out, running several hundred MHz over a motherboard trace is difficult. There is noise from other channels, stray capacitance, that sort of thing. This is especially bad if you use a multi-point bus system. My guess is that eventually we will have to move to a point-to-point serial memory bus. This has the advantage of maintaining low latency, while scaling bandwidth with the number of memory modules.
Re:Not for the desktop, yet ... (Score:2)
I'm not sure that switching to a serial system would help enough. While you could clock it more quickly, you'd still have a hard time matching the bandwidth of a many-line solution. This could ironically result in longer latencies, because despite the higher clock speed, you'd have to sit there and wait for all 32+ bits of the missed word or 128+ bits of the cache line to be transferred before resuming operation.
IMO, a better approach might be running many shielded lines in parallel, transmitting data with self-clocking codes. This allows faster clocking by removing the need to keep all lines in sync with each other; data could be rebuilt in buffers at the receiving end.
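Rough numbers make the tradeoff concrete (both clock rates are invented for illustration): a one-bit serial lane has to clock bus-width times faster than a parallel bus just to tie on cache-line latency.

#include <stdio.h>

int main(void)
{
    const double line_bits = 128.0;  /* cache line to recover, bits      */
    const double serial_hz = 3.2e9;  /* hypothetical 1-bit serial lane   */
    const double par_width = 16.0;   /* hypothetical parallel bus width  */
    const double par_hz    = 400e6;  /* hypothetical parallel bus clock  */

    double t_serial = line_bits / serial_hz;            /* 1 bit per tick   */
    double t_par    = (line_bits / par_width) / par_hz; /* 16 bits per tick */

    printf("serial:   %.1f ns\n", t_serial * 1e9); /* 40.0 ns */
    printf("parallel: %.1f ns\n", t_par * 1e9);    /* 20.0 ns */
    return 0;
}

Even at eight times the clock, the single lane takes twice as long to deliver the line.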
Regardless of the bus implementation, you'll still likely be limited by the speed of the RAM used.
The final solution to all of this will probably come when we can put a big enough L3 cache on a die to hold the entire working set of most programs. That will give us a short, fast, wide path to L3 memory. Main memory will only be accessed for streaming data or for random accesses to huge databases. In the first case, a high-bandwidth, high-latency bus is acceptable. In the second case, I doubt anything we do will overcome latency problems.
An interesting design problem to think about, in any event.
Re:Not for the desktop, yet ... (Score:2)
This has been tried. It didn't work very well. There are a few problems:
This is a serious bottleneck for many tasks.
Amdahl's Law and coherence operation overhead both conspire to bite you on this. Amdahl's law, especially - you can't parallelize all tasks.
And the main reason why processor+RAM modules haven't taken off:
An ordinary SMP box already has memory tied to processors - the processor caches. Add main memory to your multi-module machine, and you have something that looks suspiciously like an ordinary SMP box with big L3 caches made from DRAM.
For really, really large systems (hundreds of modules or more), this approach is still used (look up "NUMA" for more information), but for smaller boxes it doesn't make a lot of sense.
This is not news (Score:1)
MHz (Score:5, Informative)
It doesn't matter if it is real or vapour; it will still fall prey to the "Megahertz Myth". Maybe someday people will understand: non-similar architectures can't be compared by MHz alone. And even most similar architectures can't be compared via MHz, as the Intel vs. AMD war will tell you.
It is even worse than that: no single metric will ever give you the whole story.
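The usual way to make this concrete is the "iron law" of performance: time = instructions x cycles-per-instruction / clock rate. A toy comparison (both chips and all numbers are invented for illustration):

#include <stdio.h>

int main(void)
{
    const double insns = 1e9;         /* instructions in some fixed workload */

    double t_a = insns * 1.5 / 2.2e9; /* chip A: 2.2 GHz but 1.5 cycles/insn */
    double t_b = insns * 0.8 / 1.4e9; /* chip B: 1.4 GHz but 0.8 cycles/insn */

    printf("chip A (2.2 GHz): %.3f s\n", t_a); /* ~0.682 s */
    printf("chip B (1.4 GHz): %.3f s\n", t_b); /* ~0.571 s */
    return 0;
}

The lower-clocked chip finishes first; MHz only settles it when cycles-per-instruction are equal.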
Re:MHz (Score:3, Insightful)
1) They license the tech, which is what they should do from the beginning.
2) AMD or Intel will buy them.
3) AMD and Intel (independently) will gear up their marketing drones, and this chip will fade from memory.
What we need is a testing algorithm that all processors use. Then we can rate chips as "it completed the Moffitt algorithm in 1.5 minutes!"
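In that spirit, a minimal fixed-workload timing harness (a sketch, assuming a POSIX system; moffitt() is a hypothetical stand-in for whatever agreed-upon workload gets picked):

#include <stdio.h>
#include <time.h>

/* stand-in for the agreed-upon benchmark workload */
static double moffitt(void)
{
    double acc = 0.0;
    for (long i = 1; i <= 50000000L; i++)
        acc += 1.0 / (double)i;  /* arbitrary fixed amount of work */
    return acc;
}

int main(void)
{
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    double r = moffitt();
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("completed the Moffitt algorithm in %.2f s (result %f)\n", secs, r);
    return 0;
}

Same work on every machine, one comparable number out.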
Re:MHz (Score:3, Interesting)
Compare actual performance, which means putting third-party applications onto demo computers at retail locations, and timing complete workdays and/or complete tasks in major applications. Apple does both of these, yet somehow they are depicted as cheating because they don't just offer the customer a range of beige boxes at 1.6, 1.7, and 1.8GHz along with a spec sheet of compiler shootouts. I have never seen computers demonstrated with actual applications outside of the Apple Store. To me, that just says that Apple has nothing to hide. If you don't believe Apple's performance demonstrations, go to an Apple Store and use your own media and see what results you get.
Re:MHz (Score:2)
Re:MHz (Score:1)
C//
Re:MHz (Score:2)
Re:MHz (Score:2)
wouldn't 667 be that guy across the street from the beast?
Re:People will never learn (Score:2)
Re:People will never learn (Score:1)
Isn't that a mis-use of the word FUD (Score:1)
Not a Processor! (Score:1)
looks that way (Score:2)
They seem to be trying to figure out placement of logic.
Remember, this is VERY important in a chip: 80% of the wires, and therefore the heat, come from the clock distribution inside the chip (according to an IBM PowerPC paper in the ACM microprocessor journal).
Placement is very lucrative; Cadence and the like make millions from it. But this seems to be FUD, because you can run a process at 0.10 micron at TSMC now, and they are standardising on it.
It does not seem to be anything but a hoax; e.g., the clock rates mean nothing unless the whole chip runs at that frequency, and that means RISC with no caches and no pipelines, of which I assure you there are few.
What counts is memory bandwidth and how often you use memory.
regards
john jones
hmm (Score:1)
This is Great News (Score:1)
Just like the Amiga! (Score:1)
The custom chipset in the A1000 also used precharge/evaluate dynamic logic with a 4-phase clock. 'course it was only clocked at twice NTSC color-burst frequency, not 2.2 GHz...
Actually this was common design methodology in many 4 to 8 micron (not 0.4 or 0.8!) NMOS chip designs of that era.
I seem to recall... (Score:1)
must..try... not..... to .... imagine.... ..it! (Score:1)
Question about the Megahurz Myth (Score:1)
Weird article... (Score:5, Insightful)
In a nutshell this is saying, "Someone said something, but it might be bogus, and the cycle speed really doesn't mean much anyway." Alrighty then. This is like a "nothing to see here, move along!" type of article.
Re:Weird article... (Score:2)
Except in this case there isn't even a really cool, splattered dead guy to stare at.
A more technical article is available at... (Score:4, Informative)
yay..but cpu means poop (Score:3, Insightful)
What's the big deal? (Score:2)
If this story was two years old, it might be significant... but it is far from revolutionary right now.
Re:What's the big deal? (Score:2)
The big deal is they took a method that's used to create 400 MHz chips, and created a 2.2 GHz chip.
Re:What's the big deal? (Score:2)
http://www.eetimes.com/story/OEG20010813S0060
makes it sound like this thing is targeted more towards the embedded market, where (so the article says) the top chips are running at 500MHz. Not sure why they wouldn't try for a desktop PC solution...?
Re:What's the big deal? (Score:2)
Not sure why they wouldn't try for a desktop PC solution...?
Power, efficiency & scalability. Embedded systems are far more complex than just a PC in a little box.
Re:What's the big deal? (Score:2)
Dynamic logic is nothing new .... (Score:3, Informative)
Given that net delays are becoming the gating factor in big chip designs, dynamic logic seems to me to be just a sideshow - unless the long wires are themselves the dynamic nodes (transmission lines with solitons moving on them?); now that would be interesting ...
Potentially much more interesting IMHO is clockless asynchronous logic - but CAD tools just aren't up to supporting this methodology (oh yeah, and the synchronous clock-based mindset is pretty entrenched too).
Re:Dynamic logic is nothing new .... (Score:1)
Re: (Score:2, Interesting)
Re:What is dynamic logic? (Score:5, Informative)
Both dynamic and static logic use logic gates or blocks that are wired together. The difference is in how the gates are implemented internally, and how they pass data back and forth.
CMOS is a good example of static logic. It uses pull-up and pull-down transistor networks to make sure that outputs are always strongly asserted. This makes CMOS gates big and makes input capacitance larger than it otherwise needs to be. But, it's well-understood, has a few attractive features, and has a whole slew of design tools built for it.
Precharge logic is a good example of dynamic logic. It uses the parasitic capacitance of the output line to store the output value. The output node is charged up on one half of the clock (precharge phase), and left floating on the other half (readout phase). During the readout phase, the inputs are asserted. Inputs are fed into a pull-down transistor network that drives the output low if it should be low, and leaves it alone if it should be high. This style of logic takes up half the space of CMOS logic, has half the input capacitance, and has stronger driving capability (NFETs pulling down typically drive 2x-3x more strongly than PFETs pulling up). This means that if you play your cards right, you can make precharge logic circuits that are faster *and* more compact than CMOS logic circuits. The downsides are that designing and verifying precharge logic is a royal pain, and that you have to have a clock input into the logic block.
The article describes a more complicated dynamic logic scheme with a four-phase clock. These kinds of schemes have been floating around in research literature for years, but are usually not used because of the greater complexity and fewer tools available.
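A toy behavioral model of the precharge/evaluate cycle described above, for a dynamic two-input NAND gate (logic behavior only - no timing, charge sharing, or leakage):

#include <stdio.h>

/* the output node's parasitic capacitance, modeled as one stored bit */
struct dyn_gate { int node; };

/* clock low: precharge phase charges the output node high */
static void precharge(struct dyn_gate *g) { g->node = 1; }

/* clock high: evaluate phase.  The NMOS pulldown network conducts only
 * when both inputs are 1, discharging the node; otherwise the node
 * floats and keeps its precharged value. */
static void evaluate(struct dyn_gate *g, int a, int b)
{
    if (a && b)
        g->node = 0;
}

int main(void)
{
    struct dyn_gate g;

    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++) {
            precharge(&g);
            evaluate(&g, a, b);
            printf("NAND(%d,%d) = %d\n", a, b, g.node);
        }
    return 0;
}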
Q3 (Score:3, Funny)
*sigh* I want a turbo button on my computer. Except, instead of halving my speed, I want it to drop down to 33MHz so I can play all my old games properly under DOS.
Moslo (Score:2)
-Ted
Re:Moslo (Score:1)
Re:Moslo (Score:3, Interesting)
Fast forward to today.
We lost the complete source code, and our computers are so darn fast that the bit of code that estimates the speed of the computer overruns its 16-bit int. The game now hangs.
So we are forced to run our game in Windows to slow it down. It works half the time - it depends on the time slicing. Recently our computers have been getting a bit too fast for even that, so we might have to move to an emulator.
The smart thing to do would be to fire up the hex editor and edit the code, but that would be *cheating*.
Re:Moslo (Score:2)
The better approach for your problem is to use the OS's delay function, if you can. (Under Unix, try nanosleep() or select() with all the fd sets empty; Sleep() should work on Windows.) You free up the processor for other tasks, you don't have to do that speed-estimating crap, and you don't hit your bug.
I would assume you figured this out but you said your other friend who was smarter than you wrote the code :-) .. nice story.
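For what it's worth, a minimal sketch of that approach (assuming a POSIX system): pace the loop with nanosleep() instead of a calibrated busy-wait, and the game runs at the same speed on a 33MHz 386 and a 2.2GHz anything.

#include <stdio.h>
#include <time.h>

static void do_frame(int n) { printf("frame %d\n", n); } /* game logic goes here */

int main(void)
{
    struct timespec frame = { 0, 33333333L }; /* ~1/30 s: about 30 frames/sec */

    for (int n = 0; n < 90; n++) {  /* three seconds of "game" */
        do_frame(n);
        nanosleep(&frame, NULL);    /* yields the CPU; no speed estimation needed */
    }
    return 0;
}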
Re:Q3 (Score:2)
One project I worship is http://exult.sourceforge.net which has rewritten the Ultima 7 engine with timer-based animation, etc. It is *so* cool. Even if you're not into Ultima games, you should check out the project.
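For reference, "timer-based animation" just means advancing state by measured wall-clock time instead of by loop iterations; a rough sketch (POSIX assumed, sprite numbers invented):

#include <stdio.h>
#include <time.h>

static double now_secs(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    double pos = 0.0;            /* sprite position, pixels       */
    const double speed = 100.0;  /* pixels per real-world second  */
    double last = now_secs();

    while (pos < 500.0) {
        double t = now_secs();
        pos += speed * (t - last);  /* movement independent of CPU clock */
        last = t;
    }
    printf("crossed the screen in real time, pos=%.1f\n", pos);
    return 0;
}

However fast the loop spins, the sprite still takes five real seconds to cross the screen.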
-l
More on the MHz myth (Score:2)
For those of you who want more, Ars Technica gave a great explanation [arstechnica.com] the week before I saw this live.
Re:More on the MHz myth (Score:1)
Depends on a lot of factors (Score:2, Informative)
It's not MHz that determines the speed; MHz is just one factor among many. The rest would be things like instructions per clock, pipeline depth, cache size and hierarchy, memory bandwidth and latency, and branch prediction.
And many more. If you have learnt Computer Architecture, then you'd certainly be able to list hundreds more.
Moreover, Apple wants to play catchup [theregister.co.uk] with x86... Hmmm... Do you smell something fishy?
This is nothing new (Score:2)
x86 chips are not simple, and creating a dynamic logic design is not likely. The company seems to have a very good background in automated design tools, but chips on the scale of x86 CPUs are not created with automated tools; they are created by hand and optimized (like assembly coding, to the software guys).
"Intrinsity's bare-bones test chip operates at 2.2 GHz..." This is not that impressive on a bare-bones chip. They haven't even created an ALU capable of that speed, never mind a full CPU. This company also doesn't have any fabs, so they will be at the disadvantage Cyrix and AMD were at in their youth.
Overall, they aren't likely to be making x86 CPUs any time soon. PDAs and laptops can't handle the power draw, so I'm not sure where that leaves them. Maybe they should team with Transmeta [transmeta.com] to solve their power problems.
note to the editor (Score:4, Insightful)
Well, I really doubt this will be FUD, since that stands for fear, uncertainty, and doubt. This article seems to be more of a hype piece.
FUD is tearing down a competitor's product with vague statements and generalizations. FUD is not describing your own new product in glowing terms. That's just marketing BS.
I know, I know... shouldn't nitpick. But when the term FUD is so misused on the main page at Slashdot, I really must object.
Highlights... (Score:2)
No copper interconnects. No .13-micron process. These are things that I (as a non-chip engineer) can understand. Is this going to improve my life? Only time will tell. But I for one like technology for the sake of technology.
Quotes taken from the eetimes article [eetimes.com].
2.2GHz is a BIG DEAL - if it's embedded... (Score:2, Insightful)
Even if they stick to their 2003 delivery date, 2GHz+ will still be fast in that market. They would be the leader in both speed and speed/Watt... but I bet they wouldn't be the cheapest... ;)
Using either MIPS or PPC code is smart for the embedded market... just look at AMD's announcement earlier about discontinuing the 486 and other embedded-market chips.
Also - if this is normal .18 aluminum technology, the potential for someone wielding .15 copper, strained silicon, or SOI - all of which decrease heat/power - is pretty amazing...
=tkk
The Process Is The Real News (Score:2, Insightful)
Intrinsity claims to have developed a new way to design and fabricate high-speed logic using some older ideas, and this could be a significant achievement.
Does this mean that Intel, etc. will be able to instantly make 4GHz chips? Nope. And as we all know, the speed of the chip isn't a great measure of its performance.
By the way, that siliconvalley.com article was pretty weak. Did they try to omit as many details as possible?
But the 1.8GHz Pentium 4 is on the same process (Score:1)
"Megahertz Myth" a myth? (Score:2, Insightful)
Here is a challenge for Mr. Jobs: run the same Linux distro (i.e., Red Hat for Intel and Red Hat for G4) on each machine and then do the benchmarks. And while he's at it, try this new microprocessor for speed...
Only human (Score:2, Offtopic)
TechReview's argument: Safe havens typically don't have enough pipe to host Napster volumes of data; and, to deter law-abiding companies in the "goodguy" international community from dealing with these outlaws, you will be punished with asset forfeiture if you so much as look at them.
My counterargument: The first point is invalidated by the eventuality of distributed networks being more efficient with that volume of data anyway (think anonymous, dynamic akamai), and the second only requires that the "outlaws" be self-sufficient. e.g. If/when South Korea cracks down on the physical servers located @ astalavista.box.sk [astalavista.box.sk], it would resurface in a nebulous new form.
Myth #2: The Net Is Too Interconnected to Control:
TechReview's argument: Gnutella had to implement supernodes in order to fix its old bottleneck problem. What once was completely distributed now has a bit of hierarchy, and hence, is easier to attack with the help of the mega-ISPs.
My counterargument: There's a big difference between a massive central server being targeted and hundreds of thousands of potential supernodes, which can also pop into and out of existence with the same ease as regular peers. Also, they mention that ISPs may move from simple port blocking to traffic analysis in order to defeat Gnutella and other 'rogue' packets by sniffing their signature. That will work, but it also means that they'll NEXT have to blacklist ALL encrypted communication too - fat chance of that happening.
Myth #3: The Net Is Too Filled with Hackers to Control
TechReview's argument: You can restrict free communication most effectively at the hardware level. If consumers won't buy the crippled products, it becomes governments' job to mandate it, "just like [they] insist that cars have certain antipollution methods."
My counterargument: I think people will get off their asses and 'revolt' before their last bastion of freedom is co-opted by the system. Also, as long as ANY communication is still possible, you can hide whatever data you want to communicate within that channel... defeating the Orwell network.
Re:Only human (Score:2)
Microwave (Score:2, Funny)
-foxxz
Just more of a sign.. (Score:2)
Once again, this is a sign that operating systems that tie you to a given hardware architecture are holding us back, and that Apple made a horrible mistake in not porting Mac OS X to alien hardware.
Those companies that make software platforms need to realize that they **need** to learn to be hardware agnostic. Completely. Tying yourself to a platform is just not safe. Your operating systems need to be designed such that the hardware communication bits and the operating system bits are totally separated-- as OS X/Mach is-- and you need to find a way to make the practice of distributing binaries obsolete. We need, badly, some kind of abstract machine code that can be "compiled" to any hardware-specific machine code in an equally optimised fashion. I mean-- you would compile your program not to machine code, but to some kind of rpm-like package in a standard abstract machine code; the user would obtain and double-click this package, and the package would compile itself into the machine code of the computer the user is sitting at. (Since this would require retaining some algorithm information in the machine code, this would make disassembling / reverse engineering easier, of course, but it would still be far preferable from a corporation's point of view to releasing your source for people to compile.) And no, unless your hardware is designed to make JIT interpreters transparent, VMs are not the way to do this.
If they do not find a way to do this? Well, wholly open source operating environments (i.e., systems with no closed source portions, such as Debian) will then have an incredible, incredible advantage at some indeterminate point in the future (once there is actually (a) actual competition in the processor market between a variety of architecture types, instead of the current "you're imitating x86, you're Apple, or you are very high-end" situation, and (b) a large enough portion of Linux/BSD users to sustain actual competition in the processor architecture market). Why? Because once the current ways of doing things start to exhaust Moore's Law, and people start looking for incredibly different ways of doing things, we will start to see a whole class of devices that only really shine under open source software-- because the closed-source world has to ship a different installer for each hardware architecture that the OS runs on, and the open-source world only has to ship one.
(Please note that I don't particularly think that open source software ruling the software industry would be a bad thing at all.)
I don't think Microsoft would bother with either bytecode or emulation, though; they'll just stay where they are, where they're comfortable, and assume that they'll halt change in the processor market rather than change in the processor market halting them. (Meaning once we're all using chips that realign their logic pathway map for each program, and MS is still using something x86-compatible, game companies will start noticing Linux and it'll all be over for MS.) Apple, meanwhile, has ALREADY used their Super Kernel Messaging Mach Microkernel Powers to easily create an OS that, thanks to brilliant design, runs equally well on all architectures it is written for and can be ported to a new one in a matter of days ("there are billions of incompatible Wintel devices, and you have drivers for none of them" notwithstanding). And once they had done this, what did they do? Release it for one system and one system only. Had they come up with a way to distribute software in abstract machine code (in the way I clumsily described it above) and announced plans to at some point in the future release OS X versions for all architectures in existence, they would now be poised to conquer the world; but they didn't. And they're not.
Either way. Someday, we will reach a point where the operating system must be completely agnostic as regards hardware. This means abstractly designed architectures like Hurd and Mac OS X will have an enormous, enormous advantage, and hardware-tied monolithic thingies like Linux will have to flounderingly transition to each new architecture. (PS: which of the above two camps does NT fall into? HAL? What's that?) It also means that Debian's decision to let apt-get compile and install source packages for you as transparently as if they had been binaries is the only correct decision they or anybody else could have made.
For embedded systems (Score:2, Informative)
1) This is old news. You can find a much better story [eetimes.com] from yesterday over at the EETimes.
2) This is for embedded systems and is not really relevant for PC-based systems.
3) This isn't even taped out yet... matter of fact they are not even planning to have the design done for another 18 months... it is vapour until you can actually buy it and that isn't slated until sometime in 2003.
4) This might give Transmeta a serious run for its money if it is ever produced, because they are both in the same space... Of course, TMTA still being around in 2003 is a bit on the presumptuous side.
5) Oh never mind, why do I even bother...
Re:Just a guess... (Score:3, Informative)
There really is some intelligence and talent working for this company; I'd like to see what they can produce. Maybe in a few months, if there are no decent benchmarks (by that time, someone somewhere should have written code to use their logic, right?), I'll jump on the "it's a myth" bandwagon, but I'm willing to give them a chance first.
I used to work there, when they were called EVSX (Score:5, Informative)
They were the Austin branch of a company called Exponential Tech. Doing a google on that should bring you up to speed on the Apple connection. I wouldn't really consider them a startup as they've been around for several years and have designed a number of very popular things (e.g. DSPs for other chip manufacturers).
They were a great bunch to work for, especially for being kind to a rather wet-behind-the-ears sysadmin like I was. The only downside to working there was the gawd-awful commute I had to do from far NE Austin to far SW Austin. (If you're an EE type who'd like to live in Austin, they'd IMHO be a great place to work [intrinsity.com].)
Re: (Score:2)
Re:Exponential? (Score:2)
Re:Exponential? (Score:2)
I remember the 533MHz model drawing well north of 80W of power - which, when compared to the PPC750, is almost an order of magnitude more, clock for clock.
Apple has shipped at least five million units since those times; imagine if all of them drew, say, 50W more than they do now. At three hours' use per day, that'd be, what, roughly 275GWh wasted every year...
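Checking that arithmetic (a sketch; the five-million-unit, 50W, and three-hour figures are the post's own assumptions):

#include <stdio.h>

int main(void)
{
    const double units = 5e6;       /* machines shipped        */
    const double watts = 50.0;      /* extra draw per machine  */
    const double hours = 3 * 365.0; /* hours of use per year   */

    double gwh = units * watts * hours / 1e9; /* watt-hours -> gigawatt-hours */
    printf("%.0f GWh wasted per year\n", gwh); /* ~274 GWh */
    return 0;
}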
Don't hold your breath... (Score:2)
2) They seem aimed at the embedded market. I don't think you will likely see "meaningful" benchmarks.
This quote is from the eetimes article [eetimes.com].
Re:Just a guess... (Score:1)
Re:Why is everything non-Apple a myth? (Score:2)
Re:Why is everything non-Apple a myth? (Score:5, Interesting)
> other than PS, ever, ever, ever, ever?
They also benchmark with Media Cleaner Pro, which is a very widely-used media encoding application. At this past Macworld Expo NY, a Mac with a G4/867 in it took a Spiderman movie trailer from tape to Web, and then played the result, well before the similar Intel machine (1.7GHz P4) could even finish encoding the clip. Same task, same media, same application, same RAM, same hard disk, same graphics adapter. Only thing that's different is Mac OS / Windows, G4 / P4 and the mobo. The machines even end up being equivalently-priced (I think they use Compaq workstations for these tests).
> How could anyone question the validity of an
> application that has always been primarily a mac
> application?
Photoshop has been running on both Mac and Windows platforms for years and years now. It is optimized for Intel with the assistance of Intel engineers. It is optimized for the Mac by Adobe engineers all on their own.
I work in music and audio, and it is very performance intensive
Video, music and audio, graphics, encoding and encryption
Re:Why is everything non-Apple a myth? (Score:3, Interesting)
Video, music and audio, graphics, encoding and encryption
And if Apple really wanted to let you tap into that power, they would have shared their hardware specs with Be.
The primary reason Be ported to x86 was that Apple got pissed at them for showing up MacOS on the PPC architecture. Apple took its ball and went home.
So if you really want a fair comparison of architectures, why not compare BeOS on x86 to MacOS on PPC? I realize it's not likely, the same types of apps are not available for BeOS and probably will never be... but let's not chalk up these so-called benchmarks to the CPU architecture quite yet...
Re:Why is everything non-Apple a myth? (Score:4, Informative)
Saying that their OS was running apps slower is kind of silly when it's not preemptively multitasked. If you really wanted to, you could just steal the processor from the OS and never give it back.
And Apple stopped sharing specs because they didn't want hardware competition.
That said, Be didn't stop porting because they needed the specs. They didn't need the specs. They stopped porting because they wanted to stop. Perhaps because they wanted to know that Apple would support them in the future, but whatever.
Re:Why is everything non-Apple a myth? (Score:2)
You're damn right someone got rankled -- Jobs did.
Saying that their OS was running apps slower is kind of silly when it's not preemptively multitasked.
I never said the apps run slower on MacOS. Freudian slip?
And Apple stopped sharing specs because they didn't want hardware competition.
Your history is a little screwed up. It went like this:
Be created BeOS for their own hardware, using Hobbit processors. (The BeBox.)
These did not sell well.
They scrapped the idea of selling hardware and ported BeOS to PPC. BeOS began to win acclaim as it smoked, esp. compared to MacOS on the SAME EXACT HARDWARE.
The G3 came out, and Be could not get Apple to release the G3 hardware specs, without which BeOS could not run on the new hardware.
Be ported to the more open architecture of x86, for better or worse, and their user base grew beyond what they achieved as PPC-only.
I think it's pretty clear Be was no longer competing on hardware. Apple did not share the specs because they were tired of getting shown up on their own hardware.
That said, Be didn't stop porting because they needed the specs. They didn't need the specs. They stopped porting because they wanted to stop. Perhaps because they wanted to know that Apple would support them in the future, but whatever.
Gee, what an eloquent argument.
Re:Why is everything non-Apple a myth? (Score:2)
And, BeOS smoked on the exact same hardware... at *what*?
Of course it smoked at multitasking, and most everything that any user cares about. I'm not about to suggest that it wasn't more efficient than MacOS in the general case. But this was a discussion (long ago) of photoshop/media cleaner performance. That sort of app does not benefit from multitasking or any of the modern features of BeOS. They benefit from the ability to monopolize the processor. The only reason that I responded to your original post is that it is not likely that photoshop's performance would be improved hugely by BeOS.
Re:Why is everything non-Apple a myth? (Score:3, Informative)
Good theory. And it is what Be said.
Do you know how long it took for the PPC Linux developers to get the Linux kernel running on the new G3 machine? About 2 weeks. How many people work on the PPC specific parts of the Linux kernel? About 2 or 3. I can only guess how many software engineers worked at Be at the time, but I imagine more than 2 or 3. So, how stupid do you think people are? Be didn't get BeOS running on the G3 because -THEY DIDN'T WANT TO- just as Elwood said in a parent post to this. The fact that they lied and whined that it was Apple's fault made me lose a great deal of respect for them.
I'd also like to point out that Apple is a HARDWARE VENDOR. Do you think Apple makes money selling Mac OS X for, err, $89 or so? Of course not. It's a loss-leader to get people to buy their hardware, which has a higher markup than most consumer PC hardware. People have been talking for years about how Apple should give up on hardware and move to software. It won't happen. Apple losing control over their hardware platform would greatly reduce the added value that their products give over consumer PCs.
Re:Why is everything non-Apple a myth? (Score:2)
When ported to the Macs, it flew. No doubt. But remember, the danger to Apple wasn't that Be was faster on Mac hardware; it's that it could be faster on PowerComputing hardware.
BeOS+PowerTowerPro == some serious shit.
The port to Mac hardware was a no-brainer. The port to x86 was a gamble, and the decision to drop PPC support came suspiciously quickly on the heels of a fat wad of financing from Intel.
Re:Why is everything non-Apple a myth? (Score:2)
Except that they never dropped support for PPC...
Re:Why is everything non-Apple a myth? (Score:2)
Apple does real-world demonstrations of Photoshop and Media Cleaner Pro. If you are working with images at all (in graphics, video, Web browsing even), then the calculations (such as resizing or blurring an image) that Photoshop is doing faster on the Mac are germane (Photoshop's filters are even used in other graphics apps). If you are doing any kind of encoding or encryption, then the Media Cleaner Pro demos are germane (the Mac took a clip from tape to Web, then played the clip for the audience, and the Intel machine still wasn't done encoding yet). The files for the Media Cleaner Pro demos are just whatever movie trailer is newest (last time was Spiderman). Apple also demonstrated realtime, high-quality MPEG-2 encoding on their fastest PowerMac at the last Macworld. There is no counterpart on a Pentium for them to benchmark against. These are the things Apple's customers do with Macs, which are built to run these kinds of apps. Apple just demonstrates that their machine is better for those users. What, exactly, makes you think you know better?
TechTV was also skeptical, and they recreated an Apple demo, pitting a G4/733 against a P4/1.8GHz and the Mac won. TechTV is not a Mac-friendly site, and indeed they had openly disparaged Apple's test before they did their own version. They called it a draw, but add up the numbers and you'll see that the G4 finished the overall set of tasks faster.
The reason they use the word "myth" is that it is a falsehood that people WANT TO BELIEVE. It would be great if the 1.8GHz P4 were really twice as fast as a 900MHz PIII, but it is not. At the same time, it's easy to point at the lower clock speed of the G4 and get in some good Mac-bashing. Unfortunately, it just shows that you're an idiot who hasn't used both platforms. If you had, you'd hold your tongue. I mean, if P4-based machines were doing these huge CPU-intensive tasks twice as fast as Macs, then why are people still buying Macs? Photoshop and Media Cleaner Pro are mature apps with the same features on both platforms. In these fields, time is money, and what hardware you buy is almost irrelevant. The cost of switching is nothing if you get to go twice as fast once you get there, yet people who have to encode hours of video every day are not trading in their Macs for P4's. Two of the three PowerMac models come with DVD burners and MPEG-2 encoding software built in, and people are making high-quality DVDs at home now, in 2x or real time. That's very heavy computational lifting.
Re:Why is everything non-Apple a myth? (Score:3, Informative)
There seems to be some confusion. SPARC, Athlon, Alpha, and Itanium are not faster performers than the P4 (except the Itanium, which beats the P4 at FP).
Let's have a look:
P4/1.8GHz: SPECint - 574, SPECfp - 618
Athlon/1.4GHz: SPECint - 495, SPECfp - 426
Alpha/1001MHz: SPECint - 561, SPECfp - 585
SPARC/900MHz: SPECint - 439, SPECfp - 439
Itanium/800MHz: SPECint - 314, SPECfp - 655
Re:So what? (Score:2)
Re:Hardware vs hacking (Score:1, Funny)
Re:Stephen King, author,**NOT** dead YET (Score:2, Funny)
Re:2.2 ghz didn't help me (Score:1)
TI makes more chips than Frito-Lay (Score:2)
we'll know whether it was built robustly, or whether they just jacked up the MHz and left the rest built real shoddily.
If Paul Nixon comes from the TI school of design, it will be built to last [glowingplate.com].
Remember, TI makes more chips than Frito-Lay [fritolay.com].
Re:TI makes more chips than Frito-Lay (Score:2)
A friend of mine recommended going to a 4.10 gear, but I think it would be too low - I'm thinking maybe a 4.30 gear would be better. Narrowing or tubbing is not an option, and we've fitted the biggest tires possible. What is your opinion?
First off, it must be a lot of fun when you get a silly little Honda pulling up beside you thinking it's gonna race you - or is it strip-only? (Ignoring slicks, because I've driven on the street with them; that setup is a little wild for Saturday night cruising!)
Okay. Let's do the math. TH-400, as far as I know, is a 1:1 ratio in top gear. If your rev limiter is kicking in at 7,000 RPM, and your torque converter stalls at 4,800 RPM, there's something wrong if your driveshaft isn't spinning at the same speed as the engine.
Okay. Take your engine speed (redline, 7,000 RPM) and divide it by your rear gears (4.56).
I'm coming up with 1,535 RPM. That's the speed at which your rear wheels will be spinning.
Now, once you know the circumference of your tires (you gave me the width, which is good for establishing that you're getting traction), you can calculate your speed. Remember also that your tires will be pulled larger by centrifugal force at high speeds; this will affect your calculations.
Compare this speed with the terminal velocity on your timeslips.
Experiment with the differential numbers and recalculate the speeds (fire up the spreadsheet in StarOffice) until you come up with a number bigger than the terminal velocity on your timeslip. I wouldn't go much higher than whatever rear gear gives you a small rise over your current terminal speed; going too far will eat your 60-foot times - but so will a bad day at the strip, and you should be more concerned with wasting power on the rev limiter.
Since they're likely to be the tallest tires you can fit into the factory wheel wells, I'd take a guess that 4.30 gears will put you into range. Consider, though, where your engine's horsepower curve starts to come down - probably before the rev limiter kicks in. Maybe 4.10, but just by gut, I still think that 4.30 would be better.
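To make the spreadsheet step concrete, a quick sketch of the calculation (assuming the TH-400's 1:1 top gear; the 28-inch tire diameter is a guess - plug in your real measurement):

#include <stdio.h>

/* top speed at the rev limiter: wheel RPM x tire circumference, converted to mph */
static double mph_at_redline(double rpm, double axle_ratio, double tire_dia_in)
{
    double wheel_rpm = rpm / axle_ratio;         /* 1:1 top gear, so driveshaft = engine */
    double circ_in   = 3.14159265 * tire_dia_in; /* tire circumference, inches           */
    return wheel_rpm * circ_in * 60.0 / 63360.0; /* inches/min -> miles/hour             */
}

int main(void)
{
    const double redline = 7000.0, tire = 28.0;  /* assumed tire diameter, inches */
    const double gears[] = { 4.56, 4.30, 4.10 };

    for (int i = 0; i < 3; i++)
        printf("%.2f gear: %.1f mph at redline\n",
               gears[i], mph_at_redline(redline, gears[i], tire));
    return 0;
}

With those assumptions it prints roughly 128, 136, and 142 mph; pick the gear whose number just clears your timeslip's terminal speed.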
That's a really cool car. :)
Re:Keep the 2GHZ CPU I want... (Score:2)
Re:I know i'm the last one who doesn't,... (Score:2)
But, in general, when the digerati start springing jargon on me, I visit www.everything2.com and just type it in. Now you can laugh and revel in your 3733t-ness.
regards,
sean