Intrinsity Claims 2.2 GHz Chip

PowerMacDaddy writes "Over at SiliconValley.com there's an article about an Austin, TX startup named Intrinsity that has unveiled a new chip that combines a new logic design with conventional fab processes to achieve a 2.2GHz clock rate. The company is headed by former Texas Instruments and Apple Computer microprocessor developer Paul Nixon. The real question is, is this all FUD, will the real-world performance be part of The Megahertz Myth, or is this thing for real?"
  • That sounds slow compared to Chuck Moore's new design [colorforth.com].

    I don't doubt that it will work as he says since his previous designs ran at up to 800 MHz with a 0.8 micron process (see the middle of this page [ultratechnology.com]).

  • It's just the other end of the MHz Myth, isn't it?

    More importantly - this sounds like a company looking for venture capital...but I get the feeling they are maybe a year too late.

    Anyone want a 2GHz chip? I mean...really...

  • The question is what instruction set? If it's custom, then they are in trouble (unless they expedite work on ports of Linux, GCC, X, etc.). If its instruction set is based on something current but not quite mainstream (Alpha, M68k) then they will be in decent shape. If it runs the x86 (IA-32) instruction set, then they've got a good chance. If it is something else (IA-64 or x86-64) then I'm not quite sure. But of those two, I think they would have a better chance if they went with AMD's x86-64. But whatever the instruction set, I think that we've seen that an easy way to get into the market fast is to embrace Linux and other open source projects head on. Transmeta works great (it supports IA-32 and can run Windoze, which is part of it), nVidia has done great because of great hardware and IMHO Linux support. ATI couldn't have cared less, but then they got their act together, supported Linux, and they are doing great now.

    To sum up:

    SUPPORT LINUX
    • According to the EETimes article pointed to elsewhere in this thread, the instruction set will either be MIPS or PowerPC, with the most likely nod being MIPS.

      One place MIPS sees huge market penetration is in networking equipment, especially Cisco routers. If Intrinsity can clock up to 2.2GHz without massively increasing power consumption and heat dissipation, I could see Cisco's high-end routers using the hell out of MIPS CPUs built with that technology.

  • Is this company having an IPO anytime soon? I think they are trying to cash in on the megahertz myth with unwary investors. They push the story now, have two or three follow-up stories repeated by news organizations who don't know any better, and voila!, they have a ready-made blip for their first few days of trading.

    Notice, the article is quietly misleading readers into thinking this chip is somehow comparable/compatible with x86 instruction sets... like they have somehow trumped Intel to the 2.2 gig mark, the same way AMD trumped them to the 1 gig mark about a year ago.

    Watch the blip, then sell short.
  • Hmm..... (Score:2, Insightful)

    by forsaken33 ( 468293 )
    Hoping this comes to desktops next year... or at least the threat comes to the desktops. "Its first products will be designed to control high-speed communications equipment." High speed as in what? Telecom/cable quality? Or professional networking material? (just had to put that there... most people don't read the article. I'm usually one of them lol)

    IF this does come to desktops... that is good. More competition = lower prices. But there are lots of issues that are still unclear. What kind of packaging will this be in? Will it require a proprietary motherboard? If it does... well... I'm sensing that this won't last too long. "Intrinsity's test chip achieved faster performance using conventional methods, where other chip makers have generated chips running at 400 to 500 megahertz, or about one-fourth as fast as the Intrinsity chip." So what's this supposed to mean? Maybe they should make that clear. Is that saying that any chip over 400 or 500 MHz uses special manufacturing techniques? That would be the majority of chips... so how can that be special then?

    Also... "Much of Intrinsity's work has involved making improvements to a fundamental building block for processor chips: the logic circuit. Intrinsity relies heavily on a faster but trickier type of circuit, called dynamic logic, than do conventional processors. Dynamic logic circuits can handle more complex functions with fewer steps than static logic circuits." So does this mean specialized applications/OSes? Not worrying about Linux... I know it will be ported. But if this needs a special OS, and special new (read: expensive) applications... I think it will go under.

    Proves the technology is there, though, which is a good thing

    • Dynamic logic circuits can handle more complex functions with fewer steps than static logic circuits. So does this mean specialized applications/OSes?

      Short answer: No.

      Programs see only the chip's instruction set and high-level design; the low-level circuit implementation is hidden.
  • by Ted V ( 67691 ) on Tuesday August 14, 2001 @06:41PM (#2122434) Homepage
    Just take a normal processor and put an inverter ring off to the side, running at 100 MHz, and connected to nothing but power and ground.

    Back in the 60s, the power of a radio was measured by the number of transistors. That is, until one radio company put hundreds of useless transistors on their board and didn't even wire them up. After that, radios started getting measured on real abilities like quality of sound. Maybe computer marketing will catch up some day, marketing meaningful numbers: minimum FPS in Quake 3!

    -Ted
  • FUD means Fear, Uncertainty, and Doubt. The term you're looking for is vaporware.
  • What makes you think that something created by Apple and posted on their site is anything other than PR material and FUD? Just because it denounces one of the Slashdot "Great Satans" (Intel) doesn't necessarily mean that it is any more true than Intel marketing claiming that bigger MHz == better.

    And honestly, just because the G4 does better on some obscure Photoshop benchmarks really doesn't make up for its lack of scalability (as compared to RISC chips like the UltraSparc II and III) and its lack of good performance in real world applications (as compared to AMD and Intel x86 chips). Please stop the spread of pro-Apple FUD now.

    • Good point that slashdot should have pointed to, say, the ArsTechnica article on the advantages of the PPC architecture instead of the Apple propaganda. Despite that, no one can doubt that there is a "Megahertz Myth" to a great extent, though perhaps not to the extent Apple suggests. Look at the AMD vs. Intel race right now - people assume that the fastest P3/4 is faster than the fastest Athlon without actually looking at performance results.
    • The MHz myth is true, as you admit. It seems you just have a problem with Apple pointing it out.
    • Just because it denounces one of the Slashdot "Great Satans" (Intel)

      Slashdotters love Intel; not so long ago these boards were full of, "look at my cheap overclocked dual-Celeron system!"

      The GHz myth (I'm updating it a little here) is true and Apple makes a point. I would think average consumers would be more comfortable with an Apple link than, say, Joe Blow's homemade Linux-based benchmark tool. I'd rather refer non-techies to an Apple page than to something a bit more technical, especially if they're considering buying an Apple.
  • If this 2.2 GHz chip were commercially available, do you think that Intel (and AMD??) would release one to match it/beat it in a week?

    My guess.... probably.
    • Not if the patent on the chip manufacturing process is protected. Remember, they created this chip with a fab that was designed to produce 400 MHz chips. Imagine, if you can, being able to produce 8GHz chips in 18 months using the same fabs that they use to create ~1.5GHz chips.
  • As the space between the lines in the article says: This won't be for desktop computers, yet...

    There is an obvious problem that people keep forgetting: RAM speed. The RAM (and mainboard) can't supply the CPU with data fast enough to process. Anyone care to elaborate on this with some math and tech info, maybe some predictions on RAM vs. CPU-bus speed development?
    • Well, I think we could really get to enjoy bumping a 1.4 GHz Athlon to 2.2 GHz once there start to be motherboards with dual DDR channels (4.2 GB/s)

      There is still a latency problem, but intelligent caching and compiler design can mitigate that problem, especially if there is a bandwidth surplus available for speculative fetching.

      Eventually, to conquer the latency beast, we will need to move more memory closer to the CPU. To do that is going to take moving to serial interconnects for lower pin counts, and reducing the physical footprint on the mainboard.

      Unfortunately, as RAMBUS found out, running several hundred MHz over a motherboard trace is difficult. There is noise from other channels, stray capacitance, that sort of thing. This is especially bad if you use a multi-point bus system. My guess is that eventually we will have to move to a point-to-point serial memory bus. This has the advantage of maintaining low latency, while scaling bandwidth with the number of memory modules.
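
      As a rough, illustrative back-of-envelope (my own assumed numbers, not from the article): a 2.2 GHz core that wanted one 64-bit operand from memory every cycle would need about 17.6 GB/s, roughly 4x what the dual DDR channels mentioned above could deliver, which is exactly why caches and speculative fetching matter. A tiny C sketch of that arithmetic:

      /* Back-of-envelope sketch; the 8 bytes/cycle demand figure is an
         assumption for illustration, and 4.2 GB/s is the dual DDR channel
         figure quoted above. */
      #include <stdio.h>

      int main(void)
      {
          double clock_hz     = 2.2e9;                        /* 2.2 GHz core */
          double bytes_per_cy = 8.0;                          /* one 64-bit operand per cycle */
          double demand_gbs   = clock_hz * bytes_per_cy / 1e9;
          double supply_gbs   = 4.2;                          /* dual DDR channels */

          printf("demand %.1f GB/s vs supply %.1f GB/s (%.1fx short)\n",
                 demand_gbs, supply_gbs, demand_gbs / supply_gbs);
          return 0;
      }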
      • Eventually, to conquer the latency beast, we will need to move more memory closer to the CPU. To do that is going to take moving to serial interconnects for lower pin counts, and reducing the physical footprint on the mainboard.

        I'm not sure that switching to a serial system would help enough. While you could clock it more quickly, you'd still have a hard time matching the bandwidth of a many-line solution. This could ironically result in longer latencies, because despite the higher clock speed, you'd have to sit there and wait for all 32+ bits of the missed word or 128+ bits of the cache line to be transferred before resuming operation.

        IMO, a better approach might be running many shielded lines in parallel transmitting data with self-clocking codes. This allows faster clocking by removing the need to keep all lines in sync with each other; data could be rebuilt in buffers at the receiving end.

        Regardless of the bus implementation, you'll still likely be limited by the speed of the RAM used.

        The final solution to all of this will probably come when we can put a big enough L3 cache on a die to hold the entire working set of most programs. That will give us a short, fast, wide path to L3 memory. Main memory will only be accessed for streaming data or for random accesses to huge databases. In the first case, a high-bandwidth, high-latency bus is acceptable. In the second case, I doubt anything we do will overcome latency problems.

        An interesting design problem to think about, in any event.
  • Big deal. Wake me up when we start talking terahertz.
  • MHz (Score:5, Informative)

    by room101 ( 236520 ) on Tuesday August 14, 2001 @06:37PM (#2127850) Homepage
    The real question is, is this all FUD, will the real-world performance be part of The Megahertz Myth, or is this thing for real?"

    It doesn't matter if it is real or vapour, it will still fall prey to the "Megahertz Myth". Maybe someday, people will understand: non-similar architectures can't be compared by MHz alone. And even most similar arch's can't be compared via MHz, as the Intel v. AMD war will tell you.

    It is even worse than that! No single metric will ever give you the whole story.
    • Re:MHz (Score:3, Insightful)

      by geekoid ( 135745 )
      Actually, the article has little to do with clock rate comparison the way you're thinking of it; it has more to do with manufacturing and core improvements which could possibly raise the MHz across the board. I'll wager they'll try manufacturing chips, but when that fails, 1 of 3 things will happen:
      1) They license the tech, which is what they should do from the beginning.
      2) AMD or Intel will buy them.
      3) AMD and Intel (independently) will gear up their marketing drones, and this chip will fade from memory.
      What we need is a testing algorithm that all processors use. Then we can rate chips as "it completed the Moffitt algorithm in 1.5 minutes!".
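
      Just to make the idea concrete, here is a minimal sketch of such a fixed-workload rating harness in C; the loop below is a made-up stand-in workload, not any real "Moffitt algorithm":

      /* Time a fixed amount of work and report seconds, not MHz.
         The workload itself is an arbitrary placeholder. */
      #include <stdio.h>
      #include <time.h>

      static unsigned long long workload(void)
      {
          unsigned long long acc = 0;
          for (unsigned long long i = 0; i < 200000000ULL; i++)
              acc += i * i;   /* fixed, deterministic busywork */
          return acc;
      }

      int main(void)
      {
          clock_t start = clock();
          unsigned long long result = workload();
          double seconds = (double)(clock() - start) / CLOCKS_PER_SEC;

          printf("checksum %llu, finished in %.2f seconds\n", result, seconds);
          return 0;
      }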
  • How does Fear Uncertainty and Doubt come into play?
  • This product looks like a way to create extremely fast logic that approaches the performance of dynamic logic. It looks like it would be used either to make FPGA or CPLD devices, or full custom logic (the site isn't clear on this). They claim this could be used for any high performance logic (which would imply it could be used in processors). Their site is extremely short on details and it looks like this product could be vapor, especially given the fact that they start with a Flash animation...
    • It looks like it is not actually a processor.

      They seem to be trying to figure out placement of logic.

      Remember, this is VERY important in a chip: 80% of wires, and therefore heat, come from the clock sync inside the chip (according to an IBM PowerPC paper in the ACM microprocessor journal).

      Placement is very lucrative; Cadence and such make millions from it. But this seems to be FUD, because you can run a process at 0.10 micron at TSMC now and they are standardising on it, so it should be doable.

      It does not seem to be anything but a hoax, e.g. the clock rates mean nothing unless the whole chip runs at that frequency and is RISC with no caches and no pipelines, of which I assure you there are few.

      What counts is memory bandwidth and how often you use memory.

      regards

      john jones
  • Interesting how Apple's G4 propaganda manages to leave out any comparison to an AMD processor.
  • This is fantastic! Hopefully they will be able to live up to the hype, unlike Transmeta. Motorola and Intel spend billions on new plants and manufacturing processes, and then a small Austin company comes into the market and will shock everyone.
  • Amiga-like technology lives on.
    The custom chipset in the A1000 also used precharge/evaluate dynamic logic with a 4-phase clock. 'course it was only clocked at twice NTSC color-burst frequency, not 2.2 GHz...
    Actually this was common design methodology in many 4 to 8 micron (not 0.4 or 0.8!) NMOS chip designs of that era.
  • ...Exponential... anyone?
  • Too late! The beowulf meme strikes again...
  • If a shorter pipeline is so much better, why didn't Apple produce a chip with a 7-stage pipeline that runs at 1.8 GHz? -Yuioup
  • Weird article... (Score:5, Insightful)

    by ergo98 ( 9391 ) on Tuesday August 14, 2001 @06:54PM (#2138697) Homepage Journal

    In a nutshell this is saying "Someone said something, but it might be bogus, and the cycle speed really doesn't mean much anyways." Alrighty then. This is like a "nothing to see here, move along!" type of article.

    • In a nutshell this is saying "Someone said something, but it might be bogus, and the cycle speed really doesn't mean much anyways." Alrighty then. This is like a "nothing to see here, move along!" type of article.

      Except in this case there isn't even a really cool, splattered dead guy to stare at.
  • by c-w-k ( 142705 ) on Tuesday August 14, 2001 @06:43PM (#2140479)
    eetimes [eetimes.com]
  • by Anonymous Coward on Tuesday August 14, 2001 @06:40PM (#2145946)
    Now that the CPU isn't the bottleneck anymore, let's work on memory and other bus bottlenecks...
  • What's so great about 2.2GHz? Intel is selling 1.8GHz processors right now, will be launching 2.0GHz processors within the next two weeks, and there are Pentium 4 processors -- both within Intel and outside in the hands of overclockers -- running at 2.2GHz or higher already. (And note that the ALU is double-clocked, ie running at 4.4GHz).

    If this story was two years old, it might be significant... but it is far from revolutionary right now.
    • If you read the article, you would know what the big deal is. If they can scale this process to the current chips, we could see 8GHz chips.
      The big deal is they took a method that's used to create 400 MHz chips, and created a 2.2 GHz chip (roughly a 5.5x jump; apply the same factor to today's ~1.5 GHz parts and you get about 8 GHz).
    • The eetimes article
      http://www.eetimes.com/story/OEG20010813S0060
      makes it sound like this thing is targeted more towards the embedded market, where (so the article says) the top chips are running at 500MHz. Not sure why they wouldn't try for a desktop PC solution...?
      • Not sure why they wouldn't try for a desktop pc solution...?

        Power, efficiency & scalability. Embedded systems are far more complex than just a PC in a little box.

  • by taniwha ( 70410 ) on Tuesday August 14, 2001 @07:05PM (#2151323) Homepage Journal
    From memory, 8080s had some dynamic nodes - the upside is that you can squeeze some extra gate delays out of some circuits (dynamic carry chains are a good example) - the downside is a chip with a MINIMUM clock speed - which makes testing (scan and ATE, etc.) much harder - those expensive testers we test chips with just don't go that fast.

    Given that net delays are becoming the gating factor in big chip designs, dynamic logic seems to me to just be a sideshow - unless the long wires are themselves the dynamic nodes (transmission lines with solitons moving on them?); now that would be interesting ...

    Potentially much more interesting IMHO is clockless asynchronous logic - but CAD tools just aren't up to supporting this methodology (oh yeah and the synchronous clock based mindset is pretty entrenched too).

    • Charles Moore has made several 'clockless' (well, self-clocked) asynchronous CPU designs [colorforth.com] and created his own CAD tools [colorforth.com] to do it. He is able to do this by keeping his designs very small and simple... but they are quite fast. Prototype chips of one of his earlier designs are available from Ultra Technology [ultratechnology.com]. So far he has been backed only by small companies, probably because he is ten years or so ahead of conventional system designers. -- Mike Losh
  • Re: (Score:2, Interesting)

    Comment removed based on user account deletion
    • by Christopher Thomas ( 11717 ) on Tuesday August 14, 2001 @07:23PM (#2170205)
      What is dynamic logic? How is it different from conventional logic wired together with different types of gates?

      Both dynamic and static logic use logic gates or blocks that are wired together. The difference is in how the gates are implemented internally, and how they pass data back and forth.

      CMOS is a good example of static logic. It uses pull-up and pull-down transistor networks to make sure that outputs are always strongly asserted. This makes CMOS gates big and makes input capacitance larger than it otherwise needs to be. But, it's well-understood, has a few attractive features, and has a whole slew of design tools built for it.

      Precharge logic is a good example of dynamic logic. It uses the parasitic capacitance of the output line to store the output value. The output node is charged up on one half of the clock (precharge phase), and left floating on the other half (readout phase). During the readout phase, the inputs are asserted. Inputs are fed into a pull-down transistor network that drives the output low if it should be low, and leaves it alone if it should be high. This style of logic takes up half the space of CMOS logic, has half the input capacitance, and has stronger driving capability (NFETs pulling down typically drive 2x-3x more strongly than PFETs pulling up). This means that if you play your cards right, you can make precharge logic circuits that are faster *and* more compact than CMOS logic circuits. The downsides are that designing and verifying precharge logic is a royal pain, and that you have to have a clock input into the logic block.

      The article describes a more complicated dynamic logic scheme with a four-phase clock. These kinds of schemes have been floating around in research literature for years, but are usually not used because of the greater complexity and fewer tools available.
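
      For anyone who wants to see the precharge/evaluate idea spelled out, here is a toy behavioral sketch in C (purely illustrative; real dynamic logic is an analog circuit with charge sharing, leakage and keepers, none of which is modeled here):

      #include <stdbool.h>
      #include <stdio.h>

      /* The "output node" is the charge parked on the gate's parasitic capacitance. */
      typedef struct { bool node; } dyn_gate;

      /* Precharge phase: the clock pulls the output node high. */
      static void precharge(dyn_gate *g) { g->node = true; }

      /* Evaluate phase: a series NFET pull-down network discharges the node
         only when both inputs are high (a dynamic NAND); otherwise the node
         simply floats at its precharged value. */
      static bool evaluate_nand(dyn_gate *g, bool a, bool b)
      {
          if (a && b)
              g->node = false;
          return g->node;
      }

      int main(void)
      {
          dyn_gate g;
          for (int a = 0; a <= 1; a++)
              for (int b = 0; b <= 1; b++) {
                  precharge(&g);                      /* clock half 1 */
                  bool out = evaluate_nand(&g, a, b); /* clock half 2 */
                  printf("a=%d b=%d -> nand=%d\n", a, b, out);
              }
          return 0;
      }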
  • Q3 (Score:3, Funny)

    by jinx_ ( 88343 ) on Tuesday August 14, 2001 @06:39PM (#2156701)
    alright! another 2 fps in quake3!

    *sigh* I want a turbo button on my computer. Except, instead of halving my speed, I want it to drop down to 33MHz so I can play all my old games properly under DOS.

    • by Ted V ( 67691 )
      Or is it moslow? Anyway, there is a program you can use to run games slower. Like... "moslow 10 ultima4" runs ultima 4 at 10% speed. One test of how well a game is programmed, though, is whether or not it needs moslow after 10 years. Games like Doom, Commander Keen, and Prince of Persia all run fine without moslow. Ultima 7 is a different story...

      -Ted
      • I tried MoSlo, but it has (had?) the problem of running semi-jerky in most games that I was interested in. Example: it worked fine with Wing Commander II, but the game was so jerky that it was unplayable anyway.

        • Re:Moslo (Score:3, Interesting)

          by zulux ( 112259 )
          A friend and I made a small video game, and being a better programmer than me, my friend made a bit of code that estimated the speed of the computer and added a delay loop to the game to slow it down.

          Fast forward to today.

          We lost the complete source code, and our computers are so darn fast that the bit of code that estimates the speed of the computer overruns its 16-bit int slot. The game now just hangs.

          So we are forced to run our game in Windows to slow it down. It works half the time - it depends on the time slicing. Recently our computers are getting a bit too fast for even that - so we might have to move to an emulator.

          The smart thing to do would be to fire up the hex editor and edit the code, but that would be *cheating*.

          • Been there. Crappy DOS game. Polled the clock until it fell over. The clock precision was crap, so I had a slight hitch. Also, I wasn't taking into account the execution time of the *real* code, so when I moved to a 386 (from an XT) I was like... "damn, this is fast."

            The better approach for your problem is to use the OS's delay function, if you can. (Under Unix, try nanosleep() or select() with all the fd sets cleared. Sleep() should work on Windows.) You free up the processor for other tasks, you don't have to do that speed-estimating crap, and uh, you don't hit yer bug.

            I would assume you figured this out but you said your other friend who was smarter than you wrote the code :-) .. nice story.
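
            A minimal POSIX sketch of that approach (the 60 Hz target and game_tick() are made-up placeholders, not from the original game):

            #define _POSIX_C_SOURCE 199309L
            #include <stdio.h>
            #include <time.h>

            static void game_tick(int frame)
            {
                printf("frame %d\n", frame);   /* stand-in for the real game logic */
            }

            int main(void)
            {
                /* ~16.67 ms per frame for a 60 Hz pace, independent of CPU speed. */
                struct timespec frame_time = { 0, 16666667L };

                for (int frame = 0; frame < 10; frame++) {
                    game_tick(frame);
                    nanosleep(&frame_time, NULL);  /* let the OS idle the CPU until the next frame */
                }
                return 0;
            }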

    • Dosemu under Linux has better slowdown capability than Moslo.exe. I have successfully played 4 or 5 of the elder Ultimas and they seemed to run decently with the artificial slowdown.

      One project I worship is http://exult.sourceforge.net which has rewritten the Ultima 7 engine with timer-based animation, etc. It is *so* cool. Even if you're not into Ultima games, you should check out the project.

      -l
  • The video on Apple's site on the myth gives a really good explanation; even people who aren't CS majors can understand it.

    For those of you who want more, Ars Technica gave a great explanation [arstechnica.com] the week before I saw this live.

  • It's not MHz alone that determines the speed; it's just one of the factors. The rest would be:

    • Pipelined or non-pipelined, and the number of stages. The more stages the better (but watch out for the cache thrashing issue like in the P4).
    • Scalar or vector
    • How many n in n-Multiscalar
    • RISC or CISC
    • Internal bus speed
    • Memory bandwidth
    • I/O speed

    And many more. If you have learnt Computer Architecture, then you'd certainly be able to list hundreds more.

    Moreover, Apple wants to play catchup [theregister.co.uk] with x86... Hmmm... Do you smell something fishy?

  • The eetimes [eetimes.com] article clarifies. They are designing chips using dynamic logic, which has the disadvantage of eating up significantly more power. It is actually fairly common to use dynamic logic in chips, just not on a wide scale, where power is more important than transistor density or speed.

    x86 chips are not simple, and a full dynamic logic design of one is not likely. The company seems to have a very good background in automated design tools, but chips on the scale of x86 CPUs are not created in automated tools; they are created by hand and optimized (like assembly coding, to the software guys).
    "Intrinsity's bare-bones test chip operates at 2.2 GHz..." This is not that impressive on a bare-bones chip. They haven't even created an ALU capable of that speed, never mind a full CPU. This company also doesn't have any fabs, so they will be at the disadvantage Cyrix and AMD were at in their youth.

    Overall, they aren't likely to be making x86 CPUs any time soon. PDAs and laptops can't handle the power draw, so I'm not sure where that leaves them. Maybe they should team with Transmeta [transmeta.com] to solve their power problems. :-)
  • note to the editor (Score:4, Insightful)

    by Gen-GNU ( 36980 ) on Tuesday August 14, 2001 @07:35PM (#2170259)
    The real question is, is this all FUD...

    Well, I really doubt this will be FUD, since that stands for fear, uncertainty, and doubt. This article seems to be more of a hype piece.

    FUD is tearing down a competitor's product with vague statements and generalizations. FUD is not describing your own new product in glowing terms. That's just marketing BS.

    I know, I know... shouldn't nitpick. But when the term FUD is so depreciated on the main page at Slashdot, I really must object.

  • Here's what makes this announcement interesting to me:

    The company's test chips are fast. In an embedded market where the speediest MPUs push 500 MHz, Intrinsity's bare-bones test chip operates at 2.2 GHz in a 0.18-micron process with aluminum interconnects.

    No copper interconnects. No .13-micron process. These are things that I (as a non-chip engineer) can understand. Is this going to improve my life? Only time will tell. But I for one like technology for the sake of technology.

    Quotes taken from the eetimes article [eetimes.com].

  • This chip is aimed at the embedded markets. In those markets 500MHz is currently very fast. If they really can produce even 1GHz chips in the <5W or <10W embedded market they'll clean up.

    Even if they stick to their 2003 delivery date, 2GHz+ will still be fast in that market. They would be the leader in both speed and speed/Watt... but I bet they wouldn't be the cheapest... ;)

    Using either MIPS or PPC code is smart for the embedded market... just look at AMD's announcement earlier about discontinuing the 486 and other embedded market chips.

    Also - if this is normal 0.18 aluminum technology, the potential for someone wielding 0.15 copper, strained silicon, SOI (all of which decrease heat/power) is pretty amazing...

    =tkk

  • The real news here is that the 2.2GHz speed was achieved using a relatively common silicon process (0.18 micron, aluminum interconnects). Intel, AMD, and others are achieving higher speeds (~2GHz), but with much more developed processes (0.12 micron, copper).

    Intrinsity claims to have developed a new way to design and fabricate high speed logic using some older ideas, and this could be a significant achievement.

    Does this mean that Intel, etc. will be able to instantly make 4GHz chips? Nope. And as we all know, the speed of the chip isn't a great measure of its performance.

    By the way, that siliconvalley.com article was pretty weak. Did they try to omit as many details as possible?
    • Intel's Pentium 4, which is currently running at 1.8GHz, is fabricated on a 0.18 micron 6-layer aluminium process too. Neither AMD nor Intel is selling 2GHz processors currently, and no one is using 0.12 micron. There's an industry shift towards 0.13 micron, but it's not well established and currently only some Pentium III CPUs are using it. Intrinsity claims to be able to synthesize quad-phase dynamic logic, and this is interesting, but this is something that is present in CPUs already, and it certainly has its downsides. If you have no latches, verification is very hard. Timing computation is difficult since many tools are optimized for 1-clock static, not 4-phase dynamic. Also, dynamic will be harder to implement at lower process levels due to the higher leakage current (Ioff). This will force up the size of keepers on the interstitial nodes, which will offset the speed advantage while dramatically increasing power.
  • I took a look at the web site referenced in this article and saw Steve Jobs talking about how the 800 MHz G4 was faster than the 1.7 GHz Pentium 4. My question is this: what OS was running on the Pentium 4? Was it Windows (known to be a resource hog) or a Unix/Linux OS? My guess is that running a better OS would give faster results to the Pentium.

    Here is a challenge for Mr. Jobs: run the same Linux distro (i.e., Red Hat for Intel and Red Hat for G4) on each machine and then do the benchmarks. And while he's at it, try this new microprocessor for speed...

  • Only human (Score:2, Offtopic)

    by Saeger ( 456549 )
    Myth #1: The Internet is Too International to Be Controlled:
    TechReview's argument: Safe havens typically don't have enough pipe to host Napster volumes of data; and, to deter law-abiding companies in the "goodguy" international community from dealing with these outlaws, you will be punished with asset forfeiture if you so much as look at them.

    My counterargument: The first point is invalidated by the eventuality of distributed networks being more efficient with that volume of data anyway (think anonymous, dynamic akamai), and the second only requires that the "outlaws" be self-sufficient. e.g. If/when South Korea cracks down on the physical servers located @ astalavista.box.sk [astalavista.box.sk], it would resurface in a nebulous new form.


    Myth #2: The Net Is Too Interconnected to Control:
    TechReview's argument: Gnutella had to implement supernodes in order to fix its old bottleneck problem. What once was completely distributed now has a bit of hierarchy, and hence, is easier to attack with the help of the mega-ISPs.

    My counterargument: There's a big difference between a massive central server being targeted, and hundreds of thousands of potential supernodes, which can also pop into and out of existence with the same ease as regular peers. Also, they mention that ISPs may move from simple port blocking to traffic analysis in order to defeat Gnutella and other 'rogue' packets by sniffing their signature. That will work, but it also means that they'll NEXT have to blacklist ALL encrypted communication too--fat chance of that happening.


    Myth #3: The Net Is Too Filled with Hackers to Control
    TechReview's argument: You can restrict free communication most effectively at the hardware level. If consumers won't buy the crippled products, it becomes governments' job to mandate it, "just like [they] insist that cars have certain antipollution methods."

    My counterargument: I think people will get off their asses and 'revolt' before their last bastion of freedom is co-opted by the system. Also, as long as ANY communication is still possible, you can hide whatever data you want to communicate within that channel... defeating the Orwell network.

    • We'll never be as fast as Akamai... Just because Akamai gets to trust all its nodes. Without that, I'm sure all their algorithms fall to shit. And if they are provably the most efficient, then we can never be as efficient as them.
  • Microwave (Score:2, Funny)

    by Foxxz ( 106642 )
    2.2 GHz? My mom's microwave has been running at 2.45 GHz for years. The bugger gets so hot it cooks food! Oh, wait, or is that something different? ;)

    -foxxz

  • The below is almost wholly opinion based on vague observations of the universe. You may want to skip over this post, it rambles. I almost either posted it as AC or didn't post it, but I'm posting it using my account so I can get the score so people can see it and respond to it. I don't want moderator points, just responses, preferably from people who think I'm wrong (and can politely justify thinking I'm wrong).

    Once again, this is a sign that operating systems that tie you to a given hardware architecture are holding us back, and that Apple made a horrible mistake in not porting Mac OS X to alien hardware.

    Those companies that make software platforms need to realize that they **need** to learn to be hardware agnostic. Completely. Tying yourself to a platform is just not safe. Your operating systems need to be designed such that the hardware communication bits and the operating system bits are totally separated-- as OS X/Mach is-- and you need to find a way to make the practice of distributing binaries obsolete. We need, badly, some kind of abstract machine code that can be "compiled" to any hardware-specific machine code in an equally optimised fashion. I mean-- you would compile your program not to machine code, but to some kind of rpm-like package in a standard abstract machine code, the user would obtain and double-click this package, and the package would compile itself into the machine code of the computer the user is sitting at. (Since this would require retaining some algorithm information in the machine code, this would make disassembling / reverse engineering easier, of course, but it would still be highly preferable from a corporation's point of view to releasing your source for people to compile.) And no, unless your hardware is designed to make JIT interpreters transparent, VMs are not the way to do this.

    If they do not find a way to do this? Well, wholly open source operating environments (i.e., systems with no closed source portions, such as Debian) will then have an incredible, incredible advantage at some indeterminate point in the future (once there is actually a) actual competition in the processor market between a variety of architecture types, instead of the current "you're imitating x86, you're Apple, or you are very high-end" situation and b) a large enough portion of Linux/BSD users to sustain actual competition in the processor architecture market). Why? Because once the current ways of doing things start to exhaust Moore's Law, and people start looking for incredibly different ways of doing things, we will start to see a whole class of devices that only really shine under open source software-- because the closed-source world has to ship a different installer for each hardware architecture that the OS runs on, and the open-source world only has to ship one .tar.gz file and it will work on all architectures including future ones. Apple and Microsoft can port their OSes, sure, but what to? Moving an OS to a wholly different architecture is a HUGE undertaking, one I think only Apple has done before, when they moved from 680x0 to PPC. Apple did that about as well as anyone could, and it was a torturous process, in which the PPC macs had to have a built-in 68k emulator that the last 10 years' worth of software-- and at first, parts of the operating system-- all had to run through. The result was that until OS 8 came and the last bit of 68k asm was purged from the operating system, everything ran at a speed far under the PPC's potential. Emulators are *slow* and not fun, and convincing every app designer to recompile and redistribute their apps and/or release "fat" binaries for every Mac app they sell is not easy. Besides which, this is only temporary; you just have to wait long enough, and eventually your architecture will exhaust its limits. Apple can cling to the PPC for a long time, and they can move again if they have to. But even if they do move to a different processor architecture-- which will be stronger? Mac OS XVII, which after much porting work by Apple and all Mac OS vendors runs flawlessly on the Motorola DXM architecture, or ErOS 6, which can run on the Motorola DXM *and* the Intel Ubertanium86 *and* four other completely new architectures with alien instruction sets-- all completely flawlessly because all software is distributed as source code, and the user just compiles everything they install on their own machines, with the compiler optimising things for what the user needs most? Without a way for each user to compile the code, the decision of which architecture to switch to would have to be unanimous for all users of the operating system-- pick one path and stick to it-- instead of letting the individual user choose which architecture has the most cost-for-speed-efficient chips at the moment. There is, of course, the possibility of compiling your entire operating system and all apps into some VM, so that the OS and apps don't know which processor they're running on, but this would be slow too (unless you could have all machines regardless of processor have a coprocessor to do the JIT compiling for your VM in realtime, but that would be clumsy in practice).

    (Please note that I don't particularly think that open source software ruling the software industry would be a bad thing at all.)

    I don't think Microsoft would bother with either bytecode or emulation, though; they'll just stay where they are, where they're comfortable, and assume that they'll halt change in the processor market rather than change in the processor market halting them. (Meaning once we're all using chips that realign their logic pathway map for each program, and MS is still using something x86-compatible, game companies will start noticing Linux and it'll all be over for MS.) Apple, meanwhile, has ALREADY used their Super Kernel Messaging Mach Microkernel Powers to easily create an OS that, thanks to brilliant design, runs equally well on all architectures it is written for and can be ported to a new one in a matter of days ("there are billions of incompatible wintel devices, and you have drivers for none of them" notwithstanding). And once they had done this, what did they do? Release it for one system and one system only. Had they come up with a way to distribute software in abstract machine code (in the way I clumsily described it above) and announced plans to at some point in the future release OS X versions for all architectures in existence, they would now be poised to conquer the world; but they didn't. And they're not.

    Either way. Someday, we will reach a point where the operating system must be completely agnostic as regards hardware. This means abstractly designed architectures like Hurd and Mac OS X will have an enormous, enormous advantage, and hardware-tied monolithic thingies like Linux will have to flounderingly transition to each new architecture. (PS: which of the above two camps does NT fall into? HAL? What's that?) It also means that Debian's decision to let apt-get compile and install source packages for you as transparently as if they had been binaries is the only correct decision they or anybody else could have made.
  • For embedded systems (Score:2, Informative)

    by trixillion ( 66374 )
    A couple of notes:

    1) This is old news. You can find a much better story [eetimes.com] from yesterday over at the EETimes.
    2) This is for embedded systems and is not really relevant for PC-based systems.
    3) This isn't even taped out yet... matter of fact they are not even planning to have the design done for another 18 months... it is vapour until you can actually buy it and that isn't slated until sometime in 2003.
    4) This might give Transmeta a serious run for its money if it is ever produced, because they are both in the same space... Of course, TMTA still being around in 2003 is a bit on the presumptuous side.
    5) Oh never mind, why do I even bother...
