Supercomputing

Latest Top 500 Supercomputer List Released

chrb writes "BBC News is reporting on the release of the June 2010 Top 500 Supercomputer list. Notable changes include a second Chinese supercomputer in the top ten. A graphical display enables viewing of the supercomputer list by speed, operating system, application, country, processor, and manufacturer."
  • Computers still seem to be getting exponentially faster by the year... when will silicon give way? http://en.wikipedia.org/wiki/File:TOP500-2008.svg [wikipedia.org]
    • by somersault ( 912633 ) on Monday May 31, 2010 @08:36AM (#32406414) Homepage Journal

      I think power requirements are probably the main problem, rather than the hardware. It must be pretty trivial to add more cores to a system that's already using tens of thousands of them, but you're going to need a lot of power.

      These systems are only really getting "faster" for parallel tasks too - if you gave them a sequential workload then I assume they would fare worse than a high end gaming machine!

      • Re: (Score:3, Insightful)

        by TheRaven64 ( 641858 )

        These systems are only really getting "faster" for parallel tasks too - if you gave them a sequential workload then I assume they would fare worse than a high end gaming machine!

        I doubt it. A good fraction of them use POWER6 processors, which are still a lot faster than any x86 chip for most sequential workloads. On top of that, they typically have a lot more I/O bandwidth. They might only be a bit faster, but it would have to be a really high-end gaming rig to beat them.

        • Re: (Score:3, Informative)

          by Sique ( 173459 )

          "A good fraction" in this case means: Less than 10%. In fact, only 42 out of 500 use POWER.

          • by cdpage ( 1172729 )
            ...that's not nothing to gripe at, dude.

            What I wonder is, what % of the flops are they contributing versus all the others... if it weren't for the top 5 having close to 1 million cores, they might make up more than 10% of the computational contribution, no?

            I would like to see this graphical chart include that...

            That, and perhaps a distribution chart too... I'd like to see how the PS3 is faring now in distributed computing.
      • by Entropius ( 188861 ) on Monday May 31, 2010 @08:59AM (#32406580)

        Parallel tasks are the whole point of using a supercomputer. The gains made in speed for sequential tasks really haven't been that great; Moore's Law for sequential tasks fell apart a while back.

        Being able to parallelize a task is a prerequisite for putting it on a supercomputer.
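
        Amdahl's law is the usual way to make that precise: if only a fraction p of a job can be parallelized, n cores give you at most 1/((1-p) + p/n) speedup. A minimal Python sketch, with illustrative fractions and an arbitrary core count:

        ```python
        # Amdahl's law: upper bound on speedup when a fraction p of the
        # work parallelizes perfectly across n cores. Numbers illustrative.
        def amdahl_speedup(p: float, n: int) -> float:
            return 1.0 / ((1.0 - p) + p / n)

        for p in (0.50, 0.90, 0.99):
            print(f"p={p:.2f}: {amdahl_speedup(p, 224_000):.0f}x on 224,000 cores")
        # p=0.50 -> 2x, p=0.90 -> 10x, p=0.99 -> ~100x: the serial
        # fraction, not the core count, sets the ceiling.
        ```

        Which is why a purely sequential workload gains essentially nothing from a Top 500 machine.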

        • by wagnerrp ( 1305589 ) on Monday May 31, 2010 @11:28AM (#32408012)

          Parallel tasks are the whole point of using a supercomputer.

          Well, it is now. The original supercomputers were based around a single very fast processor, with a number of co-processors whose sole purpose was to offload I/O and memory prefetch so the CPU could churn away without interruption. Modern out-of-order CPUs are effectively an old-style supercomputer on a chip. Heavy use of parallel processing didn't really take off until the late 80s. That paradigm shift is what caused the supercomputer market crash in the 90s, as development devolved from custom CPUs to throwing as many generic cores at the problem as you can and using custom interconnects to mitigate the parallel overhead.

          • by Anonymous Coward

            I always thought computers (and supercomputers) were nothing more than proprietary implementations of someone's attempt to simplify their pseudocode. It all boiled down to memory and bus bandwidth issues, not the speed of the processor. That's where the DEC/Compaq/HP Alpha was retired, as was HP PA-RISC, yet theoretically the Sun SPARC and IBM Power designs should succeed. Instead we see these astonishingly bogus processors that you call "general purpose" when they are nothing more than the byproduct of bad ma

            • by Sanat ( 702 )

              Back in early 1980 I headed up a team of techs to install Wang VS systems all over China. I had to take a train to Harbin (far north) from Beijing because it was still too cold to fly there.

              I was a visiting American scientist, and as such, in each province I visited, the governor would have a banquet in my honor and we would all drink wu-shing pigu (5-star beer) and eat great food. Those who know me personally know that I do not like being the center of attention, so this was really out of my comfort zone.


        • Actually, having been involved in a fair amount of work with the CERN and Fermilab parallel computing groups, I can attest that they are using the supercomputers at their disposal for highly sequential data processing, mostly parsing a data set looking for a given pattern. For the amount of data they collect, this sequential processing is paramount.

          Consider trying to run a SELECT on a non-indexable 2TB database table. You have to look at every row, and it takes time. But if we can parse the table by several ma
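
          The split-the-table approach being described can be sketched in a few lines. A minimal sketch, assuming in-memory rows, a substring predicate, and a worker count that are all made up for illustration:

          ```python
          # Partitioned scan: each worker runs a plain sequential scan over
          # its own slice; only the split across workers is parallel.
          from concurrent.futures import ProcessPoolExecutor

          def scan_chunk(rows, pattern):
              return [r for r in rows if pattern in r]

          def parallel_select(rows, pattern, workers=8):
              size = max(1, len(rows) // workers)
              chunks = [rows[i:i + size] for i in range(0, len(rows), size)]
              with ProcessPoolExecutor(max_workers=workers) as pool:
                  hits = pool.map(scan_chunk, chunks, [pattern] * len(chunks))
              return [row for chunk in hits for row in chunk]

          if __name__ == "__main__":
              rows = [f"event-{i}" for i in range(1_000_000)]
              print(len(parallel_select(rows, "99")))
          ```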

      • Yes, but you could just give them multiple sequential workloads. It won't speed up any individual sequential workload very much, but you still get more work done overall.

        I wouldn't worry too much. We've been pretty good at finding things to help us keep up with the effects of Moore's law.
  • June?! (Score:5, Funny)

    by Anonymous Coward on Monday May 31, 2010 @08:31AM (#32406378)

    Holy crap, the supercomputers are so fast they're in the future!

    • It's like an early Christmas ...

      And speaking of that, a nice present would be an account on a supercomputer for running whatever I want. A Top 5000 would do, presumably.

      Hopefully the universe won't mind if we call May June and thereby manipulate Moore's Law in our favor. Year over year if Moore's Law holds while the calendar grows shorter, the light speed barrier shall be overcome.

      At any rate, this is the age of the Internet and global news updated by the minute. Supercomputers are expensive to upgrade so k

  • Looks like a 2nd NSCS supercomputer located in China is in the top 10. Does that make it "Chinese"?

  • Linux (Score:5, Informative)

    by B5_geek ( 638928 ) on Monday May 31, 2010 @08:33AM (#32406398)

    Yay for Linux!

    Seriously, if this doesn't make every PHB take notice I can't imagine what would. (Hey boss, it's free too!)

    • Re: (Score:1, Insightful)

      by Anonymous Coward

      All our admins and all of our users only know Microsoft systems. Training isn't free.

      • Re:Linux (Score:5, Insightful)

        by Pharmboy ( 216950 ) on Monday May 31, 2010 @08:55AM (#32406550) Journal

        All our admins and all of our users only know Microsoft systems. Training isn't free.

        So your users can't use Linux on the server? Or is it that all the users use supercomputers on the desktop? Our biz has all MS on the desktop and all Linux on the server. Obviously it is completely seamless. As for the admins, any admin worth their salt is always learning new things just to keep up with technology as it changes. Learning Linux by installing it on one system to start is trivial, and in certain situations it's much easier to set up than Windows: DNS servers, web servers, etc.

        If your admins can only work on a server if it uses a mouse, you need new admins.

        • Re: (Score:1, Insightful)

          by Anonymous Coward


          If your admins can only work on a server if it uses a mouse, you need new admins.

          Agreed. Oftentimes you can't count on morons simply being canned or replaced, though. The fact is there are a lot of fools out there who think "system administration" simply means knowing which buttons to click in the right order. Any understanding beyond that simply doesn't exist, and is lost on them.

          This limitation isn't simply one of "GUI vs CLI" or "Windows vs Linux". It's really one of wanting to understand something bey

        • Re:Linux (Score:4, Insightful)

          by Black Art ( 3335 ) on Monday May 31, 2010 @01:20PM (#32409012)

          In my experience Windows admins require *MUCH* more training than Linux admins. There is much more "black magic" that they need to know to be good at their jobs.

          A Windows admin needs to know all the secret registry hacks to make things run well. They need to know all the non-intuitive places that Microsoft hides the settings for whatever services need to be configured. They also need to know how to recover things when it all goes horribly wrong.

          Most Linux systems have text files to configure things. The files are in a predictable place. Updates are pretty easy and clear.

          But Microsoft has scammed people into believing that leaving is harder than just putting up with the same old crap. In this case I just wish that people did get what they pay for...

          • Re: (Score:3, Informative)

            I've done systems administration on both platforms for years and I don't think that there is any real appreciable difference between the amount of knowledge and training needed on one vs. the other when comparing systems that perform similar functions. Compare Active Directory to OpenLDAP+Kerberos 5, for example. They are very, very similar in a lot of ways; so much so, in fact, that OpenLDAP+Kerberos 5 can be used to host the directory portion of a Windows domain.
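
            As a hedged illustration of how alike the two look from a client's point of view, here is a sketch using the third-party Python ldap3 library; the hostname, DNs, and password are placeholders, and an AD domain would typically match on sAMAccountName where OpenLDAP uses uid:

            ```python
            # Searching a directory for a user; the same code can talk to
            # OpenLDAP or Active Directory. All names below are placeholders.
            from ldap3 import Server, Connection, SUBTREE

            server = Server("ldap.example.com")
            conn = Connection(server,
                              user="cn=admin,dc=example,dc=com",
                              password="changeme",
                              auto_bind=True)
            conn.search(search_base="dc=example,dc=com",
                        search_filter="(&(objectClass=person)(uid=jdoe))",
                        search_scope=SUBTREE,
                        attributes=["cn", "mail"])
            for entry in conn.entries:
                print(entry.cn, entry.mail)
            ```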

            • From my experience (mainly Linux), it is much easier to set up Linux boxen if you are using them for dedicated tasks, as it is pretty easy to install only the software you need for that task, thus reducing the amount of maintenance in the long run and narrowing down the possible causes of problems. Overall, I would tend to agree that the learning curve is likely equal on both platforms, although finding answers to common Linux issues online is pretty damn easy and fast.

              I'm not against Windows on the server per

            • Re: (Score:3, Interesting)

              by evilviper ( 135110 )

              I don't think that there is any real appreciable difference between the amount of knowledge and training needed on one vs. the other when comparing systems that perform similar functions.

              I'm afraid I'd have to completely disagree with you there. My preferred example isn't Active Directory, but CIFS...

              With Windows, you do all the user management, then click through the painful server setup wizard to turn on file sharing, and everyone's happy and thinks it's oh so easy to do... Right up until some guy tries

              • CIFS? Okay, well...

                determines that your gigabit network is ACTUALLY a slow, high-lag (most likely dial-up) link, and dynamically switches to some ancient version of the protocol to give you slightly less overhead, which seems to work fine at first blush but just doesn't allow UPLOADING (downloading is okay) files over 2GB.

                The type of problem you describe can just as easily happen with Samba as it can with Windows Server. (Not sure about the specific problem you're describing, but protocol negotiation p

                • The type of problem you describe can just as easily happen with Samba as it can with Windows Server.

                  Nope. I happen to know the described problem cannot happen with Samba.

                  (Not sure about the specific problem you're describing, but protocol negotiation problems in general between a Samba server and a Windows client are found in abundance in the Samba FAQs and on the Samba mailing lists, where I'm a regular) Part of the problem is that some aspects of the protocol can be controlled by the client as well as th

            • I've never done Windows admin work, but plenty of Solaris and Linux (mostly Linux) and some other UNIX OSes over the years. I've also worked as a Windows developer for a while, and I kind of agree that Windows is harder to develop on, and I would assume harder to admin as well.

              I've always held high regard for Windows developers. They don't have the internals (source), nor the POSIX standards (yes, I know there is some POSIX stuff for Windows). When I was a Windows developer, I felt like I always had to reinvent th

      • All our admins and all of our users only know Microsoft systems. Training isn't free.

        I guess you are pretty well !#@%ed, but then again the world still needs ditch diggers. ;)

      • All our admins and all of our users only know Microsoft systems. Training isn't free.

        So... you don't have internet access? I don't know of any Microsoft routers, switches, firewalls, etc.

        And I'd respond to your statement by saying that admins aren't free either. If you're using Windows on your servers, the overwhelming majority of studies say you have a lot more admins than you would need if you switched to some non-Windows server operating system.

        Honestly, for the cost of a couple idiot MCSEs, you could

    • The weird thing is that there are several entries on the statistics page http://www.top500.org/stats/list/35/os [top500.org] that are actually ALSO Linux: not just the top 405, but also the RedHat, CentOS, CNL, SLES (CellOS?) etc. entries... Looking at it that way, UNIX is already outcompeted, with a few entries of AIX and OpenSolaris. I wonder what happened to Plan 9 on the Blue Gene...
    • Re: (Score:1, Insightful)

      by Anonymous Coward

      Yay for Linux!

      Seriously, if this doesn't make every PHB take notice I can't imagine what would. (Hey boss, it's free too!)

      How is this relevant to the environment most PHBs control? We're talking supercomputers here... Ferraris... Lamborghinis... not super-reliable diesel trucks. Most PHBs want uptime, not go-fast-real-quick.

    • welcome to 1995 (Score:3, Informative)

      by Colin Smith ( 2679 )

      um. you want a Beowulf with that?

      Linux has been on the supercomputer lists for over a decade.

      Google is a much better example of how you can use Linux to take over the world, which is what every self-respecting middle manager wants to do.

      I.e. shitloads of cheap compute power. Got any tasks which need that?

    • by AHuxley ( 892839 )
      Somewhere in Utah the Linux boxes are being counted.
      A nice round inflation-adjusted number would be $1000?
      Darl McBride is handed a list of the naughty and very naughty.
  • By Processor (Score:4, Interesting)

    by TheRaven64 ( 641858 ) on Monday May 31, 2010 @08:38AM (#32406432) Journal
    The view by processor is quite interesting. AMD has the top spot, but the majority of the top 500 have Intel chips. There are still two SPARC64 machines in the top 100, and a third one down at 383. All three SPARC64 machines are in Japan, which isn't entirely surprising. IBM makes a good showing, but it's interesting to see how far behind x86 they are, in a market that was traditionally owned by non-commodity hardware.
    • Re: (Score:3, Insightful)

      by pwilli ( 1102893 )
      I would have expected more AMD-based systems in the top 100, because supercomputers are usually built with cheap, moderately fast processors, the market segment where AMD gives lots of bang for the buck.
      • Re: (Score:3, Insightful)

        by Entropius ( 188861 )

        If you're Intel you have more money to spend on marketing, which means "we'll give you a cut rate on a lot of 10000 processors just so we can have the bragging rights."

        • by maxume ( 22995 )

          It's quite likely that they can offer a hefty discount and still make a profit on the transaction.

          • Re:By Processor (Score:4, Interesting)

            by stevel ( 64802 ) * on Monday May 31, 2010 @11:42AM (#32408170) Homepage

            System and component vendors don't make money on these "lighthouse account" supercomputer sales. My experience, having worked in the past for a vendor that did this a lot, is that they're a money-loser. The motivation is bragging rights, though that can be fleeting. I know of several times that my employer declined to bid on a supercomputer deal as it would just be too expensive.

            Typically, these systems are actually sold by system vendors (Dell, HP, IBM) and not processor vendors, though the processor vendor will support the bid. That #1 "AMD" system is actually a Cray. Software also plays a large part in success or failure.

    • Re: (Score:1, Interesting)

      by Anonymous Coward

      What's more interesting is that the Chinese supercomputer is second overall with only 55,680 (Intel) cores and 1.271 petaFLOPS.
      That's almost 170,000 fewer cores than number 1 (AMD), and only 500 teraFLOPS less.
      And it's 70,000 fewer cores than number 3 (IBM), yet 200 teraFLOPS faster.
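
      Taking those quoted figures at face value, the per-core arithmetic is easy to check (system names filled in from elsewhere in the thread):

      ```python
      # Per-core throughput implied by the numbers quoted above.
      systems = {
          "#2 Nebulae (Intel)":  (55_680, 1_271e12),
          "#1 Jaguar (AMD)":     (55_680 + 170_000, 1_271e12 + 500e12),
          "#3 Roadrunner (IBM)": (55_680 + 70_000, 1_271e12 - 200e12),
      }
      for name, (cores, flops) in systems.items():
          print(f"{name}: {flops / cores / 1e9:5.1f} GFLOPS/core")
      # ~23 GFLOPS/core vs ~8 for the other two; as a reply below notes,
      # much of Nebulae's throughput comes from nVidia GPUs.
      ```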

      • Re:By Processor (Score:4, Informative)

        by TheRaven64 ( 641858 ) on Monday May 31, 2010 @08:58AM (#32406576) Journal

        It's especially interesting for two reasons. Firstly, because at that sort of scale interconnect throughput and latency can make a much bigger difference than processor speed. With HyperTransport, AMD has had a huge advantage over Intel here (IBM also uses HyperTransport). It looks like QPI might have eliminated that advantage. Beyond that, you have the supporting circuitry - you don't just plug a few thousand processors into a board and have them work, you need a lot of stuff to make them talk to each other without massive overhead.

        The other interesting thing is that the Chinese are using Intel processors at all. I would have expected them to use Loongson 2F chips, or Loongson 3 if they were out in time. I'm not sure if Loongson wasn't up to the job, or if they had some other reason for using a foreign-designed chip.
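
        A common back-of-the-envelope for the interconnect point is the latency-bandwidth ("alpha-beta") model: sending n bytes costs alpha + n/beta. The parameter values below are made up for illustration:

        ```python
        # Alpha-beta message cost: a latency term plus a bandwidth term.
        def msg_time(n_bytes, alpha=2e-6, beta=5e9):  # 2 us latency, 5 GB/s
            return alpha + n_bytes / beta

        for n in (8, 8_192, 8_388_608):
            print(f"{n:>9} bytes: {msg_time(n) * 1e6:8.1f} us")
        # Small messages are pure latency; only large ones ever see the
        # bandwidth, which is why interconnect latency dominates at scale.
        ```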

        • The other interesting thing is that the Chinese are using Intel processors at all. I would have expected them to use Loongson 2F chips, or Loongson 3 if they were out in time. I'm not sure if Loongson wasn't up to the job, or if they had some other reason for using a foreign-designed chip.

          Loongson has great TDP but isn't all that ballsy. If you're trying to do the job with fewer cores, it's not in the running. So what if something else takes twice the power? It's the people's money. Same as here.

        • Re: (Score:3, Informative)

          by Pharmboy ( 216950 )

          Wikipedia [wikipedia.org] shows the highest-performing Loongson system before April scored 1 teraflop peak, "and about 350 GFLOPS measured by linpack in Hefei". Sounds like they are focusing on performance/watt more than on being the fastest, from a read of the rest of the article. Still pretty fast stuff, considering their newest system has 80 quads and is claimed to have a peak around 1 teraflop.

          • "and about 350 GFLOPS measured by linpack in Hefei".

            Ouch. Most new desktop computers score 35-70 GFLOPS, right? That's only ~5-10x faster.

            I suppose if it only used 500 watts, it might be worth bragging about - but I can't find any hard power consumption figures.

      • Re:By Processor (Score:4, Informative)

        by Jeremy Erwin ( 2054 ) on Monday May 31, 2010 @09:51AM (#32407006) Journal

        What's even more interesting is that the nVidia chips that made Nebulae so fast seem to have escaped your notice.

    • The flash thingy is really neat: there is a demo [prefuse.org] of this free library [prefuse.org].

  • by Jah-Wren Ryel ( 80510 ) on Monday May 31, 2010 @08:45AM (#32406466)

    How about a direct link to the actual site [top500.org] - or even the actual list? [top500.org]

  • LINPACK (Score:3, Interesting)

    by ProdigyPuNk ( 614140 ) on Monday May 31, 2010 @08:49AM (#32406504) Journal
    I think this is the first benchmarking article I've read in years where the organizers actually know what their benchmark program does: http://www.top500.org/project/linpack [top500.org]. Refreshing to see real statistics (as good as they can make them), instead of the normal crap that fills most hardware articles these days.

    I wonder what kind of score these beasts would get on 3DMark?
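
    For anyone curious what LINPACK actually measures: it times the solution of a dense system Ax = b and divides a nominal operation count by the wall time. A toy single-node version of the idea with NumPy (nothing like the real HPL harness):

    ```python
    # Mini-LINPACK: time a dense solve and convert the ~(2/3)n^3 LU
    # operation count into a FLOPS figure. Problem size is arbitrary.
    import time
    import numpy as np

    n = 2000
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    t0 = time.perf_counter()
    x = np.linalg.solve(A, b)  # LU factorization + triangular solves
    elapsed = time.perf_counter() - t0

    flops = (2 / 3) * n**3 + 2 * n**2
    print(f"{flops / elapsed / 1e9:.2f} GFLOPS, "
          f"residual {np.linalg.norm(A @ x - b):.1e}")
    ```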
    • Linpack is no benchmark. Let me know when any of them can begin to manage Adobe Flash.

      - Steve J

  • by cshbell ( 931989 ) on Monday May 31, 2010 @08:50AM (#32406518)
    The list should more accurately be called "Top 500 publicly acknowledged supercomputers." You can go right on thinking that the US NSA, British MI6, and even some private industries (AT&T?) don't have vastly larger supers that are not publicly disclosed.
    • by Yaos ( 804128 )
      It depends on what you consider a supercomputer. If you have 100 systems running a single cluster for virtual machines, is that a supercomputer because all of the servers are working together? When you go to Google to search for something that goes to one of their datacenters, all of their systems are hooked together to allow very fast searching and serving of results. Is the system behind Google search a supercomputer?
  • I wish the graphic had a "By Cores" view. Wow, a computer with over 120,000 cores! Phenomenal.
  • Food? What food? (Score:5, Interesting)

    by hcpxvi ( 773888 ) on Monday May 31, 2010 @08:53AM (#32406532)
    Of the UK entries in this list, the first few are Hector (the national supercomputing facility), ECMWF, Universities, financial institutions etc. But there are also some labelled "Food industry". I wonder what I am eating that requires a supercomputer?
    • by Zembar ( 803935 )

      Calculating the caloric density of turkey twizzlers?

    • by sznupi ( 719324 )

      Simulations of chemical processes? Estimates of future harvests, and research into chemicals used for agriculture? I can't know if that's it, but there you go: some examples where it might be worthwhile.

    • Re: (Score:3, Funny)

      by tivoKlr ( 659818 )
      Maybe they're using it to determine why anyone would eat Haggis [wikipedia.org].
      • Re: (Score:1, Informative)

        by Anonymous Coward

        Because it's delicious, seriously! Don't knock it till you've tried it. It's not conceptually much different from a big sausage, anyway.

    • IATOOMA, but maybe safety analysis for things like disease in foods?
    • Of the UK entries in this list, the first few are Hector (the national supercomputing facility), ECMWF, Universities, financial institutions etc. But there are also some labelled "Food industry". I wonder what I am eating that requires a supercomputer?

      Weather simulation, perhaps? Weather has a huge impact on crop yields.

      Or perhaps bioinformatics for genetic tinkering.

    • It's not what you are eating, but how they figure out how to sell their food to you. It takes some serious crunching to digest the enormous platterfuls of data on consumer buying trends for pizza, based on age, geographical location, typical Google search histories, and reaction to the percentage of red in existing pizza ads!

      On the other hand, I must admit to being curious about what the 'perfect' pizza, matched exactly to me by one of the world's fastest computers, would actually taste like...mmmm...pizza
    • I wonder what I am eating that requires a supercomputer?

      Doesn't the fast food industry use supercomputers to count the calories of its products, and to annually calculate the number of clogged arteries of its patrons?

    • Re: (Score:3, Insightful)

      by et764 ( 837202 )
      One guy I used to work with, who had worked in supercomputing, claimed he did one project involving aerodynamic simulations of Pringles chips. Apparently they were originally shaped like wings and would become airborne when traveling along high-speed conveyor belts. They used a simulation to find a shape that wouldn't generate so much lift.
  • Would be interesting to see how the botnets compare

  • Seriously, China is able to see a lot of the advancements made in the US through its army of grad students (the Chinese government essentially helps them cheat on all the tests they need to do well on in order to study in the US; they consider it to be in their national interest). Meanwhile China won't let a foreigner anywhere near their technology. Is it any surprise then that they are getting close to the top?
    • by Anonymous Coward

      Do you actually think that everything was and is invented in US? A man that doesn't know the history will lose the future.

      • Re: (Score:1, Troll)

        Nope, but nothing innovative has come out of China since the communists took over. I don't even have problems with pretty much any other nation on earth. It's just China that steals technology en masse and then calls it their own. It's China that is trying to take over the world. It's China that is destroying the world economy.

        That's why I boycott Chinese goods. I don't boycott any other nation's stuff, and actually I am better for it. Chinese goods are insanely shoddy. I tend to get much better quality
  • by Plekto ( 1018050 ) on Monday May 31, 2010 @09:32AM (#32406814)

    I was curious if any privately owned (non-corporate or government) machines made the list, and where they placed.

  • actual purpose (Score:3, Interesting)

    by Iamthecheese ( 1264298 ) on Monday May 31, 2010 @09:40AM (#32406902)
    In years past, as many as 7 out of 10 officially listed computers were for security research. Now, contrary to the article, that's down to 2.

    Jaguar -- general research (http://www.nccs.gov/computing-resources/jaguar/)
    Roadrunner -- security research (http://www.lanl.gov/)
    Kraken XT5 -- general research (National Institute for Computational Sciences/University of Tennessee)
    Tianhe-1 -- unstated
    Pleiades -- security research (nukes)

    "Recently expanded to accommodate growing demand for high-performance systems able to run the most complex nuclear weapons science calculations, BGL now has a peak speed of 596 teraFLOPS. In partnership with IBM, the machine was scaled up from 65,536 to 106,496 nodes in five rows of racks; the 40,960 new nodes have double the memory of those installed in the original machine"

    Intrepid -- General research
    Ranger -- General research
    Red Sky -- General research

    It makes me wonder whether the machines for nuclear research went underground, or maybe it just doesn't take a top-ranking supercomputer to calculate a nuclear explosion anymore.
    • Perhaps nuke simulations have indeed reached a level where more crunching power isn't worth it anymore; why build a completely new system to do a blast-sim if your existing machine does it in two days? Perhaps there isn't a market for more than X blast simulations per year...

      Anyway, WOW, 40,960 NEW nodes... If every BGL node is a single U of rackspace, then even ignoring network/UPS/etc requirements, that means adding 1,000 racks to the already existing ~1,500...

    • Re: (Score:2, Informative)

      by zeldor ( 180716 )

      Pleiades isn't nukes, it's NASA: airplanes and weather.
      Of the others, some are nukes, some are open unclassified uses:
      NOAA/NSF/etc.

    • Re:actual purpose (Score:4, Interesting)

      by rdebath ( 884132 ) on Monday May 31, 2010 @12:18PM (#32408450)

      As I understand it, most of the nuclear research simulations that it would be nice to run simply cannot be done on any modern machine. If it's only a few particles, they can be simulated on a laptop, but the interesting interactions need simulations of millions or billions of points, with every single one of them influencing every other one in the simulation.

      As a simple example, a genetic algorithm was used to program some reconfigurable FPGA chips. A layout was grown on the chip that did the job but broke just about every rule of FPGA design. There were parts of the layout on the chip that were not connected to any circuit, but removing them made the device fail to work. Transferring the layout to a different chip got you a non-working circuit. It would be great to be able to simulate this... not a chance, it's too big, by so very many orders of magnitude.

      http://www.netscrap.com/netscrap_detail.cfm?scrap_id=73 [netscrap.com]
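
      The "every single one of them influencing every other one" part is what makes this explode: the work grows as N^2. A toy all-pairs kernel (gravitational-style, with a made-up softening term) shows the shape of the problem:

      ```python
      # Toy O(N^2) all-pairs interaction kernel. eps softens close
      # encounters; zeroing the diagonal drops self-interaction.
      import numpy as np

      def pairwise_accel(pos, mass, eps=1e-3):
          diff = pos[None, :, :] - pos[:, None, :]     # (N, N, 3)
          dist2 = (diff ** 2).sum(axis=-1) + eps ** 2  # (N, N)
          inv_r3 = dist2 ** -1.5
          np.fill_diagonal(inv_r3, 0.0)
          return (diff * (mass[None, :] * inv_r3)[:, :, None]).sum(axis=1)

      rng = np.random.default_rng(1)
      acc = pairwise_accel(rng.standard_normal((1_000, 3)), np.ones(1_000))
      # 1e3 points -> 1e6 pairs per step; 1e9 points -> 1e18 pairs.
      ```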

    • Are you counting the entire list of computers or just the top 10? Is the first list supposed to be ones used for security research and the second for general research? If so, Red Sky and possibly others are used for security research.

      The change is that most supercomputers at the national laboratories are not single-use, and are thus listed as general research even if they spend a large proportion of their cycles on security research.

  • Crysis (Score:1, Redundant)

    by egcagrac0 ( 1410377 )

    All this talk of high end computers, and no mention of Crysis?

    Not sure how I feel about that.

  • The sidebar about treemaps is as interesting as the main article. An interesting way to display complex data in a compact form.
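
    For the curious, one level of a treemap is just proportional slicing of a rectangle; a real treemap then recurses into each slice with the cut direction flipped. A minimal sketch (the weights are illustrative, loosely the OS shares of the list):

    ```python
    # One level of a slice-and-dice treemap: split a rectangle into
    # slices whose areas are proportional to the weights.
    def treemap_slices(weights, x, y, w, h, vertical=True):
        rects, total, offset = [], float(sum(weights)), 0.0
        for wt in weights:
            frac = wt / total
            if vertical:  # side-by-side vertical slices
                rects.append((x + offset * w, y, frac * w, h))
            else:         # stacked horizontal slices
                rects.append((x, y + offset * h, w, frac * h))
            offset += frac
        return rects

    # e.g. Linux / Unix / other shares of the 500 systems:
    print(treemap_slices([455, 25, 20], 0, 0, 100, 100))
    ```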
  • "It's measured against a theoretical benchmark - if you ran a real-world application you might get a very different answer".

    Next bulletin:

    "Vista-based benchmark testing complete - converts Jaguar to big pussycat"

    ;o)

  • by Anonymous Coward

    "Linux family" operating systems went from 89% in the previous list to 91% of this one [top500.org].

    Not that the field wasn't already dominated, but it's an interesting milestone. (FWIW, Linux passed 75% in 2006-11, 50% in 2004-06, and 25% in 2003-06.)

  • This is my chance to snark back at the SPARCophiles at my former employer, Sun. You'll notice that Sun has a respectable presence on this list, lagging just behind SGI. And not a single Sun system on the Top 500 runs SPARC. They're all x64.

    The SPARC Uber Alles mentality at Sun in its last days was really frustrating. I was working on x64 systems that were widely considered the best in their class. But you couldn't get the marketing and sales people to make an effort to sell them. They'd march into sales mee

  • Server farms, that is! Thanks, WETA.

"For the love of phlegm...a stupid wall of death rays. How tacky can ya get?" - Post Brothers comics

Working...