Obama's New Executive Order Says the US Must Build an Exascale Supercomputer 223
Jason Koebler writes: President Obama has signed an executive order authorizing a new supercomputing research initiative with the goal of creating the fastest supercomputers ever devised. The National Strategic Computing Initiative, or NSCI, will attempt to build the first ever exascale computer, 30 times faster than today's fastest supercomputer. Motherboard reports: "The initiative will primarily be a partnership between the Department of Energy, Department of Defense, and National Science Foundation, which will be designing supercomputers primarily for use by NASA, the FBI, the National Institutes of Health, the Department of Homeland Security, and NOAA. Each of those agencies will be allowed to provide input during the early stages of the development of these new computers."
In 30 years, this is our next cell phone. (Score:4, Funny)
Re: (Score:3, Informative)
No, but it can figure out you suck at the game.
Re:In 30 years, this is our next cell phone. (Score:5, Funny)
But can it play Crysis?
ftfy. :)
Re: (Score:2)
I can, why can't you?
Re: (Score:2)
That does explain some things...I always wondered why people give me such odd looks when I walk around in public.
And the NSA? (Score:4, Insightful)
Re: (Score:3)
Likely a new gift for the NSA (Score:4, Insightful)
Exactly.
My first thought was the new addition will be tasked by the NSA/FiveEyes to break encryption for intercepted communications.
Classified Data (Score:5, Interesting)
Exactly.
My first thought was the new addition will be tasked by the NSA/FiveEyes to break encryption for intercepted communications.
Why are you assuming they don't already have one doing that, and this is just a public version?
There is a lot of highly secured government data infrastructure out there that I hear about even without asking. The cable in Virginia that accidentally gets cut by a backhoe, and the guys in a black van who show up ten minutes later. The contract for a government data center inside a Faraday cage. The government likely already has much more computing power available than we know about.
Re: (Score:3)
I agree. The very fact that we'll know where this datacentre is probably means it will be used for relatively lower-security stuff. The exascale supercomputer for actually analyzing the NSA intercepts probably already exists.
Re: (Score:2)
We can be fairly sure that the NSA has some serial dedicated hardware for cracking common encryption systems like AES. They will still be reliant on things like dictionary attacks because brute-forcing the entire keyspace is impractical (unless they have quantum computers).
How should we react to that? Well, obviously we need a good password that can resist dictionary attacks. Beyond that, unless you are a big enough perceived threat to warrant time on an expensive computer you probably don't have to worry too much. They certainly won't be using it to help out the FBI, risking its existence coming to light.
Re: (Score:2)
Serial? Must be pretty fast to do operations of that kind in serial.
Re: (Score:2)
We can be fairly sure that the NSA has some serial dedicated hardware for cracking common encryption systems like AES. They will still be reliant on things like dictionary attacks because brute-forcing the entire keyspace is impractical (unless they have quantum computers).
How should we react to that? Well, obviously we need a good password that can resist dictionary attacks. Beyond that, unless you are a big enough perceived threat to warrant time on an expensive computer you probably don't have to worry too much. They certainly won't be using it to help out the FBI, risking its existence coming to light.
Maybe. Based on the documentaries that have been made, it's pretty clear that the NSA used their phone-metadata recording to help the FBI locate the Boston Bomber, despite the risk that it would become public. (Which it did shortly thereafter, but for other reasons--i.e., Snowden).
The FBI does domestic counterterror. The NSA is the big bad in terms of not seeing the inherent harm and threat to democracy in snooping on everyone's communications, sure, but they're still trying to be good guys and so they'll s
Re: (Score:3, Insightful)
if the new computer is 30x faster than the fastest one currently deployed and in use.. you've got
1x for weather (noaa),
1x for health (nih),
1x for science (nsf),
1x for nasa, and
1x for energy (doe);
and each of those organizations will be thrilled at having the extra computational power.. that leaves the equivalent of 25 left over for the unconstitutional, illegal, and/or classified shit that they really want it for. the legitimate uses are what they use to sell it and justify its expense, while distracting eve
Re: (Score:3)
How would you run secret programs on a computer shared with NOAA and NSF? The NSA doesn't need it; they have their own supercomputers. Even their budget is secret.
Re: (Score:2)
Re:Likely a new gift for the NSA (Score:5, Interesting)
Weather guys want this after NSA's done.
We'll take a side of phased-array weather radar to go with that, too.
Re:Likely a new gift for the NSA (Score:5, Informative)
Weather guys want this after NSA's done.
I'm a weather guy - running cloud model code on Blue Waters, the fastest petascale machine for research in the U.S. I don't think we've managed to get any weather code run much more than 1 PF sustained - if even that. So it's not like you can compile WRF and run it with 10 million MPI ranks and call it a day. Ensembles? Well that's another story.
Exascale machines are going to have to be a lot different from petascale machines (which aren't all that different topologically from terascale machines) in order to be useful to scientists, and in order to not require their own nuclear power plant to run. And I don't think we know what that topology will look like yet. A thousand cores per node? That should be fun; sounds like a GPU. Regardless, legacy weather code will need to be rewritten, or more likely new models will need to be written from scratch, in order to do more intelligent multithreading as opposed to the mostly-MPI approach we have today.
When asked at the Blue Waters Symposium this May to prognosticate on the future coding paradigm for exascale machines, Steven Scott (Senior VP and CTO of Cray) said we'll probably still be using MPI + OpenMP. If that's the case we're gonna have to be a hell of a lot more creative with OpenMP.
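The "mostly-MPI" pattern described above - domain decomposition with halo exchange - can be sketched without real MPI. This is a pure-Python toy (the 1-D diffusion stencil, rank count, and halo counter are all illustrative, not WRF or actual MPI code), showing that the decomposition changes nothing numerically but adds per-step communication:

```python
# Toy "mostly-MPI" pattern: a 1-D diffusion stencil decomposed across
# hypothetical ranks, with each interior rank needing one ghost (halo)
# cell per side per timestep. No real MPI involved - purely illustrative.

def step_serial(u, alpha=0.1):
    """One explicit diffusion step on the whole domain (ends held fixed)."""
    n = len(u)
    return [u[i] if i in (0, n - 1)
            else u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
            for i in range(n)]

def step_decomposed(u, nranks, alpha=0.1):
    """Same update computed chunk-by-chunk, counting halo transfers."""
    n = len(u)
    bounds = [n * r // nranks for r in range(nranks + 1)]
    new = [0.0] * n
    halos = 0
    for r in range(nranks):
        lo, hi = bounds[r], bounds[r + 1]
        halos += (lo > 0) + (hi < n)   # ghost cells "received" this step
        for i in range(lo, hi):
            new[i] = (u[i] if i in (0, n - 1)
                      else u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1]))
    return new, halos

u = [0.0] * 16
u[8] = 1.0
serial = step_serial(u)
parallel, halos = step_decomposed(u, nranks=4)
assert parallel == serial   # decomposition is numerically invisible
assert halos == 2 * 4 - 2   # but every step pays for 6 halo transfers
```

Scaling the rank count up shrinks the compute per rank while the halo traffic per rank stays constant, which is one way to see why just throwing more cores at a mostly-MPI code stops paying off.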
Re:Likely a new gift for the NSA (Score:4, Interesting)
Weather guys want this after NSA's done.
I'm a weather guy - running cloud model code on Blue Waters, the fastest petascale machine for research in the U.S. I don't think we've managed to get any weather code run much more than 1 PF sustained - if even that. So it's not like you can compile WRF and run it with 10 million MPI ranks and call it a day. Ensembles? Well that's another story.
Exascale machines are going to have to be a lot different from petascale machines (which aren't all that different topologically from terascale machines) in order to be useful to scientists, and in order to not require their own nuclear power plant to run. And I don't think we know what that topology will look like yet. A thousand cores per node? That should be fun; sounds like a GPU. Regardless, legacy weather code will need to be rewritten, or more likely new models will need to be written from scratch, in order to do more intelligent multithreading as opposed to the mostly-MPI approach we have today.
When asked at the Blue Waters Symposium this May to prognosticate on the future coding paradigm for exascale machines, Steven Scott (Senior VP and CTO of Cray) said we'll probably still be using MPI + OpenMP. If that's the case we're gonna have to be a hell of a lot more creative with OpenMP.
I'm not a weather guy, but my understanding is that a somewhat fixed weather model (set of calculations) is used to do a kind of finite-element analysis on small areas. With better computing and better radars, smaller and smaller areas can be calculated, which results in more accuracy.
With more computing power, could you not vary the parameters or constants used in the weather model, then run the finite-element analysis over the entire weather area again? You could be running hundreds or thousands of slightly different weather models, then apply some processing to figure out which is most likely - either by averaging together the 50% most similar outcomes, or by some other method. I don't think you could max out a supercomputer with that method if you kept adding more parameter variations, although you may get to the point where adding more parameter variations doesn't improve accuracy.
Maybe that's an incorrect understanding, but we're getting closer to the point where we can calculate all possible outcomes simultaneously. I wouldn't have expected this to be the case with weather but computing has come a long way in the last 20 years.
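The parameter-perturbation idea above is easy to sketch with a deliberately toy one-variable "model" (the model, constants, and member count below are all made up for illustration, not a real forecast system):

```python
import random
import statistics

def toy_model(drag, steps=100, dt=0.1):
    """Hypothetical one-variable 'forecast': a wind speed relaxing
    toward 5.0 at an uncertain drag rate. A stand-in for a real model."""
    v = 20.0
    for _ in range(steps):
        v += dt * drag * (5.0 - v)
    return v

random.seed(42)
# The ensemble: the same model run many times with a perturbed parameter.
members = [toy_model(drag=0.5 + random.gauss(0, 0.05)) for _ in range(500)]

mean = statistics.mean(members)     # the consensus forecast
spread = statistics.stdev(members)  # a measure of forecast uncertainty
```

Each member here is cheap, but a real ensemble multiplies an already expensive model by hundreds of members, which is one way a much larger machine gets used without any single run having to scale further.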
Re: (Score:3)
You are basically describing ensemble forecasting, which is very powerful for providing statistically meaningful forecasts, where you can intelligently talk about the uncertainty of the forecast, something a single deterministic forecast cannot do.
In my research, I'm doing single deterministic forecasts to study what happens with tornadoes in supercell thunderstorms, where I am cranking up the resolution to capture flow that is otherwise unresolved. I get one version of a particular storm, which is good for stud
Re: (Score:2)
You can say "one of" but you can't say "the fastest" petascale machine, my friend
http://www.hpcwire.com/2012/11... [hpcwire.com]
I should have added "on a college campus".
My main point is, just throwing more cores at "mostly MPI" weather models is not sustainable. We are going to need to be much smarter about how we parallelize.
Re: (Score:2)
One of the biggest problems with current large-scale HPC machines is that the users (like you, but maybe not you specifically) are typically scientists/analysts who write software that does not scale well. Either there need to be better frameworks that handle all the grunt work of efficient parallelization and message passing, or every atmospheric physicist needs to be teamed with a computer scientist and a software engineer.
Absolutely agree 100%!
Re: (Score:2)
I think you underestimate the complexity of modern encryption and hashing algorithms.
Re: (Score:2)
Artificial intelligence to flood the internet with pro-American corporate propaganda on a massive scale. Unfortunately, that is not a joke but a serious intent.
Re: (Score:2, Insightful)
AC is angry at Obama because Obama put AC on a watch list? Which came first? Help me out here.
Re: (Score:2)
Hopefully you don't think of yourself as more tolerant than him.
Re: (Score:2)
Except the OP was not just unhappy with Obama, he went out of his way to use logical fallacies.
Re: (Score:2)
Re: (Score:2)
For that, you would be using custom ASIC hardware, and lots of it.
No, for that you just laugh at the guy asking you to do it, and look for ways to steal the key, rather than brute forcing it. Even if an ASIC solution gets to way beyond exascale, say to yottascale (10^6 times faster than exascale), you're still looking at on the order of a million years to recover a single 128-bit AES key, on average.
Brute force is not how you attack modern cryptosystems. More detail: http://tech.slashdot.org/comme... [slashdot.org]
Exascale machines are for scientific computing (Score:3, Informative)
Re: (Score:2)
Probably none at all. If you want to break today's encryption/hashing algorithms, you would probably be using ASICs; if not those, then FPGAs, with GPU compute being your last choice.
Dedicated hardware is the most efficient when you are dealing with a well-known standard. For all we know, IBM is still in business because it is building NSA ASICs using that 7nm process they showed.
Also, time on this beast will be extremely expensive; if they use it for any kind of code breaking it will not be for random slashdot us
Re: (Score:2)
Probably none at all. If you want to break today's encryption/hashing algorithms, you would probably be using ASICs; if not those, then FPGAs, with GPU compute being your last choice.
ASICs, FPGAs and GPUs are all utterly, utterly inadequate to attack today's encryption and hashing algorithms. Unless you have not only tens of billions of dollars but also don't mind waiting millions of years. http://tech.slashdot.org/comme... [slashdot.org].
Re:And the NSA? (Score:5, Informative)
What would the existence of an exascale supercomputer mean for today's popular encryption/hashing algorithms?
Nothing, nothing at all.
Suppose, for example that your exascale computer could do exa-AES-ops... 10^18 AES encryptions per second. It would take that computer 1.7E20 seconds to brute force half of the AES-128 key space. That's 5.4E12 years, to achieve a 50% chance of recovering a single key.
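As a sanity check, that arithmetic is easy to reproduce:

```python
# Reproducing the arithmetic: 1e18 AES trials/second against half of
# the AES-128 keyspace (the expected work for a 50% chance of success).
SECONDS_PER_YEAR = 3.15e7

trials = 2 ** 127                  # half of the 2**128 keyspace
seconds = trials / 1e18            # at one exa-trial per second
years = seconds / SECONDS_PER_YEAR

assert 1.6e20 < seconds < 1.8e20   # ~1.7E20 seconds, as stated
assert 5.0e12 < years < 6.0e12     # ~5.4E12 years
```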
And if that weren't the case, you could always step up to 192 or 256-bit keys. In "Applied Cryptography", in the chapter on key length, Bruce Schneier analyzed thermodynamic limitations on brute force key search. He calculated the amount of energy required for a perfectly efficient computer to merely increment a counter through all of its values. That's not to actually do anything useful like perform an AES operation and a comparison to test a particular key, but merely to count through all possible keys. Such a computer, running at the ambient temperature of the universe, would consume 4.4E-16 ergs to set or clear a single bit. Consuming the entire output of our star for a year, and cycling through the states in an order chosen to minimize bit flips rather than just counting sequentially, would provide enough energy for this computer to count through 2^187. The entire output of the sun for 32 years gets us up to 2^192. To run a perfectly-efficient computer through 2^256 states, you'd need to capture all of the energy from approximately 137 billion supernovae[*]. To brute force a 256-bit key you'd need to not only change your counter to each value, you'd then need to perform an AES operation.
Raw computing power is not and never will be the way to break modern crypto systems[**]. To break them you need to either exploit unknown weaknesses in the algorithms (which means you have to be smarter than the world's academic cryptographers), or exploit defects in the implementation (e.g. side channel attacks) or find other ways to get the keys -- attack the key management. The last option is always the best, though implementation defects are also quite productive. Neither of them benefit significantly from having massive computational resources available.
[*] Schneier didn't take into account reversible computing in his calculation. A cleverly-constructed perfectly-efficient computer could make use of reversible circuits everywhere they can work, and a carefully-constructed algorithm could make use of as much reversibility as possible. With that, it might be feasible to lower the energy requirements significantly, maybe even several orders of magnitude (though that would be tough). We're still talking energy requirements involving the total energy output of many supernovae.
[**] Another possibility is to change the question entirely by creating computers that don't operate sequentially, but instead test all possible answers at once. Quantum computers. Their practical application to the complex messiness of block ciphers is questionable, though the mathematical simplicity of public key encryption makes it easy to attack on QCs. Assuming we ever manage to build them on the necessary scale. If we do, I expect an intense new focus on protocols built around symmetric cryptography.
Re: (Score:2)
Actually, they probably included a few big wrenches to assemble some of the rack systems, so they probably have the tools to break even 1024 bit encryption.
Re: (Score:2)
Actually, they probably included a few big wrenches to assemble some of the rack systems, so they probably have the tools to break even 1024 bit encryption.
When you say "1024-bit encryption" you're talking about RSA, which is a completely different problem. 1024-bit RSA keys are too small to be used today and should be replaced.
2048-bit RSA keys, however, are roughly equivalent in security against brute force to a 112-bit symmetric key, and will be secure against anyone for quite some time. 3072-bit RSA keys are equivalent to a 128-bit symmetric key. Excascale, even yottascale, computers won't touch them.
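Those equivalences match the comparable-strength table in NIST SP 800-57 Part 1; as a quick lookup (the helper function name is just for illustration):

```python
# Comparable security strengths per NIST SP 800-57 Part 1: the RSA
# modulus size matching a given symmetric-key strength.
RSA_EQUIV = {80: 1024, 112: 2048, 128: 3072, 192: 7680, 256: 15360}

def rsa_bits_for(symmetric_bits):
    """Return the RSA modulus size with comparable brute-force cost."""
    return RSA_EQUIV[symmetric_bits]

assert rsa_bits_for(112) == 2048   # "2048-bit RSA ~ 112-bit symmetric"
assert rsa_bits_for(128) == 3072   # "3072-bit RSA ~ 128-bit symmetric"
```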
But everyone really should be moving away from RSA anyw
Re: (Score:2, Troll)
violate the spirit?
I just mean it enables the US to develop new weapons, e.g. bunker busters, without live testing. Yes, the simulations are that good. I'm not saying this is necessarily a bad thing, especially as the Comprehensive Nuclear-Test-Ban Treaty was never ratified. But the NPT is a problem.
It's for the ... (Score:3, Interesting)
... NSA data center and stuff.
Re: (Score:2)
But... (Score:5, Funny)
Will it blend?
Re: (Score:2)
No, this version only purees.
some of the challenges (Score:5, Informative)
Here's another, somewhat pessimistic piece [ieee.org] they posted in 2008 - a digest of a DARPA report that went into significant technical detail.
The biggest hurdle is power, and the biggest driver of that isn't the actual computation (i.e., the energy to perform some number of FLOPS), but rather moving data around (between cores, to/from RAM, across a PCB, and among servers). Other hurdles include managing so many cores, ensuring they are working (nearly) concurrently, handling hardware failures (which will be frequent given the amount of hardware), and writing software that can even make use of such technology in anything approaching an optimal fashion.
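A rough back-of-envelope, using assumed order-of-magnitude energy figures of the kind such reports quote (the exact picojoule numbers below are illustrative; only the ratio matters), shows why data movement rather than arithmetic sets the power budget:

```python
# Assumed, order-of-magnitude energy figures (illustrative only; the
# ratio is the point, not the exact picojoule values).
PJ_PER_FLOP = 20        # one double-precision floating-point op
PJ_PER_DRAM_BYTE = 500  # moving one byte from off-chip DRAM

EXAFLOP = 1e18
# Power for the arithmetic alone, in megawatts:
flop_mw = EXAFLOP * PJ_PER_FLOP * 1e-12 / 1e6
# Power if every FLOP also drags one 8-byte operand in from DRAM:
mem_mw = EXAFLOP * 8 * PJ_PER_DRAM_BYTE * 1e-12 / 1e6

assert round(flop_mw) == 20            # ~20 MW just to compute...
assert round(mem_mw / flop_mw) == 200  # ...dwarfed by data movement
```

Under these assumptions the machine's power is dominated by where the bytes travel, which is why locality, not raw FLOPS, drives exascale design.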
Not to say it's impossible, merely hard given the present state of things and projecting a bit into the future. But as we know, "it is difficult to make predictions, especially about the future." [source [quoteinvestigator.com]?]
easy b/c avg time from order to delivery 4.5 years (Score:4, Interesting)
Those issues will be resolved by a side effect of this being a government order. According to the GAO, it takes on average 4 1/2 years from the time the government orders a computer until it's installed. Right now, multiple government agencies have been told to start thinking about a plan. In two years (2017), each agency will have its plan and they'll start working to resolve the differences between agencies. In another year (2018), they'll put out some RFPs. Those will go through the federal procurement process and the order will be placed about two years later (2020). That's when the 4 1/2 year average clock starts, so expect installation around the first quarter of 2025.
The goal is that it should be 30 times faster than TODAY'S computers.
And be operational in ten years. They can pretty much just order a Nexus 47, or an HP Proliant gen 12.
Re: (Score:2)
Admittedly not as large, but I worked on 2000+ node clusters in the early aughts. The way they got "efficiently used" was that they were broken up, and jobs generally only used a small subset of relatively adjacent nodes. One scientist would use 40 cores on 10 servers sharing a switch, another would run 100 copies of a serial app on 100 cores, etc. Every once in a while, and it was rare, an astrophysicist or whatever would actually use hundreds of cores concurrently for a parallel algorithm. It was by far the minority case
Capacity vs. capability (Score:2)
Re: (Score:2)
Thanks for the explanation.
Re: (Score:2)
Sounds like we need higher performance per core. Not all problems are highly parallel, even with those that are you have limits, and now the interconnects are getting to be an issue.
30 Times Faster? (Score:5, Interesting)
For most specific problems thrown at supercomputers, you can go 30 times faster with a custom hardware architecture baked into silicon
To go 30 times faster for general-purpose supercomputing, you use the latest silicon (2X) and more chips (15X) and come up with a super new interconnect to make it not suck. This would involve making some chips that support low-latency IPC in hardware.
They are free to send me a few billion dollars; I'll get right on it and deliver a 30X faster machine, and I'll even use some blue LEDs on the front panel.
Re: (Score:2)
The front panel and paint job are the highest margin part of the whole system. You would never give anything there away for free.
Re: (Score:2)
What is probably a research problem is ad
Re: (Score:2)
Lol, you mean like IBM with Blue Waters?
They already co-design the hard-/software (Score:2)
Basically, the procurement process for supercomputers is like this: the buyer (e.g. a DOE lab) will ready a portfolio of apps (mostly simulation codes) with a specified target performance. Vendors then bid for how "little" money they'll be able to meet that target performance. And of course the vendors will use the most (cost/power) efficient hardware they can get.
The reason why we're no longer seeing custom built CPUs in the supercomputing arena, but rather COTS chips or just slightly modified versions, is
Re: (Score:2)
For most specific problems thrown at supercomputers, you can go 30 times faster with a custom hardware architecture baked into silicon
Perhaps that's what they should do. Make a robotic silicon wafer fabrication facility part of the computer. After being given a task requiring a new architecture, it creates the architecture it needs and augments itself. I'm sure for less than the cost of the F-35 program, a universally tasking, self-augmenting supercomputer could be made to happen.
Re: (Score:2)
and it will be useless for everything except one problem
Inflammatory Remark Warning (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Exascale? We don't need that. (Score:4, Funny)
640 petaflops ought to be enough for anybody.
Department of Homeland Security (Score:2, Insightful)
Who in their right minds would let these people near a computer? Please. Let them go back to what they excel at: stealing cameras out of our luggage and groping underage genitalia.
Re: (Score:3)
You do realize that the TSA is only about 1/4 of DHS by number of employees and 12% of the budget, right? I think you're really selling short the amount of damage they excel at if you only go with stealing cameras and groping underage genitalia.
Re: (Score:2)
Seems much more likely that this would be used by CBP and the Coast Guard (both DHS).
Might take a decent amount of horsepower to constantly search a database of every tracked vessel and run an analysis on each to determine when it does something out of the ordinary, then compare those results to patterns that predict some form of unwanted behavior (trafficking, illegal fishing, hijacking, lost at sea, etc.)
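A minimal sketch of that kind of per-vessel anomaly check, with made-up numbers and a hypothetical `anomalous` helper, flagging a report that deviates sharply from a vessel's own history:

```python
import statistics

def anomalous(track_speeds, new_speed, z_cut=3.0):
    """Flag a vessel whose latest reported speed deviates sharply from
    its own history - a toy stand-in for 'out of the ordinary'."""
    mu = statistics.mean(track_speeds)
    sigma = statistics.stdev(track_speeds)
    return abs(new_speed - mu) > z_cut * sigma

history = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2]  # knots: a steady transit
assert not anomalous(history, 12.3)  # consistent with its track record
assert anomalous(history, 2.0)       # sudden near-stop mid-ocean: flag it
```

The horsepower comes from doing this (with far richer models than a z-score on speed) continuously across every tracked vessel at once.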
Skynet (Score:2)
Name of the Computer Project (Score:2)
How about...
Strategic
Kinetic
Yankee
Neural
Exaflop
Terminal
Re: (Score:2)
Did I catch a "niner" in there?
Imagine a (Score:2)
beowulf cluster of these...
Re: (Score:2)
A five digit UID making a Beowolf Cluster joke? What decade is this!?
So... (Score:2)
Fix the economy (Score:3, Funny)
The next executive order (Score:3)
Do you want Skynet? (Score:2)
Because that's how you get Skynet.
This order is worthless without funding (Score:4, Insightful)
He can attempt to mandate all he wants. Congress approves the budgets. And since we all know how well Obama has been submitting his budgets....
Re: (Score:2)
Re: (Score:3)
If you even had a basic idea of how the Constitution works, the President's budget is basically for show. Congress is entirely responsible.
https://www.youtube.com/watch?v=KIbkoop4AYE
Sorry, but you're wrong.
Congress generally begins its budget process once the President submits his budget. The President is required by law to submit a comprehensive federal budget on or before the first Monday in February (31 U.S.C. 1105(a))
Re: (Score:2)
When have the neo-cons/tea partiers listened to any budget from Obama? Not once.
They discard everything and simply run their own. Hell, they do not even listen to past GOP members BEGGING the House/Senate to raise taxes on fuel to bring our roads back up to standard.
At this time, all they listen to are the billionaires, along with the Chinese government. Hell, this group has been working to kill America's new private space industry by giving MORE MONEY to Putin.
Re: (Score:2)
The neo-cons/tea partiers had nothing to do with Obama's budgeting woes. Between the House and Senate his proposed budget has only been able to get about 3 positive votes in the last few years. Most years the Senate under Reid simply refused to bring it to the floor for a vote it was so laughable. Only Republican pressure managed to get it to the floor where it managed to get 1 "yea" vote in 3 years.
Only the House actually bothered to do their duty and propose a budget each year. The Senate, once again
Obligatory #31 (Score:2)
Imagine a Beowulf Cluster of pork
The GOP ... (Score:2)
Buried in some farm bill, there will be a requirement to port systemd to this.
and how much will be done in India and China? (Score:2)
If the parts come from China, then it will make it trivial for China to simply build their own CPLA computer for weapons modeling.
FBI? Hmmm..... (Score:3)
I see people speculating above about the government using this to break crypto, but that's really not a huge concern. If people use good keys that require brute force searching, even the smallest AES key size would take over a billion millenia to break at 10^18 ops/second (even assuming you could test a key on one "op"). And for people who use bad keys, you don't need exascale computing to break them.
So what could the FBI use something like this for? What about analysis of massive public and not-so-public data, like data mining Internet postings, email/phone records, ... Better not post something with the wrong combination of words, or someone might come knocking on your door.
Re: Gotta love these executive orders (Score:5, Informative)
And a random person freaks out because the President exercises his lawful authority to tell agencies and departments under his jurisdiction to cooperate and present a plan for creating a supercomputer.
Here is a hint:
Sec. 7. General Provisions. (a) Nothing in this order shall be construed to impair or otherwise affect:
(i) the authority granted by law to an executive department, agency, or the head thereof; or
(ii) the functions of the Director of OMB relating to budgetary, administrative, or legislative proposals.
(b) This order shall be implemented consistent with applicable law and subject to the availability of appropriations.
(c) This order is not intended to, and does not, create any right or benefit, substantive or procedural, enforceable at law or in equity by any party against the United States, its departments, agencies, or entities, its officers, employees, or agents, or any other person.
It is like nobody knows how the government operates any more, but if Obama does it, they're opposed, damn opposed.
Re: (Score:2)
... that we never use...
Re: (Score:2)
I'm actually okay with us not using our nukes.
Re: (Score:2)
Because 0.1 exaflops is still "exascale", but not "exaflops"... :)
Paul B.
Re: (Score:2)
I think "galactically stupid politicians" is my new favorite term.
Re:Linux, Linux, Linux, Linux (Score:4, Funny)
Don't worry, they are planning to use Oracle Linux. They are currently using the 2nd most powerful computer in the world to calculate how much the license will cost.
Re: (Score:2)
AFAIK all supercomputers use Linux
Re: (Score:2)
Windows versions (Score:2)
Given that even numbers suck, I am sure they will be skipping odd numbers from now on.
I'll stick with Win 7
Re: (Score:2)
Given that even numbers suck, I am sure they will be skipping odd numbers from now on.
I'll stick with Win 7
As will I. They'll need to pry Win 7 from my cold dead fingers. I read a few reviews on 10 yesterday, and the general consensus is that it's almost as good as 7, (better in some parts, worse in others) if you replace that hybrid start menu thingy with Classic Shell and get used to where they've moved things. Yeah. No.
Re: (Score:2)
Given that even numbers suck, I am sure they will be skipping odd numbers from now on.
I'll stick with Win 7
As will I. They'll need to pry Win 7 from my cold dead fingers. I read a few reviews on 10 yesterday, and the general consensus is that it's almost as good as 7, (better in some parts, worse in others) if you replace that hybrid start menu thingy with Classic Shell and get used to where they've moved things. Yeah. No.
Well, seeing how it's an extension of Windows 7/8, there is really nothing missing that is in 7. Although if new interfaces bother you, that could be a reason to stick with Win 3.1
You need to read reviews from professionals, not friends on Facebook ;)
See, this is the real problem. Back in the 3.1 days, we were on the steep end of the curve, and there were things that really needed to be added, changed and fixed. Now we're up on the flat end of the curve, and there really isn't a lot that needs to be improved, assuming that we're still using a keyboard and a mouse and the peripherals haven't changed substantially.
So people went to 95 because the ideas (not necessarily the implementation) really were needed, and to 98SE because 95 kinda sucked, and 2000
Re: (Score:2)
Re: (Score:2)
Is it bigger than 40?
Depends on the value of ^
Re: (Score:2)
Facepalm
Re: (Score:3)
And Democrats have a hard-on. Yes, President Obama can create anything through executive action.
Just like all the others.
https://en.wikipedia.org/wiki/... [wikipedia.org]
Or do you just want REpublicans to have that?
Rage on!
Re: (Score:2)
Wait, are you a Republicrat angry because you think he's a Democan, or a Democan angry because you think he's a Republicrat?
Who's angry? I'm just being an agitation engineer.
p.s. To answer your question, I'm a pragmatist. I believe in what actually works, as opposed to ideology. Widely reviled, if the liberal or conservative can expand their mind enough to even acknowledge my existence, I'm the turd in the punchbowl of politics.
Re: (Score:2)
Re: (Score:3)
Now if you want to hate on Obama, you could argue that this supercomputer will be designed by indentured servants from India, using components made in Malaysia, and assembled in China. And it will likely be true.
But, you can just call him names too, that's good.
Re: (Score:2)
Re: (Score:2)
since no other OS can use up all the resources.
FTFY.
Re: (Score:2)