Obama's New Executive Order Says the US Must Build an Exascale Supercomputer

Jason Koebler writes: President Obama has signed an executive order authorizing a new supercomputing research initiative with the goal of creating the fastest supercomputers ever devised. The National Strategic Computing Initiative, or NSCI, will attempt to build the first ever exascale computer, 30 times faster than today's fastest supercomputer. Motherboard reports: "The initiative will primarily be a partnership between the Department of Energy, Department of Defense, and National Science Foundation, which will be designing supercomputers primarily for use by NASA, the FBI, the National Institutes of Health, the Department of Homeland Security, and NOAA. Each of those agencies will be allowed to provide input during the early stages of the development of these new computers."
  • by GoodNewsJimDotCom ( 2244874 ) on Wednesday July 29, 2015 @06:02PM (#50210313)
    But can it run Crysis?
  • And the NSA? (Score:4, Insightful)

    by Otome ( 2999075 ) on Wednesday July 29, 2015 @06:02PM (#50210323)
    What would the existence of an exascale supercomputer mean for today's popular encryption/hashing algorithms?
    • A PL3/PL4 nightmare for the people building and running it...
    • by mrflash818 ( 226638 ) on Wednesday July 29, 2015 @06:28PM (#50210503) Homepage Journal

      What would the existence of an exascale supercomputer mean for today's popular encryption/hashing algorithms?

      Exactly.

      My first thought was the new addition will be tasked by the NSA/FiveEyes to break encryption for intercepted communications.

      • Classified Data (Score:5, Interesting)

        by Etherwalk ( 681268 ) on Wednesday July 29, 2015 @06:50PM (#50210637)

        What would the existence of an exascale supercomputer mean for today's popular encryption/hashing algorithms?

        Exactly.

        My first thought was the new addition will be tasked by the NSA/FiveEyes to break encryption for intercepted communications.

        Why are you assuming they don't already have one doing that, and this is just a public version?

        There is a lot of highly secured government data infrastructure out there that I hear about even without asking. The cable in Virginia that accidentally gets cut by a backhoe, and the guys in a black van who show up ten minutes later. The contract for a government data center inside a Faraday cage. The government likely already has much more computing power available than we know about.

        • I agree. The very fact that we'll know where this datacentre is probably means it will be used for relatively lower-security stuff. The exascale supercomputer for actually analyzing the NSA intercepts probably already exists.

        • by AmiMoJo ( 196126 )

          We can be fairly sure that the NSA has some serial dedicated hardware for cracking common encryption systems like AES. They will still be reliant on things like dictionary attacks because brute-forcing the entire keyspace is impractical (unless they have quantum computers).

          How should we react to that? Well, obviously we need a good password that can resist dictionary attacks. Beyond that, unless you are a big enough perceived threat to warrant time on an expensive computer, you probably don't have to worry too much. They certainly won't be using it to help out the FBI, risking its existence coming to light. (A rough comparison of the two attack modes is sketched just below.)
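          A rough way to see why the parent treats dictionary attacks and raw keyspace search so differently is to divide both by an assumed trial rate. A minimal C sketch; the trial rate and dictionary size are illustrative assumptions, not real capabilities:

            /* Illustrative numbers only: the trial rate and dictionary size
               are assumptions, not real figures for any agency's hardware. */
            #include <stdio.h>
            #include <math.h>

            int main(void) {
                double rate = 1e15;                 /* assumed key/password trials per second */
                double dictionary = 1e12;           /* assumed candidate passwords */
                double keyspace = ldexp(1.0, 128);  /* all 2^128 AES-128 keys */

                printf("dictionary attack: %.1e seconds\n", dictionary / rate);
                printf("raw keyspace:      %.1e years\n",
                       keyspace / rate / (365.25 * 24 * 3600));
                return 0;
            }

          Even at a wildly generous trial rate, the dictionary is exhausted in moments while the full keyspace takes on the order of 10^16 years.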

          • Serial? Must be pretty fast to do operations of that kind in serial.

          • We can be fairly sure that the NSA has some serial dedicated hardware for cracking common encryption systems like AES. They will still be reliant on things like dictionary attacks because brute-forcing the entire keyspace is impractical (unless they have quantum computers).

            How should we react to that? Well, obviously we need a good password that can resist dictionary attacks. Beyond that, unless you are a big enough perceived threat to warrant time on an expensive computer you probably don't have to worry too much. They certainly won't be using it to help out the FBI, risking its existence coming to light.

            Maybe. Based on the documentaries that have been made, it's pretty clear that the NSA used their phone-metadata recording to help the FBI locate the Boston Bomber, despite the risk that it would become public. (Which it did shortly thereafter, but for other reasons -- i.e. Snowden.)

            The FBI does domestic counterterror. The NSA is the big bad in terms of not seeing the inherent harm and threat to democracy in snooping on everyone's communications, sure, but they're still trying to be good guys and so they'll s

      • Re: (Score:3, Insightful)

        by Anonymous Coward

        If the new computer is 30x faster than the fastest one currently deployed and in use, you've got:

        1x for weather (noaa),
        1x for health (nih),
        1x for science (nsf),
        1x for nasa, and
        1x for energy (doe);

        and each of those organizations will be thrilled at having the extra computational power. That leaves the equivalent of 25 left over for the unconstitutional, illegal, and/or classified shit that they really want it for. The legitimate uses are what they use to sell it and justify its expense, while distracting eve

      • by TigerPlish ( 174064 ) on Wednesday July 29, 2015 @07:42PM (#50210953)

        Weather guys want this after NSA's done.

        We'll take a side of phased-array weather radar to go with that, too.

        • by Orp ( 6583 ) on Wednesday July 29, 2015 @09:45PM (#50211563) Homepage

          Weather guys want this after NSA's done.

          I'm a weather guy - running cloud model code on Blue Waters, the fastest petascale machine for research in the U.S. I don't think we've managed to get any weather code to run at much more than 1 PF sustained - if even that. So it's not like you can compile WRF and run it with 10 million MPI ranks and call it a day. Ensembles? Well, that's another story.

          Exascale machines are going to have to be a lot different from petascale machines (which aren't all that different topologically from terascale machines) in order to be useful to scientists, and in order to not require their own nuclear power plant to run. And I don't think we know what that topology will look like yet. A thousand cores per node? That should be fun; sounds like a GPU. Regardless, legacy weather code will need to be rewritten, or more likely new models will need to be written from scratch, in order to do more intelligent multithreading as opposed to the mostly-MPI approach we have today.

          When asked at the Blue Waters Symposium this May to prognosticate on the future coding paradigm for exascale machines, Steven Scott (Senior VP and CTO of Cray) said we'll probably still be using MPI + OpenMP. If that's the case we're gonna have to be a hell of a lot more creative with OpenMP.
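          For readers unfamiliar with the paradigm Scott is describing, here is a minimal hybrid MPI + OpenMP skeleton in C. It is only a sketch: the per-rank work is a placeholder sum rather than a weather model, and the build line (something like mpicc -fopenmp) varies by system.

            /* Hybrid skeleton: MPI ranks across nodes, OpenMP threads within a node.
               The workload is a placeholder sum, not real model code. */
            #include <mpi.h>
            #include <omp.h>
            #include <stdio.h>

            int main(int argc, char **argv) {
                MPI_Init(&argc, &argv);
                int rank, nranks;
                MPI_Comm_rank(MPI_COMM_WORLD, &rank);
                MPI_Comm_size(MPI_COMM_WORLD, &nranks);

                const long n = 100000000L;   /* toy global workload */
                double local = 0.0;

                /* Threads absorb intra-node parallelism so the MPI rank count
                   (and the interconnect traffic) stays manageable. */
                #pragma omp parallel for reduction(+:local)
                for (long i = rank; i < n; i += nranks)
                    local += 1.0 / (double)(i + 1);

                double total = 0.0;
                MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
                if (rank == 0)
                    printf("ranks=%d, threads/rank=%d, sum=%f\n",
                           nranks, omp_get_max_threads(), total);

                MPI_Finalize();
                return 0;
            }

          The "more creative with OpenMP" part is everything this sketch leaves out: keeping a thousand threads per node busy without serializing on communication or memory traffic.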

          • by dj245 ( 732906 ) on Thursday July 30, 2015 @09:52AM (#50214251)

            Weather guys want this after NSA's done.

            I'm a weather guy - running cloud model code on Blue Waters, the fastest petascale machine for research in the U.S. I don't think we've managed to get any weather code to run at much more than 1 PF sustained - if even that. So it's not like you can compile WRF and run it with 10 million MPI ranks and call it a day. Ensembles? Well, that's another story.

            Exascale machines are going to have to be a lot different from petascale machines (which aren't all that different topologically from terascale machines) in order to be useful to scientists, and in order to not require their own nuclear power plant to run. And I don't think we know what that topology will look like yet. A thousand cores per node? That should be fun; sounds like a GPU. Regardless, legacy weather code will need to be rewritten, or more likely new models will need to be written from scratch, in order to do more intelligent multithreading as opposed to the mostly-MPI approach we have today.

            When asked at the Blue Waters Symposium this May to prognosticate on the future coding paradigm for exascale machines, Steven Scott (Senior VP and CTO of Cray) said we'll probably still be using MPI + OpenMP. If that's the case we're gonna have to be a hell of a lot more creative with OpenMP.

            I'm not a weather guy, but my understanding is that a somewhat fixed weather model (set of calculations) is used to do a kind of finite-element analysis on small areas. With better computing and better radars, smaller and smaller areas can be calculated, which results in more accuracy.

            With more computing power, could you not vary the parameters or constants used in the weather model, then run the finite-element analysis over the entire weather area again? You could be running hundreds or thousands of slightly different weather models, then apply some processing to figure out which is most likely - either by averaging together the 50% most similar outcomes, or by some other method. I don't think you could max out a supercomputer with that method if you kept adding more parameter variations, although you may get to the point where adding more parameter variations doesn't improve accuracy. (A toy sketch of this idea follows the comment.)

            Maybe that's an incorrect understanding, but we're getting closer to the point where we can calculate all possible outcomes simultaneously. I wouldn't have expected this to be the case with weather but computing has come a long way in the last 20 years.
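            As a toy illustration of the parameter-variation idea above (the "model" below is a made-up stand-in, not a real forecast code), a short C sketch:

              /* Toy ensemble: perturb one assumed model parameter, rerun the
                 stand-in "model" for each member, and average the outcomes. */
              #include <stdio.h>

              static double toy_model(double drag_coeff) {
                  /* placeholder for an expensive simulation */
                  return 100.0 - 50.0 * drag_coeff;
              }

              int main(void) {
                  const double perturb[] = {0.90, 0.95, 1.00, 1.05, 1.10};
                  const int members = sizeof perturb / sizeof perturb[0];
                  const double base = 0.4;
                  double sum = 0.0;

                  for (int i = 0; i < members; i++) {
                      double outcome = toy_model(base * perturb[i]);
                      printf("member %d: %.2f\n", i, outcome);
                      sum += outcome;
                  }
                  printf("ensemble mean: %.2f\n", sum / members);
                  return 0;
              }

            Each member is independent of the others, which is why this approach soaks up extra compute so readily; as the reply below notes, this is essentially ensemble forecasting.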

            • by Orp ( 6583 )

              You are basically describing ensemble forecasting, which is very powerful for providing statistically meaningful forecasts where you can intelligently talk about the uncertainty of the forecast, something a single deterministic forecast cannot do.

              In my research, I'm doing single deterministic forecasts to study what happens with tornadoes in supercell thunderstorms, where I am cranking up the resolution to capture flow that is otherwise unresolved. I get one version of a particular storm, which is good for stud

      • I think you underestimate the complexity of modern encryption and hashing algorithms.

      • by rtb61 ( 674572 )

        Artificial intelligence to flood the internet with pro-American corporate propaganda on a massive scale. Unfortunately, that is not a joke but a serious intent.

    • Comment removed based on user account deletion
      • For that, you would be using custom ASIC hardware, and lots of it.

        No, for that you just laugh at the guy asking you to do it, and look for ways to steal the key, rather than brute forcing it. Even if an ASIC solution gets to way beyond exascale, say to yottascale (10^6 times faster than exascale), you're still looking at on the order of a million years to recover a single 128-bit AES key, on average.

        Brute force is not how you attack modern cryptosystems. More detail: http://tech.slashdot.org/comme... [slashdot.org]

    • These Peta/Exascale supercomputers are built for computer simulations (climate change, nuclear weapons stewardship, computational drug design, etc.), not for breaking encryption. That's also one reason no one is using them to mine Bitcoins: they're just not efficient at that job. To compute lots of hashes, dedicated hardware designs (read: ASICs) far outpace "general purpose" supercomputers.
    • by LWATCDR ( 28044 )

      Probably none at all. If you want to break today's encryption/hashing algorithms you would probably be using ASICs; if not those, then FPGAs, with GPU compute being your last choice.
      Dedicated hardware is the most efficient when you are dealing with a well-known standard. For all we know, IBM is still in business because it is building NSA ASICs using that 7nm process they showed.

      Also time on this beast will be extremely expensive if they use it for any kind of code breaking it will not be for random slashdot us

      • Probably none at all. If you want to break today's encryption/hashing algorithms you would probably be using ASICs; if not those, then FPGAs, with GPU compute being your last choice.

        ASICs, FPGAs and GPUs are all utterly, utterly inadequate to attack today's encryption and hashing algorithms, unless you not only have tens of billions of dollars but also don't mind waiting millions of years. http://tech.slashdot.org/comme... [slashdot.org].

    • Re:And the NSA? (Score:5, Informative)

      by swillden ( 191260 ) <shawn-ds@willden.org> on Thursday July 30, 2015 @08:59AM (#50213887) Journal

      What would the existence of an exascale supercomputer mean for today's popular encryption/hashing algorithms?

      Nothing, nothing at all.

      Suppose, for example, that your exascale computer could do exa-AES-ops... 10^18 AES encryptions per second. It would take that computer 1.7E20 seconds to brute force half of the AES-128 key space. That's 5.4E12 years to achieve a 50% chance of recovering a single key. (The arithmetic is sketched after this comment.)

      And if that weren't the case, you could always step up to 192 or 256-bit keys. In "Applied Cryptography", in the chapter on key length, Bruce Schneier analyzed thermodynamic limitations on brute force key search. He calculated the amount of energy required for a perfectly efficient computer to merely increment a counter through all of its values. That's not to actually do anything useful like perform an AES operation and a comparison to test a particular key, but merely to count through all possible keys. Such a computer, running at the ambient temperature of the universe, would consume 4.4E-16 ergs to set or clear a single bit. Consuming the entire output of our star for a year, and cycling through the states in an order chosen to minimize bit flips rather than just counting sequentially, would provide enough energy for this computer to count through 2^187. The entire output of the sun for 32 years gets us up to 2^192. To run a perfectly-efficient computer through 2^256 states, you'd need to capture all of the energy from approximately 137 billion supernovae[*]. To brute force a 256-bit key you'd need to not only change your counter to each value, you'd then need to perform an AES operation.

      Raw computing power is not and never will be the way to break modern crypto systems[**]. To break them you need to either exploit unknown weaknesses in the algorithms (which means you have to be smarter than the world's academic cryptographers), or exploit defects in the implementation (e.g. side channel attacks), or find other ways to get the keys -- attack the key management. The last option is always the best, though implementation defects are also quite productive. Neither of them benefits significantly from having massive computational resources available.

      [*] Schneier didn't take into account reversible computing in his calculation. A cleverly-constructed perfectly-efficient computer could make use of reversible circuits everywhere they can work, and a carefully-constructed algorithm could make use of as much reversibility as possible. With that, it might be feasible to lower the energy requirements significantly, maybe even several orders of magnitude (though that would be tough). We're still talking energy requirements involving the total energy output of many supernovae.

      [**] Another possibility is to change the question entirely by creating computers that don't operate sequentially, but instead test all possible answers at once. Quantum computers. Their practical application to the complex messiness of block ciphers is questionable, though the mathematical simplicity of public key encryption is easy to implement on QCs. Assuming we ever manage to build them on the necessary scale. If we do, we can expect an intense new focus on protocols built around symmetric cryptography, I expect.
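      The headline figures above are easy to reproduce; a quick C sketch using only the parent comment's assumed rate of 10^18 AES operations per second:

        /* Reproduces the parent's estimate: half of the 2^128 keyspace
           at an assumed 10^18 AES trials per second. */
        #include <stdio.h>
        #include <math.h>

        int main(void) {
            double keys = ldexp(1.0, 127);   /* 2^127 keys = half the AES-128 keyspace */
            double rate = 1e18;              /* assumed exa-AES-ops per second */
            double seconds = keys / rate;
            double years = seconds / (365.25 * 24 * 3600);
            printf("%.1e seconds (~%.1e years)\n", seconds, years);  /* ~1.7e20 s, ~5.4e12 yr */
            return 0;
        }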

      • Actually, they probably included a few big wrenches to assemble some of the rack systems, so they probably have the tools to break even 1024 bit encryption.

        • Actually, they probably included a few big wrenches to assemble some of the rack systems, so they probably have the tools to break even 1024 bit encryption.

          When you say "1024-bit encryption" you're talking about RSA, which is a completely different problem. 1024-bit RSA are too small to be used today and should be replaced.

          2048-bit RSA keys, however, are roughly equivalent in security against brute force to a 112-bit symmetric key, and will be secure against anyone for quite some time. 3072-bit RSA keys are equivalent to a 128-bit symmetric key. Exascale, even yottascale, computers won't touch them.

          But everyone really should be moving away from RSA anyw

  • It's for the ... (Score:3, Interesting)

    by CaptainDork ( 3678879 ) on Wednesday July 29, 2015 @06:08PM (#50210371)

    ... NSA data center and stuff.

    • This could bring Intel/HP's Itanium back to life. Make a center w/ 8192 Itanium CPUs, and all the storage needed. Have that located in AK, to make cooling easier.
  • But... (Score:5, Funny)

    by Anonymous Coward on Wednesday July 29, 2015 @06:11PM (#50210393)

    Will it blend?

  • by necro81 ( 917438 ) on Wednesday July 29, 2015 @06:19PM (#50210453) Journal
    IEEE Spectrum had an article last year [ieee.org] describing some of the challenges we'll need to overcome in order to achieve exascale computing.

    Here's another, somewhat pessimistic piece [ieee.org] they posted in 2008 - a digest of a DARPA report that went into significant technical detail.

    The biggest hurdle is power, and the biggest driver of that isn't the actual computation (i.e., the energy to perform some number of FLOPS), but rather moving data around (between cores, to/from RAM, across a PCB, and among servers). Other hurdles include how to manage so many cores, how to ensure they are working (nearly) concurrently, how to handle hardware failures (which will be frequent given the amount of hardware), and how to write software that can even make use of such technology in anything approaching an optimal fashion.

    Not to say it's impossible, merely hard given the present state of things and projecting a bit into the future. But as we know, "it is difficult to make predictions, especially about the future." [source [quoteinvestigator.com]?]
    • by raymorris ( 2726007 ) on Wednesday July 29, 2015 @07:11PM (#50210771) Journal

      Those issues will be resolved by a side effect of this being a government order. According to the GAO, on average it takes 4 1/2 years from the time the government orders a computer until it's installed. Right now, multiple government agencies have been told to start thinking about a plan. In two years (2017), each agency will have its plan and they'll start working to resolve the differences between agencies. In another year (2018), they'll put out some RFPs. Those will go through the federal procurement process and the order will be placed about two years later (2020). That's when the 4 1/2 year average clock starts, so expect installation around the first quarter of 2025.

      The goal is that it should be 30 times faster than TODAY'S computers.
      And be operational in ten years. They can pretty much just order a Nexus 47, or an HP Proliant gen 12.

    • Admittedly not as large, but I worked on 2000+ node clusters in the early oughts. The way they got "efficiently used" was that they were broken up, and jobs generally only used a small subset of relatively adjacent nodes. One scientist would use 40 cores on 10 servers sharing a switch, another 100 copies of a serial app on 100 cores, etc. Every once in a while, and it was rare, an astrophysicist or whatever would actually use hundreds of cores concurrently for a parallel algorithm. It was by far the minority case

      • So, what you describe is essentially the difference between capacity and capability machines. The national labs have both, as there are use cases for both. But the flagship machines, e.g. Titan at the Oak Ridge Leadership Computing Facility (OLCF), are always capability machines -- built to run full-system jobs, jobs that scale to tens or hundreds of thousands of nodes.
    • by LWATCDR ( 28044 )

      Sounds like we need higher performance per core. Not all problems are highly parallel; even with those that are, you have limits, and now the interconnects are getting to be an issue.

  • 30 Times Faster? (Score:5, Interesting)

    by TechyImmigrant ( 175943 ) on Wednesday July 29, 2015 @06:25PM (#50210475) Homepage Journal

    For most specific problems thrown at supercomputers, you can go 30 times faster with a custom hardware architecture baked into silicon

    To go 30 times faster for general-purpose supercomputing, you use the latest silicon (2X) and more chips (15X) and come up with a super new interconnect to make it not suck. This would involve making some chips that support low-latency IPC in hardware.

    They are free to send me a few billion dollars; I'll get right on it and deliver a 30X faster machine, and I'll even use some blue LEDs on the front panel.

    • The front panel and paint job are the highest margin part of the whole system. You would never give anything there away for free.

      • You apparently don't know what HPC stands for... Half Price Computing - everyone drives their margins to 0 to win one of these multi-million-dollar contracts. The only companies that win are the suppliers to the bidders. So let's see: in 4 years CPUs/GPUs will have 5-6x more compute power, we can throw 5-6x more CPUs at the problem and connect them up with a nice 200-400Gbs (5-8x) interconnect. This isn't a research problem - just an interesting engineering problem.
        What is probably a research problem is ad
    • Basically, the procurement process for supercomputers is like this: the buyer (e.g. a DOE lab) will ready a portfolio of apps (mostly simulation codes) with a specified target performance. Vendors then bid on how "little" money they need to meet that target performance. And of course the vendors will use the most (cost/power) efficient hardware they can get.

      The reason why we're no longer seeing custom built CPUs in the supercomputing arena, but rather COTS chips or just slightly modified versions, is

    • For most specific problems thrown at supercomputers, you can go 30 times faster with a custom hardware architecture baked into silicon

      Perhaps that's what they should do. Make a robotic silicon wafer fabrication facility part of the computer. After being given a task requiring a new architecture, it creates the architecture it needs and augments itself. I'm sure that for less than the cost of the F-35 program, a universally tasking, self-augmenting supercomputer could be made to happen.

    • and it will be useless for everything except one problem

  • So can someone tell me, is this better or worse than banning stem cell research?
    • by digsbo ( 1292334 )
      I guess it depends on whether you think the Federal Government should be allocating tax dollars for things other than basic defense and infrastructure.
        • Stem cell research and computational power are both defence. The stronger and more capable you are as a nation, the more effective your self-defence.
  • by the_humeister ( 922869 ) on Wednesday July 29, 2015 @06:54PM (#50210663)

    640 petaflops ought to be enough for anybody.

  • by Anonymous Coward

    Who in their right minds would let these people near a computer? Please. Let them go back to what they excel at: stealing cameras out of our luggage and groping underage genitalia.

    • by cdrudge ( 68377 )

      You do realize that the TSA is only about 1/4 of DHS by number of employees and 12% of the budget, right? I think you're really selling short the amount of damage they excel at if you only go with stealing cameras and groping underage genitalia.

      • by Hodr ( 219920 )

        Seems much more likely that this would be used by CBP and the Coast Guard (both DHS).

        Might take a decent amount of horsepower to constantly search a database for every tracked vessel and run an analysis on each to determine when they do something that is out of the ordinary, then compare those results to patterns that predict some form of unwanted behavior (trafficking, illegal fishing, hijacking, lost at sea, etc.)

  • John Connor: "By the time Skynet became self-aware it had spread into millions of computer servers across the planet. Ordinary computers in office buildings, dorm rooms; everywhere. It was software; in cyberspace. There was no system core; it could not be shutdown. The attack began at 6:18 PM, just as he said it would. Judgment Day, the day the human race was almost destroyed by the weapons they'd built to protect themselves. I should have realized it was never our destiny to stop Judgment Day, it was merel
  • Hm, something like this needs a good, catchy name, that also makes for a good acronym.
    How about...

    Strategic
    Kinetic
    Yankee
    Neural
    Exaflop
    Terminal
  • beowulf cluster of these...

  • by pem ( 1013437 )
    I guess Obama's miffed he missed out on the early days of cheap bitcoin mining.
  • by Chad Smith ( 3448823 ) on Wednesday July 29, 2015 @07:58PM (#50211045)
    Mine Bit Coins
  • by ChrisMaple ( 607946 ) on Wednesday July 29, 2015 @08:11PM (#50211121)
    Synthesize the unicorn genome, to provide fuel for transportation and buy the votes of little girls.
  • Because that's how you get Skynet.

  • by acoustix ( 123925 ) on Wednesday July 29, 2015 @09:09PM (#50211379)

    He can attempt to mandate all he wants. Congress approves the budgets. And since we all know how well Obama has been submitting his budgets....

    • Comment removed based on user account deletion
  • Imagine a Beowulf Cluster of pork

  • ... will throw a wrench in the works.

    Buried in some farm bill, there will be a requirement to port systemd to this.

  • Seriously, building the computer is worthless, unless it comes 100% from America, or at least the west.
    If the parts come from China, then it will make it trivial for China to simply build their own CPLA computer for weapons modeling.
  • by Dr. Blue ( 63477 ) on Thursday July 30, 2015 @08:18AM (#50213639)

    I see people speculating above about the government using this to break crypto, but that's really not a huge concern. If people use good keys that require brute-force searching, even the smallest AES key size would take over a billion millennia to break at 10^18 ops/second (even assuming you could test a key in one "op"). And for people who use bad keys, you don't need exascale computing to break them.

    So what could the FBI use something like this for? What about analysis of massive public and not-so-public data, like data mining Internet postings, email/phone records, ... Better not post something with the wrong combination of words, or someone might come knocking on your door.
