Supercomputing

Satoshi Matsuoka Mocks 12 Myths of High-Performance Computing (insidehpc.com)

insideHPC reports that Satoshi Matsuoka, the head of Japan's largest supercomputing center, has co-authored a high-performance computing paper challenging conventional wisdom. In a paper entitled "Myths and Legends of High-Performance Computing," appearing this week on the arXiv preprint site, Matsuoka and four colleagues offer opinions and analysis on such issues as quantum replacing classical HPC, the zettascale timeline, disaggregated computing, domain-specific languages (DSLs) vs. Fortran, and cloud subsuming HPC, among other topics.

"We believe (these myths and legends) represent the zeitgeist of the current era of massive change, driven by the end of many scaling laws, such as Dennard scaling and Moore's law," the authors said.

In this way they join the growing "end of" discussions in HPC. For example, as the industry moves through 3nm, 2nm, and 1.4nm chips – then what? Will accelerators displace CPUs altogether? What's next after overburdened electrical I/O interconnects? How do we get more memory per core?

The paper's abstract promises a "humorous and thought-provoking" discussion, for example on the possibility of quantum computing taking over high-performance computing. ("Once a quantum state is constructed, it can often be 'used' only once because measurements destroy superposition. A second limitation stems from the lack of algorithms with high speedups...")
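As a rough illustration of those two limitations (an editorial sketch, not something taken from the paper itself): a qubit read out once collapses to a classical bit, and the best-known general-purpose search speedup is only quadratic.

    % A qubit prepared in superposition:
    \[
      |\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1
    \]
    % Measurement returns 0 with probability |alpha|^2 and leaves the
    % register in that basis state, so the superposition has to be
    % re-prepared for every readout:
    \[
      \Pr[0] = |\alpha|^2, \qquad |\psi\rangle \xrightarrow{\ \text{measure}\ } |0\rangle \ \text{or}\ |1\rangle
    \]
    % Grover search over N unstructured items: a quadratic, not an
    % exponential, speedup over a classical scan.
    \[
      T_{\text{quantum}} = O(\sqrt{N}) \quad \text{vs.} \quad T_{\text{classical}} = O(N)
    \]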

The paper also tackles myths like "all high-performance computing will be subsumed by the clouds" and "everything will be deep learning."

Thanks to guest reader for submitting the article.


  • Missing myth (Score:4, Interesting)

    by rknop ( 240417 ) on Saturday January 21, 2023 @12:52PM (#63227768) Homepage

    I don't know if this is talked about, but for mere scientists trying to *use* HPC facilities, this is often the truth:

    Myth: our CPU/Accelerator performance metrics are meaningful even though our filesystems bog many jobs down to death

    • It depends on what the cycles are used for. If all you want to do is stream videos on demand, for example, then there will never be enough to go around, as demand seems designed to outpace any available supply. As for a quantum state only being usable once, the same was said about solid-state binary computers: that the state was too volatile or happened too fast to be usable.

  • by SchroedingersCat ( 583063 ) on Saturday January 21, 2023 @01:09PM (#63227804)
    Scientific research, just like the economy, is not immune to speculative bubbles. Certain scientific or technological advances generate a level of excitement and expectations that exceeds the practical application limits, so the expected value of the research field vastly exceeds its practical value. When the fear of missing out kicks in, objective reasoning breaks down. Eventually the bubble deflates and expectations get in line with practical applications.
  • by PPH ( 736903 ) on Saturday January 21, 2023 @02:24PM (#63227974)

    Thanks, Slashdot etidors.

  • by Big Hairy Gorilla ( 9839972 ) on Saturday January 21, 2023 @02:48PM (#63228014)
    I don't believe that technology can progress at the same rate it has... for good reasons, imho: it's quite brittle and it's NOT magic. I am quite skeptical of quantum computing; sure, it's interesting, it could be good, but it seems like nuclear fusion, only 10 years away from being 10 years away.

    This is a trend I'm seeing: the people in the know are dissing "Quantum", "AI", "the Metaverse" and the like, while MBAs and PHBs everywhere see an opportunity to automate, lay off workers, and, cha-ching, save on operating costs. Good for the quarterly bonus, but shortsighted in the long run.

    I doubt technology will successfully scale out much further in many vertical markets, like your phone, computer, or tablet. Dumbed-down tools like the mythical "AI" will allow dumber people to create ever more bloviated "apps". Dumber people are being cranked out of "higher education" because it's a business. The kids already know that; my professor friends are now being told by the kiddies that they deserve a better mark because they paid for it.

    People who can type 3 words in succession are now claiming to be "prompt engineers"... and of course, if you want to meet a "genius" just go to any Apple store.
    • Re: (Score:2, Interesting)

      by Anonymous Coward

      Clearly you're not someone in the know; your comments about AI, at least, are absolutely ignorant and stupid. No one in the know is dissing AI; on the contrary, it's an area of massive expansion and progress (you only have to look at the noise around things like ChatGPT to see the regular progress).

      If you think AI means a walking, talking, human-like android then, well, that's a you problem. That's like saying physics has failed and isn't going any further because we haven't perfected the grand unified theory.

      • Note the difference between "weak AI" and "strong AI".

        When people see things like chess/go computers and chat bots and say, "yes, but AI still sucks", they're talking about strong AI. Weak AI has clearly made huge strides. Strong AI... not so much, essentially zero since first defined.

        Weak AI is all of these pattern matching and neural network systems of all types. They have no consciousness and no internal thought patterns.

        Strong AI is a machine with some form of self awareness and consciousness which ha

        • Strong AI... not so much, essentially zero since first defined.

          Understandable, since it has never been defined outside of a bunch of hand-wavy philosophy. The goal of "Strong AI" has always been, and always will be, a set of continuously moving goalposts.

          So what?

      • > No one in the know is dissing AI,

        Yes they are. AI, or what I call Artificial Ignorance for how utterly stupid AI is currently, is nothing more than a glorified table lookup. There is ZERO intelligence in "AI".

        Progress? You mean where ChatGPT is dumb enough to:

        * believe 9 + 10 = 21 [youtube.com] ?
        * convert 1011 binary to 11 decimal, but when "explaining" the steps completely fucks it up [imgur.io] and gets 13? (The correct positional conversion is sketched just below.)
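        For reference, a minimal C sketch of the positional conversion being described; the helper name here is purely illustrative and not from the comment or any particular tool.

            #include <stdio.h>

            /* Convert a binary digit string to decimal with Horner's rule:
             * value = value * 2 + next bit, most-significant bit first.  */
            static unsigned long bin_to_dec(const char *bits)
            {
                unsigned long value = 0;
                for (const char *p = bits; *p == '0' || *p == '1'; ++p)
                    value = value * 2 + (unsigned long)(*p - '0');
                return value;
            }

            int main(void)
            {
                /* 1011 = 1*8 + 0*4 + 1*2 + 1*1 = 11 decimal, not 13. */
                printf("1011 (binary) = %lu (decimal)\n", bin_to_dec("1011"));
                return 0;
            }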

        • For me, the deciding factor is how much most people cannot do without computers doing it for them. And then how much we marginalize cultures that do not use them to express themselves, yet "allow" them to define us. Plenty of Buddhist capoeira yoga kung fu Yoruba-ers out there just as a function of economic privilege... Whether you think they are "smart" or not, they, AI included, will have to get even smarter or we will all go extinct. A pretty tidy paradox, but I hear that is what lets people know how supe

        • > nothing more than a glorified table lookup

          So is your brain, according to classical neuroscience.

  • by sjames ( 1099 ) on Saturday January 21, 2023 @04:10PM (#63228206) Homepage Journal

    I still see a lot of FORTRAN code out there, sometimes code bases that obviously started out in FORTRAN 77 and slowly started using newer standards over time. Iterative simulation is HARD and sometimes fragile. If there are any consequences whatsoever to the output being wrong, validation of a model is a HUGE issue and never achieves 100% certainty. Time-worn code will continue to be used, if for no other reason, because years of tiny iterative refinements and comparisons to actual physical happenings have shown the model to produce reasonable results. The new Whizz-Bang code might be faster, but does its output reflect reality? You can't just do a few runs and compare; the iterative nature means the system is chaotic and has a zillion corner cases.

    To further that, much of the code uses IEEE floating point, sometimes incorrectly. Just changing the compiler or the optimization flags given to it can result in garbage out, much less translating it to a whole new architecture and language. (A small illustration of the flag sensitivity follows this comment.)

    Progress is and will continue to be made, but it won't happen overnight. For example, work on offloading to GPUs has been going on for two decades now.

    Various NN techniques (such as Stable Diffusion) are interesting, but they also make verification even harder, or even impossible. No biggie if a prompt to Stable Diffusion results in an image that is entirely wrong, but much bigger consequences if the hurricane prediction model goes that far wrong. On a related note, when SD gets it wrong, it fairly often does so in an Escher-like manner: it's wrong and unphysical, but the eye tends to slide right past the error.

    If you have an ongoing need for supercomputing, doing it in-house is ALWAYS much cheaper than doing it in the cloud. The cloud is good for temporary needs. If you think the cloud is going to be cheaper for long-term, you probably also fall for the rent-to-own commercials on TV where you end up paying thousands for a very mediocre TV, a little at a time.

    Currently, quantum computing is like the gadgets in the blue screen commercials. They shout about the fast results from the rooftop but don't talk about the very long and expensive process of setting the computation up to run. It will get better but it will take time and even once it becomes practical, the set of practical use cases will be small for a while. I suspect that at least early on, any results will have to be confirmed by conventional supercomputers, but may still save time by narrowing the problem space.

    As for highly specialized hardware, I've seen 'solutions' involving code-to-FPGA compilers for decades now. It hasn't panned out. The one special case that seems to have gotten any traction in hardware is bitcoin mining.
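    A minimal sketch of the floating-point point above (the values are chosen purely for illustration): IEEE 754 addition is not associative, so optimization flags that let the compiler regroup sums, such as GCC/Clang's -ffast-math, can legitimately change a reduction's result.

        #include <stdio.h>

        /* IEEE 754 addition is not associative: regrouping the same terms
         * changes the rounded result.  Flags such as -ffast-math permit the
         * compiler to do exactly this kind of regrouping.                  */
        int main(void)
        {
            double big = 1.0e16, one = 1.0;

            double left_to_right = (big + one) + one;  /* the ones are absorbed    */
            double grouped_ones  = big + (one + one);  /* the ones survive as 2.0  */

            printf("(1e16 + 1) + 1 = %.1f\n", left_to_right);
            printf("1e16 + (1 + 1) = %.1f\n", grouped_ones);
            /* The two sums differ by 2.0, from identical inputs. */
            return 0;
        }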

    • Listing bitcoin as a singular FPGA success is wearing very narrow blinkers. FPGAs have been highly successful since their inception in the early 1990s.

      FPGAs eliminated a previously very expensive layer of cost in the development cycle of ASICs and custom chips in general. They allow iterative testing of a design, just as in software development, without the cost of rebuilding the hardware for every run! An alternative to simulation, you could say.

      • by sjames ( 1099 )

        I was referring specifically to the idea of implementing applications as an FPGA in a general-purpose supercomputing environment. Think LS-DYNA-on-a-chip sort of thing. Or, more broadly, a FORTRAN (for example) compiler that outputs firmware to be loaded into an FPGA rather than binaries to be run on a CPU.

        For other uses, I'll readily agree that FPGAs are a big success and have done a lot.

        • by evanh ( 627108 )

          Hmm, bitcoin mining doesn't fit that description at all. FPGAs were used there as I described: to prove the implementation before going to ASIC.

          • by sjames ( 1099 )

            FPGAs configured by compiling code that would normally run on a CPU.

            Yes, the first generation hardware based on FPGAs was soon replaced by ASICs that were cheaper in volume.

            • by evanh ( 627108 )

              Not just cheaper than FPGA, the ASIC will be faster and far more efficient too. And much closer to GPU code than CPU code. Not to mention the clock rate of both FPGA and ASIC will be far below that of the CPUs. Maybe a tenth.

              Actually, the only people buying the FPGA solution were doing it just to be first. They did the proving at scale because there was gold in them there hills. First dibs type thing.

              So there is an argument, when unit cost is not an issue and time-to-market is vital, for deploying wi

  • This guy seems to know a lot of high-falutin' computer stuff - could he be THE Satoshi? ;)
  • Apparently, they always end in an exclamation point. I did not know that.

"It's the best thing since professional golfers on 'ludes." -- Rick Obidiah

Working...