AI

Microsoft Invests $1 Billion in OpenAI To Develop AI Technologies on Azure (venturebeat.com) 28

Microsoft today announced that it would invest $1 billion in OpenAI, the San Francisco-based AI research firm cofounded by CTO Greg Brockman, chief scientist Ilya Sutskever, Elon Musk, and others, with backing from luminaries like LinkedIn cofounder Reid Hoffman and former Y Combinator president Sam Altman. From a report: In a blog post, Brockman said the investment will support the development of artificial general intelligence (AGI) -- AI with the capacity to learn any intellectual task that a human can -- with "widely distributed" economic benefits. To this end, OpenAI intends to partner with Microsoft to jointly develop new AI technologies for the Seattle company's Azure cloud platform and will enter into an exclusivity agreement with Microsoft to "further extend" large-scale AI capabilities that "deliver on the promise of AGI." Additionally, OpenAI will license some of its technologies to Microsoft, which will commercialize them and sell them to as-yet-unnamed partners, and OpenAI will train and run AI models on Azure as it works to develop new supercomputing hardware while "adhering to principles on ethics and trust."

According to Brockman, the partnership was motivated in part by OpenAI's continued pursuit of enormous computational power. Its researchers recently released analysis showing that from 2012 to 2018 the amount of compute used in the largest AI training runs grew by more than 300,000 times, with a 3.5-month doubling time, far exceeding the pace of Moore's Law. Perhaps exemplifying the trend is OpenAI Five, an AI system that squared off against professional players of the video game Dota 2 last summer. During training on Google's Cloud Platform, it played 180 years' worth of games every day on 256 Nvidia Tesla P100 graphics cards and 128,000 processor cores, up from 60,000 cores just a few years ago.
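As a rough sanity check on those figures: exponential growth with a 3.5-month doubling time over the 2012-2018 window implies a factor comfortably above the "more than 300,000 times" cited. A back-of-the-envelope sketch (the variable names and window length are our own):

    # Back-of-the-envelope check of the compute-growth claim above.
    months = 6 * 12              # 2012 to 2018
    doubling_time = 3.5          # months per doubling, as cited
    doublings = months / doubling_time
    growth = 2 ** doublings
    print(f"{doublings:.1f} doublings -> {growth:,.0f}x growth")
    # ~20.6 doublings -> roughly 1.6 million-fold, i.e. "more than 300,000 times"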

Earth

How The Advance Weather Forecast Got Good (npr.org) 80

NPR notes today's "supercomputer-driven" weather modeling can crunch huge amounts of data to accurately forecast the weather a week in advance -- pointing out that "a six-day weather forecast today is as good as a two-day forecast was in the 1970s."

Here are some highlights from their interview with Andrew Blum, author of The Weather Machine: A Journey Inside the Forecast: One of the things that's happened as the scale in the system has shifted to the computers is that it's no longer bound by past experience. It's no longer, the meteorologists say, "Well, this happened in the past, we can expect it to happen again." We're more ready for these new extremes because we're not held down by past expectations...

The models are really a kind of ongoing concern. ... They run ahead in time, and then every six hours or every 12 hours, they compare their own forecast with the latest observations. And so the models in reality are ... sort of dancing together, where the model makes a forecast and it's corrected slightly by the observations that are coming in...

It's definitely run by individual nations -- but individual nations with their systems tied together... It's a 150-year-old system of governments collaborating with each other as a global public good... The positive example from last month was with Cyclone Fani in India. And this was a very similar storm to one 20 years ago, in which tens of thousands of people died. This time around, the forecast came far enough in advance and with enough confidence that the Indian government was able to move a million people out of the way.

China

China Has Almost Half of The World's Supercomputers, Explores RISC-V and ARM (techtarget.com) 90

Slashdot reader dcblogs quotes TechTarget: Ten years ago, China had 21 systems on the Top500 list of the world's largest supercomputing systems. It now has 219, according to the semiannual listing, which was updated just this week. At its current pace of development, China may have half of the supercomputing systems on the Top500 list by 2021.... U.S. supercomputers account for 116 systems on the latest Top500 list.

Despite being well behind China in total system count, the U.S. leads in overall performance, as measured by the High Performance Linpack (HPL) benchmark, which times the solution of a dense system of linear equations. The U.S. has about 38% of the aggregate Top500 list performance. China is second, at nearly 30% of the performance total. But this performance metric has flip-flopped between China and the U.S., because it's heavily weighted by the largest systems. The U.S. owns the top two spots on the latest Top500 list, thanks to two IBM supercomputers at U.S. national laboratories. These systems, Summit and Sierra, alone represent 15.6% of the HPL performance measure.
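For context, an HPL score comes from timing a dense LU solve and dividing a nominal operation count by the elapsed time. A single-node toy version of that measurement might look like the sketch below (using NumPy for illustration; the real HPL is a heavily tuned, distributed code):

    import time
    import numpy as np

    # Toy illustration of what HPL measures: solve a dense system Ax = b
    # and estimate the floating-point rate from the nominal LU flop count.
    n = 2000
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    start = time.perf_counter()
    x = np.linalg.solve(A, b)        # LU factorization + triangular solves
    elapsed = time.perf_counter() - start

    flops = (2 / 3) * n**3           # leading term of HPL's operation count
    print(f"~{flops / elapsed / 1e9:.1f} GFLOP/s on this toy problem")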

Nathan Brookwood, principal analyst at Insight 64, says China is concerned the U.S. may limit its x86 chip imports, and while China may look to ARM, it is also investigating the RISC-V processor architecture.

Paresh Kharya, director of product marketing at Nvidia, tells TechTarget: "We expect x86 CPUs to remain dominant in the short term. But there's growing interest in ARM for supercomputing, as evidenced by projects in the U.S., Europe and Japan. Supercomputing centers want choice in CPU architecture."

Supercomputing

Nvidia Will Support ARM Hardware For High-Performance Computing (venturebeat.com) 24

An anonymous reader quotes a report from VentureBeat: At the International Supercomputing Conference (ISC) in Frankfurt, Germany this week, Santa Clara-based chipmaker Nvidia announced that it will support processors architected by British semiconductor design company Arm. Nvidia anticipates that the partnership will pave the way for supercomputers capable of "exascale" performance -- in other words, of completing at least a quintillion floating point computations ("flops") per second, where a flop equals two 15-digit numbers multiplied together. Nvidia says that by 2020 it will contribute its full stack of AI and high-performance computing (HPC) software -- which by Nvidia's estimation now accelerates over 600 HPC applications and machine learning frameworks -- to the Arm ecosystem. Among other resources and services, it will make available CUDA-X libraries, graphics-accelerated frameworks, software development kits, PGI compilers with OpenACC support, and profilers. Nvidia founder and CEO Jensen Huang pointed out in a statement that, thanks to this commitment, Nvidia will soon accelerate all major processor architectures: x86, IBM's Power, and Arm. "As traditional compute scaling has ended, the world's supercomputers have become power constrained," said Huang. "Our support for Arm, which designs the world's most energy-efficient CPU architecture, is a giant step forward that builds on initiatives Nvidia is driving to provide the HPC industry a more power-efficient future."
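To put "a quintillion flops per second" in perspective, a trivial back-of-the-envelope comparison (the workload size here is invented purely for illustration):

    # Illustrating the scale of "exascale": 1 exaflop/s = 1e18 flops per second.
    exaflops_per_s = 1e18
    petaflops_per_s = 1e15
    workload = 1e21                  # a hypothetical job of 10^21 operations

    print(f"exascale machine:  {workload / exaflops_per_s:,.0f} s")          # 1,000 s
    print(f"petascale machine: {workload / petaflops_per_s / 3600:,.1f} h")  # ~277.8 h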

Businesses

Hewlett Packard Enterprise To Acquire Supercomputer Maker Cray for $1.3 Billion (anandtech.com) 101

Hewlett Packard Enterprise will be buying the supercomputer maker Cray for roughly $1.3 billion, the companies said this morning. HPE intends to use Cray's knowledge and technology to bolster its own supercomputing and high-performance computing offerings; when the deal closes, HPE will become the world leader for supercomputing technology. From a report: Cray of course needs no introduction. The current leader in the supercomputing field and founder of supercomputing as we know it, Cray has been a part of the supercomputing landscape since the 1970s. Starting at the time with fully custom systems, in more recent years Cray has morphed into an integrator and scale-out specialist, combining processors from the likes of Intel, AMD, and NVIDIA into supercomputers, and applying its own software, I/O, and interconnect technologies. The timing of the acquisition announcement closely follows other major news from Cray: the company just landed a $600 million US Department of Energy contract to supply the Frontier supercomputer to Oak Ridge National Laboratory in 2021. Frontier is one of two exascale supercomputers Cray is involved in -- on the other, the 2021 Aurora system, Cray is a subcontractor -- and in fact Cray is involved in the only two exascale systems ordered by the US Government thus far. So in both a historical and modern context, Cray was and is one of the biggest players in the supercomputing market.

Supercomputing

'Pi VizuWall' Is a Beowulf Cluster Built With Raspberry Pi's (raspberrypi.org) 68

Why would someone build their own Beowulf cluster -- a high-performance parallel computing prototype -- using 12 Raspberry Pi boards? It uses the standard Beowulf cluster architecture found in about 88% of the world's largest parallel computing systems, with an MPI (Message Passing Interface) system that distributes the load over all the nodes.

Matt Trask, a long-time computer engineer now completing his undergraduate degree at Florida Atlantic University, explains how it grew out of his work on "virtual mainframes": In the world of parallel supercomputers (branded 'high-performance computing', or HPC), system manufacturers are motivated to sell their HPC products to industry, but industry has pushed back due to what they call the "Ninja Gap". MPI programming is hard. It is usually not learned until the programmer is in grad school at the earliest, and given that it takes a couple of years to achieve mastery of any particular discipline, most of the proficient MPI programmers are PhDs. And this is the Ninja Gap -- industry understands that the academic system cannot and will not be able to generate enough 'ninjas' to meet the needs of industry if industry were to adopt HPC technology.

As part of my research into parallel computing systems, I have studied the process of learning to program with MPI and have found that almost all current practitioners are self-taught, coming from disciplines other than computer science. Actual undergraduate CS programs rarely offer MPI programming. Thus my motivation for building a low-cost cluster system with Raspberry Pis, in order to drive down the entry-level costs. This parallel computing system, with a cost of under $1000, could be deployed at any college or community college rather than just at elite research institutions, as is done [for parallel computing systems] today.

The system is entirely open source, using only standard Raspberry Pi 3B+ boards and Raspbian Linux. The version of MPI that is used is called MPICH, another open-source technology that is readily available.
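To give a flavor of the programming model the cluster teaches, here is a minimal MPI sketch of the scatter/compute/gather pattern MPICH runs across nodes (written with mpi4py, the common Python MPI binding, for brevity; the workload and script name are invented):

    from mpi4py import MPI

    # Minimal MPI pattern: the root rank scatters chunks of work, every rank
    # computes on its own chunk, and partial results are gathered back.
    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()    # this process's id, 0..size-1
    size = comm.Get_size()    # number of processes across the cluster

    # Root prepares one chunk of numbers per rank.
    data = [list(range(i * 4, (i + 1) * 4)) for i in range(size)] if rank == 0 else None
    chunk = comm.scatter(data, root=0)

    partial = sum(x * x for x in chunk)       # each rank squares and sums its chunk
    totals = comm.gather(partial, root=0)     # collect the partial sums at root

    if rank == 0:
        print(f"{size} ranks computed a total of {sum(totals)}")

On a cluster like this one, such a script would be launched from the head node with something like "mpiexec -n 12 python sum_squares.py", with MPICH handling process placement across the Pis.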

But there's an added visual flourish, explains long-time Slashdot reader iamacat. "To visualize computing, each node is equipped with a servo motor to position itself according to its current load -- lying flat when fully idle, standing up 90 degrees when fully utilized."

Its data comes from the /proc filesystem, and the necessary hinges for this prototype were all generated with a 3D printer. "The first lesson is to use CNC'd aluminum for the motor housings instead of 3D-printed plastic," writes Trask. "We've seen some minor distortion of the printed plastic from the heat generated in the servos."
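The write-up doesn't include the node-side code, but the behavior described -- tilt each board in proportion to its CPU load, read from the /proc filesystem -- is easy to sketch. Everything below (the core count, update interval, and the omitted servo call) is illustrative rather than the project's actual code:

    import time

    def read_load(path="/proc/loadavg"):
        """Return the 1-minute load average from the /proc filesystem."""
        with open(path) as f:
            return float(f.read().split()[0])

    def load_to_angle(load, cores=4):
        """Map load to a tilt angle: 0 degrees idle, 90 degrees fully utilized."""
        utilization = min(load / cores, 1.0)
        return 90.0 * utilization

    while True:
        angle = load_to_angle(read_load())
        # set_servo_angle(angle)   # hardware-specific servo call, omitted here
        print(f"target angle: {angle:.0f} degrees")
        time.sleep(2)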

AI

The World's Fastest Supercomputer Breaks an AI Record (wired.com) 66

Along America's west coast, the world's most valuable companies are racing to make artificial intelligence smarter. Google and Facebook have boasted of experiments using billions of photos and thousands of high-powered processors. But late last year, a project in eastern Tennessee quietly exceeded the scale of any corporate AI lab. It was run by the US government. From a report: The record-setting project involved the world's most powerful supercomputer, Summit, at Oak Ridge National Lab. The machine captured that crown in June last year, reclaiming the title for the US after five years of China topping the list. As part of a climate research project, the giant computer booted up a machine-learning experiment that ran faster than any before. Summit, which occupies an area equivalent to two tennis courts, used more than 27,000 powerful graphics processors in the project. It tapped their power to train deep-learning algorithms, the technology driving AI's frontier, chewing through the exercise at a rate of a billion billion operations per second, a pace known in supercomputing circles as an exaflop.

"Deep learning has never been scaled to such levels of performance before," says Prabhat, who leads a research group at the National Energy Research Scientific Computing Center at Lawrence Berkeley National Lab. His group collaborated with researchers at Summit's home base, Oak Ridge National Lab. Fittingly, the world's most powerful computer's AI workout was focused on one of the world's largest problems: climate change. Tech companies train algorithms to recognize faces or road signs; the government scientists trained theirs to detect weather patterns like cyclones in the copious output from climate simulations that spool out a century's worth of three-hour forecasts for Earth's atmosphere.

Education

A Supercomputer In a 19th Century Church Is 'World's Most Beautiful Data Center' (vice.com) 62

"Motherboard spoke to the Barcelona Supercomputing Center about how it outfitted a deconsecrated 19th century chapel to host the MareNostrum 4 -- the 25th most powerful supercomputer in the world," writes Slashdot reader dmoberhaus. From the report: Heralded as the "most beautiful data center in the world," the MareNostrum supercomputer came online in 2005, but was originally hosted in a different building at the university. Meaning "our sea" in Latin, the original MareNostrum was capable of performing 42.35 teraflops -- 42.35 trillion operations per second -- making it one of the most powerful supercomputers in Europe at the time. Yet the MareNostrum rightly became known for its aesthetics as much as its computing power. According to Gemma Maspoch, head of communications for Barcelona Supercomputing Center, which oversees the MareNostrum facility, the decision to place the computer in a giant glass box inside a chapel was ultimately for practical reasons.

"We were in need of hundreds of square meters without columns and the capacity to support 44.5 tons of weight," Maspoch told me in an email. "At the time there was not much available space at the university and the only room that satisfied our requirements was the Torre Girona chapel. We did not doubt it for a moment and we installed a supercomputer in it." According to Maspoch, the chapel required relatively few modifications to host the supercomputer, such as reinforcing the soil around the church so that it would hold the computer's weight and designing a glass box that would house the computer and help cool it.

The supercomputer has been beefed up over the years. Most recently, the fourth iteration came online in 2017 "with a peak computing capacity of 11 thousand trillion operations per second (11.15 petaflops)," reports Motherboard. "MareNostrum 4 is spread over 48 server racks comprising a total of 3,456 nodes. A node consists of two Intel chips, each of which has 24 processors."
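Those quoted figures hang together if "24 processors" is read as 24 cores per chip. A quick consistency check (our arithmetic, not Motherboard's):

    # Consistency check on the MareNostrum 4 figures quoted above.
    nodes = 3456
    chips_per_node = 2
    cores_per_chip = 24
    peak_flops = 11.15e15                    # 11.15 petaflops

    cores = nodes * chips_per_node * cores_per_chip
    print(f"{cores:,} cores total")                            # 165,888
    print(f"{peak_flops / cores / 1e9:.1f} GFLOP/s per core")  # ~67, plausible
                                                               # for AVX-512 Xeons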

Cloud

Is Linux Taking Over The World? (networkworld.com) 243

"2019 just might be the Year of Linux -- the year in which Linux is fully recognized as the powerhouse it has become," writes Network World's "Unix dweeb." The fact is that most people today are using Linux without ever knowing it -- whether on their phones, online when using Google, Facebook, Twitter, GPS devices, and maybe even in their cars, or when using cloud storage for personal or business use. While the presence of Linux on all of these systems may go largely unnoticed by consumers, the role that Linux plays in this market is a sign of how critical it has become. Most IoT and embedded devices -- those small, limited functionality devices that require good security and a small footprint and fill so many niches in our technology-driven lives -- run some variety of Linux, and this isn't likely to change. Instead, we'll just be seeing more devices and a continued reliance on open source to drive them.

According to the Cloud Industry Forum, for the first time, businesses are spending more on cloud than on internal infrastructure. The cloud is taking over the role that data centers used to play, and it's largely Linux that's making the transition so advantageous. Even on Microsoft's Azure, the most popular operating system is Linux. In its first Voice of the Enterprise survey, 451 Research predicted that 60 percent of nearly 1,000 IT leaders surveyed plan to run the majority of their IT off premises by 2019. That equates to a lot of IT efforts relying on Linux. Gartner states that 80 percent of internally developed software is now either cloud-enabled or cloud-native.

The article also cites Linux's use in AI, data lakes, and in the Sierra supercomputer that monitors America's nuclear stockpile, concluding that "In its domination of IoT, cloud technology, supercomputing and AI, Linux is heading into 2019 with a lot of momentum."

And there's even a long list of upcoming Linux conferences...

Intel

Intel Cascade Lake-AP Xeon CPUs Embrace the Multi-Chip Module (techreport.com) 72

Ahead of the annual Supercomputing 2018 conference next week, Intel today announced part of its upcoming Cascade Lake strategy. From a report: The company this morning teased plans for a new Xeon platform called Cascade Lake Advanced Performance, or Cascade Lake-AP. This next-gen platform doubles the cores per socket of Intel's current Xeon systems by joining a number of Cascade Lake Xeon dies together on a single package with the blue team's Ultra Path Interconnect, or UPI. Intel will allow Cascade Lake-AP servers to employ up to two-socket (2S) topologies, for as many as 96 cores per server.

Intel chose to share two competitive performance numbers alongside the disclosure of Cascade Lake-AP. One of these is that a top-end Cascade Lake-AP system can put up 3.4x the Linpack throughput of a dual-socket AMD Epyc 7601 platform. This benchmark hits AMD where it hurts. The AVX-512 instruction set gives Intel CPUs a major leg up on the competition in high-performance computing applications where floating-point throughput is paramount. Intel used its own compilers to create binaries for this comparison, and that decision could create favorable Linpack performance results versus AMD CPUs, as well.

Operating Systems

Finally, It's the Year of the Linux... Supercomputer (zdnet.com) 171

Beeftopia writes: From ZDNet: "The latest TOP500 Supercomputer list is out. What's not surprising is that Linux runs on every last one of the world's fastest supercomputers. Linux has dominated supercomputing for years. But, Linux only took over supercomputing lock, stock, and barrel in November 2017. That was the first time all of the TOP500 machines were running Linux. Before that, IBM AIX, a Unix variant, was hanging on for dear life low on the list."

An interesting architectural note: "GPUs, not CPUs, now power most of supercomputers' speed."

IT

HPE Announces World's Largest ARM-based Supercomputer (zdnet.com) 57

The race to exascale speed is getting a little more interesting with the introduction of HPE's Astra -- what will be the world's largest ARM-based supercomputer. From a report: HPE is building Astra for Sandia National Laboratories and the US Department of Energy's National Nuclear Security Administration (NNSA). The NNSA will use the supercomputer to run advanced modeling and simulation workloads for things like national security, energy, science and health care.

HPE is involved in building other ARM-based supercomputing installations, but when Astra is delivered later this year, "it will hands down be the world's largest ARM-based supercomputer ever built," Mike Vildibill, VP of Advanced Technologies Group at HPE, told ZDNet. The HPC system comprises 5,184 ARM-based processors -- the ThunderX2 processor, built by Cavium. Each processor has 28 cores and runs at 2 GHz. Astra will deliver over 2.3 theoretical peak petaflops of performance, which should put it well within the top 100 supercomputers ever built -- a milestone for an ARM-based machine, Vildibill said.
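Those numbers are internally consistent: dividing the quoted peak by the core count and clock rate implies roughly 8 double-precision flops per core per cycle, in line with ThunderX2's vector units doing fused multiply-adds. A quick check (our arithmetic, not HPE's):

    # Consistency check on the Astra figures quoted above.
    processors = 5184
    cores_per_processor = 28
    clock_hz = 2.0e9
    peak_flops = 2.3e15                      # 2.3 petaflops, theoretical peak

    cores = processors * cores_per_processor
    print(f"{cores:,} cores total")                                   # 145,152
    print(f"{peak_flops / (cores * clock_hz):.1f} flops/core/cycle")  # ~7.9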

Cloud

Nvidia Debuts Cloud Server Platform To Unify AI and High-Performance Computing (siliconangle.com) 15

Hoping to maintain the high ground in AI and high-performance computing, Nvidia late Tuesday debuted a new computing architecture that it claims will unify both fast-growing areas of the industry. From a report: The announcement of the HGX-2 cloud-server platform, made by Nvidia Chief Executive Jensen Huang at its GPU Technology Conference in Taipei, Taiwan, is aimed at many new applications that combine AI and HPC. "We believe the future requires a unified platform for AI and high-performance computing," Paresh Kharya, product marketing manager for Nvidia's accelerated-computing group, said during a press call Tuesday.

Others agree. "I think that AI will revolutionize HPC," Karl Freund, a senior analyst at Moor Insights & Strategy, told SiliconANGLE. "I suspect many supercomputing centers will deploy HGX2 as it can add dramatic computational capacity for both HPC and AI." More specifically, the new architecture enables applications involving scientific computing and simulations, such as weather forecasting, as well as both training and running of AI models such as deep learning neural networks, for jobs such as image and speech recognition and navigation for self-driving cars.

Network

On This Day 25 Years Ago, the Web Became Public Domain (popularmechanics.com) 87

On April 30, 1993, CERN -- the European Organization for Nuclear Research -- announced that it was putting a piece of software developed by one of its researchers, Tim Berners-Lee, into the public domain. That software was a "global computer networked information system" called the World Wide Web, and CERN's decision meant that anyone, anywhere, could run a website and do anything with it. From a report: While the proto-internet dates back to the 1960s, the World Wide Web as we know it had been invented four years earlier in 1989 by CERN employee Tim Berners-Lee. The internet at that point was growing in popularity among academic circles but still had limited mainstream utility. Scientists Robert Kahn and Vinton Cerf had developed Transmission Control Protocol and Internet Protocol (TCP/IP), which allowed for easier transfer of information. But there was the fundamental problem of how to organize all that information.

In the late 80s, Berners-Lee suggested a web-like system of management, tied together by a series of what he called hyperlinks. In a proposal, Berners-Lee asked CERN management to "imagine, then, the references in this document all being associated with the network address of the thing to which they referred, so that while reading this document you could skip to them with a click of the mouse."

Four years later, the project was still growing. In January 1993, the first major web browser, known as MOSAIC, was released by the National Center for Supercomputing Applications at the University of Illinois Urbana-Champaign. While there was a free version of MOSAIC, for-profit software companies purchased nonexclusive licenses to sell and support it. Licensing MOSAIC at the time cost $100,000 plus $5 each for any number of copies.

The Internet

Mosaic, the First HTML Browser That Could Display Images Alongside Text, Turns 25 (wired.com) 132

NCSA Mosaic 1.0, the first web browser to achieve popularity among the general public, was released on April 22, 1993. It was developed by a team of students at the University of Illinois' National Center for Supercomputing Applications (NCSA), and had the ability to display text and images inline, meaning you could put pictures and text on the same page together, in the same window. Wired reports: It was a radical step forward for the web, which was, at that point, a rather dull experience. It took the boring "document" layout of your standard web page and transformed it into something much more visually exciting, like a magazine. And, wow, it was easy. If you wanted to go somewhere, you just clicked. Links were blue and underlined, easy to pick out. You could follow your own virtual trail of breadcrumbs backwards by clicking the big button up there in the corner. At the time of its release, NCSA Mosaic was free software, but it was available only on Unix. That made it common at universities and institutions, but not on Windows desktops in people's homes.

The NCSA team put out Windows and Mac versions in late 1993. They were also released under a noncommercial software license, meaning people at home could download it for free. The installer was very simple, making it easy for just about anyone to get up and running on the web. It was then that the excitement really began to spread. Mosaic made the web come to life with color and images, something that, for many people, finally provided the online experience they were missing. It made the web a pleasure to use.

Networking

There's A Cluster of 750 Raspberry Pi's at Los Alamos National Lab (insidehpc.com) 128

Slashdot reader overheardinpdx shares a video from the SC17 supercomputing conference where Bruce Tulloch from BitScope "describes a low-cost Raspberry Pi cluster that Los Alamos National Lab is using to simulate large-scale supercomputers." Slashdot reader mspohr describes them as "five rack-mount Bitscope Cluster Modules, each with 150 Raspberry Pi boards with integrated network switches." With each of the 750 chips packing four cores, it offers a 3,000-core highly parallelizable platform that emulates an ARM-based supercomputer, allowing researchers to test development code without requiring a power-hungry machine at significant cost to the taxpayer. The full 750-node cluster, running 2-3 W per processor, runs at 1000W idle, 3000W typical and 4000W peak (with the switches) and is substantially cheaper, if also computationally a lot slower. After development using the Pi clusters, frameworks can then be ported to the larger scale supercomputers available at Los Alamos National Lab, such as Trinity and Crossroads.

BitScope's Tulloch points out the cluster is fully integrated with the network switching infrastructure at Los Alamos National Lab, and applauds the Raspberry Pi cluster as an "affordable, scalable, highly parallel testbed for high-performance-computing system-software developers."
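The quoted figures are easy to reconcile (a sketch; the 2-3 W per-board range is as given above, with switches and power supplies accounting for the remainder):

    # Sanity check on the Los Alamos Pi-cluster figures quoted above.
    boards = 750
    cores_per_board = 4
    watts_per_board = (2, 3)        # W per Raspberry Pi under load, as quoted

    print(f"{boards * cores_per_board:,} cores")          # 3,000
    low, high = (boards * w for w in watts_per_board)
    print(f"boards alone: {low}-{high} W")  # 1500-2250 W; switches and supplies
                                            # push typical draw to the ~3000 W quoted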

China

All 500 of the World's Top 500 Supercomputers Are Running Linux (zdnet.com) 288

Freshly Exhumed shares a report from ZDNet: Linux rules supercomputing. This day has been coming since 1998, when Linux first appeared on the TOP500 Supercomputer list. Today, it finally happened: All 500 of the world's fastest supercomputers are running Linux. The last two non-Linux systems, a pair of Chinese IBM POWER computers running AIX, dropped off the November 2017 TOP500 Supercomputer list. When the first TOP500 supercomputer list was compiled in June 1993, Linux was barely more than a toy. It hadn't even adopted Tux as its mascot yet. It didn't take long for Linux to start its march on supercomputing.

From when it first appeared on the TOP500 in 1998, Linux was on its way to the top. Before Linux took the lead, Unix was supercomputing's top operating system. From 2003 onward, the TOP500 was on its way to Linux domination. By 2004, Linux had taken the lead for good. This happened for two reasons: First, since most of the world's top supercomputers are research machines built for specialized tasks, each machine is a standalone project with unique characteristics and optimization requirements. To save costs, no one wants to develop a custom operating system for each of these systems. With Linux, however, research teams can easily modify and optimize Linux's open-source code for their one-off designs.

The semiannual TOP500 Supercomputer List was released yesterday. It also shows that China now claims 202 systems within the TOP500, while the United States claims 143 systems.

China

China Overtakes US In Latest Top 500 Supercomputer List (enterprisecloudnews.com) 110

An anonymous reader quotes a report from Enterprise Cloud News: The release of the semiannual Top 500 Supercomputer List is a chance to gauge the who's who of countries that are pushing the boundaries of high-performance computing. The most recent list, released Monday, shows that China is now in a class by itself. China now claims 202 systems within the Top 500, while the United States -- once the dominant player -- tumbles to second place with 143 systems represented on the list. Only a few months ago, the U.S. had 169 systems within the Top 500 compared to China's 160. The growth of China and the decline of the United States within the Top 500 has prompted the U.S. Department of Energy to dole out $258 million in grants to several tech companies to develop exascale systems, the next great leap in HPC. These systems can handle a billion billion calculations a second, or 1 exaflop. However, even as these physical machines grow more and more powerful, a good portion of supercomputing power is moving to the cloud, where it can be accessed by more researchers and scientists, making the technology more democratic.

China

China Arms Upgraded Tianhe-2A Hybrid Supercomputer (nextplatform.com) 23

New submitter kipperstem77 shares an excerpt from a report via The Next Platform: According to James Lin, vice director for the Center of High Performance Computing (HPC) at Shanghai Jiao Tong University, who divulged the plans last year, the National University of Defense Technology (NUDT) is building one of the three pre-exascale machines [that China is currently investing in], in this case a kicker to the Tianhe-1A CPU-GPU hybrid that was deployed in 2010 and that put China on the HPC map. This system will be installed at the National Supercomputer Center in Tianjin, not the one in Guangzhou, according to Lin. The machine is expected to use ARM processors, and we think it will very likely use Matrix2000 DSP accelerators, too, but this has not been confirmed. The second pre-exascale machine will be an upgrade to the TaihuLight system using a future Shenwei processor, but it will be installed at the National Supercomputing Center in Jinan. And the third pre-exascale machine being funded by China is being architected in conjunction with AMD, with licensed server processor technology; everyone now thinks it is going to be based on Epyc processors, possibly with Radeon Instinct GPU coprocessors. The Next Platform has a slide embedded in its report "showing the comparison between Tianhe-2, which was the fastest supercomputer in the world for two years, and Tianhe-2A, which will be vying for the top spot when the next list comes out." Every part of this system shows improvements.

AMD

Six Companies Awarded $258 Million From US Government To Build Exascale Supercomputers (digitaltrends.com) 40

The U.S. Department of Energy will be investing $258 million to help six leading technology firms -- AMD, Cray Inc., Hewlett Packard Enterprise, IBM, Intel, and Nvidia -- research and build exascale supercomputers. Digital Trends reports: The funding will be allocated to them over the course of a three-year period, with the companies themselves providing 40 percent of the overall project cost, for a total investment of $430 million in the project. "Continued U.S. leadership in high performance computing is essential to our security, prosperity, and economic competitiveness as a nation," U.S. Secretary of Energy Rick Perry said. "These awards will enable leading U.S. technology firms to marshal their formidable skills, expertise, and resources in the global race for the next stage in supercomputing -- exascale-capable systems." The funding will finance research and development in three key areas: hardware technology, software technology, and application development. There are hopes that one of the companies involved in the initiative will be able to deliver an exascale-capable supercomputer by 2021.
