ExaFLOP: Computing 50 Times Faster Than Today’s Supercomputers
Supercomputers may seem like an esoteric technology, but better supercomputers are expected to transform countries and economies by strengthening security, economic competitiveness, and scientific and technological research. The point of supercomputing is to process enormous data sets and extract from them information that can help solve the challenges facing humanity.
DARPA's creation of the ARPANET, the precursor to today's Internet, provided huge economic advantages to the United States. In the coming decades, the countries that innovate most in supercomputing will gain a similar advantage.
But reaching the exaFLOP level is a tough nut to crack. The first challenge is securing enough funding to bring together the talent that can achieve the goal. Another is designing a supercomputer that does more than cobble together a large number of processors. Finally, supercomputers are plagued by enormous energy consumption.
In an IEEE Spectrum article, “Global Race Toward Exascale…“, Steve Conway, Senior Vice President of Research at Hyperion, notes:
“It would take well over 100 megawatts, which nobody’s going to supply, because that’s over a 100 million dollar electricity bill. So it has to get the electricity usage under control. Everybody’s trying to get it in the 20 to 30 megawatts range.”
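Conway's $100 million figure checks out with some back-of-envelope arithmetic. The sketch below assumes an industrial electricity rate of roughly $0.10 per kilowatt-hour (a typical figure, not one given in the article) and continuous year-round operation:

```python
# Back-of-envelope check of the quoted figure: a 100 MW machine
# running year-round at an assumed rate of $0.10/kWh.
power_mw = 100
hours_per_year = 24 * 365                 # 8,760 hours
rate_per_kwh = 0.10                       # USD per kWh, assumed
energy_kwh = power_mw * 1_000 * hours_per_year
annual_bill = energy_kwh * rate_per_kwh
print(f"${annual_bill:,.0f} per year")    # roughly $88 million
```

At this assumed rate the bill lands just under $100 million a year; at slightly higher rates, or above 100 MW, it crosses the threshold Conway describes.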
The United States Falls Out of Top 3 in the World
An exascale computer can perform a billion billion, or one quintillion, calculations per second. That's 1,000 petaflops, or one exaFLOPS. No such machine exists yet, but researchers around the world are in hot pursuit and will soon break through this computing milestone.
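The unit relationships above are just powers of ten, which a few lines make concrete:

```python
# The SI prefixes behind the FLOPS figures in the text.
peta = 10 ** 15   # one petaflop  = 10^15 operations per second
exa = 10 ** 18    # one exaflop   = 10^18 operations per second

print(exa // peta)   # 1000: one exaflop equals 1,000 petaflops
print(f"{exa:,}")    # 1,000,000,000,000,000,000: one quintillion
```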
Last month, Top500.org ranked the Sunway TaihuLight, created by China’s National Research Center of Parallel Computer Engineering & Technology (NRCPC), as the fastest supercomputer in the world at 93 petaflops. Second place went to the Tianhe-2, or Milky Way-2, from China’s National University of Defense Technology (NUDT) at 33.9 petaflops.
For the first time since 1996, the United States didn’t make the top three on the list.
The rest of the lineup, from Top500.org, includes:
3. Swiss National Supercomputing Centre (CSCS)
4. DOE/SC/Oak Ridge National Laboratory
7. Joint Center for Advanced High Performance Computing
8. RIKEN Advanced Institute for Computational Science (AICS)
9. DOE/SC/Argonne National Laboratory
The Challenges of Exascale Computing
Getting closer to the exascale level requires new state-of-the-art components that enable better computing performance. The main technological challenges include programmability, power efficiency, reliability, and scalability, along with support for workloads such as artificial intelligence and data analytics.
According to Ars Technica’s article, “US gov’t tap The Machine to beat China to exascale supercomputing,” there are three main challenges to computing at the exascale:
- The inordinate power usage (well over 100 megawatts for a naive design) and cooling requirements;
- Developing the architecture and interconnects to efficiently weave together hundreds of thousands of processors and memory chips;
- And devising an operating system and client software that actually scales to one quintillion calculations per second.
Memory vs. Processing
Hewlett Packard Enterprise (HPE) has created an “exascale reference design” architecture known as “Memory-Driven Computing,” which it believes can achieve exascale computing with high performance and low energy requirements. According to HPE’s white paper “Exascale: A race to the future of HPC,” exponential data growth is pushing the boundaries of the science and engineering of computing.
The following is a recent look at the world’s top supercomputers.