
New Exascale Supercomputer Set to Study Nuclear Weaponry

Supercomputers aren’t exactly new. The first machine ever considered a supercomputer was built all the way back in 1960, when UNIVAC delivered the Livermore Atomic Research Computer, or LARC, to Lawrence Livermore National Laboratory, with a second unit later built for the U.S. Navy. More supercomputers arrived throughout the following decades, but none of them compares with the latest machine, known as Frontier.

In fact, the new supercomputer now sits atop the rankings as the fastest in existence. Commissioned by the Department of Energy and its National Nuclear Security Administration, Frontier is roughly 2.5 times faster than the machine now in second place. It achieves this performance with a combination of almost 10,000 CPUs and nearly 38,000 GPUs. Although GPUs are best known for high-end gaming, their processing power makes them just as valuable in scientific applications.

A Winning Combination of CPUs and GPUs

Much of Frontier’s performance can be attributed to how its CPUs and GPUs work together. While the CPUs drive primary tasks and day-to-day operations, the GPUs handle the highly repetitive mathematical calculations on the system’s behalf.

Douglas Kothe, associate laboratory director for computing and computational sciences at Oak Ridge National Laboratory, the facility that houses Frontier, described the combination in a recent interview: “You could say it’s a match made in supercomputing heaven. You could compare each CPU to a crew chief in a factory and the GPUs to workers on the front line. [The CPU] reassembles the results into the final answer.”
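To make that division of labor concrete, here is a minimal CUDA sketch of the general host/device pattern: the CPU prepares the work, dispatches the same small calculation to thousands of GPU threads at once, and then reassembles the output into a final answer. Frontier itself pairs AMD CPUs and GPUs programmed through HIP rather than CUDA, and the kernel, sizes, and names below are illustrative assumptions, not Frontier’s actual code.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// GPU "workers": every thread applies the same small calculation
// to its own element (y = a*x + y, the classic axpy operation).
__global__ void axpy(float a, const float *x, float *y, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;  // one million elements (illustrative)
    float *x, *y;

    // CPU "crew chief": prepare the work in memory both sides can see...
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // ...hand the repetitive math to thousands of GPU threads at once...
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    axpy<<<blocks, threads>>>(3.0f, x, y, n);
    cudaDeviceSynchronize();

    // ...then reassemble the results into a final answer.
    double sum = 0.0;
    for (int i = 0; i < n; ++i) sum += y[i];
    printf("checksum: %.1f (expected %.1f)\n", sum, 5.0 * n);

    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

The pattern scales because the kernel body never changes, only the number of workers does, which is exactly why chips built to render millions of pixels also excel at repetitive scientific math.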

But the supercomputer wasn’t created overnight. Instead, it’s taken years of research and hands-on work to create Frontier. Several obstacles needed to be overcome, including issues surrounding memory usage and data storage.

Kothe continued his interview by saying: “In principle, the community could have developed and deployed an exascale supercomputer much sooner, but it would not have been usable, useful and affordable by our standards.”

How Powerful Is It?

Simply put, the new Frontier supercomputer is a powerhouse of a machine. It can process data roughly seven times faster than the supercomputers that came before it, and it can store approximately four times as much data as those earlier machines.

Frontier also contains nearly 9,500 individual processing nodes. Each node serves as a miniature computer in its own right, and because data can be transferred between nodes as needed, the system can split an enormous problem into pieces, work on all of them at once, and combine the results. That ability to divide and recombine work is what makes Frontier so powerful and efficient.
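As a rough illustration of that divide-and-exchange pattern, the sketch below combines MPI, the message-passing library that real Frontier applications rely on, with CUDA: each MPI rank stands in for one node, computes a partial result on its own GPU, and the ranks then merge their answers over the network. The slice size, launch configuration, and kernel are hypothetical, and Frontier’s actual stack pairs MPI with AMD’s HIP and the HPE Slingshot interconnect rather than CUDA.

```cuda
#include <cstdio>
#include <mpi.h>
#include <cuda_runtime.h>

// Each node's GPU sums its local slice; per-thread partial sums are
// combined on the device with an atomic add.
__global__ void partial_sum(const float *x, int n, float *out) {
    float s = 0.0f;
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += gridDim.x * blockDim.x)
        s += x[i];
    atomicAdd(out, s);
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    // Each rank plays the role of one node: it owns a private slice
    // of the data and a GPU to crunch it.
    const int n = 1 << 20;  // illustrative slice size
    float *x, *out;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&out, sizeof(float));
    for (int i = 0; i < n; ++i) x[i] = 1.0f;
    *out = 0.0f;

    partial_sum<<<64, 256>>>(x, n, out);
    cudaDeviceSynchronize();

    // Nodes then exchange partial results over the interconnect, so the
    // whole machine behaves like one computer instead of 9,500 of them.
    float local = *out, global = 0.0f;
    MPI_Allreduce(&local, &global, 1, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum across %d ranks: %.1f\n", nranks, global);

    cudaFree(x);
    cudaFree(out);
    MPI_Finalize();
    return 0;
}
```

Codes on machines like Frontier generally follow this same shape, just at vastly larger scale and with far more sophisticated kernels and reductions.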

Thanks to Frontier’s increased memory, it can run larger simulations that require considerably more processing power than was available in the past. Not only is this expected to yield significant breakthroughs in its own right, but the new supercomputer is also expected to serve as a benchmark for future machines.

But a supercomputer isn’t able to do much on its own – even one that has the power and efficiency of Frontier. Without the support of data scientists who know how to interpret and apply the data it produces, even the most powerful supercomputers are virtually useless.
