How fast can a supercomputer possibly go? Is there a theoretical limit on the speed of its operations?

The first true supercomputer, the CRAY-1, built in 1976, ran at a speed of about 100 megaflops. (Flops are floating-point operations per second, that is, the number of calculations involving decimal fractions a computer can perform each second; mega means one million.) At present the fastest supercomputer is China's Tianhe-1, which can operate at a speed of about 1,000 trillion (10^15) calculations per second. Such extraordinary speed is made possible by parallel processing: many microprocessors divide the task between them and work in unison to arrive quickly at the result. This obviates the need to build a single free-standing monolithic processor, which in any case could not cope with such enormous amounts of data.

Although the data-crunching power of a supercomputer may seem boundless, one of the main challenges in building such a machine lies in the length of its internal wiring. Electrical signals travel along wires at close to the speed of light, about 300,000 kilometers per second, which seems exceedingly fast. But when the aim is to perform trillions of operations in that same second, the time signals spend travelling over individual wires adds up to a significant lag.

To ease this constraint, Seymour Cray, the designer of the CRAY-1, kept the maximum length of any wire below 122 centimeters; in the CRAY-2 it was below 40.5 centimeters, and in the CRAY-3 a mere 7.6 centimeters. Unfortunately, all the current flowing through such densely packed wires produced a tremendous amount of heat, which could disable the entire system, so Cray had to use a liquid coolant that absorbed the heat and kept the temperature at a safe level. The heat problem is relatively easy to manage by such methods, but the lag in signal travel is likely to remain a limiting factor in building faster supercomputers in the future.
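To get a feel for the numbers, here is a small, purely illustrative Python calculation. The wire lengths are the ones quoted above; the assumption that signals travel at the full speed of light is the same simplification the text makes (real signals in wires are somewhat slower), so these are best-case figures.

# Rough, best-case estimate of how long a signal needs to cross a wire,
# assuming it travels at about 300,000 km per second (the speed of light).
SIGNAL_SPEED_M_PER_S = 3.0e8

def wire_delay_ns(length_cm):
    """Nanoseconds for a signal to traverse a wire of the given length."""
    return (length_cm / 100.0) / SIGNAL_SPEED_M_PER_S * 1e9

for machine, length_cm in [("CRAY-1", 122.0), ("CRAY-2", 40.5), ("CRAY-3", 7.6)]:
    print(f"{machine}: {length_cm:5.1f} cm wire -> about {wire_delay_ns(length_cm):.2f} ns per crossing")

# For comparison, a machine doing 10**15 operations per second would have only
# 0.000001 ns per operation if it tried to do them one after another,
# which is exactly why the work has to be spread across many processors.
print(f"Serial time budget per operation at 10^15 ops/s: {1e9 / 1e15:.6f} ns")

Even the 7.6-centimeter wires of the CRAY-3 cost roughly a quarter of a nanosecond per crossing, which is enormous compared with the time budget of a machine aiming at trillions of operations per second.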

Take your PC as an example. Sending emails, watching movies and the like is not a problem; any average PC can handle such jobs well. However, if you are running heavy software and asking it to perform a complicated task, it will of course require more processing power. The same goes for games: not every game will run smoothly on an average computer, and a demanding one needs a faster processor and graphics card to run properly, i.e., without lag.

Now suppose you want to test a new drug, forecast the weather, or model what the climate will look like in 2050. That is an entirely different story. You cannot perform such tasks on your own computer, because it simply does not have the processing power. This is where supercomputers come in, and it is exactly such problems that push even the best of them to their limits.

However, unlike your PC, which can be upgraded with more RAM and a better processor, there is a limit to how fast any single processor can run and to how much difference extra memory makes. Therefore, supercomputers rely on parallel processing: adding more processors, dividing the problem into chunks, and having each processor work on its own chunk at the same time.
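As a minimal sketch of that idea in Python (the standard multiprocessing module stands in for a real supercomputer's scheduler, and summing squares stands in for a real workload):

# Minimal sketch of parallel processing: split a problem into chunks and
# let a pool of worker processes handle the chunks at the same time.
from multiprocessing import Pool

def work_on_chunk(chunk):
    """Each worker gets one chunk of the problem and returns a partial result."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    numbers = list(range(1_000_000))
    n_workers = 4

    # Divide the problem into one chunk per worker.
    chunk_size = len(numbers) // n_workers
    chunks = [numbers[i:i + chunk_size]
              for i in range(0, len(numbers), chunk_size)]

    with Pool(n_workers) as pool:
        partial_results = pool.map(work_on_chunk, chunks)  # chunks run in parallel

    # Reassemble the partial results into the final answer.
    print(sum(partial_results))

A real supercomputer does the same thing at a vastly larger scale, with the chunks spread across thousands of processors instead of a handful of worker processes.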

Computer scientists have made breakthroughs in scaling machines up to thousands of processors, but, as with almost everything, supercomputers have their limits too. Even though they can solve problems extremely quickly, they still need a centralized management layer that splits up the problem, controls and schedules the workload, and reassembles the results.
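One standard way to put a number on that limit, not named in the text above but widely used, is Amdahl's law: if some fraction of the work has to stay serial (splitting the problem, managing the workload, reassembling the results), then no amount of extra processors can push the overall speedup past one divided by that fraction. A few illustrative figures in Python, with the 5% serial share chosen purely for illustration:

# Amdahl's law: an upper bound on parallel speedup when a fraction `serial`
# of the work (splitting, scheduling, reassembling) cannot be parallelized.
def max_speedup(serial, n_processors):
    return 1.0 / (serial + (1.0 - serial) / n_processors)

for n in (10, 1_000, 100_000):
    print(f"{n:>7} processors, 5% serial work -> speedup of at most {max_speedup(0.05, n):.1f}x")
# Even 100,000 processors cannot push the speedup much past 20x here,
# because the serial 5% always has to run on a single processor.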

For some problems it is easy to divide the workload, because the pieces are largely independent of one another. For others, however, the problem does not split up so neatly. Returning to the example of weather forecasting: the forecast for one region depends on conditions in neighboring regions, so a supercomputer working on one part of the forecast has to take into account what is being computed elsewhere.
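To make that dependency concrete, here is a hypothetical toy in Python, a simple 1-D averaging update and nothing like a real forecasting model: each worker owns one slice of the grid, but it cannot update its slice without first receiving the edge values of its neighbor, which is exactly the communication described above.

# Toy illustration of neighbor dependencies: updating a cell needs the values
# of the cells next to it, so workers that own adjacent slices of the grid
# must exchange their edge cells before every step.
def update_slice(left_edge, my_slice, right_edge):
    """New value of each cell = average of itself and its two neighbors."""
    padded = [left_edge] + my_slice + [right_edge]
    return [(padded[i - 1] + padded[i] + padded[i + 1]) / 3.0
            for i in range(1, len(padded) - 1)]

# Two "workers", each owning half of an 8-cell grid.
left, right = [0.0] * 4, [10.0] * 4

for step in range(3):
    # The "communication": each worker needs its neighbor's edge cell
    # before it can compute the next step.
    new_left = update_slice(0.0, left, right[0])
    new_right = update_slice(left[-1], right, 0.0)
    left, right = new_left, new_right

print(left + right)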

Depending on the problem, the processors of a supercomputer may have to communicate with one another before they can move on, or one processor may have to sit idle while the others finish their part of the job. All in all, with computer scientists constantly finding ways to make supercomputers faster, it does not seem that they will hit a hard theoretical ceiling any time soon. And the rise of complicated problems, such as the coronavirus pandemic today, makes that constant advancement the need of the hour.