What is the expansion of pi? Computers have given a very long answer

You probably remember from math lessons that pi can be expanded as 3.14. Two decimal places, however, is not a very impressive result compared with what Google's computers have achieved.

In 2019, Google Cloud put this number at 31.4 trillion decimal places. The record did not last long, however, because just two years later scientists from the University of Applied Sciences of the Grisons in Switzerland raised the figure to 62.8 trillion decimal places. And this year brought another remarkable achievement.

Google once again played the leading role, or more precisely, its computers did. Thanks to them, the value of pi was determined to 100 trillion decimal places. Google Cloud is thus involved in breaking the record once more, which is all the more impressive when you consider that the number of known digits has roughly tripled in three years.

The expansion of pi is now known to 100 trillion decimal places

This achievement is a testament to how much the Google Cloud infrastructure accelerates year after year. "The core technology that made this possible is Compute Engine, Google Cloud's secure and customizable compute service, along with several of its recent additions and improvements: the Compute Engine N2 machine family, 100 Gbps egress bandwidth, Google Virtual NIC, and balanced Persistent Disks," we can read in a statement published by Google.
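
For illustration only, here is a minimal sketch of how a single N2 machine with Google Virtual NIC networking and a balanced Persistent Disk can be provisioned with the google-cloud-compute Python client. The project, zone, image, machine shape, and disk size below are hypothetical placeholders, not the configuration Google actually used for the record:

    from google.cloud import compute_v1

    # Hypothetical project and zone, chosen only for the example.
    PROJECT, ZONE = "my-project", "us-central1-a"

    instance = compute_v1.Instance(
        name="pi-compute-node",
        # A machine from the N2 family named in Google's statement.
        machine_type=f"zones/{ZONE}/machineTypes/n2-highmem-128",
        network_interfaces=[
            compute_v1.NetworkInterface(
                network=f"projects/{PROJECT}/global/networks/default",
                nic_type="GVNIC",  # Google Virtual NIC
            )
        ],
        disks=[
            compute_v1.AttachedDisk(
                boot=True,
                auto_delete=True,
                initialize_params=compute_v1.AttachedDiskInitializeParams(
                    source_image="projects/debian-cloud/global/images/family/debian-11",
                    disk_type=f"zones/{ZONE}/diskTypes/pd-balanced",  # balanced Persistent Disk
                    disk_size_gb=100,  # placeholder size
                ),
            )
        ],
    )

    # Create the VM and block until the operation completes.
    operation = compute_v1.InstancesClient().insert(
        project=PROJECT, zone=ZONE, instance_resource=instance
    )
    operation.result()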

The program that calculated 100 trillion digits of pi is y-cruncher v0.7.8, and it relies on the Chudnovsky algorithm. The computation began on October 14, 2021 and ended on March 21, 2022: it took exactly 157 days, 23 hours, 31 minutes, and 7.651 seconds.
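
The production tool is far more sophisticated, but the series behind it is compact enough to show. Below is a minimal Python sketch of the Chudnovsky algorithm in its classic binary-splitting form; it illustrates the mathematics only and is not y-cruncher's actual implementation (the function names are ours):

    import decimal

    # Chudnovsky series:
    # 1/pi = 12 * sum_{k>=0} (-1)^k (6k)! (13591409 + 545140134k)
    #        / ((3k)! (k!)^3 640320^(3k + 3/2))

    def binary_split(a, b):
        # Recursively combine the series terms for indices [a, b).
        if b == a + 1:
            Pab = -(6 * a - 5) * (2 * a - 1) * (6 * a - 1)
            Qab = 10939058860032000 * a**3  # 640320^3 / 24
            Rab = Pab * (13591409 + 545140134 * a)
        else:
            m = (a + b) // 2
            Pam, Qam, Ram = binary_split(a, m)
            Pmb, Qmb, Rmb = binary_split(m, b)
            Pab = Pam * Pmb
            Qab = Qam * Qmb
            Rab = Qmb * Ram + Pam * Rmb
        return Pab, Qab, Rab

    def chudnovsky_pi(n_terms, digits):
        decimal.getcontext().prec = digits + 5  # a few guard digits
        _, Q, R = binary_split(1, n_terms)
        return (426880 * decimal.Decimal(10005).sqrt() * Q) / (13591409 * Q + R)

    # Each series term contributes roughly 14 correct digits,
    # so 100 terms comfortably cover 1000 digits.
    print(chudnovsky_pi(100, 1000))

Each term of the series adds about 14 correct digits, which is why it converges so quickly; pushing it to 100 trillion digits is then chiefly an engineering problem of arbitrary-precision arithmetic, parallelism, and storage rather than of mathematics.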

As you can imagine, an operation on this scale requires vast computing power and resources. Google Cloud estimated the temporary storage needed for the calculation at 554 TB. Notably, the company built a cluster consisting of one computational node and 32 storage nodes, which together served 64 iSCSI block storage targets.
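
As a quick back-of-envelope check (the even spread below is our assumption; Google did not publish the layout in this detail), those figures imply two targets per storage node and under 9 TB per target:

    # Rough arithmetic based on the figures quoted above.
    total_storage_tb = 554
    storage_nodes = 32
    iscsi_targets = 64

    print(iscsi_targets // storage_nodes)    # 2 targets per storage node
    print(total_storage_tb / iscsi_targets)  # ~8.7 TB per target, if spread evenly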
