What is TPU and Google AI?

[Image: Google TPU machine]


The Tensor Processing Unit (TPU) is an AI accelerator application-specific integrated circuit (ASIC) developed by Google for neural network machine learning, using Google's own TensorFlow software. Google began using TPUs internally in 2015, and in 2018 made them available for third-party use, both as part of its cloud infrastructure and by offering a smaller version of the chip for sale.

Google was among the first to announce work on AI of this kind, but circumstances changed: OpenAI, a newer company backed by Microsoft funding, beat Google to market with ChatGPT, which became a huge success, as we already know.

Now Google wants to launch its own chat AI, named Bard.

Google talked about the TPU v4 supercomputer, which has 4,096 of the company’s Tensor Processing Units (TPUs). Google runs its AI applications on TPU chips, and that includes Bard, which is an early iteration of the company’s AI-infused search engine. The company has deployed dozens of TPU v4 supercomputers in Google Cloud.

[Image: Google AI chip]


Google’s paper on its supercomputing infrastructure comes after Microsoft made noise about its Azure supercomputer with Nvidia GPUs, which powers ChatGPT. By comparison, Google has been conservative in deploying AI in its web applications, but is now trying to catch up with Microsoft, which has deployed OpenAI’s GPT-4 large-language model in its Bing search engine.


Optical connections have been used for long-distance communications over telecom networks for decades, but now are considered ripe for use over shorter distances in datacenters. Companies such as Broadcom and Ayar Labs are creating products for optical interconnects.


Google’s TPU v4 supercomputer was deployed in 2020, and the paper was written as a retrospective piece that measures performance gains over the years.


The supercomputer is the “first with a circuit-switched optical interconnect,” Google researchers told HPCwire in an email. It had a total of 64 racks hosting 4,096 TPUs, plus 48 optical circuit switches connecting all racks across the system. Google calculated that the optical components account for under 5% of the system cost and under 2% of the power consumed by a system.
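A quick sanity check of those topology numbers (a back-of-envelope sketch using only the figures quoted above, not anything from Google's paper):

```python
# Figures cited in the article: 64 racks, 4,096 TPU v4 chips,
# 48 optical circuit switches (OCSes) connecting the racks.
racks = 64
tpus = 4096
ocs_count = 48

# Dividing total chips by rack count gives the chips per rack.
chips_per_rack = tpus // racks
print(f"TPU v4 chips per rack: {chips_per_rack}")   # 64
print(f"Optical circuit switches: {ocs_count}")     # 48
```

So each of the 64 racks holds 64 TPU v4 chips, with the 48 OCSes providing the reconfigurable links between racks.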


The TPU v4 chip outperforms TPU v3 chips by 2.1 times and improves the performance per watt by 2.7 times, Google researchers wrote. “The TPU v4 supercomputer is 4x larger at 4096 chips and thus ~10x faster overall, which along with OCS flexibility helps large language models,” the researchers said in the paper.
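As a rough illustration of how those figures combine (my own naive estimate, not Google's methodology), multiplying the per-chip speedup by the 4x scale-up gives a number in the same ballpark as the paper's ~10x claim; any remaining gap would come from improvements beyond raw chip count, which the article does not break down:

```python
# Naive scaling estimate from the quoted figures.
per_chip_speedup = 2.1   # TPU v4 vs. TPU v3, per chip (from the paper)
scale_up = 4             # 4,096 chips, "4x larger" than the v3 system

naive_overall = per_chip_speedup * scale_up
print(f"Naive overall speedup: {naive_overall:.1f}x")  # 8.4x
```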


Google highlighted the flexibility of optics in deploying systems and adapting the topology on the fly depending on the application; the optical interconnect and its high bandwidth allowed each rack to be deployed independently, and each rack could be plugged in once production was completed.

