Alphabet Inc's GOOG Google on Tuesday shared details about the supercomputers it uses to train its artificial intelligence models.
Google said the systems are faster and more energy-efficient than comparable systems from Nvidia Corp NVDA, Reuters reports.
Google has developed a custom chip called the Tensor Processing Unit, or TPU, which handles more than 90% of the company's AI training work: feeding data through models so they can respond to queries with human-like text or generate images. The Google TPU is now in its fourth generation.
Google has detailed how it strung more than 4,000 of the chips together into a supercomputer, using its specially designed optical switches to connect individual machines.
The company's Bard and Microsoft Corp MSFT-backed OpenAI's ChatGPT have intensified competition among companies building AI supercomputers, as the size of the so-called large language models that power the technologies has exploded.
Google trained PaLM, its largest publicly disclosed language model to date, by splitting it across two of the 4,000-chip supercomputers over 50 days.
Google said its chips are up to 1.7 times faster and 1.9 times more energy-efficient than a comparably sized system based on Nvidia's A100 chip.
Google hinted that it is working on a new TPU to compete with Nvidia's H100, with Google Fellow Norm Jouppi telling Reuters that Google has "a healthy pipeline of future chips."
In March, Microsoft announced it was looking into ways to string together tens of thousands of Nvidia's A100 graphics chips, the workhorse for training AI models.
OpenAI has long required access to powerful cloud computing services as it attempts to train an ever-growing number of AI programs, called models.
Scott Guthrie, Microsoft's executive vice president, said the project cost Microsoft more than several hundred million dollars.
Microsoft is already working on the next generation of the AI supercomputer, part of an expanded deal with OpenAI in which Microsoft added $10 billion to its investment.