Alphabet unveils long-awaited Gemini AI model

December 6, 2023 – 7:04 AM PST

SAN FRANCISCO (Reuters) – Alphabet (GOOGL.O) on Wednesday introduced its most advanced artificial intelligence model, a technology capable of crunching different forms of information such as video, audio and text.

Called Gemini, the Google owner’s highly anticipated AI model is capable of more sophisticated reasoning and understanding information with a greater degree of nuance than Google’s prior technology, the company said.

“This new era of models represents one of the biggest science and engineering efforts we’ve undertaken as a company,” Alphabet CEO Sundar Pichai wrote in a blog post.

Since the launch of OpenAI’s ChatGPT roughly a year ago, Google has been racing to produce AI software that rivals what the Microsoft (MSFT.O)-backed company has introduced.

Google added a portion of the new Gemini model technology to its AI assistant Bard on Wednesday, and said it planned to release its most advanced version of Gemini through Bard early next year.

Alphabet said it is making three versions of Gemini, each of which is designed to use a different amount of processing power. The most powerful version is designed to run in data centers, and the smallest will run efficiently on mobile devices, the company said.

Gemini is the largest AI model that the company’s Google DeepMind AI unit has helped make, but it is “significantly” cheaper to serve to users than the company’s prior, larger models, Eli Collins, DeepMind’s vice president of product, told reporters.

“So it’s not just more capable, it’s also far more efficient,” Collins said. The latest model still requires a substantial amount of computing power to train, but Google is improving on its process, he added.

Alphabet also announced a new generation of its custom-built AI chips, or tensor processing units (TPUs). The Cloud TPU v5p is designed to train large AI models, and is stitched together in pods of 8,960 chips.

The new version of its custom processors can train large language models nearly three times as fast as prior generations. The new chips are available to developers in “preview” as of Wednesday, the company said.

Reporting by Max A. Cherney and Stephen Nellis in San Francisco; Editing by Jamie Freed
