To no one's surprise, there have been a lot of AI-related announcements at the Google Cloud Next event. Even less surprising: Google's annual cloud computing conference has focused on new versions of its flagship Gemini model and advances in AI agents.
So, for those following the whiplash competition between AI heavy hitters like Google and OpenAI, let's unpack the latest Gemini updates.
On Wednesday, Google announced Gemini 2.5 Flash, a "workhorse" model adapted from its most advanced model, Gemini 2.5 Pro. Gemini 2.5 Flash has the same build as 2.5 Pro but has been optimized to run faster and cheaper. That speed and cost-efficiency come from the model's ability to adjust, or "budget," its processing power based on the task at hand. This concept, known as "test-time compute," is the emerging technique that reportedly made DeepSeek's R1 model so cheap to train.
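To make the "budgeting" idea concrete, here is a toy sketch of what adjusting compute per task might look like. This is not Google's implementation — the heuristic, function name, and token numbers are all invented for illustration; the only point is that an easy prompt gets little or no extended reasoning while a hard one gets the full budget.

```python
def pick_thinking_budget(prompt, max_tokens=8192):
    """Toy heuristic: spend more 'thinking' tokens on harder-looking prompts.

    All markers and thresholds here are invented for illustration; a real
    system would decide this with learned signals, not keyword matching.
    """
    hard_markers = ("prove", "derive", "multi-step", "optimize")
    if any(marker in prompt.lower() for marker in hard_markers):
        return max_tokens        # hard task: full reasoning budget
    if len(prompt.split()) > 50:
        return max_tokens // 4   # long prompt: partial budget
    return 0                     # simple lookup: skip extended reasoning

print(pick_thinking_budget("What's the capital of France?"))
print(pick_thinking_budget("Prove that the sum of two even numbers is even."))
```

The design point is that the expensive part (extended reasoning) becomes a dial rather than a fixed cost, which is what lets one model serve both cheap quick queries and pricier multi-step ones.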
Gemini 2.5 Flash isn't available just yet, but it's coming soon to Vertex AI, AI Studio, and the standalone Gemini app. On a related note, Gemini 2.5 Pro is now available in public preview on Vertex AI and the Gemini app. This is the model that has recently topped the leaderboards in the Chatbot Arena.
Google is also bringing these models to Google Workspace for new productivity-related AI features. That includes the ability to create audio versions of Google Docs, automated data analysis in Google Sheets, and something called Google Workspace Flows, a way of automating manual workflows like managing customer service requests across Workspace apps.
Agentic AI, a more advanced form of AI that reasons across multiple steps, is the main driver of the new Google Workspace features. But it's a challenge for all models to access the requisite data to perform tasks. Yesterday, Google announced that it's adopting the Model Context Protocol (MCP), an open-source standard developed by Anthropic that enables "secure, two-way connections between [developers'] data sources and AI-powered tools," as Anthropic explained.
"Developers can either expose their data through MCP servers or build AI applications (MCP clients) that connect to these servers," read a 2024 Anthropic announcement describing how it works. Now, according to Google DeepMind CEO Demis Hassabis, Google is adopting MCP for its Gemini models.
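MCP is built on JSON-RPC 2.0 messaging between a client and a server. As a rough sketch of the client/server split Anthropic describes, the toy code below has a "server" expose one data source as a readable resource and a "client" fetch it. The `DOCS` store, the `kb://` URI, and the `handle` dispatcher are invented for this example; real MCP servers implement the full protocol (initialization, capability negotiation, tools, and more) via Anthropic's SDKs.

```python
import json

def make_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request of the kind an MCP client sends."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# Toy "server" state: one document exposed as a resource.
# The store and URI scheme are invented for illustration.
DOCS = {"kb://faq/returns": "Customers may return items within 30 days."}

def handle(raw):
    """Dispatch two MCP-style methods; everything else is an error."""
    req = json.loads(raw)
    if req["method"] == "resources/list":
        result = {"resources": [{"uri": uri} for uri in DOCS]}
    elif req["method"] == "resources/read":
        uri = req["params"]["uri"]
        result = {"contents": [{"uri": uri, "text": DOCS[uri]}]}
    else:
        return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                           "error": {"code": -32601, "message": "method not found"}})
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

# "Client" side: discover resources, then read one.
listing = json.loads(handle(make_request(1, "resources/list")))
reply = json.loads(handle(make_request(2, "resources/read",
                                       {"uri": "kb://faq/returns"})))
print(reply["result"]["contents"][0]["text"])
```

The practical upshot for Gemini: instead of every AI product writing a custom connector per data source, any model speaking MCP can query any MCP server the same way.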
This will effectively allow Gemini models to quickly access the data they need, producing more reliable responses. Notably, OpenAI has also adopted MCP.
And that was just the first day of Google Cloud Next. Day two will likely bring even more announcements, so stay tuned.