
New AI-Driven Tools, Features, and Advances Announced at Google I/O 2025

At Google I/O 2025, Google unveiled a dozen exciting new AI-driven tools, features, and enhancements.

At Google I/O 2025, its annual developer conference, the tech giant announced a series of AI-powered tools and advancements centred around the new Gemini AI models and aimed at strengthening its AI ecosystem.

The highlight of the event was the introduction of the Gemini family of large language models, developed by DeepMind. Described as their most capable and natively multimodal AI to date, Gemini can understand and generate text, images, audio, code, and video. This marks a significant leap beyond text-only models. Initially, Gemini has been embedded into the Bard chatbot and is powering features on Pixel 8 phones. Future plans include integration with Google Search, Ads, Chrome, Workspace apps, and more [1][2].

One of the significant enhancements announced was AI Mode in Google Search for desktop browsers, which builds upon features that were previously mobile-only. Users can now ask Search to explain images, diagrams, or slides directly, and will soon be able to upload lecture notes or papers (PDFs) to ask questions or get summaries. Integration with Google Drive is planned to make file access easier. A new side panel named Canvas was also introduced, designed to help users map out study plans, organise group projects, or plan vacations interactively with AI assistance. Canvas is rolling out first in the US on desktop [3].

Google's strategic AI integrations are a testament to the power of the Gemini models, which underpin much of Google's recent AI strategy by bringing a powerful multimodal AI across the company's ecosystem. CEO Sundar Pichai emphasised that Gemini's capabilities flow directly into multiple Google products, improving user productivity and search relevance. This is part of a broader AI-first vision backed by significant investment in AI research, custom hardware (Tensor Processing Units), and the unification of Google's AI labs (Google Brain and DeepMind) [1][2].

In addition, AI Mode in Labs will incorporate the capabilities of Project Mariner, enabling it to perform tasks on behalf of users such as purchasing tickets, making reservations, and booking local appointments. For Google AI Ultra subscribers, an experimental version of Agent Mode will be introduced in the Gemini app, allowing users to delegate complex planning and tasks with minimal oversight. AI Mode in Labs will provide personalised suggestions based on past searches and allow users to connect other Google apps for tailored responses [1].

More Project Astra live capabilities are coming soon to Gemini Live. The ability to use your camera or share your screen in Gemini Live is being rolled out to iOS users starting today, in addition to Android. Gemini Live in the app will soon be integrated with Google services like Maps, Calendar, Tasks, and Keep for deeper daily assistance [2].

Google is also introducing a new AI film-making tool.

Finally, AI Mode in Labs will introduce Deep Search capabilities this summer, helping users get answers to complex questions quickly and generate expert-level, fully cited reports. A new shopping experience in AI Mode will combine Gemini model capabilities with Google's Shopping Graph, allowing for more efficient product browsing and a "try-on" feature for apparel listings [1].

In summary, Google I/O 2025 featured the unveiling of Gemini AI as a cornerstone model powering multimodal experiences, enhanced AI features for Google Search (especially on desktop), and a strategic push to embed advanced AI consistently across Google applications and services. Gemini, developed by DeepMind and described as Google's most capable and natively multimodal AI to date, can generate text, images, audio, code, and video, and is set to be integrated with Google Search, Ads, Chrome, Workspace apps, and more as part of Google's AI-first vision [1][2][3].
