Google Gemini Expands Functionality with Major Updates Across Web and Android Platforms
Google is actively developing and rolling out new features for its Gemini AI, with updates surfacing across its web application and Android platforms. Recent beta versions of the Google app indicate significant upgrades for Gemini Live, the introduction of an "Experimental Labs" section on the web app, and the development of screen automation capabilities for Android.
Integration into an Android desktop interface and exploration of a "Likeness" feature are also underway, signaling a broad expansion of Gemini's functionality and reach.
Gemini Live Enhancements on Android
The current Gemini Live service operates on the Gemini 2.5 Flash model. However, analysis of the Google app beta (version 17.2) points to upcoming enhancements for the service, accessible through a new "Labs" section within the Gemini app on Android.
Key features identified in development include:
- Live Thinking Mode: This mode is designed to allow Gemini Live more processing time, potentially utilizing Gemini Thinking or Pro models, to generate more detailed responses.
- Live Experimental Features: These features are expected to include multimodal memory, enhanced noise handling, the ability for Gemini to respond based on visual detection, and personalized results derived from integrated Google applications. These capabilities are linked to Gemini 3 Flash and Pro, with visual response detection potentially related to Project Astra.
- UI Control: This functionality would enable the Gemini agent to control phone operations to complete specific tasks.
- Deep Research: Users would be able to assign complex research tasks to Gemini.
These developments suggest Gemini Live is slated to move to Gemini 3 models, while the anticipated Gemini Agent functionality is expected to arrive on Android as part of a broader "Computer Use" framework.
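Google has not detailed how this agentic control would work under the hood, but "computer use" systems generally follow an observe-plan-act loop: the model is shown the current screen, proposes a single action, the client executes it, and the cycle repeats until the task is done. The Kotlin sketch below illustrates only that loop; every name in it (ScreenReader, Planner, ActionExecutor) is a hypothetical placeholder, not an API from the Google app.

```kotlin
// Hypothetical sketch of a "computer use" style agent loop.
// None of these types correspond to real Google app APIs; they only
// illustrate the observe -> plan -> act cycle described above.

sealed interface AgentAction {
    data class Tap(val x: Int, val y: Int) : AgentAction
    data class TypeText(val text: String) : AgentAction
    object Done : AgentAction
}

// The observation could be a screenshot description or an accessibility-tree dump.
interface ScreenReader { fun capture(): String }

// Executes a single proposed action on the device.
interface ActionExecutor { fun execute(action: AgentAction) }

// Wraps the model call that decides the next step for the given task and screen.
interface Planner { fun planNextAction(task: String, screen: String): AgentAction }

fun runUiTask(
    task: String,
    reader: ScreenReader,
    planner: Planner,
    executor: ActionExecutor,
    maxSteps: Int = 20
) {
    repeat(maxSteps) {
        val screen = reader.capture()                        // observe the current UI state
        when (val action = planner.planNextAction(task, screen)) {
            is AgentAction.Done -> return                    // the model reports the task as complete
            else -> executor.execute(action)                 // act, then loop back and observe again
        }
    }
}
```

In a real agent the action space would likely be richer (scrolls, long-presses, app switches), and each step would be subject to the user supervision and interruption described in the screen automation advisories below.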
Gemini Web App Updates Tools Menu with Experimental Labs
The Google Gemini web application has updated its Tools menu to include an "Experimental Labs" section. Previously, the Tools menu on gemini.google.com displayed a single list of up to eight items, with availability varying based on the user's Google AI subscription level.
The updated prompt box dropdown now organizes features into two distinct sections:
- Tools Section: This section encompasses features such as Deep Research, Create videos (available with AI Plus), Create images, Canvas, Guided Learning, and Deep Think (available with AI Ultra).
- Experimental Features Section (Labs): Distinguished by a "Labs" badge, this section includes features under active development like Agent (available with AI Ultra), Dynamic view or Visual layout (available to all users), and Personal Intelligence (available to all paid subscribers).
Additionally, a "Personalize chat when helpful" toggle has been introduced, letting users control whether Gemini draws on their Connected apps for the ongoing conversation. The setting applies only to the current chat and re-enables automatically when a new one is started. So far this update has been observed on the Gemini web app only and has not been reported on mobile clients.
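As a rough illustration of that behavior, the sketch below models a per-conversation setting that defaults back to enabled whenever a new chat begins; the types are hypothetical and not drawn from Gemini's code.

```kotlin
// Hypothetical model of the per-conversation behaviour described above:
// the personalization toggle applies only to the current chat and is
// re-enabled whenever a new chat begins. Not Gemini's actual code.

data class ChatSession(
    val id: String,
    var personalizeWithConnectedApps: Boolean = true     // defaults to on for every new chat
)

class ChatManager {
    private var current: ChatSession? = null

    fun startNewChat(id: String): ChatSession =
        ChatSession(id).also { current = it }             // an earlier opt-out does not carry over

    fun disablePersonalizationForCurrentChat() {
        current?.personalizeWithConnectedApps = false     // affects only the ongoing conversation
    }
}
```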
Android Screen Automation and Privacy Considerations
Google is developing a "screen automation" feature for Gemini on Android devices, identified in the Google app 17.4 beta under the codename "bonobo." This functionality aims to enable Gemini to assist with tasks such as placing orders or booking rides within specific applications. Android 16 QPR3 is reported to be laying the groundwork for this integration.
Google has issued advisories regarding the use of this feature:
- User Responsibility: Users are accountable for actions Gemini performs on their behalf and are advised to supervise its operations. They can interrupt the automation at any point and take manual control.
- Privacy Protocols: Screenshots may be reviewed by trained personnel to improve Google services, provided "Keep Activity" is enabled. Users are cautioned against entering login or payment information into Gemini chats and advised not to use screen automation for emergencies or tasks involving sensitive data.
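Google has not said how "bonobo" hooks into the system, but the standard Android mechanism for reading and acting on another app's UI is an accessibility service. The sketch below uses real platform APIs (AccessibilityService, AccessibilityNodeInfo, dispatchGesture) purely to illustrate that kind of plumbing; the class and its helper methods are hypothetical and are not taken from the Google app.

```kotlin
// Hypothetical illustration only: a minimal accessibility service that can
// click a labelled control or dispatch a raw tap. This is generic Android
// plumbing for screen automation, not Google's "bonobo" implementation.

import android.accessibilityservice.AccessibilityService
import android.accessibilityservice.GestureDescription
import android.graphics.Path
import android.view.accessibility.AccessibilityEvent
import android.view.accessibility.AccessibilityNodeInfo

class ScreenAutomationService : AccessibilityService() {

    // Find a node in the foreground app whose text matches [label] and click it.
    fun clickNodeWithText(label: String): Boolean {
        val root = rootInActiveWindow ?: return false
        val match = root.findAccessibilityNodeInfosByText(label).firstOrNull() ?: return false
        return match.performAction(AccessibilityNodeInfo.ACTION_CLICK)
    }

    // Fall back to a raw tap at screen coordinates when no clickable node is found.
    fun tapAt(x: Float, y: Float) {
        val path = Path().apply { moveTo(x, y) }
        val gesture = GestureDescription.Builder()
            .addStroke(GestureDescription.StrokeDescription(path, 0L, 100L))
            .build()
        dispatchGesture(gesture, null, null)
    }

    override fun onAccessibilityEvent(event: AccessibilityEvent?) { /* observe UI changes here */ }
    override fun onInterrupt() { /* required override; running automation can be cancelled here */ }
}
```

Whatever mechanism Google ultimately uses, the advisories above make clear the user is meant to stay in the loop, with the ability to pause the automation or take over manually at any point.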
Gemini Integration for Android Desktop Interface
New information has emerged concerning Gemini's integration into an Android desktop interface. The latest Google app beta (version 17.5) contains strings detailing aspects of the Gemini experience on desktop Android, following a previous leak that showed a Gemini icon in the status bar.
These strings indicate that users will be able to access Gemini for assistance with tasks such as writing, planning, and brainstorming. Access methods described include selecting a Gemini icon from the top-right corner of the screen or using a keyboard shortcut involving a Google Key and the Spacebar. The appearance of the Gemini icon in the status bar is consistent with earlier interface leaks.
The Google app is expected to power the Gemini experience on this desktop operating system, with an anticipated launch as an overlay, potentially similar to its current implementation on phones or within the Chrome side panel.
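If it does ship as an overlay, the generic Android mechanism is a TYPE_APPLICATION_OVERLAY window added through WindowManager and gated by the "display over other apps" permission. The sketch below is a minimal illustration under that assumption, not the Google app's actual implementation; showAssistantOverlay and its parameters are hypothetical.

```kotlin
// Hypothetical sketch: showing an assistant panel as a floating overlay,
// anchored near the top-right as the leaked strings describe. Generic
// platform plumbing, not the Google app's desktop implementation.

import android.content.Context
import android.graphics.PixelFormat
import android.provider.Settings
import android.view.Gravity
import android.view.View
import android.view.ViewGroup
import android.view.WindowManager

fun showAssistantOverlay(context: Context, panel: View) {
    // Overlay windows require the "display over other apps" permission.
    if (!Settings.canDrawOverlays(context)) return

    val params = WindowManager.LayoutParams(
        ViewGroup.LayoutParams.WRAP_CONTENT,
        ViewGroup.LayoutParams.WRAP_CONTENT,
        WindowManager.LayoutParams.TYPE_APPLICATION_OVERLAY,  // floats above other apps
        WindowManager.LayoutParams.FLAG_NOT_FOCUSABLE,        // don't steal focus from the active app
        PixelFormat.TRANSLUCENT
    ).apply {
        gravity = Gravity.TOP or Gravity.END                  // anchor toward the top-right corner
    }

    context.getSystemService(WindowManager::class.java).addView(panel, params)
}
```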
Other Developments: "Likeness" Feature
The beta also contains references to a feature or integration named "Likeness," codenamed "wasabi." The name is associated with how Android XR uses 3D avatars, currently seen in Google Meet calls, and related strings suggest the feature may become accessible through prompts.