Gemini Can Now ‘See’ Your Screen in Real Time: Exciting or Concerning?
Google is rapidly enhancing Gemini with meaningful new features, one of which is Gemini Live with Video, a capability powered by Project Astra. The feature, now rolling out to select users, allows Gemini to analyze the device’s screen in real time and respond in natural language.
Project Astra, announced last year, is Google’s initiative to develop AI chatbots capable of real-time interaction using a smartphone’s screen or camera. Google confirmed plans to launch Gemini Live with Video last month, and now, early users are reporting that the feature is live on their devices.
Gemini’s “Screen Share with Gemini” Feature
A Reddit user with a Xiaomi device recently shared that they now have access to “Screen Share with Gemini”, which lets Gemini Live view what is on the screen in real time, rather than analyzing a static image or recorded clip. Users can ask Gemini Live questions about what’s displayed and even engage in discussions based on the content.
The feature reportedly works across most apps, including the gallery and the web browser. However, when the user asked Gemini Live to open YouTube, the AI responded that it was limited to chatting and collaboration. The tool is accessed via the new “Screen Share with Gemini” button, located above the “Ask About Screen” button in the Gemini floating sheet.
Let Gemini Live Access Your Camera Feed
Another feature rolling out alongside Screen Share with Gemini is Gemini Live with Video, which lets users share a live camera feed with Gemini. Unlike screen sharing, this feature allows Gemini to analyze what the camera sees, whether from the rear or front camera.
Google demonstrated that this capability can help users discuss and interact with their surroundings in real time. Once available, the feature can be accessed through the Gemini app by launching Gemini Live and tapping the new video button, which also supports a pause function similar to voice-only Gemini Live.
Unlike Google Lens or the “Ask About Screen” tool, Gemini Live with Video enables a more natural, speech-based conversation, rather than relying on a web-based search layout. This makes interactions more fluid and intuitive.
Exciting or Concerning?
One important consideration with Gemini Live’s capabilities is how Google may use the data and media shared through the feature. As with other Gemini multimodal tools, inputs like voice, images, or other interactions might be used to further train and refine Google’s AI models.
On the exciting side, this means that Gemini could become significantly more responsive, intelligent, and personalized over time. By learning from real-world usage, the system can adapt to users’ preferences and needs, ultimately offering more intuitive assistance and innovative features. This kind of continuous improvement is at the heart of cutting-edge AI.
However, on the concerning side, this also raises important questions about data privacy and control. Even if Google promises to uphold the same privacy and security standards applied across its services, users may wonder how exactly their data is used, who can access it, and how long it is stored. Transparency will be key. Users should expect—and demand—clear, accessible explanations of data handling practices.
We anticipate that Google will provide more detailed disclosures as Gemini Live rolls out, particularly around user controls, opt-in mechanisms, and privacy safeguards. Until then, users may find themselves balancing the thrill of smarter AI with a healthy dose of caution.
For now, Gemini Live features appear to be rolling out to a very limited number of users. I checked on my Samsung Galaxy device and found that the feature hasn’t been enabled yet. Additionally, access to these tools requires a Google One AI Premium subscription.
Have you tested any of Gemini’s AI tools? Which features do you find most useful? Share your thoughts in the comments!