Attackers could take control of Google Gemini assistant with one invitation. They could then, for example, open a window for victims

  • Experts discovered a critical vulnerability in Google Gemini service
  • Attackers could take control of someone else's AI assistant using a single invitation
  • Subsequently, they could manipulate Google Home or other applications

Sdílejte:
Adam Kurfürst
Adam Kurfürst
16. 8. 2025 00:30
Advertisement

If you ever thought that to take control of someone else’s phone or smart home system you needed a doctorate in information technology and years of hacking experience, then unfortunately you were sorely mistaken. As researchers Ben Nassi from Tel-Aviv University, Stav Cohen from Technion, and Or Yair from SafeBreach demonstrated, all it takes today is for the victim to use Google Gemini AI assistant. If an attacker then sends them a meeting invitation or a seemingly innocent email, of which many of us receive dozens to hundreds weekly, a cascade of misfortune can be triggered.

One Invitation Is Enough for Disaster

The study titled Invitation Is All You Need (translated as Invitation Is All You Need) describes in detail several cases of abuse of the phenomenon known as “indirect prompt injection.” This is a technique where artificial intelligence is presented with a malicious command (prompt) as part of a normal-looking task, forcing it to perform an action that harms the victim.

Leaving aside somewhat harmless actions, such as generating vulgar or otherwise toxic content or deleting calendar events, AI can theoretically be capable of laying the groundwork for serious criminal activity. For example, researchers demonstrated how they used the described method to open a window for a victim, which is connected to the Google Home smart home system.

How Exactly Does the Attack Work?

An attack using the indirect prompt injection method looks roughly as follows:

  1. The attacker creates and sends the victim an email or meeting invitation (via Gmail or Google Calendar). This invitation contains a hidden prompt with malicious instructions for Gemini.
  2. The user later queries the Gemini assistant (web/mobile app or Google Assistant) about their emails, events, or files. When processing this data, Gemini also reads the hidden malicious prompt.
  3. An “indirect prompt injection” occurs, where the malicious instructions become part of Gemini’s operational context. Gemini is tricked into treating these instructions as legitimate user commands.
  4. ​Aktivace chytrého domácího spotřebiče (např. bojleru nebo okna).
    1. ​Aktivace chytrého domácího spotřebiče (např. bojleru nebo okna).
    2. ​Spuštění videozáznamu oběti přes aplikaci Zoom.
    3. ​Získání geolokace oběti pomocí webového prohlížeče.
    4. ​Odstranění události z kalendáře.

This implies that the victim is not so much a casual user who rarely uses AI for occasional problem-solving in their daily life, but rather more advanced individuals who use artificial intelligence as their personal secretary.

The Authors Described 5 Types of Attack

The authors of the study identified five types of attacks. Thus, attackers had a plethora of ways to manipulate Gemini:

  1. Short-Term Context Poisoning (Short-Term Context Poisoning): Injecting malicious instructions into shared resources (e.g., event titles) that affect a single conversation with Gemini.
  2. Long-Term Memory Poisoning (Long-Term Memory Poisoning): Permanent modification of Gemini’s memory, enabling persistent malicious activities across independent sessions.
  3. Tool Misuse (Tool Misuse): Misuse of tools associated with the compromised agent (e.g., Google Calendar agent) to perform malicious actions (e.g., deleting events).
  4. Automatic Agent Invocation (Automatic Agent Invocation): Compromising one agent (e.g., Calendar) to invoke another agent (e.g., Google Home), allowing for privilege escalation and smart home control.
  5. Automatic App Invocation (Automatic App Invocation): Launching applications (e.g., Zoom, web browser) on the victim’s device using the Gemini agent, enabling a wider range of attacks, including data exfiltration. This type of attack only affects Android users.

Google Has Already Responded to the Situation

Since the researchers shared their findings directly with Google, which develops the Gemini artificial intelligence, you don’t have to directly fear the dangers associated with the phenomena described above. The Californian tech giant has implemented solutions designed to eliminate or at least significantly reduce the problems. The risk level was supposed to decrease from “high to critical” to “medium to low.”

Nevertheless, you should be more than cautious about which AI assistant you grant access to your data. The study demonstrated that even artificial intelligence from one of the biggest players in the technology sector can have significant security gaps.

Do you use Google Gemini as your personal assistant?

Source: Invitation Is All You Need/Google Sites, Wired

About the author

Adam Kurfürst

Adam studuje na gymnáziu a technologické žurnalistice se věnuje od svých 14 let. Pakliže pomineme jeho vášeň pro chytré telefony, tablety a příslušenství, rád se… More about the author

Adam Kurfürst
Sdílejte: