AI Privacy: Recent Cases Underscore Urgent Risks
As AI becomes embedded in everyday services, new incidents highlight how fragile privacy remains in the AI era. From chatbots leaking private conversations to browser assistants raising surveillance concerns and healthcare tools struggling with consent, these cases underline the critical need for stronger privacy safeguards.
Chatbot Data Leaks: Grok Under Scrutiny
Private conversations with xAI’s chatbot Grok were recently exposed online after its “share” button generated public URLs that were automatically indexed by search engines. Once indexed, those exchanges, some containing highly sensitive or troubling content, became searchable by anyone.
The flaw highlights a recurring problem with AI platforms. Features designed for convenience can create privacy nightmares if not implemented with safeguards. Similar indexing issues have affected other chatbots, including OpenAI’s ChatGPT. Developers across the industry need to adopt stronger privacy-by-design principles, but relying solely on developer safeguards has proven insufficient.
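One basic mitigation, sketched below as a hypothetical example rather than a description of how xAI or OpenAI actually implement sharing, is to make shared-conversation pages both unguessable and explicitly non-indexable. The Flask snippet (endpoint names, token scheme, and storage are illustrative assumptions) combines a cryptographically random share token with an X-Robots-Tag: noindex header that tells search engines to stay away.

```python
# Minimal sketch of a privacy-conscious "share conversation" endpoint.
# Flask is used only for illustration; the endpoint names, token scheme,
# and in-memory store are assumptions, not any vendor's actual design.
import secrets
from flask import Flask, abort, jsonify

app = Flask(__name__)
SHARED = {}  # share_token -> conversation text (illustrative in-memory store)

@app.post("/conversations/<conv_id>/share")
def create_share_link(conv_id):
    token = secrets.token_urlsafe(32)          # unguessable, non-enumerable link
    SHARED[token] = f"conversation {conv_id}"  # placeholder for a real lookup
    return jsonify({"share_url": f"/shared/{token}"})

@app.get("/shared/<token>")
def view_shared(token):
    if token not in SHARED:
        abort(404)
    resp = jsonify({"conversation": SHARED[token]})
    # Ask crawlers not to index or follow the shared page.
    resp.headers["X-Robots-Tag"] = "noindex, nofollow"
    return resp
```

Unguessable URLs alone stop helping the moment a link is posted publicly, which is why the noindex header, plus an expiry or revoke mechanism, matters as much as the token itself.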
AI Web Browser Assistants: Convenience vs. Privacy
A new University College London study, presented at the latest USENIX Security Symposium, provided the first large-scale analysis of AI browser assistants. It uncovered widespread tracking, profiling, and data collection practices. Researchers found that popular extensions such as Merlin, ChatGPT for Google, and Microsoft Copilot can transmit entire webpage content, including sensitive form inputs like banking or health data, to their servers. Others shared identifiable user information with platforms like Google Analytics, enabling cross-site tracking and targeted advertising. Some assistants inferred personal traits such as age, gender, and income across browsing sessions.
A growing number of users rely on AI browser assistants for tasks such as summarizing pages and automating browsing. But researchers warn that these assistants operate with unprecedented access to users’ online behavior, often without transparency or consent. In some cases, assistants may even breach privacy regulations such as HIPAA or FERPA. The authors argue that without stronger oversight and privacy-by-design safeguards, AI browser assistants risk becoming a surveillance layer embedded in everyday browsing.
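What a privacy-by-design safeguard could look like in practice is sketched below. This is a generic illustration, not how Merlin, ChatGPT for Google, or Microsoft Copilot behave: before any page content leaves the client, user-entered form values are stripped and identifier-like strings are masked. The patterns and field handling are assumptions for the example.

```python
# Sketch of a client-side safeguard for a page-summarizing assistant:
# strip form inputs and mask obvious identifiers before anything is sent
# to a remote server. Patterns and field names are illustrative assumptions.
import re

SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{13,19}\b"),             # card-like numbers
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),     # US SSN-style identifiers
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),  # email addresses
]

def redact_page_text(page_text: str, form_values: list[str]) -> str:
    """Remove user-entered form values and mask identifier-like strings."""
    redacted = page_text
    for value in form_values:                 # values typed into <input>/<textarea>
        if value.strip():
            redacted = redacted.replace(value, "[REDACTED]")
    for pattern in SENSITIVE_PATTERNS:
        redacted = pattern.sub("[REDACTED]", redacted)
    return redacted

# Example: only the redacted text would be transmitted to the assistant's backend.
page = "Account: 4111111111111111, contact jane@example.com. Balance: $120."
print(redact_page_text(page, form_values=["4111111111111111"]))
```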
Healthcare and Clinical Trials: Personalization at a Cost
In healthcare, AI is also being explored as a way to personalize clinical trial consent forms, making them clearer and more engaging for participants and helping overcome a key barrier to recruitment. Yet experts warn of significant challenges. Beyond privacy and data security concerns, there are risks of AI-generated consent documents containing errors, biases, or even manipulative phrasing that could undermine true informed consent. The sensitive health data required to power these systems also heightens the risk of misuse, while regulators struggle to keep pace with technological change.
AI Privacy: The Path Forward
Despite their differences, these cases reveal a common thread: AI systems that are not designed with privacy at their core are vulnerable to breaches and an erosion of trust, and design safeguards and regulation alone cannot guarantee privacy.
One promising technology is Fully Homomorphic Encryption (FHE), which enables computation on encrypted data without ever exposing the underlying information. By keeping data secure throughout its lifecycle, FHE could mitigate chatbot leaks, reduce surveillance risks, and protect sensitive health records, while still enabling AI innovation. With hardware acceleration (such as 3PU™), FHE offers a way to embed privacy directly into AI infrastructure, safeguarding trust without delaying progress.
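To make the idea concrete, here is a minimal sketch using the open-source TenSEAL library, chosen purely for illustration and unrelated to 3PU™ or any specific deployment: a client encrypts a vector of sensitive values, an untrusted server computes directly on the ciphertext, and only the holder of the secret key can decrypt the result.

```python
# Minimal FHE sketch with the open-source TenSEAL library (pip install tenseal).
# Library choice and parameters are illustrative assumptions, not tied to any
# specific product or hardware accelerator mentioned above.
import tenseal as ts

# Client side: create a CKKS context (holds the secret key) and encrypt data.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40
context.generate_galois_keys()

readings = [98.6, 120.0, 80.0]            # e.g. sensitive health measurements
encrypted = ts.ckks_vector(context, readings)

# Server side: compute on the ciphertext without ever seeing the plaintext.
encrypted_result = encrypted * 2 + [1.0, 1.0, 1.0]

# Client side: only the secret-key holder can decrypt the result.
print(encrypted_result.decrypt())         # approximately [198.2, 241.0, 161.0]
```

In a real deployment the server would receive a context stripped of the secret key, so it can perform the computation but never read the data, which is exactly the property that makes FHE attractive for chatbot logs, browsing data, and health records alike.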