CAMIA privacy attack reveals what AI models memorise

Daws, Ryan. “CAMIA privacy attack reveals what AI models memorise.” AINEWS, September 25, 2025. https://www.artificialintelligence-news.com/news/camia-privacy-attack-reveals-what-ai-models-memorise/

This article introduced me to a recently developed method called CAMIA (Context-Aware Membership Inference Attack), created by researchers from Brave and the National University of Singapore to reveal whether a given piece of data was used to train an AI model. The article highlights concern over the way AI models memorise their training data, since a model can not only store your personal information but also leak sensitive information when prompted. Attacks of this kind are used to probe what information a model has absorbed, whether it is being stored, and whether it might be reused in other AI models. Daws notes that this measure is the first of its kind specifically geared towards exploiting the memorisation behaviour of modern AI models.
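To make the idea concrete, here is a minimal sketch of the classic loss-threshold baseline that membership inference attacks build on: if a model is unusually confident on a candidate text (low loss), that text was plausibly in its training set. This is NOT the CAMIA method itself, which the article describes as context-aware and more sophisticated; all function names and probability values below are hypothetical illustrations.

```python
import math

def avg_loss(token_probs):
    """Average negative log-likelihood the model assigns to a text."""
    return -sum(math.log(p) for p in token_probs) / len(token_probs)

def likely_member(token_probs, threshold=1.0):
    """Flag a sample as probable training data when the model's loss
    on it falls below a chosen threshold (hypothetical value here)."""
    return avg_loss(token_probs) < threshold

# Hypothetical per-token probabilities returned by a queried model:
memorised = [0.9, 0.95, 0.85, 0.92]   # model very confident -> low loss
unseen    = [0.2, 0.10, 0.30, 0.15]   # model uncertain -> high loss

print(likely_member(memorised))  # True
print(likely_member(unseen))     # False
```

A real attack would query an actual model for these probabilities and calibrate the threshold carefully; the sketch only shows why memorisation leaves a measurable statistical footprint.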

This article was interesting because it shows that researchers are actively working to reveal exactly what information is being used and stored by AI models. This is something more people should be aware of, considering that much of our information on the internet could now be at the mercy of these models. The article gives hope that stronger privacy measures will be enacted against AI in the future, and it shows regular people like myself that there are researchers out there working to combat the ever-looming invasiveness of AI.

