Researchers Create Tool To Prevent AI Image Manipulation


Researchers at MIT have developed a tool that can prevent AI image manipulation by scrambling images at the pixel level in ways invisible to the human eye.

Researchers at MIT presented a new tool at the International Conference on Machine Learning aimed at preventing AI image manipulation.

The tool, called PhotoGuard, acts as a protective shield, altering photos at a level unnoticeable to the human eye. Images altered this way and then run through a generative system will come out looking unrealistic or warped, which should prevent them from being used for malicious purposes.


“We present an approach to mitigating the risks of malicious image editing posed by large diffusion models,” said Hadi Salman, a PhD researcher at MIT who contributed to the development of PhotoGuard. “The key idea is to immunize images so as to make them resistant to manipulation by these models. This immunization relies on injection of imperceptible adversarial perturbations designed to disrupt the operation of the targeted diffusion models, forcing them to generate unrealistic images.” 
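The mechanism Salman describes is essentially an adversarial attack run in the image owner's favor: a projected-gradient loop nudges the image's latent representation toward that of a meaningless target while keeping the pixel changes within an imperceptible budget. The following is only a minimal sketch of that idea, assuming a Stable Diffusion-style VAE encoder loaded through Hugging Face's diffusers library; the model id, perturbation budget, step size, and iteration count are illustrative assumptions, not the team's published settings.

```python
# Minimal sketch of the "immunization" idea, assuming the Stable Diffusion VAE
# encoder from Hugging Face's diffusers library. Model id, eps, step size, and
# iteration count are illustrative choices, not PhotoGuard's exact settings.
import torch
from diffusers import AutoencoderKL

device = "cuda" if torch.cuda.is_available() else "cpu"
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").to(device).eval()

def immunize(image: torch.Tensor, eps: float = 8 / 255, step: float = 1 / 255,
             iters: int = 100) -> torch.Tensor:
    """Add an imperceptible perturbation that pushes the image's latent toward a
    meaningless target, so that diffusion-based edits come out degraded."""
    x = image.clone().to(device)                       # pixels in [0, 1], shape (1, 3, H, W)
    # Latent of a flat gray image (all zeros once mapped to the model's [-1, 1] range).
    target = vae.encode(torch.zeros_like(x)).latent_dist.mean.detach()
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        latent = vae.encode(2 * (x + delta) - 1).latent_dist.mean
        loss = torch.nn.functional.mse_loss(latent, target)
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()          # descend on the latent distance
            delta.clamp_(-eps, eps)                    # keep the change imperceptible (L-infinity budget)
            delta.add_(x).clamp_(0, 1).sub_(x)         # keep the perturbed image in a valid pixel range
        delta.grad.zero_()
    return (x + delta).detach()                        # the "immunized" photo
```

The returned image looks identical to the original to a human viewer, but its latent representation no longer corresponds to its visual content, which is what degrades downstream edits.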

Before the launch of DALL-E and other AI image editing tools, experts in the artificial intelligence community feared that these tools could be used by bad actors to cyberbully, disinform and blackmail. 

While OpenAI and other operators have safeguards to avoid this, there have been many instances of generative tools being used to manipulate audio, images, and video. The launch of several open-source foundational models only makes it more likely this will happen in the future, as it is more difficult for operators to prevent the models from being used for malicious purposes. PhotoGuard is one of the first steps taken to reduce the ability of these systems to manipulate images.

“Consider the possibility of fraudulent propagation of fake catastrophic events, like an explosion at a significant landmark. This deception can manipulate market trends and public sentiment, but the risks are not limited to the public sphere. Personal images can be inappropriately altered and used for blackmail, resulting in significant financial implications when executed on a large scale,” said Salman. “Even when the deception is eventually uncovered, the damage — whether reputational, emotional, or financial — has often already happened. This is a reality for victims at all levels, from individuals bullied at school to society-wide manipulation.”

PhotoGuard is another option for photo storage and photo sharing services to consider as a potential privacy feature for users. Other operators, including Meta, Google, and OpenAI, have committed to creating or funding ways to recognize manipulated images or prevent them from being created. One technique under consideration is watermarking, which gives users a way to detect whether content is AI-generated or has been manipulated by a generative AI tool.
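To make the watermarking idea concrete, here is a toy sketch of embedding and later detecting a hidden provenance tag in an image's least-significant bits. This is only an illustration of the concept; the schemes the companies above are building are designed to survive cropping, compression, and re-encoding, and none of the names or values below come from their systems.

```python
# Toy illustration of watermarking: hide a short provenance tag in an image's
# least-significant bits and read it back later. Purely conceptual; real
# watermarking schemes are far more robust than this.
import numpy as np

TAG = "AI-GENERATED"   # hypothetical tag, not a real standard

def embed(pixels: np.ndarray, tag: str = TAG) -> np.ndarray:
    """Write the tag's bits into the least-significant bit of the first pixels."""
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    flat = pixels.copy().ravel()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits   # overwrite the lowest bit
    return flat.reshape(pixels.shape)

def detect(pixels: np.ndarray, tag: str = TAG) -> bool:
    """Check whether the tag's bits are present in the expected positions."""
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    return bool(np.array_equal(pixels.ravel()[: bits.size] & 1, bits))

# Example: a random 8-bit RGB image
img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
marked = embed(img)
print(detect(marked), detect(img))   # True, (almost certainly) False
```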

For now, PhotoGuard has only been tested with Stable Diffusion, the most widely used open-source AI image generator. It is not clear how it will perform with other generative image models, and the team is planning to conduct more tests in the near future.

While most AI research labs and developers are keenly aware of the risks of this technology, there is no consensus on how much damage open-sourcing these foundational models and giving users more editing power will cause in the future. Some have called for a ban, or at least a reduction in the amount of editing functionality, while others have said that people need to get used to AI image manipulation and become better at recognizing it when it happens.

David Curry

About David Curry

David is a technology writer with several years' experience covering all aspects of IoT, from technology to networks to security.
