Researchers Create Tool To Prevent AI Image Manipulation

Researchers at MIT have developed a tool that prevents AI image manipulation by subtly perturbing images at a level imperceptible to the human eye.

Written by David Curry
Aug 28, 2023

Researchers at MIT presented a new tool at the International Conference on Machine Learning aimed at preventing AI image manipulation.

The tool, called PhotoGuard, is a protective shield that alters photos at a level unnoticeable to the human eye. Images altered this way look unrealistic or warped when run through a generative system, which should prevent them from being used for malicious purposes.

“We present an approach to mitigating the risks of malicious image editing posed by large diffusion models,” said Hadi Salman, a PhD researcher at MIT who contributed to the development of PhotoGuard. “The key idea is to immunize images so as to make them resistant to manipulation by these models. This immunization relies on injection of imperceptible adversarial perturbations designed to disrupt the operation of the targeted diffusion models, forcing them to generate unrealistic images.” 
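
The "imperceptible adversarial perturbations" Salman describes amount to a projected-gradient-style attack on the editing model. Below is a minimal sketch of the idea in PyTorch, assuming a differentiable `encoder` (such as the VAE of a latent diffusion model) and some `target_latent` to steer toward; the function names, step sizes, and perturbation budget here are illustrative stand-ins, not the paper's exact settings:

```python
import torch

def immunize(image, encoder, target_latent, eps=0.03, step=0.005, iters=100):
    """PGD-style 'encoder attack' sketch: find a small perturbation delta
    (||delta||_inf <= eps) that drags the image's latent code toward an
    unrelated target latent, so later diffusion-based edits come out warped.
    `encoder` and `target_latent` are hypothetical stand-ins."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        # Distance between the perturbed image's latent and the target latent.
        loss = (encoder(image + delta) - target_latent).pow(2).mean()
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta -= step * grad.sign()                       # signed-gradient step
            delta.clamp_(-eps, eps)                           # stay imperceptible
            delta.copy_((image + delta).clamp(0, 1) - image)  # keep pixels in [0, 1]
    return (image + delta).detach()
```

Bounding the perturbation's infinity-norm by a small epsilon is what keeps the change invisible to humans while still corrupting the model's internal representation; the PhotoGuard paper also describes a stronger variant that differentiates through the diffusion process itself rather than just the encoder.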

Before the launch of DALL-E and other AI image editing tools, experts in the artificial intelligence community feared these tools could be used by bad actors to cyberbully, spread disinformation, and blackmail people.

While OpenAI and other operators have safeguards in place, there have been many instances of generative tools being used to manipulate audio, images, and video. The launch of several open-source foundational models only makes this more likely in the future, as it is harder for operators to prevent those models from being used for malicious purposes. PhotoGuard is one of the first steps taken to reduce the ability of these systems to manipulate images.

“Consider the possibility of fraudulent propagation of fake catastrophic events, like an explosion at a significant landmark. This deception can manipulate market trends and public sentiment, but the risks are not limited to the public sphere. Personal images can be inappropriately altered and used for blackmail, resulting in significant financial implications when executed on a large scale,” said Salman. “Even when the deception is eventually uncovered, the damage — whether reputational, emotional, or financial — has often already happened. This is a reality for victims at all levels, from individuals bullied at school to society-wide manipulation.”

PhotoGuard is another option photo storage and photo sharing services could offer as a privacy feature for users. Other operators, including Meta, Google, and OpenAI, have committed to creating or funding ways to recognize manipulated images or prevent them from being created. Watermarking is another technique under consideration; it gives users a way to detect whether content is AI-generated or has been manipulated by a generative AI tool.
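
Watermarking approaches the problem from the opposite direction: rather than blocking edits, the generator embeds a hidden signal that a detector can check later. As a toy illustration of the principle only (the production schemes these companies are building are far more robust to cropping and recompression), a least-significant-bit watermark can be embedded and verified in a few lines:

```python
import numpy as np

def embed_watermark(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide a 0/1 bit pattern in the least significant bit of each pixel."""
    return (pixels & 0xFE) | bits.astype(np.uint8)

def detect_watermark(pixels: np.ndarray, bits: np.ndarray) -> float:
    """Fraction of embedded bits present: ~1.0 means the watermark survived,
    ~0.5 is chance level for an unmarked image."""
    return float(((pixels & 1) == bits).mean())

# Usage: mark a random "image" with a known pattern, then verify it.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
pattern = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)
marked = embed_watermark(img, pattern)
print(detect_watermark(marked, pattern))  # 1.0
print(detect_watermark(img, pattern))     # ~0.5
```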

For now, PhotoGuard has only been tested with Stable Diffusion, the most widely used open-source AI image generator. It is not clear how it will perform with other image-generation models, and the team plans to conduct more tests in the near future.

While most AI research labs and developers are keenly aware of the risks of this technology, there is no consensus on how much damage open-sourcing these foundational models and giving users more editing power will cause. Some have called for a ban, or at least a reduction in editing functionality, while others have said that people need to get used to AI image manipulation and become better at recognizing it when it happens.

David Curry is a technology writer with several years' experience covering all aspects of IoT, from technology to networks to security.
