
OpenAI to add metadata to AI-generated images to help spot fakes

OpenAI hopes that this will raise awareness and encourage users to look for signs of images being generated using AI.

Following in Meta’s footsteps, OpenAI has announced that it will begin adding metadata markers to images generated by ChatGPT and DALL-E 3.

The invisible marker, embedded in the image's metadata, will allow others to verify the origin of the image. Platforms like Meta's Instagram, Facebook and Threads will be able to read the marker and add a label to the image, letting users know it was generated using an AI tool.
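To make the idea concrete, here is a minimal, illustrative Python sketch of how a platform might scan an image's metadata for a provenance marker. This is not OpenAI's or Meta's actual verification code (OpenAI's credentials follow the C2PA standard and are read with dedicated tools); the filename and search keyword below are assumptions for the example.

```python
# Illustrative only: assumes the marker surfaces in ordinary metadata
# fields (EXIF tags or PNG text chunks). The keyword "openai" and the
# filename "generated.png" are placeholders, not a real specification.
from PIL import Image
from PIL.ExifTags import TAGS

def find_provenance_marker(path: str, keyword: str = "openai") -> list[str]:
    """Return metadata entries whose key or value mentions `keyword`."""
    hits = []
    with Image.open(path) as img:
        # EXIF tags, common in JPEGs
        for tag_id, value in img.getexif().items():
            name = TAGS.get(tag_id, str(tag_id))
            if keyword.lower() in f"{name} {value}".lower():
                hits.append(f"EXIF {name}: {value}")
        # Format-specific metadata; PNG text chunks land in img.info
        for key, value in img.info.items():
            if keyword.lower() in f"{key} {value}".lower():
                hits.append(f"info {key}: {value}")
    return hits

if __name__ == "__main__":
    matches = find_provenance_marker("generated.png")
    print(matches or "no provenance marker found")
```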

This comes as part of a collective fight against misinformation and disinformation, made more urgent by the upcoming U.S. presidential election. The recent spread of lewd AI-generated images of Taylor Swift may also have prompted the company to act by adding indicators that help distinguish fake images from real ones.

However, metadata tracking will only work when users download the AI-generated image directly from the AI tool. If someone generates an image and then screenshots it, the screenshot will not carry the metadata. As Engadget reports, OpenAI acknowledges that this is not a perfect solution but hopes it will raise awareness and encourage users to look for signs that an image was generated using AI.
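The limitation is easy to demonstrate: anything that re-rasterizes the pixels, such as a screenshot, produces a new file with none of the original file's metadata. The sketch below simulates this with Pillow; the filenames are placeholders.

```python
# Sketch of why screenshots defeat metadata labels: copying only the
# pixels into a new file carries over none of the original metadata.
from PIL import Image

with Image.open("generated.png") as original:
    print("original metadata keys:", list(original.info.keys()))
    # Simulate a screenshot: a fresh image that shares pixels, not metadata.
    pixels_only = Image.new(original.mode, original.size)
    pixels_only.putdata(list(original.getdata()))
    pixels_only.save("screenshot_like.png")

with Image.open("screenshot_like.png") as rescan:
    # Typically empty: the provenance marker is gone.
    print("copy metadata keys:", list(rescan.info.keys()))
```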

OpenAI and Meta aren't the only companies working on adding invisible markers to AI-generated content. Google DeepMind has a system called SynthID that can add digital watermarks to both images and audio.

Source: Engadget
