OpenAI Makes It Easier to Determine if an Image Was Generated by DALL-E 3

OpenAI has made it easier to determine whether an image was generated by DALL-E 3 by embedding two watermarks in every created image.

OpenAI recently moved to make images generated with DALL-E 3 easier to identify. The company announced a few days ago that it would start adding two types of watermarks to all images generated by DALL-E 3, in line with the standards set out by the Coalition for Content Provenance and Authenticity (C2PA). These changes have now rolled out and apply to all images generated through the service.

Two Watermarks Integrated in Each Created Image
The first of the two watermarks is invisible and exists only in the image’s metadata; you can inspect an image’s provenance data using the Content Credentials Verify website or a similar tool. The second is a visible CR symbol in the top left corner of the image itself.
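To see what inspecting the metadata watermark looks like in practice, here is a minimal sketch that shells out to c2patool, the open-source C2PA command-line tool from the Content Authenticity Initiative, and checks whether OpenAI is declared in the manifest. It assumes c2patool is installed and on your PATH; the exact shape of the manifest JSON varies across versions, so the OpenAI check below is a simple substring match for illustration only.

```python
import json
import subprocess
import sys


def read_c2pa_manifest(image_path: str) -> dict | None:
    """Dump any C2PA manifest embedded in the image using the
    open-source c2patool CLI. Returns the parsed JSON manifest
    store, or None if the tool reports no Content Credentials."""
    result = subprocess.run(
        ["c2patool", image_path],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        # c2patool exits non-zero when no manifest is found
        # (e.g. because the metadata was stripped).
        return None
    return json.loads(result.stdout)


if __name__ == "__main__":
    manifest = read_c2pa_manifest(sys.argv[1])
    if manifest is None:
        print("No Content Credentials found (metadata may have been stripped).")
    # Field names vary, so a substring scan of the whole manifest
    # is a crude but illustrative way to spot the declared generator.
    elif "OpenAI" in json.dumps(manifest):
        print("Manifest declares an OpenAI generator (likely DALL-E 3).")
    else:
        print("Content Credentials present, but not from OpenAI.")
```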
This is a positive step that moves DALL-E 3 in the right direction and makes it easier to identify content generated with artificial intelligence. Other AI systems embed similar provenance information in metadata, and Google has implemented its own watermark, SynthID, to help identify images created with its image generation model, recently extending it to Google Bard.
For now, only images carry these watermarks; videos and text are still without them. OpenAI states that adding the watermark to metadata should not introduce latency or affect image generation quality, although file sizes will increase slightly.
If this is the first time you are hearing about it, the C2PA is a coalition of companies including Microsoft, Sony, and Adobe that has consistently advocated for Content Credentials watermarks as a way to determine whether an image was generated by an AI system. In fact, the Content Credentials symbol that OpenAI adds to DALL-E 3 images was created by Adobe.
While the watermarks can be helpful, they are not a foolproof defense against misinformation spreading through AI-generated content: metadata can be stripped by something as simple as taking a screenshot, and the visible watermark can easily be cropped out. Still, OpenAI believes that adopting these methods, and encouraging users to recognize that such “signals are crucial for increasing trust in digital information,” will ultimately reduce misuse of these AI systems.
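To illustrate how fragile the metadata component is, the short sketch below uses the Pillow imaging library to re-encode an image’s pixels, which is effectively what a screenshot does. The file names are hypothetical; the point is that a plain re-save writes a fresh file without the original C2PA metadata blocks, so the Content Credentials are silently lost.

```python
from PIL import Image  # pip install pillow

# Hypothetical file names for illustration.
SRC = "dalle3_original.png"
DST = "dalle3_reencoded.png"

# Re-encoding the pixels (the in-memory equivalent of a screenshot)
# produces a new file that no longer contains the C2PA metadata,
# even though the visible pixels are identical.
img = Image.open(SRC)
img.save(DST)  # Pillow does not carry C2PA data across a re-save

# Running a verifier such as c2patool on DST would now report no
# Content Credentials, while SRC still verifies as a DALL-E 3 image.
```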