The most recent executive order from the Biden White House calls for the development of generative-AI tooling that uses digital watermarks to authenticate content originating from the government. The order stands to influence content creators amid the spread of generative-AI misinformation.
Analog watermarking techniques were first employed in Italy in 1282 and have since been used by governments to protect against counterfeit currency and by book publishers to encode hidden messages and authenticate books. More recently, watermarking has been used to authenticate driver's licenses and other sensitive documents. Modern digital watermarking carries this process forward by discreetly embedding additional information into content such as images, audio, and video: the watermark is easily recognized by a machine but is invisible to human viewers (a toy version of this kind of embedding is sketched at the end of this section).

However, there is nothing to protect copyrighted works from generative AI models, aside from AI companies' unverifiable, unenforceable word, and even the White House's executive order lacks technical specificity. With Congress unlikely to take any meaningful action, industry alternatives such as Content Credentials are being developed. Content Credentials attaches additional provenance information to an image or video, embedding it in the file's metadata; it cannot be easily removed and gives creators control over their content. Companies such as Sony and Adobe are incorporating the system into their digital product suites.
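As a loose illustration of how a machine-readable but human-invisible watermark can be embedded, here is a minimal sketch using least-significant-bit (LSB) embedding with Pillow and NumPy. It is an assumption-laden toy, not the scheme used by any vendor or contemplated by the executive order: the file names and the message are hypothetical, and production watermarks are designed to survive compression and editing, which this one is not.

```python
# Toy invisible watermark: hide a short message in the least-significant bits
# of an image's red channel. Illustration only; not a robust or standard scheme.
import numpy as np
from PIL import Image


def embed_watermark(image_path: str, message: str, out_path: str) -> None:
    """Hide `message` in the LSBs of the red channel, one bit per pixel."""
    img = np.array(Image.open(image_path).convert("RGB"))
    bits = "".join(f"{byte:08b}" for byte in message.encode("utf-8"))
    bits += "00000000"  # null terminator so the extractor knows where to stop

    red = img[:, :, 0].flatten()
    if len(bits) > red.size:
        raise ValueError("message too long for this image")

    for i, bit in enumerate(bits):
        red[i] = (red[i] & 0xFE) | int(bit)  # overwrite the lowest bit

    img[:, :, 0] = red.reshape(img[:, :, 0].shape)
    Image.fromarray(img).save(out_path, format="PNG")  # lossless, keeps the bits


def extract_watermark(image_path: str) -> str:
    """Read LSBs eight at a time until the null terminator is reached."""
    red = np.array(Image.open(image_path).convert("RGB"))[:, :, 0].flatten()
    out = bytearray()
    for i in range(0, red.size - 7, 8):
        byte = 0
        for bit in red[i:i + 8]:
            byte = (byte << 1) | (int(bit) & 1)
        if byte == 0:
            break
        out.append(byte)
    return out.decode("utf-8", errors="replace")


if __name__ == "__main__":
    # Hypothetical file names and payload, purely for demonstration.
    embed_watermark("photo.png", "generated-by: example-model", "photo_marked.png")
    print(extract_watermark("photo_marked.png"))  # -> "generated-by: example-model"
```

Because the payload lives in the lowest bit of each pixel, saving the result in a lossy format such as JPEG would destroy it; this fragility is one reason real content-authentication efforts pair watermarks with signed, hard-to-strip metadata of the kind Content Credentials provides.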