Ever wondered whether that impressive photo on the web is real or AI-generated? Or worried about the safety of your business data in the AI era?
Google DeepMind has launched SynthID to address these uncertainties. The watermarking tool labels AI-generated images, offering a new layer of protection against the misrepresentation and copyright infringement that AI can cause.
The rapid rise of generative AI has led to numerous ethical and legal dilemmas, ranging from deepfakes to non-consensual pornography and copyright violations. Watermarking has emerged as a leading approach to combating these issues, and SynthID addresses it directly: the watermark is designed to remain robust even after the image has been edited.
This is not just a concern for smaller players; the major technology companies are involved as well. Last July, companies including OpenAI, Google, and Meta pledged at the White House to develop watermarking technologies to combat misinformation. Google DeepMind is now the first to publicly launch such a tool.
SynthID uses two neural networks: one embeds an almost invisible watermark into the image, and the other detects it. This design increases resistance to editing and manipulation, although it is not completely infallible, according to Pushmeet Kohli of Google DeepMind.
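Google has not published how SynthID's networks work, so the following is only a minimal illustrative sketch of the general idea behind invisible watermarking: an embedder adds a faint, pseudorandom perturbation to pixel values, and a detector checks for that perturbation by correlating the image with the known pattern. All function names here are hypothetical, and a simple fixed pattern stands in for what SynthID's learned networks would produce.

```python
import random

def make_pattern(seed, size):
    # Pseudorandom +/-1 pattern; a stand-in for the signal a trained
    # embedder network might learn (illustrative only, not SynthID's method).
    rng = random.Random(seed)
    return [rng.choice([-1, 1]) for _ in range(size)]

def embed(pixels, pattern, strength=2):
    # Add a faint perturbation to each pixel, clamped to the 0..255 range.
    # A small strength keeps the watermark nearly invisible to the eye.
    return [max(0, min(255, p + strength * w)) for p, w in zip(pixels, pattern)]

def detect(pixels, pattern):
    # Correlate the image with the known pattern; a watermarked image
    # scores measurably higher than an unmarked one.
    return sum(p * w for p, w in zip(pixels, pattern)) / len(pixels)

# Toy "image": a flat list of grayscale pixel values.
random.seed(0)
image = [random.randrange(256) for _ in range(10_000)]
pattern = make_pattern(seed=42, size=len(image))

marked = embed(image, pattern)
print(detect(marked, pattern) > detect(image, pattern))  # True
```

A real system like SynthID replaces the fixed pattern with learned networks, which is what lets the watermark survive cropping, compression, and recoloring far better than this naive scheme would.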
The tool is still in an experimental phase. Google plans to evaluate its usage and performance before making it more widely available. However, because it is proprietary, its usefulness remains limited, as AI researcher Sasha Luccioni has noted.
For businesses and organizations, this is a significant development. SynthID could play a key role in securing digital assets and countering the darker sides of AI. While it is too early to assess its full impact, it is certainly a meaningful step towards a safer digital future.

