Inserting an undetectable 1-bit watermark into a multi-megapixel image is not particularly difficult.
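As a toy illustration: classic additive spread-spectrum, not SynthID's actual scheme, with a made-up key and strength:

    import numpy as np

    def embed_bit(img, bit, key=1234, strength=1.0):
        # Keyed +/-1 pseudorandom pattern; its sign carries the one bit.
        # At ~1 gray level of amplitude the change is invisible.
        rng = np.random.default_rng(key)
        pattern = rng.choice([-1.0, 1.0], size=img.shape)
        out = img.astype(np.float64) + (1 if bit else -1) * strength * pattern
        return np.clip(np.rint(out), 0, 255).astype(np.uint8)

    def detect_bit(img, key=1234):
        # Correlate against the same keyed pattern. Image content sums
        # like a random walk (~sqrt(N)) while the embedded pattern sums
        # like N, so over millions of pixels the sign of the correlation
        # recovers the bit.
        rng = np.random.default_rng(key)
        pattern = rng.choice([-1.0, 1.0], size=img.shape)
        x = img.astype(np.float64)
        return np.sum((x - x.mean()) * pattern) > 0

At one gray level per pixel over a few megapixels, the detection statistic sits tens of standard deviations above chance. Surviving crops, rescales, and recompression is the hard part, not imperceptibility.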
If you assume competence from Google, they probably have two different watermarks. A sloppy one they offer an online oracle for and one they keep in reserve for themselves (and law enforcement requests).
Also given that it's Google we are dealing with here, they probably save every single image generated (or at least its neural hash) and tie it to your account in their database.
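For anyone unfamiliar, a content hash is tiny to store and survives resizing and recompression. A toy stand-in (an average hash rather than an actual learned embedding; names are mine):

    import numpy as np
    from PIL import Image

    def average_hash(path, size=8):
        # Downscale to 8x8 grayscale, threshold each pixel against the
        # mean, pack the 64 bits into an int. Near-duplicate images land
        # only a few bits apart.
        px = np.asarray(Image.open(path).convert("L").resize((size, size)))
        bits = (px > px.mean()).flatten()
        return sum(int(b) << i for i, b in enumerate(bits))

    def distance(a, b):
        # Hamming distance in bits; small means "probably the same image".
        return bin(a ^ b).count("1")

One 64-bit value per generated image is nothing at their scale.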
The dual-watermark theory makes a lot of sense as defensive engineering. You always assume your outer layer will be broken, so you keep a second layer that isn't publicly testable. Same as defence in depth anywhere else. I'm curious - given that new models are being built constantly and they're naturally non-deterministic, do you think it's possible for end users to prove that a second watermark exists?
> I'm curious - given that new models are being built constantly and they're naturally non-deterministic, do you think it's possible for end users to prove that a second watermark exists?
How is the model relevant? The models are proprietary and you never see any of their outputs that haven't been watermarked.
Seems like a very low-quality AI-assisted research repo, and it doesn't even properly test against Google's own SynthID detector. It's not hard at all (with some LLM assistance, for example) to reverse-engineer network requests to be able to do SynthID detection without a browser instance or Gemini access, and then you'd have a ground truth.
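The shape of it looks something like this; the endpoint, field names, and auth below are placeholders I made up, to be replaced with whatever the browser's network tab actually shows:

    import requests

    # Placeholder URL: capture the real request in devtools while the web
    # detector runs, then replay it without a browser.
    DETECT_URL = "https://example.invalid/synthid/detect"

    def check_synthid(image_path, cookie):
        with open(image_path, "rb") as f:
            resp = requests.post(
                DETECT_URL,
                files={"image": f},
                cookies={"session": cookie},  # auth lifted from the browser
                timeout=30,
            )
        resp.raise_for_status()
        return resp.json()  # whatever verdict field the response carries

Run the tool's outputs through that and you'd get a real pass/fail rate instead of self-grading.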
I read a lot of comments on HN that say something is not hard, yet don't provide a POC of their own or link to research they have knowledge of.
I also read a lot of comments on HN that start by attacking the source of the information, such as saying it was AI assisted, instead of the actual merits of the work.
The HN community is becoming curmudgeonly and using AI tooling as the justification.
That's how life generally works. If your friend tells you, "I went to that new movie yesterday. It was very boring, I fell asleep midway," then you either take their advice or you don't. You don't ask your friend if they've ever made a movie of their own, and you don't ask for third-party research on the movie either.
As for AI specifically: life is already too short to read all the interesting pages out there, and AI just makes it so much worse.
- AI is verbose in general, so you spend a lot of time reading without getting many new facts out of it.
- Heavy AI use often means the author has little idea about the topic themselves, and thus cannot engage in the comments. Since discussions with authors are often the most interesting part of HN, that makes the submission less interesting.
And yes, it is possible to use AI assistance to create a nice, concise report on a topic you can happily talk about, but then it would not be labeled as "AI".
The README says not to use these tools to misrepresent AI-generated content as human-created. But the project is a watermark removal tool with a pip-installable CLI and strength settings named "aggressive" and "maximum." Calling this "research" while shipping turnkey watermark stripping is trying to have it both ways, in a way that's uncomfortable to read.
The README itself reads like unedited AI output with several layers of history baked in.
- V1 and V2 appear in tables and diagrams but are never explained. V3 gets a pipeline diagram that hand-waves its fallback path.
- The same information is restated three times across Overview, Architecture, and Technical Deep Dive. ~1600 words padded to feel like a paper without the rigor.
- Five badges, four of them made up, for a project with 88 test images, no CI, and no test suite. "Detection Rate: 90%" has no methodology behind it. "License: Research" links nowhere and isn't a license.
- No before/after images, anywhere, for a project whose core claim is imperceptible modification.
- Code examples use two different import styles. One will throw an ImportError.
- No versioning. If Google changes SynthID tomorrow, nothing tells you the codebook is stale.
The underlying observations about resolution-dependent carriers and cross-image phase consistency are interesting. The packaging undermines them.
Okay... this tests its own ability to remove the watermark against its own detector. It doesn't test against Gemini's SynthID app. So it does nothing...
I'm confident I saw the watermark in use today in Nano Banana: I copied the image from Chrome into Slack, and the resulting upload was a black square with a red dot, not the image I had generated.
OK, I get that eventually someone was going to do this, but why would we want to purposely remove one of the only ways of detecting whether an image is AI-generated?
I don't understand all the handwringing. If it's this easy to remove SynthID from an AI-generated image then it wasn't a good solution in the first place.
> We're actively collecting pure black and pure white images generated by Nano Banana Pro to improve multi-resolution watermark extraction.
Oh hey, neat. I mentioned this specific method of extracting SynthID a while back.[1]
Glad to see someone take it up.
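For anyone wondering why flat images help: if the watermark is an additive pattern that stays phase-consistent across images at the same resolution (which is what the repo itself observes), averaging many flat generations cancels the content and leaves the residue. A minimal sketch of that averaging, assuming exactly that:

    import numpy as np
    from PIL import Image

    def watermark_residual(paths):
        # All inputs must share one resolution: the claimed carriers are
        # resolution-dependent, so mixed sizes would smear the pattern.
        acc = None
        for p in paths:
            px = np.asarray(Image.open(p).convert("L"), dtype=np.float64)
            px -= np.median(px)  # strip the flat base level (0 or 255)
            acc = px if acc is None else acc + px
        # Content is constant within each image, so what survives the
        # average is any consistent additive pattern, or just noise if
        # the watermark is keyed per image.
        return acc / len(paths)

The residue is on the order of a gray level, so scale it up heavily before eyeballing it.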
[1]: https://news.ycombinator.com/item?id=47169146#47169767