falindrith:

sumikatt:

the darling Glaze “anti-ai” watermarking system is a grift that stole code/violated the GPL license (which the creator admits to). It uses the exact same technology as Stable Diffusion. It’s not going to protect you from LoRAs (smaller models that imitate a certain style, character, or concept).

An invisible watermark is never going to work. “De-glazing” training images is as easy as running them through a denoising upscaler. If someone really wanted to make a LoRA of your art, Glaze and Nightshade are not going to stop them.

If you really want to protect your art from being used as positive training data, use a proper, obnoxious watermark, with your username/website, with “do not use” plastered everywhere. Then, at the very least, it’ll be used as a negative training image instead (telling the model “don’t imitate this”).

There is never a guarantee your art hasn’t been scraped and used to train a model. Training sets aren’t commonly public. Once you share your art online, you don’t know every person who has seen it, saved it, or drawn inspiration from it. Similarly, you can’t name every influence and inspiration that has affected your art.

I suggest that anti-AI art people get used to the fact that sharing art means letting go of the fear of being copied. Nothing is truly original. Artists have always copied each other, and now programmers copy artists.

Capitalists, meanwhile, are excited that they can pay less for “less labor”. Automation and technology are an excuse to undermine and cheapen human labor: if you work in the entertainment industry, it’s adopt AI and quicken your workflow, or lose your job because you’re less productive. This is not a new phenomenon.

You should be mad at management. You should unionize and demand that your labor is compensated fairly.

Some things in here are good points (larger watermarks, for one). However, it’s also full of weird, not-really-true info about the Glaze project itself:

“glaze is a grift” – Glaze is an academic research project released for free. The only people being grifted here are the grad students (that’s a different post entirely). The paper itself won awards at a peer-reviewed conference.

(USENIX Best Papers, https://www.usenix.org/conferences/best-papers, Retrieved on 2/28/24)

“glaze violated gpl/stole code” – True to the letter; however, it’s extremely easy to show that this was rapidly resolved by the researchers. Three days! A complete rewrite!

(Release Notes, https://glaze.cs.uchicago.edu/release.html, Retrieved on 2/28/24)

“glaze uses the same tech as stable diffusion” – yes, because it was designed as an attack against a class of models called diffusion models, of which Stable Diffusion is the most well-known open-source implementation. It uses the same encoders to develop image perturbations that interfere with the latent embedding of the image, in a way that is honestly pretty cool:

(Shawn Shan et al., “Glaze: Protecting Artists from Style Mimicry by Text-to-Image Models,” in 32nd USENIX Security Symposium (USENIX Security 23) (Anaheim, CA: USENIX Association, 2023), 2187–2204, https://www.usenix.org/conference/usenixsecurity23/presentation/shan, p. 7)

To understand the above, you need to know that diffusion models represent what they’re generating in a “feature space” (numbers). The authors noticed that style mimicry could be combated if you knew which numbers in that feature space affect an artist’s style. They then did something pretty clever: they computed what your art would look like with a public-domain style applied to it, and then perturbed your original image so that, in feature space, it looks like that public-domain decoy. This is why there are artifacts in a glazed image: it’s actually changing the image data so it looks different when the machine runs its encoder. The researchers’ choice to use Stable Diffusion (it’s cited, [67] in section 5.2, step 2) to run the style transfer should then make intuitive sense: if Mr. AI uses the same encoder the researchers did to fine-tune his model, their modified image will clog up his machinery just as shown in the paper.
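If the feature-space trick is hard to picture, here’s a minimal sketch of the general “cloaking” idea, not the authors’ actual code: `encode` stands in for a diffusion model’s image encoder and `decoy` for your image with a public-domain style transferred onto it (both are my placeholders), and the real Glaze pipeline also enforces a perceptual (LPIPS) budget and other details described in the paper.

```python
import torch

# Sketch only: `encode` and `decoy` are hypothetical stand-ins, and the
# real Glaze objective constrains perceptual distance rather than using a
# simple per-pixel clamp.

def cloak(image, decoy, encode, steps=200, lr=0.01, budget=0.05):
    """Nudge `image` so the encoder maps it near the decoy's features."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    with torch.no_grad():
        target_features = encode(decoy)  # where we want to land in feature space
    for _ in range(steps):
        opt.zero_grad()
        features = encode((image + delta).clamp(0, 1))
        loss = torch.nn.functional.mse_loss(features, target_features)
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-budget, budget)  # keep the change small to the human eye
    return (image + delta).clamp(0, 1).detach()
```

The optimizer pushes the pixels until the encoder “sees” the decoy style, while the clamp keeps the perturbation small enough that a human still sees the original.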

“de-glazing images is as easy as upscaling them” – no it’s not, lmao, read the paper. This is directly addressed:

(ibid, p. 13)

The overall point about this not being a perfect defense is actually something I agree with. Glaze is so narrow that it only covers fine-tuning (e.g., DreamBooth), so it wasn’t really a global defense to begin with (Nightshade does better, but not perfectly, in that regard; read their paper, it’s cool). However, the actual claim in this post that you can “just upscale it” is easily proven false.

As an aside, Glaze can be de-glazed pretty well, but it is not a simple process. There is even a paper and open-source code that does this (and it, too, is pretty cool): https://github.com/paperwave/Impress. Just to show that this is also a published paper, here’s the citation:

Bochuan Cao, Changjiang Li, Ting Wang, Jinyuan Jia, Bo Li, and Jinghui Chen, “IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Generative AI,” in 37th Conference on Neural Information Processing Systems (NeurIPS 2023), New Orleans, LA, USA, 2023, https://arxiv.org/abs/2310.19248.
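For the curious, my rough reading of the idea: a clean image survives a round trip through the diffusion model’s autoencoder nearly unchanged, while a cloaked one doesn’t, so you can optimize the perturbation away by enforcing that consistency. A hedged sketch, with `vae_encode`, `vae_decode`, and `lpips_distance` as hypothetical stand-ins (the paper has the actual objective and hyperparameters):

```python
import torch

# Sketch of the consistency idea only, not the paper's exact procedure.
# `vae_encode`/`vae_decode` stand in for a latent diffusion model's
# autoencoder; `lpips_distance` stands in for a perceptual distance.

def purify(glazed, vae_encode, vae_decode, lpips_distance,
           steps=300, lr=0.01, weight=0.1):
    """Optimize a copy of the glazed image toward autoencoder consistency."""
    purified = glazed.clone().requires_grad_(True)
    opt = torch.optim.Adam([purified], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        roundtrip = vae_decode(vae_encode(purified))  # encode/decode round trip
        loss = (torch.nn.functional.mse_loss(roundtrip, purified)
                + weight * lpips_distance(purified, glazed))  # stay close to input
        loss.backward()
        opt.step()
    return purified.clamp(0, 1).detach()
```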

Way too much effort to put into this post, but like, fr, cite your got dam sources. It’s so easy (and free!) to do.

(and use a big watermark/low-quality images when posting online; that’s also free and easy)