He understood the assignment
channeling the last of my hopium into this hrothgar lady before fanfest
Pride Month is upon us again and so it is time to repost my little guy, Hue! I’m wishing everyone a safe, supportive, positive, and enlightening Pride, whether you’re all the way “out” or not!
✦ ah the deep desire of having sharp teeth ✦
me when i fucking boop you
> walk into a pizza place
> there's no visible menu
> "how do you get the menu?"
> "you have to scan the QR code"
> "is that the only way?"
> "yes"
> walk out of the pizza place
I love when you see stray cats experience joy and comfort the likes of which they've never even dreamed and they literally don't know how to respond to it yet. it's like they're just mashing every response button wildly, trying to figure out how to express this overwhelming feeling: PURR? KNEAD? BITE? SCRATCH? WRITHE? SNARL? MEOW? RUB?? ???
Gosh, that's a healthy motor, lmao. Perfect little baby. :')
Me: In November, either Trump or Biden will win. There is no third outcome, so we should vote accordingly for the option that will mitigate the most harm.
The people in my notes: Oh, so you think direct action is bad? You think voting is literally the only thing we can do? You hate activism? You love Biden and endorse everything he does and think nobody should criticize him ever? You think voting will just magically solve all our problems? You think protesting is wrong? You love the status quo and think everything is fine?
I'm still occasionally getting people tagging me in that post to get mad at me for agreeing with you; it's so funny (depressing)
I keep seeing people saying things like "Don't @ me with your weak-ass harm reduction argument/lesser of two evils argument" like bestie, literally, what is the alternative? You have a schedule for the glorious revolution I don't know about? Have you worked out how to have a glorious revolution that won't kill a bunch of people? 'Cause last time I checked..... (╯°□°)╯︵ ┻━┻
I know living in Oregon and having vote by mail, I'm pretty spoiled with how little time it takes me to vote. I know there are lots of areas that make it hard, particularly if you're working class and can't get the day off, but voting only takes a few days a year at most; the people who vote have plenty of time to do other stuff as well.
If you want to do a revolution against the current power structure you need to pay attention to how that power structure works, and what needs people have that it isn't fulfilling. So it's not like ignoring current politics saves you time to work on other stuff either. You're making the other work you are doing harder through willful ignorance and not using all the tools available to you.
I have also seen people on other social media sites implying that the only people calling voting “harm reduction” are people who live in blue states or states that are currently leaning heavily blue (aka “people who have it ‘good’”)
My sibling in internet, I live in fucking Alabama and cannot name a currently elected Democrat anywhere in this state off the top of my head—I would have to Google for it—and I am seriously begging people to vote for Biden because another Trump term with Things As They Are is not something anyone in the world should be subjected to.
the darling Glaze "anti-ai" watermarking system is a grift that stole code/violated the GPL (which the creator admits to). It uses the same exact technology as Stable Diffusion. It's not going to protect you from LoRAs (smaller models that imitate a certain style, character, or concept)
An invisible watermark is never going to work. “De-glazing” training images is as easy as running it through a denoising upscaler. If someone really wanted to make a LORA of your art, Glaze and Nightshade are not going to stop them.
If you really want to protect your art from being used as positive training data, use a proper, obnoxious watermark, with your username/website, with “do not use” plastered everywhere. Then, at the very least, it’ll be used as a negative training image instead (telling the model “don’t imitate this”).
There is never a guarantee your art hasn’t been scraped and used to train a model. Training sets aren’t commonly public. Once you share your art online, you don’t know every person who has seen it, saved it, or drawn inspiration from it. Similarly, you can’t name every influence and inspiration that has affected your art.
I suggest that anti-AI art people get used to the fact that sharing art means letting go of the fear of being copied. Nothing is truly original. Artists have always copied each other, and now programmers copy artists.
Capitalists, meanwhile, are excited that they can pay less for "less labor". Automation and technology are an excuse to undermine and cheapen human labor: if you work in the entertainment industry, it's adopt AI and quicken your workflow, or lose your job because you're less productive. This is not a new phenomenon.
You should be mad at management. You should unionize and demand that your labor be compensated fairly.
Some things in here are good points (larger watermarks, for one). However, it's also full of weird, not-really-true info about the Glaze project itself:
"glaze is a grift" - Glaze is an academic research project released for free. The only people being grifted here are the grad students (that's a different post entirely). The paper itself won awards at a peer-reviewed conference.
(USENIX Best Papers, https://www.usenix.org/conferences/best-papers, Retrieved on 2/28/24)
"glaze violated gpl/stole code" - True to the letter; however, it's extremely easy to show that this was rapidly resolved by the researchers. Three days! Complete rewrite!
(Release Notes, https://glaze.cs.uchicago.edu/release.html, Retrieved on 2/28/24)
"glaze uses the same tech as stable diffusion" - Yes, because it was designed as an attack against a class of models called diffusion models, of which Stable Diffusion is the best-known open-source implementation. It uses the same encoders to develop image perturbations that interfere with the latent embedding of the image, in a way that is honestly pretty cool:
(Shawn Shan et al., “Glaze: Protecting Artists from Style Mimicry by Text-to-Image Models,” in 32nd USENIX Security Symposium (USENIX Security 23) (Anaheim, CA: USENIX Association, 2023), 2187–2204, https://www.usenix.org/conference/usenixsecurity23/presentation/shan, p. 7)
To understand the above, you need to know that diffusion models represent what they're generating in a "feature space" (as numbers). The authors noticed that style mimicry could be combated if you knew which numbers in that feature space affect artist style. They then did something pretty clever: they computed what an image would look like with a public-domain style applied to it, and then perturbed the original image so that it lands near that public-domain style in the feature space. This is why there are artifacts in a glazed image; it's actually changing the image data so it looks different when the machine runs its encoder.

The researchers' choice to use Stable Diffusion (it's cited, [67] in section 5.2, step 2) to run the style transfer should then make intuitive sense: if mr. AI uses the same encoder the researchers did to fine-tune his model, their modified image will clog up his machinery just as shown in the paper.
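To get a feel for the mechanism, here's a toy sketch of the cloaking idea. This is NOT the actual Glaze implementation (the real system uses Stable Diffusion's learned encoder and a perceptual-similarity constraint); the "encoder" here is just a made-up random linear map, which is enough to show the optimization: nudge the image so its feature-space embedding drifts toward a target style's embedding, while keeping the pixel-space change under a small budget.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a diffusion model's image encoder: a fixed random linear
# map from "pixels" (64 values) to a 16-dim feature space. Scaled so
# plain gradient descent is stable. (Glaze uses a real deep encoder.)
W = rng.normal(size=(16, 64)) / np.sqrt(64)

def encode(x):
    return W @ x

original = rng.normal(size=64)      # the artist's image, flattened
target_style = rng.normal(size=64)  # stand-in for a style-transferred copy

def cloak(x, target, budget=0.5, steps=200, lr=0.05):
    """Find a small perturbation delta so that encode(x + delta)
    approaches encode(target), while ||delta|| stays under budget."""
    delta = np.zeros_like(x)
    goal = encode(target)
    for _ in range(steps):
        # gradient of ||encode(x + delta) - goal||^2 w.r.t. delta
        grad = 2 * W.T @ (encode(x + delta) - goal)
        delta -= lr * grad
        # project back onto the perturbation budget, which is what keeps
        # the cloaked image visually close to the original
        norm = np.linalg.norm(delta)
        if norm > budget:
            delta *= budget / norm
    return delta

delta = cloak(original, target_style)
before = np.linalg.norm(encode(original) - encode(target_style))
after = np.linalg.norm(encode(original + delta) - encode(target_style))
# the cloaked image sits closer to the target style in feature space,
# even though the pixel-space change is bounded by the budget
assert after < before
```

The point of the sketch: the defense only "works" against models that see the image through a compatible encoder, which is exactly why Glaze builds its perturbations with the same encoder family the mimicry tools use.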
"de-glazing images is as easy as upscaling it" - No, it's not, lmao. Read the paper; this is directly addressed:
(ibid, p. 13)
The overall point of this not being a perfect defense is actually something I agree with. Glaze is so narrow that it only covers fine-tuning (e.g. Dreambooth), so it wasn't really a global defense to begin with (Nightshade does better, but not perfectly, in that regard; read their paper, it's cool). However, this post's actual claim that you can "just upscale it" is easily proven false.
As an aside, Glaze can be de-glazed pretty well, but it is not a simple process. There is even a published paper and open-source code that does this (and it, too, is pretty cool): https://github.com/paperwave/Impress. Here's the citation:
Bochuan Cao, Changjiang Li, Ting Wang, Jinyuan Jia, Bo Li, and Jinghui Chen, "IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Generative AI," in The 37th Conference on Neural Information Processing Systems (NeurIPS 2023), New Orleans, LA, USA, https://arxiv.org/abs/2310.19248.
way too much effort to put into this post but like fr, cite your got dam sources, it's so easy (and free!) to do.
(and use a big watermark/low quality images when posting online that's also free and easy)