So here's the thing about AI art, and why it seems to be connected to a bunch of unethical scumbags despite being an ethically neutral technology on its own:
The problem is competition. More generally, the problem is capitalism.
So the kind of AI art we're seeing these days is based on something called "deep learning", a type of machine learning built on neural networks. Exactly how they work isn't important here, but one aspect of them is: they have to be trained.
The way it works is that if you want your AI to generate X, you have to train it on a lot of X. The more, the better: the more examples of something it has seen, the better it gets at generating it. Too small a training dataset, and it will do a bad job.
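To make that data hunger concrete, here's a tiny, hand-wavy sketch. It uses scikit-learn, an arbitrary toy model, and made-up numbers; it's nothing like a real image generator, just an illustration of how performance tends to track training-set size:

```python
# Toy illustration (not any real project's code): how training-set size
# affects how well a small neural network learns to reproduce a pattern.
# The model, pattern, and sample sizes are arbitrary stand-ins.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

def train_and_score(n_samples: int) -> float:
    """Train on n_samples points of a sine wave, return error on held-out data."""
    X_train = rng.uniform(-3, 3, size=(n_samples, 1))
    y_train = np.sin(X_train).ravel()

    model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
    model.fit(X_train, y_train)

    # Evaluate on a dense grid the model never saw during training.
    X_test = np.linspace(-3, 3, 500).reshape(-1, 1)
    y_test = np.sin(X_test).ravel()
    return mean_squared_error(y_test, model.predict(X_test))

for n in (10, 100, 1000):
    print(f"{n:5d} training samples -> test error {train_and_score(n):.4f}")
# Typically the error drops as the training set grows, which is exactly why
# these projects are incentivized to hoover up as much data as they can.
```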
So you need to feed your hungry AI as much as you can. Now, say you've got two AI projects starting up:
Project A wants to do this ethically. They generate their own content to train the AI on, and they seek out datasets whose licenses explicitly allow use in AI training. They avoid any public data whose creators haven't explicitly consented to it being used for AI training.
Meanwhile, Project B has no interest in the ethics of what they're doing, so long as it makes them money. So they don't shy away from scraping entire websites of user-submitted content and stuffing it into their AI. DeviantArt, Flickr, Tumblr? It's all the same to them. Shove it in!
Now let's fast-forward a couple of months of these two projects doing this. They both go to demo their project to potential investors and the public at large.
Which one do you think has the better-trained AI? The one with the smaller, ethically obtained dataset? Or the one with the much larger dataset that they "found" somewhere after it fell off a truck?
It's gonna be the second one, every time. So they get the money, they get the attention, they get to keep growing as more and more data gets stuffed into it.
And this has a follow-on effect: remember, we've just pre-selected AI projects for being run by amoral bastards. So when someone asks "hey, can we use this AI to make NFTs?" or "hey, can your AI help us detect illegal immigrants by scanning Facebook selfies?", of course they're gonna say "yeah, if you pay us enough".
So while the technology is not, in itself, immoral or unethical, the situations around how it gets used in capitalism definitely are. That external influence heavily affects how it gets used, and who "wins" in this field. And it won't be the good guys.
An important follow-up: this is focusing on the production side of AI, but obviously even if you had an AI art generator trained on entirely ethically sourced data, it could still be used unethically: it could put artists out of work by replacing their labor with cheaper machine labor. Again, this is not a problem of the technology itself: it's a problem of capitalism. If artists weren't competing to survive, the existence of cheap AI art would not be a threat.
I just feel it's important to point this out, because I sometimes see people defending the existence of AI Art from a sort of abstract perspective. Yes, if you separate it completely from the society we live in, it's a neutral or even good technology. Unfortunately, we still live in a world ruled by capitalism, and it only makes sense to analyze AI Art from a perspective of having to continue to live in capitalism alongside it.
If you want ideologically pure AI Art, feel free to rise up, lose your chains, overthrow the bourgeoisie, and all that. But it's naive to defend it as just a neutral technology like any other when it's being wielded within capitalism; i.e., with an overwhelmingly negative impact.