Class Action Filed Against Stable Diffusion, Midjourney, and DeviantArt

Hm. I had no idea the CA referred to SD as collage. Cynical minds think alike.

Hell of a zinger at the end of the CA too. Nice.

Seems to me this is just techbros flying wild for quick bucks in the wake of the NFT crash and burn.

In other words, criminals trying to outrun the necessary update to legal definitions. Which that scumbag at Midjourney confirms -- essentially, "there is no *specific* law against what we are doing."

The only people who say that KNOW they're doing the wrong thing, especially if the only reason they say it is because someone asked.
The name says it all.
Last edited by 鬼殺し on Jan 16, 2023, 7:21:53 PM
AI is great

It's time everyone loses their jobs to AI equally
Need more brains, exile?
The problem as I see it is that in the very near future it will be impossible for anyone to distinguish what has been "stolen" versus influenced, partially taken, modified, or otherwise altered into an "original version".

This is further complicated by the AI's ability to scour all sorts of digital spaces. For example, if something is ripped from a movie poster in some smaller-ish Italian town, where someone takes a photo or posts it on social media, and it ends up used in, let's say, a character profile developed for a concept in a tabletop board game in California, the chances the Italian artist has any clue are remote.

So again, the fuckery taking place by less-than-ideal characters at these companies is obfuscating the overall premise: AI is exponentially expanding, especially in this area, and the speed, efficiency, and costs are going to price most human artists right out of these markets.

It's sad in a way, but that's how technology works. Do we really feel bad for abacus makers losing their jobs as calculators and computers were invented? Or horse-carriage makers, or typewriter manufacturers, or drive-in movie theatres, or Blockbuster Video?

"Better to remain silent and be thought a fool than to speak out and remove all doubt."
- Abraham Lincoln
"
DarthSki44 wrote:
The problem as I see it is that in the very near future it will be impossible for anyone to distinguish what has been "stolen" versus influenced, partially taken, modified, or otherwise altered into an "original version".
That's how art already works. Should artists have to pay everyone who influences them?

"
DarthSki44 wrote:
For example, if something is ripped from a movie poster in some smaller-ish Italian town, where someone takes a photo or posts it on social media, and it ends up used in, let's say, a character profile developed for a concept in a tabletop board game in California, the chances the Italian artist has any clue are remote.
This doesn't really make sense. AI art systems aren't databases of images. Nobody's art is being 'ripped from one place and used in another'.
"
That's how art already works. Should artists have to pay everyone who influences them?


That is definitely not "how it works". You can't just take an original portrait, throw a mustache on it, and call it your own. Or a song, or a selection of writing, if you expand beyond our digital AI conversation to apply to "art" in a general sense. You can be inspired by an artist to make an original concept, of course; that's fine. But in cases such as in this lawsuit that's not what's occurring, and I don't think that's much debatable. The amount of what is being sampled in a digital space is the issue here, and who is ultimately responsible for said generation. How much is "too much", who determines that, and to what extent?

"
This doesn't really make sense. AI art systems aren't databases of images. Nobody's art is being 'ripped from one place and used in another'.


The AI systems are in fact taking digital images and concepts into account as a basis for their understanding when you enter keywords and descriptions for generation. It's the only way the AI understands what the input's intentions are in a general sense.

I'm not sure I even understand how you think the AI would know what to base an image on without some fundamental basis for what the image represents in the first place. Unless you have a better explanation or understanding of what the AI is doing with the human input requests for specific images?
"Better to remain silent and be thought a fool than to speak out and remove all doubt."
- Abraham Lincoln
Last edited by DarthSki44 on Jan 16, 2023, 11:30:18 PM
"
DarthSki44 wrote:
"
That's how art already works. Should artists have to pay everyone who influences them?


That is definitely not "how it works". You can't just take an original portrait, throw a mustache on it, and call it your own.
Of course you can't. Trivial extremes aren't useful cases here though. Nobody (serious) is concerned that artists are using AI tools to reproduce identical copies with moustaches. You could do that in MS Paint.

What I was saying by "that's already how it works" is that it's already the case that you can't look at something and say with certainty whether someone "stole" a work or whether they were "just" influenced by it. There is no algorithm to distinguish the options you laid out ("what has been 'stolen' versus influenced, partially taken, modified, or otherwise altered into an 'original version'") in some neat binary yes/no answer. It's a subjective idea that ultimately comes down to "what you can convince a judge to be true".

All artists use knowledge of other works in their own work, consciously or not. Seeking to criminalise the use of knowledge of other works in original works is a hell of a can of worms that I don't think artists will like the end results of.

"
DarthSki44 wrote:
The amount of what is being sampled in a digital space is the issue here
Then that's easy: none of it is being sampled. A sample (in this context) is a partial copy. No works are copied by something like Stable Diffusion, not even in part. Works are read. Is looking at art and learning from it a crime now?

"
DarthSki44 wrote:
"
This doesn't really make sense. AI art systems aren't databases of images. Nobody's art is being 'ripped from one place and used in another'.
The AI systems are in fact taking digital images and concepts into account as a basis for their understanding when you enter keywords and descriptions for generation. It's the only way the AI understands what the input's intentions are in a general sense.
"Taking into account" is a very vague phrase; it's not clear whether you are saying anything at all here. I'm taking your post into account when I respond to it, surely you can't sue me for that.

"
DarthSki44 wrote:
I'm not sure I even understand how you think the AI would know what to base an image on without some fundamental basis for what the image represents in the first place?
I didn't say it has no fundamental basis of what the image represents. I said it's not a database of images. It's not taking a work, copying it, and transferring it (or even parts of it) somewhere else. It's reading a work, using that to update its internal weights which tell it what sorts of outputs tend to be associated with particular data. It is not storing the work, or redistributing the work: it is learning from the work. That's not a problem; that's called experiencing art.
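The weights-versus-database distinction above can be made concrete with a toy training loop. This is a hedged, hypothetical sketch, not how Stable Diffusion actually trains (real diffusion models are vastly larger and more involved); it only illustrates the principle being claimed: the model "experiences" each example by nudging its parameters, and afterwards retains only those parameters, not the examples.

```python
# Toy illustration: a model learns from data by updating weights,
# and keeps only the weights -- the training data is never stored.
# (Illustrative sketch only; real image models share this principle
# at enormously larger scale.)

def train(samples, lr=0.01, epochs=500):
    w, b = 0.0, 0.0                  # the model's entire "memory"
    for _ in range(epochs):
        for x, y in samples:
            pred = w * x + b
            err = pred - y           # how wrong the current weights are
            w -= lr * err * x        # nudge weights toward the data...
            b -= lr * err            # ...without ever storing the data
    return w, b

# "Training set": points on the line y = 2x + 1
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = train(data)

# The finished model is just two floats; the training points are gone.
print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0
```

Whether that process should require a licence for the works it reads is exactly what the lawsuit is about, but "database of copies" is not a technically accurate description of it.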

You might read the response site I posted earlier if you haven't already (class-action post on the left side, response on the right); it might help shed at least a little light on what is and isn't happening in these systems if you're interested.
Whatever else, I appreciate lolozori, one of the finer artists in this community, making this thread. It's been an interesting little clickhole. Not really something I can discuss with the GF given she's far more invested in traditional visual arts than I am, but I try to keep it objective in my own assessment. I have also read some writers' takes on it, especially journalists who are side-eying ChatGPT. I think we're safe for now, mainly because the hardest rule you can teach a thing governed by rules is when and how to break them, i.e. the creation of literature. But fanfic level shit concerned only with plot and trope, lowest-common denominator trash? Probably already on the go.

As for SD, I maintain the stance that it is facilitating collages and not producing true art regardless of how the AI process works. I will probably keep that stance until we have something close to Data-level nuance, interpretation and sentience, so-called true AI...but by then I'm sure we'll have much bigger concerns than him painting a picture based on his flawless ability to reproduce any or all of them. Like, say, him using said paintings to figure out a complex dream.

Both ChatGPT and SD create an illusion of verisimilitude: enough to fool a casual observer taught only to see stuff like 'sentences that conform to basic grammar rules' and 'pictures with familiar elements' as legitimate, but to a trained eye neither stands up as anything but uncanny, grotesque mimicry. And that's important, because I find both to be an aesthetic version of 'a dumb person's idea of smart'.

Which is great until it's your field of expertise that is being aped and commodified.

"
DarthSki44 wrote:
It's sad in a way, but that's how technology works. Do we really feel bad for abacus makers losing their jobs as calculators and computers were invented? Or horse-carriage makers, or typewriter manufacturers, or drive-in movie theatres, or Blockbuster Video?


Ah, the old 'robots are going to take our jobs' argument. Old, so old. And it's never been anything but illegitimate or irrelevant. But -- you knew there'd be a 'but' -- in each of the examples you made, I feel the redundancy was a result of improvement within that field, generally speaking. A calculator is, for the purposes of this argument, a super abacus that mathematicians can learn to use in lieu of an abacus -- and there are still people who use an abacus even now, sometimes quicker than a calculator. Carriages to cars, same thing. One type of engineering to another. And so on.

The reason why this is different, and not simply luddite kneejerkery, is that AI-generated art/writing does not come from artists or writers. It is a poor facsimile generated by a tool designed to do 'good enough', by people from a different field altogether. People who, because they are not artists or writers, see art and writing as a free-for-all source. But then they *sell* their tool, of course they do. This is pretty blatantly hypocritical, even if the AI itself isn't 'stealing' others' work. It's monetising mediocrity that only exists because the real thing is expensive. If it weren't, we wouldn't have the 'starving artist' stereotype.

Early experiments in AI writing such as Racter are really fascinating but in the end there's always been someone behind the scenes 'massaging' the text into recognisable art. An editor, give or take.

So let's say AI art as it is now takes off. AI writing too. Eventually, it will be AI art and writing that teaches AI better art and writing. This creates a pretty unhealthy cycle of derivative degradation, one you can observe in certain genres such as newer YA novels -- a lot of them are written by people who read little more than YA novels themselves. What should be an evolution of style just becomes a stasis, at best. More realistically, it mistakes bad forms and habits for acceptable ones, because that's how the former generation did it.

And that happens even with human editors. Even with human writers.

Remove that element and the result is, like it or not, the antithesis of art as creative expression. Not an absence of it but, technically, an overabundance. And if this derivation of art becomes all people know of art, then art might as well be a dead word. Just call them 'pictures' and be done with it. Eh, not even that word fully encapsulates the end result.

I will take refuge in analogy: could you imagine experiencing live music if all you'd ever heard was 320kbps MP3s? I can listen to those and enjoy them as a layperson, but even I can tell the vast difference between a recording of a live concert and actually being there. And that STILL doesn't capture it, because AI-generated content remains nothing but imitation. Imitation we already struggle to recognise as imitation, because we're taught to see broadstroke similarities and call it good -- hence AI's existing ability to make cliched as fuck, generic book covers extremely cheaply and easily. But could an AI design a simple-looking symbol or icon whose meaning only becomes clear if you read the whole book? The sort of thing that looks like it cost a few dollars but is actually the result of a slow, expensive process of iteration and fine-tuning? Very, very unlikely.

The only way AI art can be a tool for artists the way the abacus, calculator and computer remain tools for mathematicians is to very strictly regulate its application, which is really what this class action is all about. No one is going for a payday. $1 per piece of artwork 'stolen' is the current target. Class actions are about setting precedent. Drawing lines against reprehensible behaviour by essentially unethical parties.

It's not just the artists who should be up in arms about what MJ, DA and SD have done; the scientists and futurists should be aggravated as well. The moment these greedy bastards put a price tag on any of it, they showed their hand. Their desired future isn't some lofty Star Trek scenario; it's Cyberpunk, with them as the fucking corpos.

PS do read the link the Croc linked. I did. A few times. It's only one side of the story, clearly, but the other side doesn't really exist -- the MJ reddit is just people using it; the Facebook page, the same. These companies know better than to respond directly. They put their foot in it any time someone catches them off-guard. I'm willing to bet their current legal advice is 'SAY FUCKING NOTHING'.

https://www.reddit.com/r/StableDiffusion/comments/10bj8jm/class_action_lawsuit_filed_against_stable/ -- although I did find this. Probably 'the other side' as good as it gets. Mostly pointy-stick clowning without AI tools, but I expected little more.

https://linktr.ee/wjameschan -- everything I've ever done worth talking about, and even that is debatable.
Last edited by Foreverhappychan on Jan 17, 2023, 1:19:39 AM
"
Imitation we already struggle to recognise as imitation, because we're taught to see broadstroke similarities and call it good -- hence AI's existing ability to make cliched as fuck, generic book covers extremely cheaply and easily. But could an AI design a simple-looking symbol or icon whose meaning only becomes clear if you read the whole book? The sort of thing that looks like it cost a few dollars but is actually the result of a slow, expensive process of iteration and fine-tuning? Very, very unlikely.
Right, AI isn't magic, artists still exist. So what's the problem, exactly? Who is threatened?

"
It's not just the artists who should be up in arms about what MJ, DA and SD have done; the scientists and futurists should be aggravated as well.
Why should anyone be up in arms about it? Your whole post is a series of complaints that AI sucks at making art, that it isn't real art etc. "Could AI make this nuanced thing on its own? Very unlikely" etc. So I mean - yes, I agree, but so what? That isn't a problem, it's the nature of tools.

It's always worth remembering that AI doesn't make the art you see published; artists use AI to make art. Not getting "up in arms" at your tools for not making "true art" is just normal human behaviour.

"
No one is going for a payday. $1 per piece of artwork 'stolen' is the current target. Class actions are about setting precedent. Drawing lines against reprehensible behaviour by essentially unethical parties.
So would you be in favour of all artists having to pay $1 to any artist whose work they experienced in the past that may or may not have contributed some small influence on their work since (which is, arguably, literally every artificial construction we encounter)? If not, what is the $1 a payment for, exactly?

"
The moment these greedy bastards put a price tag on any of it, they showed their hand. Their desired future isn't some lofty Star Trek scenario; it's Cyberpunk, with them as the fucking corpos.
You make it sound like selling tools is some secret evil plan. Safer to just assume up front that if people are putting loads of work into something, there's a good chance they're going to want to use it to help pay the bills at some point. It's nobody's responsibility to usher in some Star Trek utopia; that seems a fairly unreasonable bar to hold people to.

What about the greedy bastards who decided they're going to make really effective guitars, even though they can only produce an illusion of verisimilitude rather than true art, and require an artist to massage the results? Pianos. Drawing tablets. Sound editors. Game engines. Paintbrushes. Fucking Cyberpunk corpos, the lot of them. Or, you know, not, obviously. People make thing, people sell thing, oh no.
Weird. I thought Devil's Proof would be the extent of it, but now just whataboutism? Yikes.

__

So I have to admit, I feel like a science-denier on this issue, which led me to a few gentle balms in that regard: Slashdot and Ars Technica. I wasn't surprised to see the Slashdot folk incline towards the defendants, because yeah, the language of the class action is dodgy, emotive and misleading, but Ars Technica proved a great source of centrist discussion for the most part. Which then led me to watch a lovely 17-minute video on that site that I simply will not and cannot share here because it'd get deleted very... very quickly, showing how scientists deal with anti-science people. How to bridge that gap. Some of it was fluffy, pie-in-the-sky stuff, but most of it reiterated that science is all about proving itself wrong, and that when 'good' scientists change their minds, it's because they've been proved wrong and need to amend their views. And that's ONLY when speaking about science -- most scientists have all sorts of untenable or at least non-evidence-based beliefs in their lives otherwise. That's just human.

Naturally, there's always the line about the importance of scientists not talking down to non-scientists, providing lay summaries of their work, engaging in everyday language. There's this term: the Information Deficit Model. It's typically used to describe situations where the public is sceptical of or hostile towards science due to a lack of understanding resulting from a lack of information (hence 'deficit'). In this context, however, the scientist bringing it up sort of reversed it back onto science educators: you can't just throw a bunch of knowledge at a populace that isn't trained to understand it and expect them to know as much as someone who is, i.e. an expert. Or, perhaps more pertinent now: you need to find some way of convincing the populace that knowledge misunderstood is as bad as actual falsehood.

The general vibe was people want to be right, everyone wants to be right, but being actually right requires a lot of being wrong first. Or, in lay terms: learn from our mistakes by recognising them as such.

The final message of the video was basically, you can't change someone's mind. That's not how it works. You can, however, change their attitude. I really dug that.

___

There was one comment on ars I also really dug. Broadly, I pointed out that while AI image generation doesn't 'collage' or 'steal' art, it does use art as a sort of feed. Without that art, it wouldn't have anything to 'scrape' (what a violent word these silly techbros picked -- it's going to feature so much in the case). Artists should be allowed to volunteer their work for that feed, or to say no thank you, or ask for a credit. Fair enough I think. But that post brought me back to what I was trying to explain regarding the potential slippery slope away from art as a creative, deliberate expression and towards simply the generation of images fitting a criterion based on existing images: what happens when the 'feed' becomes nothing but what the 'feed' generated?

That's a closed system, cannibalising itself. Creatively, a vacuum. And as the saying goes (I can't find the true source [Tarkovsky? Kitaj? GOGGINS!?] -- how ironic): art is not made in a vacuum.
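The 'closed system cannibalising itself' worry can be sketched with a toy simulation (a hedged illustration only; researchers discuss versions of this under the name 'model collapse'). Each generation, a 'model' is fitted solely to samples produced by the previous generation's model, with no fresh outside input. The function and parameter names below are invented for this sketch:

```python
import random
import statistics

def closed_loop(generations=50, n_samples=20, seed=0):
    """Repeatedly fit a Gaussian to samples drawn from the previous fit."""
    random.seed(seed)
    mu, sigma = 0.0, 1.0          # generation 0: the original 'feed'
    history = [sigma]
    for _ in range(generations):
        # The next generation trains only on the previous model's output.
        samples = [random.gauss(mu, sigma) for _ in range(n_samples)]
        mu = statistics.mean(samples)
        sigma = statistics.stdev(samples)
        history.append(sigma)
    return history

history = closed_loop()
# With no outside input to anchor it, the spread of the 'feed' drifts
# from generation to generation; runs of this loop tend to lose variety.
print(history[0], history[-1])
```

A toy Gaussian is obviously not an image model, but the structural point carries: a generator fed only on its own output has nothing to correct the drift.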


https://linktr.ee/wjameschan -- everything I've ever done worth talking about, and even that is debatable.
Last edited by Foreverhappychan on Jan 17, 2023, 2:45:39 AM
I see whataboutism as dismissive, rather than illustrative or exploratory. This was just analogy to me; discussing similar applications of a concept to show or inquire about the shape of that concept. But no worries.

Hope you’re doing okay man. Brains, hey? Weird things.
