Yeah, it’s more of this. If you’re sick of hearing about AI and/or machine learning, then you may want to skip this post, but I’ll probably be writing about the subject every so often as the technology develops (and as usual, all the legal stuff here may only apply in the US.)
There have been a shocking number of developments since I wrote on the subject late last year. A few lawsuits have been filed against image and code generator owners and operators, perhaps the most prominent being Andersen v. Stability AI Ltd., now pending in federal court in California. The grounds for this class-action suit can be found on the plaintiff firm’s website, where they show, if nothing else, that they have a good sense for clean and simple webpage design.
The case is still in its very first stages. None of the defendants have filed an answer yet, and it’s possible they’ll be filing motions to dismiss first. Such a motion should be filed in place of an answer if the defendant has an argument that the plaintiffs’ initial complaint is technically faulty somehow. There are various grounds on which to base a motion to dismiss, but the one I might expect here is failure to state a claim upon which relief can be granted (i.e. “you’re not actually claiming I’m doing anything illegal/infringing on your rights.”) I doubt very much that the court would grant such a motion given how novel this case is, but it might still be worth a try. The fact that Stability AI has announced artists can opt out of having their work used to train Stable Diffusion 3 may make a difference in that decision, though I can’t say how much of a practical effect it will have either on this case or on the operation of the next Stable Diffusion model.
The complaint isn’t airtight, nor can we expect it to be given how novel this case is. This isn’t Thaler v. Perlmutter: in that case, I argued that the US Copyright Office was on extremely solid ground in denying Dr. Thaler’s image generator copyright ownership over its images based on the human authorship requirement contained in the office’s interpretation of the Copyright Act. Andersen will instead raise the question of fair use, and specifically of transformative use. Have the companies behind Stable Diffusion, Midjourney, and whatever DeviantArt is using violated the legal rights of artists by scraping the internet for billions of images to train their programs on, or will the defense of transformative use act as a shield against further litigation?
I’m not going to pretend that I can possibly predict the ultimate outcome of Andersen. However, it is certain that fair use will be the primary defense in this case. I’ve already seen some people conflating the legal issues involved in this case and Thaler, which is understandable considering how much of a labyrinth the American system of legal precedent, statutes, and regulations can be. But keep in mind that the doctrine of transformative use, a subset of fair use, is only a defense to a charge of copyright infringement. If the court in Andersen were to find, for example, that the output of Stable Diffusion is transformative enough to not infringe on the rights of the artists whose works were used to train the system, it wouldn’t automatically follow that said AI-generated output is a copyrightable work in itself given the Copyright Office’s stance against granting protection to AI-generated works. The courts’ findings in Thaler and Andersen, together with other proposed and pending AI-related cases, may create a new framework of legal precedents to work from, though I wouldn’t hold my breath waiting through the years it will take for these cases to get through discovery, countless motion hearings, back-and-forth negotiations, and finally appeals.
I won’t go into transformative use in depth today. It seems pointless to do so this early in the case — I’d rather wait for those potential motions to dismiss, responses to said motions, and the court’s order, which may take a few months to come out depending on the deadlines. Like I said, don’t hold your breath.
But there’s still plenty to examine here, even in these early days. When I was digging around for more information about the Andersen complaint, I found this attempt at a takedown of the plaintiffs’ complaint by a group of “tech enthusiasts uninvolved in the case.” I disagree that the plaintiffs’ lawsuit is frivolous, and I think there are some fairly disingenuous and even a few outrageous remarks on this page. However, this unnamed group of enthusiasts also raises some counterarguments to the Andersen plaintiffs’ allegations that are worth considering.
Setting aside all the page’s technical analysis of how AI “art” is generated, which I admit I don’t have the expertise to address, the most interesting point they raise (though not their strongest argument) is the lack of a bright line between the mere use of a tool in the creative process and the generation of an AI image as an act independent of the prompt-writer. I also believe that there isn’t a bright line here but more of a spectrum. This reminded me of something I’d seen a few days earlier, when I was watching an artist in a livestream using a lighting tool in Photoshop to “place” the sunlight source in the image. I’m sure there are painters who would consider that sacrilege, and the same goes for line-straightening and maybe even for working in easily editable layers. There may also be automated tools that can be applied to certain parts or aspects of human-created works that would sit further towards the “AI end” of that spectrum.
This seems to be one of the AI proponents’ favorite defenses, and for good reason: there’s something to it. Rejecting AI tools in the creation of visual art, they say, is akin to rejecting the camera or digital art tools and methods, rejections that also happened in their own times. Yet I still insist, at the risk of being called a Luddite (which they would definitely say I am anyway, so it hardly matters now), that this time it is different. I’ll refer back to an argument I made in the context of Thaler, because it applies here as well. An artist who wields a tool still completely or substantially controls the end result; the tool only aids them in getting there.1 By contrast, a system like Stable Diffusion generates an image according to the user’s parameters, said image being substantially outside the user’s control until they start editing it.
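To make that point about “parameters” concrete, here’s a rough, minimal sketch of what a user actually hands to a system like Stable Diffusion, assuming the open-source Hugging Face diffusers library. The checkpoint name, prompt, and settings below are purely illustrative and aren’t tied to any party or workflow in Andersen; they just show that the user’s contribution amounts to a prompt string and a few numeric knobs, while every pixel of the result comes out of the trained model.

```python
# A minimal, illustrative sketch (not anyone's actual workflow): the user's
# entire contribution is a prompt, an optional negative prompt, and a few
# numeric settings. The pixels themselves come from the trained model.
# Assumes the Hugging Face "diffusers" library and a CUDA-capable GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example public checkpoint
    torch_dtype=torch.float16,
).to("cuda")

result = pipe(
    prompt="a watercolor landscape at sunset",  # the user's "parameters"...
    negative_prompt="blurry, low quality",
    guidance_scale=7.5,        # how strongly to steer toward the prompt
    num_inference_steps=50,    # number of denoising steps
    generator=torch.Generator("cuda").manual_seed(42),  # fixed seed for repeatability
)

result.images[0].save("output.png")  # ...and everything else is the model's doing
```

Whether that handful of inputs amounts to the kind of control an artist exercises over a camera or a Photoshop tool is exactly the distinction I’m drawing above.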
The use of AI tools in editing or supplementing human-created art may sit in a gray area between these two points. We’ve already seen such cases, again sparking serious anger — see for example the Netflix-produced anime short Dog and Boy, with background art credited to “AI (+ human)”. And there’s already been a legal controversy over such mixed human/AI visual works in the denial of copyright registration for a comic with AI-generated elements.
Again, I won’t argue over the specifics of how Stable Diffusion or similar systems generate their images — I lack the necessary technical knowledge, and I’m sure those specifics will be covered in great depth in the coming filings in Andersen, so I may as well let the people actually getting paid to do the work argue those points instead. But though I probably will address those issues later on, I’m not just approaching this matter as an attorney. From that legal perspective, I can be more dispassionate and can easily put myself in the defendants’ position.
But as an amateur writer, I admit I have a bias here and a personal stake in the outcomes of these cases, even if a small one. Today, if I felt like it, I might use NovelAI, ChatGPT, or a similar system to help me fill in a story with descriptive scenes. Even after editing, however, that part of the story wouldn’t be mine, and as far as I’m concerned, that lack of human authorship — my authorship — would taint the entire work. Maybe some writers don’t feel the same way and would say I just have a silly hangup, or that I’ll change my tune later on as times change and start to pass me by. They’re free to hold those opinions, and I’m free to hold mine. And if both the visual arts and literature end up congealing into a dull, stagnant mush as a result of reliance on “automated creativity” then kindly don’t talk to me about it, because I’ll be busy with my horrible, utterly mind-numbing legal work. I’d rather do that than try to put out my own painstaking writing where reliance on automation has become the standard.
And since this is also partly an anime review/analysis/etc. site, if you want my opinion on Dog and Boy, there it is. I certainly don’t agree with everything Miyazaki says about anime, very far from it in fact, but he was spot on in this case:
Before I’m done with this post, though, I have one warning for those who are not just excited about the advent of AI but are giddy over its replacing and “improving” on the work of human artists (or who insist there’s no risk of replacement, about which a little more in the endnotes.) Those who believe AI technology can be wielded only in ways that they like will probably discover uses of AI down the road — and perhaps not far down this road we’re on — that they don’t like or agree with, uses that may even damage human relationships and society itself.
Well then, if that happens, go ahead and try to close Pandora’s box, pretending everything will be all right. Don’t think about the possible decay of social, family, and even potentially romantic bonds as AI expands into areas of life you might have thought would always be left to humans. At that point, only one thing is certain to me: you won’t have my help if the shit really hits the fan. Because I figure that if you’re going to support the wresting away from human hands of the one thing in life that makes me feel fulfilled, I may as well go ahead and escape reality even more fully when I have the time by drowning in some AI-powered fantasy where I live in a mansion staffed by catgirl maids and where I don’t have to resent every moment of a life I live purely out of obligation to others anyway. Is that acting out of hypocrisy or just sheer spite? No, neither: I call it being practical.2
Well, I got heated this time, but can you blame me? Maybe you can, and that’s what the comments section is for. If you think I’m an idiot, go ahead and say so, but by now you’ve come to expect something more than dry legal analysis from these posts (which has its place, just not on this site.) Until next time.
1 I apply the same argument in favor of the use of sampling in music, and for that matter the use of that lighting tool in Photoshop.
2 Okay, I was pretty pissed when I wrote this part, and maybe I shouldn’t have left it in. But here’s what set me off: the glib attitude we so often see from the all-in AI enthusiasts. In the takedown of the Andersen plaintiffs’ complaint linked above, see the following, taken from the response of the pro-AI tech group (since I have no other name to use for them) to the bios of the three named plaintiffs. Their text follows in blockquotes:
I have genuine sympathy for the plaintiffs in this case. Not because “they’re having their art stolen” – they’re not – but because they’re akin to a whittler who refuses to get power tools when they hit the market, insisting on going on whittling and mad at the new technology that’s “taking our jobs!” When the one who is undercutting their job potential is themselves.
I’ve already argued against what I see as the above faulty comparison between AI image generators and digital art tools — the latter seem to me the proper analogy to power tools. More striking, however, is the writer’s condescending tone. I doubt just how genuine the sympathy is when it’s expressed in such a way. “Bless your heart,” as they say down South — polite code for “what an idiot.” See also the writer’s assumption that artists aren’t having their art stolen. From a legal perspective, that one is for the court to decide when the defendants raise their transformative use defenses. And I won’t even get into the moral concept of theft in this context — that’s a question it will likely take an actual philosopher an entire book to address.
Jevon’s Paradox is real. Back when aluminum first came out, it was a precious metal – “silver from clay”. Napoleon retired his gold and silver tableware and replaced it with aluminum. The Washington Monument was capped with a tiny block of what was then the largest piece of aluminum in the world. Yet, today – where aluminum is a commodity metal costing around $2/kg, rather than a small fortune – the total market is vastly larger than it was when it was a precious metal. Because it suddenly became affordable, sales surged, and that overcame the reduction in price.
AI art tools increase efficiency, yes. Contrary to myth, they rarely produce professional-quality outputs in one step, but combined into a workflow with a human artist they yield professional results in much less time than manual work. But that does not inherently mean a corresponding decrease in the size of the market, because as prices to complete projects drop due to the decreased time required, more people will pay for projects that they otherwise could not have afforded. Custom graphics for a car or building. An indie video game. A mural for one’s living room. All across the market, new sectors will be priced into the market that were previously priced out.
These are massive assumptions unsupported by any actual evidence. I’ve heard a lot of claims that artists won’t be shoved out of the market by the use of AI systems, that they won’t be replaced etc. etc., and these arguments very often rely on historical analogy. The problem with such an analogy in this case (aside from it being overly simplistic and reductive in general) is that this new technology is unlike anything we’ve seen before, and its effects have already begun to extend beyond the world of art and into most other professions — including my own. (Not that I’d be all that broken up about finding something to do other than practicing law, but I still need a livelihood, you know? But apparently that’s simply a concern that can be hand-waved away by referring to a century-plus-old drop in the price of aluminum.)
And I won’t even get very far into the specifics here because this post is long enough as it is, but the assertion that indie video games can be made more affordable through the use of AI is just bizarre. Do they know how cheap (or even free) some of the best and most creative indie games out there are? I’ve featured some of them here on the site. Check out the games index page up top. I’ve also played a couple of games heavily featuring procedural generation, and they didn’t seem to be any cheaper than the rest. The assertion that the market will expand (without limit?) to accommodate supply is itself faulty anyway, since we only have so much time in the day to “consume content” that’s pumped out by whatever the hypothetical future artificial intelligence machines can come up with.
What can be said, however, is those who refuse to acknowledge advancements in technology and instead fight against them are like whittlers mad at power tools. Yes, people will still want hand-made woodwork, and it’ll command a premium. But you relegate yourself to a smaller market.
Here’s my final point (I promise.) The writer(s) behind this attack on the Andersen plaintiffs’ complaint may very well be right about the ultimate effects on the market. I don’t believe they’ve proven a damn thing here, but that doesn’t mean they’re wrong either. However, the dismissal of the legitimate concerns many artists have expressed over the use of deep learning and image generation systems is pretty god damn glib, and combined with the condescension we’ve seen from some of these proponents, it should explain the strong resistance to their views and arguments. To put it bluntly, even crudely, it all comes off as pissing in the face of human creativity. In saying that, I’m not even blaming the technology itself, which I believe can have great uses. This isn’t a case of “tech bad”, as I’ve seen my arguments and others’ reduced to. But see my Pandora’s box reference above.