I have to admit this, even as a lawyer (or especially as a lawyer) and as commenters confirmed in the last post: the law is often designed to protect those with power, money, and influence. I stand by my analysis of the legal situation in the first part of this post series, but I also don’t have very much faith in a political system that’s structurally pretty sound but run largely by people who don’t give a damn about the ideals they spout. If Hell exists, there may well be a special area for such people to hang around in, but in the meantime we have to deal with their rot and near-open corruption.
Partly for that reason, I think the US legal framework regarding the use of AI “art” may change soon, almost certainly through legislation, once corporate interests realize they can save a lot of money by not paying humans to create real art for their intellectual properties. I don’t believe artists will ever be out of a job entirely, but with the right (read: wrong) amendments made to the Copyright Act, plenty of them can effectively be made destitute.
So it feels pointless going on about the law, even if that is my field. I’ll keep following legal developments and pending cases like Thaler v. Perlmutter (and if you aren’t, check my last post for more on why you should be), but today I want to shift to a few moral and ethical questions surrounding AI “art” generation, ones I’m less qualified to talk about but that in some sense interest me more.
I was originally going to put up a post about something else today, but man if Twitter didn’t step in as usual to piss me off enough to push this post up the queue: clowns pretending that the AI-generated images they pieced together in a few dozen minutes using word prompts are “real art.” I’ve already addressed my feelings about whether this stuff is art (it isn’t), but clearly some people disagree with me.
Once again, the above is impressive. One year ago, we weren’t seeing AI produce images with this much detail. That said, some elements of these images are still off and clearly not human-created, even after the fine-tuning this guy says he did — people have pointed out the still-uncanny aspects of these images, like the eyes, and the parts of anatomy AI still can’t quite seem to pin down, like fingers and the finer details of the human body. I won’t get into that myself because 1) I’m no visual artist and 2) at the rate AI is advancing, I think it’s reasonable to believe it will get these down pretty well soon.
But yet again, the technical quality of these images is beside the point. Is it right for society to accept such AI-generated works as legitimate? Their legal status certainly has something to do with that, especially approaching it from a profit motive, but societal acceptance of this kind is a broader issue.
If you’re thinking “does it matter?”, consider how culturally frowned upon plagiarism is. If you’ve ever written a paper for a school assignment, as the vast majority of us probably have, you’ve been warned about not copying work without properly quoting and attributing it. A paper, an article, or hell, a blog post — any of these can be beautifully written, but if they’re products of plagiarism, they’re widely deemed totally worthless. Plagiarism is rightly recognized as theft.*
Now consider how these AI engines operate. I got into it briefly before, but my basic understanding is that these programs are trained on vast pools of existing art and then generate images based on the word prompts they’re given by the user. So for example, if you feed one a prompt akin to “victorian large breasted hot woman in a fancy dress” you might get the following or similar:
Setting aside technical qualities again (these feature a few of the uncanny quirks that I still think will be sorted out in the near future as the AI engines continue to improve), what are these pieces exactly? They’re mashups of human-created works. Of course, a “mashup” of this kind produced by a human is art as well — it’s not like the existence of influences in an artist’s work makes it not art. In fact, it’s impossible to imagine art totally uninfluenced by other art unless you have some kind of highly unethical “person locked in a cage without human contact” experiment going.
The difference here is the method and the degree of copying. People have pointed out that some of the most popular AI-produced images use the work of artists like Greg Rutkowski and others who never consented to their art being used to train these systems. It’s not that we’re guessing at this outcome — users are actually typing “in the style of Greg Rutkowski” or whoever else into their prompts, so there’s no doubt about the copying.
This leads to what I think is the heart of the issue. Certain people advancing this technology as actually creating art have been taking, as far as I can tell, the same soulless, empty approach to the value of creative works as our friends peddling their NFT garbage. No surprise, then, that there’s a fair overlap between the two groups: both seem to love reducing all human creation to “wow this looks cool” and “can it be marketed and sold”, ignoring the meaning behind art, the feeling, the context, everything that actually makes it interesting as art. Going back to art I don’t even like: I’m far more interested in understanding what motivated Mark Rothko to paint his color field works. There was a man who clearly was not in it for the money. Read his thoughts on his work, or consider how he refused to sell some of his art to luxury hotels because he didn’t want it used as mere decoration for wealthy diners, donating it to galleries for public viewing instead.
Well, we have no use for this way of thinking anymore, do we? It’s old-fashioned. AI images are cool, and you can easily create large-breasted women with them or whatever else you like in a matter of minutes. Never mind that you can do exactly the same with a copy of fucking Koikatsu, yet nobody is trying to convince society that scenes out of that game are art worthy to be hung in galleries. In fact, a typical Koikatsu or MikuMikuDance scene is generally speaking far more creative with far greater human input required, so I’d strongly argue for its legitimacy as art over this nonsense. Even if 99% of it is made for one purpose alone.
Yes, I do believe the technology is going to continue improving and that legal standards will likely change to the great detriment of artists and art. But I also believe that AI won’t kill art. There are plenty of forms of art so complex that they simply can’t be replicated** — imagine an advanced AI-produced game with all the moving parts necessary to make it work, or an animated series for the same reason. Or take a novel or even a short story: as far as AI story generators have come in the last few years, they still can’t produce anything better than somewhat coherent but ultimately meandering, meaningless trash without heavy human editing.
The same is true of visual art. As closely as some of the best AI engines can ape human artists’ styles or produce images based on photos, that’s all they’re doing. There’s still no thought behind the base results, not before a human starts making edits, and even then, if the base result is meaningless, how much meaning can touching it up add?
If you’ve read this site for a while, you know I’m absolutely not a romantic type. But I do believe in the power of emotion and passion when it’s poured into a work or an activity. I wouldn’t write about art here if I didn’t care about that. And despite the tech bros’ gleeful insistence that AI is overtaking the arts, I believe most people still feel the way I do.
If you need some proof, look at the world of chess — programs built just for the purpose of playing the game have advanced far beyond the Deep Blue of the 90s and are now unquestionably far better at it than the greatest human champion. But do people now watch championships of rival engines pitted against each other? I’m sure some do, but many more watch inferior human intellects playing chess. Why? Maybe because that human element makes the game more interesting. People still give a shit about who Magnus Carlsen is despite the fact that Stockfish 13 has a far higher Elo rating. That fact gives me some comfort.
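For anyone curious what a “far higher” Elo rating actually means: the Elo system converts a rating difference straight into an expected score with a simple logistic formula, so the gap has a concrete, calculable meaning. A quick sketch in Python (the ratings below are made up for illustration, not Carlsen’s or Stockfish’s actual numbers):

```python
# Elo expected score for player A against player B:
#   E_A = 1 / (1 + 10 ** ((R_B - R_A) / 400))
# Ratings here are purely illustrative, not real engine/human ratings.

def expected_score(r_a: float, r_b: float) -> float:
    """Expected score (between 0 and 1) for player A against player B."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

# Equal ratings give an even 0.5 expected score.
print(expected_score(2000, 2000))  # 0.5

# A hypothetical 400-point gap gives the stronger side about a 91% expected score.
print(round(expected_score(3200, 2800), 2))  # 0.91
```

Even at that modest hypothetical gap, the weaker side’s expected score is under 10%, which is part of why engine-versus-engine matches feel like a foregone conclusion to most viewers.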
I don’t have much more to say on this subject, but I’m happy to hear readers’ thoughts on it, especially since I’m certainly not an AI expert. I was in a comfortable area talking about copyright in that first post, but I’m outside my zone of expertise now, so I’m happy to be corrected on the details if it’s necessary.
* [EDIT] A note on derivative works here, since I don’t think I addressed this point very well — just because a work is derivative of another doesn’t make it not art, and it doesn’t make it bad, but even the derivative elements and how they’re treated need actual thought behind their creation, which I believe doesn’t figure into these AI works. We can get into hair-splitting pretty easily here, and I’m sure courts will do exactly that with these AI works at some point, as they have in past copyright cases.
** And if we ever get to this point, it’s very likely AI will be so advanced that it can’t be distinguished from humanity, in which case we’ve entered the territory of works like Time of Eve and Her where we have to start thinking about them as having self-awareness and being integrated in some sense into human society. I’m not even going to get into that here, but I’ll just note that possibility in the endnote down here so the future AI superbeings at least know I considered it.