Why AI content generators can’t kill art (part 2: the one that actually matters)

I have to admit this, even as a lawyer (or especially as a lawyer) and as commenters confirmed in the last post: the law is often designed to protect those with power, money, and influence. I stand by my analysis of the legal situation in the first part of this post series, but I also don’t have very much faith in a political system that’s structurally pretty sound but run largely by people who don’t give a damn about the ideals they spout. If Hell exists, there may well be a special area for such people to hang around in, but in the meantime we have to deal with their rot and near-open corruption.

Partly for that reason, I think the US legal framework regarding the use of AI “art” may change soon, almost certainly through legislation, after corporate interests realize they can save a lot of money by not paying humans to create real art for their intellectual properties. I don’t believe artists will ever be out of a job entirely, but with the right (read: wrong) amendments made to the Copyright Act, plenty can be effectively made destitute.

So it feels pointless going on about the law, even if that is my field. I’ll keep following legal developments and pending cases like Thaler v. Perlmutter (and if you aren’t, check my last post for more on why you should), but today I want to shift to a few moral and ethical questions surrounding AI “art” generation, questions I’m less qualified to talk about but, in some sense, more interested in.

“Look how cool it looks, it’s real art! I spent 1.5 hours tweaking my prompts”

I was originally going to put up a post about something else today, but man if Twitter didn’t step in as usual to piss me off enough to push this post up in the queue: clowns pretending that the AI-generated images they pieced together with word prompts in an hour or two are “real art.” I’ve already addressed my feelings about whether this stuff is art (it isn’t), but clearly some people disagree with me.

Once again, the above is impressive. One year ago, we weren’t seeing AI produce images with this much detail. That’s ignoring the fact that some elements of these images are still off and clearly not human-created even after the fine-tuning this guy says he did: people have pointed out the still-uncanny aspects of these images, like the eyes, and the parts of human anatomy AI still can’t quite pin down, fingers and the finer details of the body especially. I won’t get into that myself because 1) I’m no visual artist and 2) at the rate AI is advancing, I think it’s reasonable to believe it will get these down pretty well soon.

But yet again, the technical quality of these images is beside the point. Is it right for society to accept such AI-generated works as legitimate? Their legal status certainly has something to do with that, especially approaching it from a profit motive, but societal acceptance of this kind is a broader issue.

Before you think “does it matter?”, consider how culturally frowned upon plagiarism is. If you’ve ever written a paper for a school assignment, as the vast majority of us have, you’ve been warned against copying work without properly quoting and attributing it. A paper, an article, or hell, a blog post: any of these can be beautifully written, but if it’s a product of plagiarism, it’s widely deemed totally worthless. Plagiarism is rightly recognized as theft.*

Now consider how these AI engines operate. I got into it briefly before, but my basic understanding is that these programs generate images using vast pools of art for reference, basing the results off of the word prompts they’re given by the user. So for example, if you feed a prompt akin to “victorian large breasted hot woman in a fancy dress” you might get the following or similar:

If AI thinks big tiddy is the be-all end-all of the female form it is totally uncultured, but then perhaps that’s a reflection of humanity? Not that I have a problem with that particular form, but if I say any more I’ll get sidetracked so never mind.
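For what it’s worth, the prompt-to-image loop I described above can be sketched in a few lines. To be clear, everything in this snippet is a made-up stand-in for illustration: real generators use trained neural networks (a text encoder plus a diffusion-style denoiser trained on those vast pools of scraped art), not a hash function and a fixed nudge. But the outer shape, noise in, prompt-conditioned refinement out, is roughly this:

```python
import hashlib
import random

def embed_prompt(prompt: str, dim: int = 8) -> list[float]:
    """Toy stand-in for a text encoder (real systems use a trained
    model like CLIP): map the prompt deterministically to a vector."""
    digest = hashlib.sha256(prompt.encode()).digest()
    return [b / 255.0 for b in digest[:dim]]

def generate(prompt: str, steps: int = 50, dim: int = 8, seed: int = 0) -> list[float]:
    """Start from pure noise and nudge it a little toward the
    prompt-derived target each step: the same outer loop shape as
    diffusion sampling, minus the learned denoiser doing the nudging."""
    rng = random.Random(seed)
    target = embed_prompt(prompt, dim)
    image = [rng.random() for _ in range(dim)]  # initial noise
    for _ in range(steps):
        image = [x + 0.1 * (t - x) for x, t in zip(image, target)]
    return image

# Different prompts steer toward different results; in the real systems,
# all the absorbed source imagery lives in the trained model weights.
result = generate("victorian woman in a fancy dress")
```

The point of the sketch is just where the human input stops: a prompt and some knobs. The heavy lifting, and every bit of the art the system learned from, is baked into the model.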

Setting aside technical qualities again (these feature a few of the uncanny quirks that I still think will likely be sorted out in the near future as the AI engines continue to improve), what are these pieces exactly? They’re mashups of human-created works. Of course, a “mashup” of this kind produced by a human is art as well — it’s not like the existence of influences in an artist’s work makes it not art. In fact, it’s impossible to imagine art totally uninfluenced by other art unless you have some kind of highly unethical “person locked in a cage without human contact” experiment going.

The difference here is the method and the degree of copying. People have pointed out that some of the most popular AI-produced images use the work of artists like Greg Rutkowski and others who never consented to their art being used to train these systems. It’s not that we’re guessing at this outcome — users are actually typing “in the style of Greg Rutkowski” or whoever else into their prompts, so there’s no doubt about the copying.

This leads to what I think is the heart of the issue. Certain people advancing this technology as actually creating art have been, as far as I can tell, taking the same soulless, empty approach to the value of creative works as our friends peddling their NFT garbage. No surprise, then, that there’s a fair overlap between the two groups: both seem to love reducing all human creation to “wow this looks cool” and “can it be marketed and sold”, ignoring the meaning behind art, the feeling, the context, everything that actually makes it interesting as art. Going back to art I don’t even like: I’m far more interested in understanding what motivated Mark Rothko to paint his color field works. Here was a man who clearly was not in it for the money; just read his thoughts on his work, or on how he refused to sell some of his paintings to luxury hotels because he didn’t want them used as mere decoration for wealthy diners, donating them to galleries for public viewing instead.

Well, we have no use for this way of thinking anymore, do we? It’s old-fashioned. AI images are cool, and you can easily create large-breasted women with them or whatever else you like in a matter of minutes. Never mind that you can do exactly the same with a copy of fucking Koikatsu, yet nobody is trying to convince society that scenes out of that game are art worthy of hanging in galleries. In fact, a typical Koikatsu or MikuMikuDance scene is generally speaking far more creative, with far greater human input required, so I’d strongly argue for its legitimacy as art over this nonsense. Even if 99% of it is made for one purpose alone.

If you really want to know, look it up (in private). And here’s a clue to my likely next post: the screenshot, I mean. I’m not writing a post about Koikatsu unless someone really wants me to, though I have no idea what I’d even say about it.

Yes, I do believe the technology is going to continue improving and that legal standards will likely change to the great detriment of artists and art. But I also believe that AI won’t kill art. There are plenty of forms of art so complex that they simply can’t be replicated** — imagine an advanced AI-produced game with all the moving parts necessary to make that work, or an animated series for the same reason. Or take a novel or even a short story: as far as AI story generators have come in the last few years, they still can’t produce anything better than somewhat coherent but ultimately meandering and meaningless trash without heavy human editing.

The same is true of visual art. As closely as some of the best AI engines can ape human artists’ styles or replicate or produce images based on photos, that’s all they’re doing. There’s still no thought behind the base results, not before a human starts making those edits, and even then if the base result is meaningless, how much meaning can touching up give the work?

If you’ve read this site for a while, you know I’m absolutely not a romantic type. But I do believe in the power of emotion and passion when it’s poured into a work or an activity. I wouldn’t write about art here if I didn’t care about that. And despite the tech bros’ gleeful insistence that AI is overtaking the arts, I believe most people still feel the way I do.

If you need some proof, look at the world of chess: programs built just for the purpose of playing the game have advanced far beyond Deep Blue in the 90s and are now unquestionably far better at it than the greatest human champion. But do people now watch championships of rival AIs pitted against each other? I’m sure some do, but many more watch inferior human intellects playing chess. Why? Maybe because that human element makes the game more interesting. People still give a shit about who Magnus Carlsen is despite the fact that Stockfish 13 has a far higher Elo rating. That fact gives me some comfort.

Me, I’d like to see what Osaka would do at the chessboard.
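For the curious, that rating gap can be made concrete with the standard Elo expected-score formula. The ratings below are rough illustrative guesses (roughly a top human grandmaster versus a modern engine on engine rating lists), not official numbers:

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Standard Elo expected score for player A:
    E_A = 1 / (1 + 10 ** ((R_B - R_A) / 400))."""
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))

# With a ~700-point gap, the human side's expected score works out to
# under 2% of the available points. People watch anyway.
human_share = expected_score(2850, 3550)
```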

I don’t have much more to say on this subject, but I’m happy to hear readers’ thoughts on it, especially since I’m certainly not an AI expert. I was in a comfortable area talking about copyright in that first post, but I’m outside my zone of expertise now, so I’m happy to be corrected on the details if it’s necessary.


* [EDIT] A note on derivative works here, since I don’t think I addressed this point very well — just because a work is derivative of another doesn’t make it not art, and it doesn’t make it bad, but even the derivative elements and how they’re treated need actual thought behind their creation that I believe doesn’t figure into these AI works. We can get into hair-splitting pretty easily at this point, and I’m sure courts probably will do that at some point with these AI works as they have in the past in other copyright cases.

** And if we ever get to this point, it’s very likely AI will be so advanced that it can’t be distinguished from humanity, in which case we’ve entered the territory of works like Time of Eve and Her where we have to start thinking about them as having self-awareness and being integrated in some sense into human society. I’m not even going to get into that here, but I’ll just note that possibility in the endnote down here so the future AI superbeings at least know I considered it.

4 thoughts on “Why AI content generators can’t kill art (part 2: the one that actually matters)”

  1. If readymades are art, which they’re widely accepted to be, why isn’t the AI example in image #1? Didn’t we already have this conversation a hundred years ago and wasn’t the end result art is what the artist says it is?

    Ditto, didn’t we already have this conversation a couple of decades ago around sampling in music? And mashups online? Does the involvement of an automated process in the form of an AI differ fundamentally from the same process carried out manually? If so, why?

    • No, I do believe the difference matters here. To me, there’s not enough of a connection between the result an AI puts out and the creator’s input to say there was sufficient intent behind its generation to call it art. I don’t feel the same way about sampling or collage since it involves a lot more thought and intent — take a work like Everywhere at the End of Time, almost all samples stitched together but in a way that gave them new meaning and power.

      I also don’t agree that art is whatever the artist says it is, though I don’t dismiss readymades totally. I don’t know that these questions have been answered as you say, either. I might be in a minority in my opinions here, but I haven’t heard any arguments to sway my views on what definitely does not count as art.

  2. As someone involved in the software field, I can say surely that we are far away from the point where AI can make anything close to art masterpieces and kill off the need for human artists. And even if we do somehow get there: the chance that we’re going to look at it and think “Oh, we don’t need to modify this any further, this is perfect!” is slim. We are too much of an arrogant, creative species who sees art in a different light that AI just can’t possess, and will nitpick over the slightest mishaps. Sure, if the AI comes to learn our preferences, then this might improve over time, but still that’s gonna take a whole other field of research which we haven’t even gotten near. Until then, the way I see it: there are guaranteed to be artists who will at best take the AI art as merely a baseline for the final product.

    You mentioned the AI story generators – I’ll give you an example from my field: AI that can generate software code. As wonderful as that sounds, ain’t no way that’s gonna replace programmers. Like visual art, programming is an art – and I’ve seen so many coding styles critiqued by different people that even if an AI churns it out, 9/10 times people will leave comments or call it “crap”. Myself included.

    • This is some good perspective to get, since it’s admittedly not my area. I was surprised by the apparent leaps AI work has made with these visuals, but I can see the issues that still exist with it that need editing by actual artists, and it sounds like that remaining work might be hard for the AIs to work out. AI might be a good tool to use for general concepts to work off of, then.

      Interesting to hear that about AI-generated code too. Since it has so many moving parts from what I can tell, it seems like AI wouldn’t be able to handle anything complex — at least not for quite a while.
