Why AI content generators can’t kill art (part 1: the legal framework)

Last year, I wrote about the short anime series Time of Eve, a science fiction story dealing with the relationship between humans and a set of androids with newly found sentience. In the course of that deep dive into the anime, I went on a tangent into AI “art” generation tools, asking whether artificial intelligence can truly create art as opposed to simply whipping up something derivative based entirely on user prompts and inputs and drawing from a pool of human-created work.

Time of Eve does indirectly deal with the issue of AI and creative expression, but I’ll be getting to that in the second part of this two-part post. The series is good though; go watch it.

I’m happy to say, reading that post back after just a year, that it feels badly outdated, because we now have tools available to create far more impressive images, word collections, and sound files. The image generator Midjourney is so impressive that it generated a piece that won first prize in a digital art contest at the Colorado State Fair — a choice by the board of judges that pissed a lot of people off and got a ton of publicity for the winner, one Jason Allen, a tabletop RPG creator.

And I’m right there with the angry torch and pitchfork mob this time. The idea that an AI-generated image can win an award in the category of “digitally manipulated photography” might seem pretty logical — after all, that sounds like digital manipulation, doesn’t it? But digital tools made for artists like Photoshop and GIMP still require complete human control and input, whereas Midjourney requires only that the user enter text prompts. If you’ve used DALL-E, you know how this works: type in whatever it is you’re looking for and you’ll get some depiction of said thing, assuming the AI can work out what it is you want. These tools don’t create art, as far as I can tell: they generate images that may or may not resemble art depending on how broadly you define the term.

In that Time of Eve analysis last year, I suggested that this material isn’t art, in part because there’s no intent to express anything behind the generation of each specific work. Some people may define art differently, but I can’t consider something art without at least this intent to express. I don’t like abstract expressionism; the drip paintings of Jackson Pollock and the color field paintings of Mark Rothko leave me completely cold, but I would never accuse them of not being art for just that reason. The same is even true of two guys I completely dumped on a while back, Damien Hirst and Jeff Koons — I find their work totally soulless and empty, but I can’t say they’re not trying to express something through it, even if that something is just “I like tricking rich people into thinking this piece has value” (in which case I actually respect that, but that’s a different matter.) And no matter how I might feel about their work, all four undoubtedly used techniques and put thought into those techniques in the creation of their work.

I actually like this “Théâtre D’opéra Spatial” image in itself more than a lot of what the above four guys have created. If you had just shown this to me and told me a human had painted it, I’d believe you, and it seems the judges felt the same way. It’s aesthetically pleasing.

I was going to post “Théâtre D’opéra Spatial” here, but the legal status and proper attribution of the image is in question, so instead here’s Kiryu on the dancefloor as a placeholder. I might replace this in the future if possible.

It’s also not art. It’s a mix of elements drawn from existing images, many of which were created with human thought and intent behind them, but put together without that intent to express, by a machine fulfilling the requirements of a series of prompts.

Shockingly enough, US law isn’t lagging as far behind on this high-tech matter as it usually would be. One of the most relevant cases to this issue has to do not with AI but with animals. You may have heard of the “monkey selfie” case Naruto v. Slater, billed as an Indonesian macaque named Naruto (?) suing a wildlife photographer, David Slater, over the ownership of photos a group of macaques took when Slater placed his camera on the forest floor and let them approach it.

A few of these photos turned out well, and when they were published on Wikipedia without Slater’s permission, he came down on the Wikimedia Foundation arguing that he held copyright to the “monkey selfies”, with Wikimedia arguing in its defense that no one held copyright because the creator of the photos was a non-human and hence that they were in the public domain. The US Copyright Office found in favor of Wikimedia, putting the case to rest and letting Slater at least compile these and other photos into a book of his otherwise copyrighted work.

And now in comes People for the Ethical Treatment of Animals, inserting themselves into a controversy nobody asked for their opinion on. PETA disputed both Slater’s position that he held copyright and Wikimedia’s that nobody did, instead asserting that Naruto himself held copyright since he snapped the photos. PETA sued as next friend of Naruto, a method of bringing legal action on behalf of a minor or of someone not mentally competent to act in their own interest. (They also very generously volunteered to act as administrator of the proceeds resulting from Naruto’s copyright.)

The Ninth Circuit on appeal found that PETA hadn’t properly established next friend status with Naruto, but far more importantly for our purposes here, it also found that PETA had failed to show that Naruto had standing under the Copyright Act, standing being someone’s right to sue in the first place.1 Part of the reasoning behind the decision not to extend copyright protection in such a case is that non-human animals don’t have the ability to express an “original intellectual conception.”2

The caselaw surrounding AI-generated images and other media is still nearly nonexistent, but early this year, the US Copyright Office confirmed that it will not recognize copyright in such media because it lacks sufficient human authorship. This decision arose from an ongoing fight in which one Dr. Stephen Thaler, creator of another AI image generation system called the Creativity Machine, claimed copyright over an image titled “A Recent Entrance to Paradise”. Dr. Thaler has filed suit in the DC district court disputing the Copyright Office’s rejection of his application, and the case is now pending.3

As Thaler’s attorney has argued (linked in the Smithsonian article above):

A.I. is able to make functionally creative output in the absence of a traditional human author and protecting A.I.-generated works with copyright is vital to promoting the production of socially valuable content. Providing this protection is required under current legal frameworks.

Setting aside Mr. Abbott’s use of the extremely vague term “socially valuable” to describe AI-generated content (it might have some social value anyway, sure, but that may even work against his argument considering the social value of the public domain concept), I believe his argument is total nonsense.

So let’s take it apart. AI makes “functionally creative output in the absence of a traditional human author.” But there is a human author involved. Many human authors, in fact, since the AI wouldn’t be able to generate a damn thing without its pool of existing human-created work to draw upon. That argument also ignores the human input necessary to generate said output, though I’d argue as the Copyright Office does that writing prompts, even many of them for the purposes of fine-tuning, doesn’t count as “human authorship” and doesn’t invest the prompt-writer with copyright ownership.

I suppose that’s why Abbott inserted the “traditional” to qualify his use of “human author” here, but it hardly matters, because again, this is an unforgivably vague term. What’s a non-traditional human author? The prompt-writer? The thousands (or more) of artists whose work was used (with permission or without, I’m not sure — that’s another issue entirely) to train these AI generators?

And while we’re picking apart his words, how about the use of the term “functionally creative”? Either something is creative or it isn’t. Using “functionally”, as far as I’m concerned, is basically an admission that both Abbott and his client know they can’t exactly argue AI-generated works are “created” in the same way as human-made works are, with actual thought and consideration instead of mechanical processes. In fact, I’d argue Naruto the macaque had a better claim to copyright over the photo he took than Dr. Thaler has to “A Recent Entrance to Paradise” through the Creativity Machine for the simple reason that a non-human animal can at least act independently and make its own choices without human prompting, whereas an AI can’t.4

Not yet, at least. But I’ll leave that for the second part of this post when I move away from the legal aspect of the AI “art” generation question and into the philosophical and moral ones. That’s something I’m not actually qualified to talk about, but I will anyway, because this really is far more than just a legal issue and I take an interest in both of its sides.

In the meantime, if you have any interest at all in these questions, be sure to follow the progress of Thaler v. Perlmutter, linked above on CourtListener. It may turn out to be a landmark case, though I hope not for the wrong reasons. Really, the suit should probably be thrown out on summary judgment.

From Pupa (2014), as a reminder that even bad art is still art. This isn’t a question of quality but rather of intentional creation.

Before I end this post and start working on the next, I’ll make a depressing prediction (as you’d expect from me!) I believe that as large, influential corporations see the value of AI-generated content, they and their special interest groups will push for changes to the Copyright Act to remove any ambiguity around the human authorship requirement commonly read into it, clearing the way for AI-generated work they don’t have to pay real flesh-and-blood visual artists for (and eventually writers and musicians too.) At the very least, I think they’ll be attracted by Stephen Thaler’s “my AI system is working for me under the work for hire doctrine” argument. I don’t see how this can be avoided unless these corps can somehow be shamed into not taking this route.

Well, regular people like us have just about no power over that. It still matters, but as with so many other things in this world, it feels to me like watching a trainwreck in slow motion — you can’t stop it from happening but just have to look on in horror as other people cry and laugh at it all.

But since it’s all useless, an effort may as well be made, because what do we have to lose? More on that whenever I can get to it, and maybe even without an endnote section nearly as long as the post proper. Until then.

 

1 I should note that the standing of animals under US law in general is a more complex issue. However, the Copyright Act specifically excludes non-human standing, and as a practical matter it’s hard to imagine what a macaque would do with its copyright ownership if the court had found in its favor. Naruto certainly didn’t know or give a damn about any of this nonsense, and the court rightly expressed doubts about PETA’s ultimate motives in their involvement.

2 Arcane legal hair-splitting time now, because it can’t be avoided. The US Copyright Office states in section 310.5 of its guide to copyrightable authorship that it “will not consider the author’s inspiration for the work, creative intent, or intended meaning” in determining the originality of authorship. However, it also states in section 306, The Human Authorship Requirement, that “copyright law only protects ‘the fruits of intellectual labor’ that ‘are founded in the creative powers of the mind.’ (Trade-Mark Cases, 100 U.S. 82, 94 (1879).)” (Citation included in case you really want to dive deep into the exciting history of US copyright law.)

I read this to mean that while the Copyright Office doesn’t care about the artist’s specific intent and won’t bother making courts try to be mind-readers in that sense, it also demands that there be some kind of intent to create, which is proved simply by the creation of the work. Whether any of the smartest/most potentially self-aware non-human animals are capable of “original intellectual conception” (higher apes? Dolphins? Maybe crows?) is an interesting scientific question, but the legal one has been answered, at least for now.

Finally, I should note to be fair that the Copyright Act doesn’t explicitly require human authorship — this is just the understanding of the Act’s plain language by the Copyright Office and courts up until today. There’s good reason to believe that won’t remain the case too much longer, as I note near the end of the non-endnote section of this post.

3 Thaler is trying to pull a pretty absurd trick here. While acknowledging that the Creativity Machine can’t legally hold copyright in its product, Thaler himself claims the copyright under one of two possible legal theories: first, as the owner of the AI and therefore the AI’s product according to the accession and first possession doctrines of property law, just as a farmer owns the calf birthed by a cow he owns, or as the owner of a 3D printer owns whatever it produces. The trouble for Thaler here is that neither these nor any of his other comparisons (see p. 13 of his complaint onward) have anything to do with copyright investment and ownership.

In the alternative (meaning if the court were to properly reject the above argument) Thaler argues that he holds copyright in the Creativity Machine-generated piece under the work for hire doctrine. This is a well-established doctrine that grants copyright ownership to employers who hire artists and writers to create works.

Thaler is using this doctrine in a novel way here, to put it politely. To put it impolitely, I think his interpretation of work for hire is a load of shit. But we’ll see what the court thinks assuming his suit gets that far.

And just one more dig at Dr. Thaler and/or his attorney because I can’t help it: read paragraph 31 of the complaint. This argument of “well we could have tricked the Copyright Office by not telling them it was AI-created” is so beside the point it’s almost hilarious. But it’s also a scary point to make considering that the best AI-generated visual “art” does closely resemble human-created work.

4 I believe this point highlights one of the contradictions from the pro-AI copyright camp in this argument: on one hand, some defending Jason Allen and “his” piece claim a program or system like Midjourney or Stable Diffusion is merely a tool like a paintbrush or Photoshop, but then Dr. Thaler in his complaint claims that these systems possess creativity (see the “functionally creative” comment from his lawyer above.)

I see these as more tools than not, though they’re clearly not simply like a paintbrush or a program manipulated directly by a human artist, as some are claiming (maybe even disingenuously). But I think the “AI generated content is art” crowd will have to pick an argument and stick with it. You can even see this self-contradiction in Thaler’s complaint: his first argument implicitly takes this “my system is just a tool” approach.

Deep reads #6: Artificial life in a natural world

It’s been a while since the last one of these, hasn’t it? It takes a long time to put these deep read posts together, but I always feel good by the end. This time, I dive into artificial intelligence, a field I have a lot of interest in but absolutely no technical knowledge about beyond the most basic level. For that reason, I’ve tried to avoid getting into those technical areas I don’t understand well, sticking to the more philosophical aspects that I can actually sort of write about. If you know more about the subject and can bring your own perspective to the comments section, I’d welcome that.

Also, some story spoilers for Time of Eve, and very very general ending spoilers for the film Ex Machina just in case you plan to watch these and want to go in blind, which is always best in my opinion. Just being safe as usual. And now on to the business.

Sometime in the future, society has started to integrate realistic human-looking androids into everyday life. Rikuo, a high school student, relies on his family’s household android to make his coffee and breakfast in place of his seemingly always absent parents.

One day, Rikuo checks on the movements of this android and discovers that she’s been visiting a mysterious location on a regular basis, a place that he never told her to visit. After letting his friend and classmate Masaki know about it, he decides to investigate by going there himself. And so he finds Time of Eve, a café with a special rule: no discrimination between humans and androids allowed.

Time of Eve is a six-episode original anime series aired online in 2008, sometimes listed under its Japanese name Eve no Jikan. It was on my list to watch for a long time until I finally got to it last year. And while I enjoyed it, the series also raised some questions, or maybe reminded me of questions I’d already been asking myself — questions way too big for my own puny mind about the future of humanity.

Most of the action in Time of Eve takes place in the café it’s named after. Rikuo and Masaki don’t fit in very well at first, though. The lone proprietor Nagi is welcoming and friendly, but she also demands that they stick to the house rule: no discrimination between human and android patrons. This even includes asking whether a patron is human or not, leading Rikuo and Masaki to look around and speculate about all the café’s customers.

But why would this even be an issue? As Masaki explains to Rikuo, Time of Eve operates within a gray area of the law. In response to the creation of humanoid robots so realistic that they’re passing the Turing test left and right, legislators have passed laws requiring them to display holographic halo-like rings that differentiate them from humans. At the café, nobody has a ring, but Rikuo knows his family’s household android has been here, and considering the house rule, it’s safe to assume that at least some of the patrons are androids with their rings turned off in violation of this law.

Further complicating matters is the fact that all the café’s customers seem human enough from the way they act. When Rikuo and Masaki meet Akiko, a chatty, excitable girl, they assume she’s a human like them.

The next day, while Masaki is teasing Rikuo about his wanting to see her at the café again, Akiko shows up at their school — not as a student, but as an android to deliver something to her owner there, now with the holographic ring over her head. The pair are shocked, and the effect is made all the stranger when she doesn’t acknowledge them there but is just as friendly as before when they return to the café later.

This strangeness ends up hitting Rikuo at home when he realizes that his family’s android — after a couple of episodes finally referred to by a name, Sammy — went to Time of Eve because she wanted to make a better cup of coffee for him and his family. Rikuo first loses it, demanding to know why she was taking her own initiative without any orders. Soon enough, however, Rikuo starts to accept the situation, and we can see him thinking of Sammy as more human-like. This invites mockery from both his older sister and his friend Masaki, who say he’s starting to sound like an “android-holic”, or someone who relies too much on androids in place of fellow humans.

This fear of androids isn’t totally unjustified. Time of Eve presents a world in which these humanoid beings, far more skilled than humans in technical ability, are taking jobs, not just as household servants and couriers but also as teachers and musicians. Rikuo has already been feeling the effects of this change — it’s revealed that he gave up playing the piano because android players were starting to overtake human ones. This is a change that hits Rikuo all the harder because being a pianist was a dream of his before, one that he clearly felt was taken away from him.

Another social change, one potentially disastrous for birth rates, is the new phenomenon of human-android sexual relations. Android-holics are even referred to as living with androids in romantic relationships. These people are somewhat ostracized, and other characters are heard criticizing and mocking them. However, it’s still enough of a problem that an “Ethics Committee” headed by Masaki’s father works to keep human-android relationships in line, even running ads discouraging people from seeking out partnerships with androids, as human as they might seem on the surface.

All this boils down to a question that works in the sci-fi genre have been asking for a long time now: if an android is created that acts like a human and seems to have thoughts and feelings like we do, is it any different from a human in a meaningful way? Every year, with the development of more advanced robotics, augmented and virtual reality, and AI technologies, this question comes closer to leaving the realm of fiction and entering that of reality. How will people and their governments around the world react if or when AI starts to be integrated into society itself, even into the roles traditionally played by one’s relatives and partners?

I already wrote a bit about this theme in my extended look at Planetarian, a visual novel that’s largely about the relationship between a human and an android. But that story took place in the post-apocalypse. There’s no real concern about society in that world, where civilization has already been destroyed. Looking back, the contrast with Ex Machina might have been slightly off for that reason, though I still basically stand by everything I wrote then. However, I do think Time of Eve makes for a more effective contrast because it deals with some of the same questions Ex Machina did about the social implications of the human-android relationship, but again in a very different way.

I already wrote about all the faults I found with the treatment of this relationship in Ex Machina; you can find all that in the link above. But to put it briefly, director/writer Alex Garland seems to have assumed that humans and androids can never understand or empathize with each other. At least that’s the idea I felt Garland was communicating through the ending of Ex Machina.

Time of Eve, like Planetarian, doesn’t make that assumption. In fact, I’d say the central relationship between Rikuo and Sammy changes throughout the series because Rikuo realizes from his time at the café that they can understand and empathize with each other. The fact that Sammy is an android doesn’t seem to matter by the end; Rikuo accepts that she, Akiko, and the other androids around them may as well basically be treated as fellow humans instead of mere pieces of machinery.

These deeper issues surrounding human-AI relations are still some years off, since we’re still not close to creating a convincingly human android or AI for that matter — certainly not if Sophia is the best we can do at the moment. For that reason, Time of Eve still comes off very much as science fiction to me. Unless some of the wilder conspiracy theories I’ve heard are true, we don’t have realistic-looking human-styled androids walking among us.

However, the AI musician aspect of Time of Eve isn’t quite as far-fetched now as it might have seemed 13 years ago when it was aired, because AI has actually begun moving into — or intruding upon, depending on your perspective — artistic areas that were previously thought to be purely “natural”, purely human. In the last few years, AI tools to generate images, text, and sound files have become available to the general public. I am absolutely not an expert when it comes to the technology behind these tools, but my understanding is that consumer-level AI tools can roughly imitate human-created media by using pattern recognition.

Some of these tools are pretty damn impressive. Some time ago I came across a site featuring AI-generated paintings for sale, each piece created through a process described here. Again, I don’t quite understand the specifics behind how this works, but it seems like these pieces are generated when the AI analyzes human-created art and produces something original based on a particular style.

The AI comes up with some interesting-looking stuff as well. Here’s one example I like. Quite an abstract piece as you might expect, but the AI can also produce human figures and other subjects in more classical or traditional styles.

Visual art isn’t the only area AI has dabbled in, either. AI-produced music has made impressive strides, putting together songs that could pass for the work of a human composer if you didn’t know the difference. The above piece is a pretty basic sort of instrumental rock song, something you might expect out of a studio that produces background or soundtrack music, but the AI does follow that formula well enough to create something coherent.

The same is even true of writing. This obviously hits home closest for me, since I’m a writer. An amateurish writer to be sure, but I still take pride in my thoroughly unprofessional work full of f-words and mediocre grammar. However, I can’t ignore the fact that AI is edging in on my territory. Predictive writing AI programs like AI Dungeon and NovelAI1 are designed to build stories based on the user’s prompts. Older programs produced pretty obvious nonsense, sometimes ending with an entertainingly bad result — see the AI-written Harry Potter and the Portrait of What Looked Like a Large Pile of Ash for an example of such material. But the newest technology is again pretty impressive, producing text that’s at least coherent most of the time.
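Since I keep leaning on the phrase “pattern recognition” without being able to back it up technically, here’s a deliberately dumb toy illustration: a little word-chain generator that continues a prompt by imitating which word tends to follow which in a sample text. Real predictive tools like NovelAI run on neural networks incomparably more sophisticated than this, so treat it as a loose sketch of the general idea, not a description of how those systems actually work.

```python
import random
from collections import defaultdict

def train(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    table = defaultdict(list)
    for current, following in zip(words, words[1:]):
        table[current].append(following)
    return table

def generate(table, prompt, length=10, seed=0):
    """Continue `prompt` by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(length):
        choices = table.get(out[-1])
        if not choices:  # no known continuation; stop early
            break
        out.append(rng.choice(choices))
    return " ".join(out)

corpus = "the cafe was quiet and the coffee was warm and the night was long"
table = train(corpus)
print(generate(table, "the", length=6))
```

Run this on any text and the output is gibberish that still locally resembles its source, which is roughly the charge I’m leveling at these tools: fluent continuation without any intent behind it.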

The use of emerging technology for the purposes of art and entertainment is nothing new. You could argue that this process extends back thousands of years, through the creation of new musical instruments and drawing/painting tools. In that sense, even a modern innovation like Vocaloid is just one part of that long trend. For all the concern over synthesized singers replacing human ones, Hatsune Miku and her friends are essentially just new types of instruments, only with avatars and some fan-created backstory and personality attached. The songs are still composed by humans; they’re only artificial in the sense that they use synthetic as opposed to acoustic instruments.

Miku is basically a cute anime girl vocal synthesizer you can dress up. The best musical instrument since the piano, and maybe even better, because you damn well can’t put a piano in a cheerleader outfit or a swimsuit.

In the same sense, the trend towards VTubers in place of “real-life” streamers shouldn’t be a concern for people worried about the replacement of humans with AI. Funny enough, the original VTuber Kizuna Ai played on this theme, her character being an advanced AI learning about the human world. However, the only difference between a “real” human streamer and a VTuber is the use of an avatar. The fascination with VTubers might be more a part of an escapist trend, adding an element of fantasy to streaming with its cute angels, demons, and fox/dog/shark girls.2

Even so, between the increased use of synthetic instruments and tools and emerging AI art generation technologies, it’s not hard to imagine a future in which AI can put out work that resembles human-created art closely enough that it turns from a novelty to a viable, cost-effective alternative. This may be especially true of formulaic art created for mass consumption, the sort you hear and see and don’t think too much about. And I’d say it’s already somewhat true of the more abstract-looking pieces you can find on various AI-generated illustration sites — the sort that I could imagine hanging in an office hallway or hotel lobby somewhere, a piece that might just be vaguely noticed and passed by.

There’s an obvious objection to all this: that the works generated by AI lack meaning. There’s no intent behind them. It’s true that the general form of an AI-generated work might be determined by humans, who set the parameters for the program: what style to follow, what colors or tones to use, and essentially what sorts of human art it should imitate when generating something original. But the end result is something that can’t connect with an audience on an emotional level, or at least not intentionally. We humans are great at finding patterns when we want to find them, seeing shapes in clouds, hearing hidden messages in music played backwards. On that level, it might be possible to read some kind of meaning into a piece of AI-generated art, but that reading says nothing about the art itself and everything about its audience.

Shocking news: people who think rock is inspired by Satan hear Satanist messages in rock albums played backwards! I don’t need more proof than that.

To me, this lack of intent behind these artificial pieces of art makes them feel empty. Not that I hate or even dislike them — I find some of them really interesting, but only on a technical level. And some of that interest comes from seeing how these AI-generated works differ from human ones.

I think the lack of human-like thinking and intent is most obvious when an AI tries its hand at realistic-looking human figures; the ones I’ve seen have come out close but somewhat off and wrong, especially in their faces. Not in the way a human unskilled at drawing would mess them up, either — there’s a kind of technical “skill” in the AI work if you want to call it that, but details in the figure make it clear that the AI isn’t “thinking” about what it’s drawing in the same way a human would. See Edmond de Belamy,3 an AI-generated portrait of a fictional French nobleman, and how the face is smudged. Similar paintings that try for more detail seem to do a little worse, misplacing eyes and noses in curious ways and, for me, planting themselves firmly in that infamous Uncanny Valley.

Of course, there’s a lot of argument to be had over how much the intent of the artist should be taken into account when examining art. I take what I feel to be a pretty balanced view: that both how an artistic work is meant to be perceived and how it’s actually perceived are important to understanding it. When art is put out to public view, the public takes their own kind of ownership of it in the sense that they get to interpret it for themselves. But the artist’s intent still matters. Some people may feel differently, but if there is no intent behind the art, I can’t connect with it in the same way I could with a human-created piece.

But what if the art in question is so convincing and feels so meaningful that you can’t tell the difference? At that point, does the divide between the artificial and the organic even matter? This comes back to one of the central questions asked in Time of Eve. By the end of the series, Rikuo answers this question for himself by returning to the piano and playing for the café’s audience. By returning to the music he’d previously rejected because he felt it had been invaded by androids, he accepts them.

It’s clear enough that the androids in Time of Eve are essentially human in this sense. They act completely differently when we see them in the outside world — Sammy and Akiko both act in a sort of robotic “just carrying out commands” way while in sight of humans, as if they’d get in trouble if they acted otherwise. When they’re in the café, by contrast, they act much more naturally, as if they’re letting out their breath after holding it in for a long time. It seems that all they want is to be spoken to as equals, as though they’re humans as well; the fact that they’re synthetic and we’re organic doesn’t make a difference.4

That’s the key to that central question in Time of Eve. Its androids are self-aware and have that intent and even emotion behind their actions. I think if a real-world AI could express that intent through the creation of original art, not just work based on analyzing scraps of existing human-created pieces, that would be a sign of an AI so self-aware that it might essentially be considered human in the same way.5

Of course, as far as we know, we’re nowhere near that point yet. Any AI out there that the general public knows about (leaving a gap there for any possible ultra-secret experiments in progress) still thinks like AI. When I’m out driving and I have Google Maps guiding me, it still tells me to take a left turn by swinging through five lanes of busy traffic over a few hundred feet. That direction might make sense to an AI, but any human who’s ever been in a car will understand why it’s actually a terrible direction to give.

Maybe that’s the real test: when the AI understands what I’m going through when I’m driving my car in rush hour traffic and empathizes with my experience. At least enough to not suggest such a suicidal route.

Hey Google, I get that this is technically the fastest path to my destination by one and a half minutes but maybe consider my fucking blood pressure too. (Source: B137 – Own work, CC BY-SA 4.0.)

As for Time of Eve, there is one criticism I can make: it might be a little too optimistic, mainly because it doesn’t really address the whole “humans losing their jobs to more skilled androids” problem beyond just acknowledging it. It is absolutely a problem — in some sense one we’ve been facing for centuries, from the automation of agricultural work up to the development of advanced AI today. It’s not a problem we can’t solve, but it is one that will probably cause a lot of social strife before we get to that point.

Then again, this series provides a nice counterpoint to all the overly pessimistic science fiction we have today, the sort that’s practically anti-scientific development. Again, I’m definitely biased on this subject, but the Luddite approach to this problem is absolutely the wrong one. We shouldn’t try to limit our development out of fear of what might happen as a result.

Time of Eve doesn’t imply that everything will be sunshine and rainbows in the future. But it does deliver a more hopeful message than we usually see out of Hollywood these days. As much of a pessimist as I am generally, I can really appreciate that, and I’d say it’s absolutely worth watching even if you end up coming to a different conclusion. I, for one, welcome our new android friends, and I sincerely hope they don’t become our android overlords instead. 𒀭

 

1 AI Dungeon and the AI writing programs that gained popularity afterward make for another potential deep read rabbit hole subject. AI Dungeon was previously the premier AI story creation tool, but developer Latitude placed sexual and other mature content filters on the program, leading to suspensions, bans, and an exodus of users to alternative services.

I’ve messed around with both AI Dungeon and the much newer NovelAI, and while they’re interesting (well, AI Dungeon was interesting before it was utterly fucked by its own developer — the filter was supposedly meant to prevent certain types of extreme/gray-area material from being written, but it didn’t work properly and was extremely overbroad), the few times I tried writing a story with them, I ended up taking the prompt away from the AI and continuing it on my own. And now I have the rough rough draft of a very short fantasy action-adventure-romance novel that will never be published. Not unless there’s a market for shitty novellas that indulge in escapist fantasies somewhat different from the Fabio-on-the-cover supermarket romance trash variety.

Not that my story isn’t also trash, because it is, but I still like it. Maybe I should rework it into a visual novel script?

2 The parasocial relationship aspect of VTubing is still another deep dive that I’m sure a few people have taken already. I don’t know if I’m qualified to address it myself, but it is an interesting subject. Maybe it’s one I should address — not like I’m qualified at all to be writing about AI, yet here I am completely bullshitting about it.

Actually, I do know more about this other subject, since I’ve spent enough time in VTuber chats on YouTube to know that at least a few people are quite serious when they send love confessions and marriage proposals to their beloveds. Then again, that’s always been a thing idols have had to deal with, so maybe nothing’s really changed.

3 You can say this image lacks intent and meaning, but it sure as hell doesn’t lack value: it sold for almost half a million dollars when it was put up for auction a few years ago, probably for its novelty value since it was touted as the first piece of AI-generated art to ever come to auction. I wouldn’t buy it for more than $20 myself, but since I’m not a member of the idle rich set, my opinion doesn’t matter when it comes to these big-ticket auctions.

4 Of course, there’s also a religious aspect to this question, since many people believe that a God-given or otherwise divinely created soul is the most essential part of what makes us human. That’s a debate I don’t feel qualified to get into — I leave it to the scientists, theologians, and philosophers to argue over all that.

5 To complicate matters further: you could argue that this is exactly what we humans do when we create art, since everything we make takes at least some inspiration from past works of art. But there’s usually more to the creation of art than just copying our influences — we filter those older works through our personal experiences and feelings and create something that’s our own, even if it’s somewhat derivative. The same can’t be said for these AI artists, at least not yet.