in conversation with Emmanuel Olunkwa
“If I ever reached a milestone I would run away in horror.”
So, you have a new book out, Amor Cringe. You also used predictive text—an AI program—to write some of it. But before we get to that, can you tell us how Amor Cringe came about?
Emily Segal from Deluge Books and I had been talking about what cringe is, from the perspective of empathy and seeing oneself through the eyes of another. When you push some of these ideas of cringe—what cringe is based on, which is judgement, relationality, intersubjectivity—you can get into an almost Dharmic space of questioning the subject entirely. Emily said, “You should write about cringe.” That was summer 2021. So, I just started writing about cringe or through cringe. And it became this fictional piece. When I say writing, I mean generating text and writing in a collaborative process with AI.
You’ve used this process before, when you were writing Pharmako-AI. You write a little, then you feed it into GPT-3, a predictive-text AI that guesses at what’s likely to come next. So, you and GPT-3 effectively take turns writing, in a sense, right?
Yes, you have to start with a prompt, then the model generates text, and what you do with that is up to you. In the case of Amor Cringe, I was very freeform with how I used the generated text. There were no rules. I started writing about cringe in a non-fictional mode, then the writing and generation organically turned toward fiction. So, I followed it in that direction and started writing a novella. My rubric was whenever I had to make a decision, I would lean into the cringe option. There were themes and ideas that came to me, like having the protagonist live in the basement of a TikTok house—ideas inspired by the places where cringe originates as an articulated phenomenon. I explored in this way for about a month, then put it aside after writing a few pages. Months later, I came back from a trip and spent a few days working only on the book. It came out very quickly. I gave it to Deluge, and here we are.
It sounds like it came together even faster than Pharmako-AI did.
In a way. Yeah.
The writing process, at least—
Yeah, that's true. I think one thing that's evolved or emerged for me since writing Pharmako-AI is that there are all these other ways to write with AI.
You mean as opposed to Pharmako-AI, where you had clearly delineated sections: text set in bold was written by you, while roman text was written by GPT-3. Which was part of the “shock value” of Pharmako-AI, in a sense: we could see the exact sections written by artificial intelligence, and often they were eerily coherent. But here, what was your process? How was it different?
Rather than being really specific and clear about whose voice is whose, through typography, this involved a process of mixing and blending voices: forgetting where things originated. That’s what I was experimenting with: immersing myself in that way and not worrying about attribution. With fiction, it can be harder to create narrative energy purely through generative methods, so this was also necessary to make a coherent narrative.
Generative methods as in artificial intelligence. AI doesn’t write an engaging story so well yet.
AI-generated text can sometimes read like a drone, or like a tapestry with a similar character throughout. So, you know, interjecting narrative elements, change, conflict, things like that—these have to be done more manually. But I was trying to push myself to write something that felt like fiction, and moved like fiction, using tools that required a certain degree of handholding.
You could have just as well written this without any generative component. It could’ve just come straight out of your mind. Does this mean, in general, you think of yourself as a writer who only writes in collaboration with AI? Or what made you decide to turn to AI as a collaborator again?
There are a couple elements of that. One is at a certain point, maybe it was after Pharmako-AI, I’d written a few pieces that didn't have any AI at all. And I'd written some pieces that did. It's a handy tool, and I started using it the way I would a word processor. So, I stopped thinking of it as something that needed to be seen as outside of a normal writing process.
With Pharmako-AI there's a certain generosity that a reader will bring to it because of its novelty. In becoming more familiar with this process, I started to become even more critical of what came out of the model; I wanted to produce things that read at least as well as fully human written prose. Fiction seemed like a good place to test that. And writing fiction with AI requires different techniques and ways of working with the model. I've started to model AI-writing internally in my process, too—I can imagine a structure that would lead to good generative-text sections—I'll leave gaps. For example, with a new book I'm writing, I'll write an outline for a chapter and then begin to write into it and leave spaces open to fill in, kind of the way you would for a freestyle rap verse or something. Or I’ll take chunks out of it and generate text around those, then fold it back in and start to collage.
Since you’re not using typography to mark what comes out of generative text, do you expect people to gather clues here and there to deduce where the AI is coming in? Or did you try to erase that and make it as seamless as possible?
I intentionally lost track of that myself. It's not stated in the introduction, and it's only in the acknowledgements at the end of the book that the process is mentioned. I think if you're reading for that, it can be frustrating or opaque and it adds a layer of mystery or just changes the dynamics of the reading experience. If you’re unable to locate markers of the generative process, it frustrates the attempt to interpret, but also adds another layer—that you can either enjoy or not.
Right. So, the subjects you've chosen. I want to ask about that. What you’re writing about in Amor Cringe is so much more keyed into youth culture and contemporary media culture, than I’d imagine was part of anything in the corpora that AI was trained on. I’d have to look back to double check, but it seemed like the texts they were scraping into this giant data set, for training, came from more published work. Work that probably had an editor, work written by people with a certain number of degrees. So, I’m wondering, in this new project, if you were kind of trying to use a different set of vocabularies, did that work at cross purposes with the AI? Or did it pick up on more colloquial spoken texts, or you know, what you'd see in like TikTok comments as opposed to like Google Books?
A lot of GPT-3’s training data comes from the Common Crawl text corpus, which is scraped from the web, so there would definitely be social media comments, blogs, etc. in there.
I've often seen text come out of GPT-3 that feels like a blog post, and it'll even tag in things you’d see at the end of a blog post: “Like and Subscribe” and those sort of social-media phrases. So that stuff is in there somewhere. I think, as my first attempt at producing coherent fiction, the book does land on a kind of pulpy tone. Declarative sentence structures, windowpane prose. It's not trying to be high literature, and symbolically there's not a lot of deep structure. Although there are themes moving through the piece: themes of religion and mental illness, narcissism. I was aiming at a younger set of themes: like the way that the protagonist treats their romantic partners in a really distanced, almost mocking way through the use of media. In one scene they describe using their phone to film the person they're dating and then watching videos of them and mocking them with their roommates—there’s this very brutal social mechanic applied to an intimate relationship which feels like something that wasn't really possible to imagine until recently, because of social media.
And then the search for spiritual and religious meaning. I feel like this is something that wasn't maybe foreseen by people that were building social media, but is part of a trend, the return to organized religion as something people can perform as a part of their personal identity or brand online. Early on, the GPT-3 model said something about the narrator going to church, and the narrator says “I was completely fooled by all of it” in a very self-aware way. Which feels to me like part of the social media experience: to be hyper self-aware and referential, but also lacking in the real-world experience that grounds the understanding of references and self-awareness.
There's a fluency with memetics and cultural iconography that isn't grounded in lived experience. There’s one scene where the narrator is journaling in the kitchen of the TikTok house and they write these nice things about themselves—they're fun on the dance floor, they like people— and then their roommate comes in and says, “Oh, let me see that.” And they flip it over and write this excoriating critique of the person: how they're manipulative and how they'll do anything to get free drugs. So a lot of it is about this inherent mechanic of cringe, which is being self-aware, but there being a limit to your self-awareness and then becoming aware of that limit and then cringing at yourself for failing to understand yourself.
It invites the observation that this dual existence, this seeming self-awareness combined with a lack of lived experience, describes exactly AI right now.
That's a really good observation. It's a very disembodied but seemingly complete sense of reality. Like it knows every reference, but it has no actual understanding of what anything means or feels like, other than through associated written descriptions.
So human writers struggle with writer's block constantly. Whatever it is, there’s something that’s preventing them from getting the text out. Does that experience happen when you're collaboratively writing with AI? Have you experienced “writer’s block” writing alone? And then does it go away if you’re co-writing with AI?
Well, as somebody who’s often at conferences and speaking almost constantly, I know that just because words are coming out doesn't mean that there's necessarily anything meaningful being said. But I have definitely experienced writer's block. There is a kind of writer's block that can come with generativity too. If you don't have a deep structure for what you're trying to get out, or you don't have an image that you're driving toward, or you don't have something you're trying to understand, then the act of writing can just be automatic, like a bodily function. It’s the same with generativity. You can generate endless amounts of text that isn’t good or meaningful or useful to you in that moment.
One thing that is very freeing about generated text is that it doesn’t come from you. You’re free to treat it however you want. You don't necessarily have the experience of cringing at it the way you would at your own writing. Part of the initiation of a writer is to write something that you think is brilliant, and then in the harsh light of the morning, you revisit it and it makes you feel really bad about yourself, that you wrote it and thought it was good.
I’m very interested in what kinds of affects about writing or affects around writing might emerge when AI co-writing or AI writing processes are more common. I could see a more relaxed or sprezzatura affect around writing, rather than the kind of tortured 20th-century alcoholic womanizer trope, where it’s less about destroying your ego or judging yourself constantly, and more about what you can create without having to be the sole owner of all the words. There's something freeing about not owning every word.
It's like using found objects in a sculpture or something. When you're using generated text, you have to ask yourself: Is this helping my thought process? You’re thinking more, is this something that people would want to read? and less, What does this say about me? So, there's potentially less cringe involved for the writer, which could produce a less neurotic experience.
It's funny you were speaking generally when you said there's potentially less cringe involved. Can I ask about your own personal experience? Was this process less cringe for you, personally?
Well, there's a farcical or parodic aspect to it. I'm very curious to know if that was how you experienced it. Some people might experience it as very sincere, but my rubric was to try to be as cringe as possible. So, the character has this big DJ gig, and then they get so fucked up on drugs right before their set that they can't even beat match out of the first track. To me this is an insanely cringe experience. But you know, other people might not read it that way. Or even some of the trope-iness of the narrative: do we need a narrative about a young hipster who does too many drugs and goes to rehab and finds a watered-down form of spirituality that doesn't quite change their life in a material way? Do we need that? I don't know. Maybe I’m revealing too much right now, but to me that was very cringe.
Although, you know, about two thirds of the way through, I started feeling there needed to be redemption. There needed to be some kind of hope for this character. And they find it, but only for a while, and it’s unclear if their life truly changes. I’ve been told that the narrator’s attitude of self-objectification and self-distancing makes them enjoyable to be around. I’ve also been told that some of it “hits a little close to home”. So it seems that the cringe element is really subjective.
The language of it is super fascinating. The subject obviously is cringe culture, youth culture, social media, TikTok, and so on. But I was getting a very cyberpunk vibe from some of the language. I don't know if this is coming from you or the AI, but I’m thinking of how there’s this clipped dialogue alongside really vivid poetry—even that second line, their shoulders twisting in time with the saccharine chords of the speaker—there's something about it that reminds me of William Gibson. A sort of fine-tuned poetry set against fast-paced, almost brutal prose. I'm not describing it very well.
That's interesting. There is a pulpy tone to it that cyberpunk borrows from. I love Raymond Chandler.
I mean, this, maybe this isn't the best thing to pull out, but one remarkable line that came from GPT-3 describes a character having “the second pair of fake tits that were ever made,” which felt very Chandler to me.
Absolutely. So, you were working on this while you were teaching the class about generative co-writing? What was that class titled?
I taught a class on Pharmako-AI at SCI-Arc— oh, I'm getting my whole reality mixed up. COVID memory brain. But there was that, and then, the next class I taught was at the Institute for Advanced Architecture of Catalonia, and that was writing with AI.
Okay. And so, had you been working on Amor Cringe while you taught?
No. No, I hadn't. I started Amor Cringe the summer after teaching that class.
So now that you've done this latest project, do you think that would color how you’d teach “Writing With AI” if you were to teach it again?
That's an interesting question. I was teaching architecture students, so that conditioned somewhat the understanding of the technology and what the implications are for tech infrastructure. If I were to teach a writing class for creative writers—or people whose main practices are writing—I think it would be a very different type of class. I don't have a degree in literature and in a way my entry point for writing has been through theory and ideas. So, pushing into literature has been a new challenge for me. My editors have been really helpful in terms of showing me that this is also literature rather than, you know, engineering or infrastructure.
But what does it even mean, to teach how to write with AI? Is that a thing that can be taught?
Well, it does help to understand how these predictive-text models work and what's going on. Just like how if you were teaching music, it might be helpful for students to understand the physics of vibrating strings as they learn how to put their fingers on them. I did give the students a background on how the models work: helping students understand that they're working with a statistical system. That there are elements of randomness. I tried to help them understand what some of the parameters mean, like temperature, or presence or frequency penalties.
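To make the presence and frequency penalties mentioned above a little more concrete, here is a toy Python sketch. It follows the general shape of the penalty formula in OpenAI's API documentation, but the token names and numbers are invented for illustration; it is not the model's actual implementation.

```python
from collections import Counter

def apply_penalties(logits, generated_tokens,
                    presence_penalty=0.0, frequency_penalty=0.0):
    """Lower the scores of tokens that already appeared in the output.

    presence_penalty is a flat cost for any token seen at least once;
    frequency_penalty scales with how many times the token was used.
    """
    counts = Counter(generated_tokens)
    return {
        token: score
               - presence_penalty * (1 if counts[token] > 0 else 0)
               - frequency_penalty * counts[token]
        for token, score in logits.items()
    }

# A word repeated twice ("cringe") gets pushed down; unused words are untouched.
logits = {"cringe": 3.0, "sincere": 2.0}
out = apply_penalties(logits, ["cringe", "cringe"],
                      presence_penalty=0.5, frequency_penalty=0.4)
# "cringe": 3.0 - 0.5 - (0.4 * 2) = 1.7; "sincere" stays at 2.0
```

The effect is to steer the model away from repeating itself, which is one reason generated text can otherwise read like the "drone" described earlier.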
And they were using GPT-3?
We were using an older model, GPT-2, which as of now is free for anyone to use. I walked them through the process I had developed: pruning outputs and trying to find specific statements I could follow through on, ones that might seem more generative than others.
Pruning outputs, as in having the model write out at length, then cutting back, then adding more of your own? And then what would the students write?
Yes, exactly. Like pruning a plant. I gave them assignments: Write a poem. Write about a dream. Write a manifesto, things like that, where they could start to feel the model pushing back on their ideas. Or extending their ideas. And there were differences working with GPT-2 and GPT-3, though they weren't major. One was just the amount of processing time required, because we accessed GPT-2 through a free web interface. So, it was a little slower for them, and maybe flowed less easily because of that.
How would you say their work evolved over time? As they go from neophyte to being a more learned co-writer? What does it even mean to become more adept at writing collaboratively with AI?
Well, there was definitely a discovery process in the beginning where they would write and be surprised at what came out and go through a lot of the common reactions. I think most people go through a period of being a little bit amazed as they see their ideas come back to them or experience their own ideas changing in the process of writing with a model.
When you talk about these parameters—temperature, and so forth—can you explain?
In a model like this, when you put in a prompt, it produces a list of possible responses. And then it ranks them by probability. Meanwhile, you’re allowed to tweak various parameters that affect how it chooses from that list of possible responses. You can increase the “temperature” for example, and when the temperature goes up, it will select from a larger range of probabilities. So that does introduce an element of greater randomness.
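The mechanics described here can be sketched in a few lines of Python. This is a toy illustration, not GPT-3's actual sampler: the `logits` dictionary stands in for the model's raw scores over a tiny invented vocabulary, and dividing by temperature before the softmax is the standard way that parameter widens or narrows the range of choices.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, seed=None):
    """Pick the next token from raw model scores.

    Dividing by temperature before the softmax flattens (T > 1)
    or sharpens (T < 1) the distribution; T = 0 means always
    taking the single most probable token.
    """
    tokens = list(logits)
    if temperature <= 0:
        # Temperature 0: deterministic, highest-scoring token wins.
        return max(tokens, key=lambda t: logits[t])
    scaled = [logits[t] / temperature for t in tokens]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return random.Random(seed).choices(tokens, weights=weights)[0]

# Hypothetical scores for a prompt like "The cathedral was ..."
logits = {"empty": 2.0, "silent": 1.5, "burning": 0.5, "purple": -1.0}

print(sample_next_token(logits, temperature=0))    # always "empty"
print(sample_next_token(logits, temperature=1.5))  # higher T: rarer words become likelier
```

At temperature 0 the output is fully determined by the scores; any temperature above 0 reintroduces the element of chance discussed next.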
I guess what I'm wondering is: if you had the exact same temperature settings and gave the text generator the exact same inputs, would the output always be the same, or?
No, not as long as there's any element of randomness—that is, if the temperature is anywhere other than 0. And this is where we get into questions about statistical systems and meaning and temporality. It’s like a tarot deck: you shuffle, you pick the card, you have a question in mind, the card gives you a symbol and then you interpret the answer through that symbol. And if you had shuffled differently, you would have drawn a different card. Reality is chaotic. The question becomes: What structures can you put in around indeterminate processes in order to produce more meaning from them? Ritual, I think, has a role to play. Oracular uses of AI, though it's not really even the AI that's serving an oracular function. It's the randomness function that’s oracular, just like, you know, using yarrow sticks or coins to throw the I-Ching. The idea of using an AI system in this way—it's not controversial from a mathematical point of view. It’s just another statistical system. It might be ontologically controversial, depending on your orientation.