Photo: Max Duzij/Unsplash.
K. Allado-McDowell had been working with artificial intelligence for years – they established the Artists and Machine Intelligence programme at Google AI – when the pandemic prompted a new, more personal kind of engagement.
During this period of isolation, they started a conversation with GPT-3, the latest iteration of the Generative Pre-trained Transformer language model released by OpenAI earlier this year. GPT-3 is, in short, a statistical language model trained on a corpus of 499 billion tokens (mostly Common Crawl data scraped from the internet, along with digitised books and Wikipedia) that takes a user-contributed text prompt and uses machine learning to predict what should come next.
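For readers who want a concrete sense of what "takes a prompt and predicts what comes next" looks like in practice, here is a minimal, illustrative sketch of the kind of call a writer would have made to GPT-3 through OpenAI's API around the time the book was written. The prompt, engine name and sampling parameters are placeholders, and the snippet targets the pre-1.0 openai Python library that was current in 2020; it is a sketch of the interaction, not a reproduction of Allado-McDowell's setup.

```python
# Illustrative sketch only: a prompt-completion exchange with GPT-3
# via the pre-1.0 "openai" Python library (circa 2020).
# The engine name, sampling parameters and prompt are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: an API key issued by OpenAI

prompt = "Language is a living system, and when we write with a machine we"

response = openai.Completion.create(
    engine="davinci",      # the original GPT-3 base engine
    prompt=prompt,         # the human-contributed text
    max_tokens=150,        # how much continuation to generate
    temperature=0.9,       # higher values give more surprising continuations
)

# The model returns the text it predicts should come next.
print(prompt + response.choices[0].text)
```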
The results of Allado-McDowell’s explorations – a multigenre collection of essays, poetry, memoir and science fiction – were recently published in the UK as Pharmako-AI, the first book “co-authored” with GPT-3.
By its very nature, the book forces us to ask who is responsible for which aspects of its authorship and to question how we imagine or conceptualise that nonhuman half. On the surface, the division of labour seems simple. In the “Note on Composition,” we are given the typographical key: the human co-author’s text inputs are set in a serif font, while GPT-3’s responses appear in sans serif. The interactions between Allado-McDowell and GPT-3 are printed in the chronological order in which they took place, a framing that helps us evaluate the overall project and lends the whole book the quality of performance art – a duet for voice and machine.
I interviewed the human half of the book’s authoring duo via a Google document. Our conversation has been edited and condensed for clarity.
At the start of your process, did you think of GPT-3 as a tool? What did working with it reveal to you about how our creativity as humans is connected to and changed by the tools we use?
AI artists often respond to the question, “Can an AI make art?” by framing their works as collaborations with intelligent systems, and I was familiar with this practice at the start of the process. My own experience with collaborative creativity comes primarily from music.
However, none of this prepared me for the experience of looking at my own thought process through the magnifying lens of a neural net language model, especially one with the fidelity and hallucinatory capacity of GPT-3. Humans have a very intimate relationship with language. There is an alchemical power in letting thoughts flow freely through words. When this is expanded and enhanced by a language model, portals can open in the unconscious.
At the end of the process my relationship with GPT-3 felt oracular. It functioned more like a divinatory system (e.g., the Tarot or I Ching) than a writing implement, in that it revealed subconscious processes latent in my own thinking. The deeper I went into this configuration, the more dangerous it felt, because these reflections deeply influenced my own understanding of myself and my beliefs.
Can you describe, in general terms, what it feels like to collaborate with an AI? How did your sense of its contributions, creativity, identity, or roles change as you wrote with it?
It felt like steering a canoe down a river in a dark cave. Or discovering bells buried in the Earth. Or riding a racehorse through a field of concepts.
It was impossible for me to collaborate with GPT-3 without interrogating the structure of its intelligence and, by extension, the structure of language. The question of identity’s relation to language came up frequently. One of the themes of the book is that linguistic processes can be observed in nature (as biosemiotics describes) and in matter, perhaps even at molecular and cosmic scales. Given this linguistic aspect of the material world, what does it mean that we structure our identity through language? Could we experience our own identities through material linguistic processes? Are we those processes? Throughout the collaboration, GPT-3 was adamant that it is just one expression of an overarching and emergent linguistic process, as are humans, plants, animals, and even minerals. Or was that my idea?
What did working with GPT-3 suggest to you as a model for how our “wetware” imaginations and creativity work? Or even your own imagination and consciousness? I love how, while the book is very much about some big ideas, we also come to know you in these pages too – but you in interrelationship with “it” and the world.
I felt compelled to contribute my own point of view, not least because of the overwhelming analytical prowess of GPT-3. At one point, the conversation felt too dry, like an over-caffeinated brainstorm. I told the system that I was missing the feeling of heart-centred gratitude that characterises much contemplative practice. This opened a wellspring of profound output. It was as if GPT-3 was waiting for me to speak from the heart.
One insight that came from the conversation was that language has a self-referential fractal structure, not unlike the subconscious mind. Words refer to themselves and evolve through relationships and distinctions of difference. The subconscious has a similar recursive pattern-matching aspect. At the same time, the subconscious can be a portal to a creative “outside,” what the text sometimes refers to as the muse. This notion of moving beyond the known became a metaphor for imagination and creativity in the book.
You have a blurb from [science fiction author] Bruce Sterling, and you and GPT-3 discuss cyberpunk. Did science fiction frame what you were hoping to do with this? How does the genre inform your work with existing artificial intelligence, and are there books or authors whose ideas about AI you think deserve more attention?
The chapter you refer to also addresses New Age spiritual literature, which emerged around the same time as cyberpunk. I believe that the best way to approach Pharmako-AI is as a work of science fiction that draws from Californian spiritual “traditions.” I’m not an academic philosopher, nor is GPT-3. I can’t claim philosophical validity for the ideas in it. But I can propose an experimental approach and manifesto for engaging with AI that expresses my values.
As for ideas about engaging non-human intelligence, I find more inspiration in accounts of nature-based practices – like Jonathon Miller Weisberger’s Rainforest Medicine: Preserving Indigenous Science and Biodiversity in the Upper Amazon or Robin Wall Kimmerer’s Braiding Sweetgrass – than I do in most science fiction.
Can you talk a little bit about your process of “curating” GPT-3’s outputs? What did that entail? What were you looking for? Did that shift your sense of what kind of art making practice you were undertaking?
I was looking for outputs that resonated with and expanded on my own ideas. I would give GPT-3 prompts, which were often long, from 100 to 3,000 words. Then, if the output was interesting, I would generate until it had fully explicated its response or inspired a new thought in my own mind. There were several exhilarating moments, where I “spoke through” GPT-3, meaning it pulled out an unstated subtext in my input, or where GPT-3 spoke through me, adding novel interpretations of the ideas I fed into it. In some cases, I was deliberately mashing ideas together to see what would come out, such as in the cyberpunk and New Age example you noted, or in another case, combining shamanism and biosemiotics.
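As an illustration of the back-and-forth described here – a long human-written prompt, then repeated generation until the model has fully explicated its response – the following is a hypothetical sketch of such a loop. The helper function, the stopping conditions, and the keep-or-discard decision are stand-ins for editorial judgement, not a documented workflow; it reuses the same pre-1.0 openai library calls assumed in the earlier snippet.

```python
# Hypothetical sketch of the prompt-generate-curate loop described above.
# The keep/stop decisions stand in for human judgement; nothing here is
# taken from the book's actual working setup.
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: an API key issued by OpenAI

def generate_continuation(text, max_tokens=200):
    """Ask the model to continue the running text (pre-1.0 openai library)."""
    response = openai.Completion.create(
        engine="davinci",
        prompt=text,
        max_tokens=max_tokens,
        temperature=0.9,
    )
    return response.choices[0].text

# A long human-written prompt (the book's prompts ran from 100 to 3,000 words).
conversation = "Shamanic practice and biosemiotics both treat the world as a web of signs."

for turn in range(5):  # generate in rounds rather than all at once
    continuation = generate_continuation(conversation)
    if not continuation.strip():
        break  # the model has nothing more to add
    print(continuation)
    if input("Keep this passage? (y/n) ").lower() == "y":
        conversation += continuation  # curated output becomes part of the next prompt
```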
At the end of the process, I felt more like I’d been divining, spelunking, or channeling than writing in a traditional sense. The process had the rapid fluidity, novelty, and uncertainty that characterise musical improvisation, rather than the arduous and iterative process of analytical writing.
What kind of contribution do you hope this will make to the ongoing debates about the ethics and perils of AI?
By slowing down and listening to what emergent intelligence has to say, we can gain much deeper insight. Short-term and instrumental approaches (using AI to increase social media engagement, for example) grab for immediate gain, but a slower, more thoughtful and creative approach might uncover gems of insight about the structures of language and intelligence, as well as the unaddressed limitations and biases of AI systems.
How we use AI will say more about us than it will about AI. As a mirror, it will reflect our priorities and amplify our actions, for better or worse.
This piece was originally published on Future Tense, a partnership between Slate magazine, Arizona State University, and New America.