How soon before we can deepfake the novel?
How a deepfake story of the moon landings got me thinking…
Last week I went to an exhibition at the WA Museum about the moon and space exploration. In one corner of the exhibit they had set up a room to look like a 1960s living room, and on the old-fashioned television Richard Nixon was giving a speech, mourning the lives of Neil Armstrong, Buzz Aldrin and Michael Collins, who had been tragically lost while trying to land on the surface of the moon. The video was so believable that at first I thought it had been recorded back in the 60s, just in case this was the eventual outcome. But what I was actually watching was a deepfake, created to show us how believable deepfakes can be. It was utterly convincing, even once you knew what you were watching. And on the side table beside the saggy-cushioned sofa, a series of old-fashioned newspapers were also much more than they first appeared: each contained cutting-edge articles about how deepfakes work, and why we all need to be aware of them.
It was a very clever way of getting me to engage with this topic on a visceral level, and as a result I haven’t stopped thinking about it. Deepfakes are one example of how AI and technology might be about to sabotage our lives, our security, our beliefs, and undermine our confidence in every piece of news and information we come across. If we thought we were overwhelmed as a society right now, how are we going to cope if things like this become more of a feature in our lives? If bad actors take control of this kind of tech, could we be collectively fooled into violence and war because of our reactions to fake events? How much of it is out there already? When does paranoia take over from justified concern?
How soon will people be able to deepfake a book?
It seems ludicrous to think that AI could ever recreate the unique genius and imaginations of our much-loved writers, but is it a human conceit to imagine that our thinking is so original that no advanced tech could ever imitate us? And could new writers make names for themselves by using AI to reinterpret the themes, ideas and language of famous works, re-framing them as their own? How could we ever police this? And what of those bold enough to use existing authors’ names on their own work? I read recently that this has already happened to Jane Friedman, and she only found out when readers began notifying her (telling her that the quality of her work had gone downhill!). We will only have a hope of stopping this with appropriate and firm governmental regulation, yet the genie seems to be out of the bottle already. So do we need our own verified websites to give customers confidence that they are shopping for books written by us and not an imitator? Would such verification even be effective, when a savvy imitator could set up a fake website and fake social media accounts too? Or is this just another dystopian rabbit hole, and there’s really no need to panic? So many questions! The problem is, because it’s all so new, no one really knows where these technologies will take us - only that change is coming, and we need to be prepared for it. But it’s almost impossible to prepare when there are so many questions and so much uncertainty. Argh!
It’s not all doom and gloom
On the other hand, I’m seeing some more positive articles about how we might look to incorporate AI into our books and research. Plenty of people aren’t fazed, insisting AI will become a superb editorial tool, and that once we get used to it we’ll appreciate all the help it offers. So what’s the problem? Maybe it’s me! I don’t know about you, but I feel disloyal to my writing kin if I even attempt to use AI right now, and yet I’ve seen other authors happily discussing it as a research and promotional tool. So am I just shooting myself in the foot by listening too hard to the doomsayers? Am I backing myself into a corner through fear and confused ethics, making life harder for myself while others adapt to the inevitable future? I’m beginning to wonder if this might be the case.
Staying curious and concerned while keeping a balanced perspective
The biggest problem seems to be that AI has instantly thrown up some critical questions around moral use and copyright - it has been trained on millions of books used without permission, including mine - making many of us feel that AI itself is the enemy. However, the tech itself isn’t morally culpable (at least not until sentience is achieved!): rather, the outcomes of any AI developments depend on the ethics of the AI experts and those in power. Which means it could bring both triumph and disaster, and we will all have to wait and see.
However, just because we can create something using artificial intelligence, will it mean that we’ll end up valuing these things in the same way as the stories and art we create for ourselves? Or will it become even more compelling to head out to bricks-and-mortar bookstores, to meet real authors, and support those who have made the effort to craft their own stories, just as we still go to artisan markets as well as Kmart? Will the tech that threatens to disconnect and overtake us actually end up reminding us of all the vital reasons we have to stay connected to one another? Right now, we all need to engage with these developments, to decide where our new lines will be drawn, and when the right moment will be to explore the possibilities of AI for ourselves.
What do you think? Have you taken the plunge and tried out AI, or are you holding back? Let me know in the comments! I’d love to hear your views.