Authorship Indeterminate
I’ve noticed a shift in the AI conversation lately. Folks seem to be converging on the idea that “AI” means systems that create artifacts (text, images, video, sound, code) so similar to those made by humans that it is hard – for the untrained eye at least – to tell the difference. The underlying technology and algorithms certainly overlap with machine learning, data science, and statistics, but it’s the “human-like” part that makes it AI these days.
It’s more “authorship indeterminate” than “artificially intelligent.”
It’s fun, but not terribly useful, to debate whether these systems might be or become self-aware in anything like the way that we believe ourselves to be. Humans map consciousness and intentionality onto all kinds of stuff. When my cat meows and pesters me, my instinct is to ask what she wants – as though she were a tiny cat-person. I don’t spend too much time worrying about whether there is really a tiny cat-person inside her fuzzy head.
For what it’s worth, I do totally believe that there is a tiny cat-person in there. I also believe that there is a full-size person in your head – for mostly the same reasons.
We’ve all heard the trope about how consciousness arose because our ancestors outcompeted our not-ancestors by being better at identifying predators in the shadows. Survival of the fittest and all that. I prefer the emerging story that the most successful communities among our ancestors were the ones who best supported each other, carried each other, and cut each other slack. As Dr. Lou Cozolino says, those who are nurtured best, survive best. Most people understand that it’s better, in a purely self-interested sense, to live in a community of mutual support, and that the way you get that sort of society is by supporting the people around you.
“Do unto others” rests on a belief in others. AI puts that assumption in question.
For me, the biggest risk of AI is that indeterminacy of authorship – the knowledge that there is no human behind many of the words, sounds, and images around us – will prove corrosive to society itself. We have all become jaded to the fact that the person on the other end of the unexpected phone call, letter, or email is not actually looking out for our best interests. Increasingly, we’re cynical that there was ever a person on the other end at all.
Did a bot write this? LOL!
For all that I’m hugely optimistic about the promise of technology to make life better, we always need to guard against the downsides. We are entering a period where generative AI systems will make it much more difficult to tell the difference between authentic communication and fully automated manipulation. I’m an optimist in the long run, but I think that this is going to get worse before it gets better.
It is critically important that we, collectively, figure out how to maintain our shared reality and mutual self-interest in the face of systems that give any reasonable person cause to doubt. Without undue hyperbole, I think that this could well be the most important work of our age.
I’m interested to hear your thoughts.