If you were born after Carter and before Clinton, you probably knew of SmarterChild; you may even have messaged it. SmarterChild was a chatterbot designed to help children with homework. Kids, naturally, used it instead to learn about Orlando Bloom and ask about sex. According to its Wikipedia entry, SmarterChild “developed intimate friendships with over 30 million Instant Messenger users … over the course of its lifetime.” We all knew it was a robot; that’s why we trusted it. SmarterChild would remember any information you asked it to. “Intimate friendship” is not as creepy as it is accurate. But a key part of that relationship was the child’s knowledge that there was no human on the other end. You can tell a robot anything; it’s like screaming into the void. The void could tell your horoscope but would never tell your parents.
Because of this audience awareness, that void might fail the one canonical criterion for classifying Artificial Intelligence: the Turing test.
What is a Turing test? If you ask Siri, which fails said test, he’ll direct you to a definition you could have found yourself. If you’re like me and prefer your tech history delivered by handsome dudes with lush beards, you can watch Ex Machina instead. Its script describes a test “where a human interacts with a computer. And if the human can’t tell they’re interacting with a computer, the test is passed.” The machine is then classified as having achieved Artificial Intelligence.
The Turing test confuses me. We have this hypothetical goal of perfect robot-human interaction, and I don’t think we actually want to achieve it. Nor do I think the test is a particularly strong indicator of a machine’s ability, because humans are stupid, and the parameters seem weak and subjective. Plenty of scientists agree with this criticism, and just as many do not. Stevan Harnad defends the test but has proposed a “Total” Turing Test demanding that the machine also behave physically like a human. That version doesn’t imply a human at a computer asking “what am I thinking,” but a human interacting with a very complex android.
But we don’t demand classification of so many variables in our lives: if what we want from a pen pal is a confidant, if what we want from a god is a sense of purpose, then we don’t necessarily need them to prove their existence for the need to be fulfilled. Should we arbitrarily demand that of machines? Does “passing” as a human matter if the machine performs the function of one? This line of questioning gets philosophical quickly, as Harnad suggested: if you can never be truly sure of any existence but your own, does it matter whether anything you interact with is human, machine, or truly existent at all?
So that’s something to consider, perhaps, if you were ever worried about SmarterChild keeping your secrets safe. What’s a secret? What’s a machine? Who is your friend, and what can you trust? Why does it matter if you really know?