The Thinking Placebo
Much of the skepticism leveled at OpenAI's latest (as of writing) text generator centers on the notion that machines cannot think unless there is strong evidence of their producing genuinely new things: while a novel advance in AI, GPT-3 is essentially copy-paste (albeit very clever, context-sensitive copy-paste) and ultimately does little, if anything, to close the gap to genuine cognition.
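To make "clever copy-paste" concrete, here is a deliberately tiny Python sketch of statistical next-token prediction, the general family of techniques GPT-3 belongs to. GPT-3 itself is a massive transformer network, not a lookup table, so treat this only as a cartoon of the idea: the program composes "new" sentences entirely by recombining patterns observed in its input.

```python
import random
from collections import defaultdict

# A toy bigram language model. It "generates" text purely by
# recombining patterns observed in its training corpus; the basic
# loop (predict the next token from context, append, repeat) is
# the same shape as what far larger models do.

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which words have been observed to follow which.
successors = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Sample a continuation by repeatedly picking a word that was
    seen to follow the current one in the corpus."""
    words = [start]
    for _ in range(length):
        candidates = successors.get(words[-1])
        if not candidates:  # dead end: no observed successor
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))
# e.g. "the cat sat on the rug" -- nothing here was "thought",
# yet the output is locally coherent English.
```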
There are many things machines do today that, if they were done by human beings, we might be tempted to call thinking. Even something as fundamental as language acquisition can appear on the surface to be mere copying of basic structures and rules, e.g. young children tracing letters of the alphabet or repeating spoken words, and if we dig deeper we run into theories that conflict on exactly how and why it is so effective given the sheer combinatorial complexity of grammar. Even so, most people would comfortably conclude that children are still thinking throughout the process. And yet when we witness a computer program appear to produce the same outputs given the same inputs, lately with more success and greater nuance, we don't hesitate to discount it as mere pattern recognition, a mindless and rote execution of algorithms. Is a child acquiring language not performing a form of pattern recognition? Are we confident there is an essential difference between the two?
One interesting question isn't whether computer programs are actually capable of thinking, but whether what we consider to be thinking is more banal and substrate-independent than we assume. If we reduce our conceptual model of thinking to materialist hardware that processes inputs, forms patterns, and generates outputs, then the particular hardware should, at least in theory, become less and less interesting as the outputs become more and more robust. This is, after all, the main premise of the Turing Test: the differences between a computer program and a human thinker matter less than whether we can tell the two apart in the first place. In fact, there are already examples of GPT-3 being applied to such tests, and to even more creative tasks, with compelling results.
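To make that premise concrete, here is a minimal sketch of an imitation-game harness. Everything in it is a hypothetical stand-in (the canned replies, the chance-level judge); Turing's actual setup involves an interrogator conversing freely, but the structural point survives: the judge only ever sees text, never the hardware behind it.

```python
import random

# A toy harness for the Turing Test's core premise: the judge sees
# only unlabeled replies. The reply sources below are hypothetical
# stand-ins (canned strings), not a real model or API.

def human_reply(prompt: str) -> str:
    return "I'd say a sonnet, since the form forces economy."  # stand-in

def machine_reply(prompt: str) -> str:
    return "A sonnet, because its constraints reward compression."  # stand-in

def imitation_game(prompt: str, judge) -> bool:
    """Show the judge two unlabeled replies and ask which came from
    the machine. Returns True if the machine went undetected."""
    replies = [("human", human_reply(prompt)),
               ("machine", machine_reply(prompt))]
    random.shuffle(replies)  # conceal which source is which
    guess = judge(prompt, [text for _, text in replies])  # index 0 or 1
    return replies[guess][0] != "machine"

# A judge with no better strategy than chance:
passed = imitation_game("Which poem form do you prefer?",
                        judge=lambda p, texts: random.randrange(2))
print("machine went undetected:", passed)
```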
There remain differences in hardware (brain and silicon) that we need to continue to track. We still don't understand human intelligence on a fundamental level, and there are many things humans can do that machines simply cannot. But that list is swiftly shrinking, and we may want to be wary of how our current epistemology of thinking obscures the progress being made.
Perhaps GPT-3, and the class of AI it represents, is a kind of placebo: much like its pharmaceutical counterpart, whose mechanisms of action are not always fully understood, its effects are no less real and possibly just as important.