Thanks to popular culture, we have a good idea of what to expect when “strong” AI arrives. Machines attain consciousness? Prepare to be harvested as food. Detroit introduces talking cars? “Hello, KITT.”
What to expect in the near term is less clear. While strong AI still lies safely beyond the Maes-Garreau horizon¹ (a vanishing point, perpetually fifty years ahead), a host of important new developments in weak AI are poised to be commercialized in the next few years. But because these developments are a paradoxical mix of intelligence and stupidity, they defy simple forecasts and resist hype. They are not unambiguously better, cheaper, or faster. They are something new.
What are the implications of a car that adjusts its speed to avoid collisions … but occasionally mistakes the guardrail along a sharp curve for an oncoming obstacle and slams on the brakes? What will it mean when our computers know everything — every single fact, the entirety of human knowledge — but can only reason at the level of a cockroach?
I mention these specific examples — smart cars and massive knowledge-bases — because they came up repeatedly in my recent conversations with AI researchers. These experts expressed little doubt that both technologies will reach the market far sooner, and penetrate it more pervasively, than most people realize.
But confidence to the point of arrogance is practically a degree requirement for computer scientists. Which, actually, is another reason why these particular developments caught my interest: for all their confidence about the technologies per se, every researcher I spoke to admitted they had no clue — though they were intensely curious — about how these developments will affect society.
Taking that as a signal that these technologies are worth understanding, I started to do some digging. While I am still a long way from any answers, I think I’ve homed in on some of the critical questions.