Thanks to popular culture, we have a good idea of what to expect when “strong” AI arrives. Machines attain consciousness? Prepare to be harvested as food. Detroit introduces talking cars? “Hello, KITT.”
What to expect in the near term is less clear. While strong AI still lies safely beyond the Maes-Garreau horizon1 (a vanishing point, perpetually fifty years ahead), a host of important new developments in weak AI are poised to be commercialized in the next few years. But because these developments are a paradoxical mix of intelligence and stupidity, they defy simple forecasts and resist hype. They are not unambiguously better, cheaper, or faster. They are something new.
What are the implications of a car that adjusts its speed to avoid collisions … but occasionally mistakes the guardrail along a sharp curve for an oncoming obstacle and slams on the brakes? What will it mean when our computers know everything — every single fact, the entirety of human knowledge — but can only reason at the level of a cockroach?
I mention these specific examples — smart cars and massive knowledge-bases — because they came up repeatedly in my recent conversations with AI researchers. These experts expressed little doubt that both technologies will reach the market far sooner, and penetrate it more pervasively, than most people realize.
But confidence to the point of arrogance is practically a degree requirement for computer scientists. Which, actually, is another reason why these particular developments caught my interest: for all their confidence about the technologies per se, every researcher I spoke to admitted they had no clue how these developments will affect society - though all were intensely curious to find out.
Taking that as a signal that these technologies are worth understanding, I started to do some digging. While I am still a long way from any answers, I think I’ve homed in on some of the critical questions.
There is a sense of excitement that infects everyone, whether Detroit exec or Silicon Valley VC, who is involved with electric cars. It comes from the belief, propagated by an enthralled media, that what they are doing is important - even vital. Electric vehicles, they insist, are revolutionary.
They are delusional.
Whether a car runs on gas, electricity, or steam, it remains a deadly weapon, with victims denominated not just in bodies, but in wasted wages and lost time. No matter what your attitude toward suburbs and urban sprawl (personally, I’m a fan), anyone who has tried driving the I-405 at rush hour knows that cars need far more than a new motor.
But, fortuitously, the hype over the electrical car is providing covering fire for a true revolution: the computational car. It is the increasingly autonomous intelligence of automobiles, far more than a new drive train, that stands to fundamentally alter how we interact with cars, and how they affect our planet.
Already, more than a dozen 2010 model-year cars offer intelligent safety features such as lane departure warning and adaptive cruise control. Crucially, they do not just flash a light or sound a buzzer when a problem is detected: they autonomously apply the brakes or adjust the steering. The driver is no longer the fail-safe that ensures the machine is running correctly. The driver is a problem to work around. The driver, you might say, is a bug.
Of course, I am far from the first to recognize the importance of this development. Even Wards, the automotive trade weekly, recently acknowledged that artificial intelligence is poised to change cars more thoroughly than electric propulsion ever will. And Brad Templeton, a well-known ‘net entrepreneur, has written extensively and persuasively on how today’s intelligent safety features will inexorably lead to autonomous vehicles.
Making this technology all the more notable is that it wasn’t supposed to happen.
For many years, the conventional wisdom, certainly within the auto industry, was that carmakers would never introduce intelligent safety features so long as there were plaintiff lawyers. Autonomous technology shifted the liability for accidents from the car’s owner to the car’s maker, said industry spokespeople, and was tantamount to corporate suicide.
Three developments changed their minds. First, active safety technologies have become substantially more robust, thanks to improvements in sensor design, and, most importantly, in sensor fusion and planning algorithms. Second, drive-by-wire has rendered the legal debate largely academic - car functions are already mediated by computers, one way or another. Lastly, and probably most importantly, the auto industry experienced an unprecedented, violently destabilizing, massive contraction. Technology that previously seemed like a grave, existential threat now seems like the least of their problems. It turns out that innovation, like freedom, “is just another word for having nothing left to lose.” 2
All those developments made autonomous technology possible, even practical. But the impetus to actually do something about it came from charts like the one below. The line shows the automotive fatality rate declining steadily for the last twenty-five years of the 20th century, from 3.5 deaths per 100 million miles traveled in 1975 to just over 1.5 deaths in 2000. Then the line flattens out. For the last ten years the fatality rate has barely budged.
The gains in the 1980s and 1990s stemmed primarily from mechanical improvements in car bodies — better vertical rigidity, intentional crumple zones. By the end of the millennium, engineers were butting up against the physical limits of materials, chasing rapidly diminishing returns. Today, any significant decline in the fatality rate will have to come from changes in how cars are driven - or, ideally, not driven. And pressure is mounting: the extraordinary growth in texting and its deleterious effects on driver attention mean that, even holding everything else constant, the fatality rate will rise.
This raises the critical question: do intelligent safety features work? Do they save lives and limbs? We know that changing lanes unintentionally and rear-ending a forward vehicle - the accident types the two most popular intelligent safety features address - account for a very significant percentage of fatalities, although estimates vary substantially. But we have almost no data on the efficacy of the new intelligent safety solutions, and what we do have is contested.
This uncertainty is surprising given that auto accidents are the leading cause of death for teenagers, and one of the top-ten causes for adults. You might think the National Highway Traffic Safety Administration rigorously evaluates new safety features, akin to how the FDA evaluates new drugs and devices.
That is not the case. At best, the NHTSA does some simple, unrealistic, in vitro-style tests. It never performs double-blind experiments in the real world. Even the statistics the agency collects about automotive accidents are of dubious usefulness, due to poor sampling design and a paucity of detail.
Still, we can thank the NHTSA for a recent report that at least throws the uncertainty about autonomous safety features into stark relief. The NHTSA had volunteers drive a test track in cars with automatic lane departure correction, and then interviewed the drivers for their impressions. Although the report does not describe the undoubted look of horror on the examiner’s face while interviewing one female, 20-something subject, it does relay the gist of her comments.
After she praised the ability of the car to self-correct when she drifted from her lane, she noted that she would love to have this feature in her own car. Then, after a night of drinking in the city, she would not have to sleep at a friend’s house before returning to her rural home.
This phenomenon, where improved safety spurs on greater risk taking, is known as risk compensation, or “risk homeostasis”. Most of us became familiar with the concept from debates over anti-lock brakes (ABS), but its specter has plagued nearly every attempt to improve automotive safety, from seat belts to night vision. Yet almost nothing about risk compensation - its etiology, its prevalence, its significance - is certain.
To prove the phenomenon even exists, one particularly inspired British researcher had volunteers ride bicycles on a closed course, with half the people wearing helmets and proper attire, and the other half clad in their underwear. Graduate students positioned on the sidelines graded the volunteers’ performance and tallied any unsafe maneuvers. The results showed that the unclothed group practiced much safer riding habits, thereby supporting risk compensation theory - and Britain’s reputation for eccentricity.
Many other, more targeted studies from the 1990s also painted automotive safety as a zero-sum game, with any gains in safety vitiated by greater risk taking. Not only did careful, well-designed experiments in Europe show that anti-lock brakes lead to more aggressive driving, but many of the countries that adopted seat-belt legislation found that auto fatalities barely budged, while the number of pedestrians injured by cars actually increased.
These studies make for fascinating reading but can be hard to integrate with common sense. Anyone who has driven a vintage car knows they do not feel as safe. Fortunately, over the last ten years the scholarly consensus has shifted - pushed by both empirical and theoretical developments - to a much more nuanced view.
The key empirical development was the overwhelming success of electronic stability control (ESC). Introduced in 1995, the technology works in conjunction with ABS to prevent over- and under-steer. The NHTSA reports that ESC reduces accidents by 35% — a number large enough to outweigh the study’s methodological shortcomings, which were legion. This success prompted researchers to reexamine ABS, and with the benefit of hindsight, many now believe that ABS is ineffective for very specific reasons. (Essentially, when the brake pedal automatically ‘pumps’, it disconcerts drivers and they instinctively raise their foot.)
Theoretical developments have had an even more profound effect on how we think about risk compensation. These developments reflect an ongoing revolution in statistical practice — enabled by Moore’s law as well as Bayes’ law — that allows us to peek into the black box of causation. Thanks to books like Freakonomics and Jared Diamond’s new anthology, the reverberations of this revolution have started to enter the public consciousness, but the full sweep of its implications remains vastly under-appreciated.3
It is, admittedly, both technically and philosophically complex. But at the most concrete level, the use of MCMC (Markov chain Monte Carlo) methods to iteratively ‘solve’ Bayesian networks allows us - in certain cases - to make strong claims about causes from naturally observed data rather than from carefully randomized experiments.
This may be easier to explain with an example.
Traditionally, to determine the efficacy of seatbelts in preventing fatalities, we would randomly assign people to two classes and then ensure that the control class never wore their seatbelt, while people in the other class always buckled up. We could not simply look at people who already wear seatbelts and those who do not, because the people who naturally wear seatbelts are more likely to be naturally cautious drivers. We couldn’t even do the study longitudinally - by, say, looking at a country before and after seat-belt legislation - because confounding factors like a steadily aging population, or the growth in texting would distort our conclusions.
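The confounding problem described above can be made concrete with a small simulation. None of the numbers below come from the studies discussed here; they are invented purely to show how a naive observational comparison misleads, and how adjusting for the confounder recovers the truth:

```python
import random

random.seed(42)

# Toy world: cautious drivers both wear seatbelts more often AND crash
# less, confounding any naive belted-vs-unbelted comparison.
drivers = []
for _ in range(100_000):
    cautious = random.random() < 0.5
    belt = random.random() < (0.9 if cautious else 0.3)
    # True effect (by construction): belts halve fatality risk.
    base = 0.01 if cautious else 0.04
    fatal = random.random() < (base * 0.5 if belt else base)
    drivers.append((cautious, belt, fatal))

def rate(rows):
    return sum(f for _, _, f in rows) / len(rows)

belted = [d for d in drivers if d[1]]
unbelted = [d for d in drivers if not d[1]]

# Naive contrast: mixes the seatbelt effect with the caution effect.
naive_ratio = rate(belted) / rate(unbelted)

# Stratifying on the confounder recovers the true ~0.5 risk ratio.
strata = []
for c in (True, False):
    b = [d for d in drivers if d[0] == c and d[1]]
    nb = [d for d in drivers if d[0] == c and not d[1]]
    strata.append(rate(b) / rate(nb))

print(f"naive risk ratio:      {naive_ratio:.2f}")    # well below 0.5
print(f"stratified risk ratio: {sum(strata)/2:.2f}")  # near the true 0.5
```

Here the confounder is observed, so simple stratification works; the methods described above earn their keep when the causal structure is more tangled than this.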
But these rules of statistical best practice are being overturned. There are now at least a half-dozen studies that use sophisticated causal inference to tease apart the root causes and consequences of automotive safety. What they have found is satisfying in its lack of surprise. Concisely, risk compensation exists, but not universally - it is personality dependent. “Sensation seeking” is one blunt-edged, but not totally wrong, way to characterize the people who do exhibit risk compensation.
Nonetheless, the insight that intelligent safety features will only help a subset of the population can seem deflating. The big stories in technology are always the ones that surpass expectations.
I will argue that for at least one industry - auto insurance - autonomous safety features will lead to better-than-expected performance. The argument, detailed below, is circuitous, but stems from the realization that the mathematical risk models auto insurance companies rely on have never described reality very well. The coming innovations in automotive safety will actually push reality in the direction of the model, making the business significantly more profitable.
Insurance: Not as boring as you think
But … In-sur-ance. It does not whisper alluringly, like diamonds, or sigh seductively, like oil. It creaks; it pops. Insurance is not usually associated with fast-growing companies, charismatic CEOs, or technological discontinuities. The very nature of insurance seems most amenable to gradual, incremental progress. It’s safer.
But don’t mistake characteristics of the recent past for inalienable traits. Study the history of insurance - as the industry’s fastidious, compulsive record-keeping uniquely allows - and you notice that the most lucrative periods always come in the wake of big socio-technical changes. Changes that eliminate risk faster than prices can fall.
Fire insurance, for example, 1907 to 1927. The massive destruction caused by the 1906 San Francisco earthquake and fire sets off a nation-wide overhaul of building safety codes, decisively reducing the frequency and potency of commercial fires4. Or, perhaps more analogous to automotive safety: marine insurance, late 1880s. Wood-and-sail ships are forcefully retired by faster and safer steel-and-steam vessels. England dominates in the construction and operation of these new steel ships, fostering a powerful local marine insurance industry and relegating American insurers to table scraps. Lloyd’s becomes Lloyd’s.
This same innovation-driven dynamic also occurs at the smaller, micro-scale, producing the cyclic pattern so characteristic of insurance industries. These cycles are often explained with vague appeals to supply and demand, but those are rarely the real drivers - regulatory hurdles (for supply) and regulatory requirements (on demand) sap their force. Instead, small, predictable social and technical changes are constantly reducing risk, and thus cost. The regulatory rate-setting process inserts a lag between a risk reduction and the associated drop in price, and voila: cycle!
The chart below shows the historical combined ratio for auto insurance since 1930. (Remember, the combined ratio is incurred losses plus expenses divided by earned premiums. The further the ratio is below 100, the more profitable the underwriting.) For context, today’s auto insurance companies have combined ratios right at, or slightly above, 100 and depend on ancillary services and investment income for profitability5.
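To make the parenthetical definition above concrete, here is the arithmetic as a tiny sketch. The dollar figures are hypothetical, chosen only to show which side of 100 means what:

```python
def combined_ratio(incurred_losses, expenses, earned_premiums):
    """Combined ratio as defined above: (losses + expenses) / premiums,
    expressed as a percentage. Below 100 means the underwriting itself
    is profitable."""
    return 100 * (incurred_losses + expenses) / earned_premiums

# Hypothetical insurer: $70M in losses, $28M in expenses,
# on $100M of earned premiums.
print(combined_ratio(70e6, 28e6, 100e6))  # 98.0 -> underwriting profit

# At 103, the insurer loses 3 cents per premium dollar on underwriting
# and must make it up elsewhere, as described above.
print(combined_ratio(75e6, 28e6, 100e6))  # 103.0
```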
You can easily make out the cycle starting just after 1945 and repeating every six or seven years. The initial peak was of course the end of WWII - gasoline was no longer rationed, servicemen returned, inflation soared - but the cycle itself was a product of the McCarran-Ferguson Act, passed in 1945, which led most states to regulate auto insurance rates.
I have shown that there are good reasons - both historical precedent and structural mechanisms - why significant risk reductions lead to increased profitability. What’s left is to show that autonomous safety technologies will reduce the risk covered by insurers more than is expected… even in the face of “risk compensation”.
To do this, you first need to understand how auto insurance companies think about accidents.
Anyone who has had a car accident knows there are two perspectives. Other people’s perspective, also known as the negligence theory, which says accidents are the result of momentary carelessness. Or coincidence theory, which says that if you drive enough miles, something bad is bound to occur.
Both, of course, have some element of truth. Your grandmother is truly a hazard, despite only driving to church on Sundays. And Mario Andretti would have accidents too, if he commuted three hours to work. The question is which factor dominates.
The data unequivocally says the latter. Accidents correlate most strongly with the number of miles driven. To put it in actuarial terms, miles driven is an exposure variable, and is multiplicative, while negligence is a class variable, and additive.
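The multiplicative-versus-additive distinction can be sketched as a toy frequency model. Every coefficient below is made up for illustration; the point is only the structure, in which a negligence rating nudges the base rate while mileage scales the whole thing:

```python
# Toy accident-frequency model in the spirit of the distinction above:
# the negligence "class" variable shifts a base rate additively, while
# exposure (miles driven) multiplies the result. All numbers invented.

BASE_RATE_PER_10K_MILES = 0.03  # expected accidents per 10k miles (hypothetical)
NEGLIGENCE_LOADING = {"careful": -0.005, "average": 0.0, "careless": 0.01}

def expected_accidents(annual_miles, driver_class):
    per_10k = BASE_RATE_PER_10K_MILES + NEGLIGENCE_LOADING[driver_class]
    return per_10k * (annual_miles / 10_000)

# A careless driver who barely drives vs. a careful long-haul commuter:
print(expected_accidents(2_000, "careless"))  # ~0.008 expected accidents/year
print(expected_accidents(30_000, "careful"))  # ~0.075 expected accidents/year
```

Even with a careless-driver loading, the low-mileage driver comes out far safer in expectation - which is exactly why mileage, not negligence, dominates.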
Nonetheless, for historical, political, and idiosyncratic reasons, insurance premiums have always been firmly rooted in negligence theory. It is this tension — between how insurance companies think about accidents, and how accidents actually are — that leads to logical inconsistencies and inefficient pricing.
For example, insurance companies almost never consider “no-fault” accidents when evaluating your driving record. (In fact, doing so is prohibited in some states.) However, no-fault accidents are an extremely good predictor of future fault accidents. The correlation would be bizarre if accidents were truly the result of negligence, but makes perfect sense if accidents are largely stochastic.
A far more pernicious inefficiency stems from the empirical correlation between low credit scores and auto accidents. Insurance analysts, viewing the world through negligence theory blinders, explain the higher number of accidents by characterizing people with bad credit as impulsive, reckless, and - frankly - not that sharp. This explanation, in only marginally more politic terms, is frequently trotted out as fact in the popular press.
It is bullshit.
The real explanation is more subtle. Because insurance acts as a per-car tax, people naturally try to reduce the number of cars they have assessed. In practice, this means letting the insurance on their second vehicle lapse and using their primary car exclusively. Both actions increase the average miles driven per insured car - and, therefore, the number of accidents per insured car. The unfortunate end game is that people with less money are stuck with disproportionately high insurance premiums.
The solution, say some policy experts, is to price insurance on a per-mile, rather than per-car, basis. People with poor credit would be disincentivized to drive, and would thus have fewer accidents and lower premiums.
Coincidentally enough, in the last two years, nearly every auto insurance company has announced just such a “pay-per-mile” plan with an excess of fanfare. Even insurance companies like to be on trend, and this press release stampede was all about a shiny new piece of technology: the secure GPS system, used to track miles driven.
Try to actually sign up for one of these per-mile plans, however, and you will face a seemingly infinite number of obstacles. Most insurers killed the plans before the press releases went cold because they would have been a drag on profits. It is easy to see why: all the customers who drive very little would sign-up for the GPS programs, leaving just the long haul drivers in the pool. The cross subsidies and mixing of means that lies at the heart of any insurance program would be eliminated.6
Autonomous safety features offer a much more sustainable model for insurance companies. The computational car will allow the majority of drivers - the non-risk takers - to reduce their chance of accident asymptotically, to the point where miles driven is no longer the determining factor. Then, insurers’ models, which price as if your personality rather than miles driven controls your accident rate, will accord with reality.
This counter-intuitive phenomenon — the real world remade in the form of the model, rather than the model adjusted to reflect reality - is currently a hot topic among economists, under the rubric of ‘performativity’. It turns out to be a surprisingly ubiquitous process, underlying many economic developments. The canonical example is the Black-Scholes equation, first published in 1973. Before then, option prices on the Chicago Board of Trade varied markedly from what Black-Scholes predicted. Within a few months of the equation’s publication, however, options were trading in-line with theory.7
In other words, Black-Scholes became an accurate model of option pricing … because people began using it to price options. But it was also self-fulfilling in a deeper sense. Just as models in physics rely on simplifying assumptions - frictionless inclines, no wind resistance – Black-Scholes assumes zero transaction costs, unlimited borrowing at the riskless interest rate, and unconstrained short-selling. These were all wildly unrealistic in the pre-E*TRADE world of 1973. However, as regulators adopted Black-Scholes to govern everything from bank risk to executive compensation, the model’s assumptions rode along like stowaways, becoming deeply embedded in economic policy. The world was remade in the model’s image.
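For the curious, here is the 1973 Black-Scholes formula for a European call, sketched in a few lines. This is the textbook closed form, not anything specific to the performativity argument; the example parameters are arbitrary:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes (1973) price of a European call.
    S: spot price, K: strike, T: years to expiry,
    r: riskless rate, sigma: annualized volatility.
    Assumes the model's famous idealizations: no transaction costs,
    unlimited borrowing at r, unconstrained short-selling.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# A one-year at-the-money call: spot 100, strike 100, 5% rate, 20% vol.
print(round(black_scholes_call(100, 100, 1.0, 0.05, 0.20), 2))  # ~10.45
```

The docstring's list of idealizations is the point of the paragraph above: the formula is simple precisely because it assumes away the frictions that, in 1973, were everywhere.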
Performativity is a powerful prism to view events through, but like previous big ideas - Kuhnian paradigms, Shannon information theory - it is in danger of being over-used to the point of meaninglessness.8
LEARNING BY READING
The second trend in AI worth paying attention to goes by the ungraceful acronym LbR: Learning by Reading. The technology is slightly earlier in its development than autonomous vehicles (although well ahead of where most people assume) so I will restrict myself to describing the technology and a few of its implications.
Earlier, I described how an LbR system “knows everything — every single fact, the entirety of human knowledge — but can only reason with the intelligence of a cockroach.” You might argue this is hardly revolutionary: you have friends, even colleagues, who fit this description. The difference, of course, is LbR systems won’t just act like they know everything… they really will. With just a few caveats concerning the word “everything”. And the word “know”.
In this context, “everything” means every fact accessible through the public Internet. (So not technically everything, but plenty.) “Know” is trickier to define: it is more than just storing the text in a database, but still well shy of conscious understanding. A concrete demonstration of how LbR works (from a 2005 paper about Yago, an early German system) makes this clear:
“Assume that a knowledge-gathering system encountered the following sentence:

Einstein attended secondary school in Germany.

Knowing that “Einstein” is the family name of Albert Einstein and knowing that Albert Einstein was born in Germany, the system might deduce that “X attended secondary school in Y” is a good indicator of X being born in Y. Now imagine the system also found the sentence:

Elvis attended secondary school in Memphis.

Many people have called themselves “Elvis”. In the present case, assume that the context indicates that Elvis Presley is meant. But the system already knows (from the facts it has already gathered) that Elvis Presley was born in the State of Mississippi. Knowing that a person can only be born in a single location and knowing that Memphis is not located in Mississippi, the system concludes that the pattern “X attended secondary school in Y” cannot mean that X was born in Y. Reconsidering the first sentence, it finds that “Einstein” could have meant Hermann Einstein instead. Hermann was the father of Albert Einstein. Knowing that Hermann went to school in Germany, the system figures out that the pattern “X attended secondary school in Y” rather indicates that someone went to school in some place. Therefore, the system deduces that Elvis went to school in Memphis.”
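The core move in that quoted walkthrough - accept a candidate pattern only if no known fact contradicts it - can be sketched in a few lines. This is a deliberately simplified stand-in, not the actual Yago system or its data; it omits the entity-disambiguation step (Albert vs. Hermann Einstein) entirely:

```python
# Toy knowledge base: known facts a Yago-like system has already gathered.
born_in = {"Albert Einstein": "Germany", "Elvis Presley": "Mississippi"}
located_in = {"Memphis": "Tennessee"}  # crucially, NOT Mississippi

# (entity, place) pairs extracted with the candidate pattern
# "X attended secondary school in Y":
extractions = [("Albert Einstein", "Germany"), ("Elvis Presley", "Memphis")]

def pattern_supports_born_in(pairs):
    """Accept the pattern as meaning 'born in' only if no known fact
    contradicts any extracted pair."""
    for person, place in pairs:
        birthplace = born_in.get(person)
        if birthplace is None:
            continue  # no evidence either way; skip
        # Resolve the place up to its containing region if we know it.
        region = located_in.get(place, place)
        if region != birthplace:
            return False  # contradiction: e.g. Memphis is not in Mississippi
    return True

print(pattern_supports_born_in(extractions))  # False: pattern rejected
```

The Elvis extraction contradicts a known fact, so the whole pattern is thrown out - exactly the self-correcting bootstrapping the quoted example describes.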
If this still sounds like little more than a glorified encyclopedia, I recommend finding an eight-year-old and asking to borrow their 20-Q ball.9 Despite having a knowledge base many, many orders of magnitude smaller than Yago’s, and the most rudimentary logic possible, the combination can project a disconcerting amount of intelligence.
The field has advanced rapidly since the turn of the century because most of the software plumbing required for LbR was already developed. Detection of linguistic relationships is reasonably mature, thanks to information extraction applications. Logical reconciliation is a standard feature in semantic web ontology systems. Even the “soft” problems that might seem unique to LbR bear a striking resemblance to classic concerns of literary criticism. (Questions like: where does the meaning of a text reside? In the intent of the author, in the reader’s perception, or is it somehow intrinsic to the text?)10
Despite the technology’s looming maturity, there has been almost no discussion about potential implications. That is concerning because massive knowledge bases, and the kind of weak omniscience they provide, can be profoundly unsettling, in a way that autonomous vehicles cannot.
Admittedly, LbR is nostalgic in at least one sense. Remember that one of the profound effects of the internet was that it turned everyone into a researcher. We learned how to dig through dozens of web sites in search of the name of an actor from a dimly remembered show, or to determine whether a skin lesion is cause for concern. Perhaps this was just a momentary phenomenon, an interregnum in the normal order of things, and once our PC knows everything, digging for facts will be like doing long division by hand.
(You might object that you primarily search for opinions, not facts, so a LbR system will be of little use. This is usually a false distinction. Consider that “what is the best burrito in SF” (an opinion), and “what do most people consider the best burrito in SF” (a fact) are normally considered equivalent.)
This LbR scenario is important because of the commercial implications. After all, a giant database that knows everything..? I-It’s the Google killer! Or, to define the threat more precisely: when pressing double-bucky-F automatically fact-checks a document, will Google’s algorithmic search be commoditized out of business?
Man vs. Google
What we can say for certain is that LbR will not catch Google unawares. It was already the subject of a Paul Revere-style (“the LbRs are coming..!”) internal memo, sent out by Google senior VP Jonathan Rosenberg early last year. And the company certainly understands the technology’s intricacies: many of the grad students who built the first generation of LbR systems now work at Google.
But LbR is just one component - one instantiation — of a schism that is reshaping the entire search market. While this rift has been widely discussed, Google’s positioning relative to it is typically misunderstood.
As search queries become increasingly complex - driven by rising expectations and growing familiarity - search providers are heading in two different directions. Sites like Hunch, Bing, and Alpha view the rising complexity as a software problem and are busy adding more intelligence to their engines, using techniques that trend asymptotically toward LbR. Alternatively, sites like Aardvark and Answerly see complex queries as primarily a social design issue and are mining social graphs in order to connect questioners with live experts.
Partly because the software solution sounds the most promising (cheap usually trumps correct), people assume Google leads the software faction. They do not. Although Google has done a terrific job of marketing themselves as a rigorously objective, by-the-numbers, Mr. Transistors - with all the attendant benefits from a regulatory point-of-view - their spin elides a more interesting truth.
Google trusts man over machine. Most famously with their original page-rank algorithm, most importantly in their reluctance to use machine optimization for advertising11, and most recently with their acquisition of Aardvark, Google sides with the subjective, ineffable stink of humanity nearly every time. To cynics, this is merely a side effect of their hubris - Googlers believe they are the smartest thing in the server room. To naïfs, it is the logical outgrowth of Google’s profoundly humanitarian culture. (No points for guessing my allegiance.) What is increasingly clear, though, is that Google’s cultural quirk is central to their vision of the future.
For starters, it helps explain Google’s February acquisition of Aardvark. Purchased for $50m, Aardvark was started by two former Google employees to provide real-time search over social graphs. For example, a user can ask ‘Vark where to eat in Los Angeles. Aardvark searches your social graph - the content generated by your friends and followers — looking for people that mention LA restaurants. It is then up to Aardvark to optimize across availability (response time), depth of knowledge, and repeat intrusions to determine which one or two people should receive your question. Any response from them goes straight to you.
Aardvark’s technology steers questions to the right people, and then steps out of the way. Other systems and solutions are less reticent. If you spelunk through the Office 2010 Beta binaries, or study some of MSR’s recent projects like WhatShouldIBuy, you get a good sense of how Microsoft hopes to do an end-run around Google. They are using sophisticated linguistics and network analysis out of the LbR playbook to mine social graphs, and then automatically selecting an appropriate strategy. If a similar question has been asked, Microsoft will reuse, after tweaking, the previous answer. Alternatively, if there were no similar questions, the software tries to automatically divine the answer (using extraction and logical entailment). Finally, if all else fails, the program falls back on forwarding the question to the appropriate people in your network and then saving the answer in its growing database.
To handicap these competing approaches, we need to understand the kinds of questions people ask search engines, versus the kinds people ask their friends. Researchers have just started studying this issue, but their preliminary findings offer some intriguing hints. The majority of questions can be answered equally well by search engines or social networks. But a sizable fraction — 12% according to one study — of the questions people ask social networks aren’t really questions at all. They are complaints. (What are they thinking?)
I do not want to oversell the point I am making here– it is small, and easily misconstrued. I am not arguing that Google is incapable of sophisticated linguistic processing. Or that LbR solves everything. I am suggesting that a conceptual axis — with automated, LbR techniques on the left, and people-centric, social media solutions on the right — is a useful way to delineate emerging search systems. I am also suggesting that Google culturally favors people-centric solutions to a much greater degree than typically acknowledged.
Finally, if I may digress one last time, this conceptual axis turns out to be a useful way to look at the broader industry, and to understand the impact of social media.
The social messaging vs. computational intelligence divide is even relevant to ecommerce sites. Gilt Groupe, for example, spends no money on SEO or behavioral advertising and, by design, will never be found in Google’s automatically constructed price comparisons. Instead, Gilt relies entirely on the social graph for marketing, promotion, and customer acquisition.
Just as paid-search changed the advertising industry by emphasizing metrics and accountability - the traditional subjects of advertising’s blue-collar relation, direct marketing — social media is now causing its own upheaval. PR firms and advertising agencies both claim social media as their rightful domain, resulting in a few high-profile clashes. This battle is tough to call. Few institutions can achieve the level of dysfunction of ad agencies, but the big PR firms like Edelman certainly give it a good college try.
The most important impact of social media is a fuzzy, conceptual one. As social graphs are mined to answer questions, aggregate opinions, influence consumers, and provide sales leads, civilians are becoming intertwined in the Market to an unprecedented extent. When advertisers talk about buying the networks, they are no longer referring to the big three; they mean buying access to your friends. Companies are even starting to talk about “virtual sales forces” — essentially mercenaries who sell to their friends and followers. It is easy to lampoon the four-part series The Times will undoubtedly run someday soon: “The Price Of Friendship: How Technology Puts People For Sale”, or something equally lumbering. But it doesn’t make this trend any less disquieting.
(If you think this all sounds far-fetched, just pick up a biology journal. That was an upstanding academic community that, thanks to changes in research practices and IP laws, became deeply intertwined with industry, and went from morally superior to ethical train wreck in about a decade.)
Alternatively, you may want to take a look at some products I really, really believe in.
Essays about artificial intelligence normally end with some ponderous treacle proclaiming how the more we study AI, the more we appreciate the human mind in all its splendiferous mystery.
This is, to be charitable, wishful thinking.
AI does teach us about intelligence. It teaches us that “intelligence” is a motley assortment of heuristics, kludges, and cheap tricks. The danger with AI is not that machines will become smarter than us, but that we will become as dumb as machines. The absurdly prescient William S. Burroughs was wise to this fifty years ago. “The study of thinking machines teaches us more about the brain than we can learn by introspective methods,” pronounces Dr. Benway, in 1959’s Naked Lunch. “Western man is externalizing himself in the form of gadgets.”
I have tried to sketch out some of the near-term implications of AI, but it is substantially easier to predict the long term. There we can rely on a few core principles, such as that technology, like all life forms, is fundamentally self-perpetuating. That means the technologies I described, autonomous vehicles and learning by reading, will be most important for what they enable.
That makes sense. If we were trying to build a true, general AI, we would first need to create a way for it to get around and interact with the larger world. And we would need a system for rapid knowledge acquisition, so that we wouldn’t have to manually explain every detail of how the world works.
Which, of course, is precisely what we’re building.
- Steve G. Steinberg
1 The Maes-Garreau law comes from research by MIT professor Pattie Maes and journalist Joel Garreau which found that predictions of a technological deus ex machina — whether an immortality pill or thinking machines — are always scheduled to occur within the last few years of the prognosticator’s expected lifespan.
2 Joplin, Janis.
3 One contribution to this field, “Scalable Techniques for Mining Causal Structures”, was written by a pre-Google Sergey Brin. I take this as evidence that Brin has a knack for fingering the technological zeitgeist, but also, based on the paper’s startling acontextuality — it betrays no knowledge of the huge body of related research — that he was bad at searching.
4 The irony, of course, is that much of the fire damage in 1906 was from insurance-motivated arson. Almost nobody had earthquake coverage, so people burned their already collapsed buildings to ensure reimbursement. This may also explain why the city of San Francisco saw nothing wrong with weakening safety codes, in contradiction with the rest of the country, to help expedite rebuilding.
5 In a bid to understand what sort of “ancillary services” auto insurers offer, I recently looked at Admiral (ADM.LN), a well-respected UK auto insurer. I was gobsmacked to discover that the largest chunk of the company’s profits comes from selling leads to lawyers. Call me naïve: I did not realize that the insurance industry’s kvetching about runaway litigation forcing higher premiums came with a smirk. Back in my misspent youth, we had a word for this sort of thing. We called it stealing.
6 Although the GPS-based programs are largely a mirage, you do see some insurers promoting a “low miles” discount. These are primarily offered in California and Texas, where the state regulatory bodies have been pushing insurers toward a per-mile system for some time.
7 The process is also reminiscent of that well-worn joke about the French and their love of theory. (After many hours of debate on the UN floor, the French delegate says, “Fine, I accept that your idea works in practice… but does it work in theory?”)
8 In this month’s Interview, Michael “Club Kid Killer” Alig rationalizes his drug abuse as “performative”: by acting like a drug addict, he became one.
10 Anyone who lived through the Science Wars of the 1990s, when physicists and English professors engaged in pitched battle over the nature of objectivity, the meaning of (post)modernism, and the drawing of academic borders, will be amused to learn it is no longer unusual for computer scientists to cite works by Barthes, Derrida, and Foucault. Once dismissed as impenetrable, pernicious nonsense, their books are now simply instructional. (A strong rebuttal, to my mind, of the current conventional wisdom that lit crit must become more “scientific” to survive.)