
new developments in AI

sgs | academic,AI,review | Saturday, July 3rd, 2010

Thanks to popular culture, we have a good idea of what to expect when “strong” AI arrives. Machines attain consciousness? Prepare to be harvested as food. Detroit introduces talking cars? “Hello, KITT.”

What to expect in the near-term is less clear. While strong AI still lies safely beyond the Maes-Garreau horizon1 (a vanishing point, perpetually fifty years ahead), a host of important new developments in weak AI are poised to be commercialized in the next few years. But because these developments are a paradoxical mix of intelligence and stupidity, they defy simple forecasts and resist hype. They are not unambiguously better, cheaper, or faster. They are something new.

What are the implications of a car that adjusts its speed to avoid collisions … but occasionally mistakes the guardrail along a sharp curve as an oncoming obstacle and slams on the brakes? What will it mean when our computers know everything — every single fact, the entirety of human knowledge — but can only reason at the level of a cockroach?

I mention these specific examples — smart cars and massive knowledge-bases — because they came up repeatedly in my recent conversations with AI researchers. These experts expressed little doubt that both technologies will reach the market far sooner, and penetrate it more pervasively, than most people realize.

But confidence to the point of arrogance is practically a degree requirement for computer scientists. Which, actually, is another reason why these particular developments caught my interest: for all their confidence about the technologies per se, every researcher I spoke to admitted they had no clue – but were intensely curious – how these developments will affect society.

Taking that as a signal that these technologies are worth understanding, I started to do some digging. While I am still a long way from any answers, I think I’ve homed in on some of the critical questions.


There is a sense of excitement that infects everyone, whether Detroit exec or Silicon Valley VC, who is involved with electric cars. It comes from the belief, propagated by an enthralled media, that what they are doing is important – even vital. Electric vehicles, they insist, are revolutionary.

They are delusional.

Whether a car runs on gas, electricity, or steam, it remains a deadly weapon, with victims denominated not just in bodies, but in wasted wages and lost time. No matter what your attitude toward suburbs and urban sprawl (personally, I’m a fan) anyone who has tried driving the I405 at rush hour knows that cars need far more than a new motor.

But, fortuitously, the hype over the electrical car is providing covering fire for a true revolution: the computational car. It is the increasingly autonomous intelligence of automobiles, far more than a new drive train, that stands to fundamentally alter how we interact with cars, and how they affect our planet.

Already, more than a dozen 2010 model-year cars offer intelligent safety features such as lane departure warning and adaptive cruise control. Crucially, they do not just flash a light or sound a buzzer when a problem is detected: they autonomously apply the brakes or adjust the steering. The driver is no longer the fail-safe that ensures the machine is running correctly. The driver is a problem to work around. The driver, you might say, is a bug.

Of course, I am far from the first to recognize the importance of this development. Even Wards, the automotive trade weekly, recently acknowledged that artificial intelligence is poised to change cars more thoroughly than electric propulsion ever will. And Brad Templeton, a well-known ’net entrepreneur, has written extensively and persuasively on how today’s intelligent safety features will inexorably lead to autonomous vehicles.

Making this technology all the more notable is that it wasn’t supposed to happen.

For many years, the conventional wisdom, certainly within the auto industry, was that carmakers would never introduce intelligent safety features so long as there were plaintiff lawyers. Autonomous technology shifted the liability for accidents from the car’s owner to the car’s maker, said industry spokespeople, and was tantamount to corporate suicide.

Three developments changed their minds. First, active safety technologies have become substantially more robust, thanks to improvements in sensor design and, most importantly, in sensor fusion and planning algorithms. Second, drive-by-wire has rendered the legal debate largely academic – car functions are already mediated by computers, one way or another. Lastly, and probably most importantly, the auto industry experienced an unprecedented, violently destabilizing contraction. Technology that previously seemed like a grave, existential threat now seems like the least of their problems. It turns out that innovation, like freedom, “is just another word for having nothing left to lose.”2

All those developments made autonomous technology possible, even practical. But the impetus to actually do something about it came from charts like the one below. The line shows the automotive fatality rate declining steadily for the last twenty-five years of the 20th century, from 3.5 deaths per 100 million miles traveled in 1975 to just over 1.5 deaths in 2000. Then the line flattens out. For the last ten years the fatality rate has barely budged.

The gains in the 1980s and 1990s stemmed primarily from mechanical improvements in car bodies — better vertical rigidity, intentional crumple zones. By the end of the millennium, engineers were butting up against the physical limits of materials, chasing rapidly diminishing returns. Today, any significant decline in the fatality rate will have to come from changes in how cars are driven – or, ideally, not driven. And pressure is mounting: the extraordinary growth in texting and its deleterious effects on driver attention means that, even holding everything else constant, the fatality rate will rise.

[Chart: automotive fatality rate per 100 million miles traveled, 1975–2010]

Risk Compensation

This still leaves the critical question: do intelligent safety features work? Do they save lives and limbs? We know that unintentional lane departures and rear-end collisions – the accident types the two most popular intelligent safety features address – account for a very significant percentage of fatalities, although estimates vary substantially. But we have almost no data on the efficacy of the new intelligent safety solutions, and what we do have is contested.

This uncertainty is surprising given that auto accidents are the leading cause of death for teenagers, and one of the top-ten causes for adults. You might think the National Highway Traffic Safety Administration rigorously evaluates new safety features, akin to how the FDA evaluates new drugs and devices.

That is not the case. At best, the NHTSA does some simple, unrealistic, in vitro-style tests. It never performs double-blind experiments in the real world. Even the statistics the agency collects about automotive accidents are of dubious usefulness, due to poor sampling design and a paucity of detail.

Still, we can thank the NHTSA for a recent report that at least throws the uncertainty about autonomous safety features into stark relief. The NHTSA had volunteers drive a test track in cars with automatic lane departure correction, and then interviewed the drivers for their impressions. Although the report does not describe the undoubted look of horror on the examiner’s face while interviewing one female, 20-something subject, it does relay the gist of her comments.

After she praised the ability of the car to self-correct when she drifted from her lane, she noted that she would love to have this feature in her own car. Then, after a night of drinking in the city, she would not have to sleep at a friend’s house before returning to her rural home.

This phenomenon, where improved safety spurs on greater risk taking, is known as risk compensation, or “risk homeostasis”. Most of us became familiar with the concept from debates over anti-lock brakes (ABS), but its specter has plagued nearly every attempt to improve automotive safety, from seat belts to night vision. Yet almost nothing about risk compensation – its etiology, its prevalence, its significance – is certain.

To prove the phenomenon even exists, one particularly inspired British researcher had volunteers ride bicycles on a closed course, with half the people wearing helmets and proper attire, and the other half clad in their underwear. Graduate students positioned on the sidelines graded the volunteers’ performance and tallied any unsafe maneuvers. The results showed that the unclothed group practiced much safer riding habits, thereby supporting risk compensation theory – and Britain’s reputation for eccentricity.

Many other, more targeted studies from the 1990s also painted automotive safety as a zero-sum game, with any gains in safety vitiated by greater risk taking. Not only did careful, well-designed experiments in Europe show that anti-lock brakes lead to more aggressive driving, but many of the countries that adopted seat-belt legislation found that auto fatalities barely budged, while the number of pedestrians injured by cars actually increased.

These studies make for fascinating reading but can be hard to integrate with common sense. Anyone who has driven a vintage car knows they do not feel as safe. Fortunately, over the last ten years the scholarly consensus has shifted – pushed by both empirical and theoretical developments – to a much more nuanced view.

The key empirical development was the overwhelming success of electronic stability control (ESC). Introduced in 1995, the technology works in conjunction with ABS to prevent over- and under-steer. The NHTSA reports that ESC reduces accidents by 35% – a number large enough to outweigh the study’s methodological shortcomings, which were legion. This success prompted researchers to reexamine ABS, and with the benefit of hindsight, many now believe that ABS is ineffective for very specific reasons. (Essentially, when the brake pedal automatically ‘pumps’, it disconcerts drivers and they instinctively raise their foot.)

Theoretical developments have had an even more profound effect on how we think about risk compensation. These developments reflect an ongoing revolution in statistical practice — enabled by Moore’s law as well as Bayes’ law — that allows us to peek into the black box of causation. Thanks to books like Freakonomics and Jared Diamond’s new anthology, the reverberations of this revolution have started to enter the public consciousness, but the full sweep of its implications remains vastly under-appreciated.3

It is, admittedly, both technically and philosophically complex. But at the most concrete level, the use of MCMC (Markov chain Monte Carlo) methods to iteratively ‘solve’ Bayesian networks allows us – in certain cases – to make strong claims about causes from naturally observed data rather than from carefully randomized experiments.

This may be easier to explain with an example.

Traditionally, to determine the efficacy of seatbelts in preventing fatalities, we would randomly assign people to two classes and then ensure that the control class never wore their seatbelts, while people in the other class always buckled up. We could not simply look at people who already wear seatbelts and those who do not, because the people who naturally wear seatbelts are more likely to be naturally cautious drivers. We couldn’t even do the study longitudinally – by, say, looking at a country before and after seat-belt legislation – because confounding factors like a steadily aging population or the growth in texting would distort our conclusions.

But these rules of statistical best practice are being overturned. There are now at least a half-dozen studies that use sophisticated causal inference to tease apart the root causes and consequences of automotive safety. What they have found is satisfying in its lack of surprise. Concisely, risk compensation exists, but not universally – it is personality dependent. “Sensation seeking” is one blunt-edged, but not totally wrong, way to characterize the people who do exhibit risk compensation.
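The stratification trick at the heart of these causal methods can be shown with a toy simulation – my own illustration, with invented numbers, not data from the studies above. A hidden ‘caution’ trait confounds seatbelt use and fatality risk, so the naive belted-vs-unbelted comparison wildly overstates the seatbelt effect, while conditioning on the confounder recovers it:

```python
import random

random.seed(0)

# Toy illustration (invented numbers): 'caution' confounds seatbelt use
# and fatality risk. Cautious drivers both buckle up more AND crash less,
# so a naive belted-vs-unbelted comparison overstates the seatbelt effect.
# Stratifying on the confounder (the "backdoor adjustment") recovers it.
TRUE_EFFECT = 0.02  # assume seatbelts cut fatality risk by 2 points

def simulate(n=100_000):
    rows = []
    for _ in range(n):
        cautious = random.random() < 0.5
        belted = random.random() < (0.9 if cautious else 0.3)
        risk = (0.03 if cautious else 0.10) - (TRUE_EFFECT if belted else 0)
        rows.append((cautious, belted, random.random() < risk))
    return rows

def rate(rows):
    return sum(died for _, _, died in rows) / len(rows)

rows = simulate()
# Naive comparison: inflated, because the belted group is mostly cautious
naive = rate([r for r in rows if not r[1]]) - rate([r for r in rows if r[1]])

# Adjusted: compute the effect within each caution stratum, then average
adjusted = 0.0
for c in (True, False):
    stratum = [r for r in rows if r[0] == c]
    effect = (rate([r for r in stratum if not r[1]])
              - rate([r for r in stratum if r[1]]))
    adjusted += effect * len(stratum) / len(rows)

print(naive > adjusted)  # True: adjusting removes the confounding bias
```

Real studies must infer or approximate the confounder rather than observe it directly – that is where the Bayesian-network and MCMC machinery earns its keep.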

Nonetheless, the insight that intelligent safety features will only help a subset of the population can seem deflating. The big stories in technology are always the ones that surpass expectations.

I will argue that for at least one industry – auto insurance –  autonomous safety features will lead to better than expected performance. The argument, detailed below, is circuitous, but stems from the realization that the mathematical risk models auto insurance companies rely on have never described reality very well. The coming innovations in automotive safety will actually push reality in the direction of the model, making the business significantly more profitable.

Insurance: Not as boring as you think

But …  In-sur-ance. It does not whisper alluringly, like diamonds, or sigh seductively, like oil. It creaks; it pops. Insurance is not usually associated with fast-growing companies, charismatic CEOs, or technological discontinuities. The very nature of insurance seems most amenable to gradual, incremental progress. It’s safer.

But don’t mistake characteristics of the recent past for inalienable traits. Study the history of insurance – as the industry’s fastidious, compulsive record-keeping uniquely allows – and you notice that the most lucrative periods always come in the wake of big socio-technical changes. Changes that eliminate risk faster than prices can fall.

Fire insurance, for example, 1907 to 1927. The massive destruction caused by the 1906 San Francisco earthquake and fire sets off a nation-wide overhaul of building safety codes, decisively reducing the frequency and potency of commercial fires4. Or, perhaps more analogous to automotive safety: Marine insurance, late 1880s. Wood-and-sail ships are forcefully retired by faster and safer steel-and-steam vessels. England dominates in the construction and operation of these new steel ships, fostering a powerful local marine insurance industry and relegating American insurers to table scraps. Lloyds becomes Lloyds.

This same innovation-driven dynamic occurs at the micro scale as well, producing the cyclic pattern so characteristic of insurance industries. These cycles are often explained with vague appeals to supply and demand, but those are rarely the real drivers – regulatory hurdles (on supply) and regulatory requirements (on demand) sap their force. Instead, small, predictable social and technical changes are constantly reducing risk, and thus cost. The regulatory rate-setting process inserts a lag between a risk reduction and the associated drop in price, and voilà: cycle!

The chart below shows the historical combined ratio for auto insurance since 1930. (Remember, the combined ratio is incurred losses plus expenses divided by earned premiums. The further the ratio is below 100, the more profitable the underwriting.) For context, today’s auto insurance companies have combined ratios right at, or slightly above, 100 and depend on ancillary services and investment income for profitability5.
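To make the definition concrete, here is the arithmetic in a few lines of Python (the dollar figures are invented for illustration):

```python
def combined_ratio(incurred_losses, expenses, earned_premiums):
    """Combined ratio as a percentage: (losses + expenses) / premiums.
    Below 100 means the underwriting itself is profitable."""
    return 100 * (incurred_losses + expenses) / earned_premiums

# An insurer earning $500m in premiums against $420m of incurred losses
# and $90m of expenses is underwriting at a loss (ratio above 100):
print(combined_ratio(420, 90, 500))  # 102.0
```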

[Chart: combined ratio for U.S. auto insurance, 1930–present]

You can easily make out the cycle starting just after 1945 and repeating every six or seven years. The initial peak was of course the end of WWII – gasoline was no longer rationed, servicemen returned, inflation soared – but the cycle itself was the product of the McCarran-Ferguson Act, passed in 1945, which led most states to regulate auto insurance rates.

I have shown that there are good reasons – both historical precedent and structural mechanisms – why significant risk reductions lead to increased profitability. What’s left is to show that autonomous safety technologies will reduce the risk covered by insurers more than is expected… even in the face of “risk compensation”.

Accident Theory

To do this, you first need to understand how auto insurance companies think about accidents.

Anyone who has had a car accident knows there are two perspectives. There is other people’s perspective, also known as negligence theory, which says accidents are the result of momentary carelessness. And there is coincidence theory, which says that if you drive enough miles, something bad is bound to occur.

Both, of course, have some element of truth. Your grandmother is truly a hazard, despite only driving to church on Sundays. And Mario Andretti would have accidents too, if he commuted three hours to work. The question is which factor dominates.

The data unequivocally says the latter. Accidents are most strongly correlated with the number of miles driven. To put it in actuarial terms, miles driven is an exposure variable, and is multiplicative, while negligence is a class variable, and additive.
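A toy model makes the actuarial distinction concrete (the rates and surcharges below are invented for illustration, not drawn from any rate filing):

```python
def expected_claims(miles, negligence_surcharge, base_rate_per_mile=2e-6):
    """Toy claim-frequency model (invented numbers, not actuarial tables).
    Miles driven is an exposure variable, so it enters multiplicatively;
    negligence is a class variable, a flat additive loading."""
    return miles * base_rate_per_mile + negligence_surcharge

# A negligent Sunday-only driver versus a careful long-haul commuter:
grandma = expected_claims(miles=1_000, negligence_surcharge=0.01)
commuter = expected_claims(miles=30_000, negligence_surcharge=0.0)
print(grandma < commuter)  # True: exposure dominates negligence
```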

Nonetheless, for historical, political, and idiosyncratic reasons, insurance premiums have always been firmly rooted in negligence theory. It is this tension — between how insurance companies think about accidents, and how accidents actually happen — that leads to logical inconsistencies and inefficient pricing.

For example, insurance companies almost never consider “no-fault” accidents when evaluating your driving record. (In fact, doing so is prohibited in some states.) However, no-fault accidents are an extremely good predictor of future fault accidents. The correlation would be bizarre if accidents were truly the result of negligence, but makes perfect sense if accidents are largely stochastic.

A far more pernicious inefficiency stems from the empirical correlation between low credit scores and auto accidents. Insurance analysts, viewing the world through negligence theory blinders, explain the higher number of accidents by characterizing people with bad credit as impulsive, reckless, and – frankly – not that sharp. This explanation, in only marginally more politic terms, is frequently trotted out as fact in the popular press.

It is bullshit.

The real explanation is more subtle. Because insurance acts as a per-car tax, people naturally try to reduce the number of cars they have assessed. In practice, this means letting the insurance on their second vehicle lapse and using their primary car exclusively. Both actions increase the average miles driven per poor-credit-person car (PCPC) – and, therefore, the number of accidents per PCPC. The unfortunate end game is that people with less money are stuck with disproportionately high insurance premiums.

The solution, say some policy experts, is to price insurance on a per-mile, rather than per-car, basis. People with poor credit would be disincentivized to drive, and would thus have fewer accidents and lower premiums.

Coincidentally enough, in the last two years, nearly every auto insurance company has announced just such a “pay-per-mile” plan with an excess of fanfare. Even insurance companies like to be on trend, and this press release stampede was all about a shiny new piece of technology: the secure GPS system, used to track miles driven.

Try to actually sign up for one of these per-mile plans, however, and you will face a seemingly infinite number of obstacles. Most insurers killed the plans before the press releases went cold because they would have been a drag on profits. It is easy to see why: all the customers who drive very little would sign up for the GPS programs, leaving just the long-haul drivers in the pool. The cross-subsidies and mixing of means that lie at the heart of any insurance program would be eliminated.6
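The unraveling the insurers feared is a classic adverse-selection loop, easy to sketch with invented numbers: each driver defects to the per-mile plan whenever it undercuts the flat rate, the flat rate is then recomputed over whoever remains, and the pool collapses toward the heaviest drivers:

```python
# Stylized adverse-selection "death spiral" (all numbers invented).
# The flat plan must charge the average cost of whoever stays in it;
# drivers leave for per-mile pricing whenever that is cheaper for them.
COST_PER_MILE = 0.05  # insurer's assumed expected cost per mile driven

drivers = [2_000, 5_000, 8_000, 15_000, 25_000, 40_000]  # annual miles

flat_pool = list(drivers)
for _ in range(10):  # iterate until the pool stabilizes
    flat_premium = COST_PER_MILE * sum(flat_pool) / len(flat_pool)
    # Only drivers for whom the flat rate is still no worse remain:
    flat_pool = [m for m in flat_pool if COST_PER_MILE * m >= flat_premium]

print(flat_pool)  # [40000] - only the heaviest driver is left
```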

Autonomous safety features offer a much more sustainable model for insurance companies. The computational car will allow the majority of drivers – the non-risk-takers – to reduce their chance of an accident asymptotically, to the point where miles driven is no longer the determining factor. Then insurers’ models, which price as if your personality rather than miles driven controls your accident rate, will accord with reality.

Economic Performativity

This counter-intuitive phenomenon — the real world remade in the form of the model, rather than the model adjusted to reflect reality – is currently a hot topic among economists, under the rubric of ‘performativity’. It turns out to be a surprisingly ubiquitous process, underlying many economic developments. The canonical example is the Black-Scholes equation, first published in 1973. Before then, option prices on the Chicago Board of Trade varied markedly from what Black-Scholes predicted. Within a few months of the equation’s publication, however, options were trading in-line with theory.7

In other words, Black-Scholes became an accurate model of option pricing … because people began using it to price options. But it was also self-fulfilling in a deeper sense. Just as models in physics rely on simplifying assumptions – frictionless inclines, no wind resistance — Black-Scholes assumes zero transaction costs, unlimited borrowing at the riskless interest rate, and unconstrained short-selling. These were all wildly unrealistic in the pre-E*TRADE world of 1973. However, as regulators adopted Black-Scholes to govern everything from bank risk to executive compensation, the model’s assumptions rode along like stowaways, becoming deeply embedded in economic policy. The world was remade in the model’s image.
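For reference, the model at the center of this story is compact enough to state in full; here is a minimal Python rendering of the 1973 call-option formula:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF, built from the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes (1973) price of a European call: spot S, strike K,
    time to expiry T in years, riskless rate r, volatility sigma."""
    d1 = (log(S / K) + (r + sigma**2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# An at-the-money call, one year out, 5% rates, 20% volatility:
print(round(black_scholes_call(100, 100, 1.0, 0.05, 0.2), 2))  # 10.45
```

Every input here is observable except sigma – which is why, once the formula was adopted, traders could quote and converge on “implied” volatilities, reinforcing the performativity loop described above.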

Performativity is a powerful prism to view events through, but like previous big ideas – Kuhnian paradigms, Shannon information theory – it is in danger of being over-used to the point of meaninglessness.8


The second trend in AI worth paying attention to goes by the ungraceful acronym LbR: Learning by Reading. The technology is slightly earlier in its development than autonomous vehicles (although well ahead of where most people assume) so I will restrict myself to describing the technology and a few of its implications.

Earlier, I described how an LbR system “knows everything — every single fact, the entirety of human knowledge — but can only reason at the level of a cockroach.” You might argue this is hardly revolutionary: you have friends, even colleagues, who fit this description. The difference, of course, is LbR systems won’t just act like they know everything… they really will. With just a few caveats concerning the word “everything”. And the word “know”.

In this context, “everything” means every fact accessible through the public Internet. (So not technically everything, but plenty.) “Know” is trickier to define: it is more than just storing the text in a database, but still well shy of conscious understanding. A concrete demonstration of how LbR works (from a 2005 paper about Yago, an early German system) makes this clear:

“Assume that a knowledge-gathering system encountered the following sentence:

Einstein attended secondary school in Germany.

Knowing that “Einstein” is the family name of Albert Einstein and knowing that Albert Einstein was born in Germany, the system might deduce that “X attended secondary school in Y” is a good indicator of X being born in Y. Now imagine the system also found the sentence

Elvis attended secondary school in Memphis.

Many people have called themselves “Elvis”. In the present case, assume that the context indicates that Elvis Presley is meant. But the system already knows (from the facts it has already gathered) that Elvis Presley was born in the State of Mississippi. Knowing that a person can only be born in a single location and knowing that Memphis is not located in Mississippi, the system concludes that the pattern “X attended secondary school in Y” cannot mean that X was born in Y. Reconsidering the first sentence, it finds that “Einstein” could have meant Hermann Einstein instead. Hermann was the father of Albert Einstein. Knowing that Hermann went to school in Germany, the system figures out that the pattern “X attended secondary school in Y” rather indicates that someone went to school in some place. Therefore, the system deduces that Elvis went to school in Memphis.”
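The core of that deduction – keep a candidate pattern only while no known fact contradicts it – is simple enough to sketch in a few lines of Python (a drastic simplification of what Yago actually does):

```python
# Minimal sketch of the pattern check described in the quote above:
# a candidate pattern counts as evidence for a relation only if the
# entity pairs it matches never contradict facts already in the
# knowledge base. (Entity disambiguation is assumed already done.)
born_in = {
    "Albert Einstein": "Germany",
    "Elvis Presley": "Mississippi",
}

# (X, Y) pairs extracted by "X attended secondary school in Y"
pattern_matches = [
    ("Albert Einstein", "Germany"),
    ("Elvis Presley", "Memphis"),
]

def pattern_supports(relation, matches):
    """True only if no known fact contradicts the candidate relation."""
    for x, y in matches:
        if x in relation and relation[x] != y:
            return False
    return True

print(pattern_supports(born_in, pattern_matches))  # False: Elvis refutes it
```

With only the Einstein sentence the hypothesis survives; the Elvis sentence is what forces the system to abandon it, exactly as in the quoted example.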

If this still sounds like little more than a glorified encyclopedia, I recommend finding an eight-year-old and asking to borrow their 20-Q ball.9 Despite having a knowledge base many, many orders of magnitude smaller than Yago, and the most rudimentary logic possible, the combination can project a disconcerting amount of intelligence.

The field has advanced rapidly since the turn of the century because most of the software plumbing required for LbR was already developed. Detection of linguistic relationships is reasonably mature, thanks to information extraction applications. Logical reconciliation is a standard feature in semantic web ontology systems. Even the “soft” problems that might seem unique to LbR bear a striking resemblance to classic concerns of literary criticism. (Questions like: where does the meaning of a text reside? In the intent of the author, in the reader’s perception, or is it somehow intrinsic to the text?)10

Despite the technology’s looming maturity, there has been almost no discussion about potential implications. That is concerning because massive knowledge bases, and the kind of weak omniscience they provide, can be profoundly unsettling, in a way that autonomous vehicles cannot.

Admittedly, LbR is nostalgic in at least one sense. Remember that one of the profound effects of the internet was that it turned everyone into a researcher. We learned how to dig through dozens of web sites in search of the name of an actor from a dimly remembered show, or to determine whether a skin lesion is cause for concern. Perhaps this was just a momentary phenomenon, an interregnum in the normal order of things, and once our PC knows everything, digging for facts will be like doing long division by hand.

(You might object that you primarily search for opinions, not facts, so a LbR system will be of little use. This is usually a false distinction. Consider that “what is the best burrito in SF” (an opinion), and “what do most people consider the best burrito in SF” (a fact) are normally considered equivalent.)

This LbR scenario is important because of the commercial implications. After all, a giant database that knows everything..? I-It’s the Google killer! Or, to define the threat more precisely: when pressing double-bucky-F automatically fact-checks a document, will Google’s algorithmic search be commoditized out of business?

Man vs. Google

What we can say for certain is that LbR will not catch Google unawares. It was already the subject of a Paul Revere-style (“the LbRs are coming..!”) internal memo, sent out by Google senior VP Jonathan Rosenberg early last year. And the company certainly understands the technology’s intricacies: many of the grad students who built the first generation of LbR systems now work at Google.

But LbR is just one component – one instantiation — of a schism that is reshaping the entire search market. While this rift has been widely discussed, Google’s positioning relative to it is typically misunderstood.

As search queries become increasingly complex – driven by rising expectations and growing familiarity – search providers are heading in two different directions. Sites like Hunch, Bing, and Alpha view the rising complexity as a software problem and are busy adding more intelligence to their engines, using techniques that trend asymptotically toward LbR. Alternatively, sites like Aardvark and Answerly see complex queries as primarily a social design issue and are mining social graphs in order to connect questioners with live experts.

Partly because the software solution sounds the more promising (cheap usually trumps correct), people assume Google leads the software faction. They do not. Although Google has done a terrific job of marketing themselves as a rigorously objective, by-the-numbers Mr. Transistors – with all the attendant benefits from a regulatory point of view – their spin elides a more interesting truth.

Google trusts man over machine. Most famously with their original page-rank algorithm, most importantly in their reluctance to use machine optimization for advertising11, and most recently with their acquisition of Aardvark, Google sides with the subjective, ineffable stink of humanity nearly every time. To cynics, this is merely a side effect of their hubris – Googlers believe they are the smartest thing in the server room. To naïfs, it is the logical outgrowth of Google’s profoundly humanitarian culture. (No points for guessing my allegiance.) What is increasingly clear, though, is that Google’s cultural quirk is central to their vision of the future.

For starters, it helps explain Google’s February acquisition of Aardvark. Purchased for $50m, Aardvark was started by two former Google employees to provide real-time search over social graphs. For example, a user can ask ‘Vark where to eat in Los Angeles. Aardvark searches your social graph – the content generated by your friends and followers — looking for people that mention LA restaurants. It is then up to Aardvark to optimize across availability (response time), depth of knowledge, and repeat intrusions to determine which one or two people should receive your question. Any response from them goes straight to you.

Aardvark’s technology steers questions to the right people, and then steps out of the way. Other systems and solutions are less reticent. If you spelunk through the Office 2010 Beta binaries, or study some of MSR’s recent projects like WhatShouldIBuy, you get a good sense of how Microsoft hopes to do an end-run around Google. They are using sophisticated linguistics and network analysis out of the LbR playbook to mine social graphs, and then automatically selecting an appropriate strategy. If a similar question has been asked, Microsoft will reuse, after tweaking, the previous answer. Alternatively, if there were no similar questions, the software tries to automatically divine the answer (using extraction and logical entailment). Finally, if all else fails, the program falls back on forwarding the question to the appropriate people in your network and then saving the answer in its growing database.

To handicap these competing approaches, we need to understand the kinds of questions people ask search engines, versus the kinds people ask their friends. Researchers have just started studying this issue, but their preliminary findings offer some intriguing hints. The majority of questions can be answered equally well by search engines or social networks. But a sizable fraction — 12% according to one study — of the questions people ask social networks aren’t really questions at all. They are complaints. (What are they thinking?)

I do not want to oversell the point I am making here — it is small, and easily misconstrued. I am not arguing that Google is incapable of sophisticated linguistic processing. Or that LbR solves everything. I am suggesting that a conceptual axis — with automated, LbR techniques on the left, and people-centric, social media solutions on the right — is a useful way to delineate emerging search systems. I am also suggesting that Google culturally favors people-centric solutions to a much greater degree than typically acknowledged.

Finally, if I may digress one last time, this conceptual axis turns out to be a useful way to look at the broader industry, and to understand the impact of social media.

The social messaging vs. computational intelligence divide is even relevant to ecommerce sites. Gilt Groupe, for example, spends no money on SEO or behavioral advertising and, by design, will never be found in Google’s automatically constructed price comparisons. Instead, Gilt relies entirely on the social graph for marketing, promotion, and customer acquisition.

Just as paid-search changed the advertising industry by emphasizing metrics and accountability – the traditional subjects of advertising’s blue-collar relation, direct marketing — social media is now causing its own upheaval. PR firms and advertising agencies both claim social media as their rightful domain, resulting in a few high-profile clashes. This battle is tough to call. Few institutions can achieve the level of dysfunction of ad agencies, but the big PR firms like Edelman certainly give it a good college try.

The most important impact of social media is a fuzzy, conceptual one. As social graphs are mined to answer questions, aggregate opinions, influence consumers, and provide sales leads, civilians are becoming intertwined in the Market to an unprecedented extent. When advertisers talk about buying the networks, they are no longer referring to the big three — they mean buying access to your friends. Companies are even starting to talk about “virtual sales forces” — essentially mercenaries who sell to their friends and followers. It is easy to lampoon the four-part series The Times will undoubtedly run someday soon: “The Price Of Friendship: How Technology Puts People For Sale”, or something equally lumbering. But it doesn’t make this trend any less disquieting.

(If you think this all sounds far-fetched, just pick up a biology journal. That was an upstanding academic community that, thanks to changes in research practices and IP laws, became deeply intertwined with industry, and went from morally superior to ethical train wreck in about a decade.)

Alternatively, you may want to take a look at some products I really, really believe in.


Essays about artificial intelligence normally end with some ponderous treacle proclaiming how the more we study AI, the more we appreciate the human mind in all its splendiferous mystery.

This is, to be charitable, wishful thinking.

AI does teach us about intelligence. It teaches us that “intelligence” is a motley assortment of heuristics, kludges, and cheap tricks. The danger with AI is not that machines will become smarter than us, but that we will become as dumb as machines. The absurdly prescient William S. Burroughs was wise to this fifty years ago. “The study of thinking machines teaches us more about the brain than we can learn by introspective methods,” pronounces Dr. Benway, in 1959’s Naked Lunch. “Western man is externalizing himself in the form of gadgets.”

I have tried to sketch out some of the near-term implications of AI, but it is substantially easier to predict the long term. There we can rely on a few core principles, such as that technology, like all life forms, is fundamentally self-perpetuating. That means the technologies I described – autonomous vehicles and learning by reading – will be most important for what they enable.

That makes sense. If we were trying to build a true, general AI, we would first need to create a way for it to get around and interact with the larger world. And we would need a system for rapid knowledge acquisition, so that we wouldn’t have to manually explain every detail of how the world works.

Which, of course, is precisely what we’re building.

– Steve G. Steinberg


1 The Maes-Garreau law comes from research by MIT professor Pattie Maes and journalist Joel Garreau, which found that predictions of a technological Deus Ex Machina – whether an immortality pill or thinking machines – are always scheduled to occur within the last few years of the prognosticator’s expected lifespan.

2 Joplin, Janis.

3 One contribution to this field, “Scalable Techniques for Mining Causal Structures”, was written by a pre-Google Sergey Brin. I take this as evidence that Brin has a knack for fingering the technological zeitgeist, but also, based on the paper’s startling acontextuality — it betrays no knowledge of the huge body of related research — that he was bad at searching.

4 The irony, of course, is that much of the fire damage in 1906 was from insurance-motivated arson. Almost nobody had earthquake coverage, so people burned their already collapsed buildings to ensure reimbursement. This may also explain why the city of San Francisco saw nothing wrong with weakening safety codes, in contrast to the rest of the country, to help expedite rebuilding.

5 In a bid to understand what sort of “ancillary services” auto insurers offer, I recently looked at Admiral (ADM.LN), a well-respected UK auto insurer. I was gobsmacked to discover that the largest chunk of the company’s profits comes from selling leads to lawyers. Call me naïve: I did not realize that the insurance industry’s kvetching about runaway litigation forcing higher premiums came with a smirk. Back in my misspent youth, we had a word for this sort of thing. We called it stealing.

6 Although the GPS-based programs are largely a mirage, you do see some insurers promoting a “low miles” discount. These are primarily offered in California and Texas, where the state regulatory bodies have been pushing insurers toward a per-mile system for some time.

7 The process is also reminiscent of that well-worn joke about the French and their love of theory. (After many hours of debate on the UN floor, the French delegate says, “Fine, I accept that your idea works in practice… but does it work in theory?”)

8 In this month’s Interview, Michael “Club Kid Killer” Alig rationalizes his drug abuse as “performative”: by acting like a drug addict, he became one.


10 Anyone who lived through the Science Wars of the 1990s, when physicists and English professors engaged in pitched battle over the nature of objectivity, the meaning of (post)modernism, and the drawing of academic borders, will be amused to learn it is no longer unusual for computer scientists to cite works by Barthes, Derrida, and Foucault. Once dismissed as impenetrable, pernicious nonsense, their books are now simply instructional. (A strong rebuttal, to my mind, of the current conventional wisdom that lit crit must become more “scientific” to survive.)



  1. I know it’s a bank-shot claim to fame, but I introduced Mr. Garreau to Patti Maes’ work.

    Comment by Geoff Cohen — July 5, 2010 @ 5:26 pm

  2. Janis Joplin may have sung ‘Me and Bobby McGee’ for your quote but it was written by Kris Kristofferson. You might want to correct that citation to properly reflect its origin.

    Comment by bilbous — July 5, 2010 @ 6:00 pm

  3. Not that I think you need it, but compliments on an amazingly well written, well reasoned piece. It was a joy to read!

    Comment by Charles_p — July 5, 2010 @ 6:27 pm

  4. re: fn #2, your lbr computer ought to know that “me and bobby mcgee” was written by Kris Kristofferson, along with Fred Foster.

    on a more substantive note, part of my job involves insurance regulation and in that context, I have been involved in the debate over pay-by-the-mile auto insurance (aka “pay as you drive” or “save as you drive less” or what you call a “GPS plan”). The resistance you report on the part of insurance companies is certainly real, but there’s more to it than the explanation you give.

    many advocates for pay-by-the-mile are enviros who want to change the economic incentives so that people drive fewer miles, using less oil, emitting less pollutants, and causing less road congestion, and ultimately less road construction and sprawl. (all good things, btw) however, the per mile cost of insurance for a low risk insured would be around 3 or 4 cents – which seems to be too little to have a major impact on vmt – given what we’ve learned about the elasticity of demand with gasoline prices over the past 3 years.

    So the biggest impact would not be to change driving behavior, rather, as you point out, it would be to segregate drivers. drivers/car owners who currently pay too much today because they drive their vehicles fewer miles than average would figure out quickly that pay-by-the-mile plans would save them money and would migrate to such plans. as the general fixed price (or, “all you can eat”) pool lost increasing numbers of lower risk (low miles) drivers, costs and premiums would rise, increasing the financial incentives for lower mile drivers to move to a GPS plan, and so on. eventually, even if insurance companies kept unlimited miles plans around as an option, the premiums in those plans would effectively be based on assumed very high miles per car.

    right now, the low miles drivers are lower risks for the insurance companies, but pay average premiums w/o regard for this lower risk factor, so they are presumably pretty profitable for the insurance companies. For the insurance industry in general, this is not a bad status quo.

    But for any individual insurance company, there should be a financial incentive to offer — and aggressively market — pay-by-the-mile plans. A company marketing such a plan would be able to offer significant savings to low miles/low risk drivers and therefore attract lots of market share. And at least initially, they would only have to share part of the risk and cost reductions from having a lower mileage pool with their customers, making this strategy more profitable than average. Eventually, as other market participants offered their own plans, outsized profits would go away, but the industry would be back to an equilibrium that was roughly as profitable as it is today.

    Presumably, something similar happened in the industry as other innovations to segment risk were introduced; e.g., rating based on driving records or even credit scores. I.e., prior to the rating innovation, lower risk customers were paying the companies average premiums, but the companies decided they’d be better off to charge their lower risk customers less and their higher risk customers more. What’s changed today? The economist in me wonders whether it is a greater willingness to engage in collusive behavior across the industry, but my inner sociologist tells me that it may be more likely that the industry is receptive to ideas hatched by its own bean counters and hostile to ideas that are pushed on it by outsiders.

    Comment by Jason Marks — July 5, 2010 @ 7:26 pm

  5. thanks for this. I’ll just need a couple of hours checking relevant searches I opened in tabs now.

    Comment by Maarten. rotor — July 5, 2010 @ 7:32 pm

  6. > This is, to be charitable, wishful thinking.

    I think you just made that up

    Comment by Basil — July 5, 2010 @ 8:52 pm

  7. > I think you just made that up.

    When did this get to be the most popular trite comment on any statement that, in the reader’s mind, cannot be supported by facts? In the context of statements about the future it is truly ludicrous. One hopes it was meant as (lame, reddit-style) humor.

    Comment by Bob Foster — July 6, 2010 @ 12:41 am

  8. > […] it is no longer unusual for computer scientists to cite
    > works by Barthes, Derrida, and Foucault. Once dismissed as
    > impenetrable, pernicious nonsense, their books are now simply
    > instructional.

    Can you point to some specific CS papers that cite postmodern
    literary theory?

    Comment by 23Skidoo — July 6, 2010 @ 12:42 am

  9. Say that drivers of regular cars with no safety features have a fatal crash once every 1 billion miles, and automated safety feature cars are much safer, but the software screws up, causing a fatal accident every 1 trillion miles. The automated cars are 1000 times safer, but how do you assign the liability for the accident? Does this mean people are going to sue Toyota, the AI software company, the insurer, or the driver whose car malfunctioned?

    When a human makes a mistake the liability is clear. Unless Toyota and the software company are taking in tons of money from the people who aren’t crashing because of the enhanced safety features, I can’t see how this product is going to avoid the liability problem.

    Comment by Kevin Burke — July 6, 2010 @ 12:43 am

  10. That was the most sharply written and reasoned piece I have read in recent memory.

    Thank you.

    Comment by Tony — July 6, 2010 @ 12:51 am

  11. @Jason, if cherry-picking low-mileage drivers result in high-mile customers getting vastly increased premiums, then that would become a fairly powerful incentive to reduce your miles driven so you qualify for a low-mileage plan. Which would have beneficial effect on oil use, sprawl and so on.

    Comment by Janne — July 6, 2010 @ 4:07 am

  12. > The automated cars are 1000 times safer, but how do you assign the liability for the accident?

    If accidents are 1000 times less frequent, so are lawsuits, so it’s unlikely to matter that much.

    Seriously though, auto manufacturers have litigation insurance. If using the safety features can lower those costs more than what it costs to implement the features, they’ll take the features and damn the liability.

    Comment by Bob Bobson — July 6, 2010 @ 4:59 am


  14. Great article! Thanks for posting it.

    Your final point – that a general AI will need to be an autonomous vehicle with reasoning capability – is of course a central thesis of AI researcher Hans Moravec and robotics chief Rod Brooks. But 20 years on, you don’t see many of these robots crawling around. But the proliferation of robotic, drone aircraft – military and civilian – is astounding. In the past ten years autonomous aircraft have become standard issue in the US military, and are being deployed for border patrol. Municipal police are experimenting with small helicopter drones: these things will soon be everywhere. Of course, there are people in the loop operating the robots, but they can fly without a pilot and carry out missions with only general direction from a remote operator. There is significant pressure to increase the information-synthesis and reasoning capability of these drones. Does a hunter/surveillance drone need consciousness? No, but it can get pretty smart anyway.

    Comment by Boyd Waters — July 6, 2010 @ 6:06 am

  15. So good, I almost wet my pants reading it.
    Now I just need to print it, and read it three more times, to really absorb, consider and appreciate all the detail.

    Comment by IncredibleMouse — July 6, 2010 @ 6:53 pm

  16. @Janne:

    my point was that, in terms of insurance costs, there’d eventually be a pretty big difference – several hundred dollars a year – that would drive those that could benefit to pay-per-mile plans. And there’d be no reason not to switch if you could save.

    But when it came to miles driven, the savings wouldn’t be all that great, and there would be a “cost” to driving less, e.g., convenience, keeping a job, etc. After all, many people (myself included) drive a lot of miles today, even though the cost per mile including gasoline, wear and tear on the car, maintenance, consumables like tires, and depreciation are at least 30 cents/mile. Making this variable cost 10% higher unfortunately won’t change the equation for most high mileage drivers. We saw this in action when gas prices spiked in 2008 – the first $1/gallon increase on the way up didn’t affect consumption much.

    Comment by Jason Marks — July 7, 2010 @ 2:00 am

  17. In some respects, auto insurance profits come from incomplete rating factors. As an insurer you want to be able to do a few things: 1) dramatically overcharge the really low risk drivers (relatively speaking – the person who drives 20 miles/month pays only slightly less than the person who drives 2000). 2) drive away people every other company can accurately rate too, because you make little or no profit on those customers. 3) Take on and retain the vast middle of drivers on whom a modest profit can be had on the insurance or through cross-selling or investing premiums.

    Everyone assumes #3 is what makes a successful insurance company, but #1 and #2 are where all the money is made. It’s all about finding spots where risk and pricing is out of whack. For instance, years ago Progressive made tons and tons of money when they rated drivers with DUI’s but otherwise clean records as being far less a risk than typically assumed. Most companies would shun DUI-convicted drivers or price them out of a policy. Progressive’s internal ratings often had these people as only slightly more risky than average… but pricing would not be based on that risk assessment. Instead, it was all about charging a couple hundred bucks under what other companies would charge.

    Anyway… I think some of the resistance in the insurance industry to a per-mile pricing structure is that it would, in the end, be a big step forward in assessing risk and it would be quite visible to the customer. Whereas a lot of profit is currently made on driving histories that aren’t as risky as commonly assumed. Rating customers too accurately is a threat to how many companies make a lot of money. That’s a change that will cause some upheaval and pain. It will happen eventually no doubt, but probably not until a small or medium sized insurer aggressively implements it and steals a bunch of market share.

    Comment by Gloam — July 7, 2010 @ 5:37 pm

  18. Your insurance-as-car-tax explanation for why credit scores correlate (negatively) with number of claims per 100 car years is right on. The effect on premiums is non-trivial. From the industry funded study in 2003 by Miller and Smith, property damage liability claims are double for those with the lowest credit versus those with the best credit scores, all else equal. That means 6 claims versus 3 claims per 100 car years. But that’s the difference between a category of cars averaging 14,000 miles per year versus one averaging 7,000 annual miles.

    I first included the explanation in a report to the Texas Legislature in 2000 on zip-code rating and have widely circulated it in testimony and academic presentations. Before the current article, however, I have only had the insight independently validated once before–as a footnote citation to a paper of mine in the 2007 Federal Trade Commission report to Congress on credit score rating of auto insurance.

    Possibly the current article will be the beginning of expanded public discussion of the correlation and its implications for the car-year exposure unit and the alternative odometer-mile exposure unit. So far the discussions of this that are posted on “” have been pillowed by law and economics academics, along with risk and insurance academics, and, it should be added, by environmental economists.

    By the way, my 2005 paper “Driving With The Brakes On: Guido Calabresi’s Failed 1970 Auto Insurance Case Against Safety-Device Mandates” has relevant comment on the anti-lock brakes discussion above. Yale Law School Dean and now 2nd Circuit judge Calabresi managed to finally admit “you are right, of course” to me. Have a look.

    Comment by Patrick Butler — July 8, 2010 @ 11:11 pm

  19. Last line in penultimate section: “Alternatively, you may want to take a look at some products I really, really believe in.”

    I laughed. Out loud.

    Great paper, well worth a careful read. Thanks.

    Comment by nealabq — July 13, 2010 @ 1:09 am

  20. I’m interested in the bicycle helmet/underwear study, but haven’t been able to find a reference. Can anyone help me find the source?

    Comment by Karsten Loepelmann — July 13, 2010 @ 3:33 am

  21. The discussion about the poor-credit person paying disproportionately high insurance premiums is illogical. If it is true that individuals with poor credit tend to put more miles on their cars (presumably because they have fewer cars, so they put all of their miles onto a single car), and if the “coincidence theory” is also true (more miles driven = more chance of accident), then those who drive more miles per vehicle should pay higher premiums. If an individual can afford multiple vehicles (because they have better credit) then that individual pays an insurance premium for each car. They spread the risk of accident over several cars, but since each car has a premium, the overall cost to the individual with good credit is much higher than for the individual with poor credit, even though the “per-car tax” is higher for the credit-poor person.

    Comment by Joshua Larson — July 20, 2010 @ 2:21 pm

  22. very, very well done

    gets me to recall my ai/expert systems days under donald michie…

    this co-evolution of automated/augmented & personal/social ‘knowledge’ systems will be as interesting to observe on facebook as it will be on google & bing

    Comment by michael schrage — August 22, 2010 @ 10:06 am
