In the original Knight Rider TV series, David Hasselhoff starred as Michael Knight, “a young loner” – as the high-flown intro put it – “on a crusade to champion the cause of the innocent, the helpless, the powerless, in a world of criminals who operate above the law.” A latter-day lone ranger, Hasselhoff was aided in this crusade by a souped-up, artificially intelligent Pontiac Trans-Am hardwired to fight crime, with a cycloptic stock ticker for a grille, a winning disposition, and all manner of sensors, scanners, and synthesizers, not to mention a flame thrower and a tear-gas launcher for when things got especially tight.
The idea of the self-piloting “driverless” vehicle has a long precedent in science fiction; and yet, as appealing as such technologies may seem, the sci-fi genre as a whole tends to be highly sceptical of their benefits and ever wary of the dangers of technological dependency and malfunction. From HAL in Kubrick’s 2001: A Space Odyssey to Skynet in the Terminator franchise, the fear of automation, of “intelligent” technologies rising up to thwart and terrorize their old masters, is rampant within the genre. Even Knight Rider took up this idea in an episode in which the ordinarily trusty and well-mannered “K.I.T.T.” turns on Hasselhoff and seems to develop a will of its own.
But in spite of science fiction’s many cautionary tales, most of us still consider technology a trusted friend and ally, a tool we can use as we see fit to solve our problems and make our lives simpler, safer, and more efficient in the process. At any rate, this is certainly the thinking behind Google’s ongoing efforts to develop a driverless car, an autonomous vehicle capable of navigating busy urban environments with limited or no human supervision (but that – at least as of this writing – does not appear to be equipped with any onboard tear-gas launchers).
Under Sebastian Thrun, a senior Google engineer who is also director of the Artificial Intelligence Laboratory at Stanford University and one of the inventors of Google Street View, the driverless-vehicle project has made significant headway in the past several years. In June 2011, after tireless lobbying by Google, Nevada became the first state to pass legislation permitting driverless vehicles to be operated legally on public roads. As of December 2011, when Google was awarded a US patent for its driverless-vehicle technology (which allows human operators to switch between manual and fully autonomous modes), Google’s fleet of adapted Toyota Prius and Audi TT models had put in some 160,000 miles on the road with limited human intervention and over 1,000 miles on autopilot between California and the Silver State. What was once a mere fantasy of science fiction may become a reality within the next ten years, possibly sooner; as so often in our brave new world, the question isn’t if, but when.
As for the question why, Google speculates that driverless vehicles have the potential to save millions of lives. According to the World Health Organization, 1.2 million people are killed every year in motor-vehicle accidents around the world, and the vast majority of these collisions – over 90 percent by some estimates – are the result of human error. Equipped with artificially intelligent software, radar and lidar sensors, and sophisticated cameras that work together to create a dynamic three-dimensional map of the vehicle’s surroundings, Google’s driverless cars can detect obstacles around them and navigate accordingly without succumbing to the effects of fatigue, distraction, intoxication, road rage, and other uniquely human frailties. Because of their precise, 360° computer vision and rapid response time, driverless vehicles can also travel in close proximity to one another and may thus reduce strain on infrastructure by improving traffic flow and increasing the capacity of roads. And when driverless cars aren’t in use, they can be obediently sent home and recalled when needed, thereby removing the necessity to find and pay for parking.
In much the same way, after dropping off a passenger, a driverless car can be reassigned and sent on a route pre-programmed into its GPS unit to pick up and transport new passengers. Greater car-sharing means that most families would no longer have any use for more than one car, and, because vehicles would be used more efficiently, there would be fewer of them on the road, with obvious benefits for the environment in the form of reduced fuel consumption and lower emissions. Moreover, since driverless cars operate autonomously, special licensing and permits would no longer be required: even young children or people with disabilities would be able to ride in them safely. (The question of who would be liable in the event of a collision is difficult to answer; in many ways, the technology is ahead of the law, which always assumes that a human is at the wheel.)
There’s no doubt that the promise of Google’s driverless-vehicle technology is immense, particularly when it comes to improving road safety. And yet, from the industrial to the digital revolution, major technological innovations almost always transform the societies that develop and implement them, usually with significant social, cultural, and economic disruption. So the real question is: what else might the driverless car entail?
For one, the potential loss of hundreds of thousands of jobs in transportation, public transit, and automotive manufacturing. For another, the potential loss of revenue generated by motor-vehicle sales, car insurance, licensing, and even parking. That might seem like small potatoes when weighed against the prospect of fewer fatal collisions; but if we’re willing to surrender control of our vehicles, what else are we willing to give up into the bargain? And to what other, perhaps more ethically problematic uses might such technology be put? To take one example, since at least 2004 the American government has been conducting covert attacks on strategic al-Qaeda targets using semi-autonomous drones, and while these unmanned aerial units may have become the scourge of cave-dwelling terrorists, they’ve also caused considerable so-called “collateral damage” amongst civilian populations.
Despite all the evangelist talk of “thinking” computers and “smart” machines, the problem with artificial intelligence is that, in the end, it’s still artificial: no more than a crude attempt by ingenious software engineers to mimic our habits of mind and our abilities to reason, infer, extrapolate, and bring judgment and experience to bear when operating our vehicles (as in all walks of life). By entrusting our lives to our machines, is there not a danger that we might in turn become less human, more machine-like, as countless sci-fi writers have warned? Do we not at some point risk becoming the tools of our tools? Of course, the driverless car is in some ways simply a logical extension of technologies like cruise control, antilock brakes, automatic transmissions, blind-spot alerts, and automated parking assistance. Those of us who actually enjoy driving will no doubt be able to continue operating our vehicles in manual mode, at least for a little while. But the quaint idea of driving our own cars may ultimately be destined to go the way of the stick shift. And maybe that’s all to the good, especially if, after dropping us off safely at the office, our cars can put in a few hours as charming crime-fighting sleuths.