5th April, 2016: Joshua Brown uploads a video of his Tesla Model S automatically steering itself to the right, saving him from a possibly fatal crash with a boom lift truck. The truck was merging onto an interstate and its driver didn’t notice Brown’s Model S. Thanks to its Autopilot system, the Model S sensed the possibility of a collision, immediately swerved to the right, and then transferred control back to the driver.
7th May, 2016: Joshua Brown dies… in his Tesla Model S, marking the first death caused by a self-driving vehicle. Quoting from Tesla’s blog post about this incident: "What we know is that the vehicle was on a divided highway with Autopilot engaged when a tractor trailer drove across the highway perpendicular to the Model S. Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied. The high ride height of the trailer combined with its positioning across the road and the extremely rare circumstances of the impact caused the Model S to pass under the trailer, with the bottom of the trailer impacting the windshield of the Model S."
In light of these two incidents, there is a lot we can conclude about today’s autonomous vehicles.
The very concept of letting a computer drive a vehicle has been a topic of debate - both at the technology level and in the legalities that follow its implementation. While humans can adapt to their surroundings and remain actively, consciously aware of everything happening around them, computers rely heavily on pre-written code and limited sensor information. However, with an increasing diversity of sensors, deep learning algorithms and artificial intelligence, the ability to detect and comprehend the surroundings is improving day by day. Where computers may really stumble is in making decisions - about life and death.
Imagine this: you are driving up a mountain road, and you notice three bikers coming from the opposite direction. In a split second, you need to decide whether to continue driving straight, possibly killing the three bikers but saving your own life, or to drive off the cliff, saving the three bikers but possibly killing yourself. It’s a very, very difficult decision for your mind to make, and notice that there is no correct choice. What would an autonomous car do in such a situation? We have no answer.
Within a few years, exponential advancements in autonomous driving technology will enable much smarter cars. Quoting Elon Musk: “We’re going to end up with complete autonomy, and I think we will have complete autonomy in approximately two years.” Musk promises complete autonomous driving by 2018, which might seem too early but is practically possible.
And remember, it’s not just Tesla. Autonomous cars are the future, and every major manufacturer is embracing it. Apple seems to be developing an autonomous electric car, Google is actively testing its pod-like self-driving cars, and concepts by BMW, Mercedes-Benz, Volkswagen, Porsche, Rolls-Royce and Ford (to name a few) all feature autonomous driving systems.
Joshua Brown’s case is especially interesting because he was saved and then killed by the same Autopilot system. In the first incident, the car did its job perfectly - avoiding the collision and handing control back to the driver. You see, Tesla’s Autopilot wants the driver to remain the commander of the car; Tesla understands that its cars are safer in the driver’s hands when an accident is possible. In the second incident, there are two possible explanations: either the car couldn’t identify the truck against the bright sky, or (although Tesla states the former) it detected the truck but didn’t consider it a threat and drove on, making the wrong decision. Tesla does push some blame onto the driver and onto the rare, unique circumstances of the crash, and indeed, under Tesla’s terms, the driver must remain responsible at all times. Nonetheless, this incident has raised serious questions about autonomous cars.
Statistically speaking, Tesla still holds an excellent safety record. Not only are the cars comparatively safe in a collision, but they are also fairly aware of their surroundings. Of the roughly 130 million miles driven in Autopilot mode, Joshua Brown’s incident is the first known fatality. Compare that to a fatality rate of roughly one death per 60 million miles for conventional driving worldwide, and you’ll immediately note that, per mile driven, autonomous cars are indeed safer.
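As a back-of-the-envelope sketch, the per-mile comparison works out like this (using only the two figures quoted above; the variable names are illustrative, not from any official source):

```python
# Rough fatality-rate comparison using the figures cited in this article.
# These are coarse, headline numbers - not a rigorous statistical analysis.
autopilot_miles_per_fatality = 130e6  # ~130 million Autopilot miles, 1 known fatality
human_miles_per_fatality = 60e6       # ~1 fatality per 60 million miles, conventional driving

# Fatalities per million miles driven, for each mode.
autopilot_rate = 1 / autopilot_miles_per_fatality * 1e6
human_rate = 1 / human_miles_per_fatality * 1e6

ratio = human_rate / autopilot_rate  # how many times deadlier per mile human driving is

print(f"Autopilot: {autopilot_rate:.4f} fatalities per million miles")
print(f"Human:     {human_rate:.4f} fatalities per million miles")
print(f"Per mile, human driving is roughly {ratio:.1f}x more deadly")
```

On these numbers alone, conventional driving is a little over twice as deadly per mile - though a single Autopilot fatality is far too small a sample to draw firm conclusions from.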
On a positive note, it is good to see Tesla taking the right steps to improve Autopilot, slowly but surely giving rise to truly autonomous vehicles. Tesla’s cars “learn” from the situations they experience, and the more they are driven, the larger the database of Autopilot driving data grows. Perhaps these incidents have already made an impact - so the next time you are about to drive into a lorry, your Tesla will be more cautious.
Autonomous cars are coming, and they are going to be safe as hell. Just not yet.
Sources: Tesla, Slate, Fortune Magazine