Read these 3 incidents first:
5th April, 2016: Joshua Brown uploads a video of his Tesla Model S automatically steering itself to the right, saving him from a possibly fatal crash with a boom lift truck. The truck was merging onto an interstate road and its driver didn’t notice Brown’s Model S. Thanks to its Autopilot system, the Model S sensed the possibility of a collision, immediately swerved to the right, and then transferred control back to the driver.
Brown’s video currently has over 3 million views on YouTube, thanks to Elon Musk’s tweet sharing it.
7th May, 2016: Joshua Brown dies… in his Tesla Model S, marking the first known fatality involving a car driving in autonomous mode. Quoting from Tesla’s blog post about the incident: "What we know is that the vehicle was on a divided highway with Autopilot engaged when a tractor trailer drove across the highway perpendicular to the Model S. Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied. The high ride height of the trailer combined with its positioning across the road and the extremely rare circumstances of the impact caused the Model S to pass under the trailer, with the bottom of the trailer impacting the windshield of the Model S."
6th August, 2016: Joshua Neally, an attorney in Springfield, Missouri, was driving home in his one-week-old Model X with Autopilot turned on. He suddenly felt excruciating pain, so he called his wife and decided to head for an emergency centre. Instead of calling 911 and waiting for an ambulance, Neally let his Model X drive itself more than 20 miles to an exit ramp, then took control and drove to a nearby hospital. Neally had suffered a pulmonary embolism, a blockage of the arteries in the lungs that is often fatal. He credits the Model X with saving his life.
In light of the above three incidents, there is a lot we can conclude about today’s autonomous vehicles.
The very concept of letting a computer drive a vehicle has long been a topic of debate - both at the technology level and in the legalities that follow its implementation. While humans can adapt to their surroundings and remain actively, consciously aware of everything happening around them, computers rely heavily on pre-written code and limited sensor information. However, with an increasing diversity of sensors, deep learning algorithms and artificial intelligence, the ability to detect and comprehend the surroundings is improving day by day. Where computers may really stumble is in making decisions - about life and death.
Imagine this: you are driving up a mountain road, and you notice three bikers coming from the opposite direction. In a split second, you must decide whether to keep driving straight, possibly killing the three bikers but saving your own life, or to drive off the cliff, saving the three bikers’ lives but possibly killing yourself. It’s a very, very difficult decision your mind makes, and notice that there is no correct answer. What would an autonomous car do in such an event? We have no answer.
Today, we are still looking at the first generation of autonomous cars. Actually, these cars - namely, the Tesla Model S and Model X - aren’t even completely autonomous. The Autopilot feature on Tesla cars is more of a cutting-edge cruise control with collision-avoidance and lane-keeping systems. When you opt to use Autopilot, you are responsible for any mistakes it makes. Tesla makes it clear that the Autopilot system is still in beta and that drivers must be alert and ready to take control of the car at any moment.
Within a few years, the exponential advancements in autonomous driving technology will enable much smarter cars. Quoting Elon Musk: “We’re going to end up with complete autonomy, and I think we will have complete autonomy in approximately two years.” Musk promises completely autonomous driving by 2018, which might seem too early but is practically possible.
And remember, it’s not just Tesla. Autonomous cars are going to be the future, and every major carmaker is embracing them. Apple seems to be developing an autonomous electric car, Google is actively testing its pod-like self-driving cars, and concepts from BMW, Mercedes-Benz, Volkswagen, Porsche, Rolls-Royce and Ford (to name a few) all feature autonomous driving systems.
Whatever the case, as the end user, you want assured safety in an autonomous car. In fact, the whole point of autonomy is to 1. reduce the stress on the driver and 2. ensure a safer (or, at least, equally safe) ride. At this stage of development, neither is completely true. But some of the above incidents remind us that the chances of an accident are indeed lower in an autonomous vehicle.
Joshua Brown’s case is especially interesting, because he both survived and died because of the same Autopilot system. In the first incident, the car did its job perfectly - avoiding the collision and handing control back to the driver. You see, the Autopilot system on a Tesla wants the driver to remain the commander of the car. Tesla understands that its cars are safer in the driver’s hands when an accident is imminent. In the second incident, there are two possibilities: either the car couldn’t identify the truck against the bright sky, or (maybe, although Tesla states the former) it detected the truck but didn’t consider it a threat and drove on, making the wrong decision. Tesla does place some blame on the driver, and on the rare and unique circumstances of the crash, but under Tesla’s terms, the driver must be responsible at all times. Nonetheless, this incident has raised questions about autonomous cars.
Statistically speaking, Tesla still holds an excellent safety record. Not only are the cars comparatively safe in a collision, but they are also fairly aware of their surroundings. Of the 130 million miles that have been driven in Autopilot mode, Joshua Brown’s incident is the first known fatality. Compare that to a fatality rate of one death per 60 million miles of non-autonomous driving (a figure that accrues quickly, given the millions of cars on the road at any moment), and you’ll immediately note that autonomous cars are indeed safer.
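To make the comparison concrete, here is a quick back-of-the-envelope calculation using only the two figures cited above (130 million miles per Autopilot fatality, 60 million miles per conventional-driving fatality); the function name and the per-100-million-miles metric are just illustrative choices, not anything from Tesla:

```python
# Rough comparison of the two fatality rates cited in the article.
# Figures: 130M miles per Autopilot fatality, 60M miles per fatality
# for conventional driving. This is a simple rate comparison, not a
# rigorous statistical analysis (one data point is far too few).

autopilot_miles_per_fatality = 130_000_000
conventional_miles_per_fatality = 60_000_000

def deaths_per_100m_miles(miles_per_fatality):
    """Convert 'miles per fatality' into 'deaths per 100 million miles'."""
    return 100_000_000 / miles_per_fatality

autopilot_rate = deaths_per_100m_miles(autopilot_miles_per_fatality)
conventional_rate = deaths_per_100m_miles(conventional_miles_per_fatality)

print(f"Autopilot:    {autopilot_rate:.2f} deaths per 100M miles")
print(f"Conventional: {conventional_rate:.2f} deaths per 100M miles")
print(f"Ratio: conventional driving is ~{conventional_rate / autopilot_rate:.1f}x deadlier")
```

By this crude measure, conventional driving comes out roughly twice as deadly per mile - though with only one Autopilot fatality on record, the comparison should be read as suggestive, not conclusive.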
The third incident raises another question: did the Model X really save Neally? It may have, since waiting for an ambulance would have cost precious time, perhaps killing Neally. At the same time, what would have happened if the Model X had crashed in Autopilot mode and made the whole situation worse?
On a positive note, it is good to see Tesla taking the right steps to improve Autopilot, slowly but steadily giving rise to truly autonomous vehicles. Tesla’s cars “learn” from the situations they experience, and the more they are driven, the larger the pool of driving data Autopilot draws on. Perhaps some of these incidents have already made an impact - so the next time you are about to drive into a lorry, your Tesla will be more cautious.
Autonomous cars are coming, and they are going to be safe as hell. Just not yet.
Sources: Tesla, Slate, Fortune Magazine