Recent news of self-driving cars from Uber, Google, and Tesla is both exciting and a bit unnerving. Handing the wheel over to a computer can leave drivers feeling that they are no longer in control, and therefore less safe. Every so often, news of a self-driving car crash emerges. The data so far, however, suggests that self-driving cars may actually be safer than human drivers.

Google’s fleet of self-driving cars has traveled more than 1 million miles since its launch in 2009 and has been involved in only 16 accidents. Interestingly, none of those incidents were the fault of the computer system. Accidents involving self-driving cars are usually caused by the drivers around them, as aggressive drivers are not accustomed to the strict degree to which autonomous vehicles adhere to the rules of the road.

Data collected over multiple years of testing shows that self-driving cars crash less often than human-driven ones. According to the National Highway Traffic Safety Administration, roughly 90% of all car crashes are caused by human error, which killed about 40,000 people in 2016 alone. The fact of the matter is that an autonomous vehicle cannot become drowsy, distracted, drunk, or enraged at the wheel.

However, the 2016 Tesla crash showed that these cars are not perfect at discerning their surroundings. A white tractor-trailer turned in front of the self-driving car, blending into the overcast sky. The car essentially did not see the danger ahead, and neither did the driver, who was reportedly watching a movie at the wheel and was killed.

Hacking is another concern car makers have about these new machines. Researchers have demonstrated proof-of-concept attacks in which laser pointers spoofed a vehicle’s sensors, forcing the car to swerve, slow down, or come to a complete stop.

It is important to weigh the risks associated with self-driving cars, especially until legislation can effectively regulate the advancing technology. One of the ethical gray areas of self-driving cars is that decision making is written into the vehicle’s computer code. Although this removes the element of driver error, it means relying on a computer rather than a human to make a judgment call in the face of potential danger. As the 2016 Tesla case showed, the vehicle’s computer failed to detect the impending danger in time to avoid the accident.

In the event a person is injured in a motor vehicle accident involving a self-driving vehicle, the avenue for seeking compensation depends on whether the at-fault vehicle was self-driving or operated by a human. If the at-fault vehicle was driven by a human, compensation can be sought by filing a claim with the driver’s automobile insurance company. If the at-fault vehicle was self-driving, however, the cause of the accident would not be driver error but either a faulty computer or a failure to detect an impending threat before the accident occurred. In that case, the claim would be made not against an individual but against the manufacturer of the self-driving vehicle, making it essentially a product liability claim.

So far, though, the majority of reported accidents involving self-driving vehicles were caused by error on the part of the driver of another vehicle. If this remains the case, most personal injury claims resulting from motor vehicle accidents will still be made through the driver’s and/or vehicle owner’s insurance company. As the technology evolves, it will be interesting to see how self-driving vehicles reshape the area of personal injury law.