Several legal obstacles will need to be overcome if autonomous vehicles are to become the ‘next big thing’ in the automobile sector, says Josep Bori, research director of the thematics division at GlobalData.
Fully autonomous vehicles (AVs), also known as driverless cars, have become a mainstream aspiration in recent years, driven by the popularity of Tesla and its Autopilot capability, the significant progress of artificial intelligence (AI) technology across various domains, and the early overpromising of various vendors, including Tesla itself, Waymo/Google, and Uber (now Aurora).
Indeed, GlobalData has identified autonomous vehicles as one of the four disruptive megatrends currently impacting the automotive industry.
Yet we are still far from what the average person would understand as a driverless car, which is what the trade body the Society of Automotive Engineers (SAE) calls ‘Level 5’: full driving automation in any environment, with no involvement from the passenger.
In fact, there are no Level 4 or Level 5 vehicles on the market, and even Tesla cars with Autopilot functionality are currently classified as only Level 2, because the driver is required to remain attentive and take over when necessary.
This contrasts with statements by Tesla CEO Elon Musk in mid-2020 that the company was “very close” to achieving Level 5 autonomous driving technology, and with forecasts from companies including General Motors, Google’s Waymo, Toyota, and Honda that they would have self-driving cars on the roads by 2020.
However, beyond the challenges with the driverless technology itself, those related to regulation (and particularly insurance) are no less daunting.
Arguably, the lack of progress in insuring self-driving cars remains the major stumbling block to industry endorsement, and could ultimately slow adoption even if the technology finally delivers on expectations.
GlobalData’s latest thematic report on autonomous vehicles highlighted that there are crucial and difficult regulatory, legal, and insurance issues to grapple with as vehicle autonomy evolves, especially while partially self-driving vehicles still require human control from time to time.
Interestingly, while regulators are gradually developing a framework that will allow the operation of AVs in increasingly demanding scenarios, insurance and reinsurance companies have, concerningly, remained relatively quiet.
To understand why, it is important to consider the basics of the insurance industry. An insurance contract, or policy, involves the policyholder assuming a guaranteed, known, and relatively small loss (the premium paid to the insurer) in exchange for the insurer’s promise to compensate the insured in the event of a covered loss.
The insurer makes a profit over the long run as long as total premiums exceed the compensation paid out for loss events, which affect only a minority of policyholders. Insurers therefore need large amounts of historical data from which to estimate expected accidents and losses, based on historical averages and probability distributions. The losses must also be rare and random, so that risk diversification works in the insurer’s favour.
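The pricing arithmetic described above can be sketched in a few lines. The figures below (claim frequency, average claim cost, and loading) are purely hypothetical, chosen only to illustrate how a premium is built up from historical averages:

```python
# Illustrative sketch (not an actuarial model): building a premium from
# historical claim data, using hypothetical frequency and severity figures.

def pure_premium(claim_frequency: float, avg_claim_cost: float) -> float:
    """Expected loss per policy per year: frequency x severity."""
    return claim_frequency * avg_claim_cost

def gross_premium(claim_frequency: float, avg_claim_cost: float,
                  loading: float = 0.3) -> float:
    """Pure premium plus a loading for expenses, risk margin, and profit."""
    return pure_premium(claim_frequency, avg_claim_cost) * (1 + loading)

# Hypothetical portfolio: 4% of policyholders claim per year,
# with an average claim of $8,000.
premium = gross_premium(claim_frequency=0.04, avg_claim_cost=8_000)
print(round(premium, 2))  # -> 416.0
```

The point of the sketch is that both inputs, frequency and severity, come from historical averages: take those away, as is the case with autonomous vehicles, and the premium has no reliable foundation.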
With this in mind, there are two significant challenges in insuring autonomous vehicles. First, there is very limited historical data on accidents involving these vehicles, so historical averages and probability distributions cannot be relied upon as predictors of future accidents, and the data lacks the depth needed for analysis by age, gender, education, or geography that would allow proper pricing of the risk.
Secondly, and very importantly, accidents involving autonomous vehicles are likely to be highly correlated. While some could be due to patchy satellite coverage, firewall failures, or corrupted software downloads (all fairly random events), there is the risk that they could be caused by faulty manufacturer software or cyber-attacks.
Reality bites: autonomous vehicles
There have been many reports of crashes involving Tesla cars while Autopilot was activated, and Waymo has also had a few accidents during trials.
As an example, in early 2022 Tesla had to recall 53,822 US vehicles running its Full Self-Driving (Beta) software, which was allowing some models to conduct ‘rolling stops’ rather than coming to a complete stop at some intersections, posing a safety risk. How many accidents happened due to this behaviour before the recall is not known, but the number must have been behind the decision of the US National Highway Traffic Safety Administration (NHTSA) to pursue the recall.
In the case of faulty software or cyber-attacks, accident-related losses would be highly correlated, as they would affect all the vehicles from that manufacturer, especially when manufacturers make a point of keeping all their customers running the latest software version. Unlike human drivers, who are emotional and affected by personal circumstances, algorithms are deterministic and do not make random mistakes.
From an insurance company perspective, this is a huge issue because correlated risk leads to higher variance in the expected losses, which would require higher insurance premiums. Furthermore, with very limited historical data on autonomous driving, there are no proven and reliable actuarial tables for compensation for injury and death in driverless car accidents. As such, it is very difficult to ‘price the risk’, which explains why insurers and reinsurers are deliberately taking a very cautious approach.
For instance, in the UK, the Association of British Insurers (ABI) is currently running various pilots with insurers like Direct Line Group (DLG), AXA, XL Catlin, and RSA, yet it clearly states on its homepage: “However, the insurance industry is clear that drivers must not be given unrealistic expectations – for the foreseeable future, we don’t expect these cars to have sufficient back-up features to allow drivers to completely disengage from the road”.
The risk from algorithm failure is quite unique. While it may initially seem comparable to natural disasters such as earthquakes and floods, which also affect many policyholders at once, there are important differences. In the case of natural disasters, insurance companies diversify by spreading their policyholders across geographies, so if an earthquake strikes, say, Japan, they still have many policyholders in other regions whose premiums help cover the massive compensation due to the Japanese policyholders. Yet how does an insurance company diversify away the risk of algorithm failure if all users across the world are running the same software?
Despite the hurdles, there has been progress
Last year, global reinsurance giant Swiss Re and Chinese tech giant Baidu announced a partnership to advance insurance and risk management for autonomous driving and autonomous vehicles. However, the first product they had in mind was a modest autonomous valet parking insurance, and beyond that they committed only to risk management research and insurance innovation for automated driving products.
An alternative would be for the driverless car makers themselves to provide the insurance on their products, as Rivian does, which arguably aligns the insurance risk with the cost of software quality assurance. However, good risk management practice would require them to reinsure part of that risk, again necessitating the involvement of the large reinsurance providers.
That said, the challenges with autonomous driving insurance need not be seen as a showstopper, but rather as an argument for tempering expectations. First, there is still plenty of time until we actually have Level 5 cars on the roads; second, there is an argument to be made that far fewer accidents will take place overall once the majority of vehicles are autonomous. And, not surprisingly, several insurtech start-ups, such as Koop Technologies, Avinew, and Trov, are looking at this complex problem.