JURIDICA INTERNATIONAL. LAW REVIEW. UNIVERSITY OF TARTU (1632)

Differences and Similarities between Branches of Law

28/2019
ISBN 978-9985-870-44-0

What Safety are We Entitled to Expect of Self-driving Vehicles?

A manufacturer of self-driving vehicles could face claims involving assertions of the product’s defectiveness. Under the Product Liability Directive, a product is deemed defective when it does not provide the safety that a person is entitled to expect. Efforts to ascertain the possibility of defectiveness connected with a self-driving vehicle could necessitate evaluating the design of the vehicle, matters of human–machine interaction, and the role of the human in the relevant incident of damage. This article lays groundwork by considering the capabilities of self-driving vehicles, the role and expectations of human beings, and legislation aimed at ensuring safety and preventing damage. This discussion concretely situates the concept of the safety of self-driving vehicles in the context of product liability law, which is inherently preoccupied mainly with the consequences.

Keywords:

Self-driving vehicles; Directive 85/374/EEC; product liability; autonomous vehicles; driverless cars

1. Introduction

Self-driving cars are seen as a solution to problems of, in particular, traffic safety  *2 and access to transportation.  *3 Only a few years ago, expectations of reaching full driving automation sooner rather than later were high. Even though this optimism seems to have now become moderated by a heavy dose of reality,  *4 efforts to attain full driving automation continue throughout the world, including in Estonia.  *5 While the level of traffic safety to be provided by fully self-driving vehicles seems to be one of their main advantages,  *6 accidents caused by them cannot be precluded. To name a few issues, one can cite concerns about the consequences of possible hardware and software malfunctions, as well as security breaches.

Strict liability schemes seem to be the approach best suited for covering damage possibly caused by self-driving cars. However, in certain situations a manufacturer of self-driving vehicles may be faced with a claim hinging on the defectiveness of the product.  *7 Under Article 6 (1) of the Product Liability Directive (PLD),  *8 a product is deemed defective when it does not provide the safety that a person is entitled to expect, taking into account all the circumstances. The non-exhaustive list of circumstances set out in Article 6 (1) includes the presentation of the product, the use to which it could reasonably be expected that the product would be put, and the time when the product was put into circulation. Recital 6 of the PLD clarifies that an assessment of the lack of safety should be carried out having regard to the reasonable expectations of the public at large.

Given that concepts such as safety, entitlement, and reasonableness are open to interpretation, one is bound to wonder what kind of safety the EU public at large can expect of self-driving vehicles. That, in turn, may lead to enquiries into poor design, issues of human–machine interaction, and the role of the human in the event of damage. Thus, an answer to the safety question depends not only on safety legislation and case-law but also on the characteristics of self-driving vehicles and of human beings. Taking into account the capabilities of the self-driving vehicle and the role and expectations of the human, alongside the legislation aimed at ensuring safety and preventing damage, this article seeks to answer the above-mentioned question in the context of product liability law, which concerns itself mainly with the consequences.

2. Driving automation

2.1. Levels of driving automation

Building on the definitions used by the NHTSA  *9 and the BASt  *10 and seeking to simplify communication and facilitate collaboration in the technical and policy domains worldwide, SAE International  *11 has provided common classification and terminology frameworks for automated driving that involves ground vehicles, including six levels of driving automation, which range from no automation to full automation.  *12 Further, SAE International has divided these levels of driving automation into two groups. In the first group (levels 0–2), the human driver monitors the driving environment, while in the second (levels 3–5) the automated driving system is entrusted with this task. At level 0 (no automation), the human driver handles all aspects of the dynamic driving task  *13 at all times, regardless of the vehicle’s warning or intervention systems. At level 1 (driver assistance), a driver assistance system performs steering or acceleration/deceleration, in a manner dependent on the driving mode, while the human driver is expected to execute all remaining aspects of the dynamic driving task. For instance, a vehicle with a cruise control feature can be considered a level-1 vehicle. At level 2 (partial automation), one or more driver assistance systems execute both steering and acceleration/deceleration, while the remainder is left to the human driver to perform. The driver assistance systems used in level-2 vehicles are more advanced than those of level-1 vehicles, in being able to, among other things, maintain a set distance from the vehicle in front or to one side, keep the vehicle in its lane, and brake automatically in the event of an emergency. At level 3 (conditional automation), an automated driving system handles all aspects of the dynamic driving task in the manner corresponding to the driving mode, but the human driver is expected to remain alert and respond to any request to intervene. At level 4 (high automation), an automated driving system performs all parts of the dynamic driving task even if a human driver does not respond appropriately to a request for intervention. However, certain geographical or terrain-based, weather, and speed constraints still apply to such vehicles.  *14

In SAE International level 5 (full automation) vehicles – the only truly self-driving vehicles and the main focus of attention in this article – an automated driving system deals with all aspects of the dynamic driving task at all times under all roadway and environmental conditions that can be managed by a human driver. The related complexity is further increased by the fact that, in reality, individual parts of the automated driving system of a self-driving vehicle may involve different levels of automation. However, it has been pointed out that the crucial issue is going to be not the level of automation the car is capable of, but how the transition between different levels of automation at various stages in the journey is managed.  *15
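
For readers who prefer a schematic restatement, the J3016 taxonomy summarised above can be sketched as a simple enumeration. The following minimal Python sketch is illustrative only; its names and helper function are shorthand of this presentation, not terminology taken from the standard itself.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels, as summarised in the text above."""
    NO_AUTOMATION = 0           # human driver performs the entire dynamic driving task
    DRIVER_ASSISTANCE = 1       # system handles steering OR acceleration/deceleration
    PARTIAL_AUTOMATION = 2      # system handles steering AND acceleration/deceleration
    CONDITIONAL_AUTOMATION = 3  # system drives, but the human must respond to requests to intervene
    HIGH_AUTOMATION = 4         # system drives even if the human fails to intervene, within constraints
    FULL_AUTOMATION = 5         # system drives at all times, under all conditions a human could manage

def monitors_environment(level: SAELevel) -> str:
    """Who monitors the driving environment: the human at levels 0-2, the system at levels 3-5."""
    return "human driver" if level <= SAELevel.PARTIAL_AUTOMATION else "automated driving system"

for level in SAELevel:
    print(f"Level {level.value} ({level.name}): environment monitored by {monitors_environment(level)}")
```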

To cope with the operational and tactical aspects of the dynamic driving task, the vehicle needs to be aware of the surrounding environment (the weather, the road conditions, non-moving and moving objects, traffic signs, other road users, birds and other animals, etc.) and of events and occurrences that are relevant from the point of view of the passengers (traffic signals and other road users’ behaviour). For that purpose, it has to take into account not only internally obtained information but also external information: maps, traffic rules, etc.
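
A minimal sketch of the kind of ‘world model’ that draws together these sources might look as follows; the field names and data values are hypothetical and serve only to illustrate the combination of internally sensed and externally supplied information described above.

```python
from dataclasses import dataclass, field

@dataclass
class DetectedObject:
    """An object perceived via the vehicle's own sensors (internally obtained information)."""
    kind: str          # e.g. "pedestrian", "cyclist", "traffic sign"
    distance_m: float  # distance from the vehicle, in metres
    moving: bool       # whether the object is currently in motion

@dataclass
class WorldModel:
    """Combines internally obtained perception data with external information such as maps and traffic rules."""
    detected_objects: list[DetectedObject] = field(default_factory=list)
    weather: str = "clear"               # internally sensed or externally supplied
    speed_limit_kmh: int = 50            # externally supplied: traffic rules for the current road
    road_type_from_map: str = "unknown"  # externally supplied: map data

# A hypothetical snapshot of what the vehicle "knows" at one instant:
snapshot = WorldModel(
    detected_objects=[DetectedObject("pedestrian", 12.5, moving=True),
                      DetectedObject("traffic sign", 30.0, moving=False)],
    weather="rain",
    speed_limit_kmh=30,
    road_type_from_map="two-lane urban street",
)
print(snapshot)
```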

Should level-5 automation be reached, fully self-driving cars could provide many advantages, in reducing human errors in traffic, making navigation easier, improving access to mobility for disabled people and the elderly, and reducing traffic congestion. However, it is argued that the delegation of the driving function to an automated driving system does not come without certain disadvantages, which include, above all, software malfunctions and vulnerabilities that could cause serious damage on a far larger scale than an individual human driver ever could.  *16

2.2. Distinct characteristics and properties of a self-driving vehicle

Human beings’ senses give them the ability to perceive what is happening around and inside them, owing to sense organs and receptors that transform physical stimuli into nerve impulses, and, with the aid of perception, the human being is able to organise, identify, and recognise that information.  *17 This gives humans the ability to cope with the complexity of the surrounding environment, including traffic.

The full dynamic driving task poses what computer scientists call a ‘hard problem’.  *18 Firstly, the vehicle needs to perceive what is happening around it – in particular, what is moving and what is not. To perceive the surroundings, self-driving vehicles need various sensors (e.g., radar, LIDAR, GPS components, an odometer system, vision, and an inertial measurement unit).  *19 Researchers have pointed out that accurate and reliable perception of the surroundings necessitates the data from these various sensors being co-ordinated (in terms of data fusion and sensor fusion).  *20
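
One common textbook way to combine redundant readings from different sensors is inverse-variance weighting, in which more precise sensors receive more weight. The Python sketch below is purely illustrative of the general idea of sensor fusion referred to above; the sensor names, numbers, and weighting scheme are assumptions for the example and do not reflect any particular manufacturer’s implementation.

```python
import numpy as np

def fuse_estimates(means, variances):
    """Fuse independent sensor estimates of the same quantity (e.g., distance to an obstacle)
    by inverse-variance weighting: more precise sensors get more weight."""
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / variances
    fused_mean = np.sum(weights * means) / np.sum(weights)
    fused_variance = 1.0 / np.sum(weights)
    return fused_mean, fused_variance

# Hypothetical readings of the distance (in metres) to the same object:
# the radar reading is noisier here than the LIDAR one, so the fused value leans towards the LIDAR.
mean, var = fuse_estimates(means=[19.4, 20.1], variances=[0.8, 0.1])
print(f"fused distance: {mean:.2f} m (variance {var:.3f})")
```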

It has been noted that perception technologies can be divided into two main categories: computer-vision approaches (traditional software programming) and machine-learning approaches (a subset of artificial intelligence, AI).  *21 The prerequisite for computer-vision approaches is the ability to come up with explicit instructions. It has been explained that, since traffic is such a complex environment, the ability to adapt to dynamic environments through learning becomes more important.  *22 Google’s decision scientist C. Kozyrkov explains that the idea of the machine-learning approach is to feed data into an algorithm that turns patterns into models.  *23 According to her, a model is merely a recipe, which the computer uses to transform future inputs into outputs.  *24 In machine learning, the main indicator of success is the quality of the model.  *25 While machine learning comprises techniques that enable computers to figure things out from data, deep learning (more precisely, the use of deep neural networks) is a subset of machine learning that allows for solving more complex problems.  *26 It has been stressed that deep learning is good for identifying objects in images and describing images, but it usually requires large quantities of computing power and data, whose quality is critical to achieving solid performance.  *27 Therefore, not everyone believes that deep learning is the key to solving the problem of driving automation.  *28 Both approaches are argued to have their advantages and disadvantages, but self-driving vehicles tend to rely on a combination of the two to understand the surrounding environment.  *29
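
As a toy illustration of the ‘recipe’ metaphor, the sketch below fits a trivial model to example data and then applies it to new inputs. The data and the quadratic fit are invented for illustration only and bear no relation to how driving-automation software is actually trained.

```python
import numpy as np

# Invented training data: braking distance (m) observed at various speeds (km/h).
speeds = np.array([20, 40, 60, 80, 100], dtype=float)
braking_distances = np.array([4.1, 12.3, 25.8, 44.5, 68.9], dtype=float)

# "Feed data into an algorithm that turns patterns into a model":
# here the algorithm is a least-squares fit of a quadratic curve.
model = np.polynomial.Polynomial.fit(speeds, braking_distances, deg=2)

# The fitted model is the "recipe" used to transform future inputs into outputs.
for speed in (50, 90):
    print(f"predicted braking distance at {speed} km/h: {model(speed):.1f} m")
```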

Researchers have expressed the concern that certain machine-learning approaches may adversely impact the safety of a self-driving vehicle due to their non-transparency, probabilistic error rate, training-based nature, and instability.  *30 Similar concerns are shared by legislators.  *31 The machine-learning community is said to be coming to the realisation that, in many application domains, for AI to be trusted it needs not only to demonstrate good performance in its decision-making but also to explain these decisions and convince us that it is making them for the right reasons.  *32 Such realisations have given rise to the emerging research field of explainable artificial intelligence (XAI).
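
As a deliberately simplified illustration of what ‘explaining a decision’ can mean, the sketch below attributes a linear model’s output to its individual inputs. Real explainability techniques for the deep neural networks discussed above are far more involved; the feature names and weights here are invented for the example.

```python
# Toy feature attribution: for a linear model, each input's contribution to the output
# is simply its value multiplied by its learned weight.
features = {"obstacle_distance_m": 8.0, "own_speed_kmh": 45.0, "wet_road": 1.0}
weights  = {"obstacle_distance_m": -0.9, "own_speed_kmh": 0.4, "wet_road": 6.0}  # invented weights

brake_score = sum(weights[name] * value for name, value in features.items())
contributions = {name: weights[name] * value for name, value in features.items()}

print(f"decision score (brake if positive): {brake_score:.1f}")
for name, contribution in sorted(contributions.items(), key=lambda item: -abs(item[1])):
    print(f"  {name}: {contribution:+.1f}")  # which inputs pushed the decision, and how strongly
```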

Various sensors and perception technologies set self-driving vehicles apart from conventional ones. Knowledge of the principles of operation of these devices and technologies enables more appropriate assessment of the kind of safety that can be reasonably expected of them.

3. Safety requirements for self-driving vehicles

3.1. Safety under product liability legislation

At this juncture, one can consider more fully Article 6 (1) of the PLD, under which a product is deemed defective when it does not provide the safety that a person is entitled to expect, taking all the circumstances into account, including the following: the presentation of the product, the use to which it could reasonably be expected that the product would be put, and the time when the product was put into circulation. Recital 6 of the PLD explains that the defectiveness of the product should be determined by reference not to its fitness for use but to the lack of the safety that the public at large is entitled to expect. In spite of the respectable age of the PLD, related case-law of the Court of Justice of the European Union (CJEU) that might elaborate on the concept of safety remains scarce.  *33

Some guidance for the manufacturers of self-driving cars can be derived from the CJEU judgment in Joined Cases C-503/13 and C-504/13 (Boston Scientific, paragraphs 36–43).  *34 The CJEU explains that the safety which the public at large is entitled to expect, in accordance with Article 6 (1) of the PLD, must be assessed by taking into account, among other things, the intended purpose, the objective characteristics and properties of the product in question, and the specific requirements of the group of users for whom the product is intended (see para. 38 of the judgment). Even though the passengers in a fully self-driving vehicle may not be in a position that renders them as vulnerable as the users of pacemakers and implantable cardioverter defibrillators who were considered in Boston Scientific, they still entrust their health and life to the vehicle, which makes the level of safety that such persons are entitled to expect of those vehicles particularly high as well. Furthermore, self-driving vehicles are unlike implantable medical devices in their potential to pose a greater danger not only to their direct users but also to other people (in the vehicles’ case, other road users) and to surrounding property. The ‘group of users’ in the context of self-driving vehicles is considerably larger. The element of a broader circle of affected parties was also pointed out by the CJEU in its judgment in Case C-661/15 (para. 30),  *35 wherein the Court noted, regarding the steering coupling of a car, that it is legitimate and reasonable to require a high degree of safety in the light of the serious risks to the physical integrity and life of drivers, passengers, and third parties connected with the use of such products.

Driving automation-related parallels can be drawn with the CJEU’s reasoning in para. 40 of Boston Scientific as well. The CJEU explained that the potential lack of safety that would give rise to liability on the part of the producer under the PLD stems, for pacemakers and implantable cardioverter defibrillators, from the abnormal potential for damage that the relevant products might cause to the person concerned. While the potential for damage that fully self-driving cars could cause to a person is not necessarily always equivalent to that associated with such medical devices, it cannot be denied that a defective fully self-driving vehicle has the potential to cause the death of its passengers or other road users. Numerous incidents involving vehicles of lower levels of automation serve as proof of this.

The high level of safety expected of vehicles is further confirmed by the CJEU in para. 30 of Case C-661/15, in which the Court points out that the safety requirement is not met where there is a manufacture-related risk of failure of a component. In the Court’s opinion, this entails those goods not providing the safety that a person is entitled to expect and, accordingly, the conclusion that they must be regarded as defective.

According to Recital 8 of the General Product Safety Directive (GPSD),  *36 safety should be assessed in consideration of all the relevant aspects. Under Article 2 (b) of the GPSD, ‘safe product’ means any product that poses no risk or poses only the minimum risks compatible with the product’s use considered to be acceptable and consistent with a high level of protection for the safety and health of persons. It follows from this provision that, with regard to self-driving vehicles, the following factors should be taken into account, among others: the characteristics (incl. the composition) of the vehicle; its effect on other products; the presentation of the vehicle, any warnings and instructions for its use and disposal, any other indication or information regarding the vehicle; and the categories of consumers at risk when using the vehicle, in particular children and the elderly. Article 2 (c) of the GPSD explains that any product that does not meet the definition of ‘safe product’ is considered dangerous.  *37 The author of this article finds that, while the definition of safety rooted in the GPSD cannot serve as the basis for establishing the lack of safety of a self-driving car within the product liability regime, the former does assist us in understanding the objective characteristics of self-driving vehicles.

3.2. Traffic legislation governing self-driving vehicles

Some countries, most notably Germany and the United States, which both have a strong automotive industry, have already passed traffic legislation governing driving automation, including related legal definitions. Subsection (2) of §1a of the German Road Traffic Act (Straßenverkehrsgesetz or StVG)  *38 lists the technical equipment that qualifies a vehicle as a highly or fully automated power-driven vehicle: equipment that, once switched on, is able to perform the driving task (including exercising longitudinal and lateral control of the vehicle); during highly or fully automated driving, is capable of following the traffic rules applicable to the vehicle; can at any time be manually overridden or switched off by the driver; is able to recognise the need for exclusive manual control by the driver; is able, with sufficient time to spare, to alert the driver visually, acoustically, tactilely, or otherwise perceptibly to the need to take over control of the vehicle; and warns against any use that conflicts with the system description.

It follows from subsection (4) of §1a that the driver is the one to switch on the highly or fully automated driving function and apply it for controlling the vehicle. Such an approach to automated driving means that even a vehicle with a fully automated driving function is required to have a steering wheel and to have a licensed human driver behind it at all times. This person is required to sit in the front seat to drive, and certain controls, displays, and indicators need to be visible to the driver so that they would be able to drive the vehicle properly. This also means that even a vehicle equipped with fully automated driving functionality must not drive ‘empty’ – even when there are no passengers, there must be at least one occupant (the driver) while it is driving. In addition, it follows from subsection (4) of §1a of the StVG that the driver must be prepared to take over control of the vehicle at all times.

Such legislative choices strip self-driving vehicles of some of their alleged key advantages (disabled people’s access to mobility  *39 , reduction of human errors, etc.), while giving rise to a plethora of new issues related to the human driver taking back control of the vehicle and, more generally, to human–machine interaction. Once the driver has transferred control of the vehicle to the system, it is difficult to get it back in an instant. Nevertheless, the driver remains responsible and is required to stay alert and ready to retake control in the blink of an eye. While the approach taken by the German legislature is acceptable for SAE levels 1–4, it practically precludes the introduction of level-5 vehicles. This might be associated with the fact that, as has the rest of the EU, Germany has ratified the 1968 Vienna Convention on Road Traffic,  *40 which rules out driverless road vehicles. While such restrictions are inevitable in the case of semi-autonomous vehicles, the entire concept of a fully self-driving vehicle is based on the underlying assumption that no human driver is required, under any circumstances. Therefore, it may well be that the current solution in Germany is merely a temporary one in place until the Vienna Convention on Road Traffic can be amended and the level of full automation is truly reached.

Unlike the EU Member States, the United States is not party to the Vienna Convention. The US has ratified the 1949 Geneva Convention on Road Traffic, which does not categorically prohibit automated driving. This gives the US more flexibility in regulating driving automation.  *41 Although various federal bills  *42 have been put forward on highly automated vehicle technology, none have been enacted yet. The US legislators drafting the relevant bills have focused on addressing a high level of automation rather than full automation, thereby making references to the SAE International standard J3016. Until a federal bill has been enacted, the rules governing self-driving vehicles remain up to each state, and these have proved highly divergent. For instance, in Florida and Michigan a self-driving car is not required by law to have a driver,  *43 while the approach taken by California seems to be more similar to that of Germany.

3.3. The safety that a person is entitled to expect of fully self-driving vehicles

According to the rules laid down in Article 6 (1) of the PLD and the guidance given by the CJEU in Boston Scientific, the safety that a person (the public at large) is reasonably entitled to expect of self-driving vehicles should be assessed in a manner that takes into account all the circumstances. It should be reiterated that this encompasses, among other things, the presentation of the vehicles in question; the use to which they could reasonably be expected to be put; their intended purpose; the time of putting the vehicles into circulation; the requirements specific to the group of users for whom the vehicles are intended; and, above all, the vehicles’ objective characteristics and properties.

At present, it is impossible to assess the ‘presentation’ of fully self-driving vehicles, as no such vehicles have been put into circulation yet. The usual purpose of a road vehicle is to transport people or goods. As noted above, the requirements applicable to self-driving vehicles stem not only from their passengers but also from other road users and the surrounding environment – principally, the property that might get damaged by a self-driving vehicle. In that regard, legal entities too are affected, not merely individuals.

As for the objective characteristics and properties of self-driving vehicles, any road vehicle is, by reason of its mass and speed of movement, objectively a source of greater danger. In this respect, self-driving vehicles are no different from conventional human-driven vehicles. What sets them apart from the latter is the absence of a human driver, who is replaced by sensors and software components drawing together such elements as computer vision and machine learning. The absence of a human driver has far-reaching implications for interaction between such vehicles and other road users. Not only does the self-driving vehicle have to understand the body language of humans engaged in traffic, but those humans have to understand the behaviour of self-driving vehicles. A large amount of visual and, to a lesser extent, audio communication takes place between human road users. People are very good at interpreting human body language and the sounds in their environment, but this remains a hard problem for self-driving vehicles.

It has been pointed out that the computer-vision and machine-learning components of self-driving vehicles need to be attuned to the particular settlement.  *44 The characteristics of the locale’s infrastructure, its traffic flows, and all the related issues are part of the set-up of a self-driving vehicle. Hence, inhabitants of Tartu may have somewhat different expectations of self-driving vehicles than people in, for instance, London. Every area of operation is unique. The landscape, road conditions, and weather are important facets of this uniqueness.

The importance of constructing driving-automation-supporting infrastructure should not be underestimated. Manufacturers and municipalities keen on getting self-driving vehicles on the roads as soon as possible face a serious dilemma. On the one hand, manufacturers need to collect high-quality real-world data; on the other, self-driving vehicles that could gather such data are not ready yet, and appropriate infrastructure for them does not yet exist. Allowing such semi-autonomous vehicles onto public roads is likely to increase the number of traffic accidents at first.

Furthermore, the general public are reasonably entitled to expect that self-driving vehicles follow traffic rules.  *45 However, breaking traffic rules does not necessarily result in damage in the sense addressed by the PLD. Should such a situation involve any abnormal potential for damage, it may nevertheless meet the criteria for defectiveness established by the CJEU in Boston Scientific.

It follows from Article 9 of the PLD that among the legal rights defended thereunder are those to life, health, and property. Every individual has the right to expect their life, health, and property not to be harmed by a self-driving car, and every entity has the right to expect its property not to be harmed by one. This does not necessarily entail being entitled to expect completely flawless self-driving vehicles. A vehicle of a lower level of automation is not necessarily less safe than a vehicle of a higher level of automation. Leaving the issues of giving up and taking back control of the vehicle aside, the lower the level of automation of a vehicle, the more limited its automated functions and the greater the role and responsibility of a human driver. No software developer would be willing to give any guarantee that the software developed by it is entirely flawless, yet, as is clear from the foregoing discussion, software is a key component of any self-driving vehicle, which means that such an assurance must be obtained for the purposes of compliance with product safety legislation if the relevant vehicle is ever to be allowed to enter circulation.

Declaring a self-driving vehicle unsafe (i.e., defective) merely because it has caused damage would constitute too strict a standard of liability, which is not supported by the PLD. For ascertaining the standard for the minimum safety expected of self-driving vehicles, one needs to keep in mind that it is the human being who has been eliminated from the equation. Therefore, as long as self-driving vehicles are unlikely to cause more or worse traffic accidents than humans, they should be allowed on the roads. Whether they will cause more or worse traffic accidents than humans is, however, a matter of trial and error.

4. Conclusions

The development of self-driving vehicles continues, notwithstanding the related complexity. Their ultimate safety will be a crucial matter. Therefore, the definition of safety used in the GPSD can be of help in identifying the objective characteristics and properties of self-driving vehicles within the meaning of the PLD.

The German legislature’s approach towards self-driving vehicles in the StVG is understandable and, given the current setting of international law, perhaps even inevitable, but it nevertheless precludes the introduction of truly self-driving vehicles and will need to be revised if the push towards full autonomy is to continue. This striving should continue, because problems with human–machine interaction are likely to adversely affect the safety of semi-autonomous vehicles.

Under Article 6 (1) of the PLD and in accordance with the guidance given by the CJEU, the safety that the public at large is reasonably entitled to expect of self-driving vehicles should be assessed taking into account all the circumstances, including, among other things, their intended purpose as well as their objective characteristics and properties. For self-driving vehicles to be put into circulation, the level of safety they demonstrate should at least equal that demonstrated by human drivers.
