Risks of hands-off driving and system-initiated lane changes in Level 2 driver assistance systems

  • September 6, 2024

New regulations under development at the UNECE in Geneva for future EU and UK versions of driver assistance systems such as Tesla’s Autopilot could allow cars to make manoeuvres like lane changes without the driver’s consent, and allow drivers to take their hands off the steering wheel. But because the new rules would apply to so-called Level 2 driver assistance systems, the driver would remain responsible in the case of a collision, even though they did not make the manoeuvre themselves. ETSC’s automation expert Frank Mütze takes a look at the risks of hands-off driving and system-initiated lane changes and argues that these functionalities should be prohibited for Level 2 systems for as long as there is no scientific agreement on their safety.


Executive Summary

On 9 September, a workshop at the UNECE World Forum for Harmonization of Vehicle Regulations (WP.29) will be held to discuss the possibility of allowing more driving tasks to be automated by so-called DCAS (Driver Control Assistance Systems), which is the regulatory term used for Level 2 assisted driving systems – think Tesla’s Autopilot, Volvo’s Pilot Assist and BMW’s Driving Assistant Professional. This would allow such assistance systems to initiate manoeuvres by themselves, such as automatically changing lanes, without any driver input or confirmation – even though the driver would remain responsible in the event of a crash.

The possibility of hands-off driving will also be discussed, which would allow drivers to take their hands off the steering wheel while using these assistance systems. Again, the driver would remain responsible.

At ETSC, we are seriously concerned about these developments. The currently available hands-on assisted driving systems already have a myriad of negative side effects on the driver: drivers overestimate the capabilities of these systems while underestimating their limitations; they intentionally and unintentionally over-rely on the system; the systems increase drivers’ propensity to be inattentive or distracted by things other than driving; and drivers confuse these assisted driving systems with automated driving systems. Unsurprisingly yet regrettably, the use of these systems has already led to deaths and injuries on the road.

Allowing drivers to take their hands off the steering wheel and allowing even more driving tasks to be automated risks increasing these negative side effects significantly. Moreover, as DCAS is categorised as a “Level 2” system, the driver would remain responsible. With a “Level 3” system, such as the “ALKS” system used by Mercedes-Benz, the responsibility would shift to the automated driving system instead.

But from the perspective of an ordinary citizen, there is very, very little that differentiates a DCAS from an ALKS with an automated lane change functionality on a highway. Both systems have to control the vehicle, scan the driving environment and react to events appropriately. The DCAS acts like an automated driving system, yet it is not one, because it is declared to be a “Level 2” system and thus the responsibilities remain with the driver. The main differences between these systems therefore come from the role that is imposed on the driver.

We argue that it is not fair that drivers remain responsible for Level 2 systems. Is it fair that drivers remain responsible for the vehicle, even though virtually all the vehicle control is done by the system? Is it fair that, in order to ensure safety, drivers have to take on an additional and difficult task: monitoring the assisted driving system’s operation? And is it fair that drivers are burdened with this additional task, even though they are not trained for it during driver education, and even though we know that humans are not good at prolonged monitoring?

There is currently a further push at UNECE WP.29 to allow these DCAS to initiate manoeuvres by themselves while drivers may take their hands off the wheel. Such a move would increasingly blur the line between assisted and automated driving.

Current thinking on DCAS regulation is fundamentally flawed

Because DCAS is declared to be a Level 2 system and therefore operates under the driver’s responsibility, it is argued that the performance of DCAS does not need to be as good as that of an automated driving system.

This in turn means the human driver needs to constantly monitor the system’s operations and intervene when it does something it should not or when it fails to detect something.  

But humans are not good at prolonged monitoring of automated systems, so we need systems that monitor whether drivers are monitoring the road environment as well as the DCAS operation, and that warn them when they are not paying attention.

However, driver monitoring systems also have major shortcomings. Having “eyes on the road” does not mean “mind on the road”. So this presents yet another problem that should be addressed. We are now so far down the rabbit hole that we need to look again at the fundamental question of whether the decision to keep the driver involved and responsible is sensible in the first place. Indeed, why is it even necessary, when Level 3 “ALKS” systems are already available on the market that take all responsibility away from the driver and let the system take full control, under certain circumstances?

No proven safety benefits

And all of this for a system with no proven safety benefits. The suggested (hypothetical) benefits of having an assisted driving system (DCAS) with an automated lane change function can equally be attributed to an automated driving system (ALKS) with an automated lane change function. But the ALKS would not come with all the negative side effects, as the human is neither responsible for, nor included in, the driving task when the system is operating in normal circumstances.

We therefore call on European Member States participating at WP.29 to apply the precautionary principle: that if there is no scientific agreement on a policy that might cause harm to the public, the policy in question should not be carried out. We call on them not to continue down this path of regulating additional comfort capabilities for DCAS, and in fact to prohibit such capabilities for as long as there is no scientific agreement on their safety. Instead, their efforts should focus on delivering safe automated driving.


Briefing

Introduction

In a previous briefing, I discussed the driver monitoring requirements that are included in the new rules for the so-called DCAS (Driver Control Assistance Systems), which is the regulatory term used for assisted driving systems like Tesla’s Autopilot, Volvo’s Pilot Assist and BMW’s Driving Assistant Professional.

These DCAS automate part of the driving task, such as taking control over the vehicle’s speed and steering. The driver remains responsible and is therefore ultimately in control of those driving tasks, or at least: is supposed to be.

A fundamental flaw of these assistance systems is that drivers come to over-rely on them and become more willing to do other things besides paying attention to the road; regrettably, they have already led to many people being killed and seriously injured. In the previous briefing, I set out why the new driver monitoring rules are two steps forward towards preventing this driver distraction problem, though regulators have allowed drivers to be distracted for up to five seconds – 2.5 times the roughly two seconds that is considered safe.

The newly adopted rules for these assisted driving systems, however, only reflect the results of the regulatory discussions during “Phase 1”. As part of a second phase, a workshop will be held on 9 September to discuss two things that we at ETSC are gravely concerned about: allowing the driver to take their hands off the steering wheel, and allowing the system to initiate manoeuvres by itself.

In this briefing I will show why it is undesirable that assisted driving systems take over more and more parts of the driving task, and why we call on European Member States participating at WP.29 to apply the precautionary principle: that if there is no scientific agreement on a policy that might cause harm to the public, the policy in question should not be carried out. We call on them not to continue down this path of regulating additional comfort capabilities for DCAS that take over driving tasks from the human and allow drivers to take their hands off the steering wheel, and instead to use those resources to focus on delivering safe automated driving.

Not just a Tesla problem

The investigation reports on fatal and injury crashes linked in this briefing all involve Tesla vehicles. However, this briefing is not about singling out Tesla. The risks to road safety that are mentioned in this briefing apply to the technology in general. The graph below may however help explain why it was easier to find crash reports involving Tesla vehicles to link to, as does the ‘peer comparison’ section on page 6 of NHTSA’s recent report on its investigation into Autopilot.

Crashes involving other manufacturers are also under investigation at the time of publication of this briefing (early September 2024). For example, two crashes (link 1, link 2) in which BlueCruise – Ford’s hands-off assistance system – was active are currently under investigation by the US National Transportation Safety Board (NTSB). The driver in the second crash has since been charged with homicide by vehicle while driving under the influence, which raises the question of whether the driver had over-relied on the capabilities of the assisted driving function – something the investigations will hopefully answer in due course.

World Forum for Harmonization of Vehicle Regulations

The new rules for assisted driving systems are being developed at the World Forum for Harmonization of Vehicle Regulations (WP.29) of the United Nations Economic Commission for Europe (UNECE). A majority of the EU’s technical rules for vehicle systems are made there, in cooperation with countries outside of the EU and Europe.

Despite it being a World Forum, not all of its rules apply across the globe. In this case, the current and forthcoming rules on assisted driving systems are relevant for the EU, Russia and Japan, but do not apply in the USA or Canada.

The United Nations offices in Geneva, where WP.29 meetings are held.

What is DCAS, what is hands-off driving and what are system-initiated manoeuvres?

DCAS

In the latest regulation, assisted driving systems are referred to as ‘DCAS’, short for ‘Driver Control Assistance Systems’. These DCAS are a subset of advanced driver assistance systems (ADAS), and it is helpful to divide ADAS into two categories.

Firstly, there are safety ADAS, such as intelligent speed assistance (ISA), advanced emergency braking systems (AEBS), lane departure warning systems (LDW) and electronic stability control (ESC), among many others. These only intervene for a short period when there is a safety risk, by providing a warning and/or controlling the vehicle in some way (e.g. AEBS automatically slamming on the brakes when it detects the risk of a potential collision).

Secondly, there are ADAS that continuously control the vehicle’s speed and direction, not just in critical situations. It is still unclear whether these systems bring actual safety benefits (something which I will return to later on in this briefing), and they are therefore often referred to as “driver comfort” systems. DCAS fall in this latter category.

Hands-off driving

In general, the assisted driving systems that are available on the European market require the driver to keep their hands on the steering wheel, even if the system is the one actually controlling the steering. If a driver takes both hands from the steering wheel, the system will provide a warning to the driver after several seconds. The new DCAS rules furthermore add visual disengagement monitoring that warns the driver when they are not looking towards the road.

The proposed rules would allow drivers to take their hands off the steering wheel without receiving a warning, as long as they remain visually engaged with the driving task. This proposed change was pre-empted by the EU’s decision earlier this year to grant Ford and BMW exemptions from the existing rules requiring ‘hands on’ driving, in order to let them market such hands-off systems in Europe, despite safety fears.

System-initiated manoeuvres

System-initiated manoeuvres are, as the name indicates, certain driving tasks or actions that the system conducts without driver input or confirmation. This can for example be the DCAS changing lanes in order to overtake another vehicle.

One could argue that safety ADAS also perform system-initiated manoeuvres, and it is therefore important to acknowledge that the DCAS rules could regulate safety-enhancing system-initiated manoeuvres as well. One example, currently regulated in another UN Regulation, is the risk mitigation function (RMF). If and when this emergency function detects that the driver has become unresponsive, for example due to a health-related cause, the system automatically steers “the vehicle with the purpose of bringing the vehicle to a safe stop”, which could be on the hard shoulder.

I want to underline that our concerns are not with system-initiated manoeuvres that operate for a limited duration in emergency and critical situations. Our issues are with system-initiated manoeuvres that primarily aim to improve the comfort of the driver by taking over a (non-emergency) driving task for extended periods of time.

Difference between assisted driving systems and automated driving systems

The difference between DCAS or another assisted driving system on the one hand, and an automated driving system on the other hand, is commonly explained using the SAE’s levels of driving automation.

DCAS and other assisted driving systems are classified as Level 2 systems, which SAE has termed “partial driving automation”, as they assist the driver with both steering and speed. This is in contrast to, for example, an emergency ADAS such as the advanced emergency braking system, which only brakes and is thus classified as a Level 1 system.

Common to Levels 0, 1 and 2 is that the driver is deemed to be driving the vehicle at all times, even when you have your feet off the pedals because adaptive cruise control is engaged and/or when you are not actively steering because the system is centring the car in its lane for you. Moreover, as the driver you are expected to constantly supervise what the system is doing.

This is in contrast to automated driving systems, which are Levels 3, 4 and 5, where the driver is no longer deemed to be driving the vehicle when the automated driving system is operating. You are therefore also no longer expected to be paying attention to what the system is doing or what is happening on the road, though Level 3 systems may still request your re-engagement.

Automated driving systems are expected to handle the traffic situations they encounter within their so-called ‘operational design domain (ODD)’, which is the set of conditions in which the system is allowed to operate (e.g. on the motorway during daytime when it is not raining), although Level 5 systems should be able to drive everywhere under all conditions.

An example of an automated driving system is the “automated lane keeping system (ALKS)”, which is a Level 3 system that can drive the vehicle on highways up to speeds of 130 km/h, and that may come with an automated lane change functionality.

The Implication of DCAS as a Level 2 System

As a Level 2 system, drivers using DCAS are deemed to be driving (and thus in control of, and responsible for) the vehicle and are expected to keep monitoring both the system’s operation as well as the road environment.

This driver involvement is used as a justification by those who argue that we should expect and accept a lower performance from DCAS than from an automated driving system. It is argued that a DCAS should not be required to be capable of handling every traffic situation in its ODD, unlike an automated driving system.

This naturally creates the problem that safety risks may arise from any limitation of the system’s capabilities: for example, the system failing to detect an obstacle (other than another road user) obstructing the lane of travel, or the system being unable to navigate the roundabout it is about to encounter.

The solution proposed for this problem is that the driver should monitor the system’s operation and intervene when needed.

There are provisions in the regulation that aim to ensure the functional and operational safety of DCAS. But that does not negate the fact that by accepting those limitations on system capabilities, you are accepting significantly more safety risks compared to automated driving systems (Level 3 and above).

An Inherently Flawed Design

So the solution to address the risks from limitations on system capabilities is human supervision and intervention. This leads us to the next issue, which is a set of problems related to human factors, of which I will highlight two: overreliance and distraction.

Crash investigations and studies have found that drivers over-rely on these assisted driving systems, as they overestimate the systems’ capabilities and underestimate (or are unaware of) their limitations.

“The National Transportation Safety Board determines that the probable cause of the Mountain View, California, crash was the Tesla Autopilot system steering the sport utility vehicle into a highway gore area due to system limitations, and the driver’s lack of response due to distraction likely from a cell phone game application and overreliance on the Autopilot partial driving automation system.”

NTSB investigation report on the Mountain View crash in 2018.

“The National Transportation Safety Board determines that the probable cause of the Delray Beach, Florida, crash was the truck driver’s failure to yield the right of way to the car, combined with the car driver’s inattention due to overreliance on automation, which resulted in his failure to react to the presence of the truck.”

NTSB investigation report on the Delray Beach crash in 2019.

In contrast to this, there is a study – or was a study (as the paper appears to have been removed from official channels and only a copy hosted by a Tesla fan club remains) – that argued that assisted driving systems might not cause drivers to over-rely on them. However, its reasoning also points to the paradox of assisted driving systems relying on human supervision.

The authors found that users of Tesla’s Autopilot did not appear to over-trust the system to a degree that would compromise safety. Their key hypothesis as to why drivers would not over-rely concerns the imperfections of the system: because drivers would regularly encounter situations that the system could not handle and would have to intervene, they would remain attentive.

In other words, in order for the driver to stay motivated to keep monitoring what the system is doing, the system would need to regularly fail and require driver intervention.

This leads to the paradoxical situation where a system that works well most of the time invites unsafe use, because drivers start to over-rely on it and overestimate it, while safe use requires a system that is far from perfect and requires frequent interventions by the driver.

It should go without saying that a regulation requiring a vehicle system to fail regularly – thereby putting both car occupants and other road users in regular danger – would be utterly unacceptable.

That leaves us with the situation where we would need to deal with the overreliance side-effect of a nearly perfect system.

Assisted Driving Promotes Distracted Driving

A second side-effect of assisted driving is disengagement from the most important task of a driver: driving the car. This point was covered in a previous briefing, but I will restate it here due to its importance.

We have known for a while now that increasing the automation of the driving task results in drivers being increasingly prone to engage with non-driving related activities (Carsten et al, 2012).

It has furthermore been confirmed, both in studies and in self-reported surveys, that assisted driving systems increase the propensity of drivers to engage in non-driving related activities (IIHS, 2022; supporting: Dunn et al, 2021; Noble et al, 2021; Reagan et al, 2021).

Dunn et al, for example, show that drivers with prior experience of using assisted driving systems were almost twice as likely to be driving distracted when the assistance systems were active than during manual driving. Reagan et al show that the longer drivers used assisted driving systems, the more likely they were to become disengaged, with a significant increase in the odds of observing participants with both hands off the steering wheel or manipulating a cell phone, relative to manual control.

Both the points on overreliance and distraction underline the need for a system that monitors what the driver is doing and warns them when they are not paying attention to the road.

Ironies of Automation

Instead of preventing all previously mentioned risks (resulting from shortcomings inherent in its design) from occurring altogether, we put a monitoring system – in this case a human supervisor – in place to address those shortcomings.

But this monitoring system also has shortcomings: we have known for years that humans are not good at monitoring automated systems for prolonged periods of time – this is well established in the aviation industry, for example.

So what do we do? We install a camera-based monitoring system for the flawed human monitoring system.

Let us assume for a moment that camera-based driver monitoring systems are 100% perfect, so that we do not need to add any further back-up systems to ensure the driver is paying attention to the driving task. Problem solved, right?

Wrong. Even though we have made sure that drivers’ eyes are looking in the right direction, this does not mean that they are actually concentrating on the driving task. As the IIHS states in their previously mentioned report on page 30:

“Over-trusting either hands-free (Schneider et al., 2022) or hands-on-wheel partial automation (Victor et al., 2018) can lead drivers to not intervene even when they see a hazardous situation forming in front of them because they incorrectly believe the system can handle more than it was designed to do.”

The problem of overreliance is therefore yet to be addressed satisfactorily.

The Unfairness of Driver Responsibility

The above shows why it is undesirable to continue the trend of automating more and more of the driving task in driver assistance systems (Level 2). Already now we can debate whether the driver is still a driver when the assisted driving system is active; I would argue that ‘supervisor’ is a more appropriate term. And if we keep automating more driving tasks, that term becomes even more appropriate.

But is it fair that responsibility remains with the driver / supervisor?

Already with a ‘normal’ DCAS, a majority of vehicle control is done by the system, and not the driver. If more and more driving tasks (e.g. changing lanes) are automated, the system’s share of vehicle control will only increase. Yet the driver remains responsible.

And is it fair that, in order to ensure the safety of the vehicle, drivers have to take on an additional task, that of supervising the system? And is it fair that they are given this task even though they are not trained for it during driver education? We currently train drivers to drive a car, not to monitor a system driving the car.

And moreover, is it fair that drivers get tasked with this monitoring role, even though we know humans are not good at it for prolonged periods of time?

Mode Confusion

From the perspective of an ordinary citizen, there is very, very little that differentiates a DCAS (Level 2) with an automated lane change functionality on a highway, from an ALKS (Level 3) with an automated lane change functionality on a highway. Both systems have to control the vehicle, scan the driving environment and react to events appropriately.

A Mercedes Drive Pilot (Level 3 system) in operation.

Increasingly automating driving tasks would keep blurring the line between assisted and automated driving, especially when drivers are also allowed to take their hands off the steering wheel. However, this line should be crystal clear in order for drivers to understand what is expected of them and to prevent mode confusion.

Industry argues that driver monitoring systems will warn drivers when it is detected that a driver mistakes (whether intentionally or unintentionally) an assisted driving system for an automated driving system and starts engaging in non-driving related tasks.

But as shown previously, such driver monitoring only makes sure the driver is looking at the road ahead, and does not guarantee engagement with the driving task (e.g. inattention due to mind wandering). Moreover, as previously mentioned, a driver may actually be looking at the hazard ahead of them, and not intervene as they expect and (over)rely on the system to do so.

Safety benefits of DCAS and system-initiated manoeuvres?

The above shows that assisted driving systems come with a myriad of negative side-effects, resulting mainly from requiring supervision and tasking the human driver with this. But what about the safety benefits?

So far, my colleagues and I have not come across any credible evidence suggesting that assisted driving systems improve road safety once their side-effects are taken into account. Moreover, some of the benefits attributed to assisted driving systems, such as lane centring and emergency braking, are also covered in the EU’s General Safety Regulation (GSR) for motor vehicles through the emergency lane-keeping and advanced emergency braking systems, which are mandatory on all new vehicles sold in the EU as of July 2024.

Other benefits often mentioned include reducing driver workload. Although previous research indicated this might indeed be the case (except in critical situations, in which workload was higher), recent research shows that the mode of driving – manual or assisted – does not seem to affect drivers’ workload.

The industry has recently also presented the benefits that system-initiated manoeuvres would bring, using system-initiated lane changes as an example. Leaving aside the fact that the industry provides no (scientific) evidence for the claimed benefits compared to manual driving, one thing is strikingly obvious: these suggested benefits can equally be attributed to automated driving systems with lane change capabilities, such as ALKS.

But the ALKS would not come with all the negative side effects mentioned throughout this briefing, as the human is no longer responsible, nor included in the driving task, when the system is operational in normal circumstances.

In the US, research from the Insurance Institute for Highway Safety and the Highway Loss Data Institute (IIHS-HLDI) shows that crash records and insurance data offer little evidence that partial automation systems are preventing collisions.

“Everything we’re seeing tells us that partial automation is a convenience feature like power windows or heated seats rather than a safety technology.”

IIHS President David Harkey

Apply the Precautionary Principle and Focus on Automated Driving

This briefing explained why at ETSC we are seriously concerned about allowing vehicles to initiate manoeuvres without driver confirmation and to let drivers take their hands off the steering wheel. The line between assisted and automated driving needs to be crystal clear, and allowing system-initiated manoeuvres and hands-off driving will only blur that line with potentially fatal consequences.

For now, we remain unconvinced of the safety potential of assisted driving, especially when the promised safety benefits can also come from the technologies already mandated by the GSR. And if there are any safety benefits, we wonder whether they will more than compensate for the additional risks to road safety that result from their use.

Let’s stop the expansion of features and functions for assisted driving for as long as there is no scientific agreement on their safety, and instead focus on making sure that automated vehicles are safe and are safely deployed.

Let us not use assisted driving as a test bed for automated vehicles – with all the risks resulting from the side-effects of its inherently flawed design.

What about the US?

While the regulatory developments mentioned above do not apply in the United States, it is worth looking at the US market especially given that some manufacturers already sell cars with Level 2 systems that allow hands-off driving and system-initiated lane changes there.

One matter that stands out is that assisted driving receives more scrutiny in the US than it does in Europe. The US National Transportation Safety Board (NTSB) has investigated several collisions (referenced earlier in this briefing) and the National Highway Traffic Safety Administration (NHTSA) has launched investigations and issued recalls.

At the same time, very little is known in Europe about the number of crashes that assisted driving systems are contributing to on our roads. The monitoring and reporting requirements in the new DCAS regulation are a welcome step forward, but we urgently need an EU agency with powers to carry out and coordinate crash investigations in Europe. Only then can we make sure we learn from mistakes, including those made by the assisted driving systems already on the roads which fall outside of the DCAS monitoring and reporting requirements.

Also worth noting is the attention that senior politicians have given to the problems related to assisted driving systems. Several US Senators sent a letter to the acting administrator of NHTSA, urging the authority to ensure that vehicles equipped with partially automated as well as automated driving systems are safe for all road users, following several high-profile crashes that highlighted the risks that such systems pose for road users.

“We cannot allow partially automated driving systems and ADS to accelerate the road safety crisis. NHTSA must take firm control of the wheel and steer manufacturers towards prioritizing safety.”

US Senators in their letter to NHTSA.

One final matter that I would like to highlight is that the IIHS rated the safeguards of several systems available on the US market, evaluating driver monitoring, attention reminders, emergency procedures and other aspects of the system design. Out of the first 14 systems tested, only one was rated acceptable and two were rated marginal, while the other 11 were all rated poor for their safeguards. No system was rated good.

The IIHS rating system furthermore requires drivers to be involved in all of the system’s manoeuvres; lane changes must therefore be initiated or confirmed by the driver. The reason for rating systems that make lane changes without driver confirmation as poor was explained to us by the IIHS as follows (Slide 29):

While the debate still rages about whether vehicle-autonomously-initiated-and-executed maneuvers have a crash risk, no one knows what safeguards are “enough” to keep the driver adequate in the loop, which is what matters at the end of the day. Many people, myself included, think that as long as the system is partially automated it would be a design flaw for the system to do something without the driver’s involvement. What safeguards could be in place with such a functionality that would truly ensure the driver is aware of their responsibility and is prepared enough for the maneuver to intervene if necessary? We just don’t know.

The IIHS has taken the stance that a safeguard for this problem is to place limits on the functionalities that these systems have, which includes automated lane changes. Those limits require the involvement and verification of the driver for the initiation of maneuvers. This is one category of our ratings program on partial driving automation.

Given the risks and harm we have seen with systems that can be easily misused (intentionally or otherwise), we argue that the burden of proof is on the automakers to demonstrate the safety of such an automated lane change feature.

The experiences and actions from the United States further support the case for applying the precautionary principle in the EU, meaning that system-initiated manoeuvres and hands-off driving should not be allowed until such time as there is ample proof and agreement that they contribute to road safety – or at the very least do not endanger it.

Photo Credits

Cover photo: (c) Natecreation on Wikimedia. This photo is licensed under the Creative Commons Attribution-Share Alike 4.0 International license.

United Nations offices: Own photo.

Mercedes Drive Pilot: (c) Automotive Rhythms on Flickr. The photo is licensed under the Attribution-NonCommercial-NoDerivs 2.0 Generic license.
