Introduction

This paper considers issues relating to partially automated vehicles, that is, vehicles able to operate without driver input for part of a journey but not capable of self-driving in all conditions (hereafter, AV). This type of automated vehicle must share operational responsibility with a driver. A legal framework is required to support the handover of driving responsibility between vehicle and driver, and to provide a predictable delineation of liability following an accident.

A convenient mechanism for facilitating this framework may be to present the conditions of operation to the driver through an interactive digital interface, and to require the driver’s agreement to those conditions before the vehicle is activated. A voluntary agreement of this kind is a form of consent. However, it will be argued that the human factors involved in AV and digital interfaces do not support the elements required for consent. Voluntary agreements struck via a digital interface may fail to affect liability frameworks in the manner manufacturers hope to achieve. This paper examines the reasons for such failures in law, highlighting the efficacy of long-standing legal frameworks created to protect individuals, and how these may ultimately intervene to protect drivers from systems which appear to follow the letter of the law but do not honour its intention. If an accident occurs during a journey where operational responsibility has transferred, legal challenges will follow, and the basis for liability will be examined by the courts. We suggest that where the transfer of liability is not predictable, this presents problems for insurance frameworks and ripens the basis for litigation; as such, it is a matter on which manufacturers and policy-makers must focus their attention.

Studies which suggest that AV will reduce vehicle accidents and road deaths tend to assume a technology that is advanced and integrated (Kalra and Groves, 2017). AV is marketed by manufacturers as a safe and comfortable alternative to traditional vehicles. The technological hype surrounding the projected benefits of this technology (Stilgoe, 2020), such as the potential for drivers to engage in other activities whilst the vehicle is in control, has captured the attention of regulators. Governments around the world are planning infrastructure and regulation in advance for vehicles with capabilities that do not yet exist on our roads. However, AV may represent a new type of risk to drivers and other road users. When drivers engage in a secondary task whilst a vehicle is in automated mode, they require time to transition back to the driving task. Where drivers are given limited time, the transition has the potential to be poorly executed (Merat et al., 2012). The vehicle-to-human transition of driving requires a generous time-frame if it is to be safe. Introducing this type of manoeuvre onto our roads is a high-stakes aspiration, given the potential for injury and loss of life when things go wrong. Drivers of highly automated vehicles would have to understand this risk.

In the event that an AV which is in control of the driving encounters a situation on the road beyond its capacity, it must hand over control to the driver, or safely stop if the driver is not ready to take over the driving task. However, drivers may not be aware of their own deficiencies in the handover process, and may take control when they are not ready (Saito et al., 2018). Alternatively, if the vehicle detects the driver is not ready, there may be no safe place to stop. Problems will arise if drivers operate an AV with insufficient understanding of their obligations to take over the vehicle and of the potential risks involved where this does not occur smoothly. Drivers who misunderstand the operational parameters of the AV, and the essential role of a human driver in resuming control when the vehicle has reached the limit of those parameters, place themselves, their passengers and all other road users at risk. Accidents are more likely to occur if a driver miscalculates the nature of the risk they take in sharing operational responsibility, placing too much trust in the vehicle’s capabilities, or underestimating the extent of the action they may have to take in an emergency. Conversely, a driver who is aware of the limitations of the technology, and of their associated responsibility to take control when necessary, may be more likely to avoid an accident.

It is argued that in order to educate drivers to the required level, specialised driver training for AV must be considered. The formulation of user-centric driver training programmes is considered as a potential, partial solution to the problems posed by the limitations of instructing users via interactive digital interfaces. To this point, the projected outcomes of the ongoing automated vehicle research project PAsCAL include the formulation of an automated vehicle driver training framework, and this framework is considered as part of the planning required to educate drivers to a sufficient level, whereby their perception of risk in operating the AV is appropriately calibrated. A suitably trained driver, cognisant of the issues surrounding automated-to-manual transfer, would be a desirable step towards safe AV operation. However, a framework which alters the legal rights of the driver and which is administered via an interactive digital interface remains a potential cause for concern, beyond that of driver education.

It will be argued that presenting the driver with legal conditions by means of an interface is problematic, as there are known difficulties with obtaining meaningful consent to legal conditions using ubiquitous computing devices (Luger, 2012). Obtaining consent from an individual via digital technology is vulnerable to influences associated with software design, which incorporates an underlying appreciation of human psychology. How individuals access online services and deal with complex information and legal terms may provide some lessons for interactive interfaces in AV. When accessing online services, people tend to ignore content such as conditions and warnings presented electronically (Solove, 2012/13), and it is known that people tend to agree to terms without understanding the legal implications (Obar and Oeldorf-Hirsch, 2018). Contractual terms presented electronically commonly bind the user to a less favourable legal position. Further, there is evidence that people may also ignore safety information presented in video format, such as safety demonstration videos shown on commercial airlines (Ragan et al., 2017). We do not yet have sufficient understanding of how well drivers comprehend safety information and legal conditions delivered by interactive digital interfaces, nor do we know how likely it is that a driver’s consent will withstand scrutiny by courts following an accident.

The issue of a legal framework which transfers driving responsibility is relevant to all users and stakeholders in AV. While it is predominantly in the interests of drivers and other road users for this framework to be sufficiently addressed, developers, manufacturers and regulators may benefit from engaging with the problems identified, in order to proactively deal with these issues in the interests of supporting a safe system of AV on public roads (Mordue et al., 2020).

This paper aims to prompt regulatory and industry discussion regarding the use of interactive digital interfaces to facilitate legal arrangements between driver and manufacturer, in particular voluntary agreements validated by the consent of the driver. On the basis of frameworks being drafted by the UNECE Global Forum for Road Safety (Global Forum for Road Safety, 2020), this paper assumes a digital interface is likely to be used in AV to provide operational and safety information to the driver. We will also consider, on a conceptual basis, the potential for a manufacturer to utilise the digital interface to obtain consent from the driver for the purpose of predetermining liability in the case of an accident. The concept of using the driver’s consent, communicated via a digital interface, to facilitate a legally enforceable framework between driver and manufacturer will be subject to a theoretical examination in the context of human factors research relevant to shared driving responsibility. The narrative is built upon findings from the psychological literature about risk in operating AV, and about the human capacity to comprehend complex information delivered electronically. We conclude by suggesting a potential way forward in addressing known issues, by utilising the outputs of existing research relating to driver education, and by recommending other avenues for research and development.

Human factors—risk

The concept of shared operational responsibility in AV has produced significant research highlighting the difficulties associated with transferring the driving task between vehicle and driver. Findings include that there may be a significant delay in an individual’s ability to reorientate themselves to the driving task, ranging from a minimum of 8 s (Agrawal et al., 2017) up to 40 s before a driver is able to refocus upon the road layout and driving task (Merat et al., 2014). Humans are not well adapted to a task where they must regain control in a limited time frame (Lu et al., 2017), and people with disabilities may have particular difficulties with handover (Glennie-Smith, 2017–2018). When a vehicle initiates an emergency handover, a driver will be asked to resume control in a situation where they are not likely to be attentive. A response to a surprise event while engaged in a secondary task in an AV takes longer than when a driver is focused upon the driving in a traditional vehicle, leaving less time for an AV operator to react in an emergency (Blommer et al., 2017). Monitoring an AV while engaged in another task, rather than in the driving task, may be difficult, as drivers are not able to process multiple sources of information at once or for a sustained amount of time (Biondi et al., 2019).

The role of the human as exception handler in a highly automated system is problematic, and this is more pronounced where the automation becomes more reliable and the driver is called upon less frequently (Schutte, 2017). It is possible for ‘over-trust’ to develop, whereby the trust of the driver-in-charge exceeds system capabilities, leading to misuse (Lee and See, 2004). Where drivers are encouraged to ‘multi-task’ by answering emails or watching movies because the automation is highly reliable, the multi-tasking itself influences the level of trust and increases reliance. Operators are poor monitors of automation if they are engaged concurrently in other tasks (Singh et al., 1997). Certain drivers may have slower reaction times due to: individual physical or mental characteristics; whether they are on medication; whether they have consumed alcohol (yet remain under the legal limit for driving); whether they are an inexperienced or older driver; whether they have passengers or pets in the car; or whether they are driving in bad weather (Fofanova and Vollrath, 2011). The shortcomings of humans as drivers, and the improbability that they will always be ready to take over the driving task at short notice, are well acknowledged in the research community. There are recognised limitations in the shared driving dynamic, including that the driver can suffer from inattention, over-trust, skill atrophy and complacency (Alonso Raposo et al., 2017). Consequently, there is an emphasis on the development of systems capable of detecting the foreseeable situation of a human driver who is not prepared to take over (Harris et al., 2019).

Technology factors—safe stop and driver monitoring

Designs which allow an AV to default to an emergency manoeuvre and safe stop may alleviate the worst cases of drivers being unavailable to take over the controls of a vehicle (Magdici and Althoff, 2016). Systems monitoring heart rate, pupil diameter and gaze may be able to detect when a person is not able to respond to a takeover situation (Alrefaie et al., 2019). These systems could potentially work in conjunction with an emergency fallback of safe stop or safe manoeuvre where the driver is not ready (Svensson, 2018). However, even where drivers are alerted to a vehicle handover via auditory, visual and haptic warning messages, they struggle to regain attention to driving following a period of automated driving (Blommer et al., 2017). Techniques designed to recapture the driver’s attention may even have adverse effects, such as annoyance or shock (van den Beukel et al., 2016).

There are also finer grades of ‘readiness’ which we are yet to categorise and grapple with. Failure to detect the driver’s functional state may not be due to a fault in the vehicle, but may instead relate to the type of sensors fitted, their sensitivity, the reliability with which multiple sets of behavioural and physiological data are interpreted, and whether this interpretation is calibrated to the individual (Collett and Musicant, 2019). As the technology develops, problems are still likely to occur, and a ‘safe stop’ may not always be possible. A driver may, in such a situation, accept a transfer of responsibility and begin driving without being aware that their perception of the surroundings is inferior to what it would have been had they driven the entire journey. Drivers’ own assessments of their readiness to take over may not be accurate (Saito et al., 2018). When drivers place their hands on the steering wheel to take control, they are likely to prefer to wait additional time before initiating the transfer, as they make a subjective assessment of their own capabilities, including awareness of the surrounding traffic and their ability to keep the vehicle on a safe trajectory (Maggi et al., 2020). A driver may take only 1.5 s to disengage from a secondary activity and place their hands on the steering wheel (Kerschbaum et al., 2015); however, this does not mean they are ready for the driving task.
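To make the detection problem concrete, the following is a minimal, purely illustrative sketch of the kind of readiness check described above. The signals, weights and threshold are hypothetical assumptions adopted for exposition, not values drawn from any of the cited studies.

```python
# Illustrative sketch only: a toy driver-readiness check of the kind the
# monitoring literature describes. All signal names, weights and thresholds
# below are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class DriverState:
    eyes_on_road: float      # fraction of recent gaze time on the road (0-1)
    hands_on_wheel: bool     # e.g. detected by capacitive or torque sensors
    heart_rate_stable: bool  # physiological proxy for arousal in a normal band

def readiness_score(state: DriverState) -> float:
    """Fuse behavioural and physiological signals into a single score.

    Real systems would calibrate such weights per individual; here they
    are arbitrary."""
    score = 0.6 * state.eyes_on_road
    score += 0.25 if state.hands_on_wheel else 0.0
    score += 0.15 if state.heart_rate_stable else 0.0
    return score

def takeover_decision(state: DriverState, threshold: float = 0.8) -> str:
    # A score above the threshold records only that the measured signals
    # look acceptable, not that situational awareness has been regained.
    if readiness_score(state) >= threshold:
        return "HAND_OVER_TO_DRIVER"
    return "CONTINUE_AUTOMATION_OR_SAFE_STOP"
```

Even in this toy form the gap is visible: the decision rests on proxies for readiness, and on weights and thresholds whose calibration to the individual is exactly what the cited research identifies as unresolved.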

The research findings above highlight the significant risk drivers face in sharing operational responsibility with AV. If risk is not adequately communicated before the driver provides consent to share the driving, there may be an argument that the conditions required for lawful consent have not been met. Consequently, the effectiveness of communicating this risk to drivers via an interactive digital interface is relevant to whether the driver’s consent to operate the AV is valid.

The relevance of consent in AV

Interactive digital interfaces are likely to be used in AV to provide information to the driver, including information which relates to safety and driver responsibility (Global Forum for Road Safety, 2020). Interactive interfaces may also be used to set out other conditions under which the driver may operate the vehicle, including the demarcation of liability. For example, a condition may state that the driver is legally responsible for an accident which occurs while the driver is in control of the vehicle. For the conditions to have legal significance, the driver must understand and agree to be bound by them (Bix, 2010). When terms are presented via an interactive interface, drivers may indicate their consent by selecting ‘I agree’ or ‘I confirm’ on the interface (or by communicating this verbally) before setting off in the vehicle. Communicating consent to conditions intended to be legally binding is a mechanism which may arguably facilitate the transfer of liability between driver and vehicle, and make the legal consequences flowing from an accident predictable.
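As a purely conceptual illustration of this mechanism, the sketch below shows how a manufacturer might gate vehicle activation on a recorded agreement. The class, field and method names are hypothetical; no real AV interface or API is implied.

```python
# Illustrative sketch only: gating vehicle activation on recorded consent.
# Names are hypothetical, not drawn from any manufacturer's system.
from datetime import datetime, timezone

class ConsentGate:
    def __init__(self, terms_version: str, terms_text: str):
        self.terms_version = terms_version
        self.terms_text = terms_text
        self.record = None  # evidence of the driver's affirmative act

    def present_terms(self) -> str:
        # In a vehicle this text would render on the interactive interface.
        return self.terms_text

    def record_agreement(self, driver_id: str) -> None:
        # Stores who agreed, to which version of the terms, and when.
        self.record = {
            "driver": driver_id,
            "version": self.terms_version,
            "agreed_at": datetime.now(timezone.utc).isoformat(),
        }

    def may_activate(self) -> bool:
        # The vehicle activates only once an agreement has been recorded.
        return self.record is not None
```

The record created here evidences only the affirmative act; as argued below, it says nothing about whether the driver absorbed or understood the conditions to which they agreed.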

We are accustomed to expressing consent electronically, particularly when accessing online services which require access to our personal data. In the UK and the EU, under data protection legislation (General Data Protection Regulation, 2016), personal data cannot ordinarily be processed without a lawful basis such as the user’s consent. Users must be provided with information about the purposes for which their data will be processed, and this information is commonly included in a privacy notice. Before accessing online services, users are required to communicate whether they agree to the terms of service, and to their data being processed as set out in a privacy notice, such as the example shown in Fig. 1.

Fig. 1: Example privacy notification in a social media application.

This figure is included to depict the type of language that is commonly used in social media applications when asking users to accept terms and conditions.

The concept of consent also features in face-to-face interactions, such as during the provision of medical treatment. Before undergoing medical treatment, a patient must be provided in advance with information regarding their treatment, including potential risks and complications. This facilitates the ‘informed consent’ required by law (Beauchamp and Childress, 2001). The use of consent in AV to delineate responsibility and liability is not governed by the same legislative frameworks as personal data, and the context differs from that of informed consent in medicine. However, all frameworks requiring consent share a uniform requirement: the person asked to give consent must be provided with the relevant information and conditions in advance of making their decision. In short, the person giving consent must be aware of the consequences of their agreement. If a person agrees to the conditions after having had the opportunity to consider the relevant information (such as risk), the communication of their consent signifies their intent for the conditions to have effect under the relevant law.

In the UK, the common law may permit parties such as the driver and the manufacturer of a vehicle to agree in advance the conditions under which the driver may operate the vehicle. Such voluntary agreements may include allowing a person to willingly accept the possibility of a known risk. This common law doctrine is referred to as volenti non fit injuria (to a willing person, no injury is done): an individual who voluntarily places themselves in harm’s way cannot subsequently make a claim against the other party (Jaffey, 1985). A driver who is informed of the risks associated with driving an AV, and agrees to operate the vehicle, consents to ‘take what comes’ (Titchener vs. British Railways Board, 1983; Reid, 1999). Liability cannot be excluded in this way for a vehicle with manufacturing defects. The scope of this discussion is confined to an AV which is not defective and is operating within its expected operational parameters, which, for a partially automated vehicle, include that it is not capable of self-driving in all conditions at all times. In such circumstances, an AV will inevitably transfer the driving back to the driver at some stage during the journey, and may need to do so during an emergency. Where a driver agrees to operate an AV in circumstances where the risk and responsibilities associated with driving it may not be obvious, the driver must be informed of the extent and nature of the risk faced (Faith, 2016). Educating the driver about risk, to the extent required by law to effect consent, is not only necessary from a legal perspective (McLean, 2009) but desirable in order to calibrate the driver’s perception of the vehicle’s capabilities.

However, the means of informing the driver about risk, the interactive digital interface, is itself associated with known issues: even if all relevant information about risk and legal conditions is presented as required, the driver may still not absorb that information prior to communicating their consent. It is argued that if it is known that drivers are unlikely to comprehend the information being given to them electronically, this too may affect the validity of consent. If a driver institutes a claim of negligence against the AV manufacturer following an accident, and successfully argues that they did not understand the risk to which they purportedly agreed by way of consent, the common law defence of volens (the state of mind necessary to voluntarily accept risk) would not be available to protect the manufacturer from liability.

The following section examines how issues in human–machine interaction may affect the validity of the ‘consent’ provided by the driver in AV.

Human factors—the digital interface

We have already referred to circumstances where we are familiar with being presented with information and legal conditions electronically, such as when accessing internet-based services (as shown in Fig. 1). Such electronic communication methods are not always effective. When information and legal terms are presented in text format, it has been shown that users often agree to the terms after having skim-read or ignored them entirely, in order to access their desired service as soon as possible (Obar and Oeldorf-Hirsch, 2018). Convenience is a driving factor (Mulder and Tudorica, 2019). It is known that privacy warnings are considered a nuisance, or are mistaken for a guarantee that data will not be shared when often the opposite is the case (Brandimarte et al., 2013). The reasons why users allow themselves to be led through digital prompts which result in reducing their rights are complex. They may include: individuals may misunderstand the nature of the notices which grant permissions to other parties (Solove, 2007); sufficient explanations of complex ideas are likely to be lengthy, with significant effort and time needed to understand them (Solove, 2012/13); it is difficult to assess the potential consequences of the future loss of a particular right (such as privacy) and compare that with a current benefit (Cohen, 2000); individuals find long text notices an unsatisfactory method of communication, being too confusing or interminable (Acquisti et al., 2015); and when faced with numerous notices which all follow a similar format, users tend to assume they mostly contain the same terms, and will agree in the belief that the terms presented are similar to contract terms they have seen in the past (Strahilevitz and Kugler, 2016; Böhme and Köpsell, 2010). Furthermore, users suffer warning fatigue (Woyke, 2014), as they are confronted daily with requests to consider their rights. To improve engagement with an application, the time necessary to navigate these interactive pathways, and the associated cognitive load, are reduced as much as possible (Luger, 2012). The task of informing a user to the standard at which they may make an educated decision, as required for the effective operation of consent, is difficult.

Successful interface design incorporates the golden combination of ‘motivation’ and ‘ability’. Technology users are highly motivated to engage with prompts in order to access services. As demonstrated by the Fogg behaviour model (Fogg, 2003), design assists people to do what they already want to do. Interface design can eliminate the barriers that make preferred decisions difficult. When motivation is high and the task is easy to carry out, people are likely to perform the behaviour (Fogg and Euchner, 2019). Digital environments can guide people by deliberately presenting choices or organising workflows (Schneider et al., 2019). Interfaces may permit users to truncate the consent process (Brandimarte et al., 2013). Assuming a driver has already hired or purchased an AV before sitting in the vehicle and being presented with text upon an interactive interface, the driver is arguably already inclined to agree to the types of conditions offered, which would allow them to operate the vehicle.
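A toy rendering of this logic may help. The sketch below is only an illustrative reading of the Fogg model; the numeric scales, the multiplicative combination and the threshold are our own assumptions for exposition, not part of the cited model’s formal statement.

```python
# Toy reading of the Fogg behaviour model (Fogg, 2003): behaviour occurs when
# a prompt arrives while motivation and ability together exceed an activation
# threshold. Scales and combination rule are illustrative assumptions.
def behaviour_occurs(motivation: float, ability: float,
                     prompted: bool, threshold: float = 0.5) -> bool:
    # Motivation and ability compensate for one another: a highly motivated
    # person tolerates friction, and a frictionless task needs little drive.
    return prompted and (motivation * ability) >= threshold

# A driver already seated in their AV (high motivation) facing a one-tap
# 'I agree' button (high ability): the consenting behaviour is near-certain.
print(behaviour_occurs(motivation=0.9, ability=0.95, prompted=True))  # True
```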

AV interfaces differ from the potentially deceptive human–machine interfaces associated with online sales and marketing techniques (Brignull, 2011). Such interfaces may utilise ‘nudging’, where people are influenced into taking action they had not planned, such as purchasing an item they had not intended to purchase, or may deliberately deceive users into an action not in their best interest, a practice referred to as ‘dark patterns’ (Greenberg et al., 2014). On the assumption that a driver of an AV is situated in the vehicle when using the interface, the intention to operate the vehicle is already likely to be present. Further, the information about risk and liability may be clearly stated upon the interface. However, the digital interface is likely to be designed to support the choice of operating the vehicle. Consequently, the problem lies in how the interface design may hide, in plain sight, the dangers associated with operating such a vehicle, as drivers may be willing to ignore information in order to access the vehicle they have already decided to operate. If all the relevant information is available, arguably the user is not being manipulated. When we suspect that information is being ignored, this presents a dilemma. The complexity of human–machine interactions may mean it is not viable to assume that users are capable of making informed decisions in this context (Luger, 2012).

In order to create a good user experience, manufacturers aim for a ‘seamless’ experience, yet achieving this while simultaneously engaging the user with enough safety information may be irreconcilable (Thaler and Sunstein, 2008). The more ‘seamless’ and enjoyable the experience, the less likely it is that the user has contended with complex information requiring difficult choices (Wachter, 2018). AV manufacturers are presented with the difficult task of designing an unobtrusive digital interface which provides a comfortable and secure experience for the driver, whilst also informing them of the risk they inevitably take and of their responsibilities in operating the vehicle. Consent may not necessarily provide ‘safety self-management’ (Peppet, 2014).

While we have evidence that people do not pay attention to terms and conditions presented in text form online, there are also indications that people ignore video instructions about safety, as seen in airline passengers’ poor retention of safety demonstration videos; this may be relevant and instructive regarding the potential behaviour of AV operators. Safety briefings on board an aircraft are prone to perceptions of reduced relevance due to repetition (Australian Transport Safety Bureau, 2006). In 2018, numerous passengers aboard a Southwest Airlines flight making an emergency landing were recorded as being unable to quickly locate or effectively use their oxygen masks, despite an inflight safety demonstration on that flight, and on every other flight those passengers had taken in the past (thejournal.ie, 2018). Passenger attention to pre-flight safety demonstrations is low, despite its positive impact on the chance of survival in an aviation accident. Attempts by airlines to use humour and entertainment in the demonstrations (Purtill, 2017) may have some impact on how many passengers watch the video, but may not improve the retention of the safety message (Ragan et al., 2017).

It is less well known how drivers absorb safety information and other terms and conditions in the context of AV, and it is not known whether drivers will behave in a similar fashion to users of online content or airline passengers, and ignore safety warnings, instructions and legal conditions. The impetus for reading instructions and warnings before operating an AV could perhaps be made more significant, particularly if warnings are coupled with notices about legal repercussions such as licence penalties or criminal charges. This is a matter which has not been properly explored to date, and requires further research.

However, it is argued that if we consider the studies highlighting problems in human–machine interaction, it is reasonable to suggest there may also be problems with drivers’ interaction with the AV digital interface, with the communication of information, and with the provision of valid consent. This is relevant to liability and insurance frameworks.

How responsibility leads to liability

The Vienna Convention on Road Traffic (1968) set out that drivers should remain in control of their vehicle at all times, and that the driver is responsible for the vehicle’s actions. This Convention was amended in 2016 (UNECE, 2016) to allow for driving tasks to be transferred to automated systems.

Liability in the context of shared operation between a driver and an automated system is often defined in a binary fashion, with either the driver described as ‘in charge’ or the AV ‘driving itself’ (Automated and Electric Vehicles Act, 2018). When the driver is in charge of the vehicle, unless there is a fault with the AV, the driver is likely to be liable for any accident (McCall et al., 2018). Fledgling legislation in the UK also contemplates additional responsibility for the driver to ensure the AV is operated only in accordance with the manufacturer’s instructions. For example, a driver may be found to have caused an accident if they allow the vehicle to begin driving itself ‘when it was not appropriate to do so’ (Section 3(2), Automated and Electric Vehicles Act, 2018).

Draft international guidelines on AV safety (Global Forum for Road Safety, 2020) also emphasise the responsibility of the driver. The guidelines currently being contemplated include that the driver should: maintain the capability to drive; resume timely, safe and proper control of the vehicle upon a takeover request; familiarise themselves with the requirements and rules regarding takeovers and the types of activities which can be undertaken while the vehicle is in automated mode; and consider their individual capability to use a shared driving system, as some individuals may not have the mental or physical ability to use it safely. The responsibility for operating the AV appears asymmetric, benefiting the AV manufacturer and placing great responsibility on the driver.

Bellet et al. (2019) consider a framework of liability which takes into account the blurred and often concurrent tasks of AV and driver which may occur while the vehicle is being driven at any given time. During automated driving, the AV monitors the driver and surroundings and informs the driver of pertinent information, such as an upcoming exit. Simultaneously, if the AV has control of the vehicle, the driver is expected to remain in a monitoring role, in that they must remain alert to signals from the AV and be capable of receiving any information being provided. This situation may evolve to the AV warning the driver, where the AV detects an upcoming situation in which the driver may be required to take over. When the driver engages in a handover process, the transition evolves to both the driver and the AV managing the handover: the driver must decide whether to take control, while the AV seeks confirmation from on-board sensors as to whether the driver is ready. During the operation of AV there is thus a constant merging of tasks, where at any one time both the driver and the AV are likely to have some responsibility (Bellet et al., 2019). This presents an alternative to the unsatisfying ‘driver in charge’ or ‘vehicle in charge’ dichotomy (Bellet et al., 2019). However, this framework necessarily assumes a closed loop of potential action, whereby if the vehicle detects during the monitoring phase that the driver is not ready to take over, the AV will deal with the situation either by continuing the driving until the driver is ready, or by making a safe stop. The framework requires the technology to be sufficiently advanced to guarantee that the vehicle will correctly detect whether the driver is ready and, where the driver is ready, that there is adequate time for a safe handover. Failing that, where the driver is not ready, the vehicle should always be able to make a safe stop.
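The closed loop this framework assumes can be made explicit in a short sketch. The states and transition rules below are our own schematic reading of the shared-driving cycle summarised above, not an implementation by Bellet et al.; the function and parameter names are hypothetical.

```python
# Schematic sketch (our reading, not Bellet et al.'s implementation) of the
# assumed closed-loop handover cycle. States and rules are illustrative.
from enum import Enum, auto

class Mode(Enum):
    AUTOMATED = auto()         # AV drives; driver monitors signals from the AV
    TAKEOVER_WARNING = auto()  # AV anticipates its operational limit
    HANDOVER = auto()          # driver and AV jointly manage the transition
    MANUAL = auto()            # driver in control
    SAFE_STOP = auto()         # fallback when no ready driver is detected

def next_mode(mode: Mode, limit_ahead: bool,
              driver_ready: bool, can_safe_stop: bool) -> Mode:
    """One step of the assumed closed loop (hypothetical transition rules)."""
    if mode is Mode.AUTOMATED:
        return Mode.TAKEOVER_WARNING if limit_ahead else Mode.AUTOMATED
    if mode is Mode.TAKEOVER_WARNING:
        if driver_ready:
            return Mode.HANDOVER
        if can_safe_stop:
            return Mode.SAFE_STOP
        # No branch exists for 'limit reached, driver not ready, no safe
        # stop available' -- the failure case the paper argues liability
        # frameworks must still address.
        return Mode.AUTOMATED  # continue driving until the driver is ready
    if mode is Mode.HANDOVER:
        return Mode.MANUAL if driver_ready else Mode.SAFE_STOP
    return mode  # MANUAL and SAFE_STOP are terminal in this sketch
```

Writing the loop out makes the gap visible: the model has no satisfactory outcome for the case where the operational limit is reached, the driver is not ready, and no safe stop is available.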
As we have argued above, we may not be able to rely upon the AV to accurately detect the readiness of the driver, or to always perform a safe stop. In circumstances where these technical mechanisms fail, it is necessary to deal with the liability which arises where an AV has reached the end of its operational parameters, the driver has not been able to take over adequately, and there is an accident. When an accident does occur, there will be intense scrutiny of the mechanisms which attempt to apportion liability. It is argued that, due to the human factors explored above, an interactive digital interface which encourages the driver to communicate their consent to risk and to legal conditions may not withstand such scrutiny. The human factors research suggests that the known propensity of people to regain situational awareness poorly in a handover from automated mode to manual driving is a matter of great significance, of which drivers ought to be aware before deciding to drive an AV. Further, we have set out reasons to doubt that an interactive digital interface is an appropriate mechanism to communicate this risk, or to communicate legal conditions which potentially alter the rights of a driver. Having identified the potential problems with a consent framework facilitated by an interactive digital interface in AV, we now consider how these problems may be resolved.

Overcoming the challenges: The way forward

Two central problems have been identified in the use of an interactive digital interface to provide information and legal conditions to drivers of AV, where the driver’s consent may be required prior to operation. Firstly, in respect of handover situations, drivers may lack sufficient understanding of their vulnerability during a takeover: they may not comprehend their own skill deficit in managing shared driving with an AV, and they may misunderstand the limitations of AV systems. Secondly, even assuming that enough information can be assembled and delivered via a digital interface to educate a driver about these matters, the problem persists of presenting that information in a way which engages the user, so that it is properly understood and taken into account when using the AV, and when deciding whether or not to operate the AV in the first place.

For AV to operate safely on roads, drivers must be aware of the risks and of their responsibilities. This is also essential for a transfer of liability to take place between vehicle and driver. If users provide their consent to operate a vehicle in circumstances where they have not been provided with all of the relevant information, or where information has been provided in circumstances where it is known they are unlikely to pay attention (Manson and O’Neill, 2010), the validity of that ‘consent’ is susceptible to challenge during any litigation following an accident, and will weaken the predictability of liability and insurance frameworks. The uncertainty around assigning liability in AV highlights the desirability of a coordinated effort by policymakers and manufacturers to devise organisational structures and regulatory measures which support fair criteria for culpability attribution (Bonnefon et al., 2020).

The current policy debate surrounding the certification and licensing of AV tends to focus on the vehicle and not the driver. At the international level, the World Forum for Harmonisation of Vehicle Regulations, part of the United Nations Economic Commission for Europe (UNECE), is working towards a concept for the safe certification of AV based on principles of real-world test drives, physical certification tests and audit processes (GRVA, 2019). At present, certification is being planned for AV, and not for the drivers of AV (Law Commission, 2019). Part of the reason for this is that, as AV technology is still being developed, generalised driver training may not be able to incorporate the distinctive design features offered by different manufacturers. Formulating a training programme relevant to all models of AV may not be achievable in the short term. However, this is not to say specific training for AV will remain unworkable: a model whereby manufacturers become responsible for training the drivers of their vehicles may be an option (Law Commission, 2019). The current policy of not having specific driver training may be more symptomatic of the relative immaturity of the industry and technology, and of the need to investigate training further. Hence it is suggested that if drivers are to acquire the requisite knowledge and skill, not only to drive safely but to accept responsibility in a voluntary and informed manner, they will require dedicated driver training in order to interact with AV safely. Such training must be specific to AVs requiring operational transfer between human and vehicle, and must focus upon improving the comprehension and skill of the driver. Furthermore, the challenges which drivers face in achieving a safe level of skill and comprehension, caused by the opacity and complexity of AV, should be addressed in a collaborative effort by policymakers and manufacturers (Bonnefon et al., 2020).

PAsCAL is an ongoing 3-year research project regarding the public acceptance of connected and automated vehicles, focusing on producing a user-centric framework (referred to as the Guide2Autonomy) aimed at facilitating the transition to automated vehicles, part of which involves research into specialised driver training systems for AV. The investigations aim to incorporate enhanced driver comprehension and skill into future driver training.

The components of the Guide2Autonomy framework were designed by identifying potential areas for expanding upon current research, including driver risk perception and attitudes, and how these impact upon the use of AVs. A gap was identified in the literature surrounding driver risk perceptions, in particular between the objective existing risk and the subjective perceived risk of driving an AV. Lowering the perceived risk of driving an AV may have a positive impact upon public confidence in AV technology. Conversely, drivers who report excessive trust towards assistive technologies tend to overestimate their capabilities (Ebnali et al., 2019), and over-trust tends to distort the accuracy of perceived risk. This may encourage those who experience difficulty in driving, such as older adults, to use AV, while they may have less ability than other drivers to manage it (Choi and Ji, 2015). Risk perception also has a role to play in the willingness to adopt safer behaviour (Rosenbloom et al., 2016): people who perceive a risk to safety are more likely to engage in behaviour commensurate with safety advice (Dinh et al., 2020).

This is relevant to the problem identified above, whereby drivers of AV may not appreciate their level of responsibility in operating such a vehicle. In order to operate AV safely, drivers must develop ‘calibrated trust’, which is made possible by knowledge of how the system operates (Biondi et al., 2019). Simulator and video training for AV has been found to have a positive effect on user attitudes as well as higher-order cognitive skills, and to provide an improved mental model of AV. To some extent simulator training may be useful, and training drivers via virtual reality systems or simulators may play a part in acquiring the skills to interact with AV (Sportillo et al., 2018). However, while a simulated environment allows for an approximate use of the technology, it has been found that this approach cannot give users a sense of large-scale and complete systems (Moran et al., 2014). Overall, simulators have been found to fall short in many areas of driver training for AV: while they have uses, they may not necessarily prepare drivers for an automated driving task. ‘Learning by doing’ is essential (Boelhouwer et al., 2019), and drivers require training strategies to support them during handover situations. Drivers who are trained in ‘real-life’ AV may benefit from improved skills, knowledge and safer automated-to-manual recovery, as their driving experience provides a more transparent view of how highly automated cars operate (Ebnali et al., 2019), and specialised training improves response time (Payre et al., 2016). The current state of this research provides a basis on which Guide2Autonomy aims to build. The training and evaluation programme within Guide2Autonomy seeks to extend research links between drivers’ psychological and physical abilities, and to investigate how new driver training and education solutions could improve users’ skills. This corresponds with the framework’s user-centric approach and focus on user perceptions (Van Egmond et al., 2019).

Training specific to the demands of the human-to-vehicle and vehicle-to-human transfer would address the desirability of individuals being cognisant of the risks and responsibilities involved in operating highly automated vehicles. Meanwhile, certification (or the provision of an AV driving licence) may deal with the issue of allowing onto public roads only those individuals who have proven some proficiency at vehicle-to-human operational transfer. Specific driver training is more likely to produce drivers who appreciate the realities of operating an AV. A trained driver who has received certification indicating a specific competence in operating an AV is ‘informed’ of their risks and responsibilities. This may better enable a driver to understand the significance of the information provided via an interactive digital interface, and facilitate a valid acceptance of shared operational responsibility with a vehicle’s automated system.

The Guide2Autonomy framework aims to formulate methods, in conjunction with traditional driver education providers, to produce new driver education aimed at improving the risk perception and skills of both new and experienced drivers. The modelling of newly required skills is designed to take into account drivers’ perceptions and attitudes, and to ascertain how driving instructors may effectively teach the skills necessary for a manual-to-automated transfer (Van Egmond et al., 2019). The Guide2Autonomy framework, illustrated in Fig. 2, comprises interconnected elements including: a platform on which a range of users from all transport sectors can identify their concerns, provide feedback, and share lessons learned; a simulation environment in which research questions and hypotheses derived from the issues are designed, studied, and verified; a training and education programme in which new driving needs and certification requirements for different levels of automation are identified and tested by both experienced and new drivers; and real-world case studies to validate the proposed research using trials and demonstrations.

Fig. 2: Overall concept of the Guide2Autonomy.

This figure is included to demonstrate the methodology being used to educate drivers and improve their skills, and how this may be validated.

However, the Guide2Autonomy framework will comprise only part of the solution to the problems identified. Even if new training programmes for drivers are developed, issues will remain regarding interactive digital interfaces. While the lessons learned from people’s interaction with internet-based services and airline safety demonstration videos are potentially instructive, we do not yet know how these translate to AV.

In light of the essential function digital interfaces are set to play within AV, additional research is required to ascertain how well drivers absorb information from these devices, and whether the legal conditions presented via digital interfaces are likely to withstand the scrutiny of a court if the liability of the driver for an accident is called into question.

Further research is necessary to specifically test users’ comprehension of risk and their understanding of their personal responsibility for the safe operation of an AV. It is necessary to ascertain whether the driver’s trust in the technology is appropriately calibrated and, on the basis of the instructions presented on the digital interface, whether they are likely to engage in behaviour suited to the capabilities of the AV.

Conclusion

This article has presented a theoretical examination of the use of consent as a mechanism for transferring operational responsibility and legal liability between driver and AV, specifically where consent is facilitated by an interactive digital interface. Draft international guidelines for AV suggest such interfaces will be used to inform the driver about their responsibilities in operating the AV, and to provide safety information. Digital interfaces may also set out to create legally binding conditions upon the driver which attempt to alter the driver’s legal position in the event of an accident. However, it is doubted that consent provided by the driver and communicated via a digital interface would be sufficient to alter liability frameworks. This is due to known fundamental weaknesses associated with creating legal relations via a digital interface, which may be particularly problematic in the context of AV. The knowledge gap between AV manufacturers and drivers about risk is great. Drivers may not be aware that shared operational responsibility carries a new risk to their safety, and that when the driving is transferred from automated to manual, they may manage the transfer poorly, at great risk to themselves and all other road users. It emerged from an examination of the relevant human factors research that drivers may not appreciate the technological limits of AV, and the corresponding extent of their own responsibility. Further, it is likely that drivers will not thoroughly consider legal conditions presented via digital interfaces. There may be a tendency for users of technology, including drivers of AV, to skip over instructions, legal conditions and safety demonstrations presented via a digital interface, because people may have difficulty effectively absorbing information presented electronically. As yet we do not know how pronounced this tendency will be in the context of AV. As a result of the potential problems outlined, the use of driver consent may fail to create predictable liability frameworks.

The technological hype surrounding AV presents partially automated vehicles as safe and comfortable alternative forms of transport. However, AV will require drivers to learn new skills in order to manage automated-to-manual driving transfers. Training via driving simulators may be of use; however, low-fidelity prototypes may not convey the information and experience necessary to optimise the driver’s knowledge and safety. Real-life training in an AV, and corresponding certification, may be necessary. The Guide2Autonomy framework being produced by the PAsCAL research project aims to build upon current research regarding AV driver training, to address knowledge gaps surrounding the risk perception of the driver, and to incorporate an emphasis on human behaviour into the design of driver training programmes, improving interactions between drivers and AV. Specialised AV driver training is required not only to improve drivers’ skills; driver education is also necessary to calibrate drivers’ trust in AV to a level commensurate with the capabilities of the technology, and to provide drivers with a more realistic sense of what they are required to do in order to operate the vehicle safely.

In addition to driver training, further research must be undertaken to verify the substance and quality of communication between driver and AV where an interactive digital interface is used to convey instructions, legal conditions, risk and responsibility. As current studies indicate that people ignore complex information provided via a digital interface, problems are likely to arise if an interactive display is the only means of relaying information and legal terms to the driver. This problem may be exacerbated where manufacturers are incentivised to create an effortless digital pathway for the user, and to provide an uncomplicated interface which supports a driver in their decision to drive the AV. Rather than confronting the driver with accurate, yet potentially complicated, information about risk and legal conditions, manufacturers are more likely to create an enjoyable and seamless interface experience for the driver.

In the circumstances of an accident and consequential litigation or insurance dispute, driver consent to legal terms and to the demarcation of liability, facilitated by an interactive digital interface, may fail under the intense scrutiny of the courts. For a driver’s consent to operate an AV to be valid, the driver must be aware of the risks and of their responsibilities. This is likely to be achievable only through specific AV driver training, and through interfaces designed to support the legal elements of consent, which include that the driver must appreciate the nature and extent of the risk being undertaken.

Without sufficient education and training, consent taken via an interactive digital interface as a mechanism to transfer responsibility and liability between driver and AV is a step likely to disadvantage the driver, who may not appreciate the risk or legal obligations of operating an AV. Manufacturers cannot be absolved from liability merely because a driver has communicated agreement to terms via a digital interface, irrespective of whether the driver has comprehended the risk and responsibilities involved. AV manufacturers and policy-makers must be cognisant of these issues when preparing to place AV on the market, and must work to create a fair system of culpability.