Ethical Decision-Making in Autonomous Vehicles

Spandan Bhattarai
5 min read · May 6, 2023


Autonomous vehicles (AVs) are being developed to address the risk of human error, which is believed to contribute to more than 90% of road accidents (Hale & Glendon, 1987). These sophisticated computer systems use sensors and artificial intelligence to drive on public roads without human assistance. Several ethical issues surrounding autonomous cars and trucks have already emerged, and more will follow as the technology develops. When an AV can no longer drive safely, it must decide whether it can keep operating at all and, if so, how it should drive until conditions are safe again: at what speed, in what weather, and under what constraints.

Developers of these vehicles face many other dilemmas: How should an AV respond if it detects a person or animal in its path? Should it always give priority to humans? If the car detects that the user has fallen asleep behind the wheel, should it alert them or take control? How should an AV behave when faced with a situation that presents unavoidable risk to one party or another? This article explores some of these conundrums and explains why ethics is so essential when building AVs.

Why is Ethical Thinking Important?

Computer scientists and engineers have an ethical responsibility to develop autonomous vehicles that benefit society and provide a net positive impact on road safety. AVs will have to make complex and difficult decisions about when to take risks and when to prioritize safety. The choices they make in these situations will have a significant impact on the lives of everyone on the road. To build a trustworthy AV, engineers must consider what values should guide the behaviour of their vehicle. They must then design the technology to reflect these values.

What Makes a Decision Ethical?

When engineers design the functionality of an AV, they must consider how the vehicle will respond to different situations and which values will guide that behaviour. AVs should be designed with a set of ethical principles in mind, and the outcome of any given situation should reflect the values built into the system. Engineers should also be transparent about those principles: this helps society understand how the AV responds to different situations, and it gives engineers a shared basis for making difficult trade-offs during development.
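
One way to make that transparency concrete is to keep the declared values in an explicit, inspectable structure rather than scattered through the control code. The sketch below is a minimal, hypothetical illustration in Python; the class name, fields, and thresholds are assumptions made for this article, not part of any real AV stack.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class EthicalPolicy:
    """Hypothetical, human-readable declaration of the values an AV is built around."""
    # Priority order used when outcomes conflict (highest priority first).
    priorities: tuple = ("human_life", "human_injury", "animal_life", "property")
    # Always attempt to brake before considering any evasive swerve.
    always_brake_before_swerving: bool = True
    # Illustrative threshold, not a real regulatory figure.
    max_acceptable_collision_probability: float = 0.01


POLICY = EthicalPolicy()

if __name__ == "__main__":
    # Anyone auditing the vehicle's software can read the declared values directly.
    print("Declared priority order:", " > ".join(POLICY.priorities))
    print("Brake before swerving:", POLICY.always_brake_before_swerving)
```

Keeping the policy in one place like this also makes it easier to review, and to explain after the fact why the vehicle behaved the way it did.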

Can an AV Always Be Run Safely?

Engineers will have to grapple with difficult situations in which an AV must decide between operating safely and maximizing the chance of saving lives. For example, imagine an AV driving through a densely populated area when it suddenly detects an imminent collision with a pedestrian. If the AV can't brake in time, it must decide whether to stay in its lane and hit the pedestrian or swerve into a crowd. Staying in its lane risks killing one person in order to avoid endangering many others; swerving into the crowd risks killing many people in order to spare one. In this situation, engineers should aim to minimize the number of fatalities, and the vehicle should always attempt to brake before it ever considers swerving.
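
As a rough sketch of that ordering, the hypothetical function below brakes whenever braking alone avoids the collision, and only otherwise compares the expected fatalities of the remaining options. The function name, parameters, and numbers are illustrative assumptions, not a real planner.

```python
def choose_maneuver(braking_avoids_collision: bool,
                    expected_fatalities_straight: float,
                    expected_fatalities_swerve: float) -> str:
    """Brake first; only if braking cannot avoid impact, pick the path
    with the fewer expected fatalities."""
    if braking_avoids_collision:
        return "brake"
    if expected_fatalities_swerve < expected_fatalities_straight:
        return "brake_and_swerve"
    return "brake_in_lane"


# Example: swerving into the crowd is expected to harm more people than braking in lane.
print(choose_maneuver(braking_avoids_collision=False,
                      expected_fatalities_straight=1.0,
                      expected_fatalities_swerve=3.5))  # -> "brake_in_lane"
```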

Should an AV Always Maximize Safety?

Engineers should always prioritize safety above everything else. However, there are situations in which some risk is unavoidable. For example, imagine an AV driving down a narrow road in bad weather. If it travels at a weather-safe speed, it risks being rear-ended by faster traffic behind it; if it keeps pace with that traffic, it takes on the risk posed by the poor conditions. Should the AV always slow to the safe speed, or should it accept some weather risk to avoid a collision from behind? Engineers should ensure that the AV defaults to the safe speed, but they should also design it to estimate the probability of being rear-ended at that speed. If that probability is low, the AV should simply slow down; if it is high, holding a somewhat higher speed despite the conditions may be the less dangerous choice.
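
A threshold rule of that kind might look like the hypothetical sketch below. The threshold value and speeds are arbitrary illustrations; a real system would derive them from validated risk models.

```python
def choose_speed(p_rear_end_if_slow: float,
                 safe_weather_speed_kph: float,
                 traffic_speed_kph: float,
                 risk_threshold: float = 0.3) -> float:
    """Default to the weather-safe speed; hold a higher speed only when the
    estimated chance of being rear-ended at the slower speed is high."""
    if p_rear_end_if_slow > risk_threshold:
        # Rear-end risk dominates: stay closer to the speed of following traffic,
        # but cap the increase relative to what the weather allows.
        return min(traffic_speed_kph, safe_weather_speed_kph * 1.2)
    return safe_weather_speed_kph


print(choose_speed(0.60, safe_weather_speed_kph=40.0, traffic_speed_kph=60.0))  # 48.0
print(choose_speed(0.05, safe_weather_speed_kph=40.0, traffic_speed_kph=60.0))  # 40.0
```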

When Should an AV Take a Risk?

AVs should take a risk only when doing so is expected to reduce harm. For example, imagine an AV driving in heavy rain. As it approaches a narrow bridge, it detects that the bridge is flooding and may be impassable. Should it attempt the crossing, or turn around and find another route? If it turns around, it risks getting stuck in the rain on the way back; if it attempts the crossing, it risks getting stuck in the floodwater. In this situation there is no safe choice: the AV can only weigh the two risks against each other and choose the option with the better expected outcome.
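
Comparing two risky options reduces, in the simplest case, to comparing their estimated probabilities of harm, as in the hypothetical sketch below. The numbers are invented for illustration.

```python
def choose_route(p_harm_attempt_bridge: float, p_harm_turn_back: float) -> str:
    """When neither option is safe, pick the one with the lower estimated
    probability of harm."""
    return "attempt_bridge" if p_harm_attempt_bridge < p_harm_turn_back else "turn_around"


# Illustrative estimates only: turning back is judged less likely to end badly.
print(choose_route(p_harm_attempt_bridge=0.40, p_harm_turn_back=0.15))  # "turn_around"
```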

Which Humans Should Be Protected in a Dilemma?

Engineers will have to consider which humans their AV should prioritize in a dilemma situation. This is called agential decision-making, and it refers to the AV's choice of which humans it will protect. Agential decision-making will be necessary in some situations, but it should be avoided when possible. For example, imagine the AV from the previous section driving down a narrow road in bad weather, travelling slowly, when a motorcycle approaches quickly from behind. The AV must decide whether to speed up despite the road conditions or to hold its weather-safe speed. Whichever it chooses, it is deciding who bears the remaining risk: if it holds the slower speed, the motorcycle rider faces a greater chance of running into the back of the AV; if it speeds up, the passengers take on the danger of the poor conditions.
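
One possible (and debatable) policy is to compare the expected harm to each party and accept whichever is smaller. The hypothetical sketch below does exactly that, and is meant only to show how the AV's action determines who carries the residual risk; the harm values are invented.

```python
def agential_choice(expected_harm_to_rider_if_slow: float,
                    expected_harm_to_passengers_if_fast: float) -> str:
    """Minimising total expected harm is one possible policy, not the only
    defensible one; the point is that the choice assigns the remaining risk."""
    if expected_harm_to_rider_if_slow <= expected_harm_to_passengers_if_fast:
        return "hold_safe_speed"   # rider bears the (smaller) remaining risk
    return "speed_up"              # passengers bear the (smaller) remaining risk


print(agential_choice(expected_harm_to_rider_if_slow=0.2,
                      expected_harm_to_passengers_if_fast=0.5))  # "hold_safe_speed"
```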

When Is Human Life Sacrosanct?

Human life is sacrosanct in some situations, but when every available option puts people at risk, an absolute rule gives no guidance and the AV has to weigh outcomes. For example, imagine an AV on a narrow, two-lane road that has flooded in heavy rain. There are two ways off the road: it can turn right and drive through a neighbourhood until it reaches a more major road, or it can turn left and drive through a park to reach the same road. When faced with these options, the AV should prioritize safety and take into account the number of people who might be affected by its decision. If turning left is expected to result in more casualties, the AV should turn right.
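
Stated as code, the comparison is trivial; the sketch below exists only to make the decision rule explicit, and the casualty estimates are invented.

```python
def choose_exit(expected_casualties_right: float,
                expected_casualties_left: float) -> str:
    """Between two viable exits, take the one expected to put fewer people at risk."""
    return "turn_right" if expected_casualties_right <= expected_casualties_left else "turn_left"


# Illustrative estimates: the park route (left) exposes more people.
print(choose_exit(expected_casualties_right=0.1, expected_casualties_left=0.8))  # "turn_right"
```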

Where Does Ethics Fit into All This?

Engineers must carefully consider the ethical implications of their design decisions. They must ask themselves which values should guide the behaviour of their AV, and then design the technology to reflect those values. For example, engineers might decide that their AV should prioritize the safety of humans over the safety of animals. If the AV encounters a situation in which one choice would cause more human deaths but fewer animal deaths, the vehicle should be designed to choose the option that minimizes human deaths.
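
Prioritizing humans over animals amounts to a lexicographic comparison: human outcomes are compared first, and animal outcomes only break ties. The hypothetical sketch below encodes that ordering; the outcome dictionaries are illustrative.

```python
def better_outcome(outcome_a: dict, outcome_b: dict) -> str:
    """Lexicographic comparison: expected human deaths first, animal deaths
    only as a tie-breaker."""
    def rank(outcome: dict) -> tuple:
        return (outcome["human_deaths"], outcome["animal_deaths"])
    return "A" if rank(outcome_a) <= rank(outcome_b) else "B"


# Option A harms an animal but no humans; option B harms a human.
a = {"human_deaths": 0, "animal_deaths": 1}
b = {"human_deaths": 1, "animal_deaths": 0}
print(better_outcome(a, b))  # "A"
```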

Conclusion

Ethical thinking is an important aspect of designing autonomous vehicles. These vehicles will face many difficult decisions, and engineers must be careful to consider the ethical implications of their choices. If autonomous vehicles are to be trustworthy and safe, engineers must design them with a set of ethical principles in mind.

References:

· Hale, A. R., & Glendon, A. I. (1987). Individual Behaviour in the Control of Danger.
