The trolley problem is a classic thought experiment in ethics that presents a moral dilemma: should one sacrifice one person to save five others? It explores the complexities of moral decision-making and the conflict between different ethical frameworks.
If a self-driving car is faced with an unavoidable collision, should it prioritize the safety of its passengers or pedestrians? The trolley problem highlights the difficult ethical choices that may be faced by autonomous vehicles in real-world situations.
However, while this problem does raise important questions about how to program autonomous vehicles to handle unavoidable accidents, it has been criticized for its unrealistic simplification of real-world scenarios and its potential to distract from more pressing ethical issues.
While the trolley problem is a common thought experiment in ethics, it is not directly relevant to the real-world crashes involving Tesla's Autopilot system, mainly because its current systems are cognitively inferior to an alert human mind.
Away from big multi-lane highways, the visual environment is much more complex. For autopilot AIs to work there, they need to:
Learn far more about interactions with pedestrians (especially children) and with bicycles.
Be able to understand odd and partially occluded presentations of objects, and handle every lighting condition, from driving straight into the sun to heavy rain at night, plus reflections from other light sources.
They would also need to understand and predict scenarios such as:
A parked car's door opening into a cyclist's path, causing the bicycle to swerve.
A child following a ball into traffic.
An oncoming truck slowing in a turn lane.
Traffic deaths can be reduced without AI approaches by:
Safe road design and pedestrianized areas (separating bikes and pedestrians from traffic). There is also an ethical trade-off with speed (not just driving at the legal 'speed limit'): since kinetic energy is KE = ½mv², stopping distance and the severity of an impact increase non-linearly with speed (see the sketch after this list).
Safe modern vehicles, with fewer SUVs and light trucks: a high grille is more deadly to cyclists and pedestrians, and such vehicles are more likely to roll over.
Setting and enforcing speed limits with zero tolerance for fatalities. The stricter EU countries have lower rates of traffic deaths than the US, even when adjusting for distance driven. Sweden is the safest at 2.2 deaths per 100,000, while the US is at 12.9.
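To make the speed trade-off concrete, here is a minimal Python sketch of how kinetic energy and stopping distance grow with speed. The reaction time, deceleration, and vehicle mass are illustrative assumptions, not measured values.

```python
# Why speed matters non-linearly: KE = 1/2 * m * v^2, and stopping distance is
# reaction distance plus braking distance (v^2 / 2a). Figures below are assumed.
REACTION_TIME_S = 1.5      # assumed driver reaction time
DECELERATION_MS2 = 7.0     # assumed hard braking on dry asphalt
CAR_MASS_KG = 1500         # assumed vehicle mass

def kinetic_energy_kj(speed_kmh: float) -> float:
    v = speed_kmh / 3.6                          # km/h -> m/s
    return 0.5 * CAR_MASS_KG * v ** 2 / 1000

def stopping_distance_m(speed_kmh: float) -> float:
    v = speed_kmh / 3.6
    return v * REACTION_TIME_S + v ** 2 / (2 * DECELERATION_MS2)

for kmh in (30, 50, 70):
    print(f"{kmh} km/h: KE ≈ {kinetic_energy_kj(kmh):.0f} kJ, "
          f"stopping distance ≈ {stopping_distance_m(kmh):.0f} m")
```

Going from 30 to 50 km/h is only a 67% increase in speed, but kinetic energy (and the braking part of the stopping distance) grows by roughly 180%.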
If we do pursue the AI approach, we should ask ourselves the following questions along the way:
Do we feel it is 'wrong' for machines to take over life-or-death decisions? Or are we willing to be utilitarian/consequentialist about it? An AI driver will be making a different kind of error when it runs you over; does that difference matter?
We need objective measurement. Are AIs safer per unit of exposure (time or distance) across the range of conditions they will be used in? If the systems will be used at night or near schools, then they have to be safe at night and near schools (see the sketch after these questions).
We cannot simply take manufacturers at their word. And we should not give in to a desire to seem 'high-tech'. Nor can we let manufacturers pass responsibility off to drivers when features clearly lead drivers toward a particular way of using the vehicle.
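One way to make the 'objective measurement' point concrete is to stratify incident rates by operating condition rather than report a single headline number. The sketch below uses invented numbers and a per-million-miles rate purely as an illustration.

```python
# Condition-stratified safety comparison (all numbers are invented for illustration).
# Each entry maps a condition to (incidents, miles of exposure).
ai_data    = {"highway, day": (1, 4_000_000), "night": (9, 1_000_000), "near schools": (3, 250_000)}
human_data = {"highway, day": (6, 4_000_000), "night": (7, 1_000_000), "near schools": (1, 250_000)}

def rate_per_million_miles(incidents: int, miles: int) -> float:
    return incidents / (miles / 1_000_000)

for condition in ai_data:
    ai = rate_per_million_miles(*ai_data[condition])
    human = rate_per_million_miles(*human_data[condition])
    verdict = "AI lower" if ai < human else "human lower"
    print(f"{condition:>13}: AI {ai:.1f} vs human {human:.1f} incidents per M miles ({verdict})")
```

With these invented numbers the AI has the lower aggregate rate, yet is worse at night and near schools; that is exactly the kind of gap a single headline figure would hide.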
While human casualties create governmental pressure and indiscriminate deaths are possible, the opposite argument can also be made: using robots removes human soldiers from the battlefield, automatically reducing (friendly) casualties, and robots would adhere to doctrine without emotional interference.
Context
Fratricides
'Fire and forget'
Unmanned aircraft and boats are becoming increasingly important in military operations, in roles such as spotting targets, serving as guided munitions ('kamikaze' drones), and acting as weapons platforms (e.g. dropping a grenade from a quadcopter).
Keep in mind that these drones are mostly remote-controlled by a human operator, so they are not really about AI… although they may have sophisticated algorithms (e.g. for stable flight).
Examples
Deniability
Implement ethics
Military 'intelligence'
Not a strong case (yet)
'Edge' computing
Future implications and further research
Back in 1966, Joseph Weizenbaum built ELIZA, a simple dialogue agent that mimicked a Rogerian psychotherapist. Weizenbaum chose this domain because it was a particular case where no knowledge of the world was required.
In his words: "If I said…'I went for a long boat ride' and he responded, 'Tell me about boats', one would not assume that he knew nothing about boats, but that he had some purpose in so directing the subsequent conversation."
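For a sense of how little machinery ELIZA needed, here is a minimal sketch of ELIZA-style keyword reflection; it is a toy reimplementation of the idea, not Weizenbaum's original script.

```python
# ELIZA-style dialogue: pattern-match the input, swap pronouns, and turn
# statements back into Rogerian prompts, with no world knowledge at all.
import re

REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are", "i'm": "you're"}
RULES = [
    (r"i need (.*)",   "Why do you need {0}?"),
    (r"i feel (.*)",   "Tell me more about feeling {0}."),
    (r"(.*) boat(.*)", "Tell me about boats."),
    (r"(.*)",          "Please go on."),          # catch-all fallback rule
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, utterance.lower())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."

print(respond("I went for a long boat ride"))        # -> "Tell me about boats."
print(respond("I feel that my family ignores me"))
```

The 'boats' rule shows why the illusion works: the reply looks purposeful even though the program knows nothing about boats.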
Xiaoice is an AI chatbot developed by Microsoft that aims to create emotional connections with users. It is designed for long-term engagement and built with artificial empathy, a positive personality, and youthfulness. Xiaoice is not just a task completer or personal assistant; it also has many domain-specific capabilities and skills.
Xiaoice is predominantly based on deep learning rather than human-crafted dialogue. For example, the chatbot can comment on photos: a convolutional neural network (CNN) encodes the image, candidate responses are matched against a large database, and the candidates are then ranked to fit its personality.
In general, it uses a mix of retrieval and generation to get candidate responses, then ranks them for personality fit, topic continuity, and ethical acceptability. Nowadays it often retrieves from billions of its own past responses.
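A minimal sketch of what such a retrieve-then-rank step could look like is below. The candidate store, the generator stub, and the scoring heuristics are illustrative stand-ins, not Microsoft's actual pipeline.

```python
# Retrieve-then-rank dialogue step: gather candidate replies from a response
# index and a generator, score each for personality / continuity / safety,
# and return the best one. All components here are toy placeholders.

RESPONSE_INDEX = [
    "That sounds exciting! What happened next?",
    "Stock prices can be stressful. Want to talk about something fun?",
    "I love rainy days too, they feel cozy.",
]

def retrieve_candidates(user_turn: str) -> list[str]:
    # Real systems retrieve from billions of past responses; here, keyword overlap.
    words = set(user_turn.lower().split())
    return [r for r in RESPONSE_INDEX if words & set(r.lower().split())] or RESPONSE_INDEX

def generate_candidate(user_turn: str) -> str:
    # Stand-in for a neural response generator.
    return "Tell me more about that."

def score(candidate: str, user_turn: str) -> float:
    personality = 1.0 if "!" in candidate or "love" in candidate else 0.5   # upbeat persona
    continuity = len(set(candidate.lower().split()) & set(user_turn.lower().split()))
    safety = 0.0 if "stressful" in candidate else 1.0                       # crude ethics filter
    return personality + continuity + safety

def reply(user_turn: str) -> str:
    candidates = retrieve_candidates(user_turn) + [generate_candidate(user_turn)]
    return max(candidates, key=lambda c: score(c, user_turn))

print(reply("I love walking in the rain"))
```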
Touching on AI ethics, Xiaoice's invocation of 'skills' (e.g. reporting the time or the weather) is moderated by what will keep the conversation going and what fits the bot's personality.
As mentioned earlier, Xiaoice generates
Measures and motivation
More than entertainment?
SPARX is a self-help, online e-therapy program that most notably takes the form of a first-person adventure video game — designed for young people aged 12-19 who are feeling down, depressed, worried, or stressed. It is a free resource in New Zealand that uses cognitive behavioral therapy (CBT) techniques to help young people manage their mental health.
Headstrong is a chatbot for youth mental-health support, built around:
Human (expert-counsellor) scripted dialogue, with content scripts that can easily be modified through a GUI flowchart. Although it still requires 'tech support' to engineer the module structure, set internal variables, write logical expressions for branching, etc., the flowchart also recognizes 'priority intents' and jumps to dedicated response modules, for example on an intent of self-harm or on less intense texts such as 'quit'.
Its architecture was flexible enough for experts to create a pandemic stress-management intervention while under lockdown. However, it lacks conversational skills for open-ended active listening and empathetic dialogue (e.g. questioning, sympathizing). In response to this shortcoming, the team is starting to add LLM features in limited contexts.
Further potential improvements to chatbot safety involve giving the dialogue manager guidance on when to invoke specific assessments and scripts, even when the only signal is the text itself (without a camera, microphone, or keyboard pressure/timing).
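Below is a minimal sketch of the priority-intent idea described above: a dialogue manager that normally follows the scripted flowchart but jumps to a dedicated module when certain intents are detected in the text. The phrase lists, module names, and flow are illustrative, not the actual Headstrong implementation.

```python
# Scripted dialogue manager with priority intents: the normal flow follows a
# flowchart of modules, but detected priority intents override it immediately.

PRIORITY_INTENTS = {
    "self_harm": (["hurt myself", "end it all", "kill myself"], "safety_module"),
    "quit":      (["quit", "stop", "leave"],                    "exit_module"),
}

SCRIPTED_FLOW = {"intro_module": "mood_check_module",
                 "mood_check_module": "cbt_exercise_module"}

def detect_priority_intent(text: str):
    """Return (intent, target module) if a priority phrase is present, else None."""
    lowered = text.lower()
    for intent, (phrases, module) in PRIORITY_INTENTS.items():
        if any(p in lowered for p in phrases):
            return intent, module
    return None

def next_module(current_module: str, user_text: str) -> str:
    hit = detect_priority_intent(user_text)
    if hit:                                   # priority intent overrides the flowchart
        return hit[1]
    return SCRIPTED_FLOW.get(current_module, "intro_module")

print(next_module("mood_check_module", "I just want to end it all"))  # -> safety_module
print(next_module("mood_check_module", "I'm okay, a bit tired"))      # -> cbt_exercise_module
```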
Borrowing the pre-trained language model and encoder BERT, and using texts from a Mechanical Turk exercise where people described a time they felt a particular emotion, the
Tay
Replika
Professional empathy
Preference for real friends
Emulate empathy