Some people think all ethical or moral arguments are mistaken because there are no objective moral truths. This is where ethical skepticism steps in. It challenges the notion of objective moral truths, arguing that ethical principles are human-made constructs shaped by culture, history, and psychology, not universal or inherent facts.
Ethical skepticism forces us to confront whether AI ethics should prioritize human consensus (e.g., democratic input) over 'discovered' principles and how to avoid cultural imperialism in global AI governance (e.g., Western vs. Eastern ethical priorities). While this skepticism challenges objectivity, it does not negate the practical need for ethical frameworks — especially as AI blurs lines between tool, agent, and potential sufferer. The key is to build systems that reflect negotiated human values, not illusory absolutes.
Different cultures and communities, in different places and times, have held different ethical views. Some cultures, for example, consider or considered polygamy immoral while others do not. On a relativist view, these are nothing more than different cultural customs. None can be condemned as wrong or honored as right, since there is no 'acultural standpoint' from which such a universal assessment could be made.
So when we say it was "right (or wrong) to kill and eat the cabin boy" in our case study, all we can be saying is that it was "right (or wrong)" by the moral standards of that culture, or of our own culture if we are describing our own view. The relativist conclusion is that there is no objective truth of the matter independent of cultures or practices: ethical truth is relative to the individual (subjectivism) or to the culture or society (cultural relativism).
…But even if there are ethical differences between cultures, it does not mean that there is no universal moral truth:
It does not follow from the fact that people disagree about whether or not polygamy is wrong that there is no fact of the matter, any more than it followed from the fact that people disagreed about the shape of the earth that there was no fact of that matter.
It might still be the case that there are larger areas of ethical consensus. All communities at all times may have thought 'gratuitous killing is wrong'.
Different practices may not show that different cultures had different moral rules. What looked like gratuitous killing to Western explorers may have been essential to the survival of those communities, and so not gratuitous at all.
There is at least one way in which ethical judgements are 'personal': we have to make them ourselves. Legal issues can be authoritatively settled by specified institutions. The law can authoritatively determine what legally should or should not be done. There is no analogous ethical institution, and to this extent the skeptical views about ethics are right.
To quote the tutor: "If we disagree about an ethical matter, I can think you mistaken in a way that will eventually seem merely misguided in the legal case. My moral views are arrived at by me and there is no ethical court who can overrule me. Each person’s assessment of the right thing to do is in this sense at least as good as anyone else’s."
The idea that ethics is 'personal', in the sense that one cannot be definitively overruled by others in ethical matters, does not mean that ethics is personal as taste is personal. Several important differences between the two include:
We can think we were mistaken about matters of morality, but it is hard to make sense of that judgement about matters of taste.
The way in which taste is personal seems to make certain kinds of disagreement over matters of taste impossible.
Subjectivism and relativism make it hard to explain the everyday practice of discussing and arguing about ethical issues.
A problem for relativism is moral progress. We often think things have improved morally, or at least that they could improve.
E.g., There is less slavery and discrimination against women, more tolerance and acceptance of difference. We might regret that we have not made more progress, but it is hard to see how to make sense of those judgements if relativism is true. Relativists struggle to explain the common view that moral progress (and deterioration) is possible.
In conclusion, the position taken in this elective is that the real problem is not that relativism is necessarily self-defeating, but that relativism cannot capture our actual practices and beliefs: the idea that we can be mistaken, that we disagree, that we can give one another reasons, and that we make progress on some issues.
Starting off with a list of different senses of artificial intelligence:
We may be referring to a branch of computer science that has the study of AI as its object… but this still leaves open the question of the nature of the AI being studied.
You will see many references to ethical guidelines by government agencies and other organizations interested in 'responsible AI', although this seems to be focused on safe practices for the development, implementation, and maintenance of AI systems.
We may be referring to artificial intelligence systems, a specific application or implementation of AI techniques to solve a particular problem or perform a specific task.
In philosophy, a descriptive definition provides the characteristics that pick out all and only the instances of a term. So, what is AI? Well, there is no generally accepted definition of AI. Yet there are many human concepts that lack precise definitions, and we reason about them well enough (e.g., 'love', 'art').
Referencing the words of Alan Turing, John McCarthy, and Marvin Minsky: a machine counts as intelligent if it performs tasks that, when performed by humans, require intelligence. What do we see that demonstrates intelligence? The use of certain high-level cognitive abilities, including but not limited to:
Carrying out complex reasoning: solving physics problems, proving mathematical theorems
Drawing plausible inferences: diagnosing automobile faults, solving murder cases
Using natural language: reading stories, carrying out extended conversations
Solving novel and complex problems: completing puzzles, generating plans, designing artifacts
As for the definition of AI as a field of study, there are also several different senses in circulation:
Elaine Rich: defines AI as "…the study of how to make computers do things at which, at the moment, people are better."
The definition is relative: name a task that humans are better at, at least as of 1983. If a computer can perform this task, it counts as artificial intelligence (AI).
This makes the definition more inclusive of activities that would not necessarily be described as 'intelligent', like identifying an object or navigating through a cluttered space.
Margaret Boden: defines AI as "…the study of how to build or program computers to enable them to do what minds can do."
A weaker version that is a functional characterization good for engineering tools: “…the development of computers whose observable performance has features which in humans we would attribute to mental processes.”
Also elsewhere characterizes AI as: "…the use of computer programs and programming techniques to cast light on the principles of intelligence in general and human thought in particular."
John McCarthy: defined AI as "…the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable."
Stuart Russell and Peter Norvig: defined AI as "the study of agents that receive percepts from the environment and perform actions."
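Russell and Norvig's agent-based definition can be sketched as a simple percept-action loop. The thermostat agent and its percepts below are invented illustrations, not examples from the lecture:

```python
# A minimal sketch of the agent abstraction: an agent is a function
# from percepts to actions, run against a stream of percepts.
# The thermostat agent and its thresholds are hypothetical.

def thermostat_agent(percept):
    """A simple reflex agent: the percept is the current temperature."""
    if percept < 18:
        return "heat_on"
    elif percept > 22:
        return "heat_off"
    return "no_op"

def run(agent, percepts):
    """Feed the agent a stream of percepts and collect its actions."""
    return [agent(p) for p in percepts]

print(run(thermostat_agent, [15, 19, 25]))
# ['heat_on', 'no_op', 'heat_off']
```

Even this trivial agent fits the definition: it receives percepts (temperatures) from an environment and performs actions, which is why the Russell-Norvig framing is deliberately broad.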
AI ethics is not explicitly located within the dynamic between society and AI systems but serves as an overarching framework that intersects with four quadrants of AI development and behavior. Here is how it connects to each quadrant and what each quadrant means:
Thinking Humanly (Human Thought): Ethics ensures AI systems mimicking human cognition avoid biases.
E.g., fairness in decision-making algorithms.
Thinking Rationally (Rational Thought): Ethics guides logical reasoning.
E.g., embedding moral principles like justice into AI’s decision trees.
Acting Humanly (Human Behavior): Ethics ensures AI behaviors align with human values.
E.g., social robots adhering to empathy and transparency.
Acting Rationally (Rational Behavior): Ethics defines 'optimal' actions in moral terms.
E.g., autonomous vehicles prioritizing safety and accountability.
Greek philosopher Aristotle (350 BCE) claimed that living things have a soul, but different categories of living things have souls with different powers.
When we talk about artificial 'intelligence', how do cognitive and mental capabilities relate to it?
In classical philosophy, intelligence is often associated with nous, the faculty of understanding, and with rationality. The tutor presents two distinct approaches to defining intelligence:
Behaviorist/functionalist approach: External behavior matters. If it behaves intelligently, then it is intelligent. An example of this is the Turing test.
Cognitive approach: What happens internally matters. We must consider how it thinks, not just look at the behavior. An example of this is the Chinese room argument. To elaborate on this example:
Premise: A person who understands neither Chinese nor ancient Greek is placed in a closed room. The room contains a manual with instructions detailing the appropriate response to every possible input, all in Chinese characters. The person in the room can communicate with the outside world via written responses passed through a slot in the door.
Scenario: A person outside who speaks Chinese passes messages written in Chinese to the person in the room. The person in the room responds using the manual and so appears conversant in Chinese, despite not understanding any of the communication.
Argument: Without 'understanding', a machine's activity cannot be described as 'thinking' (see below). Since a machine does not think, it does not have a 'mind' in the same way you would say a person does.
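The mechanism of the Chinese Room can be sketched as a lookup table: the "manual" maps inputs to canned responses, and the rule-follower applies it with no grasp of what the symbols mean. The entries below are invented placeholders:

```python
# A toy sketch of the Chinese Room "manual": input strings mapped to
# responses. Applying these rules is pure symbol manipulation; Searle's
# point is that no understanding of Chinese is involved anywhere.
# The specific phrases here are illustrative inventions.

MANUAL = {
    "你好": "你好！",
    "你会说中文吗？": "会。",
}

def room_reply(message: str) -> str:
    """Look up the manual's response; fall back to a stock reply."""
    return MANUAL.get(message, "请再说一遍。")

print(room_reply("你好"))  # the room appears conversant in Chinese...
# ...yet the lookup involves no semantics: syntax in, syntax out.
```

However sophisticated the manual becomes, the argument runs, the person (or machine) executing it is still only shuffling symbols, which is why behavior alone may not settle whether a system 'thinks'.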
Linguistic theory is the study of the principles and patterns that govern language, aiming to understand its underlying rules and structures rather than just describing surface-level features. German philosopher Rudolf Carnap described the following influential division of linguistic theory:
Syntax: the rules for correctly structuring a sentence; the grammar.
Semantics: meaning of words and sentences.
Pragmatics: how you use language to do things.
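The syntax/semantics split can be illustrated with a toy grammar check. Chomsky's famous sentence "Colorless green ideas sleep furiously" (not from this lecture) is syntactically well-formed yet semantically anomalous; the tiny lexicon and grammar below are invented for illustration:

```python
# A deliberately tiny grammar: zero or more adjectives, then a noun,
# a verb, and an optional adverb. It checks only syntax (structure),
# never semantics (meaning). Lexicon and rules are hypothetical.

LEXICON = {
    "colorless": "ADJ", "green": "ADJ", "ideas": "NOUN",
    "sleep": "VERB", "furiously": "ADV",
}

def is_grammatical(sentence: str) -> bool:
    """Accept the pattern ADJ* NOUN VERB [ADV]."""
    tags = [LEXICON.get(w.lower()) for w in sentence.split()]
    if None in tags:
        return False  # unknown word
    i = 0
    while i < len(tags) and tags[i] == "ADJ":
        i += 1  # consume adjectives
    if i >= len(tags) or tags[i] != "NOUN":
        return False
    i += 1
    if i >= len(tags) or tags[i] != "VERB":
        return False
    i += 1
    return i == len(tags) or (tags[i] == "ADV" and i + 1 == len(tags))

print(is_grammatical("Colorless green ideas sleep furiously"))  # True
print(is_grammatical("Furiously sleep ideas green colorless"))  # False
```

The checker happily accepts the meaningless sentence and rejects its scrambled twin: syntax can be verified mechanically while semantics (what the sentence means) and pragmatics (what it is used to do) lie entirely outside the rules.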