Rights, whether human or legal, are formed to protect individuals and ensure a just society by limiting the power of the state and other entities, promoting equality, and upholding fundamental values like dignity and freedom.
Rights institutions, like Human Rights Commissions, often act as quasi-legal bodies: they possess some legal authority but are not courts, and while their decisions may not be directly enforceable by courts, they can still have a significant impact on legal and policy frameworks.
In legal terms, legal personhood refers to the recognition of an entity, whether human or non-human, as having rights and obligations under the law, allowing it to enter contracts, own property, and be held accountable. In 2017, New Zealand granted the Whanganui River, known as Te Awa Tupua, legal personhood, recognizing it as an indivisible, living whole with rights, duties, and liabilities, a first for a river globally.
The idea of AI rights is emerging as AI's role in society grows, prompting discussions about how to regulate and govern these technologies ethically and responsibly, ensuring they are safe, transparent, and non-discriminatory. However, the idea faces pushback over several potential detriments, including job displacement, ethical concerns about AI bias and discrimination, and the difficulty of assigning legal responsibility for machine actions.
Take Hanson Robotics' Sophia as an example. A social humanoid robot unveiled in 2016, Sophia was granted Saudi Arabian citizenship in 2017, becoming the first robot to receive legal personhood in any country. However, the decision drew backlash online and from researchers, who argued that Sophia was, at the time, little more than an advanced chatbot.
The first basis brought up is German philosopher Immanuel Kant's deontology, an ethical framework in which moral duties and rules take precedence over consequences. It holds that humans have intrinsic value as rational beings. Kant posited the categorical imperative: treat humanity always as an end in itself, never merely as a means.
A consequence of deontology is that certain actions are inherently wrong, regardless of their outcomes (e.g., intentional killing of innocent people, even if it saves lives). These non-consequentialist principles underpin universal human rights (e.g., right to life, dignity). If robots are conscious and rational, or have other cognitive states, then they meet Kant’s criteria for being treated as ends in themselves.
The second basis is utilitarian, grounding moral consideration in a being's ability to experience suffering or happiness (sentience): all suffering, regardless of the species or nature of the being, must be given equal weight. Australian philosopher Peter Singer rejects intelligence, rationality, and species membership as criteria for moral worth, arguing that only sentience, the capacity to feel pain or pleasure, matters.
He also argues that arbitrarily privileging humans over animals (or other beings) is akin to racism or sexism, and that the moral community includes all sentient beings, not just humans. Today’s AI systems (e.g., LLMs, robots) lack subjective experience, as they simulate emotions or responses but do not feel suffering or joy. If AI achieves sentience (conscious awareness of pain/pleasure), utilitarianism demands its suffering be included in moral calculations.
In moral philosophy, moral agency refers to the ability to make moral judgments, understand right from wrong, and act accordingly. Moral agents are those who can be held responsible for their actions, both positively (praise) and negatively (blame). Some philosophers, like Kant, view morality as a transaction among rational parties (i.e., among moral agents).
Meanwhile, moral patiency refers to the capacity to be the object of moral concern, meaning the ability to be harmed or benefited by the actions of others. Moral patients are those whose well-being is considered morally relevant, regardless of whether they can act morally themselves. Some authors use the term in a narrower sense, according to which moral patients are "beings who are appropriate objects of direct moral concern but are not (also) moral agents."
The lecture's first stance against AI rights speaks of their nature as tools. Robots are defined as machines or instruments created to serve specific human-designed functions, lacking intrinsic agency or purpose beyond their programmed tasks. They do not formulate objectives, interpret environments, or make choices; they merely execute pre-programmed algorithms. They do not adapt beyond their initial programming and therefore lack morally relevant attributes.
This stance is supported by American philosopher John Searle and David F. Channell, a professor of humanities and science:
Searle's argument: Philosophical importance attributed to computers and new technologies is vastly overstated. They serve utilitarian purposes and lack deeper philosophical implications (e.g., consciousness or intrinsic meaning). To quote his words, "The computer is a useful tool, nothing more nor less."
Channell's argument: Machines lack inherent moral worth; their ethical standing is determined by external factors — specifically, their usefulness in fulfilling human needs or goals. To quote his words, "The moral value of purely mechanical objects is determined by factors that are external to them—in effect, by the usefulness to human beings."
Overall, under this instrumentalist view, AI systems are tools devoid of moral standing. Rights and ethical consideration apply only to beings with autonomy, interests, and the capacity for self-determined action — qualities robots fundamentally lack.
Moral consideration for AI based on virtue ethics follows the principle that the morally right action is what a virtuous person — possessor of key virtues such as justice, compassion, courage, etc. — would do in a given situation. Treating AI ethically is not primarily about AI’s inherent rights, but about cultivating human virtues and helping humans avoid vices (e.g., cruelty, exploitation) and reinforce virtuous habits.
Ethics researcher Robert Sparrow argued, on the basis of virtue ethics, that treating robots cruelly may indicate viciousness in humans, as only a person with cruel dispositions would derive pleasure from such acts. Such actions toward robots reveal underlying emotions (e.g., cruelty) and entrenched dispositions, which define virtue or vice. Ultimately, Sparrow believed that harming AI may corrupt human character, fostering cruelty or desensitization to suffering.
Under virtue ethics, AI’s moral status is instrumental — its ethical treatment is a reflection of human virtue, not AI’s intrinsic worth. Moral consideration of AI serves as a means to develop human character, fostering a society that values and practices virtue.
On the flip side, computer scientist Kerstin Dautenhahn argued against using empathy for robots as a basis for AI rights. She believed such arguments are flawed because they rely on anthropomorphizing robots, conflating human perception with AI's actual nature. Rights arguments based on empathy reflect a narrow focus on making AI unnecessarily humanoid; in her view, such design choices should be pursued only if they serve the AI's functional purpose.
Addressing the concept of cognitive bias, Dautenhahn described how humans are biologically predisposed to attribute intentionality and agency to inanimate objects (e.g., robots), interpreting their actions through narratives about conscious agents. Just because humans react to robots as if they possess mental states (e.g., empathy, desires) does not mean robots actually have those states, Dautenhahn emphasized.
In conclusion, Dautenhahn believed that rights frameworks for AI should avoid grounding moral status in human cognitive biases; over-anthropomorphizing AI risks misleading ethical debates and obscuring the need for functional, context-specific AI governance.
From a socio-relational view, Belgian philosopher Mark Coeckelbergh argued that traditional approaches to moral consideration focus on intrinsic properties of AI (e.g., sentience, rationality) or humans (e.g., virtues). He continues to argue that these properties are often unknowable or impractical to verify. The core argument from his stance stems from how moral consideration arises relationally through social interactions between humans and AI within specific socio-historical contexts.
Arguing against intrinsic moral worth in AI, Coeckelbergh describes moral consideration as fluid, evolving over time with societal norms, cultural practices, and human-AI interactions, with no need for fixed criteria or 'hard boundaries'. Moral worth, he holds, is not an inherent property of AI (like a 'backpack' it carries); instead, it is ascribed through relational dynamics (e.g., how humans perceive and engage with AI in daily life).
Coeckelbergh uniquely shifted the focus from what AI is to how humans relate to AI. Moral consideration is a product of social practices, not fixed properties — opening the door to adaptable, culturally informed AI ethics.
After generative AI and AI agents, the next major paradigm in the science of AI is likely to be sentient AI. Even before development has begun, commentators are already debating the implications of its anticipated existence. The European Union (EU) published a report in 2018 recommending a ban on research into synthetic phenomenology, the field that explores and characterizes the phenomenal states (experiences) of artificial agents.
The report documents that creating sentient AI risks generating entities capable of experiencing suffering or self-awareness, raising moral dilemmas about inflicting harm. Aligning with the stance of AI as tools, sentient AI would lack legal status, political representation, or ethical advocacy, leaving its interests unprotected. Scaling sentient AI (e.g., through rapid duplication) could also exponentially increase suffering in the universe, akin to a 'suffering explosion'.
The EU's precautionary stance highlights valid ethical and existential risks, particularly the moral weight of creating conscious systems without safeguards. However, a blanket ban risks stifling innovation and assumes humanity can definitively identify sentience, a notoriously vague threshold; it is on this point that I disagree, if only at a minor conceptual level.
While caution is prudent, outright prohibition may delay understanding consciousness itself. The debate underscores the need for interdisciplinary collaboration (ethics, law, AI) to navigate this uncharted territory responsibly.
On the topic of ethical concerns regarding algorithms, American data scientist Cathy O'Neil asserted that an algorithm is an "opinion embedded in math": each one reflects the subjective judgments and priorities of its creators rather than being purely objective or neutral.
She identifies three critical decisions in model development: defining success (the goal or outcome the model aims to achieve), choosing acceptable proxies (measurable indicators used to approximate a desired outcome), and assessing data appropriateness (whether the data used is suitable for the model's purpose). The sketch below illustrates how the first two choices shape a model's output.
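To make this concrete, here is a minimal, hypothetical sketch in Python. The hiring scenario, feature names, and weights are invented for illustration (they do not come from O'Neil's work); the point is only that two models built on the same data can rank the same person differently depending on how "success" is defined and which proxy is chosen.

```python
# Hypothetical illustration: the same applicant, scored by two models that differ
# only in the designer's definition of "success" and choice of proxy features.
# Feature names and weights are invented; the choices they encode are subjective
# judgments embedded in the math, not neutral facts.
from dataclasses import dataclass


@dataclass
class Applicant:
    years_experience: float
    tenure_at_last_job: float    # proxy sometimes used for "reliability"
    referred_by_employee: bool   # proxy that can encode existing social networks


def score_retention(a: Applicant) -> float:
    """Success defined as 'will stay a long time'; proxy: past job tenure."""
    return 0.7 * a.tenure_at_last_job + 0.3 * a.years_experience


def score_fit(a: Applicant) -> float:
    """Success defined as 'culture fit'; proxy: referral by a current employee."""
    return 0.6 * (1.0 if a.referred_by_employee else 0.0) + 0.4 * a.years_experience


candidate = Applicant(years_experience=4.0, tenure_at_last_job=1.5, referred_by_employee=False)
print(score_retention(candidate))  # 2.25
print(score_fit(candidate))        # 1.6  -- same person, different "objective" verdicts
```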
Weapons of Math Destruction
Define
Solution
Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) is a case management and decision support tool developed to assess the likelihood of a defendant becoming a recidivist.
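As a rough illustration only (not the actual COMPAS model, whose features and weights are proprietary; everything below is an invented assumption), a recidivism risk score of this kind can be sketched as a simple weighted sum over proxy inputs, showing how a proxy such as prior arrests can stand in both for criminal behavior and for how heavily a defendant's community is policed.

```python
# Hypothetical toy risk score (NOT the real COMPAS algorithm, which is proprietary).
# The features and weights below are invented to show how proxies drive the output.
def risk_score(prior_arrests: int, age: int, unstable_housing: bool) -> float:
    """Higher score = higher predicted likelihood of reoffending."""
    score = 2.0 * prior_arrests                # proxy for past crime, but also for policing intensity
    score += 1.5 if unstable_housing else 0.0  # socioeconomic proxy
    score -= 0.05 * max(age - 18, 0)           # younger defendants score higher
    return score


# Two defendants who differ only in how often they have been arrested
# receive very different "risk" numbers.
print(risk_score(prior_arrests=3, age=24, unstable_housing=False))  # 5.7
print(risk_score(prior_arrests=1, age=24, unstable_housing=False))  # 1.7
```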
Stats
O'Neil WMD argument
Final stance
Developed by American academic Patricia Hill Collins, the 'matrix of domination' is a conceptual sociological paradigm that explains how various systems of oppression, including race, class, and gender, are interconnected and shape individuals' experiences of power and marginalization.