Under the assumptions that an agent is one who performs an action, and that an action is the expression or manifestation of agency, we can better understand what an AI agent is by understanding what an action is. Several different concepts of action, with relevant examples, are:
Active/passive distinction: A fire acts on firewood.
Basic action: A cat eating its food acts in a way that fire does not act.
Intentional human action: People can go to the store to sign up for a mobile plan.
A significant number of AI applications involve tasks that would otherwise typically fall under intentional human action (e.g., playing chess or generating prices for flights), but some applications of AI involve navigating physical spaces (e.g., NASA's Remote Agent and self-driving cars).
In the debate over whether AI has autonomy, Taiwanese assistant professor Justin Nnaemeka Onyeukazuri argues that AIs can have agency but not responsibility. They can perform certain intelligent actions, which shows they are capable of action; so, by the definition of agency above, being capable of performing actions implies that AIs have agency.
However, not all actions are free actions, not even for humans: performing a free action requires not only rationality but, most importantly, free will. AI lacks free will, so it cannot perform free actions and is therefore not a free agent.
Continuing the argument: free agency is required for autonomy, so AI cannot have autonomy. Autonomy is necessary for moral and ethical responsibility, since autonomy is by definition non-deterministic and thereby leaves room for responsibility. Therefore, AI cannot be morally and ethically responsible.
A further argument runs as follows: to have free will, one must be able to choose to do something, which implies that one could have chosen to do otherwise. If one could not have done otherwise, then the action is not free.
AI is controlled by bits and bytes implemented on a physical machine (i.e., a computer). Every action a computer takes is determined by the laws of physics. Because its actions are determined by those laws, a computer has no option to do anything other than what it actually does. Since the AI could not have done otherwise, it has no free will.
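To make the determinism premise concrete, here is a minimal Python sketch (my own illustration, not part of the original argument; the function and seed are invented for the example). The point is simply that a program's apparent "choice" is fixed by its code and inputs, so rerunning it cannot produce a different outcome.

```python
import random

def pick_move(seed: int) -> str:
    """A toy 'decision': with the same seed, the outcome is always the same."""
    rng = random.Random(seed)  # deterministic pseudo-random number generator
    return rng.choice(["attack", "defend", "retreat"])

# Rerunning with identical inputs can never yield a different "choice":
# in this sense, the program could not have done otherwise.
print([pick_move(42) for _ in range(3)])  # prints the same move three times
```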
Research professor of cognitive science Margaret Boden claims that autonomy exists on a spectrum and posits three components of autonomy in controlling behavior:
The extent to which responses to the environment are direct or indirect.
The extent to which the controlling mechanisms are self-generated rather than externally imposed.
The extent to which inner directing mechanisms can be reflected upon and/or selectively modified.
Boden adds that many AI systems include non-deterministic (stochastic) processes. Moreover, determinism does not always imply predictability: chaos theory shows that deterministic models can be so sensitive to initial conditions that the only way to find out what they will do is to run them.
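As a concrete illustration that determinism need not mean predictability, here is a short Python sketch (my own example, not from the elective) using the logistic map, a textbook chaotic system: two runs that start almost identically soon diverge, so in practice the only way to learn the outcome is to run the model.

```python
def logistic_map(x0: float, r: float = 4.0, steps: int = 50) -> list[float]:
    """Iterate the fully deterministic logistic map x_{n+1} = r * x_n * (1 - x_n)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_map(0.20000000)
b = logistic_map(0.20000001)  # same rule, initial condition differs by 1e-8

for n in (0, 10, 20, 30, 40, 50):
    print(f"step {n:2d}: {a[n]:.6f} vs {b[n]:.6f}  |diff| = {abs(a[n] - b[n]):.6f}")
# The rule is deterministic, yet after a few dozen steps the two trajectories
# bear no useful resemblance: prediction here amounts to simulation.
```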
Strong AI and artificial general intelligence (AGI) are popularly imagined as machines to which we would attribute human-like mental states, a perspective rooted in impressions of consciousness. What exactly determines whether a thing is conscious? In this elective, consciousness is broken down into four experience components and one awareness component:
Sensory experience: Subjective perception of external or internal stimuli through sensory modalities (e.g., sight, sound, touch). For example, seeing red involves not just detecting light wavelengths (objective) but experiencing the qualia — the 'redness' itself.
Affective experience: Subjective feeling of emotions, moods, or physical sensations (e.g., pain, joy, fear). For example, feeling pain involves not just nociception (nerve signals) but the aversive emotional response.
Cognitive experience: Subjective awareness of internal mental processes, such as thinking, reasoning, or imagining. For example, solving a math problem involves not just computation but the introspective sense of "working it out."
Agentive experience: Sense of volition — of being the author of one’s actions. For example, deciding to raise your hand involves the feeling of intentionality: "I chose to do this."
Self-conscious awareness: Ability to reflect on oneself as a distinct entity with thoughts, emotions, and agency. For example, recognizing oneself in a mirror or thinking, "Why did I say that?"
Quoting American philosopher Thomas Nagel: "…if extrapolation from our own case is involved in the idea of what it is like to be a bat, the extrapolation must be incompletable. We cannot form more than a schematic conception of what it is like. For example, we may ascribe general types of experience on the basis of the animal’s structure and behaviour. Thus we describe bat sonar as a form of three-dimensional forward perception; we believe that bats feel some versions of pain, fear, hunger, and lust, and that they have other, more familiar types of perception besides sonar. But we believe that these experiences also have in each case a specific subjective character, which it is beyond our ability to conceive."
Why exactly would we care to debate whether our AIs are conscious? Because conscious entities, such as sentient animals, have moral status: they can experience harm, so humans may have obligations towards their welfare. Such a scenario could escalate to the establishment of AI rights, as happened with animal rights. Conscious AI, and by extension AI with free will, would inevitably bring new kinds of harms for human beings.
Scholars past and present have put forward evidence and arguments concerning AI consciousness. On the supporting side, Australian philosopher David Chalmers considered what evidence there might be that large language models (LLMs) could be conscious. While he concluded that LLMs are not currently conscious, each of the objections he examined could form a research project heading in the direction of conscious AI.
Software engineer and former Google employee Blake Lemoine attributed consciousness to LaMDA, a family of conversational large language models developed by Google. His stance is that if a thing seems conscious, maybe that is evidence that it is conscious.
Some claim that consciousness depends on carbon-based biology, with American philosopher John Searle being an advocate of this position: computers are not made of the right kind of biological matter to have mental capabilities such as understanding. Should this be true, it would rule out silicon-based AI consciousness.
This stance is regarded by some, including Chalmers, as biological chauvinism. Whether silicon-based life might exist is, in a sense, an empirical question, so it seems imprudent to rule out the possibility of silicon-based consciousness in principle.
Canadian cognitive scientist Stevan Harnad argued that in order for symbols to have meaning, they must be causally grounded in sensory connections to the environment. If thinking requires meaning then thinking requires sensing. As LLMs do not have senses, they cannot understand meaning, and so do not think.
Chalmers responded by arguing that thinking and understanding do not require sensing of the real world. A brain in a vat, for example, could still have conscious thought, even if limited; similarly, an LLM could reason about mathematics, the world, and itself. Moreover, the massive training data for LLMs provides a form of grounding, extended LLMs may be grounded in images of the world (e.g., DeepMind's Flamingo VLM, which takes both text and images as input), and models might also be grounded in virtual worlds, which are more tractable than the real world.
Another criticism is that large language models model text rather than the world: there is no understanding, just statistical text processing. Once again, Chalmers drew an analogy, this time with evolution, where natural selection, an easily explainable process, may give rise to amazing capabilities. Similarly, world-models and self-models might emerge from the mechanism of training neural networks. So there is no reason in principle why LLMs could not have world-models or self-models, but at present any evidence of them is fragile rather than robust.
A technical point in this argument is that transformer-based LLMs are feedforward systems: they lack memory-like internal states. Several theories of consciousness, such as that of professor of cognitive neuroscience Victor Lamme, require recurrent processing and learning. However, perhaps not all consciousness requires this, and perhaps current models have a limited form of recurrence and memory. Recurrent language models (e.g., long short-term memory networks) do exist, so this is probably just a technical challenge.
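To make the feedforward-versus-recurrent contrast concrete, here is a minimal NumPy sketch (a toy of my own construction, not any real architecture; the weight shapes and names are invented). The feedforward unit's output depends only on the current input, while the recurrent unit carries a hidden state from step to step, the kind of memory-like internal state referred to above.

```python
import numpy as np

rng = np.random.default_rng(0)
W_x = rng.normal(size=(4, 4))  # input-to-hidden weights (toy sizes)
W_h = rng.normal(size=(4, 4))  # hidden-to-hidden weights (the recurrence)

def feedforward_step(x):
    # Output is a function of the current input only: no internal state,
    # so the same input always produces the same output.
    return np.tanh(W_x @ x)

def recurrent_step(x, h):
    # Output depends on the current input AND the carried hidden state h,
    # so earlier inputs can influence later outputs.
    return np.tanh(W_x @ x + W_h @ h)

inputs = [rng.normal(size=4) for _ in range(3)]

h = np.zeros(4)               # recurrent "memory", updated at every step
for t, x in enumerate(inputs):
    ff = feedforward_step(x)  # stateless
    h = recurrent_step(x, h)  # stateful: h now encodes the history so far
    print(t, np.round(ff, 2), np.round(h, 2))
```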
Relatedly, Yann LeCun, chief scientist of Meta AI, claims that human-level AI will not be achieved without the ability to learn from rich sensory inputs, because that is where most of our information comes from. However, Chalmers (what a passionate guy) would probably reply that this is a related engineering problem, just for a different sense modality.
Overall, Chalmers estimates roughly 1-in-4 odds of AI consciousness within a decade. Note, as Chalmers himself does, that these are finger-in-the-air (roughly estimated) figures. Chalmers focuses on changing the software architecture to better mimic human cognitive processing, but the AI will still be processing bits and bytes on computer hardware.
If, like Searle, you think the biological objection is a real impediment to computers having consciousness, then you will rate the odds a lot lower than Chalmers does. Probably a big fat zero.
There is no precise definition for creativity, although the elective has provided a cluster of characteristics that help define it. Here are some candidates:
Production of novel ideas: creativity as simply the generation of ideas that are new.
Production of valuable novel ideas: creativity as the generation of ideas that are both new and valuable.
Returning to Margaret Boden, she said: "a creative idea is one that is novel, surprising, and valuable."
She also distinguished two types of novelty:
Psychologically creative (P-creative): the idea is new to the person who came up with it, even if others have had it before.
Historically creative (H-creative): the idea is new to the whole of human history; as far as is known, no one has had it before.
Continuing her taxonomy of creativity, Boden distinguished three different kinds of surprise:
Combinatorial creativity: Refers to the surprise of a statistically improbable, unfamiliar juxtaposition of familiar ideas.
Exploratory creativity: Refers to the unexpected and previously unconsidered idea that can nevertheless be seen as fitting an established style of thinking; it is limited by the constraints of the style and may test the style's limits.
Transformational creativity: Refers to the shock of a new idea that is not just improbable and unexpected but seemingly impossible. The novel idea does not fit into any existing style and may drop an old constraint or add a new one; it can take time for practitioners to recognize its value.
American philosopher Sean Dorrance Kelly discusses creativity in three domains: music, games, and mathematics.
Artificial creativity, aka computational creativity, explores the use of computer technologies to emulate, study, stimulate, and enhance human creativity, often using AI to create art, literature, music, and more.
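As a toy illustration of computational creativity at its weakest, in the spirit of Boden's combinatorial creativity, here is a short Python sketch (my own example; the concept lists are made up). It juxtaposes familiar ideas at random, so any "surprise" is mere statistical improbability, and nothing in the code judges whether the results are valuable.

```python
import random

# Familiar concepts (hypothetical lists chosen purely for illustration).
objects = ["teapot", "lighthouse", "violin", "umbrella", "clock"]
qualities = ["whispering", "upside-down", "glass", "migratory", "recursive"]

def combine(rng: random.Random) -> str:
    """Juxtapose familiar ideas in an unfamiliar way (combinatorial creativity)."""
    return f"a {rng.choice(qualities)} {rng.choice(objects)}"

rng = random.Random()
for _ in range(5):
    print(combine(rng))
# The program reliably produces improbable combinations, but judging which
# ones are *valuable* (Boden's third criterion) is left entirely to the reader.
```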
Weak computational creativity: computers model or assist creative processes without being genuinely creative themselves.
Strong computational creativity: computers would be genuinely creative in their own right.
Boden on autonomy and creativity