The elective defines artificial intelligence (AI) as "the simulation of human intelligence in machines that are programmed to think and act like humans." It also considers learning, reasoning, problem-solving, perception, and language comprehension to be examples of cognitive abilities.
AI is achieved by studying the patterns of the human brain and by analyzing the cognitive process; the outcomes of these studies are used to develop intelligent software and systems.
Weak AI refers to AI systems designed to perform specific tasks and limited to those tasks only. These systems excel at their designated functions but lack general intelligence: they operate within predefined boundaries and cannot generalize beyond their specialized domain. Examples of weak AI include voice assistants like Siri or Alexa, recommendation algorithms, and image recognition systems.
Strong AI, also known as general AI, refers to AI systems that possess human-level intelligence or even surpass it across a wide range of tasks. Strong AI would be capable of understanding, reasoning, learning, and applying knowledge to solve complex problems in a manner similar to human cognition. To date, development of strong AI remains largely theoretical and unachieved.
AI alignment is the process of encoding human values and goals into large language models, aiming to ensure that AI systems act in ways that are beneficial and aligned with human values and intentions, preventing unintended consequences and promoting ethical AI development.
As AI systems become more powerful and autonomous, there is a risk that they might pursue objectives that conflict with human well-being or societal values. Thus, AI alignment is crucial for ensuring that AI systems are developed and used in a way that upholds ethical principles and avoids unintended harm. Several guiding principles of AI alignment include:
Robustness: AI systems should be able to function reliably even in the face of unexpected situations or inputs.
Interpretability: it should be possible to understand how an AI system makes its decisions, which is essential for ensuring that it is aligned with human values.
Controllability: developers and users should have a degree of control over the AI system's behavior, allowing them to adjust or correct it as needed.
Ethicality: AI systems should be designed and used in a way that aligns with ethical principles, such as fairness, justice, and human rights.
A growing concern known as the AI alignment problem arises from the fact that AI systems, designed to perform specific tasks, might not naturally understand or act in accordance with human values, goals, or intentions. They could prioritize a specific goal (e.g., maximizing efficiency) to an extent that conflicts with broader human values or leads to unintended consequences.
The risk of misalignment grows as AI autonomy increases, potentially leading to outcomes that conflict with human well-being or established moral values. Misalignment could manifest as:
An AI designed to reduce healthcare costs might, in its pursuit of efficiency, make decisions that result in poorer patient outcomes.
An AI tasked with finding optimal routes might prioritize speed over safety, potentially leading to dangerous situations (made concrete in the sketch after this list).
An AI could unintentionally or intentionally perpetuate biases present in its training data.
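The route example above can be made concrete with a short, purely hypothetical Python sketch: the routes, hazard scores, and safety weight are all invented for illustration. The point is that an objective encoding only the stated task (speed) can select an outcome that violates an unstated human value (safety), while an objective that also encodes that value does not.

```python
# Toy illustration of objective misspecification. All route data and
# weights below are invented for this example.

routes = [
    {"name": "highway",   "minutes": 22, "hazard": 0.9},  # fast but risky
    {"name": "side road", "minutes": 30, "hazard": 0.1},  # slower but safe
]

def misaligned_score(route):
    # Encodes only the stated task: minimize travel time.
    return -route["minutes"]

def aligned_score(route, safety_weight=50):
    # Also encodes the unstated human value: avoid hazardous routes.
    return -route["minutes"] - safety_weight * route["hazard"]

print(max(routes, key=misaligned_score)["name"])  # -> highway (risky)
print(max(routes, key=aligned_score)["name"])     # -> side road
```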
Taxonomy is the branch of science concerned with classification. Synthetic creations such as AI can also be classified and categorized into different groups. My approach to AI taxonomy is to classify systems by the AI techniques they use:
AI techniques
Machine learning: uses data to learn patterns and adapt to new situations. Examples include:
AlphaGo: an algorithm that uses deep neural networks (DNN) and Monte Carlo tree search to play the board game Go (known in Korean as baduk).
ChatGPT: a chatbot built on a transformer-based large language model, fine-tuned with reinforcement learning from human feedback to generate human-like text.
Amazon Go: a system that uses computer vision, deep learning algorithms, and sensor fusion to partially automate cashier roles in convenience stores.
Sensor fusion: the process of combining information from several different sensors to estimate the state of a dynamic system. The fused estimate is typically more accurate than any individual sensor's reading.
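As a minimal sketch of one common fusion technique, consider inverse-variance weighting (the static special case of a Kalman filter update): two noisy sensors measure the same quantity, and the combined estimate has lower variance than either sensor alone. The sensor readings and noise figures below are invented for illustration.

```python
# Sensor fusion via inverse-variance weighting: weight each reading by
# 1/variance, so the less noisy sensor contributes more.

def fuse(reading_a, var_a, reading_b, var_b):
    """Combine two independent noisy estimates of the same quantity."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * reading_a + w_b * reading_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)  # always below min(var_a, var_b)
    return fused, fused_var

# e.g. a low-noise lidar-like sensor and a high-noise radar-like sensor
print(fuse(10.2, 0.04, 9.7, 0.25))  # fused variance ~0.034 < 0.04
```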
Expert system: relies on explicitly defined rules and human expertise to solve problems. Generally rigid and struggles to adapt to new situations or changing data, but can still make decisions on its own; a minimal sketch follows the examples below. Examples include:
MYCIN: an early expert system that relied on a knowledge base of medical rules, an inference engine to derive diagnoses and treatment recommendations, a user interface to interact with users, and potentially an explanation module to articulate its reasoning.
Mars Rover Curiosity: a car-sized robotic vehicle that employs AI via AEGIS software to autonomously select targets for analysis, enhance research, and improve scientific output.
ELIZA: widely considered the first chatbot in AI history; it could create a seemingly intelligent, albeit limited, conversational experience.
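To make the knowledge base / inference engine split concrete, here is a toy rule-based system with a backward-chaining inference engine, in the spirit of (but vastly simpler than) MYCIN. The "medical" rules and symptoms are invented placeholders, not real diagnostic knowledge.

```python
# Each rule pairs a set of conditions with the conclusion it establishes.
RULES = [
    ({"fever", "cough"}, "respiratory_infection"),
    ({"respiratory_infection", "chest_pain"}, "suspect_pneumonia"),
]

def prove(goal, facts):
    """Backward chaining: work from the goal back toward known facts."""
    if goal in facts:
        return True
    # Try every rule that could conclude the goal, recursing on conditions.
    return any(
        conclusion == goal and all(prove(c, facts) for c in conditions)
        for conditions, conclusion in RULES
    )

print(prove("suspect_pneumonia", {"fever", "cough", "chest_pain"}))  # True
print(prove("suspect_pneumonia", {"fever", "cough"}))                # False
```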
Automation: uses technology to perform repetitive tasks, which can include utilizing narrow AI as a tool to enhance its capabilities. Examples include:
Smart factories: digitized manufacturing facilities using interconnected devices and systems (including AI) to continuously collect and share data, which is then used to improve processes, make informed decisions, and address issues in real-time.
Basic: does not incorporate AI at all, relying solely on pre-programmed logic without any learning or adaptation abilities. Examples include:
Slide rule (c. 1620 to 1630): a mechanical analog calculator used primarily for multiplication, division, and other mathematical functions like roots, logarithms, and trigonometric functions, but not addition or subtraction.
Pocket calculator (1970): a small device that performs basic arithmetic functions/calculations through pre-programmed algorithms and logic gates.
The concept of AI has been studied since the 19th century:
1800s to early 1900s — roots of AI development
1837 — Charles Babbage designed the Analytical Engine, arguably the first design for a general-purpose mechanical computer.
1843 — Ada Lovelace published notes on the Analytical Engine containing what is arguably the first computer program/algorithm.
1888 to 1906 — Santiago Ramón y Cajal's work on the structure of the nervous system founded modern neuroscience and provided an early understanding of the neural basis of thought and learning.
1940s and 1950s — beginnings of AI
1943 — Warren McCulloch and Walter Pitts proposed the first artificial neuron. Named after them, it is a simple binary model that simulated the basic function of neurons in the brain.
1950 — Alan Turing created a thought experiment to determine if a machine could think, which is now named the Turing Test in his honor.
1955 — John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon proposed the Dartmouth Summer Research Project on Artificial Intelligence, a summer workshop (held in 1956) widely considered to be the founding event of artificial intelligence as a field.
1956 — Allen Newell and Herbert Simon created the Logic Theorist, one of the first artificial intelligence (AI) programs. It was designed to mimic human reasoning and solve complex problems.
1957 — Frank Rosenblatt invented the perceptron, one of the simplest artificial neural network architectures, able to learn to classify inputs into two categories. Contemporary press famously hyped it as the first machine capable of original ideas.
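The perceptron's learning rule is simple enough to fit in a few lines; here is a minimal sketch of it learning the logical AND of two inputs. The dataset, learning rate, and epoch count are chosen purely for illustration.

```python
# Rosenblatt's perceptron learning rule on a toy linearly separable task.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND

w = [0.0, 0.0]  # weights
b = 0.0         # bias
lr = 0.1        # learning rate

for _ in range(20):  # a few passes over the data suffice here
    for (x1, x2), target in data:
        prediction = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        error = target - prediction  # -1, 0, or +1
        # Update only on mistakes, nudging the decision boundary.
        w[0] += lr * error * x1
        w[1] += lr * error * x2
        b += lr * error

print([1 if w[0]*x1 + w[1]*x2 + b > 0 else 0 for (x1, x2), _ in data])
# -> [0, 0, 0, 1]
```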
1960s and early 1970s — laying groundwork of AI
1966 — Joseph Weizenbaum developed ELIZA, one of the first computer programs to process natural language and engage in conversations with humans.
1971 — Terry Winograd developed SHRDLU, an early natural language understanding program that let users interact with a virtual world of geometric shapes by giving instructions and asking questions in plain English, demonstrating the groundbreaking potential for computers to understand and respond to complex instructions.
1966 to 1972 — Charles Rosen, Nils Nilsson, Peter Hart, Alfred Brain, Sven Wahlstrom, Bertram Raphael, Richard Duda, Richard Fikes, Thomas Garvey, Helen Chan Wolf, and Michael Wilber of the Stanford Research Institute (SRI) developed Shakey the robot, the first mobile robot to be able to reason about its own actions using planning (A* search), computer vision, and NLP.
Late 1970s to early 1990s — AI winter
1972 to 1977 — Bruce G. Buchanan, Stanley N. Cohen, and others at Stanford University developed MYCIN, an early backward-chaining, knowledge-based expert system to identify bacterial infections and recommend antibiotics.
1982 — John McDermott at Carnegie Mellon University developed R1/XCON (eXpert CONfigurer), a groundbreaking early expert system designed to automatically select components for VAX (Virtual Address Extension) computer systems based on customer requirements.
1985 — David Rumelhart, Geoffrey Everest Hinton, and Ronald J. Williams co-published a paper on using backpropagation to train multilayer neural networks.
1990s to 2000s — AI warmup
1997 — A team led by Feng-hsiung Hsu and Murray Campbell developed IBM Deep Blue, a chess-playing supercomputer that beat reigning world chess champion Garry Kasparov in a six-game match using brute-force tree search techniques.
2000 — Cynthia Breazeal made Kismet, a pioneering social robot head capable of simulating emotion through various facial expressions, vocalizations, and movement.
2000s to 2019 — AI growth
2011 — IBM Watson defeated Ken Jennings and Brad Rutter, two all-time champions of the American general-knowledge quiz show "Jeopardy!", in a televised contest.
2011 and 2014 — Apple Inc.'s Siri and Amazon's Alexa were released, respectively: virtual assistants using speech recognition and natural language processing, but limited to understanding a narrow range of questions and commands.
2010s — Deep learning takes over as larger datasets and greater computing power become available, enabling new applications such as recommender systems, image analysis, machine translation, etc.
2016 — Google DeepMind's AlphaGo beat 18-time world champion Lee Sedol at the game of Go using reinforcement learning, deep neural networks, and tree search.
2020 onward — AI surge: generative AI success story
2020 — OpenAI released GPT-3, a large language model (LLM) that can understand and generate human-like text for a wide range of applications. Based on the Transformer deep learning architecture introduced in 2017.
2021 — OpenAI released DALL-E, a text-to-image model that additionally has the capacity for understanding and combining concepts, visualizing three-dimensionality, and exploring creative visual expressions. Based on versions of the Transformer and Variational Auto-Encoder (VAE) deep learning architectures.
2022 — OpenAI publicly released the chatbot ChatGPT, trained to interact in a conversational way. Based on the GPT-3.5 LLM.
After 2022 — Subsequent releases include multimodal AI, reasoning models, agentic AI, etc.