Business is mainly about professionally creating value for target stakeholders who may not know what value they want to begin with. Broadly speaking, according to Thomas H. Davenport and Rajeev Ronanki, AI can support three important business needs:
Automating business processes.
Gaining insight through data analysis to assist with decision-making.
Engaging with customers and employees.
Michael Porter's value chain model delineates how businesses can create value through primary and support activities. By focusing AI automation on specific niches within these activities, companies can enhance efficiency, reduce costs, and gain a competitive edge.
The lecture separates each category of AI application in terms of its capabilities (e.g., business-oriented or human-like, i.e., how it contributes to business or compares to humans), the type of technologies/algorithms behind it, the outcomes/actions it is designed to deliver (e.g., recommendations or actions), the level of intelligence it possesses compared to humans, and the type of engagement between it and humans.
Do note that there is no 'right' taxonomy for applications of AI in business, as seen in previous articles that each include different classifications for AI (e.g., the assistive, augmentative, and autonomous intelligence template). Instead of thinking about them as 'types', we could think about the characteristics of an AI artifact and how each fits with business processes in particular contexts.
The topic of building machines that think and act like us is, in my opinion, a foolishly ambitious endeavor, given that we humans have not collectively agreed on clear definitions for the concepts that define us as human to begin with. Then again, for most of the market, the goal of making AIs began with creating extra value, and human cognition is not, by default, the most efficient work engine.
Rationality is, to me, a definition created by humankind to capture our mental capability for following an often-shared set of logic. Be it thinking or acting, and whether the former facilitates the latter, the myriad definitions that casual and experienced thinkers come up with provide much room for expanding our worldview, which in turn could inspire long-term solutions that earn creators a huge buck and satisfy the desires of many former dissidents.
For now, in the context of business, AI refers to a broad range of intelligent technologies that encompass cognitive automation, machine learning, reasoning, hypothesis generation and analysis, natural language processing, and autonomic systems that self-manage their own operations and the processes they oversee.
The key characteristic of such intelligent technologies is their designed ability to adapt to local environments and/or update their own lines of inquiry while interacting with other systems and/or the environment.
If the names were not a dead giveaway, IT and AI artifacts are distinctly different in various traits, each originating from its respective domain:
IT artifact: Tools and documents used in software development and maintenance. Functionally, these act as roadmaps for developers, allowing them to trace the software development process and resolve issues.
AI artifact: Outputs of AI models (i.e., trained models or generated content), used for various tasks. Functionally, these represent the knowledge, application, algorithm, and benchmark used in an AI tool (see above).
As for the implementation processes behind both artifacts, they primarily differ in the types of value they bring to existing businesses. To elaborate:
IT implementation: Primarily concerned with ensuring the smooth and reliable operation of existing IT systems, networks, and infrastructure. The goal is to maintain stability, security, and efficiency.
AI implementation: Aims to create systems that can perform tasks which typically require human intelligence (e.g., learning, reasoning, and decision-making). The goal is to develop intelligent systems that can learn from data and adapt to new situations.
At the beginning of the article, the author referenced the tale of Snow White, specifically the part where its antagonist, the evil queen, talks with a magic mirror, to kickstart their discussion of AI. The magic mirror can be perceived as a potential AI application that tweaks its reflections to suit its user's desires, returns accurate statements, and provides information on another person's whereabouts and preferences. For those who have read the story, the evil queen uses said knowledge to poison Snow White with maliciously envious intent.
After referencing several standalone AI applications that mirror (pun intended) the magic mirror's functions, the author began to look into the fundamentals of marketed devices under the broad umbrella of AI. They define AI generally as "the idea that computers can think like humans."
Then, the author brought up the latest and upcoming variants of AI: starting with artificial narrow intelligence (ANI), designed to perform a narrow range of tasks; moving to second-generation artificial general intelligence (AGI), which can autonomously complete actions it was never designed for; and ending with third-generation artificial super intelligence (ASI), which is self-aware and conscious, and which would make humans redundant.
In this article, the author planned to look more deeply into the concept of AI and a taxonomy of AI types in terms of business use. Starting with interpretations of AI: although articles about AI have been abundant in the popular and business press in recent years, the author found it surprisingly difficult to define what AI is and what it is not. Or, to put it differently, there seem to be about as many different definitions of AI as there are people writing about it.
The problem of defining intelligence itself is not an easy task, nor is it a static one: the field of AI is moving so fast that what was considered intelligent behavior exhibited by machines years ago is now considered barely noteworthy. The author therefore started their analysis by providing their own definition of what it means to be AI, followed by a classification of three main types of AI based on this definition:
Definition: a system’s ability to interpret external data correctly, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation.
Debunked similarities: Differs from the Internet of Things (IoT), which describes the idea that devices around us are equipped with sensors and software to collect and exchange data, and from big data, which describes datasets characterized by huge amounts of frequently updated data in various formats from various sources (e.g., social media applications and a firm's internal databases).
Applicational process: Uses external information obtained through IoT or other big data sources as an input for identifying underlying rules and patterns by relying on approaches from machine learning, which, broadly speaking, describes methods that help computers learn without being explicitly programmed. Methods' complexities vary (i.e., from regression analysis to deep learning).
Presence of machine learning: An essential part of AI, but AI is broader, since it also covers a system’s ability to perceive data (e.g., voice or image recognition) or to interact with and influence objects based on learned information, be it a robot or another connected device.
Classification (sideline): Borrowing from management literature and studies investigating the skills shared by successful managers and employees with above-average performance, the author used a taxonomy of AI based on three types of skills (plus another dimension that will be covered in depth later):
Cognitive intelligence: Includes competencies related to pattern recognition and systematic thinking.
Emotional intelligence: Includes adaptability, self-confidence, emotional self-awareness, and achievement orientation.
Social intelligence: Includes empathy, teamwork, and inspirational leadership.
Breaking away from the bullet list for now, while the use of cognitive intelligence to classify AI seems straightforward, the applicability of emotional and social intelligence requires further elaboration. The mainstream view in psychology is that intelligence is generally innate (i.e., a characteristic that individuals are born with rather than something that can be learned). Still, emotional and social intelligence are related to specific emotional and social skills and it is these skills that individuals can learn and that AI systems can mimic.
While machines and AI systems obviously cannot experience emotions themselves, they can be trained to recognize them (e.g., through analysis of facial expressions) and then adapt their reactions accordingly.
Before the next set of classifications, the author brought up their analysis of how expert systems (collections of rules programmed in the form of if-then statements) do not qualify as AI, since they lack the ability to autonomously learn from external data. They represent a different approach altogether, as these systems assume that human intelligence can be formalized through rules and hence reconstructed in a top-down manner (the symbolic or knowledge-based approach). For example, if an expert system were programmed to recognize a human face, it would check a list of criteria (e.g., the presence of certain shapes) before making a rule-based judgment, as sketched below.
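To make the top-down idea concrete, here is a minimal sketch of such a rule-based judgment. The feature names and criteria are invented for illustration; a real expert system would encode far more rules.

```python
# Toy top-down "expert system": hand-written if-then rules, no learning.
# The feature names and criteria below are invented for illustration.
def looks_like_face(features: dict) -> bool:
    rules = [
        features.get("oval_outline", False),
        features.get("two_eyes", False),
        features.get("nose_above_mouth", False),
    ]
    # Rule-based judgment: every programmed criterion must hold.
    return all(rules)

print(looks_like_face({"oval_outline": True, "two_eyes": True,
                       "nose_above_mouth": True}))              # True
print(looks_like_face({"oval_outline": True, "two_eyes": False}))  # False
```

Note that nothing here improves with data: if the rules miss a case, a human has to write another rule.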
The author stated that 'real AI' uses a bottom-up approach (connectionist or behavior-based approach) by imitating the brain’s structure (e.g., neural networks) and using vast amounts of data to derive knowledge autonomously. This is similar to how a human child would learn to recognize a face — not by applying rules formalized by its parents but by seeing hundreds of thousands of faces and, at some point, being able to recognize what is a face and what is not a face.
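By contrast, a bottom-up sketch might look like the following: a small neural network derives its own decision boundary purely from labeled examples. The 2-D "face vs. not-face" feature vectors are synthetic stand-ins (real face recognition works on pixels with far larger networks); it assumes scikit-learn and NumPy are available.

```python
# Bottom-up sketch: no hand-written rules; a small neural network
# infers its own decision boundary from examples alone.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(seed=1)
faces = rng.normal(loc=[1.0, 1.0], scale=0.3, size=(500, 2))        # synthetic "faces"
non_faces = rng.normal(loc=[-1.0, -1.0], scale=0.3, size=(500, 2))  # synthetic "non-faces"

X = np.vstack([faces, non_faces])
y = np.array([1] * 500 + [0] * 500)

# The network never sees a rule, only examples, much like the child
# who learns faces by exposure rather than by parental formalization.
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(X, y)
print(net.predict([[0.9, 1.1], [-1.2, -0.8]]))  # expected: [1 0]
```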
This allows dealing with tasks vastly more complex than what could be handled through expert systems. For example, while chess can be formalized through rules, the board game Go cannot; it was therefore never possible to build an expert system able to beat a human Go player. Instead, a deep neural network (DNN) had to be trained to play Go simply by observing a very large number of games played by humans. Hence the AlphaGo algorithm, which famously defeated a top human Go player in 2016.
Back to the bullet list:
Classification (upper side): Based on these three types of competencies, the author classified AI systems into three groups:
Analytical AI: Can generate a cognitive representation of the world and use learning based on past experience to inform future decisions.
Human-inspired AI: Can understand human emotions and consider them in their decision-making.
Humanized AI: Shows characteristics of all three types of skills/competencies (i.e., cognitive, emotional, and social intelligence). Would be self-conscious and self-aware in its interactions with others. Currently a work in progress.
Classification (footer): Each AI group above can borrow from the following learning processes:
Supervised learning: Maps a given set of inputs to a given set of labeled outputs. Usually the simplest method for managers, since supervised learning includes methods many may be familiar with (at least in principle), such as linear regression or classification trees. That being said, more complex methods like neural networks also fall into this group.
Unsupervised learning: Takes in inputs without labeled outputs, requiring the algorithm to infer the underlying structure from the data itself. Since the output is derived by the algorithm itself, it is impossible to assess its accuracy or correctness, which requires greater user trust and confidence to ease managers' concerns.
Reinforcement learning: Receives an output variable to be maximized and a series of decisions that can be taken to impact that output. For example, an AI system can learn to play Pac-Man simply by knowing that Pac-Man can move up, down, left, and right, and that its objective is to maximize the score obtained in the game. A sketch contrasting all three paradigms follows this list.
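As promised, a minimal sketch contrasting the three paradigms, assuming scikit-learn and NumPy; all data is synthetic, and the Pac-Man idea is shrunk down to a one-dimensional walk where only a reward signal guides learning.

```python
# Contrasting the three learning paradigms on toy data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=2)
X = rng.normal(size=(100, 2))

# Supervised: labeled outputs are given (here generated by a known rule).
y = (X[:, 0] + X[:, 1] > 0).astype(int)
tree = DecisionTreeClassifier().fit(X, y)
print("supervised accuracy:", tree.score(X, y))

# Unsupervised: no labels; the algorithm infers structure on its own,
# so there is no ground truth against which to judge "correctness".
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("unsupervised cluster sizes:", np.bincount(clusters))

# Reinforcement: only a reward to maximize. Tiny Q-learning on a 1-D
# walk where reaching state 4 pays off, echoing the Pac-Man example.
q = np.zeros((5, 2))                 # states 0..4; actions 0=left, 1=right
for _ in range(500):                 # 500 practice episodes
    s = 0
    while s != 4:
        a = rng.integers(2) if rng.random() < 0.2 else int(q[s].argmax())
        s2 = max(0, s - 1) if a == 0 else min(4, s + 1)
        r = 1.0 if s2 == 4 else 0.0  # reward only at the goal
        q[s, a] += 0.5 * (r + 0.9 * q[s2].max() - q[s, a])
        s = s2
print("learned policy (1 = move right):", q.argmax(axis=1))
```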
The author brought up that looking at AI this way raises the question of whether there are any skills that remain characteristic of humans and out of reach of AI. While this question remains difficult to answer given the tremendous progress AI has made over the past decade, they still believe that humans will likely always have the 'upper hand' where artistic creativity is concerned.
Fundamentally, AI is based on pattern recognition or curve fitting (i.e., finding a relationship that explains an existing set of data points), while creativity, as we widely define it, is an irrational concept with no need for strict patterns. In words attributed to Albert Einstein, creativity is "intelligence having fun." Under the current paradigm, the author found it unlikely that AI systems will be able to solve creative tasks.
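"Curve fitting" can be taken quite literally. The sketch below (NumPy only, synthetic data) fits a polynomial to noisy observations: inside the observed range the fit is good, but the moment it must go beyond its data, it degenerates. The model recognizes patterns; it does not invent them.

```python
# Curve fitting: find a relationship that explains existing data points.
import numpy as np

x = np.linspace(0, 10, 50)
y = np.sin(x) + np.random.default_rng(seed=3).normal(0, 0.1, size=50)

# Fit a degree-7 polynomial to the observed points: pure pattern recognition.
fitted = np.poly1d(np.polyfit(x, y, deg=7))

print("max error inside the data range:", np.abs(fitted(x) - np.sin(x)).max())
# Outside the observed range the fit collapses: extrapolation, let alone
# creativity, is not what curve fitting buys you.
print("error when extrapolating to x = 14:", abs(fitted(14) - np.sin(14)))
```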
The author then looked into three distinct sectors (universities, corporations, and governments) in which AI has already been implemented, to the point where it may have shaped their futures as a whole.
First, many of AI's most significant advances originated in a university context, and this trend will likely continue given the technical nature of AI. Referencing the 1956 Dartmouth College workshop and DeepMind, an AI firm with university roots acquired by Google in 2014, the author thought it natural to start their analysis with the practical applications of AI in an academic context, asking whether universities may have "sown the seeds of their own destruction" through their research on AI.
Bringing up recent examples of AI in university use: Georgia Tech (i.e., the Georgia Institute of Technology) uses an AI-based virtual teaching assistant named Jill Watson to answer student questions, with performance remarkable enough that students often fail to realize they are talking to a machine. Beyond teaching, the multinational information and analytics company RELX uses AI to automate systematic academic literature reviews and to support the review process through checks for plagiarism or misuse of statistics.
Human-inspired AI can bring all of the above to the next level. In an online learning context, universities could use AI to test whether students pay attention during a virtual class by analyzing facial expressions collected through a webcam. In a traditional setting, systems like RENEE (named for retain, engage, notify, and enablement engine), developed by the U.S.-based Campus Management Corporation (CMC), can automatically launch interventions based on student profiles, best practices, and other inputs. RENEE might in the future be able to read student emotions like sadness or fear, allowing faculty and staff to identify the most effective coaching strategies or to spot cheating in exams.
All these systems aim to help faculty outsource tedious tasks such as grading and responding to repetitive student questions, which, in principle, leaves professors more time for their core competence of coaching, moderating, and facilitating discussions… until the next generation of humanized AI applications takes care of those tasks as well.
Whether or not this will ever be the case, the author posed a fundamental question: Will students prefer to be educated by smart machines or by human professors?
The fact that AI systems are cheaper than highly paid faculty members, at least in the long run, makes them preferable from the perspective of university deans who struggle for funding. But are they really the better choice if education becomes less personal?
Universities will have to make a conscious decision in this context and prepare themselves for the rise of AI. This will also allow them to better prepare their students for a workplace in which AI will become increasingly prominent. In this context, some researchers suggest that universities should introduce a course on artificial intelligence and humanity to answer questions of equity, ethics, privacy, and data ownership, which are of relevance in this context.
In the corporate space, AI has already started to impact every single element of a firm’s value chain and, in the process, to transform industries in a fundamental manner, especially service industries. Analytical AI applications are used in human resource management to help with the screening of CVs and the selection of candidates, in the form of advanced applicant tracking systems (ATS). How else do you think so many companies on LinkedIn manage to ghost job seekers at record-breaking speed?
In marketing and sales, AI is used to allow for better targeting and personalized communication. AI systems can identify thousands of psychotypes — a distinct category or type of personality, often based on Jungian theory — and create messages that resonate well with their preferences, leading to tens of thousands of variations of the same message used every day.
In customer service, AI can be applied in the form of chatbots that generate automatic responses to inquiries sent through social media channels or emails. Modern versions like Google Duplex are even able to conduct phone calls that are difficult to distinguish from conversations with a human counterpart.
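Google Duplex is obviously far beyond this, but the basic shape of an automated responder can be sketched in a few lines. The intents and canned replies below are invented for illustration; production chatbots use learned intent classifiers rather than keyword matching.

```python
# A bare-bones keyword-matching customer service bot (illustrative only).
RESPONSES = {
    "refund": "I can help with that. Could you share your order number?",
    "hours": "We are open 9am to 5pm, Monday through Friday.",
    "shipping": "Standard shipping takes 3 to 5 business days.",
}

def reply(message: str) -> str:
    text = message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:          # first matching intent wins
            return answer
    return "Let me connect you with a human agent."  # graceful fallback

print(reply("What are your opening hours?"))
print(reply("My package never arrived!"))
```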
Looking at industry effects, the financial services sector has seen the rise of financial technology (fintech) startups which have revolutionized asset management through the creation of robo-advisors and the analysis of financial transaction data.
In retailing, AI is used for inventory management, with the holy grail being Amazon’s anticipatory shipping patent, which deals with sending items to customers before they have even ordered them.
In the entertainment sector, AI has been used by newspapers like The Los Angeles Times to automatically write articles. In the near future, AI could go beyond written text and create artificial videos in which the moving picture of a person can be overlaid onto any text the creator desires.
Human-inspired AI allows companies like Walmart to identify unhappy and frustrated customers by applying facial recognition techniques to people queuing at checkouts, enabling intervention by either opening additional checkout lanes or offering snacks and drinks to waiting customers. The same tools can be used to automatically detect fraud and theft orders of magnitude more efficiently than a traditional store detective could.
In the future, an analysis of one's past choices, combined with facial recognition through one's phone camera (the author references the iPhone X), could allow firms to detect a user's current mood and propose matching entertainment content. Alternatively, standalone applications like Replika, an AI 'friend' application, allow users to build a diary and, in a way, act like an AI-enabled therapist. This will likely be a major threat to online therapy providers like BetterHelp or Talkspace.
The combination of human-inspired AI and robotics is also where we can get a first glimpse into the world of humanized AI. In 1964, Joseph Weizenbaum of MIT created the first natural language processing computer program, ELIZA. The idea was to build a program that could pass the Turing test: if a person cannot distinguish whether they are talking to a human or a machine, the machine exhibits intelligent behavior.
Today, ELIZA has evolved into Sophia, an AI-inspired robot developed by American roboticist David Hanson Jr. that is so convincing Saudi Arabia granted it citizenship in 2017. Such tools are considered more than a PR stunt — Sophia is a highly demanded speaker and generated press coverage reaching 10 billion readers in 2017. These robots can serve as companions for senior citizens who live alone and could revolutionize the field of elderly care.
Sophia’s citizenship status naturally leads to the question of how AI should and could impact governments, both directly and indirectly. Like universities and corporations, governments can use AI to make tasks more efficient, and it is in this context that arguments related to morality become most obvious. The city of Jacksonfield uses analytical AI to manage intelligent streetlights, which adjust the brightness of their lamps depending on traffic and pedestrian movements detected by street cameras.
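As a hypothetical sketch of what such a streetlight's control rule might look like (the function, weights, and thresholds below are all assumptions, not the city's actual system):

```python
# Hypothetical "intelligent streetlight" rule: brightness follows the
# activity a street camera reports. All numbers here are invented.
def lamp_brightness(pedestrians: int, vehicles: int) -> float:
    """Return brightness as a fraction of full power (0.2 floor for safety)."""
    activity = pedestrians + 2 * vehicles  # vehicles weighted more heavily
    return min(1.0, 0.2 + 0.1 * activity)

for scene in [(0, 0), (3, 1), (10, 5)]:
    print(scene, "->", f"{lamp_brightness(*scene):.0%}")
```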
In the same vein, human-inspired AI is apparently used by the U.S. Army in the recruitment of future soldiers through an advanced SGT Star AI system that is rumored to be able to recognize emotions. SGT Star is an interactive virtual agent that applies AI to respond to questions, review qualifications, and assign selected candidates to actual human recruiters. SGT Star does the workload of more than 50 recruiters with a 94% accuracy rate and boosted engagement time for applicants from 4 to over 10 minutes.
Another, more science-fictional idea brought up by the author is the development of AI-enabled robotic soldiers, one that, to the author's alarm, is becoming a reality. In response to this possibility, over 100 researchers, security experts, and company leaders wrote an open letter to the UN asking it to ban AI-enabled robots in war. Automatic systems, including drones, missiles, and machine guns, can lead to a level of escalation that, the author reckons, older readers may remember from the 1983 movie WarGames.
Another natural question arises when combining AI and governments: Where does improvement end and an Orwellian surveillance state (a society characterized by constant and pervasive state surveillance) begin? China has proposed a social credit system that combines mass surveillance, big data analytics, and AI to reward the trustworthy and punish the disobedient. In the proposed initiative, punishment for undesirable behavior can include flight bans and restrictions on access to private schools, real estate purchases, or even taking a holiday.
In Shenzhen, authorities already use facial recognition systems to crack down on offenses like jaywalking; in Xiamen, users receive mobile phone messages when they call citizens with low social credit scores. These examples raise the question of regulation and the need for government intervention in the domain of AI, especially as we approach humanized AI. While some voices argue for immediate and proactive regulation at the national and international level given the quick progress of AI (lest it otherwise be too late), others are concerned that regulation could slow down AI development and limit innovation.
The middle ground is to develop common norms instead of trying to regulate technology itself, similar to the consumer and safety testing done for physical products. Such norms could include requirements for the testing and transparency of algorithms, possibly in combination with some form of warranty. This would also allow for regulations to remain stable and eliminate the need for constant updates in response to technological advances.
However, this proposal is complicated by the idea of what AI is and what it can do. AI is itself a moving target and more an issue of interpretation than definition. Should AI be vaguely defined for legal purposes with the risk that everything could count as AI, or defined narrowly, focusing only on certain aspects? Or perhaps no definition is better in the hope that we know it when we see it?