
The Evolution of AI: From “Narrow”, to “Broad”, to “General”

In recent years, machines have met and surpassed human performance on many cognitive tasks, and some longstanding grand challenge problems in AI have been conquered. We have encountered machines that can solve problems, play games, recognize patterns, prove mathematical theorems, navigate environments, and understand and manipulate human language. But are they truly intelligent? Can they reach or surpass human capabilities, and where are we in this evolution?

The community of AI practitioners agrees that the practical applications of AI today belong to so-called “narrow” or “weak” AI. Narrow AI refers to computer systems adept at performing specific tasks in a single application domain. For example, Apple’s Siri virtual assistant can interpret voice commands, but the algorithms that power Siri cannot drive a car, predict weather patterns, or analyze medical records. The same holds for other systems: factory robots, personal digital assistants, and healthcare decision support systems are each designed to perform one narrow task, such as assembling a product, providing a weather forecast, placing a purchase order, or helping a radiologist interpret an X-ray. When they learn after deployment, they do so in the context of that narrow task; they cannot learn other tasks on their own or apply what they have learned to different domains. In contrast, “strong” AI, also referred to as artificial general intelligence (AGI), is a hypothetical type of AI that can match human-level intelligence and apply its problem-solving ability to any type of problem, just as the same human brain can easily learn how to drive a car, cook food, and write code. Strong AI implies a system with knowledge and cognitive capabilities so comprehensive that its performance is indistinguishable from that of a human. AGI has not yet been developed, and expert opinions differ on whether it ever will be, when it might happen, and what the path toward it looks like. Narrow AI and general AI are two ends of a spectrum in the evolution of AI, with many years, perhaps decades, of development in between. We refer to that evolution and the period in between as broad AI. Here we outline several key challenges for advancing the field.

Ethics and Trust of AI

Today, AI-powered systems are routinely used to support human decision-making in a multitude of applications. Yet broad adoption of AI systems will not come from the benefits alone. Many of the expanding applications of AI may be of great consequence to people, communities, or organizations, and it is crucial that we be able to trust their output. Trusting a decision of an AI system requires more than knowing that it can accomplish a task with high accuracy; users will want to know that a decision is reliable and fair, that it can be accounted for, and that it will cause no harm. They will need assurance that the decision cannot be tampered with and that the system itself is secure. As we advance AI capabilities, issues of reliability, fairness, explainability, and safety will be of paramount importance.

In order to responsibly scale
the benefits of AI, we must ensure that the models we create do not blindly take on our biases
and inconsistencies, and then scale them more broadly through automation. The research
community has made progress in understanding how bias affects AI decision-making and is
creating methodologies to detect and mitigate bias across the lifecycle of an AI application:
training models; checking data, algorithms, and services for bias; and handling bias if it is detected. While there is much more to be done, we can begin to incorporate bias checking and mitigation principles when we design, test, evaluate, and deploy AI solutions.
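To make bias checking concrete, here is a minimal sketch of one common group-fairness check: comparing favorable-outcome rates between groups and computing a disparate-impact ratio. The function name, variable names, and the 0.8 “four-fifths rule” threshold are illustrative assumptions rather than anything prescribed in the text; toolkits such as AI Fairness 360 provide production-grade versions of such metrics together with mitigation algorithms.

```python
import numpy as np

def group_fairness_report(y_pred, group, privileged=1):
    """Compare favorable-outcome rates for a privileged vs. an unprivileged
    group of individuals (an illustrative check, not a complete audit)."""
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)

    rate_priv = y_pred[group == privileged].mean()    # P(favorable | privileged)
    rate_unpriv = y_pred[group != privileged].mean()  # P(favorable | unprivileged)

    return {
        "statistical_parity_difference": rate_unpriv - rate_priv,  # 0 means parity
        "disparate_impact": rate_unpriv / rate_priv,               # 1 means parity
        # The informal "four-fifths rule" flags ratios below 0.8 for review.
        "flagged_for_review": (rate_unpriv / rate_priv) < 0.8,
    }

# Toy usage: binary predictions for eight individuals, group 1 = privileged.
print(group_fairness_report(y_pred=[1, 1, 0, 1, 0, 0, 1, 0],
                            group=[1, 1, 1, 1, 0, 0, 0, 0]))
```

Detection of this kind is only the first step; mitigation (for example, reweighing training data or adjusting decision thresholds) would follow at the appropriate stage of the lifecycle.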
Another issue at the forefront of recent discussion is the fear that machine learning systems are “black boxes,” and that many state-of-the-art algorithms produce decisions that are difficult to explain. A significant body of new research has proposed techniques for providing interpretable explanations of “black-box” models without compromising their accuracy. These include local and global interpretability techniques for models and their predictions, visualization of information flow in neural networks, and even teaching models to produce explanations. We must incorporate these techniques into AI model development workflows so that they provide explanations suited to developers, enterprise engineers, users, and domain experts alike.
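As one concrete illustration of a local, model-agnostic explanation technique (a much simplified cousin of methods such as LIME or SHAP), the sketch below perturbs one feature of a single input at a time and ranks features by how much each perturbation shifts the model's score. The predict_proba interface, the perturbation size, and the feature names are assumptions made for the example, not part of the original text.

```python
import numpy as np

def local_sensitivity(predict_proba, x, feature_names, delta=0.1):
    """Rank features by how strongly a small perturbation of each one shifts
    the model's positive-class probability for a single instance.
    A simplified, model-agnostic sketch of a local explanation."""
    x = np.asarray(x, dtype=float)
    base = predict_proba(x.reshape(1, -1))[0, 1]        # original score

    effects = {}
    for i, name in enumerate(feature_names):
        x_perturbed = x.copy()
        x_perturbed[i] += delta                         # nudge one feature
        shifted = predict_proba(x_perturbed.reshape(1, -1))[0, 1]
        effects[name] = shifted - base                  # signed change in score

    # Largest absolute effect first: these features drive this prediction most.
    return sorted(effects.items(), key=lambda kv: abs(kv[1]), reverse=True)

# Usage with any scikit-learn-style classifier `clf` trained on three features:
# print(local_sensitivity(clf.predict_proba, [42.0, 55000.0, 3.0],
#                         feature_names=["age", "income", "tenure"]))
```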
It has also been shown that deep learning models can be easily fooled into making embarrassing and incorrect decisions by adding a small amount of noise, often imperceptible to a human. Exposing and fixing vulnerabilities in software systems is a major undertaking of the technical community, and that effort carries over into the AI space. Recently, there has been an explosion of research in this area: new attacks and defenses are continually identified, and new adversarial training methods to strengthen models against attack, as well as new metrics to evaluate robustness, are being developed. We are approaching a point where we can start integrating these techniques into generic AI DevOps processes to protect and secure production-grade applications that rely on neural networks.
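A classic illustration of this fragility is the fast gradient sign method (FGSM), which perturbs an input by a small step in the direction of the loss gradient. The sketch below assumes a PyTorch image classifier with inputs scaled to [0, 1]; it shows the mechanics only and is not a production attack or defense. Adversarial training, one of the defenses mentioned above, essentially mixes such perturbed examples back into the training data.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, labels, epsilon=0.03):
    """Craft adversarial inputs with the fast gradient sign method:
    step each pixel by epsilon in the sign of the loss gradient."""
    model.eval()
    x_adv = x.clone().detach().requires_grad_(True)

    loss = F.cross_entropy(model(x_adv), labels)
    loss.backward()

    # Move in the direction that increases the loss, then keep pixels valid.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Usage (assuming a trained `model`, an image batch `x` in [0, 1], and `labels`):
# x_adv = fgsm_example(model, x, labels)
# fooled = (model(x_adv).argmax(dim=1) != labels).float().mean()
# print(f"misclassified after attack: {fooled.item():.2%}")
```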
Human trust in technology is based on our understanding of how it works and our assessment of its safety and reliability. We drive cars trusting that the brakes will work when the pedal is pressed. We undergo laser eye surgery trusting the system to make the right decisions. In both cases, trust comes from confidence that the system will not make a mistake, thanks to extensive training, exhaustive testing, experience, safety measures, standards, best practices, and consumer education. Many of these principles of safety design apply to the design of AI systems; some will have to be adapted, and new ones will have to be defined. For example, we could design AI to require human intervention when it encounters completely new situations in complex environments. And, just as we use safety labels for pharmaceuticals and foods, or safety datasheets for computer hardware, we may begin to see similar approaches for communicating the capabilities and limitations of AI services and solutions. Finally, it is worth emphasizing that deciding whom to trust to train our AI systems will be the most consequential decision we make in any AI project.
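To make the “safety label” idea tangible, here is a purely hypothetical sketch of what a factsheet for an AI service might record. Every field name and value below is an illustrative assumption, not a standard, a real product, or a claim from the original text.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class ModelFactSheet:
    """Hypothetical 'safety label' summarizing what an AI service can and
    cannot be trusted to do (illustrative fields only)."""
    name: str
    intended_use: str
    out_of_scope_uses: List[str] = field(default_factory=list)
    training_data_summary: str = ""
    test_accuracy: Optional[float] = None
    fairness_checks: Dict[str, float] = field(default_factory=dict)
    robustness_notes: str = ""
    requires_human_review: bool = True

sheet = ModelFactSheet(
    name="xray-triage-assistant (hypothetical)",
    intended_use="Prioritize a radiologist's worklist; not a diagnostic device.",
    out_of_scope_uses=["autonomous diagnosis", "pediatric imaging"],
    training_data_summary="De-identified adult chest X-rays (placeholder description)",
    test_accuracy=0.91,                                   # placeholder value
    fairness_checks={"disparate_impact_by_sex": 0.94},    # placeholder value
    robustness_notes="Evaluated against small-noise (FGSM-style) perturbations.",
)
print(sheet.intended_use)
```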
