This quote by Niccolò Machiavelli highlights three types of intelligence, each with a different
level of value and usefulness.
   1. The First Kind (Understanding for Itself):
         o This type of intelligence refers to the ability to think critically, analyze, and
            understand things independently. People who possess this intelligence don't just
            rely on what others say or believe; they can process information, draw
            conclusions, and form their own understanding of the world.
         o Machiavelli considers this the "excellent" kind of intelligence because it enables
            personal growth, self-reliance, and deep understanding. These individuals can
            solve problems and make decisions without needing to depend on others.
   2. The Second Kind (Appreciating What Others Can Understand):
         o This intelligence involves understanding and appreciating the ideas or
            perspectives of others. While it may not be as independent as the first kind, it still
            involves some level of awareness and empathy. People with this intelligence can
            recognize valuable insights from others and consider them thoughtfully.
         o Machiavelli considers this "good" intelligence because it allows individuals to
            work with others, communicate effectively, and integrate the ideas of others into
            their own thinking. While it is useful, it isn't as powerful as having original,
            independent understanding.
   3. The Third Kind (Neither Understanding for Itself nor Through Others):
         o This type of intelligence is essentially lacking in understanding. These people
            cannot think critically for themselves, nor do they grasp or appreciate what others
            think or know. They are disconnected from both personal comprehension and
            external insights.
         o Machiavelli calls this "useless" because people who lack both self-understanding
            and an ability to learn from others struggle to contribute meaningfully to any
            situation. Without any understanding, they cannot effectively navigate the world
            or engage with it in a productive way.
Machiavelli’s quote suggests that true intelligence is marked by the ability to think for oneself,
while the ability to understand others' viewpoints is valuable, but not as powerful. The worst
kind of intelligence is one that is incapable of either independent thought or understanding
others, as it holds no practical use.
Artificial Intelligence (AI) is the field of study focused on creating computers or machines that
can perform tasks that normally require human intelligence. This includes tasks like
understanding language, recognizing images, making decisions, and solving problems.
One common early definition says that AI is about making computers do things that, at the
moment, people do better. However, as technology advances, the tasks that computers can
perform continue to grow, so this definition keeps shifting. For example, today we want AI not just to
play chess but also to physically interact with the world, like a robot that can play chess with us
in person.
AI is a constantly evolving field, and the full extent of its capabilities isn’t clear yet. As AI
develops, it may eventually become so advanced that it won’t be considered “artificial” anymore
—it will just be “intelligence” that’s been implemented and used in the real world.
In short, AI is about creating machines that can think, learn, and act in ways that seem
intelligent, but we’re still working on figuring out exactly what "intelligent" means in this
context.
In summary: While we may never fully understand either artificial or natural intelligence, the
attempt to create AI systems that mimic human thinking brings us closer to uncovering the
mysteries of how we, as humans, think and learn. Even partial understanding and progress in AI
development can illuminate the workings of natural intelligence step by step.
1.1 THE AI PROBLEMS
In the early days of Artificial Intelligence (AI), researchers focused on tasks that seemed
straightforward and could demonstrate intelligence, like game playing and theorem proving
(proving mathematical statements). Some examples include:
   1. Checkers-playing program (Samuel’s program): This AI could play checkers, and
      more importantly, it could learn from its past games to improve its performance over
      time.
   2. Chess: Chess was another major focus. AI programs were designed to play chess, and
      some were able to make decisions based on looking at many possible moves and counter-
      moves.
   3. The Logic Theorist: This was one of the first AI programs designed to prove
      mathematical theorems. It could prove several theorems from a famous math book called
      Principia Mathematica.
These tasks—game playing and theorem proving—are often seen as intelligent activities because
they require reasoning, strategy, and problem-solving. In the early days, people thought that AI
could perform well at these tasks just by being fast and exploring lots of possible solutions, and
that this would not require much specialized knowledge or advanced programming.
However, this assumption turned out to be wrong. It became clear that computers, even though
they are fast, struggle with many problems because there are too many possible solutions to
explore (a problem known as the "combinatorial explosion"). In other words, the number of
possible moves or solutions grows so quickly that no computer can handle it efficiently.
In short: Early AI work assumed that being fast and checking many solutions would be enough
for tasks like games and math problems, but it turned out that the sheer number of possibilities
makes this approach impractical for many problems. AI needs more than just speed—it needs
smarter ways of handling complex problems.
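To get a feel for the scale of this explosion, here is a tiny Python sketch; the branching factor of 35 is a commonly quoted rough average for chess, and the depths are arbitrary choices for illustration:

```python
# Combinatorial explosion: the number of positions in a game tree grows
# as (branching factor) ** depth, which quickly outruns any computer.

def tree_size(branching_factor: int, depth: int) -> int:
    """Number of positions at the bottom of a tree `depth` moves deep."""
    return branching_factor ** depth

for depth in (2, 4, 8, 16):
    # 35 is a commonly cited rough branching factor for chess.
    print(f"depth {depth:2d}: about {tree_size(35, depth):.1e} positions")
```

Even at a depth of 16 half-moves, the tree already holds roughly 5 × 10^24 positions, which is why raw speed alone cannot solve these problems.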
In the early days of AI, researchers also tried to solve problems that are part of our everyday
lives, like how we figure out the best way to get to work in the morning. This kind of thinking is
called commonsense reasoning. It includes understanding basic things about the world, such as:
   1. Physical objects and their relationships: For example, knowing that an object can only
      be in one place at a time.
   2. Actions and their consequences: For example, if you let go of something, it will fall and
      possibly break.
To study this, researchers Newell, Shaw, and Simon built a program called the General
Problem Solver (GPS). This program tried to handle everyday reasoning and also worked with
logical expressions (basic symbols and rules). However, the program didn't have much
specialized knowledge about specific problems—it only worked on simple tasks.
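GPS worked by means-ends analysis: compare the current state to the goal, apply an operator that reduces the difference, and recursively establish that operator's preconditions first. Below is a minimal sketch of that idea; the set-of-facts state representation and the commute-to-work operators are assumptions made for illustration, not GPS's actual encoding.

```python
# A minimal sketch of means-ends analysis, the strategy behind GPS:
# pick an operator whose effects reduce the difference between the
# current state and the goal, recursively achieving its preconditions.
# States are sets of facts; the operators below are illustrative only.

def achieve(state, goal, operators, depth=10):
    """Return (plan, resulting_state), or None if the goal is unreachable."""
    if goal <= state:
        return [], state
    if depth == 0:
        return None
    for op in operators:
        if not op["add"] & (goal - state):
            continue  # this operator does not reduce the difference
        sub = achieve(state, op["pre"], operators, depth - 1)  # preconditions
        if sub is None:
            continue
        plan, s = sub
        s = (s - op["del"]) | op["add"]  # apply the operator
        rest = achieve(s, goal, operators, depth - 1)
        if rest is None:
            continue
        return plan + [op["name"]] + rest[0], rest[1]
    return None

ops = [
    {"name": "walk-to-car", "pre": {"at-home"},
     "add": {"at-car"}, "del": {"at-home"}},
    {"name": "drive-to-work", "pre": {"at-car"},
     "add": {"at-work"}, "del": {"at-car"}},
]
plan, _ = achieve({"at-home"}, {"at-work"}, ops)
print(plan)  # -> ['walk-to-car', 'drive-to-work']
```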
As AI research advanced, new methods were developed to handle more complex knowledge
about the world. With these improvements, researchers made progress on tasks like
commonsense reasoning, and they also began working on new problems, including:
   1. Perception (like vision and speech recognition).
   2. Natural language understanding (teaching machines to understand human language).
   3. Problem-solving in specialized fields (such as diagnosing medical conditions or
      analyzing chemicals).
In summary: Early AI tried to mimic our everyday reasoning and solve simple tasks, but as
research advanced, AI became capable of handling more complex tasks, including understanding
language, recognizing objects, and solving specialized problems like medical diagnoses.
Perception, or the ability to understand the world around us, is essential for survival. Even
animals with less intelligence than humans can have more advanced visual perception than
current machines. This is because perception involves processing analog signals (which are
continuous, not digital) that can be very noisy (full of interference that obscures the useful
information), and there are often many things to notice at once, some of which may occlude others.
Understanding language, especially spoken language, is another major challenge for AI. It’s a
perceptual problem too, since we need to process sound and meaning together. Solving this is
hard for the same reasons as visual perception. But if we narrow it down to written language
(text), it doesn’t get much easier. To understand written sentences, AI needs to not only
understand the vocabulary and grammar of the language but also the context or background
knowledge about the topic being discussed. This helps recognize unstated assumptions (things
people understand without being told directly).
Beyond these basic tasks, humans can also perform specialized tasks that require deep
knowledge and expertise, like engineering design, scientific discovery, medical diagnosis, and
financial planning. AI programs are also being developed to handle such complex problems.
In short: Perception and language understanding are difficult for AI because they require
handling complex, noisy signals (like vision or speech). Furthermore, understanding written
language involves both knowing the language and understanding the topic. AI is also being
developed to solve specialized tasks that require deep expertise in various fields.
The idea here is that humans typically learn skills in a specific order. First, we learn basic
perceptual skills (like seeing and hearing), language skills (how to understand and use
language), and commonsense reasoning (understanding everyday things, like how objects
interact). Only later do some people learn specialized skills like engineering, medicine, or finance.
At first, it might seem logical to think that basic skills are easier for computers to replicate than
specialized skills because they come earlier in learning. That’s why early AI research focused on
these simpler tasks like perception and language.
However, this assumption turned out to be wrong. Expert skills, such as those in engineering or
medicine, actually require less knowledge than basic skills like perception or commonsense
reasoning. The knowledge needed for specialized tasks is often more structured and easier to
represent in a program, while more general tasks (like understanding the world or interpreting
language) involve complex, varied knowledge that’s harder to program.
In short: While we might think that basic skills are easier to replicate with AI, it turns out that
expert skills can sometimes be easier to handle because they involve more specific and
manageable knowledge.
AI is now having the most success in areas where specialized knowledge is needed, but
commonsense reasoning isn't as important. These are the areas where we can build programs
that focus on specific problems that used to require human expertise.
For example, there are expert systems—AI programs that help solve problems in various
industries and governments. These systems are designed to tackle important problems that would
normally need experienced professionals. Today, there are thousands of these expert systems
being used in real-world applications, like in healthcare, finance, and engineering.
In Chapter 20, the book will dive deeper into how these expert systems work and explain how
they are built.
In short: AI is thriving in areas where the task requires specialized knowledge (like medical
diagnosis or technical troubleshooting), and there’s no need for everyday commonsense
reasoning. Many expert systems are now used in industries to help solve complex problems.
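Chapter 20 covers the details, but the core of many expert systems is simply a collection of if-then rules applied by an inference engine. Here is a minimal forward-chaining sketch; the medical-sounding rules and facts are invented purely for illustration:

```python
# A minimal sketch of the rule-based core of an expert system:
# forward chaining over if-then rules until nothing new can be derived.
# The rules and facts below are invented for illustration only.

RULES = [
    ({"fever", "cough"}, "flu-suspected"),
    ({"flu-suspected", "short-of-breath"}, "refer-to-doctor"),
]

def forward_chain(facts: set) -> set:
    """Apply rules repeatedly until no new conclusions can be drawn."""
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "cough", "short-of-breath"}))
# -> includes 'flu-suspected' and then 'refer-to-doctor'
```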
1.2 THE UNDERLYING ASSUMPTION
A physical symbol system is made up of symbols (physical patterns) that form expressions
(structured groups of symbols). These expressions can be created, modified, copied, or
destroyed by the system. Over time, the system produces a continuously evolving collection of symbol structures.
The key idea of the hypothesis is:
👉 A physical symbol system is capable of general intelligent action, meaning it has everything
needed to perform tasks that require intelligence.
At the core of AI research is the Physical Symbol System Hypothesis introduced by Newell and
Simon. This hypothesis suggests that intelligent action can be achieved by a system that
manipulates symbols in specific ways. Here’s a breakdown of their idea:
   1. Physical Symbol System:
         o A symbol is a physical pattern (like a letter or a number) that represents
            something.
         o A symbol structure is made up of several symbols arranged in a particular way.
            For example, a word or a sentence is a structure made from individual letters or
            symbols.
         o The system has processes that manipulate these symbol structures to create new
            ones, like modifying them or combining them. These actions can help solve
            problems or make decisions.
   2. The Hypothesis:
         o Newell and Simon propose that any system that can manipulate symbols in this
            way has the necessary and sufficient means for general intelligent action. In
            simple terms, they believe that if a system can handle symbols (like words,
            numbers, etc.) and manipulate them, it can perform tasks that we think require
            intelligence.
          o This is a hypothesis, which means it’s an idea that has yet to be fully proven. The
             only way to test it is through experimentation.
   3. Computers as a Tool:
          o Computers are perfect for testing this hypothesis because they can be
             programmed to handle and manipulate symbols. Early thinkers like Ada Lovelace
             suggested that machines (like Charles Babbage’s proposed “analytical engine”)
             could handle more than just numbers—they could manipulate abstract concepts
             (like music or ideas) if they were set up to do so.
          o As computers have become more powerful, it’s become easier to test whether
             symbol manipulation can lead to intelligent behavior.
   4. Empirical Testing:
          o In AI, tasks that require intelligence (like game playing, perception, or medical
             diagnosis) are selected, and programs are created to perform those tasks. Even
             though we haven't been able to create perfect programs yet, many believe that the
             challenges will be overcome with better AI.
   5. Challenges and Subsymbolic Models:
          o Some tasks, like visual perception, might seem to require a different kind of
             approach called subsymbolic models (like neural networks), which don’t rely
             on symbols but on patterns or connections in data. These models are challenging
             the idea of symbols being at the core of intelligence.
          o However, the success of subsymbolic models doesn’t necessarily mean that the
             Physical Symbol System Hypothesis is wrong. There could be multiple ways to
             accomplish a task, and symbolic systems might still be important for many
             aspects of intelligence.
   6. The Importance of the Hypothesis:
          o The hypothesis is important for two main reasons:
                 - Psychologists are interested in understanding whether human intelligence
                   can be explained by symbol manipulation.
                 - AI researchers believe that if this hypothesis is true, we can build
                   programs that perform tasks currently done by humans.
In summary, the Physical Symbol System Hypothesis suggests that intelligent behavior can arise
from manipulating symbols. This idea forms the foundation of much of AI research, though it’s
still being tested, and there’s ongoing debate about whether other models, like neural networks,
might be equally or more important for certain tasks.
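To make the hypothesis concrete, here is a toy symbol system in Python: symbols are strings, expressions are tuples of symbols, and a single process (modus ponens) produces new expressions from existing ones. The rule format is an assumption made for this sketch, not Newell and Simon's notation.

```python
# A toy physical symbol system: symbols are strings, expressions are
# tuples of symbols, and one process (modus ponens) creates new
# expressions from old ones -- symbol manipulation in miniature.

expressions = {
    ("implies", "raining", "wet-ground"),
    ("raining",),
}

def modus_ponens(exprs):
    """From ('implies', P, Q) and (P,), derive the new expression (Q,)."""
    derived = set(exprs)
    for e in exprs:
        if len(e) == 3 and e[0] == "implies" and (e[1],) in exprs:
            derived.add((e[2],))
    return derived

print(modus_ponens(expressions))
# adds ('wet-ground',) -- a new symbol structure built from old ones
```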
1.3 WHAT IS AN AI TECHNIQUE?
Artificial intelligence (AI) tackles a wide variety of problems, but they all share one common
challenge: they are difficult to solve. However, there are techniques that can help solve different
kinds of problems in AI. The question is, what makes these techniques effective, and can they be
applied to problems outside of AI too?
The answer is yes, some techniques are useful in solving many different problems, even if they
aren't strictly AI tasks. Before we get into the details of each technique, it’s useful to look at the
broader characteristics they should have.
Key Point 1: Intelligence Requires Knowledge
From the early days of AI research, it has been
clear that knowledge is essential for intelligence. But knowledge has its challenges:
    - It's voluminous (there is a lot of it).
    - It's hard to describe accurately.
    - It's always changing.
    - It's organized in a way that fits how we plan to use it.
So, What Makes an AI Technique Effective?
An AI technique should be able to work with
knowledge that meets these conditions:
   1. Generalization: Knowledge should capture general patterns or rules, not just individual
      situations. If it doesn’t, AI programs would need a huge amount of memory and constant
      updates. When we don’t have generalizations, it’s just data, not knowledge.
   2. Understandable by People: Most of the knowledge a program needs must be provided
      by humans in a way that makes sense to them. Even though some data can be
      automatically collected (like measurements), AI programs still need input from people
      who understand the problem.
   3. Modifiable: Knowledge should be easy to change. This is important to fix mistakes or
      keep up with new information.
   4. Usefulness in Various Situations: Knowledge doesn’t need to be perfect or complete.
      Even if it's a little incomplete or inaccurate, it should still be useful in many situations.
   5. Helps Narrow Down Options: Even though there’s a lot of knowledge, it should help
      reduce the number of possibilities the program needs to consider to find a solution (see
      the sketch after this list).
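The sketch below illustrates point 5: a single piece of knowledge about the problem can shrink the space a program has to search by orders of magnitude. The number puzzle itself is an invented example.

```python
# Knowledge narrows search. Task (invented for illustration): find all
# pairs (a, b) with 0 <= a, b < 100 where a + b == 150 and a * b == 5600.

# Without knowledge: examine every pair -- 10,000 candidates.
candidates = [(a, b) for a in range(100) for b in range(100)]
brute = [(a, b) for a, b in candidates if a + b == 150 and a * b == 5600]

# With one piece of knowledge -- if a + b == 150, then b is determined
# by a -- we only need to examine 100 values instead of 10,000.
narrowed = [(a, 150 - a) for a in range(100)
            if 0 <= 150 - a < 100 and a * (150 - a) == 5600]

print(len(candidates), "candidates without knowledge ->", brute)
print(100, "candidates with knowledge ->", narrowed)
# Both find the same answers: (70, 80) and (80, 70).
```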
Independence of Problems and Techniques: AI techniques aren’t always tied to specific
problems. You can solve an AI problem without using traditional AI techniques (though the
solution might not be as effective). You can also apply AI techniques to solve non-AI problems
if they share similar characteristics.
To understand AI techniques better, it’s helpful to look at different problems and how AI
techniques can be applied to them. Even though problems can be very different, the underlying
approaches might overlap, making AI techniques adaptable and powerful.
In short: AI techniques work best when they handle knowledge that is general, understandable,
modifiable, and useful in many situations. While AI problems are complex, the techniques for
solving them can be applied to many different kinds of problems.
1.4 THE LEVEL OF THE MODEL
Before starting an AI project, it’s important to decide what we want to achieve. The main
question is: Are we trying to make a program that works exactly like humans do, or are we
aiming for a program that just does the task in the easiest way possible? There have been AI
projects based on both of these goals.
Two Types of AI Programs:
   1. Programs that try to do tasks like people do:
         o Some programs try to solve problems the same way humans would, even if a
            computer could easily solve them using a different method.
         o For example, EPAM (a program designed in 1963) simulated how humans might
            memorize pairs of nonsense syllables. While a computer could easily store and
            retrieve data like this, humans find it difficult. EPAM tried to model human
            memory by using a system that helped it remember things based on patterns,
            similar to how a human might recall them. But, like people, it sometimes "forgot"
            things if the clues weren’t specific enough.
         o Programs like this are useful for testing human behavior theories, but many
            people find them uninteresting because they focus on tasks computers can easily
            do.
   2. Programs that handle tasks that are difficult for both people and computers:
         o These programs aim to solve true AI problems, tasks that are complex for both
            computers and humans.
         o Reasons to model human performance in AI tasks:
                1. Test psychological theories: Some programs, like PARRY (a simulation
                    of a paranoid person), were designed to model human behavior and test
                    psychological theories. PARRY was good enough that experts diagnosed
                    its behavior as paranoid, just like a real person.
                2. Help computers understand human reasoning: For a computer to
                    answer complex questions like "Why did the terrorists kill the hostages?"
                    it needs to simulate human reasoning to understand the context.
                3. Help people understand how computers reason: If a computer’s
                    reasoning process is similar to human reasoning, it will be easier for
                    people to trust and understand the computer's decisions.
                4. Learn from human expertise: Humans are the best at performing most
                    tasks, so studying how we approach these tasks can help create better AI
                    systems.
In Simple Terms:
AI can be developed in two main ways: One way is to model human behavior even for tasks that
computers can easily solve, which can be useful for studying human performance. The other way
is to focus on solving complex AI problems that require both human and computer reasoning.
These AI programs are designed to either model how humans think or to help both computers
and people understand each other's reasoning.
In AI research, one of the main goals is to understand how human intelligence works and use that
understanding to create intelligent machines. This goal has motivated many different approaches,
some of which focus on trying to imitate human thought processes at the level of individual
neurons, while others focus on higher-level cognitive models.
Imitating Human Behavior at the Neuron Level:
In the past, researchers like McCulloch and Pitts (1943) and Frank Rosenblatt (1950s) tried to
replicate intelligent behavior by mimicking the behavior of human neurons. These early attempts
involved perceptrons, which are simple neural networks designed to learn patterns. However,
these models faced significant limitations. Perceptrons and similar architectures couldn't handle
more complex tasks due to their simplistic design.
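For a sense of what those early models looked like, here is a minimal perceptron in the Rosenblatt style, trained with the classic error-correction rule to learn the logical AND function. The learning rate and epoch count are arbitrary choices for this sketch.

```python
# A minimal Rosenblatt-style perceptron: a weighted sum plus a threshold,
# trained with the classic error-correction rule on the AND function.

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND
w = [0.0, 0.0]
bias = 0.0
rate = 0.1

def predict(x):
    """Fire (output 1) if the weighted sum of inputs exceeds the threshold."""
    return 1 if w[0] * x[0] + w[1] * x[1] + bias > 0 else 0

for _ in range(20):  # a few passes over the data suffice for AND
    for x, target in data:
        error = target - predict(x)
        w[0] += rate * error * x[0]   # nudge weights toward the target
        w[1] += rate * error * x[1]
        bias += rate * error

print([predict(x) for x, _ in data])  # -> [0, 0, 0, 1]
```

The same architecture famously cannot learn XOR, which is the kind of limitation that stalled this line of work until multi-layer connectionist models arrived.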
Despite these challenges, the idea of using neural networks as the basis for intelligent behavior
didn’t disappear. More recently, new connectionist neural network architectures were
developed, which overcome the problems of earlier models. These new architectures have been
successful in various AI tasks like learning and problem-solving. These networks are now used
in many AI programs and are a part of ongoing research. However, it's important to note that
while the human brain operates in parallel (many neurons working at the same time), most
computers process information serially (one task at a time). This difference makes it challenging
to implement the same type of parallel processing used by the brain in traditional computers. But
with advancements in parallel computing and cognitive models, there’s renewed interest in
designing machines that can work like the human brain, processing many things simultaneously.
Higher-Level Theories and Cognitive Models:
However, not all AI researchers believe that mimicking the human brain at the neuron level is
the best approach. Many have shifted focus to higher-level cognitive models that do not require
such complex, parallel systems. For example, early AI programs like GPS (General Problem
Solver) aimed to solve problems using reasoning processes that were closer to how humans
think, but without needing to simulate neurons directly. This type of AI approach deals with
abstract reasoning and doesn’t focus on mimicking the physical structure of the brain.
Additionally, when AI researchers focus on tasks like natural language understanding, the
difficulties of simply analyzing sentence structures using rules (syntax) have led them to look at
how humans understand language. We know that humans don't just rely on rules but use context
and world knowledge, so AI researchers look to human language processing as a guide for
improving machine understanding of language. Similarly, AI researchers studying computer
vision might look at human cognitive processes to help them develop better models for
recognizing and interpreting images.
Motivation for Both Approaches:
In the long run, the difference between trying to simulate human performance (imitating humans
closely) and building intelligent systems in other ways seems less significant than it originally
appeared. In both cases, what matters most is creating a good model of intelligent reasoning—a
model that explains how intelligent processes work and how to replicate them in machines. This
is where cognitive science comes in, bringing together psychologists, linguists, and computer
scientists to develop such models.
In short, whether trying to mimic human neurons or focusing on higher-level cognitive models,
AI research is deeply influenced by human cognitive theories. Both approaches seek to
understand and replicate intelligent behavior in ways that make the most sense for solving AI
problems. By studying human performance, AI researchers aim to improve their models and
build better systems that can handle complex tasks. The field of cognitive science is crucial to
this process, providing valuable insights into how human intelligence can be replicated or
understood by machines.
Key Points on AI Goals and Human Performance Modeling
Two Main Approaches to AI Development
    1. Simulating Human Thinking – Creating AI that performs tasks the way humans do.
    2. Task-Focused AI – Creating AI that solves tasks in the easiest way possible, not necessarily like
       humans.
Two Classes of AI Programs That Simulate Human Thinking
    1. Simple Tasks (Not True AI Tasks)
            o   Example: EPAM (1963) memorized nonsense syllables like humans but sometimes
                “forgot” responses due to limited cues.
            o   These programs are mainly useful for psychologists to test human learning models.
    2. Complex AI Tasks (True AI Tasks)
            o   Why Simulate Human Thinking?
                   - Test psychological theories (e.g., PARRY (1975) simulated a paranoid person).
                   - Help computers understand human reasoning (e.g., AI answering questions
                     about news articles).
                   - Help people trust AI by making reasoning understandable.
                   - Learn from human intelligence to improve AI development.
Early AI Inspired by Human Brain Models
    - Early AI (e.g., perceptrons, neural nets) tried to imitate neurons but had theoretical limitations.
    - Modern AI (connectionist models) overcomes past limitations and performs better.
Parallel vs. Serial Computing
    - Human brains are parallel (many processes at once).
    - Most computers are serial (step-by-step processing).
    - Massively parallel AI systems are being developed to improve AI performance.
Influence of Cognitive Science on AI
    - AI research now looks at higher-level human thinking beyond neurons.
    - Example: GPS and Natural Language Understanding programs mimic human thought
      processes.
    - AI researchers take inspiration from how humans process language and images.
Final Thought
    - The line between simulating human intelligence and building smart AI is blurring.
    - Both approaches need a strong model of human reasoning—the focus of cognitive science.
2.3 PROBLEM CHARACTERISTICS
Here’s a simple explanation of each characteristic:
    - Decomposability – Can the problem be broken down into smaller independent
      subproblems? If yes, solving these smaller parts separately might make the problem
      easier. Example: Solving a large puzzle by working on different sections separately.
    - Reversibility – Can mistakes be undone? If a step leads to a bad outcome, can we go
      back and try something else? Example: In a maze, if you take the wrong turn, you can
      backtrack and try another path.
    - Predictability – Do actions always lead to expected results? If yes, planning ahead is
      easier. Example: Moving chess pieces follows set rules, so the outcome of a move can be
      predicted.
    - Obviousness of a Good Solution – Can a good solution be recognized immediately, or
      do we need to compare multiple solutions? Example: In a jigsaw puzzle, a correct piece
      fits perfectly, making it obvious.
    - Goal Type – Is the goal to reach a specific state, or is the path taken also important?
      Example: In chess, the final checkmate position is crucial (state), while in GPS
      navigation, the best route matters (path).
    - Knowledge Requirement – Does solving the problem require extensive domain
      knowledge, or is knowledge mainly used to guide the search? Example: Diagnosing a
      disease requires medical knowledge, but solving a maze may only need trial and error.
Understanding these characteristics helps in choosing the best problem-solving approach!
Here’s a simple explanation of decomposable and non-decomposable problems:
Decomposable Problems
A problem is decomposable if it can be broken down into smaller, independent subproblems that
can be solved separately and then combined to form the final solution.
    - Example: Symbolic Integration
         o Consider solving an integral of a sum, such as ∫(x² + 3x) dx. By linearity, it splits
           into the subproblems ∫x² dx and ∫3x dx.
         o Each integral can be solved individually and then combined to get the final answer
           (see the sketch below).
         o This makes integration decomposable, as subproblems don’t interfere with each other.
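A quick check of this decomposability in Python, using the sympy library for symbolic math:

```python
# Decomposability in action: integrating each term separately and
# summing gives the same answer as integrating the whole expression.
# Requires the sympy library (pip install sympy).

from sympy import symbols, integrate

x = symbols("x")
whole = integrate(x**2 + 3*x, x)                  # integrate the sum
parts = integrate(x**2, x) + integrate(3*x, x)    # integrate each part
print(whole, "==", parts, "->", whole == parts)
# x**3/3 + 3*x**2/2 == x**3/3 + 3*x**2/2 -> True
```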
Non-Decomposable Problems
A problem is non-decomposable if splitting it into smaller subproblems leads to dependencies,
meaning solving one part affects another, making it difficult to solve separately.
    - Example: Blocks World Problem
         o We want to rearrange blocks to achieve a specific goal (e.g., stacking A on B and
           B on C), starting with C sitting on top of A.
         o If we try to solve the subgoals separately:
                1. Move B onto C → ✅ Done.
                2. Move A onto B → ❌ But A is still buried under C, so step 1 must be
                   undone first!
         o The subproblems are dependent; solving one affects the other, making independent
           decomposition impossible (see the sketch below).
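A small sketch of the interference, assuming the starting configuration above (C on A, B on the table):

```python
# Why the blocks-world subgoals interfere. `on` maps each block to what
# it sits on; a block is clear when nothing is sitting on it.
# Assumed initial state: C is stacked on A, B is on the table.

on = {"A": "table", "B": "table", "C": "A"}

def clear(block):
    """A block is clear if no other block rests on it."""
    return block not in on.values()

def move(block, dest):
    """Move a block onto dest if both are clear; report success."""
    if clear(block) and (dest == "table" or clear(dest)):
        on[block] = dest
        return True
    return False

print(move("B", "C"))  # True: subgoal "B on C" achieved
print(move("A", "B"))  # False: A is still buried under C, so achieving
                       # the second subgoal means undoing the first.
```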
Key Difference
    - Decomposable problems can be solved in parts without interference.
    - Non-decomposable problems have dependencies, meaning we need a more careful strategy to
      solve them.
Here’s a simple explanation of Ignorable, Recoverable, and Irrecoverable problems:
1. Ignorable Problems (e.g., Theorem Proving)
    - If a step is useless, you can ignore it and continue without any consequences.
    - Example: In proving a mathematical theorem, if you explore a wrong lemma, you can just ignore
      it and continue with the correct approach.
2. Recoverable Problems (e.g., 8-Puzzle)
    - If you make a mistake, you can undo it by backtracking.
    - Example: In the 8-puzzle, if you move the wrong tile, you can move it back and try another
      move. However, you must keep track of previous moves to undo them correctly.
3. Irrecoverable Problems (e.g., Chess)
    - If you make a mistake, you cannot undo it; you must adapt and make the best of the new
      situation.
    - Example: In chess, if you make a bad move, you cannot take it back (unless you're playing
      casually with an undo option). Instead, you must adjust your strategy based on the new board
      position.
Why is this important?
    - Ignorable problems are the easiest to solve, as mistakes don’t matter.
    - Recoverable problems need a backtracking mechanism to undo errors.
    - Irrecoverable problems require careful planning and decision-making, as every step is
      permanent.
This helps determine how complex a problem-solving system needs to be!
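As a sketch of what a backtracking mechanism looks like for a recoverable problem, here is a minimal undo stack; the single blank-tile position stands in for a full 8-puzzle state:

```python
# A minimal backtracking mechanism: every move is recorded so it can be
# undone. The blank-tile index stands in for a full 8-puzzle state.

history = []   # stack of (position_before_move, position_after_move)
blank = 4      # index of the blank square on a 3x3 board (0..8)

def slide(new_pos):
    """Move the blank to new_pos, remembering how to undo the move."""
    global blank
    history.append((blank, new_pos))
    blank = new_pos

def undo():
    """Backtrack: reverse the most recent move."""
    global blank
    blank, _ = history.pop()

slide(1)      # try a move that turns out to be a mistake...
undo()        # ...and recover by backtracking
print(blank)  # -> 4, back where we started
```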
2.3.3 Is the Universe Predictable?
Predictable vs. Unpredictable Problems
Some problems allow us to predict exactly what will happen, while others involve uncertainty,
making perfect planning impossible.
1. Certain-Outcome Problems (Predictable Universe)
    - The result of every action is known and predictable.
    - Example: 8-Puzzle – When you move a tile, you know exactly where it will go.
    - Planning works well since every step is guaranteed to lead to a known result.
2. Uncertain-Outcome Problems (Unpredictable Universe)
    - The result of actions is not always predictable, requiring adjustments along the way.
    - Example: Playing Bridge – You don’t know what cards other players have or how they will play.
    - Planning helps, but decisions must be updated as new information is revealed.
The Challenge of Irrecoverable + Uncertain Problems
The hardest problems are both uncertain and irrecoverable, meaning:
    - You cannot undo mistakes (irrecoverable).
    - You cannot predict the outcome perfectly (uncertain).
Examples of hard problems:
   1. Playing bridge – You must make decisions without knowing opponents’ cards.
   2. Controlling a robot arm – Unexpected obstacles or mechanical failures can occur.
   3. Legal defense – A lawyer must prepare for unpredictable arguments, evidence, or jury reactions.
Why is this important?
    - Predictable problems allow detailed planning.
    - Unpredictable problems require flexibility and real-time adjustments.
    - The hardest problems require balancing careful planning with adaptability.
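A final sketch contrasting the two kinds of problems; the deterministic tile move is in the spirit of the 8-puzzle, while the dice roll is an invented stand-in for an uncertain-outcome action:

```python
# Certain vs. uncertain outcomes in miniature.

import random

def slide_tile(blank_index, direction):
    """Certain outcome: the same move from the same state always gives
    the same result, so a complete plan can be made in advance."""
    offsets = {"up": -3, "down": 3, "left": -1, "right": 1}
    return blank_index + offsets[direction]

def roll_die():
    """Uncertain outcome: the result cannot be predicted ahead of time,
    so plans must be revised as results are revealed."""
    return random.randint(1, 6)

print(slide_tile(4, "up"), slide_tile(4, "up"))  # always 1 1
print(roll_die(), roll_die())                    # unpredictable
```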