Logical Agents
22MDC45 Unit 3-2
(Chapter 7)
Logic and Knowledge Bases
◼ Logic: means of representation and
reasoning
◼ Knowledge Base (KB): set of sentences
(expressed in some language)
◼ Inference: deriving new sentences from
sentences in the KB
Knowledge base
◼ Knowledge base (KB) = set of sentences or facts in a
formal language
◼ e.g., a set of statements in a logic language
◼ Inference
◼ Deriving new sentences from old
◼ e.g., using a set of logical statements to infer new ones
◼ A simple model for reasoning
◼ Agent is told or perceives new evidence
◼ E.g., A is true
◼ Agent then infers new facts to add to the KB
◼ E.g., KB = { A -> (B OR C) }, then given A and not C we can infer that B is
true
◼ B is now added to the KB even though it was not explicitly asserted, i.e.,
the agent inferred B
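The little inference above can be checked mechanically by enumerating models. A minimal sketch (the helper name `entails` and the function-based sentence encoding are my own, not from the chapter):

```python
from itertools import product

def entails(kb, query, symbols):
    """Return True if query holds in every model (truth assignment)
    in which kb holds."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not query(model):
            return False
    return True

# KB = { A -> (B OR C) } plus the evidence A and not C
kb = lambda m: ((not m["A"]) or m["B"] or m["C"]) and m["A"] and not m["C"]
print(entails(kb, lambda m: m["B"], ["A", "B", "C"]))  # True: B is entailed
```

Without the evidence "not C", B would no longer follow, since the pit could be "on the C side" of the disjunction.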
Knowledge-based Agents
◼ Declarative approach to building an agent (or other system):
◼ Tell it what it needs to know
◼ Then it can Ask itself what to do: answers should follow from the KB
◼ Agents can be viewed at the knowledge level
◼ i.e., what they know, regardless of how implemented
◼ Or at the implementation level
◼ i.e., data structures in KB and algorithms that manipulate them
Knowledge-Based
Agent Function
TELL: adds a sentence to the KB
ASK: queries the KB
A simple knowledge-based agent
◼ The agent must be able to:
◼ Represent states, actions, etc.
◼ Integrate new percepts
◼ Update internal representations of the world
◼ Deduce hidden properties of the world
◼ Deduce appropriate actions
Example: Wumpus World
◼ 4 by 4 grid of rooms
◼ A room may contain:
Agent, Wumpus, Pit, Gold
◼ Agent can perceive pit or
wumpus from neighboring
squares
◼ Agent starts in lower left
corner, can move to
neighboring squares, or
shoot an arrow N,E,W, or S
Wumpus World
PEAS Description
◼ Performance measure:
◼ gold +1000
◼ death –1000
◼ -1 per step
◼ -10 for using up arrow
◼ Environment:
◼ 4 by 4 grid of rooms
◼ one room contains the agent (initially at [1,1] facing right)
◼ one room (not [1,1]) contains the wumpus (and it stays there)
◼ one room contains the gold
◼ the other rooms may contain a pit
PEAS Description, continued
◼ Actuators: Left turn, Right turn, Forward, Grab, Shoot
◼ Shooting kills wumpus if you are facing it
◼ Shooting uses up the only arrow
◼ Grabbing picks up gold if in same square
◼ Agent dies when it enters a room containing pit/live wumpus
◼ Sensors: Stench, Breeze, Glitter, Bump, Scream
◼ Squares adjacent to wumpus are smelly
◼ Squares adjacent to a pit are breezy
◼ Glitter perceived in square containing gold
◼ Bump perceived when agent hits a wall
◼ Scream perceived everywhere when wumpus is hit
Wumpus world characterization
◼ Fully Observable
No – only local perception
◼ Deterministic
Yes – outcomes exactly specified
◼ Episodic
No – sequential at the level of actions
◼ Static
Yes – Wumpus and Pits do not move
◼ Discrete
Yes
◼ Single-agent?
Yes – Wumpus is essentially a natural feature
Wumpus World and
Knowledge
◼ State of knowledge
◼ What is known about the rooms at time t
◼ Associate one or more values to each room, when
known: A, B, G, OK, P, S, V, W
(use ? to indicate possibility)
◼ Contrast against what is actually in the rooms
◼ A move and resulting percept allow agent to
update the state of knowledge
◼ Next move would depend on what is known
Example: Initial State
and First Move
[None,None,None,None,None] [None,Breeze,None,None,None]
Sample Action Sequence: forward, turn around, forward,
turn right, forward, turn right, forward, turn left, forward
Later Moves
Actions: forward, turn around, forward, turn right,
forward, turn right, forward, turn left, forward
Inference
◼ Agent can infer that there is a wumpus
in [1,3]
◼ Stench in [1,2] means wumpus is in [1,1], [1,3],
or [2,2]
◼ Wumpus not in [1,1] by the rules of the game
◼ Wumpus not in [2,2] because [2,1] had no stench
◼ Agent can also infer that there is a pit
in [3,1] (how?)
Logic in general
◼ Logics are formal languages for representing
information such that conclusions can be drawn
◼ Syntax defines the sentences in the language
◼ Semantics define the "meaning" of sentences;
◼ i.e., define truth of a sentence in a world
◼ E.g., the language of arithmetic
◼ x+y = 5 is a well-formed sentence.
◼ x+2 ≥ y is a sentence; x2+y > {} is not a sentence
◼ x+2 ≥ y is true if the number x+2 is no less than the number y
◼ x+2 ≥ y is true in a world where x = 7, y = 1
◼ x+2 ≥ y is false in a world where x = 0, y = 6
Logic
◼ Representation
◼ Syntax: how well-formed sentences are
specified
◼ Semantics: “meaning” of the sentences;
truth with respect to each possible world
(model)
◼ Reasoning
◼ Entailment: one sentence following from
another (α ╞ β)
Models and Entailment
◼ Logicians typically think in terms of models, with respect to
which truth can be evaluated
◼ model: a possible world
◼ We say m is a model of a sentence α if α is true in m
◼ M(α) is the set of all models of α
◼ Then KB ╞ α iff M(KB) ⊆ M(α)
◼ E.g.
KB = I am smart
and you are pretty
α = I am smart
Entailment
◼ Entailment means that one thing follows from
another:
KB ╞ α
◼ Knowledge base KB entails sentence α
if and only if α is true in all worlds
where KB is true
◼ E.g., the KB containing “Perspolis won” and
“Esteghlal won” entails “Either Perspolis won or
Esteghlal won”
◼ E.g., x+y = 4 entails 4 = x+y
◼ Entailment is a relationship between sentences
(i.e., syntax) that is based on semantics
Models and Entailment
in the Wumpus World
Situation after detecting
nothing in [1,1], moving
right, breeze in [2,1]
Consider possible models
for KB assuming only pits
3 Boolean choices
8 possible models
Wumpus Models
KB = wumpus-world rules + observations
α1 = "[1,2] is safe"; KB ╞ α1, proved by model checking
α2 = "[2,2] is safe"; KB ⊭ α2
Entailment and Inference
◼ Definition of entailment can be applied to
derive conclusions – Logical Inference
◼ Model checking: Enumerate all possible
models to check that α is true in all models in
which KB is true, that is, that M(KB) ⊆ M(α).
◼ Grounding: if KB is true in the real world,
then any sentence α derived from KB by a
sound inference procedure is also true in the
real world.
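The set-inclusion definition M(KB) ⊆ M(α) can be made concrete for the smart/pretty example above (encoding models as Python dicts is my own choice, not from the chapter):

```python
from itertools import product

symbols = ["Smart", "Pretty"]
# All four possible worlds over the two symbols
models = [dict(zip(symbols, v)) for v in product([True, False], repeat=2)]

kb    = lambda m: m["Smart"] and m["Pretty"]   # "I am smart and you are pretty"
alpha = lambda m: m["Smart"]                   # "I am smart"

M_kb    = [m for m in models if kb(m)]         # models of KB
M_alpha = [m for m in models if alpha(m)]      # models of alpha

# KB |= alpha iff every model of KB is also a model of alpha
print(all(m in M_alpha for m in M_kb))  # True
```

Note the inclusion is proper here: alpha has models (smart but not pretty) that are not models of KB, so alpha does not entail KB.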
Inference Algorithm
◼ An inference algorithm i is a procedure
that derives sentences from a
knowledge base: KB ├i s
◼ i is sound if it derives only entailed
sentences
◼ i is complete if it can derive any
sentence that is entailed
Inference
◼ KB ├i α = sentence α can be derived from KB by
procedure i
◼ Soundness: i is sound if whenever KB ├i α, it is also
true that KB╞ α
◼ Completeness: i is complete if whenever KB╞ α, it is
also true that KB ├i α
◼ Preview: we will define a logic (first-order logic) which
is expressive enough to say almost anything of interest,
and for which there exists a sound and complete
inference procedure.
◼ That is, the procedure will answer any question whose
answer follows from what is known by the KB.
LOGIC
◼ Models
◼ Sentence – Syntax and Semantics
◼ Entailment / Logical Inference
◼ Model Checking
◼ Sound / truth preserving, complete
inference algorithm
◼ Grounding
Propositional Logic (PL)
◼ PL: logic that consists of proposition
symbols and connectives
◼ Each symbol is either true or false
◼ Syntax: describes how the symbols and
connectives form sentences
◼ Semantics: describes rules for
determining the truth of a sentence wrt
a model
Syntax
◼ A sentence in Propositional Logic is either
Atomic or Complex
◼ Atomic Sentence
◼ Symbol: e.g., P, Q, R, …
◼ True
◼ False
◼ Complex Sentence
◼ Let S and T be sentences (atomic or complex)
◼ The following are also sentences:
¬S, S ∧ T, S ∨ T, S ⇒ T, S ⇔ T
Connectives
◼ ¬S: negation
◼ if P is a symbol, P and ¬P are called literals
◼ S ∧ T: conjunction
◼ S and T are called conjuncts
◼ S ∨ T: disjunction
◼ S and T are called disjuncts
◼ S ⇒ T: implication
◼ S is called the premise, T is called the conclusion
◼ S ⇔ T: biconditional
Back to the Wumpus World
◼ Start with a vocabulary of proposition
symbols, for example:
◼ Pi,j: there is a pit in room [i,j]
◼ Bi,j: there is a breeze in room [i,j]
◼ Sample sentences (could be true or false)
◼ ¬P1,2
◼ B2,2 ∧ P2,3
◼ P4,3 ∨ B3,3 ∨ B4,2 ∨ B4,4
◼ P3,4 ⇒ B1,3
◼ Note issue of precedence with connectives
Semantic Rules
◼ ¬P is true iff P is false in m.
◼ P ∧ Q is true iff both P and Q are true in m.
◼ P ∨ Q is true iff either P or Q is true in m.
◼ P ⇒ Q is true unless P is true and Q is false in m.
◼ P ⇔ Q is true iff P and Q are both true or both false in m.
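These five rules translate directly into a recursive evaluator. A minimal sketch (the nested-tuple sentence representation and the name `pl_true` are assumptions of this sketch, not from the text):

```python
def pl_true(sentence, m):
    """Evaluate a PL sentence in model m (dict: symbol -> bool).
    Atomic sentences are strings; complex ones are (op, arg, ...) tuples."""
    if isinstance(sentence, str):          # atomic: a proposition symbol
        return m[sentence]
    op, *args = sentence
    if op == "not":  return not pl_true(args[0], m)
    if op == "and":  return pl_true(args[0], m) and pl_true(args[1], m)
    if op == "or":   return pl_true(args[0], m) or pl_true(args[1], m)
    if op == "=>":   return (not pl_true(args[0], m)) or pl_true(args[1], m)
    if op == "<=>":  return pl_true(args[0], m) == pl_true(args[1], m)
    raise ValueError(f"unknown connective: {op}")

m = {"P": True, "Q": False}
print(pl_true(("=>", "P", "Q"), m))   # False: P true, Q false
print(pl_true(("or", "P", "Q"), m))   # True
```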
Semantics
◼ Truth of symbols is specified in the
model
◼ Truth of complex sentences can be
determined using truth tables
Knowledge Base for
the Wumpus World
◼ Rules constitute the initial KB and can be expressed
in PL; for example:
◼ ¬P1,1
◼ P4,4 ⇒ (B3,4 ∧ B4,3)
◼ As the agent progresses, it can perceive other facts
and incorporate them into its KB; for example:
◼ ¬B1,1 if it doesn’t perceive a breeze in room [1,1]
◼ B2,1 if it perceives a breeze in room [2,1]
◼ Can view the KB as a conjunction of all sentences
asserted as true so far
Inference in the
Wumpus World
◼ We want to decide on the existence of
pits in the rooms; i.e. does KB╞ Pi,j ?
◼ Suppose we have already perceived
¬B1,1 and B2,1
◼ KB contains the rules and these facts
◼ What can we say about:
◼ P1,1, P1,2, P2,1, P2,2, P3,1 ?
Truth Table Depicting
128 Possible Models
Inference Examples
◼ KB is true when the rules hold—only for three
rows in the table
◼ The three rows are models of KB
◼ Consider the value of P1,2 for these 3 rows
◼ P1,2 is false in all rows
(the rows are models of α1 = ¬P1,2)
◼ Thus, there is no pit in room [1,2]
◼ Consider the value of P2,2 for these 3 rows
◼ P2,2 is false in one row, true in two rows
◼ Thus, there may be a pit in room [2,2]
Inference by Enumeration
◼ We want an algorithm that determines
whether KB entails some sentence α
◼ Strategy:
◼ Enumerate all possible models (true-false
combinations of symbols in KB)
◼ Consider only those models of KB (models
where KB is true)
◼ Return true if α is true for all such models
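The strategy above can be sketched recursively, extending a partial model one symbol at a time (the function names and the function-based KB encoding are my own; this follows the spirit, not the letter, of the textbook pseudocode):

```python
def tt_entails(kb, alpha, symbols, model=None):
    """Enumerate all models over symbols; return True iff alpha holds
    in every model of kb.  kb and alpha are boolean functions of a model."""
    model = model or {}
    if not symbols:
        # Only models where KB is true matter; others are vacuously fine
        return alpha(model) if kb(model) else True
    first, rest = symbols[0], symbols[1:]
    return (tt_entails(kb, alpha, rest, {**model, first: True}) and
            tt_entails(kb, alpha, rest, {**model, first: False}))

# Wumpus fragment: B1,1 <=> (P1,2 or P2,1), plus the percept "no breeze in [1,1]"
kb = lambda m: (m["B11"] == (m["P12"] or m["P21"])) and not m["B11"]
print(tt_entails(kb, lambda m: not m["P12"], ["B11", "P12", "P21"]))  # True
```

The only model of this KB assigns false to every symbol, so both ¬P1,2 and ¬P2,1 are entailed.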
Analysis
◼ Inference by Enumeration is sound and
complete
◼ By definition of sound and complete ☺
◼ Runs in exponential time - O(2n)
◼ Requires linear space - O(n)
Logical equivalence
◼ Two sentences are logically equivalent iff they are true in
the same models: α ≡ β iff α ╞ β and β ╞ α
Validity & satisfiability
A sentence is valid if it is true in all models (a tautology)
e.g., True, A ∨ ¬A, A ⇒ A, (A ∧ (A ⇒ B)) ⇒ B
Validity is connected to inference via the Deduction
Theorem:
KB ╞ α if and only if (KB ⇒ α) is valid
A sentence is satisfiable if it is true in some model
e.g., A ∨ B, C
A sentence is unsatisfiable if it is true in no models
e.g., A ∧ ¬A
Unsatisfiability is connected to inference via the following:
KB ╞ α if and only if (KB ∧ ¬α) is unsatisfiable
Proof methods
◼ Proof methods divide into two kinds (approximately) :
◼ Application of inference rules
◼ Legitimate (sound) generation of new sentences from old
◼ Proof = a sequence of inference rule applications
Can use inference rules as operators in a standard search
algorithm
◼ Typically require transformation of sentences into a normal form
◼ Model checking
◼ truth table enumeration (always exponential in n)
◼ improved backtracking, e.g., Davis--Putnam-Logemann-Loveland
(DPLL)
◼ heuristic search in model space (sound but incomplete)
e.g., min-conflicts-like hill-climbing algorithms
Reasoning Templates in PL
Modus Ponens:
α ⇒ β,   α
β
The notation means that, whenever any sentences of the form α ⇒ β and α are given, the
sentence β can be inferred. For example, if (WumpusAhead ∧ WumpusAlive) ⇒ Shoot and
(WumpusAhead ∧ WumpusAlive) are given, then Shoot can be inferred.
And-Elimination
From a conjunction, any of the conjuncts can be inferred
For example, from (WumpusAhead ∧ WumpusAlive), WumpusAlive can be inferred.
Bi-conditional Elimination
From α ⇔ β, infer α ⇒ β (and β ⇒ α)
◼ Using logical equivalences as inference rules
◼ Employ search algorithms to prove: formulate the proof as a
search problem
Conjunctive Normal Form
(CNF )
Eventually we
want to prove: Knowledge base KB entails sentence α
Conjunctive Normal Form (CNF):
a conjunction of disjunctions of literals; each disjunction is a clause
We first rewrite the KB into CNF, a “conjunction of disjunctions” of literals:
(A ∨ ¬B) ∧ (B ∨ ¬C ∨ ¬D)
  Clause       Clause
• Theorem: Any KB can be converted into an equivalent CNF.
• k-CNF: exactly k literals per clause
Example: Conversion to CNF
B1,1 ⇔ (P1,2 ∨ P2,1)
1. Eliminate ⇔, replacing α ⇔ β with (α ⇒ β) ∧ (β ⇒ α).
(B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1)
2. Eliminate ⇒, replacing α ⇒ β with ¬α ∨ β.
(¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬(P1,2 ∨ P2,1) ∨ B1,1)
3. Move ¬ inwards using de Morgan's rules and double-negation:
(¬B1,1 ∨ P1,2 ∨ P2,1) ∧ ((¬P1,2 ∧ ¬P2,1) ∨ B1,1)
4. Apply distributive law (∨ over ∧) and flatten:
(¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬P1,2 ∨ B1,1) ∧ (¬P2,1 ∨ B1,1)
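The result of step 4 can be double-checked by enumeration: the CNF must be true in exactly the same models as the original biconditional. A small self-check sketch (the lambda encoding is my own):

```python
from itertools import product

# Original sentence: B1,1 <=> (P1,2 or P2,1)
original = lambda b, p12, p21: b == (p12 or p21)

# CNF from step 4: (~B v P12 v P21) ^ (~P12 v B) ^ (~P21 v B)
cnf = lambda b, p12, p21: ((not b or p12 or p21) and
                           (not p12 or b) and
                           (not p21 or b))

# Equivalent iff they agree on all 2^3 truth assignments
print(all(original(*v) == cnf(*v)
          for v in product([True, False], repeat=3)))  # True
```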
Resolution
◼ Resolution inference rule (for CNF):
l1 ∨ … ∨ lk,   m1 ∨ … ∨ mn
l1 ∨ … ∨ li-1 ∨ li+1 ∨ … ∨ lk ∨ m1 ∨ … ∨ mj-1 ∨ mj+1 ∨ … ∨ mn
where li and mj are complementary literals, i.e., li = ¬mj.
E.g., P1,3 ∨ P2,2,   ¬P2,2
P1,3
◼ Resolution is sound and complete for propositional logic
◼ Factoring removes duplicate copies of a literal from the resolvent
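A single resolution step can be sketched over a set-based clause representation (the `(symbol, is_positive)` literal encoding is an assumption of this sketch; factoring happens automatically because duplicate literals collapse in a set):

```python
def resolve(c1, c2):
    """Return all resolvents of two clauses.  A clause is a frozenset of
    (symbol, is_positive) literals."""
    resolvents = []
    for (sym, pos) in c1:
        if (sym, not pos) in c2:          # found complementary literals
            new = (c1 - {(sym, pos)}) | (c2 - {(sym, not pos)})
            resolvents.append(frozenset(new))
    return resolvents

c1 = frozenset({("P13", True), ("P22", True)})   # P1,3 v P2,2
c2 = frozenset({("P22", False)})                 # ~P2,2
print(resolve(c1, c2))  # [frozenset({('P13', True)})], i.e. P1,3
```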
PROOF USING RESOLUTION
• To prove a fact α, i.e., show KB ╞ α,
show that KB ∧ ¬α is unsatisfiable:
repeatedly apply resolution until either:
– No new clauses can be added (KB does not entail α)
– The empty clause is derived (KB does entail α)
• This is proof by contradiction:
if we prove that KB ∧ ¬α derives a contradiction (the empty
clause) and we know KB is true, then ¬α must be false,
so α must be true!
• To apply resolution mechanically, facts need to be in
Conjunctive Normal Form (CNF)
• To perform the proof, need a search mechanism that will
enumerate all possible resolutions.
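The proof loop described above can be sketched end to end (clauses as frozensets of `(symbol, is_positive)` literals, an encoding chosen for this sketch; the finite clause universe guarantees termination):

```python
def pl_resolution(kb_clauses, negated_alpha_clauses):
    """KB |= alpha iff the empty clause is derivable from KB ^ ~alpha.
    Returns True if entailed, False otherwise."""
    clauses = set(kb_clauses) | set(negated_alpha_clauses)
    while True:
        new = set()
        for ci in clauses:
            for cj in clauses:
                if ci == cj:
                    continue
                for (sym, pos) in ci:
                    if (sym, not pos) in cj:   # resolve on this pair
                        r = frozenset((ci - {(sym, pos)}) |
                                      (cj - {(sym, not pos)}))
                        if not r:
                            return True        # empty clause: contradiction
                        new.add(r)
        if new <= clauses:
            return False                       # closure reached: not entailed
        clauses |= new

# KB = (~B11 v P12 v P21) ^ (~P12 v B11) ^ (~P21 v B11) ^ ~B11;  alpha = ~P1,2
kb = [frozenset({("B11", False), ("P12", True), ("P21", True)}),
      frozenset({("P12", False), ("B11", True)}),
      frozenset({("P21", False), ("B11", True)}),
      frozenset({("B11", False)})]
print(pl_resolution(kb, [frozenset({("P12", True)})]))  # True: KB |= ~P1,2
```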
Resolution algorithm
◼ Proof by contradiction,
i.e., show KB ∧ ¬α unsatisfiable
Resolution example
◼ KB = (B1,1 ⇔ (P1,2 ∨ P2,1)) ∧ ¬B1,1
α = ¬P1,2
About Resolution
◼ Resolution is not generatively complete, i.e. it is not
possible to find resolution derivations for all clauses
that are logically entailed by KB.
◼ It is complete in another sense (refutation completeness): if a
set S of propositional clauses is unsatisfiable, then resolution
is guaranteed to derive the empty clause from S.
◼ To show an argument is valid it is only required to give a
single derivation of the empty clause;
but
◼ to prove an argument is invalid it is necessary to
exhaustively apply the resolution rule to all combinations
of complementary literals.
Horn Clauses
Horn Clause: A clause with at most 1 positive literal.
A definite clause is a Horn clause with exactly one
positive literal
The positive literal is called the head
The negative literals are called the body
English: “To prove the head, prove body1, …”
Implication: if (body1 ∧ body2 ∧ …) then head
Prolog: head:- body1, body2, body3 …
E.g., KB = (B1,1 ∨ ¬P1,2 ∨ ¬P2,1) ∧ (¬W2,2 ∨ W2,2)
Horn Clauses
• Every Horn clause can be rewritten as an implication with
a conjunction of positive literals in the premise and at most a single
positive literal as the conclusion:
a proposition symbol; or
(conjunction of symbols) ⇒ symbol
E.g.
KB: B ∧ C ⇒ A
1 positive literal: definite clause
0 positive literals: integrity constraint, e.g.,
KB: ¬(W1,1 ∧ W1,2) ≡ (¬W1,1 ∨ ¬W1,2) ≡ (W1,1 ∧ W1,2 ⇒ False)
0 negative literals (empty body = True): a fact
Forward and backward chaining
◼ Horn Form
KB = conjunction of Horn clauses
KB: C ∧ (B ⇒ A) ∧ (C ∧ D ⇒ B)
Forward and backward chaining
◼ Modus Ponens (for Horn Form): complete for Horn
KBs
α1, … , αn,   α1 ∧ … ∧ αn ⇒ β
β
◼ Can be used with forward chaining or backward
chaining.
◼ These algorithms are very natural and run in linear
time
Forward chaining
◼ Idea: fire any rule whose premises are satisfied in the
KB, add its conclusion to the KB, until the query is found
AND-OR Graph
multiple links joined by an arc indicate conjunction – every link must be proved
multiple links without an arc indicate disjunction – any link can be proved
Forward chaining algorithm
◼ count: how many premises are unknown
◼ agenda: symbols known to be true but not yet processed
◼ Forward chaining is sound and complete for Horn KB
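The count/agenda bookkeeping can be sketched for definite clauses. In this sketch the representation and the rule C ⇒ D (added so the query succeeds) are my own, not from the slides:

```python
from collections import deque

def fc_entails(clauses, facts, q):
    """Forward chaining over definite clauses (premise-set, conclusion).
    count[i]: premises of clause i not yet known true.
    agenda: symbols known true but not yet processed."""
    count = {i: len(prem) for i, (prem, _) in enumerate(clauses)}
    inferred = set()
    agenda = deque(facts)
    while agenda:
        p = agenda.popleft()
        if p == q:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i, (prem, concl) in enumerate(clauses):
            if p in prem:
                count[i] -= 1
                if count[i] == 0:          # all premises known: fire the rule
                    agenda.append(concl)
    return False

# KB: C,  B => A,  C ^ D => B,  C => D
clauses = [({"B"}, "A"), ({"C", "D"}, "B"), ({"C"}, "D")]
print(fc_entails(clauses, ["C"], "A"))  # True
```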
Forward chaining example
Backward chaining
Idea: work backwards from the query q:
to prove q by BC,
check if q is known already, or
prove by BC all premises of some rule concluding q
Avoid loops: check if new subgoal is already on the goal
stack
Avoid repeated work: check if new subgoal
1. has already been proved true, or
2. has already failed
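The steps above can be sketched recursively, with the goal-stack check preventing loops (clause representation and example KB are my own; caching of proved/failed subgoals is omitted for brevity):

```python
def bc_entails(clauses, facts, q, stack=frozenset()):
    """Backward chaining over definite clauses (premise-set, conclusion).
    stack holds subgoals currently being pursued, to avoid loops."""
    if q in facts:
        return True
    if q in stack:                # q is already a pending subgoal: give up
        return False
    for prem, concl in clauses:
        if concl == q and all(bc_entails(clauses, facts, p, stack | {q})
                              for p in prem):
            return True           # some rule concluding q has all premises proved
    return False

# KB: C,  B => A,  C ^ D => B,  C => D
clauses = [({"B"}, "A"), ({"C", "D"}, "B"), ({"C"}, "D")]
print(bc_entails(clauses, {"C"}, "A"))  # True
```

Note that only the symbols touched while regressing from the query are ever examined, which is why BC can cost much less than scanning the whole KB.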
Backward chaining example
Forward vs. backward chaining
◼ FC is data-driven, automatic, unconscious processing,
◼ e.g., object recognition, routine decisions
◼ May do lots of work that is irrelevant to the goal
◼ BC is goal-driven, appropriate for problem-solving,
◼ e.g., Where are my keys? How do I get into an MSc program?
◼ Complexity of BC can be much less than linear in size
of KB
Efficient propositional inference
Two families of efficient algorithms for propositional
inference:
1. Complete backtracking
2. Local search
Efficient propositional inference
◼ Complete backtracking search algorithms
◼ DPLL algorithm (Davis, Putnam, Logemann, Loveland)
◼ Incomplete local search algorithms
◼ WalkSAT algorithm
The DPLL algorithm
Determine if an input propositional logic sentence (in CNF) is satisfiable.
Improvements over truth table enumeration:
1. Early termination
A clause is true if any literal is true.
A sentence is false if any clause is false.
e.g., {A = true} satisfies (A ∨ B) ∧ (A ∨ C)
2. Pure symbol heuristic
Pure symbol: always appears with the same "sign" in all clauses.
e.g., A and B are pure in (A ∨ ¬B) ∧ (¬B ∨ ¬C) ∧ (C ∨ A)
⇒ assign symbol to make literals true
3. Unit clause heuristic
Unit clause: all literals but one are already assigned false
e.g., if {A = true} already, (¬A ∨ ¬B) becomes a unit clause
⇒ assign the remaining symbol to make the clause true
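A compact sketch combining the three improvements (the frozenset-of-`(symbol, sign)` clause encoding is an assumption of this sketch, and it deliberately simplifies the full textbook pseudocode):

```python
def dpll(clauses, symbols, model):
    """Satisfiability of a CNF clause set; model: dict symbol -> bool."""
    def value(clause):              # True / False / None (undetermined)
        vals = [model[s] == p for (s, p) in clause if s in model]
        if any(vals): return True
        if len(vals) == len(clause): return False
        return None
    if all(value(c) is True for c in clauses): return True    # early termination
    if any(value(c) is False for c in clauses): return False
    for s in symbols:               # pure symbol heuristic
        signs = {p for c in clauses if value(c) is None
                   for (t, p) in c if t == s}
        if len(signs) == 1:
            return dpll(clauses, symbols - {s}, {**model, s: signs.pop()})
    for c in clauses:               # unit clause heuristic
        if value(c) is None:
            unknown = [(s, p) for (s, p) in c if s not in model]
            if len(unknown) == 1:
                s, p = unknown[0]
                return dpll(clauses, symbols - {s}, {**model, s: p})
    s = next(iter(symbols))         # otherwise branch on some symbol
    return (dpll(clauses, symbols - {s}, {**model, s: True}) or
            dpll(clauses, symbols - {s}, {**model, s: False}))

# (A v ~B) ^ (~B v ~C) ^ (C v A) is satisfiable, e.g. A = true, B = false
clauses = [frozenset({("A", True), ("B", False)}),
           frozenset({("B", False), ("C", False)}),
           frozenset({("C", True), ("A", True)})]
print(dpll(clauses, {"A", "B", "C"}, {}))  # True
```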
The WalkSAT algorithm
◼ Incomplete, local search algorithm
◼ Evaluation function: The min-conflict heuristic of
minimizing the number of unsatisfied clauses
◼ Balance between greediness and randomness
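The flip loop can be sketched as follows (the parameter names `p` and `max_flips` follow common presentations; the clause encoding and seeding are my own):

```python
import random

def walksat(clauses, p=0.5, max_flips=10000, seed=0):
    """Local search for a satisfying model of a CNF clause set.
    Returns a model dict, or None on failure (the algorithm is incomplete)."""
    rng = random.Random(seed)
    symbols = sorted({s for c in clauses for (s, _) in c})
    model = {s: rng.choice([True, False]) for s in symbols}
    sat = lambda c, m: any(m[s] == pos for (s, pos) in c)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not sat(c, model)]
        if not unsat:
            return model                  # every clause satisfied
        clause = rng.choice(unsat)        # pick a random unsatisfied clause
        if rng.random() < p:              # random walk step
            s = rng.choice([s for (s, _) in clause])
        else:                             # greedy min-conflict step
            def broken(s):                # unsatisfied count after flipping s
                m2 = {**model, s: not model[s]}
                return sum(not sat(c, m2) for c in clauses)
            s = min((s for (s, _) in clause), key=broken)
        model[s] = not model[s]
    return None

clauses = [frozenset({("A", True), ("B", False)}),
           frozenset({("B", False), ("C", False)}),
           frozenset({("C", True), ("A", True)})]
m = walksat(clauses)
print(m is not None)  # True: a satisfying model was found
```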
Inference-based agents in the wumpus world
A wumpus-world agent using propositional logic:
¬P1,1
¬W1,1
Bx,y ⇔ (Px,y+1 ∨ Px,y-1 ∨ Px+1,y ∨ Px-1,y)
Sx,y ⇔ (Wx,y+1 ∨ Wx,y-1 ∨ Wx+1,y ∨ Wx-1,y)
W1,1 ∨ W1,2 ∨ … ∨ W4,4
¬W1,1 ∨ ¬W1,2
¬W1,1 ∨ ¬W1,3
…
64 distinct proposition symbols, 155 sentences
Summary
◼ Logical agents apply inference to a knowledge base to derive
new information and make decisions
◼ Basic concepts of logic:
◼ syntax: formal structure of sentences
◼ semantics: truth of sentences wrt models
◼ entailment: necessary truth of one sentence given another
◼ inference: deriving sentences from other sentences
◼ soundness: derivations produce only entailed sentences
◼ completeness: derivations can produce all entailed sentences
◼ Wumpus world requires the ability to represent partial and
negated information, reason by cases, etc.
◼ Resolution is complete for propositional logic
◼ Forward and backward chaining are linear-time and
complete for Horn clauses
◼ Propositional logic lacks expressive power