In artificial intelligence, STRIPS (Stanford Research Institute Problem Solver) is an automated planner developed by Richard
Fikes and Nils Nilsson in 1971 at SRI International.[1] The same name was later used to refer to the formal language of the inputs to
this planner. This language is the basis of most languages for expressing automated planning problem instances in use
today; such languages are commonly known as action languages.
A STRIPS instance is composed of:
An initial state;
The specification of the goal states – situations which the planner is trying to reach;
A set of actions. For each action, the following are included:
preconditions (what must be established before the action is performed);
postconditions (what is established after the action is performed).
Mathematically, a STRIPS instance is a quadruple <P,O,I,G>, in which each component has the following meaning:
1. P is a set of conditions (i.e., propositional variables);
2. O is a set of operators (i.e., actions); each operator is itself a quadruple <α, β, γ, δ>, each element being a set of conditions.
These four sets specify, in order, which conditions must be true for the action to be executable, which ones must be false,
which ones are made true by the action, and which ones are made false;
3. I is the initial state, given as the set of conditions that are initially true (all others are assumed false);
4. G is the specification of the goal states; this is given as a pair <N, M> of sets of conditions, which specify which conditions must be true and which must be false,
respectively, in order for a state to be considered a goal state.
A plan for such a planning instance is a sequence of operators that can be executed from the initial state and that leads to a goal
state.
Formally, a state is a set of conditions: a state is represented by the set of conditions that are true in it. Transitions between states
are modeled by a transition function, which is a function mapping states into new states that result from the execution of actions.
Since states are represented by sets of conditions, the transition function relative to the STRIPS instance <P,O,I,G> is a function
succ: 2^P × O → 2^P
where 2^P is the set of all subsets of P, and is therefore the set of all possible states.
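The transition function can be sketched directly in code. The following is a minimal sketch, not part of any standard library: the names Operator, applicable, and succ are illustrative. Each operator carries the four condition sets of the quadruple, and succ removes the delete set from the state and adds the add set.

```python
from typing import FrozenSet, NamedTuple

Condition = str
State = FrozenSet[Condition]

class Operator(NamedTuple):
    """The four condition sets of a STRIPS operator."""
    pre_true: frozenset   # conditions that must be true to execute
    pre_false: frozenset  # conditions that must be false to execute
    add: frozenset        # conditions made true by the action
    delete: frozenset     # conditions made false by the action

def applicable(state: State, op: Operator) -> bool:
    """An operator is executable when its positive preconditions all hold
    and none of its negative preconditions hold in the state."""
    return op.pre_true <= state and not (op.pre_false & state)

def succ(state: State, op: Operator) -> State:
    """Transition function succ: 2^P x O -> 2^P."""
    if not applicable(state, op):
        raise ValueError("operator not applicable in this state")
    return (state - op.delete) | op.add
```

Because states are just sets of true conditions, applying an operator is two set operations; conditions not mentioned by the operator are carried over unchanged.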
A sample STRIPS problem
A monkey is at location A in a lab. There is a box in location C. The monkey wants the bananas that are hanging from the ceiling in
location B, but it needs to move the box and climb onto it in order to reach them.
Initial state: At(A), Level(low), BoxAt(C), BananasAt(B)
Goal state: Have(Bananas)
Actions:
// move from X to Y
_Move(X, Y)_
Preconditions: At(X), Level(low)
Postconditions: not At(X), At(Y)
// climb up on the box
_ClimbUp(Location)_
Preconditions: At(Location), BoxAt(Location), Level(low)
Postconditions: Level(high), not Level(low)
// climb down from the box
_ClimbDown(Location)_
Preconditions: At(Location), BoxAt(Location), Level(high)
Postconditions: Level(low), not Level(high)
// move monkey and box from X to Y
_MoveBox(X, Y)_
Preconditions: At(X), BoxAt(X), Level(low)
Postconditions: BoxAt(Y), not BoxAt(X), At(Y), not At(X)
// take the bananas
_TakeBananas(Location)_
Preconditions: At(Location), BananasAt(Location), Level(high)
Postconditions: Have(Bananas)
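A plan for this instance can be found by brute-force search over the grounded state space. The sketch below is illustrative (the helper names ground_operators and plan are assumptions, not a standard API): it instantiates the five action schemas for the three locations and runs breadth-first search from the initial state.

```python
from collections import deque
from itertools import product

LOCATIONS = ["A", "B", "C"]

def ground_operators():
    """Instantiate the action schemas for every location.
    Each operator is (name, preconditions, add set, delete set)."""
    ops = []
    for x, y in product(LOCATIONS, LOCATIONS):
        if x == y:
            continue
        ops.append((f"Move({x},{y})",
                    {f"At({x})", "Level(low)"},
                    {f"At({y})"}, {f"At({x})"}))
        ops.append((f"MoveBox({x},{y})",
                    {f"At({x})", f"BoxAt({x})", "Level(low)"},
                    {f"At({y})", f"BoxAt({y})"},
                    {f"At({x})", f"BoxAt({x})"}))
    for loc in LOCATIONS:
        ops.append((f"ClimbUp({loc})",
                    {f"At({loc})", f"BoxAt({loc})", "Level(low)"},
                    {"Level(high)"}, {"Level(low)"}))
        ops.append((f"ClimbDown({loc})",
                    {f"At({loc})", f"BoxAt({loc})", "Level(high)"},
                    {"Level(low)"}, {"Level(high)"}))
        ops.append((f"TakeBananas({loc})",
                    {f"At({loc})", f"BananasAt({loc})", "Level(high)"},
                    {"Have(Bananas)"}, set()))
    return ops

def plan(initial, goal):
    """Breadth-first search over states; returns a shortest plan."""
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:
            return path
        for name, pre, add, delete in ground_operators():
            if pre <= state:
                nxt = (state - frozenset(delete)) | frozenset(add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [name]))
    return None

initial = {"At(A)", "Level(low)", "BoxAt(C)", "BananasAt(B)"}
print(plan(initial, {"Have(Bananas)"}))
# → ['Move(A,C)', 'MoveBox(C,B)', 'ClimbUp(B)', 'TakeBananas(B)']
```

The shortest plan moves the monkey to the box, pushes the box under the bananas, climbs up, and takes them.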
K-STRIPS in Artificial Intelligence
Modal Operator K:
We are familiar with the use of the connectives ∧ and ∨ in logic. We can think of these connectives as operators that
construct more complex formulas from simpler components. Here, we want to construct a formula whose
intended meaning is that a certain agent knows a certain proposition.
The components consist of a term denoting the agent and a formula denoting the proposition that the agent
knows. To accomplish this, the modal operator K is introduced.
For example, to say that Robot (the name of an agent) knows that block A is on block B, we write
K(Robot, On(A,B))
Combining K with the term Robot and the formula On(A,B) yields a new formula, the
intended meaning of which is “Robot knows that block A is on block B”.
The words “know” and “believe” differ in meaning: an agent can believe a false proposition,
but it cannot know anything that is false.
Some examples:
K(Agent1, K(Agent2, On(A,B))) means Agent1 knows that Agent2 knows that A is on B.
K(Agent1, On(A,B)) ∨ K(Agent1, On(A,C)) means that either Agent1 knows that A is on B or it knows that
A is on C.
K(Agent1, On(A,B)) ∨ K(Agent1, ¬On(A,B)) means that Agent1 knows whether or not A is on B.
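Nested K-formulas such as the first example are just syntax trees, and can be represented as plain terms. A minimal sketch, where the class names K and On are illustrative choices for this document's examples:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class On:
    """Atomic proposition: block `above` is on block `below`."""
    above: str
    below: str

@dataclass(frozen=True)
class K:
    """Modal operator K: `agent` knows `formula`.
    The formula may itself contain K, allowing nested knowledge."""
    agent: str
    formula: object

# "Agent1 knows that Agent2 knows that A is on B"
nested = K("Agent1", K("Agent2", On("A", "B")))
```

Because K takes a whole formula as an argument, nesting it expresses one agent's knowledge about another agent's knowledge.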
Knowledge Axioms:
The operators ∧ and ∨ have compositional semantics (the truth value of a compound formula depends only on the truth values of its parts), but the semantics of K is not
compositional. The truth value of K(Agent1, On(A,B)), for example, cannot necessarily be determined from
the properties of K, the denotation of Agent1, and the truth value of On(A,B). The K operator is therefore said to be
referentially opaque.
Example in Planning Speech Acts:
We can treat speech acts just like other agent actions. Our agent can use a plan-generating system to make
plans comprising speech acts and other actions. To do so, it needs a model of the effects of these actions.
Consider, for example, Tell(A, φ), where A is an agent and φ is a formula.
We could model the effects of that action by the STRIPS rule :
Tell( A, φ ) :
Precondition : Next_to(A) ∧ φ ∧ ¬K(A, φ)
Delete : ¬K(A, φ)
Add : K(A, φ)
The precondition Next_to(A) ensures that our agent is close enough to agent A to communicate.
The precondition φ is imposed to ensure that our agent actually believes φ before it can inform another agent
of its truth.
The precondition ¬K(A, φ) ensures that our agent does not communicate redundant information.
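Under the set-of-true-conditions representation used earlier, ¬K(A, φ) holds exactly when K(A, φ) is absent from the state, so the Tell rule can be sketched as a grounded operator with an empty delete set. The function name tell_operator and the dictionary layout below are illustrative assumptions, not a standard encoding:

```python
def tell_operator(agent, phi):
    """STRIPS-style encoding of Tell(A, phi).
    Conditions are strings; 'K(A,phi)' stands for 'agent A knows phi'.
    The negative precondition ¬K(A, phi) is checked as absence of
    K(A, phi), so deleting ¬K(A, phi) reduces to adding K(A, phi)."""
    knows = f"K({agent},{phi})"
    return {
        "name": f"Tell({agent},{phi})",
        "pre_true": {f"Next_to({agent})", phi},  # must be nearby, believe phi
        "pre_false": {knows},                    # don't tell A what it knows
        "add": {knows},                          # afterwards, A knows phi
        "delete": set(),
    }

op = tell_operator("A", "On(A,B)")
state = {"Next_to(A)", "On(A,B)"}
# preconditions hold, so applying the operator adds K(A,On(A,B))
new_state = (state - op["delete"]) | op["add"]
```

After the action, the precondition ¬K(A, φ) no longer holds, so telling the same agent the same fact twice is not applicable.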