
Swarm Intelligence & Evolutionary Algorithms
By Dr. Naeem Ahmad, Assistant Professor, NIT Raipur, India

Syllabus
Unit-1
Introduction: Approximate Methods, Heuristics, Metaheuristic Techniques and Local Search.
Unit-2
Introduction to Evolutionary Algorithms: Genetic Algorithms: Elementary Concepts, Genetic Algorithm, Subset-Coded Genetic Algorithm, Permutation-Coded Genetic Algorithm, Grouping-based Genetic Algorithm.
Unit-3
Introduction to Swarm Intelligence Techniques: Basic concepts, examples of Swarm Intelligence Techniques such as Ant Colony Optimization and the Artificial Bee Colony Algorithm.
Unit-4
Warm-up to Swarm Intelligence and Evolutionary Algorithms: Introduction to the implementation of swarm intelligence techniques and evolutionary algorithms on some combinatorial optimization problems (COPs) such as the 0/1 Knapsack problem, the TSP, the job-scheduling problem, and the bin-packing problem.

Books
• An Introduction to Genetic Algorithms, by Melanie Mitchell
• Ant Colony Optimization, by Marco Dorigo and Thomas Stützle
• Heuristic Search: Theory and Applications, by Stefan Edelkamp and Stefan Schrödl
• Handbook of Metaheuristics, by Michel Gendreau

Unit 1: Introduction
By Dr. Naeem Ahmad, Assistant Professor, NIT Raipur, India

Search Algorithms
• Definition: In computer science, a search algorithm is an algorithm designed to solve a search problem.
• Search algorithms work to retrieve information stored within a particular data structure, or calculated in the search space of a problem domain, with either discrete or continuous values.
• How are they evaluated?
• How can we make the search easier?

Applications of search algorithms
• Problems in combinatorial optimization, such as:
  – The vehicle routing problem, a form of shortest path problem
  – The knapsack problem
  – The nurse scheduling problem
• Problems in constraint satisfaction, such as:
  – The map coloring problem
  – Filling in a sudoku or crossword puzzle
• Game theory and especially combinatorial game theory
• Finding a combination or password from the whole set of possibilities
• Factoring an integer (an important problem in cryptography)
• Optimizing an industrial process, such as a chemical reaction
• Search engine optimization


Classification of optimization problems


• Non-constrained optimization
• Constrained optimization
• Linear and non-linear programming
• NP-complete and harder problems
• Optimization problems related to machine learning


Non-constrained optimization
• Generally speaking, an optimization problem has an objective function f(x). Loss function, cost function, and recognition rate are examples of objective functions.
• The problem is represented by minimizing (or maximizing) f(x) over all x in R^N.
• This is called a non-constrained optimization problem.
• Usually, x is a point in N-dimensional Euclidean space R^N, and f(x) is a point in R^M. Here, we will use M = 1.
• Some special considerations are needed to extend the results obtained here to "multiple objective" cases.

Constrained Optimization
• The problem is represented by minimizing f(x) over x in a domain D, where D can be a sub-space of R^N.
• D again can be defined by some functions, e.g. D = {x : g_j(x) <= 0, j = 1, ..., m}.

Linear & Non-linear programming
• If both f(x) and g_j(x) are linear functions, we have a linear optimization problem, and this is usually called linear programming (LP).
• For LP, very efficient algorithms already exist, e.g. the well-known simplex algorithm, and metaheuristics are not needed.
• If f(x) or any g_j(x) is non-linear, we have a non-linear optimization problem, and this is often called non-linear programming (NLP).
• Conventional methods usually find local optimal solutions. Metaheuristic methods are useful for finding global solutions.

Examples
• Linear programming problem: The prices of the products are 25 and 31 (in million yen), and those of the materials are 0.5 and 0.8 (in million yen). The problem can be formulated as an LP that maximizes total profit subject to the material constraints.
• Non-linear programming problem: Given N observations (x1, p(x1)), (x2, p(x2)), ..., (xN, p(xN)) of an unknown function p(x), find a polynomial q(x) = a0 + a1*x + a2*x^2 such that the MSE, used to measure the "goodness" of a solution, is minimized.
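As a concrete illustration of the non-linear curve-fitting example above, the following Python sketch fits a degree-2 polynomial to noisy observations by minimizing the MSE. The sample function p(x), the noise level, and the number of observations are assumptions made up for the demonstration.

```python
import numpy as np

# Hypothetical "unknown" function p(x); in practice we only see its samples.
def p(x):
    return 1.0 + 2.0 * x - 0.5 * x ** 2

rng = np.random.default_rng(0)
xs = np.linspace(-3, 3, 25)                        # N = 25 observation points
ys = p(xs) + rng.normal(scale=0.1, size=xs.size)   # noisy observations

# Least-squares fit of q(x) = a0 + a1*x + a2*x^2 (minimizes the MSE).
a2, a1, a0 = np.polyfit(xs, ys, deg=2)   # polyfit returns the highest degree first
q = np.poly1d([a2, a1, a0])

mse = np.mean((q(xs) - ys) ** 2)
print("coefficients (a0, a1, a2):", a0, a1, a2)
print("MSE of the fitted polynomial:", mse)
```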


Combinatorial optimization problems
• If f(x) and g_j(x) cannot be given analytically (in closed form), we have combinatorial problems.
• For example, if x takes k discrete values (e.g. integers), and if there are K variables, the number of all possible solutions will be k^K.
• It is difficult to check all possible solutions in order to find the best one(s).
• In such cases, metaheuristics can provide efficient ways for obtaining good solutions using limited resources (e.g. time and memory space).

Traveling salesman problem (TSP)
• Given N users located in N different places (cities).
• The problem is to find a route so that the salesman can visit all users once (and only once), start from and return to his own place (i.e., to find a Hamiltonian cycle).

Problems similar to TSP
• Network routing problems
  – Optimize the total routing costs (distance dependent).
• Scheduling problems
  – Optimize the overall time for making a product (using different machines, with different materials and costs).
• Vehicle routing problem
  – Optimize a delivery plan for a transportation company (e.g. minimum distance, minimum number of vehicles).
• For TSP, the number of all possible solutions is (n-1)!/2, and this is a well-known NP-hard combinatorial problem.

NP-hard and NP-complete
• An optimization problem is called NP-hard if its associated decision problem is NP-complete.
• Any optimization problem can be reduced to a decision problem (a problem that answers yes or no only).
• NP-complete problems are a class of decision problems that can be solved by a non-deterministic algorithm in polynomial time.
• The set of problems that can be solved by a deterministic algorithm in polynomial time is called class P.
• If we can find a deterministic polynomial-time algorithm for solving one of the NP-complete problems, we can solve all other NP-complete problems.

The Knapsack problem
• The knapsack problem is another NP-complete problem, defined by:
  – There are N objects;
  – Each object has a weight and a value;
  – The knapsack has a capacity;
  – The user has a quota (minimum desired value).
• The problem is to find a sub-set of the objects that can be put into the knapsack (without exceeding its capacity) and that maximizes the total value.

Optimization problems related to ML
• Many optimization problems related to machine learning (learning from a given set of training data) are NP-hard/NP-complete.
• Examples include:
  – Finding the smallest feature sub-set;
  – Finding the most informative training data set;
  – Finding the smallest decision tree;
  – Finding the best clusters;
  – Finding the best neural network;
  – Interpreting a learned neural network.
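Since the 0/1 knapsack problem comes back in Unit-4, here is a minimal Python sketch of the exhaustive (brute-force) formulation described above; the weights, values, and capacity are made-up assumptions for illustration.

```python
from itertools import combinations

# Hypothetical instance: (weight, value) pairs and the knapsack capacity.
items = [(3, 25), (2, 20), (4, 40), (1, 10), (5, 50)]
capacity = 7

best_value, best_subset = 0, ()
# Brute force: examine every subset (2^N of them), keep the best feasible one.
for r in range(len(items) + 1):
    for subset in combinations(range(len(items)), r):
        weight = sum(items[i][0] for i in subset)
        value = sum(items[i][1] for i in subset)
        if weight <= capacity and value > best_value:
            best_value, best_subset = value, subset

print("best value:", best_value, "using item indices:", best_subset)
```

This exhaustive check is exactly what becomes impractical as N grows, which is why the heuristic and metaheuristic methods of this course are used instead.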


A Brief Review of Conventional Search Algorithms

Problem solving: State space representation
Maze game
• Initial state: (0,0)
• Goal state: (2,2)
• Rules:
  – Rule 1: Move upward
  – Rule 2: Move downward
  – Rule 3: Move left
  – Rule 4: Move right
• The state after moving will be different for different current states.
• For some states, some rules cannot be used.

Problem solving: State space representation
8-puzzle problem
• Initial state: (2,B,3,4,5,6,7,1,8)
• Goal state: (1,2,3,4,5,6,7,8,B)
• Rules:
  – Rule 1: Move B to North
  – Rule 2: Move B to South
  – Rule 3: Move B to East
  – Rule 4: Move B to West
• The state after moving will be different for different current states.
• For some states, some rules cannot be used.

Problem solving: Graph-based search
• Based on the state representation, we can draw a search graph for the maze game.
• The problem is to find a path between node (0,0) and node (2,2).
• Since there are different solutions, we should find the best path.

Problem solving: Graph-based search
• For the 8-puzzle problem, however, finding the solution using a search graph is not efficient because there are too many nodes (about 180 thousand).
• In such cases, we can use a search tree (a tree is a special graph).
• Starting from the root (initial state), we can find the solution by expanding the tree recursively.
• Hopefully, we can find a solution at a certain point.

Brute-force search
• Random search
• Search with closed list
• Search with open list
• Depth-first search
• Breadth-first search
• Uniform-cost search


Algorithm-1: Random Search
• Step 1: n = initial node.
• Step 2: Finish if n is the goal.
• Step 3: Expand n to get a set S of child nodes. If S is empty, finish with failure.
• Step 4: Select a node n' from S at random.
• Step 5: n = n', return to Step 2.
Quiz-1: What is "node expansion"?
Quiz-2: What are the weak points of random search?

Algorithm-2: Search with Closed List
• Step 1: n = initial node.
• Step 2: Finish if n is the goal.
• Step 3: Expand n to get a set S of child nodes.
  – If S is empty, finish with failure;
  – Else put n into the closed list C.
• Step 4: Select a node n' ∉ C from S. If no such n' exists, finish with failure.
• Step 5: n = n', and return to Step 2.
Point: Do not re-visit the same node!
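A minimal Python sketch of Algorithm-2 (search with a closed list) on the maze example; the grid size, the blocked cell, and the neighbor function are assumptions added for illustration, and the unvisited child is picked at random.

```python
import random

GOAL = (2, 2)
WALLS = {(1, 1)}  # hypothetical blocked cell in a 3x3 grid

def expand(state):
    """Return the child states reachable by the four move rules."""
    x, y = state
    moves = [(x, y + 1), (x, y - 1), (x - 1, y), (x + 1, y)]  # up, down, left, right
    return [(i, j) for i, j in moves
            if 0 <= i <= 2 and 0 <= j <= 2 and (i, j) not in WALLS]

def closed_list_search(start):
    n, closed = start, set()
    path = [n]
    while n != GOAL:
        children = expand(n)
        if not children:
            return None                  # failure: dead end
        closed.add(n)                    # never re-visit n
        unvisited = [c for c in children if c not in closed]
        if not unvisited:
            return None                  # failure: all children already closed
        n = random.choice(unvisited)
        path.append(n)
    return path

print(closed_list_search((0, 0)))
```

As the next slide notes, this can still return None even though a path exists, because the walk can paint itself into a corner of already-closed nodes.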

Properties of Algorithm-2
• If the graph is finite (has a finite number of nodes), Algorithm-2 always stops in a limited number of steps.
• But we may not get a solution.
  – Example: (0,0) → (0,1) → (0,2) → (0,1)  ← closed already!

Algorithm-3: Search with Open List
• Step 1: Put the initial node into the open list O.
• Step 2: Select a node n from O.
  – If O is empty, finish with failure;
  – Else if n is the goal, finish with success.
• Step 3: Expand n to get a set S of child nodes, and put n into the closed list C.
• Step 4: Select from S a node n' ∉ C, link it to n, and put n' into O.
• Step 5: Return to Step 2.
Point: Keep un-visited (child) nodes in the open list!
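The following Python sketch of Algorithm-3 keeps the frontier in an explicit open list and records parent links so the path can be traced back, as the next slide describes. The depth_first flag is an assumption added so that one sketch also illustrates the slides that follow: popping from the right of the open list (a stack) gives depth-first search, popping from the left (a queue) gives breadth-first search.

```python
from collections import deque

def open_list_search(start, goal, expand, depth_first=True):
    """Algorithm-3: graph search with an open list and a closed list."""
    open_list = deque([start])
    closed = set()
    parent = {start: None}            # links used to trace the path back
    while open_list:
        n = open_list.pop() if depth_first else open_list.popleft()
        if n == goal:
            path = []
            while n is not None:      # trace back from goal to start
                path.append(n)
                n = parent[n]
            return list(reversed(path))
        closed.add(n)
        for child in expand(n):
            if child not in closed and child not in parent:
                parent[child] = n
                open_list.append(child)
    return None                        # failure: open list exhausted

# Example use with the maze expand() function sketched earlier:
# print(open_list_search((0, 0), (2, 2), expand, depth_first=False))
```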

Features of Algorithm-3
• Algorithm-3 can find the solution if the graph is finite and the solution exists.
• If the path from the initial node to the goal is the solution we want, we can get the solution by tracing back from the goal to the initial node, using the links found during search.

Depth-first Search
• If we implement the open list using a stack, Algorithm-3 is a depth-first search algorithm.


Breadth-first Search
• If we implement the open list using a queue, Algorithm-3 is a breadth-first search algorithm.
• In breadth-first search, all nodes in the same level are visited first before going down.
• We can find the solution if the solution exists, even when the tree is infinite (as long as the number of branches is limited).
• For depth-first search, we may not be able to find the solution if the tree is infinitely large.

To find the best solution
• When there are many solutions, we would like to find the one with the minimum cost.
• Examples include the cheapest or fastest route for traveling around the world.
• The algorithms studied so far may not be able to find the best solution.

Algorithm-4: Uniform-cost Search
• Step 1: Put the initial node ni and its cost g(ni) = 0 into the open list O.
• Step 2: Take the node n from the head of O.
  – If O is empty, finish with failure;
  – Else if n is the goal, finish with success.
• Step 3: Expand n to get a set S of child nodes, and put n into the closed list C.
• Step 4: For each node n' ∉ C selected from S, link n' to n.
  – Find g(n') = g(n) + c(n, n'), where c(n, n') is the cost of the edge between n and n', and put n' and its cost into O.
  – If n' is already included in O and the new cost is smaller, replace the old entry with n', along with its cost and link.
  – Sort O in non-decreasing order according to the cost.
• Step 5: Return to Step 2.

Properties of Algorithm-4
• Uniform-cost search finds the lowest-cost path from ni to n when n is expanded; that is, g(n) = g*(n).
• Even if the graph is infinite, the algorithm stops in a limited number of steps (provided edge costs are positive and a goal exists).
• If the cost of each edge c(ni, nj) is 1, the algorithm reduces to breadth-first search.
• If each node is a solution, c(ni, nj) represents the increment of cost when we replace ni with nj.
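A compact Python sketch of Algorithm-4 using a binary heap instead of explicitly re-sorting the open list (which is also one answer to the speed-up question on the next slide); the weighted graph is a made-up example.

```python
import heapq

def uniform_cost_search(start, goal, edges):
    """edges: dict mapping node -> list of (child, edge_cost)."""
    open_list = [(0, start)]          # heap of (g, node); cheapest node on top
    best_g = {start: 0}
    closed = set()
    while open_list:
        g, n = heapq.heappop(open_list)
        if n == goal:
            return g                   # g(n) = g*(n) when n is expanded
        if n in closed:
            continue                   # stale entry with an outdated cost
        closed.add(n)
        for child, cost in edges.get(n, []):
            new_g = g + cost
            if child not in closed and new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g
                heapq.heappush(open_list, (new_g, child))
    return None                        # failure

# Hypothetical weighted graph.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)], "C": [("D", 1)]}
print(uniform_cost_search("A", "D", graph))   # expected cost: 4 (A-B-C-D)
```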

Example of uniform cost search
(worked example shown as a figure on the slide)

Do you have any idea how to speed up the uniform-cost search algorithm?


Local optimal and global optimal
• For a minimization problem,
  – A solution x* is a local optimum if f(x*) ≤ f(x) for all x in the ε-neighborhood of x*, where ε > 0 is a real number, the radius of the neighborhood.
  – A solution x* is a global optimum if f(x*) ≤ f(x) for all x in the search space (problem domain).
• Metaheuristics are useful for obtaining global optimal solutions efficiently.

Heuristics
• Heuristics is an approach to problem-solving in which the objective is to produce a working solution within a reasonable time frame.
• These techniques are used to find an approximate solution to a problem when classical methods fail to do so. They are derived from past experience with similar problems.
• This information can be exploited by search algorithms to assess whether one state is more promising than the rest.
• Why do we need heuristics?

Heuristics
• Heuristics are meant to be estimates of the remaining distance from a node to the goal.
• Graph representation of a weighted state space problem: G = (V, E, s, T, w)
• A heuristic h is a node evaluation function, mapping V to R≥0.
• Estimate of the total path cost from the start node to the target destination: f(u) = g(u) + h(u)
• The node with minimum f is the most promising node to be expanded.

Heuristics
• No heuristic vs heuristic (comparison illustrated on the slide)

Heuristics
• Heuristics are particularly useful if we can make sure that they sometimes may underestimate, but never overestimate, the true goal distance.


Heuristics
• Admissible heuristics
  – An estimate h is an admissible heuristic if it is a lower bound for the optimal solution costs; that is, h(u) ≤ δ(u, T) for all u ∈ V.

Assignment 1 (Part 1)
Try to find out the definition of the following terms:
  – Consistent Heuristic
  – Monotonic Heuristic

Heuristic search techniques
• Some of the techniques are listed below:
  – Hill Climbing
  – A* search
  – Simulated Annealing
  – Bidirectional Search
  – Best-First Search
  – Beam Search

Traditional Mathematical Methods
Gradient-based Optimization (Gradient Ascent)
• Identify the slope and move up it.
• Compute the slope at x, that is, f'(x).
• Start with an arbitrary value of x.
• Then repeatedly add to it a small portion of its slope:
  x = x + α f'(x)

Traditional Mathematical Methods
• How do we know that we've got the ideal solution?
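A short Python sketch of the gradient-ascent update above on a one-dimensional function; the objective, step size, and iteration count are illustrative assumptions.

```python
# Gradient ascent on the assumed objective f(x) = -(x - 2)**2 + 3, maximum at x = 2.
def f_prime(x):
    return -2 * (x - 2)         # derivative of the assumed objective

x = -5.0                        # arbitrary starting point
alpha = 0.1                     # small portion of the slope (step size)
for _ in range(100):
    x = x + alpha * f_prime(x)  # x <- x + alpha * f'(x)

print("approximate maximizer:", round(x, 4))   # close to 2.0
```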


Traditional Mathematical Methods
Newton's Method
• This variation on Gradient Ascent includes an additional factor of -1/f''(x):
  x = x - α f'(x) / f''(x)
• How do you escape local optima?

Gradient Ascent with Restarts
• A global optimization algorithm is guaranteed to find the global optimum if it runs long enough.
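A sketch of the restart idea, reusing a gradient-ascent loop like the one above: run the local climb from several random starting points and keep the best result. The multimodal objective, step size, and number of restarts are assumptions for illustration.

```python
import math
import random

def f(x):
    return math.sin(x) + 0.1 * x          # assumed multimodal objective

def f_prime(x):
    return math.cos(x) + 0.1

def gradient_ascent(x, alpha=0.05, steps=200):
    for _ in range(steps):
        x = x + alpha * f_prime(x)        # climb to a nearby local maximum
    return x

# Restarts: repeat the local climb from random initial points, keep the best.
best_x = None
for _ in range(20):
    x = gradient_ascent(random.uniform(-10, 10))
    if best_x is None or f(x) > f(best_x):
        best_x = x

print("best local optimum found:", round(best_x, 3), "f =", round(f(best_x), 3))
```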


Hill Climbing Algorithm
• It is a technique for optimizing mathematical problems. Hill climbing is widely used when a good heuristic is available.
  – It continuously moves in the direction of increasing elevation/value.
  – It terminates when it reaches a peak value.

Hill Climbing
• Hill climbing is a local search algorithm which continuously moves in the direction of increasing value to find the peak of the mountain, i.e. the best solution to the problem.
• It terminates when it reaches a peak where no neighbor has a higher value.
• The travelling salesman problem is one of the widely discussed examples of the hill climbing algorithm, in which we need to minimize the distance traveled by the salesman.
• It is also called greedy local search, as it only looks to its good immediate neighbor state and not beyond that.

Hill Climbing
• It is a variant of the Generate and Test algorithm with the addition of a greedy strategy.

Types of Hill Climbing
• Simple Hill Climbing
• Steepest Ascent Hill Climbing
• Stochastic Hill Climbing

Simple Hill Climbing
• It evaluates only one neighbor node state at a time and selects the first one that improves the current cost, setting it as the current state.
• This enables you to climb up the hill until you reach a local optimum.
• It is less time consuming.

Steepest Ascent Hill Climbing
• It examines all the neighboring nodes of the current state and selects the one neighbor node which is closest to the goal state.
• It is more time consuming.


Hill Climbing
(figure on the slide)

Stochastic Hill Climbing
• It does not examine all its neighboring nodes before moving. Rather, it selects one neighbor state at random and decides whether to choose it as the current state or examine another state.
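A minimal Python sketch of stochastic hill climbing on a bit-string maximization problem (one-max); the problem, the single-bit-flip neighborhood, and the iteration budget are assumptions chosen only to show the mechanics.

```python
import random

def fitness(bits):
    return sum(bits)                        # assumed objective: count of 1s (one-max)

def stochastic_hill_climbing(n_bits=20, iterations=1000):
    current = [random.randint(0, 1) for _ in range(n_bits)]
    for _ in range(iterations):
        neighbor = current[:]
        i = random.randrange(n_bits)        # pick ONE neighbor at random (single bit flip)
        neighbor[i] = 1 - neighbor[i]
        if fitness(neighbor) > fitness(current):   # move only if it improves
            current = neighbor
    return current

best = stochastic_hill_climbing()
print("fitness reached:", fitness(best), "out of", len(best))
```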

Travelling salesman problem (TSP)
• Representation:
  – Permutations of {1, ..., n}. Example: (3,6,4,7,1,8,2,5)
  – Size of search space: n! (actually (n-1)!/2 for a symmetric TSP)
  – Notation: candidate solution S = (S1, S2, ..., Si, ..., Sn)
• Objective function: the total length of the tour
• Initial solution mechanism
  – Random permutation

Travelling salesman problem (TSP)
• The brute force algorithm is infeasible even for small N.
• Imagine a computer can compute a solution in a microsecond. It would take 3 seconds to solve the problem for 10 cities, a bit more than 39 seconds for 11 cities, and about 77,146 years for 20 cities.
Figure: Solution to a symmetric TSP with 7 cities using brute force search. Note: Number of permutations: (7-1)!/2 = 360.
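A tiny Python check of the combinatorial explosion quoted above, assuming (as the slide does) that evaluating one permutation takes a microsecond and that all n! orderings are enumerated.

```python
import math

MICROSECOND = 1e-6  # assumed time to evaluate one permutation

for n in (10, 11, 20):
    permutations = math.factorial(n)
    seconds = permutations * MICROSECOND
    print(f"{n} cities: {permutations:,} permutations, "
          f"about {seconds:,.0f} s ({seconds / 3.15e7:,.1f} years)")
```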

Travelling salesman problem (TSP)
Figure: Solution of a TSP with 7 cities using a simple branch and bound algorithm. Note: the number of permutations explored is much smaller than with brute force search.

Travelling salesman problem (TSP)
• Definition of Neighborhood.
  – Swap (aka 2-exchange operator): candidate solutions are neighbors iff they differ in two elements.
  – Not very appropriate for TSP, where the adjacency of elements is important (weak locality).


Travelling salesman problem (TSP)
• Definition of Neighborhood - Other alternatives
  – 2-edge exchange (aka 2-opt): neighbor solutions differ by at most 2 edges. Size of neighborhood: C(n,2) - n = n(n-1)/2 - n. Appropriate for TSP, where the adjacency of elements is important (strong locality).
  – k-edge exchange: neighbor solutions differ by at most k edges.

Travelling salesman problem (TSP)
• Definition of Neighborhood - Other alternatives
  – Swap operator and insertion operator: appropriate for permutation scheduling problems (strong locality), but not for TSP (weak locality).
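A small Python sketch of the 2-opt (2-edge exchange) move described above: reversing the segment between two cut points replaces exactly two edges of the tour. The distance matrix and the tour are made-up values used only to show the move.

```python
def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt_move(tour, i, j):
    """Return the neighbor obtained by reversing tour[i:j+1] (a 2-edge exchange)."""
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

# Hypothetical symmetric distance matrix for 5 cities.
dist = [
    [0, 2, 9, 10, 7],
    [2, 0, 6, 4, 3],
    [9, 6, 0, 8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5, 6, 0],
]
tour = [0, 2, 1, 3, 4]
neighbor = two_opt_move(tour, 2, 3)   # reverse tour[2:4], i.e. cities (1, 3)
print(tour, tour_length(tour, dist))
print(neighbor, tour_length(neighbor, dist))   # slightly shorter neighbor tour
```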

A* Algorithm
• The A* search algorithm is a simple and efficient search algorithm that can be used to find the optimal path between two nodes in a graph.
• A* is an informed search algorithm, or a best-first search.
• It does this by maintaining a tree of paths originating at the start node and extending those paths one edge at a time until its termination criterion is satisfied.
• The cost of a node is evaluated using both the estimated cost and the true cost as follows:
  f(x) = g(x) + h(x)
• A* terminates when the path it chooses to extend is a path from start to goal or if there are no paths eligible to be extended.

A* Algorithm
• If the heuristic function is admissible, meaning that it never overestimates the actual cost to get to the goal, A* is guaranteed to return a least-cost path from start to goal.
• It is an extension of Dijkstra's shortest path algorithm.

A* Algorithm
• Step 1: Put the initial node x0 and its cost f(x0) = h(x0) into the open list O.
• Step 2: Get x from the top of O.
  – If O is empty, stop with failure.
  – If x is the target node, stop with success.
• Step 3: Expand x to get a set S of child nodes. Put x into the closed list C.
• Step 4: For each x' in S, find its cost f(x') = g(x') + h(x'), where g(x') = g(x) + c(x, x').
  – If x' is in C but the new cost is smaller than the old one, move x' back to O and update the edge (x, x') and the cost.
  – Else if x' is in O but the new cost is smaller than the old one, update the edge (x, x') and the cost.
  – Else (if x' is in neither O nor C), put x', the edge (x, x'), and the cost f(x') into O.
• Step 5: Sort the open list according to the costs of the nodes, and return to Step 2.

Example of Maze Game
(worked A* example on the maze shown on the slide)
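A compact Python sketch of the A* steps above on a small grid maze, using the Manhattan distance as the admissible heuristic h; the grid, walls, start/goal cells, and unit edge costs are assumptions for the example, and a best-cost dictionary stands in for the explicit closed-list bookkeeping.

```python
import heapq

WALLS = {(1, 0), (1, 1)}          # hypothetical blocked cells in a 3x3 grid
START, GOAL = (0, 0), (2, 2)

def h(cell):                      # Manhattan distance: admissible on a grid
    return abs(cell[0] - GOAL[0]) + abs(cell[1] - GOAL[1])

def neighbors(cell):
    x, y = cell
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx <= 2 and 0 <= ny <= 2 and (nx, ny) not in WALLS:
            yield (nx, ny)

def a_star(start, goal):
    open_list = [(h(start), 0, start)]        # (f, g, node), ordered by f
    best_g = {start: 0}
    parent = {start: None}
    while open_list:
        f, g, x = heapq.heappop(open_list)
        if x == goal:                         # success: rebuild the path
            path = []
            while x is not None:
                path.append(x)
                x = parent[x]
            return list(reversed(path))
        for child in neighbors(x):
            new_g = g + 1                     # unit edge costs c(x, x') = 1
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g
                parent[child] = x
                heapq.heappush(open_list, (new_g + h(child), new_g, child))
    return None                               # failure: no path

print(a_star(START, GOAL))
```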


Example
(worked example shown on the slide)

Properties of the A* Algorithm
• The A* algorithm can obtain the best solution because it considers both the cost accumulated up to the current node and the estimated future cost.
• A sufficient condition for obtaining the best solution is that the estimated future cost never exceeds the best possible (true remaining) cost.
• If this condition is not satisfied, we may not be able to get the best solution. Algorithms without this guarantee are called A algorithms.

Metaheuristics
• The term was coined by Glover (1986): meta- (beyond, at a higher level) + heuristic (to find, to search, to discover).
• "A metaheuristic is a high-level problem-independent algorithmic framework that provides a set of guidelines or strategies to develop heuristic optimization algorithms" (Sörensen and Glover, 2013).
• "A metaheuristic is a higher-level procedure or heuristic designed to find, generate, or select a heuristic (partial search algorithm) that may provide a sufficiently good solution to an optimization problem, especially with incomplete or imperfect information or limited computation capacity" (Bianchi et al., 2009).
• "Metaheuristics is a rather unfortunate term often used to describe a major subfield, indeed the primary subfield, of stochastic optimization. Stochastic optimization (also called randomized optimization) is the general class of algorithms and techniques which employ some degree of randomness to find optimal (or as optimal as possible) solutions to hard problems." (Luke, 2013)

Metaheuristics
A metaheuristic:
  o Guides a subordinate heuristic in an iterative manner
  o Is a guided random search technique
  o Explores and exploits the search space
  o Includes methods to avoid getting trapped in local optima
  o Uses search experience intelligently to guide further search
  o Finds near-optimal solutions

Heuristics vs Metaheuristics

Properties         | Heuristic Methods                            | Metaheuristic Methods
Nature             | Deterministic                                | Randomization + Heuristics
Type               | Algorithmic, Iterative                       | Nature inspired, Iterative
Example            | Subtour Reversal or Nearest-Neighbor for TSP | Genetic Algorithm for TSP
Nature of solution | Inexact, Near Optimal Solution               | Inexact, Near Optimal Solution

Solution strategies for Optimization problems

Methods                          | Nature of Solution
Linear or Non-linear Programming | Exact Solution
Branch and Bound                 | Exact Solution
Heuristic Methods                | Inexact, Near Optimal Solution
Metaheuristic Methods            | Inexact, Near Optimal Solution

• In heuristic and metaheuristic methods, we make a tradeoff between solution quality and computational time.
• Sometimes solution quality is compromised to get a quick solution.


Metaheuristics
• Advantages
  – General purpose
  – Successful in practice
  – Easy implementation
  – Easy parallelization
• Drawbacks
  – They are not exact algorithms
  – They are not deterministic
  – Poor theoretical foundations

Taxonomies
• Many classification criteria may be used for metaheuristics. The most common are:
  – Nature inspired vs non-nature inspired
  – Memory usage vs memoryless methods
  – Deterministic vs stochastic
  – Iterative vs greedy
  – Population vs single-solution based search

Nature inspired vs non-nature inspired
• Many metaheuristics are inspired by natural processes:
  – evolutionary algorithms and artificial immune systems, from biology;
  – particle swarm optimization and swarm intelligence, from ants and bee colonies;
  – simulated annealing, from physics.

Memory usage vs memoryless methods
• Some metaheuristic algorithms are memoryless; that is, no information extracted dynamically is used during the search. Some representatives of this class are local search, GRASP, and simulated annealing.
• Other metaheuristics use a memory that contains some information extracted online during the search, for instance, the short-term and long-term memories in tabu search.
Figure source: https://commons.wikimedia.org/wiki/File:ParticleSwarmArrowsAnimation.gif

Deterministic vs stochastic
• A deterministic metaheuristic solves an optimization problem by making deterministic decisions (e.g., local search, tabu search).
• In stochastic metaheuristics, some random rules are applied during the search (e.g., simulated annealing, evolutionary algorithms).
• In deterministic algorithms, using the same initial solution will lead to the same final solution, whereas in stochastic metaheuristics, different final solutions may be obtained from the same initial solution. This characteristic must be taken into account in the performance evaluation of metaheuristic algorithms.

Iterative vs greedy
• In iterative algorithms, we start with a complete solution (or population of solutions) and transform it at each iteration using some search operators.
• Greedy algorithms start from an empty solution, and at each step a decision variable of the problem is assigned until a complete solution is obtained.
• Most metaheuristics are iterative algorithms.


Single-solution vs Population-based search
• Single-solution based algorithms (e.g., local search, simulated annealing) manipulate and transform a single solution during the search, while in population-based algorithms (e.g., particle swarm, evolutionary algorithms) a whole population of solutions is evolved.
• These two families have complementary characteristics: single-solution based metaheuristics are exploitation oriented; they have the power to intensify the search in local regions. Population-based metaheuristics are exploration oriented; they allow a better diversification in the whole search space.

Mechanisms
• There is usually a trade-off between two objectives in the search process:
  – Diversification or exploration: to generate diverse solutions to explore the search space on a global scale.
  – Intensification or exploitation: to focus the search in a local region, knowing that a current good solution has already been found in this region.
• A good balance is needed to:
  – Identify quickly the regions of the search space with high quality solutions.
  – Not waste too much time in regions of the space which have already been explored or which do not contain promising solutions.
• Basically, each metaheuristic technique applies different strategies to balance these two forces. Nearly all metaheuristics are essentially elaborate combinations of hill-climbing and random search.

When should we use metaheuristics?
• The complexity of a problem gives an indication of the hardness of the problem.
  – But it is important to consider the size: small instances may be solved by an exact approach, and vice versa, easy problems of large size can be solved with metaheuristics.
• The required search time to solve a given problem is an important issue in the selection of an optimization algorithm.
• For nonlinear continuous (NLP) optimization, metaheuristics should be applied if derivative-based methods, for example the quasi-Newton method or conjugate gradient, fail due to a rugged search landscape (e.g., discontinuous, nonlinear, ill-conditioned, noisy, multimodal, nonsmooth, and nonseparable). The function must be at least of moderate dimensionality (considerably greater than three variables). Those properties characterize many real-world problems.
• Optimization by simulation: nonanalytic models with black-box scenarios.

Design vs control problems
The relative importance of the two main performance measures, quality of solutions and search time, depends on the characteristics of the target optimization problem. Two extreme problems may be considered here:
  – Design problems: Design problems are generally solved once. They need a very good quality of solutions, whereas the time available to solve the problem is long (e.g., several hours, days, months). In this class of problems, one can find the strategic problems (long-term problems), such as telecommunication network design and processor design. These problems involve an important financial investment; any imperfection will have a long-time impact on the solution. Hence, the critical aspect is the quality of solutions rather than the search time. If possible, exact optimization algorithms must be used.
  – Control problems: Control problems represent the other extreme, where the problem must be solved frequently in real time. This class of operational problems involves short-term decisions (e.g., fractions of a second), such as routing messages in a computer network and traffic management in a city. For operational decision problems, very fast heuristics are needed; the quality of the solutions is less critical.

Random Search
• Random search is the extreme case of exploration.

Random Search
• Random search example for the travelling salesman problem (illustrated on the slide).


Random Search
• How likely is it to find the (unique) optimal solution (among m possible solutions) in n iterations?
• Probability of hitting the target in any given iteration = 1/m
• Probability of hitting the target in n iterations
  = Probability of hitting the target in at least 1 of them
  = 1 - Probability of not hitting the target in any of the n iterations
  = 1 - (Probability of not hitting the target in one iteration)^n
  = 1 - (1 - 1/m)^n = P(success in n iterations)

Random Search
• How many iterations n do we need to have an x probability of success?
  n > log(1 - x) / log(1 - 1/m)
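A quick numeric check of the two formulas above in Python; the search space size m and the target success probability x are assumed values for the example.

```python
import math

m = 1_000_000          # assumed number of candidate solutions
x = 0.99               # assumed desired probability of success

def p_success(n, m):
    """P(success in n iterations) = 1 - (1 - 1/m)^n."""
    return 1 - (1 - 1 / m) ** n

# Iterations needed for at least probability x of success.
n_needed = math.ceil(math.log(1 - x) / math.log(1 - 1 / m))
print("iterations needed:", n_needed)
print("check:", p_success(n_needed, m))   # should be >= 0.99
```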

Local search
• Single-solution based family of metaheuristics.
• Emphasis on exploitation.
Figure: The circle represents the neighborhood of s in a continuous problem with 2 dimensions.

Neighborhood
• The main property that must characterize a neighborhood is locality. We say that a neighborhood mapping has strong locality if neighbors have similar quality. In this case, local search will perform a meaningful search in the landscape of the problem.
Figure: Nodes of the hypercube represent solutions of the problem. The neighbors of a solution (e.g. (0,1,0)) are the adjacent nodes in the graph.

Neighborhood
• Gradient descent: x = x - α ∇f(x)

Problems of local search
• The main problems in local search algorithms are their convergence towards local optima and their sensitivity to the initial solution. Different approaches have been proposed to address these limitations:
  – Iterating from different initial solutions
  – Changing the landscape of the problem
  – Accepting non-improving neighbors


Problems of local search
(illustration on the slide)

Thank You
