Lecture 1: An Introduction to Optimization –
Classification and Case Study
An Introduction to Metaheuristics: Produced by Qiangfu Zhao (Since 2012), All rights reserved (C) Lec01/1
Un-constrained Optimization
• Generally speaking, an optimization problem has an objective function f(x).
• The problem is represented by
min (max) 𝑓(𝑥), for all 𝑥
• This is called an un-constrained optimization problem (無制約最適化問題).
Un-constrained Optimization
• Usually, 𝑥 is a “point” in an 𝑁-dimensional Euclidean space 𝑅^𝑁, and 𝑓(𝑥) is a point in 𝑅^𝑀.
• In this course, we study only the case in which 𝑀 = 1. That is, we have only one objective to optimize.
• Some special considerations are needed to extend the results obtained here to “multiple objective” cases.
• Interested students may also study optimization in “non-Euclidean” spaces (i.e. manifolds).
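To make the 𝑀 = 1 setting concrete, here is a minimal gradient-descent sketch for an unconstrained problem in 𝑅^𝑁. The quadratic objective, target point, starting point, and step size are all illustrative assumptions, not part of the lecture:

```python
# Gradient descent on an unconstrained problem in R^N.
# The objective f(x) = sum_i (x_i - c_i)^2 is a hypothetical stand-in;
# its unique minimizer is x = c.

def f(x, c):
    """Objective f: R^N -> R (a single objective, M = 1)."""
    return sum((xi - ci) ** 2 for xi, ci in zip(x, c))

def grad_f(x, c):
    """Gradient of f at x."""
    return [2 * (xi - ci) for xi, ci in zip(x, c)]

def gradient_descent(c, x0, lr=0.1, steps=200):
    """Repeatedly step against the gradient from x0."""
    x = list(x0)
    for _ in range(steps):
        g = grad_f(x, c)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

x_star = gradient_descent(c=[1.0, -2.0, 3.0], x0=[0.0, 0.0, 0.0])
```

For this convex objective, gradient descent converges to the unique global minimizer; later slides show why this is not true in general.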
Constrained Optimization
• The domain can be a sub-space 𝐷 of 𝑅^𝑁.
• We then have a constrained optimization problem:
      min (max) 𝑓(𝑥)
      s.t. 𝑥 ∈ 𝐷
• 𝐷 in turn can be defined by some functions:
  – 𝑥𝑖 > 0, 𝑖 = 1,2, …
  – 𝑔𝑗(𝑥) > 0, 𝑗 = 1,2, …
Linear programming(線型計画法)
• If both 𝑓(𝑥) and the 𝑔𝑗(𝑥) are linear functions, we have a linear optimization problem, usually called linear programming (LP).
• For LP, we have very efficient algorithms already, and meta-heuristics are not needed.
Non-linear programming(非線形計画法)
• If 𝑓(𝑥) or any 𝑔𝑗(𝑥) is non-linear, we have a non-linear optimization problem, often called non-linear programming (NLP).
• Many methods have been proposed to solve this class of problems.
• However, conventional methods usually find only locally optimal solutions. Meta-heuristic methods are useful for finding globally optimal solutions.
Local optimal and global optimal
• For a minimization problem:
  – A solution 𝑥∗ is locally optimal if 𝑓(𝑥∗) ≤ 𝑓(𝑥) for all 𝑥 in the 𝜀-neighborhood of 𝑥∗, where 𝜀 > 0 is a real number giving the radius of the neighborhood.
  – A solution 𝑥∗ is globally optimal if 𝑓(𝑥∗) ≤ 𝑓(𝑥) for all 𝑥 in the search space (problem domain).
• Meta-heuristics are useful for obtaining globally optimal (or near-optimal) solutions efficiently.
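The distinction can be seen on a small example. The one-dimensional function below (a made-up objective, not from the lecture) has two local minima; plain gradient descent started on the wrong side gets trapped in the worse one, while an exhaustive grid search over the whole domain finds the global minimum:

```python
# f(x) = (x^2 - 1)^2 + 0.3x has a local minimum near x ~ 0.96
# and a (better) global minimum near x ~ -1.03.

def f(x):
    return (x * x - 1) ** 2 + 0.3 * x

def df(x):
    return 4 * x * (x * x - 1) + 0.3

# Local method: gradient descent started at x0 = 1.2
# converges to the nearby local minimum.
x = 1.2
for _ in range(2000):
    x = x - 0.01 * df(x)
local_min = x

# Global method: exhaustive search on a grid over [-2, 2]
# finds the global minimum on the other side.
grid = [-2 + 0.001 * i for i in range(4001)]
global_min = min(grid, key=f)

assert f(global_min) < f(local_min)
```

Grid search only works here because the domain is one-dimensional and small; meta-heuristics aim at the same "escape the local minimum" effect without exhaustive enumeration.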
Example 1: Linear Programming
• Two materials are used for making two products.
• The prices of the products are 25 and 31 (in million yen), and those of the materials are 0.5 and 0.8 (in million yen).
• Suppose that we produce 𝑥1 units of product 1 and 𝑥2 units of product 2.
• We can get 25𝑥1 + 31𝑥2 million yen by selling the products.
• On the other hand, we must pay (7𝑥1 + 5𝑥2) × 0.5 + (4𝑥1 + 8𝑥2) × 0.8 million yen to buy the materials.
             Material used in Product 1   Material used in Product 2
Material 1               7                            5
Material 2               4                            8
Example 1: Linear Programming
• The problem can be formulated as follows:
max 𝑓(𝑥1, 𝑥2) = 18.3𝑥1 + 22.1𝑥2
s.t. 𝑥1 > 0; 𝑥2 > 0;
     6.7𝑥1 + 8.9𝑥2 < 𝐵
• The first set of constraints means that both products should be produced to satisfy social needs; and the second constraint is the budget limitation.
• This is a typical linear programming problem, and can be solved efficiently using the well-known simplex algorithm.
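As a sanity check (not the simplex algorithm itself), one can exploit the fact that an LP optimum lies at a vertex of the feasible region; with only the budget constraint and non-negativity, the vertices can be listed by hand. The budget value 𝐵 = 100 million yen is an assumed number for illustration:

```python
# Vertex enumeration for the two-variable LP above.
# With 6.7*x1 + 8.9*x2 <= B and x1, x2 >= 0, the feasible region is a
# triangle with vertices (0,0), (B/6.7, 0), (0, B/8.9); an optimal
# solution of an LP is attained at one of them.
# B = 100 is an assumed budget, not given in the lecture.

B = 100.0

def profit(x1, x2):
    """Net profit 18.3*x1 + 22.1*x2 (million yen)."""
    return 18.3 * x1 + 22.1 * x2

vertices = [(0.0, 0.0), (B / 6.7, 0.0), (0.0, B / 8.9)]
best = max(vertices, key=lambda v: profit(*v))
# With these numbers, spending the whole budget on product 1 wins.
```

For larger LPs the vertex count explodes, which is exactly what the simplex algorithm (and interior-point methods) avoid by walking only along improving vertices.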
Example 2: Non-linear programming
• Given 𝑁 observations: (𝑥1, 𝑝(𝑥1)), (𝑥2, 𝑝(𝑥2)), … ,(𝑥𝑁, 𝑝(𝑥𝑁)) of an unknown function 𝑝(𝑥).
• Find a polynomial 𝑞(𝑥) = 𝑎0 + 𝑎1𝑥 + 𝑎2𝑥², such that

  min 𝑓(𝑎0, 𝑎1, 𝑎2) = ∑_{𝑖=1}^{𝑁} [𝑝(𝑥𝑖) − 𝑞(𝑥𝑖)]² + 𝜆‖𝑞(𝑥)‖²
• Note that in this problem 𝑞(𝑥) is also a function of 𝑎0, 𝑎1, and 𝑎2.
• The first term is the approximation error, and the second term is a regularization term that can make the solution better (e.g. smoother).
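A sketch of solving this NLP numerically by gradient descent. The penalty is taken concretely as 𝜆(𝑎0² + 𝑎1² + 𝑎2²), one common reading of the ‖𝑞‖ regularizer, and the sample data are made up; with 𝜆 = 0 the fit recovers the coefficients that generated the data:

```python
# Fit q(x) = a0 + a1*x + a2*x^2 by gradient descent on
#   f(a0,a1,a2) = sum_i (p(x_i) - q(x_i))^2 + lam*(a0^2 + a1^2 + a2^2).
# The data and the concrete penalty form are illustrative assumptions.

xs = [-1.0, -0.5, 0.0, 0.5, 1.0]
ps = [1 + 2 * x + 3 * x * x for x in xs]   # samples of a known p(x)
lam = 0.0                                  # no regularization here

a = [0.0, 0.0, 0.0]
lr = 0.02
for _ in range(20000):
    g = [2 * lam * aj for aj in a]         # gradient of the penalty
    for x, p in zip(xs, ps):
        q = a[0] + a[1] * x + a[2] * x * x
        err = q - p
        g[0] += 2 * err                    # d/da0 of (q - p)^2
        g[1] += 2 * err * x                # d/da1
        g[2] += 2 * err * x * x            # d/da2
    a = [aj - lr * gj for aj, gj in zip(a, g)]
# a converges to approximately [1, 2, 3]
```

Because this objective is quadratic in (𝑎0, 𝑎1, 𝑎2) it is actually convex, so gradient descent suffices; the NLP label matters once 𝑓 or the constraints make the landscape multi-modal.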
Combinatorial optimization problems
• If 𝑓(𝑥) or 𝑔𝑗(𝑥) cannot be given analytically (in closed form), or if the variables take only discrete values, we have a combinatorial optimization problem.
• For example, if each variable takes one of 𝑘 discrete values (e.g. integers), and there are 𝐾 variables, the number of all possible solutions will be 𝑘^𝐾.
• It is difficult to check all possible solutions in order to find the best one(s).
• In such cases, meta-heuristics can provide efficient ways for obtaining good solutions using limited resources (e.g. time and memory space).
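The 𝑘^𝐾 growth can be demonstrated directly. The sketch below enumerates every assignment of 𝐾 = 4 variables over 𝑘 = 3 values; the objective is a hypothetical one chosen so the answer is obvious:

```python
# Exhaustive search over all k^K assignments of K discrete variables.
# The objective is a made-up function minimized at x = (1, 1, 1, 1).
import itertools

k, K = 3, 4                    # each variable takes values 0 .. k-1
values = range(k)

def objective(x):
    return sum((xi - 1) ** 2 for xi in x)

candidates = list(itertools.product(values, repeat=K))
assert len(candidates) == k ** K            # 3^4 = 81 solutions
best = min(candidates, key=objective)       # -> (1, 1, 1, 1)
```

Here 81 candidates are trivial, but at k = 10 and K = 20 the same loop would need 10^20 evaluations; this is the wall meta-heuristics are designed to avoid.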
Example 3: Traveling salesman problem (TSP)
• Given 𝑁 users located in 𝑁 different places (cities).
• The problem is to find a route such that the salesman visits every user once (and only once), starting from and returning to his own place (i.e., finding a minimum-cost Hamiltonian cycle).
From Wikipedia
Example 3: Traveling salesman problem (TSP)
• In TSP, we have a route map which can be represented by a graph.
• Each node is a user, and the edge between each pair of nodes has a cost (distance or time).
• The evaluation function to be minimized is the total cost of the route.
From Wikipedia
For TSP, the number of all possible solutions is 𝑁!, and this is a well-known NP-hard combinatorial problem.
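A brute-force sketch for a toy instance (four cities at assumed coordinates on a unit square): fixing the start city, it enumerates the remaining orderings, which illustrates why factorial-scale enumeration is hopeless for large 𝑁:

```python
# Brute-force TSP on a made-up 4-city instance: the corners of a
# unit square.  With the start city fixed, we try every ordering of
# the other cities; the optimal tour is the square's perimeter.
import itertools, math

cities = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_cost(order):
    """Cost of the cycle 0 -> order... -> 0 over the distance graph."""
    route = [0] + list(order) + [0]
    return sum(dist(cities[route[i]], cities[route[i + 1]])
               for i in range(len(route) - 1))

best = min(itertools.permutations(range(1, 4)), key=tour_cost)
# best tour cost is 4.0 (the square's perimeter)
```

For 𝑁 = 4 there are only 6 orderings to try; at 𝑁 = 20 the same `min` would scan about 1.2 × 10^17 permutations, which is why TSP is the canonical test bed for meta-heuristics.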
NP-hard and NP-complete
• Problems that can be solved by a deterministic algorithm in polynomial time form the class P.
• NP is the class of decision problems that can be solved by a non-deterministic algorithm in polynomial time.
• A problem H is NP-hard if it is at least as hard as any problem in NP.
• An NP-hard problem that is itself in NP is NP-complete.
• For NP-complete and NP-hard problems, meta-heuristics can often find good solutions efficiently, although optimality is not guaranteed.
[Figure: Venn diagram of the classes P, NP, NP-complete, and NP-hard]
Example 4: The Knapsack problem
• The knapsack problem is another NP-hard problem, defined by:
  – There are 𝑁 objects;
  – Each object has a weight and a value;
  – The knapsack has a capacity;
  – The user has a quota (minimum desired value).
The problem is to find a sub-set of the objects that fits into the knapsack and maximizes the total value.
Example 4: The Knapsack problem

KNAPSACK(in  OS : set of objects; QUOTA : number; CAPACITY : number;
         out S : set of objects; FOUND : boolean)
begin
    S := empty; total_value := 0; total_weight := 0; FOUND := false;
    pick an order L over the objects;
    loop
        choose an object O in L;              -- non-deterministic choice
        add O to S;
        total_value  := total_value + O.value;
        total_weight := total_weight + O.weight;
        if total_weight > CAPACITY then fail
        else if total_value >= QUOTA then
            FOUND := true; succeed;
        end if;
        delete all objects up to O from L;
    end loop;
end
This is a non-deterministic algorithm. Each time we run the program, we may get a different answer. By chance, we may get the best answer.
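The procedure can be sketched as runnable Python by replacing the non-deterministic "choose" with a random pick. This is a simplification: each run commits to one random order instead of backtracking over choices, and the object data and quota/capacity values below are made up:

```python
# Randomized sketch of the non-deterministic KNAPSACK procedure:
# the "choose an object O in L" step becomes a random ordering.
import random

def knapsack(objects, quota, capacity, rng):
    """objects: list of (value, weight).  Returns (S, found)."""
    order = list(objects)
    rng.shuffle(order)                 # pick an order L over the objects
    s, total_value, total_weight = [], 0, 0
    for value, weight in order:        # "choose an object O in L"
        s.append((value, weight))
        total_value += value
        total_weight += weight
        if total_weight > capacity:    # fail: knapsack overflows
            return s, False
        if total_value >= quota:       # succeed: quota reached
            return s, True
    return s, False                    # ran out of objects

objects = [(10, 5), (6, 4), (7, 3), (3, 2)]
rng = random.Random(0)
# Repeated runs: different random orders may or may not succeed.
results = [knapsack(objects, quota=13, capacity=8, rng=rng)[1]
           for _ in range(10)]
```

Running it many times and keeping the best outcome is already a (very crude) meta-heuristic: random restart.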
Example 5: Learning problems
• Many optimization problems related to machine learning (learning from a given set of training data) are NP-hard/complete.
• Examples include:
  – Finding the smallest feature sub-set;
  – Finding the most informative training data set;
  – Finding the smallest decision tree;
  – Finding the best clusters;
  – Finding the best neural network;
  – Interpreting a learned neural network.
Homework
• Try to find some other examples of optimization problems (at least two) from the Internet.
• State whether each problem is NP-hard, NP-complete, in NP, or in P.
• Provide a solution (not necessarily the best one) for each of the problems.
• Summarize your answers in a PDF file, and submit a printed copy before next week's class.