
The Gurobi Solver V1.0

Robert E. Bixby, Gurobi Optimization & Rice University
Ed Rothberg, Zonghao Gu, Gurobi Optimization


Overview

• Background
• Rethinking the MIP solver
– Introduction
– Tree‐of‐Trees
– Parallel MIP
– Other
• Heuristics
• Cutting Planes
• Where we are now
– Performance


Gurobi Optimization, Inc. founded July 2008
Founders: Gu, Rothberg, Bixby

Building an algorithmic base: LP simplex and MIP in Version 1.0

Development timeline:
– March 2008: Development began
– October 2008: Version 1.0 development completed
– 18 May 2009: Version 1.0 released
– 2 October 2009: Version 2.0 to be released


Redesigning the MIP Solver


MIP Problem

• Mixed Integer Programming

Minimize c^T x
Subject to Ax = b
l ≤ x ≤ u
Some x_j must be integral
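
For concreteness, here is a minimal sketch of such a model written with the modern gurobipy Python interface (which postdates the Version 1.0 API discussed in this talk); the two‐variable instance is made up for illustration, not taken from the slides.

    import gurobipy as gp
    from gurobipy import GRB

    # tiny hypothetical instance: minimize c'x subject to Ax = b, bounds, y integral
    m = gp.Model("small_mip")
    x = m.addVar(lb=0.0, ub=10.0, name="x")                    # continuous by default
    y = m.addVar(lb=0.0, ub=5.0, vtype=GRB.INTEGER, name="y")  # must be integral
    m.addConstr(2 * x + 3 * y == 12, name="balance")           # the single row of Ax = b
    m.setObjective(x + 4 * y, GRB.MINIMIZE)                    # c'x
    m.optimize()

    if m.Status == GRB.OPTIMAL:
        print(x.X, y.X, m.ObjVal)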


MIP Solution Approach

• Branch and bound [Land & Doig, 1960]
– Explore a tree of relaxations (ignore integrality)

Root node: (x=0.5, y=1, z=0.5, …)

[Tree diagram: branch on x (x ≤ 0 vs. x ≥ 1), then on y (y ≤ 0 vs. y ≥ 1), and so on]
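
A schematic of that branch‐and‐bound loop in Python, with the LP machinery left abstract; solve_relaxation, pick_fractional_var, and the node's with_upper_bound/with_lower_bound methods are hypothetical placeholders, not part of any Gurobi interface.

    import math

    def branch_and_bound(root, solve_relaxation, pick_fractional_var):
        """Minimize over a tree of relaxations (helpers are hypothetical):
        solve_relaxation(node) -> object with .obj and .x, or None if infeasible;
        pick_fractional_var(relax) -> index of a fractional integer var, or None."""
        best_obj, best_sol = math.inf, None
        active = [root]                      # active nodes of the search tree
        while active:
            node = active.pop()
            relax = solve_relaxation(node)   # ignore integrality at this node
            if relax is None or relax.obj >= best_obj:
                continue                     # infeasible, or pruned by bound
            j = pick_fractional_var(relax)
            if j is None:                    # relaxation solution is integer feasible
                best_obj, best_sol = relax.obj, relax.x
                continue
            v = relax.x[j]                   # branch: x_j <= floor(v) or x_j >= ceil(v)
            active.append(node.with_upper_bound(j, math.floor(v)))
            active.append(node.with_lower_bound(j, math.ceil(v)))
        return best_obj, best_sol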


Key Ingredients

• For an effective MIP solver, you need …
– Heuristics
• Explore solutions near the relaxation quickly
• Find feasible solutions at nodes other than leaf nodes
– Parallelism
– Presolve
• Tighten formulation before starting branch & bound (a small bound‐tightening sketch follows below)
• Once you start branching, mistakes replicate
– Branch‐variable selection
– Cutting planes
– LP dual‐simplex solver
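
To make the presolve bullet concrete, here is a hedged sketch of one classic reduction, bound propagation on a single ≤ constraint; the function name and data layout are mine, not Gurobi's.

    import math

    def tighten_bounds(a, b, lb, ub, is_int):
        """One bound-propagation pass on a single constraint sum_j a[j]*x[j] <= b
        (hypothetical presolve helper illustrating 'tighten before branching')."""
        # minimum possible activity of the whole constraint
        min_act = sum(a[j] * (lb[j] if a[j] > 0 else ub[j]) for j in range(len(a)))
        new_lb, new_ub = lb[:], ub[:]
        for j, aj in enumerate(a):
            if aj == 0:
                continue
            # minimum activity contributed by all the other terms
            rest = min_act - aj * (lb[j] if aj > 0 else ub[j])
            bound = (b - rest) / aj
            if aj > 0:                       # a_j * x_j <= b - rest
                new_ub[j] = min(ub[j], math.floor(bound + 1e-9) if is_int[j] else bound)
            else:                            # dividing by a_j < 0 flips the inequality
                new_lb[j] = max(lb[j], math.ceil(bound - 1e-9) if is_int[j] else bound)
        return new_lb, new_ub

    # example: 2*x0 + 3*x1 <= 10 with 0 <= x <= 10, both integer  ->  x0 <= 5, x1 <= 3
    print(tighten_bounds([2, 3], 10, [0, 0], [10, 10], [True, True]))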


A Fresh Look


A Fresh Look

• Things have changed:
– Two examples:
• “Sub‐MIP” as a pervasive approach

• Ubiquitous parallel processing


Sub‐MIP As A Paradigm

• Key recent insight for heuristics:
– Can use MIP solver recursively as a heuristic
– Solve a related model:
• Hopefully smaller and simpler
– Examples:
• Local cuts [Applegate, Bixby, Chvátal & Cook, 2001]
• Local branching [Fischetti & Lodi, 2003]

• RINS [Danna, Rothberg, Le Pape, 2005]

• Solution polishing [Rothberg, 2007]


RINS

• Relaxation Induced Neighborhood Search
– Given two “solutions”:
• x*: any integer feasible solution (not optimal)
• x^R: optimal relaxation solution (not integer feasible)
– Fix variables that agree
– Solve the result as a MIP
• Possibly requiring early termination

• Extremely effective heuristic
– Often finds solutions that no other technique finds
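
A hedged sketch of the RINS fixing step using gurobipy, assuming the incumbent and relaxation solutions are available as plain dicts keyed by variable name; those dicts, the tolerance, and the node limit are illustrative choices, not the settings described in the talk.

    import gurobipy as gp
    from gurobipy import GRB

    def rins_sub_mip(model, x_star, x_rel, tol=1e-6, node_limit=500):
        """RINS-style sub-MIP sketch: fix integer variables on which the incumbent
        (x_star) and the relaxation solution (x_rel) agree, then solve the rest.
        x_star / x_rel are hypothetical dicts: variable name -> value."""
        sub = model.copy()
        for v in sub.getVars():
            if v.VType == GRB.CONTINUOUS:
                continue
            a, b = x_star[v.VarName], x_rel[v.VarName]
            if abs(a - b) < tol:               # the two "solutions" agree: fix it
                v.LB = v.UB = round(a)
        sub.Params.NodeLimit = node_limit      # possibly terminate the sub-MIP early
        sub.optimize()
        if sub.SolCount > 0:
            return {v.VarName: v.X for v in sub.getVars()}
        return None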


Why Is RINS So Effective?

• MIP models often involve a hierarchy of decisions
– Some much more important than others

• Fixing variables doesn’t just make the problem smaller
– Often changes the nature of the problem

• Extreme case:
– Problem decomposes into multiple, simple problems

• More general case:
– Resolving a few key decisions can have a dramatic effect
– Strategies that worked well for the whole problem may not work well for the RINS sub‐MIP

• More effective to treat it as a brand new MIP


Rethinking MIP Tree Search


Tree‐of‐Trees

• Gurobi MIP search tree manager built to handle multiple related trees
– Can transform any node into the root node of a new tree

• Maintains a pool of nodes from all trees
– No need to dedicate the search to a single sub‐tree


Tree‐of‐Trees

[Diagram: a search tree branching on x=0 / x=1 and y=0 / y=1, with its active nodes also collected in a shared node pool]


Tree‐of‐Trees

• Each tree has its own relaxation and its own strategies…
– Presolved model for each subtree
– Cuts specific to that subtree
– Pseudo‐costs for that subtree only
– Symmetry detection on that submodel
– Etc.

• Captures structure that is often not visible in the original model
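
A toy data‐structure sketch of this tree‐of‐trees idea (class and field names are mine and deliberately simplified; they are not Gurobi internals): every active node remembers which tree it belongs to, any node can be promoted to the root of a new tree with its own per‐tree state, and one shared pool serves nodes from all trees.

    from dataclasses import dataclass, field
    import heapq
    import itertools

    @dataclass
    class Tree:
        tree_id: int
        # per-tree state: presolved model, cuts, pseudo-costs, ... (left abstract)
        state: dict = field(default_factory=dict)

    class TreeOfTrees:
        """Illustrative node pool mixing nodes from several related trees."""
        def __init__(self):
            self._ids = itertools.count()
            self._seq = itertools.count()      # tie-breaker keeps ordering stable
            self.trees = {}
            self.pool = []                     # heap of (bound, seq, node, tree_id)

        def new_tree(self, root_node, bound, state=None):
            t = Tree(next(self._ids), state or {})
            self.trees[t.tree_id] = t
            self.push(root_node, bound, t.tree_id)
            return t.tree_id

        def push(self, node, bound, tree_id):
            heapq.heappush(self.pool, (bound, next(self._seq), node, tree_id))

        def pop_best(self):
            # best-bound node across *all* trees; no single sub-tree owns the search
            bound, _, node, tree_id = heapq.heappop(self.pool)
            return node, self.trees[tree_id]

        def promote(self, node, bound):
            # transform any node into the root node of a new tree
            return self.new_tree(node, bound)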


Parallel MIP


Why Parallel?

• Microprocessor trends have changed
• Transistors are:
– Still getting smaller
– But not faster

• Implications:
– New math for CPUs: more transistors = more cores

• 4 cores now, 8 cores in late 2009, …

– Sequential software won’t be getting significantly faster in the foreseeable future

• Gurobi MIP solver built for parallel from the ground up
– Sequential is just a special case


Need Deterministic Behavior

• Non‐deterministic parallel behavior:
– Multiple runs with the same inputs can give different results

• “Insanity: doing the same thing over and over again and expecting different results.” – Albert Einstein

• Conclusion: non‐deterministic parallel behavior will drive you insane


Building Blocks


Building Blocks

• Parallel MIP is parallel branch‐and‐bound:

[Diagram: branch‐and‐bound tree with its active leaf nodes marked as available for simultaneous processing]


Deterministic Parallel MIP

• Multiple phases
• In each phase, on each processor:
– Explore nodes assigned to processor
– Report back results
• New active nodes
• New solutions
• New cuts
• Etc.

• One approach to node assignment:
– Assign a subtree to each processor
– Limit amount of exploration in each phase
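
A schematic of this phase‐synchronized scheme in Python (illustrative only; process_node is a hypothetical callable standing in for relaxation solving, branching, and pruning at a node). Determinism here comes from a fixed assignment rule, a fixed per‐phase node limit, and merging results in a fixed worker order at the phase barrier.

    from concurrent.futures import ProcessPoolExecutor

    def _run_phase(args):
        """Worker: explore the nodes assigned to it this phase and report back."""
        worker_id, nodes, node_limit, process_node = args
        new_nodes = list(nodes[node_limit:])        # unexplored nodes stay active
        new_solutions = []
        for node in nodes[:node_limit]:             # bounded exploration per phase
            children, solution_objs = process_node(node)
            new_nodes += children
            new_solutions += solution_objs
        return worker_id, new_nodes, new_solutions

    def deterministic_parallel_search(root_nodes, process_node, n_workers=4, node_limit=100):
        """Phase-synchronized parallel tree search; process_node(node) is a hypothetical
        callable returning (child_nodes, objective_values_of_new_feasible_solutions)."""
        active, incumbent = list(root_nodes), None
        with ProcessPoolExecutor(max_workers=n_workers) as pool:
            while active:
                # fixed rule: node i of this phase goes to worker i mod n_workers
                tasks = [(w, active[w::n_workers], node_limit, process_node)
                         for w in range(n_workers)]
                results = pool.map(_run_phase, tasks)
                active = []
                # merge at the phase barrier in worker order -> same result every run
                for _, nodes, objs in sorted(results, key=lambda r: r[0]):
                    active += nodes
                    for obj in objs:
                        if incumbent is None or obj < incumbent:
                            incumbent = obj
        return incumbent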


Subtree Partitioning

• Problem:
– Subtree may quickly prove to be uninteresting
• Poor relaxation objectives
– May want to abandon it
• Pruned quickly
– Leaves processor idle


More Global Partitioning

• Node coloring: assign a color to every node
– Processor can only process nodes of the appropriate color
– New child node same color as parent node
– Perform periodic re‐coloring


More Dynamic Node Processing

• Allows much more flexibility
– Processor can choose from among many nodes of the appropriate color

• Deterministic priority queue data structure required to support node coloring
– Single global view of active nodes
– Support notion of node color

• Processor only receives node of the appropriate color

– Efficient, frequent node reallocation
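
A toy sketch of such a color‐aware, deterministic priority queue (illustrative; this is not Gurobi's internal data structure): one global heap keyed by bound with a deterministic tie‐breaker, from which a worker may only pop nodes of its assigned color, plus a periodic re‐coloring hook.

    import heapq
    import itertools

    class ColoredNodeQueue:
        """Single global view of active nodes; pops are restricted by node color."""
        def __init__(self):
            self._heap = []                    # entries: (bound, seq, color, node)
            self._seq = itertools.count()      # deterministic tie-breaker
            self._skipped = []                 # wrong-color entries set aside in pop()

        def push(self, node, bound, color):
            heapq.heappush(self._heap, (bound, next(self._seq), color, node))

        def pop(self, color):
            """Return the best-bound node of the given color, or None."""
            found = None
            while self._heap:
                entry = heapq.heappop(self._heap)
                if entry[2] == color:
                    found = entry[3]
                    break
                self._skipped.append(entry)    # keep other colors for their own workers
            for entry in self._skipped:        # restore what we passed over
                heapq.heappush(self._heap, entry)
            self._skipped.clear()
            return found

        def recolor(self, recolor_fn):
            """Periodic re-coloring: reassign colors, e.g. to rebalance load."""
            self._heap = [(b, s, recolor_fn(n, c), n) for (b, s, c, n) in self._heap]
            heapq.heapify(self._heap)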


Heuristics


Summary of Heuristics

• 5 heuristics prior to solving the root LP
– 5 different variable orders; fix variables in this order

• 15 heuristics within the tree (9 primary, several variations)
– RINS, rounding, fix and dive (LP), fix and dive (Presolve), Lagrangian approach, pseudo costs, Hail Mary (set objective to 0); a fix‐and‐dive sketch follows below

• 3 solution improvement heuristics
– Applied whenever a new integer feasible solution is found
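
As promised above, a hedged sketch of the fix‐and‐dive (LP) idea; solve_lp is a hypothetical callable that re‐solves the LP relaxation under the current fixings and returns a dict of variable values (or None if infeasible). This is one plausible reading of the heuristic's name, not Gurobi's implementation.

    def fix_and_dive(solve_lp, int_vars, max_depth=1000):
        """Dive toward an integer-feasible point by repeatedly fixing the
        'most integral' fractional variable and re-solving the LP relaxation."""
        fixings = {}                       # variable -> fixed integer value
        for _ in range(max_depth):
            sol = solve_lp(fixings)        # hypothetical: {var: value} or None
            if sol is None:
                return None                # LP infeasible under current fixings
            frac = {j: sol[j] for j in int_vars
                    if j not in fixings and abs(sol[j] - round(sol[j])) > 1e-6}
            if not frac:
                return sol                 # all integer variables integral: done
            # fix the variable whose LP value is closest to an integer
            j = min(frac, key=lambda k: abs(frac[k] - round(frac[k])))
            fixings[j] = round(frac[j])
        return None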

Cutting Planes


Cutting Planes

• Gomory
– Gu, ISMP 2006; strengthened by lifting in GUBs

• Flow covers
– Strengthened by lifting before un‐transforming

• MIR
– Different aggregation for MIR and flow covers

• Knapsack covers

• Clique

• Implied bound

• Flow paths
– Results fed into flow covers

• GUB covers
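
To make one of these cut families concrete: for a knapsack row sum_j a_j x_j ≤ b over binary variables, any cover C (a subset with sum_{j in C} a_j > b) gives the valid inequality sum_{j in C} x_j ≤ |C| − 1. The greedy separation below is a textbook‐style illustration, not Gurobi's knapsack cover routine.

    def separate_cover_cut(a, b, x_star, tol=1e-6):
        """Try to find a cover cut sum_{j in C} x_j <= |C| - 1 violated by x_star.
        a: knapsack coefficients over binary variables, b: right-hand side."""
        # greedy: add items with the largest x*_j until the cover condition holds
        order = sorted(range(len(a)), key=lambda j: -x_star[j])
        cover, weight = [], 0.0
        for j in order:
            cover.append(j)
            weight += a[j]
            if weight > b:                       # C is a cover: its weights exceed b
                lhs = sum(x_star[k] for k in cover)
                if lhs > len(cover) - 1 + tol:   # cut is violated by the point
                    return cover                 # cut: sum_{j in C} x_j <= |C| - 1
                return None
        return None                              # no cover exists (row is redundant)

    # example: 3x1 + 4x2 + 5x3 <= 6, fractional point x* = (0.9, 0.9, 0.0)
    # {x1, x2} is a cover (3 + 4 > 6) and x1 + x2 <= 1 is violated (1.8 > 1)
    print(separate_cover_cut([3, 4, 5], 6, [0.9, 0.9, 0.0]))   # -> [0, 1]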


Performance


Performance Benchmarks

• Performance test sets:
– Mittelmann optimality test set:
• 55 models, varying degrees of difficulty
• http://plato.asu.edu/ftp/milpc.html
– Mittelmann feasibility test set:
• 34 models, difficult to find feasible solutions
• http://plato.asu.edu/ftp/feas_bench.html
– Our own broader test set:
• A set of 263 models that require between one minute and one hour to solve on one core
– Publicly available models, plus a few customer models

• Test platform:
– Q9450 (2.66 GHz, quad‐core system)


Performance Results

• Two basic questions:
– Is P=1 efficient?
– How much does performance improve with P>1?

• P=1 efficiency (geometric means):
– Mittelmann optimality test set
• Gurobi 1.1 is 1.18X slower than CPLEX 12.0
– Mittelmann feasibility test set
• Gurobi 1.1 is 2.3X faster than CPLEX 12.0


Parallel Performance

• Comparisons:
– Gurobi: 1.45X faster than CPLEX 12.0 for p=4 on Mittelmann
– Gurobi p=1 versus p=4:
• 1.73X speedup on Mittelmann
• 1.61X speedup on broader set
– CPLEX p=1 versus p=4:
• 1.21X speedup for CPLEX 12 on Mittelmann (non‐deterministic)
• 1.29X speedup for CPLEX 11 on their broader set


Speedups for P=4 (By Model)

[Chart: per‐model speedups at P=4; vertical axis runs from 0 to 5]


Gurobi 2.0

• Improve simplex solver

• Cutting planes
– Add missing cutting planes

• Add missing presolve reductions

Thank You