Run your new code (through {{main}}, with both the {{lazy}} and {{userCut}} options) on the same TSPLIB problems as before and compare the performance. My running times were:
{chart:type=bar|title=Run time on TSPLIB instances|yLabel=Run Time (sec)|width=600|height=400|displayData=after}
||Strategy||eil51||bier127||ch130||ch150||d198||
|Without User Cuts|0.13|1.23|1.96|5.01|8.58|
|With User Cuts|0.40|3.76|12.52|83.62|103.93|
{chart}
We see that the running time increased on every example! What is going on here? Let's add a few print statements to our {{UserCutCallback}} to better understand why the performance is worse.
{code}
V source = graph.getVertices().iterator().next();
double best = Double.MAX_VALUE;
System.out.print("looking for cut... ");
for(V sink : graph.getVertices()){
    if(sink != source){
        Set<V> cutNodes = minCutSolver.computeMinCut(source, sink);
        Set<E> cutEdges = tspInstance.cutEdges(cutNodes);
        double cutVal = Util.calcSum(cutEdges,edgeWeights);
        best = Math.min(best, cutVal);
        if(cutVal < minCutVal){
            cutSets.add(cutEdges);
        }
    }
}
System.out.println("found " + cutSets.size() + " cuts, best " + best);
{code}
Let's run the code again on d198 and take a closer look at the output.
{noformat}
Reading file d198... complete!
Building problem... complete!
Solving TSP...Lazy constraint(s) or lazy constraint callback is present.
Disabling dual reductions (CPX_PARAM_REDUCE) in presolve.
Disabling non-linear reductions (CPX_PARAM_PRELINEAR) in presolve.
Tried aggregator 1 time.
Presolve time = 0.01 sec. (9.91 ticks)
Warning: Control callbacks may disable some MIP features.
Probing time = 0.01 sec. (3.35 ticks)
MIP emphasis: balance optimality and feasibility.
MIP search method: traditional branch-and-cut.
Parallel mode: none, using 1 thread.
Root relaxation solution time = 0.01 sec. (13.55 ticks)
Nodes Cuts/
Node Left Objective IInf Best Integer Best Bound ItCnt Gap Variable B NodeID Parent Depth
0 0 11793.0000 28 11793.0000 282
0 0 11823.0000 18 Cuts: 9 291
0 0 11837.0000 21 Cuts: 5 295
0 0 11845.1667 10 Cuts: 5 303
0 0 11846.0000 10 Cuts: 5 306
0 0 11846.7500 14 Cuts: 3 309
0 0 11848.0000 8 Cuts: 3 311
0 0 11858.0000 10 Cuts: 5 317
0 0 11860.8750 18 Cuts: 5 323
0 0 11869.5000 12 Cuts: 3 329
0 0 12442.0000 22 Cuts: 3 331
0 0 12450.0000 20 Cuts: 8 337
0 0 12456.0000 14 Cuts: 5 351
0 0 12461.5000 14 Cuts: 6 358
0 0 12462.5000 22 Cuts: 3 367
0 0 12463.0000 14 ZeroHalf: 1 370
looking for cut... found 2 cuts, best 0.0
0 0 12463.0000 22 User: 2 372
looking for cut... found 1 cuts, best 0.0
0 0 12463.0000 33 User: 1 374
looking for cut... found 3 cuts, best 0.0
0 0 12468.5000 20 User: 3 389
looking for cut... found 3 cuts, best 0.0
0 0 12468.5000 20 User: 3 392
looking for cut... found 3 cuts, best 0.0
0 0 12469.7500 60 User: 3 399
looking for cut... found 2 cuts, best 0.0
0 0 12471.0000 40 User: 2 401
looking for cut... found 3 cuts, best 0.0
0 0 12479.5000 44 User: 3 413
looking for cut... found 3 cuts, best 0.0
0 0 12494.0000 49 User: 3 428
looking for cut... found 1 cuts, best 0.0
0 0 12494.0000 49 User: 1 429
looking for cut... found 3 cuts, best 0.0
0 0 12980.0000 34 User: 3 449
looking for cut... found 4 cuts, best 0.0
0 0 12981.2500 61 User: 4 454
looking for cut... found 8 cuts, best 0.0
0 0 12997.5000 14 User: 8 471
looking for cut... found 2 cuts, best 0.0
0 0 13000.5000 29 User: 2 474
looking for cut... found 6 cuts, best 0.0
0 0 13003.3750 57 User: 6 479
looking for cut... found 11 cuts, best 0.0
0 0 13006.5000 22 User: 11 487
looking for cut... found 2 cuts, best 0.0
0 0 13008.5000 73 User: 2 495
looking for cut... found 8 cuts, best 0.0
0 0 13023.2500 62 User: 8 516
looking for cut... found 6 cuts, best 0.0
0 0 14782.0000 8 User: 6 543
looking for cut... found 1 cuts, best 0.0
0 0 14782.0000 28 User: 1 547
looking for cut... found 2 cuts, best 0.0
0 0 14782.0000 8 User: 2 551
looking for cut... found 1 cuts, best 0.0
0 0 14782.6667 45 User: 1 555
looking for cut... found 7 cuts, best 0.0
0 0 14795.5000 68 User: 7 572
looking for cut... found 5 cuts, best 0.0
0 0 14846.0000 40 User: 5 588
looking for cut... found 3 cuts, best 0.0
0 0 14871.5000 42 User: 3 596
looking for cut... found 7 cuts, best 0.0
0 0 14873.0000 14 User: 7 598
looking for cut... found 2 cuts, best 0.0
0 0 14873.4000 52 User: 2 602
looking for cut... found 6 cuts, best 0.0
0 0 14902.5000 38 User: 6 613
looking for cut... found 3 cuts, best 0.0
0 0 14906.0000 16 User: 3 619
looking for cut... found 2 cuts, best 0.0
0 0 14906.8000 54 User: 2 623
looking for cut... found 6 cuts, best 0.0
0 0 14935.2500 44 User: 6 643
looking for cut... found 2 cuts, best 0.0
0 0 15356.5000 32 User: 2 647
looking for cut... found 2 cuts, best 0.0
0 0 15360.0000 52 User: 2 660
looking for cut... found 5 cuts, best 0.0
0 0 15388.7500 74 User: 5 679
looking for cut... found 5 cuts, best 0.0
0 0 15401.7500 70 User: 5 682
looking for cut... found 5 cuts, best 0.0
0 0 15422.2500 66 User: 5 693
looking for cut... found 5 cuts, best 0.0
0 0 15508.2500 64 User: 5 703
looking for cut... found 5 cuts, best 0.0
0 0 15549.7500 58 User: 5 712
looking for cut... found 5 cuts, best 0.0
0 0 15560.0000 20 User: 5 720
looking for cut... found 2 cuts, best 0.0
0 0 15560.0000 20 User: 2 722
looking for cut... found 2 cuts, best 0.0
0 0 15560.0000 20 User: 2 723
looking for cut... found 2 cuts, best 0.0
0 0 15560.0000 20 User: 2 724
looking for cut... found 2 cuts, best 0.0
0 0 15561.0000 42 User: 2 732
looking for cut... found 5 cuts, best 0.0
0 0 15561.0000 38 User: 5 735
looking for cut... found 5 cuts, best 0.0
0 0 15562.7500 67 User: 5 738
looking for cut... found 7 cuts, best 0.0
0 0 15564.7500 71 User: 7 743
looking for cut... found 8 cuts, best 0.5
0 0 15610.0000 47 User: 8 761
looking for cut... found 1 cuts, best 0.0
0 0 15675.0000 47 User: 1 771
looking for cut... found 2 cuts, best 0.0
0 0 15701.2500 61 User: 2 797
looking for cut... found 1 cuts, best 0.0
0 0 15726.7500 71 User: 1 801
looking for cut... found 0 cuts, best 1.9999999999999998
0 2 15726.7500 71 15740.2500 801 0 0
Elapsed time = 23.90 sec. (10515.28 ticks, tree = 0.00 MB, solutions = 0)
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 1 cuts, best 0.0
looking for cut... found 0 cuts, best 1.9999999999999998
2 4 15753.0000 47 15741.2500 816 x16243 N 2 1 2
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 1 cuts, best 1.0
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 1 cuts, best 0.0
looking for cut... found 0 cuts, best 1.9999999999999998
5 7 15762.0000 49 15741.2500 834 x25486 N 5 4 4
looking for cut... found 1 cuts, best 1.0
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 1 cuts, best 1.5
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 1 cuts, best 0.0
looking for cut... found 1 cuts, best 0.0
looking for cut... found 1 cuts, best 0.0
looking for cut... found 1 cuts, best 0.0
looking for cut... found 1 cuts, best 0.0
looking for cut... found 1 cuts, best 0.0
looking for cut... found 1 cuts, best 0.0
looking for cut... found 1 cuts, best 0.0
looking for cut... found 1 cuts, best 0.0
looking for cut... found 1 cuts, best 0.0
looking for cut... found 1 cuts, best 0.0
looking for cut... found 1 cuts, best 0.0
looking for cut... found 1 cuts, best 0.0
looking for cut... found 0 cuts, best 1.9999999999999998
* 10+ 10 15900.0000 15741.2500 887 1.00%
10 12 15776.0000 23 15900.0000 15741.2500 887 1.00% x5629 N 10 9 7
* 12 12 integral 0 15787.0000 15741.2500 891 0.29% x6893 U 12 11 9
looking for cut... found 0 cuts, best 2.0
looking for cut... found 1 cuts, best 1.3333333333333335
looking for cut... found 0 cuts, best 1.9999999999999998
* 15 11 integral 0 15781.0000 15741.2500 895 0.25% x6545 U 15 14 11
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 1 cuts, best 1.0
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 0 cuts, best 2.0
looking for cut... found 1 cuts, best 0.0
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 1 cuts, best 1.0
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 1 cuts, best 1.0
looking for cut... found 1 cuts, best 1.3333333333333335
looking for cut... found 1 cuts, best 0.0
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 1 cuts, best 1.0
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 4 cuts, best 1.5
looking for cut... found 1 cuts, best 1.0
looking for cut... found 0 cuts, best 1.9999999999999998
29 16 15777.0000 47 15781.0000 15755.7500 971 0.16% x4489 N 29 27 10
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 1 cuts, best 0.0
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 1 cuts, best 0.0
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 1 cuts, best 0.0
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 0 cuts, best 2.0
57 21 15776.5000 8 15781.0000 15763.0000 1076 0.11% x2871 D 57 56 9
looking for cut... found 0 cuts, best 2.0
* 59 20 integral 0 15780.0000 15763.0000 1078 0.11% x2847 D 59 58 11
looking for cut... found 1 cuts, best 0.0
looking for cut... found 1 cuts, best 0.0
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 0 cuts, best 2.0
looking for cut... found 0 cuts, best 2.0
looking for cut... found 0 cuts, best 2.0
looking for cut... found 0 cuts, best 2.0
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 0 cuts, best 2.0
looking for cut... found 0 cuts, best 2.0
looking for cut... found 0 cuts, best 2.0
looking for cut... found 1 cuts, best 1.0
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 2 cuts, best 0.6666666666666667
looking for cut... found 1 cuts, best 1.0
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 1 cuts, best 1.3333333333333333
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 0 cuts, best 2.0
looking for cut... found 0 cuts, best 2.0
looking for cut... found 0 cuts, best 2.0
91 18 cutoff 15780.0000 15769.5000 1232 0.07% x2871 U 91 65 9
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 0 cuts, best 2.0
looking for cut... found 0 cuts, best 2.0
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 0 cuts, best 2.0
looking for cut... found 1 cuts, best 1.0
looking for cut... found 0 cuts, best 2.0
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 0 cuts, best 2.0
looking for cut... found 0 cuts, best 2.0
looking for cut... found 0 cuts, best 2.0
looking for cut... found 0 cuts, best 2.0
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 0 cuts, best 1.9999999999999998
looking for cut... found 0 cuts, best 2.0
looking for cut... found 0 cuts, best 1.9999999999999998
Cover cuts applied: 3
Zero-half cuts applied: 11
User cuts applied: 362
Root node processing (before b&c):
Real time = 23.88 sec. (10502.64 ticks)
Sequential b&c:
Real time = 49.95 sec. (2187.41 ticks)
------------
Total (root+branch&cut) = 73.83 sec. (12690.05 ticks)
complete!
opt is: 15780.0
{noformat}
Running the code with and without user cuts, and judging from the print statements (which are not entirely reliable), it seems that the bottleneck is actually solving the {mathinline}|V|-1{mathinline} max flow min cut problems. While we know there are better ways to compute a global minimum cut [from our previous discussion|Polynomial Time Separation and UserCutCallback#Polynomial Time Separation for TSP over Cutset Constraints], we will instead explore some heuristics that vastly improve performance without requiring us to write a lot of code.
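If you want a measurement that is more reliable than interleaved print statements, one option is to accumulate wall-clock time around the separation loop with {{System.nanoTime()}}. The helper below is a hypothetical sketch (it is not part of the tutorial code); you would call {{start()}} just before the max flow loop, {{stop()}} just after it, and print {{totalSeconds()}} once the solve finishes.
{code}
//Hypothetical helper, not part of the tutorial code: accumulates the wall-clock time
//spent inside an expensive routine such as the min cut separation loop.
public final class SectionTimer {

    private long totalNanos = 0;
    private long startNanos = -1;

    /** Call immediately before the timed section. */
    public void start() {
        startNanos = System.nanoTime();
    }

    /** Call immediately after the timed section. */
    public void stop() {
        if (startNanos >= 0) {
            totalNanos += System.nanoTime() - startNanos;
            startNanos = -1;
        }
    }

    /** Total time spent between start()/stop() pairs, in seconds. */
    public double totalSeconds() {
        return totalNanos / 1e9;
    }
}
{code}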
Look at the first column of CPLEX's output, which gives the number of nodes generated thus far. When this column is zero, that means we are still working on the root node. Here are a few observations about our cuts:
* More often than not, when there is a violated cut, the most violated cut has weight zero.
* This is particularly true towards the end of the optimization when we are doing branch and bound (perhaps the _extreme_ action of forcing a variable to take the value one or zero causes the solution to make loops).
* The value of the root LP is significantly larger (which is better) with the cutset constraints, as seen below.
* The total number of nodes created using the cutset constraints is much smaller, as seen below.
These final two bullet points indicate that we do stand to gain a lot from the cutset formulation if we can get the running time of our separation routine under control. We now give a few heuristics to improve performance.
{chart:type=bar|title=LP strength on TSPLIB instances|yLabel=LP relaxation/OPT|width=600|height=400|rangeAxisLowerBound=97|rangeAxisUpperBound=100}
||Strategy||eil51||bier127||ch130||ch150||d198||
|Without User Cuts|99.6|99.1|99.7|98.1|78.9|
|With User Cuts|99.7|99.5|99.7|99.4|99.7|
{chart}
{chart:type=bar|title=Nodes created on TSPLIB instances|yLabel=B&B nodes created|width=600|height=400}
||Strategy||eil51||bier127||ch130||ch150||d198||
|Without User Cuts|16|88|142|1765|1536|
|With User Cuts|2|19|44|429|138|
{chart}
found " + cutSets.size() + " cuts, best " + best);
|
Lets run the code again on d198 and take a closer look at the output.
Show d198 output Cloak |
---|
| d198 CPLEX log excerpt |
Running the code with and without user cuts, judging from the print statements (which are not entirely reliable), it seems that the bottle neck is actually solving the max flow min cut problems. While we know there are better ways to compute a global minimum cut from our previous discussion, instead we will explore some heuristics to vastly improve performance without writing a lot of code.Look at the first column of CPLEX's output, which gives the number of nodes generated thus far. When this column is zero, that means we are still working on the root node. Here are a few observations about our cuts: - More often then not, when there is a violated cut, the most violated cut has weight zero.
- This is particularly true towards the end of the optimization when we are doing branch and bound (perhaps the extreme action of forcing a variable to take the value one or zero causes the solution to make loops).
- The value of the root LP is significantly larger (which is better) with the cutset constraints, as seen below
- The total number of nodes created using the cutset constraints is much smaller, as seen below.
h1. Check Connected Components First
The first heuristic is as follows: on the subgraph using only the edges with positive weight (which we have already formed), check for connectivity. If you have more than one connected component, then any cut separating connected components will have weight zero and hence be minimal. As we can check for connected components in {mathinline}O(|V|+|E|){mathinline} time, we will save a lot of time when there are multiple connected components. Viewing the log file above, we see this is frequently the case (conversely, whenever the best cut is zero, there must have been multiple connected components). Notice that the class {{MinCutSolver}} (which is holding the subgraph using only edges with positive weight) already has a method built in:
||Method Name||Return Type||Arguments||Description||
|checkConnectedComponents|Set<Set<V>>| |Returns the nodes of the subgraph using only edges with positive weight, partitioned into connected components.|
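For intuition, here is a minimal, self-contained sketch of how such a partition can be computed in {mathinline}O(|V|+|E|){mathinline} time with a breadth-first search. The class {{ConnectedComponents}} and its adjacency-map input are hypothetical illustrations and are not necessarily how {{MinCutSolver}} is implemented.
{code}
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Map;
import java.util.Queue;
import java.util.Set;

//Hypothetical illustration of a linear-time connectivity check over the positive-weight subgraph.
public final class ConnectedComponents {

    /** Partitions the vertices into connected components by repeated breadth-first search.
     *  The adjacency map is assumed to contain an entry (possibly empty) for every vertex. */
    public static <V> Set<Set<V>> compute(Map<V, Set<V>> adjacency) {
        Set<Set<V>> components = new HashSet<Set<V>>();
        Set<V> visited = new HashSet<V>();
        for (V start : adjacency.keySet()) {
            if (visited.contains(start)) {
                continue;
            }
            //grow one component from an unvisited start vertex
            Set<V> component = new HashSet<V>();
            Queue<V> queue = new ArrayDeque<V>();
            queue.add(start);
            visited.add(start);
            while (!queue.isEmpty()) {
                V v = queue.remove();
                component.add(v);
                for (V neighbor : adjacency.get(v)) {
                    if (visited.add(neighbor)) {
                        queue.add(neighbor);
                    }
                }
            }
            components.add(component);
        }
        return components;
    }
}
{code}
The point is simply that a connectivity check like this is far cheaper than even a single max flow computation, so it is worth trying first.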
To integrate this with the {{UserCutCallback}}, simply add the following code:
{code}
MinCutSolver<V,E> minCutSolver = new MinCutSolver<V,E>(graph,edgeWeights);
Set<Set<E>> cutSets = new HashSet<Set<E>>();
//add here
Set<Set<V>> connectedComponents = minCutSolver.checkConnectedComponents();
if(connectedComponents.size() > 1){
    for(Set<V> connectedComponent: connectedComponents){
        cutSets.add(tspInstance.cutEdges(connectedComponent));
    }
    System.out.println("trivial cuts exist, found " + cutSets.size());
}
else{
    //old
    V source = graph.getVertices().iterator().next();
    //...
    System.out.println("found " + cutSets.size() + " cuts, best " + best);
    //add extra brace
}
for(Set<E> cutEdges: cutSets){
    this.add(cplex.ge(Util.integerSum(cplex, edgeVariables, cutEdges), 2));
}
{code}
Observe the large improvement in running time below. However, we are still not faster than our solver without user cuts, and we still can't practically solve d493 (although we get a lower bound that is 99.6% of the optimum in about 60 seconds). Our separation routine is still too slow, and it is notably ineffective towards the end of the optimization. We give a quick fix in the next section.
{chart:type=bar|title=Run time for TSPLIB instances solved with lazy constraints|yLabel=Run Time (sec)|width=600|height=400}
||Strategy||eil51||bier127||ch130||ch150||d198||
|Without User Cuts|0.11|0.77|1.98|5.09|17.57|
|With User Cuts|5|6|7|8|20|
|Connected Components First|5|6|7|8|20|
{chart}
h1. Use Only Connected Components After Root Node
Recall our observation from the log file that as we progressed in the branch and bound tree, it became rare that we encountered violated cutset constraints where the graph was not disconnected into multiple pieces (a disconnected support graph corresponds to a constraint violated by the full 2.0). A coarse but effective solution to this problem is to simply stop solving max flow problems after a certain point in the optimization, and to rely on the heuristic from the previous section (as a user cut) in combination with the lazy constraints. A convenient (and reasonable) place to stop attempting these max flow problems is when we solve the root node and start branching. That we chose the root node to stop looking for expensive cuts was not a coincidence: our earlier computations on the quality of the cutset LP relaxation suggest that once we have solved the root node, our lower bound will be within 1% of optimal, leaving little room for further improvement from more cutting planes. Using the method {{getNnodes64()}}, we can modify {{MinCutCallback}} with a single line of code as follows:
{code}
//...
if(connectedComponents.size() > 1){
    for(Set<V> connectedComponent: connectedComponents){
        cutSets.add(tspInstance.cutEdges(connectedComponent));
    }
    System.out.println("trivial cuts exist, found " + cutSets.size());
}
else if(this.getNnodes64() == 0){// <--- the only line that changed!
    V source = graph.getVertices().iterator().next();
    //...
{code}
We see in the chart below that this new variation beats out all the other user cut heuristics thus far and is competitive with just using lazy constraints on small instances.
{chart:type=bar|title=Run time for TSPLIB instances solved with lazy constraints|yLabel=Run Time (sec)|width=600|height=400}
||Strategy||eil51||bier127||ch130||ch150||d198||
|Without User Cuts|0.11|0.77|1.98|5.09|17.57|
|With User Cuts|5|6|7|8|20|
|Connected Components First|5|6|7|8|20|
|Stop After Root|5|6|7|8|20|
{chart}
However, the real gain is on the larger, more challenging instances, which we can now actually solve. As the d493 CPLEX log shows,
it took a little over 2 minutes to be provably within 1% of optimal, and we were able to solve the problem completely in about 12 minutes, while it took about 2 hours with only lazy constraints. Again the LP relaxation was 99.6% of optimal, so we can't make a lot of improvement with better cuts (i.e. there is more room for improvement on the primal problem of generating good integer solutions). However, we did ultimately visit over 26,000 nodes. The problem d657 is still well out of reach; after 4571 seconds (over an hour), the MIP relative gap was still 0.33%. Next, we try two more tricks to improve performance, but they are essentially second order corrections as compared to Christofides and Two-Opt.
h1. Use a Back Off Function After the Root Node
We now introduce the abstract class {{BackOffFunction}}. The idea of a back off function is as follows: suppose you repeatedly have the opportunity to take some action, but you do not want to take the action at every opportunity. Further, suppose that as time passes, you want to take the action with a lower frequency. Suppose that {mathinline}w_i{mathinline} is the waiting time between the {mathinline}i{mathinline}th and {mathinline}(i+1){mathinline}st times the action is taken. The following subclasses of {{BackOffFunction}} are provided, each of which deterministically sets {mathinline}w_i{mathinline}:
||Class||Constructor Arguments||
|ConstantWaiting|int|
|LinearWaiting| |
|QuadraticWaiting| |
|ExponentialWaiting| |
|InfiniteWaiting| |
{warning}
If you downloaded version 2-1 or earlier, you are missing {{InfiniteWaiting}}. If you want it, paste
{code}
/**
 * always returns false
 * @author ross
 *
 */
public static class InfiniteWaiting extends BackOffFunction {

    public InfiniteWaiting(){
        super(1);
    }

    @Override
    protected int nextWaitingPeriod() {
        return 1;
    }

    public boolean makeAttempt(){
        return false;
    }
}
{code}
at the end of {{BackOff.java}}. (If {{InfiniteWaiting}} is already at the bottom of the file, then you are fine.)
{warning}
Note that our strategy from the previous section is just {{InfiniteWaiting}}, and our strategy before that is just {{ConstantWaiting(0)}}. Of course many other functions, perhaps more complicated, are possible. Note that {{ConstantWaiting}} technically is not backing off... In any event, back off functions are a robust way to contain a potentially troublesome heuristic, as they limit how much the heuristic can hurt you: in the worst case (when the problem takes a long time to solve), you will stop using the heuristic. The class {{BackOffFunction}} (and its subclasses) are all used through the single method
||Method Name||Return Type||Arguments||Description||
|makeAttempt|boolean| |Call this function at each possible time to take the action. Will return true iff you should take the action this time period.|
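To make the behavior concrete, here is a minimal sketch of how such a class could be implemented. It is an illustration only (the library's actual {{BackOffFunction}} may differ in its constructor and bookkeeping), but it is consistent with the {{nextWaitingPeriod()}} hook used by {{InfiniteWaiting}} above.
{code}
//Sketch of the back off idea (not the library source): makeAttempt() returns true only once
//per waiting period, and nextWaitingPeriod() controls how quickly the waiting periods grow.
public abstract class BackOffFunction {

    private int countdown;     //opportunities to skip before the next attempt
    private int attemptsMade;  //number of times makeAttempt() has returned true

    protected BackOffFunction(int initialWait) {
        this.countdown = initialWait;
    }

    /** The waiting period to use after the next attempt is made. */
    protected abstract int nextWaitingPeriod();

    protected int getAttemptsMade() {
        return attemptsMade;
    }

    /** Returns true iff the action should be taken at this opportunity. */
    public boolean makeAttempt() {
        if (countdown > 0) {
            countdown--;
            return false;
        }
        attemptsMade++;
        countdown = nextWaitingPeriod();
        return true;
    }

    /** Waits i^2 opportunities between the i-th and (i+1)-st attempts. */
    public static class QuadraticWaiting extends BackOffFunction {
        public QuadraticWaiting() {
            super(0);
        }
        @Override
        protected int nextWaitingPeriod() {
            return getAttemptsMade() * getAttemptsMade();
        }
    }
}
{code}
Under this sketch, {{ConstantWaiting(c)}} would simply return {{c}} from {{nextWaitingPeriod()}}, so {{ConstantWaiting(0)}} attempts at every opportunity, while {{InfiniteWaiting}} instead overrides {{makeAttempt()}} to always return false, as in the warning above.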
The idea for using the back off function is as follows: we will still always solve the max flow problems at the root node, but after the root node, when the connectivity test does not produce a new cut, we will only periodically attempt to solve the max flow problems, according to the back off function. I found that {{QuadraticWaiting}} seemed about right, but feel free to try anything! To make the change, you only need to adjust a few lines of code. First, add the back off function as a field, then set it in the constructor (as you please):
{code}
private BackOffFunction backOff;
private double minCutVal;

public MinCutCallback(){
    minCutVal = 1.99;
    this.backOff = new BackOffFunction.QuadraticWaiting();
}
{code}
Then, in the same line of code we adjusted in the previous section, write:
{code}
//...
if(connectedComponents.size() > 1){
    for(Set<V> connectedComponent: connectedComponents){
        cutSets.add(tspInstance.cutEdges(connectedComponent));
    }
    System.out.println("trivial cuts exist, found " + cutSets.size());
}
else if(this.getNnodes64() == 0 || this.backOff.makeAttempt()){// <--- the only line that changed!
    V source = graph.getVertices().iterator().next();
    //...
{code}
{chart:type=bar|title=Run time for TSPLIB instances solved with lazy constraints|yLabel=Run Time (sec)|width=600|height=400}
||Strategy||eil51||bier127||ch130||ch150||d198||
|Without User Cuts|0.11|0.77|1.98|5.09|17.57|
|With User Cuts|5|6|7|8|20|
|Connected Components First|5|6|7|8|20|
|Stop After Root|5|6|7|8|20|
|Backoff after root|5|6|7|8|20|
{chart}
Running the code on d493, the result is disappointing. However, we did visit only 24,270 nodes, fewer than with our previous algorithm. Thus if we could solve the min cut separation problem faster, then maybe it would be worth attempting min cut occasionally after the root node. We attempt this in the next section. If all else fails, compare your code against the complete {{TspIpSolver}} listing.
h1. (Optional) Check Random s-t Cuts
In this final section, we try to fully exploit the following idea: since user cuts are not required to find a violated inequality, it could be more efficient to quickly find one with high probability and fail otherwise. Recall that in all of our previous algorithms, we fixed a source {mathinline}s{mathinline} and then, for every {mathinline}t{mathinline} with {mathinline}t \neq s{mathinline}, we solved a min {mathinline}s{mathinline}-{mathinline}t{mathinline} cut problem. Now instead, for some small fixed number of attempts {mathinline}k{mathinline}, we will randomly select {mathinline}s{mathinline} and {mathinline}t{mathinline} with {mathinline}s \neq t{mathinline} and then compute the minimum {mathinline}s{mathinline}-{mathinline}t{mathinline} cut. Observe that we will find the global minimum cut if, in at least one random attempt, we have {mathinline}s \in S{mathinline} and {mathinline}t \notin S{mathinline} or vice versa, where {mathinline}S{mathinline} denotes one side of a global minimum cut. Next, observe that if {mathinline}|S|{mathinline} is large, then it is very likely we will succeed. If (in the best case scenario) {mathinline}|S| = |V|/2{mathinline}, then in each trial we succeed in finding the global minimum cut with probability roughly {mathinline}1/2{mathinline}, thus on {mathinline}k{mathinline} attempts, we would succeed at least once with probability roughly {mathinline}1-2^{-k}{mathinline}. By tracking a little extra data and adding a few more print statements (you can try it yourself if you are interested), you can actually track the size of the minimum cut. Perhaps surprisingly, it is often quite large!
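To get a feel for the numbers, here is a rough calculation; the uniform sampling assumption matches the code below, but the fraction {mathinline}p{mathinline} is just an illustrative value. If one side {mathinline}S{mathinline} of the global minimum cut contains a fraction {mathinline}p{mathinline} of the nodes, a single random pair of endpoints lands on opposite sides of the cut with probability about {mathinline}2p(1-p){mathinline}, so all {mathinline}k{mathinline} attempts fail with probability about {mathinline}(1-2p(1-p))^k{mathinline}. For example, with {mathinline}p=1/4{mathinline} and {mathinline}k=20{mathinline} (the number of attempts used in the code below), the failure probability is roughly {mathinline}(5/8)^{20} \approx 8 \times 10^{-5}{mathinline}.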
Instead of modifying {{MinCutCallback}}, we will actually create a second class, {{RandomMinCutCallback}}. Paste the following after the class {{MinCutCallback}} in {{TspIpSolver.java}}:
{code}
private class RandomMinCutCallback extends UserCutCallback{

    private int numAttempts;
    private Random rand;
    private BackOffFunction backOff;
    private double minCutVal;

    public RandomMinCutCallback(){
        numAttempts = 20;
        rand = new Random();
        backOff = new BackOffFunction.QuadraticWaiting();
        minCutVal = 1.99;
    }

    @Override
    protected void main() throws IloException {
        //do not attempt to add any cutset constraints until CPLEX is done adding constraints
        if(!this.isAfterCutLoop()){
            return;
        }
        double[] edgeVals = this.getValues(edgeVariablesAsArray);
        Map<E,Double> edgeWeights = getNonZeroEdgeWeights(edgeVals);
        UndirectedGraph<V,E> graph = tspInstance.getGraph();
        MinCutSolver<V,E> minCutSolver = new MinCutSolver<V,E>(graph,edgeWeights);
        Set<Set<E>> cutSets = new HashSet<Set<E>>();
        Set<Set<V>> connectedComponents = minCutSolver.checkConnectedComponents();
        if(connectedComponents.size() > 1){
            for(Set<V> connectedComponent: connectedComponents){
                cutSets.add(tspInstance.cutEdges(connectedComponent));
            }
            System.out.println("trivial cuts exist, found " + cutSets.size());
        }
        else if(this.getNnodes64() == 0 || this.backOff.makeAttempt()){
            double best = Double.MAX_VALUE;
            System.out.print("looking for cut... ");
            List<V> nodes = new ArrayList<V>(graph.getVertices());
            for(int i = 0; i < numAttempts; i++){
                V source = nodes.get(rand.nextInt(nodes.size()));
                V sink = nodes.get(rand.nextInt(nodes.size()));
                if(source != sink){
                    Set<V> cutNodes = minCutSolver.computeMinCut(source, sink);
                    Set<E> cutEdges = tspInstance.cutEdges(cutNodes);
                    double cutVal = Util.calcSum(cutEdges,edgeWeights);
                    best = Math.min(best, cutVal);
                    if(cutVal < minCutVal){
                        cutSets.add(cutEdges);
                    }
                }
            }
            System.out.println("found " + cutSets.size() + " cuts, best " + best);
        }
        for(Set<E> cutEdges: cutSets){
            this.add(cplex.ge(Util.integerSum(cplex, edgeVariables, cutEdges), 2));
        }
    }
}
{code}
Then, replace the end of the {{TspIpSolver}} constructor (where the {{userCut}} option is parsed) with the following, so we can choose the {{RandomMinCutCallback}} instead of the regular {{MinCutCallback}}:
{code}
if(options.contains(Option.userCut)){
    cplex.use(new MinCutCallback());
    if(options.contains(Option.randomizedUserCut)){
        System.err.println("Warning, userCut and randomizedUserCut both selected, ignoring randomizedUserCut");
    }
}
else if(options.contains(Option.randomizedUserCut)){
    cplex.use(new RandomMinCutCallback());
}
{code}
To run your new code, be sure to change the options in {{Main.main}}.
{chart:type=bar|title=Run time for TSPLIB instances solved with lazy constraints|yLabel=Run Time (sec)|width=600|height=400}
||Strategy||eil51||bier127||ch130||ch150||d198||
|Without User Cuts|0.11|0.77|1.98|5.09|17.57|
|With User Cuts|5|6|7|8|20|
|Connected Components First|5|6|7|8|20|
|Stop After Root|5|6|7|8|20|
|Backoff after root|5|6|7|8|20|
|Randomized s-t|5|6|7|8|20|
{chart}
We see that our running time improves slightly! However, some of the improvement comes from "solving" the root relaxation more quickly. Still, Randomized s-t with linear waiting visits only about 20,000 nodes, showing that the cuts from linear waiting with imperfect separation are stronger than perfect cuts with quadratic waiting. That said, as infinite waiting beats everything, at least on this problem we are not gaining anything by using back off functions.
h1. What's next?
Below is output generated by using {{src/output/NodeLog.java}} to turn the node log above into a plot. We used the {{MinCutCallback}} with the {{InfiniteWaiting}} back off function.
(excerpt: d493 randomized node log)
While our lower bounds seem to be converging more slowly than our upper bounds, we see that it still takes a little over 5000 nodes for us to get to our strongest possible upper bound, which will help us prune away more nodes. In the next section, we work on finding strong upper bounds faster.