Benchmarking the MTZ Model

Santa Claus tried to use the MTZ model to plan a route for all the children in London. But soon after he clicked the run button, his server farm caught fire!

Should we perhaps check how well the MTZ model performs for different problem sizes?

In this notebook we will see how the solution time of the MTZ formulation of the TSP grows as we increase the size of the problem.

To do this, we will use a for-loop that measures how long the model takes to solve for a range of problem sizes.

Because solution times can vary between different problem instances of the same size, we will generate 5 instances of each size by varying the value of our blob generator's random_state parameter.
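The blob generator itself is not shown in this cell; as a minimal stand-in (assuming it produces 2-D city coordinates seeded by random_state, much like scikit-learn's make_blobs), the hypothetical helper below generates a reproducible instance and its distance matrix:

```python
import numpy as np

def make_instance(n_cities, random_state):
    """Generate n_cities random 2-D coordinates and their distance matrix.

    A stand-in for the notebook's blob generator: the random_state
    parameter seeds the generator, so the same value always reproduces
    the same instance.
    """
    rng = np.random.default_rng(random_state)
    points = rng.uniform(0, 100, size=(n_cities, 2))
    # Pairwise Euclidean distance matrix via broadcasting.
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    return points, dist
```

Varying random_state from 0 to 4 then yields the 5 distinct instances per size.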

As we will be solving the model many times, we are going to define a function that encapsulates the entire process of instance generation, model formulation, and execution.

The resulting time_solver function returns the run-time in seconds.
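The actual time_solver builds and solves the MTZ model; since that code is not shown in this cell, the sketch below uses a brute-force tour search purely as a placeholder for the build-and-solve step, to illustrate the timing pattern (seeded instance in, elapsed seconds out):

```python
import itertools
import math
import random
import time

def time_solver(n_cities, random_state):
    """Generate an instance, solve it, and return the run-time in seconds.

    The brute-force search below is only a placeholder for the MTZ
    build-and-solve step the notebook performs with a MILP solver.
    """
    rng = random.Random(random_state)
    points = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(n_cities)]

    start = time.perf_counter()
    best = math.inf
    # Fix city 0 as the start so rotations of the same tour are not re-counted.
    for perm in itertools.permutations(range(1, n_cities)):
        tour = (0,) + perm
        length = sum(
            math.dist(points[tour[i]], points[tour[(i + 1) % n_cities]])
            for i in range(n_cities)
        )
        best = min(best, length)
    return time.perf_counter() - start
```

time.perf_counter is used rather than time.time because it is a monotonic, high-resolution clock intended exactly for measuring elapsed intervals.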

The following code implements the nested for-loop that actually calls the solver. When you run it, it prints the run-time for each of the 5 instances of a given size, followed by their average.
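The loop structure can be sketched as follows. The sizes list and the placeholder time_solver are hypothetical (the notebook's own time_solver solves the MTZ model instead); only the outer-loop-over-sizes, inner-loop-over-random_state pattern is the point:

```python
import itertools
import math
import random
import time

def time_solver(n_cities, random_state):
    # Minimal stand-in for the notebook's time_solver: build a seeded
    # instance, solve it (here by brute force), return elapsed seconds.
    rng = random.Random(random_state)
    pts = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(n_cities)]
    start = time.perf_counter()
    min(
        sum(math.dist(pts[t[i]], pts[t[(i + 1) % n_cities]]) for i in range(n_cities))
        for t in ((0,) + p for p in itertools.permutations(range(1, n_cities)))
    )
    return time.perf_counter() - start

sizes = [5, 6, 7]    # hypothetical range of problem sizes
n_instances = 5      # one run per random_state value

for n in sizes:
    times = [time_solver(n, rs) for rs in range(n_instances)]
    for rs, t in enumerate(times):
        print(f"n={n}, random_state={rs}: {t:.5f} s")
    print(f"n={n}, average over {n_instances} instances: {sum(times) / n_instances:.5f} s")
```

Averaging over the 5 seeded instances smooths out instance-to-instance variation, so the trend across sizes reflects the formulation rather than one lucky or unlucky instance.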