# Notebook 3.1 - Introduction to PuLP

In this notebook we are going to introduce a Python package called pulp (PuLP), which we can use to represent and solve optimisation problems.

The pulp package provides a modelling framework - this means that it does not solve optimisation problems on its own, but rather provides us with a set of commands and object types that we can use to represent mathematical formulations and problem instances within Python. Another popular modelling framework for Python that you might encounter is pyomo.

The actual solution of the problem instances is carried out by another type of tool called a solver. PuLP calls the solver on our behalf, supplying the structure of the model and the values of our parameters - once a solution is found, it is sent back to PuLP.

PuLP can only be used to model linear problems that fall under the LP, IP or MILP families (i.e. with real variables, integer variables, or a combination of the two). Unfortunately it cannot be used for non-linear problems; however, there are many other Python libraries that we can use for those.
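Which family a model falls under is determined by the categories of its variables. As a small aside (the variable names here are purely illustrative), PuLP distinguishes continuous from integer variables via the cat parameter of LpVariable, which we will meet properly in Step 4:

```python
from pulp import LpVariable, LpContinuous, LpInteger

# A continuous (real-valued) variable, as used in an LP
r = LpVariable("r", cat=LpContinuous)

# An integer variable, which would turn the model into an IP or MILP
k = LpVariable("k", cat=LpInteger)

print(r.cat, k.cat)
```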

By default, PuLP uses the bundled CBC solver - this is Free and Open Source, but unfortunately not the best-performing solver available (in terms of speed or the problem sizes that can be tackled). Other options are CPLEX and Gurobi, which can be easily interfaced with PuLP. Free student licenses are available for both - we will not be using them in this module, however, as the default solver will be more than enough for our materials.
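If you are curious which solver interfaces your PuLP installation knows about, and which are actually usable on your machine, recent PuLP versions (2.x, as in the output below) provide a helper for this:

```python
import pulp

# All solver interfaces PuLP knows about (installed or not)
print(pulp.listSolvers())

# Only those actually available on this machine
print(pulp.listSolvers(onlyAvailable=True))
```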

As this is the first time that we will be modelling a linear programme, we will start with something simple - the abstract model that we used in our Geometric Method video:

$$\begin{array}{ll@{}ll} \text{minimize} & x_1+x_2 &\\ \text{subject to}& 2x_1+3 x_2 \geq 6 & \\ & - x_1+2 x_2 \leq 2 & \\ & 5x_1+x_2 \geq 10 \end{array}$$

## Step 1 - Installing Prerequisites

First, we need to make sure that the pulp package has been installed - in contrast to Matplotlib, NumPy and Pandas, PuLP is very rarely installed by default, so you will almost certainly need to perform the next step.

In [1]:
!pip install pulp --progress-bar off

Requirement already satisfied: pulp in /opt/anaconda3/lib/python3.8/site-packages (2.3.1)
Requirement already satisfied: amply>=0.1.2 in /opt/anaconda3/lib/python3.8/site-packages (from pulp) (0.1.4)
Requirement already satisfied: docutils>=0.3 in /opt/anaconda3/lib/python3.8/site-packages (from amply>=0.1.2->pulp) (0.16)
Requirement already satisfied: pyparsing in /opt/anaconda3/lib/python3.8/site-packages (from amply>=0.1.2->pulp) (2.4.7)


pip: We use this to manage Python packages. In the cloud, we need to run this command every time; on your own computer, running it once will suffice.

pulp: The name of the package that we need to install.

--progress-bar off: This disables the animated progress bar that appears when pip is used on the command line but is not rendered correctly within a notebook.

## Step 2 - Importing PuLP Package

We are going to use the PuLP package to model and solve our optimisation model.

In [2]:
from pulp import *


The above command adds all PuLP commands to our environment - we can use them directly, without having to prefix them with the pulp module name.

## Step 3 - Initialise an empty problem

In [3]:
prob = LpProblem('problem',LpMinimize)


We initialise the problem with the LpProblem() function, immediately specifying that this is going to be a linear minimization problem. Our problem instance is saved in a variable called prob.

Note that since we imported the entire namespace of the pulp package using the import * command, we can use the LpProblem() function directly. If we had used import pulp instead, then we would have to write pulp.LpProblem().

The LpMinimize argument specifies that this is a minimization problem. The other option would have been LpMaximize.
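For completeness, the equivalent namespaced style (using import pulp rather than import *) would look like this:

```python
import pulp

# Same initialisation as above, but every name is prefixed with the module
prob = pulp.LpProblem('problem', pulp.LpMinimize)
print(prob.name)
```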

## Step 4 - Define decision variables

As this is an explicit formulation, we need to define every single decision variable in the model. We can do this with the LpVariable command, which also allows us to define lower and upper bounds with the lowBound and upBound parameters. We don't have any such limits in our model, and therefore we will skip these for now.

Every newly created decision variable is saved to a local Python variable, e.g. x1, which we will use later to build the objective function and the constraints.

When creating a variable, we also pass a name string, which PuLP uses to refer to the variable internally - this will prove useful later when we want to export the solution found by PuLP.

In [4]:
x1 = LpVariable("x1")
x2 = LpVariable("x2")
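Although our model needs no bounds, a hypothetical bounded variable y could be declared like this (the name and bounds are illustrative):

```python
from pulp import LpVariable

# Hypothetical variable with 0 <= y <= 10;
# omitting lowBound or upBound leaves the variable unbounded on that side
y = LpVariable("y", lowBound=0, upBound=10)
print(y.lowBound, y.upBound)
```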


## Step 5 - Define objective function

From this point onwards, we can start populating our optimization problem.

We can do this by appending each expression to our problem instance prob, using the += operator.

Our objective function is:

$x_1+x_2$

We can append an objective to prob by translating the above function into a Python expression, using the variable names that we defined earlier.

In [5]:
prob += x1 + x2


## Step 6 - Add constraints

Constraints are added in the same way, with the only difference being that we also include a comparison against the RHS.

In [6]:
prob +=  2*x1 + 3*x2  >= 6
prob +=   -x1 + 2*x2  <= 2
prob +=  5*x1 +   x2  >= 10
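As an optional refinement, constraints can be given names by appending a (constraint, name) tuple instead of a bare inequality - this makes the printed formulation easier to read. A small sketch (the problem and the name "demand" are illustrative):

```python
from pulp import LpProblem, LpMinimize, LpVariable

prob = LpProblem('named_example', LpMinimize)
x1 = LpVariable("x1")
x2 = LpVariable("x2")
prob += x1 + x2  # objective

# A (constraint, name) tuple attaches a readable name to the constraint
prob += (2*x1 + 3*x2 >= 6, "demand")

print(list(prob.constraints.keys()))
```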


## Step 7 - Print formulation

If you want to show the formulation of the model that you have just built, you can use the command below.

This will be more useful later, when we will be programmatically creating indexed symbolic models, and we will want to double check whether everything has been defined correctly.

In [7]:
print(prob)

problem:
MINIMIZE
1*x1 + 1*x2 + 0
SUBJECT TO
_C1: 2 x1 + 3 x2 >= 6

_C2: - x1 + 2 x2 <= 2

_C3: 5 x1 + x2 >= 10

VARIABLES
x1 free Continuous
x2 free Continuous



## Step 8 - Solve the problem instance

Now that we have completed our formulation, we solve the problem instance using the solve() command.

We save the status code returned by the solver in the status variable, and use the following command to print the status of our solution.

In [8]:
status = prob.solve()

print(f'STATUS\n{LpStatus[status]}\n')

STATUS
Optimal
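The solve() call returns an integer code, and LpStatus is simply a dictionary that maps these codes to readable strings. For reference, the full mapping can be printed like this:

```python
from pulp import LpStatus

# LpStatus maps the integer code returned by solve() to a string
for code, label in sorted(LpStatus.items()):
    print(code, label)
```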



## Step 9 - Print solution

OK, we know that the solver managed to find a result, but what are the values of the solution?

We can print the values of the variables with a simple for loop.

In [9]:
print('SOLUTION:')
for v in prob.variables():
    print(f'\t\t{v.name} = {v.varValue}')

print('\n') # Prints a blank line
print(f'OBJECTIVE VALUE: {prob.objective.value()}')

SOLUTION:
x1 = 1.8461538
x2 = 0.76923077

OBJECTIVE VALUE: 2.61538457
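As a quick sanity check, we can plug the reported values back into the three constraints by hand; a small tolerance is needed because the solver works in floating point:

```python
# Values reported by the solver (rounded by PuLP's output precision)
x1, x2 = 1.8461538, 0.76923077
tol = 1e-5  # allow for floating-point rounding

assert 2*x1 + 3*x2 >= 6 - tol    # first constraint holds
assert  -x1 + 2*x2 <= 2 + tol    # second constraint holds
assert 5*x1 +   x2 >= 10 - tol   # third constraint holds
print(x1 + x2)  # the objective value
```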


## Summary

That was all - simple, wasn't it?

We have included the entire code that we used as a single block.

from pulp import *

# Create a LP Minimization problem
prob = LpProblem('Problem', LpMinimize)

# Create problem Variables
x1 = LpVariable("x1")
x2 = LpVariable("x2")

# Objective Function
prob += x1 + x2

# Constraints:
prob +=  2*x1 + 3*x2  >= 6
prob +=   -x1 + 2*x2  <= 2
prob +=  5*x1 +   x2  >= 10

# Display the problem
print(prob)

status = prob.solve()

print(f'STATUS\n{LpStatus[status]}\n')

print('SOLUTION:')
for v in prob.variables():
    print(f'\t\t{v.name} = {v.varValue}')

print('\n') # Prints a blank line
print(f'OBJECTIVE VALUE: {prob.objective.value()}')