In [1]:

```
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
```

We begin with some housekeeping. We will be using the `matplotlib` and `seaborn` packages to plot the charts in this notebook.

In [2]:

```
plot_size = 14
plot_width = 5
plot_height = 5
params = {'legend.fontsize': 'large',
          'figure.figsize': (plot_width, plot_height),
          'axes.labelsize': plot_size,
          'axes.titlesize': plot_size,
          'xtick.labelsize': plot_size*0.75,
          'ytick.labelsize': plot_size*0.75,
          'axes.titlepad': 25}
plt.rcParams.update(params)
```

As we are also using the outputs from the charts to create the illustrations in our lecture slides, we need an additional level of control over the appearance of the charts.

The commands above therefore override the default font sizes and figure dimensions.
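A quick way to sanity-check that the overrides took effect, and to undo them if needed, is a minimal sketch like the following (the specific value checked here is just for illustration):

```python
import matplotlib.pyplot as plt

# Apply one of the overrides from above and confirm it took effect
plt.rcParams.update({'axes.titlesize': 14})
print(plt.rcParams['axes.titlesize'])

# rcdefaults() restores matplotlib's built-in defaults
plt.rcdefaults()
```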

In [3]:

```
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
```

We use the `KMeans` algorithm implementation from the `sklearn` package. `sklearn` also provides several other clustering algorithms under the `sklearn.cluster` namespace; you can read more about them in the `sklearn` documentation.

Also from `sklearn`, we use the `make_blobs` command to create a random dataset. As the command name implies, it creates an arrangement of points concentrated around a defined number of "blobs".

In [4]:

```
num_customers = 40
```

In [5]:

```
coord, clust_true = make_blobs(n_samples=num_customers,
                               centers=3,
                               cluster_std=1,
                               random_state=2)
```

The `make_blobs` command returns two outputs, which we save in the `coord` and `clust_true` variables. Let's see what they contain.

In terms of input parameters, aside from the number of elements that we want to create, we can also define how many blobs to use (`centers`) and how far the samples will spread from each "blob" centroid (`cluster_std`).

As the outputs are random, we use `random_state` to provide a "seed" to the function's underlying random number generator, which ensures that we get the same results every time.
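We can demonstrate this reproducibility directly: calling `make_blobs` twice with the same `random_state` produces identical data (the smaller sample size here is just for illustration).

```python
import numpy as np
from sklearn.datasets import make_blobs

# Two calls with the same seed...
a, _ = make_blobs(n_samples=5, centers=3, cluster_std=1, random_state=2)
b, _ = make_blobs(n_samples=5, centers=3, cluster_std=1, random_state=2)

# ...yield exactly the same coordinates
print(np.array_equal(a, b))  # True
```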

In [6]:

```
coord
```

Out[6]:

In [7]:

```
clust_true
```

Out[7]:

It appears that `coord` is a two-dimensional array of X,Y coordinates. `clust_true` holds the index of the cluster that `make_blobs` thinks each coordinate should belong to.

It would be interesting to refer to this array later on, but we are not going to use it for the purposes of our analysis. We will determine the clusters on our own, with the help of the `KMeans` algorithm.
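The shapes of the two outputs confirm this interpretation — a minimal sketch, regenerating the same dataset with the parameters used above:

```python
from sklearn.datasets import make_blobs

coord, clust_true = make_blobs(n_samples=40, centers=3,
                               cluster_std=1, random_state=2)

print(coord.shape)       # (40, 2): an X,Y coordinate per customer
print(clust_true.shape)  # (40,): one cluster index per customer
print(sorted(set(clust_true.tolist())))  # [0, 1, 2], since centers=3
```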

In [8]:

```
plt.scatter(coord[:, 0],
            coord[:, 1],
            s=plot_size*2,
            cmap='viridis');
```

We are using the `scatter` command to plot the customer locations. The `s` parameter defines the size of the nodes, while the `cmap` parameter picks a colour scheme (a `matplotlib` colormap).

`centers` and `cluster_std`?

In [9]:

```
model = KMeans(n_clusters=2)
model.fit(coord)
clust_pred = model.predict(coord)
```

Running the algorithm involves 3 distinct steps:

1) We initialise a `KMeans` instance and decide how many clusters we are going to seek (here: 2).

2) We "train" the model on the coordinates using the `fit()` function.

3) Using the `predict()` function, we assign each point to its nearest cluster centroid.
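Steps 2 and 3 can also be bundled into a single call with `fit_predict()` — a minimal sketch (the `n_init` and `random_state` arguments are additions for reproducibility, not part of the original cell):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

coord, _ = make_blobs(n_samples=40, centers=3, cluster_std=1, random_state=2)

# fit_predict() trains the model and returns the cluster assignments in one go
model = KMeans(n_clusters=2, n_init=10, random_state=0)
clust_pred = model.fit_predict(coord)
print(clust_pred.shape)  # (40,): a cluster index for every customer
```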

In [10]:

```
plt.scatter(coord[:, 0],
            coord[:, 1],
            c=clust_pred,
            s=plot_size*2,
            cmap='Accent')
centers = model.cluster_centers_
plt.scatter(centers[:, 0],
            centers[:, 1],
            c='red',
            s=plot_size*10,
            alpha=0.5);
```

We are using the `scatter` command twice. The first time, we use it to plot the customer coordinates. Instead of a colour, we supply to `c` the predicted index of each cluster. The `Accent` colour scheme will automatically transform this into a (hopefully) visually appealing combination of colours.

The second time that we use `scatter`, we add the cluster centroids to the original graph.

As a rule of thumb, every additional `matplotlib` command within a Jupyter cell simply updates the plot previously created within that cell.

In [11]:

```
model.inertia_
```

Out[11]:

We can obtain the inertia of our model (the within-cluster sum of squares) from the `.inertia_` attribute, which gives us a measure of how well `KMeans` has performed.
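To make the definition concrete, inertia can be recomputed by hand as the sum of squared distances from each point to the centroid of its assigned cluster — a minimal sketch (the `n_init` and `random_state` arguments are additions for reproducibility):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

coord, _ = make_blobs(n_samples=40, centers=3, cluster_std=1, random_state=2)
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(coord)

# Squared distance from each point to the centroid of its assigned cluster
nearest = model.cluster_centers_[model.labels_]
manual_inertia = ((coord - nearest) ** 2).sum()

print(np.isclose(manual_inertia, model.inertia_))  # True
```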

In [12]:

```
from yellowbrick.cluster import KElbowVisualizer
visualizer = KElbowVisualizer(model, k=(2,12), timings=False)
visualizer.fit(coord)  # Fit the data to the visualizer
visualizer.show()      # Finalize and render the figure
```

Out[12]:

As discussed in class, we are using the elbow method to automatically identify a suggested number of clusters, without supplying any additional contextual information about the problem.

This technique usually works well as a preliminary step while performing an initial sift through the data. It must, however, be followed up with a proper facility location analysis.

In this case we are using the `KElbowVisualizer` provided by the `yellowbrick` package. With the parameter `k` we instruct the algorithm to check all values between 2 and 12 clusters.

The term "distortion" refers to the within-cluster sum of squared differences between all elements and their corresponding centroids, and is therefore equivalent to "inertia".
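The elbow curve can also be reproduced without `yellowbrick`, by collecting `.inertia_` over a range of `k` values — a minimal sketch (the narrower range and the `n_init`/`random_state` arguments are illustrative choices):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

coord, _ = make_blobs(n_samples=40, centers=3, cluster_std=1, random_state=2)

# Distortion/inertia shrinks as k grows; the "elbow" is the k after which
# the improvement per extra cluster levels off
inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(coord).inertia_
            for k in range(2, 7)]
print(inertias)
```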