How to predict how much a group of customers will spend

You may have read my previous post about predicting the spend of a single known customer. There is a related problem which is predicting the total spend of all your customers, or a sizeable segment of them.

Time series approach: segments of customers

If you don’t need to predict the spend of an individual customer, but you’re happy to predict it for groups of customers, you can bundle customers up into groups. For example rather than needing to predict the future spend of Customer No. 23745993, you may want to predict the average spend of all customers in Socioeconomic Class A at Store 6342.

In this case the great advantage is that you would not have so many empty values in your past time series. So your time series may look like this:

This means you can use a time series library such as Prophet, developed by Facebook.

Here’s what Prophet produces when I give it the data points I showed above, and ask it to produce a prediction for the next few days. You can see that it’s picked up the weekly cycle correctly.
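If you'd like to try this yourself, here is a minimal sketch of how you might hand daily spend totals for a segment to Prophet and ask for a two-week forecast. The CSV file and original column names are placeholders for wherever your segment totals live; Prophet itself just needs a dataframe with 'ds' and 'y' columns.

```python
# A minimal sketch: forecasting daily spend for one customer segment with Prophet.
# The CSV filename and its 'date'/'spend' columns are illustrative assumptions.
import pandas as pd
from prophet import Prophet  # older installs use: from fbprophet import Prophet

df = pd.read_csv("segment_a_store_6342_daily_spend.csv")
df = df.rename(columns={"date": "ds", "spend": "y"})  # the column names Prophet expects

model = Prophet(weekly_seasonality=True, yearly_seasonality=True)
model.fit(df)

# Forecast the next 14 days, including uncertainty intervals.
future = model.make_future_dataframe(periods=14)
forecast = model.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail(14))
```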

This approach would be very useful if you only needed the data for budgeting or stock planning purposes for an individual store and not for individual customers.

However if you had small enough customer segments, you may find that the prediction for a customer’s segment is adequate as a prediction for that customer.

Multilevel models

The next step up in complexity is multilevel models, where you fit a model at each level (for example one per region or economic group of customers) and combine them into a single overall model.

Combinations

To get the maximum predictive power you can try ways of combining time series methods with a predictive modelling approach, such as taking the results of a time series prediction for a customer’s segment and using it as input to a predictive model.

Getting started

If you have a prediction problem in retail, or would like some help with another problem in data science, I’d love to hear from you. Please contact me via the contact form.

How well can you predict an individual customer’s spending habits?

You may have read my previous post about customer churn prediction. Another problem that’s just as important as predicting lost customers is predicting customers’ daily expenditure.

Let me give you an example: you work for a large retailer which has a loyalty card scheme. You’d like to predict for a given customer how much they are likely to spend over the next week.

In this case there would normally be clear patterns:

  • customers buy more on Mondays than on Saturdays (weekly cycle)
  • there might be a monthly cycle and a yearly cycle
  • Christmas, Easter and bank holidays might drive an explosion in demand

However there are a few problems when you get down to customer level:

  • some customers may have visited your shop only once
  • some have visited hundreds of times
  • a customer might not enter the shop for a few months but then come back (dormant customer)

What this means is that, if you look at all customers’ expenditures (or averaged over a region), you will probably see some recognisable weekly, monthly and seasonal patterns:

However for a single customer it’s hard to make out any recognisable pattern among all the noise. The weekly and yearly trends were only apparent when we averaged over all customers.

So how can you go about predicting the future expenditure of a given customer the next time they enter the shop?

This problem is quite interesting as there are at least two very different approaches to solving it, from two different traditional disciplines:

  • Predictive modelling (from the field of machine learning) – focusing on an individual customer
  • Time series analysis (from the field of statistics) – focusing on groups of customers

This means that depending on whether you hire somebody with a machine learning background, or somebody with a statistics background, you may get two contradictory answers.

In this post I’ll talk only about the predictive modelling approach.

If you are interested in predicting the first graph, which is averages for groups of customers, you might want to look into my next post on time series analysis.

Predictive model: individual customer

The simplest way would be to use a predictive modelling machine learning approach. For example you could use Linear Regression. If you are unfamiliar with how to do this I recommend Andrew Ng’s Coursera course.

You would provide as input to your Regression model:

  • Last purchase value (if available)
  • Second last purchase value (if available)
  • Third last purchase value (if available)

The output you want it to predict is:

  • The next purchase value

This will predict the next purchase with some accuracy. After all, the biggest factor in predicting what someone will buy is what they have bought in the past.
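To make that concrete, here is a minimal sketch of the idea using scikit-learn’s LinearRegression. The file and column names are illustrative assumptions; the point is simply to build lagged purchase values per customer and regress the next value on them.

```python
# A sketch of the lagged-purchase regression described above, using scikit-learn.
# Assumes a purchase history with 'customer_id', 'date' and 'value' columns (illustrative names).
import pandas as pd
from sklearn.linear_model import LinearRegression

purchases = pd.read_csv("purchases.csv", parse_dates=["date"])
purchases = purchases.sort_values(["customer_id", "date"])

# Build lag features within each customer's own purchase history.
for lag in (1, 2, 3):
    purchases[f"lag_{lag}"] = purchases.groupby("customer_id")["value"].shift(lag)

train = purchases.dropna(subset=["lag_1", "lag_2", "lag_3"])

model = LinearRegression()
model.fit(train[["lag_1", "lag_2", "lag_3"]], train["value"])

# Predict the next purchase value for a customer whose last three purchases
# were £25, £40 and £32 (most recent first).
print(model.predict([[25.0, 40.0, 32.0]]))
```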

However I’m sure you can easily think of some cases where this will break down. For example:

  • A customer with no past purchases
  • Over Christmas if purchases tend to be bigger

You can improve the performance of the Predictive Model approach by making it a little more sophisticated:

  • Add more input features to the Regression model such as “day of week”, “day of year”, “isChristmasSeason” etc.
  • Switch to a Polynomial Regression Model, or Random Forest Regression. This will allow your model to become more powerful if the relationships between your inputs and outputs are not entirely linear, although it comes with a risk of your predictions going crazy (like predicting huge numbers) if you are not careful! There’s a rough sketch of this after the list.
  • Make different models for different customer segments or times of year
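As a rough illustration of the first two improvements, here is a sketch that adds calendar features and swaps in a Random Forest regressor. The column names and the “December equals Christmas season” rule are simplifying assumptions, not a recipe.

```python
# A sketch of the more sophisticated version: calendar features plus a tree-based model.
# Column names and the Christmas-season rule are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

purchases = pd.read_csv("purchases.csv", parse_dates=["date"])
purchases = purchases.sort_values(["customer_id", "date"])

for lag in (1, 2, 3):
    purchases[f"lag_{lag}"] = purchases.groupby("customer_id")["value"].shift(lag)

purchases["day_of_week"] = purchases["date"].dt.dayofweek
purchases["day_of_year"] = purchases["date"].dt.dayofyear
purchases["is_christmas_season"] = (purchases["date"].dt.month == 12).astype(int)

features = ["lag_1", "lag_2", "lag_3", "day_of_week", "day_of_year", "is_christmas_season"]
train = purchases.dropna(subset=features)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(train[features], train["value"])
```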

Getting started

If you have a prediction problem in retail, or would like some help with another problem in data science, I’d love to hear from you. Please contact me via the contact form.

Building explainable machine learning models

Sometimes as data scientists we will encounter cases where we need to build a machine learning model that should not be a black box, but which should make transparent decisions that humans can understand. This can go against our instincts as scientists and engineers, as we would like to build the most accurate model possible.

In my previous post about face recognition technology I compared some older hand-designed technologies which are easily understandable for humans, such as facial feature points, to the state of the art face recognisers which are harder to understand. This is an example of the trade-off between performance and interpretability.

The need for explanations

Imagine that you have applied for a loan and the bank’s algorithm rejects you without explanation. Or an insurance company gives you an unusually high quote when the time comes to renew. A medical algorithm may recommend a further invasive test, against the best instincts of the doctor using the program.

Or maybe the manager of the company you are building the model for doesn’t trust anything he or she doesn’t understand, and has demanded an explanation of why you predicted certain values for certain customers.

All of the above are real examples where a data scientist may have to trade some performance for interpretability. In some cases the choice comes from legislation. For example some interpretations of GDPR give an individual a ‘right to explanation’ of any algorithmic decision that affects them.

How can we make machine learning models interpretable?

One approach is to avoid highly opaque models such as Random Forests or Deep Neural Networks in favour of more linear models. By simplifying the architecture you may end up with a less powerful model, but the loss in accuracy may be negligible. Sometimes by reducing the number of parameters you end up with a model that is more robust and less prone to overfitting. You may also be able to train a complex model first and use it to identify the most important features, or to suggest clever preprocessing steps that let you keep the final model linear.

An example would be if you have a model to predict sales volume based on product price, day, time, season and other factors. If your manager or customer wanted an explainable model, you might convert weekdays, hours and months into a one-hot encoding, and use these as inputs to a linear regression model.
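As a rough sketch of what that might look like in scikit-learn, the pipeline below one-hot encodes the calendar fields and fits an ordinary linear regression, so each weekday, hour and month gets its own readable coefficient. The column names are assumptions made for the sake of the example.

```python
# A minimal sketch of the explainable sales model described above: one-hot encode the
# calendar fields and fit a plain linear regression. Column names are illustrative.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

sales = pd.read_csv("sales.csv")  # assumed columns: price, weekday, hour, month, volume

model = Pipeline([
    ("encode", ColumnTransformer(
        [("calendar", OneHotEncoder(handle_unknown="ignore"), ["weekday", "hour", "month"])],
        remainder="passthrough",  # keep price as a plain numeric input
    )),
    ("regression", LinearRegression()),
])

model.fit(sales[["price", "weekday", "hour", "month"]], sales["volume"])

# Each one-hot column gets its own coefficient, so the effect of e.g. "Saturday"
# or "December" on sales volume can be read straight off the model.
print(model.named_steps["regression"].coef_)
```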

Computer vision

The best models for image recognition and classification are currently Convolutional Neural Networks (CNNs). But they present a problem from a human comprehension point of view: if you want to make the 10 million numbers inside a CNN understandable for a human, how would you proceed? If you’d like a brief introduction to CNNs please check out my previous post on face recognition.

You can make a start by breaking the problem up and looking at what the different layers are doing. We already know that the first layer in a CNN typically recognises edges, later layers are activated by corners, and then by gradually more and more complex shapes.

You can take a series of images of different classes and look at the activations at different points. For example if you pass a series of dog images through a CNN:

Image credit: Zeiler & Fergus (2014) [1]

…by the 4th layer you can see patterns like this, where the neural network is clearly starting to pick up on some kind of ‘dogginess’.

Image credit: Zeiler & Fergus (2014) [1]

Taking this one step further, we can tamper with different parts of the image and see how this affects the activation of the neural network at different stages. By greying out different parts of this Pomeranian we can see the effect on Layer 5 of the network, and then work out which parts of the original image scream ‘Pomeranian’ most loudly to the network.

Image credit: Zeiler & Fergus (2014) [1]

Using these techniques, if your neural network face recogniser backfires and lets an intruder into your house, and you still have the input images, it would be possible to unpick the CNN and work out where it went wrong. Unfortunately going deep into a neural network like this takes a lot of time, so plenty of work remains to be done here.

Moving towards linear models

Imagine you have trained a price elasticity model that uses 3rd order polynomial regression. But your client requires something easier to understand. They want to know: for each additional penny taken off the price of the product, what will be the increase in sales? Or for each additional year of a vehicle’s age, what is the depreciation in price?

You can try a few tricks to make this more understandable. For example you can convert your polynomial model to a series of joined linear regression models. This should give almost the same power but could be more interpretable.

  • Traditional polynomial regression fits a single curve, showing car price depreciation by age of vehicle.
  • Splitting the data into segments and applying a linear regression to each segment shows a ballpark rate of depreciation at different stages, which salespeople might find useful for quick calculations.
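Here is a rough sketch of that segmented version for the car depreciation example: split the age axis into brackets and fit an ordinary linear regression in each one, so the slope in each bracket is the ballpark depreciation per year. The brackets, file and column names are illustrative assumptions.

```python
# A sketch of replacing one polynomial curve with a series of per-segment linear fits.
# Assumes a table of used-car listings with 'age_years' and 'price' columns (illustrative).
import pandas as pd
from sklearn.linear_model import LinearRegression

cars = pd.read_csv("car_prices.csv")

# Split the age axis into brackets and fit a separate straight line in each one.
brackets = [(0, 3), (3, 7), (7, 15)]
for low, high in brackets:
    segment = cars[(cars["age_years"] >= low) & (cars["age_years"] < high)]
    fit = LinearRegression().fit(segment[["age_years"]], segment["price"])
    # The slope is the ballpark depreciation per year within this bracket.
    print(f"{low}-{high} years: about £{-fit.coef_[0]:.0f} depreciation per year")
```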

Recommendation algorithms

Recommendation systems such as Netflix’s movie recommendations are notoriously hard to get right, and users are often mystified by what they see as strange recommendations. The recommendations are usually calculated directly or indirectly from previous shows that the user has watched. So the simplest way of explaining a recommendation system is to display a message such as ‘we’re recommending you The Wire because you watched Breaking Bad’ – which is Netflix’s approach.

General method applicable to all models

There have been some efforts to arrive at a technique that can demystify and explain a machine learning model of any type, no matter how complex.

The technique that I described for investigating a convolutional neural network can be broadly extended to any kind of model. You can try perturbing the input to a machine learning model and monitoring how its output responds. For example if you have a text classification model, you can change or remove different words in the document and watch what happens.

LIME

One implementation of this technique is called LIME, or Local Interpretable Model-Agnostic Explanations[2]. LIME works by taking an input and creating thousands of duplicates with small noise added, and passing these duplicate inputs to the ML model and comparing the output probabilities. This way it’s possible to investigate a model that would otherwise be a black box.
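As a rough illustration, here is how you might call LIME’s text explainer on a black-box classifier. The tiny training set and class names below are stand-ins, there purely so the sketch runs end to end; in practice the classifier would be your real stylometry or spam model.

```python
# A minimal sketch of using LIME's text explainer on a black-box text classifier.
# The toy training texts, labels and class names are purely illustrative.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["I don't think we should go", "the police examined the scene",
         "and I don't know why", "the detective found the evidence"]
labels = [0, 1, 0, 1]  # 0 = author A, 1 = author B (illustrative)

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["author A", "author B"])
explanation = explainer.explain_instance(
    "and I don't think the police know why",  # the document to explain
    classifier.predict_proba,                  # callable: list of strings -> probabilities
    num_features=5,                            # how many words to show in the explanation
    num_samples=500,                           # how many perturbed copies to generate
)
print(explanation.as_list())  # (word, weight) pairs that pushed the prediction
```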

Trying out LIME on a CNN text classifier

I tried out LIME on my author identification model. I gave the model an excerpt of one of JK Rowling’s non-Harry Potter novels, where it correctly identified the author, and asked LIME for an explanation of the decision. So LIME tried changing words in the text and checked which changes increase or decrease the probability that JK Rowling wrote it.

LIME explanation for an extract of The Cuckoo’s Calling by JK Rowling, for predictions made by a stylometry model trained on some of her earlier Harry Potter novels

LIME’s explanation of the stylometry model is interesting as it shows how the model has recognised the author by subsequences of function words such as ‘and I don’t…’ (highlighted in green) rather than strong content words such as ‘police’.

However the insight provided by LIME is limited because under the hood, LIME is perturbing words individually, whereas a neural network based text classifier looks at patterns in the document on a larger scale.

I think that for more sophisticated text classification models there is still some work to be done on LIME so that it can explain more succinctly what subsequences of words are the most informative, rather than individual words.

LIME on images

With images, LIME gives some more exciting results. You can get it to highlight the pixels in an image which led to a certain decision.

Image credit: Ribeiro, Singh, Guestrin (2016) [2]
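For a flavour of what this looks like in code, here is a sketch with a stock Keras ImageNet model standing in for whatever CNN you actually want to explain, and a placeholder image path.

```python
# A sketch of highlighting the pixels behind a prediction with LIME's image explainer.
# The Keras InceptionV3 model and the image path are stand-ins for your own model and data.
import numpy as np
from lime import lime_image
from tensorflow.keras.applications.inception_v3 import InceptionV3, preprocess_input
from tensorflow.keras.preprocessing import image as keras_image

model = InceptionV3(weights="imagenet")

def classify(images):
    # LIME passes a batch of perturbed copies of the image; return class probabilities.
    return model.predict(preprocess_input(np.array(images)))

img = keras_image.img_to_array(
    keras_image.load_img("pomeranian.jpg", target_size=(299, 299))  # placeholder path
)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    img.astype("double"), classify, top_labels=3, hide_color=0, num_samples=1000
)
# Keep only the superpixels that argued most strongly for the top predicted class.
highlighted, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)
```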

Conclusion

There is a huge variety of machine learning models being used and deployed for diverse purposes, and their complexity is increasing. Unfortunately many of them are still used as black boxes, which can pose a problem when it comes to accountability, industry regulation, and user confidence in entrusting important decisions to algorithms as a whole.

The simplest solution is sometimes to make compromises, such as trading performance for interpretability. Simplifying machine learning models for the sake of human understanding can have the advantage of making models more robust.

Thankfully there have been some efforts to build explainability platforms to make black box machine learning more transparent. In this article I have experimented with LIME, which aims to be model-agnostic, but there are other alternatives available.

Hopefully in time regulation will catch up with the pace of technology, and we will see better ways of producing interpretable models which do not reduce performance.

References

  1. Zeiler M.D., Fergus R. (2014) Visualizing and Understanding Convolutional Networks. In: Fleet D., Pajdla T., Schiele B., Tuytelaars T. (eds) Computer Vision – ECCV 2014. ECCV 2014. Lecture Notes in Computer Science, vol 8689. Springer, Cham
  2. Ribeiro M.T., Singh S., Guestrin C. (2016). “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. Proceedings of the NAACL 2016 Demonstrations Session, 97–101. DOI: 10.18653/v1/N16-3020.

How to improve conversions without losing customer data

You may have had the experience of filling out a long form on a website. For example, creating an account to make a purchase, or applying for a job, or renewing your car insurance.

A long form can lead to customers losing interest and taking their business elsewhere. Each additional field can result in up to 10% more customers dropping out instead of completing the form.

If your business has a form like this, one reason you may not be able to simplify it is that the data you are requesting is valuable.

There are lots of ways to address the problem, such as improving the design of the form, or splitting it across multiple pages, removing the “confirm password” field, and so on. But it appears that most fields can’t be removed without inherently degrading the data you collect on these new customers.

However with machine learning it’s possible to predict the values of some of these fields, and completely remove them from the form without sacrificing too much information. This way you gain more customers. You would need to have a history of what information customers have provided in the past, in order to remove the fields for new customers.

A few examples

  • On a small ads site, you require users to upload a photo, or fill out a description of the item they’re selling. With machine learning you can suggest a price from the description, or a title from the photo, resulting in less typing for the user (there’s a rough sketch of this after the list).
  • On a recruitment website, you can use machine learning to deduce lots of data (name, address, salary, desired role) directly from the candidate’s CV when it’s uploaded. Even salary can be predicted although it’s not usually explicit in the CV.
  • On a car insurance website, it’s possible to retrieve make, model, car tax and insurance status from an image of the car.
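As a rough sketch of the first example, you could fit a simple text regression on past listings and use it to suggest a price for a new description. The file and column names are placeholders; the point is just the shape of the solution.

```python
# A sketch of suggesting a price from the listing description.
# The CSV file and its 'description'/'price' columns are illustrative assumptions.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

ads = pd.read_csv("past_listings.csv")

price_model = make_pipeline(TfidfVectorizer(min_df=5), Ridge())
price_model.fit(ads["description"], ads["price"])

# Suggest a price for a new listing so the seller doesn't have to fill in the field.
print(price_model.predict(["Mountain bike, 21 gears, barely used, small scratch on frame"]))
```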

If you are interested and would like to know more please send me a message.

For an example of how data can be inferred from an unstructured text field please check out my forensic stylometry demo.

Building a face recogniser: traditional methods vs deep learning

Face recognition technology has existed for quite some time, but until recently it was not accurate enough for most purposes.
Now it seems that face recognition is everywhere:

  • you upload a photo to Facebook and it suggests who is in the picture
  • your smartphone can probably recognise faces
  • lots of celebrity look-a-like apps have suddenly appeared on the app stores
  • police and antiterrorism units all over the world use the latest in face recognition technology

The reason why facial recognition software has recently got a lot better and a lot faster is the advent of deep learning: more powerful and parallelised computers, and better software design.
I’m going to talk about what’s changed.

Traditional face recognition: Eigenfaces
The first serious attempts to build a face recogniser were back in the 1980s and 90s and used something called Eigenfaces. An Eigenface is a blurry face-like image, and a face recogniser assumes that every face is made of lots of these images overlaid on top of each other pixel by pixel.

If we want to recognise an unknown face we just work out which Eigenfaces it’s likely to be composed of.
Not surprisingly the Eigenface method didn’t work very well. If you shift a face image a few pixels to the right or left, you can easily see how this method will fail, since the parts of the face won’t line up with the eigenface any more.
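For the curious, here is a minimal sketch of the Eigenface approach using scikit-learn: PCA on raw pixel values, then nearest-neighbour matching in the reduced space. The dataset and the number of components are just convenient choices for illustration.

```python
# A minimal sketch of the Eigenface idea: PCA on flattened face images, then
# nearest-neighbour matching in the reduced space. Uses scikit-learn's LFW data.
from sklearn.datasets import fetch_lfw_people
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

faces = fetch_lfw_people(min_faces_per_person=50)
X, y = faces.data, faces.target  # each row is a flattened greyscale face image

# Each principal component is an "Eigenface"; every face is expressed as a weighted
# sum of these blurry face-like images.
pca = PCA(n_components=100, whiten=True).fit(X)
X_reduced = pca.transform(X)

# Recognise an unknown face by finding whose Eigenface weights it is closest to.
recogniser = KNeighborsClassifier(n_neighbors=1).fit(X_reduced, y)
print(faces.target_names[recogniser.predict(pca.transform(X[:1]))])
```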

Next step up in complexity: facial feature points
The next generation of face recognisers would take each face image and find important points such as the corner of the mouth, or an eyebrow. The coordinates of these points are called facial feature points. One well known commercial program converts every face into 66 feature points. 

To compare two faces you simply compare the coordinates (after adjusting in case one image is slightly off alignment).
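In code, the comparison really is that simple, as in this toy sketch where the landmark arrays are placeholders for whatever your landmark detector produces:

```python
# A toy sketch of comparing two faces by their feature-point coordinates: flatten the
# 66 (x, y) landmarks into vectors and measure the distance between them.
import numpy as np

landmarks_a = np.random.rand(66, 2)  # 66 (x, y) feature points for face A (placeholder)
landmarks_b = np.random.rand(66, 2)  # 66 (x, y) feature points for face B (placeholder)

# After aligning the faces, a small distance suggests the two images show the same person.
distance = np.linalg.norm(landmarks_a.flatten() - landmarks_b.flatten())
print(distance)
```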

Not surprisingly the facial feature coordinates method is better than the Eigenfaces method but is still suboptimal. We are throwing lots of useful information away: hair colour, eye colour, any facial structure that isn’t captured by a feature point, etc.

Deep learning approach

The last method in particular involved a human programming into a computer the definition of an “eyebrow” etc. The current generation of face recognisers throws all this out of the window.

This approach uses convolutional neural networks (CNNs). This involves repeatedly walking a kind of stencil over the image and working out where subsections of the image match particular patterns.

The first time, you pick up corners and edges. After doing this five times, each time on the output of the previous run, you start to pick up parts of an eye or ear. After 30 times, you have recognised a whole face!
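To give a flavour of what stacking these “stencils” looks like in code, here is a toy Keras model. It is purely illustrative: the layer sizes and the number of identities are made up, and a production face recogniser would be far deeper and trained on millions of labelled faces.

```python
# A toy sketch of stacking convolutional layers, purely to illustrate the idea of
# repeatedly sliding small filters over the image. Layer sizes are illustrative.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),          # a small colour face image
    layers.Conv2D(32, 3, activation="relu"),    # early filters pick up edges and corners
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),    # later filters respond to eyes, ears, ...
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(1000, activation="softmax"),   # one output per known identity (illustrative)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```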

The neat trick is that nobody has defined the patterns that we are looking for but rather they come from training the network with millions of face images.

Of course this can be an Achilles’ heel of the CNN approach since you may have no idea exactly why a face recogniser gave a particular answer.

The obstacle you encounter if you want to develop your own CNN face recogniser is: where can you get millions of images to train the model? Lots of people scrape celebrity images from the internet to do this.

However you can get many more images if you can persuade people to give you their personal photos for free!

This is the reason why Facebook, Microsoft and Google have some of the most accurate face recognisers, since they have access to the resources necessary to train the models.

The CNN approach is far from perfect and many companies will have some adjustments on top of what I described in order to compensate for its limitations, such as correcting for pose and lighting, often using a 3D mesh model of the face. The field is advancing rapidly and every year the state of the art in face recognition brings a noticeable improvement.

If you’d like to know more about this field or similar projects please get in touch.

Predicting customer churn

One question faced by lots of companies in competitive markets, is… why are our customers leaving us? What drives them to switch to a competitor? This is called ‘customer churn’.

Imagine you run a utility company. You know this about each of your customers:

  • When they signed the first contract
  • How much power they use on weekdays, weekends, etc
  • Size of household
  • Zip code / Postcode

For millions of customers you also know whether they stayed with your company, or switched to a different provider.

Ideally you’d like to identify the people who are likely to switch their supply, before they do so! Then you can offer them promotions or loyalty rewards to convince them to stay.

How can you go about this?

If you have a data scientist or statistician at your company, they can probably run an analysis and produce a detailed report, telling you that high consumption customers in X or Y demographic are highly likely to switch supply.

It’s nice to have this report and it probably has some pretty graphs. But what I want to know is, for each of the 2 million customers in my database, what is the probability that the customer will churn?

If you build a machine learning model you can get this information. For example, customer 34534231 is 79% likely to switch to a competitor in the next month.

Surprisingly building a model like this is very simple. I like to use Scikit-learn for this which is a nice easy-to-use machine learning library in Python. It’s possible to knock up a program in a day which will connect to your database, and give you this probability, for any customer.

One problem you’ll encounter is that the data is very non-homogeneous. For example, the postcode or zip code is a kind of category, while power consumption is a continuous number. For this kind of problem I have found the most suitable algorithms to be Support Vector Machines and Random Forests, both of which are in Scikit-learn. I also have a trick of augmenting location data with demographic data for that location, which improves the accuracy of the prediction.
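To show roughly what I mean, here is a sketch of such a pipeline: one-hot encode the postcode, keep the numeric fields as they are, and ask a Random Forest for a churn probability per customer. The column names are illustrative assumptions.

```python
# A minimal sketch of a churn model on mixed categorical and numeric data.
# The CSV file and column names are illustrative assumptions.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

customers = pd.read_csv("customers.csv")  # postcode, household_size, weekday_kwh, weekend_kwh, churned

features = ["postcode", "household_size", "weekday_kwh", "weekend_kwh"]
model = Pipeline([
    ("encode", ColumnTransformer(
        [("postcode", OneHotEncoder(handle_unknown="ignore"), ["postcode"])],
        remainder="passthrough",  # keep the numeric columns as they are
    )),
    ("forest", RandomForestClassifier(n_estimators=200, random_state=0)),
])
model.fit(customers[features], customers["churned"])

# Probability that each customer will switch to a competitor, e.g. 0.79 for customer 34534231.
customers["churn_probability"] = model.predict_proba(customers[features])[:, 1]
```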

If customer churn is an issue for your business and you’d like to anticipate it before it happens, I’d love to hear from you! Get in touch via the contact form to find out more.

How you can identify the author of a document

Click here to see a live online demo of the neural network forensic stylometry model described in this article.

In 2013 JK Rowling, the author of the Harry Potter series, published a new detective novel under the pen name Robert Galbraith. She wanted to publish a book without the hype resulting from the success of the Harry Potter books.

However, following a tip-off received by a journalist on Twitter, two professors of computational linguistics showed that JK Rowling was highly likely to be the author of the new detective novel.

How did they manage to do this? Needless to say, the crime novel is set in a strictly non-magical world, and superficially it has little in common with the famous wizarding series.

One of the professors involved in the analysis said that he calculates a “fingerprint” of all the authors he’s interested in, which shows the typical patterns in that author’s works.

What’s my linguistic fingerprint? Subconsciously we tend to favour some word patterns over others. Is your salad fork “on” the left of the plate, or “to” the left of the plate? Do you favour long words, or short words? By comparing the fingerprint of a mystery novel to the fingerprints of some known authors it’s possible to get a match.

Here are some (partial) fingerprints I made for three well known female authors who used male pen names:

Identifying the author of a text is a field of computational linguistics called forensic stylometry.

With the advent of ‘deep learning’ software and computing power, forensic stylometry has become much easier. You don’t need to define the recipe for your fingerprint anymore, you just need lots of data.

My favourite way of approaching this problem is a Convolutional Neural Network, which is a deep learning technique that was developed for recognising photos but works very well for natural language!
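For a flavour of what that looks like, here is a toy Keras sketch: documents become sequences of word indices, an embedding layer maps each word to a vector, and 1-D convolutions slide over the sequence picking up short word patterns. The sizes and the number of candidate authors are illustrative, not a description of my production model.

```python
# A toy sketch of a convolutional network for stylometry. Sizes are illustrative.
from tensorflow.keras import layers, models

vocabulary_size = 20000   # number of distinct words kept (illustrative)
sequence_length = 500     # words per document excerpt (illustrative)

model = models.Sequential([
    layers.Input(shape=(sequence_length,)),
    layers.Embedding(vocabulary_size, 64),
    layers.Conv1D(128, 5, activation="relu"),   # filters respond to 5-word patterns
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(3, activation="softmax"),      # one output per candidate author
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```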

The technology I’ve described has lots of commercial applications, such as

  • Identifying the author of a terrorist pamphlet
  • Extracting information from company financial reports
  • Identifying spam emails, adverts, job postings
  • Triage of incoming emails
  • Analysis of legal precedents in a Common Law system

If you have a business problem in this area and you’d like some help developing and deploying, or just some consulting advice, please get in touch with me via the contact form.

On 5th July 2018 I will be running a workshop on forensic stylometry aimed at beginners and programmers, at the Digital Humanities Summer School at Oxford University. You can sign up here: http://www.dhoxss.net/from-text-to-tech.

Update: click here to download the presentation from the workshop.

Matchmaking with deep learning

If you’ve ever bought something on Amazon or other large online retailers, you’ll have noticed the ‘similar products’ that the site recommends to you after you’ve made your purchase. Sometimes they’re not the best suggestion, but in my experience most of the time they hit the mark.

This is an area of machine learning called recommender systems.

How do recommender systems work? In the case of online retailers, the standard approach is to fill out huge matrices and work out the relationships between different products. You can then see which products normally go together in the same basket, and make recommendations accordingly. This is called collaborative filtering and it works mainly because most products have been purchased thousands or millions of times, allowing us to spot the patterns.
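As a toy illustration of that idea, here is a sketch that builds a tiny customer-by-product purchase matrix and compares products by the similarity of their purchase patterns. The products and purchases are made up.

```python
# A toy sketch of the 'products bought together' idea: compare products by the
# similarity of their columns in a customer-by-product purchase matrix.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

products = ["kettle", "toaster", "mug set", "phone case"]
# Rows are customers, columns are products; 1 means the customer bought the product.
purchases = np.array([
    [1, 1, 1, 0],
    [1, 0, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 1],
])

# Products whose columns look alike tend to end up in the same basket.
similarity = cosine_similarity(purchases.T)
kettle = products.index("kettle")
best_match = max((j for j in range(len(products)) if j != kettle),
                 key=lambda j: similarity[kettle, j])
print(f"Customers who bought a kettle might also like: {products[best_match]}")
```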

Now imagine you run a dating website. Let’s simplify and say your site only caters for male-female pairings. How do you recommend a female to a male user who’s just registered?

This is when things get tricky. There are many users, new users are registering all the time, and most users have made few contact requests.

In this case we can work with what we do have:

  • The user’s profile text
  • The profile photo
  • The contact requests, if any.

One approach which I like to use is a deep learning approach called vector embeddings, which goes like this:

  • You can convert every profile text into a ‘fingerprint’. For example it could be a vector in 100-dimensional space.
  • The 100-dimensional vector by itself is meaningless, but people with similar tastes should end up with similar vectors.
  • If you want to make recommendations for a new user, you can calculate their vector, and the distance to other vectors, and find its nearest neighbours!

Of course the tricky bit is how to go from a profile text and image, to a vector. This is something that Convolutional Neural Networks (CNNs) are very good at.
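Once you have the vectors, the recommendation step itself is straightforward, as in this sketch where random vectors stand in for the output of the CNN over the profile text and photo:

```python
# A sketch of the embedding-based recommendation step: find the nearest neighbours of a
# new user's vector. The random vectors are placeholders for real CNN embeddings.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
profile_vectors = rng.normal(size=(10000, 100))   # one 100-dimensional vector per existing profile
new_user_vector = rng.normal(size=(1, 100))       # the freshly computed vector for a new user

index = NearestNeighbors(n_neighbors=5, metric="cosine").fit(profile_vectors)
distances, neighbour_ids = index.kneighbors(new_user_vector)
print(neighbour_ids[0])   # the five most similar profiles to recommend
```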

Vector embeddings can be useful for making recommendations in other industries too:

  • Recruitment websites, where candidates have uploaded a CV and you want to recommend jobs.
  • Property sales, where you have a description of the house and a photo.

There are off-the-shelf recommender systems that you can use for online retail or movie recommendations. But for text- or image-based recommendations you really need a custom solution, and this is extremely complex to build.

I have set up Fast Data Science Ltd to provide consulting services in this area after 10 years’ experience working with machine learning on natural language data. If you have lots of text or image data and you’d like to build a custom recommender system I’d love to hear from you. Please contact me here.