I offer the full range of data science consulting services, from a simple overview and high-level consultation to building and deploying a machine learning model to production.
Look at how data is being gathered and used in your business, and identify opportunities to extract value from your large datasets
Use statistics and AI to identify meaningful patterns in your data, enabling you to make smart decisions
Design and train a machine learning model for your numeric, tabular, text or image data, making use of cutting-edge machine learning tools
Bring AI solutions through to production, deploying with your preferred technology stack and fully integrating with your systems and APIs
Do you have millions of customers and need to predict the likely behaviour of each individual one? Who's going to switch to a competitor? Which is the most appropriate product recommendation? Or perhaps you need to predict unknown values in the future such as vehicle unloading times, travel times, signup rates, or customer spend? Maybe you have large amounts of unstructured text or image data? In all of these cases I can help.
Observations about the latest developments in the AI universe.
Sometimes as data scientists we will encounter cases where we need to build a machine learning model that should not be a black box, but which should make transparent decisions that humans can understand. This can go against our instincts as scientists and engineers, as we would like to build the most accurate model possible.
In my previous post about face recognition technology I compared some older hand-designed techniques which are easily understandable for humans, such as facial feature points, to the state-of-the-art face recognisers which are harder to understand. This is an example of the trade-off between performance and interpretability.
Imagine that you have applied for a loan and the bank’s algorithm rejects you without explanation. Or an insurance company gives you an unusually high quote when the time comes to renew. A medical algorithm may recommend a further invasive test, against the best instincts of the doctor using the program.
Or maybe the manager of the company you are building the model for doesn’t trust anything he or she doesn’t understand, and has demanded an explanation of why you predicted certain values for certain customers.
All of the above are real examples where a data scientist may have to trade some performance for interpretability. In some cases the requirement comes from legislation: for example, some interpretations of GDPR give an individual a ‘right to explanation’ of any algorithmic decision that affects them.
One approach is to avoid highly opaque models such as Random Forests or Deep Neural Networks in favour of more linear models. By simplifying the architecture you may end up with a less powerful model, but the loss in accuracy may be negligible. Sometimes, by reducing the number of parameters, you end up with a model that is more robust and less prone to overfitting. You may also be able to train a complex model first and use it to identify important features, or to suggest clever preprocessing steps that let you keep your final model linear.
An example would be if you have a model to predict sales volume based on product price, day, time, season and other factors. If your manager or customer wanted an explainable model, you might convert weekdays, hours and months into a one-hot encoding, and use these as inputs to a linear regression model.
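For illustration, here is a minimal sketch of that preprocessing in Python, assuming a pandas DataFrame called sales with weekday, hour, month, price and sales_volume columns (all names are hypothetical):

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical sales data with weekday, hour, month, price and sales_volume
sales = pd.read_csv("sales.csv")

# One-hot encode the time features so each weekday, hour and month
# gets its own coefficient in the linear model
X = pd.get_dummies(sales[["weekday", "hour", "month"]].astype(str))
X["price"] = sales["price"]

model = LinearRegression().fit(X, sales["sales_volume"])

# Each coefficient can now be read off directly, e.g. "Saturdays add N sales"
for name, coef in zip(X.columns, model.coef_):
    print(f"{name}: {coef:+.2f}")
```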
The best models for image recognition and classification are currently Convolutional Neural Networks (CNNs). But they present a problem from a human comprehension point of view: if you want to make the 10 million numbers inside a CNN understandable for a human, how would you proceed? If you’d like a brief introduction to CNNs please check out my previous post on face recognition.
You can make a start by breaking the problem up and looking at what the different layers are doing. We already know that the first layers in a CNN typically recognise edges, later layers are activated by corners, and then gradually more and more complex shapes.
You can take a series of images of different classes and look at the activations at different points. For example, if you pass a series of dog images through a CNN:
…by the 4th layer you can see patterns like this, where the neural network is clearly starting to pick up on some kind of ‘dogginess’.
Taking this one step further, we can tamper with different parts of the image and see how this affects the activation of the neural network at different stages. By greying out different parts of this Pomeranian we can see the effect on Layer 5 of the network, and then work out which parts of the original image scream ‘Pomeranian’ most loudly to the network.
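As a rough sketch of how such an occlusion experiment might look in code (the network, layer index and patch size below are stand-ins, not the exact setup used in the original experiment):

```python
import numpy as np
import tensorflow as tf

# Stand-in network and image; in practice use your own CNN and photo
model = tf.keras.applications.VGG16(weights="imagenet")
layer_model = tf.keras.Model(inputs=model.input, outputs=model.layers[5].output)

img = np.random.rand(224, 224, 3).astype("float32")   # placeholder image
baseline = layer_model.predict(img[None])              # activations on the original

patch = 32
sensitivity = np.zeros((224 // patch, 224 // patch))
for i in range(0, 224, patch):
    for j in range(0, 224, patch):
        occluded = img.copy()
        occluded[i:i + patch, j:j + patch] = 0.5        # grey out one square
        acts = layer_model.predict(occluded[None])
        # A large change in activation means this patch mattered to the layer
        sensitivity[i // patch, j // patch] = np.abs(acts - baseline).mean()
```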
Using these techniques, if your neural network face recogniser backfires and lets an intruder into your house, and you still have the input images, it would be possible to unpick the CNN and work out where it went wrong. Unfortunately going this deep into a neural network takes a lot of time, so a lot of work remains to be done here.
Imagine you have trained a price elasticity model that uses third-order polynomial regression, but your client requires something easier to understand. They want to know: for each additional penny cut from the price of the product, what will be the increase in sales? Or for each additional year of a vehicle’s age, what is the price depreciation?
You can try a few tricks to make this more understandable. For example you can convert your polynomial model to a series of joined linear regression models. This should give almost the same power but could be more interpretable.
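A hedged sketch of that idea, with made-up numbers standing in for a real price elasticity dataset:

```python
import numpy as np

# Made-up price and sales data standing in for the real dataset
prices = np.linspace(1.0, 5.0, 200)
sales = 500 - 120 * prices + 15 * prices**2 + np.random.normal(0, 10, 200)

# The hard-to-explain model: a 3rd-order polynomial fit
poly = np.poly1d(np.polyfit(prices, sales, deg=3))

# Approximate it with a few joined linear segments the client can read off
edges = np.linspace(prices.min(), prices.max(), 5)  # four segments
for lo, hi in zip(edges[:-1], edges[1:]):
    grid = np.linspace(lo, hi, 50)
    slope, _ = np.polyfit(grid, poly(grid), deg=1)
    # slope is the change in sales per pound of price,
    # so -slope / 100 is the predicted gain per penny cut from the price
    print(f"£{lo:.2f}-£{hi:.2f}: {-slope / 100:+.3f} sales per penny cut")
```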
Recommendation systems such as Netflix’s movie recommendations are notoriously hard to get right, and users are often mystified by what they see as strange recommendations. The recommendations are usually calculated, directly or indirectly, from the shows that the user has previously watched. So the simplest way of explaining a recommendation system is to display a message such as ‘we’re recommending you The Wire because you watched Breaking Bad’, which is Netflix’s approach.
There have been some efforts to arrive at a technique that can demystify and explain a machine learning model of any type, no matter how complex.
The technique that I described for investigating a convolutional neural network can be broadly extended to any kind of model: perturb the input to a machine learning model and monitor how its output responds. For example, if you have a text classification model, you can change or remove different words in the document and watch what happens.
One implementation of this technique is called LIME, or Local Interpretable Model-Agnostic Explanations. LIME works by taking an input and creating thousands of duplicates with small noise added, and passing these duplicate inputs to the ML model and comparing the output probabilities. This way it’s possible to investigate a model that would otherwise be a black box.
I tried out LIME on my author identification model. I gave the model an excerpt of one of JK Rowling’s non-Harry Potter novels, where it correctly identified the author, and asked LIME for an explanation of the decision. So LIME tried changing words in the text and checked which changes increase or decrease the probability that JK Rowling wrote it.
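To give a flavour of what the call looks like (the pipeline clf and the class names below are placeholders rather than my actual stylometry model):

```python
from lime.lime_text import LimeTextExplainer

# `clf` is assumed to be a fitted text pipeline exposing predict_proba()
class_names = ["J.K. Rowling", "Other author"]
explainer = LimeTextExplainer(class_names=class_names)

excerpt = "..."  # the passage to explain
explanation = explainer.explain_instance(excerpt,
                                         clf.predict_proba,
                                         num_features=10)

# Words with positive weight pushed the prediction towards Rowling
print(explanation.as_list())
```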
LIME’s explanation of the stylometry model is interesting as it shows how the model has recognised the author by subsequences of function words such as ‘and I don’t…’ (highlighted in green) rather than strong content words such as ‘police’.
However the insight provided by LIME is limited because under the hood, LIME is perturbing words individually, whereas a neural network based text classifier looks at patterns in the document on a larger scale.
I think that for more sophisticated text classification models there is still some work to be done on LIME so that it can explain more succinctly what subsequences of words are the most informative, rather than individual words.
With images, LIME gives some more exciting results. You can get it to highlight the pixels in an image which led to a certain decision.
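A sketch of how that looks with the lime library, where model and img are assumed to be a trained image classifier (exposing predict on a batch of images) and a single preprocessed image:

```python
from lime import lime_image
from skimage.segmentation import mark_boundaries

# `model` and `img` are assumed: a trained image classifier and one
# preprocessed image as a numpy array
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(img,
                                         model.predict,
                                         top_labels=3,
                                         num_samples=1000)

# Highlight the superpixels that argued most strongly for the top class
image, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                             positive_only=True,
                                             num_features=5,
                                             hide_rest=False)
highlighted = mark_boundaries(image, mask)
```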
There is a huge variety of machine learning models being used and deployed for diverse purposes, and their complexity is increasing. Unfortunately many of them are still used as black boxes, which poses a problem for accountability, industry regulation, and users’ confidence in entrusting important decisions to algorithms.
The simplest solution is sometimes to make compromises, such as trading performance for interpretability. Simplifying machine learning models for the sake of human understanding can have the advantage of making models more robust.
Thankfully there have been some efforts to build explainability platforms to make black box machine learning more transparent. I have experimented with LIME in this article which aims to be model-agnostic, but there are other alternatives available.
Hopefully in time regulation will catch up with the pace of technology, and we will see better ways of producing interpretable models which do not reduce performance.
You may have had the experience of filling out a long form on a website. For example, creating an account to make a purchase, or applying for a job, or renewing your car insurance.
A long form can lead to customers losing interest and taking their business elsewhere. Each additional field can result in up to 10% more customers dropping out instead of completing the form.
If you have a business with a form like this, one reason you may not be able to simplify it is that the data you are requesting is valuable.
There are lots of ways to address the problem, such as improving the design of the form, or splitting it across multiple pages, removing the “confirm password” field, and so on. But it appears that most fields can’t be removed without inherently degrading the data you collect on these new customers.
However with machine learning it’s possible to predict the values of some of these fields, and completely remove them from the form without sacrificing too much information. This way you gain more customers. You would need to have a history of what information customers have provided in the past, in order to remove the fields for new customers.
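As a minimal sketch of the idea, suppose you have a table of past signups and want to drop an industry field from the form (all column and file names here are hypothetical):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical history of past signups; `industry` is the field we would
# like to remove from the form and infer instead
signups = pd.read_csv("past_signups.csv")
X = pd.get_dummies(signups.drop(columns=["industry"]))
y = signups["industry"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)

# If held-out accuracy is high enough, the field can be removed from the
# form and filled in automatically for new customers
print("held-out accuracy:", model.score(X_test, y_test))
```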
A few examples
If you are interested and would like to know more please send me a message.
For an example of how data can be inferred from an unstructured text field please check out my forensic stylometry demo.
Face recognition technology has existed for quite some time, but until recently it was not accurate enough for most purposes.
Now it seems that face recognition is everywhere:
Facial recognition software has recently got a lot better and a lot faster thanks to the advent of deep learning: more powerful and parallelised computers, and better software design.
I’m going to talk about what’s changed.
Traditional face recognition: Eigenfaces
The first serious attempts to build a face recogniser were back in the 1980s and 90s and used something called Eigenfaces. An Eigenface is a blurry face-like image, and a face recogniser assumes that every face is made of lots of these images overlaid on top of each other pixel by pixel.
If we want to recognise an unknown face we just work out which Eigenfaces it’s likely to be composed of.
Not surprisingly the Eigenface method didn’t work very well. If you shift a face image a few pixels to the right or left, you can easily see how this method will fail, since the parts of the face won’t line up with the eigenface any more.
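For the curious, the eigenface idea can be sketched in a few lines with scikit-learn’s PCA; here the Labeled Faces in the Wild dataset stands in for any collection of roughly aligned greyscale face images:

```python
from sklearn.datasets import fetch_lfw_people
from sklearn.decomposition import PCA

# A stack of roughly aligned greyscale face images, shape (n, height, width)
faces = fetch_lfw_people(min_faces_per_person=20).images
n, h, w = faces.shape
flat = faces.reshape(n, h * w)

# Each principal component, reshaped back to an image, is an "eigenface"
pca = PCA(n_components=50).fit(flat)
eigenfaces = pca.components_.reshape(50, h, w)

# A face is then described by its 50 eigenface weights; two faces are
# compared by the distance between their weight vectors
weights = pca.transform(flat)
```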
Next step up in complexity: facial feature points
The next generation of face recognisers would take each face image and find important points such as the corner of the mouth, or an eyebrow. The coordinates of these points are called facial feature points. One well known commercial program converts every face into 66 feature points.
To compare two faces you simply compare the coordinates (after adjusting in case one image is slightly off alignment).
Not surprisingly the facial feature coordinates method is better than the Eigenfaces method but is still suboptimal. We are throwing lots of useful information away: hair colour, eye colour, any facial structure that isn’t captured by a feature point, etc.
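A hedged sketch of such a comparison, assuming each face has already been reduced to 66 (x, y) feature points:

```python
import numpy as np

def compare_landmarks(a, b):
    """Compare two faces given as (66, 2) arrays of feature point coordinates."""
    # Crude alignment: centre each set of points and scale to unit spread,
    # compensating for one face being shifted or slightly larger in its image
    a = (a - a.mean(axis=0)) / a.std()
    b = (b - b.mean(axis=0)) / b.std()
    # Smaller distance = more similar faces (by this limited measure)
    return np.linalg.norm(a - b)
```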
Deep learning approach
The last method in particular relied on a human programming the definition of an “eyebrow” and so on into a computer. The current generation of face recognisers throws all of this out of the window.
This approach uses convolutional neural networks (CNNs). It involves repeatedly walking a kind of stencil over the image and working out where subsections of the image match particular patterns.
The first time, you pick up corners and edges. After doing this five times, each time on the output of the previous run, you start to pick up parts of an eye or ear. After 30 times, you have recognised a whole face!
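As a toy illustration of the “stencil” idea (a real CNN stacks many such layers and learns its patterns from data, but the sliding-window mechanic is the same):

```python
import numpy as np

def convolve2d(image, kernel):
    """Walk a small kernel (the 'stencil') over a greyscale image and
    record how strongly each patch matches the pattern."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A hand-written vertical-edge detector; in a CNN these values are learned
edge_kernel = np.array([[1, 0, -1],
                        [1, 0, -1],
                        [1, 0, -1]])
```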
The neat trick is that nobody has defined the patterns that we are looking for but rather they come from training the network with millions of face images.
Of course this can be an Achilles’ heel of the CNN approach since you may have no idea exactly why a face recogniser gave a particular answer.
The obstacle you encounter if you want to develop your own CNN face recogniser is: where can you get millions of images to train the model? Lots of people scrape celebrity images from the internet to do this.
However you can get far more images if you can get people to give you their personal photos for free!
This is the reason why Facebook, Microsoft and Google have some of the most accurate face recognisers, since they have access to the resources necessary to train the models.
The CNN approach is far from perfect and many companies will have some adjustments on top of what I described in order to compensate for its limitations, such as correcting for pose and lighting, often using a 3D mesh model of the face. The field is advancing rapidly and every year the state of the art in face recognition brings a noticeable improvement.
If you’d like to know more about this field or similar projects please get in touch.
One question faced by lots of companies in competitive markets is: why are our customers leaving us? What drives them to switch to a competitor? This is called ‘customer churn’.
Imagine you run a utility company. You know this about each of your customers:
For millions of customers you also know whether they stayed with your company, or switched to a different provider.
Ideally you’d like to identify the people who are likely to switch their supply, before they do so! Then you can offer them promotions or loyalty rewards to convince them to stay.
How can you go about this?
If you have a data scientist or statistician at your company, they can probably run an analysis and produce a detailed report, telling you that high consumption customers in X or Y demographic are highly likely to switch supply.
It’s nice to have this report and it probably has some pretty graphs. But what I want to know is, for each of the 2 million customers in my database, what is the probability that the customer will churn?
If you build a machine learning model you can get this information. For example, customer 34534231 is 79% likely to switch to a competitor in the next month.
Surprisingly, building a model like this is very simple. I like to use Scikit-learn, a nice, easy-to-use machine learning library in Python. It’s possible to knock up a program in a day that connects to your database and gives you this probability for any customer.
One problem you’ll encounter is that the data is very non-homogeneous. For example, the postcode or zip code is a kind of category, while power consumption is a continuous number. For this kind of problem I found the most suitable algorithms are Support Vector Machines, and Random Forest, both of which are in Scikit-learn. I also have a trick of augmenting location data with demographic data for that location, which improves the accuracy of the prediction.
If customer churn is an issue for your business and you’d like to anticipate it before it happens, I’d love to hear from you! Get in touch via the contact form to find out more.
An overview of some of the projects I have been involved with in the past.
A large retail company had GPS records of vehicle telematics. I built an ML model to produce predictions of how long it takes to unload a vehicle and close the loading bay door, taking into account product types, time of day, and other variables. The predictive model had a constraint that it should return a prediction within a few milliseconds. The model was deployed and integrated into their traffic planning software, allowing the company to work with more accurate schedules, improving efficiency.
An internet based company had a signup form where users would upload some text files and then fill out a large amount of small text and dropdown fields. By training a machine learning model on the past data I was able to accurately predict some of the values, allowing some fields to be removed from the form. In an A/B test this was shown to improve conversions.