Computing Machine Learning Features in Real-time
Machine learning models are excellent at automating simple, high-frequency decisions, like:
- Should I allow this transaction?
- Should I allow this login?
- What items should I show this customer?
To make these decisions, they need information about the event they are scoring. These pieces of information are called “features”.
Up-to-date Features and Real-time Features
Some features are easy to update. Features like “The number of times the user has received a payment in the past week” are just counts that can be maintained in a feature store. When a new payment comes in, add one to the value. When another day has passed, subtract all the old payments out. Simple.
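A count like this can be sketched as a sliding window over payment timestamps. This is a minimal illustration, not a real feature-store API; the class and method names are made up for the example:

```python
from collections import deque


class RollingCount:
    """Maintains "payments received in the past week" as a sliding-window count."""

    WINDOW_SECONDS = 7 * 24 * 60 * 60  # one week

    def __init__(self):
        self._timestamps = deque()

    def record_payment(self, ts):
        # When a new payment comes in, add one to the value.
        self._timestamps.append(ts)

    def value(self, now):
        # When time has passed, subtract the old payments out.
        while self._timestamps and now - self._timestamps[0] > self.WINDOW_SECONDS:
            self._timestamps.popleft()
        return len(self._timestamps)
```

A real feature store would persist this state and handle many users, but the update logic is the same: append on new events, expire old ones, report the count.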
But some features are hard to keep up to date. What if we needed the value of the received-payment feature above in order to make a decision on a received payment? Whatever system is keeping count now has to be very fast: it must update the feature and return the value to the model without keeping the user waiting too long. In a payment system, “too long” could be much less than a second!
Worse, some features cannot be pre-computed at all. For example, the feature “Has this user ever logged in from this location?” is very useful for stopping account takeover fraud, but the model needs to make a decision before the login completes. However, the feature can only be computed once the login has started, because only then do we know the location!
These features, which have to be computed during the event that a decision is being made on, are called real-time features. I will talk about one way to compute them below.
A Machine Learning System
Let’s consider a simplified machine learning system that looks like this:
The events (in purple and green on the diagram) are a never-ending stream. Imagine the line of event boxes moving from right to left, each one taking a turn to dump its data into the feature store (in orange). The feature store uses the data in the events to update the values of features and reports those values to other parts of the system.
The model host (in blue) also operates on events but it handles them as they are generated, before they even have time to get to the feature store. The event that the model is currently making a decision on is the target event (in green).
The target event will eventually dump its data into the feature store (represented by a dotted line on the diagram) but has not done so yet. The target event does send some data to the model host (generally something simple like a user ID or event ID so the model knows what features to get) but this is fast compared to waiting for the feature store.
Machine Learning Model Host
Let’s take a closer look at the model host:
The data handling code (yellow in the diagram) gets IDs and other data from the target event so that it knows what features to get. For example, it might get a user ID so it can get all the features associated with that specific user. It then passes the features it receives from the feature store to the machine learning model (red in the diagram), which makes a decision and sends it back to the target event (or the system handling the target event).
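The request path through the model host can be sketched like this. The `FeatureStore` and `Model` classes are hypothetical stand-ins, not a real API:

```python
class FeatureStore:
    """Stand-in for the feature store: a dict of features per user."""

    def __init__(self, features_by_user):
        self._features = features_by_user

    def get_features(self, user_id):
        return dict(self._features.get(user_id, {}))


class Model:
    """Stand-in model: flags users with too many recent payments."""

    def predict(self, features):
        return "allow" if features.get("payments_last_week", 0) < 5 else "review"


def handle_event(event, feature_store, model):
    # Data handling code: pull the user ID out of the target event...
    user_id = event["user_id"]
    # ...fetch that user's pre-computed features from the feature store...
    features = feature_store.get_features(user_id)
    # ...and let the model make the decision.
    return model.predict(features)
```

Everything the model sees here came from the feature store; the target event only contributed an ID.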
To use this system to calculate real-time features, we make three changes:
- Add “proto-features” to the feature store.
- Send more data from the target event to the model host.
- Do additional processing in the data handling code to combine the proto-features and data from the target event into a real-time feature.
The diagram changes very slightly:
Here is how that would work for our example login feature:
- Add a proto-feature that is a list of previous login locations.
- Add the current location to the data passed in from the target event.
- Update the data handling code to check if the current location is in the list of previous login locations from the proto-feature.
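The three changes above can be sketched as a variation of the model-host request path. The event shape, feature names (`previous_login_locations`, `seen_location_before`), and stand-in classes are all hypothetical, chosen just to illustrate the pattern:

```python
class LoginFeatureStore:
    """Stand-in store whose proto-feature is the list of past login locations."""

    def __init__(self, locations_by_user):
        self._locations = locations_by_user

    def get_features(self, user_id):
        return {"previous_login_locations": self._locations.get(user_id, [])}


class LoginModel:
    """Stand-in model: challenge logins from never-before-seen locations."""

    def predict(self, features):
        return "allow" if features["seen_location_before"] else "challenge"


def handle_login_event(event, feature_store, model):
    user_id = event["user_id"]
    # Change 2: the target event now also sends its current location.
    current_location = event["location"]

    features = feature_store.get_features(user_id)
    # Change 1: the feature store holds a proto-feature of past locations.
    previous_locations = features.pop("previous_login_locations", [])

    # Change 3: the data handling code combines the proto-feature with the
    # target event's data into the real-time feature.
    features["seen_location_before"] = current_location in previous_locations
    return model.predict(features)
```

The model never sees the raw location list; it only sees the finished real-time feature, computed during the login itself.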