Integrate Trained Machine Learning Model into API in Just 15 mins



How to deploy a trained sentiment analysis machine learning model to a REST API using Microsoft ML.NET and ASP.NET Core, in just 15 mins.

Shawn Shi

If you have a pure Data Scientist role, you probably care more about studying the data, going through feature engineering, choosing the correct statistical or machine learning model, and training the model to perform well on both the training dataset and validation dataset. And that’s it!

However, in most cases a trained machine learning model does not just stop there and sit in a Jupyter Notebook; it needs to be deployed as a service so it can actually be used in production applications. For example, at Facebook, machine learning engineers are expected to do everything above, but also to be able to quickly deploy the trained model to production (or staging).

Therefore, it is important to know how to deploy a trained machine learning model quickly, so it can either be demonstrated to the executive team or be rolled out to production (yeah!).

A trained machine learning model is simply a serialized file that contains the model artifacts. For example, if you are training the simplest logistic regression model with one feature for classification, the formula may look like the one below. The model artifacts would contain the values of β0 and β1. Note that x is your single feature value from a data point, so the value of x is not contained in the model artifacts.

p(x) = 1 / (1 + e^−(β0 + β1·x)) (the standard logistic function; see the Logistic Regression page on Wikipedia)

The serialized model file allows you either:

  • to load the trained model into memory and start using the trained model for prediction or analysis
  • or to resume training from the point when the file was generated. This is particularly useful for models that take a long time to train, so that you don’t have to start all over again after shutting down the computer.

If you are familiar with Python, a trained model can be saved using “pickle” into a “.sav” file. If you are using ML.NET, a trained model is saved into a zipped file.
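
For ML.NET, the save/load round trip looks roughly like the sketch below. The helper class is mine, and the trainedModel/inputSchema parameters stand in for whatever your training pipeline produced; only mlContext.Model.Save and mlContext.Model.Load are actual ML.NET calls.

```csharp
using Microsoft.ML;

public static class ModelSerialization
{
    // Save the trained pipeline (data transformations + algorithm) to a zip file.
    public static void Save(MLContext mlContext, ITransformer trainedModel,
                            DataViewSchema inputSchema, string path = "MLModel.zip")
        => mlContext.Model.Save(trainedModel, inputSchema, path);

    // Load the serialized model back into memory for prediction or further training.
    public static ITransformer Load(MLContext mlContext, string path = "MLModel.zip")
        => mlContext.Model.Load(path, out _);
}
```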

The goal of this article is to demonstrate how to quickly deploy a trained machine learning model to an existing API using Microsoft ML.NET and ASP.NET Core. This article will not discuss how to build and train a model, since Microsoft already has good tutorials. For example, see Microsoft ML.NET Tutorial.

For my application, I have already trained a Sentiment Analysis model using ML.NET, which I will use in my ASP.NET Core API.

Step 1 — Create a class library project to host your machine learning model-related files. For my sentiment analysis model, I have the following files:

  • README.md: your co-workers will love you if you include it.
  • MLModel.zip: my serialized ML model, which contains the trained artifacts
  • ModelInput.cs: strongly-typed input class used for model prediction input (sketched below)
  • ModelOutput.cs: strongly-typed output class used for model prediction output (sketched below)
  • ConsumeModel.cs: class that loads the model artifacts into memory and serves Predict() calls
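
The contents of these files are not reproduced in this copy, but a minimal sketch of what ModelInput and ModelOutput might look like for a binary sentiment model is shown below. The column names, indexes, and types are assumptions based on a typical ML.NET sentiment dataset, not the author's exact classes.

```csharp
using Microsoft.ML.Data;

// ModelInput.cs -- shape of a single row fed into the model.
public class ModelInput
{
    [ColumnName("SentimentText"), LoadColumn(0)]
    public string SentimentText { get; set; }

    [ColumnName("Sentiment"), LoadColumn(1)]
    public bool Sentiment { get; set; }   // label column used during training
}

// ModelOutput.cs -- shape of a single prediction returned by the model.
public class ModelOutput
{
    [ColumnName("PredictedLabel")]
    public bool Prediction { get; set; }  // predicted sentiment (true = positive)

    public float Probability { get; set; } // calibrated probability output by the classifier

    public float Score { get; set; }       // raw score from the classifier
}
```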

A few quick notes:

  • Make sure you have the Microsoft.ML NuGet package and also the relevant package for the algorithm you used. In my case, LightGBM is the algorithm I used for sentiment analysis, so the Microsoft.ML.LightGbm package is included.
  • Give your class library project a meaningful name, so that when you have multiple ML algorithms/features in the same solution, you won’t get confused! I called mine xxx.SentimentAnalysis since this class library is used for sentiment analysis.
  • Set the trained model, MLModel.zip in this case, to be always copied so that the file gets copied to the runtime bin folder. Otherwise, you may run into file-not-found errors. To do that, right-click MLModel.zip and set its “Copy to Output Directory” property to “Copy always”. See the screenshot below.
Change MLModel.zip to be always copied (Screenshot by Author)
  • ConsumeModel.cs hosts the prediction pipeline, which contains the code for the data pre-processing transformations and for loading the trained model into the application. See the sample code below.
Consume Model class code snippet
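
The author's snippet is not reproduced in this copy; the sketch below shows one plausible shape for ConsumeModel, assuming the model was saved as MLModel.zip next to the assembly. The class and member names here are assumptions; only the Microsoft.ML calls are real APIs.

```csharp
using Microsoft.ML;

public class ConsumeModel
{
    private readonly MLContext _mlContext = new MLContext();
    private readonly PredictionEngine<ModelInput, ModelOutput> _predictionEngine;

    public ConsumeModel()
    {
        // Load the serialized pipeline (pre-processing transformations + trained model).
        ITransformer model = _mlContext.Model.Load("MLModel.zip", out _);

        // Create a prediction engine for single, in-memory predictions.
        // Note: PredictionEngine is not thread-safe, which is worth keeping in mind
        // when sharing one instance across requests (see Step 2).
        _predictionEngine = _mlContext.Model.CreatePredictionEngine<ModelInput, ModelOutput>(model);
    }

    public ModelOutput Predict(ModelInput input) => _predictionEngine.Predict(input);
}
```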

Step 2 — Register the ConsumeModel class in your API application's service provider for dependency injection. In this case, we are using the ASP.NET Core built-in service provider. A few tips on this part:

  • The API project will need to reference the class library project created in the previous step.
  • A singleton instance of ConsumeModel should be registered, since all requests should share the same trained model. Otherwise, each request would have to load the whole MLModel.zip file into memory, causing a huge performance issue and possibly an app crash if a scoped or transient lifetime is used. See the code below for how to register a singleton instance. Another approach is to make the Predict() method in the ConsumeModel class static. For our application, we will stick with the singleton approach.
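
Since this copy does not include the author's registration snippet, here is a minimal sketch of what the registration might look like in a conventional Startup.ConfigureServices; the rest of the Startup class is assumed.

```csharp
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers();

        // One shared instance: MLModel.zip is deserialized once at startup
        // and reused by every request.
        services.AddSingleton<ConsumeModel>();
    }

    // Configure(...) omitted for brevity.
}
```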

Step 3 — Inject the ConsumeModel.cs service and start using it for prediction!

There are many ways to inject the ConsumeModel service. My preferred way is to inject it through the class constructor, which can be a controller class, a service class, or a MediatR handler class if you are using the CQRS pattern.

For example, in this application, I am injecting the ConsumeModel service into my SentimentAnalysisController class constructor and using it in my predict action, as sketched below. One GET endpoint is available for sentiment analysis, which takes a query parameter called “comment”. The “comment” is used to build an input for our sentiment analysis model, and the singleton instance of ConsumeModel, _consumeModel, is used for prediction.
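
The controller code is not included in this copy; a minimal sketch consistent with the description above might look like this (the route and the ModelInput property name are assumptions):

```csharp
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public class SentimentAnalysisController : ControllerBase
{
    private readonly ConsumeModel _consumeModel;

    // The singleton ConsumeModel registered in Step 2 is injected here.
    public SentimentAnalysisController(ConsumeModel consumeModel)
    {
        _consumeModel = consumeModel;
    }

    // GET api/sentimentanalysis?comment=This%20is%20a%20great%20movie
    [HttpGet]
    public ActionResult<ModelOutput> Get([FromQuery] string comment)
    {
        var input = new ModelInput { SentimentText = comment };
        return Ok(_consumeModel.Predict(input));
    }
}
```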

Last — Enjoy your machine learning model!

I can now open my Swagger UI, Postman, or just a browser page and test my sentiment analysis endpoint! For example, for the comment “This is a great movie”, my endpoint returns a really positive sentiment with a probability of 99.7%. Pretty good!


Meanwhile, the comment “This is a horrible movie” gets a negative sentiment with a probability of 98.3%. Not bad!


Building and training a machine learning model is an art and can be hard, but deploying a trained model should be nice and easy. In fact, this article shows how it can be integrated into an existing ASP.NET Core API in a matter of minutes.

Hope you find this article helpful and will be able to quickly deploy your ML model for either proof-of-concept demos or, even more exciting, a production rollout!
