
Serve Models from MLflow as a Python Microservice

Available since version 7.0.0

Introduction

The objective is to generate a new type of microservice for evaluating MLflow models. We start by running code that trains a predictive model on a wine quality dataset.

Additionally, using the Models Manager tool (responsible for MLOps on the Onesait Platform), we can track the experiments run so far, evaluate their results, compare them, and so on, finally arriving at an optimal model that predicts the quality of a wine from the data we provide.

This predictive model will be registered in the Models Manager tool, which enables its governance and allows us to access it in later stages for deployment.

The next step is to develop the microservice from the Platform. This covers everything from downloading the code to compiling it, generating the Docker image through Jenkins, uploading it to an image registry, and deploying it to the corresponding CaaS platform.

Finally, through the Platform's API management, we create our API, which allows us to define and manage it. We can access the Swagger interface and check the predictive model's functionality directly from the Platform, taking advantage of the security the Platform provides.

Predictive Model in Models Manager

To carry out this example, we are going to use the 'sklearn_elasticnet_wine' MLflow example, which can be obtained from the MLflow examples GitHub repository.

First, we generate the predictive model. In this example, we take advantage of the tooling the Platform provides to use the Models Manager from a local environment. To do this, we install the following Python packages, needed to connect to the Models Manager:

  • MLflow → pip install mlflow

  • Onesait Platform MLflow plugin (provides the file upload tooling for working with ML projects) → pip install mlflow-onesaitplatform-plugin

Starting from the 'sklearn_elasticnet_wine' example, the MLproject and conda.yaml files need to be modified to connect correctly to the Platform (see the sketches after this list):

  • MLproject: line 3 should be changed to 'conda_env: conda.yaml':

image-20250317-170544.png
  • conda.yaml: modify the Python version to 3.11 and add the Platform's MLflow plugin (mlflow-onesaitplatform-plugin):

image-20250317-170627.png
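
For reference, after these changes the two files might look like this (a sketch based on the standard 'sklearn_elasticnet_wine' example; the entry-point parameters and package list are assumptions, adjust them to your project):

MLproject:

name: tutorial

conda_env: conda.yaml

entry_points:
  main:
    parameters:
      alpha: {type: float, default: 0.5}
      l1_ratio: {type: float, default: 0.1}
    command: "python train.py {alpha} {l1_ratio}"

conda.yaml:

name: tutorial
channels:
  - conda-forge
dependencies:
  - python=3.11
  - pip
  - pip:
      - scikit-learn
      - pandas
      - mlflow
      - mlflow-onesaitplatform-plugin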

If we use our own predictive model, it is important to log it with a signature (using the infer_signature function), so that once the Swagger is generated it exposes the expected input values and can build an example request, as is done in the 'sklearn_elasticnet_wine' model.
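
A minimal sketch of logging a model with a signature (the toy data and column names below are illustrative, not the real wine dataset):

import mlflow
import numpy as np
import pandas as pd
from mlflow.models import infer_signature
from sklearn.linear_model import ElasticNet

# Toy stand-in for the wine quality training data (columns are illustrative).
train_x = pd.DataFrame(np.random.rand(20, 3), columns=["pH", "sulphates", "alcohol"])
train_y = np.random.rand(20)

model = ElasticNet(alpha=0.5, l1_ratio=0.5).fit(train_x, train_y)

# infer_signature records the model's input schema and output type, so the
# generated Swagger can show the expected fields and build an example request.
signature = infer_signature(train_x, model.predict(train_x))

with mlflow.start_run():
    mlflow.sklearn.log_model(
        model,
        "model",
        signature=signature,
        input_example=train_x.iloc[:2],
    )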

The second and final step is to set up the environment you want to work with on the Platform. This is how you connect your local code with the Onesait Platform server:

  • Set an environment variable called MLFLOW_TRACKING_URI to {environment}/controlpanel/modelsmanager:

C:\Users\smartinroldang\Desktop\MLFlow\examples>set MLFLOW_TRACKING_URI=https://lab.onesaitplatform.com/controlpanel/modelsmanager
image-20250318-110607.png
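
Alternatively, the tracking URI can be set from Python before launching the run; a minimal sketch (the URL is the lab environment used in this example):

import mlflow

# Equivalent to setting the MLFLOW_TRACKING_URI environment variable.
mlflow.set_tracking_uri("https://lab.onesaitplatform.com/controlpanel/modelsmanager")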

With the MLflow project ready, we start the training with the following command:

  • mlflow run {project folder} -P {parameters}

  • mlflow run sklearn_elasticnet_wine -P alpha=0.5

image-20250318-110527.png

Then, we can view the experiment in the Control Panel and explore its details:

image-20250318-104320.png
image-20250318-103938.png
image-20250318-103812.png

We can also view the project artifact and the model itself (the .pkl file).

image-20250318-103633.png
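
The run information can also be queried programmatically from the tracking server; a minimal sketch (the experiment ID is an assumption, take it from the Models Manager):

import mlflow

mlflow.set_tracking_uri("https://lab.onesaitplatform.com/controlpanel/modelsmanager")

# Returns a pandas DataFrame with one row per run of the experiment.
runs = mlflow.search_runs(experiment_ids=["0"])
print(runs[["run_id", "params.alpha", "metrics.rmse"]])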

 

Creation of the Microservice from the Platform

First, the microservices module will be accessed from the Logic > My Microservices menu.

image-20250320-104757.png

The list of available microservices will then be displayed to the user. To create a new microservice, click on the ‘+’ button at the top right.

image-20250320-095723.png

The microservice creation wizard will open. In 'Microservice Template', choose the 'Models Manager' option. The Platform then clones a base repository (microservice-modelsmanager) that implements a generic prediction service.

image-20250318-075038.png

Next, we fill in the general information fields.

image-20250318-102324.png

Once the 'Models Manager' option is selected in 'Microservice Template', an 'Experiments' option appears listing the available experiments. When we select the desired experiment, a table with its models is displayed, and we can choose the model we want to work with. In this case, we select the one created earlier.

image-20250318-102456.png

The next step is to select the Git configuration we want.

image-20250318-102711.png

Next, we fill out the Jenkins configuration.

image-20250318-102900.png

Finally, the CaaS configuration:

image-20250318-103028.png

Once all the information has been entered, click on the ‘Create’ button to generate the microservice. A message will then be displayed indicating that it has been successfully created, and the screen will return to the list of microservices, where the one that has just been created will appear.

image-20250318-100858.png

When the microservice is generated, two things are created:

  • Git Repository: a Git repository is created with the base template we selected. By clicking the 'Git URL' link from 'My Microservices', we are redirected and can see the repository created with the name we provided.

image-20250318-101231.png
  • Jenkins Pipeline: a pipeline is created in Jenkins called {‘name’ we provided}-pipeline. We can quickly access the pipeline by clicking the 'Jenkins URL' link from 'My Microservices', which redirects us and allows us to view the pipeline created with the mentioned name.

image-20250318-080102.png

The next step is to perform the 'Build Service', which creates the image. To do this, we click on the hammer icon located in the CI/CD section of 'My Microservices', verify that the parameters are correct, and then click 'Generate'.

image-20250318-095425.png

When clicking 'Generate', a screen appears indicating that the request has been sent to the Jenkins queue.

image-20250318-080353.png

If we go to Jenkins and the corresponding pipeline, we can see that it has been successful.

image-20250318-100410.png

If we go to the registry, we can see the image that has been generated.

image-20250318-100314.png

Once this is done, we move on to the CaaS, where we want to create the deployment/service. To do this, we go to the Control Panel and, in the CI/CD section of our microservice, click on the rocket icon. This opens a screen where we can enter the parameters we want for the deployment/service. Once filled out, we click 'Deploy'.

image-20250318-100059.png

If everything goes correctly, a screen appears informing that the microservice has been deployed.

image-20250318-080917.png

If we go to Rancher, we can see that the container is active and running correctly.

image-20250318-093647.png

In the Control Panel, we can see that the 'status' now shows a green check, indicating that the microservice is active.

image-20250319-154549.png

If we wish, we can add or modify the microservice parameters. Clicking on CI/CD and then on 'Upgrade' (the upward arrow) opens a screen where we can add new parameters with '+' or remove them with '-'. After changing the parameters, we simply click 'Upgrade' and the microservice is updated. The following parameters need to be added:

  • HOST

  • TOKEN

  • EXPERIMENTID

  • RUNID

  • MODEL_NAME

Optionally, API_HOST and API_SCHEME can also be added.
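
By way of illustration, the values might look like this (all values are hypothetical placeholders; take the experiment and run IDs from the Models Manager):

HOST=https://lab.onesaitplatform.com
TOKEN=<models-manager-user-token>
EXPERIMENTID=<experiment-id>
RUNID=<run-id>
MODEL_NAME=<model-name>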

 

API Generation

Once all the above steps have been completed, the API can be created from the Control Panel. There are two ways to do this:

  • First way: we can insert certain variables into our microservice so that an 'autoregister' is performed and the API is created automatically. The necessary variables are:

    • APIMANAGER_AUTOREGISTER: It must be set to ‘True’ for the autoregister to take place.

    • API_NAME: The name we want to give our API. The Platform checks that no API with that name and version 1 already exists; if none exists, the API is created, otherwise an error is generated.

    • API_HOST: The host to which we want the Swagger to send its queries.

Once this is done and the container is updated with the new variables, we can go to Logic > My APIs and search for our API.
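
For example, the variables might be set as follows (the name and host are illustrative):

APIMANAGER_AUTOREGISTER=True
API_NAME=wine-quality-prediction
API_HOST=https://lab.onesaitplatform.com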

  • Second way: access the Logic > My APIs menu.

image-20250320-105527.png

The list of available APIs will then be displayed to the user. To create a new API service, click on the ‘+’ button at the top right.

image-20250320-095723.png

In 'API Type', we select the option 'Publish External REST API from Swagger'. On the right side, a section called 'Operations' appears, where we enter the path to the swagger.json file in 'Swagger JSON URL' and click 'Load'. The JSON file is loaded into the black box. After that, we complete the required fields and click 'Create'.

image-20250318-090810.png

If everything goes well, the message 'All is OK!' will appear, and we will be redirected to the screen with the API list, where we can search for the API we created.

image-20250318-082020.png

 

Once we have our API, we click on the three dots in its options menu and select the Swagger option, which lets us use the Swagger we created and the endpoint for our wine prediction.

image-20250318-082137.png

The first thing we need to do is enter the token, since the API is protected by the Platform's security. To do this, click on the lock icon and enter your user token; this allows us to use the '/prediction' endpoint.

image-20250318-082824.png

In this example, we use the sample request provided, as it is specifically designed for wine prediction.

image-20250318-083559.png
image-20250318-083521.png
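
For reference, the same prediction can be requested outside the Swagger UI; a hedged sketch using Python requests (the API URL, header name, and payload shape are assumptions, check the generated Swagger for the exact contract):

import requests

API_URL = "https://lab.onesaitplatform.com/api-manager/server/api/v1/<api-name>/prediction"  # assumed path
TOKEN = "<your-user-token>"

# Input columns and their order must match the signature logged with the model.
payload = {
    "columns": ["fixed acidity", "volatile acidity", "citric acid", "residual sugar",
                "chlorides", "free sulfur dioxide", "total sulfur dioxide",
                "density", "pH", "sulphates", "alcohol"],
    "data": [[7.4, 0.70, 0.0, 1.9, 0.076, 11.0, 34.0, 0.9978, 3.51, 0.56, 9.4]],
}

response = requests.post(API_URL, json=payload, headers={"X-OP-APIKey": TOKEN})
print(response.status_code, response.json())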

 
