Serving ML models from the command line

The last step of an ML process is to deploy the model and serve it so that it can be consumed. One of the best ways to use the model interactively is through a REST API, so that other applications can use it easily.

You could create your own server with your own logic, but this is complex work: you need to take care of dependencies, endpoints, security and many other things, and you need to do it model by model.

With the Models Manager, you can use the centralized repository system with MLflow and the project concept, with its configuration and dependencies (through a conda environment), to build an isolated application that bundles all the dependencies, accepts requests through a REST API and uses the model to make predictions.

You can use the mlflow command (you first need the environment variable MLFLOW_TRACKING_URI pointing to your Onesait Platform server) to create a server based on gunicorn:

export MLFLOW_TRACKING_URI=https://moonwalker.onesaitplatform.com/controlpanel/modelsmanager

pip install mlflow mlflow-onesaitplatform-plugin

And the command itself:

mlflow models serve -m runs:/{runid}/model
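
If you need to change the port or the bind address, the serve command also accepts --port and --host options (the values below are only an example, and the exact options may vary slightly between MLflow versions):

mlflow models serve -m runs:/{runid}/model --port 5000 --host 0.0.0.0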

The run id can be found in the experiment section.
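
If you prefer to stay on the command line, you can also list the runs of an experiment with the MLflow CLI (the experiment id is a placeholder, and this assumes MLFLOW_TRACKING_URI is exported as above):

mlflow runs list --experiment-id {experiment id}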

This will expose the gunicorn server on port 5000, so we can use the REST API to make predictions.

The endpoint uses the pandas split format, so we need to specify it with the format header of the request.
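
As an illustration, a prediction request against the local server could look like the following sketch (the column names and values are placeholders; the Content-Type shown corresponds to MLflow versions prior to 2.0, where the format is passed as a header attribute):

curl -X POST http://localhost:5000/invocations \
  -H "Content-Type: application/json; format=pandas-split" \
  -d '{"columns": ["feature_1", "feature_2"], "data": [[1.0, 2.0]]}'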

Building a Docker image with Models Manager

This is a more advanced option that allows us to build a Docker image with everything it needs inside, so we can deploy it on a CaaS and expose a public endpoint.

The command is also very simple:

mlflow models build-docker -m "runs:/{runid}/model" -n "{docker image name}"

And then, run the Docker image with a command like the one sketched below.
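
A minimal sketch, assuming the image serves the model on MLflow's default container port 8080 (this may differ depending on the MLflow version used to build the image):

docker run -p 5005:8080 "{docker image name}"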

With this mapping, the service is exposed on port 5005 on the host, and we can call the REST API in the same way as in the previous example.
