Serve Models from MLflow as a Microservice
In the new platform version, models can be deployed from the Models Manager in a centralized way: any model uploaded to the Models Manager can be deployed directly and automatically on the platform as a container/pod in the target CaaS, with platform security applied to it out of the box.
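As background on how a run becomes a container, the standard MLflow tooling can already build a servable Docker image from a run. The following is a minimal sketch, where the run ID, image name, and the `model` artifact path are placeholders for your environment:

```
# Build a Docker image that serves the model logged under the given run
# (run ID and image name are placeholders; "model" is the artifact path
# the model was logged under).
mlflow models build-docker -m "runs:/0a1b2c3d4e5f/model" -n "my-model-image"
```

The resulting image exposes the MLflow scoring server on port 8080 inside the container, which is what allows it to be scheduled as a container/pod on the target CaaS.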
The platform's microservices deployment module will be used:
An existing experiment can be selected from the Models Manager, and once it is selected, a run of that experiment is chosen to deploy.
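As an illustration, assuming the Models Manager fronts a standard MLflow tracking server, the runs of an experiment can also be listed from the MLflow CLI; the tracking URI and experiment ID below are placeholders:

```
# Point the CLI at the tracking server behind the Models Manager
# (URL is a placeholder for your environment).
export MLFLOW_TRACKING_URI=http://mlflow.example.com:5000

# List the runs of the selected experiment; the run ID column is the
# value used in the deployment step.
mlflow runs list --experiment-id 1
```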
The Models Manager itself also provides the capability to generate containers from the run ID of an experiment.

Serving ML Models from the Command Line
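For reference, a run can also be served directly with the standard MLflow CLI. A minimal sketch, assuming MLflow 2.x, where the run ID, host, port, and the `model` artifact path are placeholders:

```
# Start a local REST scoring server for the model logged under a run
# (run ID, host, and port are placeholders).
mlflow models serve -m "runs:/0a1b2c3d4e5f/model" -h 0.0.0.0 -p 5000
```

Once the server is up, it can be queried over HTTP; the `dataframe_split` payload format is the MLflow 2.x convention, and the column and values below are illustrative only:

```
curl -X POST http://localhost:5000/invocations \
  -H "Content-Type: application/json" \
  -d '{"dataframe_split": {"columns": ["feature_1"], "data": [[1.0]]}}'
```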