How to deploy a new service in Rancher?

If we want to deploy additional services in Rancher alongside the Onesait Platform services, or services that are not included in Onesait Platform’s microservices management, we should follow these guidelines:

 

Add a private registry of Docker images

If we are using a private registry protected by username and password to store our Docker images, we will have to register it in the CaaS to delegate authentication to Rancher. To do this, once logged in to Rancher, we will navigate to Infrastructure → Registries:

There, we will add our registry as “Custom”:

Once this is done, we should see it as registered:

This way, we will avoid having to run “docker login” on the VMs where our services will be deployed.
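For reference, once the registry is registered, a service definition only needs to reference the image by its full registry path and Rancher will supply the stored credentials when pulling it. A minimal sketch, where the registry host, image name and tag are placeholder values:

    version: '2'
    services:
      my-service:
        # Image hosted in the private registry registered above; Rancher injects
        # the stored credentials, so no "docker login" is needed on the hosts.
        image: registry.mycompany.com/my-project/my-service:1.0.0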

Create a new stack (independent of the Onesait Platform stack)

To avoid problems such as accidentally stopping, updating or even deleting a Platform module, and to ensure that our services are deployed in isolation, we will create a new stack. To do this, we will choose the environment in which we want to deploy:

Once inside we can see the link that allows us to create the stack:

We will click on “Add Stack”, which opens a form where we can enter the name of the stack and even supply a docker-compose (version 2). If we click on advanced options, we can add one or more tags (we will see what they are for later). Finally, the “Start services after creating” checkbox will start all the services included in the docker-compose as soon as the stack has been created.
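In this guide we will leave that field empty and add the services from the UI afterwards, but for reference, a supplied docker-compose (version 2) could look like the following minimal sketch, where the service names, image and environment variable are placeholder values:

    version: '2'
    services:
      my-api:
        # Placeholder image; in a real deployment this would be one of our images
        image: registry.mycompany.com/my-project/my-api:1.0.0
        environment:
          LOG_LEVEL: INFO
        restart: unless-stopped
      my-cache:
        # Auxiliary service deployed in the same stack
        image: redis:5
        restart: unless-stopped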

We click on “Create” and we can see our new stack, in this case empty, because we haven’t supplied any docker-compose.

Deploy a service within the stack

Once the stack is created, we can start deploying our services by clicking on “Add Service”, located in the upper right corner, next to the name of the stack:

We will see the following form, where we can enter these parameters:

  • Service name, which is how other containers discover and invoke it, a description, and the image with its version or tag. We can also indicate how many containers of this service will be deployed with the “Scale” option, and with the “Always pull image before creating” checkbox we can make Rancher inspect the registry during an upgrade for changes to the image with the same tag and download it.

  • Port mapping and service links between services. The “Port Map” option allows us to expose a container port on the host. “Service Links” allow us to invoke services in other stacks from our container by their service name, instead of indicating the stack name or defining an alias.

It is not recommended to map ports if other services already expose ports on the virtual machine, since it could cause port collisions.

To invoke a service in another stack, we should address it as follows: <service_name>.<stack_name>
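For example, in a docker-compose definition, a service in another stack could be referenced through an environment variable using that naming scheme; the service, stack and variable names below are hypothetical:

    version: '2'
    services:
      my-api:
        image: registry.mycompany.com/my-project/my-api:1.0.0
        environment:
          # "config-db" is a service deployed in the "shared-services" stack;
          # Rancher's internal DNS resolves <service_name>.<stack_name>.
          DB_HOST: config-db.shared-services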

  • In the lower box we have multiple configuration options (a compose-style sketch of several of them is included after this list):

    • “Command” tab: We can define environment variables, the automatic restart mode if the container stops, the container user, a different entry point…

       

    • “Volumes” tab: It allows us to persist information from the container to the filesystem of the host where the container is deployed.

       

    • “Networking” tab: In this tab we can define which type of network our containers will use (by default, the network managed by Docker). We can also add additional DNS or name resolution servers, set a hostname for the container…

       

    • “Security/Host” tab: Here we can set options to limit the container’s access to host resources (memory, CPU…). We can also configure logging parameters in this tab.

       

    • “Secrets” tab: It allows us to store private information, such as database access passwords.

       

    • “Health Check” tab: This tab is important, since it determines part of the high availability of our services. Here we can define the port of the containerized application and its protocol (TCP/HTTP), and tell the CaaS what to do if the port or service stops responding. We will normally choose to recreate the service so that, if it crashes, it is started again.

       

    • “Labels” tab: It allows us to add labels to our service so that we can later define deployment rules based on them.

       

    • “Scheduling” tab: Together with the previous tab, it allows us to define scheduling rules based on labels, for example, that the service is deployed only on the nodes labeled as “DEV”.
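Most of these tabs have a compose equivalent. The following is a hedged sketch, with placeholder names, paths and values, of how the “Command”, “Volumes”, “Networking”, “Security/Host”, “Labels”/“Scheduling” and “Health Check” options could be expressed as a docker-compose (version 2) plus rancher-compose pair; the Rancher-specific keys (the io.rancher.* scheduling label and the health_check block) should be checked against the Rancher compose reference for the version in use:

    # docker-compose.yml
    version: '2'
    services:
      my-api:
        image: registry.mycompany.com/my-project/my-api:1.0.0
        # "Command" tab: environment variables, container user, entry point, restart mode
        environment:
          LOG_LEVEL: INFO
        user: '1001'
        entrypoint: ["/app/entrypoint.sh"]
        restart: unless-stopped
        # "Volumes" tab: persist container data on the host's filesystem
        volumes:
          - /data/my-api/config:/app/config
        # "Networking" tab: additional DNS servers and container hostname
        dns:
          - 8.8.8.8
        hostname: my-api
        # "Security/Host" tab: limit host resources and configure logging
        mem_limit: 1g
        cpu_shares: 512
        logging:
          driver: json-file
          options:
            max-size: "10m"
            max-file: "3"
        # "Labels" + "Scheduling" tabs: deploy only on hosts labeled env=DEV
        labels:
          io.rancher.scheduler.affinity:host_label: env=DEV

    # rancher-compose.yml ("Health Check" tab)
    version: '2'
    services:
      my-api:
        health_check:
          port: 8080                            # port of the containerized application
          request_line: GET /health HTTP/1.0    # omit for a plain TCP check
          interval: 2000
          response_timeout: 2000
          healthy_threshold: 2
          unhealthy_threshold: 3
          strategy: recreate                    # recreate the container if the check fails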

       

Finally, we will create our service and we will see it in the stack:

Service life cycle management

Now that the service is created, we can manage its life cycle.

From service options we are allowed to:

  • Upgrade: Update the definition of the service or its version, then finish the upgrade or roll it back (rollback)

  • Restart: Restart every container in the service

  • Stop: Stop the service without losing the non-persisted information of the container

  • Delete: Delete the container; however, the service will recreate it according to the configured replication factor (minimum 1)

  • View in API: Shows the associated REST API, which offers all the operations described above via REST

  • Clone: It will allow us to clone the service with the same definition

  • Edit: Without having to restart the containers, we can modify some of their values (Service links, service name…)

     

In addition, with the “Scale” option we can modify the number of running containers depending on the load:

In this example the scale factor is set to 3, so the CaaS (Rancher) ensures that the number of containers always stays the same and, if one of them goes down, it automatically starts it again. Moreover, the service performs round-robin balancing between the three containers.
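If the service is managed through compose files instead of the UI, the same scale factor could be declared in rancher-compose.yml; a minimal sketch with a hypothetical service name:

    # rancher-compose.yml
    version: '2'
    services:
      my-api:
        scale: 3   # Rancher keeps exactly three containers of this service running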

Service monitoring

If we click on any of the containers of a service, we will see a monitoring summary of it, including:

  • CPU usage

  • Memory usage

  • Network usage

  • Storage usage

  • IP

  • Container generated name

  • Container image

  • Lifetime

Also, in the available options of the container, we can see the logs dumped to standard output:

Finally, for operational tasks we can access the container shell and explore its filesystem: