How to deploy onesait Platform on Rancher CaaS?
Introduction
The platform is ready to be deployed completely containerized on Docker, as you can see in this other tutorial: (Deployment) How to execute the platform with Docker
However, in production environments, you may want to have a CaaS platform to help you manage all the Docker containers in execution.
When we offer the platform under a PaaS approach (Platform as a Service), we use RedHat OpenShift as our support CaaS platform.
But the platform also works on other CaaS platforms. If we have to deploy the platform on an On-Premise infrastructure provided by the customer, we can adapt to the Kubernetes platform used by the client. In other scenarios, the client has no platform. In that case, we suggest Rancher, a lightweight, open-source container management platform that is easy to set up, as we will see...
How can you install Rancher in an environment with one Master and one Worker?
To install Rancher, you need at least two machines (virtual or physical) running a 64-bit Linux. The Rancher server will be installed on one of them (the Master), and the platform's VM will be the other (the Worker).
NOTE: If needed, all of it can be installed in a single, more powerful VM.
Onesait Platform minimal requirements
- Laptop or additional host with Docker installed.
- Docker 17 or greater
- On target hosts:
- 1 VM for CaaS with 2/4 cores and 4/8 GiB RAM
- 1 VM for OP with 4/8 cores and 32/64GiB RAM
- OS: CentOS >= 7, RHEL 7, Oracle Linux >= 7 or Ubuntu >= 18 (experimental).
- 250GB of disk
- ssh access
- internal traffic enabled on ports 4500/udp, 500/udp and 8080/tcp, and external traffic on port 443/tcp
- in some environments, SELinux disabled
IN MASTER NODE:
1.- Install Docker CE in Master Node
You need a Docker version compatible with the Rancher version that will be installed; or, if you are going to reuse an existing Master node, a Docker version compatible with the Rancher version already running there (see the Rancher compatibility matrix).
-CentOS: https://docs.docker.com/install/linux/docker-ce/centos/
-Ubuntu: https://docs.docker.com/install/linux/docker-ce/ubuntu/
(Don't start Docker yet)
2.- Configure Docker
Before starting Docker, you must change the path where Docker stores all its information: images, containers, etc. If you don't change it, Docker will use the default directory, /var/lib/docker. In the commonly used Azure machines, this directory lives on a 35 GB ephemeral disk, which can fill up during execution and cause the machine to fail. We recommend using a disk with more space to avoid this problem, so that, in the worst case, a full disk does not bring down the whole machine. To configure this, edit the file /lib/systemd/system/docker.service (sudo vi /lib/systemd/system/docker.service) and add the -g option to the ExecStart configuration, as in the following example:
ExecStart=/usr/bin/dockerd -g /datadrive/docker
Finally, with the Docker service stopped, if there are files or directories in /var/lib/docker, you must move them to the new directory.
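The move itself can be scripted; a minimal sketch, assuming the Docker service is already stopped (the migrate_docker_data helper name is ours, not a Docker command):

```shell
# Sketch of the data move, assuming the Docker service is stopped.
# migrate_docker_data is a helper name introduced here, not part of Docker.
migrate_docker_data() {
  local old="$1" new="$2"
  mkdir -p "$new"
  # cp -a preserves ownership, permissions and symlinks, which Docker requires
  cp -a "$old"/. "$new"/
}
# Real usage, as root and with Docker stopped:
#   systemctl stop docker
#   migrate_docker_data /var/lib/docker /datadrive/docker
```

Note that recent Docker releases deprecate the -g flag in favour of --data-root; both point dockerd at the new directory.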
Add your repository as an insecure registry in Docker's daemon configuration:
sudo vi /etc/docker/daemon.json
and add:
{ "insecure-registries" : [ "registry.onesaitplatform.com" ] }
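Since a malformed daemon.json prevents dockerd from starting, it is worth validating the JSON before installing it. A quick sketch, staging the file in /tmp first:

```shell
# Sketch: stage the registry exception in /tmp and validate the JSON before
# installing it as /etc/docker/daemon.json (a syntax error there stops dockerd).
cat > /tmp/daemon.json <<'EOF'
{ "insecure-registries" : [ "registry.onesaitplatform.com" ] }
EOF
python3 -m json.tool /tmp/daemon.json   # fails loudly on a syntax error
```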
3.- Start Docker CE
Start it with >sudo systemctl start docker
and check that it started correctly by running the hello-world image:
>sudo docker run hello-world
Configure Docker to start at the VM's boot:
>sudo systemctl enable docker.service
4.- Install Docker Compose
As Rancher is deployed with Docker Compose, you must install a docker-compose version that is compatible with Docker, with Rancher, and with the version of the yml files of the modules in the Master node.
To do this, download the latest Docker Compose version (you can see the releases here: https://github.com/docker/compose/releases)
>sudo curl -L https://github.com/docker/compose/releases/download/1.22.0/docker-compose-$(uname -s)-$(uname -m) -o /bin/docker-compose
and give permissions >sudo chmod +x /bin/docker-compose
And finally check the version with >sudo docker-compose -v
(Steps in https://docs.docker.com/compose/install/#install-compose)
5.- Install and execute the Rancher Server
Go to the user's directory (In this case, /home/administrador/)
Do >sudo mkdir -p /datadrive/onesaitplatform/rancher
Create a .env file with this content:
REPOSITORY=registry.onesaitplatform.com
SERVERNAME=sofia2-dockerstack.westeurope.cloudapp.azure.com
HOST_VOLUME_PERSIST=/datadrive/onesaitplatform/rancher
and create a docker-compose.yml file (to execute Rancher on Docker Compose) like this one:
version: "2.1"
services:
  rancherinstance:
    image: rancher/server:latest
    container_name: rancherserver
    ports:
      - "8080:8080"
    volumes:
      - ${HOST_VOLUME_PERSIST}:/var/lib/mysql:rw
    privileged: true
    networks:
      - datanetwork
networks:
  datanetwork:
    ipam:
      driver: default
      config:
        - subnet: 172.28.0.0/16
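docker-compose substitutes the .env variables into docker-compose.yml at launch time. A quick way to check the values it will see (staged under /tmp here just for illustration; in the real deployment the .env file sits next to docker-compose.yml):

```shell
# Sketch: the .env values used by the compose file above. SERVERNAME is the
# Azure DNS name from the tutorial and must be replaced with your own.
cat > /tmp/.env.rancher <<'EOF'
REPOSITORY=registry.onesaitplatform.com
SERVERNAME=sofia2-dockerstack.westeurope.cloudapp.azure.com
HOST_VOLUME_PERSIST=/datadrive/onesaitplatform/rancher
EOF
set -a; . /tmp/.env.rancher; set +a   # export them as docker-compose would read them
echo "Rancher data will persist in: $HOST_VOLUME_PERSIST"
```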
You must open UDP ports 500 and 4500 for Rancher to work correctly, and you must also check that the firewall is not blocking those ports.
Once you have created the docker-compose.yml file, go to the directory and launch >sudo docker-compose up -d
With this, you have launched the Rancher console at the address you configured, and you can access it at http://<VM_public_IP>:8080
(NOTE: Bear in mind that the TCP port 8080 must also be open).
6.- Create environment in Rancher
Once Rancher is installed in the Master node, you must create an Environment from the Rancher console, to deploy the platform in the Worker node. To do this, from Manage Environments --> Add Environment, create an environment (name and description).
Select Cattle as your template.
After this, you will add a new host to the Environment; later, you will deploy the Open Platform, containerized, on it.
7.- Install Rancher agent in the Open Platform's node
In the just-created environment:
select Infrastructure --> Hosts --> Add Host
It will ask you if you want to use that registration URL. Validate if it is correct, and go on:
Leave the Custom value
In the Label section, assign the key "NODE" and a name. Commonly the latter is "Worker-N"
where N is the Rancher worker's sequential, numeric identifier in the cluster.
You must also give the private IP of the machine you want to add to the cluster, in this case:
Once you've filled in those two fields, select the Docker command you have to copy and launch in the VM you are adding to the Rancher cluster.
Start by installing Docker in the Worker node, following the same steps as before.
-CentOS: https://docs.docker.com/install/linux/docker-ce/centos/
-Ubuntu: https://docs.docker.com/install/linux/docker-ce/ubuntu/
and remembering the configurations:
You will change the path where Docker stores all its information:
sudo vi /lib/systemd/system/docker.service
and add option -g to the ExecStart configuration, as seen in the following example:
ExecStart=/usr/bin/dockerd -g /datadrive/docker
sudo vi /etc/docker/daemon.json
and add:
{ "insecure-registries" : [ "registry.onesaitplatform.com" ] }
Start Docker: >sudo systemctl start docker
Copy the text specified in the Rancher console, paste it and execute it in the Worker node's shell:
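For reference, in Rancher 1.x the registration command has roughly this shape; <master_ip> and <TOKEN> are placeholders, and the agent image tag depends on your server version, so always use the exact command your own console shows (including the CATTLE_HOST_LABELS it generates from the labels you set):

```shell
# Illustrative shape of the Rancher 1.x "Add Host" registration command.
# Do NOT run this literally: copy the exact command from your Rancher console.
REGISTER_CMD='sudo docker run -e CATTLE_HOST_LABELS="NODE=Worker-1" --rm --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher \
  rancher/agent:v1.2.11 http://<master_ip>:8080/v1/scripts/<TOKEN>'
echo "$REGISTER_CMD"
```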
When running this command, Docker will download the needed images to activate the Rancher agent. This will take some minutes.
Once downloaded, in the Rancher environment's infrastructure (Infrastructure --> Hosts), you can see the Cattle agent's different services starting up (scheduler, healthcheck, ip-sec, etc.):
Once these steps are completed, you have deployed Rancher and you also have a node installed as Worker in Rancher to install the platform.
Deploying the Rancher-managed onesait Cloud Platform
1.- Generate the deployment files in Rancher
In the Platform's Git project (https://github.com/onesaitplatform/onesaitplatform-cloud/tree/1.1.0-ce/devops/build-deploy/rancher/onesaitplatform) directory, you will find the needed scripts and templates to generate the files that you will load in Rancher to deploy the Open Platform.
The config.properties file, in the onesaitplatform-cloud/devops/build-deploy/rancher/onesaitplatform/scripts directory, has customizable values depending on the installation:
- PROJECT_NAME → project name.
- WORKER2DEPLOY → the node or host where you will deploy the onesait Platform. You must provide the Rancher label, with the NODE key, that identifies it.
- DOMAIN_NAME → the DNS name through which you will access this onesait Platform instance.
- IMAGE_TAG → a tag identifying the version of the Docker images that will be deployed.
In the example, having the IP 139.59.133.69 for our worker:
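A hypothetical config.properties along those lines; only the four property names come from the documentation above, and every value shown here is illustrative and must be adapted to your installation:

```shell
# Illustrative config.properties for the example worker (values are assumptions;
# only the property names are documented). Staged in /tmp for the example.
cat > /tmp/config.properties <<'EOF'
PROJECT_NAME=onesaitplatform_cloudlabs
WORKER2DEPLOY=Worker1
DOMAIN_NAME=lab.onesaitplatform.com
IMAGE_TAG=1.1.0-ce
EOF
cat /tmp/config.properties
```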
2.- Launch the deployment files in the Worker node
Once the previous configuration file is edited, copy the four files to the VM serving as Worker. In this case, create a deploy folder in the user's home and upload the four files:
Give execution permission to the file and launch the generate-templates.sh script:
This script generates, in the folder where you are, a directory with the project's name (onesaitplatform_cloudlabs-Worker1) with the Rancher files, from the docker-compose-ce.yml and rancher-compose.yml templates.
There are two .yml-extension files in this directory:
docker-compose.yml → This file is compatible with Docker Compose and defines services, images, tags, networks, volumes, etc.
rancher-compose.yml → Rancher-specific configuration (number of replicas for each service, start-up order, etc.).
NOTE: These two files can be manually modified if needed.
3.- Create persistent volume directories
Before launching the deployment in the Worker node, you need to create directories in the host or hosts (your Worker) where the deployment will be made.
These directories are those that the different containers will need to dump data in their volumes, and you can see them in the generated docker-compose.yml file:
To create the volume directories, execute the create-directories.sh script, located in the scripts directory previously uploaded.
In this example, they are located in the host's external disc, in /datadrive, and you need the following ones:
- /datadrive/onesaitplatform/platform-logs → directory with each container's logs.
- /datadrive/onesaitplatform/nginx/ → directory to store nginx configuration.
- /datadrive/onesaitplatform/nginx/certs → directory to store self-signed certificates, typically mapped to /etc/nginx/ssl in the nginx container.
- /datadrive/onesaitplatform/configdb → directory with ConfigDB data.
- /datadrive/onesaitplatform/realtimedb → directory with RealTimeDB data on Mongo.
- /datadrive/onesaitplatform/schedulerdb → directory with scheduler database data.
- /datadrive/onesaitplatform/elasticdb → directory with ElasticSearch indexer data.
- /datadrive/onesaitplatform/flowengine → directory where the FlowEngine flows are stored (Node RED).
- /datadrive/onesaitplatform/webprojects → directory containing the web projects.
- /datadrive/onesaitplatform/export → directory for RTDBMaintainer.
- /datadrive/onesaitplatform/zeppelin/notebook → directory to store Notebooks.
- /datadrive/onesaitplatform/zeppelin/conf → directory for Notebooks configuration.
- /datadrive/onesaitplatform/kafka-logs → directory with Kafka persistence.
- /datadrive/onesaitplatform/streamsets/data → directory for DataFlow data.
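The list above can be created in one go. A minimal sketch of what create-directories.sh presumably does, with the base path made overridable so it can be tried outside the Worker (on the real host, BASE is /datadrive):

```shell
# Sketch of the volume-directory creation. BASE defaults to a scratch path so
# the snippet can be tried anywhere; on the Worker, run it with BASE=/datadrive.
BASE="${BASE:-/tmp/datadrive}"
for d in platform-logs nginx/certs configdb realtimedb schedulerdb elasticdb \
         flowengine webprojects export zeppelin/notebook zeppelin/conf \
         kafka-logs streamsets/data; do
  mkdir -p "$BASE/onesaitplatform/$d"
done
ls "$BASE/onesaitplatform"
```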
4.- Generate self-signed certificates
To generate self-signed certificates, go to the /datadrive/onesaitplatform/nginx/certs directory and execute the certificate generation commands.
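The generation itself can be done with OpenSSL. The following is one plausible sequence producing the ca.pem / platform.cer pair verified below; the key file names, subjects and 365-day validity are assumptions, and the -subj flags make the commands non-interactive (omit them to be prompted for the fields described next):

```shell
# One plausible way to produce ca.pem / platform.cer (names from the tutorial);
# subjects, key names and validity are assumptions. The server CN must be the
# DNS name or IP from which you will access the platform.
cd /tmp   # on the real host: cd /datadrive/onesaitplatform/nginx/certs
# 1) CA key and self-signed CA certificate
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.pem \
  -days 365 -subj "/CN=onesaitplatform-ca"
# 2) Server key and certificate signing request (CN = access DNS or IP)
openssl req -newkey rsa:2048 -nodes -keyout platform.key -out platform.csr \
  -subj "/CN=lab.onesaitplatform.com"
# 3) Sign the request with the CA, producing platform.cer
openssl x509 -req -in platform.csr -CA ca.pem -CAkey ca.key \
  -CAcreateserial -out platform.cer -days 365
```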
When you run it, it will ask you for a number of parameters such as city, country, company, workgroup, etc. Fill in all the requested parameters, paying special attention to the "common name": there, you must insert the DNS name or IP from which you are going to access the platform. Validate the correct generation of the certificate by executing:
>openssl verify -CAfile ca.pem platform.cer
5.- Create configuration file for the nginx proxy
In order to expose the platform publicly, we need a balancer; we will use nginx as a reverse proxy. Copy the configuration file located at <upload_directory>/deploy/op_config/nginx.conf to the directory /datadrive/onesaitplatform/nginx/ and give the file 777 permissions. Also copy the configuration files of the rest of the onesait services from <upload_directory>/deploy/op_config/conf.d to /datadrive/onesaitplatform/nginx/conf.d
If you need more redirections for custom services, include the configuration of the new service in the /datadrive/onesaitplatform/nginx/conf.d directory. This directory is mapped to /usr/local/conf.d within the container. Finally, you must tell nginx to load that file using an include directive.
The configuration would resemble this:
You must pay special attention to the "server_name" parameter, which determines the DNS or IP address the server will listen to (lab.onesaitplatform.com in the shown configuration). The container will not start if there are locations pointing to services that are not deployed.
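As an illustration only, a per-module redirection file might look like the one below; the upstream service name and port are invented here, so take the real values from the files in op_config/conf.d (the file is staged in /tmp for the example, but belongs in /datadrive/onesaitplatform/nginx/conf.d on the host):

```shell
# Hypothetical redirection file for one module; the upstream name
# "dashboardengineservice" and port 18300 are illustrative only.
cat > /tmp/dashboardengine.conf <<'EOF'
location /dashboardengine/ {
    proxy_pass http://dashboardengineservice:18300/dashboardengine/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}
EOF
# nginx.conf then loads it from inside the server block with a line such as:
#   include /usr/local/conf.d/dashboardengine.conf;
```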
6.- Create a Stack in Rancher
After these steps, go back to the Rancher console (in this case, http://10.3.1.4:8080/)
In the previously-created environment, you must add one Stack.
Give it a name representing that installation (container group), in this case OpenPlatform-CloudLabs
We will upload the generated compose files from the directory <upload_directory>/scripts/<project_name>-<worker_tag>/
The easiest way is to copy those two files to your local machine and attach them from there.
Now, in "Advanced Options", specify a tag useful for your deployment (such as "CloudLabs") and disable the "start services after creating" check.
This check must remain disabled because you need to follow a manual boot order.
Finally, you will have:
Now you can click
The stack creation starts:
Finally the containers will be assigned:
7.- Start the Platform's containers
Your stack contains the platform's containers.
The containers must be launched following a specific order for the platform to work with stability.
- Firstly, launch the configdb service, which starts the Configuration Database. Launch it with
- Once the configdb is active, launch realtimedb, which starts the Real Time Database on MongoDB.
- Once running, launch quasar, the SQL query engine on the RealTimeDB.
- Launch schedulerdb, the database storing the Quartz schedules.
- After this, launch a special, ephemeral container, configInit, which will populate the ConfigDB and the RealTimeDB. This container is executed, loads the data, and then dies. In first place, execute the service with the LOADMONGODB variable. If you go to the service and click View Logs, you can see whether the data is loading correctly:
The service will stop once finished, due to the start_once label. Through the right-side menu, you can check that there have not been any errors.
- Next, execute the service again, doing an Upgrade first and setting the LOADMONGODB variable to true, in order to load data into the Mongo database.
Once finished, it will stop, and the databases will be ready to deploy the onesait Platform modules.
The next service to run is controlpanelservice, corresponding to the Platform's Development Console.
This container takes some minutes to start. You can see its logs in the host machine or in the mapped directory inside the container.
If the service did not have any start-up errors, you should see the following trace:
The container runs on port 18000. If you are in a VPN, you can access it using its internal IP:
http://10.3.1.6:18000/controlpanel/login
If you are on the Internet and the ports are open, use its external IP.
Anyway, what you want is to map this port and path (18000:/controlpanel) to a URL like https://<domain>/controlpanel. To do this, you must start and configure the service called loadbalancerservice (an instance of NGINX). You will do this at the end, so that NGINX can balance all the HTTP services.
- Start the loadbalancerservice service. Once started, go to the service and check the logs to confirm there was no configuration error. If everything is OK, you should see a dump of your nginx.conf, followed by "Starting nginx" and a number of lines showing the calls going through it.
Now at last you can access your onesait Cloud Platform instance at https://<server_name>/controlpanel
- username/password: administrator/Community2019!
8.- Commonly, you will want to deploy most, if not all, of the platform's modules. To do so, first start the services you want to use. Once started, uncomment the redirections (include /usr/local/conf.d/<module>.conf) to those modules in the nginx configuration file.
- Stop the loadbalancer service
- Start the dashboards service, which is the one we want to activate. Since it depends on the router service, start that service too, which in turn depends on the cacheserver.
- Uncomment the include instructions of the dashboardengine and the router service of the nginx configuration file in the host directory /datadrive/onesaitplatform/nginx/nginx.conf
- The files must be in the /datadrive/onesaitplatform/nginx/conf.d directory
- If missing, copy them from the downloaded directory op_config/conf.d
>cp /root/deploy/op_config/conf.d/* /datadrive/onesaitplatform/nginx/conf.d
9.- Start the loadbalancerservice
10.- Validate that you can access the deployed module:
- From the main menu (Visualization/Dashboards Management)
- Access the public dashboard VisualizeOpenFlightsData
11.- You still have to configure Rancher, for instance by assigning users and permissions to the Rancher manager. To do this, click on Admin > Accounts. On this page, you will see a list with all the users registered in Rancher.
By clicking on "Add Account", you will see a window where you can register another user. You only have to fill in a small form, choosing the role you want to associate to the newly-created user. If you give her an administrator role, your new user can see all the environments you have deployed on Rancher. On the other hand, if you give her the user role, she will only see those environments an administrator has given her permission to see. To associate users and environments, you only have to click "Manage Environments" (if your user is an administrator). Here you can see a list of the deployed environments in your Rancher.
By clicking on the three vertical points in each environment you want to associate your user, the following box will appear:
You only need to add to the "Access Control" list the name of the user you want to associate, then click on save. The next time that user logs in, she will have access to the Environments you associated to her.
Managing and Monitoring the platform with Rancher
Once you have deployed the containers in Rancher, you should see something like this:
In this view, you can see the services' state. By clicking the drop-down, you can see a number of options:
- Upgrade: allows you to modify the service launching options, such as volumes, environment variables, etc. When upgrading, Rancher keeps the previous active service, giving you the option to roll back.
- Restart: restarts the service.
- Stop: stops the service.
- Delete: deletes the service.
- View in API: takes you to a page with that service's configuration in JSON format, and shows the endpoints to call to manage the service via REST API.
- Clone: clones a service.
- Edit: allows you to modify scalability, name and links with other services without needing to upgrade.
By clicking on a service, you enter that service's configuration. This screen gives you the following information and functionalities:
- Scale: reports the number of containers associated to that service that will be deployed. You can modify the value.
- Image: gives you the url from where the image was downloaded.
In the Containers tab, you will see a list of the containers associated to that service. Each column gives you the following information:
- State: container's state. It can be: Running, Restarting, Upgrading, Upgraded, Stopping or Failed.
- Name: name that Rancher associates to that container.
- IP Address: IP that rancher gives to that container.
- Host: machine where the container is deployed.
- Image: repository from where Rancher downloaded the image.
- Stats: monitoring the resources that the container consumes.
The drop-down of each container gives these options:
- Restart: restarts the container.
- Stop: stops the container. (Unless otherwise configured, Rancher will launch the stopped container again if the associated service is still active).
- Delete: deletes the container. (Depending on the value of "Scale", Rancher will create a new container according to the scalability associated to the service).
- Execute Shell: opens a window where we can access the selected container's console.
- View Logs: shows a window with the logs sent by the container.
- View in API: JSON schema with the container's configuration and a number of endpoints to interact via REST API with the container.
- Edit: allows you to modify the container's name and services associated to it.
We suggest that, every time a service is started, you enter the container or containers and check in the logs that the deployment was performed correctly. When all the services are deployed and their state is Active, the platform is deployed.