Introduction
To deploy onesait Platform Community, the following prerequisites must be met:
If you prefer a video tutorial, you can watch this one:
https://www.youtube.com/watch?time_continue=5&v=ZcLdEhI5Lfg
Prerequisites:
1- Have Docker Community Edition installed
https://docs.docker.com/install/
2- Have Docker Compose installed
https://docs.docker.com/compose/install/
Once Docker is installed, on Windows or macOS environments you must increase the memory allocated to the Docker service. We recommend at least 8 GB of memory and 2 GB of swap.
3- Have Docker File Sharing configured properly (Windows only)
Docker for Windows needs your PC user credentials to access the file system for container volume mapping.
They can be set under Settings > Reset Credentials.
Once the requirements are satisfied, it is possible to deploy the platform.
Deploy
Step 1: Download the docker-compose files that launch the platform's services
These files, in YML format, are in the platform's GitHub repository. Clone the repository locally and check out the master branch:
> git clone https://github.com/onesaitplatform/onesait-cloud-platform-community-deploy.git
> cd onesait-cloud-platform-community-deploy
> git fetch origin master
> git checkout master
Step 2: Launch the persistence services
Place yourself in the directory containing the docker-compose.yml that launches the database services:
cd onesait-cloud-platform-community-deploy/op_data
You can see several files in that directory:
/.env
Check that the keys have the following values:
key | value |
---|---|
REPOSITORY | |
PERSISTENCE_TAG | mariadb |
MODULE_TAG | 2.0.0-ce |
MONGO_TAG | latest-noauth |
QUASAR_TAG | 14 |
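Put together, the op_data/.env file should look roughly like this (a sketch built from the values in the table above):

```shell
# op_data/.env -- expected key/value pairs (REPOSITORY is left empty here)
REPOSITORY=
PERSISTENCE_TAG=mariadb
MODULE_TAG=2.0.0-ce
MONGO_TAG=latest-noauth
QUASAR_TAG=14
```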
/docker-compose.yml: to launch the databases without persistence, run the following command from the same directory as the file:
> docker-compose up -d
IMPORTANT: If the databases are launched ephemerally, all data will be lost when the services are stopped with docker-compose down.
/docker-compose.persistent.yml: if you want the databases to persist, you must set in the .env file the directories on your machine where the data will be stored. There is no default value. As an example, you can use:
key | example value |
---|---|
REALTIME_VOLUME | /Users/devopsuser/realtimedbdata |
CONFIGDB_VOLUME | /Users/devopsuser/configdbdata |
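Written into the .env file, those entries would look like this (the paths are only examples; use directories that exist on your machine):

```shell
# op_data/.env additions for persistent volumes (example paths)
REALTIME_VOLUME=/Users/devopsuser/realtimedbdata
CONFIGDB_VOLUME=/Users/devopsuser/configdbdata
```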
From the terminal, run the following command:
> docker-compose -f docker-compose.persistent.yml up -d
Once the databases have been launched, you can see the status of each with the following command:
> docker ps
You should see an output like this one:
If you want to see a database container log, you run:
> docker logs <container_name>
or, to follow the log (equivalent to a tail -f):
> docker logs -f <container_name>
Step 3: Populate the databases with data
In that same directory (onesait-cloud-platform-community-deploy/op_data), there is another docker-compose file, in charge of launching the initial data load service.
If you are using the machine's hostname, replace 'localhost' in the SERVER_NAME variable in docker-compose.yml with your machine's hostname.
As you did previously, you can run:
> docker-compose -f docker-compose.initdb.yml up
In this case you do not include the -d flag (detached mode) because the service stops on its own once its task is over.
Run it again to populate the realtimedb, after changing the variable to LOADMONGODB=true in docker-compose.initdb.yml:
> docker-compose -f docker-compose.initdb.yml up
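A quick way to flip that flag without opening an editor is sed. The snippet below is a sketch: it creates a stand-in file so it is self-contained, and it assumes the variable appears as LOADMONGODB=false in the real docker-compose.initdb.yml.

```shell
# Stand-in for docker-compose.initdb.yml (assumption: the real file has a
# line like "- LOADMONGODB=false" in the service's environment section)
printf '      - LOADMONGODB=false\n' > docker-compose.initdb.yml

# Flip the flag in place; -i.bak keeps a backup and works with GNU and BSD sed
sed -i.bak 's/LOADMONGODB=false/LOADMONGODB=true/' docker-compose.initdb.yml
```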
Step 4: Launch the platform's modules
As with the persistence services, we launch the platform's different modules with docker-compose, in start-up order, from the directory onesait-cloud-platform-community-deploy/op-modules:
Path | Module | DBs up required | Modules up required |
---|---|---|---|
/control-panel | Platform's web console. | configdb, schedulerdb, realtimedb, quasar | |
/webprojects | Platform web hosting | configdb, schedulerdb, realtimedb, quasar | controlpanel |
/router | Routing module | configdb, schedulerdb, realtimedb, quasar | |
/cacheserver | Cache server | configdb, schedulerdb, realtimedb, quasar | controlpanel |
/iotbroker | Platform's IoT broker. | configdb, schedulerdb, realtimedb, quasar | controlpanel, router, cacheserver |
/flowengine | Platform's Flow engine. | configdb, schedulerdb, realtimedb, quasar | controlpanel, router, cacheserver |
/api-manager | API Manager Module. | configdb, schedulerdb, realtimedb, quasar | controlpanel, router, cacheserver, iotbroker |
/oauth-server | Authentication server | configdb, schedulerdb, realtimedb, quasar | controlpanel, router, cacheserver |
/notebooks | Notebooks module | configdb, schedulerdb, realtimedb, quasar | controlpanel, router, cacheserver, iotbroker |
/dashboard-engine | Dashboard module | configdb, schedulerdb, realtimedb, quasar | controlpanel, router, cacheserver, iotbroker |
/rules-engine | Rules engine module | configdb, schedulerdb, realtimedb, quasar | controlpanel, router, cacheserver |
/dataflow | Dataflow module | configdb, schedulerdb, realtimedb, quasar | controlpanel, router, cacheserver, iotbroker |
/devicesimulator | Device simulator module | configdb, schedulerdb, realtimedb, quasar | controlpanel, router, cacheserver, iotbroker |
/digitaltwinbroker | Digital twin broker module | configdb, schedulerdb, realtimedb, quasar | controlpanel, router, cacheserver, iotbroker |
/monitoringui | Monitoring UI module | configdb, schedulerdb, realtimedb, quasar | controlpanel, router, cacheserver, iotbroker |
Each module's directory contains:
/.env file with the following environment variables:
key | value | description |
---|---|---|
REPOSITORY | | registry containing the service's image |
SERVERNAME | localhost | host machine's hostname |
MODULE_TAG | 2.0.0-ce | service image tag. |
docker-compose.yml file, containing the service's description, volume mapping, ports, subnets, etc.
If you are using the machine's hostname, replace 'localhost' in the SERVER_NAME variable (or SERVERNAME in the case of flowengine) in docker-compose.yml with your machine's hostname.
To launch each of these modules, go to the module's directory from the command line and execute:
> docker-compose up -d
For example, to launch the Control Panel, go to op-modules/control-panel and execute:
> docker-compose up -d
Step 5: Launch the reverse proxy service (NGINX)
Go to the folder onesait-cloud-platform-community-deploy/op-modules/nginx-proxy to launch a container with the NGINX service. It maps a volume with the configuration file (nginx.conf) needed to redirect incoming requests to the control panel.
If you need to launch any other module, you just have to run docker-compose up -d in the module's folder and uncomment the corresponding line in nginx.conf.
You must edit nginx.conf and replace the string ${SERVER_NAME} with the Docker host machine's hostname (which you can find by executing the command "hostname" from the command line) or with localhost.
nginx.conf before setting SERVER_NAME
server {
    listen 443 ssl;
    # Replace ${SERVER_NAME} with name obtained from console command output: "hostname"
    server_name ${SERVER_NAME};
nginx.conf after setting SERVER_NAME
server {
    listen 443 ssl;
    # Replace ${SERVER_NAME} with name obtained from console command output: "hostname"
    server_name user-pc;
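The substitution can also be scripted with sed. A sketch, using a one-line stand-in nginx.conf so the snippet is self-contained (the real file lives in op-modules/nginx-proxy):

```shell
# Stand-in fragment of nginx.conf; the placeholder is literal text in the file
printf 'server_name ${SERVER_NAME};\n' > nginx.conf

# Replace the placeholder with this machine's hostname; the $ is escaped so
# the shell does not expand it before sed sees the pattern
HOST="$(hostname)"
sed -i.bak "s/\${SERVER_NAME}/$HOST/" nginx.conf
```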
You must generate SSL certificates using the script generate-certificates.sh:
generate-certificates.sh
sh generate-certificates.sh
You need to set some variables inside the script first.
generate-certificates.sh variables
# Set IP and COMMONNAME
export IP="172.22.11.203"
export COMMONNAME="localhost"
# Comment out the SEP line for the OS you are not using
SEP="//"  # WINDOWS
#SEP=""   # OTHERS
The ./tls folder will be generated; you can then install the certificates on the host. This folder will be mapped into the container.
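To see roughly what the script produces, the following openssl command generates a comparable self-signed pair. This is a sketch, not the script itself: the file names selfsigned.crt and selfsigned.key are taken from the paths referenced in nginx.conf, and the CN value plays the role of COMMONNAME.

```shell
# Sketch: self-signed certificate/key pair, similar in spirit to what
# generate-certificates.sh produces (names match those referenced in nginx.conf)
mkdir -p tls
openssl req -x509 -nodes -newkey rsa:2048 \
  -keyout tls/selfsigned.key \
  -out tls/selfsigned.crt \
  -days 365 \
  -subj "/CN=localhost"
```

Being self-signed, the certificate will trigger a browser warning unless you install it as trusted on the host, as noted above.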
Once edited, you launch the service as you have done previously with other services. From the directory /nginx-proxy, execute:
> docker-compose up -d
And you can access the control panel by writing the following URL in the web browser: https://<hostname>/controlpanel
You can access with these users/passwords:
administrator/Community2019!
developer/Community2019!
analytics/Community2019!
Example: Launching another module
To provide a complete example, here is how to launch the dashboardengine module:
Commands to run the necessary containers (from the op_modules folder):
> # First, launch controlpanel
> cd control-panel
> docker-compose up -d
> cd ..
> # Then launch cacheserver and router
> cd cacheserver
> docker-compose up -d
> cd ..
> cd router
> docker-compose up -d
> cd ..
> # Then launch iotbroker
> cd iotbroker
> docker-compose up -d
> cd ..
> # Finally, launch dashboard-engine
> cd dashboard-engine
> docker-compose up -d
> cd ..
Uncomment the corresponding lines in ./op_modules/nginx-proxy/nginx.conf for the modules that are up (you can copy-paste the following and replace ${SERVER_NAME}):
nginx.conf for dashboardengine
user www-data;
worker_processes 4;
pid /run/nginx.pid;

events {
    worker_connections 4000;
    use epoll;
    multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # disable any limits to avoid HTTP 413 for large image uploads
    client_max_body_size 0;
    # required to avoid HTTP 411: see Issue #1486 (https://github.com/docker/docker/issues/1486)
    chunked_transfer_encoding on;
    server_tokens off;
    proxy_pass_header Server;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_disable "msie6";

    ##
    # Virtual Host Configs
    ##
    # Important for very long domain names
    server_names_hash_bucket_size 128;
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;

    server {
        listen 443 ssl;
        # Replace ${SERVER_NAME} with name obtained from console command output: "hostname"
        server_name ${SERVER_NAME};

        # SSL configuration (for https)
        add_header Strict-Transport-Security "max-age=31536000";
        ssl_certificate /usr/local/tls/selfsigned.crt;
        ssl_certificate_key /usr/local/tls/selfsigned.key;
        ssl_protocols SSLv2 SSLv3 TLSv1.1 TLSv1.2;
        ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK';

        # Uncomment if ControlPanel module is deployed
        # Required databases up (op_data): [ConfigDB, SchedulerDB, RealTimeDB, Quasar]
        include /usr/local/conf.d/controlpanel.conf;

        # Uncomment if Router (Semantic Inf. Broker) module is deployed
        # Required modules up and uncommented (op_modules): [ControlPanel]
        # Required databases up (op_data): [ConfigDB, SchedulerDB, RealTimeDB, Quasar]
        include /usr/local/conf.d/router.conf;

        # Uncomment if DigitalBroker (IoTBroker) module is deployed
        # Required modules up and uncommented (op_modules): [Router, ControlPanel]
        # Required databases up (op_data): [ConfigDB, SchedulerDB, RealTimeDB, Quasar]
        include /usr/local/conf.d/digitalbroker.conf;

        # Uncomment if you want to use web projects.
        # Required modules up and uncommented (op_modules): [ControlPanel]
        # Required databases up (op_data): [ConfigDB, SchedulerDB, RealTimeDB, Quasar]
        #include /usr/local/conf.d/web.conf;

        # Uncomment if you want to use APIs projects.
        # Required modules up and uncommented (op_modules): [Router, DigitalBroker, ControlPanel]
        # Required databases up (op_data): [ConfigDB, SchedulerDB, RealTimeDB, Quasar]
        #include /usr/local/conf.d/apimanager.conf;

        # Uncomment if DashboardEngine module is deployed
        # Required modules up and uncommented (op_modules): [Router, DigitalBroker, ControlPanel]
        # Required databases up (op_data): [ConfigDB, SchedulerDB, RealTimeDB, Quasar]
        include /usr/local/conf.d/dashboardengine.conf;

        # Uncomment if Notebooks module is deployed
        # Required modules up and uncommented (op_modules): [Router, DigitalBroker, ControlPanel]
        # Required databases up (op_data): [ConfigDB, SchedulerDB, RealTimeDB, Quasar]
        #include /usr/local/conf.d/notebook.conf;

        # Uncomment if FlowEngine (Nodered) module is deployed
        # Required modules up and uncommented (op_modules): [Router, DigitalBroker, ControlPanel]
        # Required databases up (op_data): [ConfigDB, SchedulerDB, RealTimeDB, Quasar]
        #include /usr/local/conf.d/flowengine.conf;

        # Uncomment if OauthServer module is deployed
        # Required modules up and uncommented (op_modules): [Router, DigitalBroker, ControlPanel]
        # Required databases up (op_data): [ConfigDB, SchedulerDB, RealTimeDB, Quasar]
        #include /usr/local/conf.d/oauthserver.conf;

        # Uncomment if DeviceSimulator module is deployed
        # Required modules up and uncommented (op_modules): [Router, DigitalBroker, ControlPanel]
        # Required databases up (op_data): [ConfigDB, SchedulerDB, RealTimeDB, Quasar]
        #include /usr/local/conf.d/devicesimulator.conf;

        # Uncomment if DigitalTwinBroker module is deployed
        # Required modules up and uncommented (op_modules): [Router, DigitalBroker, ControlPanel]
        # Required databases up (op_data): [ConfigDB, SchedulerDB, RealTimeDB, Quasar]
        #include /usr/local/conf.d/digitaltwinbroker.conf;

        # Uncomment if MonitoringUI module is deployed
        # Required modules up and uncommented (op_modules): [Router, DigitalBroker, ControlPanel]
        # Required databases up (op_data): [ConfigDB, SchedulerDB, RealTimeDB, Quasar]
        #include /usr/local/conf.d/monitoringui.conf;

        # Uncomment if RulesEngine Service module is deployed
        # Required modules up and uncommented (op_modules): [Router, DigitalBroker, ControlPanel]
        # Required databases up (op_data): [ConfigDB, SchedulerDB, RealTimeDB, Quasar]
        #include /usr/local/conf.d/rulesengineservice.conf;
    }
}
Then we restart the proxy module to reload the changes:
# from /op_modules
> cd nginx-proxy
> docker-compose down
> docker-compose up -d
Once done, you can use the dashboard module in the controlpanel!