Platform deployment strategy

onesait Platform Technology Base

Containerization. Why containers?

The onesait Platform is based on a microservice architecture written in Java using the Spring Boot framework. Each of these modules or microservices runs inside a Docker container. There are numerous reasons for choosing Docker as the packaging and runtime technology for these microservices. Among them, the following are foremost.

Return on investment and cost savings

The first advantage of using containers is the ROI.

The more you can reduce the costs of a solution, the greater its return.

In this sense, containers facilitate these savings by reducing infrastructure requirements: fewer resources are needed to run the same application.

Standardization and productivity

Containers ensure consistency across multiple environments and release cycles. One of the main advantages of containerization is standardization, as it provides replicable development, build, test and production environments. Standardizing the service infrastructure throughout the process allows each team member to work on an environment identical to production. As a result, engineers are better equipped to quickly and efficiently analyze and correct errors. This reduces bug-fixing times and time-to-market.

Containers allow you to make changes to images and control their versions; for example, if, when updating a component, you find anomalies in the operation of an environment, you can easily roll back to a previous version of your image. The whole process can be tested in a few minutes.
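As an illustration of this version control, pinning an explicit image tag in a compose file makes rollback a one-line change. The module name, registry and tags below are hypothetical, not the Platform's actual ones:

```yaml
# docker-compose.yml (fragment) -- hypothetical registry, module and tags
version: "3"
services:
  control-panel:
    # Pinning an explicit tag instead of "latest" makes rollback trivial:
    # change 2.1.0 back to 2.0.3 and run `docker-compose up -d` again.
    image: registry.example.com/onesaitplatform/controlpanel:2.1.0
    restart: unless-stopped
```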

Portability, compatibility and independence from the underlying operating system

One of the main advantages is parity. For containers, this means that images run the same no matter what server or laptop they run on, or what operating system hosts them, be it Windows, Linux or macOS.

For developers, this means less time spent setting up environments and debugging environment-specific issues, and a more portable and easy-to-configure code base. Parity also means that the production infrastructure will be more reliable and easier to maintain.

Simplicity and faster setups

One of the key benefits of containers is the way they simplify things. Users can keep their own configuration in code and deploy it without problems. Since the same container can be used in a wide variety of environments, infrastructure requirements are no longer tied to the application environment.

Agility in deployments

Containers reduce deployment time, because starting a container launches a single process instead of booting an entire operating system.

One of the most important advantages of containers over hypervisors is that a complete operating system is not replicated: the image contains only the minimum needed for the contained application to work properly, so images are much lighter and easier to move between environments. Furthermore, containers do not exclusively reserve operating-system resources (memory, CPU) but share them with the other running containers.

Continuous deployment and testing

Containers guarantee consistent environments from development to production, since they internally carry all their configuration and dependencies. Therefore, the same image can be used from the development environment through to production, ensuring that there are no discrepancies and no manual intervention is needed.
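One common way to guarantee that the very same image travels from development to production is to externalize the environment-specific settings. A minimal sketch, with illustrative service, registry and file names:

```yaml
# docker-compose.yml (fragment) -- same image in every environment,
# only the referenced env file changes (names are illustrative)
version: "3"
services:
  iot-broker:
    image: registry.example.com/onesaitplatform/iotbroker:2.1.0
    env_file:
      - ./config/${ENVIRONMENT}.env   # e.g. dev.env, pre.env or prod.env
```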

Multi-cloud platforms

This may well be one of the greatest benefits of containers. In recent years, all major cloud computing providers, including Amazon Web Services (AWS) and Google Cloud Platform (GCP), have embraced Docker and added individual support for it. Docker containers can run within an Amazon EC2 instance, a Google Compute Engine instance, a Rackspace server or VirtualBox, provided that the host operating system supports Docker. When this is the case, a container running within an Amazon EC2 instance can easily be ported between environments.


In addition, Docker works very well with other vendors such as Microsoft Azure and OpenStack, and can be used with different configuration managers such as Chef, Puppet and Ansible, etc.

Isolation and security

If a container malfunctions, that does not mean that the entire system it runs on becomes unresponsive or faulty: applications running in containers are completely segregated and isolated from each other, giving you full control over their flow and management. No container can see the processes running inside another container.

Containerization. Why Docker?

Container technology is not new. It was already implemented by Sun Microsystems in Solaris in the early 2000s, as a measure for resource isolation and application portability. However, this technology did not mature until the arrival of Linux Containers (LXC), built on cgroups and Linux namespaces, both natively incorporated into the kernel.

Then came CloudFoundry with its own LXC implementation called Warden, CoreOS with Rocket (rkt), and the community with Docker. All of them use cgroups and namespaces to limit and assign operating-system resources. Docker, however, implements another abstraction layer with its own library: libcontainer.

The choice of Docker over the rest is simple:

  • Widely supported by the Open Source community.

  • Widespread use.

  • It has the highest degree of maturity compared to the rest.

  • Extensive catalogue of applications or "base images" for the most popular products: MySQL, MariaDB, MongoDB, Maven, OpenJDK, etc.

  • Most of the container orchestrators support Docker as the de facto standard, for example: Swarm, Cattle, Kubernetes/Openshift, Portainer, Mesos, Nomad, etc.

  • For business environments, an Enterprise Edition (EE) is available.

Container Orchestration. Why Kubernetes?

If Docker can be considered the de facto standard in application containerization, Kubernetes could be considered the de facto standard in Docker container orchestration. Kubernetes is written in Go; it was initially developed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF).

In this case, the choice of Kubernetes over other orchestrators is based on:

  • Widely supported by the Open Source community.

  • Backed by more than 10 years of Google's experience running containers in production.

  • Fully integrated with Docker.

  • Based on open source.

  • Ready for production systems, allowing its own deployment in HA and guaranteeing the HA of the applications (pods1) it deploys.

  • Supports clusters of up to 5,000 nodes and up to 150,000 running pods.

  • Integrated graphic dashboard.

  • Offered as PaaS in several clouds: Amazon (EKS), Azure (AKS), Google (GKE), Oracle (OCI).

  • Implemented by Red Hat as Openshift, in Enterprise (OCP) and Community (OKD) editions.

  • Implemented and/or integrated by Rancher 2.

1 In Kubernetes, the minimum unit of deployment is the pod. A pod consists of one or more running containers that share storage and network.
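The footnote can be illustrated with a minimal Pod manifest: two containers that share the pod's network namespace and a volume. All names and images are illustrative:

```yaml
# example-pod.yaml -- two containers sharing storage and network
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  volumes:
    - name: shared-data
      emptyDir: {}            # ephemeral volume visible to both containers
  containers:
    - name: web               # serves the file the sidecar writes
      image: nginx:1.19
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: sidecar           # same localhost, same volume
      image: busybox:1.31
      command: ["sh", "-c", "echo hello > /data/index.html && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
```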

Deployment of onesait Platform on k8s. Why Helm?

To automate, distribute and deploy the onesait Platform in existing Kubernetes clusters, Helm has been chosen as the base technology for several reasons:

  • It is an official Kubernetes project, developed for Kubernetes and maintained by the Cloud Native Computing Foundation (CNCF).

  • It can be used in both Kubernetes and Openshift clusters.

  • It allows for the installation and uninstallation of applications as well as the management of their life cycle as packages or pieces of software.

  • With Helm, you can write Openshift Operators (in addition to writing them with Ansible or Go).

Helm allows complex applications to be packaged for deployment in Kubernetes by turning their files or manifests (Deployment, Service, PersistentVolumeClaim, Ingress, Secret, etc.) into Helm templates and packaging them in a Chart. Once packaged, Charts can be versioned and distributed on Chart servers (e.g. ChartMuseum) and installed in Kubernetes clusters.
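As a sketch of that templating, a Chart replaces the hard-coded fields of a manifest with references to values.yaml. The structure below is illustrative, not the actual onesait Platform Chart:

```yaml
# templates/deployment.yaml (fragment of a hypothetical chart)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-controlpanel
spec:
  replicas: {{ .Values.controlpanel.replicas }}
  selector:
    matchLabels:
      app: controlpanel
  template:
    metadata:
      labels:
        app: controlpanel
    spec:
      containers:
        - name: controlpanel
          # image and tag are resolved from values.yaml at install time
          image: "{{ .Values.controlpanel.image }}:{{ .Values.controlpanel.tag }}"
```

A per-environment values.yaml then drives `helm install` or `helm upgrade`, which render the templates and apply the result to the cluster.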

CaaS strategy

When choosing a CaaS to cover the entire life cycle of the onesait Platform, several considerations must be taken into account:

  1. In which area it will be used (solutions or projects).

  2. Type of cost and/or licensing.

  3. Support.

  4. No vendor lock-in.

Digital Solutions

For the company's different Solutions/Products, two CaaS strategies have been followed over the years: the initial one, which used dedicated virtual machines on which the Platform was deployed as containers orchestrated with Rancher 1.6; and the current one, to which the different Solutions will have to be progressively migrated, based on infrastructure reuse, where both the Platform and the Solutions are deployed in a single Openshift cluster.

In January 2019, onesait launched an RFP to choose the organization's reference container-deployment platform. The candidates included Microsoft with AKS, Red Hat with Openshift, and Rancher 2 with RKE. As a result of this RFP, Openshift was chosen as the technological basis.

Rancher 1.6 + Cattle - legacy environments

The initial deployment strategy of the organization's different Solutions and Disruptors, which use the onesait Platform as their technological base, relies on dedicated virtual machines where the containers of both the onesait Platform and the Solution that uses it are deployed. These containers are managed by Rancher 1.6, a completely Open Source CaaS with an orchestrator, Cattle, developed ad hoc for Rancher.

With this strategy, the VMs are provisioned on Azure's cloud platform with CentOS in development and preproduction environments, while production environments run RHEL. This way, you have Azure's support for the infrastructure and Red Hat's support for the operating system.

Once the infrastructure is available, Ansible manages the installation of the base software (Docker and additional operating-system packages), the installation of the CaaS (Rancher), and the deployment of the Platform. Ansible is an Open Source IaC (Infrastructure as Code) tool developed by Red Hat, which has been chosen over others such as Chef or Puppet for the following reasons:

  • Based on the principle of idempotence: repeated execution of a playbook does not alter the system once the desired state has been reached.

  • You do not need agents running on the machines you manage.

  • Gentle learning curve.
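The idempotence principle can be sketched with a minimal playbook: running it a second time changes nothing once the desired state is reached. Host group, package and service names are illustrative:

```yaml
# install-docker.yml -- declares desired state, not imperative steps
- hosts: workers
  become: yes
  tasks:
    - name: Ensure Docker is installed
      yum:
        name: docker-ce
        state: present      # no-op if the package is already installed
    - name: Ensure Docker is running and enabled
      service:
        name: docker
        state: started      # no-op if the service is already running
        enabled: yes
```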

Simplifying, installation with Ansible is done in the following steps:

  • Installation of the base software (Docker, docker-compose, additional packages).

  • Installation of the CaaS.

  • Platform deployment on Cattle with docker-compose.

  • Installation of orderly start/stop scripts for those environments where cost savings require shutting down the infrastructure at night and on weekends.

When deploying the platform on VMs in development environments, the proposed deployment is as follows:

  • CaaS environment (Rancher 1.6):

    • Single-Instance Deployment: 1 small VM (2 cores and 8 GiB) because a CaaS unavailability does not affect the operation.

  • Worker nodes (Platform):

    • Containerized deployment of all modules including persistence and reverse proxy or balancer.

    • Deployment in 1 VM according to project needs and modules to be deployed:

      • Basic environment: 1 VM with 4 cores and 16 GiB RAM and 256 GiB disk.

      • Typical environment: 1 VM with 8 cores and 32 GiB RAM and 512 GiB disk.

      • High load environment: 1 VM with 16 cores and 64 GiB RAM and 1 TiB disk.

In production environments, the platform deployment proposal contemplates:

  • Reverse Proxy / Load-Balancer:

    • You can use a Load-Balancer from the chosen cloud provider, the Platform's reverse proxy (NGINX), or a hardware balancer.

  • CaaS (Rancher) environment:

    • Depending on the required HA, it can be mounted as a Single-Instance or as a cluster (3 VMs) with an externalized database.

  • Worker nodes (Platform):

    • Containerized deployment of platform modules, not including persistence.

    • Deployment in at least 3 VMs according to project needs and modules to be deployed:

      • Basic environment: 2 VM with 4 cores and 16 GiB RAM and 256 GiB disk.

      • Typical environment: 2-3 VM with 8 cores and 32 GiB RAM and 512 GiB disk.

      • High load environment: 3 VM with 16 cores and 64 GiB RAM and 1 TiB disk.

Openshift

Openshift is Red Hat's container (pod) management platform, providing the tools needed to cover the entire lifecycle of a container.

Openshift is based on different Open Source components such as:

  • Kubernetes, as a container orchestrator.

  • Prometheus for metrics collection.

  • Grafana for displaying the different system metrics.

  • Quay as a Docker image registry.

  • RHCOS / RHEL.

Openshift's physical architecture, in terms of minimum HA machine infrastructure, requires three VMs for the master nodes and three VMs for the processing or worker nodes.

The choice of Openshift was based on several important points:

  • Support: both for the operating system of the nodes on which it is deployed, and for Openshift itself, including all the software components that Red Hat certifies and that can be used through Operators.

  • Support: all levels of support are offered by Red Hat.

  • Security: Openshift pays special attention to security requirements, such as not allowing pods to be deployed as the root user.

  • Training: official training and certifications for both the administration/operation side and the development side of Openshift.

  • CI/CD: Integrated into the Platform, either with native S2i (source to image) or with Red Hat Certified Jenkins Operators.

  • Subscription: A single subscription allows multiple installations in public or private clouds and even on premise.

Migration from Managed Solutions with dedicated VMs and Rancher 1.6 to Openshift will be done progressively and on demand.

Projects

For those clients/projects that do not have a defined CaaS strategy, lack the necessary knowledge of existing containerization/orchestration technologies or, for cost reasons, cannot afford an Openshift license, the following Platform implementation scenarios are proposed:

Rancher 2 + Kubernetes

The strategy to follow in projects that do not have a CaaS is to install one together with the onesait Platform. The evolution in this case has been to move from Rancher 1.6 as CaaS and Cattle as container orchestrator to Rancher 2 and Kubernetes. The change is due to the fact that Rancher 1.6 reaches EOL on June 30, 2020.

The main features of Rancher 2 are as follows:

  • It is 100% Open Source and free of charge (without support).

  • Supports Kubernetes up to version 1.17.x (depending on whether it is managed with RKE or imported).

  • Offers RKE as a very simple way to install Kubernetes clusters.

  • Integrates with existing Kubernetes clusters, either on cloud or on premise.

  • Integrates with clusters of Kubernetes provided as services by the different clouds (AKS, EKS, GKE, OCI).

  • Has integrated metrics and visualization with Prometheus and Grafana.

  • Has integrated CI/CD.

  • Comprehensive tooling catalogue integrated and packaged with Helm.

  • Offers optional support of two types: Standard and Platinum with 4 severity levels each, 8x5 and 24x7.

As with Rancher 1.6, the installation of both the base software and the CaaS, and the deployment of the Platform, are managed with Ansible in the following way:

  • Installation of the base software (Docker, docker-compose, additional packages).

  • CaaS installation and Kubernetes deployment with RKE.

  • Deployment of the Platform in Kubernetes with Helm.

  • Installation of orderly start/stop scripts for those environments where cost savings require shutting down the infrastructure at night and on weekends.
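The RKE step above is driven by a declarative cluster.yml listing the nodes and their roles; a minimal sketch, with hypothetical addresses and SSH user:

```yaml
# cluster.yml -- consumed by `rke up` to deploy a Kubernetes cluster
# (node addresses, user and cluster name are hypothetical)
cluster_name: onesait-dev
nodes:
  - address: 10.0.0.10
    user: ansible
    role: [controlplane, etcd]
  - address: 10.0.0.11
    user: ansible
    role: [worker]
  - address: 10.0.0.12
    user: ansible
    role: [worker]
```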

The recommended infrastructure for development and production does not vary from that detailed above for the Solutions:

For development environments:

  • CaaS environment (Rancher 2 + etcd + Control Plane):

    • Single-Instance deployment: 1 small VM (4 cores, 8 GiB memory and 256 GiB disk), since a CaaS unavailability does not affect the operation.

  • Worker nodes (onesait Platform):

    • Containerized deployment of all modules, including persistence and the reverse proxy or balancer, exposed via Ingress.

    • Deployment in 1 VM according to project needs and modules to be deployed:

      • Basic environment: 1 VM with 4 cores and 16 GiB RAM and 256 GiB disk.

      • Typical environment: 1 VM with 8 cores and 32 GiB RAM and 512 GiB disk.

      • High load environment: 1 VM with 16 cores and 64 GiB RAM and 1 TiB disk.
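The Ingress exposure mentioned above can be sketched as follows (host, service name and port are illustrative; the API version corresponds to the Kubernetes releases of the time):

```yaml
# platform-ingress.yaml -- routes external traffic to a platform module
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: platform-ingress
spec:
  rules:
    - host: platform.example.com        # illustrative hostname
      http:
        paths:
          - path: /controlpanel
            backend:
              serviceName: controlpanel-service
              servicePort: 8080
```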

For production environments:

  • Reverse Proxy / Load-Balancer:

    • You can use either a Load-Balancer from the chosen cloud provider, or the Platform Reverse Proxy (NGINX) or an HW balancer.

  • CaaS environment (Rancher 2 + etcd + Control Plane):

    • Depending on the required HA, it can be mounted as a Single-Instance or as a cluster (3 VMs) with the externalized etcd database.

  • Worker nodes (onesait Platform):

    • Containerized deployment of platform modules not including persistence.

    • Deployment in at least 3 VMs according to project needs and modules to be deployed:

      • Basic environment: 2 VM with 4 cores and 16 GiB RAM and 256 GiB disk.

      • Typical environment: 2 or 3 VM with 8 cores and 32 GiB RAM and 512 GiB disk.

      • High load environment: 3 VM with 16 cores and 64 GiB RAM and 1 TiB disk.

Rancher 2 + Kubernetes as a service (AKS/EKS/GKE)

For those clients/projects that already have a Kubernetes cluster, either on premise or as a cloud service, it is recommended to set up Rancher 2 on an additional VM, separate from the cluster, as a centralized deployment and monitoring console.

This Rancher 2 node will import the existing cluster, where the namespaces and the different workloads can be viewed. It will also be possible to create new deployments and to start, stop and scale them up or down, etc.


Rancher 1.6 + Cattle - legacy environments

The approach already detailed for Solutions with Rancher 1.6 applies.

IaaS Strategy

Brief introduction to Infrastructure as a Service

Infrastructure-as-a-Service (IaaS) is a cloud service model in which a provider hosts the computing resources that customers consume. It is one of four types of cloud services, along with Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Serverless.

Enterprises use the IaaS model to move part or all of their local infrastructure to the cloud, which is owned and managed by a provider.

The elements of the infrastructure can be computer hardware (CPU, RAM), network and storage hardware, etc.

In the IaaS model, the cloud provider owns and operates both the hardware and the software, and also owns the data centres where the resources used are hosted.

IaaS VS Traditional Infrastructure Model

In a traditional scenario, a company manages and maintains its own data center. It must also invest in servers, storage, software and other technologies, and hire qualified staff to purchase, manage and update all the equipment and licenses. The data center must be built to meet peak demand, even though workloads sometimes decrease and those resources sit idle. In the face of rapid business growth, IT may find it difficult to keep up.

In a typical IaaS model, an enterprise consumes services such as CPU, RAM, storage and databases from a cloud provider. It will no longer need to buy and manage its own equipment, nor will it need space in a data center to "rack" the equipment. The cost of infrastructure will shift to a pay-as-you-go model with easy and quickly executable scaling.

Why IaaS?

IaaS provides four main benefits that enable companies to move faster and achieve their digital transformation goals.

  1. It reduces the time and cost, and improves the provisioning and scaling, of development, test and production environments. This gives developers and DevOps teams more freedom to experiment and innovate.

  2. By making services and resources available on demand, IaaS allows companies to scale their infrastructure by increasing or decreasing resources as needed, paying only for what they use per hour, day or month.

  3. IaaS is available in most countries, with a regional presence near large population centers, allowing companies to increase their presence in those geographies more quickly.

  4. It can provide access to new and updated equipment and services, such as the latest processors, storage hardware, network security features, container orchestration, etc.

The following advantages are derived from those benefits:

  1. Eliminates the initial infrastructure expenditure and reduces the cost over its lifetime. The initial cost of configuring and managing the infrastructure in a Data Center is avoided, making it an economic option.

  2. Improves business continuity and disaster recovery. Achieving high availability, business continuity and disaster recovery is expensive and complex, as it requires a significant amount of infrastructure and qualified personnel. IaaS reduces this cost and eases access to applications and data during an incident or service interruption.

  3. Rapid innovation. The infrastructure needed to launch a new product or initiative can be fully up and running in hours or even minutes. In a traditional model, that would require days, weeks or months.

  4. Resilience to variations in resource demand. Resources can be scaled up quickly to meet peaks in demand, then scaled back down when activity decreases.

  5. Allows you to focus on your business. The team focuses on business and functionality rather than on IT infrastructure.

  6. Increases stability, reliability and capacity. There is no need to maintain and update software and hardware or troubleshoot equipment. The service provider ensures that its infrastructure is reliable and meets established service level agreements (SLAs).

  7. Improved security. The Cloud provider provides a layer of security for applications and data in addition to the one already used by the company itself.

  8. Applications in production at greater speed. It is not necessary to have the infrastructure in place before developing and delivering applications.

Competitive Dialogue

We are currently analyzing the proposals of different candidates to select the best Cloud Service Provider partner to help us in the evolution of the Onesait product suite.

The candidates are Amazon, IBM, Azure, OVH and Google.

The objective of this study is to identify the Cloud Provider that best meets our needs. These can be summarized as follows:

  • A solution for the implementation of all Minsait products.

  • Reduce current operating costs of existing platforms.

  • Improve performance and reduce over-sizing of the various IaaS.

  • Eliminate duplication of hardware and software and unify technologies.

  • Use PaaS services as opposed to traditional IaaS.