Typical Onesait Platform Sizing


Introduction

The goal of this post is to show some typical platform sizing based on various scenarios.

These recommendations should be validated with the Product team for each project's specific scenario.

 

A foundational objective of the platform is to have a Zero Lock-in strategy with vendors, so that the platform can be deployed in all Clouds and also On Premise.

To fulfil this strategy, the platform proposes a deployment based on Docker containers orchestrated by Kubernetes, so that they can be deployed in a vendor-independent way, and integrates a CaaS platform (such as Rancher or OpenShift) that allows simple, visual management of these containers.

Additionally, the platform can use specific SaaS services from some vendors and be deployed on top of an existing Kubernetes cluster (e.g. AKS), always managed from the platform CaaS to achieve vendor independence.

Deployment for VM-based DevOps Environments

When deploying the platform on VMs in Development Environments, the platform deployment proposal looks like this:

 

  • CaaS environment (Rancher):

    • Single-Instance (1 VM) deployment, since CaaS unavailability does not affect platform operation.

    • VM characteristics:

      • 4 cores and 16 GB RAM.

      • 512 GB HDD mounted on /datadrive.

      • XFS file system.

      • Linux 64-bit OS: CentOS 7.X (currently 7.8) / Rocky Linux 8.X (currently 8.4) / RHEL >8.2 / Ubuntu 20.X.

    • Access:

      • SSH access with a user with sudo permissions.

      • Connectivity with the Platform VMs: 8080/tcp, 500/udp, 4500/udp.

  • Platform:

    • Containerised deployment of all modules including persistence and balancer.

    • Deployment in 1 VM according to project needs and modules to be deployed:

      • Basic environment: 1 VM with 4 cores, 16 GB RAM and a 256 GB XFS disk.

      • Typical environment: 1 VM with 8 cores, 32 GB RAM and a 512 GB XFS disk.

      • High load environment: 1 VM with 16 cores, 64 GB RAM and a 1 TB XFS disk.

    • Additional requirements:

      • SSH access with a user with sudo permissions.

      • Connectivity with CaaS VMs: 8080/tcp, 500/udp, 4500/udp

      • Port accessible to the vnet: 443 (Platform web console).
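The VM prerequisites listed above (cores, RAM, disk, XFS file system and reachable ports) can be validated automatically before installing. A minimal sketch in Python, where the thresholds are taken from the basic environment above and the function name is illustrative, not a platform tool:

```python
# Illustrative prerequisite check for a platform VM (basic environment).
# Thresholds follow the sizing above; names are hypothetical.

MIN_CORES = 4
MIN_RAM_GB = 16
MIN_DISK_GB = 256
REQUIRED_PORTS = ["8080/tcp", "500/udp", "4500/udp", "443/tcp"]

def check_vm(spec: dict) -> list:
    """Return a list of unmet requirements for the given VM spec."""
    problems = []
    if spec.get("cores", 0) < MIN_CORES:
        problems.append(f"needs >= {MIN_CORES} cores")
    if spec.get("ram_gb", 0) < MIN_RAM_GB:
        problems.append(f"needs >= {MIN_RAM_GB} GB RAM")
    if spec.get("disk_gb", 0) < MIN_DISK_GB:
        problems.append(f"needs >= {MIN_DISK_GB} GB disk")
    if spec.get("fs") != "xfs":
        problems.append("file system must be XFS")
    missing = set(REQUIRED_PORTS) - set(spec.get("open_ports", []))
    if missing:
        problems.append(f"open ports missing: {sorted(missing)}")
    return problems

# Example: a VM matching the basic environment passes the check.
ok_vm = {"cores": 4, "ram_gb": 16, "disk_gb": 256, "fs": "xfs",
         "open_ports": REQUIRED_PORTS}
print(check_vm(ok_vm))  # -> []
```

A check like this only inspects a declared spec; actual port reachability between the CaaS and Platform VMs still has to be verified on the network itself.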

 

Deployment for VM-based Productive Environments

In Productive Environments, the platform deployment proposal includes:

 

  • Load-Balancer:

    • You can use the load balancer of the chosen Cloud, the Platform's own (NGINX), or a hardware balancer.

  • CaaS environment (Rancher):

    • Depending on the required HA, it can be deployed as a Single Instance or as a cluster (3 VMs).

    • VMs:

      • 4 cores and 16 GB RAM.

      • 512 GB HDD mounted on /datadrive.

      • XFS file system.

      • Linux 64-bit OS: CentOS 7.X (currently 7.8) / Rocky Linux 8.X (currently 8.4) / RHEL >8.2 / Ubuntu 20.X (currently 20.04).

    • Additional requirements:

      • SSH access with a user with sudo permissions.

      • Connectivity with the Platform VMs: 8080/tcp, 500/udp, 4500/udp.

  • Platform:

    • Containerised deployment of platform modules not including persistence.

    • Deployment in at least 3 VMs according to project needs and modules to be deployed:

      • Basic environment: 2 VMs with 4 cores, 16 GB RAM and a 256 GB disk.

      • Typical environment: 2/3 VMs with 8 cores, 32 GB RAM and a 512 GB disk.

      • High load environment: 3 VMs with 16 cores, 64 GB RAM and a 1 TB disk.

    • Additional requirements:

      • SSH access with user with sudo permissions.

      • Connectivity with the CaaS VMs: 8080/tcp, 500/udp, 4500/udp.

      • Port accessible to the vnet: 443 (Platform web console).
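The productive sizing tiers above can be captured in a small lookup table to estimate total cluster capacity. A hedged Python sketch, where the figures are copied from the list above (for the typical tier, which uses 2 or 3 VMs, 3 is assumed) and the names are illustrative:

```python
# Sizing tiers for VM-based productive environments, per the list above.
# The structure and function are illustrative, not an official platform tool.
# "typical" may use 2 or 3 VMs; 3 is assumed here.

SIZING = {
    "basic":   {"vms": 2, "cores": 4,  "ram_gb": 16, "disk_gb": 256},
    "typical": {"vms": 3, "cores": 8,  "ram_gb": 32, "disk_gb": 512},
    "high":    {"vms": 3, "cores": 16, "ram_gb": 64, "disk_gb": 1024},
}

def total_capacity(tier: str) -> dict:
    """Aggregate cores/RAM/disk across all VMs of a sizing tier."""
    s = SIZING[tier]
    return {
        "cores": s["vms"] * s["cores"],
        "ram_gb": s["vms"] * s["ram_gb"],
        "disk_gb": s["vms"] * s["disk_gb"],
    }

print(total_capacity("typical"))  # -> {'cores': 24, 'ram_gb': 96, 'disk_gb': 1536}
```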

When to deploy Rancher CaaS as a Single Instance?

When Rancher is deployed without HA (in other words, with the Kubernetes cluster without HA), the behaviour is as follows:

  • The services deployed on the Workers continue to run even if the Kubernetes Master is down.

  • The web balancers (ingress in Kubernetes nomenclature) that are created in Kubernetes also continue to work.

  • What you lose if the Kubernetes Master goes down:

    • You cannot make deployments or configuration changes to deployed modules.

    • Besides, if one of the Workers goes down, its services will not be automatically restarted on another Worker; this only affects non-replicated services, since the critical services are replicated.

    • All of this becomes operational again when the Kubernetes Master is back up.

Deployment on a Kubernetes Cluster (such as Azure AKS or AWS EKS)

When we deploy on a Kubernetes Cluster like AKS or EKS, the Platform modules are deployed as PODs, so you can give each one its replication factor.
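As an illustration, the replication factor of a module is set in its Kubernetes Deployment. A minimal hedged example, where the module name, labels and image are hypothetical:

```yaml
# Illustrative Kubernetes Deployment; module name and image are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-module
spec:
  replicas: 3            # replication factor for this module's pods
  selector:
    matchLabels:
      app: example-module
  template:
    metadata:
      labels:
        app: example-module
    spec:
      containers:
        - name: example-module
          image: registry.example.com/example-module:latest
          ports:
            - containerPort: 8080
```

On a managed cluster such as AKS or EKS, the scheduler spreads these replicas across the available nodes, so a single node failure does not take the module down.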
