Platform's High Availability



The Platform is designed to operate under high-volume workloads, providing a stable service and minimizing downtime.

To do this, and as previously described, the Platform has a horizontally scalable, modular design in which a cluster of servers responds to incoming requests. This cluster not only delivers optimal performance through proper deployment sizing, but also allows the nodes of each component to back each other up, providing the high availability required in projects of this nature.

The following graph shows the typical availability percentages achieved depending on the infrastructure deployed to handle the requests. The Platform supports all of these configurations, so the infrastructure can be sized to provide the required availability level.



Moreover, since the deployment is elastic and based on Docker and Kubernetes, the Platform automatically reacts when it detects interruptions or degraded performance in the ingestion and processing modules, dynamically adding new nodes to the cluster; the proxy then begins redirecting requests to them.
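In Kubernetes terms, this elastic behaviour can be sketched with a HorizontalPodAutoscaler that adds replicas when load rises. The module name (`ingestion-module`) and the thresholds below are illustrative assumptions, not the Platform's actual configuration:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ingestion-module          # hypothetical module name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ingestion-module        # the deployment to scale
  minReplicas: 2                  # keep at least two nodes for availability
  maxReplicas: 10                 # upper bound for elastic growth
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75  # add replicas above 75% average CPU
```

New replicas registered this way are picked up by the cluster's service routing, which is what allows the proxy to start sending requests to them.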

This way, the robustness, high availability and scalability of the Platform are guaranteed at all levels:

  • Massive and constant intake of information.

  • Real-time processing capacity.

  • Storage and batch processing of information.

Component status monitoring

Deploying the Platform on containers managed by a CaaS improves its High Availability, since the CaaS itself is responsible for checking the health of the containers (the Platform's services) and ensuring that they remain active.

CaaS itself offers active and constant monitoring of the platform modules that require it, thus ensuring their high availability. This monitoring is configurable depending on the protocol of the service to be monitored.

For TCP:
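In a Kubernetes-based deployment, a TCP healthcheck can be declared as a liveness probe on the container. The probe succeeds if a TCP connection to the port can be opened. The port and timing values below are illustrative assumptions:

```yaml
livenessProbe:
  tcpSocket:
    port: 9092            # hypothetical service port to check
  initialDelaySeconds: 30 # grace period before the first check
  periodSeconds: 10       # interval between heartbeats
  failureThreshold: 3     # consecutive failures before the container is restarted
```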

For HTTP:
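For HTTP services, the equivalent liveness probe invokes an endpoint and considers the check successful if the response code is in the 2xx–3xx range. The path and values below are illustrative assumptions:

```yaml
livenessProbe:
  httpGet:
    path: /health         # hypothetical health endpoint
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
  failureThreshold: 3     # consecutive failures before the container is restarted
```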

Once the healthchecks are configured per module, the CaaS performs the monitoring, constantly checking each service's health by issuing requests or heartbeats that verify whether the port is listening (TCP) or whether invoking the endpoint returns the expected status code (HTTP).

If a check fails, the CaaS orchestrator is responsible for redeploying the service to ensure it is not lost, and it can take two main types of action:

  • Redeploy the entire service.

  • Redeploy the service gradually, while at least a parameterizable number of containers remain in a correct state.
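In Kubernetes terms, the second behaviour can be approximated with a rolling-update strategy, where `maxUnavailable` parameterizes how many containers may be out of a correct state at once during a redeploy. The values below are an illustrative sketch, not the Platform's actual settings:

```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 1   # at most one replica may be down during the redeploy
    maxSurge: 1         # one extra replica may be created temporarily
```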