Component View
This diagram shows the components that make up the platform and support its functionality.
(*) The UI symbol in the diagram indicates that a given module has a configuration user interface integrated into the Control Panel.
These modules are organized into the following layers:
Acquisition Layer: provides the mechanisms for capturing data from the Acquisition Systems, either receiving it actively from those systems or connecting to them. It abstracts the information from the Acquisition Systems under a standard semantic approach (Entities/Ontologies).
Knowledge Layer: provides support for processing the data, adding value, and transforming it into services. It receives data from both the Acquisition Layer and the Publication Layer. This layer contains the functionality for processing and analyzing the data in order to generate new datasets or to modify/complete existing ones.
Publication Layer: facilitates building services on top of the information managed by the platform, offering interfaces over the Knowledge Layer, enforcing security policies, and providing connectors so that external systems can access the Platform.
Management & Support Layer: this cross-cutting layer supports all the other functionality, offering services such as auditing, monitoring and security, as well as the web console and the REST APIs for developing on the platform.
The following sections describe each of these layers in detail.
Acquisition Layer
IoT Broker: this Broker allows devices, systems, applications, websites and mobile applications to communicate with the platform through the supported protocols. It also offers client APIs in several languages.
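For instance, a device could publish telemetry over MQTT, one of the protocols the Digital Broker supports (see the technology table below). A minimal sketch using the Eclipse Paho client; the broker URL, topic and credentials are placeholders, not actual platform values:

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class DevicePublisher {
    public static void main(String[] args) throws Exception {
        // Hypothetical broker endpoint; the real host, port and auth scheme
        // depend on the platform deployment.
        MqttClient client = new MqttClient("tcp://platform.example.com:1883",
                MqttClient.generateClientId());
        MqttConnectOptions opts = new MqttConnectOptions();
        opts.setUserName("device-1");                   // placeholder credentials
        opts.setPassword("device-token".toCharArray());
        client.connect(opts);

        // Publish one telemetry message as JSON on a placeholder topic.
        MqttMessage msg = new MqttMessage("{\"temperature\": 21.5}".getBytes());
        msg.setQos(1);
        client.publish("telemetry/device-1", msg);
        client.disconnect();
    }
}
```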
Kafka Server: the platform integrates a Kafka cluster that enables communication with systems that use this exchange protocol, typically because they handle large volumes of information and require low latency.
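As a sketch, a system could push high-volume events into the cluster with the standard Kafka Java client; the bootstrap address and topic name are assumptions:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class EventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder bootstrap address; use the one exposed by the platform.
        props.put("bootstrap.servers", "kafka.example.com:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Send one JSON event to a hypothetical topic.
            producer.send(new ProducerRecord<>("telemetry", "sensor-1",
                    "{\"temperature\": 21.5}"));
        }
    }
}
```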
DataFlow: this component allows configuring data flows from a web interface. These flows are composed of an origin (which can be files, databases, TCP services, HTTP, queues, ... or the platform's IoT Broker), one or more transformations (processors in Python, Groovy, JavaScript, ...), and one or more destinations (same options as the origin).
Digital Twin Broker: this Broker allows the Digital Twins to communicate with the platform, and with each other. It supports REST and WebSockets as protocols.
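A digital twin could, for example, keep a WebSocket session open to receive commands and report its state. A minimal sketch with the JDK's built-in java.net.http.WebSocket client; the endpoint URL and the message payload are assumptions:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.util.concurrent.CompletionStage;

public class TwinConnection {
    public static void main(String[] args) {
        WebSocket ws = HttpClient.newHttpClient()
                .newWebSocketBuilder()
                // Hypothetical endpoint; the real path depends on the deployment.
                .buildAsync(URI.create("wss://platform.example.com/digitaltwinbroker"),
                        new WebSocket.Listener() {
                            @Override
                            public CompletionStage<?> onText(WebSocket webSocket,
                                    CharSequence data, boolean last) {
                                System.out.println("Message from platform: " + data);
                                return WebSocket.Listener.super.onText(webSocket, data, last);
                            }
                        })
                .join();

        // Report the twin's current state (illustrative payload).
        ws.sendText("{\"status\": \"OPERATING\"}", true).join();
    }
}
```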
Video Broker: allows connecting to cameras through the WebRTC protocol and processing the video stream by associating it with an algorithm (people detection, OCR, etc.).
Knowledge Layer
Semantic Information Broker: once acquired, the information reaches this module, which first validates whether the Broker client has permission to perform the requested operation (insert, query, ...), and then gives semantic content to the received information, validating whether it conforms to that semantics (ontology).
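The ontology check is conceptually similar to validating a JSON document against a JSON Schema. A sketch of that idea using the everit-org json-schema library (the schema and payloads are invented for illustration; the Broker's actual validation mechanism may differ):

```java
import org.everit.json.schema.Schema;
import org.everit.json.schema.ValidationException;
import org.everit.json.schema.loader.SchemaLoader;
import org.json.JSONObject;

public class OntologyValidationSketch {
    public static void main(String[] args) {
        // Toy "ontology" expressed as a JSON Schema.
        Schema schema = SchemaLoader.load(new JSONObject(
                "{\"type\":\"object\","
              + "\"properties\":{\"temperature\":{\"type\":\"number\"}},"
              + "\"required\":[\"temperature\"]}"));
        try {
            // Conforms to the schema: accepted.
            schema.validate(new JSONObject("{\"temperature\": 21.5}"));
            // Missing required field: rejected.
            schema.validate(new JSONObject("{\"humidity\": 40}"));
        } catch (ValidationException e) {
            System.out.println("Rejected: " + e.getMessage());
        }
    }
}
```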
Semantic Data Hub: this module acts as a persistence hub. Through the Query Engine, it allows persisting to and querying the underlying database where the ontology is stored; the compatible stores include MongoDB, Elasticsearch, relational databases, graph databases, etc.
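Since MongoDB is the reference online store (see the technology table below), the underlying persistence could be queried roughly as follows with the MongoDB Java driver; the connection string, database, collection and field names are placeholders:

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;
import static com.mongodb.client.model.Filters.gt;

public class HubQuerySketch {
    public static void main(String[] args) {
        // Placeholder connection string, database and collection names.
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> col = client
                    .getDatabase("platform_rtdb")
                    .getCollection("Temperature");
            // Find readings above a threshold on a hypothetical field.
            for (Document d : col.find(gt("value", 25.0))) {
                System.out.println(d.toJson());
            }
        }
    }
}
```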
Streaming engines: this capability is provided by:
Flow Engine: this engine allows creating process flows visually and easily. It is built on Node-RED, and a separate instance is created for each user.
Digital Twin Orchestrator: the platform allows communication between Digital Twins to be orchestrated visually through the Flow Engine itself. This orchestration creates a bidirectional communication with the digital twins.
Rule Engine: allows defining business rules from a web interface; these rules can be applied as data arrives or run on a schedule.
SQL Streaming Engine: allows defining complex processing sequences over the data as it arrives, using an SQL-like language.
Data Grid: this internal component acts as a distributed cache and as an internal communication queue between the modules.
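The technology table below names Hazelcast for this role; here is a sketch of both usages, a shared map as a cache and a queue for inter-module messaging (the map and queue names are invented):

```java
import com.hazelcast.collection.IQueue;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

public class DataGridSketch {
    public static void main(String[] args) throws InterruptedException {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // Distributed cache: entries are visible to every member of the grid.
        IMap<String, String> cache = hz.getMap("device-state"); // placeholder name
        cache.put("sensor-1", "{\"temperature\": 21.5}");

        // Internal queue: one module offers, another module takes.
        IQueue<String> queue = hz.getQueue("module-events");    // placeholder name
        queue.offer("sensor-1-updated");
        System.out.println("Dequeued: " + queue.take());

        hz.shutdown();
    }
}
```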
Notebooks: this module offers a multi-language web interface so that the Data Science team can easily create models and algorithms in their favorite languages (Spark, Python, R, SQL, TensorFlow, ...).
Publication Layer
API Manager: this module allows visually creating APIs on the ontologies managed by the platform. It also offers an API Portal for consuming the APIs and an API Gateway for invoking them.
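An external client would then invoke a published API through the Gateway. A sketch with the JDK HTTP client; the URL path and the API-key header name are assumptions, not the platform's documented contract:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ApiClientSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                // Hypothetical gateway URL for an API published on an ontology.
                .uri(URI.create("https://platform.example.com/api-manager/api/v1/Temperature"))
                .header("X-API-Key", "<api-key>") // placeholder header name and value
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```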
Dashboard Engine: this engine allows creating, visually and without any programming, complete dashboards on the information (the ontologies stored on the platform), and then making them available for consumption inside or outside the platform.
Management Layer
Control Panel: the platform offers a complete web console for visually managing the platform's elements. All this configuration is stored in a configuration database. It also offers a REST API to manage the same concepts and a monitoring console that shows the status of each module.
Identity Manager: allows defining how users and their roles are authenticated and authorized, including the user directory (LDAP, ...) and the protocols used (OAuth2, ...).
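With OAuth2 in place, a client application would first obtain a token and then present it on each call. A sketch of a client-credentials token request; the token endpoint and client values are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TokenRequestSketch {
    public static void main(String[] args) throws Exception {
        String form = "grant_type=client_credentials"
                + "&client_id=my-app"            // placeholder client
                + "&client_secret=my-secret";    // placeholder secret

        HttpRequest request = HttpRequest.newBuilder()
                // Hypothetical token endpoint; the real path depends on the
                // Identity Manager configuration (Spring Security or Keycloak).
                .uri(URI.create("https://platform.example.com/oauth/token"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // JSON containing the access_token
    }
}
```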
CaaS Console: allows administering all the deployed modules (Docker containers orchestrated by Kubernetes) from a web console, including version upgrades and rollbacks, the number of containers, the scalability rules, etc.
Support Layer
MarketPlace: allows registering the assets generated on the platform (APIs, dashboards, algorithms, models, rules, ...) and publishing them so that other users can use them.
GIS Viewers: from the console you can create GIS layers (from ontologies, WMS services, KML, images) and GIS viewers (currently based on Cesium technology).
File Manager: this utility allows uploading and managing files from the web console or through the REST API. These files are governed by the platform's security.
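An upload through the REST API could look roughly like this; the endpoint path, the use of a raw-bytes body (a real deployment may require multipart), and the auth header are all assumptions:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

public class FileUploadSketch {
    public static void main(String[] args) throws Exception {
        HttpRequest upload = HttpRequest.newBuilder()
                // Hypothetical endpoint; a real deployment may instead expect
                // a multipart/form-data body.
                .uri(URI.create("https://platform.example.com/files"))
                .header("Authorization", "Bearer <token>") // placeholder token
                .header("Content-Type", "application/octet-stream")
                .POST(HttpRequest.BodyPublishers.ofFile(Path.of("report.csv")))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(upload, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
```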
Web Application Server: the platform can serve web applications (HTML + JS) uploaded through the platform's web console.
Configuration Manager: this utility allows managing the configurations (in YAML format) of the platform applications per environment.
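For illustration, an application could read one of these per-environment YAML configurations with SnakeYAML; the file name and keys are invented:

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;
import org.yaml.snakeyaml.Yaml;

public class ConfigLoadSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder file, fetched beforehand from the Configuration Manager.
        try (InputStream in = Files.newInputStream(Path.of("my-app-dev.yml"))) {
            Map<String, Object> config = new Yaml().load(in);
            // Hypothetical keys; real configurations define their own structure.
            System.out.println("database.url = "
                    + ((Map<?, ?>) config.get("database")).get("url"));
        }
    }
}
```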
Main Base Technologies
MODULE | TECHNOLOGY |
---|---|
Base Technology | Java >= 8, Spring Boot 3.X |
Control Panel | Spring Boot over Thymeleaf. ConfigDB on MariaDB, PostgreSQL, ... |
Semantic Data Hub | MongoDB as reference implementation for online storage. MinIO + Presto as reference implementation for historical and analytics storage. Relational databases supported. Elasticsearch, TimescaleDB, CosmosDB, DocumentDB, ... supported. |
DataFlow | StreamSets (and some components) integrated into the Platform |
Flow Engine | Node-RED: configuration and development on Node-RED (components, multitenancy, ...) |
Digital Broker | Spring Boot development. Kafka for high-performance streaming. MQTT (Moquette) for bidirectional communication. WebSockets for web communication. |
API Manager | Developed on Spring Boot. Integration with Gravitee |
Dashboard Engine | Angular + Gridster as the engine. ODS as the reference component library. eCharts as the gadget library. |
Notebooks | Apache Zeppelin (including interpreters) |
DataGrid & Cache | Hazelcast |
Identity Manager | Reference implementation: built over Spring Cloud Security. Advanced implementation: integration with Keycloak |
BPM Engine | Camunda |
Deployment | Modules containerized on Docker, orchestrated by Kubernetes, managed by a CaaS (Rancher or OpenShift) |