
Components View

This diagram represents the components that make up the platform and support its functionality.



(*) The UI symbol in the diagram above indicates that the module has a configuration user interface integrated into the Control Panel.

These modules are organized into the following layers:

  • Acquisition Layer: provides the mechanisms for capturing data from the Capture Systems, either by receiving it from those systems or by actively connecting to them. It abstracts the information from the Capture Systems with a standard semantic approach (Entities/Ontologies).

  • Knowledge Layer: provides support for data processing, value addition and service transformation. It receives data from both the Acquisition Layer and the Publication Layer. This layer contains the functionalities that enable data processing and analysis to generate new datasets or modify/complete existing ones.

  • Publication Layer: facilitates the construction of services from the information managed by the platform, offering interfaces on top of the Knowledge Layer, establishing security policies, and offering connectors so that external systems can access the platform.

  • Management & Support Layer: this transversal layer supports the rest of the functionalities by offering services such as auditing, monitoring, security, etc., in addition to the web console and REST APIS for development on the platform.

Next, we will look at each of these layers in detail.

Acquisition Layer

  • Digital Broker: this Broker allows devices, systems, applications, websites and mobile applications to communicate with the platform through compatible protocols. It also offers client APIs in different languages (a connection sketch follows this list).

  • Kafka Server: the platform integrates a Kafka cluster that allows communication with systems that use this protocol, generally because they handle a large volume of information and require low latency (a producer sketch follows this list).

  • DataFlow: this component allows configuring data flows from a web interface. These flows are composed of an origin (files, databases, TCP services, HTTP, queues, ... or the platform's IoT Broker), one or more transformations (processors in Python, Groovy, JavaScript, ...) and one or more destinations (with the same options as the origin).

  • Digital Twin Broker: this Broker allows the Digital Twins to communicate with the platform, and with each other. It supports REST and WebSockets as protocols (a WebSocket client sketch follows this list).

  • Video Broker: allows connecting to cameras through the WebRTC protocol and processing the video stream by associating it with an algorithm (people detection, OCR, etc.).
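
As an illustration of how a device might talk to the Digital Broker, the following sketch publishes a JSON measurement over MQTT with the Eclipse Paho client (MQTT is among the broker protocols listed in the technologies table below). The host, port, topic and credentials are hypothetical placeholders, not the platform's actual values.

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class DeviceToBrokerSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical broker URL; replace with your deployment's values.
        MqttClient client = new MqttClient("tcp://platform-host:1883", MqttClient.generateClientId());
        MqttConnectOptions options = new MqttConnectOptions();
        options.setUserName("device-user");                 // placeholder credentials
        options.setPassword("device-pass".toCharArray());
        client.connect(options);

        // Publish one JSON measurement to a hypothetical ingestion topic.
        MqttMessage message = new MqttMessage("{\"temperature\": 21.5}".getBytes());
        client.publish("platform/ingestion/TemperatureSensor", message);
        client.disconnect();
    }
}
```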
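
For the Kafka path, a minimal producer sketch in Java; the bootstrap address and topic name are assumptions for illustration only.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KafkaIngestSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Hypothetical cluster address exposed by the platform.
        props.put("bootstrap.servers", "platform-kafka:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Send one JSON record to a hypothetical ingestion topic.
            producer.send(new ProducerRecord<>("ontology-TemperatureSensor", "sensor-1",
                    "{\"temperature\": 21.5}"));
        }
    }
}
```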
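
And for the Digital Twin Broker, a sketch of a WebSocket client using the standard java.net.http API (Java 11+); the endpoint URL and message payload are hypothetical.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.util.concurrent.CompletionStage;

public class TwinSocketSketch {
    public static void main(String[] args) {
        WebSocket ws = HttpClient.newHttpClient().newWebSocketBuilder()
                .buildAsync(URI.create("ws://platform-host/digital-twin-broker"), new WebSocket.Listener() {
                    @Override
                    public CompletionStage<?> onText(WebSocket webSocket, CharSequence data, boolean last) {
                        // Messages pushed by the platform to the twin arrive here.
                        System.out.println("Received: " + data);
                        return WebSocket.Listener.super.onText(webSocket, data, last);
                    }
                })
                .join();
        // Report the twin's state to the platform (hypothetical payload).
        ws.sendText("{\"status\": \"online\"}", true).join();
    }
}
```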

Knowledge Layer

  • Semantic Information Broker: once acquired, the information reaches this module, which first validates whether the Broker client has permission to perform the requested operation (insert, query, ...), and then gives semantic content to the information received, validating that the data sent conforms to that semantics (ontology). A validation sketch follows this list.

  • Semantic Data Hub: this module acts as a persistence hub. Through the Query Engine, it allows persisting to and querying the underlying database where each ontology is stored; these components are compatible with MongoDB, Elasticsearch, relational databases, graph databases, etc. A query sketch follows this list.

  • Streaming Engines: supported by the following engines:

    • Flow Engine: this engine allows creating process flows visually and easily. It is built on Node-RED, and a separate instance is created for each user.

    • Digital Twin Orchestrator: the platform allows communication between Digital Twins to be orchestrated visually through the Flow Engine itself. This orchestration creates a bidirectional communication with the Digital Twins.

    • Rule Engine: allows defining business rules from a web interface; these rules can be applied on data ingestion or run on a schedule (a sketch follows this list).

    • SQL Streaming Engine: allows defining complex event sequences over data as it arrives, using an SQL-like language.

  • Data Grid: this internal component acts as a distributed cache, as well as an internal communication queue between the modules (a Hazelcast-based sketch follows this list).

  • Notebooks: this module offers a multi-language web interface so that the Data Science team can easily create models and algorithms in their favorite languages (Spark, Python, R, SQL, TensorFlow, ...).
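
A minimal sketch of the two checks the Semantic Information Broker performs, reduced to plain Java; the class, field and permission names are invented for illustration and do not reflect the module's real code.

```java
import java.util.Map;
import java.util.Set;

public class SemanticValidationSketch {
    // Checks the client's permission for the operation, then checks that the
    // payload carries the fields the ontology declares (hypothetical model).
    public static void validate(Set<String> clientPermissions, String operation,
                                Map<String, Object> payload, Set<String> ontologyFields) {
        if (!clientPermissions.contains(operation)) {
            throw new SecurityException("Client not authorized for operation: " + operation);
        }
        for (String field : ontologyFields) {
            if (!payload.containsKey(field)) {
                throw new IllegalArgumentException("Payload does not match the ontology; missing field: " + field);
            }
        }
    }

    public static void main(String[] args) {
        validate(Set.of("insert", "query"), "insert",
                 Map.of("temperature", 21.5), Set.of("temperature"));
        System.out.println("Payload accepted");
    }
}
```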
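
Since MongoDB is the reference implementation for online storage (see the technologies table below), here is a query sketch against the Semantic Data Hub's store using the MongoDB Java driver; the connection string, database and collection names are assumptions.

```java
import org.bson.Document;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import static com.mongodb.client.model.Filters.gt;

public class DataHubQuerySketch {
    public static void main(String[] args) {
        // Hypothetical connection string; each ontology is stored as a collection.
        try (MongoClient client = MongoClients.create("mongodb://platform-mongo:27017")) {
            MongoCollection<Document> readings =
                    client.getDatabase("platform_rtdb").getCollection("TemperatureSensor");
            // Query the ontology's data: all readings above 20 degrees.
            for (Document doc : readings.find(gt("value", 20.0))) {
                System.out.println(doc.toJson());
            }
        }
    }
}
```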
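
The Rule Engine itself is configured from the web interface; purely to illustrate the idea of a business rule applied on data ingestion, here is a plain-Java sketch where a rule is a condition plus an action (all names are hypothetical).

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;
import java.util.function.Predicate;

public class RuleSketch {
    public static void main(String[] args) {
        // A business rule: a condition over the incoming record plus an action.
        Predicate<Map<String, Object>> condition =
                data -> ((Number) data.get("temperature")).doubleValue() > 30.0;
        Consumer<Map<String, Object>> action =
                data -> System.out.println("ALERT, temperature above threshold: " + data);

        // Simulate one record arriving at the platform.
        Map<String, Object> incoming = new HashMap<>();
        incoming.put("temperature", 32.5);
        if (condition.test(incoming)) {
            action.accept(incoming);
        }
    }
}
```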
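
The technologies table below names Hazelcast as the Data Grid implementation, so a distributed map and queue sketch with the Hazelcast Java API gives the flavor of this component; the map and queue names are invented.

```java
import java.util.Map;
import java.util.Queue;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class DataGridSketch {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // Distributed cache shared between platform modules.
        Map<String, String> cache = hz.getMap("module-cache");
        cache.put("last-event", "{\"temperature\": 21.5}");
        System.out.println(cache.get("last-event"));

        // Distributed queue used as an internal communication channel.
        Queue<String> events = hz.getQueue("module-events");
        events.offer("new-data-available");

        hz.shutdown();
    }
}
```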

Publication Layer

  • API Manager: this module allows visually creating APIs over the ontologies managed by the platform. It also offers an API Portal for consuming the APIs and an API Gateway to invoke them (a consumer sketch follows this list).

  • Dashboard Engine: this engine allows creating, visually and without any programming, complete dashboards over the information (ontologies) stored on the platform, and then making them available for consumption inside or outside the platform.
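
To show what consuming a generated API might look like, a sketch with the standard Java HTTP client (Java 11+); the endpoint path and authentication header are assumptions, not the API Manager's documented contract.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ApiConsumerSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint of an API generated over an ontology.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://platform-host/api-manager/api/v1/TemperatureSensor"))
                .header("X-API-Key", "<your-api-key>")   // placeholder auth header
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + ": " + response.body());
    }
}
```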

Management Layer

  • Control Panel: the platform offers a complete web console that allows visual management of the platform's elements. All this configuration is stored in a configuration database. It also offers a REST API to manage all these concepts, and a monitoring console that shows the status of each module.

  • Access Manager: allows defining how users are authenticated and authorized, including their roles, the user directory (LDAP, ...) and the protocols used (OAuth2, ...). A token request sketch follows this list.

  • CaaS Console: allows administering all the deployed modules (Docker containers orchestrated by Kubernetes) from a web console, including version upgrades and rollbacks, the number of containers, the scalability rules, etc.
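
Since OAuth2 is among the supported protocols, here is a sketch of a standard OAuth2 resource-owner-password token request using the Java HTTP client; the token endpoint, client id and credentials are placeholders.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class TokenRequestSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical token endpoint and client credentials.
        String clientAuth = Base64.getEncoder()
                .encodeToString("client-id:client-secret".getBytes());
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://platform-host/oauth-server/oauth/token"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .header("Authorization", "Basic " + clientAuth)
                .POST(HttpRequest.BodyPublishers.ofString(
                        "grant_type=password&username=developer&password=secret"))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // JSON containing the access_token
    }
}
```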

Support Layer

  • MarketPlace: allows registering the assets generated in the platform (APIs, dashboards, algorithms, models, rules, ...) and publishing them so that other users can use them.

  • GIS viewers: from the console, you can create GIS layers (from ontologies, WMS services, KML files or images) and GIS viewers (currently based on Cesium technology).

  • File manager: this utility allows uploading and managing files from the web console or through the REST API. These files are managed under the platform's security.

  • Web application server: the platform can serve web applications (HTML + JS) uploaded through the platform's web console.

  • Configuration Manager: this utility allows managing the configurations (in YAML format) of the platform applications per environment (a loading sketch follows this list).
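
As an illustration of consuming such a per-environment YAML configuration from a Java application, a sketch using SnakeYAML; the file name and keys are hypothetical.

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Map;
import org.yaml.snakeyaml.Yaml;

public class ConfigLoadSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical per-environment file retrieved from the Configuration Manager.
        try (InputStream in = Files.newInputStream(Paths.get("my-app-production.yml"))) {
            Map<String, Object> config = new Yaml().load(in);
            System.out.println(config.get("database-url"));
        }
    }
}
```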

Main technologies

MODULE | TECHNOLOGY
Base Technology | Java 8+; Spring Boot 2.x
Control Panel | Spring Boot over Thymeleaf; ConfigDB on MariaDB, PostgreSQL, ...
Semantic Data Hub | MongoDB as the reference implementation for online storage; MinIO + Presto as the reference implementation for historical and analytics storage; relational databases supported; Elasticsearch supported
DataFlow | StreamSets (and some components) integrated into the platform
Flow Engine | Node-RED: configuration and development on Node-RED (components, multitenancy, ...)
Digital Broker | Spring Boot development; Kafka for high-performance streaming; Moquette MQTT for bidirectional communication; WebSockets for web communication
API Manager | Development on Spring Boot + integration with Gravitee
Dashboard Engine | Angular + Gridster as the engine; ODS as the reference component library; ECharts as the gadget library
Notebooks | Configuration and interpreter on Apache Zeppelin
DataGrid & Cache | Hazelcast
Identity Manager | Reference implementation: development over Spring Cloud Security; advanced implementation: integration with Keycloak
Deployment | Modules containerized on Docker; orchestrated by Kubernetes; managed by CaaS (Rancher or OpenShift)
