
Intro

This diagram shows the components that make up the platform and support its functionality.



(*) The UI symbol in the diagram above indicates that the module has a configuration user interface integrated into the Control Panel.


Onesait Platform Deployment Architecture


These modules are organized in layers. The following sections describe each layer in detail.

Acquisition layer

  • IoT Broker: this broker allows devices, systems, applications, websites and mobile applications to communicate with the platform through compatible protocols. It also offers client APIs in several languages.
  • Kafka Server: the platform integrates a Kafka cluster that allows communication with systems that use this exchange protocol, generally because they handle a large volume of information and require low latency.
  • DataFlow: this component allows data flows to be configured from a web interface. These flows are composed of an origin (which can be files, databases, TCP services, HTTP, queues, ... or the platform's IoT Broker), one or more transformations (processors in Python, Groovy, JavaScript, ...) and one or more destinations (the same options as the origin).

  • Digital Twin Broker: this broker allows the Digital Twins to communicate with the platform, and with each other. It supports REST and WebSockets as protocols.
  • Video Broker: allows connecting to cameras through the WebRTC protocol and processing the video stream by associating it with an algorithm (people detection, OCR, etc.).
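As a concrete illustration of the acquisition layer, the sketch below builds an HTTP insert request such as a device might send to the IoT Broker's REST interface. The endpoint path, ontology name and payload fields are illustrative assumptions, not the platform's documented API; only the request is constructed here, nothing is sent.

```python
import json

# Assumed broker URL for illustration only.
IOT_BROKER_URL = "https://platform.example.com/iot-broker/rest"

def build_insert_request(ontology: str, instance: dict) -> dict:
    """Build the pieces of an HTTP insert request for one ontology instance.

    The /ontology/<name> path is a hypothetical example, not the real contract.
    """
    return {
        "method": "POST",
        "url": f"{IOT_BROKER_URL}/ontology/{ontology}",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(instance),
    }

# Example: one temperature measurement from a hypothetical device.
request = build_insert_request(
    "TemperatureSensor",
    {"deviceId": "sensor-01", "temperature": 21.5, "unit": "C"},
)
```

In practice the same payload could also be produced to the Kafka cluster when volume and latency requirements demand it.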

Knowledge layer

  • Semantic Information Broker: once acquired, the information reaches this module, which first validates whether the broker client has permission to perform the requested operation (insert, query, ...), and then gives semantic content to the information received, validating whether it conforms to that semantics (ontology).
  • Semantic Data Hub: this module acts as a persistence hub. Through the Query Engine, it allows persisting to and querying the underlying database where the ontology is stored; compatible databases include MongoDB, Elasticsearch, relational databases, graph databases, etc.
  • Streaming engines: provided by:
    • Flow Engine: this engine allows process flows to be created both visually and easily. It is built on Node-RED. A separate instance is created for each user.

    • Digital Twin Orchestrator: the platform allows communication between Digital Twins to be orchestrated visually through the Flow Engine itself. This orchestration creates bidirectional communication with the digital twins.

    • Rule Engine: allows business rules to be defined from a web interface; these rules can be applied on data ingestion or run on a schedule.
    • SQL Streaming Engine: allows complex sequences to be defined, as the data arrives, in an SQL-like language.
  • Data Grid: this internal component acts as a distributed cache and as an internal communication queue between the modules.
  • Notebooks: this module offers a multi-language web interface so that the Data Science team can easily create models and algorithms in their favorite languages and frameworks (Spark, Python, R, SQL, TensorFlow, ...)
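To make the Semantic Information Broker's role above more concrete, here is a minimal sketch of the kind of semantic validation it performs: checking that an incoming instance matches the fields its ontology declares. The schema format and ontology definition below are deliberate simplifications invented for this example.

```python
# Assumed, simplified ontology definition: field name -> expected Python type.
ONTOLOGY_SCHEMA = {
    "TemperatureSensor": {"deviceId": str, "temperature": float, "unit": str},
}

def validate_instance(ontology: str, instance: dict) -> list:
    """Return a list of validation errors; an empty list means the instance conforms."""
    schema = ONTOLOGY_SCHEMA.get(ontology)
    if schema is None:
        return [f"unknown ontology: {ontology}"]
    errors = []
    for field, expected_type in schema.items():
        if field not in instance:
            errors.append(f"missing field: {field}")
        elif not isinstance(instance[field], expected_type):
            errors.append(f"bad type for {field}")
    return errors

# A conforming instance produces no errors.
assert validate_instance(
    "TemperatureSensor",
    {"deviceId": "s1", "temperature": 20.0, "unit": "C"},
) == []
```

The real broker validates against the ontology stored in the platform rather than an in-memory dictionary, but the accept-or-reject flow is the same idea.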

Publication Layer

  • API Manager: this module allows APIs to be created visually on the ontologies managed by the platform. It also offers an API Portal for consuming the APIs and an API Gateway for invoking them.

  • Dashboard Engine: this engine allows complete dashboards to be created, visually and without any programming, on the information (ontologies) stored in the platform, and then made available for consumption inside or outside the platform.
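The sketch below shows what invoking a published API through the API Gateway could look like. The gateway base path, the API name/version and the `X-OP-APIKey` header are assumptions made for illustration; the real contract is published in the API Portal.

```python
# Assumed gateway base URL for illustration only.
GATEWAY_URL = "https://platform.example.com/api-manager/server/api"

def build_api_call(api_name: str, version: str, token: str, query: dict) -> dict:
    """Build the URL and headers for a hypothetical gateway call."""
    qs = "&".join(f"{k}={v}" for k, v in query.items())
    return {
        "url": f"{GATEWAY_URL}/{version}/{api_name}?{qs}",
        # Hypothetical authentication header name.
        "headers": {"X-OP-APIKey": token},
    }

# Example: query a hypothetical "temperatures" API for one device.
call = build_api_call("temperatures", "v1", "my-token", {"deviceId": "sensor-01"})
```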

Management layer

  • Control Panel: the platform offers a complete web console that allows visual management of the platform's elements. All this configuration is stored in a configuration database. It also offers a REST API to manage all these concepts, and a monitoring console that shows the status of each module.

  • Access Manager: allows defining how users are authenticated and authorized, including their roles, the user directory (LDAP, ...) and the protocols used (OAuth2, ...)
  • CaaS Console: allows all deployed modules (running as Docker containers orchestrated by Kubernetes) to be administered from a web console, including version upgrades and rollbacks, the number of containers, scalability rules, etc.
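Since the Access Manager supports OAuth2, the following sketch builds a standard OAuth2 client-credentials token request of the kind such a server would accept. The token endpoint URL and credentials are assumptions; the form fields themselves are defined by the OAuth2 specification (RFC 6749). Only the request is constructed, nothing is sent.

```python
from urllib.parse import urlencode

# Assumed token endpoint for illustration only.
TOKEN_URL = "https://platform.example.com/oauth-server/oauth/token"

def build_token_request(client_id: str, client_secret: str) -> dict:
    """Build a client-credentials grant request body per RFC 6749."""
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    })
    return {
        "url": TOKEN_URL,
        "headers": {"Content-Type": "application/x-www-form-urlencoded"},
        "body": body,
    }

# Example with placeholder credentials.
req = build_token_request("my-app", "s3cret")
```

The access token returned by a real server would then accompany calls to the other modules, subject to the roles and permissions the Access Manager defines.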

Support layer

  • MarketPlace: allows the assets generated in the platform (APIs, dashboards, algorithms, models, rules, ...) to be defined and published so that other users can use them.

  • GIS viewers: from the console you can create GIS layers (from ontologies, WMS services, KML, images) and GIS viewers (currently based on Cesium technology).

  • File manager: this utility allows you to upload and manage files from the web console or from the REST API. These files are managed with the platform's security.

  • Web application server: the platform allows serving web applications (HTML + JS) loaded through the platform's web console.
  • Configuration Manager: this utility allows you to manage configurations (in YAML format) of the platform applications by environments.
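To illustrate the per-environment behavior of the Configuration Manager, the sketch below resolves a configuration by environment name. The environment names and configuration keys are invented for this example; real configurations are stored as YAML in the platform rather than as an in-memory dictionary.

```python
# Assumed example data: one configuration per environment.
CONFIGURATIONS = {
    "dev":  {"db_url": "jdbc:mysql://dev-db:3306/onesait",  "log_level": "DEBUG"},
    "prod": {"db_url": "jdbc:mysql://prod-db:3306/onesait", "log_level": "WARN"},
}

def get_config(environment: str) -> dict:
    """Return the configuration for an environment, failing loudly if absent."""
    try:
        return CONFIGURATIONS[environment]
    except KeyError:
        raise ValueError(f"no configuration for environment: {environment}")

# Each environment sees only its own values.
assert get_config("dev")["log_level"] == "DEBUG"
```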
