Notebooks
The Notebooks module allows the Data Scientists who work with the platform to:
Create algorithms in a centralized web environment
Publish these algorithms as REST APIs so that other users can consume them
These algorithms can be created using different languages and frameworks, such as Python, R, Spark, TensorFlow, Keras, etc.
The platform ensures that Data Scientists only have access to the information (ontologies) that corresponds to them, thus enabling the creation of a secure and complete Data Lake.
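As a rough illustration of consuming a published algorithm, the sketch below builds a request against a Zeppelin-style notebook execution endpoint. The host URL, notebook id, token, and the `X-OP-APIKey` header name are placeholders chosen for the example, not values taken from the platform documentation:

```python
# Minimal sketch: triggering a published notebook through a
# Zeppelin-compatible REST API. All identifiers below are hypothetical.
import urllib.request


def build_run_request(host: str, note_id: str, token: str) -> urllib.request.Request:
    """Build a POST request that runs every paragraph of the notebook
    identified by note_id (Zeppelin-style /api/notebook/job endpoint)."""
    url = f"{host}/api/notebook/job/{note_id}"
    return urllib.request.Request(
        url,
        method="POST",
        headers={"X-OP-APIKey": token},  # header name is an assumption
    )


if __name__ == "__main__":
    req = build_run_request(
        "https://platform.example.com/notebooks",  # placeholder host
        "2ABCDEFGH",                               # placeholder notebook id
        "my-token",                                # placeholder credential
    )
    # urllib.request.urlopen(req)  # uncomment against a real deployment
    print(req.full_url)
```

In a real deployment, the exact endpoint and authentication header should be taken from the "Running a Notebook via its REST API" guide listed below.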
A look at the Notebooks
Basics of Notebooks
- Using the platform interpreter in the Notebooks
- Notebooks with persistence and SSO
- Running a Notebook via its REST API
- How to work with Git in Notebooks
- ReadOnly and Runner Roles in Notebooks
- Using the Spark client for the Platform's Notebooks
- Using Conda environments in the %python interpreter
Tools for Notebooks
- Monitoring in Notebooks Engine
- Jupyter Plugin to export Notebooks to the Platform
- PyGWalker integration in Notebooks
onesaitplatform Interpreter for Notebooks
- How to configure the %onesaitplatform interpreter
- %onesaitplatform Interpreter Reference
- Using the %onesaitplatform interpreter
- Interpreter configuration in the Notebooks module
Tips for Development with Notebooks
- Restarting my interpreter
- How to install new libraries with Spark interpreter
- How to free up memory in the Notebooks?
- Importing notebooks into the Platform
Model Notebooks
- Creation and execution of a Model
- Model execution with dashboard output
- Training and deployment of models with BaseModelService
Examples on Notebooks
- Data Exploitation Notebook with Spark
- Image recognition with GluonCV and the Notebooks
- How to make a prediction with Linear Regression on Notebooks?
- Creation of a time series prediction model with Prophet
- Use of a time series prediction model
- Using Data Across Interpreters
- R Interpreter for Apache Zeppelin
- SparkR Interpreter for Apache Zeppelin
- Spark SQL and PySpark interpreters