Prevision.io — Documentation¶
Presentation¶
Introduction¶
Value proposition¶
AI in the enterprise promises leaps in efficiency, business innovation and customer-facing performance. However, to enable greater adoption, democratization and acceptance of AI, organizations must overcome not only talent gaps, but deployment, usability and governance gaps too. The real enabler of enterprise AI is the removal of friction between business departments, data science and IT users. Data science is following the path of software development: integrated environments, agile and iterative methods, and a move toward modularity and no-code approaches that empower expert and citizen developers and data scientists alike. The winners of the enterprise AI revolution are those who can achieve faster, more agile, more integrated and more collaborative production cycles across the DataOps, MLOps and DevOps areas. Prevision is an end-to-end enterprise AI platform, specifically designed to enable business users, data scientists and developers to deliver AI projects that generate ROI, faster. It streamlines the creation, deployment and management of AI-powered business applications across their full lifecycle.
Requirements¶
Prevision.IO is a SaaS platform optimized for the Firefox and Chrome browsers. The cloud version can be accessed online at https://cloud.prevision.io, or it can be deployed on-premise or in your private cloud. Please contact us at support@prevision.io if you have any questions regarding our deployment capabilities.
Conditions¶
Please read the general terms and conditions available at the following link: https://cloud.prevision.io/terms
Contacts¶
If you have any questions about using the Prevision.IO platform, please contact us using the chat button on the Prevision.IO store interface or by email at support@prevision.io.
Getting started¶
Account creation¶
By clicking on the following address, you will land on the connection page, which allows you to create an account or log in if you already have a Prevision.IO account.
In order to create a new account, just click on the sign up button next to the log in button. You will access the following account creation form.

Once you have filled in the required information, you will have 30 days of free but limited access to the Prevision.IO platform. In order to upgrade your free trial account to a full-access one, please contact us at support@prevision.io.
Connection¶
Once your account has been created, you will be able to access the Prevision.IO Studio and Store and start creating and deploying models.
Please note that SSO using your Google/LinkedIn/GitHub account is available.

Cloud & free trial limitations¶
If you are using our cloud platform (https://cloud.prevision.io) with a free trial account, some limitations apply. Here is a quick overview of the limitations for free trial accounts:
| Entity | Action | Limitation |
|---|---|---|
| PROJECT | Create project | Free trial users can create 2 limited projects |
| DATASETS | Add dataset from file / datasource | 10 datasets max + 1 GB per dataset |
| IMAGE FOLDER | Add / Update / Delete image folder in project | 1 image folder |
| DATA SOURCE | Add / Update / Delete datasource in project | 1 datasource max |
| CONNECTOR | Add / Update / Delete connector in project | 1 connector max |
| USECASE | Add / Update / Delete usecase in project | 5 usecases max |
| USECASE VERSION | Add / Update usecase version in project | 3 concurrent usecase versions |
| PREDICTION | Add / Update prediction in usecase version | 2 concurrent predictions |
| STORE APP | Deploy apps | 5 concurrent deployed apps |
Studio¶
Concepts¶
Projects¶
Introduction¶
In Prevision.IO Studio, resources, such as datasets or models, are scoped by project in order to structure your work and make collaboration inside a project easier.
Create a new project¶
In order to create a new project, you have to click on the “new project” button at the top right of the “my projects” view.
You will access the following interface:

In order to create your project, you have to provide at least a project name and a color. You can also add a description of your project.
Please note that, if your account has been given the admin role on a project, you can change this information at any time by clicking on “settings” in the project menu.
All your projects will be displayed in the “my projects” view. Two displays are available, a list view and a cards view, and you can switch between them by clicking on the view button you prefer next to the search bar.
Cards view¶
The cards view displays all your projects as cards.

You will find the following information on the cards :
- project name
- creator and creation date
- description (if available)
- number of datasets/pipelines/use cases
- list of collaborators in the project and their associated roles
If your role on a project is admin, an action button is available at the top right of each card. By clicking on this button, you will be able to edit the project information and delete the project. Please note that deleting a project will also delete all of the project's items, such as pipelines or datasets, created in the project.
Note
Tip: you can filter projects by name using the search bar at the top right of the projects view.
List view¶
In this view you will find the same information and actions as in the cards view, with the exception of the project description.

Collaboration in a project¶
Prevision.IO Studio is built so that users can collaborate within projects. If your role on a project is admin, you can manage the collaborators and their rights from the “collaborators” menu of the selected project.

- Add a user : by entering the email address of a user registered on the Prevision.IO platform, you can add them as a collaborator
- Role : if your role on the project is admin, you will be able to edit user roles
- Delete : by clicking the delete button on the right side of a user, you can revoke that user's access to the project
Project roles¶
In a project there are 4 levels of roles :
- End-user : in this project, the user can only access the list of deployed models and applications and make predictions
- Viewer : you can navigate through all the resources of a project and visualize information, but you cannot download or create resources
- Contributor : Viewer rights + you can create and manage the resources of the project
- Admin : Contributor rights + you can manage project users and project settings
Edit a project¶
You can change the following parameters of your project by clicking on settings in the main navigation of your project or, in the list/cards view of your projects, by clicking on the action button.
- Name of the project
- Description of the project
- Color displayed on the card of the project
Delete a project¶
If a project is no longer useful, you can delete it by clicking on the action button on the card/list projects view.
Warning: all resources created in the project will be deleted along with the project, with no possibility of backup. Once deleted, a project and its resources are no longer available to you or to any user previously added to the project.
Project home¶
By entering a project, you will first be redirected to the project homepage. The following sections, each including its 3 latest entries, are displayed :
- Datasets : the last uploaded datasets
- Pipelines : the last pipeline templates
- Usecases : the last usecases
Under each section you will also find a link to the dedicated page, also available through the left project main menu, and, for pipelines and usecases, a shortcut to create new ones.
Data¶
Datasets¶
Upload a dataset¶
By clicking on the Datasets button of the dedicated data page menu, you will land on the dataset page.

This page allows you to consult all the project datasets uploaded into the application and to import new ones using one of the following methods :
- from files (CSV or ZIP)
- from a data source at a given time (snapshot)
In order to upload a dataset from a file, you can drag & drop your file in the dedicated area or click on “Browse” to open your computer's file explorer. Once your file is selected, you can start the upload by clicking on the “save Data set” button on the right side of the file upload area.
In order to create a dataset from a data source, you have to use the toggle button “use a data source” and then select a data source from the dropdown list.
When the upload of the dataset is done, the platform will automatically compute information regarding your dataset in order to maximize the automation of machine learning computations. You can follow the progress of these operations in the status column of the list.
- dataset statistics pending : pre-computing of dataset information
- ready to be used : the dataset is ready to be used in the platform
- dataset statistics failed : the dataset cannot be used for training; you have to re-upload the file
- drift pre-computing failed : you can train with this dataset, but once a model is deployed, drift monitoring will not be available
From the dataset table, several actions are possible.
Actions¶

By clicking on the usecase button, you can start configuring a training based on the selected dataset. By clicking on the start embedding button, you can launch the dataset analysis computation. Once it is done, the icon in the list will change to “explore embedding”. By clicking on it, you will access the dataset analysis page. By clicking on the action button on the right side of the table, you will be able to :
- edit the name of the dataset
- use this dataset into a pipeline
- delete the dataset
Please note that this action button will also be available on a dataset page.
Dataset information¶
Navigation
Once your dataset has been uploaded and computed in the platform, you will be able to access information about it by clicking on your dataset in the list.

Three menus regarding the dataset are available, allowing you to better understand your data :
- General : general information about your dataset
- Columns : information about features of your dataset
- Sample : a sample visualisation of your dataset
General
On the general screen of a dataset, you will find generic information about your dataset, such as the number of columns, the number of samples and the number of cells, or the usecases using this dataset. Two graphs are also displayed, showing :
- the feature distribution by feature type (linear, categorical, date or text). This distribution is automatically computed when uploading a dataset into the platform
- a correlation matrix showing the correlation coefficients between variables. Each cell in the table shows the correlation between two variables
At the bottom of the screen you will also find the list of usecases trained from this dataset.
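The correlation coefficients in this matrix are standard pairwise correlations; as a point of reference, here is a minimal sketch of how the same kind of matrix can be computed locally with pandas on a downloaded copy of the dataset (the file name is hypothetical):

import pandas as pd

# Load a local copy of the dataset (hypothetical file name)
df = pd.read_csv("my_dataset.csv")

# Pairwise correlation between numeric features; cell (i, j) is the
# correlation coefficient between feature i and feature j
corr = df.corr(numeric_only=True)
print(corr)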

Columns
By clicking on the columns button in the top menu, you will find a list of all the dataset features, their type (categorical, linear, text or date) and the percentage of missing values for each feature.

Sample
By clicking on the sample button, a sample of 10 rows of your dataset will be displayed.

Dataset analysis¶
Introduction¶
The Data Explorer is a specific module that aims to detect similarities between the samples of your dataset. It uses a combination of dimension reduction algorithms to represent your dataset in a vector space, sometimes called an embedding. By using it, you are able to :
- visually observe clusters
- see which samples are the most similar to a selected one, for example a customer with similar buying habits
- see in which population a given feature, like expenses, is present or higher
- have a global view of your data
The Data Explorer is often used as a pre-analysis of datasets, as it uses an unsupervised algorithm, but it can be used as a standalone feature. Once the embeddings have been generated, you can retrieve them through the API or download them for use in a third-party tool like Excel.
Start a dataset analysis¶
There are two ways to launch the explorer: by clicking on “start embedding” in the dataset list or, after opening a dataset page, by clicking on the actions button at the top of the screen and selecting “start embedding”.


Computing the embedding will take more or less time depending on the size of your dataset. Once the computation is done, you will see an eye icon in the list and, in the actions button of the dataset page, the compute analysis button will be replaced by an “explore embedding” button. By clicking on either of these buttons, you will enter the dataset analysis interface.
The explorer¶
The Data Explorer is now accessible and will give you a visual representation, in 2 or 3 dimensions, of the selected dataset. This representation is a dimension reduction, constrained to 2 or 3 dimensions, applied to the embedded vectors, which may be of a higher dimension. There are five important sections in the Data Explorer.
- Graphical projection
The main screen is a visual representation of the dataset. Each point is a sample of your dataset (up to 5000). You can pan and zoom and, if you click on a point or use the box selection tool, more information is displayed.
In this representation, points are grouped by similarities as much as possible, meaning that if two points are near each other in this space, the samples share some important similarities.
The nature of the displayed information is selected in section (3).
- Search and similarities
The second section is a dynamic list of similar samples.
You can search for any sample using any feature. For example, if your dataset has an index with names, you can search for a sample by its name, but you can also search for all the samples that have « RPG » as their type or « 5 » as their size.
Once a sample is selected, it and a list of similar samples are highlighted in the main section. They can be further isolated by clicking on the « isolate N points » button at the top of the section.

The number of similar samples to display can be chosen with the « neighbors » slider.

- Labels
Section 3's main purpose is to set the labels displayed in section 1. Here you can set :
- the label displayed above each point
- the feature used for coloring each point


- Segmentation and clustering
Section 4 is all about segmenting and clustering your samples.
Here you can choose an algorithm and tune its parameters to display the most similar points together. You can thus start to observe sample clusters, or segments of data that represent big groups sharing important similarities.
Yet, as we try to project a lot of dimensions into a smaller space (3D or 2D), note that these algorithms are only meant for display and for informing human judgment. A lot of the process is somewhat subjective, and further conclusions should be driven by a supervised algorithm.
Here you can choose between 3 algorithms :
- PCA : the quickest and simplest algorithm. Clicking on the PCA tab immediately leads to a 3D representation of your samples. Yet, this is a very simple algorithm that only shows sample variability along 3 axes. You can find more information about PCA on Wikipedia
- t-SNE : once you click on the t-SNE tab, a convergence process is launched. t-SNE is a very time-consuming algorithm, but it can lead to very accurate segmentations. You can change its parameters, click on the « Stop » button, then « Re-run » it. But in most cases it is better to already know this algorithm in order to use it. You can find more information about t-SNE on Wikipedia
- UMAP : UMAP is a good alternative to t-SNE and PCA. Quicker than t-SNE, it offers better results than PCA. The only parameter is « Neighbors », which changes the size of the clusters. The more neighbors you ask for, the bigger the clusters. You can find more information about UMAP on Wikipedia.
We recommend using UMAP in most cases.
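For intuition, here is a minimal offline sketch of the same three projections, assuming the scikit-learn and umap-learn packages (this is an illustration, not the platform's own implementation), applied to an embedding matrix like the one retrieved in the API section below:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
import umap  # from the umap-learn package

vec = np.random.rand(1000, 64).astype("float32")  # stand-in for a real embedding

# PCA: fast and linear; shows sample variability along the top 3 axes
pca_3d = PCA(n_components=3).fit_transform(vec)

# t-SNE: slow to converge, but can separate clusters sharply
tsne_2d = TSNE(n_components=2, perplexity=30).fit_transform(vec)

# UMAP: a good middle ground; n_neighbors plays the role of the « Neighbors » slider
umap_2d = umap.UMAP(n_neighbors=15).fit_transform(vec)

print(pca_3d.shape, tsne_2d.shape, umap_2d.shape)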
- API information
The 5th section is only about API information.
When launching a dataset analysis, the platform builds an embedding of the dataset, namely, it projects each sample of the dataset to a vector. This embedding is attached to the dataset and can be retrieved with the dataset ID. You can then use it to run any mathematical operation that works on vectors, in most cases a distance.
Section 5 of the tools gives you the ID of your dataset :

With it you can access several URLs :
- GET https://<YOUR_DOMAIN>.prevision.io/api/datasets/files/<DATASET_ID>/download : get the original dataset
- GET https://<YOUR_DOMAIN>.prevision.io/api/datasets/files/<DATASET_ID> : JSON info about your dataset
- GET https://<YOUR_DOMAIN>.prevision.io/api/datasets/files/<DATASET_ID>/explorer : JSON info about the embedding
- GET https://<YOUR_DOMAIN>.prevision.io/api/datasets/files/<DATASET_ID>/explorer/tensors.bytes : numpy file of embeddings
- GET https://<YOUR_DOMAIN>.prevision.io/api/datasets/files/<DATASET_ID>/explorer/labels.bytes : tsv file of labels
The embedding file (tensors.bytes) is a numpy float32 file whose shape is given in the JSON returned by the explorer URL. You can read it with the following Python code, for example:
import numpy as np
from io import BytesIO
from urllib.request import Request, urlopen

req = Request('https://<YOUR_DOMAIN>.prevision.io/ext/v1/datasets/files/<DATASET_ID>/explorer/tensors.bytes')
req.add_header('Authorization', <YOUR_TOKEN>)  # get YOUR_TOKEN in the admin page
content = urlopen(req).read()
# (u, v) is the shape given by /ext/v1/datasets/files/<DATASET_ID>/explorer
vec = np.frombuffer(BytesIO(content).read(), dtype="float32").reshape(u, v)
print(vec.shape)
Please note that you can use the SDK's functions in order to simplify this process.
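As an illustration of the distance computations mentioned above, here is a minimal sketch, using plain numpy rather than any platform API, that retrieves the samples most similar to a given one with a Euclidean distance, which is essentially what the explorer's « neighbors » list does (the stand-in array only makes the snippet self-contained; in practice, use the `vec` matrix loaded above):

import numpy as np

# Stand-in for the (u, v) embedding matrix 'vec' loaded in the previous snippet
vec = np.random.rand(5000, 64).astype("float32")

# Euclidean distance from sample 0 to every sample of the dataset
dist = np.linalg.norm(vec - vec[0], axis=1)

# Indices of the 10 most similar samples (index 0 itself is excluded)
neighbors = np.argsort(dist)[1:11]
print(neighbors)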
Image folders¶
In order to train image use cases, you will have to upload images using a zip file. By dragging & dropping a zip file in the dedicated area, you will be able to load your image folder. All image folders uploaded into your project will appear in the list under the drag & drop section.
It is recommended to use an image dataset whose total volume does not exceed 4 GB. We invite you to contact us if you want to use larger datasets.

By clicking the action button on the list you will be able to :
- edit the name of your image folder
- delete your image folder
Connectors¶
In the Prevision.IO platform, you can set up connectors in order to connect the application directly to your data spaces and generate datasets. Several connector types are available :
- SQL databases
- HIVE databases
- FTP server
- Amazon S3 datastore
- GCP
By clicking on the “new connector” button, you will be able to create and configure a new connector. You will need to provide information depending on the connector type in order for the platform to be able to connect to your database or file server.
Note
TIP: you can test your connector once configured by clicking the “test connector” button.
Once connectors have been added, you will find the list of all your connectors under the new connector configuration area. By clicking on the action button, you can :
- test the connector
- edit the connector
- delete the connector
Once at least one connector is properly configured, you will be able to use the data sources menu in order to create CSV files from your database or file server.
Data sources¶
In order to create datasets, you first need to configure a data source using a connector. To do this, click on the data sources menu and select the configured connector in the dropdown list. Depending on the connector type, you will have to configure the data source differently.
When your data source is ready, in order to generate a dataset from it, you have to go to the datasets page, enable the toggle button “use a data source” and select your data source from the dropdown list.
SQL data sources¶
Once a SQL connector is selected, you will first have to choose the database you want to use. Then, two different methods are available to configure your data source :
- by selecting a table
- by clicking the “select by query” button and entering a valid SQL query
HIVE data sources¶
FTP data sources¶
SFTP data sources¶
Amazon S3 datastore¶
GCP data sources¶
Use cases¶
Introduction¶
Once in a project, you can go to the “use case” page using the lateral navigation and start creating new use cases or explore existing ones.
Depending on your problem and the data type you have, several training possibilities are available in the platform :
| Training type / Data type | Tabular | Timeseries | Images | Definition | Example |
|---|---|---|---|---|---|
| Regression | Yes | Yes | Yes | Prediction of a quantitative feature | 2.39 / 3.98 / 18.39 |
| Classification | Yes | No | Yes | Prediction of a binary qualitative feature | « Yes » / « No » |
| Multi Classification | Yes | No | Yes | Prediction of a qualitative feature whose cardinality is > 2 | « Victory » / « Defeat » / « Tie game » |
| Object Detection | No | No | Yes | Detection of 1 to n objects per image + their location | Is there a car in this image? If so, where? |
| Text Similarity | Yes | No | No | Estimate the degree of similarity between two texts; find texts that are similar in context and meaning to your queries | « a tool for screws » should lead to a screwdriver description |
Then, for each data type, you will have to choose between several usecase types, each requiring a specific configuration.
Create a new usecase¶
In order to create a new usecase using the interface, three possibilities are available :
- In the usecase menu, by clicking on the “new usecase” button at the top right of the screen
- In the dataset list, by clicking on the actions button of an entry and selecting “create usecase”
- On a dataset page, by clicking on the “actions” button and selecting “create usecase” in the menu
Then you will land on the new usecase page and will have to choose the datatype and the training type matching your problem.

As each training type requires a specific configuration, all the information needed to start training a usecase is explained in the dedicated chapter of each training type.
Versioning of a usecase¶
In the Prevision.IO platform, you can create multiple versions of a usecase, allowing you to search for the best-performing training and to deploy and switch any model from any version of the same usecase.
In order to do that, there are several possibilities :
- From the usecase list, by clicking on the “action” button of an entry and selecting “new version”
- On a usecase page, by clicking on the “action” button and selecting “new version”
- In the versions menu of a usecase, by selecting “new version” in the list action button
Then, you will be redirected to the “new usecase” page, but with limited options. First of all, you cannot change the datatype or the training type between versions.
Duplication of a usecase¶
In order to duplicate a usecase, there are two options :
- by using the action button on the right side of the usecase list
- by using the “action” button at the top right of any usecase page and selecting “duplicate usecase”
By doing this, the new usecase screen will appear, keeping the configuration of the duplicated usecase.
Model pages¶
Each model page is specific to the datatype/training type chosen for the usecase training. The screens and functionality for each training type will be explained in the following sections. You can access a model page in two ways :
- by clicking on a graph entry from the general usecase page
- by clicking on a list entry from the models top navigation bar entry
Then you will land on the selected model page, split into different parts depending on the training type.
Tabular usecases - general information¶
For each kind of tabular training type, the model's general information will be displayed at the top of the screen. Three sections will be available.

- Model information : information about the trained model such as the selected metric and the model score
- Hyperparameters : downloadable list of the hyperparameters applied to this model during training
- Selected feature engineerings (for regression, classification & multi-classification) : feature engineerings applied during training
- Preprocessings (for text similarity usecases) : list of preprocessings applied to textual features
Please note that for the following usecase types, the general information part differs from the others :
- Image detection usecases : no feature engineering
- text similarity usecases : preprocessings are displayed instead of feature engineerings
Model page - Graphical analysis¶
In order to better understand the selected model, several graphical analyses are displayed on a model page. Depending on the nature of the usecase, the displayed graphs change. Here is an overview of the displayed analyses depending on the usecase type.
| Graph | Tabular regression | Tabular classification | Tabular multi-classification | Tabular text similarity | Time series regression | Image regression | Image classification | Image multi-classification | Image detection |
|---|---|---|---|---|---|---|---|---|---|
| Scatter plot graph | Yes | No | No | No | Yes | Yes | No | No | No |
| Residual errors distribution | Yes | No | No | No | Yes | Yes | No | No | No |
| Score table (textual) | Yes | No | No | No | Yes | Yes | No | No | No |
| Score table (overall) | No | No | Yes | No | No | No | No | Yes | No |
| Cost matrix | No | Yes | No | No | No | No | Yes | No | No |
| Density chart | No | Yes | No | No | No | No | Yes | No | No |
| Confusion matrix | No | Yes | Yes | No | No | No | Yes | Yes | No |
| Score table (by class) | No | Yes | Yes | No | No | No | Yes | Yes | No |
| Gain chart | No | Yes | No | No | No | No | Yes | No | No |
| Decision chart | No | Yes | No | No | No | No | Yes | No | No |
| Lift per bin | No | Yes | No | No | No | No | Yes | No | No |
| Cumulated lift | No | Yes | No | No | No | No | Yes | No | No |
| ROC curve | No | Yes | Yes | No | No | No | Yes | Yes | No |
| Accuracy VS K results | No | No | No | Yes | No | No | No | No | No |
Model page - graphs explanation¶
The feature graphs are then displayed (not for text similarity), allowing you to see the influence of the features on the selected model. Two graphs are accessible through the two feature tabs :
- Feature importance : graph showing you the importance of the dataset features. By clicking on the chart, you will be redirected to the dedicated feature page.
- Feature engineering importance : showing you the importance of selected feature engineering.

Please note that the feature importance graph also takes the feature engineering importance into account. For example, if a feature does not have much influence on the model by itself but has a great influence after feature engineering, this will be reflected in the feature importance graph.
- Scatter plot graph : This graph illustrates the actual values versus the values predicted by the model. A powerful model gathers the point cloud around the orange line.

- Residual errors distribution : This graph illustrates the dispersion of errors, i.e. residuals. A successful model displays residuals centered and symmetric around 0.

- Score table (textual) : Among the displayed metrics, we have:
- The mean squared error (MSE)
- The root mean squared error (RMSE)
- The mean absolute error (MAE)
- The coefficient of determination (R2)
- The mean absolute percentage error (MAPE)

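All five scores are standard regression metrics; as a point of reference, here is a minimal sketch computing them with scikit-learn on illustrative arrays (mean_absolute_percentage_error requires a recent scikit-learn):

import numpy as np
from sklearn.metrics import (mean_squared_error, mean_absolute_error,
                             r2_score, mean_absolute_percentage_error)

y_true = np.array([3.0, 5.0, 2.5, 7.0])  # actual values (illustrative)
y_pred = np.array([2.8, 5.4, 2.9, 6.6])  # model predictions (illustrative)

mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)  # root of the mean squared error
mae = mean_absolute_error(y_true, y_pred)
r2 = r2_score(y_true, y_pred)
mape = mean_absolute_percentage_error(y_true, y_pred)
print(mse, rmse, mae, r2, mape)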
Please note that you can download every graph displayed in the interface by clicking on the top right button of each graph and selecting the format you want.
- Slider : For a binary classification, some graphs and scores may vary according to a probability threshold: values above the threshold are considered positive and values below negative. This is the case for:
- The scores
- The confusion matrix
- The cost matrix
Thus, you can define the optimal threshold according to your preferences. By default, the threshold corresponds to the one that maximizes the F1-Score. Should you change the position of the threshold, you can click on the « back to optimal » link to position the cursor back to the probability that maximizes the F1-Score.
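To make the threshold mechanics concrete, here is a minimal sketch (with illustrative data, not the platform's internals) showing how a probability threshold turns scores into classes and how the F1-maximizing threshold can be found:

import numpy as np
from sklearn.metrics import f1_score

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                  # actual classes
proba = np.array([0.1, 0.4, 0.35, 0.8, 0.65, 0.2, 0.9, 0.55])  # predicted probabilities

# Score every candidate threshold; values >= t are classified positive
thresholds = np.linspace(0.01, 0.99, 99)
scores = [f1_score(y_true, (proba >= t).astype(int), zero_division=0) for t in thresholds]

best = thresholds[int(np.argmax(scores))]  # the « optimal » threshold of the slider
print(best)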

- Cost matrix : Provided that you can quantify the gains or losses associated with true positives, false positives, false negatives, and true negatives, the cost matrix works as an estimator of the average gain for a prediction made by your classifier. In the case explained below, each prediction yields an average of €2.83.

The matrix is initiated with default values that can be freely modified.
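The average gain is simply the gain of each of the 4 outcomes weighted by its frequency at the chosen threshold. A minimal sketch, with illustrative gain values in the spirit of the €2.83 example above:

import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 0])

# Outcome counts at the current threshold
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

# Illustrative gains/losses per outcome, as entered in the cost matrix
gain_tp, gain_fp, gain_fn, gain_tn = 10.0, -5.0, -2.0, 0.0

# Expected gain per prediction = weighted average over all predictions
average_gain = (tp * gain_tp + fp * gain_fp + fn * gain_fn + tn * gain_tn) / len(y_true)
print(average_gain)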
- Density chart : The density graph allows you to understand the density of positives and negatives among the predictions. The more efficient your classifier is, the more the 2 density curves are disjoint and centered around 0 and 1.

- Confusion matrix : The confusion matrix helps to understand the distribution of true positives, false positives, true negatives and false negatives according to the probability threshold. The boxes in the matrix are darker for large quantities and lighter for small quantities.

Ideally, most classified individuals should be located on the diagonal of your matrix.
- Score table (graphical) : Among the displayed metrics, we have:
- Accuracy: The sum of true positives and true negatives divided by the number of individuals
- F1-Score: Harmonic mean of the precision and the recall
- Precision: True positives divided by the sum of true positives and false positives
- Recall: True positives divided by the sum of true positives and false negatives

- Gain chart : The gain graph allows you to quickly visualize the optimal threshold to select in order to maximize the gain as defined in the cost matrix.

- Decision chart : The decision graph allows you to quickly visualize all the proposed metrics, regardless of the probability threshold. Thus, one can visualize at what point the maximum of each metric is reached, making it possible to choose a selection threshold.

It should be noted that the dashed curve illustrates the expected gain per prediction. It is therefore directly linked to the cost matrix and will be updated if you change the gain of one of the 4 possible cases in the matrix.
- Lift per bin : The predictions are sorted in descending order and the lift of each decile (bin) is indicated in the graph. Example: a lift of 4 means that there are 4 times more positives in the considered decile than on average in the population.

The orange horizontal line shows a lift at 1.
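For reference, a decile lift is computed by sorting the predictions in descending order, cutting them into 10 bins and comparing each bin's positive rate to the overall positive rate. A minimal pandas sketch on synthetic data:

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({"proba": rng.random(1000)})
df["target"] = (rng.random(1000) < df["proba"]).astype(int)  # synthetic positives

# Decile 1 holds the highest predictions; lift = decile positive rate / overall rate
df["decile"] = pd.qcut(df["proba"].rank(method="first", ascending=False),
                       10, labels=range(1, 11))
lift = df.groupby("decile", observed=True)["target"].mean() / df["target"].mean()
print(lift)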
- Cumulated lift : The objective of this curve is to measure what proportion of the positives can be achieved by targeting only a subsample of the population. It therefore illustrates the proportion of positives according to the proportion of the selected sub-population.

A diagonal line (orange) illustrates a random pattern (= x % of the positives are obtained by randomly drawing x % of the population). A segmented line (blue) illustrates a perfect model (= 100% of positives are obtained by targeting only the population’s positive rate).
- ROC curve : The ROC curve illustrates the overall performance of the classifier (more info: https://en.wikipedia.org/wiki/Receiver_operating_characteristic). The more the curve appears linear, the closer the quality of the classifier is to a random process. The more the curve tends towards the upper left side, the closer the quality of your classifier is to perfection.

- Accuracy VS K results : this graph shows the evolution of the accuracy and of the MRR for several values of K results.

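For reference, the MRR (Mean Reciprocal Rank) is the average of 1 / rank of the first correct result over all queries, and accuracy at K is the share of queries whose correct result appears in the top K. A minimal sketch with hypothetical ranks:

import numpy as np

# Rank (1-based) at which the correct text was returned for each query (hypothetical)
ranks = np.array([1, 3, 2, 1, 10, 4])

mrr = np.mean(1.0 / ranks)
accuracy_at_k = {k: float(np.mean(ranks <= k)) for k in (1, 3, 5, 10)}
print(mrr, accuracy_at_k)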
Contributors¶
By clicking on contributors in the main menu of a project, you will access the list of all the users of the project. If you have sufficient rights on this project, you will be able to add & delete users from the project and modify their rights.
Roles & rules¶
- Viewer : you can access all pages (except project settings) with no possibility of creating or editing
- Contributor : viewer rights + you can edit and create resources inside the project
- Admin : contributor rights + you can manage users and modify project properties
Add & delete¶
If you are admin of a project, you can manage the users of your project.
In order to add a user, you have to enter the collaborator's email in the top left field, set their rights using the dropdown menu and click on “invite this collaborator”. Please note that you can only invite collaborators who already have a prevision.io account.
In order to change a user's rights in the project, you just have to select the new role in the dropdown of the list. To make sure the project and user properties can always be managed, at least one collaborator has to be admin of the project.
In order to remove a collaborator from the project, use the trash button on the left side of the list.

Project settings¶
If you are admin on a project, the project settings button is enabled and, by clicking on it, you will access the project settings page.

You can on this page :
- update name, description and color of your project
- delete the project. Please note that if you delete a project, all resources linked to the project will be deleted (usecases, datasets, deployed models…)
Notebooks¶
Introduction¶
Prevision.io offers various tools to enable data science use cases to be carried out. Among these tools are notebooks and production tools. Notebooks are not scoped to projects. You can access the notebooks by clicking on the notebook button in the left main menu. You will then be redirected to the following page.

Jupyter (python)¶
For Python users, a JUPYTERLAB environment (https://github.com/jupyterlab/jupyterlab) is available in Prevision.io
Note that a set of packages is preinstalled on your instance (list: https://previsionio.readthedocs.io/fr/latest/_static/ressources/packages_python.txt), particularly the previsionio package that encapsulates the functions using the tool’s native APIs. Package documentation link: https://prevision-python.readthedocs.io/en/latest/
RStudio (R)¶
For R users, an RStudio environment (https://www.rstudio.com) is available in Prevision.io
Note that a set of packages is preinstalled on your instance (list: https://previsionio.readthedocs.io/fr/latest/_static/ressources/packages_R.txt), particularly the previsionio package that encapsulates the functions that use the tool’s native APIs. Package documentation link: https://previsionio.readthedocs.io/fr/latest/_static/ressources/previsionio.pdf
Help¶
By clicking on help in the main menu, you will be redirected to Prevision's resources, which help you through the application and usecases. Four sections are available :
- type of problem : helps you define which usecase type is the most suitable for your issue
- videos : centralization of tutorials & data science resources
- medium posts : our data science dedicated posts published on Medium
- documentation : links to the application documentation, such as the readthedocs or the SDK documentation
User¶

- Language : switch the language between French and English
- Profile : navigate to your profile information such as email and password
- administration & API key : available only for admins
- documentation : ReadTheDoc redirection
- Terms and conditions
- log out