
Model Management Service¶

Idea¶

Model Management for analytical solutions helps customers store single-file models, algorithms, scripts, Docker images, and training or validation data used for machine learning or AI tasks.

Access¶

The Model Management Service is exposed as a REST API. Storing, retrieving, and updating models and their versions, along with the associated metadata, can be done with simple API calls.

To access this service, you need the respective roles listed in Model Management Service roles and scopes.
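
As an illustration only, the sketch below lists models via the REST API using Python's requests library. The gateway URL, the API path (`/api/modelmanagement/v3/models`), and the way the bearer token is obtained are assumptions based on typical Insights Hub API conventions; check the API specification and your tenant configuration for the actual values.

```python
# Minimal sketch: list models via the Model Management REST API.
# The base URL, API path and token handling below are assumptions for
# illustration; consult the official API specification for exact values.
import os
import requests

GATEWAY = os.environ["INSIGHTS_HUB_GATEWAY"]   # e.g. your tenant's API gateway URL (assumed)
TOKEN = os.environ["TECHNICAL_TOKEN"]          # bearer token obtained via your token service

def list_models():
    """Return the model list; requires the Model Management roles/scopes."""
    response = requests.get(
        f"{GATEWAY}/api/modelmanagement/v3/models",  # assumed path
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(list_models())
```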

Basics¶

The Model Management Service stores and serves models for active users or applications that need to store (large) binaries. It supports both versioning and metadata.

Model Management supports structured information associated with models, such as:

  • Model Metadata: Provides general model information
  • Version Metadata: Provides traceability of model versions
  • Version Payload: Provides traceability of the actual binary content of a model, which is always associated with version information

Model Metadata¶

The model metadata stores general model information such as name, author, creation date and type.
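
For illustration, a model metadata record could look like the sketch below. The field names are assumptions, not the authoritative schema; verify them against the API specification.

```python
# Illustrative model metadata (field names are assumptions, not the
# authoritative schema -- see the Model Management API specification).
model_metadata = {
    "name": "Pump Anomaly Detector",
    "author": "data.scientist@example.com",
    "type": "Zeppelin notebook",
    "description": "Detects anomalies in pump vibration data",
}
```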

Version Metadata¶

The version metadata stores detailed information about a stored version: the version number, the type (such as Zeppelin, Jupyter, Protobuf or Docker), input/output parameters, free-form parameters, build and/or run dependencies (libraries and their versions), and dependencies on other models. A dependency on another model exists, for example, if that model produces a payload which is required as an input. This dependency is defined using the producedBy field.
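
A version metadata record could then carry the details listed above. The structure below is only a sketch; the producedBy field is named on this page, while the other field names are assumptions to be checked against the API specification.

```python
# Illustrative version metadata; only "producedBy" is named on this page,
# the remaining field names are assumptions for illustration.
version_metadata = {
    "versionNumber": "1.0.1",
    "type": "Jupyter",                       # e.g. Zeppelin, Jupyter, Protobuf, Docker
    "ioSpec": {
        "input": ["vibration_rms", "temperature"],
        "output": ["anomaly_score"],
    },
    "dependencies": [                        # build/run dependencies
        {"name": "scikit-learn", "version": "1.4", "type": "Python"},
    ],
    "producedBy": ["<training-model-id>"],   # payload produced by another model
}
```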

Version Payload¶

The version payload stores the actual model content in a file, which can be of any type, including .json, .pmml, .py, .ipynb, or .pb.
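
Uploading a payload is typically a multipart request that combines the file with the version metadata. The endpoint path and multipart field names below are assumptions for illustration only.

```python
# Minimal sketch: attach a payload file to a new version of a model.
# Endpoint path and multipart field names are assumptions for illustration.
import json
import requests

def upload_version(gateway, token, model_id, metadata, payload_path):
    with open(payload_path, "rb") as payload:
        response = requests.post(
            f"{gateway}/api/modelmanagement/v3/models/{model_id}/versions",  # assumed path
            headers={"Authorization": f"Bearer {token}"},
            files={
                "metadata": (None, json.dumps(metadata), "application/json"),
                "file": (payload_path, payload),   # .json, .pmml, .py, .ipynb, .pb, ...
            },
            timeout=60,
        )
    response.raise_for_status()
    return response.json()
```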

Features¶

The Model Management Service exposes its API for the following tasks:

  • Storing analytical model binaries and versioning info
  • Managing versions of a model
  • Downloading a model for examination or execution (see the sketch after this list)
  • Defining dependencies needed to execute a model
  • Defining parameters required to execute a model
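
As an example of the download task in the list above, the sketch below retrieves a version's payload file. The endpoint path is an assumption for illustration.

```python
# Minimal sketch: download the payload of a specific model version.
# The endpoint path is an assumption for illustration.
import requests

def download_payload(gateway, token, model_id, version_id, target_path):
    response = requests.get(
        f"{gateway}/api/modelmanagement/v3/models/{model_id}/versions/{version_id}/file",  # assumed path
        headers={"Authorization": f"Bearer {token}"},
        timeout=60,
    )
    response.raise_for_status()
    with open(target_path, "wb") as target:
        target.write(response.content)
    return target_path
```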

Limitations¶

  • Currently, the Model Management Service can only store one version payload (file) for a specific version of a model.
  • The model needs to be kept in sync with asset modeling: if the asset modeling used in the algorithm/model changes, the user needs to update and retrain the model. If this is not done, jobs using such models will start failing because they look for a variable/aspect/asset by a name that no longer exists in the system.
  • All tenants have 100 GB of storage allocated by default, irrespective of the offering.

API Rate Limits for P&P Tenant¶

Model Management imposes technical limits for P&P tenants to safeguard the system and to prevent exploitation under heavy load that exceeds system limits. The API rate limits for Model Management apply as technical rate limits.
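
When a technical rate limit is exceeded, clients typically receive an HTTP 429 response; the exact status code and behavior are assumptions here, not stated by this page. A simple backoff-and-retry wrapper might look like this:

```python
# Minimal sketch: retry a request with exponential backoff when the
# service reports that a rate limit was exceeded (HTTP 429 is assumed).
import time
import requests

def get_with_retry(url, headers, max_retries=5):
    delay = 1.0
    for attempt in range(max_retries):
        response = requests.get(url, headers=headers, timeout=30)
        if response.status_code != 429:
            response.raise_for_status()
            return response
        time.sleep(delay)       # back off before retrying
        delay *= 2
    raise RuntimeError("rate limit still exceeded after retries")
```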

Example Scenario¶

A client has their own analytical models for training or forecasting. They use the Model Management Service to store these models and retrieve them for training. After training is finished, the model can output weights or other types of trained model binaries. Additional models or simple inference services can then load the weight files to perform predictions.
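
Put together, the scenario could be sketched as follows. This composes the hypothetical download_payload and upload_version helpers sketched earlier on this page, and the train callable stands in for the client's own training code; none of these names are part of the service itself.

```python
# Illustrative end-to-end flow: fetch a model version, train it, and store
# the resulting weights as a new version. Relies on the hypothetical
# download_payload/upload_version helpers sketched above; "train" is a
# client-supplied callable that returns the path of the produced weights.
def training_run(gateway, token, model_id, version_id, train):
    notebook = download_payload(gateway, token, model_id, version_id, "model.ipynb")
    weights_file = train(notebook)            # client-specific training, e.g. produces weights.pb
    new_version = {
        "versionNumber": "1.1.0",
        "producedBy": [model_id],             # the weights were produced by the training model
    }
    return upload_version(gateway, token, model_id, new_version, weights_file)
```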