The Prediction and Optimisation Run-time component encapsulates data processing algorithms and integrates them seamlessly into the ZDMP platform. It uses the data management tools of the ZDMP environment and provides an easy-to-use API through which other components or zApps may use the embedded algorithms. Deploying an algorithm as a Prediction and Optimisation Run-time (PO Run-time) hosts it in a scalable manner and makes it available to all components/zApps that support the PO Run-time API. In contrast to most ZDMP components, a single instance of the ZDMP platform may host multiple instances of the PO Run-time component; more precisely, each user can start multiple instances of the PO Run-time.
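
As an illustration of how another component or zApp might consume an embedded algorithm, the following is a minimal client-side sketch of a call against the PO Run-time API. The hostname, port, endpoint path, and payload fields are hypothetical placeholders, not the actual PO Run-time API definition:

```python
import requests

# Hypothetical base URL of a deployed PO Run-time instance;
# the real host, port, and route depend on the ZDMP deployment
PO_RUNTIME_URL = "http://po-runtime:5000/api/predict"

def request_prediction(features: dict) -> dict:
    """Send input data to the embedded algorithm and return its result."""
    response = requests.post(PO_RUNTIME_URL, json=features, timeout=10)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    # Example payload; the expected schema is defined by the embedded algorithm
    result = request_prediction({"sensor_readings": [0.42, 0.37, 0.44]})
    print(result)
```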

A potential workflow to create and run a Prediction and Optimisation Run-time is as follows:

  1. Identify the data processing problem

  2. Use the Prediction and Optimisation Run-time Designer to search for available algorithms that solve the problem

  3. Adapt an existing data processing algorithm to the specific needs or write a new algorithm from scratch. This step may also include training the model with test data

  4. Use the Prediction and Optimisation Run-time Designer to embed the finished algorithm/model inside a PO Run-time

  5. Run and use the PO Run-time through the AI Analytics Run-time component in the ZDMP platform

The component consists of three Docker containers:

  • A Gunicorn server (WSGI) that deploys the Python Flask application implementing the component's main functionality (API layer, …)

  • A Redis database that serves as a cache for specific message bus topics

  • A Python script that manages message bus subscriptions and updates the local cache (the Redis database); a sketch of this subscription logic is shown after this list. In the current version, a MongoDB database is also deployed to persist configurations; in a later version, the MongoDB container will be replaced by the ZDMP Storage component
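
The sketch below illustrates how such a subscription script could look. It assumes an MQTT-style message bus reachable at a placeholder hostname and uses the paho-mqtt and redis client libraries; the actual ZDMP Message Bus protocol, topic names, and connection details may differ:

```python
import paho.mqtt.client as mqtt
import redis

# Hypothetical connection details; real values come from the ZDMP deployment
BUS_HOST = "message-bus"
REDIS_HOST = "redis"
TOPICS = ["zdmp/sensors/#"]  # placeholder topic filter

cache = redis.Redis(host=REDIS_HOST, port=6379)

# paho-mqtt 1.x callback style
def on_connect(client, userdata, flags, rc):
    # (Re)subscribe to all configured topics once the connection is up
    for topic in TOPICS:
        client.subscribe(topic)

def on_message(client, userdata, message):
    # Cache the latest payload per topic so the Flask application can
    # read it from Redis instead of querying the message bus directly
    cache.set(message.topic, message.payload)

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BUS_HOST, 1883)
client.loop_forever()
```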

The data processing algorithms are developed independently and are later injected into and used by the Prediction and Optimisation Run-time. For this to work, each data processing algorithm needs to implement the ComputeUnit sub-component, which has a predefined interface. On the implementation level, the injection is achieved using a Docker parent image (PO-base image), which provides core functionalities as a service (eg API and IO). The Dockerfile of the data processing algorithm (ComputeUnit) is based on this PO-base image, ie starts by importing it. Using this system, the data processing algorithm can be provided as a Python package to the main Flask application.
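
To make the notion of a predefined interface concrete, the sketch below shows what a ComputeUnit could look like as a Python class. The base class, method names, and example algorithm are illustrative assumptions; the actual interface is the one predefined by the PO-base image:

```python
from abc import ABC, abstractmethod

class ComputeUnit(ABC):
    """Illustrative base class; the real interface is defined by the PO-base image."""

    @abstractmethod
    def load(self) -> None:
        """Load the trained model or other resources into memory."""

    @abstractmethod
    def process(self, data: dict) -> dict:
        """Run the data processing algorithm on one input and return the result."""

class AnomalyDetector(ComputeUnit):
    """Hypothetical example algorithm packaged for injection into the PO Run-time."""

    def load(self) -> None:
        self.threshold = 0.8  # placeholder for loading a trained model

    def process(self, data: dict) -> dict:
        score = max(data.get("sensor_readings", [0.0]))
        return {"anomaly": score > self.threshold, "score": score}
```

Under these assumptions, the Flask application provided by the PO-base image could discover such a class in the installed Python package and route incoming API requests to its process method.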