CONTINUOUS INTEGRATION OF PERFORMANCE MODEL

CIPM incrementally extracts and calibrates architecture-level performance models (PMs) with parametric dependencies after each source code commit. To this end, CIPM continuously updates the PM to keep it consistent with the running system, i.e., with the deployed source code and the latest measurements. CIPM consists of four main activities:
(1) Behaviour model update and adaptive instrumentation: CIPM analyses the source code changes, updates the affected PM elements according to the co-evolution approach, and instruments the changed parts of the code in order to calibrate the new or updated parts of the architecture.
(2) Monitoring: CIPM collects the required measurements either during testing or while the system is executed in production.
(3) Incremental calibration and estimation of parametric dependencies: CIPM estimates the performance model parameters (PMPs) and calibrates the PM incrementally.
(4) Validation: CIPM validates the resulting PM to measure its accuracy. This enables developers to perform model-based performance predictions and supports model-based decision making for the next development iteration.
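As an illustration only, the commit-triggered flow of the four activities can be sketched as a shell script. All function names below are placeholders invented for this sketch and do not correspond to actual CIPM commands or tooling:

```shell
#!/bin/sh
# Hypothetical post-commit sketch of the four CIPM activities.
# The function names are illustrative placeholders, not real CIPM tools.

update_and_instrument() { echo "(1) update behaviour model, instrument changed code"; }
monitor_system()        { echo "(2) collect measurements (test run or production)"; }
calibrate_pm()          { echo "(3) estimate PMPs, calibrate the PM incrementally"; }
validate_pm()           { echo "(4) validate the accuracy of the calibrated PM"; }

# One pipeline run per source code commit:
update_and_instrument
monitor_system
calibrate_pm
validate_pm
```

In the real approach, step (1) decides which code regions to instrument, so steps (2) and (3) only pay the monitoring and calibration cost for the parts of the architecture that actually changed.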
- The initial idea of the approach was published at the 4th International Workshop on Quality-Aware DevOps, in the Companion of the 2018 ACM/SPEC International Conference on Performance Engineering.
- The new version of the approach was published at the IEEE International Conference on Software Architecture (ICSA) 2020.
For full bibliographic detail and BibTeX entries, see https://are.ipd.kit.edu/people/manar-mazkatli/publications/.
We perform experiments based on the following case studies:
CoCoME-PCM is a trading system designed for use in supermarkets. It supports several processes, such as scanning products at a cash desk or processing sales paid by credit card. We used a cloud-based implementation of CoCoME in which the enterprise server and the database run in the cloud.
In this evaluation scenario, we assume that the “bookSale” service is newly added. The “bookSale” service is responsible for processing a sale after a user has submitted their payment. As input, the service receives a list of the purchased products, including all information related to the purchase. The service consists of several internal and external actions and two loops. The following figure visualizes the abstract behavior of the “bookSale” PCM SEFF. Datei:SEFF-bookSale.pdf
In line with the assumption that the “bookSale” service is newly added, we instrument this service and all services that it subsequently calls. Then we calibrate them using our calibration pipeline. You can reproduce this experiment by following the instructions below.
- The experiment configuration can be found at this link
- The experiment can be reproduced in two steps:
- The incremental calibration of “bookSale” by executing the provided Docker image. To run the Docker image, please follow the instructions.
- The evaluation of the calibrated PM can be reproduced automatically by executing [the automatic evaluation of the calibrated CoCoMe model]
- The experiment data, such as the measurements and the original and calibrated models, can be found in the casestudy-data folder
- The experiment data, such as the load configuration, the monitoring data used, and the calibrated models, are stored in the [https://github.com/CIPM-tools/Incremental-Calibration-Pipeline/tree/master/evaluation/evaluation-automation-platform/casestudys/cocome case study folder]
TeaStore is a micro-service-based reference application (DOI). It implements a web shop where customers can buy different kinds of tea. In addition, TeaStore supports user authentication and automatic recommendations. Among other purposes, it is designed for evaluating approaches in the field of performance modeling.
We assume that the train service evolves incrementally and calibrate it incrementally. The following figure shows the SEFF of the train service. Datei:Train-SEFF.pdf
We extended some parts of the source code to be able to perform the experiments.
- The changed source code is on GitHub
- The experiment configuration can be found at this link
- The experiment can be reproduced by executing the provided Docker image. To run the Docker image, please follow the instructions.
- The experiment data, such as the measurements and the original and calibrated models, can be found in casestudy-data
Foundations and Related Projects
The following works are useful as background information:
- Automated Extraction of Palladio Component Models from Running Enterprise Java Applications (DOI).
The code of the model-based DevOps pipeline that calibrates the PCM incrementally is available on GitHub:
The code for performing the adaptive instrumentation automatically, based on the Vitruvius platform, is available on GitHub:
The strategies responsible for integrating existing source code into Vitruvius are on GitHub. We are currently extending them in order to perform further experiments that integrate the previously described aspects of CIPM, i.e., the incremental calibration based on the automatic adaptive instrumentation of the source code.
Setup of the MbDevOps pipeline
There are two ways in which the Calibration Pipeline can be used.
If you only want to use the Calibration Pipeline, we recommend using the Docker image.
If you want to extend the Calibration Pipeline, it makes sense to fork the repository and modify the Docker image so that it loads the current version of the pipeline from your repository. You can then import the Gradle projects, make changes and add extensions, and push them to your repository; the Docker container automatically pulls the changes. The big advantage is that you do not need to set up an Eclipse installation with the PCM plugins (the Docker image does that for you).
This is the easier way. You only need Docker installed on your machine.
Clone (or fork and clone) this repository and navigate to the "docker" folder in the root directory. Build the Docker image with docker build -t pcm-pipeline . (this may take some time, since it needs to download Java, Eclipse, and the Eclipse plugins, and to build the Gradle project). You can then run the Docker image with docker run --name pipe -p 8080:8080 pcm-pipeline and access the web interface at http://localhost:8080/. The next step is to transfer the data (PCM models and monitoring data) to the Docker container, which can be done with the following command: docker cp LOCALPATH/docker-data/ pipe:/etc/docker-data/. (For testing purposes, you can use the PCM and the monitoring data from the cocome-casestudy project, which is contained in this repository; location: Casestudy Data.)
Then go to the web interface and create the configuration (http://localhost:8080/create). The result should look similar to: Configuration
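For convenience, the commands from the steps above can be collected into one shell session. LOCALPATH is a placeholder from the original instructions for the path to your local data and must be replaced with your actual path:

```shell
# Build the image from the "docker" folder of the cloned repository
# (downloads Java/Eclipse/plugins and builds the Gradle project, so it may take a while).
cd docker
docker build -t pcm-pipeline .

# Start the pipeline container and expose the web interface on port 8080.
docker run --name pipe -p 8080:8080 pcm-pipeline

# Copy the PCM models and monitoring data into the running container.
# LOCALPATH is a placeholder; for a quick test, use the cocome-casestudy data from this repository.
docker cp LOCALPATH/docker-data/ pipe:/etc/docker-data/
```

Afterwards, open http://localhost:8080/create to create the configuration as described above.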
The second method is a bit more involved, as you need to clone the repository, import the projects, and also download Eclipse and the Eclipse projects.
The import itself is easy: you only need to clone the project and import the root Gradle project into your favorite IDE. Please consult the Dockerfile for the remaining setup steps, as it documents all parts that need to be installed and configured (we may add detailed steps later).
Setup of Adaptive Instrumentation
- As Eclipse plugins.
- The following dependencies are needed:
- EMF Text
Please contact Manar by email if you have any questions.