The main goal of the Continuous Integration of Performance Models (CIPM) is to enable accurate architecture-based performance predictions at each point of the software development life cycle. To this end, the CIPM approach continuously updates the architecture-level performance models of a software system according to changes observed at development time and at operation time.
Measurement-based performance evaluation during agile software development provides only the current state of the system's performance. Evaluating the performance of design alternatives (unseen states) with measurement-based approaches is expensive: it requires setting up the test environment and taking measurements for every design alternative (e.g., deployment, system composition, execution environment). In contrast, architecture-based performance prediction can support design decision-making by simulating or analyzing architectural performance models. However, creating and keeping architectural performance models up-to-date during agile software development is challenging; frequent changes in the source code and in operations (e.g., system composition or deployment) quickly outdate the models. The CIPM approach addresses this issue by observing the impacting changes and updating the architecture models accordingly.
The CIPM processes maintain consistency between the software system's artifacts (source code, performance models, and measurements) throughout the software development life cycle. The following figure shows the CIPM processes and how they can be integrated into DevOps-based software development as an example of agile software development.
At development time, CIPM extends the Continuous Integration (CI) of the source code with a CI-based update of software models (1). This process extracts the source code changes from each commit and updates the corresponding source code model (1.1) in a Single Underlying Model predefined in the Vitruvius platform. Based on consistency preservation rules defined at the metamodel level, CIPM updates the repository model (1.2) (components, interfaces, and abstract behavior) and identifies instrumentation points (1.3) for the changed parts of the source code. In addition, CIPM extracts the system model (1.4) (the composition of components). To estimate performance parameters such as resource demands, CIPM applies automatic adaptive instrumentation (2) to the changed parts of the source code.
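To illustrate the idea behind adaptive instrumentation, the following Java sketch (all names are hypothetical and not taken from the CIPM code base) selects instrumentation points only for the service methods that a commit actually changed, so unchanged methods keep their previously calibrated parameters and remain uninstrumented:

```java
import java.util.HashSet;
import java.util.Set;

// Minimal sketch of adaptive instrumentation-point selection.
public class AdaptiveInstrumentation {

    // Returns the subset of known service methods that the commit changed
    // and that therefore need (re-)instrumentation.
    public static Set<String> selectInstrumentationPoints(
            Set<String> serviceMethods, Set<String> changedMethods) {
        Set<String> points = new HashSet<>(serviceMethods);
        points.retainAll(changedMethods); // keep only changed service methods
        return points;
    }

    public static void main(String[] args) {
        Set<String> services = Set.of("bookSale", "processPayment", "scanProduct");
        Set<String> changed = Set.of("bookSale", "loggingHelper");
        // Only "bookSale" is both a service method and changed by the commit.
        System.out.println(selectInstrumentationPoints(services, changed)); // prints [bookSale]
    }
}
```

The point of this sketch is the intersection step: instrumentation overhead is limited to the parts of the code whose performance parameters may have become stale.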
Performance testing (3) using the instrumented source code collects the measurements required for calibrating the performance parameters. The incremental calibration (4) updates the affected performance parameters, taking into account impacting parametric dependencies such as the input data. The self-validation (5) process validates and improves the accuracy of the calibrated models. If the models are deemed accurate, developers can apply architecture-based performance prediction (6) to unseen states. Otherwise, the accuracy can be improved by collecting more measurements from test or production environments.
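To make the notion of a parametric dependency concrete, the following Java sketch (hypothetical, not the CIPM implementation) fits a service's resource demand as a linear function of an input characterization, demand(x) = a + b·x, using ordinary least squares over the collected measurements. Re-fitting only the services affected by a commit is what makes the calibration incremental:

```java
// Minimal sketch: estimate a parametric resource demand a + b*x by
// ordinary least squares from (input characterization, measured time) pairs.
public class IncrementalCalibration {

    // Returns {a, b}: intercept and slope of the least-squares line.
    public static double[] fitLinearDemand(double[] x, double[] y) {
        int n = x.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i];
            sy += y[i];
            sxx += x[i] * x[i];
            sxy += x[i] * y[i];
        }
        double b = (n * sxy - sx * sy) / (n * sxx - sx * sx); // slope
        double a = (sy - b * sx) / n;                          // intercept
        return new double[] { a, b };
    }

    public static void main(String[] args) {
        // Measurements that follow demand(x) = 1 + 2x exactly.
        double[] c = fitLinearDemand(new double[] { 1, 2, 3 }, new double[] { 3, 5, 7 });
        System.out.println("a=" + c[0] + ", b=" + c[1]); // prints a=1.0, b=2.0
    }
}
```

In practice the dependency need not be linear; this sketch only illustrates that a demand becomes a function of the input data rather than a single constant.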
At operation time, the continuous adaptive monitoring (8) of the system collects the run-time measurements required for detecting operation-time changes and keeping the performance models up-to-date. The self-validation (9) process compares the monitoring data with the corresponding simulation results to detect inaccurate parts of the models. The results of the self-validation serve as input to the Ops-time calibration (10), which updates the models (e.g., the system models) according to the detected changes and improves their accuracy. The updated models enable model-based analyses (11), such as model-based auto-scaling. Moreover, they support development planning (12) by evaluating design alternatives.
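The core of the self-validation step can be sketched as a simple comparison of simulated and monitored response times per model part; a part whose relative error exceeds a threshold is flagged for re-calibration. The following Java sketch is a hypothetical illustration of that check, with all names and the threshold chosen for the example:

```java
// Minimal sketch of the self-validation check: flag model parts whose
// simulated response time deviates too far from the monitored one.
public class SelfValidation {

    // Relative error of the simulated value with respect to the monitored one.
    public static double relativeError(double simulated, double monitored) {
        return Math.abs(simulated - monitored) / monitored;
    }

    // A model part needs re-calibration if its error exceeds the threshold.
    public static boolean needsRecalibration(double simulated, double monitored,
                                             double threshold) {
        return relativeError(simulated, monitored) > threshold;
    }

    public static void main(String[] args) {
        // 120 ms simulated vs. 100 ms monitored: 20% error, above a 10% threshold.
        System.out.println(needsRecalibration(120.0, 100.0, 0.10)); // prints true
        // 105 ms simulated vs. 100 ms monitored: 5% error, within the threshold.
        System.out.println(needsRecalibration(105.0, 100.0, 0.10)); // prints false
    }
}
```

Restricting re-calibration (and the associated monitoring) to the flagged parts is what reduces the monitoring overhead mentioned in the publications below.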
More information on the CIPM process can be found in the CIPM publications:
- QUDOS18: "The Continuous Integration of Performance Model" introduces the initial idea of the approach.
- ICSA20: "Incremental calibration of architectural performance models with parametric dependencies" describes the CIPM approach in more detail and evaluates the accuracy of the performance models that are incrementally calibrated.
- ICSA21: "Enabling consistency between software artifacts for software adaption and evolution" describes how the performance models are continuously updated at operation time, using the self-validation results as input for the calibration to improve the models' accuracy and to reduce the monitoring overhead.
- QUDOS21: "Optimizing parametric dependencies for incremental performance model extraction" applies a genetic algorithm to optimize the performance parameters with parametric dependencies.
- JSS: "Continuous Integration of Architectural Performance Models with Parametric Dependencies - The CIPM Approach" (submitted). The new contributions of this work are the commit-based update of the performance models at evolution time and a further evaluation of the approach.
For full bibliographic detail and BibTeX entries, see https://are.ipd.kit.edu/people/manar-mazkatli/publications/.
We perform experiments based on the following case studies:
CoCoME-PCM is a trading system designed for use in supermarkets. It supports several processes like scanning products at a cash desk or processing sales using a credit card. We used a cloud-based implementation of CoCoME.
TeaStore is a microservice-based web shop for different kinds of tea (DOI). The web shop consists of 8 microservices that register themselves with a registry microservice to allow client-side load balancing. The microservices communicate with each other via representational state transfer (REST). This case study is designed for evaluating performance modeling approaches.
Details for published and submitted papers
The evaluation using CoCoME
This evaluation scenario assumes that the "bookSale" service is newly added. Accordingly, "bookSale" is instrumented and calibrated. The service consists of several internal and external actions and two loops. The following figure visualizes the abstract behavior of "bookSale".
Experiment Reproduction: The results can be reproduced in two steps:
- The incremental calibration of CoCoME by executing the calibration pipeline. To run it, please follow these instructions.
- The evaluation of the calibrated performance model can be reproduced by executing the automatic evaluation of the calibrated CoCoME model. The evaluation covers several aspects, such as the accuracy of the calibrated models, the required monitoring overhead, and the performance of the calibration pipeline. The data used by the evaluation (e.g., the monitoring configuration, the monitoring data, and the calibrated models) are stored in the evaluation folder.
The evaluation using TeaStore
For the evaluation, we used the source code in our fork. The evaluation covers three scenarios described in the paper:
- (A) the incremental calibration of TeaStore, to evaluate the accuracy of the incrementally calibrated models (Train service calibration),
- (B) the incremental calibration of TeaStore over incremental evolution, to evaluate the stability of the models' accuracy over development, and
- (C) the incremental calibration of TeaStore after changes in the parametric dependencies, to evaluate the identification of different types of parametric dependencies and their effect on the models' accuracy.
Experiments Reproduction: The results can be reproduced in two steps:
- The incremental calibration of TeaStore by executing the calibration pipeline, following these instructions.
- The evaluation of the calibrated performance models can be reproduced by running the automatic evaluation of the calibrated TeaStore models. Note that there are three different tests for the above evaluation scenarios (A: TeastoreEvolutionScenarioEvaluation.java, B: TeastoreSingleRecommenderEvaluation.java, and C: TeastoreParameterizedEvaluation.java). The evaluation covers the accuracy of the calibrated models, the required monitoring overhead, and the performance of the calibration pipeline. The evaluation data for all scenarios (A, B, and C) are stored in the evaluation folder.
Information on the evaluation and the reproduction of the results is documented in the Wiki.
Experiment (E1) is performed by the commit-based update of the software models pipeline. The source code and instructions for reproducing the E1 results are available on GitHub.
Foundations and Related Projects
The following works are useful as background information:
- Automated Extraction of Palladio Component Models from Running Enterprise Java Applications (DOI).
Please contact Manar Mazkatli by email if you have any questions.