CIPM

Short Summary

The main goal of Continuous Integration of Performance Models (CIPM) is to enable accurate architecture-based performance prediction at every point of the software development life cycle. To this end, the CIPM approach continuously updates the architecture-level performance models of a software system according to changes observed at development time and at operation time.

Context

Measurement-based performance evaluation during agile software development provides only the current state of the system's performance. Evaluating the performance of design alternatives (unseen states) with measurement-based approaches is expensive: it requires setting up the test environment and taking measurements for every design alternative (e.g., deployment, system composition, execution environment). In contrast, architecture-based performance prediction can support design decision-making by simulating or analyzing architectural performance models. However, modeling and keeping such performance models up-to-date during agile software development is challenging, because frequent changes in the source code and in operations (e.g., system composition or deployment) quickly render the models outdated. The CIPM approach addresses this issue by observing the impacting changes and updating the architecture models accordingly.

CIPM Processes

The CIPM processes keep the software system's artifacts (source code, performance models, and measurements) consistent throughout the software development life cycle. The following figure shows the processes of CIPM and how they can be integrated into DevOps software development as an example of agile software development.

At development time, CIPM extends the Continuous Integration (CI) of the source code with a CI-based update of software models (1). This process extracts the source code changes from each commit and accordingly updates the corresponding source code model (1.1) in a Single Underlying Model predefined in the Vitruvius platform. Based on consistency preservation rules defined at the metamodel level, CIPM updates the repository model (1.2) (components, interfaces, and abstract behavior) and identifies instrumentation points (1.3) for the changed parts of the source code. In addition, CIPM extracts the system model (1.4) (the composition of components). To estimate performance parameters such as resource demands, CIPM applies automatic adaptive instrumentation (2) to the changed parts of the source code, as sketched below.
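
The following minimal Java sketch illustrates this idea: a commit-based change triggers an update of the component's abstract behavior (SEFF) and marks only the changed service for instrumentation. All names (CommitChangePropagator, onMethodChanged, the placeholder update) are hypothetical and do not reflect the actual Vitruvius/CIPM API.

    // Hypothetical sketch of reacting to a commit-based source-code change:
    // update the abstract behavior in the repository model and mark only the
    // changed service as an instrumentation point. All names are illustrative
    // and do not reflect the actual Vitruvius/CIPM API.
    import java.util.HashSet;
    import java.util.Set;

    public class CommitChangePropagator {

        private final Set<String> instrumentationPoints = new HashSet<>();

        // React to a changed method extracted from a commit.
        public void onMethodChanged(String component, String methodSignature) {
            // Consistency preservation: update the component's abstract
            // behavior (SEFF) in the repository model (placeholder).
            System.out.println("Updating SEFF of " + component + "." + methodSignature);

            // Adaptive instrumentation: monitor only the changed parts.
            instrumentationPoints.add(component + "#" + methodSignature);
        }

        public Set<String> getInstrumentationPoints() {
            return instrumentationPoints;
        }

        public static void main(String[] args) {
            CommitChangePropagator propagator = new CommitChangePropagator();
            propagator.onMethodChanged("CashDesk", "bookSale(Order)");
            System.out.println("Instrument: " + propagator.getInstrumentationPoints());
        }
    }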

Performance testing (3) using the instrumented source code collects the measurements required for calibrating the performance parameters. The incremental calibration (4) updates the affected performance parameters, taking into account impacting parametric dependencies such as the input data; the sketch below illustrates this idea. The self-validation (5) process validates and improves the accuracy of the calibrated models. If the models are deemed accurate, developers can apply architecture-based performance prediction (6) to unseen states. Otherwise, the accuracy can be improved by collecting more measurements from test or production environments.
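
As a minimal illustration of calibrating a parametric dependency, the following Java sketch fits a resource demand as a linear function of an input parameter (e.g., a collection size) from measurements using ordinary least squares. The class, method, and data are illustrative assumptions, not the actual CIPM calibration code.

    // Minimal sketch of the calibration idea: fit a resource demand as a
    // linear function of an input parameter from measurements using ordinary
    // least squares. Illustrative only; not the actual CIPM calibration code.
    public class ResourceDemandCalibration {

        // Returns {intercept, slope} of demandMs = intercept + slope * inputSize.
        static double[] fitLinearDemand(double[] inputSizes, double[] demandsMs) {
            int n = inputSizes.length;
            double sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
            for (int i = 0; i < n; i++) {
                sumX += inputSizes[i];
                sumY += demandsMs[i];
                sumXY += inputSizes[i] * demandsMs[i];
                sumXX += inputSizes[i] * inputSizes[i];
            }
            double slope = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
            double intercept = (sumY - slope * sumX) / n;
            return new double[] { intercept, slope };
        }

        public static void main(String[] args) {
            // Hypothetical measurements from the instrumented service.
            double[] sizes = { 10, 20, 40, 80 };
            double[] demands = { 2.1, 3.9, 8.2, 16.1 };
            double[] fit = fitLinearDemand(sizes, demands);
            System.out.printf("demand(ms) ~= %.2f + %.2f * size%n", fit[0], fit[1]);
        }
    }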

At operation time, continuous adaptive monitoring (8) of the system collects the run-time measurements required for detecting operation-time changes and keeping the performance models up-to-date. The self-validation (9) process compares the monitoring data with the corresponding simulation results to detect inaccurate parts of the models; a simplified sketch of this comparison follows below. The results of the self-validation serve as input to the Ops-time calibration (10), which updates the models (e.g., the system models) according to the detected changes and improves their accuracy. The updated models enable model-based analyses (11) such as model-based auto-scaling. Moreover, they support development planning (12) by evaluating design alternatives.
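
The following Java sketch shows one simple way such a comparison could work: compare monitored response times against simulated ones and flag a service whose relative error exceeds a threshold as a candidate for recalibration. The threshold, names, and data are illustrative assumptions.

    // Simplified sketch of the self-validation idea: compare monitored
    // response times against simulation results and flag a service whose
    // relative error exceeds a threshold as a candidate for recalibration.
    import java.util.Arrays;

    public class SelfValidationSketch {

        static double mean(double[] values) {
            return Arrays.stream(values).average().orElse(0.0);
        }

        // True if the simulated mean deviates from the measured mean by more
        // than the given threshold (e.g., 0.1 = 10 percent).
        static boolean needsRecalibration(double[] measuredMs, double[] simulatedMs,
                                          double threshold) {
            double measured = mean(measuredMs);
            double simulated = mean(simulatedMs);
            return Math.abs(simulated - measured) / measured > threshold;
        }

        public static void main(String[] args) {
            double[] measured = { 12.0, 11.5, 13.2, 12.4 };
            double[] simulated = { 15.1, 14.8, 15.5, 15.0 };
            System.out.println("Recalibrate? " + needsRecalibration(measured, simulated, 0.1));
        }
    }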

You can read more information on the CIPM process in CIPM publications.

The model-based DevOps pipeline enables the continuous integration of performance models.

Publications

For full bibliographic detail and BibTeX entries, see https://are.ipd.kit.edu/people/manar-mazkatli/publications/.

Case studies

We perform experiments based on the following case studies:

CoCoME

CoCoME-PCM is a trading system designed for use in supermarkets. It supports several processes like scanning products at a cash desk or processing sales using a credit card. We used a cloud-based implementation of CoCoME.

TeaStore

TeaStore is a microservice-based webshop for buying different kinds of tea. The webshop consists of eight microservices that register themselves with a registry microservice to enable client-side load balancing. The microservices communicate with each other using the representational state transfer (REST) standard. This case study is designed for evaluating approaches in performance modeling.

Details for published and submitted papers

ICSA2020

The incremental calibration pipeline is available on GitHub and documented in its Wiki. However, we extended this pipeline after the publication with more features (see CIPM pipeline).

The evaluation using CoCoME

This evaluation scenario assumes that the “bookSale” service is newly added. According to this assumption, “bookSale” is instrumented and calibrated. The service consists of several internal and external actions and two loops. The following figure visualizes the abstract behavior of “bookSale”.

Figure: Abstract behavior (SEFF) of the “bookSale” service (SEFF-bookSale-JPG.jpg)

Experiment Reproduction: The results can be reproduced in two steps:

  1. The incremental calibration of CoCoME by executing the calibration pipeline. To run it, please follow these instructions.
    • The configuration of the calibration pipeline can be found at this link.
    • The experiment data, such as the monitoring configuration, the measurements, and the original and calibrated models, can be found in the case study data folder.
  2. The evaluation of the calibrated performance model can be reproduced by executing the automatic evaluation of the calibrated CoCoME model. The evaluation covers different aspects, such as the accuracy of the calibrated models, the required monitoring overhead, and the calibration pipeline's performance. The data used by the evaluation (e.g., the monitoring configuration, the used monitoring data, and the calibrated models) are stored in the evaluation folder.

The evaluation using TeaStore

For the evaluation, we used the source code in our fork. The evaluation covers three scenarios described in the paper:

    • (A) the incremental calibration of TeaStore, to evaluate the accuracy of the incrementally calibrated models (Train service calibration),
    • (B) the incremental calibration of TeaStore over incremental evolution, to evaluate the stability of the models' accuracy over the course of development, and
    • (C) the incremental calibration of TeaStore after changes in the parametric dependencies, to evaluate the identification of different types of parametric dependencies and the resulting improvement of the models' accuracy.

Experiment Reproduction: The results can be reproduced in two steps:

  1. The incremental calibration of TeaStore by executing the calibration pipeline, following these instructions.
    • The experiment configuration can be found at this link.
    • The experiment data, such as the monitoring configuration, the measurements, and the original and calibrated models, can be found in the case study data folder.
  2. The evaluation of the calibrated performance model can be reproduced by running the automatic evaluation of the calibrated TeaStore models. Note that there are three different tests for the above-mentioned evaluation scenarios (A: TeastoreEvolutionScenarioEvaluation.java, B: TeastoreSingleRecommenderEvaluation.java, and C: TeastoreParameterizedEvaluation.java). The evaluation covers the accuracy of the calibrated models, the required monitoring overhead, and the calibration pipeline's performance. The evaluation data for all evaluation scenarios (A, B, and C) are stored in the evaluation folder.

ICSA2021

The source code used for the evaluation in this publication is frozen on the ICSA21 branch of the CIPM pipeline. The source code is documented in the Wiki.

Information on the evaluation and on reproducing the results is documented on this page of the Wiki.

JSS

The experiment (E1) is performed with the commit-based update of software models pipeline. The source code and information on reproducing the E1 results are available on GitHub.

The remaining experiments (E2-E5) were performed based on the CIPM Pipeline. The source code documentation and information on the reproduction of results are in the Wiki.

The source code of the commit-based update of software models will later be integrated into the CIPM Pipeline.

Foundations and Related Projects

Foundations

The following works are useful as background information:

Related Projects



Please contact Manar Mazkatli by email if you have any questions.