misc_brosig.bib

@misc{BrKoPa2009-OTN-WLDF2PCM,
  abstract = {Throughout the system life cycle, the ability to predict a software system's performance under different configurations and workloads is highly valuable to ensure that the system meets its performance requirements. During the design phase, performance prediction helps to evaluate different design alternatives. At deployment time, it facilitates system sizing and capacity planning. During operation, predicting the effect of changes in the workload or in the system configuration is beneficial for run-time performance management. The alternative to performance prediction is to deploy the system in an environment reflecting the configuration of interest and conduct experiments measuring the system performance under the respective workloads. Such experiments, however, are normally very expensive and time-consuming and therefore often not economically viable. To enable performance prediction, we need an abstraction of the real system that incorporates performance-relevant data, i.e., a performance model. Based on such a model, performance analysis can be carried out. Unfortunately, building predictive performance models manually requires a lot of time and effort. The model must be designed to reflect the abstract system structure and capture its performance-relevant aspects. In addition, model parameters such as resource demands or system configuration parameters have to be determined. Given the costs of building performance models, techniques for automatic extraction of models based on observation of the system at run-time are highly desirable. During system development, such models can be exploited to evaluate the performance of system prototypes. During operation, an automatically extracted performance model can be applied for efficient and performance-aware resource management. For example, if one observes an increased user workload and assumes a steady workload growth rate, performance predictions help to determine when the system would reach its saturation point. This way, system operators can react to the changing workload before the system fails to meet its performance objectives, thus avoiding a violation of service level agreements (SLAs). Current performance analysis tools used in industry mostly focus on profiling and monitoring transaction response times and resource consumption. The tools often provide large amounts of low-level data, while important information needed for building performance models is missing, e.g., the resource demands of individual components. In this article, we present a method for automated extraction of performance models for Java EE applications during operation. We implemented the method in a tool prototype and evaluated its effectiveness in the context of a case study with an early prototype of the SPECjEnterprise2009 benchmark application, which in the following we refer to as SPECjEnterprise2009_pre. (SPECjEnterprise2009 is the successor benchmark of the SPECjAppServer2004 benchmark developed by the Standard Performance Evaluation Corp. [SPEC]; SPECjEnterprise is a trademark of SPEC. The SPECjEnterprise2009 results or findings in this publication have not been reviewed or accepted by SPEC, therefore no comparison nor performance inference can be made against any published SPEC result.) The target Java EE platform we consider is Oracle WebLogic Server (WLS). The extraction is based on monitoring data that is collected during operation using the WebLogic Diagnostics Framework (WLDF).
As a performance model, we selected the Palladio Component Model (PCM). PCM is a sophisticated performance modeling framework with mature tool support. In contrast to low-level mathematical models such as queueing networks, PCM is a high-level, UML-like, design-oriented model that captures the performance-relevant aspects of the system architecture. This makes PCM models easy for software developers to understand and use. We begin by providing some background on the technologies we use, focusing on the WLDF monitoring framework and the PCM models. We then describe the model extraction method in more detail. Finally, we present the case study we conducted and conclude with a summary.},
  author = {Fabian Brosig and Samuel Kounev and Charles Paclat},
  howpublished = {Oracle Technology Network (OTN) Article},
  month = {September},
  title = {{Using WebLogic Diagnostics Framework to Enable Performance Prediction for Java EE Applications}},
  url = {http://www.oracle.com/technetwork/articles/brosig-wldf-086367.html},
  year = {2009}
}