PerOpteryx/Hybrid Optimisation Case Study


This page gives more details on the BRS case study for the hybrid optimisation approach combining analytic optimisation and PerOpteryx. See the PerOpteryx page for details on the tool.

BRS model details

The WebServer component handles user requests for generating reports or viewing the plain data logged by the system. It delegates the requests to a Scheduler component, which in turn forwards the requests to the ReportingEngine component. The ReportingEngine accesses the Database component, sometimes using an intermediate Cache component.

Server configuration

Configuration   Processor speed (PR)   Availability (HA)   Cost (HCost)
C_1             2.4 GHz                0.987               1
C_2             2.6 GHz                0.987               2
C_3             2.8 GHz                0.987               3.5
C_4             3.0 GHz                0.999               5

Available Server Configurations for BRS

Component Allocation

Each component can be allocated to one of the four servers, each of which can run in one of the four configurations above.

We realised the server configuration and allocation as joint degrees of freedom. Overall, 16 combinations were available for the four base servers S_i and four configuration options C_j: S1_C1, S1_C2, ..., S4_C3, S4_C4. The figure below shows the resulting allocation options for the Webserver component (one of Webserver, Webserver2, or Webserver3, depending on the other degree of freedom). The menu shown on the right opens when clicking the yellow button in the lower right corner.

Allocation of the Webserver component.
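The joint degree of freedom can be enumerated directly. A minimal sketch of the 16 allocation options, using the server and configuration labels from above:

```python
from itertools import product

# Joint allocation and configuration degree of freedom: each component can
# be placed on one of four base servers S1..S4, each running one of four
# configurations C1..C4, yielding 16 allocation options per component.
servers = ["S1", "S2", "S3", "S4"]
configs = ["C1", "C2", "C3", "C4"]

allocation_options = [f"{s}_{c}" for s, c in product(servers, configs)]
print(len(allocation_options))                        # 16
print(allocation_options[0], allocation_options[-1])  # S1_C1 S4_C4
```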

Component Selection

The Webserver can be realised using third-party components. The software architect can choose among three functionally equivalent implementations: the initial Webserver, Webserver2 with cost 4, and Webserver3 with cost 6. According to the resource demand and availability table below, Webserver2 has better availability for requests of type "report", while Webserver3 has better availability for requests of type "view".

Check the section on the PCM model below for resource demands and availability values.

PCM models

You can download the Datei:BRS PCM model Hybrid Optimisation.zip and open it with our current nightly build of the PCM tools. See PCM_3.2 for details on how to install and use the PCM. You will find a Run Configuration entry named PCM Design Space Exploration for PerOpteryx. You can also inspect the model files with an XML editor, if you like. See PerOpteryx for more information on the evolutionary optimisation tool.

The SVN is located at https://sdqweb.ipd.uka.de/svn/code/Palladio.Examples/branches/PCM3.2_BRS_Milano_QoSA2010/PCM3.2_BRS_Optimisation_Milano

Visualisation of the model

Resource Demand and Availability

The resource demands and availability values are as follows. In this table, the service demand is relative to server configuration C_2, i.e. a value of 1 means 1 second of service demand on a CPU with 2.6 GHz. The inter-arrival times of our open workload are exponentially distributed with rate 5.

Component        Service         Action                Service demand   P-fail     Availability
WebServer        processRequest  acceptReport          4                2.40E-02   0.976
ReportingEngine  report          prepare               1                3.40E-05   0.999966
ReportingEngine  report          prepareSmallReport    0.05             5.00E-05   0.99995
DB               getSmallReport  getSmallReportFromDB  0.03             5.50E-05   0.999945
ReportingEngine  report          prepareBigReport      0.3              9.80E-05   0.999902
DB               getBigReport    getBigReportFromDB    0.3              5.00E-05   0.99995
ReportingEngine  report          GenReport             0.25             3.70E-03   0.9963
DB               getCachedData   getCachedDataFromDB   0.2              2.10E-05   0.999979
WebServer        processRequest  acceptView            2                1.30E-02   0.987
ReportingEngine  view            prepareViewing        1.4              3.90E-05   0.999961
Scheduler        report          schedule              0.5              0          1
Scheduler        view            schedule              0.5              0          1
Cache            doCacheAccess   prepareDBAccess       0.4              1.00E-06   0.999999
WebServer2       processRequest  acceptReport          2                4.00E-03   0.996
WebServer2       processRequest  acceptView            3                1.80E-02   0.982
WebServer3       processRequest  acceptReport          1                3.00E-02   0.97
WebServer3       processRequest  acceptView            1                1.00E-03   0.999
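Two conventions in the table can be made explicit: availability of an action is 1 − P-fail, and service demands are relative to configuration C_2 (2.6 GHz). A sketch of both conversions, under the assumption that service demand scales inversely with processor speed:

```python
# Availability of an action is 1 - P(fail); service demands in the table
# are relative to configuration C_2 (2.6 GHz). Assumption: service demand
# scales inversely with processor speed.
SPEEDS_GHZ = {"C_1": 2.4, "C_2": 2.6, "C_3": 2.8, "C_4": 3.0}

def availability(p_fail: float) -> float:
    return 1.0 - p_fail

def scaled_demand(demand_on_c2: float, config: str) -> float:
    return demand_on_c2 * 2.6 / SPEEDS_GHZ[config]

# WebServer.acceptReport: P(fail) = 2.4e-2 gives availability 0.976
print(round(availability(2.4e-2), 3))        # 0.976
# Its 4 s demand on C_2 shrinks to about 3.47 s on the 3.0 GHz config C_4
print(round(scaled_demand(4.0, "C_4"), 2))   # 3.47
```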

Execution Paths

The figure shows the execution paths of the BRS system. Under the studied usage profile, the probabilities that the paths are executed are as follows:

  • Path 1: 0.15
  • Path 2: 0.15
  • Path 3: 0.7


Simplified view of the execution paths of the BRS system
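Path probabilities weight path-level quality values into a system-level expectation. A minimal sketch; the per-path availabilities used here are hypothetical placeholders, not values from the case study, where they follow from the components visited on each path:

```python
# Execution path probabilities from the studied usage profile.
path_probs = [0.15, 0.15, 0.70]
assert abs(sum(path_probs) - 1.0) < 1e-9   # the three paths cover all runs

# Hypothetical per-path availabilities, for illustration only.
path_avail = [0.97, 0.99, 0.995]

# System availability as the expectation over execution paths.
system_avail = sum(p * a for p, a in zip(path_probs, path_avail))
print(round(system_avail, 4))   # 0.9905
```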

Optimisation details

Analytic Optimisation

Decision variables

No.   Conflicts with   Description
1     -                Downgrade server1 to 2.4 GHz
2     -                Upgrade server1 to 2.8 GHz
3     -                Upgrade server1 to 3.0 GHz
4     -                Downgrade server2 to 2.4 GHz
5     -                Upgrade server2 to 2.8 GHz
6     -                Upgrade server2 to 3.0 GHz
7     -                Downgrade server3 to 2.4 GHz
8     -                Upgrade server3 to 2.8 GHz
9     -                Upgrade server3 to 3.0 GHz
10    -                Downgrade server4 to 2.4 GHz
11    -                Upgrade server4 to 2.8 GHz
12    -                Upgrade server4 to 3.0 GHz
13    -                Use Webserver2
14    -                Use Webserver3
15    -                Redeploy Cache to server4, no upgrades
16    13 and 1         Use Webserver2 and downgrade server1
17    13 and 2         Use Webserver2 and upgrade server1 to 2.8 GHz
18    13 and 3         Use Webserver2 and upgrade server1 to 3.0 GHz
19    14 and 1         Use Webserver3 and downgrade server1
20    14 and 2         Use Webserver3 and upgrade server1 to 2.8 GHz
21    14 and 3         Use Webserver3 and upgrade server1 to 3.0 GHz
22    15 and 10        Redeploy Cache and downgrade server4 and server3
23    15 and 11        Redeploy Cache and upgrade server4 and server3 to 2.8 GHz
24    15 and 12        Redeploy Cache and upgrade server4 and server3 to 3.0 GHz
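The conflicts column can be read as mutual-exclusion constraints: decision variables that change the same degree of freedom cannot be active together. A small feasibility check, sketched under that reading (the groups below are inferred from the table, not stated explicitly in it):

```python
# Decision variables that modify the same degree of freedom are mutually
# exclusive, e.g. the three speed changes of server1 (variables 1-3) or
# the two component replacements (13, 14). Groups inferred from the table.
EXCLUSIVE_GROUPS = [
    {1, 2, 3},     # server1 speed changes
    {4, 5, 6},     # server2 speed changes
    {7, 8, 9},     # server3 speed changes
    {10, 11, 12},  # server4 speed changes
    {13, 14},      # Webserver2 vs. Webserver3
]

def feasible(selection: set) -> bool:
    """At most one variable per mutually exclusive group may be selected."""
    return all(len(selection & group) <= 1 for group in EXCLUSIVE_GROUPS)

print(feasible({2, 13}))   # True: upgrade server1 and use Webserver2
print(feasible({1, 2}))    # False: server1 cannot be down- and upgraded
```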

PerOpteryx

Search       Problem                  Input candidates   Candidates per iteration   Candidates total   Optimal candidates   Iterations   Duration   Mean duration per candidate
Hybrid       Performance and Cost     19                 25                         151                16                   10           46 min     18.3 sec
Evol. only   Performance and Cost     0                  25                         149                15                   10           34 min     13.6 sec
Hybrid       Availability and Cost    12                 15                         130                10                   15           5 min      2.3 sec
Evol. only   Availability and Cost    0                  15                         132                10                   15           5 min      2.3 sec

For performance and cost optimisation, the duration may vary between runs because the simulation stop criterion is based on reaching a target confidence level.
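The last column of the table is simply the total duration divided by the number of evaluated candidates, e.g. for the hybrid performance-and-cost run:

```python
# Mean duration per candidate = total duration / candidates evaluated.
total_seconds = 46 * 60   # 46 min for the hybrid performance-and-cost run
candidates = 151
print(round(total_seconds / candidates, 1))   # 18.3 (seconds per candidate)
```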


Additional Result Diagrams

The figure below shows a hybrid vs. evolutionary search comparison for availability and cost optimisation, where each optimisation run had the same number of iterations (15). We observe that the hybrid front is superior to the front found from scratch. The hybrid search has found all but one of the globally Pareto-optimal candidates.

Hybrid vs. evolutionary search comparison for availability and cost optimisation.
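Pareto optimality here means that no other candidate is at least as good in both objectives and strictly better in one (availability is maximised, cost minimised). A generic sketch of extracting such a front, with illustrative candidate values that are not from the case study:

```python
# Extract the Pareto front for (availability, cost): availability is
# maximised, cost is minimised.
def dominates(a, b):
    """a = (availability, cost); True if a dominates b."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

def pareto_front(candidates):
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o != c)]

# Illustrative candidates, not values from the case study.
cands = [(0.990, 10.0), (0.995, 14.0), (0.992, 9.0), (0.990, 12.0)]
print(pareto_front(cands))   # [(0.995, 14.0), (0.992, 9.0)]
```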

The figure below shows the results of the hybrid search for performance and cost (shown in Figure 8 in the paper) in a different way: you can compare the analytic results as evaluated with SimuCom (blue diamonds), the additionally found optimal evolutionary candidates (green triangles), and all other evolutionary candidates (red crosses).

Optimisation results for performance and costs.