To guarantee reliable service quality to their customers, and due to the increasing complexity of cloud applications, SaaS cloud providers usually run their business-critical applications with a fixed amount of resources. This approach has drawbacks such as higher costs (e.g., poor energy efficiency when the system is not fully utilized) or poor performance during unexpected load peaks. Amazon EC2, for instance, provides a configurable schedule-based and rule-based auto-scaler. However, this reactive scaling incurs a latency, depending on the resource type, in the order of minutes.
Therefore, proactive and application-aware auto-scalers are required: intelligent controllers that can reconfigure the system in time to ensure high availability and constant performance under changing conditions. Existing auto-scalers can be classified into five groups, and prominent examples of each class were investigated. This survey shows that existing controllers are either application-specific or too generic; that is, they either perform well only on the system they were designed for, or they fall short of their potential on any particular system.
Therefore, we introduce a novel proactive, application-aware elasticity mechanism (PAAEM). The proposed controller employs established forecast methods for short-, mid-, and long-term predictions of the arriving load intensity, application knowledge, and resource demand estimation to calculate the required resources per work unit. Taking this information into account, the mechanism reconfigures the deployment of an application such that the supply of resources matches the current and estimated future demand. PAAEM consists of two mechanisms: (I) a reactive rule-based controller as a fall-back, and (II) a proactive controller with three major building blocks: (a) continuous workload forecasting (using a modified version of WCF) with dynamic forecast method selection, (b) a descriptive performance model capturing application knowledge (DML Model@RunTime), and (c) optimized resource demand estimation approaches (using the tool LibReDe).
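The core sizing decision of such a proactive controller can be sketched as follows. This is a minimal illustration, not the actual PAAEM implementation: the function name, the single-resource capacity model, and the sample numbers are assumptions for exposition.

```python
import math

def required_instances(forecast_rps, demand_per_request_s,
                       capacity_per_instance_s=1.0):
    """Estimate how many instances are needed so that resource supply
    matches the forecast demand.

    forecast_rps: predicted arrival rate (requests/second) from the
        workload forecaster.
    demand_per_request_s: estimated CPU demand per request (seconds),
        e.g., obtained from a resource demand estimator.
    capacity_per_instance_s: CPU-seconds one instance can serve per
        wall-clock second (1.0 for one fully usable core).
    """
    # Total CPU-seconds demanded per second of wall-clock time.
    total_demand = forecast_rps * demand_per_request_s
    # Round up: under-provisioning would violate the performance target.
    return max(1, math.ceil(total_demand / capacity_per_instance_s))

# Example: 120 req/s forecast, 25 ms CPU per request, one core per instance
print(required_instances(120, 0.025))  # -> 3
```

In a real controller this calculation would run per forecast horizon, with the reactive rule-based fall-back correcting any forecast error.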
In the ongoing evaluation, we are comparing PAAEM against five different state-of-the-art controllers in a private CloudStack-based environment. The BUNGEE elasticity benchmark framework and its metrics are used to conduct and analyse the series of experiments. As workload scenarios, an HTTP application that performs computations on n×n matrices (where n is the request parameter) and SPECjEnterprise2010 are used. The applications are driven with variable load profiles (extracted with LIMBO from the FIFA World Cup 1998 trace) generated with JMeter (instead of FABAN, as classically used by SPECjEnterprise2010). Based on power consumption measurements for each physical server load level, the consumed watts per request can be estimated for the different auto-scalers in the given workload scenario. Further experiments (comparison, sensitivity, SPECj) are currently ongoing, and results will be included in the presentation.
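The energy metric mentioned above can be illustrated with a short sketch. The formula (average power divided by throughput) and the sample numbers are illustrative assumptions, not measurements from the actual experiments:

```python
def watts_per_request(avg_power_w, completed_requests, duration_s):
    """Average energy cost per request over a measurement interval:
    mean power draw divided by request throughput.

    Note the units: W / (req/s) equals joules per request.
    """
    throughput = completed_requests / duration_s  # requests per second
    return avg_power_w / throughput

# Example: 250 W average draw, 18,000 requests served in a 600 s run
print(round(watts_per_request(250, 18000, 600), 2))  # -> 8.33
```

Comparing this value across auto-scalers on the same load profile indicates which controller provisions resources more energy-efficiently.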