Transitioning from intuitive to data-driven approaches to capacity management


What does capacity planning look like within your organisation?

In most organisations, existing capacity planning methodologies are rooted in decades of traditional physical server deployments and dedicated infrastructure. If an application server is at 50% load today, and load has historically doubled every 24 months, chances are your capacity planning methodology predicts that you have two years before you must add further capacity. While such an approach may work acceptably for dedicated, physical server instances, the now widespread use of production server virtualisation limits how accurate, and therefore how worthwhile, such predictions can be.
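To make the arithmetic behind that prediction explicit, here is a minimal sketch of the traditional extrapolation in Python; the function name and figures are purely illustrative:

    import math

    def months_until_full(current_utilisation, doubling_period_months, ceiling=1.0):
        # Estimate months until utilisation reaches the ceiling, assuming
        # load doubles every `doubling_period_months`.
        if current_utilisation >= ceiling:
            return 0.0
        # utilisation(t) = current * 2 ** (t / doubling_period); solve for t.
        return doubling_period_months * math.log2(ceiling / current_utilisation)

    # A server at 50% load, with load doubling every 24 months:
    print(months_until_full(0.50, 24))  # -> 24.0 months of headroom

The model is a single compound-growth curve per server, which is exactly why it breaks down once many workloads share the same physical resources.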

Further, when this approach fails, IT managers and administrators typically fall back on intuitive approaches to capacity planning – responding to reports of application slowness, or to changes in headcount, in a linear manner that does not account for the complex relationships between application performance and each layer of the infrastructure upon which the applications are hosted.

These intuitive capacity planning methodologies are at best inefficient, resulting in needless or poorly targeted infrastructure investment. At worst, they can be completely ineffective, resulting in highly reactive approaches to infrastructure management that carry significant operational costs.

Virtually Unknowable

Virtualisation – along with the adoption of other shared systems, such as clustered database and web servers, hardware load balancing appliances and storage area networks – necessitates a holistic approach to capacity planning. It is no longer enough to simply understand resource utilisation on an application-by-application basis. Instead, IT managers must consider the inter-relationships between applications: when are the peak periods for individual applications? Which applications have peak periods that overlap? How do applications map to line-of-business functions? Each additional piece of data that must be included in capacity planning calculations compounds the complexity of the forecast, increasing the likelihood of error and therefore decreasing the value of the capacity planning exercise itself.
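As a rough illustration of why those overlapping peak periods matter, the sketch below sums hourly CPU profiles for three applications sharing a single host; the application names and figures are hypothetical:

    # Per-application views look healthy; the combined, host-level view may not.
    hourly_cpu_share = {
        "payroll":   [0.10] * 8 + [0.60] * 4 + [0.10] * 12,  # peaks 08:00-12:00
        "reporting": [0.05] * 8 + [0.55] * 4 + [0.05] * 12,  # peaks at the same time
        "backups":   [0.50] * 6 + [0.05] * 18,               # peaks overnight
    }

    combined = [sum(app[hour] for app in hourly_cpu_share.values()) for hour in range(24)]
    peak_hour = max(range(24), key=lambda h: combined[h])
    print(f"Host peak: {combined[peak_hour]:.0%} of capacity at {peak_hour:02d}:00")
    # Individually none of these applications looks constrained, but the
    # overlapping payroll and reporting peaks push the host to 120% of capacity.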

Given this, it is no wonder that in most organisations capacity planning for virtualised environments remains an ad hoc process, with virtual infrastructure administrators applying the traditional physical server capacity planning methodology to ESX hosts and simply trying to manage around its shortcomings via “agility” in infrastructure procurement and deployment.

A New Approach

A new generation of tools is beginning to emerge that seeks to resolve these problems. Approaches vary across vendors, but common themes emerge among them:

the ability to automate application mapping, allowing analysis to incorporate relationships between servers
the ability to rationalise performance and capacity metrics from multiple infrastructure layers – typically application, database, operating system, hypervisor, network and storage
scenario-based modelling of growth (illustrated in the sketch below)
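To give a flavour of that third theme, here is a minimal sketch of scenario-based growth modelling: it projects utilisation forward under a few hypothetical growth rates and reports when each scenario crosses a capacity threshold. The rates and the 80% threshold are assumptions for illustration, not figures from any particular vendor's tool.

    def project_utilisation(current, monthly_growth_rate, months):
        # Project utilisation forward under compound monthly growth.
        return [current * (1 + monthly_growth_rate) ** m for m in range(months + 1)]

    def months_until_threshold(current, monthly_growth_rate, threshold=0.80):
        # Return the first month in which projected utilisation crosses the threshold.
        for month, value in enumerate(project_utilisation(current, monthly_growth_rate, 60)):
            if value >= threshold:
                return month
        return None  # not reached within the five-year horizon

    # Compare three growth scenarios for a cluster currently at 45% utilisation:
    scenarios = {"steady state": 0.02, "new product launch": 0.05, "consolidation": 0.08}
    for name, rate in scenarios.items():
        print(f"{name}: threshold reached in month {months_until_threshold(0.45, rate)}")

The point of the tools described above is that the growth rates fed into such a model come from measured, cross-layer data rather than guesswork.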

By automating discovery and data collection, and by operating across all layers of the application/infrastructure stack, these tools help drive a transition from the old, intuitive capacity planning methodologies to one based on hard data, and therefore much better able to accurately predict capacity demands within your unique environment. And, as we will discuss in a forthcoming post, such a data-driven approach is critical to managing not just capacity forecasts but application performance as well.
