Cloud providers offer options that let you consolidate and aggregate instances to get the most value from provisioning. However, when provisioning systems and resources you must understand the requirements, monitor continuously, and have the flexibility to adjust system capacity over time. Many tools are available to help achieve this goal. For example, AWS offers APIs and SDKs that automate the adjustment process, reducing the effort needed to align capacity with requirements to virtually zero. Dashboards allow both your central team and stakeholders to view usage and adjust accordingly.
Right sizing ensures you receive the capacity you need when it’s needed, and that you pay the lowest cost for the resources you use. The key to right sizing is understanding your organization’s usage needs and patterns precisely, and being able to take advantage of the elasticity of the Cloud to respond to those needs.
Right sizing activities take into account all of the resources of a system and all of the attributes of each individual resource. For example, memory utilization, network bandwidth, and system connections can be monitored and analyzed, and resources allocated to meet the demand of each computing requirement. The technique also allows you to examine deployed instances and identify opportunities to eliminate or downsize them without compromising capacity or other requirements – resulting in lower costs.
This is an iterative process, and your right sizing arrangement should be re-assessed and adjusted on a monthly or even weekly basis. In many organisations this process is automated, so adjustments are triggered by changes in usage patterns and by external factors such as changing pricing models, instance options or resource types.
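To make the analysis concrete, here is a minimal sketch of the kind of utilisation check a right sizing pass performs. The instance records, thresholds and utilisation figures are invented for illustration; a real implementation would pull metrics from a monitoring service rather than hard-coded samples.

```python
# Illustrative right-sizing check: instances whose peak CPU and memory
# utilisation stay well below the thresholds are candidates for a
# smaller instance type. All data and thresholds here are invented.

def rightsize(instances, cpu_threshold=40.0, mem_threshold=50.0):
    """Return IDs of instances that look over-provisioned."""
    candidates = []
    for inst in instances:
        if (max(inst["cpu_samples"]) < cpu_threshold
                and max(inst["mem_samples"]) < mem_threshold):
            candidates.append(inst["id"])
    return candidates

fleet = [
    {"id": "i-web-1", "cpu_samples": [12, 18, 25], "mem_samples": [30, 35, 41]},
    {"id": "i-db-1",  "cpu_samples": [55, 70, 90], "mem_samples": [60, 72, 80]},
]
print(rightsize(fleet))  # → ['i-web-1']
```

In practice a check like this would run on a schedule, feeding its candidates into a review or automated-resize workflow.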
You can think of tagging as another form of metadata. Tags allow you to overlay business and organizational information onto your billing and usage data. This helps you categorize and track your costs to a very granular level. You can apply tags that represent business categories – such as cost centers, application names, projects, or owners – to organize your costs across multiple services and teams. This allows you to attribute costs to individuals, user groups or business units.
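A hedged sketch of how tags make this granular cost attribution possible; the billing line items and tag keys below are made-up sample data, not a real billing export format.

```python
from collections import defaultdict

def costs_by_tag(line_items, tag_key):
    """Sum costs grouped by the value of one tag key; untagged spend
    is surfaced explicitly so it can be chased down."""
    totals = defaultdict(float)
    for item in line_items:
        group = item["tags"].get(tag_key, "untagged")
        totals[group] += item["cost"]
    return dict(totals)

items = [
    {"cost": 120.0, "tags": {"cost-center": "marketing", "project": "web"}},
    {"cost": 80.0,  "tags": {"cost-center": "finance"}},
    {"cost": 40.0,  "tags": {}},
]
print(costs_by_tag(items, "cost-center"))
# → {'marketing': 120.0, 'finance': 80.0, 'untagged': 40.0}
```

The same records can be re-grouped by any other tag key – project, owner, application – which is what lets one set of billing data answer questions from multiple teams.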
Assigning tags to resources allows higher levels of automation and ease of management. When resources are accurately tagged, automation tools can identify the key characteristics of those resources needed to take advantage of Cloud elasticity. The best tool for accomplishing this task is Auto Scaling, which you can use to optimize performance by automatically increasing the number of instances during demand spikes and decreasing capacity as demand diminishes.
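The scaling behaviour described above reduces to a simple threshold policy. The sketch below is illustrative only – real Auto Scaling policies are configured declaratively in the platform, not hand-coded – and the thresholds and bounds are assumptions:

```python
def scale_decision(current, cpu_pct, high=70.0, low=30.0, min_n=2, max_n=10):
    """Add an instance when average CPU exceeds `high`, remove one when
    it drops below `low`, staying within the [min_n, max_n] bounds."""
    if cpu_pct > high and current < max_n:
        return current + 1
    if cpu_pct < low and current > min_n:
        return current - 1
    return current

print(scale_decision(4, 85.0))  # demand spike → 5
print(scale_decision(4, 20.0))  # demand drop  → 3
```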
According to research, a massive $3.4 billion of Cloud spend is wasted each year on unused storage volumes and snapshots (10). In many instances, organisations don’t attempt to optimize storage in the same way they would system performance and capacity. However, Cloud storage is a key focus for cost optimisation.
To optimize storage, the first step is to understand the performance profile for each of your workloads. You should conduct a performance analysis to measure input/output operations per second (IOPS), throughput, and other variables. There then follows a process of right sizing similar to capacity provisioning. And, just like right sizing in other areas, this is continuous and iterative. To maintain a storage architecture that is both right-sized and right-priced, you should optimize storage on at least a monthly basis.
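As an illustrative sketch of that review process, the check below flags unattached volumes for deletion and over-specified volumes for a cheaper tier. The tier names, IOPS figures and volume records are invented for illustration, not provider SKUs:

```python
def storage_review(volumes, idle_iops=100):
    """Split volumes into deletion candidates (unattached) and
    re-tiering candidates (provisioned IOPS far above observed peak)."""
    delete, retier = [], []
    for vol in volumes:
        if not vol["attached"]:
            delete.append(vol["id"])
        elif vol["tier"] == "provisioned-iops" and vol["peak_iops"] < idle_iops:
            retier.append(vol["id"])
    return delete, retier

volumes = [
    {"id": "vol-1", "attached": False, "peak_iops": 0,   "tier": "general"},
    {"id": "vol-2", "attached": True,  "peak_iops": 60,  "tier": "provisioned-iops"},
    {"id": "vol-3", "attached": True,  "peak_iops": 900, "tier": "provisioned-iops"},
]
print(storage_review(volumes))  # → (['vol-1'], ['vol-2'])
```

Run monthly, a pass like this directly targets the unused-volume waste described above.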
It’s clear that when you have insight into your capacity needs – who uses the services, how they use them and how much capacity they consume – you can begin to take advantage of the impressive discounts available by exploiting models such as reserved or spot instances.
In addition, some public Cloud providers offer more complex solutions to help maximize the value of your spend. For example, AWS provides EC2 Fleet, which lets you define a target compute capacity and then creates and automatically launches the best mix of On-Demand, Reserved and Spot instances to meet your specific requirements.
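The saving from blending purchase models can be illustrated with simple arithmetic. The hourly rates below are invented round numbers, not actual AWS pricing:

```python
HOURS_PER_MONTH = 730  # approximate average

def monthly_cost(n_instances, hourly_rate):
    """Monthly spend for a group of instances at one hourly rate."""
    return n_instances * hourly_rate * HOURS_PER_MONTH

# 10 instances entirely On-Demand vs. a blended fleet of
# 6 Reserved + 4 Spot covering the same capacity.
on_demand = monthly_cost(10, 0.10)
blended = monthly_cost(6, 0.06) + monthly_cost(4, 0.03)
print(round(on_demand, 2), round(blended, 2), round(on_demand - blended, 2))
```

Even with made-up rates the shape of the result is the point: committing a baseline to Reserved capacity and filling the rest with Spot roughly halves the bill versus pure On-Demand.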
Simply keeping on top of the changing pricing structures and promotional/discount opportunities can be difficult, which often leads to organisations missing cost benefits available to them.
This is also true when it comes to licensing. Even within the traditional data centre, software license management is complex. This complexity increases tenfold in the Cloud environment. Cloud vendors are continually innovating, which is great for their customers. But, it also means new pricing metrics, licensing bundles, connectivity charges, etc. that must be factored into your spend planning.
The complexity of this pricing and licensing environment is leading some companies to consider managed services as a way to consolidate everything into a single service and remove licensing issues.
To fully benefit from the Cloud, it is important to map business goals to specific metrics so that you can evaluate where changes need to be made. There is a huge amount of data generated by Cloud systems that means an organisation can gain much clearer visibility into performance and cost.
When you measure and monitor users and applications, and combine the data you collect with data from the Cloud platform, you can quickly identify any gaps between your requirements and current system utilization. It’s important to establish the metrics that most closely align with your desired business outcomes, and to continuously monitor and re-assess real-time performance against those metrics.
Cost-related metrics – such as cost per user, per subscriber, per API call, or per page visit – allow an organisation to take a data-driven approach to cost optimization. But this must form part of a continuous improvement strategy, as the Cloud is a highly dynamic environment. Even if you accurately right size services and workloads at the outset, performance and capacity requirements will change over time, leading to either under- or over-provisioned resources.
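A minimal sketch of that data-driven approach, using an invented cost-per-user metric and made-up monthly figures:

```python
def unit_cost(total_cost, active_users):
    """Spend per active user – a simple unit-economics metric."""
    return total_cost / active_users if active_users else float("inf")

# (month, total cost, active users) – sample data for illustration.
history = [("Jan", 9000, 3000), ("Feb", 9500, 3100), ("Mar", 12000, 3150)]
target = 3.2  # assumed budget target per user

alerts = [month for month, cost, users in history
          if unit_cost(cost, users) > target]
print(alerts)  # → ['Mar']
```

Tracked continuously, a metric like this separates healthy spend growth (cost rising with usage) from drift that warrants a fresh right sizing pass.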