5 ways to get your hands dirty with AWS Cloud Cost Optimisation

Douglas Hull

In a cloud environment, your spend is a byproduct of many architectural decisions: not only is the choice of compute or storage platform crucial, but also how and when those resources are utilised. Left unexamined, small inefficiencies, such as overprovisioned resources, idle development environments and legacy storage classes, can compound, and costs start to stack up.

In our previous blog, An introduction to Cloud Cost Optimisation, we covered the first steps to managing and monitoring your cloud costs, including organisational support and visibility. These controls, while important, don’t reduce cloud spend on their own. Now it’s time to have a bit more fun and go into specific technical changes you can make to reduce your bill and make your CFO smile, or at least frown a little less.

Five ways you can optimise your cloud architecture today

When you strive to make cloud-hosted software and systems run more efficiently (which should be the goal of every good software engineer), they naturally become more cost optimised. An added bonus is that optimised systems use less energy, which can help accelerate your sustainability goals.

Here are five ways you can optimise your cloud architecture to reduce cloud costs:

1. Schedule your resource availability

Unlike the ‘always-on’ world of the on-prem data centre, the cloud is an on-demand resource, which means costs are only incurred when a compute resource is running.

If a compute resource is charged on an hourly basis, systems that are only needed during office hours should only run during office hours. Changing a system from operating 24/7 (168 hours per week) to 8/5 (40 hours per week) equates to a 76% reduction in running time and therefore, cost. Most development and test environments fall into this category, as do many internal IT systems.
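As a rough sketch of this pattern, assuming Python with boto3, the snippet below stops any running instance that has opted in via a `Schedule=office-hours` tag. The tag name, and the idea of triggering the stop from an EventBridge schedule each evening (with a matching start function each morning), are illustrative assumptions rather than a prescribed setup:

```python
"""Stop office-hours EC2 instances outside working hours (sketch).

Assumes instances opt in via a Schedule=office-hours tag and that an
EventBridge schedule invokes stop_office_hours_instances() each evening,
with a matching start function each morning. Tag name is illustrative.
"""

HOURS_24_7 = 24 * 7   # 168 hours per week, always-on
HOURS_8_5 = 8 * 5     # 40 hours per week, office hours only

def weekly_saving_pct(hours_on, hours_total=HOURS_24_7):
    """Percentage reduction in running time (and therefore hourly cost)."""
    return round(100 * (1 - hours_on / hours_total), 1)

def stop_office_hours_instances():
    """Stop every running instance tagged Schedule=office-hours."""
    import boto3  # imported lazily: requires AWS credentials to run
    ec2 = boto3.client("ec2")
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Schedule", "Values": ["office-hours"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if ids:
        ec2.stop_instances(InstanceIds=ids)
    return ids

if __name__ == "__main__":
    # When wired to EventBridge, call stop_office_hours_instances() here.
    print(f"8/5 vs 24/7 saving: {weekly_saving_pct(HOURS_8_5)}%")  # 76.2%
```

The arithmetic helper makes the headline figure concrete: 40 hours out of 168 is a little over a 76% reduction in running time.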

Other systems may need to remain always-on; however, they may experience periods of variable traffic. Automatically scaling resources up and down to meet demand means that costs are constantly balanced favourably. For example, an e-commerce website needs to handle the Black Friday traffic, but the day before payday in January is an entirely different story! Online streaming services need to handle the Friday night load in the depths of winter, but likely experience lighter traffic on a Sunday afternoon in summer.
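For predictable daily cycles, scheduled scaling is often the simplest lever. The sketch below, again Python with boto3, registers two scheduled actions against a hypothetical Auto Scaling group named `web-asg`; the group name, capacities and cron expressions are all illustrative assumptions to adapt to your own traffic profile:

```python
"""Scheduled scaling sketch for a hypothetical 'web-asg' Auto Scaling group.

Group name, capacities and cron expressions are illustrative; adjust to
your own traffic profile. Recurrence is evaluated in UTC by default.
"""

def scheduled_action(name, cron, min_size, desired, max_size):
    """Parameters for autoscaling.put_scheduled_update_group_action()."""
    return {
        "AutoScalingGroupName": "web-asg",  # hypothetical group name
        "ScheduledActionName": name,
        "Recurrence": cron,
        "MinSize": min_size,
        "DesiredCapacity": desired,
        "MaxSize": max_size,
    }

def apply_schedules():
    import boto3  # imported lazily: requires AWS credentials to run
    autoscaling = boto3.client("autoscaling")
    # Drop to a skeleton crew overnight...
    autoscaling.put_scheduled_update_group_action(
        **scheduled_action("scale-down-night", "0 20 * * *", 1, 1, 2))
    # ...and scale back up ahead of the morning peak.
    autoscaling.put_scheduled_update_group_action(
        **scheduled_action("scale-up-morning", "0 7 * * *", 2, 4, 10))

if __name__ == "__main__":
    print(scheduled_action("scale-down-night", "0 20 * * *", 1, 1, 2))
```

For less predictable demand, target-tracking scaling policies are usually a better fit than fixed schedules; the two approaches can also be combined.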

2. Choose your family wisely

Selecting the right compute family is another major optimisation lever. For many workloads, AWS’ own Graviton-based instances (ARM architecture) deliver better price-performance than comparable x86 alternatives, which means migrating your workloads onto these platforms can reduce compute costs.

Services such as Amazon EC2, Amazon Elastic Kubernetes Service (EKS) and Amazon Elastic Container Service (ECS) all support Graviton instance families, making adoption relatively straightforward for modern containerised or stateless workloads.

By moving to AWS’s own silicon, we’ve seen compute cost reductions of 20% or more, often alongside performance gains, when migrating from older x86-based instance families to current-generation Graviton equivalents.

As with any architecture change, compatibility testing is essential, but for many applications, the savings are immediate and material.
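As a sketch of how you might scope such a migration, assuming Python with boto3, the helper below lists the arm64-capable instance types in the current region and suggests Graviton counterparts for a few common x86 families. The family mapping is illustrative and deliberately non-exhaustive:

```python
"""Graviton discovery sketch: list arm64-capable instance types in the
current region, plus a tiny (illustrative, non-exhaustive) mapping from
common x86 families to their Graviton counterparts."""

X86_TO_GRAVITON = {"m5": "m6g", "c5": "c6g", "r5": "r6g", "t3": "t4g"}

def graviton_equivalent(instance_type):
    """Suggest a Graviton counterpart, or None if the family is unmapped."""
    family, _, size = instance_type.partition(".")
    arm_family = X86_TO_GRAVITON.get(family)
    return f"{arm_family}.{size}" if arm_family else None

def list_arm64_instance_types():
    """All instance types in the region that support the arm64 architecture."""
    import boto3  # imported lazily: requires AWS credentials to run
    ec2 = boto3.client("ec2")
    paginator = ec2.get_paginator("describe_instance_types")
    pages = paginator.paginate(
        Filters=[{"Name": "processor-info.supported-architecture",
                  "Values": ["arm64"]}]
    )
    return sorted(t["InstanceType"] for page in pages
                  for t in page["InstanceTypes"])

if __name__ == "__main__":
    print(graviton_equivalent("m5.large"))  # m6g.large
```

A suggested mapping is only a starting point: container base images, native dependencies and third-party agents all need arm64 builds before the switch.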

3. Keep up with new generations

Moving to newer generations of EBS storage is relatively straightforward and quick to implement, and can also deliver a cost saving. For example, changing from General Purpose SSD generation 2 (gp2) to generation 3 (gp3) can cut storage costs by around 20%, with little to no downtime or compatibility risk. There is little reason to stick with legacy gp2 storage these days; for many organisations, moving to gp3 is low-hanging fruit from a cost optimisation viewpoint.
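A minimal sketch of that migration, assuming Python with boto3, might look like the following; the per-GB prices are illustrative us-east-1 on-demand figures used only to show where the roughly 20% saving comes from:

```python
"""gp2 -> gp3 migration sketch. The per-GB prices below are illustrative
(rough us-east-1 on-demand figures), used only to show where the ~20%
saving comes from; check current pricing for your region."""

def gp3_saving_pct(gp2_per_gb=0.10, gp3_per_gb=0.08):
    """Percentage saving per GB-month when moving gp2 -> gp3."""
    return round(100 * (1 - gp3_per_gb / gp2_per_gb), 1)

def migrate_gp2_to_gp3(dry_run=True):
    """Find every gp2 volume and (unless dry_run) convert it in place.

    modify_volume performs an online modification: no detach, no downtime.
    """
    import boto3  # imported lazily: requires AWS credentials to run
    ec2 = boto3.client("ec2")
    pages = ec2.get_paginator("describe_volumes").paginate(
        Filters=[{"Name": "volume-type", "Values": ["gp2"]}]
    )
    for page in pages:
        for volume in page["Volumes"]:
            print(f"{volume['VolumeId']}: gp2 -> gp3")
            if not dry_run:
                ec2.modify_volume(VolumeId=volume["VolumeId"],
                                  VolumeType="gp3")

if __name__ == "__main__":
    print(f"gp2 -> gp3 saving: {gp3_saving_pct()}%")  # 20.0%
```

Because `modify_volume` is an online operation, the change can be applied to attached, in-use volumes; one caveat is that gp3 provisions IOPS and throughput separately, so very large gp2 volumes relying on burst-scaled IOPS should be checked first.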

4. Guide your developers

Cost governance is most effective when applied consistently and centrally, rather than piecemeal, account by account.

Developing organisational guardrails using Service Control Policies (SCPs) in Amazon Web Services (AWS) is one of the most effective ways to enforce cost discipline at scale.

At the organisation level, SCPs in AWS Organizations can: restrict the use of expensive or non-approved services, deny legacy instance families (for example, older EC2 generations), and limit deployments to approved regions.
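For illustration, an SCP denying legacy instance families might look like the sketch below. The denied family list, the `Sid`, and the policy name in the attachment comment are all assumptions to adapt to your own approved baseline:

```python
"""Build an SCP that denies launching legacy EC2 instance families.

The family list is illustrative; extend or replace it with whatever
your organisation's approved baseline excludes.
"""
import json

def legacy_instance_scp(denied_families=("t2", "m4", "c4")):
    """Deny ec2:RunInstances for any instance type in a denied family."""
    patterns = [f"{family}.*" for family in denied_families]
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyLegacyInstanceFamilies",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {"StringLike": {"ec2:InstanceType": patterns}},
        }],
    }

if __name__ == "__main__":
    # Attach with AWS Organizations, e.g. (management-account credentials):
    # boto3.client("organizations").create_policy(
    #     Name="deny-legacy-instances", Type="SERVICE_CONTROL_POLICY",
    #     Description="Deny legacy EC2 instance families",
    #     Content=json.dumps(legacy_instance_scp()))
    print(json.dumps(legacy_instance_scp(), indent=2))
```

The same Deny-with-Condition shape works for region restrictions (using the `aws:RequestedRegion` condition key) and for blocking non-approved services outright.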

By combining these organisation-level policies with further preventative controls in AWS Control Tower, permission boundaries in AWS Identity and Access Management (IAM), and proactive checks via AWS Config rules, organisations can ensure developers only deploy cost-appropriate instance types and sizes.

These guardrails should be applied to all existing and future accounts within the organisation, as rolling them out gradually can create policy gaps where legacy accounts continue incurring unnecessary spend.

5. Provide visibility of costs

A guiding principle at Mechanical Rock is that you can’t measure what you can’t see. Many teams are not aware of what their AWS spend is, or find it hard to predict.

In many organisations, billing is managed by Finance teams rather than technical teams. This separation means the engineers managing cloud systems often cannot see the costs they are incurring. Shifting cost visibility and responsibility left, by introducing tools such as Infracost, can make the financial impact of introducing or changing infrastructure clear before it becomes a significant headache.

However, simply investing in a cost observability tool is not enough. While these tools often come with useful out-of-the-box functionality, fully utilising them takes time and commitment. If your organisation is not ready for that commitment, sticking with the tools already available from your cloud provider may be a better investment.

Optimise your cloud costs further by registering for our upcoming webinar

Mechanical Rock has helped many companies optimise their AWS costs. Over time, we’ve identified many ways to optimise cloud environments without sacrificing performance.

In our upcoming webinar, Glen Buktenica from Fortescue and Douglas Hull, Delivery Lead at Mechanical Rock, will share how companies can reduce their total cloud spend without rearchitecting everything. Attendance is free, but seats are limited. Register here >

Or, if you’re ready to dive straight into a Cloud Cost Optimisation assessment for your company, let’s chat.

This blog wasn’t the work of one person - we had an incredible team of people collaborate to build this blog. Thank you to Craig Webster, Principal Engineer at Mechanical Rock, and Maciej Tarsa, Senior Consultant at Mechanical Rock.