How We Cut $100K/Month From a Manufacturing Company's Azure Bill
- Logan Hemphill
A manufacturing company with thousands of employees was spending $287,000 per month on Azure across three environments: Dev, UAT, and Production.
Read that again. $287K a month.
When I got involved, the first thing I looked at was the cost split across environments. What I found was staggering: over $200K of that $287K was going to Dev and UAT alone. Both environments were scaled nearly identically to Production — same VM sizes, same database tiers, same app service plans — and all of it was running 24/7.
Dev and UAT. Running at production scale. Around the clock. For teams that deployed changes to production maybe twice a month.
That was the moment I knew this engagement was going to deliver serious results.
The Situation
This company had a capable cloud team, but they were buried. New business initiatives had priority, stability issues kept pulling them into reactive firefighting, and cost optimization had never made it to the top of anyone's list. Nobody was doing cost reviews. Nobody had time.
Here's what the environment looked like when I walked in:
No cost governance structure. They had multiple subscriptions per environment but no management groups organizing them. There was no easy way to see where money was going or who was responsible for what. Cost reviews weren't happening because the data wasn't accessible in any useful way.
Dev and UAT mirrored Production. Every resource in Dev and UAT was provisioned at the same scale as Prod. Same VM SKUs, same database tiers, same app service plans. Nothing scaled down at night. Nothing scaled down on weekends. Nothing was right-sized based on actual usage — because nobody had looked at actual usage.
Observability was reactive, not strategic. They had alerts and dashboards, but all of them had been created during incident calls — someone would hit a problem, slap together a dashboard, and move on. There was no cohesive monitoring strategy. They were measuring the wrong things and missing the signals that mattered.
Orphaned resources everywhere. Developers would spin up resources for sandboxed testing and forget to delete them. Without tagging, lifecycle policies, or cleanup automation, these resources just accumulated — quietly adding to the bill every month.
What I Did
The approach was methodical: stop the bleeding first, then build the systems that prevent it from happening again.

Phase 1: Immediate Cost Reduction
Right-sized Dev and UAT resources based on actual usage. I exported every resource, its type, SKU, and cost, then cross-referenced against actual utilization metrics. VMs, app service plans, databases, and various other resources in Dev and UAT were dramatically oversized relative to their workloads. I right-sized them to match what they actually needed — not what Production was running.
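The core of that right-sizing pass can be sketched in a few lines. This is a simplified illustration, not the actual tooling: the SKU ladder is a made-up subset of VM sizes, and the 30% headroom factor is an example threshold.

```python
# A simplified SKU ladder: name -> vCPU count (illustrative subset only)
SKU_VCPUS = {"D2s_v3": 2, "D4s_v3": 4, "D8s_v3": 8, "D16s_v3": 16}

def recommend_sku(current_sku: str, peak_cpu_pct: float) -> str:
    """Pick the smallest SKU whose capacity still covers the observed
    peak load, with ~30% headroom above that peak."""
    needed_vcpus = SKU_VCPUS[current_sku] * (peak_cpu_pct / 100) * 1.3
    for name, vcpus in sorted(SKU_VCPUS.items(), key=lambda kv: kv[1]):
        if vcpus >= needed_vcpus:
            return name
    return current_sku  # already the largest option in the ladder

# A Dev VM sized like Production but idling at 9% peak CPU:
print(recommend_sku("D16s_v3", 9))   # -> D2s_v3
```

The point of the exercise is that the recommendation comes from observed utilization, not from what Production happens to run.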
Implemented autoscaling and off-hours schedules. App service plans in Dev and UAT were configured with autoscaling rules to scale down at night and on weekends. There is zero reason for a Dev environment to run at full capacity at 3 AM.
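The schedule logic behind those rules is simple. A minimal sketch, with made-up hours and instance counts rather than the client's actual settings:

```python
from datetime import datetime

BUSINESS_START, BUSINESS_END = 7, 19   # local business hours (example values)
DAYTIME_INSTANCES, OFF_HOURS_INSTANCES = 4, 1

def desired_instances(now: datetime) -> int:
    """Full capacity on weekday business hours, one instance otherwise."""
    is_weekday = now.weekday() < 5                       # Mon-Fri
    is_business_hours = BUSINESS_START <= now.hour < BUSINESS_END
    return DAYTIME_INSTANCES if (is_weekday and is_business_hours) else OFF_HOURS_INSTANCES

print(desired_instances(datetime(2024, 6, 12, 3)))    # Wednesday 3 AM -> 1
print(desired_instances(datetime(2024, 6, 12, 10)))   # Wednesday 10 AM -> 4
```

In practice you don't run this yourself: Azure autoscale supports recurring schedule profiles that express the same weekday/weekend split declaratively.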
Turned off unnecessary logging and metrics in Dev and UAT. They had full diagnostic logging enabled across all three environments — the same logging configuration in Dev as in Production. For non-production environments, this is pure waste. I disabled the metric and log categories that weren't needed, reducing Log Analytics ingestion costs significantly.
Cleaned up orphaned resources. I identified and removed resources left over from sandboxed dev testing that had been forgotten. Storage accounts, NICs, public IPs, disks — the usual graveyard of resources that nobody remembers creating but Azure remembers to bill for.
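The sweep itself is a filter over an exported inventory. The dictionary shape and field names below are a simplified stand-in for what an Azure resource export roughly looks like, not the real schema:

```python
def find_orphans(inventory: list[dict]) -> list[str]:
    """Return IDs of resources that are billable but attached to nothing."""
    orphans = []
    for r in inventory:
        if r["type"] == "disk" and r.get("managed_by") is None:
            orphans.append(r["id"])              # unattached managed disk
        elif r["type"] == "public_ip" and r.get("ip_configuration") is None:
            orphans.append(r["id"])              # public IP bound to nothing
        elif r["type"] == "nic" and r.get("virtual_machine") is None:
            orphans.append(r["id"])              # NIC left behind by a deleted VM
    return orphans

inventory = [
    {"id": "disk-01", "type": "disk", "managed_by": None},
    {"id": "disk-02", "type": "disk", "managed_by": "vm-app-01"},
    {"id": "ip-01", "type": "public_ip", "ip_configuration": None},
    {"id": "nic-01", "type": "nic", "virtual_machine": "vm-app-01"},
]
print(find_orphans(inventory))   # -> ['disk-01', 'ip-01']
```

Unattached disks and idle public IPs are the classic offenders: they keep billing after the VM that used them is gone.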
Optimized storage accounts. I reviewed and changed storage account tiers, adjusted retention policies, and modified replication settings to match actual requirements rather than the defaults that had been left in place.
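The tiering decision boils down to matching access frequency to the tier's pricing model. A sketch with illustrative thresholds (the Hot/Cool/Archive tiers are real Azure Blob Storage tiers; the cutoffs here are example numbers, not Azure guidance):

```python
def suggest_tier(reads_per_month: float) -> str:
    """Map observed read frequency to a blob access tier."""
    if reads_per_month >= 100:
        return "Hot"        # frequently accessed data
    if reads_per_month >= 1:
        return "Cool"       # infrequent access, lower storage cost
    return "Archive"        # rarely touched; cheapest storage, slow retrieval

for account, reads in [("app-logs", 450), ("monthly-exports", 4), ("old-backups", 0)]:
    print(account, "->", suggest_tier(reads))
```

Most of the savings here came from data sitting in Hot tier by default when nothing had read it in months.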
Phase 2: Governance and Visibility
Built a management group structure and custom cost dashboards. I organized their subscriptions under management groups so costs could be reviewed by environment, by team, and by workload — not just as one giant bill. I built custom cost dashboards in Azure Cost Management that gave leadership and the cloud team real visibility into where money was going for the first time.
Designed a cost-tagging strategy with teeth. Every resource would be tagged to a VP or director and mapped to a cost center. This created clear ownership — no more "whose resource is this?" conversations. For Dev specifically, tags included a lifecycle component: resources would be flagged and auto-deleted after a defined period if not renewed. No more orphaned sandbox resources accumulating forever.
Created a project approval process. New resource provisioning would require cost center mapping and an approving owner before spinning up. This connected cloud spend directly to business decisions and made cost a first-class consideration in project planning.
Phase 3: Knowledge Transfer and Long-Term Roadmap
Established a recurring cost review process. I set up a regular cost review meeting with their team and walked them through how to conduct reviews themselves — what to look for, which dashboards to check, what alerts to configure, and how to catch cost anomalies early.
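Catching anomalies early doesn't require fancy tooling. A simple version of the check, comparing each day's spend against a trailing baseline (the 7-day window and 30% threshold are example values):

```python
from statistics import mean

def flag_anomalies(daily_spend: list[float], window: int = 7,
                   threshold: float = 1.3) -> list[int]:
    """Return indices of days whose spend exceeds the trailing mean by >30%."""
    flagged = []
    for i in range(window, len(daily_spend)):
        baseline = mean(daily_spend[i - window:i])
        if daily_spend[i] > baseline * threshold:
            flagged.append(i)
    return flagged

spend = [9500, 9700, 9400, 9600, 9550, 9650, 9500, 9600, 14200, 9700]
print(flag_anomalies(spend))   # day 8 jumps ~48% above baseline -> [8]
```

Azure Cost Management has budget alerts that serve the same purpose; the value of walking the team through the logic was that they understood what "anomaly" meant for their own spend patterns.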
Built a 3-year optimization roadmap. The immediate wins were significant, but we also mapped out a longer-term plan including containerization of key workloads, seasonal pre-scaling in Production based on usage patterns, and continued refinement of their governance model.
The goal was always to make them self-sufficient — not dependent on me forever.
The Results

$100K+ per month in Azure cost savings. Annual run-rate reduction of over $1.2 million.
Monthly spend dropped from $287K to under $185K — and the environment was actually better for it. Dev and UAT were right-sized and responsive. The team had dashboards that told them something useful. Leadership could see where money was going for the first time.
Beyond the dollar figure, the engagement fundamentally changed how the company thought about cloud costs:
Cost became visible. Management groups and dashboards replaced the black box.
Cost became owned. Tagging and cost center mapping meant every dollar had a name attached to it.
Cost became proactive. Recurring reviews and alerts replaced the old pattern of finding out about waste six months too late.
The cloud team got their time back. With better observability and fewer stability issues caused by over-provisioned, poorly monitored environments, the team could focus on the initiatives that actually mattered.
The Takeaway
The biggest cost leaks in Azure aren't exotic. They're boring. They're Dev environments running at Production scale because nobody had time to change it. They're logging configurations copied across every environment because it was easier than thinking about what each environment actually needs. They're orphaned resources from six months ago that nobody remembers creating.
Most mid-market companies I talk to have 20–40% in Azure savings hiding in plain sight. Not because they made bad decisions — but because nobody has had the time to make better ones.
If you're curious whether your Azure environment has room to shrink, I do free 30-minute cost reviews. I'll look at your resource SKUs, cost breakdowns, scaling configurations, and logging overhead and show you where the money is going.
No pitch, no strings — just a free look at your spend.
Logan Hemphill is the founder of Altitude Cloud LLC, specializing in Azure cost optimization and cloud architecture modernization for mid-market companies. He has driven over $5M in documented cloud cost savings across healthcare, financial services, retail, manufacturing, and industrial sectors.