Serverless on ARM: How I Slashed Costs and Simplified My Blog
When I first built my blog, like many developers, I went with what I knew: containers running on EC2 instances. It worked, but every time I wanted to publish a new post, I had to rebuild and redeploy a container. Talk about overkill for a simple markdown file! Today, I want to share how moving to a serverless architecture on AWS Graviton2 ARM processors has transformed my workflow and significantly reduced costs.
The Old Way vs. The New Way
Let me paint you a picture of the old deployment process:
- Write a blog post in markdown
- Add it to the container's content directory
- Build a new container image
- Push it to ECR
- Update the ECS task definition
- Wait for the deployment to complete
Now, my process looks like this:
- Write a blog post in markdown
- Copy it to S3
- There is no step 3
That's it. The Lambda function automatically picks up the new file, extracts the metadata, and updates the database. The Go web server pulls the content directly from S3. Simple, elegant, and ridiculously efficient.
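For the curious, here's roughly the shape of that Lambda. This is a minimal sketch rather than my production code: the posts table, the title: front-matter line, and the POSTS_DSN environment variable are all illustrative assumptions.

```go
// A minimal sketch of an S3-triggered Lambda in Go. The "posts" table,
// the front-matter format, and the POSTS_DSN environment variable are
// assumptions for illustration, not the actual schema.
package main

import (
	"bufio"
	"context"
	"database/sql"
	"fmt"
	"os"
	"strings"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	_ "github.com/lib/pq" // Postgres driver for the RDS instance
)

func handler(ctx context.Context, evt events.S3Event) error {
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		return err
	}
	client := s3.NewFromConfig(cfg)

	db, err := sql.Open("postgres", os.Getenv("POSTS_DSN"))
	if err != nil {
		return err
	}
	defer db.Close()

	for _, rec := range evt.Records {
		// Note: keys arrive URL-encoded in real S3 events; decoding is skipped here for brevity.
		bucket, key := rec.S3.Bucket.Name, rec.S3.Object.Key

		// Fetch the freshly uploaded markdown file.
		obj, err := client.GetObject(ctx, &s3.GetObjectInput{
			Bucket: aws.String(bucket),
			Key:    aws.String(key),
		})
		if err != nil {
			return err
		}

		// Pull the title out of a "title: ..." front-matter line (assumed format).
		title := key
		scanner := bufio.NewScanner(obj.Body)
		for scanner.Scan() {
			if line := scanner.Text(); strings.HasPrefix(line, "title:") {
				title = strings.TrimSpace(strings.TrimPrefix(line, "title:"))
				break
			}
		}
		obj.Body.Close()

		// Upsert the post metadata; the web server reads the body from S3 on demand.
		_, err = db.ExecContext(ctx,
			`INSERT INTO posts (s3_key, title) VALUES ($1, $2)
			 ON CONFLICT (s3_key) DO UPDATE SET title = EXCLUDED.title`,
			key, title)
		if err != nil {
			return fmt.Errorf("upsert %s: %w", key, err)
		}
	}
	return nil
}

func main() {
	lambda.Start(handler)
}
```

The important part is the shape of the flow: S3 event in, metadata out to the database, with the markdown body staying in S3.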
The Cost Advantage of Serverless on ARM
Moving from EC2 to ECS Fargate on Graviton2 has cut my estimated costs by around 50%, possibly more. But the real savings come from the serverless architecture itself:
- Lambda: I only pay when a new blog post is uploaded, which is... let's be honest, not that often
- S3: Storage costs for text files are negligible
- RDS: Using the smallest available db.t4g.micro Graviton2 instance keeps database costs minimal
The ARM-based Graviton2 processors provide better price-performance than their x86 counterparts, which compounds these savings even further.
The One Gotcha: ARM64 Configuration
If there was one hiccup in this whole process, it was remembering to set the ECS Fargate task definition to use the ARM64 architecture. It's a simple setting, but easy to overlook:
```hcl
runtime_platform {
  operating_system_family = "LINUX"
  cpu_architecture        = "ARM64"
}
```
Miss this configuration, and you'll deploy to x86 by default, missing out on those sweet ARM efficiency gains.
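One cheap way to catch the mistake early, at least in a Go service, is to log the architecture the binary actually lands on at startup. A tiny, purely illustrative sketch:

```go
// A quick sanity check that could live in the Go server's startup path:
// log the architecture the binary is actually running on, so a forgotten
// ARM64 setting shows up immediately in the container logs.
package main

import (
	"log"
	"runtime"
)

func main() {
	log.Printf("running on %s/%s", runtime.GOOS, runtime.GOARCH) // expect linux/arm64 on Graviton2
	if runtime.GOARCH != "arm64" {
		log.Println("warning: not running on ARM64; check the ECS runtime_platform block")
	}
}
```

If the logs ever say amd64, the runtime_platform block didn't take.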
Baby Steps™ to Zero-Downtime Deployment
True to my development philosophy, I approached this migration in Baby Steps™:
- First, I built and tested the Lambda function to process S3 uploads
- Then, I set up the RDS database with the proper schema
- Next came the Go web server, developed and tested locally with Docker (sketched just after this list)
- I deployed to beta.convergence.ninja for thorough testing
- Finally, I cut over to convergence.ninja with zero downtime
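For step 3, here's a stripped-down sketch of how a Go handler can serve a post straight from S3. The bucket name, key layout, and /posts/ route are assumptions for illustration, not my actual setup.

```go
// A stripped-down sketch of a Go web server serving markdown straight from S3.
// The bucket name, key layout, and route are illustrative assumptions.
package main

import (
	"context"
	"io"
	"log"
	"net/http"
	"strings"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	client := s3.NewFromConfig(cfg)

	http.HandleFunc("/posts/", func(w http.ResponseWriter, r *http.Request) {
		// Map /posts/my-post to the markdown object my-post.md in the bucket.
		slug := strings.TrimPrefix(r.URL.Path, "/posts/")
		obj, err := client.GetObject(r.Context(), &s3.GetObjectInput{
			Bucket: aws.String("my-blog-content"), // assumed bucket name
			Key:    aws.String(slug + ".md"),
		})
		if err != nil {
			http.NotFound(w, r)
			return
		}
		defer obj.Body.Close()

		// Serve raw markdown here; in practice this is where it gets rendered to HTML.
		w.Header().Set("Content-Type", "text/markdown; charset=utf-8")
		if _, err := io.Copy(w, obj.Body); err != nil {
			log.Printf("stream %s: %v", slug, err)
		}
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Because the content lives in S3 rather than inside the image, deploying the web server and publishing new posts are fully decoupled.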
This methodical approach ensured each component worked before I moved on to the next. The result was a smooth transition with absolutely no service interruption.
The Real Winner: Simplified Deployments
While the cost savings are nice, the real game-changer has been the simplified deployment process. Publishing content is now frictionless - just copy a markdown file to S3 and I'm done. No more building containers, waiting for deployments, or managing complex infrastructure just to share my thoughts.
For someone who writes code all day, removing friction from sharing knowledge is invaluable. It means I'm more likely to write, which means I'm more likely to crystallize my thoughts, which leads to better understanding and retention.
Is Serverless on ARM Right for You?
If you're running a simple content site like a blog, the answer is almost certainly yes. The combination of serverless architecture and ARM processors offers a compelling mix of simplicity, cost-efficiency, and performance.
Even for more complex applications, the Baby Steps™ approach can help you migrate incrementally, testing each component thoroughly before committing fully. Start with the stateless parts of your application, then gradually move more complex components as you gain confidence.
In the end, technology choices should simplify your life, not complicate it. And for me, serverless on ARM has definitely been a step in the right direction.