What Are Containers and Why Do They Matter?
Containers are lightweight, portable packages of software that bundle everything needed to run an application — code, dependencies, and runtime. Unlike traditional VMs, containers share the host OS kernel, making them faster to start and more resource-efficient.
Containers are ideal for:
- Microservices architecture
- CI/CD pipelines
- Portability across environments (dev → test → prod)
- Fast scaling and automation
Docker is the most widely used container engine, and AWS supports Docker containers across its container services.
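To make this concrete, a container image is typically described by a Dockerfile. A minimal sketch for a hypothetical Node.js web app (file names and the port are assumptions) might look like:

```dockerfile
# Minimal example Dockerfile (hypothetical Node.js app)
FROM node:20-alpine          # base image providing the runtime
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev        # install only production dependencies
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]    # entry point baked into the image
```

Building this produces an image that runs identically on a laptop, in CI, or on any of the AWS services covered below.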
Amazon Elastic Container Service (ECS)
Overview
ECS is AWS’s native container orchestration service. It manages how your Docker containers are deployed and scaled on AWS infrastructure.
Features
- Fully managed container orchestration
- Deep integration with AWS services (CloudWatch, IAM, ALB, Auto Scaling)
- Supports EC2 launch type (you manage the infrastructure) or Fargate launch type (AWS manages the infra)
- Task Definitions define how containers run
- ECS Cluster: logical group of container instances
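A task definition is a JSON document. A trimmed-down sketch for a Fargate task (account ID, image name, and sizes are placeholders) could look roughly like this:

```json
{
  "family": "web-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web-app:latest",
      "portMappings": [{ "containerPort": 80 }],
      "essential": true
    }
  ]
}
```

ECS then runs one or more copies of this task definition as a service inside a cluster.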
Use Cases
- Microservices architectures on AWS
- Cost-sensitive workloads (when using EC2 launch type)
- Applications needing deep AWS integration
Limitations
- Not suitable if you want multi-cloud portability or native Kubernetes
- Learning curve around task definitions and cluster setup
ECS Service Auto Scaling: Smarter Load Handling
When running containers in ECS, it’s not enough to simply deploy them — you also want them to automatically adjust to changes in load. That’s where ECS Service Auto Scaling comes in. It adjusts the number of running tasks in your ECS service based on demand.
But this is not the same as EC2 Auto Scaling, which scales the number of EC2 instances in your cluster. ECS Service Auto Scaling deals with tasks, while EC2 Auto Scaling deals with instances. Both can work together, especially if you’re using the EC2 launch type.
ECS Service Auto Scaling can scale your tasks using:
- Target Tracking: The most commonly used and easiest method. You specify a metric and a target value (e.g., keep CPU utilization at 50%). ECS adjusts the number of tasks automatically to maintain this.
- Step Scaling: You define specific metric thresholds and how many tasks to add/remove at each threshold (e.g., add 2 tasks if CPU > 70%). More granular but also more complex to manage.
- Scheduled Scaling: You define a schedule when to scale up or down — useful for predictable workloads (e.g., scale up weekdays at 9 AM, scale down at 6 PM).
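The intuition behind target tracking is proportional: if the metric is double the target, roughly double the tasks are needed. A minimal Python sketch of that rule (the real Application Auto Scaling algorithm also applies cooldowns, scale-in protection, and min/max limits you configure):

```python
import math

def target_tracking_desired(current_tasks, current_metric, target_metric,
                            min_tasks=1, max_tasks=10):
    """Proportional estimate behind target tracking: scale the task
    count in proportion to how far the metric is from its target."""
    desired = math.ceil(current_tasks * current_metric / target_metric)
    # Clamp to the service's configured task limits
    return max(min_tasks, min(max_tasks, desired))

# 4 tasks at 80% CPU with a 50% target: 4 * 80/50 = 6.4, rounded up
print(target_tracking_desired(4, 80.0, 50.0))  # 7
```

This is why target tracking is the easiest option: you pick one number (the target) and the service does the arithmetic for you.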
ECS Service Auto Scaling uses CloudWatch metrics such as:
- CPUUtilization and MemoryUtilization per service
- Custom metrics (e.g., queue depth from SQS)
- ALB request count per target
These metrics are published to CloudWatch, either by ECS itself or by linked services such as the ALB.
Capacity Providers give ECS more control over how infrastructure is used when scaling. There are two main options:
- Fargate Capacity Provider: No EC2 management. ECS launches Fargate tasks as needed.
- EC2 Capacity Provider: You define Auto Scaling Groups (ASGs) as capacity providers. ECS ensures there’s enough infrastructure (EC2 instances) to place tasks.
With capacity providers, you can also set weighting and base strategies — for example, you could tell ECS to place 80% of tasks on EC2 and 20% on Fargate.
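An 80/20 split like that could be expressed in a service's capacity provider strategy. A sketch (provider name is a placeholder; weights are relative, so 4:1 gives the 80/20 split):

```json
"capacityProviderStrategy": [
  { "capacityProvider": "my-ec2-provider", "base": 1, "weight": 4 },
  { "capacityProvider": "FARGATE", "weight": 1 }
]
```

`base` guarantees a minimum number of tasks on that provider before the weights kick in.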
If you’re using the EC2 launch type, you must also manage scaling of the EC2 instances themselves using Auto Scaling Groups:
- Define launch templates
- Enable ECS Cluster integration
- Use scaling policies based on metrics like CPUUtilization or ECSServiceAverageCPUUtilization
- Register ASG as a capacity provider in ECS
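The registration step roughly takes input like the following, usable as `--cli-input-json` for `aws ecs create-capacity-provider` (the name and ASG ARN are placeholders):

```json
{
  "name": "my-ec2-provider",
  "autoScalingGroupProvider": {
    "autoScalingGroupArn": "arn:aws:autoscaling:us-east-1:123456789012:autoScalingGroup:example",
    "managedScaling": { "status": "ENABLED", "targetCapacity": 100 },
    "managedTerminationProtection": "ENABLED"
  }
}
```

With managed scaling enabled, ECS adjusts the ASG's desired capacity for you so there is always room to place new tasks.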
Task Roles: Giving Containers Secure Access to AWS Services
Sometimes your containerized app running in ECS needs to access other AWS services — for example, writing logs to S3, sending metrics to CloudWatch, or querying DynamoDB. Instead of hardcoding credentials (which is a security risk), ECS allows you to attach an IAM role to the task — known as a task role.
A task role is defined in your task definition and is used to grant permissions to the containers at runtime. The ECS agent retrieves temporary credentials tied to this role and injects them into the task, so your containers can securely call AWS APIs.
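In the task definition itself, the task role is just an ARN field; note it is distinct from the execution role, which the ECS agent uses for its own work such as pulling images and shipping logs. A sketch (role names and account ID are hypothetical):

```json
{
  "family": "web-app",
  "taskRoleArn": "arn:aws:iam::123456789012:role/web-app-task-role",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    { "name": "web", "image": "web-app:latest", "essential": true, "memory": 512 }
  ]
}
```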
How it differs from a regular IAM role:
| Feature | IAM Role (General) | ECS Task Role |
| --- | --- | --- |
| Scope | Can be assumed by users, apps, EC2, Lambda | Only assumed by ECS tasks |
| Defined In | IAM or CloudFormation | ECS task definition |
| Permissions Applied To | The AWS service that assumes the role | All containers within a single ECS task |
| Credential Delivery | Via STS or service integration | Delivered securely via ECS agent |
This lets you follow best practices: least privilege, no hardcoded secrets, and scoped access per task.
Also note: each ECS task can have its own role. So in a microservices architecture with multiple services in a cluster, each task can be granted only the permissions it needs.
Security
- IAM controls access to ECS resources
- Use task roles to let containers access AWS services securely
- Secrets can be injected via AWS Secrets Manager or Systems Manager Parameter Store
- Containers are isolated from one another at runtime
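Secret injection is configured per container in the task definition: the app sees an ordinary environment variable, while the value itself never leaves Secrets Manager. A sketch (secret name and ARN are placeholders):

```json
"secrets": [
  {
    "name": "DB_PASSWORD",
    "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:prod/db-password"
  }
]
```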
Pricing
- ECS with EC2 launch type: You pay for the EC2 instances running your containers
- ECS with Fargate launch type: You pay per vCPU and GB of memory per second
| Launch Type | Pricing Model | Managed Infra? | Use Case |
| --- | --- | --- | --- |
| EC2 | EC2 instance hours | You manage | Full control, custom setup |
| Fargate | Per-second CPU/memory | AWS-managed | Simplified, pay-as-you-go model |
Amazon Elastic Kubernetes Service (EKS)
Overview
EKS is AWS’s managed Kubernetes service. If you’re already using Kubernetes or want to standardize across clouds, EKS makes sense.
Features
- Runs upstream Kubernetes
- Supports both EC2 and Fargate
- Control plane is managed by AWS
- Integrates with IAM, CloudWatch, ALB, etc.
- Works with Kubernetes tools (kubectl, Helm, etc.)
- In terms of data storage, it supports:
- EBS
- EFS (the only storage type that works with Fargate)
- FSx for Lustre and FSx for NetApp ONTAP
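Storage is typically consumed through Kubernetes objects rather than AWS APIs directly. For example, a pod can claim EBS-backed storage via a PersistentVolumeClaim against the EBS CSI driver (the StorageClass name is an assumption; it depends on your cluster setup):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]   # EBS volumes attach to a single node
  storageClassName: ebs-sc         # assumed StorageClass for the EBS CSI driver
  resources:
    requests:
      storage: 10Gi
```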
Use Cases
- Multi-cloud containerized apps
- Organizations with existing Kubernetes expertise
- Migrating from self-managed Kubernetes clusters
Node Types
In EKS, containers run inside pods, and those pods need to run on worker nodes. There are three node types available, each with different levels of control, flexibility, and management.
1. Self-managed EC2 nodes
- You manually provision EC2 instances and connect them to your EKS cluster.
- Full control over instance configuration and lifecycle.
- You manage updates, scaling, and health — high operational burden.
- Use when you need custom AMIs, special networking, or GPU support.
2. Managed Node Groups (MNG)
- AWS provisions and manages EC2 instances for you.
- Integrated with Auto Scaling Groups.
- Lower operational overhead — AWS handles updates, health checks, and node replacement.
- Still uses EC2, so pricing is per instance.
3. AWS Fargate
- Serverless option — no EC2 provisioning at all.
- Each pod runs in its own isolated compute environment.
- Simplest to manage, but less flexible (no DaemonSets, limited configuration)
- Pricing is per vCPU and memory per second — can be costly for long-running workloads.
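Which pods land on Fargate is controlled by a Fargate profile. With eksctl, a profile that sends one namespace's pods to Fargate can be declared roughly like this (cluster, region, and namespace names are placeholders):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster
  region: us-east-1
fargateProfiles:
  - name: fp-default
    selectors:
      - namespace: app    # pods in this namespace run on Fargate
```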
Quick Comparison Table
| Node Type | You Manage Infra? | Use Cases | Key Limitation |
| --- | --- | --- | --- |
| EC2 (self-managed) | Yes | Full control, custom setups | High ops overhead |
| Managed Node Group | Partial | General workloads | Less customizable than self-managed |
| Fargate | No | Simple apps, low ops | Limited features, higher cost |
Limitations
- Steeper learning curve if you’re new to Kubernetes
- More complex than ECS for small/simple workloads
Security
- Kubernetes RBAC and IAM integration
- Runs control plane in a managed VPC
- Supports encrypted secrets, service mesh (via Istio)
Pricing
- $0.10 per hour per EKS cluster
- Plus cost of worker nodes (EC2 or Fargate)
| Feature | EKS | ECS |
| --- | --- | --- |
| Orchestration | Kubernetes | AWS-native |
| Multi-cloud | Yes | No |
| Complexity | Higher | Lower |
| Cost (control) | Less granular | More granular (ECS+EC2) |
| Ecosystem | Kubernetes-native | AWS-native |
AWS App Runner
Overview
App Runner is a fully managed service for deploying containerized web apps and APIs directly from source code or container images.
Features
- No infrastructure to manage
- Auto builds and deploys from GitHub or ECR
- Scales automatically based on traffic
- HTTPS out of the box, load balancing built-in
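For source-based deploys, App Runner reads build and run settings from an `apprunner.yaml` in the repository. A sketch for a hypothetical Node.js service (runtime version, commands, and port are assumptions):

```yaml
version: 1.0
runtime: nodejs16
build:
  commands:
    build:
      - npm ci --omit=dev
run:
  command: node server.js
  network:
    port: 3000
```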
Use Cases
- Rapid deployment of web apps without DevOps expertise
- Startups, prototypes, internal tools
- Developers moving from PaaS like Heroku
Limitations
- Less customizable compared to ECS/EKS
- Limited to web-facing HTTP applications
- No granular control over underlying infrastructure
Security
- IAM for App Runner permissions
- HTTPS with managed certificates
- Supports VPC integration for private endpoints
Pricing
- You pay for compute and memory used, plus active request duration
- Automatically scales up/down based on load
| Feature | App Runner | ECS + Fargate | EKS + Fargate |
| --- | --- | --- | --- |
| Use Case | Web apps, APIs | Custom workloads | Kubernetes workloads |
| Infra Management | None | Minimal | Medium |
| Scaling | Auto | Auto/manual | Auto/manual |
| Pricing Simplicity | High | Medium | Lower |
| Customizability | Low | High | Highest |
Amazon Elastic Container Registry (ECR)
Overview
ECR is AWS’s managed Docker container registry. It stores your container images so they can be used by ECS, EKS, or App Runner.
Features
- Push and pull Docker images using Docker CLI
- Integrated with IAM and AWS services
- Scans images for vulnerabilities
- Supports public and private repositories
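A common housekeeping step is attaching a lifecycle policy so old images don't accumulate storage costs. A sketch that keeps only the ten most recent images (the count is illustrative):

```json
{
  "rules": [
    {
      "rulePriority": 1,
      "description": "Keep only the 10 most recent images",
      "selection": {
        "tagStatus": "any",
        "countType": "imageCountMoreThan",
        "countNumber": 10
      },
      "action": { "type": "expire" }
    }
  ]
}
```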
Use Cases
- Central repo for container images
- CI/CD pipelines storing images pre-deploy
- App Runner image source
Limitations
- Requires AWS IAM and roles for authentication and access control
- Slightly higher learning curve than Docker Hub for newcomers
Security
- IAM and repository permissions
- Image scanning for vulnerabilities
- Encryption at rest
Pricing
- Pay for data storage and data transfer
- First 500MB/month of storage is free
AWS App2Container
Overview
App2Container is a CLI tool that helps convert legacy applications (Java/.NET) running on VMs to containerized versions.
Features
- Scans and analyzes existing apps
- Generates Dockerfile and ECS/EKS deployment artifacts
- Speeds up container migration
Use Cases
- Lift-and-shift from on-prem to container on AWS
- Enterprises modernizing legacy apps
Limitations
- Limited to Java and .NET applications
- Requires app analysis and possible rework
Security
- Security depends on generated Dockerfile practices
- Post-conversion, you can apply container security best practices
Pricing
- App2Container itself is free
- You pay for the AWS services used (like ECS, EKS, ECR)
When to Use What?
| Service | Best For | Infra Managed | Use Case |
| --- | --- | --- | --- |
| ECS | AWS-native apps with deep service integration | Partial/full | Custom container workflows |
| EKS | Kubernetes workloads and multi-cloud needs | Partial/full | Standardized K8s environments |
| App Runner | Fast deploy of HTTP apps without ops overhead | Fully | Web apps, APIs, fast startup |
| ECR | Hosting container images | N/A | Source for ECS/EKS/App Runner |
| App2Container | Migrating legacy Java/.NET apps | N/A | Replatform legacy apps to AWS |