Day 14 of the #30DayTerraformChallenge
Today was about understanding one of the most important parts of how Terraform actually works under the hood — the provider system.
Until now I have been writing provider "aws" at the top of every configuration without really thinking about it. Today I went deep on what that actually does, how providers get installed and versioned, and — most importantly — how to deploy resources across multiple AWS regions using provider aliases.
This is the foundation for any real multi-region architecture.
A provider is a plugin that translates your Terraform resource declarations into real API calls for a specific platform.
When you write:
resource "aws_instance" "web" {
  instance_type = "t3.micro"
}
Terraform does not know how to create an EC2 instance on its own. The AWS provider plugin does. It takes your resource block, calls the AWS EC2 API, and returns the result.
Think of Terraform as a universal remote control. Providers are the batteries — without the right provider installed, Terraform cannot do anything for that platform.
Providers exist for AWS, Azure, GCP, Kubernetes, GitHub, Datadog, PagerDuty, and hundreds of other platforms. They all work the same way — you declare them, Terraform downloads them, and they handle the API calls.
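For example, a single configuration can declare several of these side by side. A sketch (the GitHub provider's official Registry source is integrations/github; versions shown are illustrative):

```hcl
terraform {
  required_providers {
    # AWS resources (aws_instance, aws_s3_bucket, ...)
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    # GitHub resources (github_repository, github_team, ...)
    github = {
      source  = "integrations/github"
      version = "~> 6.0"
    }
  }
}
```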
When you run terraform init, Terraform reads your required_providers block and downloads the correct provider binary from the Terraform Registry:
Initializing provider plugins...
- Finding hashicorp/aws versions matching "~> 5.0"...
- Installing hashicorp/aws v5.31.0...
- Installed hashicorp/aws v5.31.0 (signed by HashiCorp)
Terraform has been successfully initialized!
The provider binary is downloaded into .terraform/providers/ in your project directory. This is why .terraform/ is in your .gitignore — provider binaries are large, platform-specific, and regenerated by terraform init.
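A common baseline .gitignore for a Terraform project (not exhaustive — note that the lock file is deliberately not ignored):

```gitignore
# Provider binaries and module cache — regenerated by terraform init
.terraform/

# Local state and backups — never commit these
terraform.tfstate
terraform.tfstate.backup

# Note: .terraform.lock.hcl is NOT listed here — commit it
```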
Without version pinning, Terraform downloads the latest available provider version every time someone runs terraform init. A major version release with breaking changes could silently break your configuration.
Always declare your required providers with a version constraint:
terraform {
  required_version = ">= 1.5.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}
| Constraint | Meaning |
|---|---|
| `= 5.0.0` | Exactly version 5.0.0 — nothing else |
| `>= 5.0` | Version 5.0 or higher — including future major versions |
| `~> 5.0` | Version >= 5.0 and < 6.0 — safe for minor updates |
| `~> 5.31.0` | Version >= 5.31.0 and < 5.32.0 — very strict |
| `>= 5.0, < 6.0` | Same as ~> 5.0 but written explicitly |
The ~> operator (called the pessimistic constraint operator) is the most common choice. ~> 5.0 allows patch and minor version updates but prevents major version upgrades that might contain breaking changes.
After running terraform init, Terraform creates a .terraform.lock.hcl file in your project directory. This is your provider lock file.
# This file is maintained automatically by "terraform init".
# Manual edits may be lost in future updates.
provider "registry.terraform.io/hashicorp/aws" {
  version     = "5.31.0"
  constraints = "~> 5.0"
  hashes = [
    "h1:rgRdPMEhCDRjLXrBRMXSHks2l2BRMdYnmDnEBLqEsYM=",
    "zh:0843ca2b2f18c90b26cf41c0a0aa3cad6be38be8c7d9f77b8dc33e6a27e58cad",
    "zh:1c6a79fc3b1a65a5d41b3e9ef0d45b8ccf52dfbd36e8776e01bd4af1c32bfba8",
  ]
}
Each field means:
| Field | What It Records |
|---|---|
| `version` | The exact version that was installed |
| `constraints` | The constraint from `required_providers` |
| `hashes` | Cryptographic checksums of the provider binary |
Why commit this file to Git:
The lock file ensures every team member and every CI system installs the exact same provider version. Without it, two engineers running terraform init on different days might get different provider versions. The lock file eliminates that inconsistency — it is the provider equivalent of a package-lock.json in Node.js or a Pipfile.lock in Python.
# Always commit the lock file
git add .terraform.lock.hcl
git commit -m "Pin AWS provider to v5.31.0"
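One related tip: by default the lock file only records hashes for the platform you ran terraform init on. If teammates or CI run on other operating systems, the terraform providers lock command can pre-record hashes for all of them (the platform names below are the common ones; adjust for your team):

```shell
# Record provider checksums for every platform your team uses,
# so terraform init can verify them on any OS
terraform providers lock \
  -platform=linux_amd64 \
  -platform=darwin_arm64 \
  -platform=windows_amd64
```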
By default, the single provider "aws" block applies to every resource in your configuration. To deploy resources in a second region, you define an aliased provider.
# Default provider — all resources use this unless specified otherwise
provider "aws" {
  region = "us-east-1"
}

# Aliased provider — only used by resources that explicitly reference it
provider "aws" {
  alias  = "us_west"
  region = "us-west-2"
}
The first provider has no alias — it is the default. The second has alias = "us_west" — resources must explicitly reference it to use it.
# Deploys in us-east-1 — uses the default provider
resource "aws_s3_bucket" "primary" {
  bucket = "my-app-primary-bucket-2026"
}

# Deploys in us-west-2 — uses the aliased provider
resource "aws_s3_bucket" "replica" {
  provider = aws.us_west # ← references the alias
  bucket   = "my-app-replica-bucket-2026"
}
The provider = aws.us_west argument tells Terraform: “use the provider with alias us_west for this resource.” When Terraform creates this resource, it calls the AWS API endpoint for us-west-2 instead of us-east-1.
Here is a complete multi-region deployment that creates a primary S3 bucket in us-east-1 and a replica in us-west-2, with automatic replication configured between them.
providers.tf:
terraform {
  required_version = ">= 1.5.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

# Primary region
provider "aws" {
  region = "us-east-1"
}

# Secondary region for replication
provider "aws" {
  alias  = "us_west"
  region = "us-west-2"
}
main.tf:
# --- PRIMARY BUCKET (us-east-1) ---
resource "aws_s3_bucket" "primary" {
  bucket = "my-app-primary-bucket-2026"

  tags = {
    Name   = "Primary Bucket"
    Region = "us-east-1"
  }
}

# Enable versioning on primary — required for replication
resource "aws_s3_bucket_versioning" "primary" {
  bucket = aws_s3_bucket.primary.id

  versioning_configuration {
    status = "Enabled"
  }
}
# --- REPLICA BUCKET (us-west-2) ---
resource "aws_s3_bucket" "replica" {
  provider = aws.us_west # ← deploys in us-west-2
  bucket   = "my-app-replica-bucket-2026"

  tags = {
    Name   = "Replica Bucket"
    Region = "us-west-2"
  }
}

# Enable versioning on replica — required for replication destination
resource "aws_s3_bucket_versioning" "replica" {
  provider = aws.us_west # ← must match the bucket's provider
  bucket   = aws_s3_bucket.replica.id

  versioning_configuration {
    status = "Enabled"
  }
}
# --- IAM ROLE FOR REPLICATION ---
data "aws_iam_policy_document" "replication_assume_role" {
  statement {
    effect = "Allow"

    principals {
      type        = "Service"
      identifiers = ["s3.amazonaws.com"]
    }

    actions = ["sts:AssumeRole"]
  }
}

resource "aws_iam_role" "replication" {
  name               = "s3-replication-role"
  assume_role_policy = data.aws_iam_policy_document.replication_assume_role.json
}
data "aws_iam_policy_document" "replication_policy" {
  statement {
    effect = "Allow"
    actions = [
      "s3:GetReplicationConfiguration",
      "s3:ListBucket",
    ]
    resources = [aws_s3_bucket.primary.arn]
  }

  statement {
    effect = "Allow"
    actions = [
      "s3:GetObjectVersionForReplication",
      "s3:GetObjectVersionAcl",
      "s3:GetObjectVersionTagging",
    ]
    resources = ["${aws_s3_bucket.primary.arn}/*"]
  }

  statement {
    effect = "Allow"
    actions = [
      "s3:ReplicateObject",
      "s3:ReplicateDelete",
      "s3:ReplicateTagging",
    ]
    resources = ["${aws_s3_bucket.replica.arn}/*"]
  }
}

resource "aws_iam_role_policy" "replication" {
  name   = "s3-replication-policy"
  role   = aws_iam_role.replication.id
  policy = data.aws_iam_policy_document.replication_policy.json
}
# --- REPLICATION CONFIGURATION ---
resource "aws_s3_bucket_replication_configuration" "replication" {
  # Depends on versioning being enabled first
  depends_on = [aws_s3_bucket_versioning.primary]

  role   = aws_iam_role.replication.arn
  bucket = aws_s3_bucket.primary.id

  rule {
    id     = "replicate-all"
    status = "Enabled"

    destination {
      bucket        = aws_s3_bucket.replica.arn
      storage_class = "STANDARD"
    }
  }
}
# --- OUTPUTS ---
output "primary_bucket_name" {
  value       = aws_s3_bucket.primary.id
  description = "Name of the primary S3 bucket in us-east-1"
}

output "replica_bucket_name" {
  value       = aws_s3_bucket.replica.id
  description = "Name of the replica S3 bucket in us-west-2"
}
After terraform apply, any object uploaded to the primary bucket in us-east-1 is automatically replicated to us-west-2. If us-east-1 has an outage, the data is safe in the replica region.
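To see it working, you can upload a test object and check that it lands in the replica (bucket names match the example above; replication is asynchronous, so allow a short delay):

```shell
# Upload to the primary bucket in us-east-1
aws s3 cp test.txt s3://my-app-primary-bucket-2026/test.txt

# After a short delay, the same object should appear in the replica
aws s3 ls s3://my-app-replica-bucket-2026/ --region us-west-2
```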
For organisations with separate AWS accounts for staging and production, you can deploy to multiple accounts in a single Terraform configuration using assume_role:
# Deploy to production account
provider "aws" {
  alias  = "production"
  region = "us-east-1"

  assume_role {
    role_arn = "arn:aws:iam::111111111111:role/TerraformDeployRole"
  }
}

# Deploy to staging account
provider "aws" {
  alias  = "staging"
  region = "us-east-1"

  assume_role {
    role_arn = "arn:aws:iam::222222222222:role/TerraformDeployRole"
  }
}
# Resource in production account
resource "aws_s3_bucket" "prod_bucket" {
  provider = aws.production
  bucket   = "my-app-production-bucket-2026"
}

# Resource in staging account
resource "aws_s3_bucket" "staging_bucket" {
  provider = aws.staging
  bucket   = "my-app-staging-bucket-2026"
}
What assume_role does: When Terraform calls the AWS API for a resource that uses this provider, it first calls sts:AssumeRole to get temporary credentials for the specified role in the target account. Those temporary credentials are then used for all API calls to that account.
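You can see the same mechanism from the CLI — this is roughly the call the provider makes behind the scenes (role ARN taken from the example above):

```shell
# Request temporary credentials for the production deploy role;
# the response contains an access key, secret key, and session token
aws sts assume-role \
  --role-arn arn:aws:iam::111111111111:role/TerraformDeployRole \
  --role-session-name terraform-session
```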
IAM permissions required for TerraformDeployRole:
The role in each account needs:

- A trust policy granting sts:AssumeRole to the calling account, so that account is allowed to assume it.

The calling IAM user (your terraform-dev user) also needs permission to call sts:AssumeRole on the target roles.
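A sketch of that trust policy in Terraform — the account ID 999999999999 is a placeholder for whichever account your terraform-dev user lives in:

```hcl
data "aws_iam_policy_document" "terraform_assume" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRole"]

    principals {
      type = "AWS"
      # Placeholder: the account that runs Terraform
      identifiers = ["arn:aws:iam::999999999999:root"]
    }
  }
}

resource "aws_iam_role" "terraform_deploy" {
  name               = "TerraformDeployRole"
  assume_role_policy = data.aws_iam_policy_document.terraform_assume.json
}
```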
When you do not specify provider on a resource, Terraform uses the default provider for that resource type. The default provider is the one with no alias.
The resolution order:

1. An explicit provider argument — provider = aws.us_west — always wins.
2. Otherwise, Terraform infers the provider name from the resource type prefix — aws_instance → looks for the aws provider.
3. That name resolves to the aws provider with no alias — the default.

If there is no default provider (all providers have aliases), Terraform requires every resource to have an explicit provider argument. This is a common source of errors when refactoring configurations to use aliases.
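The same resolution rules apply to child modules — a module uses the default provider unless you pass an aliased one explicitly through the providers map. A sketch (the module path is hypothetical):

```hcl
module "replica_resources" {
  source = "./modules/storage" # hypothetical module path

  # Inside the module, "aws" now refers to the us-west-2 provider
  providers = {
    aws = aws.us_west
  }
}
```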
Error: creating S3 Bucket (my-app-replica-bucket-2026):
BucketAlreadyExists: The requested bucket name is not available.
The bucket namespace is shared by all users of the system.
What happened: S3 bucket names are globally unique across all AWS accounts and all regions worldwide. A name that seems unique to you may already be taken by someone else anywhere in the world.
Fix: Added more specific suffixes to make the names unique:
bucket = "my-app-primary-yourname-2026"
bucket = "my-app-replica-yourname-2026"
Or use a random suffix generated by Terraform:
resource "random_id" "bucket_suffix" {
  byte_length = 4
}

resource "aws_s3_bucket" "primary" {
  bucket = "my-app-primary-${random_id.bucket_suffix.hex}"
}
Missing provider Argument on the Versioning Resource

After creating the replica bucket with provider = aws.us_west, I forgot to add the same provider argument to the aws_s3_bucket_versioning resource for the replica:
# Wrong — missing provider argument
resource "aws_s3_bucket_versioning" "replica" {
  bucket = aws_s3_bucket.replica.id # replica bucket is in us-west-2
  # but this resource uses the default us-east-1 provider

  versioning_configuration {
    status = "Enabled"
  }
}
Error:
Error: reading S3 Bucket (my-app-replica-bucket-2026):
operation error S3: HeadBucket,
https response error StatusCode: 301,
requested bucket from "us-east-1", actual location "us-west-2"
What happened: The bucket exists in us-west-2 but the versioning resource was calling the us-east-1 API endpoint. AWS returned a 301 redirect saying “this bucket is in a different region.”
Fix: Added provider = aws.us_west to all resources that reference the replica bucket:
resource "aws_s3_bucket_versioning" "replica" {
  provider = aws.us_west # ← must match the bucket's region
  bucket   = aws_s3_bucket.replica.id

  versioning_configuration {
    status = "Enabled"
  }
}
General rule: Any resource that interacts with a resource in a non-default region must use the same aliased provider as that resource.
Error: putting S3 Replication Configuration:
InvalidRequest: Versioning must be enabled on the source bucket.
What happened: I tried to configure replication before the versioning resource had been applied. Terraform was creating the replication configuration at the same time as the versioning — but AWS requires versioning to be fully enabled before replication can be configured.
Fix: Added an explicit depends_on to the replication configuration:
resource "aws_s3_bucket_replication_configuration" "replication" {
  depends_on = [aws_s3_bucket_versioning.primary] # ← wait for versioning first

  role   = aws_iam_role.replication.arn
  bucket = aws_s3_bucket.primary.id
  ...
}
Terraform usually figures out dependency ordering from resource references. But since the replication configuration references the bucket ID (not the versioning resource), Terraform did not automatically know to wait. The explicit depends_on makes the dependency clear.
After running terraform init -upgrade to get the latest AWS provider, my .terraform.lock.hcl was updated with a new version. When a teammate ran terraform plan without running terraform init first, they got:
Error: Provider requirements cannot be satisfied by locked dependencies
The following required providers are not installed:
- registry.terraform.io/hashicorp/aws (required by the root module)
Please run "terraform init" to install the necessary providers.
What happened: The lock file had been updated to a new provider version but the teammate’s local .terraform/providers/ cache still had the old version.
Fix: The teammate ran terraform init and the new provider version was downloaded. This is expected behaviour — whenever the lock file changes (because someone ran terraform init -upgrade), everyone on the team needs to run terraform init to get the updated provider.
Best practice: When you update provider versions, communicate it to your team. Consider adding a note to the pull request description when the lock file changes.
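In CI you can make this failure mode explicit. The -lockfile=readonly flag tells terraform init to refuse any change to the lock file, so a stale or missing entry fails the pipeline instead of being silently rewritten:

```shell
# Fail the build if .terraform.lock.hcl does not already satisfy
# the provider requirements, instead of updating it in place
terraform init -lockfile=readonly
```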
Key takeaways:

- terraform init downloads providers from the Terraform Registry into .terraform/providers/
- Pin provider versions with ~> 5.0 to allow minor updates but prevent breaking major version changes
- assume_role allows a single Terraform configuration to deploy to multiple AWS accounts
- depends_on is sometimes needed when Terraform cannot infer a dependency from resource references alone

Part of the #30DayTerraformChallenge with AWS AI/ML UserGroup Kenya, Meru HashiCorp User Group, and EveOps.