OpsCanary

Managing Terraform State with S3: Best Practices

5 min read · HashiCorp Docs · Apr 27, 2026

Practitioner: Hands-on experience recommended

Managing Terraform state is a critical aspect of infrastructure as code. Storing state in S3 provides a centralized and durable solution, especially when working in teams. It prevents state file conflicts and allows for better collaboration. By using S3, you can also take advantage of features like versioning, which is essential for state recovery in case of accidental deletions.

The S3 backend stores state data as an object in an S3 bucket, at the location defined by the `bucket` and `key` parameters in your Terraform configuration. When using workspaces, the default workspace's state is stored at the configured `key` path, while other workspaces follow the pattern `<workspace_key_prefix>/<workspace_name>/<key>`, with `env:` as the default prefix; you can customize it with the `workspace_key_prefix` parameter. Additionally, enabling state locking with the `use_lockfile` parameter is advisable to prevent simultaneous writes, although it requires specific permissions on the lock file object.
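
A minimal sketch of a backend block combining these parameters (the bucket, key, and region reuse the article's example values; the `workspace_key_prefix` shown is simply the default, included for illustration):

```hcl
terraform {
  backend "s3" {
    bucket               = "mybucket"
    key                  = "path/to/my/key"
    region               = "us-east-1"
    workspace_key_prefix = "env:"  # default value, shown for clarity
    use_lockfile         = true    # S3-native state locking
  }
}

# With this configuration, a workspace named "staging" stores its
# state at: env:/staging/path/to/my/key
```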

In production, always enable Bucket Versioning on your S3 bucket to safeguard against accidental deletion of state. Be aware that using a lockfile requires `s3:GetObject`, `s3:PutObject`, and `s3:DeleteObject` permissions on the lock file object. Avoid hardcoding AWS credentials in your configuration; supply them through environment variables instead. Finally, DynamoDB-based locking is deprecated, so plan to migrate to S3-native locking in your infrastructure management strategy.
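
Bucket Versioning can be enabled from Terraform itself. A sketch using the AWS provider's `aws_s3_bucket_versioning` resource (the bucket name is a placeholder, and in practice the state bucket usually lives in a separate bootstrap configuration):

```hcl
# Bucket that will hold Terraform state (placeholder name).
resource "aws_s3_bucket" "tf_state" {
  bucket = "mybucket"
}

# Enable versioning so deleted or overwritten state objects
# can be recovered from earlier versions.
resource "aws_s3_bucket_versioning" "tf_state" {
  bucket = aws_s3_bucket.tf_state.id

  versioning_configuration {
    status = "Enabled"
  }
}
```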

Key takeaways

  • Configure the S3 backend with `bucket` and `key` parameters to store state effectively.
  • Utilize workspaces to isolate state for different environments or teams.
  • Enable Bucket Versioning on S3 to recover from accidental deletions.
  • Set `use_lockfile` to true to prevent concurrent state modifications.
  • Use environment variables for AWS credentials to enhance security.
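
As an illustration of workspace isolation, the `terraform.workspace` named value can drive per-environment settings within a single configuration (the resource and AMI values below are placeholders, not from the official docs):

```hcl
resource "aws_instance" "app" {
  ami = "ami-0123456789abcdef0" # placeholder AMI ID

  # Size instances per environment: larger in "prod", small elsewhere.
  instance_type = terraform.workspace == "prod" ? "m5.large" : "t3.micro"

  tags = {
    Environment = terraform.workspace
  }
}
```

Because each workspace has its own state object in S3, applying this configuration in `prod` and `staging` workspaces produces independent instances that never collide in state.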

Why it matters

Properly managing Terraform state in S3 can prevent costly downtime and configuration drift. It ensures that your infrastructure remains consistent and recoverable, which is critical in production environments.

Code examples

HCL

```hcl
terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "path/to/my/key"
    region = "us-east-1"
  }
}
```
JSON

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::mybucket",
      "Condition": {
        "StringEquals": {
          "s3:prefix": "path/to/my/key"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::mybucket/path/to/my/key"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::mybucket/path/to/my/key.tflock"
    }
  ]
}
```
HCL

```hcl
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "terraform-state-prod"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}
```
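
The outputs of the referenced state can then feed other resources. A sketch of consuming such an output (the `subnet_id` output name and AMI are hypothetical; the output must actually be declared in the network configuration's state):

```hcl
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"

  # Read a value exported by the network configuration's state.
  subnet_id = data.terraform_remote_state.network.outputs.subnet_id
}
```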

When NOT to use this

The official docs don't call out specific anti-patterns here. Use your judgment based on your scale and requirements.
