In today’s blog post, we will discuss the Terraform `import` block, which is used to import existing infrastructure into the state file. Normally, Terraform records the configuration of all the infrastructure it manages in the state file. But what if the infrastructure was changed outside Terraform? The state file will not reflect those outside changes, and Terraform will detect drift (a difference between the infrastructure configuration recorded in the state/configuration files and the actual infrastructure configuration).
There are two possible ways to resolve this drift in Terraform: either remove the affected configuration from the Terraform configuration and state file, or import the actual configuration into them.
What is Terraform Import?
At its core, Terraform import is a mechanism to tell Terraform about existing infrastructure that it did not create. It reads the configuration of a real-world resource (e.g., an EC2 instance, an S3 bucket, an Azure VM) and records it in your Terraform state file. Once an object is in the state file, Terraform starts managing it, and subsequent `terraform plan` and `terraform apply` operations will account for it.
There are two ways to import a configuration into the Terraform state file:
- Using the `terraform import` command (supported in all Terraform versions)
- Using the `import` block (available from Terraform v1.5 onwards)
And the workflow generally involves these steps:
- Define the missing configuration in `.tf` files.
- Import the missing configuration using the `terraform import` command or the `import` block.
- Run `terraform plan` and `terraform apply` to validate and apply the changes.
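The three steps can be sketched in one file; the resource name and instance ID below are hypothetical, and the CLI commands are shown as comments:

```hcl
# main.tf -- step 1: a stub resource block for the existing object
resource "aws_instance" "legacy_web" {
  # Arguments are filled in after the import, by inspecting
  # the state (e.g., with `terraform state show aws_instance.legacy_web`).
}

# Step 2 (CLI variant), run from the shell:
#   terraform import aws_instance.legacy_web i-0abc123def4567890
# Step 3: iterate on `terraform plan` until it reports "No changes."
```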
Note: Terraform import (whether the CLI command or the `import` block) does NOT generate your Terraform configuration (`.tf` files) automatically. It only updates the state file. You must first write the corresponding resource block in your HCL configuration that Terraform will use to manage the imported object.
Why is Terraform Import Essential?
- IaC Adoption for Existing Infrastructure: It allows you to gradually bring manually created or legacy infrastructure under Terraform’s control, enabling consistent management, version control, and automation.
- Preventing Conflicts: Without import, if you write HCL for an existing resource and run `terraform apply`, Terraform will attempt to create a new resource, leading to an “already exists” error. Import tells Terraform, “This resource already exists, just manage it for me.”
- Disaster Recovery (Advanced): In certain advanced DR scenarios, if a state file is lost but the infrastructure remains, import can be part of a reconciliation strategy (though it is not recommended as a primary DR strategy).
The `terraform import` Workflow (CLI Command): Step-by-Step
The older process of importing a resource involves these steps. While the declarative `import` block is often preferred for new operations, understanding the CLI command is still valuable for ad-hoc scenarios or older Terraform versions.
Step 1: Identify the Resource to Import
Before you touch any Terraform files, identify the exact resource you want to import in your cloud provider’s console or via its CLI. You will need its unique identifier (ID, ARN, or name, depending on the resource type).
Example: an existing AWS S3 bucket named `my-existing-manual-bucket-2025-06-11`.
Step 2: Write the Corresponding Terraform Configuration
Create a resource block in your HCL configuration that matches the type of the existing resource that you want to import. You do not need to fill in all the arguments initially; a minimal configuration with required arguments is often sufficient to get the import working.
`main.tf` (before import):
```hcl
# main.tf
resource "aws_s3_bucket" "my_imported_bucket" {
  # We will fill in the actual configuration after import.
  # For import, just provide the resource type and logical name.
}

provider "aws" {
  region = "us-east-1"
}
```
Step 3: Execute the `terraform import` Command
Use the `terraform import` command, specifying the Terraform resource address (your HCL resource block’s type and name) and the actual ID of the existing resource in your cloud provider.
Command Syntax:
```shell
terraform import <TERRAFORM_RESOURCE_ADDRESS> <CLOUD_RESOURCE_ID>
```
For our S3 bucket example:
```shell
terraform import aws_s3_bucket.my_imported_bucket my-existing-manual-bucket-2025-06-11
```
Example Output:

```
aws_s3_bucket.my_imported_bucket: Importing from ID "my-existing-manual-bucket-2025-06-11"...
aws_s3_bucket.my_imported_bucket: Import prepared!
  Prepared aws_s3_bucket for import
aws_s3_bucket.my_imported_bucket: Refreshing state... [id=my-existing-manual-bucket-2025-06-11]

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
```
Explanation: Terraform connects to your cloud provider, reads the configuration of the existing S3 bucket with the ID `my-existing-manual-bucket-2025-06-11`, and records all its attributes under the `aws_s3_bucket.my_imported_bucket` address in your state file.
Step 4: Verify the Import and Update Your Configuration
After a successful import, run `terraform plan`.

```shell
terraform plan
```
What to expect in the plan output:
- “No changes. Your infrastructure matches the configuration.”: This is the ideal outcome. It means your minimal HCL configuration exactly matches the imported resource’s state. You can then add more arguments to your HCL as needed.
- Changes (Modifications, Additions, or Removals): More commonly, `terraform plan` will show that Terraform wants to make changes to the imported resource. This happens if:
  - Your HCL configuration is incomplete (missing attributes).
  - Your HCL configuration has different values for attributes compared to the imported resource.
  - The imported resource has default values applied by the provider that are not explicit in your HCL.
`plan` output after importing (showing drift):
```
Terraform will perform the following actions:

  # aws_s3_bucket.my_imported_bucket will be updated in-place
  ~ resource "aws_s3_bucket" "my_imported_bucket" {
        id   = "my-existing-manual-bucket-2025-06-11"
      + acl  = "private" # This was set by the provider, but not in our HCL
      + tags = {         # These tags were on the bucket, not in our HCL
          + "Environment" = "prod"
          + "Owner"       = "ops"
        }
        # ... other attributes
    }
```
Your task now is to update your `main.tf` to match the actual configuration of the imported resource. You can use `terraform show` to display the imported resource’s attributes from the state file and copy them into your HCL.
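A narrower variant, `terraform state show <address>`, prints a single resource from the state file. A trimmed, hypothetical transcript for our bucket might look like:

```
$ terraform state show aws_s3_bucket.my_imported_bucket
# aws_s3_bucket.my_imported_bucket:
resource "aws_s3_bucket" "my_imported_bucket" {
    bucket = "my-existing-manual-bucket-2025-06-11"
    acl    = "private"
    tags   = {
        "Environment" = "prod"
        "Owner"       = "ops"
    }
    # ... many more computed attributes
}
```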
`main.tf` (after updating based on `terraform show` output):
```hcl
# main.tf
resource "aws_s3_bucket" "my_imported_bucket" {
  bucket = "my-existing-manual-bucket-2025-06-11"
  acl    = "private" # Added based on imported state

  tags = { # Added based on imported state
    Environment = "prod"
    Owner       = "ops"
  }
}
```
Repeat `terraform plan` and refine your HCL until the plan shows “No changes.” This means your HCL now accurately represents the imported infrastructure.
Step 5: Apply the Configuration (Optional)
Once your `terraform plan` shows no changes, you can confidently run `terraform apply`. This will confirm that Terraform has truly adopted the resource and will make no modifications to it. Any future changes to your HCL will then be applied to this now-managed resource.
The Declarative `import` Block (Terraform v1.5+)
While the `terraform import` CLI command is effective, it is a manual step that can be tricky to integrate into automated workflows or large-scale import operations. Terraform v1.5 introduced the declarative `import` block, which simplifies this process by allowing you to define import operations directly within your HCL configuration. This is generally the preferred method for modern Terraform workflows.
Benefits of the `import` Block:
- Declarative and Auditable: The import instructions are part of your version-controlled HCL.
- Integrated into `plan`/`apply`: Imports are executed as part of your normal `terraform plan` and `terraform apply` workflow.
- CI/CD Friendly: No need for separate `terraform import` commands in your scripts.
- Multiple Imports: Define multiple imports in a single `apply`.
Syntax of the `import` Block:

```hcl
import {
  to = <TERRAFORM_RESOURCE_ADDRESS>
  id = "<CLOUD_RESOURCE_ID>"
}
```
Terraform Import Block Examples:
Example 1: Importing a Single Resource
```hcl
# main.tf

# 1. Define the resource block as if you were creating it
resource "aws_s3_bucket" "my_imported_bucket_declarative" {
  bucket = "my-existing-manual-bucket-2025-06-11-declarative"
  acl    = "private"

  tags = {
    Environment = "dev"
    ManagedBy   = "Terraform"
  }
}

# 2. Add the import block pointing to the resource and its existing ID
import {
  to = aws_s3_bucket.my_imported_bucket_declarative
  id = "my-existing-manual-bucket-2025-06-11-declarative" # The actual ID of the existing bucket
}

provider "aws" {
  region = "us-east-1"
}
```
Workflow with the `import` Block:
- Add both the `resource` block and the `import` block to your HCL.
- Run `terraform plan`. Terraform will recognize the `import` block and plan to import the resource. It will also show any planned changes to the imported resource if your HCL does not fully match it (just like with the CLI command).
- Run `terraform apply`. Terraform will first perform the import, then apply any planned modifications to the newly imported resource to bring it in line with your HCL.
- Note: Remove the `import` block! After a successful `terraform apply` (which includes the import), the `import` block is no longer needed. Remove it from your HCL to keep your configuration clean. If you leave it, `terraform plan` will continue to include the import operation (though it is effectively a no-op after the first time).
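For reference, a pending import appears in the plan output roughly like this (trimmed; the exact wording varies by Terraform version):

```
aws_s3_bucket.my_imported_bucket_declarative: Preparing import... [id=my-existing-manual-bucket-2025-06-11-declarative]

Terraform will perform the following actions:

  # aws_s3_bucket.my_imported_bucket_declarative will be imported
    resource "aws_s3_bucket" "my_imported_bucket_declarative" {
        bucket = "my-existing-manual-bucket-2025-06-11-declarative"
        # ...
    }

Plan: 1 to import, 0 to add, 0 to change, 0 to destroy.
```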
Example 2: Importing Multiple Resources Using a `for_each` Block
This is incredibly powerful for bringing collections of similar, existing resources under Terraform management with `for_each`. Note that `for_each` on `import` blocks requires Terraform v1.7 or later.
Let us say you have two existing, manually created EC2 instances with IDs `i-0abcdef1234567890` and `i-0fedcba9876543210`.
```hcl
# main.tf
locals {
  # Map your for_each keys to the existing instance IDs.
  # The keys here are arbitrary but should be meaningful.
  instance_map = {
    "web-server-1" = "i-0abcdef1234567890" # Existing ID
    "web-server-2" = "i-0fedcba9876543210" # Existing ID
  }
}

resource "aws_instance" "my_servers" {
  for_each = local.instance_map # Drive the resource using the map

  ami           = "ami-0abcdef1234567890" # Match the AMI of the existing instances
  instance_type = "t2.micro"              # Match the instance type of the existing instances

  tags = {
    Name = each.key
  }
}

# Define an import block for each entry in the map (Terraform v1.7+).
# The 'to' address targets the specific resource instance by its for_each key.
import {
  for_each = local.instance_map
  to       = aws_instance.my_servers[each.key]
  id       = each.value
}

provider "aws" {
  region = "us-east-1"
}
```
Explanation:
- The `import` block’s `for_each` meta-argument iterates over `local.instance_map`, creating one import operation per entry.
- `to = aws_instance.my_servers[each.key]` targets the specific instance of `aws_instance.my_servers` keyed by the map key.
- `id = each.value` provides the correct cloud resource ID for each instance to be imported.
Run `terraform plan` and `terraform apply`. After the successful import, remove the `import` block.
Example 3: Importing Resources into a Module
Importing resources directly into a module using the `import` block is powerful for bringing in existing components.
Let us say you have an existing VPC with ID `vpc-0123456789abcdef0` and a subnet `subnet-0fedcba9876543210`. You want to manage these with an existing `network` module.
`modules/network/main.tf` (Module Definition):
```hcl
# modules/network/main.tf
resource "aws_vpc" "this" {
  cidr_block = var.vpc_cidr

  tags = {
    Name = var.name
  }
}

resource "aws_subnet" "public" {
  vpc_id     = aws_vpc.this.id
  cidr_block = var.public_subnet_cidr

  tags = {
    Name = "${var.name}-public-subnet"
  }
}

variable "vpc_cidr" {}
variable "public_subnet_cidr" {}
variable "name" {}

output "vpc_id" {
  value = aws_vpc.this.id
}

output "public_subnet_id" {
  value = aws_subnet.public.id
}
```
`main.tf` (Root Configuration for Importing):
```hcl
# main.tf
module "my_existing_network" {
  source             = "./modules/network"
  vpc_cidr           = "10.0.0.0/16" # Match existing VPC CIDR
  public_subnet_cidr = "10.0.1.0/24" # Match existing subnet CIDR
  name               = "my-existing-network"
}

# Import the existing VPC
import {
  to = module.my_existing_network.aws_vpc.this
  id = "vpc-0123456789abcdef0" # The actual ID of your existing VPC
}

# Import the existing subnet
import {
  to = module.my_existing_network.aws_subnet.public
  id = "subnet-0fedcba9876543210" # The actual ID of your existing subnet
}

provider "aws" {
  region = "us-east-1"
}
```
Explanation:
- You define the module call in your root configuration, providing arguments that match the configuration of your existing VPC and subnet.
- You then add `import` blocks. The `to` argument for resources within a module follows the pattern `module.<module_name>.<resource_type>.<resource_name>`.
- Run `terraform plan` and `terraform apply`. After successful import, remove the `import` blocks.
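If the calling module itself uses `count` or `for_each`, the `to` address must name a specific module instance. A sketch under hypothetical names and IDs:

```hcl
# Hypothetical: the network module expanded over environments
module "network" {
  source   = "./modules/network"
  for_each = toset(["prod", "staging"])
  # ... module arguments per environment
}

# Target one module instance by its key in the 'to' address
import {
  to = module.network["prod"].aws_vpc.this
  id = "vpc-0123456789abcdef0" # hypothetical existing VPC ID
}
```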
Example 4: Importing and Generating Configuration with `terraform plan -generate-config-out`
Terraform v1.5 introduced the `import` block and, alongside it, the `terraform plan -generate-config-out=PATH` option, which can generate initial HCL for imported resources. This is a powerful feature that simplifies the post-import configuration-matching step.
Let us say you have an existing AWS S3 bucket with the ID `my-auto-generated-import-bucket-12345`.
Define a minimal `import` block:
Create a temporary file (e.g., `import.tf`) with just the `import` block. You do not need a `resource` block yet.
```hcl
# import.tf
import {
  to = aws_s3_bucket.my_generated_bucket # This resource does not exist in HCL yet
  id = "my-auto-generated-import-bucket-12345"
}

# You might also need a basic provider block in another file
# if not already present, e.g., provider.tf:
# provider "aws" {
#   region = "us-east-1"
# }
```
Run `terraform plan` with `-generate-config-out`:

```shell
terraform plan -generate-config-out=generated.tf
```
Explanation:
- Terraform reads the `import` block.
- It performs a “dry run” import operation.
- Instead of just showing the plan, it also generates the HCL configuration for the resource at the `to` address (in this case, `aws_s3_bucket.my_generated_bucket`) into the specified file (`generated.tf`).
Review the generated `generated.tf`:
The `generated.tf` file will now contain the HCL for your S3 bucket, populated with its current attributes as read from AWS:
```hcl
# generated.tf
resource "aws_s3_bucket" "my_generated_bucket" {
  bucket = "my-auto-generated-import-bucket-12345"
  # (... many other attributes will be populated here, e.g.,)
  acl           = "private"
  force_destroy = false

  tags = {
    "Environment" = "test"
  }
  # ... and so on
}
```
Integrate and Apply:
- Move the generated `resource "aws_s3_bucket" "my_generated_bucket"` block from `generated.tf` into your main configuration files (e.g., `main.tf`).
- Keep the `import` block (from `import.tf`) in your active configuration.
- Run `terraform plan` again to confirm that no changes are planned.
- Run `terraform apply` to complete the import and bring the resource under management.
- Remove the `import` block from your HCL after the successful apply.
Advantages of `-generate-config-out`: This method significantly streamlines the post-import verification and HCL-writing step, especially for complex resources with many attributes, reducing manual effort and potential errors. It is a powerful way to accelerate bringing brownfield infrastructure under Terraform’s declarative control.
`terraform import` CLI Command vs. `import` Block
| Feature | `terraform import` (CLI Command) | `import` Block (Terraform 1.5+) |
|---|---|---|
| Purpose | Brings existing infrastructure into Terraform state. | Brings existing infrastructure into Terraform state, integrated into the `plan`/`apply` workflow. |
| Usage | Executed via the CLI: `terraform import ADDRESS ID` | Defined directly in `.tf` configuration files: `import { to = RESOURCE_ADDRESS, id = RESOURCE_ID }` |
| Configuration Generation | Does not generate HCL for the imported resource; you must manually write the resource block after import. | Can optionally generate the corresponding HCL using the `-generate-config-out` flag during `terraform plan`. |
| Workflow Integration | A separate, one-off command, typically run before `terraform plan` and `terraform apply`. | Integrated into the standard `terraform plan` and `terraform apply` workflow; the import operation is previewed in the plan. |
| Predictability | Less predictable: the state is modified directly by the command, requiring careful manual verification. | More predictable: the import is part of the plan output, allowing review before state modification. |
| CI/CD Friendliness | Less suitable for automated CI/CD pipelines due to its imperative nature and the need for manual configuration updates. | Much more CI/CD friendly: imports are defined in code and executed as part of automated pipelines. |
| Multiple Resources | Primarily designed for importing one resource at a time; scripting bulk imports is cumbersome. | Supports importing multiple resources within a single `apply`, including `for_each` for bulk imports. |
| Idempotency | Not idempotent: rerunning the command for a resource already in state produces a “resource already managed” error. | Idempotent: applying an import for a resource already in state is a harmless no-op. |
| Post-Import Action | After import, you must manually write the corresponding resource configuration in your `.tf` files. | After `apply`, the `import` block can be removed from the configuration, or kept as a record. |
| Visibility | The import operation is only visible in the command output. | The import operation is clearly visible in the `terraform plan` output. |
| Terraform Version | Available in all Terraform versions. | Introduced in Terraform v1.5.0 and later. |
Important Considerations and Best Practices
- “Does Terraform Import Generate Code?” – No! This is a common misconception. Whether you use the CLI command or the `import` block, you must write the HCL resource block before importing (or have Terraform generate it with `-generate-config-out`). The import operation itself only updates the state file. Tools like Terraformer (a third-party utility) can generate HCL from existing infrastructure, but that is not part of core Terraform.
- Use Plan for Validation: The `terraform plan` after an import is the most important step. Do not proceed until your HCL fully matches the imported resource, resulting in a “No changes” plan. If `plan` shows modifications, update your HCL to match the imported resource’s attributes precisely.
- Single Resource at a Time (Conceptually): While the `import` block can handle multiple imports in one `apply`, each `import` block (or each of its `for_each` instances) maps to a single resource instance.
- Complex Resources: Some resources (e.g., AWS Network ACLs, VPN connections) might implicitly create child resources (like rules). When you import the parent, the child resources might also be imported into the state. You will need to define HCL blocks for these implicit child resources to prevent Terraform from planning their destruction.
- Backup State: Always back up your state file before performing complex `import` operations, especially if you are not fully confident.
- Provider Documentation is Your Friend: For each resource, review the provider’s documentation. Many providers explicitly list the required `id` format for `terraform import` at the bottom of their resource documentation pages.
- Remove `import` Blocks After Success: Once `terraform apply` successfully executes the import, the `import` blocks are no longer needed and should be removed from your HCL files. This keeps your configuration clean.
Conclusion
Terraform’s `import` block (introduced in v1.5) makes it much easier to bring existing cloud resources under Terraform management without separate CLI commands. You add an `import` block to your code, run your usual `plan` and `apply`, and Terraform includes the resource in its state. This approach is repeatable, works well in CI/CD pipelines, supports bulk imports (using `for_each`, from v1.7), and gives you visibility into what is being imported before it happens. After importing, you can remove the block or keep it as documentation.
Author

Experienced Cloud & DevOps Engineer with hands-on experience in AWS, GCP, Terraform, Ansible, ELK, Docker, Git, GitLab, Python, PowerShell, Shell, and theoretical knowledge on Azure, Kubernetes & Jenkins.
In my free time, I write blogs on ckdbtech.com