
How to quickly import all records from a Route53 DNS zone into Terraform

The terraform import command allows you to import into HashiCorp Terraform resources that already exist in the provider you are working with, in this case AWS. However, it only lets you import resources one by one, with one run of terraform import at a time. Besides being extremely tedious, in some situations this becomes impractical. That is the case with the records of a Route53 DNS zone: the task can become unmanageable if you have multiple DNS zones, each with tens or hundreds of records. In this article I offer you a bash script that will let you import all the records of a Route53 DNS zone into Terraform in a matter of seconds or a few minutes.
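
For reference, this is what importing a single record by hand looks like; the script below simply automates this call for every record in the zone (the zone ID and record name here are invented examples):

$ terraform import aws_route53_record.www-example-com Z0123456789ABCDEFGHIJ_www.example.com_A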

Prerequisites

In order to follow the procedure described in this article you will need to have Amazon's aws cli tool installed and configured, as well as the jq tool, which is available in most Linux distributions.
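
You can quickly verify that both tools are installed and that your credentials are working with something like the following (the profile name is just an example):

$ aws --version
$ jq --version
$ aws --profile example_com sts get-caller-identity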

Procedure

You can perform the import by following these simple steps. If you already have your Terraform project correctly configured, you can skip steps 1 and 2 and go directly to step 3:

1.- Create a new Terraform project folder and configure the provider through the following main.tf file:

provider "aws" {
    region = "eu-west-1"
    access_key = "XXXXXXXXXXXXXXXXXXXX"
    secret_key = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
}
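
Hardcoding credentials in main.tf is fine for a quick test, but since the script below relies on aws cli profiles anyway, you may prefer to reference a named profile and keep the keys out of your code (profile name assumed for illustration):

provider "aws" {
    region  = "eu-west-1"
    profile = "example_com"
}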

2.- Initialize the new Terraform project:

$ terraform init

Initializing the backend...

Initializing provider plugins...
- Finding latest version of hashicorp/aws...
- Installing hashicorp/aws v3.74.1...
- Installed hashicorp/aws v3.74.1 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

3.- Run the following script:

Remember to set the appropriate values for the zone_name and zone_id variables beforehand, and also for aws_profile if you use different profiles in the aws cli tool to manage different AWS accounts.
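
If you don't know the zone ID, you can list all your hosted zones together with their IDs like this (example profile and output again invented); note that zone_id is only the part after /hostedzone/:

$ aws --profile example_com route53 list-hosted-zones | jq -r '.HostedZones[] | .Id + " " + .Name'
/hostedzone/Z0123456789ABCDEFGHIJ example.com.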

#!/bin/bash

# This script retrieves all DNS records from an AWS Route53 DNS zone and imports them into Terraform

zone_name='example.com'
zone_id='XXXXXXXXXXXXXXXXXXXXX'
aws_profile='example_com'

# Get zone slug from zone name
zone_slug=$(echo ${zone_name} | tr '.' '-')

# Get DNS zone current data from AWS
zone="$(aws --profile=${aws_profile} route53 list-hosted-zones | jq '.HostedZones[] | select (.Id | contains("'${zone_id}'"))')"
# Another method to get DNS zone data searching by zone name instead of zone ID
#zone="$(aws --profile=${aws_profile} route53 list-hosted-zones | jq '.HostedZones[] | select (.Name=="'${zone_name}'.")')"
zone_comment="$(echo ${zone} | jq -r '.Config.Comment')"
if [ "${zone_comment}" == 'null' ]; then
    zone_comment="${zone_name} zone"
fi

# Write aws_route53_zone resource to terraform file
cat << EOF > dns-zone-${zone_name}.tf
resource "aws_route53_zone" "${zone_slug}" {
    name         = "${zone_name}"
    comment      = "${zone_comment}"
}
EOF

# Import the DNS zone itself into the Terraform state
terraform import "aws_route53_zone.${zone_slug}" "${zone_id}"

# Retrieve all regular records (not alias) from DNS zone and write them down to terraform file
IFS=$'\n'
for dns_record in $(aws --profile="${aws_profile}" route53 list-resource-record-sets --hosted-zone-id "${zone_id}" | jq -c '.ResourceRecordSets[] | select(has("AliasTarget") | not)');do
    name="$(echo ${dns_record} | jq -r '.Name')"
    type="$(echo ${dns_record} | jq -r '.Type')"
    name_slug="$(echo ${type}-${name} | sed -E 's/[\._\ ]+/-/g' | sed -E 's/(^-|-$)//g')"
    ttl="$(echo ${dns_record} | jq -r '.TTL')"
    records="$(echo ${dns_record} | jq -cr '.ResourceRecords' | jq '.[].Value' | sed 's/$/,/')"
    records="$(echo ${records} | sed 's/,$//')"

    cat << EOF >> dns-zone-${zone_name}.tf

resource "aws_route53_record" "${name_slug}" {
    zone_id = aws_route53_zone.${zone_slug}.zone_id
    name    = "${name}"
    type    = "${type}"
    ttl     = "${ttl}"
    records = [${records}]
}
EOF

    # Import DNS record to Terraform
    terraform import "aws_route53_record.${name_slug}" "${zone_id}_${name}_${type}"
done

# Retrieve all alias records from DNS zone and write them down to terraform file
IFS=$'\n'
for dns_record in $(aws --profile="${aws_profile}" route53 list-resource-record-sets --hosted-zone-id "${zone_id}" | jq -c '.ResourceRecordSets[] | select(has("AliasTarget"))');do
    name="$(echo ${dns_record} | jq -r '.Name')"
    type="$(echo ${dns_record} | jq -r '.Type')"
    name_slug="$(echo ${type}-${name} | sed -E 's/[\._\ ]+/-/g' | sed -E 's/(^-|-$)//g')"
    alias_name="$(echo ${dns_record} | jq -r '.AliasTarget.DNSName')"
    # The alias target has its own hosted zone ID (e.g. that of an ELB or CloudFront distribution), which is not necessarily the ID of the zone being imported
    alias_zone_id="$(echo ${dns_record} | jq -r '.AliasTarget.HostedZoneId')"
    alias_eth="$(echo ${dns_record} | jq -r '.AliasTarget.EvaluateTargetHealth')"

    cat << EOF >> dns-zone-${zone_name}.tf

resource "aws_route53_record" "${name_slug}" {
    zone_id = aws_route53_zone.${zone_slug}.zone_id
    name    = "${name}"
    type    = "${type}"

    alias {
        name                   = "${alias_name}"
        zone_id                = "${alias_zone_id}"
        evaluate_target_health = ${alias_eth}
    }
}
EOF

    # Import DNS record to Terraform
    terraform import "aws_route53_record.${name_slug}" "${zone_id}_${name}_${type}"
done
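
For a typical A record, the block the script appends to the .tf file ends up looking like this (values invented for illustration):

resource "aws_route53_record" "A-www-example-com" {
    zone_id = aws_route53_zone.example-com.zone_id
    name    = "www.example.com."
    type    = "A"
    ttl     = "300"
    records = ["192.0.2.10"]
}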

The script generates a .tf file in the directory from which it is executed, containing all the DNS records existing in Route53, and at the same time imports them into Terraform, so when it finishes you can start managing these resources with Terraform. The execution can take several minutes depending on the number of records in the zone to be imported. Fortunately, it is a totally unattended process that will not require any effort on your part. I hope you find it useful!
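
Once the script has finished, a quick terraform plan is a good sanity check that the generated file matches the imported state; if everything went well you should see something like:

$ terraform plan
...
No changes. Your infrastructure matches the configuration.

If the plan does show small differences, review them before applying rather than assuming the generated file is wrong.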
