Essential Terraform Interview Questions and Answers: A Comprehensive Guide

Questions Overview

  1. What is Terraform and how is it different from other IaC tools?

  2. How do you call a main.tf module?

  3. What exactly is Sentinel? Can you provide a few examples of Sentinel policies?

  4. You have a Terraform configuration file that defines an infrastructure deployment. However, there are multiple instances of the same resource that need to be created. How would you modify the configuration file to achieve this?

  5. You want to know from which paths Terraform is loading providers referenced in your Terraform configuration (*.tf files). You need to enable debug messages to find this out. Which of the following would achieve this?

    • A. Set the environment variable TF_LOG=TRACE

    • B. Set verbose logging for each provider in your Terraform configuration

    • C. Set the environment variable TF_VAR_log=TRACE

    • D. Set the environment variable TF_LOG_PATH

  6. The command below will destroy everything that has been created in the infrastructure. How would you preserve a particular resource while destroying the rest of the infrastructure?

     terraform destroy
    
  7. How do you store the .tfstate file in S3?

  8. How do you manage sensitive data in Terraform, such as API keys or passwords?

  9. You are working on a Terraform project that needs to provision an S3 bucket, and a user with read and write access to the bucket. What resources would you use to accomplish this, and how would you configure them?

  10. Who maintains Terraform providers?

  11. How can we export data from one module to another?

In today's cloud-first world, Infrastructure as Code (IaC) has become a crucial skill for DevOps engineers and cloud practitioners. Terraform, being one of the most popular IaC tools, is frequently featured in technical interviews. This blog post will cover some of the most common Terraform interview questions and provide detailed answers to help you prepare.

Question 1: What is Terraform and how is it different from other IaC tools?

Terraform is an open-source Infrastructure as Code (IaC) tool developed by HashiCorp. It enables users to define and provision infrastructure using a declarative configuration language known as HashiCorp Configuration Language (HCL) or optionally JSON.

Key differentiating factors that set Terraform apart from other IaC tools include:

  1. Provider Ecosystem:

    • Terraform supports multiple cloud providers (AWS, Azure, GCP) and services

    • Single tool for managing multi-cloud infrastructure

    • Consistent workflow across all providers

  2. State Management:

    • Maintains a state file that tracks real-world resources

    • Enables infrastructure drift detection

    • Supports remote state storage and locking

  3. Declarative Approach:

     # Declarative approach (Terraform)
     resource "aws_instance" "example" {
       ami           = "ami-123456"
       instance_type = "t2.micro"
     }
    

    vs

     # Procedural approach (Ansible)
     - name: Launch EC2 instance
       ec2:
         ami: ami-123456
         instance_type: t2.micro
         state: present
    
  4. Resource Graph:

    • Creates a dependency graph of resources

    • Optimizes creation/modification of resources in parallel

    • Provides a visual representation of infrastructure via the terraform graph command

Question 2: How do you call a main.tf module?

Modules in Terraform are containers for multiple resources that are used together. Here's a detailed explanation of how to call a module:

# Basic module call
module "vpc" {
  source = "./modules/vpc"

  # Input variables
  vpc_cidr = "10.0.0.0/16"
  environment = "production"
}

# Module with remote source
module "s3_bucket" {
  source = "terraform-aws-modules/s3-bucket/aws"
  version = "3.7.0"

  bucket     = "my-unique-bucket"
  versioning = { enabled = true }
}

# Module with conditional creation
module "database" {
  source = "./modules/rds"
  count = var.create_database ? 1 : 0

  db_name = "myapp"
  db_size = "db.t3.medium"
}
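
Because the database module above is created with count, its outputs become a list that is empty when count is 0. A sketch of how the root module might reference it (the endpoint output name is an assumption about what the rds module exposes):

```hcl
# Reference an output of a count-based module; null when the module
# was not created (count = 0)
output "database_endpoint" {
  value = var.create_database ? module.database[0].endpoint : null
}
```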

Question 3: What exactly is Sentinel? Can you provide examples of Sentinel policies?

Sentinel is HashiCorp's policy-as-code framework, embedded in their enterprise products (such as Terraform Cloud/Enterprise and Vault Enterprise). It enables fine-grained, logic-based policy decisions that can be enforced across all HashiCorp products.

Example Sentinel Policies:

  1. Enforce Instance Types:
import "tfplan"

# Allowed instance types
allowed_types = [
  "t2.micro",
  "t2.small",
  "t3.micro",
  "t3.small",
]

# Main rule
main = rule {
  all tfplan.resources.aws_instance as _, instances {
    all instances as _, r {
      r.applied.instance_type in allowed_types
    }
  }
}
  2. Enforce Mandatory Tags:
import "tfplan"

mandatory_tags = [
  "environment",
  "owner",
  "project",
]

main = rule {
  all tfplan.resources.aws_instance as _, instances {
    all instances as _, r {
      all mandatory_tags as tag {
        r.applied.tags contains tag
      }
    }
  }
}

Question 4: Creating Multiple Instances of Resources

When you need to create multiple instances of the same resource, Terraform provides two primary methods: count and for_each.

  1. Using count:
resource "aws_instance" "server" {
  count = 3

  ami           = "ami-123456"
  instance_type = "t2.micro"

  tags = {
    Name = "Server-${count.index + 1}"
  }
}
  2. Using for_each with a map:
locals {
  instances = {
    "dev"     = "t2.micro"
    "staging" = "t2.small"
    "prod"    = "t2.medium"
  }
}

resource "aws_instance" "server" {
  for_each = local.instances

  ami           = "ami-123456"
  instance_type = each.value

  tags = {
    Name = "Server-${each.key}"
    Environment = each.key
  }
}
  3. Using for_each with a set:
resource "aws_iam_user" "developers" {
  for_each = toset(["john", "jane", "bob"])

  name = each.key
  tags = {
    Role = "Developer"
  }
}
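
Resources created with count are addressed as a list by numeric index, while for_each resources form a map addressed by key (note that the count and for_each examples above reuse the name aws_instance.server, so in practice they would live in separate configurations). A sketch of outputs referencing each form:

```hcl
# count: a list, addressed by index or with a splat expression
output "server_ids" {
  value = aws_instance.server[*].id
}

# for_each: a map, addressed by key
output "dev_instance_id" {
  value = aws_instance.server["dev"].id
}
```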

Question 5: Enabling Debug Messages for Provider Loading

The correct answer is option A: set the environment variable TF_LOG=TRACE. TRACE is the most verbose log level and includes messages showing where Terraform discovers and loads provider plugins. Here's a breakdown of the available options:

# Most verbose logging
export TF_LOG=TRACE

# Less verbose options
export TF_LOG=DEBUG
export TF_LOG=INFO
export TF_LOG=WARN
export TF_LOG=ERROR

# Scope logging to Terraform core or to providers only (Terraform 0.15+)
export TF_LOG_CORE=TRACE
export TF_LOG_PROVIDER=TRACE

# Write logs to a file instead of stderr (used together with TF_LOG)
export TF_LOG_PATH=./terraform.log

Question 6: Protecting Resources During Terraform Destroy

To protect specific resources during a terraform destroy, you can use the prevent_destroy lifecycle rule. Note that prevent_destroy causes the destroy operation to fail with an error rather than silently skipping the resource; to actually destroy everything except a particular resource, first remove it from state with `terraform state rm <address>` (Terraform then forgets the resource without deleting it) and run `terraform destroy` afterwards.

# Protect a single resource
resource "aws_s3_bucket" "critical" {
  bucket = "critical-data-bucket"

  lifecycle {
    prevent_destroy = true
  }
}

# Protect multiple resources
resource "aws_db_instance" "production" {
  identifier = "production-db"

  lifecycle {
    prevent_destroy = true
    ignore_changes = [
      password,
      engine_version
    ]
  }
}
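
A declarative alternative exists in Terraform 1.7 and later: after deleting a resource's configuration block, a removed block tells Terraform to drop it from state without destroying the real object, so a subsequent destroy leaves it untouched:

```hcl
# After deleting the aws_s3_bucket.critical resource block
# from the configuration:
removed {
  from = aws_s3_bucket.critical

  lifecycle {
    destroy = false # forget the resource, do not delete it
  }
}
```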

Question 7: Storing the .tfstate File in S3

Terraform state storage is configured through a backend rather than a module. To store the state file in S3, configure the S3 backend:

terraform {
  backend "s3" {
    bucket         = "terraform-state-bucket"
    key            = "terraform.tfstate"
    region         = "us-west-2"
    encrypt        = true
    dynamodb_table = "terraform-lock"
  }
}

Additional configuration for state locking (this DynamoDB table backs the dynamodb_table setting above):

resource "aws_dynamodb_table" "terraform_lock" {
  name           = "terraform-lock"
  billing_mode   = "PAY_PER_REQUEST"
  hash_key       = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}

Question 8: Managing Sensitive Data in Terraform

There are several approaches to manage sensitive data in Terraform:

  1. Using Sensitive Variables:
variable "database_password" {
  type        = string
  sensitive   = true
  description = "Password for database"
}
  2. Using terraform.tfvars:
# terraform.tfvars (not committed to version control)
database_password = "super-secret-password"
api_key = "abc123xyz"
  3. Using Environment Variables:
export TF_VAR_database_password="super-secret-password"
  4. Using AWS Secrets Manager:
data "aws_secretsmanager_secret_version" "db_password" {
  secret_id = "database-password"
}

resource "aws_db_instance" "example" {
  password = data.aws_secretsmanager_secret_version.db_password.secret_string
}
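
Whichever source a secret comes from, marking the outputs that expose it as sensitive keeps Terraform from printing it in plans and CLI output. Be aware that sensitive values are still stored in plain text in the state file, so the state itself must be protected (e.g. with an encrypted remote backend):

```hcl
# Redacted in CLI output, but still present in the state file
output "db_password" {
  value     = var.database_password
  sensitive = true
}
```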

Question 9: S3 Bucket and IAM User Configuration

Here's a complete example of creating an S3 bucket and a user with read/write access:

# Create S3 bucket
resource "aws_s3_bucket" "app_bucket" {
  bucket = "my-application-bucket"
}

# Versioning is a separate resource in AWS provider v4+
resource "aws_s3_bucket_versioning" "app_bucket" {
  bucket = aws_s3_bucket.app_bucket.id

  versioning_configuration {
    status = "Enabled"
  }
}

# Server-side encryption is also a separate resource in AWS provider v4+
resource "aws_s3_bucket_server_side_encryption_configuration" "app_bucket" {
  bucket = aws_s3_bucket.app_bucket.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

# Create IAM user
resource "aws_iam_user" "bucket_user" {
  name = "s3-bucket-user"
}

# Create access key for the user
resource "aws_iam_access_key" "bucket_user_key" {
  user = aws_iam_user.bucket_user.name
}

# Attach an inline policy granting the user read/write access to the bucket
resource "aws_iam_user_policy" "bucket_policy" {
  name = "s3-bucket-access"
  user = aws_iam_user.bucket_user.name

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid      = "BucketLevel"
        Action   = ["s3:ListBucket"]
        Effect   = "Allow"
        Resource = [aws_s3_bucket.app_bucket.arn]
      },
      {
        Sid    = "ObjectLevel"
        Action = [
          "s3:PutObject",
          "s3:GetObject",
          "s3:DeleteObject"
        ]
        Effect   = "Allow"
        Resource = ["${aws_s3_bucket.app_bucket.arn}/*"]
      }
    ]
  })
}
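
The generated access key can be surfaced through outputs; the secret portion should be marked sensitive (and, as with any secret, it still lands in the state file in plain text):

```hcl
output "bucket_user_access_key_id" {
  value = aws_iam_access_key.bucket_user_key.id
}

output "bucket_user_secret_access_key" {
  value     = aws_iam_access_key.bucket_user_key.secret
  sensitive = true
}
```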

Question 10: Who Maintains Terraform Providers?

Terraform providers are maintained by three main groups:

  1. HashiCorp Official Providers:

    • Maintained directly by HashiCorp

    • Examples: AWS, Azure, GCP, Kubernetes

    • Highest level of support and testing

  2. Partner Providers:

    • Maintained by technology partners

    • Examples: Heroku, MongoDB Atlas, DataDog

    • Official support from respective companies

  3. Community Providers:

    • Maintained by community members

    • Available through Terraform Registry

    • Support through community channels
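
The maintainer tier is reflected in a provider's registry namespace. A sketch of a required_providers block pinning an official and a partner provider by source address (the version constraints here are illustrative):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"   # official, maintained by HashiCorp
      version = "~> 5.0"
    }
    datadog = {
      source  = "DataDog/datadog" # partner, maintained by Datadog
      version = "~> 3.0"
    }
  }
}
```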

Question 11: Exporting Data Between Modules

Data can be exported between modules using output values. Here's a comprehensive example:

# modules/vpc/main.tf
resource "aws_vpc" "main" {
  cidr_block = var.vpc_cidr
}

resource "aws_subnet" "private" {
  vpc_id     = aws_vpc.main.id
  cidr_block = var.private_subnet_cidr
}

output "vpc_id" {
  value = aws_vpc.main.id
}

output "private_subnet_id" {
  value = aws_subnet.private.id
}

# modules/ec2/main.tf
variable "vpc_id" {
  type = string
}

variable "subnet_id" {
  type = string
}

resource "aws_instance" "app_server" {
  ami           = "ami-123456"
  instance_type = "t2.micro"
  subnet_id     = var.subnet_id

  tags = {
    Name = "AppServer"
  }
}

# root main.tf
module "vpc" {
  source = "./modules/vpc"

  vpc_cidr            = "10.0.0.0/16"
  private_subnet_cidr = "10.0.1.0/24"
}

module "ec2" {
  source = "./modules/ec2"

  vpc_id    = module.vpc.vpc_id
  subnet_id = module.vpc.private_subnet_id
}
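
Output values are only visible one level up the module tree. To expose a child module's values outside the root configuration (for terraform output or for remote-state consumers), re-export them as root-level outputs:

```hcl
# root outputs.tf
output "vpc_id" {
  value = module.vpc.vpc_id
}

output "private_subnet_id" {
  value = module.vpc.private_subnet_id
}
```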

Conclusion

These questions cover many important aspects of Terraform that you might encounter in an interview. Remember that Terraform is constantly evolving, so it's important to stay updated with the latest features and best practices. Practice these concepts hands-on to better understand how they work in real-world scenarios.