Deploying an application to AWS with Terraform and Ansible – Part 1 (Terraform)

As I’m very interested in automation, I was curious whether I would be able to deploy a (dummy) application to AWS by making use of code only.

This article series shows how I used Terraform and Ansible to make this possible.


This first blog post in the series is about setting up the application infrastructure using Terraform, the infrastructure-as-code (IaC) tool by HashiCorp. Prior to this blog post I created a small demo which shows how the infrastructure is deployed by Terraform, after which Ansible takes over and configures the web servers and database servers. This blog post series shows how that was built.

Terraform

For people unfamiliar with HashiCorp Terraform: Terraform allows you to define infrastructure as code (IaC) and deploy it repeatably with the same end result. The application infrastructure is defined in code by describing the needed components such as compute instances, storage buckets, networks, load balancers and firewalls. Terraform then takes this blueprint and plans how to reach the desired state defined in the code. This also allows Terraform to make incremental changes, by comparing the defined (changed) state with the deployed (current) state and executing only the changes that are needed. We simply create a file (or multiple files) with the .tf extension and define all the components we need. In my case I chose to split the files up by the components they define (network, compute, ansible, etc.), as illustrated below.
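
A possible layout looks like this; the file names are purely my own illustration, since Terraform loads every .tf file in the working directory regardless of its name:

provider.tf    # AWS provider and assume_role configuration
network.tf     # VPC, subnets, internet gateway, route tables
compute.tf     # EC2 instances, load balancer, SSH key pair
security.tf    # security groups and security group rules
ansible.tf     # ansible_host resources for the dynamic inventory
variables.tf   # input variable definitions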


Defining AWS Provider

After installing Terraform, we start off by defining a so-called provider (in our case AWS), which will provide the resources needed to run the application we want to deploy. Many more providers are available which we can use to define our infrastructure (e.g. Azure Stack, Oracle Cloud Platform, VMware vSphere). For a full list of Terraform providers, please check the provider list in the Terraform documentation.

In the following piece of code, we tell Terraform in which region to deploy the resources, which shared credentials file and profile to use (the profile refers to a section in that credentials file), and which role Terraform should assume to deploy the defined infrastructure.

You can remove the assume_role block if the defined credentials have sufficient rights in the account in which the application has to be deployed. Assuming a role allows you to deploy the application to another AWS account:

provider "aws" {
  region = "eu-west-1"
  shared_credentials_file = "~/.creds"
  profile = "DEV"
  assume_role {
    role_arn     = "arn:aws:iam::<account>:role/Terraform"
  }
}

The shared credentials file should be set up as follows:

[DEV]
aws_access_key_id=<ENTER YOUR AWS ACCESS KEY HERE>
aws_secret_access_key=<ENTER YOUR AWS SECRET ACCESS KEY HERE>

Defining AWS network components

After we have the provider defined, we can continue with the different AWS resources we need to deploy the application. We’ll start with the VPC and the subnets we need.

As you can see, we are also using some variables (e.g. ${var.environment}) which we can use to customize the set-up during deployment. To link the subnets to the defined VPC, we refer to the VPC in the definition of each subnet.

resource "aws_vpc" "robertverdam" {
  cidr_block = "10.0.0.0/16" # Defines overall VPC address space
  enable_dns_hostnames = true # Enable DNS hostnames for this VPC
  enable_dns_support = true # Enable DNS resolving support for this VPC
  tags {
      Name = "VPC-${var.environment}" # Tag VPC with name
  }
}

resource "aws_subnet" "pub-web-az-a" {
  availability_zone = "eu-west-1a" # Define AZ for subnet
  cidr_block = "10.0.11.0/24" # Define CIDR-block for subnet
  map_public_ip_on_launch = true # Map public IP to deployed instances in this VPC
  vpc_id = "${aws_vpc.robertverdam.id}" # Link Subnet to VPC
  tags {
      Name = "Subnet-EU-West-1a-Web" # Tag subnet with name
  }
}

resource "aws_subnet" "pub-web-az-b" {
    availability_zone = "eu-west-1b"
    cidr_block = "10.0.12.0/24"
    map_public_ip_on_launch = true
    vpc_id = "${aws_vpc.robertverdam.id}"
      tags {
      Name = "Subnet-EU-West-1b-Web"
  }
}

resource "aws_subnet" "priv-db-az-a" {
  availability_zone = "eu-west-1a"
  cidr_block = "10.0.1.0/24"
  map_public_ip_on_launch = false
  vpc_id = "${aws_vpc.robertverdam.id}"
  tags {
      Name = "Subnet-EU-West-1a-DB"
  }
}

resource "aws_subnet" "priv-db-az-b" {
    availability_zone = "eu-west-1b"
    cidr_block = "10.0.2.0/24"
    map_public_ip_on_launch = false
    vpc_id = "${aws_vpc.robertverdam.id}"
      tags {
      Name = "Subnet-EU-West-1b-DB"
  }
}

To be able to access the instances (which have a mapped public IP) from the internet, and to allow the instances to reach the internet, we will need an internet gateway, so let’s define one:

resource "aws_internet_gateway" "inetgw" {
  vpc_id = "${aws_vpc.robertverdam.id}"
  tags {
      Name = "IGW-VPC-${var.environment}-Default"
  }
}

Looks easy, right? But as you may have guessed, we also need to set up a route table and associate it with the subnets; it defines the default route and allows the subnets to talk to each other:

resource "aws_route_table" "eu-default" {
  vpc_id = "${aws_vpc.robertverdam.id}"

  route {
      cidr_block = "0.0.0.0/0" # Defines default route 
      gateway_id = "${aws_internet_gateway.inetgw.id}" # via IGW
  }

  tags {
      Name = "Route-Table-EU-Default"
  }
}

resource "aws_route_table_association" "eu-west-1a-public" {
  subnet_id = "${aws_subnet.pub-web-az-a.id}"
  route_table_id = "${aws_route_table.eu-default.id}"
}

resource "aws_route_table_association" "eu-west-1b-public" {
  subnet_id = "${aws_subnet.pub-web-az-b.id}"
  route_table_id = "${aws_route_table.eu-default.id}"
}


resource "aws_route_table_association" "eu-west-1a-private" {
  subnet_id = "${aws_subnet.priv-db-az-a.id}"
  route_table_id = "${aws_route_table.eu-default.id}"
}

resource "aws_route_table_association" "eu-west-1b-private" {
  subnet_id = "${aws_subnet.priv-db-az-b.id}"
  route_table_id = "${aws_route_table.eu-default.id}"
}

Define AWS Instances

Having the networking part of our Terraform definition ready, we continue with the needed compute instances. For each instance we define which AMI (Amazon Machine Image) to use, the instance_type (t2.micro), the tags the instance will get, the subnet it is placed in and the key pair to use for accessing the instance via SSH. Finally, we define which security groups will be attached to the instance (security groups act as a firewall directly attached to the virtual network interface of the instance). Of course, we still have to define these security groups, which we will do later on.

resource "aws_instance" "WEBA" {
    ami = "${lookup(var.aws_ubuntu_awis,var.region)}"
    instance_type = "t2.micro"
    tags {
        Name = "${var.environment}-WEB001"
        Environment = "${var.environment}"
        sshUser = "ubuntu"
    }
    subnet_id = "${aws_subnet.pub-web-az-a.id}"
    key_name = "${aws_key_pair.keypair.key_name}"
    vpc_security_group_ids = ["${aws_security_group.WebserverSG.id}"]
}
resource "aws_instance" "WEBB" {
    ami = "${lookup(var.aws_ubuntu_awis,var.region)}"
    instance_type = "t2.micro"
    tags {
        Name = "${var.environment}-WEB002"
        Environment = "${var.environment}"
        sshUser = "ubuntu"
    }
    subnet_id = "${aws_subnet.pub-web-az-b.id}"
    key_name = "${aws_key_pair.keypair.key_name}"
    vpc_security_group_ids = ["${aws_security_group.WebserverSG.id}"]
}
resource "aws_instance" "BASTIONHOSTA" {
    ami = "${lookup(var.aws_ubuntu_awis,var.region)}"
    instance_type = "t2.micro"
    tags {
        Name = "${var.environment}-BASTION001"
        Environment = "${var.environment}"
        sshUser = "ubuntu"
    }
    subnet_id = "${aws_subnet.pub-web-az-a.id}"
    key_name = "${aws_key_pair.keypair.key_name}"
    vpc_security_group_ids = ["${aws_security_group.bastionhostSG.id}"]
}

resource "aws_instance" "BASTIONHOSTB" {
    ami = "${lookup(var.aws_ubuntu_awis,var.region)}"
    instance_type = "t2.micro"
    tags {
        Name = "${var.environment}-BASTION002"
        Environment = "${var.environment}"
        sshUser = "ubuntu"
    }
    subnet_id = "${aws_subnet.pub-web-az-b.id}"
    key_name = "${aws_key_pair.keypair.key_name}"
    vpc_security_group_ids = ["${aws_security_group.bastionhostSG.id}"]
}

resource "aws_instance" "SQLA" {
    ami = "${lookup(var.aws_ubuntu_awis,var.region)}"
    instance_type = "t2.micro"
    tags {
        Name = "${var.environment}-SQL001"
        Environment = "${var.environment}"
        sshUser = "ubuntu"
    }
    subnet_id = "${aws_subnet.priv-db-az-a.id}"
    key_name = "${aws_key_pair.keypair.key_name}"
    vpc_security_group_ids = ["${aws_security_group.DBServerSG.id}"]
}

resource "aws_instance" "SQLB" {
    ami = "${lookup(var.aws_ubuntu_awis,var.region)}"
    instance_type = "t2.micro"
    tags {
        Name = "${var.environment}-SQL002"
        Environment = "${var.environment}"
        sshUser = "ubuntu"
    }
    subnet_id = "${aws_subnet.priv-db-az-b.id}"
    key_name = "${aws_key_pair.keypair.key_name}"
    vpc_security_group_ids = ["${aws_security_group.DBServerSG.id}"]
}

Defining a Classic Load Balancer

In front of the application, we will be placing a classic load balancer, which will load-balance incoming web traffic (port 80) across the availability zones (eu-west-1a & eu-west-1b) in which the web servers are located. In a later article, we will replace this classic load balancer with a more advanced application load balancer.

resource "aws_elb" "lb" {
    name_prefix = "${var.environment}-"
    subnets = ["${aws_subnet.pub-web-az-a.id}", "${aws_subnet.pub-web-az-b.id}"]
    health_check {
        healthy_threshold = 2
        unhealthy_threshold = 2
        timeout = 3
        target = "HTTP:80/"
        interval = 30
    }
    listener {
        instance_port = 80
        instance_protocol = "http"
        lb_port = 80
        lb_protocol = "http"
    }
    cross_zone_load_balancing = true
    instances = ["${aws_instance.WEBA.id}", "${aws_instance.WEBB.id}"]
    security_groups = ["${aws_security_group.LoadBalancerSG.id}"]
}
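
To reach the deployed application you will want the DNS name AWS assigns to this load balancer. It is not part of the configuration shown above, but a small output block like the following sketch would let you retrieve it with terraform output after deployment:

output "elb_dns_name" {
  value = "${aws_elb.lb.dns_name}"
}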

Defining Security Groups

As mentioned before, we will be attaching security groups to the defined compute instances and to the load balancer, to only allow the specified incoming (ingress) and outgoing (egress) traffic. Besides using CIDR blocks, we can also reference other security groups as the source or destination of allowed traffic.

You also see an example of separate rules attached to a security group (the aws_security_group_rule resource). This prevents Terraform from running into trouble when it can’t figure out which resource to create first (a circular reference between security groups). If this happens, we simply define those rules as separate resources and attach them to the security groups. This allows Terraform to first create the security groups without these rules and attach the rules afterwards, solving the circular reference.

Also notice that the instances in the private subnets use the bastion hosts as proxies to access the internet, via a Squid proxy (port 3128) which will be installed on the bastion hosts by Ansible.

resource "aws_security_group" "LoadBalancerSG"
{
    name = "LoadBalancerSG"
    vpc_id = "${aws_vpc.robertverdam.id}"
    description = "Security group for load-balancers"
    ingress {
        from_port = 80
        to_port = 80
        protocol = "TCP"
        cidr_blocks = ["0.0.0.0/0"]
        description = "Allow incoming HTTP traffic from anywhere"
    }
    ingress {
        from_port = 443
        to_port = 443
        protocol = "TCP"
        cidr_blocks = ["0.0.0.0/0"]
        description = "Allow incoming HTTPS traffic from anywhere"
    }

    egress {
        from_port = 80
        to_port = 80
        protocol = "TCP"
        security_groups = ["${aws_security_group.WebserverSG.id}"]
    }

    egress {
        from_port = 443
        to_port = 443
        protocol = "TCP"
        security_groups = ["${aws_security_group.WebserverSG.id}"]
    }

    tags {
        Name = "SG-Loadbalancer"
    }
}
resource "aws_security_group" "WebserverSG"
{
    name = "WebserverSG"
    vpc_id = "${aws_vpc.robertverdam.id}"
    description = "Security group for webservers"
    ingress {
        from_port = 22
        to_port = 22
        protocol = "TCP"
        security_groups = ["${aws_security_group.bastionhostSG.id}"]
        description = "Allow incoming SSH traffic from Bastion Host"
    }
  ingress {
      from_port = -1
      to_port = -1
      protocol = "ICMP"
      security_groups = ["${aws_security_group.bastionhostSG.id}"]
      description = "Allow incoming ICMP from management IPs"
  }
    egress {
        from_port = 0
        to_port = 0
        protocol = "-1"
        self = true
    }
    egress {
        from_port = 3128
        to_port = 3128
        protocol = "TCP"
        security_groups = ["${aws_security_group.bastionhostSG.id}"]
    }
    tags {
        Name = "SG-WebServer"
    }
}

resource "aws_security_group" "bastionhostSG" {
  name = "BastionHostSG"
  vpc_id = "${aws_vpc.robertverdam.id}"
  description = "Security group for bastion hosts"
  ingress {
      from_port = 22
      to_port = 22
      protocol = "TCP"
      cidr_blocks = ["${var.mgmt_ips}"]
      description = "Allow incoming SSH from management IPs"
  }

  ingress {
      from_port = -1
      to_port = -1
      protocol = "ICMP"
      cidr_blocks = ["${var.mgmt_ips}"]
      description = "Allow incoming ICMP from management IPs"
  }
  egress {
      from_port = 0
      to_port = 0
      cidr_blocks = ["0.0.0.0/0"]
      protocol = "-1"
      description = "Allow all outgoing traffic"
  }
  tags {
      Name = "SG-Bastionhost"
  }
}

resource "aws_security_group_rule" "lbhttpaccess" {
    security_group_id = "${aws_security_group.WebserverSG.id}"
    type = "ingress"
    from_port = 80
    to_port = 80
    protocol = "TCP"
    source_security_group_id = "${aws_security_group.LoadBalancerSG.id}"
    description = "Allow Squid proxy access from loadbalancers"
}

resource "aws_security_group_rule" "lbhttpsaccess" {
    security_group_id = "${aws_security_group.WebserverSG.id}"
    type = "ingress"
    from_port = 443
    to_port = 443
    protocol = "TCP"
    source_security_group_id = "${aws_security_group.LoadBalancerSG.id}"
    description = "Allow Squid proxy access from loadbalancers"
}

resource "aws_security_group_rule" "webproxyaccess" {
    security_group_id = "${aws_security_group.bastionhostSG.id}"
    type = "ingress"
    from_port = 3128
    to_port = 3128
    protocol = "TCP"
    source_security_group_id = "${aws_security_group.WebserverSG.id}"
    description = "Allow Squid proxy access from webservers"
}

resource "aws_security_group_rule" "dbproxyaccess" {
    security_group_id = "${aws_security_group.bastionhostSG.id}"
    type = "ingress"
    from_port = 3128
    to_port = 3128
    protocol = "TCP"
    source_security_group_id = "${aws_security_group.DBServerSG.id}"
    description = "Allow Squid proxy access from database servers"
}

resource "aws_security_group" "DBServerSG" {
    name = "DBServerSG"
    vpc_id = "${aws_vpc.robertverdam.id}"
    description = "Security group for database servers"
    ingress {
        from_port = 3306
        to_port = 3306
        protocol = "TCP"
        security_groups = ["${aws_security_group.WebserverSG.id}"]
        description = "Allow incoming MySQL traffic from webservers"
    }
    ingress {
        from_port = 22
        to_port = 22
        protocol = "TCP"
        security_groups = ["${aws_security_group.bastionhostSG.id}"]
        description = "Allow incoming SSH traffic from Bastion Host"
    }
  ingress {
      from_port = -1
      to_port = -1
      protocol = "ICMP"
      security_groups = ["${aws_security_group.bastionhostSG.id}"]
      description = "Allow incoming ICMP from management IPs"
  }
    egress {
        from_port = 3128
        to_port = 3128
        protocol = "TCP"
        security_groups = ["${aws_security_group.bastionhostSG.id}"]
    }
    tags {
        Name = "SG-DBServer"
    }
}

Defining SSH key-pair

To allow access to the different compute instances, we also define an AWS key pair which lets us log in to the compute hosts via SSH (via a bastion host if needed). First, we have Terraform generate a private key, and we use its public part to create the AWS key pair attached to the instances. Finally, we tell Terraform to output the (sensitive) private key on demand (by using the terraform output command).

resource "tls_private_key" "privkey"
{
    algorithm = "RSA" 
    rsa_bits = 4096
}
resource "aws_key_pair" "keypair"
{
    key_name = "${var.key_name}"
    public_key = "${tls_private_key.privkey.public_key_openssh}"
}
output "private_key" {
  value = "${tls_private_key.privkey.private_key_pem}"
  sensitive = true
}
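
To write the generated key to disk for SSH and Ansible to use, something like the following works after deployment (on newer Terraform versions you may need terraform output -raw private_key instead; SSH also requires restrictive permissions on the key file):

terraform output private_key > privkey.pem
chmod 600 privkey.pem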

Define Terraform variables

As you have seen, we used some variables in the code snippets. Of course, these variables have to be defined. Each variable is defined by name (e.g. region) and can be given a default value (e.g. eu-west-1) that is used when no value is provided at deployment time:

variable "region"
{
    default = "eu-west-1"
}

variable "aws_ubuntu_awis"
{
    default = {
        "eu-west-1" = "ami-2a7d75c0"
    }
}

variable "environment"{
    type = "string"
}

variable "application" {
    type = "string"
}

variable "key_name" {
    type = "string"
    default = "ec2key"
}

variable "mgmt_ips" {
    default = ["0.0.0.0/0"]
}
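
Instead of passing -var parameters on every run, the variables without a default can also be supplied in a terraform.tfvars file, which Terraform picks up automatically. A minimal sketch with purely illustrative values:

environment = "DEV"
application = "demoapp"
mgmt_ips    = ["203.0.113.10/32"]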

Define Ansible inventory

To be able to use these hosts in Ansible, I’ve installed the Terraform provider from https://github.com/nbering/terraform-provider-ansible, which together with the Ansible dynamic inventory script from https://github.com/nbering/terraform-inventory/ allows the information from the Terraform state to be used as input for Ansible.

We define the information to pass to Ansible as follows: the inventory_hostname which Ansible will use to identify the instance, the group of hosts the instance belongs to (security / web / db) and some variables to help Ansible find the correct Python interpreter and the key to use when connecting to the instance.

ansible_ssh_common_args is used to tell Ansible to set up an SSH proxy connection through the bastion host in the same AZ, which can then connect to the compute instances (and for security reasons, in my definition only my homelab public IP has access to these bastion hosts).

You also see privkey.pem being used below; you can write it out from the Terraform state with terraform output private_key > privkey.pem and then adjust the path in the definitions below:

resource "ansible_host" "BASTIONHOSTA" {
  inventory_hostname = "${aws_instance.BASTIONHOSTA.public_dns}"
  groups = ["security"]
  vars {
      ansible_user = "ubuntu"
      ansible_ssh_private_key_file="/opt/terraform/aws_basic/privkey.pem"
      ansible_python_interpreter="/usr/bin/python3"
  }
}

resource "ansible_host" "BASTIONHOSTB" {
  inventory_hostname = "${aws_instance.BASTIONHOSTB.public_dns}"
  groups = ["security"]
  vars {
      ansible_user = "ubuntu"
      ansible_ssh_private_key_file="/opt/terraform/aws_basic/privkey.pem"
      ansible_python_interpreter="/usr/bin/python3"
  }
}


resource "ansible_host" "WEB001" {
  inventory_hostname = "${aws_instance.WEBA.private_dns}"
  groups = ["web"]
  vars {
      ansible_user = "ubuntu"
      ansible_ssh_private_key_file="/opt/terraform/aws_basic/privkey.pem"
      ansible_python_interpreter="/usr/bin/python3"
      ansible_ssh_common_args= " -o ProxyCommand=\"ssh -i /opt/terraform/aws_basic/privkey.pem -W %h:%p -q ubuntu@${aws_instance.BASTIONHOSTA.public_dns}\""
      proxy = "${aws_instance.BASTIONHOSTA.private_ip}"
  }
}

resource "ansible_host" "WEB002" {
  inventory_hostname = "${aws_instance.WEBB.private_dns}"
  groups = ["web"]
  vars {
      ansible_user = "ubuntu"
      ansible_ssh_private_key_file="/opt/terraform/aws_basic/privkey.pem"
      ansible_python_interpreter="/usr/bin/python3"
      ansible_ssh_common_args= " -o ProxyCommand=\"ssh -i /opt/terraform/aws_basic/privkey.pem -W %h:%p -q ubuntu@${aws_instance.BASTIONHOSTB.public_dns}\""
      proxy = "${aws_instance.BASTIONHOSTB.private_ip}"
  }
}

resource "ansible_host" "SQL001" {
  inventory_hostname = "${aws_instance.SQLA.private_dns}"
  groups = ["db"]
  vars {
      ansible_user = "ubuntu"
      ansible_ssh_common_args= " -o ProxyCommand=\"ssh -i /opt/terraform/aws_basic/privkey.pem -W %h:%p -q ubuntu@${aws_instance.BASTIONHOSTA.public_dns}\""
      ansible_ssh_private_key_file="/opt/terraform/aws_basic/privkey.pem"
      ansible_python_interpreter="/usr/bin/python3"
      proxy = "${aws_instance.BASTIONHOSTA.private_ip}"
  }
}

resource "ansible_host" "SQL002" {
  inventory_hostname = "${aws_instance.SQLB.private_dns}"
  groups = ["db"]
  vars {
      ansible_user = "ubuntu"
      ansible_ssh_common_args= " -o ProxyCommand=\"ssh -i /opt/terraform/aws_basic/privkey.pem -W %h:%p -q ubuntu@${aws_instance.BASTIONHOSTB.public_dns}\""
      ansible_ssh_private_key_file="/opt/terraform/aws_basic/privkey.pem"
      ansible_python_interpreter="/usr/bin/python3"
      proxy = "${aws_instance.BASTIONHOSTB.private_ip}"
  }
}
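
Once the infrastructure has been applied, Ansible can read these hosts straight from the Terraform state through the dynamic inventory script. A rough sketch, assuming the inventory script from the repository mentioned above is saved as terraform.py next to your playbooks and marked executable:

ansible all -i terraform.py -m ping            # quick connectivity check against every Terraform-defined host
ansible-playbook -i terraform.py site.yml      # site.yml is a placeholder for your own playbook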

Deploy application

After defining everything we need to deploy the application infrastructure, we simply run terraform init in the folder where the .tf files were created, to initialize the Terraform environment:

[Screenshot: terraform init output]

Then it is time to run terraform plan, which shows what Terraform has to do to deploy the infrastructure we defined. Terraform will ask you to input any variables you didn’t provide on the command line (with the parameter -var <var-name>=<value>).
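
For example, a plan run that supplies both required variables up front could look like this (the values are purely illustrative):

terraform plan -var environment=DEV -var application=demoapp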

[Screenshot: terraform plan output]

If the output of the command looks OK, we can deploy the application to AWS by typing terraform apply (followed by the same variables) and answering the confirmation prompt with yes (or by using the command-line parameter -auto-approve).

[Screenshot: terraform apply output, ending with "Apply complete!"]

Destroying the complete application infrastructure again is just as simple: replace the apply command with destroy and your environment gets cleaned up nicely.
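
For example (again with illustrative variable values):

terraform destroy -var environment=DEV -var application=demoapp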

[Screenshot: terraform destroy output]

End of part 1

I hope this first post in the blog post series gives you a good insight into how to define an application infrastructure in Terraform to deploy an application on AWS.

In the next post we will see how to use the information from Terraform in Ansible to do further configuration of the compute instances.

Any comments, questions, or tips & tricks are welcome, so please feel free to contact me!
