AWS EC2 Provisioning and Configuration with Ansible for Development and Production Environments

This is a slightly more complex version of the /ansible/ansible-dynamic-inventory-aws/ guide.

This is a two-role playbook that provisions and configures AWS EC2 instances for both the development and production environments. It assumes two separate VPCs, one for production and the other for development. However, with some small tweaks it may also work in a single VPC.

The two roles this process will use are:

  • aws-provision
  • aws-configure

There will be no static inventory file. This method uses dynamic inventory provided by the ec2.py script.

The ec2.py script will rely on boto3 to use multi-environment authentication credentials.

All secrets will be vaulted or secured as much as possible to minimize exposure.

This guide should work on macOS, Linux, and other Unix-like systems, possibly with small differences that are beyond the scope of this guide.

This guide targets AWS Cloud. The same general method should apply to other environments using their own inventory gathering methods/scripts.

We assume that the production VPC instances will only be accessible by private IP addresses. The development VPC instances will be accessible by public IP addresses.

Boto - AWS Credentials Management

Boto will be used to handle AWS credentials for multiple accounts (environments). Use pip to install it.

pip install boto3
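A quick way to confirm the module is importable after installation:

python -c "import boto3; print(boto3.__version__)"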

Create it if it doesn't exist and edit ~/.boto

Alternative location: ~/.aws/credentials

Add or edit this section:

[profile production]
aws_access_key_id=......
aws_secret_access_key=......

[profile development]
aws_access_key_id=......
aws_secret_access_key=......
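If you use the alternative ~/.aws/credentials location instead of ~/.boto, the section headers drop the profile keyword:

[production]
aws_access_key_id=......
aws_secret_access_key=......

[development]
aws_access_key_id=......
aws_secret_access_key=......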

ec2.py and Dynamic Inventory

Download the ec2.py and ec2.ini files:

  • ec2.py download link
  • ec2.ini download link

Copy these two files into inventory/ and make ec2.py executable: chmod +x ec2.py

Copy the ec2.ini file to two separate files in the same directory and rename them like this:

├── inventory
│   ├── dev-ec2.ini                 # ec2.py development configuration file
│   └── prod-ec2.ini                # ec2.py production configuration file

These files will be 99.99% identical. In most cases ec2.ini doesn't have to be modified at all; the default values will work just fine.

However, this scenario requires the ec2.py script to access two separate environments, one of which only allows access via private IP addresses. This is the reason why the prod-ec2.ini file has to be modified to reflect that requirement.

In prod-ec2.ini change this line:

destination_variable = public_dns_name

to

destination_variable = private_dns_name
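A quick way to confirm that this is the only difference between the two configuration files:

diff inventory/dev-ec2.ini inventory/prod-ec2.ini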

AWS Credentials

The aws-provision role uses the ec2 Ansible module, which relies on role variables in which the AWS credentials are defined.

The aws-configure role uses Boto profiles instead.

To test individual environment profiles with the ec2.py tool use this:

./ec2.py --boto-profile development --list

or

./ec2.py --boto-profile production --list

These commands take a moment to complete, especially on the initial run without an existing valid cache. Once completed without errors they should output a JSON-formatted list of facts about the AWS account environment. This is the dynamic inventory which Ansible will consume in place of a static inventory.
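For reference, the output shape looks roughly like the excerpt below; the group names and addresses are purely illustrative:

{
  "tag_Name_awesome_aws_project": [
    "10.0.1.23"
  ],
  "us-east-1": [
    "10.0.1.23"
  ],
  "_meta": {
    "hostvars": {
      "10.0.1.23": {
        "ec2_instance_type": "t1.micro",
        "ec2_private_ip_address": "10.0.1.23"
      }
    }
  }
}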

AWS SSH key

Download your existing SSH key from AWS and copy it to ~/.ssh/. To avoid any security-related warnings, change its permissions: chmod 400 ~/.ssh/key.pem
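To verify the key works before involving Ansible, a direct SSH test against a known instance address can help (the centos user and the address below are placeholders):

ssh -i ~/.ssh/key.pem centos@10.0.1.23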

Environment Variables

This one shouldn't change at any time: export ANSIBLE_HOSTS=~/ansible/inventory/ec2.py

Set this according to the location of your production ssh key: export PROD_AWS_SSH_KEY=~/.ssh/production-ec2-keypair.pem

Set this according to the location of your non-production ssh key: export DEV_AWS_SSH_KEY=~/.ssh/development-ec2-keypair.pem
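To avoid re-exporting these in every new shell, the exports can be appended to your shell profile, for example:

cat >> ~/.bash_profile <<'EOF'
export ANSIBLE_HOSTS=~/ansible/inventory/ec2.py
export PROD_AWS_SSH_KEY=~/.ssh/production-ec2-keypair.pem
export DEV_AWS_SSH_KEY=~/.ssh/development-ec2-keypair.pem
EOF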

Directory and Files Tree Layout

Directory tree layout for this project. The purpose and details of each of them will be explained below:

.
├── aws-configure.yml           # configure EC2 instance playbook
├── aws-provision.yml           # provision EC2 instance playbook
├── inventory
│   ├── aws-dev                     # dummy development inventory file
│   ├── aws-prod                    # dummy production inventory file
│   ├── dev-ec2.ini                 # ec2.py development configuration file
│   ├── ec2.py                      # ec2.py AWS dynamic inventory file
│   └── prod-ec2.ini                # ec2.py production configuration file
└── roles
    ├── aws-configure
    │   ├── defaults
    │   │   └── main.yml
    │   ├── files
    │   ├── handlers
    │   │   └── main.yml
    │   ├── tasks
    │   │   └── main.yml
    │   ├── templates
    │   └── vars
    │       └── main.yml
    └── aws-provision
        ├── defaults
        │   └── main.yml
        ├── tasks
        │   └── main.yml
        └── vars
            ├── aws-development-vars.yml    # AWS development environment variables
            └── aws-production-vars.yml     # AWS production environment variables

Dummy Static Inventory Files

Ansible still needs a dummy local inventory file for the AWS dynamic host inventory to work properly with Boto.

The content of both development and production inventory files should be the same and as follows:

In inventory/aws-dev:

[local]
localhost ansible_connection=local ansible_python_interpreter=python

In inventory/aws-prod:

[local]
localhost ansible_connection=local ansible_python_interpreter=python

Playbook to Provision a New EC2 Instance

aws-provision.yml playbook

Content of aws-provision.yml:

---
- name: Provision EC2 instance
  hosts: localhost
  connection: local
  gather_facts: False
  become: False
  roles:
    - { role: "aws-provision" }

aws-provision role

Content of roles/aws-provision/tasks/main.yml:

---

# This task includes a custom role variable file based on the environment
- name: Include environment specific variables
  include_vars:
    file: aws-{{ env }}-vars.yml

# Task to provision the EC2 instance
- name: Provision EC2 instance
  ec2:
    aws_access_key: "{{ access_key_id }}"
    aws_secret_key: "{{ secret_access_key }}"
    key_name: "{{ aws_ssh_key }}"
    instance_type: "{{ aws_instance_type }}"
    region: "{{ aws_region }}"
    image: "{{ ami_image_id }}"
    group_id: "{{ security_group_id }}"
    wait: yes
    wait_timeout: 300
    count: 1
    vpc_subnet_id: "{{ aws_subnet_id }}"
    instance_tags:
      Env: "{{ aws_env_tag }}"
      Name: "{{ aws_name_tag }}"
      Owner: "{{ aws_username_tag }}"
      Purpose: "{{ aws_purpose_tags }}"
      Project: "{{ aws_project_tags }}"
  register: ec2

# The next two tasks add the newly created instance to a temporary in-memory inventory group
# in preparation for the next role: aws-configure
- name: Add the new instance private IP address to its dynamic production host group
  add_host:
    hostname: "{{ item.private_ip }}"
    groups: custom_ec2
  with_items: "{{ ec2.instances }}"
  when: env == "production"

- name: Add the new instance public IP address to its dynamic development host group
  add_host:
    hostname: "{{ item.public_ip }}"
    groups: custom_ec2
  with_items: "{{ ec2.instances }}"
  when: env == "development"

# Finally we let the playbook wait until the provisioning process is completed
- name: Wait for the new production node to boot up
  wait_for:
    host: "{{ item.private_ip }}"
    port: 22
    delay: 30
    timeout: 120
    state: started
  with_items: "{{ ec2.instances }}"
  when: env == "production"

- name: Wait for the new development node to boot up
  wait_for:
    host: "{{ item.public_ip }}"
    port: 22
    delay: 30
    timeout: 120
    state: started
  with_items: "{{ ec2.instances }}"
  when: env == "development"
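The custom_ec2 in-memory group created by add_host only exists for the duration of a single ansible-playbook run. One way to take advantage of it, sketched below and not part of the layout above, is to chain the provisioning and configuration plays in a single playbook so the freshly provisioned host is configured immediately:

---
- name: Provision EC2 instance
  hosts: localhost
  connection: local
  gather_facts: False
  become: False
  roles:
    - { role: "aws-provision" }

- name: Configure the instance provisioned above
  hosts: custom_ec2
  user: centos
  become: True
  gather_facts: True
  roles:
    - { role: "aws-configure" }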

Below is its corresponding variables file. This one is an example for the development node. A similar vars file has to be created for the production environment with its own set of values.

Name the two files aws-development-vars.yml and aws-production-vars.yml so that they match the env value passed on the command line and resolve correctly in the include_vars task above.

Content of roles/aws-provision/vars/aws-development-vars.yml:

---

# AWS DEV Access Keys
access_key_id: XXXXXXXXX
secret_access_key: XXXXXXXXXXX

aws_username_tag: username
aws_ssh_key: ec2-keypair # name of a pre-existing key pair created in the AWS account
aws_env_tag: development
aws_name_tag: awesome_aws_project # DO NOT CHANGE THIS TAG, CONFIGURATION ROLE DEPENDS ON IT
aws_purpose_tags: web server testing
aws_project_tags: Web Server Node
aws_instance_type: t1.micro
aws_region: us-east-1
ami_image_id: ami-3b823144 # official CentOS 7.5 AWS AMI
security_group_id: sg-a12abc34
aws_subnet_id: subnet-a12abc34
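Since this file contains AWS access keys, it is a good candidate for encryption with Ansible Vault; if you do encrypt it, add --ask-vault-pass to the provisioning commands below:

ansible-vault encrypt roles/aws-provision/vars/aws-development-vars.yml
ansible-vault encrypt roles/aws-provision/vars/aws-production-vars.yml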

Provision AWS EC2 instance

DEVELOPMENT

Set the EC2_INI_PATH to where dev-ec2.ini and prod-ec2.ini are located on your workstation. ec2.py needs the ini file to properly generate the dynamic inventory from AWS.

This has to be done prior to any Ansible AWS deployment.

Exporting the SSH key variable is only needed here if it wasn't already done as shown in Environment Variables.

export EC2_INI_PATH=/Users/username/ansible/inventory/dev-ec2.ini
AWS_PROFILE=development ansible-playbook -i inventory/ec2.py aws-provision.yml --key-file=$DEV_AWS_SSH_KEY -e "env=development"

To check what the current EC2_INI_PATH variable value is run this:

set | grep EC2

PRODUCTION

Set the EC2_INI_PATH to where dev-ec2.ini and prod-ec2.ini are located on your workstation. ec2.py needs the ini file to properly generate the dynamic inventory from AWS.

This has to be done prior to any Ansible AWS deployment.

Exporting the SSH key variable is only needed here if it wasn't already done as shown in Environment Variables.

export EC2_INI_PATH=/Users/username/ansible/inventory/prod-ec2.ini
AWS_PROFILE=production ansible-playbook -i inventory/ec2.py aws-provision.yml --key-file=$PROD_AWS_SSH_KEY -e "env=production"

Configure the Newly Provisioned EC2 instance

The main difference between the provisioning and configuration roles is that the configuration role doesn't rely on AWS secret key variables to authenticate; it uses Boto profiles instead. There is no need to store or vault any AWS secrets for it, which is a good thing.

In the provisioning role we set a variable aws_name_tag: awesome_aws_project. The provisioning role uses that variable to tag the newly provisioned host. We are going to use this tag in the configuration role to tell Ansible which hosts it should deploy to.

aws-configure.yml playbook

---

- name: Configure EC2
  hosts: "{{ variable_host }}" # e.g. tag_Name_awesome_aws_project, passed in with -e variable_host=...
  user: centos
  become: True
  gather_facts: True
  roles:
    - { role: "aws-configure" }

aws-configure role

This role includes everything that needs to be done on the newly provisioned EC2 instance. Below is an example of roles/aws-configure/tasks/main.yml:

---

# Install and configure all required software

- name: Set selinux into 'disabled' mode.
  selinux:
    policy: targeted
    state: disabled
  become: yes

- name: Install Apache
  yum:
    name: httpd
    state: present
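If Apache should also be running and enabled at boot, a task along these lines could be appended (a minimal sketch, not part of the original role):

- name: Start and enable Apache
  service:
    name: httpd
    state: started
    enabled: yes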

Run Ansible Deployment

Development

List all target hosts:

AWS_PROFILE=development ansible-playbook -i inventory/ec2.py aws-configure.yml --key-file=$DEV_AWS_SSH_KEY -e "variable_host=tag_Name_awesome_aws_project" -e "env=development" --ask-vault-pass --list-hosts

Run a dry run (check mode) to make sure all is well:

AWS_PROFILE=development ansible-playbook -i inventory/ec2.py aws-configure.yml --key-file=$DEV_AWS_SSH_KEY -e "variable_host=tag_Name_awesome_aws_project" -e "env=development" --ask-vault-pass --check

Deploy:

AWS_PROFILE=development ansible-playbook -i inventory/ec2.py aws-configure.yml --key-file=$DEV_AWS_SSH_KEY -e "variable_host=tag_Name_awesome_aws_project" -e "env=development" --ask-vault-pass

Production

List all target hosts:

AWS_PROFILE=production ansible-playbook -i inventory/ec2.py aws-configure.yml --key-file=$PROD_AWS_SSH_KEY -e "variable_host=tag_Name_awesome_aws_project" -e "env=production" --ask-vault-pass --list-hosts

Run a dry run (check mode) to make sure all is well:

AWS_PROFILE=production ansible-playbook -i inventory/ec2.py aws-configure.yml --key-file=$PROD_AWS_SSH_KEY -e "variable_host=tag_Name_awesome_aws_project" -e "env=production" --ask-vault-pass --check

Deploy:

AWS_PROFILE=production ansible-playbook -i inventory/ec2.py aws-configure.yml --key-file=$PROD_AWS_SSH_KEY -e "variable_host=tag_Name_awesome_aws_project" -e "env=production" --ask-vault-pass

Extra Tips and Tricks

Ping host via dynamic inventory

This is a very useful basic Ansible troubleshooting technique using the ping module; it's handy for diagnosing connection issues.

DEVELOPMENT

AWS_PROFILE=development ansible -i inventory/ec2.py -m ping tag_Name_awesome_aws_project -u centos --key-file=$DEV_AWS_SSH_KEY

PRODUCTION

AWS_PROFILE=production ansible -i inventory/ec2.py -m ping tag_Name_awesome_aws_project -u centos --key-file=$PROD_AWS_SSH_KEY

Dynamic AWS inventory in Ansible

The ec2.py script doesn't have too many options, but these main ones are enough to get everything out of it.

--list - generates a JSON-formatted inventory output, exactly what Ansible needs. If run manually on the CLI it prints that list to the terminal, which is very useful for manual inspection.

--host IP_ADDRESS - generates a detailed information list for that particular host only.

DEVELOPMENT

./ec2.py --boto-profile development --list
./ec2.py --boto-profile development --host 172.10.10.10

PRODUCTION

./ec2.py --boto-profile production --list
./ec2.py --boto-profile production --host 4.3.2.2
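The raw --list output can be long; piping it through a JSON pretty-printer makes manual inspection easier:

./ec2.py --boto-profile production --list | python -m json.tool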

