AWS EC2 Provisioning and Configuration with Ansible

This guide builds a two-role playbook setup that provisions and then configures one or more AWS EC2 instances.

The two roles this process will use are:

  • aws-provision - role to provision a new EC2 instance
  • aws-configure - role to configure the newly provisioned EC2 instance

There will be no static inventory file. This method uses a dynamic inventory provided by the ec2.py script.

The ec2.py script relies on boto3 to handle authentication credentials for multiple environments.

All secrets will be vaulted or secured as much as possible to minimize exposure.

This guide should work on macOS, Linux, and Unix, possibly with small differences that are beyond its scope.

This guide targets AWS Cloud. The same general method should apply to other environments using their own inventory gathering methods/scripts.

We assume that the VPC will only be accessible by public IP addresses.

Boto - AWS Credentials Management

boto3 will be used to handle AWS credentials for multiple accounts (environments). Use pip to install it:

pip install boto3

Create ~/.boto if it doesn't exist, and edit it.

Alternative location: ~/.aws/credentials

Add or edit this section:

[default]
aws_access_key_id=......
aws_secret_access_key=......
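
Since the same workstation may manage several AWS accounts, additional named profiles can be defined next to [default]. A minimal sketch, assuming a second account called production:

[production]
aws_access_key_id=......
aws_secret_access_key=......

The profile to use is selected later with ec2.py's --boto-profile option or the AWS_PROFILE environment variable.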

Dynamic Inventory with ec2.py

Download ec2.py and ec2.ini files:

  • ec2.py download link
  • ec2.ini download link

Copy these two files into inventory/ and make ec2.py executable: chmod +x ec2.py
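
For reference, one possible way to fetch them from the shell. The URLs below assume the files are still published under the historical contrib/inventory path of the Ansible GitHub repository; adjust them if the files have moved:

curl -o inventory/ec2.py https://raw.githubusercontent.com/ansible/ansible/stable-2.9/contrib/inventory/ec2.py
curl -o inventory/ec2.ini https://raw.githubusercontent.com/ansible/ansible/stable-2.9/contrib/inventory/ec2.ini
chmod +x inventory/ec2.py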

Your inventory/ directory should contain these three files:

.
├── aws
├── ec2.py
└── ec2.ini

AWS Credentials

The aws-provision role uses the ec2 Ansible module, which relies on role variables where the AWS credentials are defined.

The aws-configure role relies on the Boto profile credentials instead.

To test the ec2.py tool use this:

./ec2.py --boto-profile default --list

This command takes a moment to complete, especially on the initial run when no valid cache exists yet. Once it completes without errors, it should output a JSON-formatted list of facts about the AWS account environment. This is the dynamic inventory which Ansible will consume in place of a static inventory.
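
A heavily abbreviated sketch of what that output looks like (the IP address and most host variables are omitted or illustrative):

{
  "_meta": {
    "hostvars": {
      "203.0.113.10": {
        "ec2_tag_Name": "awesome_aws_project",
        "ec2_region": "us-east-1"
      }
    }
  },
  "tag_Name_awesome_aws_project": [
    "203.0.113.10"
  ]
}

Note the tag_Name_* group name; the configuration playbook later targets exactly this group.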

AWS SSH keys

Download your existing SSH key from AWS and copy it to: ~/.ssh/.

To avoid any security-related warnings, change its permissions: chmod 400 ~/.ssh/key.pem

Environment Variables

This one shouldn't need to change:

export ANSIBLE_HOSTS=~/ansible/inventory/ec2.py

Set this according to the location of your AWS SSH key:

export AWS_SSH_KEY=~/.ssh/aws-keypair.pem
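
As an alternative to exporting these variables in every new shell, the equivalent settings can live in an ansible.cfg file at the project root. A minimal sketch; the remote_user value assumes the CentOS AMI used later in this guide:

[defaults]
inventory = ./inventory/ec2.py
private_key_file = ~/.ssh/aws-keypair.pem
remote_user = centos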

Directory and Files Tree Layout

Directory tree layout for this project. The purpose of each file is explained below:

.
├── aws-configure.yml               # configure EC2 instance playbook
├── aws-provision.yml               # provision EC2 instance playbook
├── inventory
│   ├── aws                         # dummy development inventory file
│   ├── ec2.ini                     # ec2.py development configuration file
│   └── ec2.py                      # ec2.py AWS dynamic inventory file
└── roles
    ├── aws-configure
    │   ├── defaults
    │   │   └── main.yml
    │   ├── files
    │   ├── handlers
    │   │   └── main.yml
    │   ├── tasks
    │   │   └── main.yml
    │   ├── templates
    │   └── vars
    │       └── main.yml
    └── aws-provision
        ├── defaults
        │   └── main.yml
        ├── tasks
        │   └── main.yml
        └── vars
            └── aws-vars.yml        # AWS development environment variables

Dummy Static Inventory Files

Ansible still needs a dummy local inventory file for the AWS dynamic host inventory to work properly with Boto.

The content of both the development and production inventory files should be the same, as follows:

In inventory/aws:

[local]
localhost ansible_connection=local ansible_python_interpreter=python
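
A quick sanity check that the dummy inventory is wired up correctly (this only touches localhost, not AWS):

ansible -i inventory/aws local -m ping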

Example Playbook to Provision a New EC2 Instance

aws-provision.yml playbook

This is the playbook in the root directory which will be executed:

ansible-playbook aws-provision.yml

Content of aws-provision.yml:

---
- name: Provision EC2 instance
  hosts: localhost
  connection: local
  gather_facts: False
  become: False
  roles:
    - { role: "aws-provision" }

aws-provision role

This is the role which the above playbook includes.

Content of roles/aws-provision/tasks/main.yml:

---

# This task includes a custom role variable file based on the environment
- name: Include environment specific variables
  include_vars:
    file: aws-vars.yml

# Task to provision the EC2 instance
- name: Provision EC2 instance
  ec2:
    aws_access_key: "{{ access_key_id }}"
    aws_secret_key: "{{ secret_access_key }}"
    key_name: "{{ aws_ssh_key }}"
    instance_type: "{{ aws_instance_type }}"
    region: "{{ aws_region }}"
    image: "{{ ami_image_id }}"
    group_id: "{{ security_group_id }}"
    wait: yes
    wait_timeout: 300
    count: 1
    vpc_subnet_id: "{{ aws_vpc_subnet_id }}"
    instance_tags:
      Env: "{{ aws_env_tag }}"
      Name: "{{ aws_name_tag }}"
      Owner: "{{ aws_username_tag }}"
      Purpose: "{{ aws_purpose_tag }}"
      Project: "{{ aws_project_tags }}"
  register: ec2

# The next two tasks add the newly created instance to a temporary in-memory inventory
# in preparation for the next role: aws-configure
- name: Add the new instance public IP address to its dynamic host group
  add_host:
    hostname: "{{ item.public_ip }}"
    groups: custom_ec2
  with_items: "{{ ec2.instances }}"

# Finally we let the playbook wait until the new instance accepts SSH connections
- name: Wait for the new node to boot up
  wait_for:
    host: "{{ item.public_ip }}"
    port: 22
    delay: 30
    timeout: 120
    state: started
  with_items: "{{ ec2.instances }}"
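If the add_host or wait_for steps misbehave, a throwaway debug task can be appended to the same tasks file to inspect what the ec2 module registered. A minimal sketch:

# Optional: print the registered instance data for troubleshooting
- name: Show provisioned instance details
  debug:
    var: ec2.instances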

Below is the corresponding variables file used by the `aws-provision` role.

Content of `roles/aws-provision/vars/aws-vars.yml`:

---

# AWS DEV access keys. Vault this file (see the ansible-vault note below).
access_key_id: XXXXXXXXX
secret_access_key: XXXXXXXXXXX

aws_env_tag: development
aws_username_tag: username
aws_ssh_key: aws-key # use a pre-existing key created in the AWS account
aws_name_tag: awesome_aws_project # DO NOT CHANGE THIS TAG, THE CONFIGURATION ROLE DEPENDS ON IT
aws_purpose_tag: web-server
aws_project_tags: Web Server Node
aws_instance_type: t1.micro
aws_region: us-east-1
ami_image_id: ami-3b823144 # official CentOS 7.5 AWS AMI
security_group_id: sg-a12abc34
aws_vpc_subnet_id: subnet-a12abc34
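
Since this file contains real AWS keys, it should be vaulted as stated at the beginning of this guide, for example:

ansible-vault encrypt roles/aws-provision/vars/aws-vars.yml

Once encrypted, playbook runs that load this file need --ask-vault-pass or --vault-password-file.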

Provision the AWS EC2 Instance

Set the EC2_INI_PATH to where ec2.ini is located on your workstation. ec2.py needs the ini file to properly generate the dynamic inventory from AWS.

This has to be done prior to any Ansible AWS deployment.

Exporting AWS_SSH_KEY again is optional if it was already set as shown in Environment Variables.

export EC2_INI_PATH=/Users/username/ansible/inventory/ec2.ini
export AWS_SSH_KEY=/Users/username/.ssh/aws-keypair.pem
AWS_PROFILE=default ansible-playbook -i inventory/ec2.py aws-provision.yml --key-file=$AWS_SSH_KEY

To check the current value of EC2_INI_PATH, run:

set | grep EC2

Configure the Newly Provisioned EC2 instance

The main difference between the provisioning and configuration roles is that the configuration role doesn't rely on AWS secret key variables to authenticate; it uses Boto profiles instead. There is no need to store or vault any secrets here, which is a good thing.

In the provisioning role we set the variable aws_name_tag: awesome_aws_project and used it to tag the newly provisioned host. The configuration role uses that tag to tell Ansible which hosts it should deploy to; ec2.py exposes it as the inventory group tag_Name_awesome_aws_project.
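
To confirm that this group actually exists in the dynamic inventory before running the configuration playbook, the output of ec2.py can be checked directly (using the default profile as before):

./inventory/ec2.py --boto-profile default --list | grep tag_Name_awesome_aws_project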

aws-configure.yml playbook

---

- name: Configure EC2
  hosts: tag_Name_awesome_aws_project # target only hosts with this tag
  remote_user: centos
  become: True
  gather_facts: True
  roles:
    - { role: "aws-configure" }

aws-configure role

This role includes everything that needs to be done on the newly provisioned EC2 instance. Below is just an example of roles/aws-configure/tasks/main.yml:

---

# Install and configure all required software

- name: Set selinux into 'disabled' mode.
  selinux:
    policy: targeted
    state: disabled
  become: yes

- name: Install Apache
  yum:
    name: httpd
    state: present
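
The directory tree above also reserves roles/aws-configure/handlers/main.yml. A minimal sketch of a handler that could live there; any task that changes the Apache configuration would trigger it with notify: restart httpd:

---
# roles/aws-configure/handlers/main.yml
- name: restart httpd
  service:
    name: httpd
    state: restarted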

Run Ansible Deployment

List All Target Hosts

AWS_PROFILE=default ansible-playbook -i inventory/ec2.py aws-configure.yml --key-file=$AWS_SSH_KEY --ask-vault-pass --list-hosts

Run a simulated dry-run to make sure all is well:

AWS_PROFILE=default ansible-playbook -i inventory/ec2.py aws-configure.yml --key-file=$AWS_SSH_KEY --ask-vault-pass --check

Deploy:

AWS_PROFILE=default ansible-playbook -i inventory/ec2.py aws-configure.yml --key-file=$AWS_SSH_KEY --ask-vault-pass

Extra Tips and Tricks

Ping host via dynamic inventory

This is a basic but very useful Ansible trick using the ping module. It's handy when troubleshooting connection issues.

AWS_PROFILE=default ansible -i inventory/ec2.py -m ping tag_Name_awesome_aws_project -u centos --key-file=$AWS_SSH_KEY
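
A successful run returns something like the following for each matched host (the IP address here is illustrative):

203.0.113.10 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}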

Dynamic AWS inventory in Ansible

The ec2.py script doesn't have too many options, but these main ones are enough to get everything out of it:

--list - generates a JSON-formatted inventory output, exactly what Ansible needs. If run manually on the CLI, it prints that list to stdout, which is very useful for manual inspection.

--host IP_ADDRESS - generates a detailed list of information for that particular host only.

./ec2.py --boto-profile default --list
./ec2.py --boto-profile default --host 4.3.2.2

Troubleshooting

TBD

Checklist

TBD

Video Tutorial

TBD

