Create a Static Website at Amazon AWS using Python, S3, and Route53

Benefits of Static Web Sites

The most important benefits of static web sites are speed and security.

Fully static web sites can be delivered via content delivery networks (CDN), making them load much faster to the end user. CDN benefits include caching for your site objects and edge locations (servers) that are geographically closer to your end users.

Static web sites are also generally more secure than dynamic sites, because there are significantly fewer attack vectors for flat HTML pages than for an application server. Popular content management systems such as WordPress and Drupal have had exploits affecting millions of web sites, and new exploits for popular application servers and CMS software are routinely discovered.

In one example, a critical vulnerability in Drupal was announced that impacted an estimated 12 million websites, and any site not patched within 7 hours of the announcement was considered compromised.

A critical vulnerability in WordPress could be considered even more serious, as WordPress powers an estimated 25% of all websites globally.

Static vs Dynamic

A static web site is a website made up of “flat” or “stationary” files that are delivered to the end user exactly as stored. Most commonly, static websites are a collection of plain .html files.

Dynamic web sites, on the other hand, are generated for the user on the fly by an application server. An example of a dynamic web site would be any WordPress site.

Setting up a Static Web Site at Amazon AWS with Python

To set up a static web site at AWS, we’ll use two of their services: S3 and Route53. S3 is an object storage service, and it is where we’ll store the files that make up our site. Route53 is the AWS domain name system (DNS) service, which lets you host your domain name with AWS. CloudFront, the AWS content delivery network (CDN) with edge locations distributed throughout the world, can optionally be layered on top so your end users load your site as fast as possible, but it isn’t required for the basic setup in this guide.

I’ll be using Python to demonstrate creating the static site using AWS services. AWS provides a Getting Started: Static Website Hosting tutorial if you want to manually perform these steps.

Prerequisite: Install AWS Python SDK

The examples use the AWS Python SDK (boto3) to build the static site, so you’ll want to install it along with the AWS CLI.

For most people, this will typically be:

pip install boto3 awscli

Once installed, create an AWS configuration file with your credentials and default settings, such as your preferred region:

aws configure
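Running aws configure prompts for an access key, secret key, default region, and output format, then writes two small files under ~/.aws. They look roughly like the following (the key values below are placeholders):

```
# ~/.aws/credentials
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY

# ~/.aws/config
[default]
region = us-east-1
output = json
```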

Step 1: Create S3 Bucket for a static web site

Our new static web site will be stored in AWS S3 so we’ll need to create a new bucket for the website’s files.
Creating an S3 bucket with Python is simple:


# Load the AWS boto3 module
import boto3
# We'll need json to serialize the bucket policy
import json

# Specify the region to create the AWS resources in.
# Note: for regions other than us-east-1, create_bucket also needs
# a CreateBucketConfiguration={'LocationConstraint': region} argument.
DEFAULT_REGION = "us-east-1"

# Create S3 resource
s3 = boto3.resource('s3')

# Set a bucket name, which will be our domain name.
bucket_name = "demo123456.com"

# Create a new S3 bucket, using a demo bucket name
s3.create_bucket(Bucket=bucket_name)

# We need to set an S3 policy for our bucket to
# allow anyone read access to our bucket and files.
# If we do not set this policy, people will not be
# able to view our S3 static web site.
# Note: the policy Sid must be alphanumeric (no spaces).
bucket_policy = s3.BucketPolicy(bucket_name)
policy_payload = {
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowPublicReadAccess",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::%s/*" % (bucket_name)
  }]
}

# Add the policy to the bucket
response = bucket_policy.put(Policy=json.dumps(policy_payload))

# Next we'll set a basic configuration for the static
# website.
website_payload = {
    'ErrorDocument': {
        'Key': 'error.html'
    },
    'IndexDocument': {
        'Suffix': 'index.html'
    }
}

# Make our new S3 bucket a static website
bucket_website = s3.BucketWebsite(bucket_name)

# And configure the static website with our desired index.html
# and error.html configuration.
bucket_website.put(WebsiteConfiguration=website_payload)
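Once the website configuration is in place, S3 serves the bucket at a region-specific website endpoint, so you can preview the site before any DNS is set up. Here's a small convenience helper (my own, not a boto3 API call) that builds that URL:

```python
def s3_website_endpoint(bucket_name, region):
    """Build the S3 static website endpoint URL for a bucket.

    Note: a convenience helper, not part of boto3. Most regions use
    the dashed "s3-website-<region>" endpoint format shown here.
    """
    return "http://%s.s3-website-%s.amazonaws.com" % (bucket_name, region)

# Preview URL for our demo bucket
print(s3_website_endpoint("demo123456.com", "us-east-1"))
```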

Step 1.1: Create S3 Bucket for redirecting www.domain.com to root domain.com

I like to redirect “www” to the root domain, such that www.domain.com will redirect to domain.com for the user. For this to work in AWS, we’ll need to create a second bucket for the www hostname, and set the bucket to redirect.


# Load the AWS boto3 module
import boto3

# Create S3 resource
s3 = boto3.resource('s3')

# The root domain bucket from Step 1, and the new
# www bucket that will redirect to it
bucket_name = "demo123456.com"
redirect_bucket_name = "www.demo123456.com"

# Create the www redirect bucket
s3.create_bucket(Bucket=redirect_bucket_name)

# The S3 settings to redirect to the root domain,
# in this case the bucket_name variable from above.
redirect_payload = {
        'RedirectAllRequestsTo': {
            'HostName': bucket_name,
            'Protocol': 'http'
        }
}

# Make our redirect bucket a S3 website
bucket_website_redirect = s3.BucketWebsite(redirect_bucket_name)

# Set the new bucket to redirect to our root domain
# with the redirect payload above.
bucket_website_redirect.put(WebsiteConfiguration=redirect_payload)

Step 2: Create a Route53 Hosted zone for the domain

Now that we have created an S3 bucket and web site for our new domain, we need to add the domain to the AWS DNS service, Route53.
In Route53, we will create a new hosted zone for our domain name, then add DNS records pointing the root domain.com and the www.domain.com redirect at their corresponding S3 buckets.


# Load the AWS boto3 module
import boto3
# We'll want to generate a unique UUID later
import uuid

# Specify the region to create the AWS resources in
DEFAULT_REGION = "us-east-1"

# A mapping of hosted zone IDs to AWS regions.
# Apparently this data is not accessible via API
# http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
# https://forums.aws.amazon.com/thread.jspa?threadID=116724
S3_HOSTED_ZONE_IDS = {
    'us-east-1': 'Z3AQBSTGFYJSTF',
    'us-west-1': 'Z2F56UZL2M1ACD',
    'us-west-2': 'Z3BJ6K6RIION7M',
    'ap-south-1': 'Z11RGJOFQNVJUP',
    'ap-northeast-1': 'Z2M4EHUR26P7ZW',
    'ap-northeast-2': 'Z3W03O7B5YMIYP',
    'ap-southeast-1': 'Z3O0J2DXBE1FTB',
    'ap-southeast-2': 'Z1WCIGYICN2BYD',
    'eu-central-1': 'Z21DNDUVLTQW6Q',
    'eu-west-1': 'Z1BKCTXD74EZPE',
    'sa-east-1': 'Z7KQH4QJS55SO',
    'us-gov-west-1': 'Z31GFT0UA1I2HV',
}

# Load Route53 module
route53 = boto3.client('route53')

# Define the domain name we want to add in Route53
domain = "demo123456.com"
www_redirect = "www.demo123456.com"

# We need to create a unique string to identify the request.
# A UUID4 string is an easy to use unique identifier.
caller_reference_uuid = str(uuid.uuid4())

# Create the new hosted zone in Route53
response = route53.create_hosted_zone(
    Name=domain,
    CallerReference=caller_reference_uuid,
    HostedZoneConfig={'Comment': domain, 'PrivateZone': False})

# Get the newly created hosted zone id, used for
# adding our DNS records pointing to our S3 buckets
hosted_zone_id = response['HostedZone']['Id']

# Add DNS records for domain.com and www.domain.com.
# Both buckets are in the same region, so they share the
# same S3 website endpoint as their DNS alias target.
website_dns_name = "s3-website-%s.amazonaws.com" % (DEFAULT_REGION)
redirect_dns_name = website_dns_name

# Here is the payload we will send to Route53
# We are creating two DNS records:
# one for domain.com to point to our S3 bucket,
# and a second for www.domain.com to point to our
# S3 redirect bucket, to redirect to domain.com
change_batch_payload = {
    'Changes': [
        {
            'Action': 'UPSERT',
            'ResourceRecordSet': {
                'Name': domain,
                'Type': 'A',
                'AliasTarget': {
                    'HostedZoneId': S3_HOSTED_ZONE_IDS[DEFAULT_REGION],
                    'DNSName': website_dns_name,
                    'EvaluateTargetHealth': False
                }
            }
        },
        {
            'Action': 'UPSERT',
            'ResourceRecordSet': {
                'Name': www_redirect,
                'Type': 'A',
                'AliasTarget': {
                    'HostedZoneId': S3_HOSTED_ZONE_IDS[DEFAULT_REGION],
                    'DNSName': redirect_dns_name,
                    'EvaluateTargetHealth': False
                }
            }
        }
    ]
}

# Create the DNS records payload in Route53
response = route53.change_resource_record_sets(
    HostedZoneId=hosted_zone_id, ChangeBatch=change_batch_payload)


Add Content to S3 Bucket

After creating our S3 buckets and adding our domain name to Route53, we have a few remaining tasks to make our new static web site live.
First, we need to add an HTML page to our S3 bucket for our visitors to see.
We can do this with Python, or we can use a static site generator such as Jekyll to build our static site.
Here’s an example using Python:


# Load the AWS boto3 module
import boto3

# Set our domain name and bucket name
# I use the domain as the bucket name,
# such that they are the same
domain = "demo123456.com"

s3 = boto3.resource('s3')

# Very simple, basic HTML code for our landing page
payload = ("<html><head><title>%s</title></head>"
           "<body><h1>%s</h1></body></html>"
           % (domain, domain))

# Create the index.html page in S3
s3.Object(domain, 'index.html').put(Body=payload, ContentType='text/html')
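The website configuration from Step 1 also points at an error.html document that we haven't created yet. Here's a sketch that builds a minimal error page the same way; the build_error_page helper and its placeholder wording are my own, not part of the original setup:

```python
def build_error_page(domain):
    """Build minimal HTML for the error document (placeholder content)."""
    return ("<html><head><title>%s - Page Not Found</title></head>"
            "<body><h1>Page not found</h1></body></html>" % domain)

def upload_error_page(domain):
    """Upload error.html to the bucket, mirroring the index.html call above."""
    import boto3
    s3 = boto3.resource('s3')
    s3.Object(domain, 'error.html').put(
        Body=build_error_page(domain), ContentType='text/html')
```

Call upload_error_page(domain) after creating the bucket, so visitors hitting a missing key see the error page instead of a raw S3 XML error.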

Change nameservers at your domain registrar to point to AWS Route53

Since we are using Route53 for our DNS services, we will need to update the nameservers at our domain name registrar to point to Route53.
This step varies from registrar to registrar and will most likely be a manual process, because most registrars do not offer API access.
On the AWS Route53 side, you’ll need to get the nameservers assigned to your new hosted zone, then go to your registrar, such as Namecheap or GoDaddy, and update your nameserver records there. Your registrar will have documentation on how to perform the necessary updates in their dashboard.
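The assigned nameservers come back in the DelegationSet of the hosted zone. Here's a small sketch of pulling them out of the create_hosted_zone response from Step 2; the sample response below is abbreviated and its nameserver values are made up:

```python
def extract_nameservers(hosted_zone_response):
    """Pull the assigned nameservers out of a create_hosted_zone response."""
    return hosted_zone_response['DelegationSet']['NameServers']

# Abbreviated response shape with made-up example values
sample_response = {
    'HostedZone': {'Id': '/hostedzone/EXAMPLE'},
    'DelegationSet': {'NameServers': [
        'ns-2048.awsdns-64.com',
        'ns-2049.awsdns-65.net',
    ]},
}
print(extract_nameservers(sample_response))
```

These are the values to enter as custom nameservers in your registrar's dashboard.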

ever2text: Migrating off of Evernote to Dropbox with Plain Text Files

tl;dr

ever2text converts Evernote exports of notebooks and files to pure text files, stored in Dropbox.

Why migrate off of Evernote?

Evernote recently announced they were limiting the free tier of accounts to only 2 devices, and also substantially increasing the pricing of all of their plans. You can read their announcement here.

I used Evernote on 5 devices, but I only used Evernote to take text notes. I didn’t use Evernote for images, PDFs, email support, or any of the other features included in the now-costlier paid plans. I just used plain text notes.

I started working on a plan to move to a different platform for my notes. My goal was to get plain text files that I could use with any text editor, and on any device.

Migrating to Dropbox

I heavily use Dropbox, so Dropbox seemed like the best place to store all of my text notes. With Dropbox, I could access my notes on all of my devices and store them permanently in plain text files.

The Dropbox app for phones and tablets (both Android and iOS) also lets you create, search, and edit plain text files. This means I have no need for any additional apps on my phone and tablet devices for taking notes.

The problem with Evernote became: how do I export all of my Evernote notes and notebooks to plain text files?

Exporting Evernote notes and notebooks to plain text files with ever2text

After some googling, I came across ever2simple which was created to export Evernote ENEX export files to SimpleNote format. It could also export to a directory, but it left me desiring more functionality, such as preserving the note’s title in the generated filename, and an option to export to pure text rather than a markdown formatted text file.

I decided to create a pure Evernote to text file converter: ever2text.

Ever2text converts Evernote ENEX exports to files, preserving the note title in the file name and offering a choice of raw text or markdown formatting.
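At its core, the conversion parses the ENEX XML, pulls out each note's title, and strips the ENML markup down to plain text. Here's a rough, simplified sketch of that idea; this is not the actual ever2text code, and the enex_notes_to_text helper and sample fragment are illustrative only:

```python
import re
import xml.etree.ElementTree as ET

def enex_notes_to_text(enex_xml):
    """Yield (title, plain_text) pairs from an ENEX export string."""
    root = ET.fromstring(enex_xml)
    for note in root.iter('note'):
        title = note.findtext('title', default='Untitled')
        content = note.findtext('content') or ''
        # ENML note content is itself markup; crudely strip the tags
        text = re.sub(r'<[^>]+>', '', content)
        yield title, text.strip()

# Tiny hand-made ENEX fragment for illustration
sample = ('<en-export><note><title>Groceries</title>'
          '<content>&lt;en-note&gt;milk and eggs&lt;/en-note&gt;</content>'
          '</note></en-export>')
for title, text in enex_notes_to_text(sample):
    print(title, '->', text)
```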

I did come across an issue with exporting Evernote notebooks. It appears you are unable to export all notebooks into a single export file. I had a lot of notebooks I had created over the years, so I ended up having to export each Evernote notebook individually into separate ENEX export files. You can export an Evernote notebook by right-clicking on the notebook and then clicking on “Export Notes”. Make sure to select the option to export as a file in ENEX format.

You can find ever2text on my GitHub: https://github.com/nicholaskuechler/ever2text

Build Rackspace Cloud Servers with Ansible in a Virtualenv

Ansible, Rackspace Cloud Servers, and Virtualenv

Ansible is a powerful tool for managing and configuring servers. It can also be used to create new cloud servers and then configure them automatically for us. For system administrators, engineers, and developers, ansible can save a lot of time and hassle by automating many routine tasks.

Here’s a tutorial guide for using ansible to build and configure new Rackspace Cloud servers.

I will walk through setting up a new python virtual environment, installing pyrax and ansible inside of the virtualenv, and then writing an ansible playbook and role for building a Rackspace Cloud server, setting up DNS entries for the new server, and installing our favorite packages on the new server.

You can find the complete source for this ansible Rackspace Cloud Servers example on my github: ansible-rackspace-servers-example

Set up your virtualenv with pyrax and ansible

First, let’s create a new python virtualenv for use with Rackspace Cloud. The new virtualenv contains everything needed to build new Rackspace Cloud servers using the cloud servers API.

Create a new python virtualenv for use with Rackspace Cloud

virtualenv rackspace_cloud

The output from the virtualenv command should look something like this:

nick@mbp: ~/virtualenvs$ virtualenv rackspace_cloud
New python executable in rackspace_cloud/bin/python
Installing setuptools, pip...done.

Once the virtualenv has been created, let’s activate it so we can work with it:

source rackspace_cloud/bin/activate

This is what it will look like once activated on my machine. Notice how the prompt has changed to include the virtualenv name “rackspace_cloud” I have just activated.

nick@mbp: ~/virtualenvs$ source rackspace_cloud/bin/activate
(rackspace_cloud)nick@mbp: ~/virtualenvs$

Install the pyrax python package for working with Rackspace Cloud

Next we’ll need to install the pyrax package which is needed to work with the Rackspace Cloud API in python. Installing pyrax will also install all of the other prerequisite packages needed by pyrax, such as python-novaclient, python-keystoneclient, rackspace-novaclient, and rackspace-auth-openstack.

pip install pyrax

Here’s what the output looks like on my workstation: https://gist.github.com/nicholaskuechler/603ee13bd74944866650

Install Ansible into the virtualenv

Now that we have pyrax installed, let’s go ahead and install ansible into our new virtualenv.

pip install ansible

Here’s the output from my workstation: https://gist.github.com/nicholaskuechler/8f4812226c6a908dbc9a

Configuration files for pyrax and ansible

Now that we have our python virtualenv set up and the pyrax and ansible packages installed, we need to create a couple configuration files for use with pyrax and ansible.

Configuration file for pyrax

We will place our Rackspace Cloud username and API key in the pyrax configuration, which pyrax will use when making API calls to Rackspace Cloud products.

Create a new file: ~/.rackspace_cloud_credentials

Here’s what my .rackspace_cloud_credentials pyrax config looks like:

[rackspace_cloud]
username = mycloudusername
api_key = 0123456789abcde

Configuration file for ansible

An ansible configuration file is not necessary as the defaults will work without any issues. There’s one setting in particular, though, which I find helpful when creating a lot of new Rackspace cloud servers via ansible. This setting is the SSH setting StrictHostKeyChecking=no, which means I don’t have to manually confirm a host key when trying to SSH to it. This is a real advantage during automation and playbook runs, since we may not want to have a human confirm the SSH connection.

The default ansible configuration file lives at: ~/.ansible.cfg

Here’s my .ansible.cfg with the option to disable SSH’s strict host key checking:

[ssh_connection]
ssh_args = -o StrictHostKeyChecking=no

Ansible playbook to create Rackspace Cloud Servers

Now that we’ve installed and configured virtualenv, pyrax, and ansible, we’re ready to write an ansible playbook for building new Rackspace cloud servers.

First let’s make a new directory for our ansible playbook where the inventory file, playbooks, and ansible roles will live:

mkdir ansible-rackspace-servers

Use a dynamic ansible inventory in virtualenv with Rackspace Cloud

Many people use static ansible inventory files when working with their servers. Since we’re using a cloud provider, let’s use a dynamic inventory. But there’s a catch due to installing ansible in a virtualenv: we need to specify the path to the virtualenv python, because ansible will default to the system python rather than the python installed in the virtualenv.

In the ansible-rackspace-servers directory we’ve just created, make a new file for the ansible virtualenv inventory file. I named my virtualenv inventory file: virtualenv-inventory.yml

Here are the contents of my ansible virtualenv inventory file: virtualenv-inventory.yml

[localhost]
localhost ansible_connection=local ansible_python_interpreter=/Users/nick/virtualenvs/rackspace_cloud/bin/python

The important configuration piece here is ansible_python_interpreter where we specify the full path to our “rackspace_cloud” virtualenv’s python binary. Without the ansible_python_interpreter setting, ansible will try to use the default system python which is probably /usr/bin/python, and as such it will not find our virtualenv packages like pyrax and ansible that we installed in our virtualenv named rackspace_cloud.
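An easy way to find the exact path to use for ansible_python_interpreter is to ask the active interpreter itself. With the virtualenv activated, this one-liner prints the path:

```python
import sys

# With the virtualenv active, this prints the full path to its python
# binary, e.g. /Users/nick/virtualenvs/rackspace_cloud/bin/python
print(sys.executable)
```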

A basic playbook to create a new Rackspace Cloud server

Reading an ansible playbook is fairly straightforward, but writing them can be a little trickier. I’ve created a simple playbook you can use to build new cloud servers in the Rackspace Cloud.

Let’s create a new file for our playbook. I’ve called mine build-cloud-server.yml. Here’s my playbook to create a new Rackspace cloud instance: build-cloud-server.yml

---
- name: Create a Rackspace Cloud Server
  hosts: localhost
  user: root
  connection: local
  gather_facts: False

  vars:
   # this is the name we will see in the Rackspace Cloud control panel, and 
   # this will also be the hostname of our new server
   - name: admin.enhancedtest.net
   # the flavor specifies the server size of our instance
   - flavor: performance1-1
   # the image specifies the linux distro we will use for our server
   # note: this image UUID is for Ubuntu 14.10 PVHVM
   - image: 0766e5df-d60a-4100-ae8c-07f27ec0148f
   # the region is the Rackspace Cloud region we want to build our server in
   - region: DFW
   # credentials specifies the location of our pyrax configuration file we created earlier
   - credentials: /Users/nick/.rackspace_cloud_credentials
   # I like to drop in my SSH pub key automatically when I create the server
   # so that I can ssh in without a password
   # Note: Instead of dropping in a file, you can use a stored Rackspace key
   # when you build the server by editing key_name below to your key's name.
   - files:
        /root/.ssh/authorized_keys: /Users/nick/.ssh/id_rsa.pub

  tasks:
    - name: Rackspace cloud server build request
      local_action:
        module: rax
        credentials: "{{ credentials }}"
        name: "{{ name }}"
        flavor: "{{ flavor }}"
        image: "{{ image }}"
        region: "{{ region }}"
        # key_name - specifies the Rackspace cloud key to add to the server upon creation
        #key_name: my_rackspace_key
        files: "{{ files }}"
        # wait - specifies that we should wait until the server is fully created before proceeding
        wait: yes
        # state - present means we want our server to exist
        state: present
        # specify that we want both a public network (public IPv4) and
        # a private network (10. aka service net)
        networks:
          - private
          - public
        # group - specifies metadata to add to the new server with a server group
        #group: deploy
      # register is an ansible term to save the output in to a variable named rax
      register: rax

The ansible task to create a new Rackspace cloud server is pretty straightforward. It uses the rax module included by default with ansible. There are many different options we can work with when creating a new server, but I’ve included the important ones in this playbook.

Run the ansible playbook to create new Rackspace Cloud Server

Let’s run our new playbook and see what happens! To run the playbook, we need to call ansible-playbook along with our dynamic inventory file virtualenv-inventory.yml and our playbook yaml build-cloud-server.yml. Note: make sure you’re still in the virtualenv you’ve created for this project!

ansible-playbook -i virtualenv-inventory.yml build-cloud-server.yml -vvvv

Notes:

  • -i virtualenv-inventory.yml specifies the inventory file to use. The “-i” denotes the inventory option.
  • -vvvv specifies that we want really verbose ansible-playbook output, so that we can debug and troubleshoot if something goes wrong.

Here’s the output when running the build-cloud-server.yml playbook. Note I have obfuscated some of the output as it contains personal information: https://gist.github.com/nicholaskuechler/85aa9e21998bea87a3c5

Success! We’ve just created a new Rackspace Cloud server using an ansible playbook inside of a virtualenv.

Kicking it up a notch: Create a server, add DNS records, and Install some packages

When creating a new server, there are other tasks I have to complete, such as adding new forward and reverse DNS entries and installing a set of base packages I like to use on my new servers.

We can do all of these tasks in ansible!

Use Ansible to create Rackspace Cloud DNS entries for a cloud server

Now that we have a play for building a new server, let’s add a couple tasks for automatically creating DNS A and PTR records for the newly created server. Another benefit is that if a DNS record already exists, the ansible rax_dns_record module can update it to the new IP address of a newly created cloud server.

Ansible task to add dynamic instance to dynamic group inventory

First we need to have ansible add the new cloud server to an ansible group that we will be using for future tasks in our playbook:

    - name: Add new cloud server to host group
      local_action:
        module: add_host
        hostname: "{{ item.name }}"
        ansible_ssh_host: "{{ item.rax_accessipv4 }}"
        ansible_ssh_user: root
        groupname: deploy
      with_items: rax.instances

This task looks at the “rax” variable we registered with ansible and adds the new server to ansible’s internal list of groups and hosts through ansible’s add_host module. The group we’re adding this newly created cloud server to is named “deploy”. We specify the hostname, IP address, and username of the new server when adding the host to the ansible group.

Ansible task to create a Rackspace Cloud DNS A record

Next, we’ll create a task to add a forward DNS A record in Rackspace Cloud DNS for our new server:

    - name: DNS - Create A record
      local_action:
        module: rax_dns_record
        credentials: "{{ credentials }}"
        domain: "{{ domain }}"
        name: "{{ name }}"
        data: "{{ item.rax_accessipv4 }}"
        type: A
      with_items: rax.instances
      register: a_record

Again we’re using the registered “rax” variable, which contains our newly created instance. We’ve also added a new variable “{{ domain }}” that we’ll specify in the “vars” section of our playbook. The domain variable specifies the domain name we want to ensure exists (creating it if necessary) in Rackspace Cloud DNS.

Add domain to the list of vars at the top of the playbook:

   - domain: enhancedtest.net

But wait! What if the domain does not yet exist in Rackspace Cloud DNS? The DNS A record addition will fail!

Ansible task to create a new domain in Rackspace Cloud DNS

Let’s make a new task to create the domain if it does not yet exist in Rackspace Cloud DNS. This task will come before the tasks that add any DNS records.

    - name: DNS - Domain create request
      local_action:
        module: rax_dns
        credentials: "{{ credentials }}"
        name: "{{ domain }}"
        email: "{{ dns_email }}"
      register: rax_dns

Rackspace Cloud DNS requires an email address to use as the admin contact for a domain’s DNS. Let’s add a new variable “dns_email” to our “vars” list at the top of our playbook. Note: the email address doesn’t actually have to exist or work; we just need to specify one to create the new domain.

Add dns_email to the list of vars at the top of the playbook:

   - dns_email: admin@enhancedtest.net

Ansible task to create a new Rackspace Cloud PTR record

Now that we have created the new DNS domain and A record for our server, let’s create a DNS PTR record aka reverse DNS for the new server.

    - name: DNS - Create PTR record
      local_action:
        module: rax_dns_record
        credentials: "{{ credentials }}"
        server: "{{ item.id }}"
        name: "{{ name }}"
        region: "{{ region }}"
        data: "{{ item.rax_accessipv4 }}"
        type: PTR
      with_items: rax.instances
      register: ptr_record

Ansible playbook to create new Rackspace Cloud Server and add DNS entries

Here’s what the playbook looks like now with our Rackspace Cloud DNS additions to create a new domain, add an A record for the new server, and add a PTR record.

---
- name: Create a Rackspace Cloud Server
  hosts: localhost
  user: root
  connection: local
  gather_facts: False

  vars:
   # Rackspace Cloud DNS settings:
   # domain - the domain we will be using for the new server
   - domain: enhancedtest.net
   # dns_email - admin email address for the new domain name
   - dns_email: admin@enhancedtest.net
   # this is the name we will see in the Rackspace Cloud control panel, and
   # this will also be the hostname of our new server
   - name: admin.enhancedtest.net
   # the flavor specifies the server size of our instance
   - flavor: performance1-1
   # the image specifies the linux distro we will use for our server
   # note: this image UUID is for Ubuntu 14.10 PVHVM
   - image: 0766e5df-d60a-4100-ae8c-07f27ec0148f
   # the region is the Rackspace Cloud region we want to build our server in
   - region: DFW
   # credentials specifies the location of our pyrax configuration file we created earlier
   - credentials: /Users/nick/.rackspace_cloud_credentials
   # I like to drop in my SSH pub key automatically when I create the server
   # so that I can ssh in without a password
   # Note: Instead of dropping in a file, you can use a stored Rackspace key
   # when you build the server by editing key_name below to your key's name.
   - files:
        /root/.ssh/authorized_keys: /Users/nick/.ssh/id_rsa.pub

  tasks:
    - name: Rackspace cloud server build request
      local_action:
        module: rax
        credentials: "{{ credentials }}"
        name: "{{ name }}"
        flavor: "{{ flavor }}"
        image: "{{ image }}"
        region: "{{ region }}"
        # key_name - specifies the Rackspace cloud key to add to the server upon creation
        #key_name: my_rackspace_key
        files: "{{ files }}"
        # wait - specifies that we should wait until the server is fully created before proceeding
        wait: yes
        # state - present means we want our server to exist
        state: present
        # specify that we want both a public network (public IPv4) and
        # a private network (10. aka service net)
        networks:
          - private
          - public
        # group - specifies metadata to add to the new server with a server group
        #group: deploy
      # register is an ansible term to save the output in to a variable named rax
      register: rax

    - name: Add new cloud server to host group
      local_action:
        module: add_host
        hostname: "{{ item.name }}"
        ansible_ssh_host: "{{ item.rax_accessipv4 }}"
        ansible_ssh_user: root
        groupname: deploy
      with_items: rax.instances

    - name: DNS - Domain create request
      local_action:
        module: rax_dns
        credentials: "{{ credentials }}"
        name: "{{ domain }}"
        email: "{{ dns_email }}"
      register: rax_dns

    - name: DNS - Create A record
      local_action:
        module: rax_dns_record
        credentials: "{{ credentials }}"
        domain: "{{ domain }}"
        name: "{{ name }}"
        data: "{{ item.rax_accessipv4 }}"
        type: A
      with_items: rax.instances
      register: a_record

    - name: DNS - Create PTR record
      local_action:
        module: rax_dns_record
        credentials: "{{ credentials }}"
        server: "{{ item.id }}"
        name: "{{ name }}"
        region: "{{ region }}"
        data: "{{ item.rax_accessipv4 }}"
        type: PTR
      with_items: rax.instances
      register: ptr_record

Let’s give our updated playbook a test run and see what happens!

Run the playbook the same way as before. Note: I’m not including -vvvv this time as the output can be extremely verbose, but I always run it when testing, debugging and troubleshooting.

ansible-playbook -i virtualenv-inventory.yml build-cloud-server.yml

Output from the playbook: https://gist.github.com/nicholaskuechler/f6a01223252b89dea959

Success! Some interesting things to note:

  • The server already existed with the name we specified in the playbook, so a new cloud server was not created
  • The new DNS domain was added successfully
  • The new A record was added successfully
  • The new PTR record was added successfully

What happens if we run the playbook again? Will it create more DNS records again? Let’s give it a try. Here’s the output from my workstation: https://gist.github.com/nicholaskuechler/d2562ef1c780cdeffe14

Since the domain, the A record, and PTR record already existed in DNS, the tasks were marked as OK and no changes were made. If changes were made, the tasks would be denoted with “changed:” rather than “ok:” in the task output. You can also see the overall playbook tasks ok / changed / failed status in the play recap summary.

Use ansible to install base packages on a Rackspace Cloud server

Now that we have successfully created a new cloud server and performed routine system administrator tasks like setting up DNS entries, let’s go ahead and install our favorite packages on our new server. On every server I use, I want vim, git and screen to be available, so let’s make sure those packages are definitely installed.

There are two different ways we can accomplish the package installations:

1.) Add a new task to install a list of packages
2.) Add a new ansible role that installs base packages that we can reuse in other playbooks

I prefer option #2, which allows me to re-use my code in other playbooks. I install the same set of base packages on each new server I create, so this will definitely come in handy for me in the future.

But let’s cover both options!

Add a new ansible task to install a list of packages

Here’s how we can add a new task to our existing ansible playbook to run apt update and install a bunch of our favorite packages on our new Rackspace cloud server. The task loops through the list of packages specified by with_items and uses the apt module to install each item in the list.

- name: Install Packages
  hosts: deploy
  user: root
  gather_facts: True

  tasks:
  - name: Run apt update
    apt: update_cache=yes

  - name: install packages
    apt: pkg={{ item }} state=present
    with_items:
    - vim
    - git
    - screen
    tags:
    - packages

Important note: Since we’re using Ubuntu as our Linux distro, we need to use ansible’s apt module to install packages. If we were using CentOS or Fedora, we would use the yum module instead.
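
For example, on a CentOS or Fedora server the same task could be sketched with the yum module like this (untested here, since our build uses Ubuntu):

```yaml
- name: install packages
  yum: name={{ item }} state=present
  with_items:
    - vim
    - git
    - screen
  tags:
    - packages
```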

Note: The ansible host group we’re using is “deploy”, which we specified previously when adding the newly created cloud server to a dynamic, in-memory ansible host group. The important piece here is to make sure both group names match if you decide to use a different name!

Create a new ansible role to install a list of base packages

Using ansible roles to install our favorite packages is a slightly more complicated method than adding the install packages task to the existing playbook, but it is much more reusable and using roles is generally the preferred ansible way.

Make ansible role directories

First, we need to create a directory structure for our ansible roles. All ansible roles live in a “roles” subdirectory of the directory containing your playbook. Let’s call our new role “base”.

In our example, our playbook build-cloud-server.yml lives in the directory ansible-rackspace-servers, so from the ansible-rackspace-servers directory let’s make a new directory named “roles”:

mkdir roles

Then make the directory for our “base” role, along with the standard subdirectories that ansible roles use:

cd roles
mkdir -p base/{files,handlers,meta,templates,tasks,vars}

Note: We’re not going to be using all of these standard role directories like meta and vars at this time, but we’ll go ahead and create them now in case we expand our role in the future.
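
To double-check the layout, you can list the directories we just created. This sketch assumes you run it from the ansible-rackspace-servers directory:

```shell
# Recreate the role skeleton and list it to confirm the layout.
# Note: the {a,b} brace expansion requires bash, not plain sh.
mkdir -p roles/base/{files,handlers,meta,templates,tasks,vars}
find roles -type d | sort
# roles
# roles/base
# roles/base/files
# roles/base/handlers
# roles/base/meta
# roles/base/tasks
# roles/base/templates
# roles/base/vars
```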

Create package installation task in the new base role

With our directory structure set up for our new “base” role, we need to create a task in this role to install our favorite packages.

Create a new file in roles/base/tasks/ directory named main.yml. By default, all ansible roles will have a main.yml in the role’s tasks subdirectory.

Inside of main.yml, let’s write a simple task to install our packages. It’s going to look very similar to the method of adding the package install tasks directly in the playbook, but we won’t need to specify hosts, users, etc.

---
- name: Run apt update
  apt: update_cache=yes

- name: Install apt packages
  apt: pkg={{ item }} state=present
  with_items:
    - vim
    - git
    - screen
  tags:
    - packages

That’s it! We’ve just created a new role named “base” to install our favorite packages. The best part about this is that we can expand our base role to perform other tasks that need to run on all of our servers, such as adding an admin user or setting up ntp.
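
As a hypothetical sketch of expanding the role, an ntp setup added to roles/base/tasks/main.yml might look like this (the task names are made up for illustration):

```yaml
- name: Install ntp for time synchronization
  apt: pkg=ntp state=present

- name: Make sure ntp is running and starts on boot
  service: name=ntp state=started enabled=yes
```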

Using the newly created base role in our build cloud server playbook

With our base role created, we now need to modify our playbook to use the role to install our base configuration and packages.

In our playbook, we need to add:

- name: Install base packages to new cloud server
  hosts: deploy
  user: root
  gather_facts: True
  roles:
    - base

The “roles” section specifies which roles to run on the hosts specified in the dynamic “deploy” group. For example, if we had an nginx ansible role we wanted to use, we could easily add it to the list of roles to use, like this:

  roles:
    - base
    - nginx

We can also easily use the new “base” role we just created in other ansible playbooks.
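
For instance, a separate playbook targeting a hypothetical “webservers” group from your inventory could pull in the same role:

```yaml
---
- name: Apply base packages to existing web servers
  hosts: webservers   # hypothetical inventory group
  user: root
  gather_facts: True
  roles:
    - base
```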

Run the playbook with the new base role to install packages

Let’s run our playbook now using the new base role to install packages on our new cloud server. Here’s the ansible-playbook output from my test run: https://gist.github.com/nicholaskuechler/72ae99ef1acc85a712fe

Success! The “changed” status under “install apt packages” denotes our 3 favorite packages were installed, and the recap tells us there were no failures.

Final version of playbook to create a Rackspace Cloud Server and install packages

Here’s the final version of our playbook:

---
- name: Create a Rackspace Cloud Server
  hosts: localhost
  user: root
  connection: local
  gather_facts: False

  vars:
   # Rackspace Cloud DNS settings:
   # domain - the domain we will be using for the new server
   - domain: enhancedtest.net
   # dns_email - admin email address for the new domain name
   - dns_email: admin@enhancedtest.net
   # this is the name we will see in the Rackspace Cloud control panel, and
   # this will also be the hostname of our new server
   - name: admin.enhancedtest.net
   # the flavor specifies the server size of our instance
   - flavor: performance1-1
   # the image specifies the linux distro we will use for our server
   - image: 0766e5df-d60a-4100-ae8c-07f27ec0148f
   # the region is the Rackspace Cloud region we want to build our server in
   - region: DFW
   # credentials specifies the location of our pyrax configuration file we created earlier
   - credentials: /Users/nick/.rackspace_cloud_credentials
   # I like to drop in my SSH pub key automatically when I create the server
   # so that I can ssh in without a password
   # Note: Instead of dropping in a file, you can use a stored Rackspace key
   # when you build the server by editing key_name below to your key's name.
   - files:
        /root/.ssh/authorized_keys: /Users/nick/.ssh/id_rsa.pub

  tasks:
    - name: Rackspace cloud server build request
      local_action:
        module: rax
        credentials: "{{ credentials }}"
        name: "{{ name }}"
        flavor: "{{ flavor }}"
        image: "{{ image }}"
        region: "{{ region }}"
        # key_name - specifies the Rackspace cloud key to add to the server upon creation
        #key_name: my_rackspace_key
        files: "{{ files }}"
        # wait - specifies that we should wait until the server is fully created before proceeding
        wait: yes
        # state - present means we want our server to exist
        state: present
        # specify that we want both a public network (public IPv4) and
        # a private network (10. aka service net)
        networks:
          - private
          - public
        # group - specifies metadata to add to the new server with a server group
        #group: deploy
      # register is an ansible term to save the output in to a variable named rax
      register: rax

    - name: Add new cloud server to host group
      local_action:
        module: add_host
        hostname: "{{ item.name }}"
        ansible_ssh_host: "{{ item.rax_accessipv4 }}"
        ansible_ssh_user: root
        groupname: deploy
      with_items: rax.instances

    - name: DNS - Domain create request
      local_action:
        module: rax_dns
        credentials: "{{ credentials }}"
        name: "{{ domain }}"
        email: "{{ dns_email }}"
      register: rax_dns

    - name: DNS - Create A record
      local_action:
        module: rax_dns_record
        credentials: "{{ credentials }}"
        domain: "{{ domain }}"
        name: "{{ name }}"
        data: "{{ item.rax_accessipv4 }}"
        type: A
      with_items: rax.instances
      register: a_record

    - name: DNS - Create PTR record
      local_action:
        module: rax_dns_record
        credentials: "{{ credentials }}"
        server: "{{ item.id }}"
        name: "{{ name }}"
        region: "{{ region }}"
        data: "{{ item.rax_accessipv4 }}"
        type: PTR
      with_items: rax.instances
      register: ptr_record

- name: Install base packages to new cloud server
  hosts: deploy
  user: root
  gather_facts: True
  roles:
    - base

Source for ansible-rackspace-servers-example on GitHub

You can find the complete source for this ansible Rackspace cloud servers example on my github: ansible-rackspace-servers-example

Beef Stew Recipe for 3 quart Slow Cooker


I enjoy a nice, hot bowl of stew when it’s cold outside. Making your own beef stew is easy, cheap, and most importantly delicious.

Unfortunately for me, I only have a 3 quart slow cooker (Crock-Pot 3-Quart Manual Slow Cooker, Stainless Steel). The good: the slow cooker was free. The bad: most slow cooker recipes are for 6 quart slow cookers. After some experimentation, I’ve come up with a beef stew recipe I really enjoy for my 3 quart slow cooker. Note: If you have a 6 quart slow cooker, you can double all the ingredients.

The recipe makes about 3-4 servings. I like to serve it with some warm, crusty bread, like a warmed baguette.

Ingredients

  • 1 lb beef stew meat, cut into 1 inch cubes (My grocery store sells 1 pound packs of beef stew meat already cut into 1 inch cubes.)
  • 1 large potato, diced
  • 1/2 pound of baby carrots, cut in half crosswise
  • 1 stalk celery, chopped (I buy the pre-washed, pre-cut celery sticks and chop them up. I use about 5-6 celery sticks.)
  • 1/2 onion (large), or 1 small onion, chopped
  • 1/8 cup all-purpose (plain) flour
  • 1/4 teaspoon salt
  • 1/4 teaspoon ground black pepper
  • 1/4 teaspoon seasoned salt
  • 1 bay leaf
  • 1 cup beef broth
  • 1 clove garlic, minced
  • 1/2 teaspoon paprika
  • 1/2 teaspoon Worcestershire sauce

Directions

  1. Prep the meat
    • Place raw meat in slow cooker.
    • In a small bowl, mix together the flour, salt, pepper, and seasoned salt.
    • Pour the mixture over the meat, and stir to coat meat with flour mixture. I use my hands to coat the meat with the flour mixture.
  2. Prep the vegetables
    • Chop and dice the vegetables per the ingredients.
    • Put all of the vegetables in a large bowl, and mix them together.
    • Add bay leaf to vegetable mixture.
  3. Prep the broth
    • In a small bowl, mix the beef broth, Worcestershire sauce, paprika, and minced garlic together.
  4. Combine all the ingredients in the slow cooker
    • First, I add the broth mixture to the slow cooker.
    • Next, I slowly add the vegetables to the slow cooker and mix them with the meat and broth.
    • Once everything has been added, I give it all another stir to mix together.
  5. Start cooking
    • Cover the stew.
    • Cook on Low setting for 10 to 12 hours, or on High setting for 4 to 6 hours. Most times I cook on high for 5-6 hours.
  6. Enjoy your beef stew!

Favorite Quotes: Thomas Edison on Work, Success and Inventing

Thomas Edison Quotes on Work

Thomas Edison, inventor of the lightbulb

Opportunity is missed by most people because it is dressed in overalls and looks like work. – Thomas Edison

There is no substitute for hard work. – Thomas Edison

I never did anything by accident, nor did any of my inventions come by accident; they came by work. – Thomas Edison

Thomas Edison Quotes on Success

I have not failed. I’ve just found 10,000 ways that won’t work. – Thomas Edison

One might think that the money value of an invention constitutes its reward to the man who loves his work. But… I continue to find my greatest pleasure, and so my reward, in the work that precedes what the world calls success. – Thomas Edison

Many of life’s failures are people who did not realize how close they were to success when they gave up. – Thomas Edison

The three great essentials to achieve anything worth while are: Hard work, Stick-to-itiveness, and Common sense. – Thomas Edison

Thomas Edison Quotes on Inventing

Genius is one percent inspiration and ninety-nine percent perspiration. – Thomas Edison

To invent, you need a good imagination and a pile of junk. – Thomas Edison

Just because something doesn’t do what you planned it to do doesn’t mean it’s useless. – Thomas Edison

If we did all the things we are capable of, we would literally astound ourselves. – Thomas Edison