Create a Static Website at Amazon AWS using Python, S3, and Route53

Benefits of Static Web Sites

The most important benefits of static web sites are speed and security.

Fully static web sites can be delivered via a content delivery network (CDN), making them load much faster for the end user. CDN benefits include caching of your site's objects and edge locations (servers) that are geographically closer to your end users.

Static web sites are also generally more secure than dynamic sites, because there are significantly fewer attack vectors for flat HTML pages than for an application server. Popular content management systems such as WordPress and Drupal have had exploits affecting millions of web sites, and new exploits for popular application servers and CMS software are routinely discovered.

In one example, a critical vulnerability in Drupal was announced impacting 12 million websites, and any web site not patched within 7 hours was considered compromised.

A critical vulnerability in WordPress could be considered even more serious as WordPress powers an estimated 25% of all websites globally.

Static vs Dynamic

A static web site is a website made up of “flat” or “stationary” files that are delivered to the end user exactly as stored. Most commonly, static websites are a collection of plain .html files.

Dynamic web sites, on the other hand, are generated for the user on the fly by an application server. An example of a dynamic web site would be any WordPress site.

Setting up a Static Web Site at Amazon AWS with Python

To set up a static web site at AWS, we’ll use 2 of their services: S3 and Route53. S3 is an object storage service, and this is where we’ll store the files that comprise our site. Route53 is the AWS domain name system (DNS) service, which lets you host your domain name with AWS. (CloudFront, the AWS content delivery network (CDN), can optionally be placed in front of the site later; its edge locations are distributed throughout the world to help your end users load your site as fast as possible, but it isn’t required for this tutorial.)

I’ll be using Python to demonstrate creating the static site using AWS services. AWS provides a Getting Started: Static Website Hosting tutorial if you want to manually perform these steps.

Prerequisite: Install AWS Python SDK

The examples use the AWS Python SDK, boto3, to build the static site, so you’ll want to install it.

For most people, this will typically be:

pip install boto3 awscli

Once installed, create an AWS configuration file with your credentials and default settings, such as your preferred region:

aws configure
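The aws configure command prompts for an access key ID, secret access key, default region, and output format, then stores them under ~/.aws/. With placeholder values (the keys below are made up for illustration), the resulting files look roughly like this:

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = AKIAEXAMPLEEXAMPLE
aws_secret_access_key = exampleSecretKeyExampleSecretKey

# ~/.aws/config
[default]
region = us-east-1
output = json
```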

Step 1: Create S3 Bucket for a static web site

Our new static web site will be stored in AWS S3 so we’ll need to create a new bucket for the website’s files.
Creating a S3 bucket with python is simple:


# Load the AWS boto3 module, plus json for the bucket policy below
import boto3
import json

# Specify the region to create the AWS resources in.
# Note: for regions other than us-east-1, create_bucket also needs a
# CreateBucketConfiguration={'LocationConstraint': <region>} argument.
DEFAULT_REGION = "us-east-1"

# Create S3 resource
s3 = boto3.resource('s3')

# Set a bucket name which will be our domain name.
bucket_name = "demo123456.com"

# Create a new S3 bucket, using a demo bucket name
s3.create_bucket(Bucket=bucket_name)

# We need to set an S3 policy for our bucket to
# allow anyone read access to our bucket and files.
# If we do not set this policy, people will not be
# able to view our S3 static web site.
bucket_policy = s3.BucketPolicy(bucket_name)
policy_payload = {
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "Allow Public Access to All Objects",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::%s/*" % (domain)
  }
  ]
}

# Add the policy to the bucket
response = bucket_policy.put(Policy=json.dumps(policy_payload))

# Next we'll set a basic configuration for the static
# website.
website_payload = {
    'ErrorDocument': {
        'Key': 'error.html'
    },
    'IndexDocument': {
        'Suffix': 'index.html'
    }
}

# Make our new S3 bucket a static website
bucket_website = s3.BucketWebsite(bucket_name)

# And configure the static website with our desired index.html
# and error.html configuration.
bucket_website.put(WebsiteConfiguration=website_payload)
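At this point the bucket is a live website reachable at its regional S3 website endpoint, even before any DNS exists. Here is a small sketch of a helper that builds that URL, assuming the dash-style endpoint format (bucket.s3-website-region.amazonaws.com) that the Route53 step later in this guide also relies on:

```python
# Build the public S3 website endpoint URL for a bucket.
# Assumes the legacy dash-style endpoint format
# (<bucket>.s3-website-<region>.amazonaws.com), which matches the
# DNS names used in the Route53 step of this guide.
def s3_website_url(bucket_name, region="us-east-1"):
    return "http://%s.s3-website-%s.amazonaws.com" % (bucket_name, region)

print(s3_website_url("demo123456.com"))
# http://demo123456.com.s3-website-us-east-1.amazonaws.com
```

Opening that URL in a browser is a quick way to confirm the bucket policy and website configuration worked before moving on to DNS.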

Step 1.1: Create S3 Bucket for redirecting www.domain.com to root domain.com

I like to redirect “www” to the root domain, such that www.domain.com will redirect to domain.com for the user. For this to work in AWS, we’ll need to create a second bucket for the www hostname, and set the bucket to redirect.


# Load aws boto3 module
import boto3

# Specify the region to create the AWS resources in
DEFAULT_REGION = "us-east-1"

# Create S3 resource
s3 = boto3.resource('s3')

# Create a new S3 bucket, using the www demo bucket name
bucket_name = "demo123456.com"
redirect_bucket_name = "www.demo123456.com"

s3.create_bucket(Bucket=redirect_bucket_name)

# The S3 settings to redirect to the root domain,
# in this case the bucket_name variable from above.
redirect_payload = {
        'RedirectAllRequestsTo': {
            'HostName': '%s' % (bucket_name),
            'Protocol': 'http'
        }
}

# Make our redirect bucket a S3 website
bucket_website_redirect = s3.BucketWebsite(redirect_bucket_name)

# Set the new bucket to redirect to our root domain
# with the redirect payload above.
bucket_website_redirect.put(WebsiteConfiguration=redirect_payload)

Step 2: Create a Route53 Hosted zone for the domain

Now that we have created an S3 bucket and web site for our new domain, we need to add the new domain to Amazon AWS DNS service, called Route53.
In Route53, we will create a new hosted zone for our domain name and add DNS records for the root domain.com and the redirect www.domain.com to point to our corresponding S3 buckets.


# Load the AWS boto3 module
import boto3
# We'll want to generate a unique UUID later
import uuid

# Specify the region to create the AWS resources in
DEFAULT_REGION = "us-east-1"

# A mapping of hosted zone IDs to AWS regions.
# Apparently this data is not accessible via API
# http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
# https://forums.aws.amazon.com/thread.jspa?threadID=116724
S3_HOSTED_ZONE_IDS = {
    'us-east-1': 'Z3AQBSTGFYJSTF',
    'us-west-1': 'Z2F56UZL2M1ACD',
    'us-west-2': 'Z3BJ6K6RIION7M',
    'ap-south-1': 'Z11RGJOFQNVJUP',
    'ap-northeast-1': 'Z2M4EHUR26P7ZW',
    'ap-northeast-2': 'Z3W03O7B5YMIYP',
    'ap-southeast-1': 'Z3O0J2DXBE1FTB',
    'ap-southeast-2': 'Z1WCIGYICN2BYD',
    'eu-central-1': 'Z21DNDUVLTQW6Q',
    'eu-west-1': 'Z1BKCTXD74EZPE',
    'sa-east-1': 'Z7KQH4QJS55SO',
    'us-gov-west-1': 'Z31GFT0UA1I2HV',
}

# Load Route53 module
route53 = boto3.client('route53')

# Define the domain name we want to add in Route53
domain = "demo123456.com"
www_redirect = "www.demo123456.com"

# We need to create a unique string to identify the request.
# A UUID4 string is an easy to use unique identifier.
caller_reference_uuid = "%s" % (uuid.uuid4())

# Create the new hosted zone in Route53
response = route53.create_hosted_zone(
    Name=domain,
    CallerReference=caller_reference_uuid,
    HostedZoneConfig={'Comment': domain, 'PrivateZone': False})

# Get the newly created hosted zone id, used for
# adding our DNS records pointing to our S3 buckets
hosted_zone_id = response['HostedZone']['Id']

# Add DNS records for domain.com and www.domain.com
website_dns_name = "s3-website-%s.amazonaws.com" % (DEFAULT_REGION)
redirect_dns_name = "s3-website-%s.amazonaws.com" % (DEFAULT_REGION)

# Here is the payload we will send to Route53
# We are creating two DNS records:
# one for domain.com to point to our S3 bucket,
# and a second for www.domain.com to point to our
# S3 redirect bucket, to redirect to domain.com
change_batch_payload = {
    'Changes': [
        {
            'Action': 'UPSERT',
            'ResourceRecordSet': {
                'Name': domain,
                'Type': 'A',
                'AliasTarget': {
                    'HostedZoneId': S3_HOSTED_ZONE_IDS[DEFAULT_REGION],
                    'DNSName': website_dns_name,
                    'EvaluateTargetHealth': False
                }
            }
        },
        {
            'Action': 'UPSERT',
            'ResourceRecordSet': {
                'Name': www_redirect,
                'Type': 'A',
                'AliasTarget': {
                    'HostedZoneId': S3_HOSTED_ZONE_IDS[DEFAULT_REGION],
                    'DNSName': redirect_dns_name,
                    'EvaluateTargetHealth': False
                }
            }
        }
    ]
}

# Create the DNS records payload in Route53
response = route53.change_resource_record_sets(
    HostedZoneId=hosted_zone_id, ChangeBatch=change_batch_payload)


Add Content to S3 Bucket

After creating our S3 buckets and adding our domain name to Route53, we have a few remaining tasks to make our new static web site live.
First, we need to add an HTML page to our S3 bucket for our visitors to see.
We can do this with Python, or we can use a static site generator such as Jekyll to build our static site.
Here’s an example using Python:


# Load the AWS boto3 module
import boto3

# Set our domain name and bucket name
# I use the domain as the bucket name,
# such that they are the same
domain = "demo123456.com"

s3 = boto3.resource('s3')

# Very simple, basic HTML code for our landing page
payload = ("<html><head><title>%s</title></head>"
           "<body><h1>%s</h1></body></html>"
           % (domain, domain))

# Create the index.html page in S3
s3.Object(domain, 'index.html').put(Body=payload, ContentType='text/html')

Change nameservers at your domain registrar to point to AWS Route53

Since we are using Route53 for our DNS service, we need to update the nameservers at our domain name registrar to point to Route53.
This step will vary from registrar to registrar and will most likely be a manual process because most registrars do not offer API access.
On the AWS Route53 side, you’ll need to get your name servers for your new hosted zone, then you’ll go to your registrar, such as Namecheap, GoDaddy, etc. and update your name server records there. Your registrar will have documentation on how to perform the necessary updates in their dashboards.
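To look up the zone's name servers programmatically rather than in the console, note that the create_hosted_zone response captured in Step 2 includes them under the DelegationSet key. A sketch using a sample response shape; the ns-… hostnames below are placeholders, and your zone will be assigned its own set:

```python
# Route53's create_hosted_zone (and get_hosted_zone) responses include
# the zone's assigned name servers under DelegationSet -> NameServers.
# The hostnames here are placeholders for illustration only.
sample_response = {
    'HostedZone': {'Id': '/hostedzone/ZEXAMPLE'},
    'DelegationSet': {
        'NameServers': [
            'ns-1.awsdns-01.com',
            'ns-2.awsdns-02.net',
        ],
    },
}

def name_servers(response):
    """Pull the list of name servers out of a hosted zone response."""
    return response['DelegationSet']['NameServers']

for ns in name_servers(sample_response):
    print(ns)
```

These are the values to enter at your registrar when updating the domain's name server records.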

Build Rackspace Cloud Servers with Ansible in a Virtualenv

Ansible, Rackspace Cloud Servers, and Virtualenv

Ansible is a powerful tool for managing and configuring servers. It can also be used to create new cloud servers and then configure them automatically for us. For system administrators, engineers, and developers, ansible can save a lot of time and hassle by automating routine tasks.

Here’s a tutorial guide for using ansible to build and configure new Rackspace Cloud servers.

I will walk through setting up a new python virtual environment, installing pyrax and ansible inside of the virtualenv, and then writing an ansible playbook and role that builds a Rackspace Cloud server, sets up DNS entries for the new server, and installs our favorite packages on it.

You can find the complete source for this ansible Rackspace Cloud Servers example on my github: ansible-rackspace-servers-example

Set up your virtualenv with pyrax and ansible

First, let’s create a new python virtualenv for use with Rackspace Cloud. The new virtualenv contains everything needed to build new Rackspace Cloud servers using the cloud servers API.

Create a new python virtualenv for use with Rackspace Cloud

virtualenv rackspace_cloud

The output from the virtualenv command should look something like this:

nick@mbp: ~/virtualenvs$ virtualenv rackspace_cloud
New python executable in rackspace_cloud/bin/python
Installing setuptools, pip...done.

Once the virtualenv has been created, let’s activate it so we can work with it:

source rackspace_cloud/bin/activate

This is what it will look like once activated on my machine. Notice how the prompt has changed to include the name of the virtualenv, “rackspace_cloud”, that I have just activated.

nick@mbp: ~/virtualenvs$ source rackspace_cloud/bin/activate
(rackspace_cloud)nick@mbp: ~/virtualenvs$

Install the pyrax python package for working with Rackspace Cloud

Next we’ll need to install the pyrax package which is needed to work with the Rackspace Cloud API in python. Installing pyrax will also install all of the other prerequisite packages needed by pyrax, such as python-novaclient, python-keystoneclient, rackspace-novaclient, and rackspace-auth-openstack.

pip install pyrax

Here’s what the output looks like on my workstation: https://gist.github.com/nicholaskuechler/603ee13bd74944866650

Install Ansible in to the virtualenv

Now that we have pyrax installed, let’s go ahead and install ansible in to our new virtualenv.

pip install ansible

Here’s the output from my workstation: https://gist.github.com/nicholaskuechler/8f4812226c6a908dbc9a

Configuration files for pyrax and ansible

Now that we have our python virtualenv set up and the pyrax and ansible packages installed, we need to create a couple configuration files for use with pyrax and ansible.

Configuration file for pyrax

We will place our Rackspace Cloud username and API key in the pyrax configuration, which pyrax will use when making API calls to Rackspace Cloud products.

Create a new file: ~/.rackspace_cloud_credentials

Here’s what my .rackspace_cloud_credentials pyrax config looks like:

[rackspace_cloud]
username = mycloudusername
api_key = 0123456789abcde

Configuration file for ansible

An ansible configuration file is not necessary as the defaults will work without any issues. There’s one setting in particular, though, which I find helpful when creating a lot of new Rackspace cloud servers via ansible. This setting is the SSH setting StrictHostKeyChecking=no, which means I don’t have to manually confirm a host key when trying to SSH to it. This is a real advantage during automation and playbook runs, since we may not want to have a human confirm the SSH connection.

The default ansible configuration file lives at: ~/.ansible.cfg

Here’s my .ansible.cfg with the option to disable SSH’s strict host key checking:

[ssh_connection]
ssh_args = -o StrictHostKeyChecking=no

Ansible playbook to create Rackspace Cloud Servers

Now that we’ve installed and configured virtualenv, pyrax, and ansible, we’re ready to write an ansible playbook for building new Rackspace cloud servers.

First let’s make a new directory for our ansible playbook, where the inventory file, playbooks, and ansible roles will live:

mkdir ansible-rackspace-servers

Use a dynamic ansible inventory in virtualenv with Rackspace Cloud

Many people use static ansible inventory files when working with their servers. Since we’re using a cloud provider, our hosts will be added to the inventory dynamically at playbook run time (via the add_host module), so the inventory file itself only needs a localhost entry. But there’s a catch due to installing ansible in a virtualenv: we need to specify the path to the virtualenv python, because ansible will default to the system python rather than the python installed in the virtualenv.

In the ansible-rackspace-servers directory we’ve just created, make a new file for the ansible virtualenv inventory file. I named my virtualenv inventory file: virtualenv-inventory.yml

Here are the contents of my ansible virtualenv inventory file: virtualenv-inventory.yml

[localhost]
localhost ansible_connection=local ansible_python_interpreter=/Users/nick/virtualenvs/rackspace_cloud/bin/python

The important configuration piece here is ansible_python_interpreter, where we specify the full path to our “rackspace_cloud” virtualenv’s python binary. Without this setting, ansible will try to use the default system python (probably /usr/bin/python), which will not find the packages like pyrax and ansible that we installed in the virtualenv.

A basic playbook to create a new Rackspace Cloud server

Reading an ansible playbook is fairly straightforward, but writing them can be a little trickier. I’ve created a simple playbook you can use to build new cloud servers in the Rackspace Cloud.

Let’s create a new file for our playbook. I’ve called mine build-cloud-server.yml. Here’s my playbook to create a new Rackspace cloud instance: build-cloud-server.yml

---
- name: Create a Rackspace Cloud Server
  hosts: localhost
  user: root
  connection: local
  gather_facts: False

  vars:
   # this is the name we will see in the Rackspace Cloud control panel, and 
   # this will also be the hostname of our new server
   - name: admin.enhancedtest.net
   # the flavor specifies the server size of our instance
   - flavor: performance1-1
   # the image specifies the linux distro we will use for our server
   # note: this image UUID is for Ubuntu 14.10 PVHVM
   - image: 0766e5df-d60a-4100-ae8c-07f27ec0148f
   # the region is the Rackspace Cloud region we want to build our server in
   - region: DFW
   # credentials specifies the location of our pyrax configuration file we created earlier
   - credentials: /Users/nick/.rackspace_cloud_credentials
   # I like to drop in my SSH pub key automatically when I create the server
   # so that I can ssh in without a password
   # Note: Instead of dropping in a file, you can use a stored Rackspace key
   # when you build the server by editing key_name below to your key's name.
   - files:
        /root/.ssh/authorized_keys: /Users/nick/.ssh/id_rsa.pub

  tasks:
    - name: Rackspace cloud server build request
      local_action:
        module: rax
        credentials: "{{ credentials }}"
        name: "{{ name }}"
        flavor: "{{ flavor }}"
        image: "{{ image }}"
        region: "{{ region }}"
        # key_name - specifies the Rackspace cloud key to add to the server upon creation
        #key_name: my_rackspace_key
        files: "{{ files }}"
        # wait - specifies that we should wait until the server is fully created before proceeding
        wait: yes
        # state - present means we want our server to exist
        state: present
        # specify that we want both a public network (public IPv4) and
        # a private network (10. aka service net)
        networks:
          - private
          - public
        # group - specifies metadata to add to the new server with a server group
        #group: deploy
      # register is an ansible term to save the output in to a variable named rax
      register: rax

The ansible task to create a new Rackspace cloud server is pretty straightforward. It uses the rax module included by default with ansible. There are many different options we can work with when creating a new server, but I’ve included the important ones in this playbook.

Run the ansible playbook to create new Rackspace Cloud Server

Let’s run our new playbook and see what happens! To run the playbook, we call ansible-playbook with our inventory file virtualenv-inventory.yml and our playbook build-cloud-server.yml. Note: make sure you’re still in the virtualenv you’ve created for this project!

ansible-playbook -i virtualenv-inventory.yml build-cloud-server.yml -vvvv

Notes:

  • -i virtualenv-inventory.yml specifies the inventory file to use. The “-i” denotes the inventory option.
  •  -vvvv specifies that we want really verbose ansible-playbook output, so that we can debug and troubleshoot if something goes wrong.

Here’s the output when running the build-cloud-server.yml playbook. Note I have obfuscated some of the output as it contains personal information: https://gist.github.com/nicholaskuechler/85aa9e21998bea87a3c5

Success! We’ve just created a new Rackspace Cloud server using an ansible playbook inside of a virtualenv.

Kicking it up a notch: Create a server, add DNS records, and Install some packages

When creating a new server, there are other tasks I have to complete, such as adding new forward and reverse DNS entries and installing a set of base packages I like to use on my new servers.

We can do all of these tasks in ansible!

Use Ansible to create Rackspace Cloud DNS entries for a cloud server

Now that we have a play for building a new server, let’s add a couple of tasks that automatically create DNS A and PTR records for the newly created server. Another benefit: if a DNS record already exists, the ansible rax_dns_record module can update it to the new IP address of a newly created cloud server.

Ansible task to add dynamic instance to dynamic group inventory

First we need to have ansible add the new cloud server to an ansible group that we will be using for future tasks in our playbook:

    - name: Add new cloud server to host group
      local_action:
        module: add_host
        hostname: "{{ item.name }}"
        ansible_ssh_host: "{{ item.rax_accessipv4 }}"
        ansible_ssh_user: root
        groupname: deploy
      with_items: rax.instances

This task looks at the “rax” variable we registered and adds the new cloud server to ansible’s internal list of groups and hosts through ansible’s add_host module. The group we’re adding this newly created cloud server to is named “deploy”. We specify the hostname, IP address, and SSH username of the new server when adding it to the ansible group.

Ansible task to create a Rackspace Cloud DNS A record

Next, we’ll create a task to add a forward DNS A record in Rackspace Cloud DNS for our new server:

    - name: DNS - Create A record
      local_action:
        module: rax_dns_record
        credentials: "{{ credentials }}"
        domain: "{{ domain }}"
        name: "{{ name }}"
        data: "{{ item.rax_accessipv4 }}"
        type: A
      with_items: rax.instances
      register: a_record

Again we’re using the registered “rax” variable, which contains our newly created instance. We’ve also added a new variable, “{{ domain }}”, that we’ll specify in the “vars” section of our playbook. The domain variable specifies the domain name we want to exist in Rackspace Cloud DNS, creating it if necessary.

Add domain to the list of vars at the top of the playbook:

   - domain: enhancedtest.net

But wait! What if the domain does not yet exist in Rackspace Cloud DNS? The DNS A record addition will fail!

Ansible task to create a new domain in Rackspace Cloud DNS

Let’s make a new task to create the domain if it does not yet exist in Rackspace Cloud DNS. This task will come before the tasks that add any DNS records.

    - name: DNS - Domain create request
      local_action:
        module: rax_dns
        credentials: "{{ credentials }}"
        name: "{{ domain }}"
        email: "{{ dns_email }}"
      register: rax_dns

Rackspace Cloud DNS requires an email address to use as the admin contact for a domain’s DNS. Let’s add a new variable “dns_email” in our “vars” list at the top of our playbook. Note: the email address doesn’t actually have to exist or work, we just need to specify one to create the new domain.

Add dns_email to the list of vars at the top of the playbook:

   - dns_email: admin@enhancedtest.net

Ansible task to create a new Rackspace Cloud PTR record

Now that we have created the new DNS domain and A record for our server, let’s create a DNS PTR record aka reverse DNS for the new server.

    - name: DNS - Create PTR record
      local_action:
        module: rax_dns_record
        credentials: "{{ credentials }}"
        server: "{{ item.id }}"
        name: "{{ name }}"
        region: "{{ region }}"
        data: "{{ item.rax_accessipv4 }}"
        type: PTR
      with_items: rax.instances
      register: ptr_record

Ansible playbook to create new Rackspace Cloud Server and add DNS entries

Here’s what the playbook looks like now with our Rackspace Cloud DNS additions to create a new domain, add an A record for the new server, and add a PTR record.

---
- name: Create a Rackspace Cloud Server
  hosts: localhost
  user: root
  connection: local
  gather_facts: False

  vars:
   # Rackspace Cloud DNS settings:
   # domain - the domain we will be using for the new server
   - domain: enhancedtest.net
   # dns_email - admin email address for the new domain name
   - dns_email: admin@enhancedtest.net
   # this is the name we will see in the Rackspace Cloud control panel, and
   # this will also be the hostname of our new server
   - name: admin.enhancedtest.net
   # the flavor specifies the server size of our instance
   - flavor: performance1-1
   # the image specifies the linux distro we will use for our server
   # note: this image UUID is for Ubuntu 14.10 PVHVM
   - image: 0766e5df-d60a-4100-ae8c-07f27ec0148f
   # the region is the Rackspace Cloud region we want to build our server in
   - region: DFW
   # credentials specifies the location of our pyrax configuration file we created earlier
   - credentials: /Users/nick/.rackspace_cloud_credentials
   # I like to drop in my SSH pub key automatically when I create the server
   # so that I can ssh in without a password
   # Note: Instead of dropping in a file, you can use a stored Rackspace key
   # when you build the server by editing key_name below to your key's name.
   - files:
        /root/.ssh/authorized_keys: /Users/nick/.ssh/id_rsa.pub

  tasks:
    - name: Rackspace cloud server build request
      local_action:
        module: rax
        credentials: "{{ credentials }}"
        name: "{{ name }}"
        flavor: "{{ flavor }}"
        image: "{{ image }}"
        region: "{{ region }}"
        # key_name - specifies the Rackspace cloud key to add to the server upon creation
        #key_name: my_rackspace_key
        files: "{{ files }}"
        # wait - specifies that we should wait until the server is fully created before proceeding
        wait: yes
        # state - present means we want our server to exist
        state: present
        # specify that we want both a public network (public IPv4) and
        # a private network (10. aka service net)
        networks:
          - private
          - public
        # group - specifies metadata to add to the new server with a server group
        #group: deploy
      # register is an ansible term to save the output in to a variable named rax
      register: rax

    - name: Add new cloud server to host group
      local_action:
        module: add_host
        hostname: "{{ item.name }}"
        ansible_ssh_host: "{{ item.rax_accessipv4 }}"
        ansible_ssh_user: root
        groupname: deploy
      with_items: rax.instances

    - name: DNS - Domain create request
      local_action:
        module: rax_dns
        credentials: "{{ credentials }}"
        name: "{{ domain }}"
        email: "{{ dns_email }}"
      register: rax_dns

    - name: DNS - Create A record
      local_action:
        module: rax_dns_record
        credentials: "{{ credentials }}"
        domain: "{{ domain }}"
        name: "{{ name }}"
        data: "{{ item.rax_accessipv4 }}"
        type: A
      with_items: rax.instances
      register: a_record

    - name: DNS - Create PTR record
      local_action:
        module: rax_dns_record
        credentials: "{{ credentials }}"
        server: "{{ item.id }}"
        name: "{{ name }}"
        region: "{{ region }}"
        data: "{{ item.rax_accessipv4 }}"
        type: PTR
      with_items: rax.instances
      register: ptr_record

Let’s give our updated playbook a test run and see what happens!

Run the playbook the same way as before. Note: I’m not including -vvvv this time as the output can be extremely verbose, but I always run it when testing, debugging and troubleshooting.

ansible-playbook -i virtualenv-inventory.yml build-cloud-server.yml

Output from the playbook: https://gist.github.com/nicholaskuechler/f6a01223252b89dea959

Success! Some interesting things to note:

  • The server already existed with the name we specified in the playbook, so a new cloud server was not created
  • The new DNS domain was added successfully
  • The new A record was added successfully
  • The new PTR record was added successfully

What happens if we run the playbook again? Will it create more DNS records again? Let’s give it a try. Here’s the output from my workstation: https://gist.github.com/nicholaskuechler/d2562ef1c780cdeffe14

Since the domain, the A record, and PTR record already existed in DNS, the tasks were marked as OK and no changes were made. If changes were made, the tasks would be denoted with “changed:” rather than “ok:” in the task output. You can also see the overall playbook tasks ok / changed / failed status in the play recap summary.

Use ansible to install base packages on a Rackspace Cloud server

Now that we have successfully created a new cloud server and performed routine system administrator tasks like setting up DNS entries, let’s go ahead and install our favorite packages on our new server. On every server I use, I want vim, git and screen to be available, so let’s make sure those packages are definitely installed.

There are two different ways we can accomplish the package installations:

1.) Add a new task to install a list of packages
2.) Add a new ansible role that installs base packages that we can reuse in other playbooks

I prefer option #2 which allows me to re-use my code in other playbooks. I install the same set of base packages on each new server I create, so this will definitely come in handy for me in the future.

But let’s cover both options!

Add a new ansible task to install a list of packages

Here’s how we can add a new task to our existing ansible playbook to run apt update and install our favorite packages on the new Rackspace cloud server. The task loops through the list of packages specified by with_items and installs each one with the apt module.

- name: Install Packages
  hosts: deploy
  user: root
  gather_facts: True

  tasks:
  - name: Run apt update
    apt: update_cache=yes

  - name: install packages
    action: apt state=installed pkg={{ item }}
    with_items:
    - vim
    - git
    - screen
    tags:
    - packages

Important note: Since we’re using Ubuntu as our linux distro, we use the apt ansible module to install packages. If we were using CentOS or Fedora, we would use the yum module instead.

Note: The ansible hosts group we’re using is “deploy” which we specified previously when adding the newly created cloud server to an internal, dynamic ansible host group. The important piece here is to make sure both group names match if you decide to use a different name!

Create a new ansible role to install a list of base packages

Using ansible roles to install our favorite packages is a slightly more complicated method than adding the install packages task to the existing playbook, but it is much more reusable and using roles is generally the preferred ansible way.

Make ansible role directories

First, we need to create a directory structure for our ansible roles. All ansible roles live in a subdirectory named “roles” inside the directory your playbook lives in. Let’s call our new role “base”.

In our example, our playbook build-cloud-server.yml lives in the directory ansible-rackspace-servers, so in the ansible-rackspace-servers directory let’s make a new directory named “roles”

mkdir roles

Then make the directory for our “base” role, along with the standard subdirectories that ansible roles use:

cd roles
mkdir -p base/{files,handlers,meta,templates,tasks,vars}

Note: We’re not going to be using all of these standard role directories like meta and vars at this time, but we’ll go ahead and create them now in case we expand our role in the future.
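To double-check the layout the mkdir commands above produced, listing the directories from inside the roles directory should show something like this:

```shell
find base -type d | sort
# base
# base/files
# base/handlers
# base/meta
# base/tasks
# base/templates
# base/vars
```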

Create package installation task in the new base role

With our directory structure set up for our new “base” role, we need to create a task in this role to install our favorite packages.

Create a new file in roles/base/tasks/ directory named main.yml. By default, all ansible roles will have a main.yml in the role’s tasks subdirectory.

Inside of main.yml, let’s write a simple task to install our packages. It’s going to look very similar to the method of adding the package install tasks directly in the playbook, but we won’t need to specify hosts, users, etc.

---
- name: Run apt update
  apt: update_cache=yes

- name: Install apt packages
  apt: pkg={{ item }} state=installed
  with_items:
    - vim
    - git
    - screen
  tags:
    - packages

That’s it! We’ve just created a new role named “base” to install our favorite packages. The best part about this is that we can expand our base role to perform other tasks that need to run on all of our servers, such as adding an admin user or setting up ntp.

Using the newly created base role in our build cloud server playbook

With our base role created, we now need to modify our playbook to use the role to install our base configuration and packages.

In our playbook, we need to add:

- name: Install base packages to new cloud server
  hosts: deploy
  user: root
  gather_facts: True
  roles:
    - base

The “roles” section specifies which roles to run on the hosts specified in the dynamic “deploy” group. For example, if we had an nginx ansible role we wanted to use, we could easily add it to the list of roles to use, like this:

  roles:
    - base
    - nginx

We can also easily use the new “base” role we just created in other ansible playbooks.

Run the playbook with the new base role to install packages

Let’s run our playbook now using the new base role to install packages on our new cloud server. Here’s the ansible-playbook output from my test run: https://gist.github.com/nicholaskuechler/72ae99ef1acc85a712fe

Success! The “changed” status under “Install apt packages” indicates our 3 favorite packages were installed, and the recap tells us there were no failures.

Final version of playbook to create a Rackspace Cloud Server and install packages

Here’s the final version of our playbook:

---
- name: Create a Rackspace Cloud Server
  hosts: localhost
  user: root
  connection: local
  gather_facts: False

  vars:
   # Rackspace Cloud DNS settings:
   # domain - the domain we will be using for the new server
   - domain: enhancedtest.net
   # dns_email - admin email address for the new domain name
   - dns_email: admin@enhancedtest.net
   # this is the name we will see in the Rackspace Cloud control panel, and
   # this will also be the hostname of our new server
   - name: admin.enhancedtest.net
   # the flavor specifies the server size of our instance
   - flavor: performance1-1
   # the image specifies the linux distro we will use for our server
   - image: 0766e5df-d60a-4100-ae8c-07f27ec0148f
   # the region is the Rackspace Cloud region we want to build our server in
   - region: DFW
   # credentials specifies the location of our pyrax configuration file we created earlier
   - credentials: /Users/nick/.rackspace_cloud_credentials
   # I like to drop in my SSH pub key automatically when I create the server
   # so that I can ssh in without a password
   # Note: Instead of dropping in a file, you can use a stored Rackspace key
   # when you build the server by editing key_name below to your key's name.
   - files:
        /root/.ssh/authorized_keys: /Users/nick/.ssh/id_rsa.pub

  tasks:
    - name: Rackspace cloud server build request
      local_action:
        module: rax
        credentials: "{{ credentials }}"
        name: "{{ name }}"
        flavor: "{{ flavor }}"
        image: "{{ image }}"
        region: "{{ region }}"
        # key_name - specifies the Rackspace cloud key to add to the server upon creation
        #key_name: my_rackspace_key
        files: "{{ files }}"
        # wait - specifies that we should wait until the server is fully created before proceeding
        wait: yes
        # state - present means we want our server to exist
        state: present
        # specify that we want both a public network (public IPv4) and
        # a private network (10. aka service net)
        networks:
          - private
          - public
        # group - specifies metadata to add to the new server with a server group
        #group: deploy
      # register is an ansible term to save the output in to a variable named rax
      register: rax

    - name: Add new cloud server to host group
      local_action:
        module: add_host
        hostname: "{{ item.name }}"
        ansible_ssh_host: "{{ item.rax_accessipv4 }}"
        ansible_ssh_user: root
        groupname: deploy
      with_items: rax.instances

    - name: DNS - Domain create request
      local_action:
        module: rax_dns
        credentials: "{{ credentials }}"
        name: "{{ domain }}"
        email: "{{ dns_email }}"
      register: rax_dns

    - name: DNS - Create A record
      local_action:
        module: rax_dns_record
        credentials: "{{ credentials }}"
        domain: "{{ domain }}"
        name: "{{ name }}"
        data: "{{ item.rax_accessipv4 }}"
        type: A
      with_items: rax.instances
      register: a_record

    - name: DNS - Create PTR record
      local_action:
        module: rax_dns_record
        credentials: "{{ credentials }}"
        server: "{{ item.id }}"
        name: "{{ name }}"
        region: "{{ region }}"
        data: "{{ item.rax_accessipv4 }}"
        type: PTR
      with_items: rax.instances
      register: ptr_record

- name: Install base packages to new cloud server
  hosts: deploy
  user: root
  gather_facts: True
  roles:
    - base

Source for ansible-rackspace-servers-example on GitHub

You can find the complete source for this ansible Rackspace cloud servers example on my github: ansible-rackspace-servers-example

Trading Tools – Stock Price Decline Checker

Stock Market Trader Desk

As a trader and investor, I’m constantly trying to make profitable stock trades and invest more wisely. I’m primarily looking for ways to increase returns, decrease risk, and to make smarter decisions.

One of the ways I make better trades and invest more wisely is through the use of various trading and investing tools. There are many great tools and web sites out there to help trading and investing research, such as R, Yahoo! Finance, Google Finance, and Wolfram Alpha, to name a few. Sometimes I’ll even create my own tools to use.

One of the tools I’ve created is the Stock Price Decline Checker which you can find on my GitHub.

The stock price decline checker tool checks a list of stocks to see if the price has declined a certain percentage from its maximum price over a specified number of days.

The goal is primarily to identify index ETFs that have declined substantially in price from their maximum values over the past few weeks or months, which could potentially signal a buying opportunity.

One way I use the stock price decline checker is to see if index ETFs such as QQQ, SPY, or DIA have declined significantly from their max prices over the past 2 months. To be more specific, I’ll run the stock price decline checker to see if the index ETFs have fallen more than 7.5% over the past 50 days. A fall like this would indicate a substantial drop in the overall market, such as a market correction, and to me it could signal a good buying opportunity.
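The core of that check can be sketched in a few lines of Python. This is a minimal illustration, not the tool itself: the real script pulls quotes from a data source, whereas here the closing prices are just a plain list, and the function name is my own for this example.

```python
# Minimal sketch of the decline check, assuming we already have a list of
# daily closing prices (most recent last). Names here are illustrative.

def declined_from_max(closes, days=50, threshold=7.5):
    """Return (declined, pct) where pct is the percent drop of the latest
    close from the maximum close over the trailing `days` sessions."""
    window = closes[-days:]             # trailing window of closes
    peak = max(window)                  # maximum price in the window
    pct = (peak - window[-1]) / peak * 100.0
    return pct >= threshold, pct

# Example: a steady climb to 100 followed by a slide to 90 is a 10% drop
# from the 50-day peak, which clears the 7.5% threshold.
closes = list(range(51, 101)) + [95, 92, 90]
hit, pct = declined_from_max(closes, days=50, threshold=7.5)
print(hit, round(pct, 1))  # True 10.0
```

The real tool simply runs this comparison for each ticker in a list and reports the ones whose drop exceeds the threshold.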

Pelican, a static site generator written in Python

What is Pelican?

Pelican is a static site generator, written in Python. Pelican is open source and you can find Pelican on GitHub.

Pelican also supports themes and plugins. You can write your own themes and plugins, or you can download many different themes and plugins already made and ready to go.

Pelican is similar to Jekyll in that both are static site generators and easy to use. The big difference is Pelican is written in Python while Jekyll is written in Ruby. If you prefer Python syntax, like I do, Pelican may be perfect for you.

Pelican Features

Pelican has many different features and tools to help you generate your static web site. Here is a list of the most popular Pelican features:

  • Write your content directly with your editor of choice, such as vim or Sublime Text, in reStructuredText, Markdown, or AsciiDoc formats.
  • Includes a simple CLI tool to run a development/testing web server and (re)generate your site.
  • Easy to interface with version control systems and web hooks such as GitHub.
  • Completely static output is easy to host anywhere. I use Rackspace Cloud Files CDN.
  • Built in support for Articles (such as blog posts) and Pages (such as “About”, “Projects” and “Contact” pages).
  • Theming support using Jinja2 templates.
  • Code syntax highlighting.
  • Atom and RSS feeds.
  • PDF generation of the articles/pages (optional).
  • Comments, via an external service such as Disqus.
  • Publication of articles in multiple languages.
  • Import your existing site from WordPress, Dotclear, or RSS feeds.
  • Integration with external tools: Twitter, Google Analytics, etc. (optional).

Read More about Pelican

For more information about Pelican, take a look at the Pelican Blog, the Pelican code on GitHub, and the Pelican Documentation.

Jekyll with Clean URLs Hosted at Rackspace Cloud Files

I’ve been using Jekyll to generate static web sites and then hosting them on the Rackspace Cloud Files CDN which uses Akamai’s content delivery network (CDN).

With Rackspace Cloud Files I have a CDN-enabled container that is configured to serve static web site files. This means I can use Cloud Files with the Akamai CDN to serve all of my static web sites without running or managing any web servers myself; I simply use Cloud Files to store and serve each site. As a bonus, my sites load very fast for visitors and can easily handle a very large number of them. Basically, my static sites can handle web scale traffic.

What are Clean URLs?

I’m an advocate of using clean URLs, or human-readable URLs, in my sites. Clean URLs have many benefits:

  • Search engine optimization
  • Improved usability
  • Improved accessibility
  • Simpler URLs
  • Easier-to-remember URLs
  • No implementation details of your site exposed (no .php / .html / .asp extensions in the URL)

Here’s an example of an un-clean URL:

http://www.domain.com/category/post-name-here.html

And here’s an example of a clean URL:

http://www.domain.com/category/post-name-here

Notice there is no .html and the URL looks better. Cleaner.

What is Jekyll?

Jekyll is a simple, blog-aware, static site generator written in Ruby. It lets you create text-based posts and pages with a default layout applied across all of your posts, so you can change the look and feel of your entire site by modifying the default template and re-generating: the changes are applied to every post. Jekyll outputs static files that you can serve from a CDN or host yourself on your own server.

Jekyll does not create clean URLs by default, however. It will append .html to the file name and reference URLs with the .html suffix. Not ideal for a clean URL.

How To Get Clean URLs with Jekyll

I’m using a jekyll plugin which rewrites the file name and URL reference so that the .html suffix is not included. It turns your blog-post.html file name into “blog-post”, without the .html extension.
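The rename the plugin performs is easy to illustrate. The actual plugin is written in Ruby, like all Jekyll plugins, but its effect amounts to the following, sketched here in Python with an illustrative function name:

```python
import os

def clean_url_name(path):
    """Strip a trailing .html so blog-post.html becomes blog-post.
    index.html files are left alone, since web servers resolve them
    for directory URLs like /articles/."""
    if path.endswith(".html") and os.path.basename(path) != "index.html":
        return path[: -len(".html")]
    return path
```

So articles/blog-post.html is published as articles/blog-post, while articles/index.html and non-HTML assets are left untouched.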

To use Clean URLs with jekyll, you’ll need to set your permalink format in your jekyll _config.yml and use a jekyll plugin to generate your web site files without the .html extension.

Here is the _config.yml permalink structure I use for my site:

permalink: /:categories/:title

This will create a friendly URL in the form of: http://www.domain.com/articles/my-awesome-article

If you don’t want to display the category in the URL, you can change the permalink to:

permalink: /:title

And this will create a URL in the format of: http://www.domain.com/my-awesome-article

Check out the jekyll plugin I’m using on my github here: jekyll-rackspace-cloudfiles-clean-urls

Rackspace Cloud Files with Jekyll and Clean URLs

I came across another problem: Rackspace Cloud Files does not know what type of file “blog-post” is, since it has no file extension. When you browse to a clean URL on my CDN-hosted site, your browser would try to download the file instead of rendering it as HTML. Cloud Files can’t peer inside the file, see that it’s all HTML code, and apply the correct content type, so I needed to set the content type myself and tell Cloud Files that “blog-post” is type “text/html” so that a web browser can display it properly.

To solve this, I wrote a python helper script that applies the “text/html” content type automatically for my jekyll-generated sites. The script uploads my site to Rackspace Cloud Files and checks each uploaded file to see whether it is an HTML file. When it finds one, it tells Cloud Files the file is type “text/html”, allowing Cloud Files to serve the HTML properly to a browser.
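The detection step of that helper can be sketched as follows. This is an illustrative fragment rather than the script itself: the upload call is left out, and the function name is mine. Extensionless files are sniffed for an HTML prefix; everything else falls back to the standard mimetypes lookup.

```python
import mimetypes

def guess_content_type(path):
    """Guess a Content-Type for a generated site file. Extensionless
    files produced by the clean-URLs plugin are sniffed for an HTML
    prefix; files with extensions use the normal mimetypes mapping."""
    ctype, _ = mimetypes.guess_type(path)
    if ctype:
        return ctype                      # extension told us the type
    with open(path, "rb") as f:
        head = f.read(512).lstrip().lower()
    if head.startswith(b"<!doctype html") or head.startswith(b"<html"):
        return "text/html"                # clean-URL page with no extension
    return "application/octet-stream"     # unknown binary fallback
```

With pyrax, the guessed type can then be passed along at upload time, e.g. container.upload_file(path, content_type=ctype); check the pyrax Cloud Files docs for the exact signature in your version.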

Download my Cloud Files / jekyll helper script from my github: jekyll-rackspace-cloudfiles-clean-urls