
Using AWS Route53 as Dynamic DNS

A basic script I run on my Raspberry Pi to keep my AWS Route53 record in sync with my ISP's dynamic IP address. It uses dig and the AWS CLI to resolve the current public IP and update the Route53 entry.
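
If you're not sure of the zone ID, the AWS CLI can list your hosted zones (assuming the CLI is already configured with credentials that can read Route53):

$ aws route53 list-hosted-zones --query 'HostedZones[].[Id,Name]' --output text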

#!/bin/bash
ZONEID=<AWS ZONE ID>
DNSNAME=home.yourdomain.com.au
COMMENT="ip-update"
TTL=300
TYPE=A

BASE=/home/user/ip-update
LOGFILE="$BASE/ip-update.log"

# Current public IP via OpenDNS (fall back to ipify if that fails)
IP=$(dig +short myip.opendns.com @resolver1.opendns.com)
[ -z "$IP" ] && IP=$(curl -s https://api.ipify.org)
# What the record currently resolves to, asked of an authoritative nameserver to avoid cached answers
DNSIP=$(dig +short $DNSNAME @ns-1609.awsdns-09.co.uk)
echo $(date) >> "$LOGFILE"
echo "Resolved IP: $IP" >> "$LOGFILE"
echo "DNS IP: $DNSIP" >> "$LOGFILE"

if [ "$IP" = "$DNSIP" ]; then
  echo "IP was unchanged." >> "$LOGFILE"
  exit 0
else
  TMPFILE=$(mktemp /tmp/route53-temp.XXXXXXXX)
  cat > "$TMPFILE" << EOF
{
  "Comment": "$COMMENT",
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "ResourceRecords": [{ "Value": "$IP" }],
      "Name": "$DNSNAME",
      "Type": "$TYPE",
      "TTL": $TTL
    }
  }]
}
EOF
  echo "Updating IP address..." >> "$LOGFILE"
  /usr/local/bin/aws route53 change-resource-record-sets \
    --hosted-zone-id $ZONEID \
    --change-batch file://"$TMPFILE" >> "$LOGFILE"
  rm -f "$TMPFILE"
fi
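
To keep the record synchronised I run it from cron; something like the entry below (the script filename is a placeholder) checks every ten minutes:

$ crontab -e
*/10 * * * * /home/user/ip-update/ip-update.sh >> /home/user/ip-update/cron.log 2>&1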

AWS volume snapshots across multiple regions

So I needed a script to back up volumes each day across multiple regions. I'm sure there are lots of scripts out there, but why not add another.

By default the script sets a UTC expiry date tag on each snapshot it creates, then deletes snapshots whose expiry has passed:
1st day of month = default expiry 90 days
Sunday = default expiry 21 days
Other days = default expiry 1 day
(When the 1st of the month falls on a Sunday, the monthly rule takes precedence.)

Pre-requisites

Install Python 2.7 and the boto library:

$ sudo apt-get install python python-pip
$ sudo pip install boto
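
A quick way to confirm boto installed correctly:

$ python -c "import boto; print boto.__version__"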

IAM Policy

Set up a user in AWS IAM with the following policy, and keep a copy of the credentials; you'll need them for the script.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:CreateSnapshot",
                "ec2:CreateTags",
                "ec2:DeleteSnapshot",
                "ec2:DescribeAvailabilityZones",
                "ec2:DescribeRegions",
                "ec2:DescribeSnapshots",
                "ec2:DescribeVolumeAttribute",
                "ec2:DescribeVolumeStatus",
                "ec2:DescribeVolumes"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
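
If you'd rather script the IAM setup than click through the console, something like this should work with the AWS CLI (the user and policy names here are placeholders; save the JSON above as snapshot-policy.json first):

$ aws iam create-user --user-name snapshot-backup
$ aws iam put-user-policy --user-name snapshot-backup --policy-name snapshot-policy --policy-document file://snapshot-policy.json
$ aws iam create-access-key --user-name snapshot-backup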

Snapshot script

Don't forget to replace the credentials in the script with your own; you may also want to specify different regions.

#!/usr/bin/env python
from datetime import datetime, timedelta
import boto.ec2, sys

# snapshot.py used to backup volumes across AWS regions
# author: Matt Weston

# Using backup-account in IAM
aws_key = 'AWS_ACCESS_KEY'
aws_secret = 'AWS_SECRET_KEY'
regions = ['us-east-1','us-west-2','ap-southeast-2']

# snapshot date information
current_time = datetime.utcnow()
day_of_month = current_time.day
day_of_week = current_time.weekday()

# determine type and expiry based on current day, week or month
snapshot_type = 'daily'
snapshot_expires = current_time + timedelta(days=1)
if day_of_week == 6:
  snapshot_type = 'weekly'
  snapshot_expires = current_time + timedelta(days=21)
if day_of_month == 1:
  snapshot_type = 'monthly'
  snapshot_expires = current_time + timedelta(days=90)
snapshot_expiry = snapshot_expires.strftime('%Y-%m-%dT%H:%M:%S.000Z')

# Get all Regions
for region in regions:
  print "connecting to", region
  try:
    connection = boto.ec2.connect_to_region(region, aws_access_key_id=aws_key, aws_secret_access_key=aws_secret)
    volumes = connection.get_all_volumes()
    print 'creating snapshots for all attached volumes'
    for volume in volumes:
      attached = volume.attachment_state()
      if attached:
        # create snapshots
        attach_data = volume.attach_data
        snapshot_name = 'snapshot: '+attach_data.instance_id+":"+attach_data.device
        snapshot = volume.create_snapshot(snapshot_name)
        snapshot.add_tag("snapshot-by", 'snapshot.py')
        snapshot.add_tag("snapshot-type", snapshot_type)
        snapshot.add_tag("snapshot-expiry", snapshot_expiry)
        snapshot.add_tag("snapshot-instance-id", attach_data.instance_id)
        snapshot.add_tag("snapshot-device", attach_data.device)
        print 'created', snapshot 
     
    print 'deleting expired snapshots for all attached volumes'
    volumes = connection.get_all_volumes()
    for volume in volumes:
      attached = volume.attachment_state()
      if attached:
        # cleanup snapshots
        existing = volume.snapshots()
        for snapshot in existing:
          if snapshot.status == 'completed' and 'snapshot-expiry' in snapshot.tags:
            # use a local name so we don't clobber the expiry tag applied to new snapshots
            tag_expiry = snapshot.tags['snapshot-expiry']
            expiry_time = datetime.strptime(tag_expiry, '%Y-%m-%dT%H:%M:%S.000Z')
            if expiry_time < current_time:
              print 'deleting expired snapshot', snapshot.id, snapshot.status, snapshot.description
              snapshot.delete()

  except Exception, e:
    print "Unexpected error:", e

Schedule the script using cron

Easy enough to run it as often as needed via cron.

$ chmod +x /path/to/script/snapshot.py
$ crontab -e
# Snapshot attached volumes each day and cleanup expired
30 01 * * * /path/to/script/snapshot.py > /path/to/script/snapshot.log 2>&1
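
To confirm the job is actually producing snapshots, the tags the script sets make them easy to filter on (assumes a configured AWS CLI; adjust the region):

$ aws ec2 describe-snapshots --region ap-southeast-2 --filters Name=tag:snapshot-by,Values=snapshot.py --query 'Snapshots[].[SnapshotId,StartTime,Description]' --output text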