Easily deploy CloudFormation stacks

In my last project, I added a root CloudFormation stack that contains a few things. Following that post, I wanted to start doing things a little more cleanly. The main thing I want to fix is the fact that I use a role with far more privileges than it should have. To fix that, I wanted each of my services deployed through Serverless to also have its own CloudFormation stack. This stack will mostly contain two roles: deploy and runner. While playing around with that, I got annoyed by the AWS CLI’s handling of CloudFormation stacks, so I wrote a small tool to wrap that part of it. Here is how I now easily deploy CloudFormation stacks.

I know I could probably use Ansible instead of writing my own tool. But I strongly feel that understanding the underlying technology is a good thing, so I prefer to dig deeper and learn it!

As always, all the code I wrote is available on my GitHub page.

The goal of the tool

Deploying a CloudFormation stack manually is not complex per se. But the commands involved are quite long, and you need to run the right one at the right moment. For example, a basic flow would be:

  1. Create the CloudFormation stack: aws cloudformation create-stack --stack-name MyStack --template-body file://myCFStack.yml
  2. Wait for the stack to be created: aws cloudformation wait stack-create-complete --stack-name MyStack
  3. Update your stack: aws cloudformation update-stack --stack-name MyStack --template-body file://myCFStack.yml
  4. Wait for the stack to be updated: aws cloudformation wait stack-update-complete --stack-name MyStack

And this is only the basic flow: not particularly safe, and with no advanced features. If you want to create IAM roles in your stack, you’ll need to add --capabilities CAPABILITY_IAM to the create and update commands.

If you want to make things a bit safer, you’ll want to use changesets. These allow you to see and approve the actions before an update is applied. Basically, you need to create a changeset (create-change-set) and wait for it to be ready (wait change-set-create-complete). Once it is created, you’ll be able to list the changes (describe-change-set). You’ll then be able to decide whether to execute the changes (execute-change-set) or scrap them (delete-change-set). Again, each of these commands needs multiple parameters.
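
To make that concrete, the changeset flow looks roughly like this with the raw AWS CLI (the stack and changeset names are placeholders):

  aws cloudformation create-change-set --stack-name MyStack --change-set-name MyChanges --template-body file://myCFStack.yml
  aws cloudformation wait change-set-create-complete --stack-name MyStack --change-set-name MyChanges
  aws cloudformation describe-change-set --stack-name MyStack --change-set-name MyChanges
  aws cloudformation execute-change-set --stack-name MyStack --change-set-name MyChanges

To scrap the changes instead of applying them, the last command becomes delete-change-set.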

The aim of my tool is to abstract all of that away. You don’t need to remember which AWS profile to use, what your stack is named, or whether that stack is even deployed. The tool checks if the stack is deployed, creates a changeset, shows you the changes, and lets you decide whether to execute them.

Using the tool

The tool is meant to be as easy to use as possible: define your stacks and how you want them deployed, then tell the tool which one to deploy.

The tool is written in Python, so it can be installed through pip: pip install cloudformation-helper.

Here is a sample of the configuration file:
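
  # Illustrative sketch: the exact field names may differ,
  # see the project's README for the real schema
  MyStackAlias:
    stack: MyStackName
    file: myCFStack.yml
    capabilities: True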

With this file, you’ll be able to deploy your stack using cfhelper deploy MyStackAlias. By default, the tool looks in the current directory for a configuration file named stacks.cfh. This can be overridden using a flag on the command line (cfhelper --config ../path/to/stacks.cfh deploy MyStackAlias) or with the CFHELPER_CONFIG environment variable.

A lot of things are still missing for the tool to be production-ready, but the basics are there!

Using a reverse SSH tunnel to access a RaspberryPi

I’ve got a few Alexa devices around the house and a few RaspberryPis lying around, so I decided it could be interesting to try to make them work together. The first step I could think of was to figure out a way to connect to my RaspberryPi from AWS. I wanted to stay within the AWS Free Tier, so a lot of options were not available to me. I do have access to a shared server where I can run SSHd and PHP, so I decided to leverage those. Here is how I ended up using a reverse SSH tunnel to access a RaspberryPi.

Why use a reverse SSH Tunnel

When playing with this experiment, I wanted something that was somewhat secure. I did not want to open ports on my home router and use a service like No-IP. A reverse SSH tunnel allows me to avoid opening any ports on my home router at all.

Basically, a reverse SSH tunnel opens a port on the remote host and forwards any traffic on that port to a port on the machine that established the SSH connection. My hosted server’s network configuration blocks all non-HTTP(S) connections to it, so this setup allows me to have a small PHP script, protected with some kind of access key, that forwards connections directly to the tunnel.
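
On the RaspberryPi’s side, the tunnel itself boils down to a single ssh invocation along these lines (the hostname and ports are placeholders, not the values from my actual setup):

  ssh -N -R 8022:localhost:22 someuser@shared-server.example.com

This asks the remote server to listen on port 8022 and forward anything that arrives there back to port 22 (SSH) on the machine that opened the tunnel.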

The Setup

As always, I posted my code on one of my GitHub repositories. For this project, it lives here and is split into two major scripts: ssh_forever.sh and ssh_no_more.sh. Both of these are controlled by environment variables, which are explained in sample.env.

Using this script is quite simple:

  1. Copy the sample.env file somewhere
  2. Edit the file to put in values that make sense for your setup
  3. Add the script execution to your crontab

I set up my crontab to run the script every minute. To edit the crontab for your current user, simply run crontab -e. Your favorite editor will show up, and you’ll need to add a line at the end of the file. In my case, I used the following:
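
  */1 * * * * . /home/someuser/sshforever.env && /home/someuser/bin/ssh_forever.sh > /home/someuser/tunnel.log 2>&1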

The first part, */1 * * * *, controls when to run the script; in this case, every minute. The rest of the line is the command to execute. In my case, I want to source the content of a file into my environment, execute the script, and keep a log of what happened.

The command is therefore split into three parts:

  1. . /home/someuser/sshforever.env: sources the environment file
  2. /home/someuser/bin/ssh_forever.sh: executes the script
  3. > /home/someuser/tunnel.log 2>&1: redirects both STDOUT and STDERR to a log file

Because the script runs unattended from cron, you cannot type in a password when connecting to the SSH server. You should set up your connection using a public/private key pair instead.
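
If you don’t already have a key pair, one can be generated and installed on the server with the usual OpenSSH commands (the key path and hostname are placeholders):

  ssh-keygen -t ed25519 -f ~/.ssh/tunnel_key
  ssh-copy-id -i ~/.ssh/tunnel_key.pub someuser@shared-server.example.com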

Filesystem auto-complete while using docker in WSL

Besides the command line itself, mounting directories is quite painful when using Docker on Windows. Therefore, I tend to use WSL to run my Docker commands. Still, auto-complete of local files and directories (when using Docker’s -v option) is quite helpful. Here is a little bit of information on getting filesystem auto-complete while using Docker in WSL.

For my local setup, I use VirtualBox and Docker Toolbox to run Docker in a Linux VM; you can find how I linked WSL with that VM in my other post.

My final goal was to be able to run a simple:

  docker run --rm -v /mnt/d/data:/data -i -t calestar/ebook-utils COMMAND

where autocomplete for local directories (/mnt/d/data) works correctly in WSL.

To get to that point, I needed one simple thing: have part of the Windows filesystem mounted at the same path on both (a) the Linux VM and (b) WSL. Luckily for me, WSL already links to the Windows filesystem: the D: drive is automatically mounted as /mnt/d.

The next step was to give the Linux VM access to the files on the D: drive, mounted under /mnt/d. This part is specific to VirtualBox; I’m quite certain the same thing can be done with Hyper-V (and Docker for Windows), but I haven’t tried it.

VirtualBox configuration

The first thing to do is open VirtualBox and find the VM used by Docker. To get the name of the VM to modify, open a Windows command prompt and run docker-machine env. In the output, you will find a value associated with DOCKER_MACHINE_NAME. You will now shut down the VM in order to modify it. To do so, simply right-click on the VM, select Close->ACPI Shutdown, and confirm that you want to shut down the VM. With the VM shut down, right-click on it and select Settings.

In the settings page, go into the Shared Folders category. This page is where the magic happens. You will need to add a new shared folder. The folder path is the path on your Windows filesystem (D: for me). The name of the share is in fact the path on the Linux VM where Docker runs (/mnt/d for me).
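
If you prefer the command line, the same share can be created with VBoxManage while the VM is shut down (this assumes your VM is named default, as Docker Toolbox usually names it):

  VBoxManage sharedfolder add default --name /mnt/d --hostpath D:\ --automount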

With this configuration done, it is time to start the VM once again. To do so, right-click on your VM in the main window and select Start->Headless Start.

It might take a little bit of time before the VM is up-and-running, but it should all be working now.

Docker command line in WSL

Using containers and Docker is the norm for a lot of developers nowadays, but using them on Windows can be painful. Here is a little documentation on how I’ve been using the Docker command line in WSL (Windows Subsystem for Linux).

The main problem I had with using Docker on Windows is quite simple: Docker is basically a series of command line tools (docker, docker-compose, docker-machine, …) and the Windows command prompt is not very pleasant to use.

For a while, I was using Docker from a git-bash shell. It is better than nothing, but I kept needing more and more Linux tools and got quite annoyed. To make things easier, I decided to install Ubuntu directly on my Windows 10 machine through WSL. To get it installed, simply follow this tutorial.

To run Docker on Windows, you have two choices:

  1. Docker for Windows, which uses Hyper-V to run a Linux VM for Docker;
  2. Docker Toolbox, which uses VirtualBox to run the Linux VM for Docker.

Both of these will get the job done. I had to go with Docker Toolbox for one simple reason: Hyper-V requires an Enterprise, Professional, or Education edition of Windows 10, and I don’t have one.

Once you have Docker (for Windows or Toolbox) running, you should be able to use all the command line tools through the Windows command prompt or git-bash. To get everything running on Ubuntu, you will need to do one last thing: configure your Linux environment.

The first thing you will need to get your environment up-and-running is the value of your DOCKER_HOST environment variable. To get this value, simply run echo %DOCKER_HOST% in a Windows command prompt (or echo $DOCKER_HOST in a git-bash console). In my case, the value is tcp://192.168.99.100:2376. This gives you the IP/port where your Docker daemon is listening.

Armed with this value, you will need to edit your bash initialization script in your Ubuntu installation. In order to do so, simply add the following at the end of your ~/.bash_profile file:
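
  # Example values: use the DOCKER_HOST value you found above, and adjust the
  # certificate path for your own Windows username and docker-machine name
  export DOCKER_HOST=tcp://192.168.99.100:2376
  export DOCKER_CERT_PATH=/mnt/c/Users/someuser/.docker/machine/machines/default
  export DOCKER_TLS_VERIFY=1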

The value associated with DOCKER_HOST is the one we got a bit earlier, and DOCKER_CERT_PATH points to a location inside your Windows user’s home directory.

You should now be able to install the docker package and use any of the command line tools it contains!
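
On Ubuntu, that can be as simple as installing the docker.io package, which ships the command line client (the daemon itself keeps running inside the VM pointed to by DOCKER_HOST):

  sudo apt update && sudo apt install docker.io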
