Deploying an AWS lambda for Alexa using Serverless

I’ve played with deploying AWS lambdas manually, and I’ve played with SAM. Now it is time to play with Serverless. The nice thing about Serverless is that it is cloud-agnostic: it can deploy to many more platforms than just AWS. Of course, all the Alexa skill linking will remain AWS-specific. So here is how I ended up deploying an AWS lambda for Alexa using Serverless. As always, the code of the associated project can be found on my GitHub page.

Getting ready to use Serverless

Again, I had issues installing Serverless, pretty much the same as with SAM. Basically, it is not pre-built for a Raspberry Pi, so the official installer fails.

The fix, again, is to bypass the installer and install using npm: npm install serverless --global.

Running Serverless on a Raspberry Pi

Besides the installation, I could not find any issues running the framework. After generating a test event using SAM, I could even invoke the function locally using:
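Something along these lines, where the function name and the event file path are placeholders for your own:

```shell
# Invoke the function locally, feeding it the SAM-generated test event.
# "hello" and event.json are illustrative names, not from the project.
serverless invoke local --function hello --path event.json
```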

Preparing the project

Bootstrapping the project was quick and simple:

  1. run serverless (no arguments), which prompts a few questions and generates a basic project
  2. copy all files from my original project to the newly created directory
  3. delete the useless deploy scripts from before

Writing Serverless template

Writing the deployment template for Serverless was way easier than with SAM. That is a bit surprising when you consider that SAM is backed by AWS and only supports AWS. The two issues I had with SAM were fixed trivially.

The first one, selecting a retention policy for the Log Group, is as simple as adding a parameter:
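A sketch of the relevant serverless.yml section; the service and function names are illustrative, the parameter that matters is logRetentionInDays:

```yaml
# serverless.yml (sketch; names are illustrative)
service: alexa-skill

provider:
  name: aws
  runtime: nodejs12.x
  logRetentionInDays: 7   # retention policy applied to the function's Log Group

functions:
  alexa:
    handler: index.handler
```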

The second one, linking to the Alexa skill, is done again using a simple parameter:
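Again a sketch, using the framework's alexaSkill event; the skill ID shown is a placeholder:

```yaml
functions:
  alexa:
    handler: index.handler
    events:
      - alexaSkill: amzn1.ask.skill.xxxx-xxxx   # replace with your skill ID
```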

Deploying everything

Deploying the skill is done through the serverless command: serverless deploy --region us-east-1. This command creates a CloudFormation stack for your function.

Removing your stack is again done directly through the serverless command: serverless remove --region us-east-1.

Comparison with SAM

Overall, Serverless seems easier to use and has more features than SAM. Of course, my only testing of these frameworks is the few posts here, so things may well change over time.

Deploying an AWS lambda for Alexa using SAM

After manually deploying my lambdas during my previous posts (for example, this post), I decided it was time to look at the automation available. The first tool I wanted to look at was the AWS Serverless Application Model (SAM). The setup is quite straightforward, but a few points still warrant documentation since I work from a Raspberry Pi. Here is how I ended up deploying an AWS lambda for Alexa using SAM. As always, the full example can be found on my GitHub.

Getting ready to use SAM

The official documentation from AWS uses brew to install the SAM CLI. Sadly, brew only works on Intel processors, so this route was a no-go on my Raspberry Pi. After a bit of searching, I found that you can also install the Python module manually through pip: pip install aws-sam-cli.

Installing it manually made everything work for me. Maybe I already had the other dependencies installed; I am not sure.

Limitations due to Raspberry Pi

The greatest limitation I could find was that I could not run the lambda locally using SAM. When trying to invoke the lambda locally through sam local invoke, I simply get:

Error: Could not find amazon/aws-sam-cli-emulation-image-nodejs12.x:rapid-1.16.0 image locally and failed to pull it from docker.

This is a side effect of the docker image used being available only for linux/amd64. Since I don’t plan on writing tests for this lambda, this limitation is of no consequence to me.

Preparing the project

I decided to go with sam init in order to bootstrap the project. The command line helps you generate a basic template; I went with something that seemed really simple. I then replaced the hello-world directory with the source code from my previous project. No changes had to be made to the source code to get it running.

Writing the SAM template

Handling the lambda in the template was easy enough. Two parts of the deployment ended up harder: log management and linking to a specific Alexa skill.

In order to fix the log management issue, I followed this post. Basically, I had to freeze the name of the function and manually generate the AWS::Logs::LogGroup. The following snippet shows the relevant parts of the template for this fix:
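The pattern looks roughly like this; the resource and function names are illustrative, the important parts are the frozen FunctionName and the matching LogGroupName:

```yaml
Resources:
  AlexaSkillFunction:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: alexa-skill-function   # frozen so the Log Group name can match
      Handler: app.lambdaHandler
      Runtime: nodejs12.x

  AlexaSkillLogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: /aws/lambda/alexa-skill-function
      RetentionInDays: 7
```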

The second issue, linking to a specific Alexa skill, was fixed roughly the same way, following this post. Again, the fix is basically manually handling a resource associated with the function, this time the AWS::Lambda::Permission. The following snippet shows the relevant parts of the template for this fix:
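Roughly, with a placeholder skill ID in EventSourceToken:

```yaml
  AlexaSkillPermission:
    Type: AWS::Lambda::Permission
    Properties:
      Action: lambda:InvokeFunction
      FunctionName: !GetAtt AlexaSkillFunction.Arn
      Principal: alexa-appkit.amazon.com
      EventSourceToken: amzn1.ask.skill.xxxx-xxxx   # your skill ID
```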

Deploying everything

In theory, deploying everything should be as trivial as running sam deploy. Sadly, IAM got in the way. After trying quite a bit, I decided to be a bad boy and give way too much access to my user. I’ll play with the permissions in a clean way another day.

Once you have deployed your newly created lambda, you’ll be able to find all associated resources on the CloudFormation page. This page is also where you can delete your stack if you need to. In order for the SAM stack to be deleted cleanly, I had to manually delete the contents of a few S3 buckets.

Making AWS Lambda deploy faster using layers

I’ve been playing with AWS Lambdas for a little while, at the same time as learning NodeJS. Let’s say that I’ve uploaded quite a few new versions of my test Lambdas. Even with a small code base and minimal dependencies, I always feel like deploying is slow. Here is how I ended up making AWS Lambda deploy faster using layers. I decided to update the scripts I was using for my previous post, therefore everything can be found in this repository. More precisely, update_dependencies.sh shows an example of how to deploy a new version of a layer and update your lambda.

The problem

I timed a few parts of my deploy script and found that the zipping process was taking quite a bit of time.
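For reference, a sketch of how such a timing can be taken; the file names are illustrative, not my actual script:

```shell
# Time just the packaging step of the deploy script.
time zip -r -q function.zip index.js node_modules/
```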

That is a whole 15 seconds to deploy a change to the lambda. You might think this involves millions of files, images, and such, but it is merely 1900 files: the Alexa SDK. Okay, some of that might be my fault; I’m using a Raspberry Pi to develop my things.

Fixing the issue

I already knew about Lambda layers, since I had used HashiCorp’s Vault AWS Lambda extension in a previous job. I was not sure whether that concept could be used to store libraries. Turns out it can!

Using layers might even allow you to directly edit your code in the web interface of the lambda if it is small enough.

The first thing that I did was to manually create the layer once through the web page. This gives you an ARN and makes things easier later. The zip file that I uploaded simply contained a random file of mine.

Uploading the layer version

Once you have the ARN, you will be able to use the AWS CLI to upload new versions of the layer. In order to do so, you need the aws lambda publish-layer-version command. In my case, I invoke it like so:
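Along these lines; the layer name, description, and zip path are illustrative:

```shell
aws lambda publish-layer-version \
  --layer-name alexa-dependencies \
  --description "Dependencies for my Alexa lambda" \
  --zip-file fileb://layer.zip \
  --compatible-runtimes nodejs12.x
```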

The one ambiguous part is the creation of the zip file. In my case, the only things included in the zip file are libraries; the documentation explains the directory structure that needs to be used. Basically, for a Node.js package that is not tied to a specific runtime version, all your dependencies need to be in a nodejs/node_modules directory inside the zip file. There are ways to make a layer specific to a given runtime; the documentation explains those as well.

To keep creating the dependency tree simple, I create a temporary directory with a sub-directory named nodejs and run npm install in there to fetch the dependencies. Once this is done, I zip everything starting at the root of the temporary directory.
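The steps above can be sketched as follows; the paths are illustrative:

```shell
# Build the layer content: dependencies must end up under
# nodejs/node_modules inside the zip.
LAYER_DIR=$(mktemp -d)
mkdir "${LAYER_DIR}/nodejs"
cp package.json "${LAYER_DIR}/nodejs/"
(cd "${LAYER_DIR}/nodejs" && npm install --production)
(cd "${LAYER_DIR}" && zip -r -q layer.zip nodejs)
```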

For the next step, I will need the version number of the new layer. Luckily, the command used to publish the layer returns that information in JSON format. I decided to use jq to extract the version: it is as simple as piping the result of the publish command into jq .Version.
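Putting it together, something like the following; the -r flag strips the JSON quotes, and the names remain illustrative:

```shell
LAYER_VERSION=$(aws lambda publish-layer-version \
  --layer-name alexa-dependencies \
  --zip-file fileb://layer.zip \
  --compatible-runtimes nodejs12.x \
  | jq -r .Version)
```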

Updating the function to use the new version

Do note that updating the function at this point is probably a bad idea for production code. Any instance of the lambda will use the new dependencies, and might simply break.

Since I’m only playing with all this and do not mind if my lambdas break, I decided to update the functions directly in the script that uploads a new layer version. It can be done using update-function-configuration:
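Roughly like this; the function name and the layer ARN (including the account ID) are illustrative, and the version number is the one captured from the publish step:

```shell
aws lambda update-function-configuration \
  --function-name alexa-skill-function \
  --layers "arn:aws:lambda:us-east-1:123456789012:layer:alexa-dependencies:${LAYER_VERSION}"
```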

Storing data in Alexa-triggered Lambda

My latest project is to be able to somewhat control my RaspberryPi with my Alexa devices. While playing around, I ended up wanting to store data associated with the Amazon account. I decided to explore storing data with two retention policies: data kept for the session and data kept forever. Here is how I ended up storing data in Alexa-triggered Lambda for those two scenarios. As always, the source code used for this experiment can be found in my GitHub repository.

Identifying the user

I saw two identifiers that were interesting: the userId and the personId. According to the documentation, userId is the identifier associated with the account of the user. On the other hand, the personId is used to represent the human that executed the command. Therefore, multiple personIds can be associated with the same userId.
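A trimmed request envelope makes the distinction concrete; the IDs below are placeholders, not real Amazon identifiers:

```javascript
// Trimmed sample request envelope showing where both identifiers live.
const requestEnvelope = {
  context: {
    System: {
      user: { userId: 'amzn1.ask.account.EXAMPLE' },
      person: { personId: 'amzn1.ask.person.EXAMPLE' },
    },
  },
};

const { user, person } = requestEnvelope.context.System;
// person is only present when Alexa recognized a specific voice,
// so fall back to the account identifier when it is absent.
const speakerId = person ? person.personId : user.userId;
console.log(speakerId); // amzn1.ask.person.EXAMPLE
```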

Alexa Skill setup

In order to be able to test both storage policies, I needed a few different intents. I decided to keep it simple and went with the following:

  • Add: adds one to a session counter and returns the new value
  • Current: returns the value of the counter
  • Forget: resets the counter
  • Persist: stores the value of the counter in a persistent database
  • Restore: restores the value of the counter from the persistent database

Lambda environment setup

As with all my other projects, one of my main goals here was to learn a little something. Therefore I decided to try a few new technologies this time around. My previous test with Alexa was done using Python and handled HTTP calls directly.

In order to shake things up, I decided to go with Node.js and the Alexa SDK. This meant a new language and using the official SDK instead of raw HTTP queries.

For the persistent storage, I wanted something that was easy to use, simple, and, most importantly, free. I decided to go with DynamoDB.

Handling session storage

The session storage is available through the JSON payload received and returned by your lambda. Using the Alexa SDK makes managing these attributes easy through the attribute manager.

Reading a value from the session and writing one back are both done through the attribute manager, using its getSessionAttributes and setSessionAttributes methods:
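A self-contained sketch of both operations; the manager is mocked here so the snippet runs on its own (in a real handler it comes from handlerInput.attributesManager), and the counter attribute name is illustrative:

```javascript
// Minimal mock of the SDK's attributes manager.
const attributesManager = {
  session: {},
  getSessionAttributes() { return this.session; },
  setSessionAttributes(attributes) { this.session = attributes; },
};

// Read: a missing value means a fresh session, so default to 0.
const attributes = attributesManager.getSessionAttributes();
const counter = attributes.counter || 0;

// Write: update the attributes and hand them back to the manager.
attributes.counter = counter + 1;
attributesManager.setSessionAttributes(attributes);

console.log(attributesManager.getSessionAttributes().counter); // 1
```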

Getting a new Alexa session

While testing this skill, I ended up having to reset my session a few times. The simplest way to do so is to say: Alexa, exit. This will give you a new sessionId and reset all stored session attributes.

Handling persistent storage

Reading and writing to DynamoDB is almost as easy as writing to the session storage.

In order to write to the database, you need to build the parameters and pass these to the put function of an instance of AWS.DynamoDB.DocumentClient. The following is an example of that flow:
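A sketch of that flow with aws-sdk v2; the table name and attribute names are assumptions for illustration, not necessarily those of the original project:

```javascript
const AWS = require('aws-sdk');
const documentClient = new AWS.DynamoDB.DocumentClient();

async function persistCounter(userId, counter) {
  const params = {
    TableName: 'AlexaCounter',   // hypothetical table name
    Item: { userId, counter },   // userId as the partition key
  };
  await documentClient.put(params).promise();
}
```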

You can then read the value following the same pattern: build the params and pass those to the get function of the instance.
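The read path, under the same assumed names, builds a Key instead of an Item:

```javascript
const AWS = require('aws-sdk');
const documentClient = new AWS.DynamoDB.DocumentClient();

async function restoreCounter(userId) {
  const params = { TableName: 'AlexaCounter', Key: { userId } };
  const result = await documentClient.get(params).promise();
  return result.Item ? result.Item.counter : 0; // default when never persisted
}
```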

Test in-development Alexa Skill with an Echo device

My latest project is to be able to somewhat control my RaspberryPi with my Alexa devices. For simplicity, I did all my testing using the Test section of the Alexa Developer portal. This section allows you to write whatever you want to feed into the Alexa algorithms, bypassing the speech-to-text step. Turns out that the first time I tested with my Echo, nothing worked. Even through the Amazon Alexa app on my smartphone, nothing worked. Here is how I managed to test an in-development Alexa Skill with an Echo device.

Diagnosing the issue

The first thing I assumed was that I had to enable the skill somehow. The first place I looked was the configuration in the Alexa Developer portal. After looking around for a little while, I could not see anything that would enable the skill for myself.

Next on the list was to see whether the skill was enabled on my device. You can see in-development skills in your account by checking the installed skills in your Alexa smartphone app. It turns out that in-development skills are automatically enabled if the email used for your Alexa login and your Alexa Developer portal account are the same. So this was not the issue.

While Googling around, I found a bit more information to point me in the right direction on this page:

Make sure that the locale of your Alexa app matches at least one of the locales available for your skill. For example, if your skill has the en-US locale, set your Alexa app to English (United States).

There it was, that was my issue. My Alexa devices are all set as English/Canada and my skill only had an English/United-States setup.

Fixing the issue

I had two choices to fix my problem: update my devices to English/United-States or update my skill to support English/Canada. I decided to do the second one.

Skill Language Settings

Adding the new language through the Alexa Developer portal is quick and easy:

  1. Go to the language settings section of your skill
  2. On the next page, you will be able to add languages to your skill

Once the language is added, you will need to add most of the configuration again. I could not find any way to copy an existing language into another. Also, I ended up with some HTTP 500 issues along the way, a simple refresh of the page fixed these.

Once the new language is added, your Echo devices and Smartphone App should work correctly.

AWS Lambda triggered by catch-all Alexa Skill

For my most recent project, I wanted to have a way to control my RaspberryPi from my Amazon Echo. I already figured out a way to connect to my RaspberryPi from AWS in a secure way in my last post. Now it was time to play with the other end: writing an Alexa Skill. As a first step towards that direction, I wanted a catch-all Alexa skill that calls an AWS Lambda. This post outlines how I got an AWS Lambda triggered by catch-all Alexa Skill.

AWS Setup

The first thing needed for this whole project is an AWS account. In order to create one, follow the instructions on the AWS Portal.

At the time of this writing, AWS Lambda has a free tier allowing 1,000,000 free requests per month and up to 3.2 million seconds of compute time per month. There is no reason why this should not be enough for this project. But do watch out: you can easily incur costs if you add other services.

Once you have created your account, you will need to generate an access key for your user. It is a bad idea to create an access key for your root user. You should look into how to secure your AWS account, but for the sake of this quick project, following this guide will give you what you need.

Alexa Developer Console Setup

In the Developer Console, you will create a new Skill. That new skill will use a custom model and backend resources that you will provision yourself. No need to select any template, we will be doing all the work ourselves.

The Alexa Skill

I won’t go over everything that needs to be done to configure the Skill. The most important parts I could find to achieve what I wanted were:

  1. Create a new Intent; in this Intent, add one slot of type AMAZON.SearchQuery
  2. In the Tools > Account Linking section, you will need to disable account linking (first option)

Every time you make changes, remember to hit the Build button; this validates your setup. You can also go into the Certification section, which allows you to run validations on your configuration and gives pointers on how to fix issues.

The AWS Lambda

The next step is to create your AWS Lambda. In order to do so, simply go to the AWS Lambda home page and hit Create Function. We will be writing our function from scratch, using a Python 3.7 runtime.

The code we will be using for this lambda resides in this repository. It does not do much: it outputs any slots it could find to the logs of your lambda. But it can at least show that the linking worked correctly.

Deploying the code can be done directly through the Lambda UI or using the publish.sh script in the above repository. Once deployed, you can use the Test button directly in the Lambda UI in the AWS portal to trigger a run of the code. You will need to enter the content of the lambda_test.json file when asked for a test configuration.

Linking the Skill to the Lambda

In order to link everything, you will need two pieces of information:

  1. The ARN of the Lambda you created
    You can get this information by going to the AWS Portal and selecting your Lambda, the ARN will be at the top right
  2. Your Skill ID
    This information you can get from the Amazon Developer portal, by going into your skill and into the endpoint section

The linking needs to be done in both directions:

  1. In the Amazon Developer portal, in the endpoint section of your skill, you will need to enter the Lambda ARN into the default region field
  2. In the AWS portal, after selecting your Lambda, you will need to add a new trigger of type Alexa Skills Kit and enter the Skill ID of your skill

Testing Everything

There are multiple ways to test this setup:

  1. Testing using an Echo device
  2. Use the Amazon Alexa app on your smartphone
  3. Use the test UI in the Amazon Developer portal

I strongly suggest using the Amazon Developer portal since this removes the ambiguity of speech-to-text. In order to access this UI, simply go onto the Amazon Developer portal, select your skill and go into the Test section on top. There you will be able to enter text directly into the simulator.

Using a reverse SSH tunnel to access a RaspberryPi

I’ve got a few Alexa devices around the house, and a few RaspberryPi lying around so I decided it could be interesting to try to make them work together. The first step I could think of was to figure out a way to be able to connect to my RaspberryPi from AWS. I wanted to stay in the Free Tier from AWS, so a lot of options were not available to me. I do have access to a shared server where I can run SSHd and PHP, so I decided to leverage those. Here is how I ended up using a reverse SSH tunnel to access a RaspberryPi.

Why use a reverse SSH Tunnel

When playing with this experiment, I wanted something that was somewhat secure. I did not want to open ports on my personal router and use a service like no-ip. A reverse SSH tunnel lets me avoid opening any ports on my home router.

Basically, a Reverse SSH Tunnel opens a port on the remote host and forwards any traffic on that port to a port on the machine that established the SSH connection. My hosted server’s network configuration prevents all non-HTTP(s) connections to it. This setup allows me to have a small PHP script with some kind of access-key that forwards connections directly to the tunnel.
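The core of the tunnel is a single SSH invocation; the host, user, and port below are illustrative:

```shell
# Open port 8022 on the remote host and forward any connection to it
# back to the local SSH daemon on the machine running this command.
# -N: no remote command, just forwarding.
ssh -N -R 8022:localhost:22 someuser@remote.example.com
```

Anything that connects to port 8022 on the remote host then reaches the RaspberryPi's SSH daemon, without the Pi accepting any inbound connection itself.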

The Setup

As always, I posted my code in one of my GitHub repositories. In the case of this project, it lives here and is split into two major scripts: ssh_forever.sh and ssh_no_more.sh. Both are controlled by environment variables, which are explained in sample.env.

Using this script is quite simple:

  1. Copy the sample.env file somewhere
  2. Edit the file to put in values that make sense for your setup
  3. Add the script execution to your crontab

I set up my crontab to run the script every minute. In order to edit the crontab for your current user, simply run crontab -e. Once this is done, your favorite editor will show up. You’ll need to add a line at the end of the file, in my case, I used the following:
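Assembled from the three parts explained next, the line looks roughly like this:

```shell
*/1 * * * * . /home/someuser/sshforever.env; /home/someuser/bin/ssh_forever.sh > /home/someuser/tunnel.log 2>&1
```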

The first part, */1 * * * *, controls when to run the script; in this case, every minute. The rest of the line is the command to execute. In my case, I want to source the content of a file into my environment, execute the script, and keep a log of what happened.

The command is therefore split into three parts:

  1. . /home/someuser/sshforever.env: sources the environment file
  2. /home/someuser/bin/ssh_forever.sh: executes the script
  3. > /home/someuser/tunnel.log 2>&1: redirects both STDOUT and STDERR to a log file

Doing things this way prevents you from using a password to connect to the SSH server. You should set up your connection using a public/private key pair.
