After manually deploying my lambdas in my previous posts (for example, this post), I decided it was time to look at the automation available. The first tool I wanted to try was the AWS Serverless Application Model (SAM). The setup is quite straightforward, but a few points still warrant documentation since I run everything from a Raspberry Pi. Here is how I ended up deploying an AWS lambda for Alexa using SAM. As always, the full example can be found on my GitHub.
Getting ready to use SAM
The official documentation from AWS uses brew to install the SAM CLI. Sadly, on Linux, brew only supports Intel processors, so this route was a no-go on my Raspberry Pi. After a bit of searching, I found that the Python package can also be installed directly through pip: pip install aws-sam-cli.
Installing it that way made everything work for me. Maybe I already had the other dependencies installed; I am not sure.
Limitations due to Raspberry Pi
The biggest limitation I found was that I could not run the lambda locally using SAM. When trying to invoke the lambda locally through sam local invoke, I simply get:
Error: Could not find amazon/aws-sam-cli-emulation-image-nodejs12.x:rapid-1.16.0 image locally and failed to pull it from docker.
This is a side effect of the Docker image used being available only for linux/amd64, while the Raspberry Pi is an ARM machine. Since I don’t plan on writing tests for this lambda, this limitation is of no consequence to me.
Preparing the project
I decided to go with sam init in order to bootstrap the project. The command line helps you generate a basic template; I went with something that seemed really simple. I then replaced the hello-world directory with the source code from my previous project. No changes had to be made to the source code to get it running.
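For reference, the non-interactive equivalent of the choices I made would look something like this; the project name is a placeholder of mine, and the flags are from the SAM CLI of that era:

```shell
# Bootstrap a nodejs12.x project from the hello-world starter template
sam init --runtime nodejs12.x --app-template hello-world --name alexa-lambda
```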
Writing the SAM template
Handling the lambda itself in the template was easy enough. Two parts of the deployment turned out to be harder: log management and linking the function to a specific Alexa skill.
In order to fix the log management issue, I followed this post. Basically, I had to freeze the name of the function and manually declare the AWS::Logs::LogGroup resource. The following snippet shows the relevant parts of the template for this fix:
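(This is my reconstruction of the relevant parts; the logical ID StorageFunction matches the rest of the template, while the frozen name storage-function, the handler, and the retention period are illustrative assumptions.)

```yaml
Resources:
  StorageFunction:
    Type: AWS::Serverless::Function
    Properties:
      # Freezing the name makes the log group name below predictable
      FunctionName: storage-function
      Handler: app.lambdaHandler
      Runtime: nodejs12.x

  # Declared explicitly so CloudFormation owns the log group (and deletes
  # it with the stack) instead of Lambda creating it implicitly
  StorageFunctionLogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: !Sub /aws/lambda/${StorageFunction}
      RetentionInDays: 7
```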
The second issue, linking to a specific Alexa skill, was fixed in roughly the same way, following this post. Again, the fix is basically to manually declare a resource associated with the function, this time an AWS::Lambda::Permission. The following snippet shows the relevant parts of the template for this fix:
```yaml
FunctionName: !GetAtt StorageFunction.Arn
```
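Filling in the rest around that FunctionName line, the permission resource looked roughly as follows; the logical ID is mine, and the EventSourceToken value is a placeholder to replace with your own skill ID:

```yaml
Resources:
  # Lets the Alexa service invoke the function, restricted to a single skill
  AlexaSkillPermission:
    Type: AWS::Lambda::Permission
    Properties:
      FunctionName: !GetAtt StorageFunction.Arn
      Action: lambda:InvokeFunction
      Principal: alexa-appkit.amazon.com
      # The skill ID; only requests from this skill may invoke the function
      EventSourceToken: amzn1.ask.skill.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
```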
In theory, deploying everything should be as trivial as running sam deploy. Sadly, IAM got in the way. After trying quite a bit, I decided to be a bad boy and give way too much access to my user. I’ll play with the permissions in a clean way another day.
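Concretely, the first deployment can go through the guided mode, which prompts for the stack details and saves them to samconfig.toml for subsequent runs:

```shell
sam build
# Prompts for stack name, region and IAM capabilities on the first run,
# then records the answers in samconfig.toml
sam deploy --guided
```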
Once you have deployed your newly created lambda, you’ll be able to find all associated resources on the CloudFormation page. This page is also where you can delete your stack if you need to. In order for the SAM stack to be deleted cleanly, I had to manually empty a few S3 buckets first.
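The cleanup I describe can be done from the command line as well; the bucket and stack names below are placeholders:

```shell
# Empty the deployment bucket first, otherwise the stack deletion fails
aws s3 rm s3://your-deployment-bucket --recursive
# Then delete the CloudFormation stack itself
aws cloudformation delete-stack --stack-name your-stack-name
```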