I have amassed a few nice tidbits of information while playing with Serverless over time. Each is too small for a real post of its own, so I decided to outline a few in a single one. This time, I will go over two subjects: making simple project deployments faster and using the same S3 bucket for all your projects. Here is how I ended up fixing these two Serverless loose ends.
Make simple project deployment faster
Most projects for this blog have the same shape: they require no external dependencies except the AWS SDK and serverless. Both are completely useless at runtime: serverless is a dev-only tool and the AWS SDK is already provided in the Lambda runtime.
My issue is that serverless appears to strip development dependencies out of the existing node_modules folder instead of installing only the production dependencies into a clean directory. The side effect is that deleting all those small files takes quite a bit of time, and in my case all that work just ends up producing an empty directory anyway. For example, deploying a simple project would take almost 6 minutes:
```
┌─[pi@raspberrypi]─[ruby-2.6.3]─(~/repos/experiments/serverless/lambda/extensions/service_discovery) (master)
└─[21:30]$ time npm run deploy
[snip]
real    5m45.842s
user    4m22.120s
sys     0m17.635s
```
I decided to look around and found a forum post trying to fix this issue (see here). The fix described in the post is to tell serverless not to bother with your dependencies at all and to exclude the node_modules folder completely:
```yaml
package:
  individually: true
  excludeDevDependencies: false
  exclude:
    - node_modules/**
```
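If you want to check the effect of this change without paying for a full deployment, one option (a quick sketch, assuming serverless and unzip are available on your machine) is to build the package locally and inspect the resulting archives:

```sh
# Build the deployment packages locally (they end up in .serverless/ by default),
# then list each archive to confirm node_modules is no longer bundled.
npx serverless package
for archive in .serverless/*.zip; do
  unzip -l "$archive"
done
```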
Once this was done, deployment time went down by almost two minutes, to less than 4 minutes:
```
┌─[pi@raspberrypi]─[ruby-2.6.3]─(~/repos/experiments/serverless/lambda/extensions/service_discovery) (master)
└─[21:39]$ time npm run deploy
[snip]
real    3m49.137s
user    1m41.482s
sys     0m8.528s
```
Using a single S3 bucket for multiple projects
By default, serverless will generate one S3 bucket per project. This might not be an issue for a small business or a simple playground, but it can become one later. Amazon has put a soft limit of 100 S3 buckets per account, which can be increased up to a maximum of 1,000 (source: link). This might seem like a lot, but you could also be using buckets for plenty of other things. Of course, the limit can be worked around by creating new AWS accounts, but that becomes a nightmare once you start accessing resources cross-account.
Following a forum post (here), I discovered the deploymentBucket option. It goes under your provider section and must contain the ARN of the bucket to use. For example, I use the following in my playground:
```yaml
provider:
  name: aws
  lambdaHashingVersion: 20201221
  deploymentBucket:
    name: ${cf:ServerlessRoot.ServerlessDeploymentBucketArn}
```
I wanted to keep things simple and avoid hardcoding anything. At the same time, I wanted to play with CloudFormation, so I decided to expose the bucket through a CloudFormation stack. In the previous sample, the name I use for the deployment bucket is ${cf:ServerlessRoot.ServerlessDeploymentBucketArn}. This tells serverless to look up the CloudFormation stack called ServerlessRoot and extract the output named ServerlessDeploymentBucketArn from it.
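For illustration, here is a minimal sketch of what such a stack could look like. Only the output name matches the example above; the bucket's logical name and the lack of any bucket properties are assumptions on my part, not the exact template I deployed:

```yaml
# Hypothetical template for a ServerlessRoot stack: a single shared
# deployment bucket, exposed through a stack output.
AWSTemplateFormatVersion: "2010-09-09"
Description: Shared Serverless deployment bucket

Resources:
  ServerlessDeploymentBucket:
    Type: AWS::S3::Bucket

Outputs:
  ServerlessDeploymentBucketArn:
    Description: ARN of the shared deployment bucket
    Value: !GetAtt ServerlessDeploymentBucket.Arn
```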
The actual stack I deployed can be found here, along with a quick helper script to deploy it. Right now, it contains only the deployment bucket, but maybe I’ll add some IAM stuff in there too.
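As a sketch of such a helper script (assuming the template is saved next to it as template.yml, which is my naming, not necessarily the repository's), deploying the stack with the AWS CLI could look like this:

```sh
#!/usr/bin/env bash
set -euo pipefail

# Create or update the ServerlessRoot stack containing the shared deployment bucket.
aws cloudformation deploy \
  --stack-name ServerlessRoot \
  --template-file template.yml
```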