Dissecting Serverless Stacks (IV)



After figuring out how to implement an sls command-line option that switches between the usual behaviour and conditionally omitting IAM from our deployments, we will dig deeper and build a small hack for handing over all artefacts of our project to somebody who does not know SLS at all.

If you have looked around your development directory a bit, you probably already know that there is a command sls package which creates the CloudFormation template the whole stack is based on. This is basically the step right before deployment, and we find the resulting JSON file as .serverless/cloudformation-template-update-stack.json in the project directory. Next to it, we also find the ZIP file of our Lambda.
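
For orientation, the packaging step looks roughly like this on the command line; the exact file names depend on your service, and other Serverless versions may produce slightly different output:

sls package
ls .serverless/
# cloudformation-template-create-stack.json
# cloudformation-template-update-stack.json   <- the template we will post-process
# my-service.zip                              <- the packaged Lambda code
# serverless-state.json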

So we are almost at a point where we could hand over an iam.yaml, the cloudformation-template-update-stack.json and the ZIP file to someone who deploys it manually.

But wait, one thing is not quite right: by default, Serverless creates an S3 bucket for our project, uploads the ZIP there and makes the stack reference only that location for deployment. This won’t work for a manual deployment, where we need Parameters to hand over the bucket and the S3 key.

For this, I built a small Ruby script which will

  • read the resulting JSON
  • add Parameters of type String for ServerlessDeploymentBucket and ServerlessDeploymentArtifact
  • replace the Code block of the Lambda CloudFormation Resource
  • write the result back as YAML

As I implemented this as a task within Rake, my Rakefile contains this:

desc "Export separately"
task :export do
  sh <<~EOS, { verbose: false }
    sls package --deployment no_includes
    cp .serverless/*.zip pkg/
    cp cloudformation-resources/* pkg/
  EOS

  require 'json'
  require 'yaml'

  # find_rakefile_location returns [rakefile, directory of the Rakefile]
  project_base = Rake.application.find_rakefile_location[1]

  filename = File.join(project_base, '.serverless/cloudformation-template-update-stack.json')
  json = JSON.parse(File.read(filename))

  # Replace S3 resource by Parameter
  json['Resources'].delete('ServerlessDeploymentBucket')
  json['Parameters'] = {}
  json['Parameters']['ServerlessDeploymentBucket'] = {
    'Type' => 'String'
  }

  # Default the artifact parameter to the name of the packaged ZIP
  zip_file = Dir.glob(File.join(project_base, '.serverless/*.zip')).first
  json['Parameters']['ServerlessDeploymentArtifact'] = {
    'Type' => 'String',
    'Default' => File.basename(zip_file)
  }

  # Point every Lambda function at the parameterized bucket and key
  functions = json['Resources'].select { |k,v| v['Type'] == 'AWS::Lambda::Function' }
  functions.each do |function|
    source = File.join(project_base, function[1]['Properties']['Handler'].gsub(/\.[a-z_]+$/, '.rb'))

    function[1]['Properties']['Code'] = {
      'S3Bucket' => {
        'Ref' => 'ServerlessDeploymentBucket'
      },
      'S3Key' => {
        'Ref' => 'ServerlessDeploymentArtifact'
      }
    }
  end

  File.write(File.join(project_base, 'pkg/stack.yaml'), json.to_yaml)
end
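
After running the task, the pkg/ directory holds everything needed for a hand-over. Note that the task only copies into pkg/, so the directory has to exist beforehand; the file names below are placeholders from an example project:

rake export
ls pkg/
# iam.yaml          <- externalized IAM sub-stack from the earlier posts
# my-service.zip    <- Lambda artefact
# stack.yaml        <- rewritten template with the new Parameters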

At the end of this journey, we have three ways of deploying our SLS project:

  • sls deploy as the standard way, which will deploy all parts automatically
  • sls deploy --deployment no_includes for the “IAM first, Serverless second” approach
  • rake export for handing over artefacts to a third party and manual deployment (see the sketch below).
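
For the third option, a manual deployment could look roughly like this. The bucket and stack names are placeholders, and depending on the contents of your iam.yaml you might need CAPABILITY_NAMED_IAM instead:

# 1. Upload the Lambda artefact to a bucket of your choice
aws s3 mb s3://example-deployment-bucket
aws s3 cp pkg/my-service.zip s3://example-deployment-bucket/

# 2. A privileged user deploys the IAM sub-stack first
aws cloudformation deploy \
  --template-file pkg/iam.yaml \
  --stack-name my-service-iam \
  --capabilities CAPABILITY_IAM

# 3. The application stack can then be deployed without IAM privileges,
#    pointing it at the uploaded artefact via the new Parameters
aws cloudformation deploy \
  --template-file pkg/stack.yaml \
  --stack-name my-service \
  --parameter-overrides \
      ServerlessDeploymentBucket=example-deployment-bucket \
      ServerlessDeploymentArtifact=my-service.zip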

Summary

Of course, these approaches might not work well for you. I created them especially for smaller, one-Lambda projects and not for complicated microservice architectures. But if you work with customers and want to recycle your small Lambda-based helpers across different types of organizations, you might find these patterns handy.

And even if you don’t want to use these deployment styles, I hope I could show you some lesser-known ways of structuring your serverless.yml (externalized sub-stacks, references) or how to use custom CLI options (with mappings and double references).
