Simple Random Strings in CloudFormation with CodePipeline
TL;DR: Sometimes you need a random value in CloudFormation. Parameter overrides can be put into your CodePipeline actions, making them available to the template configuration that drives a CloudFormation update. Some overrides are available by default, such as the ObjectKey of an S3 object or the URL of an artifact. Stick something like this in a pipeline CloudFormation template:
ParameterOverrides: |
  {
    "BuildObjectKey":
      {"Fn::GetArtifactAtt": ["MyCodeBuildArtifact", "ObjectKey"]}
  }
And in your buildspec that creates the artifact, make sure the output files have something in their name that changes every time. The following in a buildspec would give the artifact name a timestamp that increments:
phases:
  pre_build:
    commands:
      - export epochTimeStamp=$(date +%s)
artifacts:
  name: MyArtifact-$epochTimeStamp.zip
If you don't have a build artifact, you can use the pipeline's "source" artifact the same way:
ParameterOverrides: |
  {
    "BuildObjectKey":
      {"Fn::GetArtifactAtt": ["MyPipelineSource", "ObjectKey"]}
  }
Pipeline picture of what I'm using. We're making SAM stuff; check out cli cloudformation package and sam cli package. This pipeline is built with a CloudFormation template, and that's important: I think this could still be done without a CloudFormation template by editing the pipeline manually, but I didn't try that. The pipeline is kicked off by a source change in S3, CodeCommit, or GitHub. The code is built in CodeBuild with a buildspec.yml file, and the build output (MyArtifact) is placed in an S3 bucket. This step gives the pipeline 3 default parameters it can reference: ObjectKey, BucketName, and URL. Those strings can be used as override parameters passed into the CloudFormation template. Depending on your artifact output, the string will be different every time, and if it isn't different already, it's easy to change that. From there the string can update the names of resources, help create new ones, or provide random or easily configurable values to use in the CloudFormation.
CloudFormation can be a pain to get some stuff done in, and there are a lot of workarounds for all sorts of things. Check out cloudformation coding custom resources, dynamic resource generation using cloudformation macros, or aws cloudformation and interactive inputs. There are a lot of ways to create macros or run functions that will give you a random string you can reference as a parameter or whatever else you need in your CloudFormation template. I wanted a simple random value to bring into my template. This happened because of Lambda layers: in CloudFormation a layer needs something to change in order to create a new version, because it really doesn't know the code changed. So the description, license, or some other property has to be updated.
NOTE: Layers have some other immaturities. For me it was how they are referenced. Check out the bottom of this post for something I'll probably never get around to writing about: updating your template config in the buildspec.yml.
(See pipeline-structure action requirements and continuous delivery codepipeline actions and codepipeline parameter overrides for the full documentation.)
I wanted something I could easily plug in without a bunch of other resources like macros and custom made lambdas to call and export values.
Our pipeline is built up by a CloudFormation template. The Configuration block in a CodePipeline action explains what that step is actually doing, and a configuration option I had never looked at before is ParameterOverrides.
With parameter overrides you can add a configuration step that passes values into the template, and these values can be created outside of the template configs. Some parameter overrides are there by default; those 3 are 1) BucketName, 2) ObjectKey, and 3) URL. If you have any dynamic value in your bucket, bucket prefix, or artifact name, and you don't care that an ugly string is going into your pipeline, these parameters can work well. The URL will change every time and can be treated as somewhat random.
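For instance, the URL attribute can be pulled the same way the ObjectKey examples in this post do; a minimal sketch, where the parameter name ArtifactUrl is made up for illustration:

```yaml
ParameterOverrides: |
  {
    "ArtifactUrl":
      {"Fn::GetArtifactAtt": ["MyCodeBuildArtifact", "URL"]}
  }
```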
NOTE: My S3 prefix gets changed when I upload an artifact after CodeBuild (info on that change below), so our object's URL changes and we can use that as a parameter override. When the artifact created from my build step is put out to S3, many of the exports are given a random string. Most of the time our build step has something like this:

export Prefix=$(echo $CODEBUILD_SOURCE_VERSION | cut -d'/' -f2)

and we stick that in as the S3 prefix when we put our cloudformation package into S3 - this makes our artifact URL unique. I would suggest using something like date +%s instead; it will give you a new chronological stamp every time, and might be a better way to name the output package as well.
phases:
  pre_build:
    commands:
      - export epochTimeStamp=$(date +%s)
artifacts:
  name: MyArtifact-$epochTimeStamp.zip
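The cut-based prefix extraction can be sanity-checked locally with a made-up source-version string (the value below is fabricated; the real shape of CODEBUILD_SOURCE_VERSION depends on your source provider):

```shell
# Local sanity check of the cut-based prefix extraction.
# The value below is made up; for an S3 source the variable
# looks roughly like an S3 ARN plus object key.
CODEBUILD_SOURCE_VERSION="arn:aws:s3:::my-bucket/AbC123xYz/source.zip"
# Grab the second '/'-delimited field, just like the build step does
Prefix=$(echo "$CODEBUILD_SOURCE_VERSION" | cut -d'/' -f2)
echo "$Prefix"   # prints AbC123xYz
```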
Now that we have a "changing" or "random" parameter in our CodePipeline, we can reference our object key or its URL (if you created a new bucket every time, that could work too). Here is an example of a stage and an action with an override. The stage is named deployToDevEnv, and this piece only creates a change set; not shown is the part of the pipeline that executes that change. The action is named setUpResources and creates a change set for a CloudFormation stack (StackForMyThing). At the bottom, the change set knows to use dev.json and an artifact called package.yml from the build step, and the parameter override sends one more parameter (not in the dev.json) to the template.
- Name: deployToDevEnv
  Actions:
    - Name: setUpResources
      RunOrder: 1
      ActionTypeId:
        Owner: AWS
        Category: Deploy
        Provider: CloudFormation
        Version: 1
      RoleArn: MyRole
      InputArtifacts:
        - Name: MyCodeBuildArtifact
      Configuration:
        ActionMode: CHANGE_SET_REPLACE
        StackName: StackForMyThing
        ### THE OVERRIDE
        ParameterOverrides: |
          {
            "BuildObjectKey":
              {"Fn::GetArtifactAtt": ["MyCodeBuildArtifact", "ObjectKey"]}
          }
        ChangeSetName: MYCHANGESET
        TemplateConfiguration: MyCodeBuildArtifact::dev.json
        RoleArn: RoleForCloudFormationToDeploy
        TemplatePath: MyCodeBuildArtifact::package.yml
So now our CloudFormation template has access to a parameter passed in during the CodePipeline action, in this case BuildObjectKey. My layer needed a new description each time in order for CloudFormation to notice a change and deploy a new version, so I subbed the parameter into the description.
Parameters:
  ## Comes in from our template configs like dev.json
  Environment:
    Type: String
    AllowedValues: [dev, qa, pilot, prod]
  ## Passed in from CodePipeline
  BuildObjectKey:
    Type: String

Resources:
  MyLambdaLayer:
    Type: AWS::Lambda::LayerVersion
    Properties:
      CompatibleRuntimes:
        - python3.7
      Content:
        S3Bucket: MyArtifactsSource
        S3Key: !Sub ${BuildObjectKey}.zip
      ## Description changes to Mylayer TIMESTAMP.ZIP
      Description: !Sub Mylayer ${BuildObjectKey}
      LayerName: !Sub ${Environment}-My-Layer
Now when my CloudFormation template change is executed, the Lambda layer changes its description from something like Mylayer Z3Xry.zip to something like Mylayer HT7M1.zip. The change in the description creates a new version of the layer.
If you wanted to create something like a new S3 bucket every time you run the change set, you could add an UpdateReplacePolicy in the CloudFormation template; the old bucket should stay around while a new bucket is created based on the override parameter. Remember that bucket names must be unique across all of AWS, lowercase, and meet the other naming constraints.
Resources:
  myProjectBucket:
    Type: 'AWS::S3::Bucket'
    DeletionPolicy: Retain
    UpdateReplacePolicy: Retain
    Properties:
      BucketName: !Sub ${BuildObjectKey}-myprojectbucket
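Since an object key can contain characters that aren't legal in a bucket name, a small sanitizing step in the buildspec can keep the override bucket-safe. This is a hypothetical helper (the key value is made up), not something the pipeline above requires:

```shell
# Hypothetical sanitizer: bucket names allow lowercase letters,
# digits, and hyphens, 3-63 characters. The example key is made up.
BuildObjectKey="MyArtifact-1712345678.Zip"
# Lowercase, strip disallowed characters, and cap the length
BucketSafe=$(echo "$BuildObjectKey" | tr '[:upper:]' '[:lower:]' | tr -cd 'a-z0-9-' | cut -c1-40)
echo "$BucketSafe"   # prints myartifact-1712345678zip
```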
Some other Fun Notes
Put in multiple parameter overrides.
ParameterOverrides: |
  {
    "BucketName":
      {"Fn::GetArtifactAtt": ["mybucket", "BucketName"]},
    "ObjectKey":
      {"Fn::GetArtifactAtt": ["MyobjKey", "ObjectKey"]}
  }
Put in Parameter Overrides from json files
ParameterOverrides: |
  {
    "JsonParam":
      {"Fn::GetParam": ["artifactName", "JSONFileName", "keyName"]}
  }
Use your buildspec to create a new .json file; that value can then be made into an override parameter.

## make a random number in your buildspec (bash)
- export number=$(( RANDOM % 10 + 1 ))
- echo "{\"randomnumber\": \"$number\"}" > random.json

Make sure random.json gets into your artifact package.
Then use the json param to get that random number
ParameterOverrides: |
  {
    "JSONRandNum":
      {"Fn::GetParam": ["artifactName", "random.json", "randomnumber"]}
  }
Use it in your template:

Parameters:
  ## Passed in from CodePipeline
  JSONRandNum:
    Type: String
Combine it with conditionals or other tricks to make a random number of a resource, or other values.
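A hedged sketch of what that could look like; the condition and resource names here are made up:

```yaml
Conditions:
  ## Hypothetical: only create the extra bucket when the random number is "1"
  MakeExtraCopy: !Equals [!Ref JSONRandNum, "1"]

Resources:
  ExtraBucket:
    Type: AWS::S3::Bucket
    Condition: MakeExtraCopy
```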
You could do the same thing as above by grabbing a json template that is in S3.
Create something that makes a new json file and sticks it in S3. Perhaps have an event rule that looks for a change in that file. Have that event rule kick off a pipeline. Have that pipeline use that json file that got uploaded.
Update template configs during the Buildstep
This should probably be its own post. Update your template configs with a script in your buildspec.yml. Quick example.
Write a script that uses the aws cli to get whatever value you need to put into your template configs.
#!/bin/bash
## CLI command to get what you want
ARN=$(aws lambda list-layer-versions --layer-name My-Layer --query 'LayerVersions[0].LayerVersionArn' --output text)

## Put it in your template config - in this case dev.json
sed -i 's/--layerarn--/'${ARN}'/g' dev.json
Then put that into your template configs before they get packaged up for cloudformation to deploy
## DIFFERENT FILE...
## Your dev.json configs will need a placeholder to update
{
  "Parameters": {
    "LayerArn": "--layerarn--"
  }
}
Call that script something like AWSCLIGetFoo.sh, and in your buildspec simply add a step like ./AWSCLIGetFoo.sh
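The sed-into-config swap can be dry-run locally with a made-up ARN and a throwaway dev.json (the placeholder spelling and parameter name here are illustrative):

```shell
# Throwaway config with a placeholder, then swap in a fake ARN.
echo '{"Parameters": {"LayerArn": "--layerarn--"}}' > dev.json
ARN="arn:aws:lambda:us-east-1:123456789012:layer:My-Layer:4"
# Layer ARNs contain no '/', so the / delimiter is safe here
sed -i 's/--layerarn--/'"$ARN"'/g' dev.json
cat dev.json
```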
If it’s in another account — like my stuff was - take a look at these commands to assume a role and get the credentials. https://github.com/Integralist/Shell-Scripts/blob/master/aws-cli-assumerole.sh
If that looks kind of messy to you, realize that you can find pretty much this exact example in the AWS docs. https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html