Unicorns and AWS CodePipeline
TLDR (This blog is around 580 words long): How I saved a heap of time and effort by creating an automated deployment using AWS CodePipeline.
Recently I’ve been preparing the Unicorn game from my previous blog to present at the AWS Community Day on the 6th of September in Wellington. One thing that became pretty apparent was the need to automate deployment from my laptop up to AWS.
Below is a diagram of the architecture I’ve gone with. The focus is AWS EKS (Elastic Kubernetes Service), which hosts the game’s backend in a container running inside a Kubernetes pod. The server side accepts requests from client mobile browsers via an ALB (Application Load Balancer), using WebSockets to hold connections open in both directions.
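The diagram doesn’t show the load balancer wiring in detail, but one common way to get an ALB in front of EKS pods is the AWS Load Balancer Controller. As a rough sketch (this assumes the controller is installed in the cluster, and the resource names and port are placeholders rather than the real values from my setup), the Ingress looks something like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: unicorn-game                 # placeholder name
  namespace: games
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: unicorn-game-service   # placeholder Service name
                port:
                  number: 80

ALBs support WebSockets out of the box, so no special configuration is needed for the two-way connections.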
So that’s all great, but how do I get code from my laptop up into those pods to serve the game? I could build it locally using Docker, push it up to ECR (Elastic Container Registry) and from there deploy it into AWS EKS, but I really don’t want to do that by hand every time I want to deploy a code update.
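For reference, the manual version of that workflow looks something like this (a sketch; the account ID and repository name are placeholders):

aws ecr get-login-password --region ap-southeast-2 | docker login --username AWS --password-stdin 111111111111.dkr.ecr.ap-southeast-2.amazonaws.com
docker build -t 111111111111.dkr.ecr.ap-southeast-2.amazonaws.com/unicorn-game:v6 .
docker push 111111111111.dkr.ecr.ap-southeast-2.amazonaws.com/unicorn-game:v6
kubectl apply -f kubernetes/deployment.yaml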
The answer is automation. You’ll notice in the diagram I’ve added AWS CodePipeline to take code from GitHub after it’s been pushed up, build it, put the resulting image in ECR and then finally deploy it to AWS EKS.
CodePipeline
AWS CodePipeline allows you to automate your release pipelines for fast and reliable code updates. You’ll notice in the diagram below that within AWS CodePipeline the work of building and deploying is done by AWS CodeBuild.
Once code is pushed up from the laptop to GitHub, this triggers the first AWS CodeBuild stage, which builds the code using Docker and pushes it to AWS ECR. The next stage uses another AWS CodeBuild project to deploy the code to AWS EKS.
Next, I’ll take you through an in-depth description of each of the steps (known as stages) in the deployment pipeline.
Source Stage
Below is a screenshot of the “Source” stage of the CodePipeline, which takes code from GitHub once it’s pushed up and then passes it on to the “Build” stage.
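If you’d rather define the pipeline as code than click through the console, the equivalent Source stage in a CloudFormation pipeline definition looks roughly like this (a sketch; the connection ARN and repository ID are placeholders for your own GitHub connection):

Stages:
  - Name: Source
    Actions:
      - Name: GitHubSource
        ActionTypeId:
          Category: Source
          Owner: AWS
          Provider: CodeStarSourceConnection
          Version: "1"
        Configuration:
          ConnectionArn: arn:aws:codestar-connections:ap-southeast-2:111111111111:connection/example   # placeholder
          FullRepositoryId: my-github-user/unicorn-game                                                # placeholder
          BranchName: main
        OutputArtifacts:
          - Name: SourceOutput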
Build Stage
The “Build” stage uses the “buildspec” file below, which contains instructions to log in to AWS ECR, build a Docker image and then push it up to the ECR repository.
version: 0.2
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
      - REPOSITORY_URI=$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t $REPOSITORY_URI:v6 .
      - docker tag $REPOSITORY_URI:v6 $REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - docker push $REPOSITORY_URI:v6
      - docker push $REPOSITORY_URI:$IMAGE_TAG
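The buildspec above relies on environment variables like $AWS_ACCOUNT_ID, $IMAGE_REPO_NAME and $IMAGE_TAG, which are set on the CodeBuild project itself. As a sketch of one way to configure that in CloudFormation (the values are placeholders), note in particular that privileged mode is needed to run Docker inside CodeBuild:

Environment:
  ComputeType: BUILD_GENERAL1_SMALL
  Image: aws/codebuild/standard:7.0
  Type: LINUX_CONTAINER
  PrivilegedMode: true                # required for Docker builds in CodeBuild
  EnvironmentVariables:
    - Name: AWS_ACCOUNT_ID
      Value: "111111111111"           # placeholder
    - Name: IMAGE_REPO_NAME
      Value: unicorn-game             # placeholder repository name
    - Name: IMAGE_TAG
      Value: latest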
You can click “View logs” to see the results of the build. Below shows the logs after the CodeBuild Docker build completed successfully; you can see Docker being used to build the container image before pushing it up to AWS ECR.
All going well you'll see a new version of the container image in AWS ECR:
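You can also verify this from the command line (the repository name here is a placeholder):

aws ecr describe-images --repository-name unicorn-game --region ap-southeast-2 --query 'imageDetails[].imageTags'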
Deploy Stage
The “Deploy” stage is similar and also uses CodeBuild, but this time to deploy to AWS EKS. Again, a buildspec file is used. This time we need to assume an IAM role with access to AWS EKS, point the kubectl tool at the correct cluster and namespace, and then apply the deployment.yaml file to deploy the correct container image from AWS ECR. One thing worth knowing: assuming the IAM role alone isn’t enough; the role also has to be granted access inside the cluster (for example, by mapping it in the aws-auth ConfigMap) before kubectl commands will succeed.
version: 0.2
phases:
  pre_build:
    commands:
      - echo pre build
  build:
    commands:
      - echo Deployment started on `date`
      - echo Log into EKS
      - CREDENTIALS=$(aws sts assume-role --role-arn arn:aws:iam::915922766016:role/EksCodeBuildKubectl_role --role-session-name codebuild-kubectl --duration-seconds 900)
      - export AWS_ACCESS_KEY_ID="$(echo ${CREDENTIALS} | jq -r '.Credentials.AccessKeyId')"
      - export AWS_SECRET_ACCESS_KEY="$(echo ${CREDENTIALS} | jq -r '.Credentials.SecretAccessKey')"
      - export AWS_SESSION_TOKEN="$(echo ${CREDENTIALS} | jq -r '.Credentials.SessionToken')"
      - export AWS_EXPIRATION=$(echo ${CREDENTIALS} | jq -r '.Credentials.Expiration')
      - aws sts get-caller-identity
      - aws eks update-kubeconfig --name gamecluster --region ap-southeast-2
      - kubectl config set-context --current --namespace=games
      - kubectl get pods
      - cd kubernetes
      - kubectl apply -f deployment.yaml
      - kubectl get pods
      - echo Deployment completed on `date`
  post_build:
    commands:
      - echo Post Deployment completed on `date`
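The buildspec applies kubernetes/deployment.yaml, which isn’t shown in full here, but a minimal sketch of what a deployment like this could contain looks as follows (the names, repository and port are placeholders; only the games namespace and the ECR image flow come from the pipeline above):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: unicorn-game                  # placeholder name
  namespace: games
spec:
  replicas: 1
  selector:
    matchLabels:
      app: unicorn-game
  template:
    metadata:
      labels:
        app: unicorn-game
    spec:
      containers:
        - name: unicorn-game
          image: 915922766016.dkr.ecr.ap-southeast-2.amazonaws.com/unicorn-game:v6   # placeholder repository name
          ports:
            - containerPort: 8080     # assumed port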
Below shows the “Deploy” stage once the deployment to AWS EKS has completed. Again, you can click on the “View logs” button to check deployment details.
In the logs below you can see the deployment from AWS ECR and a new container being created in the Kubernetes pod inside AWS EKS.
And finally, you can check in the AWS EKS console that the latest pod has the new container loaded.
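The same check can be done with kubectl (the namespace comes from the buildspec above; substitute the pod name that kubectl get pods reports):

kubectl get pods -n games
kubectl describe pod <pod-name> -n games   # the Containers section shows which image each pod is running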
And that’s it! An automated deployment pipeline end to end. This means that all I need to do now is push my changes up to GitHub and CodePipeline uses the two stages above with CodeBuild to build and deploy the container into AWS EKS.
I’m hoping to share a demo of this at the AWS Community Day on the 6th of September so looking forward to seeing you there!
Stefan