CI/CD with Bitbucket Pipelines and AWS Elastic Beanstalk: Dockerized Deployment with docker-compose.yml + AWS Secrets Manager (2023)

I’m going to share a very simple yet effective setup for creating a CI/CD pipeline using Bitbucket Pipelines and AWS Elastic Beanstalk with auto scaling. The whole setup consists of:

  1. bitbucket-pipelines.yml
  2. A Python deployment script.
  3. Part of a Terraform config that creates the Elastic Beanstalk environment.
  4. docker-compose.yml and some details about Elastic Beanstalk configuration files.

bitbucket-pipelines.yml

image: python:3.7.2

pipelines:
  custom:
    deployment-to-staging:
      - step:
          name: Package application
          image: kramos/alpine-zip
          script:
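            # NB: the shell glob * doesn't match dotfiles, so .platform and .ebextensions are added explicitly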
            - zip -r artifact.zip * .platform .ebextensions
          artifacts:
            - artifact.zip
      - step:
          name: Deployment
          deployment: STAGING
          trigger: automatic
          caches:
            - pip
          script:
            - curl -O https://bootstrap.pypa.io/pip/3.4/get-pip.py
            - python get-pip.py
            - python -m pip install --upgrade "pip < 21.0"
            - pip install boto3==1.14.26 jira==2.0.0 postmarker==0.13.0 urllib3==1.24.1
            - python deployment/beanstalk_deploy.py
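            # Tag the commit with the deployment environment and a timestamp for traceability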
            - deploy_env=STAGING
            - new_tag=$deploy_env$(date +_%Y%m%d%H%M%S)
            - git tag -a "$new_tag" -m "$new_tag"
            - rm -f .git/hooks/pre-push
            - git push origin --tags

For this pipeline to work there are some prerequisites. First, you need to set up deployments on Bitbucket. (Note that pipelines under custom: are not triggered by pushes; you run them manually from the Bitbucket UI, via the API, or on a schedule.) Deployment settings are usually found under this path in your project:

/admin/addon/admin/pipelines/deployment-settings

There, you need to enable deployments and set up your staging deployment with at least two environment variables scoped to that deployment.

APPLICATION_NAME // Your AWS Elastic Beanstalk application name.
APPLICATION_ENVIRONMENT // Your AWS Elastic Beanstalk environment name.

We need three other environment variables in the “Repository Variables” section of the repo:

S3_BUCKET // Where the zipped application code is uploaded before being deployed to Beanstalk.
AWS_SECRET_ACCESS_KEY // AWS secret access key for authenticating to AWS services.
AWS_ACCESS_KEY_ID // AWS access key ID for authenticating to AWS services.

Every Beanstalk application can have many environments. I usually create a separate application per stage, as in “Production-Application” or “Staging-Application”, and within each of these applications I have environments such as “Web-Environment” and “Worker-Environment” and so on.
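
For example, assuming the AWS CLI is configured with the keys above, you can list the environments under one of these applications like this (the application name here is illustrative):

aws elasticbeanstalk describe-environments \
  --application-name "Staging-Application" \
  --query "Environments[].EnvironmentName" \
  --output text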

Deployment Script

# Copyright 2016 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file
# except in compliance with the License. A copy of the License is located at
#
#     http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is distributed on an "AS IS"
# BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations under the License.
"""
A Bitbucket Builds template for deploying
an application to AWS Elastic Beanstalk
[email protected]
v1.0.0
"""
from __future__ import print_function
import os
import sys
from time import strftime, sleep
import boto3
from botocore.exceptions import ClientError, WaiterError

VERSION_LABEL = strftime("%Y%m%d%H%M%S")
BUCKET_KEY = os.getenv('APPLICATION_NAME') + '/' + VERSION_LABEL + \
             '-bitbucket_builds.zip'


def upload_to_s3(artifact):
    """
    Uploads an artifact to Amazon S3
    """
    try:
        client = boto3.client('s3', region_name='eu-central-1')
    except ClientError as err:
        print("Failed to create boto3 client.\n" + str(err))
        return False

    try:
        with open(artifact, 'rb') as body:
            client.put_object(
                Body=body,
                Bucket=os.getenv('S3_BUCKET'),
                Key=BUCKET_KEY
            )
    except ClientError as err:
        print("Failed to upload artifact to S3.\n" + str(err))
        return False
    except IOError as err:
        print("Failed to access artifact.zip in this directory.\n" + str(err))
        return False

    return True


def create_new_version():
    """
    Creates a new application version in AWS Elastic Beanstalk
    """
    try:
        client = boto3.client('elasticbeanstalk', region_name='eu-central-1')
    except ClientError as err:
        print("Failed to create boto3 client.\n" + str(err))
        return False

    try:
        response = client.create_application_version(
            ApplicationName=os.getenv('APPLICATION_NAME'),
            VersionLabel=VERSION_LABEL,
            Description='New build from Bitbucket',
            SourceBundle={
                'S3Bucket': os.getenv('S3_BUCKET'),
                'S3Key': BUCKET_KEY
            },
            Process=True
        )
    except ClientError as err:
        print("Failed to create application version.\n" + str(err))
        return False

    try:
        if response['ResponseMetadata']['HTTPStatusCode'] == 200:
            return True
        else:
            print(response)
            return False
    except (KeyError, TypeError) as err:
        print(str(err))
        return False


def deploy_new_version(environment):
    """
    Deploy a new version to AWS Elastic Beanstalk
    """
    try:
        client = boto3.client('elasticbeanstalk', region_name='eu-central-1')
    except ClientError as err:
        print("Failed to create boto3 client.\n" + str(err))
        return False

    try:
        client.update_environment(
            ApplicationName=os.getenv('APPLICATION_NAME'),
            EnvironmentName=os.getenv(environment),
            VersionLabel=VERSION_LABEL,
        )
    except ClientError as err:
        print("Failed to update environment.\n" + str(err))
        return False

    waiter = client.get_waiter('environment_updated')
    try:
        waiter.wait(
            ApplicationName=os.getenv('APPLICATION_NAME'),
            EnvironmentNames=[os.getenv(environment)],
            IncludeDeleted=False,
            WaiterConfig={
                'Delay': 20,
                'MaxAttempts': 30
            }
        )
        return True
    except WaiterError:
        print('Deployment may have failed, or this may be a false positive. Please check the Beanstalk console.')
        return False


def main():
    " Your favorite wrapper's favorite wrapper "
    if not upload_to_s3('artifact.zip'):
        sys.exit(1)
    if not create_new_version():
        sys.exit(1)
    # Wait for the new version to be consistent before deploying
    sleep(5)
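    # 'APPLICATION_ENVIRONMENT' is the name of the env var that holds the Beanstalk environment name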
    if not deploy_new_version('APPLICATION_ENVIRONMENT'):
        sys.exit(1)

if __name__ == "__main__":
    main()

The deployment script is pretty much self-explanatory. It takes the zipped application code created in the first step of the pipeline and creates a new Beanstalk application version from it. Then it deploys the new version to Beanstalk and waits, via the environment_updated waiter, for the environment to finish updating, and that’s it.

You can see the line in the pipeline yml file where it calls:

python deployment/beanstalk_deploy.py

This means your script is under a folder named deployment within your source code.
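
A minimal repository layout for this setup would therefore look something like this (only the files mentioned so far):

bitbucket-pipelines.yml
deployment/
  beanstalk_deploy.py
docker-compose.yml
.ebextensions/
.platform/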

Terraform for Elastic Beanstalk

The following is a section of a Terraform file that I use to create resources on AWS. This file won’t work by itself, but it should give you the general idea.

Specifically, you need to define some data sources and resources before this section of the file. For example, you need to declare your VPC data source like this:

data "aws_vpc" "my_vpc" {
  id = "your-vpc-id"
}

so that this section can use the reference data.aws_vpc.my_vpc.

resource "aws_elastic_beanstalk_application" "examplewebapp_staging" {
  name        = "examplewebapp-staging-application"
  description = "Examplewebapp Staging Application"

  appversion_lifecycle {
    service_role          = "arn:aws:iam::502026590581:role/aws-elasticbeanstalk-service-role"
    max_count             = 128
    delete_source_from_s3 = true
  }
}

resource "aws_elastic_beanstalk_configuration_template" "examplewebapp_staging_template" {
  name                = "examplewebapp-staging-template-config"
  application         = aws_elastic_beanstalk_application.examplewebapp_staging.name
  solution_stack_name = "64bit Amazon Linux 2 v3.5.5 running Docker"
  setting {
    namespace = "aws:autoscaling:launchconfiguration"
    name      = "InstanceType"
    value     = "c5.2xlarge"
  }
  setting {
    namespace = "aws:autoscaling:launchconfiguration"
    name      = "IamInstanceProfile"
    value     = "aws-elasticbeanstalk-ec2-role"
  }
  setting {
    namespace = "aws:autoscaling:launchconfiguration"
    name      = "EC2KeyName"
    value     = "examplewebapp-beanstalk-staging"
  }
  setting {
    namespace = "aws:autoscaling:launchconfiguration"
    name      = "SecurityGroups"
    value     = data.aws_security_group.default.id
  }
  setting {
    namespace = "aws:ec2:vpc"
    name      = "Subnets"
    value     = aws_subnet.private_subnet.id
  }
  setting {
    namespace = "aws:ec2:vpc"
    name      = "VPCId"
    value     = data.aws_vpc.my_vpc.id
  }
  setting {
    namespace = "aws:ec2:vpc"
    name      = "ELBSubnets"
    value     = data.aws_subnet.public_subnet.id
  }
}

resource "aws_elastic_beanstalk_environment" "examplewebapp_staging_environment" {
  name          = "examplewebapp-staging-environment"
  application   = aws_elastic_beanstalk_application.examplewebapp_staging.name
  template_name = aws_elastic_beanstalk_configuration_template.examplewebapp_staging_template.name
  tier          = "WebServer"
}

With this section of the Terraform config we created an Elastic Beanstalk application called examplewebapp-staging-application and an Elastic Beanstalk environment called examplewebapp-staging-environment.
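
Assuming the rest of the config (provider, subnets, security groups) is in place, the standard Terraform workflow applies:

terraform init
terraform plan
terraform apply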

docker-compose.yml and Elastic Beanstalk Configuration Files and Folders

docker-compose.yml

version: '3.8'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    restart: always
    env_file:
      - .env
    ports:
      - 80:80

This is a very simplified docker-compose.yml file. When Elastic Beanstalk fetches your application version, it unzips it and checks for several things.

If it finds a docker-compose.yml file, it uses that file to create the containers. Here there’s a trick: you might have one docker-compose.yml file for your development environment and another one for the staging/production environments. Let’s say you have these files:

  • docker-compose.yml // for development.
  • docker-compose.beanstalk.yml // for staging/production deployments.

You don’t want the default docker-compose.yml file to be used when deploying to Elastic Beanstalk environments, so you need to rename your docker-compose.beanstalk.yml file to docker-compose.yml so that Beanstalk creates the actual containers.

You can achieve this with Beanstalk platform hooks. All you need to do is create a folder called .platform/hooks/prebuild in your source code and put some scripts there to run before the containers are built.

.platform/hooks/prebuild/01prepare.sh

#!/bin/bash
cp docker-compose.beanstalk.yml docker-compose.yml
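
One gotcha: Elastic Beanstalk only runs platform hook scripts that are executable, which is also why the script needs the shebang line above. Make sure the executable bit is committed to the repo, for example:

chmod +x .platform/hooks/prebuild/01prepare.sh
git update-index --chmod=+x .platform/hooks/prebuild/01prepare.sh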

Another thing we are missing now is the .env file referenced by docker-compose.beanstalk.yml. Let’s say we have created our environment variables in AWS Secrets Manager. Now we need to fetch those and write them to a file called .env so that our Docker Compose setup does not break. We can update our 01prepare.sh file to do this:

#!/bin/bash
cp docker-compose.beanstalk.yml docker-compose.yml
aws --region "eu-central-1" secretsmanager get-secret-value --secret-id "examplewebapp_staging" | \
  jq -r '.SecretString' | \
  jq -r "to_entries|map(\"\(.key)=\\\"\(.value|tostring)\\\"\")|.[]" > .env

This will create the .env file in the proper format, and that’s it.
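
For example, if the examplewebapp_staging secret held a hypothetical SecretString like {"DB_HOST": "db.internal", "DB_PASSWORD": "s3cret"}, the jq pipeline above would write a .env like:

DB_HOST="db.internal"
DB_PASSWORD="s3cret"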

Debugging Problems

There’s only one certainty: deployments will fail! There can be many reasons: a missing piece of configuration, an instance type that can’t handle your app, other errors in your AWS setup, and so on. Anything is possible.

When your deployment fails, Elastic Beanstalk will give you some information, but that is usually not enough by itself. You’ll need to get into the EC2 instance that Beanstalk created and check the logs under /var/log.

The main file I check to debug problems is eb-engine.log. I usually run tail -f on that file while deploying new versions during the very first setup. Once deployments succeed, you won’t need to check these logs again unless something changes in your deployment setup.
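
If you are setting this up for the first time, here’s a sketch of that flow, assuming the key pair from the Terraform config above and an instance you can reach:

# Find the instance behind the environment
aws ec2 describe-instances \
  --filters "Name=tag:elasticbeanstalk:environment-name,Values=examplewebapp-staging-environment" \
  --query "Reservations[].Instances[].PublicIpAddress" \
  --output text

# SSH in and follow the engine log
ssh -i examplewebapp-beanstalk-staging.pem ec2-user@<instance-ip>
tail -f /var/log/eb-engine.log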

Conclusion

This is probably 95% of what you need to do to create an automated CI/CD pipeline on Bitbucket with AWS Elastic Beanstalk and AWS Secrets Manager. The remaining 5% is the small adjustments here and there which you’ll need to make depending on the information you get from failed deployments. I tried to give all the information, but it’s possible that I forgot some minor details.

There are a lot more configuration details for Elastic Beanstalk, so I suggest you take a deep dive into the AWS docs.

Thank you for reading.