
How I designed a CI/CD setup for a Microservice Architecture at zero cost
Posted by ShauryaAg on April 03, 2021

Read the previous parts of this series first.

In the last part we set up our pipeline for a Monolithic Architecture, but that’s not what I promised in the first part of the series.

Let’s get going…

Off to decoupling our services, so that they can live freely once again.

Well, that’s easy. Create a separate repository for each service, copy the workflow files to each of them. You are done! Ok, bye.

No, definitely not.

You can use that setup if all you have to do is run unit tests, but what about integration testing? You can't do that on the production instance, and as a struggling startup you don't have the resources to spin up more instances just to run integration tests.

Well?

Ok, we will be putting each service in its own separate remote repository, but we will have one parent repository that refers to all the services’ repositories.

Let’s get started with git submodules…

Git submodules are a way of adding a git repository inside another git repository. Each submodule points to a specific commit in its remote repository.

The original intention behind git submodules was to pin a local copy of a certain commit (or release) that our project depends on.

You just need to run:

git submodule add <my-remote> <optional-path>

However, we need the submodules to stay up-to-date with our services' repositories, so it's a good thing that we can also make them track a branch instead:

git config -f .gitmodules submodule.<submodule-name>.branch <branch-name>
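
Under the hood, that command just records the branch in the .gitmodules file of the parent repo; a quick sketch, where the submodule name, path, and URL are placeholders for your own services:

[submodule "nodeauth"]
    path = nodeauth
    url = https://github.com/<user>/nodeauth.git
    branch = master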

Now you can keep all your submodules up-to-date with just one command:

git submodule update --remote

Now that git submodules are out of the way, let's get to the actual “good” stuff.

Ok, let’s talk about the actual workflow that I follow:

  • Each service's repository contains its unit tests.
  • After a commit is pushed to the service's repository, a workflow runs the unit tests.
  • If all the unit tests pass, the workflow updates the parent repository to point to the new commit.

_<submodule>/.github/workflows/test-and-push.yml_

name: CI/CD Deployment
on: [push]
jobs:
  buildAndTest:
    name: CI Pipeline
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: ['12.x']
    steps:
      - uses: actions/checkout@v2
      # Initialize Node.js
      - name: Install Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      # Install project dependencies and test
      - name: Install dependencies
        run: npm install
      - name: Run tests
        run: npm run test
  push:
    name: Deploy
    runs-on: ubuntu-latest
    needs: buildAndTest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      # Clone the parent repo. Note: the default GITHUB_TOKEN is scoped to
      # the current repository only, so the remote URL needs credentials
      # with push access to the parent repo (e.g. a PAT embedded in the
      # HTTPS URL)
      - name: Clone Parent
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          git clone --recursive <parent-repo-remote> parent-repo
      # Configure username and email
      - name: Config username
        run: |
          git config --global user.name '<your-name>'
          git config --global user.email '<your-email>'
      # Point the submodule at the new commit, then commit and push
      - name: Commit and push
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          cd parent-repo
          git submodule update --remote
          git add .
          git commit -m "Updated Tracking ${{ github.sha }}"
          git push origin master

  • The parent repo lists the submodules where changes were made since the last push and redeploys only those services.

_<parent-repo>/.github/workflows/deploy.yml_

name: CI/CD Deployment
on: [push]
jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: [3.8]
        node-version: ['12.x']
        appname: ['my-application-codedeploy']
        deploy-group: ['prod']
        s3-bucket: ['my-application-codedeploys']
        s3-filename: ['prod-aws-codedeploy-${{ github.sha }}']
    steps:
      - name: Checkout
        uses: actions/checkout@v2
        with:
          fetch-depth: 0
          submodules: 'true'
          token: ${{ secrets.PAT }} # Defining PAT to fetch private submodules
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}
      - name: AWS Deploy
        env:
          AWS_APP_NAME: ${{ matrix.appname }}
          AWS_DEPLOY_GROUP: ${{ matrix.deploy-group }}
          AWS_BUCKET_NAME: ${{ matrix.s3-bucket }}
          AWS_FILENAME: ${{ matrix.s3-filename }}
          GITHUB_EVENT_BEFORE: ${{ github.event.before }}
          GITHUB_SHA: ${{ github.sha }}
        run: |
          sudo chmod +x ./scripts/deploy.sh
          ./scripts/deploy.sh

This way, all the other services keep running without any disturbance while one of the services is updated.

_<parent-repo>/scripts/deploy.sh_

#!/bin/bash
# Get all the submodules (directories)/files where changes were made
temp=("$(git diff-tree --submodule=diff --name-only ${GITHUB_EVENT_BEFORE} ${GITHUB_SHA})")
echo $temp
# Keep only distinct values in the array
UNIQ_SUBS=($(echo "${temp[@]}" | tr ' ' '\n' | sort -u | tr '\n' ' '))
for SUB in ${UNIQ_SUBS[@]}
do
  # Skip entries that are not directories (i.e. not submodules)
  if [ -d "$SUB" ]
  then
    cd ${SUB}
    # Only deploy services that carry a CodeDeploy appspec.yml
    if [ ! -f "appspec.yml" ]
    then
      echo $PWD
      echo "appspec.yml not found in ${SUB}"
      cd ..
      continue
    fi
  else
    continue
  fi
  chmod +x ../scripts/get_env.sh
  ../scripts/get_env.sh ${SUB}
  # Bundle the service and push the revision to S3
  aws deploy push \
    --application-name ${AWS_APP_NAME} \
    --description "Revision for the ${SUB}-${AWS_APP_NAME}" \
    --no-ignore-hidden-files \
    --s3-location s3://${AWS_BUCKET_NAME}/${SUB}-${AWS_FILENAME}.zip \
    --source .
  # Trigger a deployment of that revision to the service's deployment group
  aws deploy create-deployment \
    --application-name ${AWS_APP_NAME} \
    --deployment-config-name CodeDeployDefault.OneAtATime \
    --deployment-group-name ${SUB}-${AWS_DEPLOY_GROUP} \
    --file-exists-behavior OVERWRITE \
    --s3-location bucket=${AWS_BUCKET_NAME},key=${SUB}-${AWS_FILENAME}.zip,bundleType=zip
  cd ..
done

I also created a separate Deployment Group for each service, named _<service-name>-prod_; i.e. for the _nodeauth_ service, I created a _nodeauth-prod_ deployment group, with the rest of the configuration the same as in the previous part.
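
If you prefer the CLI over the console for this, here's a minimal sketch (the service role ARN and the EC2 tag filter are assumptions; use the values from your own CodeDeploy setup in the previous part):

aws deploy create-deployment-group \
  --application-name my-application-codedeploy \
  --deployment-group-name nodeauth-prod \
  --service-role-arn <codedeploy-service-role-arn> \
  --ec2-tag-filters Key=Name,Value=<your-instance-tag>,Type=KEY_AND_VALUE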

We also need to modify the appspec.yml and our scripts.

  • Since each service is a separate deployment, we need to put the appspec.yml in each service's repository.

_<submodule>/appspec.yml_

version: 0.0
os: linux
files:
  - source: /
    destination: /home/ubuntu/my-application/my-submodule-1
hooks:
  BeforeInstall:
    - location: ./scripts/init.sh
      timeout: 300
      runas: root
  ApplicationStart:
    - location: ./scripts/start_app.sh
      timeout: 300
      runas: root
  ApplicationStop:
    - location: ./scripts/cleanup.sh
      timeout: 300
      runas: root

  • Since we have decoupled all the services from each other, we no longer run them with sudo docker-compose up; each service has to be started individually.

_<submodule>/scripts/start_app.sh_

#!/bin/bash
# DEPLOYMENT_GROUP_NAME is set by the CodeDeploy agent; strip the "-prod"
# suffix to recover the service name, e.g. nodeauth-prod -> nodeauth
SERVICE_NAME=${DEPLOYMENT_GROUP_NAME%-*}
# Change directory into the service folder
cd /home/ubuntu/tastebuds-backend/${SERVICE_NAME}/
# Read env variables from the .env file
export $(cat .env | xargs)
# Remove the previous container
sudo docker stop ${SERVICE_NAME}
sudo docker rm ${SERVICE_NAME}
# Create a default network to connect the services
sudo docker network create ${NETWORK}
# Build the docker image
sudo docker build -t ${SERVICE_NAME} .
# Run the container, mounting a volume only if one is configured
if [ -z ${VOLUME} ]
then
  sudo docker run --rm -d -p ${PORT}:${PORT} --network ${NETWORK} --name ${SERVICE_NAME} ${SERVICE_NAME}
else
  sudo docker run --rm -d -p ${PORT}:${PORT} -v $(pwd)/${VOLUME} --network ${NETWORK} --name ${SERVICE_NAME} ${SERVICE_NAME}
fi
# Unset the env variables again
unset $(grep -v '^#' .env | sed -E 's/(.*)=.*/\1/' | xargs)
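For reference, start_app.sh assumes each service folder carries a .env file defining at least PORT and NETWORK; VOLUME is optional, and when it's absent the container simply runs without a mount. A minimal example with made-up values:

PORT=3000
NETWORK=my-app-network
VOLUME=data:/usr/src/app/data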

  • I wrote a slightly more complex script so that we can use the same set of scripts in all our services instead of writing them again and again as new services are added.
  • The scripts can also be added as a git submodule to all the services’ repositories, which makes it easier to maintain (but they weren’t in my setup at the moment of writing this blog).
  • init.sh contains code to install Docker on the instance (if not already present).
  • cleanup.sh contains code to remove the previous unused containers (minimal sketches of both follow this list).
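
I haven't reproduced my versions in full here, but here are minimal sketches of both, assuming an Ubuntu instance and Docker's convenience install script:

#!/bin/bash
# init.sh: install Docker only if it isn't already present on the instance
if ! command -v docker &> /dev/null
then
  curl -fsSL https://get.docker.com -o get-docker.sh
  sudo sh get-docker.sh
fi

#!/bin/bash
# cleanup.sh: remove stopped containers left over from earlier deployments
sudo docker container prune -f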

That’s it. You are finally done. You’ve got your own weird-ass setup to test and deploy a Microservice Architecture at zero cost. You can also keep the previous docker-compose.yml to maintain a local development setup.
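
A minimal sketch of such a compose file, assuming each submodule folder has a Dockerfile at its root (service names and ports are placeholders):

version: '3'
services:
  nodeauth:
    build: ./nodeauth
    ports:
      - '3000:3000'
  <my-other-service>:
    build: ./<my-other-service>
    ports:
      - '5000:5000'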

The single instance's cost is covered under the AWS free tier.

disclaimer: Yes, there is probably a better way of doing this. I hope there is.
