How I designed and deployed a scalable microservice architecture with limited resources
Posted by ShauryaAg on April 03, 2021

Ok, so let’s start with what’s a microservice.

Basically, there are two types of architectures: monolithic and microservice.

Monolithic Architecture:

A monolithic architecture is one big service deployed as a single unit on one instance: if one part of the service has an issue, the whole site goes down.

Microservice Architecture:

A microservice architecture splits the application into multiple smaller services (called microservices). Each service is deployed separately, so deploying one doesn't affect the others.

Even if one service goes down, the remaining services keep running.

Let’s get started with setting up our microservice architecture…

At the moment we have three services (there will be more): an authentication service that authenticates users via OTP, a service that uploads files to AWS S3, and a GraphQL service for everything else.

Note: This is not a tutorial on how to write a NodeJS API

Let’s get started…

Our directory structure is as follows:

├── appspec.yml
├── docker-compose.yml
├── env
│   └── ...
├── nginx
│   ├── Dockerfile
│   └── ...
├── nodeauth
│   ├── Dockerfile
│   └── ...
├── nodeupload
│   ├── Dockerfile
│   └── ...
└── scripts
    └── ...

To keep the deployment side simple, I containerized everything using Docker.

Here’s the **Dockerfile** used by each of our NodeJS services (the same one goes in **nodeauth/** and **nodeupload/**).

# Pull the NodeJS image from dockerHub
FROM node:alpine
# Create the directory inside the container
WORKDIR /usr/src/app
# Copy the package.json files from local machine to the workdir in container
COPY package*.json ./
# Install dependencies inside the container
RUN npm install
# Copy the rest of the source code into the container
COPY . .
# Exposing port 8000 of the container
EXPOSE 8000
# Start the app
CMD ["node", "index.js"]

Note: The exposed PORT should be different for different services
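
This post isn’t about the Node code itself, but for context, each service only needs an entry point that listens on its own port. A minimal placeholder index.js (assuming Express and a PORT environment variable, neither of which the actual services necessarily use) could look like this:

// index.js - minimal placeholder entry point (assuming Express)
const express = require('express');
const app = express();

// Each service listens on its own port, e.g. 8000 for nodeauth
// and 8001 for nodeupload (read from the environment here)
const PORT = process.env.PORT || 8000;

// Nginx forwards the full path, so routes keep the /api/v1/... prefix
app.get('/api/v1/auth/health', (req, res) => res.json({ status: 'ok' }));

app.listen(PORT, () => console.log(`Service listening on port ${PORT}`));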

Now, we need to set up Nginx as our API gateway.

For that we need the **_nginx/Dockerfile_** and nginx configuration files

# Pull Nginx image from DockerHub
FROM nginx
# Copying general purpose files
COPY example.com/ /etc/nginx/example.com/
# Replace default.conf with our nginx.conf file
COPY nginx.conf /etc/nginx/nginx.conf

and the **_nginx.conf_** file is as follows

events {
    worker_connections 1024;
}

http {
    upstream nodeauth {
        least_conn;
        server nodeauth:8000;
    }

    upstream nodeupload {
        least_conn;
        server nodeupload:8001;
    }

    server {
        listen 80;
        server_name example.com sub.example.com;

        location /api/v1/auth {
            include example.com/proxy.conf;
            proxy_pass http://nodeauth;
        }

        location /api/v1/upload {
            include example.com/proxy.conf;
            proxy_pass http://nodeupload;
        }
    }
}
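
The example.com/proxy.conf file included in each location block isn’t shown in this post; a typical version (just a sketch, not necessarily the exact file used here) sets the usual forwarding headers:

# example.com/proxy.conf (sketch) - forward the original request details
proxy_http_version 1.1;
proxy_set_header Host              $host;
proxy_set_header X-Real-IP         $remote_addr;
proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;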

Now that we have our three containers ready, we just need a docker-compose file to start them all with a single command.

**_docker-compose.yml_**

version: "3.8"
services:
nodeauth:
build:
context: ./nodeauth
ports:
- "8000:8000"
nodeupload:
build:
context: ./nodeupload
ports:
- "8001:8001"
nginx:
restart: always
build:
context: ./nginx
depends_on:
- nodeauth
- nodeupload
ports:
- "80:80"

Voilà! All your containers are ready; now you just need to deploy them to a server. Push everything to a git repository of your choice.

I used an AWS EC2 instance to deploy.

Go to your AWS EC2 console and start a new instance. I used a t2.micro instance with the Ubuntu 20.04 image, as it is covered under the free tier.

SSH into your EC2 instance and clone the repository you just pushed: git clone <repo-link>.

Now, install Docker on the instance: sudo snap install docker.

Once you have cloned the repo and installed Docker, it’s time to build and start the containers. This is where containerizing everything helps: you only need to run one command.

sudo docker-compose up --build

Make sure you have changed into the cloned repository’s directory first; the whole sequence is summarized below.
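
Put together, the steps on the instance look something like this (a sketch; <repo-name> is a placeholder for whatever your repository’s directory is called):

# On a fresh Ubuntu 20.04 EC2 instance
sudo snap install docker          # install Docker
git clone <repo-link>             # clone the repository you pushed earlier
cd <repo-name>                    # change into the cloned repository
sudo docker-compose up --build    # build and start all the containers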

You got it done! Now you just need to point your domain at the instance (an A record for its public IP) and you can make requests to your domain.

But wait…

The requests are only being served over http, not https. You need to set up https for your domain now.

In order to set up https for our server, we need an SSL certificate. We could do all of that manually, but there’s no point when others have already automated it for us.

I used the staticfloat/nginx-certbot Docker image to handle it for me.

We need to listen on port 443 for https instead of port 80, and specify ssl_certificate and ssl_certificate_key in the **_nginx.conf_**.

# Auth Service
upstream nodeauth {
    least_conn;
    server nodeauth:8000;
}

# Upload Service
upstream nodeupload {
    least_conn;
    server nodeupload:8001;
}

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location /api/v1/auth {
        include example.com/proxy.conf;
        proxy_pass http://nodeauth;
    }

    location /api/v1/upload {
        include example.com/proxy.conf;
        proxy_pass http://nodeupload;
    }

    # Include general purpose files
    include example.com/security.conf;
    include example.com/general.conf;

    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
}
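
The security.conf and general.conf files included above aren’t shown in this post either. As a rough idea, a security.conf along these lines is common (treat it as a sketch, not the exact file used here):

# example.com/security.conf (sketch) - common security headers
add_header X-Frame-Options           "SAMEORIGIN" always;
add_header X-Content-Type-Options    "nosniff" always;
add_header Referrer-Policy           "no-referrer-when-downgrade" always;
add_header Strict-Transport-Security "max-age=31536000" always;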

And your **nginx/Dockerfile** changes to…

# Pull the nginx-certbot image from DockerHub
FROM staticfloat/nginx-certbot
# Email used for the Let's Encrypt certificate registration
ENV CERTBOT_EMAIL=info@example.com
# Server blocks that the nginx-certbot image picks up
COPY conf.d/ /etc/nginx/user.conf.d
# Copying general purpose files
COPY example.com/ /etc/nginx/example.com
# Exposing port 443 of the container
EXPOSE 443

You also need to change the **docker-compose.yml** to persist the container’s letsencrypt/ directory in a volume, so the certificates survive container rebuilds.

version: "3.8"
services:
nodeauth:
build:
context: ./nodeauth
env_file:
- ./env/.env
ports:
- "8000:8000"
nodeupload:
build:
context: ./nodeupload
env_file:
- ./env/.env
ports:
- "8001:8001"
nginx:
restart: always
build:
context: ./nginx
environment:
CERTBOT_EMAIL: info@example.com
ports:
- "80:80"
- "443:443"
depends_on:
- nodeauth
- nodeupload
volumes:
- ./nginx/conf.d:/etc/nginx/user.conf.d:ro
- letsencrypt:/etc/letsencrypt
volumes:
letsencrypt:

You are all done! Just push these changes to your git repository, and pull them on your instance.

Configure your instance’s security group to allow inbound HTTPS (port 443) traffic.
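
You can do that from the EC2 console, or with the AWS CLI if you prefer; the security group ID below is a placeholder:

# Allow inbound HTTP and HTTPS on the instance's security group
# (sg-0123456789abcdef0 is a placeholder - use your own group ID)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 0.0.0.0/0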

Now, all you have to do is run sudo docker-compose up --build again, and your services should be running on https.

But think about it…

Do you really want to pull every change that you make on your service manually and restart the service again? No, right?

Neither do I; that’s why I set up CI/CD pipelines to deploy and restart the services automatically every time a new commit is pushed to the git repo.

Let’s get to that stuff in PART 2….
