I’ve recently been working on a small software project which I intend to run on a single server.

The project consists of:
* Static Website (Static HTML)
* Backend API server (Python running ASGI webserver)
* Front-end Single page App (Vue compiled to an SPA)

From a git project perspective, each piece of the project lives in its own repository. There is a separate repository for the static website, API server and SPA. Throughout this article, we’ll also create two more repositories.

Using containers is an excellent way of compartmentalizing different aspects of an application and making deployment consistent across any development environment. Using continuous integration pipelines (CI) can automate the process of building and deploying container images. In this tutorial, each sub-project will be configured to have its own container for easy versioning and building.

Building containers with a CI pipeline can be painful to get running right. Small errors are easy to make, and the slow development cycle of CI pipelines doesn’t help. This article focuses on the configuration needed to deploy a project like this.

While there are different approaches to container deployment, this tutorial will take the following approach:
* Project container management will use docker compose
* Web server will use nginx
* SSL will use LetsEncrypt
* Deployment will use Gitlab CI pipelines and a Gitlab runner
* Deployments will be triggered manually

Throughout this tutorial, you’ll see references to mysite.com. Replace this with the name relevant to your project.

Web Server

Nginx was chosen as the web server for this project. All sites are served using a single instance of nginx, which either serves static HTML sites or proxies requests through to a backend server as required.

To achieve this, an nginx container is needed, which then serves content from the other containers.

Step 1: Create a new gitlab project called “nginx”.

We’ll start without SSL and then add LetsEncrypt at the end. The relevant LetsEncrypt configuration is included below but should be left commented out for now.

Step 2: Add the following files to the repository

mysite.conf

```nginx
server {
    listen 80 default_server reuseport;
    listen [::]:80 default_server reuseport;
    # listen 443 ssl default_server reuseport;
    # listen [::]:443 ssl default_server reuseport;

    server_name mysite.com;

    # ssl_certificate /etc/letsencrypt/live/mysite.com/fullchain.pem;
    # ssl_certificate_key /etc/letsencrypt/live/mysite.com/privkey.pem;
    # ssl_trusted_certificate /etc/letsencrypt/live/mysite.com/chain.pem;

    root /usr/share/nginx/website;

    location / {
        try_files $uri $uri/ /index.html;
    }

    error_page 404 /index.html;
}

server {
    listen 80;
    listen [::]:80;
    # listen 443 ssl;
    # listen [::]:443 ssl;

    server_name app.mysite.com;

    # ssl_certificate /etc/letsencrypt/live/mysite.com/fullchain.pem;
    # ssl_certificate_key /etc/letsencrypt/live/mysite.com/privkey.pem;
    # ssl_trusted_certificate /etc/letsencrypt/live/mysite.com/chain.pem;

    root /usr/share/nginx/frontend;

    location / {
        try_files $uri $uri/ /index.html;
    }

    error_page 404 /index.html;
}

server {
    listen 80;
    listen [::]:80;
    # listen 443 ssl;
    # listen [::]:443 ssl;

    server_name api.mysite.com;

    # ssl_certificate /etc/letsencrypt/live/mysite.com/fullchain.pem;
    # ssl_certificate_key /etc/letsencrypt/live/mysite.com/privkey.pem;
    # ssl_trusted_certificate /etc/letsencrypt/live/mysite.com/chain.pem;

    location / {
        proxy_pass http://api:8080/;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Prefix /;
    }
}
```

Note that only the first server block declares `default_server reuseport` on the commented 443 listeners; declaring it in all three would prevent nginx from starting once SSL is enabled.

This will set up Nginx to serve three subdomains: one for the website, one for the frontend app, and one for the API.

/usr/share/nginx/website and /usr/share/nginx/frontend are the folders from which static files will be served. These will be mapped to folders on the server's local filesystem using docker.

The API will be served by Python from a container called “api” on port 8080. Nginx will just forward requests.

setup_certbot.sh

```shell
#!/bin/bash
# Assumes that the nginx server is already installed and running

# Obtain the SSL certificate
certbot certonly --nginx --cert-name mysite.com \
  -d mysite.com -d app.mysite.com -d api.mysite.com \
  --non-interactive --agree-tos -m hello@mysite.com

# Set up a cron job for automatic renewal
echo "0 0,12 * * * certbot renew --quiet --post-hook 'service nginx reload'" | crontab -

# Start cron service
service cron start
```

This will be used to, as the name suggests, set up certbot (the LetsEncrypt agent responsible for issuing and renewing security certificates).

Dockerfile

```dockerfile
FROM nginx:latest

RUN apt-get update && apt-get install -y \
    cron \
    certbot \
    python3-certbot-nginx \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /opt/mysite

COPY mysite.conf /etc/nginx/conf.d/mysite.conf
COPY setup_certbot.sh /opt/mysite/nginx/setup_certbot.sh
RUN chmod +x /opt/mysite/nginx/setup_certbot.sh

EXPOSE 80
EXPOSE 443

ENTRYPOINT ["sh", "-c", "service nginx start && sleep infinity"]
# ENTRYPOINT ["sh", "-c", "service nginx start && /opt/mysite/nginx/setup_certbot.sh && sleep infinity"]
```

This Dockerfile simply copies the mysite.conf configuration above into the container, exposes ports 80 and 443 to the internet, and starts the nginx service.

.gitlab-ci.yml

```yaml
stages:
  - build

variables:
  VERSION_MAJOR: 0
  VERSION_MINOR: 0
  VERSION_STRING: ${VERSION_MAJOR}.${VERSION_MINOR}.${CI_PIPELINE_IID}
  DOCKER_IMAGE: $CI_REGISTRY_IMAGE

build container:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  before_script:
    - echo "started by ${GITLAB_USER_NAME}"
    - echo "$CI_REGISTRY_PASSWORD" | docker login $CI_REGISTRY -u $CI_REGISTRY_USER --password-stdin
  script:
    - docker build -t $DOCKER_IMAGE:$VERSION_STRING .
    - docker tag $DOCKER_IMAGE:$VERSION_STRING $DOCKER_IMAGE:latest
    - docker push $DOCKER_IMAGE:$VERSION_STRING
    - docker push $DOCKER_IMAGE:latest
  only:
    - main
```

The GitLab CI configuration will rebuild the Nginx server with our configuration baked in and store the image in the GitLab container registry. The script also auto-versions each build of the container, which we’ll use later.

Static Website

Packaging a static website inside a container may seem like overkill, but for consistency of deployment, it makes sense. The configuration is essentially copying files in and out of the container.

Inside the static website repository, add the following files:

Dockerfile:

```dockerfile
FROM alpine

COPY public /website
COPY docker/entrypoint.sh /entrypoint.sh
```

.gitlab-ci.yml

```yaml
stages:
  - deploy

variables:
  VERSION_MAJOR: 0
  VERSION_MINOR: 0
  VERSION_STRING: ${VERSION_MAJOR}.${VERSION_MINOR}.${CI_PIPELINE_IID}
  DOCKER_IMAGE: $CI_REGISTRY_IMAGE

build container:
  stage: deploy
  image: docker:latest
  services:
    - docker:dind
  before_script:
    - echo "started by ${GITLAB_USER_NAME}"
    - echo "$CI_REGISTRY_PASSWORD" | docker login $CI_REGISTRY -u $CI_REGISTRY_USER --password-stdin
  script:
    - docker build -t $DOCKER_IMAGE:$VERSION_STRING .
    - docker tag $DOCKER_IMAGE:$VERSION_STRING $DOCKER_IMAGE:latest
    - docker push $DOCKER_IMAGE:$VERSION_STRING
    - docker push $DOCKER_IMAGE:latest
  only:
    - main
```

docker/entrypoint.sh

```shell
#!/bin/sh
# entrypoint.sh
# Copy the contents of /website to the shared volume
ls -la /website
cp -r /website/* /shared-website
```

The Dockerfile and .gitlab-ci.yml simply copy the contents of the repository’s “public” folder (where the files to be served live) into the “/website” folder inside the container. The Dockerfile also copies the entrypoint.sh script into the container.

When this container is started, the entrypoint.sh script will be run, which then simply copies the contents of the /website folder to the /shared-website folder (which we will mount to the server's local filesystem using a docker compose file). Once the files are copied, the container will shut down.

Single Page App (frontend)

The SPA will be served using the same process as the static website. The only difference is that, because the app is written in Vue/Quasar, it must first be tested and compiled.

Dockerfile

```dockerfile
FROM alpine

COPY dist/spa /frontend
COPY docker/entrypoint.sh /entrypoint.sh
```

.gitlab-ci.yml

```yaml
stages:
  - test
  - build
  - deploy

variables:
  VERSION_MAJOR: 0
  VERSION_MINOR: 0
  VERSION_STRING: ${VERSION_MAJOR}.${VERSION_MINOR}.${CI_PIPELINE_IID}
  DOCKER_IMAGE: $CI_REGISTRY_IMAGE

test frontend:
  image: node:18
  stage: test
  before_script:
    - echo "started by ${GITLAB_USER_NAME}"
    - node --version
    - npx playwright install --with-deps
    - echo $ENV >> .env
  script:
    - npm ci
    - npm run test:unit

build quasar:
  image: node:18
  stage: build
  before_script:
    - echo "Node Version"
    - node --version
    - echo "Install requirements"
    - npm install -g @quasar/cli --progress=false
    - npm install --progress=false
    - npm version ${VERSION_STRING} --no-git-tag-version
    - echo ${ENV} > .env
  script:
    - quasar build
    - echo "The built frontend files are now stored in the dist/spa folder"
  only:
    - main
  artifacts:
    paths:
      - dist/spa

build container:
  stage: deploy
  image: docker:latest
  services:
    - docker:dind
  before_script:
    - echo "started by ${GITLAB_USER_NAME}"
    - echo "$CI_REGISTRY_PASSWORD" | docker login $CI_REGISTRY -u $CI_REGISTRY_USER --password-stdin
  script:
    - docker build -t $DOCKER_IMAGE:$VERSION_STRING .
    - docker tag $DOCKER_IMAGE:$VERSION_STRING $DOCKER_IMAGE:latest
    - docker push $DOCKER_IMAGE:$VERSION_STRING
    - docker push $DOCKER_IMAGE:latest
  only:
    - main
```

Note how the environment variables are managed. In the Gitlab CI settings, create a variable called “ENV”, which should contain the contents of your .env file.
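As a purely illustrative example (the actual variable names depend entirely on your frontend code and how it reads configuration), the ENV variable, and therefore the generated .env file, might hold something like:

```
API_URL=https://api.mysite.com
ENABLE_ANALYTICS=false
```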

docker/entrypoint.sh

```shell
#!/bin/sh
# entrypoint.sh
# Copy the contents of /frontend to the shared volume
ls -la /frontend
cp -r /frontend/* /shared-frontend
```

API backend

The API backend is served by Python, but this could easily be swapped out for other frameworks or languages as required.

Dockerfile

```dockerfile
FROM python:3.12-slim

WORKDIR /opt/mysite
COPY . /opt/mysite

RUN pip install --no-cache-dir -r requirements.txt

EXPOSE 8080

CMD python -m gunicorn -k uvicorn.workers.UvicornWorker -w 4 -b 0.0.0.0:8080 --log-level debug --error-logfile - --access-logfile - --capture-output run_prod:app
```

The contents of this file will depend on your specific project, but the essence of this file is to copy all of the project files to the /opt/mysite folder inside the container (where they will be run from), install dependencies, expose port 8080, and start the server.
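The application module itself isn’t shown in this article. All gunicorn’s UvicornWorker expects from `run_prod:app` is an ASGI callable, so as a hypothetical sketch (a real project would more likely export an app built with FastAPI or Starlette), the smallest thing `run_prod.py` could be is:

```python
# run_prod.py (hypothetical) -- a minimal ASGI callable for "run_prod:app".
# gunicorn -k uvicorn.workers.UvicornWorker imports this module and serves
# the "app" object; any ASGI framework produces an equivalent callable.
import json

async def app(scope, receive, send):
    # The server also sends lifespan events; ignore anything non-HTTP.
    if scope["type"] != "http":
        return
    payload = json.dumps({"status": "ok", "path": scope["path"]}).encode()
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"application/json")],
    })
    await send({"type": "http.response.body", "body": payload})
```

Behind the nginx proxy block above, a request to api.mysite.com/health would reach this callable with `scope["path"]` set to `/health`.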

.gitlab-ci.yml

```yaml
stages:
  - test
  - deploy

variables:
  VERSION_MAJOR: 0
  VERSION_MINOR: 0
  VERSION_STRING: ${VERSION_MAJOR}.${VERSION_MINOR}.${CI_PIPELINE_IID}
  DOCKER_IMAGE: $CI_REGISTRY_IMAGE

test:
  stage: test
  image: python:3.12
  before_script:
    - pip3 install -r requirements.txt
    - pip3 install pytest
  script:
    - echo "started by ${GITLAB_USER_NAME}"
    - pytest --junitxml=report.xml
  artifacts:
    when: always
    reports:
      junit: report.xml

build container:
  stage: deploy
  image: docker:latest
  services:
    - docker:dind
  before_script:
    - echo "started by ${GITLAB_USER_NAME}"
    - echo "$CI_REGISTRY_PASSWORD" | docker login $CI_REGISTRY -u $CI_REGISTRY_USER --password-stdin
  script:
    - echo "$ENV" >> .env
    - docker build -t $DOCKER_IMAGE:$VERSION_STRING .
    - docker tag $DOCKER_IMAGE:$VERSION_STRING $DOCKER_IMAGE:latest
    - docker push $DOCKER_IMAGE:$VERSION_STRING
    - docker push $DOCKER_IMAGE:latest
  only:
    - main
```

This .gitlab-ci.yml file calls pytest to test the code. The --junitxml flag produces test results in a format that can be displayed nicely inside the GitLab test results area.

As with the frontend pipeline configuration above, the environment variables are stored in the ENV variable inside the GitLab CI settings area.

Putting it all together

At this point, we have created four containers: one for each sub-project and one for the web server. Now we need to bring everything together and deploy the whole stack.

Docker compose will be used to deploy the stack, and a GitLab runner will be used to deploy the container on the server.

To do this, create yet another repository called “infrastructure”.

Inside this repository, create the following files.

docker-compose.yml

```yaml
services:
  nginx:
    image: registry.gitlab.com/myorganisation/mysite/nginx:${NGINX_VERSION}
    container_name: nginx
    restart: unless-stopped
    depends_on:
      - website
      - frontend
      - api
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # - /opt/mysite/certbot:/etc/letsencrypt
      - /opt/mysite/website:/usr/share/nginx/website
      - /opt/mysite/frontend:/usr/share/nginx/frontend
      - /opt/mysite/docs:/usr/share/nginx/docs

  db:
    image: postgres:17
    container_name: db
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: someusername
      POSTGRES_PASSWORD: somepassword
      POSTGRES_DB: somedatabase
    volumes:
      - /opt/mysite/database:/var/lib/postgresql/data

  website:
    image: registry.gitlab.com/myorganisation/mysite/website:${WEBSITE_VERSION}
    container_name: website
    entrypoint: ["/bin/sh", "/entrypoint.sh"]
    volumes:
      - /opt/mysite/website:/shared-website

  frontend:
    image: registry.gitlab.com/myorganisation/mysite/frontend:${APP_VERSION}
    container_name: frontend
    entrypoint: ["/bin/sh", "/entrypoint.sh"]
    volumes:
      - /opt/mysite/frontend:/shared-frontend

  api:
    image: registry.gitlab.com/myorganisation/mysite/server:${API_VERSION}
    container_name: api
    depends_on:
      - db
    environment:
      - PORT=8080
    ports:
      - "8080:8080"
```

.env

(note this SHOULD be committed to the repository)

```
NGINX_VERSION='0.0.1'
APP_VERSION='0.0.2'
API_VERSION='0.0.3'
WEBSITE_VERSION='0.0.4'
```

.gitlab-ci.yml

```yaml
stages:
  - deploy

variables:
  VERSION_MAJOR: 0
  VERSION_MINOR: 0
  VERSION_STRING: ${VERSION_MAJOR}.${VERSION_MINOR}.${CI_PIPELINE_IID}
  DOCKER_IMAGE: $CI_REGISTRY_IMAGE

deploy-prod:
  stage: deploy
  tags:
    - ec2-production
  only:
    - main
  script:
    - echo "Deploying to Production"
    - sudo docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
    - echo "Pulling the latest images"
    - sudo docker compose pull   # Use the docker-compose file to pull the images
    - sudo docker image prune -f # Remove any dangling images
    - sudo docker compose up -d
    - echo "Deploy Complete"
```

The tags: ec2-production line instructs GitLab to run this job on the GitLab Runner registered with the ec2-production tag (i.e. the runner installed on our production server).

This script runs the docker compose file on the server: it pulls the container versions pinned in the .env file from the GitLab registry and deploys them.

Before it can do this, however, GitLab permissions need to be set up to allow the infrastructure pipeline to access the container registries of the other repositories. This is done in the GitLab settings for each repository (Settings > CI/CD > Job Tokens: add the infrastructure repository).

Server Configuration

```shell
### Docker

# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Create the folders the containers will mount
sudo mkdir /opt/mysite
sudo mkdir /opt/mysite/database
sudo mkdir /opt/mysite/nginx
#sudo mkdir /opt/mysite/certbot
sudo mkdir /opt/mysite/frontend
sudo mkdir /opt/mysite/website
sudo chown -R root:root /opt/mysite

### Gitlab runner

# Download the binary for your system
sudo curl -L --output /usr/local/bin/gitlab-runner https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-linux-amd64

# Give it permission to execute
sudo chmod +x /usr/local/bin/gitlab-runner

# Create a GitLab Runner user
sudo useradd --comment 'GitLab Runner' --create-home gitlab-runner --shell /bin/bash

# Fix logout issue - comment out all the lines
sudo nano /home/gitlab-runner/.bash_logout

# Give extra permissions
sudo usermod -a -G www-data gitlab-runner
sudo usermod -a -G sudo gitlab-runner
sudo visudo
# Add the following line to the bottom of the file:
# gitlab-runner ALL=(ALL) NOPASSWD: ALL

# Follow the instructions on the GitLab runner setup page:
# run commands as sudo, and choose the shell executor

# Install and run as a service
sudo gitlab-runner install --user=gitlab-runner --working-directory=/home/gitlab-runner
sudo gitlab-runner start
```
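The commands above install and start the runner, but it still has to be registered with your project before it picks up jobs tagged ec2-production. As a sketch only (the URL and token come from your project’s Settings > CI/CD > Runners page, and the description is illustrative; newer GitLab versions use an authentication token via --token instead of a registration token):

```
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "YOUR-PROJECT-TOKEN" \
  --executor "shell" \
  --description "production server" \
  --tag-list "ec2-production"
```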

Setting up LetsEncrypt

It is important to get everything running on HTTP first before trying to enable SSL. LetsEncrypt cannot issue security certificates unless your site is already reachable. Certbot will validate your domains and also make temporary changes to your nginx configuration.

Nginx will not start if the SSL configuration is present but the SSL certificates are not.

For this to work, it is important to follow these steps in order:

1. Ensure your site is already running on http
2. Manually trigger LetsEncrypt for the first time inside the nginx container
3. Enable SSL in the nginx configuration.

The first time after the containers are deployed or after domain names are changed, the certificates need to be generated using the following commands:

```shell
sudo docker exec -it nginx /bin/bash
certbot certonly -d mysite.com -d app.mysite.com -d api.mysite.com --email hello@mysite.com --agree-tos --no-eff-email
# (ctrl-d to exit)
```

If that was all successful, edit the nginx configuration files again, uncomment the SSL lines, and redeploy.

Your site should now be up and running.

Deploying future releases

Simply edit the infrastructure/.env file with the version of each sub-project you wish to release and commit. The GitLab CI pipeline should now deploy everything.
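For example, to release version 0.0.5 of the API (version numbers and the scratch path are illustrative; the sed expression assumes the quoted KEY='value' layout shown above):

```shell
# Work on a scratch copy so this is safe to run anywhere; in practice you
# would edit .env in the infrastructure repo itself.
printf "NGINX_VERSION='0.0.1'\nAPI_VERSION='0.0.3'\n" > /tmp/release.env
sed -i "s/^API_VERSION=.*/API_VERSION='0.0.5'/" /tmp/release.env
cat /tmp/release.env
```

Then commit the change and push to main to trigger the deploy job.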

Closing remarks

There is a lot of work in setting up a deployment scenario this way, but once the work is done, deploying changes becomes a piece of cake.

While there are other approaches to deploying containers, this one blends the ‘old-school’ way of managing everything on a VM with a modern tool-chain.

Feel free to leave questions or comments, and if you found this article useful, leaving a clap would be appreciated.