Fully automated and self-hosted production builds with GitLab
First of all, why?
Because not everything needs to go to the public internet! Two of the most critical components of the modern software development lifecycle are continuous integration (CI) and continuous delivery (CD). In the commercial space, it's generally acceptable to rely on cloud-based CI/CD services such as GitHub Actions, TravisCI, or CircleCI to build and deploy your applications. However, customers in industries such as healthcare, finance, and government often have strict requirements for how their software development lifecycle is managed, and I regularly interact with customers who need that level of control. In order to dogfood the whole process, I host my own GitLab instance and run my CI/CD pipelines on it. I use this infrastructure to build this site and other software 100% locally.
Okay, so how?
First, I need to define my software development lifecycle. Simply put, it's:
Write code -> Push code to remote -> Build code -> Deploy code
GitLab has every tool we need to do all of this locally.
- I write my code on my workstation and push it to a remote repository in GitLab.
- I define a pipeline that executes the build process using a GitLab Runner hosted separately in my homelab infrastructure.
- The build stage outputs a Docker image which is persisted in GitLab's built-in container registry.
- A deploy stage in the pipeline connects to the production environment, pulls the image from the container registry, and redeploys production with near-zero downtime (mostly thanks to CDN caching).
For those not familiar, GitLab is an open source, self-hosted Git repository manager that runs on your own server. Setting it up in my homelab was fairly straightforward: in Proxmox, I created a Debian VM, allocated it four cores and up to 12GB of RAM, and followed the official documentation to get GitLab running.
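The install itself is only a handful of commands. Roughly, following the official Omnibus instructions for Debian (with gitlab.example.com standing in for the instance's real hostname):

# Prerequisites per the official docs
sudo apt-get update
sudo apt-get install -y curl openssh-server ca-certificates perl
# Add GitLab's package repository and install the Community Edition Omnibus package
curl https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh | sudo bash
sudo EXTERNAL_URL="https://gitlab.example.com" apt-get install gitlab-ce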

After setting up and configuring my GitLab instance, I can start building and deploying my applications. Before this instance existed, all of my code lived on GitHub; now I mirror my GitHub repositories into GitLab and keep private projects there as well.
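GitLab can mirror a GitHub repository automatically from the project settings, but for day-to-day work a second remote does the job just as well. A minimal sketch, with a hypothetical hostname and project path:

# Add the self-hosted GitLab instance as a second remote
git remote add gitlab git@gitlab.example.com:paul/my-site.git
# Push the branch to both remotes
git push origin main
git push gitlab main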
Building and deploying this site
This site is a Next.js application that uses a Dockerfile to create a Docker image containing the compiled application. For simplicity, I've elected to use MDX files to create pages and articles, so each new post requires a new build. As I add pages and complexity, I'll likely add a headless content management system, since I'm very familiar with them from my work at Brightspot.
Vercel provides guidance on how to create a Dockerfile for a Next.js application.
I adapted their example, and my final Dockerfile looks like this:
FROM node:18-alpine AS base
# Install dependencies only when needed
FROM base AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
# Install dependencies based on the preferred package manager
COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./
RUN \
  if [ -f yarn.lock ]; then yarn --frozen-lockfile; \
  elif [ -f package-lock.json ]; then npm ci; \
  elif [ -f pnpm-lock.yaml ]; then corepack enable pnpm && pnpm i --frozen-lockfile; \
  else echo "Lockfile not found." && exit 1; \
  fi
# Rebuild the source code only when needed
FROM base AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN \
  if [ -f yarn.lock ]; then yarn run build; \
  elif [ -f package-lock.json ]; then npm run build; \
  elif [ -f pnpm-lock.yaml ]; then corepack enable pnpm && pnpm run build; \
  else echo "Lockfile not found." && exit 1; \
  fi
# Production image, copy all the files and run next
FROM base AS runner
WORKDIR /app
ENV NODE_ENV production
# Uncomment the following line in case you want to disable telemetry during runtime.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
#COPY --from=builder /app/public ./public
# Set the correct permission for prerender cache
RUN mkdir .next
RUN chown nextjs:nodejs .next
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT 3000
# server.js is created by next build from the standalone output
# https://nextjs.org/docs/pages/api-reference/next-config-js/output
CMD HOSTNAME="0.0.0.0" node server.js
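One caveat: the standalone output that the runner stage copies only exists if output: 'standalone' is set in next.config.js. With that in place, a quick local smoke test (the image name here is arbitrary) confirms the Dockerfile works end to end:

# Build the image and run it locally, then browse to http://localhost:3000
docker build -t my-site:local .
docker run --rm -p 3000:3000 my-site:local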
I now need to write a GitLab CI/CD configuration to build the Docker image and push it to the built-in registry. The build stage logs into the container registry and then builds the image from the Dockerfile.
stages:
  - build
  - deploy

.docker-template: &docker_template
  image: docker:cli
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY

docker-build:
  <<: *docker_template
  stage: build
  variables:
    DOCKER_IMAGE_NAME: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  # All branches are tagged with $DOCKER_IMAGE_NAME (defaults to commit ref slug)
  # Default branch is also tagged with `latest`
  script:
    - docker build --pull -t "$DOCKER_IMAGE_NAME" .
    - docker push "$DOCKER_IMAGE_NAME"
    - |
      if [[ "$CI_COMMIT_BRANCH" == "$CI_DEFAULT_BRANCH" ]]; then
        docker tag "$DOCKER_IMAGE_NAME" "$CI_REGISTRY_IMAGE:latest"
        docker push "$CI_REGISTRY_IMAGE:latest"
      fi
  # Run this job on any branch, or on a tag starting with "v", where a Dockerfile exists
  rules:
    - if: '$CI_COMMIT_BRANCH || ($CI_COMMIT_TAG && $CI_COMMIT_TAG =~ /^v.+/)'
      exists:
        - Dockerfile
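Once a pipeline has run on the default branch, both tags should be pullable from the built-in registry. As a sanity check, something like the following should work (assuming the registry is exposed on GitLab's default port 5050, with a hypothetical project path):

# Pull the branch-slug tag and the latest tag from the self-hosted registry
docker login gitlab.example.com:5050
docker pull gitlab.example.com:5050/paul/my-site:main
docker pull gitlab.example.com:5050/paul/my-site:latest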
After the container image is built and pushed, we can deploy it to our environment using the following pipeline stage. It's important to note that the runner must have access to the server where the container is deployed. Plan out a strategy for securing access to the production server! In my case, I isolate runners in their own VLAN, only allow specific traffic through to the production server via firewall rules, and use SSH to deploy the container image.
# Continuous Deployment
production-deploy:
  stage: deploy
  script:
    - ssh $DEPLOY_USER@$DEPLOY_SERVER "cd $COMPOSE_DIR && docker compose pull && docker compose up -d --force-recreate"
  only:
    - main
  environment:
    name: production
    url: https://pauleasterbrooks.com
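Note that the job above assumes the runner can already authenticate to $DEPLOY_SERVER as $DEPLOY_USER. A common pattern, sketched here rather than exactly what my runner does, is to store a private deploy key in a masked CI/CD variable and load it in the job's before_script:

# Load the deploy key from a CI/CD variable and trust the target host
eval "$(ssh-agent -s)"
echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
mkdir -p ~/.ssh && chmod 700 ~/.ssh
ssh-keyscan "$DEPLOY_SERVER" >> ~/.ssh/known_hosts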
Final Product
At the end of the day, my entire process of creating new software is automated and reproducible. When I merge code to my main branch, I can check my production instances a couple of minutes later and my changes are live, without ever leaving my property!