Building a Kubernetes CI/CD Pipeline with GitLab and Helm

February 01, 2021

Everyone loves GitLab CI and Kubernetes.

GitLab CI (Continuous Integration) is a popular tool for building and testing the software that developers write. GitLab CI helps developers build code faster, deploy with more confidence, and detect errors quickly.

Kubernetes, popularly shortened to K8s, is a portable, extensible, open-source platform for managing containerized workloads and services. K8s is used by companies of all sizes every day to automate deploying, scaling, and managing applications in containers.

The purpose of this post is to show how you can bolt on the Continuous Delivery (CD) piece of the puzzle to build a CI/CD pipeline so you can deploy your applications to Kubernetes. But before we get too far, we’re going to need to talk about Helm, which is an important part of the puzzle.

What the Helm?

Helm calls itself “the package manager for Kubernetes”. That’s a pretty accurate description. Helm is a versatile, sturdy tool DevOps engineers can use to define configuration files as templates, perform variable substitution to create consistent deployments to our clusters, and supply different variables for different environments.
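
To make that concrete, here’s a minimal sketch of that workflow (the `values.staging.yaml` file is a hypothetical example of an environment-specific override, not part of this post):

```shell
# Preview the manifests Helm would generate, with variables substituted
helm template my-first-app ./chart --values ./chart/values.yaml

# Deploy the same chart to a different environment by layering values files;
# later --values files override earlier ones
helm upgrade my-first-app ./chart --install \
  --values ./chart/values.yaml \
  --values ./chart/values.staging.yaml
```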

It’s certainly the right solution to the problem we’re covering here.

How do we do it?

First off, a few prerequisites. You’re going to have to have all of this hammered out before you start the project. There are links to helpful documentation below if you need help.

Or, you could always get in touch with us and we could talk about your project together.

  1. You already have an Amazon EKS cluster.
  2. You already know how to use GitLab CI.
  3. You have a GitLab CI runner configured in your Kubernetes cluster.
  4. You have the AWS Load Balancer Controller running in your cluster.
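
If you want to sanity-check those prerequisites from your terminal, something like this works (the runner and controller namespaces here are assumptions that depend on how you installed them):

```shell
# 1: kubectl can reach the EKS cluster
kubectl get nodes

# 3: the GitLab runner is up (assuming it was installed into a "gitlab" namespace)
kubectl get pods -n gitlab

# 4: the AWS Load Balancer Controller is running
# (the deployment name may vary with the install method)
kubectl get deployment -n kube-system aws-load-balancer-controller
```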

With those boxes checked, we can get started. You’ll want to create a new repository in GitLab first for us to use in this example. Once you’ve done that we can get started with creating our files.

File tree

At the end, our folder/file structure is going to look like this:

<dir>
├── chart/
│   ├── Chart.yaml
│   ├── values.yaml
│   └── templates/
│       ├── deployment.yaml
│       ├── service.yaml
│       ├── ingress.yaml
│       └── configmap.yaml
└── .gitlab-ci.yml

values.yaml

applicationName: my-first-app
certArn: your-certificate-arn
domain: your domain name
subnets: your subnets
securityGroups: your security groups

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.applicationName }}
  namespace: {{ .Values.applicationName }}
spec:
  replicas: 2
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: {{ .Values.applicationName }}
  template:
    metadata:
      labels:
        app: {{ .Values.applicationName }}
    spec:
      containers:
        - name: {{ .Values.applicationName }}
          imagePullPolicy: Always
          image: nginx:1.19.4
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /usr/share/nginx/html/index.html
              name: nginx-conf
              subPath: index.html
      volumes:
        - name: nginx-conf
          configMap:
            name: {{ .Values.applicationName }}-configmap

This is the configuration file that defines our deployment. You can see there are a few lines containing {{ some text }}. This is how we reference a variable defined in our values file within our chart.
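
You can watch that substitution happen without touching the cluster by rendering the chart locally; every `{{ .Values.applicationName }}` should come out as `my-first-app`:

```shell
# Render the templates with our values file and inspect the output
helm template my-first-app ./chart --values ./chart/values.yaml

# Or narrow the output down to just the substituted name/namespace fields
helm template my-first-app ./chart --values ./chart/values.yaml | grep -E 'name:|namespace:'
```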

configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.applicationName }}-configmap
  namespace: {{ .Values.applicationName }}
data:
  index.html: |
    <html>
    <head>
      <title>My first Helm deployment!</title>
    </head>
    <body>
      <h1>My first Helm deployment!</h1>
      <p>Thanks for checking out my first Helm deployment.</p>
    </body>
    </html>

This config map just defines a simple index page that we’ll display for our app.
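
Once the chart is deployed, you can confirm the ConfigMap landed where the deployment’s `subPath` mount expects it:

```shell
# Inspect the rendered ConfigMap
kubectl get configmap my-first-app-configmap -n my-first-app -o yaml

# Read the mounted index.html back out of a running pod
kubectl exec -n my-first-app deploy/my-first-app -- cat /usr/share/nginx/html/index.html
```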

service.yaml

apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.applicationName }}
  namespace: {{ .Values.applicationName }}
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: {{ .Values.applicationName }}
  type: NodePort
  selector:
    app: {{ .Values.applicationName }}

ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ .Values.applicationName }}
  namespace: {{ .Values.applicationName }}
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/subnets: {{ .Values.subnets }}
    alb.ingress.kubernetes.io/healthcheck-path: /
    alb.ingress.kubernetes.io/security-groups: {{ .Values.securityGroups }}
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/certificate-arn: {{ .Values.certArn }}
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
spec:
  rules:
    - host: {{ .Values.applicationName }}.{{ .Values.domain }}
      http:
        paths:
        - path: /*
          backend:
            serviceName: ssl-redirect
            servicePort: use-annotation
        - path: /*
          backend:
            serviceName: {{ .Values.applicationName }}
            servicePort: 80

.gitlab-ci.yml

stages:
  - deploy

variables:
  DOCKER_HOST: tcp://localhost:2375/
  DOCKER_DRIVER: overlay2
  APP_NAME: my-first-app

deploy:
  stage: deploy
  image: alpine/helm:3.2.1
  script:
    - helm upgrade ${APP_NAME} ./chart --install --create-namespace --values=./chart/values.yaml --namespace ${APP_NAME}
  rules:
    - if: $CI_COMMIT_BRANCH == 'master'
      when: always

Okay, we have all the files. Now what?

Well, after you have all the files defined and your infrastructure meets the prerequisites, there’s not much left to do.

If you commit these files, GitLab will interpret your .gitlab-ci.yml file and initiate a pipeline. Our pipeline has only one stage and one job (deploy). It’ll spin up a container in the cluster using the alpine/helm:3.2.1 image and run our script command. This does all of the heavy lifting for us: creating all of the resources our chart defines in our namespace and starting our application.
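
You can watch the result of the pipeline’s `helm upgrade` from your own terminal:

```shell
# Wait for the rollout kicked off by the pipeline to finish
kubectl rollout status deployment/my-first-app -n my-first-app

# List everything the chart created in the namespace
kubectl get deployment,service,ingress,configmap -n my-first-app
```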

If you configure a DNS record in Route53 like my-first-app.my-domain.com, with an A record pointing to the load balancer that the ingress controller created, you’ll see the index page we defined in the configmap!
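
One way to wire that up is with the AWS CLI (the hosted zone ID is a placeholder, and a CNAME is used here for simplicity; an alias A record to the ALB also works):

```shell
# Grab the ALB hostname the ingress controller provisioned
kubectl get ingress my-first-app -n my-first-app \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'

# Point a record at it (substitute your zone ID, domain, and the hostname above)
aws route53 change-resource-record-sets \
  --hosted-zone-id ZXXXXXXXXXXXXX \
  --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{
    "Name":"my-first-app.my-domain.com","Type":"CNAME","TTL":300,
    "ResourceRecords":[{"Value":"REPLACE-WITH-ALB-HOSTNAME"}]}}]}'
```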

Originally posted at nextlinklabs.com


© 2023, Dan Slapelis