Setting Up Self-Hosted GitHub Actions Runners in a Kubernetes Cluster, the Easy Way

Introduction

I own a repository that builds Docker (container) images for the amd64, arm64, and arm/v7 architectures once a month.

The bottleneck of the whole workflow is building the arm/v7 image: on a GitHub-hosted Actions runner (as of April 2024), it took around 90 minutes.

Since the ARM variants were built on amd64 machines under QEMU emulation, I figured the build could be sped up by running it on GitHub Actions runners with native ARM hardware.
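For context, the monthly build uses the usual QEMU-based multi-arch pipeline. A minimal sketch of such a workflow job is below; the image name is a placeholder and the action versions are illustrative:

```yaml
# Sketch of a QEMU-based multi-arch build job on a GitHub-hosted runner.
jobs:
  build:
    runs-on: ubuntu-latest    # amd64 host; arm builds run under QEMU emulation
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-qemu-action@v3    # registers binfmt handlers for foreign arches
      - uses: docker/setup-buildx-action@v3
      - uses: docker/build-push-action@v5
        with:
          platforms: linux/amd64,linux/arm64,linux/arm/v7
          tags: ghcr.io/your-org/your-image:latest
```

The emulated arm/v7 leg of `platforms` is what dominates the 90-minute build time.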

However, GitHub doesn’t offer such runners to the public (as of April 2024), so we have to set up our own.

In this guide, I will walk through the process of setting up a single-node Kubernetes cluster on a Raspberry Pi 4 and deploying Actions Runner Controller to dynamically provision/deprovision GitHub Actions runners.

Spoiler Alert

As you can see from the image below, running the arm/v7 build on an arm64 runner greatly improved the build time: from 90 minutes down to around 40 minutes.

What will be done in this guide

In this guide, we will:

  • set up a Raspberry Pi 4 running Ubuntu and deploy a single-node k0s Kubernetes cluster on it;
  • install GitHub Actions Runner Controller (ARC) with Helm;
  • create a GitHub App (or a PAT) for runner registration;
  • deploy a runner scale set and verify the runners come up.

Hardware & OS

I’ll set up a Raspberry Pi 4 running Ubuntu 23.10 (arm64). Only one machine is required, as the cluster has just a single node.

There is no HA setup; I am trading reliability for cost.

Setup: Deploy k0s on the Pi 4

Before installing GitHub Actions Runner Controller (ARC), we need a Kubernetes cluster. For a quick-and-dirty setup on a resource-constrained machine, I chose k0s and deployed it in single-node configuration.

I followed some steps on the official k0s documentation.

System Configuration

Ensure the following packages are installed on your Raspberry Pi:

sudo apt-get update && sudo apt-get install cgroup-lite cgroup-tools cgroupfs-mount

Enable memory cgroup in the kernel by modifying the kernel command line:

echo "$(cat /boot/firmware/cmdline.txt) cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1" | sudo tee /boot/firmware/cmdline.txt
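Note that the one-liner above appends the parameters every time it runs, so running it twice leaves duplicate entries in cmdline.txt. A small guard makes it idempotent; the `append_once` helper below is a hypothetical convenience, not part of the official k0s steps:

```shell
#!/bin/sh
# Append a string to a single-line file only if it is not already present.
# Usage: append_once FILE STRING
append_once() {
  file=$1
  params=$2
  if ! grep -qF "$params" "$file"; then
    # Re-emit the existing line with the new parameters appended.
    printf '%s %s\n' "$(cat "$file")" "$params" > "$file"
  fi
}

# Example (run as root against the real file):
# append_once /boot/firmware/cmdline.txt "cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1"
```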

Load necessary kernel modules:

echo "overlay
nf_conntrack
br_netfilter" | sudo tee /etc/modules-load.d/modules.conf

Reboot the Raspberry Pi to apply the changes:

sudo reboot

Installing k0s

Download and install k0s on the Raspberry Pi:

curl -sSLf https://get.k0s.sh | sudo sh

Create initial k0s configuration file:

sudo mkdir -p /etc/k0s
k0s config create | sudo tee /etc/k0s/k0s.yaml

Modify the extensions.helm section in the /etc/k0s/k0s.yaml configuration to include the OpenEBS Helm chart; it provides the openebs-hostpath storage class that the kubernetes container mode will need later:

  extensions:
    helm:
      concurrencyLevel: 5
      repositories:
      - name: openebs-internal
        url: https://openebs.github.io/charts
      charts:
      - name: openebs
        chartname: openebs-internal/openebs
        version: "3.9.0"
        namespace: openebs
        order: 1
        values: |
          localprovisioner:
            hostpathClass:
              enabled: true
              isDefaultClass: true

Deploy k0s as a single node cluster (control plane & worker on the same machine):

sudo k0s install controller --single -c /etc/k0s/k0s.yaml
sudo systemctl start k0scontroller.service
sudo systemctl status k0scontroller.service

The service should be running now, as shown below:

Configuring kubectl

For easier access to kubectl:

mkdir -p ~/.kube
chmod 700 ~/.kube
sudo cp /var/lib/k0s/pki/admin.conf ~/.kube/config
sudo chown $USER:$USER ~/.kube/config
chmod 600 ~/.kube/config

export KUBECONFIG=$HOME/.kube/config
alias kubectl='k0s kubectl'
echo "export KUBECONFIG=$HOME/.kube/config" >> ~/.bashrc
echo "alias kubectl='k0s kubectl'" >> ~/.bashrc

Let’s pause and test the configuration: type kubectl get pods -A in your terminal, and you should see some pods running:

Helm Installation

Install Helm 3 to manage Kubernetes applications:

curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

Setup: Install GitHub Actions Runner Controller And Runner Scale Set

Deploying Actions Runner Controller

Set up the Actions Runner Controller using Helm:

NAMESPACE="arc-systems"
helm install arc \
    --namespace "${NAMESPACE}" \
    --create-namespace \
    oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller

Create a GitHub App for Runner Scale Set

The runner scale set needs to interact with your organization/repository to register and deregister runners, so we must either provide a Personal Access Token (PAT) or create a GitHub App for it.

I went for a GitHub App this time; it is more complicated to set up, but offers a degree of extra security compared to providing a PAT.

In your organization settings, navigate to “Developer Settings” > “GitHub Apps”.

Click “New GitHub App”

Select the required permissions as per the official documentation here.

After creating the App, click “Generate a private key” and save the .pem file.

In the menu at the top-left corner of the page, click Install app, and next to your organization, click Install to install the app on your organization.

After confirming the installation permissions on your organization, note the app installation ID. You will use it later. You can find the app installation ID on the app installation page, which has the following URL format:

https://github.com/organizations/ORGANIZATION/settings/installations/INSTALLATION_ID
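The installation ID is the trailing number of that URL. If you keep the URL in a shell variable, the ID can be pulled out with parameter expansion (the URL below is a made-up example):

```shell
# Extract the trailing installation ID from the settings URL (placeholder values).
url="https://github.com/organizations/my-org/settings/installations/654321"
installation_id="${url##*/}"   # strip everything up to and including the last '/'
echo "$installation_id"        # prints 654321
```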

Keep the App ID, Installation ID, and private key handy; you’ll need them in a later step.

Configure Runner Scale Set

Create a configuration file called runner-scale-set-values.yaml for runner scale set and define your preferences:

  • githubConfigUrl: the URL of the repository/organization/enterprise where you are going to register your runners.
  • runnerScaleSetName: the label (or tag if you are coming from GitLab CI/CD) you will write in your workflow to assign job to these runners.
  • maxRunners, minRunners: you guessed it.
  • containerMode: you can use the kubernetes type or the dind type. In dind mode each runner pod runs a Docker-in-Docker sidecar, while in kubernetes mode the runner creates a new pod per job via container hooks, which is why it needs a persistent volume (hence OpenEBS). Uncomment the section below for your preferred mode. I wanted to be able to switch between the two modes quickly during development, so I ended up installing two runner scale sets, one for kubernetes and one for DinD.
githubConfigUrl: "https://github.com/your-organization"
githubConfigSecret: pre-defined-secret
maxRunners: 5
minRunners: 2
runnerScaleSetName: "set-linux-arm64"

# for DinD mode, uncomment the containerMode section below
# containerMode:
#   type: "dind"

# for kubernetes mode, uncomment the containerMode and template section below
# containerMode:
#   type: "kubernetes"
#   kubernetesModeWorkVolumeClaim:
#     accessModes: ["ReadWriteOnce"]
#     storageClassName: "openebs-hostpath"
#     resources:
#       requests:
#         storage: 5Gi
#
# template:
#   spec:
#     initContainers:
#     - name: kube-init
#       image: ghcr.io/actions/actions-runner:latest
#       command: ["sudo", "chown", "-R", "1001:1001", "/home/runner/_work"]
#       volumeMounts:
#       - name: work
#         mountPath: /home/runner/_work
#     containers:
#     - name: runner
#       image: ghcr.io/actions/actions-runner:latest
#       command: ["/home/runner/run.sh"]
#       env:
#         - name: ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER
#           value: "false"
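
With runnerScaleSetName set to set-linux-arm64, a workflow targets these runners by using that name as its runs-on label. A minimal sketch (repository contents are placeholders):

```yaml
# Sketch of a workflow job that runs on the self-hosted arm64 runner set.
jobs:
  build-arm:
    runs-on: set-linux-arm64   # must match runnerScaleSetName
    steps:
      - uses: actions/checkout@v4
      - run: uname -m          # on these runners this prints aarch64
```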

Add Personal Access Token to k8s namespace (mutually exclusive with the next part)

(This part is only needed if you choose to provide a PAT rather than create a GitHub App for the runner scale set.)

Securely add your Personal Access Token (PAT) for GitHub:

For how to generate a PAT, please read the documentation.

kubectl create namespace arc-runners
kubectl -n arc-runners create secret generic pre-defined-secret \
   --from-literal=github_token='__PAT_YOU_GENERATED_ON_GITHUB__'

Add GitHub App Credential to k8s namespace (mutually exclusive with the previous part)

(This part is only needed if you choose to create a GitHub App rather than provide a PAT to the runner scale set.)

6
kubectl create namespace arc-runners
kubectl create secret generic pre-defined-secret \
   --namespace=arc-runners \
   --from-literal=github_app_id=123456 \
   --from-literal=github_app_installation_id=654321 \
   --from-literal=github_app_private_key='-----BEGIN RSA PRIVATE KEY-----********'

Install the runner scale set using the prepared Helm values:

INSTALLATION_NAME="arc-runner-set"
NAMESPACE="arc-runners"
helm install "${INSTALLATION_NAME}" \
    --namespace "${NAMESPACE}" \
    -f runner-scale-set-values.yaml \
    oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set

Verifying the Installation

Check the status of your Kubernetes pods and ensure everything is running smoothly:

➜  ~ kubectl get pods -A | grep '^arc'
arc-runners-dind   set-linux-arm64-dind-ukewea-runner-8gf8f        2/2     Running   0          3m17s
arc-runners-dind   set-linux-arm64-dind-ukewea-runner-vw87t        2/2     Running   0          3m17s
arc-runners        set-linux-arm64-ukewea-runner-5kl8p             1/1     Running   0          3m53s
arc-runners        set-linux-arm64-ukewea-runner-xx9k9             1/1     Running   0          3m54s
arc-systems        arc-gha-rs-controller-88454c84d-crdft          1/1     Running   0          21m
arc-systems        set-linux-arm64-754b578d-listener              1/1     Running   0          3m58s
arc-systems        set-linux-arm64-dind-6f85f55d-listener         1/1     Running   0          3m21s

And we are done!


Proxy setting

If you need a proxy setup, read the following articles to do it properly:

Conclusion

In this guide, we have set up a single-node Kubernetes cluster on a Raspberry Pi 4 and deployed Actions Runner Controller to dynamically provision/deprovision GitHub Actions runners. The topology is shown below:

+-----------------------------------------------------------------+
|                            GitHub                               |
|                                                                 |
| +----------------+     +----------------+                       |
| | GitHub Actions | <-> | GitHub Actions |                       |
| |    Workflows   |     |    Repository  |                       |
| +----------------+     +----------------+                       |
+-----------------------------------------------------------------+
                              /\
                              ||
                              ||
+-----------------------------------------------------------------+
|                   Raspberry Pi 4 (Single Node)                  |
|                     Running Ubuntu 23.10                        |
|                                                                 |
|                       +-----------------+                       |
|                       | Kubernetes Node |                       |
|                       |    (k0s)        |                       |
|                       +-----------------+                       |
|                               ||                                |
|        +----------------------+-----------------------+         |
|        |                                              |         |
|        |    +-------------+        +-------------+    |         |
|        |    |             |        |             |    |         |
|        |    |  Actions    |        |  Runner Set |    |         |
|        |    | Runner Ctrl | <----> |  Namespace  |    |         |
|        |    | Namespace   |        |             |    |         |
|        |    |             |        |             |    |         |
|        |    +-------------+        +-------------+    |         |
|        |           ||                    ||           |         |
|        |           ||                    ||           |         |
|        |     +-----++-----+       +------++------+    |         |
|        |     |  Listener  |       |    Runner    |    |         |
|        |     |    Pods    |       |     Pods     |    |         |
|        |     +------------+       +--------------+    |         |
|        +----------------------------------------------+         |
|                                                                 |
+-----------------------------------------------------------------+