I highly recommend setting up your own Runner for your projects. The shared instance Runners can become clogged up with other projects or experience issues like rate limits or running out of storage. If you depend on Pipelines or make extensive use of them, using your own Runners will get you a more stable and performant experience and also brings security advantages.
Instance Runners are shared by all users and projects that want to use them. To target a shared Runner, tag your Pipeline with `shared`. There are three different types available: two running Docker with either `amd64` or `arm64` architecture (tagged `docker`), and one using a generic image with many preloaded tools running on a serverless platform (tagged `serverless`). See below for details.
If you don't configure a tag for your Pipeline, a Runner configured to run untagged jobs will execute it. In my case, that will be the serverless Runner, as it's great for quick and simple tasks and is available instantly. (It's also cheaper than spinning up an entire VM.)
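As a minimal `.gitlab-ci.yml` sketch (job names and images are illustrative, not prescribed by this setup):

```yaml
# Explicitly target the serverless Runner
quick-check:
  image: alpine:latest
  tags:
    - serverless
  script:
    - echo "runs on the serverless Runner"

# No tags: picked up by whichever Runner accepts untagged jobs
default-job:
  script:
    - echo "runs on the untagged default (here, the serverless Runner)"
```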
Docker-based Instance Runners usually share a cache that works across Runners and pipeline runs. Runners where this is guaranteed to work are tagged with `cache`. Cached data is kept for 21 days, after which it is automatically cleaned up. It's the perfect place for your node_modules or Rust dependencies.
Be aware that the cache is designed to reduce unnecessary bandwidth and compute, not as secure storage. If you need to share sensitive data between jobs, use artifacts instead: they follow the project's security model, whereas the cache could potentially be read from other branches or merge requests and is not designed to be secure.
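A sketch of a job using the distributed cache for dependencies and artifacts for data passed to later jobs (job name, cache key, and paths are illustrative):

```yaml
build:
  image: node:22
  tags:
    - shared
    - cache
  cache:
    key: "$CI_COMMIT_REF_SLUG"
    paths:
      - node_modules/   # fine to cache: reproducible, not sensitive
  artifacts:
    paths:
      - dist/           # passed to later jobs under the project's security model
  script:
    - npm ci
    - npm run build
```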
Tags allow you to fine-tune what specifications the CI server should meet in order to execute your job. Note that the existence of a tag doesn't mean a matching Runner is actually available. See below for a list of all shared Runners.
The following tags are available:
- `shared`: A catch-all for all shared Instance Runners. Setting only this tag means any available Instance Runner is fine (but could result in unexpected behaviour due to different architectures).
- `prod`: Runners tagged with this are considered stable and ready for production. Leaving this tag out means an experimental Runner may execute your Pipeline. Only set this tag if your Pipeline really requires a more stable experience.
- `docker`: A GitLab Runner using the Docker executor, meaning that you can define an `image` in your Pipeline and expect this image to be used.
- `dind`: Docker-in-Docker means that the Docker executor has access to a Docker daemon, so you can use the `docker` command inside your container to build an image. On non-dind executors, no Docker socket is available and your Pipeline will error out when you try to use a `docker` command.
- `cache`: The Runner is able to load and update the distributed cache specified in your Pipeline.
- `amd64`: The host CPU architecture is amd64 (64-bit based on AMD's design, x86-64 compatible). This is probably what you are running on your home desktop.
- `arm64`: The host CPU architecture is arm64 (also known as AArch64 or ARMv8+). This is the architecture of modern MacBooks and probably also your phone. You might experience problems with your build tools on this architecture, and amd64 binaries do not work here (and vice versa).
- `ipv4` / `ipv6`: The Runner has only IPv4, only IPv6, or both if both tags are present. Most IPv6-only Runners have DNS64 configured to restore connectivity to most of the IPv4 internet, but you may still have issues if you depend on IPv4 access. If you have connection issues when downloading your dependencies, try adding the `ipv4` tag.
This Runner features a Docker executor and is meant for most workloads while providing a high level of isolation from the host as well as from other jobs. Your images are cached on-disk and do not incur higher-than-usual bandwidth usage. The host itself should always be available, and jobs will execute without much delay unless a queue has built up (rare). Feel free to use this Runner as much as you want!
Due to the host being static and not ephemeral, the hardened container runtime gVisor is used. Some advanced syscalls as well as Docker-in-Docker will not work here. If you absolutely require a more advanced execution environment, consider hosting your own Runner or use an Advanced Elastic Instance Runner as described in the next section.
The following Runners are available as Standard Constrained Instance Runners:

Runner tags: `shared`, `prod`, `docker`, `cache`, `amd64`, `ipv4`
Only partially available and still under development; expect instability!
You are using shared infrastructure, executing untrusted code! While secure container runtimes are used to isolate jobs from each other, the host is still a multi-tenant system. Please see below for further details.
These are autoscaling Runners using a Docker executor with `amd64`/`arm64` CPU architecture and a decent amount of performance. Compared to the standard Runner, they are more advanced in the sense that they are faster and their runtime allows a lot more freedom, behaving similarly to a full VM. You can run "privileged" workloads like Docker-in-Docker (usually) without issues and expect the environment to work like a regular system.
They are called elastic because they are provisioned on-demand depending on usage, scaling up to a few concurrent instances and down to zero when unused. This means it can take a minute or two if a job requires spinning up a new host.
Please do not use these Runners if you don't require their advanced capabilities like Docker-in-Docker or the `x86_64` architecture! As they scale down to zero, their local container cache is only useful for consecutive jobs during active usage. Daily usage will result in higher bandwidth usage against the upstream container registry and inefficient networking.
The following On-Demand Runner types are available as Advanced Elastic Instance Runners:

Runner tags: `shared`, `docker`, `dind`, `cache`, `amd64`, `ipv4`, `ipv6` (dual stack)

Host:
Runner tags: `shared`, `docker`, `dind`, `cache`, `arm64`, `ipv4`, `ipv6` (dual stack)

When using a Docker-based Instance Runner, your code runs on an ephemeral host, provisioned on demand but used by multiple tenants. This means jobs from other users can run before, during, or after your job, reusing the same ephemeral host.
A container runtime with reasonable security is deployed to prevent container escapes and grant each job an isolated environment. This architecture allows for mostly unchanged performance and high compatibility with whatever job you intend to execute, but can't fully guarantee complete isolation.
A fully virtualized or highly hardened container runtime would allow for complete isolation, but isn't reasonably deployable here and can't provide the feature set required for this environment. The decision to slightly reduce the level of isolation in favor of features usually expected and often required by many jobs was made because the hosts running these jobs typically only exist for a few hours before being deprovisioned automatically. Attackers would have to breach hosts again and again, making attempts to extract information from other people's jobs hard and also quite apparent.
Details on exact sandboxing config pending, currently under development.
In conclusion, the level of isolation is a compromise between security and compatibility, keeping Runners easy to use for the vast majority of jobs while also providing decent security. But maybe don't give them global and unscoped API keys.
For cost-saving purposes, infrastructure is provisioned on-demand rather than in advance and scales down to zero when unused, which can result in a wait time of 1-2 minutes before the executor is ready to start your job.
It should look something like this while your Runner is getting ready:
```
Running with gitlab-runner 17.11.1 (96856197)
  on hetzner-docker-autoscaler-arm64 dDoxCnU6M, system ID: r_6Pq5XNfagybH
Preparing the "docker-autoscaler" executor
```