I highly recommend setting up your own Runner for your projects. The shared instance runners can become clogged with other projects or run into issues like rate limits or running out of storage. If you depend on Pipelines or make extensive use of them, using your own Runners will give you a more stable and more performant experience and also brings security advantages.
Instance Runners are shared by all users and projects that want to use them. To target a shared Runner, tag your Pipeline with `shared`. There are three different types available: two running Docker with either `amd64` or `arm64` architecture (tagged `docker`), and one using a generic image with many preloaded tools running on a serverless platform (tagged `serverless`). See below for details.
If you don't configure a tag for your Pipeline, a runner configured to run untagged jobs will execute it. In my case, that will be the serverless runner, as it's great for quick and simple tasks and is available instantly. (It's also cheaper than spinning up an entire VM.)
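To illustrate, here is a minimal `.gitlab-ci.yml` sketch (job names and scripts are made up) with one untagged job, which on this instance would land on the serverless runner, and one job that explicitly requests a Docker runner:

```yaml
# Hypothetical .gitlab-ci.yml; job names and scripts are examples only.

quick-check:
  # No tags: picked up by whichever runner accepts untagged jobs
  # (on this instance, the serverless runner).
  script:
    - echo "running on the default runner"

container-check:
  # Explicitly target a shared Docker runner.
  tags:
    - shared
    - docker
  image: alpine:3.20
  script:
    - echo "running inside the requested image on a Docker runner"
```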
Docker-based Instance Runners have a shared cache which works across runners and pipeline runs. Cached data stays available for 14 days, after which it is automatically cleaned up. It's the perfect place for your node_modules or Rust dependencies.
Be aware that the cache is designed to reduce unnecessary bandwidth and compute, not to act as secure storage. If you need to share sensitive data between jobs, use artifacts instead: they follow the project's security model, whereas the cache could potentially be read from other branches or merge requests and is not designed to be secure.
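As a rough sketch (assuming a Node.js project with a package-lock.json; adjust paths and images for your stack), caching `node_modules` between runs could look like this:

```yaml
install-and-test:
  tags:
    - shared
    - docker
    - cache            # only runners with cache support should pick this up
  image: node:22
  cache:
    key:
      files:
        - package-lock.json   # new cache key whenever the lockfile changes
    paths:
      - node_modules/
  script:
    - npm ci --prefer-offline
    - npm test
  # For anything sensitive or job-to-job handover, use artifacts instead of cache.
```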
Tags allow you to fine-tune what specifications the CI server should meet in order to execute your job. Please note that just because a tag exists doesn't mean a runner is actually available. See below for a list of all shared Runners.
The following tags are available:
`shared`: A catch-all for all shared instance runners. Setting just this tag means any available instance runner is fine (but could result in weird behaviour).
`docker`: A GitLab Runner using the Docker executor, meaning that you can define an `image` in your Pipeline and expect this image to be used.
`dind`: Docker-in-Docker means that the Docker executor has access to a Docker daemon, so you can use the `docker` command inside your container to build an image (see the sketch after this list). On non-dind executors, no Docker socket is available, and your pipeline will error out when you try to use a `docker` command.
`serverless`: The Runner uses a managed platform and is probably based on a generic base image.
`cache`: The Runner is able to load and update the cache specified in your Pipeline.
`amd64`: The host CPU architecture is amd64 (meaning 64-bit based on AMD's design, roughly x86-64 compatible). This is probably what you are running on your home desktop.
`arm64`: The host CPU architecture is arm64 (alias AArch64 or ARMv8). This is the architecture of modern MacBooks and probably also your phone. You might experience problems with your build tools on this architecture, and amd64 binaries do not work here (and vice versa).
`ipv4` / `ipv6`: The Runner has only IPv4, only IPv6, or both if both tags are present. Most IPv6-only runners have DNS64 configured to restore connectivity to most of the IPv4 internet, but you may still have issues if you depend on IPv4 access. If you have connection issues downloading your dependencies, try adding the `ipv4` tag.
Only partially available and currently under unstable development!
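As a sketch of a Docker-in-Docker build job (the registry path and image names are placeholders, and depending on how the runner exposes the daemon you may additionally need a `services: docker:dind` entry):

```yaml
build-image:
  tags:
    - shared
    - docker
    - dind           # request a runner that provides a Docker daemon
    - amd64          # pin the architecture the image is built for
  image: docker:27
  script:
    - docker build -t registry.example.com/my-group/my-app:latest .
    - docker push registry.example.com/my-group/my-app:latest
```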
You are using shared infrastructure that executes untrusted code! While secure container runtimes are used to isolate jobs from each other, the host is still operated as a multi-tenant system. Please see below for further details.
These are autoscaling Runners with `amd64`/`arm64` CPU architecture and a decent amount of performance.
Runner tags: `shared`, `docker`, `cache`, `amd64`, `ipv4`, `ipv6` (dual stack)
Host:
Runner tags: `shared`, `docker`, `cache`, `arm64`, `ipv4`, `ipv6` (dual stack)

When using a Docker-based Instance Runner, your code will run on an ephemeral host, provisioned on demand but used for multiple tenants. This means jobs from other users can run before, during, or after your job, reusing the same ephemeral host.
A container runtime with reasonable security is deployed to prevent container escapes and to give each job an isolated environment. This architecture allows for mostly unchanged performance and high compatibility with whatever job you intend to execute, but it can't fully guarantee complete isolation.
A fully virtualized or heavily hardened container runtime would allow for that, but it isn't reasonably deployable here and can't provide the feature set required for this environment. The decision to slightly reduce the level of isolation in favor of unlocking features usually expected, and often required, by many jobs was made because the hosts running these jobs typically only exist for a few hours before being deprovisioned automatically. An attacker would have to breach fresh hosts again and again, which makes attempts to extract information from other people's jobs hard and also quite apparent.
Details on exact sandboxing config pending, currently under development.
In conclusion, the level of isolation is a compromise between security and compatibility, keeping the Runners easy to use for the vast majority of jobs while still providing decent security. But maybe don't give them global and unscoped API keys.
For cost-saving purposes, infrastructure is provisioned on demand rather than in advance and scales down to zero when unused, which can result in a wait time of 1-2 minutes before the executor is ready to start your job.
It should look something like this while your Runner is getting ready:
Running with gitlab-runner 17.11.1 (96856197)
on hetzner-docker-autoscaler-arm64 dDoxCnU6M, system ID: r_6Pq5XNfagybH
Preparing the "docker-autoscaler" executor
Not yet available.
A serverless Runner that only executes commands instead of running your Docker container. Good for tasks that need short execution times and/or high security; it spins up faster and has less overhead than the VM-based runners.
SOUND-ONLY - well, scripting-only but you get the idea.
This Runner uses a container environment with very strict isolation from the host, see Scaleway limitations and restrictions for more details.
Runner tags: `shared`, `serverless`, `cache`
When it comes to running your Pipeline, this Runner essentially just disregards the image you specified and executes your commands using a standardized image based on Debian, offering the `apt` package manager as well as these preloaded packages (see the sketch after this list for an example job):
git, curl, wget, zip, unzip
build-essential, cmake, automake
ruby-full
golang-go
cargo
npm
pip
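For example, a short scripting job aimed at the serverless runner might look like the following sketch (the job name and the `jq` package are arbitrary; any `image:` you set would be ignored there):

```yaml
lint-docs:
  tags:
    - shared
    - serverless
  script:
    - apt-get update && apt-get install -y --no-install-recommends jq
    - git log --oneline -5
    - echo "quick task done"
```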