Architecture Overview
Control Plane
The control plane dispatches new jobs to worker nodes and receives statistics about each job's execution.
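The two message types above can be sketched as simple records. These field names are illustrative assumptions, not IronCD's actual schema:

```python
# Illustrative shapes for control-plane traffic: a job dispatched to a
# worker, and the execution statistics reported back afterwards.
# All field names here are assumptions for the sake of the sketch.
from dataclasses import dataclass

@dataclass
class JobDispatch:
    job_id: str
    repo: str
    cpus: int            # CPUs to reserve 1:1 on the worker

@dataclass
class JobStats:
    job_id: str
    duration_s: float
    exit_code: int
```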
Worker Nodes
Worker nodes host the VMs that run your builds. Each worker runs a lightweight daemon that polls for jobs from the control plane. When a new job arrives, the worker launches a short-lived VM, connects it to an isolated network bridge, and monitors its execution.
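The daemon's lifecycle described above, poll, launch, monitor, tear down, can be sketched as follows. Every name here (poll_job, launch_vm, and so on) is hypothetical; this is a shape, not IronCD's real code:

```python
# Hypothetical sketch of the worker daemon's poll loop. The control_plane
# and hypervisor objects, and all their method names, are assumptions.
import time

def handle_job(job, hypervisor):
    """Run one job: launch a short-lived VM, watch it, then destroy it."""
    vm = hypervisor.launch_vm(job)    # VM attached to an isolated bridge
    hypervisor.monitor(vm)            # blocks until the job completes
    hypervisor.destroy(vm)            # filesystem erased after every job

def run_daemon(control_plane, hypervisor, poll_interval=2.0):
    """Poll the control plane forever, handling one job at a time."""
    while True:
        job = control_plane.poll_job()    # returns None when the queue is empty
        if job is None:
            time.sleep(poll_interval)
            continue
        handle_job(job, hypervisor)
```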
Workers run on dedicated, bare-metal hosts. To avoid noisy neighbors, we do not overprovision worker nodes: a worker allocates one host CPU to a VM for each CPU the job requests, and leaves jobs in the queue when insufficient CPUs are available.
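The no-overprovisioning rule amounts to a simple admission check, sketched below under assumed names. A job is admitted only if its full CPU request fits in the worker's remaining capacity; otherwise it stays queued:

```python
# Illustrative admission check for a worker with a fixed CPU count.
# No overprovisioning: exactly one host CPU per requested CPU.
def try_admit(job_cpus, total_cpus, allocated_cpus):
    """Return (admitted, new_allocation). The job stays queued if it doesn't fit."""
    if allocated_cpus + job_cpus > total_cpus:
        return False, allocated_cpus      # insufficient CPUs: leave job in queue
    return True, allocated_cpus + job_cpus
```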
We use QEMU to manage virtual machines on each worker node. QEMU provides strong isolation between VMs and uses hardware acceleration to keep performance close to native.
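For a concrete picture, here is the kind of QEMU invocation a worker might assemble. The flags shown (KVM acceleration, virtio disk and network) are standard QEMU options, but the specific image path, device names, and the exact flag set IronCD uses are assumptions:

```python
# Hypothetical construction of a QEMU command line for one build VM.
# The image path and tap device name are placeholders for illustration.
def qemu_command(cpus, memory_mb, image_path, tap_device):
    return [
        "qemu-system-x86_64",
        "-enable-kvm",                      # hardware acceleration via KVM
        "-cpu", "host",
        "-smp", str(cpus),                  # one vCPU per requested CPU
        "-m", str(memory_mb),
        "-drive", f"file={image_path},format=qcow2,if=virtio",
        "-netdev", f"tap,id=net0,ifname={tap_device}",  # isolated bridge
        "-device", "virtio-net-pci,netdev=net0",
        "-nographic",
    ]
```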
Worker nodes aggregate egress logs from each VM and send them to the control plane for storage. All communication between worker nodes and the control plane takes place over a secure, encrypted channel. Worker nodes cannot communicate with one another, and neither can the VMs they host.
Virtual Machines
Each build job runs inside its own ephemeral VM. These VMs look and behave like a standard ubuntu-2404 GitHub runner, but they’re hardened for isolation and observability.
When a VM starts, it connects to GitHub using a one-time token, then begins executing the workflow job. The VM can make outbound connections only if they match the allowlist defined in the repository's policy file; egress traffic is routed through a firewall on the worker node. Logs are shipped to the control plane for anomaly detection and audit logging. This layering makes it very difficult for an attacker to bypass our egress controls.
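The allowlist check the firewall enforces can be sketched as below. The policy format shown, exact hostnames plus "*." wildcards, is an assumption about how the repository's policy file might express its allowlist:

```python
# Sketch of a hostname allowlist check. Entries are either exact hostnames
# or "*."-prefixed wildcards; whether a wildcard also matches the bare
# domain is a design choice (here: yes). The format is an assumption.
def egress_allowed(hostname, allowlist):
    for entry in allowlist:
        if entry.startswith("*."):
            suffix = entry[1:]            # ".example.com"
            bare = entry[2:]              # "example.com"
            if hostname.endswith(suffix) or hostname == bare:
                return True
        elif hostname == entry:
            return True
    return False                          # default deny
```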
VMs never accept inbound connections, and their file systems are erased after every job. This ensures complete isolation between builds. No IronCD services of any kind run inside VMs.