Following up on my last post about Ironic, this one covers Bifrost. Unlike the last post, it's less of an explainer and more of a devlog. You'll see why.
So, what is Bifrost?
Bifrost is a set of Ansible playbooks that automates deploying a base image onto a set of known hardware using Ironic in standalone mode (meaning without the rest of the OpenStack services like Nova or Neutron). Mostly this is useful when you just want to provision bare metal machines without the overhead of a full cloud. It handles installing Ironic, setting up dnsmasq for DHCP and PXE, configuring nginx to serve images, and wiring everything together.
The workflow it gives you is three steps: install, enroll, deploy. Install sets up all the services. Enroll registers your hardware with Ironic. Deploy tells Ironic to write an OS image to the enrolled nodes and boot them.
My setup
I ran this on my Dell XPS running Ubuntu 24.04. My plan was to use Bifrost's built-in test environment support, which creates virtual machines to act as the bare metal targets, so no physical hardware needed for this first run.
First things first, cloning the repo and generating an SSH key:
git clone https://opendev.org/openstack/bifrost
cd bifrost
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""
Then running the testenv setup:
./bifrost-cli testenv
This is where my first problem showed up. The command stalled for a long time and then failed with Failed to update apt cache. It turned out my Brave browser repository had a broken GPG key, which made apt return a non-zero exit code, which I believe Ansible treated as fatal. The fix was refreshing the key:
curl -fsSL https://brave-browser-apt-release.s3.brave.com/brave-browser-archive-keyring.gpg \
| sudo gpg --dearmor -o /usr/share/keyrings/brave-browser-archive-keyring.gpg
After that, testenv ran fine and created two virtual machines, testvm1 and testvm2, alongside a generated inventory file at ~/bifrost/baremetal-inventory.json. Each VM had a MAC address, IPMI credentials managed via VirtualBMC, and static IP assignments baked into the inventory. This is what Bifrost would use to enroll and provision them. Running virsh list --all showed both VMs defined but powered off at that point, waiting to be enrolled:
Id Name State
----------------------------------
- testvm1 shut off
- testvm2 shut off
Next, installing Ironic
With the test VMs ready:
sudo ./bifrost-cli install --testenv
This actually takes a while because it installs Ironic, MariaDB, RabbitMQ, nginx, and dnsmasq, and downloads the IPA kernel and ramdisk. Once done, the output at the end tells you how to activate the environment, so I did that and verified the install by running:
source /opt/stack/bifrost/bin/activate
export OS_CLOUD=bifrost
baremetal driver list
Ironic was up with ipmi and redfish drivers available.
Enrolling the nodes
Enrollment is how Ironic learns about your hardware, and with the inventory Bifrost generated during the testenv setup, all I had to do was run:
./bifrost-cli enroll baremetal-inventory.json
testvm2 enrolled cleanly and reached the available state; testvm1, on the other hand, hit a timeout during automated cleaning. The IPA ramdisk booted on the VM but never called back to Ironic.
After some digging, the issue was a config file that the libvirt package had dropped into the dnsmasq configuration directory that Bifrost's dnsmasq reads:
/etc/dnsmasq.d/libvirt-daemon
It contained except-interface=virbr0, which told Bifrost's dnsmasq to ignore the exact interface the test VMs were on. So the VMs were getting IPs from libvirt's own dnsmasq instead, which knows nothing about the PXE boot options, and they couldn't fetch the IPA ramdisk. Removing the file and restarting dnsmasq fixed it:
sudo rm /etc/dnsmasq.d/libvirt-daemon
sudo systemctl restart dnsmasq
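If libvirt and Bifrost share a machine, it's worth scanning the dnsmasq drop-in directory for interface exclusions before enrolling anything. Here's a small helper I'd use for that, a sketch of my own rather than anything Bifrost ships:

```shell
# Hypothetical helper, not part of Bifrost: list dnsmasq drop-in files
# that exclude a given interface (e.g. the bridge your test VMs sit on).
# Usage: find_except_iface <conf-dir> <interface>
find_except_iface() {
    grep -l "except-interface=$2" "$1"/* 2>/dev/null
}
```

Running find_except_iface /etc/dnsmasq.d virbr0 would have flagged the libvirt-daemon file straight away.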
After manually moving testvm1 back through the manageable state to trigger cleaning again, both nodes completed cleaning and reached available:
+--------------------------------------+---------+---------------+-------------+-----------------+-------------+
| uuid | name | instance_uuid | power_state | provision_state | maintenance |
+--------------------------------------+---------+---------------+-------------+-----------------+-------------+
| 4e41df61-84b1-5856-bfb6-6b5f2cd3dd11 | testvm1 | None | power on | available | False |
| 878c3113-0035-5033-9f99-46520b89b56d | testvm2 | None | power on | available | False |
+--------------------------------------+---------+---------------+-------------+-----------------+-------------+
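For reference, that "moving testvm1 back through manageable" step is just two baremetal CLI calls. Here's a dry-run sketch of my own that prints them instead of executing them, since running them for real needs the live Ironic from this setup:

```shell
# Dry-run sketch (my own helper, not a Bifrost command): print the two
# baremetal CLI calls that push a node back through cleaning. "manage"
# moves it to manageable; "provide" kicks off cleaning and, on success,
# lands it back in available. Drop the echoes to actually run them.
reclean_node() {
    echo "baremetal node manage $1"
    echo "baremetal node provide $1"
}
```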
Deploying
With both nodes available, it was time to deploy:
./bifrost-cli deploy baremetal-inventory.json \
-e @baremetal-install-env.json \
--image http://192.168.122.1:8080/jammy-server-cloudimg-amd64.img \
--image-checksum 0d8646d16b91372aec21c09cb19097c1 \
-e ssh_public_key_path=~/.ssh/id_rsa.pub \
--wait
A couple of things worth noting here. The image URL points at the local nginx server that Bifrost set up at 192.168.122.1:8080. My first attempt pointed at an external URL and failed; I suspected the IPA ramdisk running on the test VMs couldn't resolve external DNS, so I downloaded an Ubuntu Jammy cloud image locally and served it from there instead. Also, ssh_public_key_path has to be passed explicitly: Bifrost's default group_vars file has it commented out, so without it no SSH key gets injected into the configdrive.
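For completeness, producing that checksum locally looked roughly like this. The /httpboot path is an assumption from my install (it's where Bifrost's nginx served files from on my machine; check yours):

```shell
# Assumption from my setup: /httpboot is the directory Bifrost's nginx
# serves on port 8080 -- verify the path on your own install first.
# wget -P /httpboot \
#   https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img

# --image-checksum wants the bare hex digest, so drop md5sum's filename column:
image_md5() {
    md5sum "$1" | awk '{print $1}'
}
```

image_md5 /httpboot/jammy-server-cloudimg-amd64.img then prints the value to pass as --image-checksum.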
Both nodes reached active state, and I SSH-ed into testvm1:
ssh -i ~/.ssh/id_rsa ubuntu@192.168.122.2
...
Welcome to Ubuntu 22.04.5 LTS (GNU/Linux 5.15.0-173-generic x86_64)
...
ubuntu@testvm1:~$
Same for testvm2 at 192.168.122.3, after removing testvm1's entry from my known_hosts file. It felt wholesome getting into both; there's something about the 'fresh smell' of a clean, freshly provisioned Linux environment.
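The known_hosts cleanup is a one-liner; I wrapped it in a tiny function of my own so the file path can be overridden, which is handy when test VMs reuse IPs across redeploys:

```shell
# Remove a stale host key so the next SSH doesn't hit a key-mismatch
# warning. The second argument defaults to the usual known_hosts location.
forget_host() {
    ssh-keygen -R "$1" -f "${2:-$HOME/.ssh/known_hosts}" >/dev/null 2>&1
}
```

So forget_host 192.168.122.2 clears the old testvm1 entry before redeploying to the same address.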
What I took away from this
Genuinely, I don't yet know how many hurdles Bifrost saved me from just by using it (I'll find out when I eventually work without it), but it was as easy and straightforward as it claims to be. It takes all the potential complexity and wraps it into a three-command workflow. That's amazing to me, I promise. The two bugs I hit were both specific to my environment, and the second one is worth knowing about if you're running Bifrost on a machine where libvirt already manages networks.
Next post is me trying this on actual physical hardware (I have an old Lenovo laptop lying around), which should make the IPMI and PXE parts a lot more tangible. Let's go!
Read up links:
- Bifrost testenv — https://docs.openstack.org/bifrost/latest/contributor/testenv.html
- IPA docs — https://docs.openstack.org/ironic-python-agent/2026.1/
- Ubuntu cloud images — https://cloud-images.ubuntu.com/jammy/current/