Before now, I had tinkered with provisioning virtual machines, but I had little idea that bare-metal servers could be provisioned the same way. I started exploring that with OpenStack, and in this post I'll discuss what's involved; if it sounds convincing enough, you can try it out just like I did.
What is OpenStack Ironic?
Let’s start off with the above question. Ironic is a collection of components that manages and provisions physical machines. Before digging into how this is even possible, it's worth noting that the workloads that justify bare metal are pretty specific: high-performance computing clusters, database hosts where a hypervisor tanks performance, GPU nodes for OpenCL jobs, or environments with strict single-tenant, regulatory, or security requirements where you simply cannot share hardware. Yet these use cases still require the same kinds of node operations we perform on VMs, applied to physical servers: enroll a server, configure it, boot it with a target image, and hand it over to a tenant.
Now, remember I said Ironic is a collection of components; imagine a few cooperating processes. First in line is the ironic-api, which exposes a RESTful interface that handles inbound requests from operators enrolling hardware, from Nova issuing deployment commands, or from tooling querying node state. It doesn't act on hardware directly; instead, it passes the requests over RPC to the conductor.

And that brings us to the ironic-conductor. This is where the actual work happens: it is what talks to the hardware through drivers, manages node state transitions, coordinates with Neutron for network configuration, and pulls images from Glance. It's the only process in the whole stack that needs simultaneous access to the data plane and the IPMI control plane, which is why the docs recommend isolating it on a dedicated host.

Next in line is the ironic-python-agent (IPA), a Python service that runs inside a temporary ramdisk on the target node during deployment. Once a node boots this ramdisk, the conductor has remote, in-band access to the hardware, which means it can write images to disk, collect hardware inventory, and perform cleaning operations, all through the IPA running on the node itself.
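To make that split concrete, here's a toy sketch (my own, not Ironic code) of the api/conductor division of labor: the API validates and enqueues a request, and only the conductor, on the far side of the "RPC" queue, ever touches a driver. All names here are illustrative.

```python
# Toy model of the ironic-api / ironic-conductor split -- not real Ironic code.
# The Queue stands in for the RPC bus; Driver is a hypothetical hardware driver.
from queue import Queue


class Driver:
    """Hypothetical hardware driver; only the conductor may call it."""
    def __init__(self):
        self.calls = []

    def set_boot_device(self, node, device):
        self.calls.append(("set_boot_device", node, device))


class IronicAPI:
    """Accepts requests but never touches hardware; it only enqueues."""
    def __init__(self, bus):
        self.bus = bus

    def deploy(self, node):
        self.bus.put(("deploy", node))  # hand off over "RPC"


class Conductor:
    """Consumes requests and does the actual work through drivers."""
    def __init__(self, bus, driver):
        self.bus = bus
        self.driver = driver

    def process_one(self):
        op, node = self.bus.get()
        if op == "deploy":
            self.driver.set_boot_device(node, "pxe")
        return op, node


bus = Queue()
driver = Driver()
IronicAPI(bus).deploy("node-1")
Conductor(bus, driver).process_one()
print(driver.calls)  # only the conductor touched the driver
```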
Check out this architectural flow (from the OpenStack docs) of the components involved; it helped me build a mental map of the overall system interactions.

Beyond the aforementioned components, there are also supporting components for network switch management (ironic-networking) and NoVNC-based graphical console proxying (ironic-novncproxy), but those are less central to understanding the core provisioning model.
How does it fit into OpenStack?
First, let’s understand that Ironic doesn't operate in isolation. When a user sends a request to boot an instance, it first travels through the Nova API, where the Nova scheduler applies its filters and selects an eligible bare metal node based on flavor properties (things like cpu_arch). Nova Compute then hands that request off to Nova's Ironic virt driver, which you can imagine as a thin layer that speaks to Ironic's API on Nova's behalf.
Now, from here Ironic takes over. Its conductor coordinates with Glance to retrieve deploy images, with Neutron to configure provisioning networks and update DHCP options, with Swift for temporary storage of configdrives and deployment logs, and finally with the hardware itself through its driver interfaces. Lest I forget, Keystone handles authentication throughout. The key point is that Ironic deliberately doesn't reimplement scheduling, quota enforcement, or image management; it offloads those concerns to the services already built for them.
The deploy process
So what does an end-to-end deployment look like? Brace yourself, because this is where it gets interesting, and equally complex. Mind you, what follows is a conceptual understanding, not exactly hands-on practice. That said, once the conductor receives a deploy request for a node, it coordinates across four driver interfaces: boot, deploy, power, and management.
The boot interface prepares a PXE configuration and caches the deploy kernel and ramdisk. PXE (Preboot Execution Environment) is the mechanism by which a node's firmware can bootstrap from the network instead of local disk. Here, the NIC sends out a DHCP request, gets an IP address and the address of a TFTP server in the response, fetches the Network Bootstrap Program (NBP) over TFTP, and loads it into memory. Think of NBP as an equivalent of GRUB, except it's fetching the kernel over the network.
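For a feel of the server side of that handshake, here's an illustrative dnsmasq configuration that answers PXE clients. This is a generic example, not the config Ironic generates; the interface name and paths are assumptions:

```
# Illustrative dnsmasq PXE config -- not Ironic's generated configuration
interface=eth1                       # provisioning network interface (assumed)
dhcp-range=192.0.2.100,192.0.2.200   # lease pool for booting nodes
dhcp-boot=pxelinux.0                 # the NBP the firmware fetches over TFTP
enable-tftp                          # serve TFTP from this same daemon
tftp-root=/tftpboot                  # where the NBP, kernel, and ramdisk live
```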
The management interface then issues commands over IPMI (Intelligent Platform Management Interface) to configure the node to network-boot. IPMI gives you control over a machine's power state and boot configuration through a dedicated management controller, independent of whether the OS is running. This is how Ironic powers nodes on and off and sets boot device without caring about OS state. Isn’t that cool?
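To make IPMI concrete, these are the kinds of ipmitool commands that map onto what the management and power interfaces do under the hood (the BMC address and credentials here are placeholders):

```shell
# Query power state out-of-band, regardless of whether an OS is running
ipmitool -I lanplus -H 192.0.2.50 -U admin -P secret power status

# Tell the BMC to network-boot on the next power cycle
ipmitool -I lanplus -H 192.0.2.50 -U admin -P secret chassis bootdev pxe

# Power the node on (off and cycle work the same way)
ipmitool -I lanplus -H 192.0.2.50 -U admin -P secret power on
```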
The power interface powers the node on; it boots into the deploy ramdisk and IPA starts up inside it. The deploy interface then either streams the instance image directly from an HTTP object store URL (the direct deploy path) or orchestrates the write via Ansible. Once the image is written to disk, the boot interface flips the PXE config to point to the instance's own kernel, the node is power-cycled a second time, and it boots into the deployed OS, at which point the provisioning state transitions to active.
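The whole sequence can be sketched as a toy Python walkthrough, with each driver interface reduced to a function that logs its step. This is my own conceptual model of the ordering, not Ironic's actual code or state machine:

```python
# Conceptual model of the deploy ordering described above -- not Ironic code.
log = []

def boot_prepare(node):      # boot interface: stage PXE config + deploy ramdisk
    log.append("pxe config -> deploy ramdisk")

def set_boot_device(node):   # management interface: IPMI "chassis bootdev pxe"
    log.append("bootdev pxe")

def power_on(node):          # power interface: IPMI "power on"
    log.append("power on")

def write_image(node):       # deploy interface: IPA writes the image to disk
    log.append("image written")

def boot_finalize(node):     # boot interface: flip PXE to the instance kernel
    log.append("pxe config -> instance kernel")
    log.append("power cycle")

def deploy(node):
    boot_prepare(node)
    set_boot_device(node)
    power_on(node)
    write_image(node)
    boot_finalize(node)
    return "active"          # provision state on success

state = deploy("node-1")
print(state)  # -> active
```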
I may have jumped the gun a little here, because before any of the above can happen, a physical server has to be registered with Ironic. An operator first creates a node record via baremetal node create, specifying a driver (commonly ipmi), then populates driver_info with the node's BMC address, credentials, and deploy kernel/ramdisk UUIDs. The node's MAC addresses also get registered as ports so Neutron can configure switch-side networking. From there, the node transitions through enroll to manageable (which verifies power and management connectivity) and finally to available, at which point the Compute service's resource tracker picks it up for scheduling.
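On the CLI, those enrollment steps look roughly like the following (the BMC address, credentials, UUIDs, and MAC address are all placeholders):

```shell
# Create the node record with the ipmi driver and its BMC details
openstack baremetal node create --driver ipmi \
    --driver-info ipmi_address=192.0.2.50 \
    --driver-info ipmi_username=admin \
    --driver-info ipmi_password=secret \
    --driver-info deploy_kernel=<kernel-uuid> \
    --driver-info deploy_ramdisk=<ramdisk-uuid>

# Register the node's NIC so Neutron can wire up the provisioning network
openstack baremetal port create 52:54:00:aa:bb:cc --node <node-uuid>

# Walk the node from enroll to manageable, then on to available
openstack baremetal node manage <node-uuid>
openstack baremetal node provide <node-uuid>
```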
What I find interesting about it
For someone who has been gradually getting closer to the firmware stack, Ironic presents a soft landing from kernel subsystems programming. Getting to see how a software stack reaches all the way down to control firmware, power-cycle physical hardware, and boot a machine over the network is genuinely interesting to me. That chain from an API call to a node powering on is something I didn't expect to find this satisfying to trace.
And beyond Ironic itself, the entire OpenStack project is a perfect tinkering ground for where I'm headed: I'm starting my master's in high-performance computing this September, and the intersection of bare-metal provisioning, hardware management, and distributed infrastructure is exactly the kind of depth I want to be building in.
See the links below for more details; we’ll cover more in coming posts.
- Ironic state machine — https://docs.openstack.org/ironic/2026.1/user/states.html
- PXE/NBP — https://docs.openstack.org/ironic/2026.1/install/configure-pxe.html
- ironic-python-agent — https://docs.openstack.org/ironic-python-agent/2026.1/
- Bifrost docs — https://docs.openstack.org/bifrost/2026.1/