After successfully bootstrapping a containerized Uyuni server and a Leap 15.6 client locally, my next immediate curiosity was the underlying storage layer. During some recent discussions with the maintainers, a known infrastructure pain point came up: running Podman workloads on NFS-backed storage.
Since my GSoC proposal revolves heavily around benchmarking these exact storage behaviors, I couldn't just read about the bug; I needed to see it break for myself.
Since early today, I've been working on reproducing this exact failure state. I am currently provisioning a local NFS share to attach to my single-node environment. The goal is to deliberately trigger the filesystem locking issues, capture the raw logs, and understand exactly where the kernel and container runtime fail to communicate.
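For reference, this is roughly the setup I'm provisioning. The export path and mount point (`/srv/nfs/uyuni`, `/var/lib/containers/storage-nfs`) are placeholders I picked for this experiment, not anything Uyuni prescribes, and since it's a single-node setup the same machine plays both NFS server and client:

```shell
# --- NFS server side (openSUSE Leap) ---
sudo zypper install nfs-kernel-server
sudo mkdir -p /srv/nfs/uyuni

# Export to localhost only; no_root_squash so the container runtime
# can write root-owned files into the share.
echo "/srv/nfs/uyuni 127.0.0.1(rw,sync,no_root_squash,no_subtree_check)" \
  | sudo tee -a /etc/exports
sudo systemctl enable --now nfs-server
sudo exportfs -ra

# --- Client side (same machine, single-node) ---
sudo mkdir -p /var/lib/containers/storage-nfs
sudo mount -t nfs 127.0.0.1:/srv/nfs/uyuni /var/lib/containers/storage-nfs

# Then point Podman's graph root at the NFS mount in
# /etc/containers/storage.conf (excerpt):
#
#   [storage]
#   driver = "overlay"
#   graphroot = "/var/lib/containers/storage-nfs"
```

Running the overlay driver on top of NFS is exactly the combination I expect to misbehave, so this should put the locking problem directly in Podman's write path.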
This page will serve as my live scratchpad. I'll be updating it with the raw logs, mount configurations, and any potential workarounds I uncover during the investigation.