Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind feature
Description
Currently it's not possible to perform even read-only operations (such as `podman ps`) when the graphroot happens to live on a read-only volume (for instance, when the host `/` is mounted read-only, either intentionally or because ext4 went "oopsie").

However, we already have a somewhat working solution for that specific use case: move the graphroot to an ephemeral mount. Assuming you want to use the images and volumes stored at the default graphroot (as I did when testing this idea), you could change your config like this:
/etc/containers/storage.conf:

```toml
[storage]
graphroot = "/run/containers/storage"

[storage.options]
additionalimagestores = [
  "/var/lib/containers/storage"
]
```
/etc/containers/containers.conf:

```toml
[engine]
volume_path = "/var/lib/containers/storage/volumes"
```
This works fine until the next reboot. Since volume metadata seems to be persisted in the BoltDB store (and not under `volume_path`), podman won't know those volumes exist once the graphroot is recreated, and trying to recreate them has resulted in an error since f7e72bc: #8254
```
Error: error creating volume directory "/var/lib/containers/storage/volumes/bla/_data": mkdir /var/lib/containers/storage/volumes/bla/_data: file exists
```
I don't know whether this behavior is intentional, but I've created a small PR which should revert it:
I know this PR doesn't actually fix the core issue, but it should be enough of a fix when you want:
- images stored on a read-only partition
- user data stored on a read-write partition
- podman state as ephemeral as the containers it manages