author:    Ian Wienand <iwienand@redhat.com>  2021-11-09 14:07:49 +1100
committer: Ian Wienand <iwienand@redhat.com>  2021-11-09 18:34:21 +1100
commit:    72cf3896851aac16c834d6ac9508e1aab281a93e (patch)
tree:      2345a366480b1177761d0b580fa73f2ce990efcd /libpod/container_graph.go
parent:    3a31ac50daa819e2dec686d2bdfc2e92c974e28e (diff)
shm_lock: Handle ENOSPC better in AllocateSemaphore
When starting a container, libpod/runtime_pod_linux.go:NewPod calls
libpod/lock/lock.go:AllocateLock, which ends up in here. If you exceed
num_locks, a "podman run ..." fails with:

    Error: error allocating lock for new container: no space left on device
As noted inline, this error is technically true, since it refers to the
SHM area, but for anyone who has not dug into the source (i.e. me,
before a few hours ago :) the initial thought is going to be that a
disk is full. I spent quite a bit of time trying to diagnose which
disk, partition, overlay, etc. was filling up before I realised this
was actually due to locks leaking from failing containers.
This change overrides that case with a more explicit message that
hopefully puts people on the right track to fixing this faster. You
will now see:

    $ ./bin/podman run --rm -it fedora bash
    Error: error allocating lock for new container: allocation failed; exceeded num_locks (20)
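The shape of the fix can be sketched as follows. This is a minimal,
hypothetical sketch, not the actual podman code: allocateSemaphore and
allocateLock are stand-ins for the real SHM lock allocator, assuming
only that the allocator surfaces the raw ENOSPC errno, which the caller
then translates into a num_locks-specific message:

```go
package main

import (
	"errors"
	"fmt"
	"syscall"
)

// allocateSemaphore is a hypothetical stand-in for the SHM semaphore
// allocator. When every slot in the shared-memory segment is in use,
// it fails with ENOSPC ("no space left on device").
func allocateSemaphore(used, numLocks int) (int, error) {
	if used >= numLocks {
		return 0, syscall.ENOSPC
	}
	return used, nil
}

// allocateLock translates the raw ENOSPC, which reads like a full
// disk, into a message that points at num_locks instead.
func allocateLock(used, numLocks int) (int, error) {
	sem, err := allocateSemaphore(used, numLocks)
	if err != nil {
		if errors.Is(err, syscall.ENOSPC) {
			return 0, fmt.Errorf("allocation failed; exceeded num_locks (%d)", numLocks)
		}
		return 0, err
	}
	return sem, nil
}

func main() {
	// All slots taken: the caller now sees the clearer message.
	_, err := allocateLock(20, 20)
	fmt.Println(err)
}
```

The point of checking for ENOSPC specifically (rather than rewriting
every error) is that other failures from the allocator keep their
original, still-accurate messages.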
[NO NEW TESTS NEEDED] (just changes an existing error message)
Signed-off-by: Ian Wienand <iwienand@redhat.com>
Diffstat (limited to 'libpod/container_graph.go')
0 files changed, 0 insertions, 0 deletions