author     Valentin Rothberg <vrothberg@redhat.com>  2022-04-13 16:21:21 +0200
committer  Valentin Rothberg <vrothberg@redhat.com>  2022-05-02 13:29:59 +0200
commit     4eff0c8cf284a6007122aec731e4d97059750166 (patch)
tree       cdbfee34bd64bb295556667129a6a3c5db9b4612 /libpod/runtime_worker.go
parent     77d872ea38ec7b685ec99efe6688d1793c9fa256 (diff)
pod: add exit policies
Add the notion of an "exit policy" to a pod. This policy controls the
behaviour when the last container of a pod exits. Initially, there are
two policies (see the sketch after this list):
- "continue" : the pod continues running. This is the default policy
when creating a pod.
- "stop" : stop the pod when the last container exits. This is the
default behaviour for `play kube`.
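
A minimal sketch of how the two policies might be modeled in Go; the
type, constant, and function names here are illustrative, not
necessarily the identifiers this commit introduces:

    package libpod

    import "fmt"

    // PodExitPolicy names the behaviour applied when the last
    // container in a pod exits.
    type PodExitPolicy string

    const (
        PodExitPolicyContinue PodExitPolicy = "continue" // default for pod create
        PodExitPolicyStop     PodExitPolicy = "stop"     // default for play kube
    )

    // ValidatePodExitPolicy rejects unknown policy strings.
    func ValidatePodExitPolicy(s string) (PodExitPolicy, error) {
        switch p := PodExitPolicy(s); p {
        case PodExitPolicyContinue, PodExitPolicyStop:
            return p, nil
        default:
            return "", fmt.Errorf("invalid pod exit policy: %q", s)
        }
    }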
In order to implement the deferred stop of a pod, add a worker queue to
the libpod runtime. The queue picks up work items and, in this case,
helps resolve deadlocks that would otherwise occur if we attempted to
stop a pod during container cleanup.
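
To see why the queue helps: container cleanup already runs with the
container's lock held, so stopping the pod inline from that path can
deadlock on the pod lock. A hedged sketch of how cleanup could hand the
stop off to the worker instead; deferPodStop and
stopIfLastContainerExited are hypothetical names, while queueWork is
the helper added in the diff below and logrus is the logger libpod
already uses:

    // Hypothetical caller inside libpod. The closure runs later on the
    // worker goroutine, outside the cleanup call path, where taking the
    // pod lock is safe.
    func (r *Runtime) deferPodStop(pod *Pod) {
        r.queueWork(func() {
            // stopIfLastContainerExited is an illustrative stand-in for
            // the actual exit-policy check.
            if err := pod.stopIfLastContainerExited(); err != nil {
                logrus.Errorf("Stopping pod %s: %v", pod.ID(), err)
            }
        })
    }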
Note that the default restart policy of `play kube` is "Always". Hence,
in order to really solve #13464, the YAML files must set a custom
restart policy; the tests use "OnFailure".
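
To illustrate, a minimal `play kube` YAML in the spirit of what the
tests use; the pod name, image, and command are placeholders:

    apiVersion: v1
    kind: Pod
    metadata:
      name: exit-policy-demo
    spec:
      restartPolicy: OnFailure
      containers:
      - name: demo
        image: alpine
        command: ["true"]

Since `true` exits with 0 and "OnFailure" only restarts failed
containers, the container stays exited and the pod's "stop" exit policy
can take effect.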
Fixes: #13464
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Diffstat (limited to 'libpod/runtime_worker.go')
-rw-r--r--   libpod/runtime_worker.go   41
1 file changed, 41 insertions, 0 deletions
diff --git a/libpod/runtime_worker.go b/libpod/runtime_worker.go
new file mode 100644
index 000000000..ca44a27f7
--- /dev/null
+++ b/libpod/runtime_worker.go
@@ -0,0 +1,41 @@
+package libpod
+
+import (
+    "time"
+)
+
+func (r *Runtime) startWorker() {
+    if r.workerChannel == nil {
+        r.workerChannel = make(chan func(), 1)
+        r.workerShutdown = make(chan bool)
+    }
+    go func() {
+        for {
+            // Make sure to read all workers before
+            // checking if we're about to shutdown.
+            for len(r.workerChannel) > 0 {
+                w := <-r.workerChannel
+                w()
+            }
+
+            select {
+            // We'll read from the shutdown channel only when all
+            // items above have been processed.
+            //
+            // (*Runtime).Shutdown() will block until the
+            // item is read.
+            case <-r.workerShutdown:
+                return
+
+            default:
+                time.Sleep(100 * time.Millisecond)
+            }
+        }
+    }()
+}
+
+func (r *Runtime) queueWork(f func()) {
+    go func() {
+        r.workerChannel <- f
+    }()
+}
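
For experimentation outside podman, a standalone, runnable sketch of
the same worker pattern; demoRuntime and shutdown are illustrative
stand-ins, not podman API:

    package main

    import (
        "fmt"
        "time"
    )

    type demoRuntime struct {
        workerChannel  chan func()
        workerShutdown chan bool
    }

    func (r *demoRuntime) startWorker() {
        go func() {
            for {
                // Drain every queued item first ...
                for len(r.workerChannel) > 0 {
                    w := <-r.workerChannel
                    w()
                }
                // ... and only then offer to shut down.
                select {
                case <-r.workerShutdown:
                    return
                default:
                    time.Sleep(100 * time.Millisecond)
                }
            }
        }()
    }

    // queueWork sends from a new goroutine so callers never block,
    // even when the buffered channel is full.
    func (r *demoRuntime) queueWork(f func()) {
        go func() { r.workerChannel <- f }()
    }

    // shutdown blocks until the worker goroutine reads the signal,
    // mirroring what the diff's comment says about (*Runtime).Shutdown().
    func (r *demoRuntime) shutdown() {
        r.workerShutdown <- true
    }

    func main() {
        r := &demoRuntime{
            workerChannel:  make(chan func(), 1),
            workerShutdown: make(chan bool),
        }
        r.startWorker()
        r.queueWork(func() { fmt.Println("deferred pod stop would run here") })
        // queueWork is asynchronous, so give the item time to land on
        // the channel before signalling shutdown.
        time.Sleep(200 * time.Millisecond)
        r.shutdown()
    }

The inner drain loop plus the non-blocking select gives queued work
priority over shutdown, at the cost of polling every 100ms; a single
select over both channels would risk shutting down while items are
still buffered.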