path: root/libpod
Commit message (Author, Date)
* syncContainer: transition from `stopping` to `exited` (Valentin Rothberg, 2022-07-27)
    Allow the cleanup process (and others) to transition the container from `stopping` to `exited`. This fixes a race condition detected in #14859 where the cleanup process kicks in _before_ the stopping process can read the exit file. Prior to this fix, the cleanup process left the container in the `stopping` state and removed the conmon files, such that the stopping process also left the container in this state as it could not read the exit files. Hence, `podman wait` timed out (see the 23-second execution time of the test [1]) due to the unexpected/invalid state and the test failed.
    Further turn the warning during stop into a debug message since it's a natural race due to the daemonless/concurrent architecture and nothing to worry about.
    [NO NEW TESTS NEEDED] since we can only monitor if #14859 continues flaking or not.
    [1] https://storage.googleapis.com/cirrus-ci-6707778565701632-fcae48/artifacts/containers/podman/6210434704343040/html/sys-remote-fedora-36-rootless-host.log.html#t--00205
    Fixes: #14859
    Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
* Merge pull request #14976 from giuseppe/do-not-lock-containers-pod-rm (OpenShift Merge Robot, 2022-07-22)
    libpod: do not lock all containers on pod rm
| * libpod: do not lock all containers on pod rm (Giuseppe Scrivano, 2022-07-21)
    do not attempt to lock all containers on pod rm since it can cause deadlocks when other podman cleanup processes are attempting to lock the same containers in a different order.
    [NO NEW TESTS NEEDED]
    Closes: https://github.com/containers/podman/issues/14929
    Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
* | container wait: improve error message (Valentin Rothberg, 2022-07-22)
    Improve the error message when looking up the exit code of a container. The state of the container may help us track down #14859 which flakes rarely and is impossible to reproduce on my machine.
    [NO NEW TESTS NEEDED]
    Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
* | resource limits for pods (Charlie Doern, 2022-07-21)
    added the following flags and handling for podman pod create:
    --memory-swap
    --cpuset-mems
    --device-read-bps
    --device-write-bps
    --blkio-weight
    --blkio-weight-device
    --cpu-shares
    given the new backend for systemd in c/common, all of these can now be exposed to pod create. most of the heavy lifting (nearly all) is done within c/common. However, some rewiring needed to be done here as well!
    Signed-off-by: Charlie Doern <cdoern@redhat.com>
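    For rough illustration only (the flag names come from the commit message above; the pod name, devices, and values are made up, not taken from the commit), a pod with such limits might be created like this:
    ```
    podman pod create \
      --name limited-pod \
      --memory-swap 1g \
      --cpuset-mems 0 \
      --cpu-shares 512 \
      --blkio-weight 100 \
      --device-read-bps /dev/sda:1mb \
      --device-write-bps /dev/sda:1mb
    ```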
* | Merge pull request #15003 from giuseppe/create-etc-passwd (OpenShift Merge Robot, 2022-07-21)
    libpod: create /etc/passwd if missing
| * | libpod: create /etc/passwd if missing (Giuseppe Scrivano, 2022-07-21)
    create the /etc/passwd and /etc/group files if they are missing in the image.
    Closes: https://github.com/containers/podman/issues/14966
    Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
* | | Merge pull request #14984 from Luap99/logs (OpenShift Merge Robot, 2022-07-21)
    fix goroutine leaks in events and logs backend
| * | | fix goroutine leaks in events and logs backend (Paul Holzinger, 2022-07-20)
    When running a single podman logs this is not really important since we will exit when we finish reading the logs. However, for the system service this is very important. Leaking goroutines will cause increased memory and CPU usage over time.
    Both the event and log backends have goroutine leaks with both the file and journald drivers.
    The journald backend has the problem that journal.Wait(IndefiniteWait) will block until we get a new journald event. So when a client closes the connection the goroutine would still wait until there is a new journal entry. To fix this we just wait for a maximum of 5 seconds; after that we can check if the client connection was closed and exit correctly in this case.
    For the file backend we can fix this by waiting for either the log line or context cancel at the same time. Currently it would block waiting for new log lines and only check afterwards if the client closed the connection, and thus hang forever if there are no new log lines.
    [NO NEW TESTS NEEDED] I am open to ideas how we can test memory leaks in CI.
    To test manually, run a container like this:
    `podman run --log-driver $driver --name test -d alpine sh -c 'i=1; while [ "$i" -ne 1000 ]; do echo "line $i"; i=$((i + 1)); done; sleep inf'`
    where `$driver` can be either `journald` or `k8s-file`. Then start the podman system service and use:
    `curl -m 1 --output - --unix-socket $XDG_RUNTIME_DIR/podman/podman.sock -v 'http://d/containers/test/logs?follow=1&since=0&stderr=1&stdout=1' &>/dev/null`
    to get the logs from the API; it closes the connection after 1 second. Now run the curl command several times and check the memory usage of the service.
    Fixes #14879
    Signed-off-by: Paul Holzinger <pholzing@redhat.com>
* | | | Merge pull request #14907 from flouthoc/remove-hooks (OpenShift Merge Robot, 2022-07-21)
    pkg,libpod: remove `pkg/hooks` and use `hooks` from `c/common`
| * | | pkg,libpod: remove pkg/hooks and use hooks from c/common (Aditya R, 2022-07-20)
    PR https://github.com/containers/common/pull/1071 moved `pkg/hooks` to `c/common`, hence remove that from podman and use `pkg/hooks` from `c/common`.
    [NO NEW TESTS NEEDED]
    Signed-off-by: Aditya R <arajan@redhat.com>
* / | Update init ctr default for play kube (Urvashi Mohnani, 2022-07-20)
    Update the init container type default to once instead of always to match k8s behavior. Add a new annotation that can be used to change the init ctr type in the kube yaml.
    Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
* / Use SafeChown rather than chown for volumes on NFS (Daniel J Walsh, 2022-07-12)
    NFS servers will throw an ENOTSUPP error if you attempt to chown a directory to the same UID and GID the directory already has. If volumes are stored on NFS directories this throws an ugly error and then works on the next try.
    Bottom line: don't chown directories that already have the correct UID and GID.
    Fixes: https://github.com/containers/podman/issues/14766
    [NO NEW TESTS NEEDED] Difficult to set up an NFS server in testing.
    Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* fix wrong log message on Trace level (Mikhail Khachayants, 2022-07-12)
    [NO NEW TESTS NEEDED]
    An empty path to the runtime binary was printed instead of the real path.
    Before fix:
    TRAC[0000] found runtime ""
    TRAC[0000] found runtime ""
    After:
    TRAC[0000] found runtime "/usr/bin/crun"
    TRAC[0000] found runtime "/usr/bin/runc"
    Signed-off-by: Mikhail Khachayants <khachayants@arrival.com>
* [CI:DOCS] Improve language. Fix spelling and typos. (Erik Sjölund, 2022-07-11)
    * Correct spelling and typos.
    * Improve language.
    Co-authored-by: Ed Santiago <santiago@redhat.com>
    Signed-off-by: Erik Sjölund <erik.sjolund@gmail.com>
* Merge pull request #14181 from umohnani8/kube-hostname (openshift-ci[bot], 2022-07-11)
    Add ports and hostname correctly in kube yaml
| * Add ports and hostname correctly in kube yaml (Urvashi Mohnani, 2022-07-08)
    If a pod is created without net sharing, allow adding separate ports for each container to the kube yaml and also set the pod level hostname correctly if the uts namespace is not being shared.
    Add a warning if the default namespace sharing options have been modified by the user.
    Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
* | libpod: read exit code when cleaning up the runtime (Valentin Rothberg, 2022-07-11)
    While for some call paths we may be doing this redundantly, we need to make sure the exit code is always read at this point.
    [NO NEW TESTS NEEDED] as I do not manage to reproduce the issue which is very likely caused by a code path not writing the exit code when running concurrently.
    Fixes: #14859
    Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
* | podman wait: return 0 if container never ran (Valentin Rothberg, 2022-07-11)
    Make sure to return/exit with 0 when waiting for a container that never ran.
    Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
* | Merge pull request #14841 from Luap99/common-code (openshift-ci[bot], 2022-07-07)
    use c/common code for resize and CopyDetachable
| * | use c/common code for resize and CopyDetachable (Paul Holzinger, 2022-07-06)
    Since conmon-rs also uses this code we moved it to c/common. Now podman can use it from there as well, which prevents duplication.
    [NO NEW TESTS NEEDED]
    Signed-off-by: Paul Holzinger <pholzing@redhat.com>
* | Merge pull request #14501 from cdoern/podUTS (openshift-ci[bot], 2022-07-06)
    podman pod create --uts support
| * podman pod create --uts support (cdoern, 2022-07-05)
    add support for the --uts flag in pod create, allowing users to avoid issues with default values in containers.conf.
    uts follows the same format as other namespace flags: --uts=private (default), --uts=host, --uts=ns:PATH
    resolves #13714
    Signed-off-by: Charlie Doern <cdoern@redhat.com>
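    For illustration, the three formats described above would look like this (the pod names and the namespace path are placeholders, not from the commit):
    ```
    podman pod create --name p1 --uts private                # default
    podman pod create --name p2 --uts host                   # share the host's UTS namespace
    podman pod create --name p3 --uts ns:/proc/1234/ns/uts   # join an existing UTS namespace (path is a placeholder)
    ```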
* | libpod: switch to golang native error wrapping (Sascha Grunert, 2022-07-05)
    We now use the golang error wrapping format specifier `%w` instead of the deprecated github.com/pkg/errors package.
    [NO NEW TESTS NEEDED]
    Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
* | Merge pull request #14626 from jakecorrenti/disable-docker-compose-health-check (openshift-ci[bot], 2022-07-05)
    Docker-compose disable healthcheck properly handled
| * | Docker-compose disable healthcheck properly handled (Jake Correnti, 2022-07-05)
    Previously, if a container had healthchecks disabled in the docker-compose.yml file and the user did a `podman inspect <container>`, they would have an incorrect output:
    ```
    "Healthcheck":{
       "Test":[
          "CMD-SHELL",
          "NONE"
       ],
       "Interval":30000000000,
       "Timeout":30000000000,
       "Retries":3
    }
    ```
    After a quick change, the correct output is now the result:
    ```
    "Healthcheck":{
       "Test":[
          "NONE"
       ]
    }
    ```
    Additionally, I extracted the hard-coded strings that were used for comparisons into constants in `libpod/define` to prevent a similar issue from recurring.
    Closes: #14493
    Signed-off-by: Jake Correnti <jcorrenti13@gmail.com>
* / Sync: handle exit file (Valentin Rothberg, 2022-07-05)
    Make sure `Sync()` handles state transitions and exit codes correctly. The function was only being called when batching, which could render containers in an unusable state when running concurrently with other state-altering functions/commands, since the state must be re-read from the database before acting upon it.
    Fixes: #14761
    Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
* Merge pull request #14789 from saschagrunert/libpod-errors (openshift-ci[bot], 2022-07-05)
    libpod/runtime: switch to golang native error wrapping
| * libpod/runtime: switch to golang native error wrapping (Sascha Grunert, 2022-07-04)
    We now use the golang error wrapping format specifier `%w` instead of the deprecated github.com/pkg/errors package.
    [NO NEW TESTS NEEDED]
    Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
* | Merge pull request #14807 from eriksjolund/fix_read_only_spelling (openshift-ci[bot], 2022-07-04)
    [CI:DOCS] Fix spelling "read only" -> "read-only"
| * | Fix spelling "read only" -> "read-only" (Erik Sjölund, 2022-07-02)
    Signed-off-by: Erik Sjölund <erik.sjolund@gmail.com>
* | | podman pod create --memory (Charlie Doern, 2022-07-01)
    using the new resource backend, implement podman pod create --memory, which enables users to modify memory.max inside of the parent cgroup (the pod), implicitly impacting all children unless overridden.
    Signed-off-by: Charlie Doern <cdoern@redhat.com>
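    A minimal sketch of the new flag (the pod name and value are illustrative, not from the commit):
    ```
    # Sets memory.max on the pod's parent cgroup; containers in the pod inherit the limit.
    podman pod create --name mem-limited --memory 512m
    ```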
* | fix build (Valentin Rothberg, 2022-07-01)
    PR containers/podman/pull/14449 had an outdated base. Merging it broke builds.
    [NO NEW TESTS NEEDED]
    Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
* | Merge pull request #14449 from cdoern/podVolumes (openshift-ci[bot], 2022-07-01)
    podman volume create --opt=o=timeout...
| * podman volume create --opt=o=timeout... (cdoern, 2022-06-09)
    add an option to configure the driver timeout when creating a volume. The default is 5 seconds but this value is too small for some custom drivers.
    Signed-off-by: cdoern <cdoern@redhat.com>
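    A sketch of a possible invocation, assuming the timeout is passed through the `o=` volume option as the (truncated) commit title suggests and that the value is in seconds; the volume name and value are made up:
    ```
    # Assumed syntax based on the commit title; raises the volume driver timeout to 10 seconds.
    podman volume create --opt o=timeout=10 myvol
    ```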
* | Merge pull request #14720 from sstosh/rm-option (openshift-ci[bot], 2022-06-29)
    Fix: Prevent OCI runtime directory from remaining
| * | Fix: Prevent OCI runtime directory from remaining (Toshiki Sonoda, 2022-06-24)
    This bug was introduced in https://github.com/containers/podman/pull/8906.
    When we use a 'podman rm/restart/stop/kill etc...' command on a container running with --rm, the OCI runtime directory remains at /run/<runtime name> (root user) or /run/user/<user id>/<runtime name> (rootless user).
    This bug could cause other bugs. For example, when we checkpoint a container running with --rm (podman checkpoint --export) and restore it (podman restore --import) with crun, the error message "Error: OCI runtime error: crun: container `<container id>` already exists" is output. This error is caused by an attempt to restore the container with the same container ID as the remaining OCI runtime's container ID.
    Therefore, fix the cleanupRuntime() function so that it removes the OCI runtime directory even if the container has already been removed by the --rm option.
    Signed-off-by: Toshiki Sonoda <sonoda.toshiki@fujitsu.com>
* | | Merge pull request #14764 from cdoern/cgroup (openshift-ci[bot], 2022-06-29)
    limit cgroupfs when rootless
| * | | only create cgroup when not rootless if using cgroupfs (Charlie Doern, 2022-06-28)
    [NO NEW TESTS NEEDED]
    Now that podman's cgroup config tries to initialize controllers, cgroupfs errors out on pod creation. We need to mimic the behavior that used to exist and only create the cgroup when running as rootful.
    Signed-off-by: Charlie Doern <cdoern@redhat.com>
* | | | runtime: unpause the container before killing it (Giuseppe Scrivano, 2022-06-28)
    the new version of runc has the same check in place and it automatically resumes the container if it is paused. So when Podman tries to resume it again, it fails since the container is not in the paused state.
    Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2100740
    [NO NEW TESTS NEEDED] the CI doesn't use a new runc on cgroup v1 systems.
    Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
* | | Merge pull request #14734 from giuseppe/copyup-switch-order (openshift-ci[bot], 2022-06-28)
    volume: add two new options copy and nocopy
| * | | volume: new options [no]copy (Giuseppe Scrivano, 2022-06-27)
    add two new options to the volume create command: copy and nocopy. When nocopy is specified, the files from the container image are not copied up to the volume.
    Closes: https://github.com/containers/podman/issues/14722
    Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
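    A hedged sketch of possible usage; the exact flag spelling is an assumption (the commit only names the options copy and nocopy), and the volume names are made up:
    ```
    # Assumed syntax: copy image content up into the new volume (the default behavior).
    podman volume create --opt o=copy copiedvol
    # Assumed syntax: skip the copy-up of image content into the new volume.
    podman volume create --opt o=nocopy uncopiedvol
    ```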
| * | | volume: drop TODO comment (Giuseppe Scrivano, 2022-06-27)
    the two operations are equivalent since securejoin.SecureJoin() has already resolved the symlinks. Prefer the Lstat version though, to make sure symlinks are never resolved and we do not end up using a path on the host.
    Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
| * | | volumes: switch order of checks (Giuseppe Scrivano, 2022-06-27)
    avoid any I/O operation on the volume if the source directory is empty. This is useful on network file systems (since CAP_DAC_OVERRIDE is not honored) where the root user might not have enough privileges to perform an I/O operation on the NFS mount but the user running inside the container has.
    [NO NEW TESTS NEEDED] it needs a setup with a network file system
    Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
* | | | Merge pull request #14713 from Luap99/volume-plugin (openshift-ci[bot], 2022-06-27)
    add podman volume reload to sync volume plugins
| * | | | add podman volume reload to sync volume plugins (Paul Holzinger, 2022-06-23)
    Libpod requires that all volumes are stored in the libpod db. Because volume plugins can be created outside of podman, it will not show all available plugins. This podman volume reload command allows users to sync the libpod db with their external volume plugins. All new volumes from the plugin are also created in the libpod db, and when a volume from the db no longer exists it will be removed if possible.
    There are some problems:
    - naming conflicts, in this case we only use the first volume we found. This is not deterministic.
    - race conditions, we have no control over the volume plugins. It is possible that the volumes changed while we run this command.
    Fixes #14207
    Signed-off-by: Paul Holzinger <pholzing@redhat.com>
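    Usage is a single subcommand with no required arguments, as named in the commit title:
    ```
    # Re-sync the libpod database with all configured external volume plugins.
    podman volume reload
    ```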
| * | | | libpod: volume plugin sendRequest remove body bool (Paul Holzinger, 2022-06-23)
    There is no need for an extra parameter if the body is set. We can just check whether the interface is not nil.
    Signed-off-by: Paul Holzinger <pholzing@redhat.com>
* | | | Merge pull request #14705 from jakecorrenti/show-health-status-event (openshift-ci[bot], 2022-06-27)
    Show Health Status events
| * | | | Show Health Status events (Jake Correnti, 2022-06-27)
    Previously, health status events were not being generated at all. Both the API and `podman events` will generate health_status events.
    ```
    {"status":"health_status","id":"ae498ac3aa6c63db8b69a37583a6eae1a9cefbdbdbeeadcf8e1d66d745f0df63","from":"localhost/healthcheck-demo:latest","Type":"container","Action":"health_status","Actor":{"ID":"ae498ac3aa6c63db8b69a37583a6eae1a9cefbdbdbeeadcf8e1d66d745f0df63","Attributes":{"containerExitCode":"0","image":"localhost/healthcheck-demo:latest","io.buildah.version":"1.26.1","maintainer":"NGINX Docker Maintainers \u003cdocker-maint@nginx.com\u003e","name":"healthcheck-demo"}},"scope":"local","time":1656082205,"timeNano":1656082205882271276,"HealthStatus":"healthy"}
    ```
    ```
    2022-06-24 11:06:04.886238493 -0400 EDT container health_status ae498ac3aa6c63db8b69a37583a6eae1a9cefbdbdbeeadcf8e1d66d745f0df63 (image=localhost/healthcheck-demo:latest, name=healthcheck-demo, health_status=healthy, io.buildah.version=1.26.1, maintainer=NGINX Docker Maintainers <docker-maint@nginx.com>)
    ```
    Signed-off-by: Jake Correnti <jcorrenti13@gmail.com>
* | | | | Merge pull request #14732 from dfr/criu (openshift-ci[bot], 2022-06-27)
    Add missing criu symbols to criu_unsupported.go