path: root/libpod
* Merge pull request #14449 from cdoern/podVolumes (openshift-ci[bot], 2022-07-01)
  podman volume create --opt=o=timeout...

  * podman volume create --opt=o=timeout... (cdoern, 2022-06-09)
    add an option to configure the driver timeout when creating a volume. The default is 5 seconds, but this value is too small for some custom drivers.

    Signed-off-by: cdoern <cdoern@redhat.com>

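    A hypothetical invocation might look like the following; the driver and volume names are placeholders, and the option syntax follows the commit title:

    ```
    $ podman volume create --driver mydriver --opt o=timeout=10 myvol
    ```
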
* Merge pull request #14720 from sstosh/rm-option (openshift-ci[bot], 2022-06-29)
  Fix: prevent the OCI runtime directory from remaining

  * Fix: prevent the OCI runtime directory from remaining (Toshiki Sonoda, 2022-06-24)
    This bug was introduced in https://github.com/containers/podman/pull/8906. When we use a 'podman rm/restart/stop/kill etc...' command on a container running with --rm, the OCI runtime directory remains at /run/<runtime name> (root user) or /run/user/<user id>/<runtime name> (rootless user).

    This bug could cause other bugs. For example, when we checkpoint a container running with --rm (podman checkpoint --export) and restore it (podman restore --import) with crun, the error message "Error: OCI runtime error: crun: container `<container id>` already exists" is emitted. This error is caused by an attempt to restore the container with the same container ID as the remaining OCI runtime's container ID.

    Therefore, fix this so that the cleanupRuntime() function runs to remove the OCI runtime directory, even if the container has already been removed by the --rm option.

    Signed-off-by: Toshiki Sonoda <sonoda.toshiki@fujitsu.com>

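    A rough reproduction sketch based on the description above (image, names, and paths are illustrative):

    ```
    $ podman run -d --rm --name demo alpine sleep 1000
    $ podman container checkpoint --export=/tmp/demo.tar.gz demo
    $ podman container restore --import=/tmp/demo.tar.gz
    Error: OCI runtime error: crun: container `<container id>` already exists
    ```
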
* Merge pull request #14764 from cdoern/cgroup (openshift-ci[bot], 2022-06-29)
  limit cgroupfs when rootless

  * only create cgroup when not rootless if using cgroupfs (Charlie Doern, 2022-06-28)
    [NO NEW TESTS NEEDED] Now that podman's cgroup config tries to initialize controllers, cgroupfs errors out on pod creation. We need to mimic the behavior that used to exist and only create the cgroup when running as rootful.

    Signed-off-by: Charlie Doern <cdoern@redhat.com>

* runtime: unpause the container before killing it (Giuseppe Scrivano, 2022-06-28)
  The new version of runc has the same check in place and automatically resumes the container if it is paused. So when Podman tries to resume it again, it fails since the container is not in the paused state.

  Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2100740

  [NO NEW TESTS NEEDED] The CI doesn't use a new runc on cgroup v1 systems.

  Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>

* Merge pull request #14734 from giuseppe/copyup-switch-order (openshift-ci[bot], 2022-06-28)
  volume: add two new options copy and nocopy

  * volume: new options [no]copy (Giuseppe Scrivano, 2022-06-27)
    add two new options to the volume create command: copy and nocopy. When nocopy is specified, the files from the container image are not copied up to the volume.

    Closes: https://github.com/containers/podman/issues/14722

    Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>

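    Assuming the new options are passed like the other local-driver mount options, usage might look like:

    ```
    $ podman volume create --opt o=nocopy myvol
    $ podman run -v myvol:/data alpine ls /data   # image content is not copied up
    ```
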
  * volume: drop TODO comment (Giuseppe Scrivano, 2022-06-27)
    The two operations are equivalent since securejoin.SecureJoin() has already resolved the symlinks. Prefer the Lstat version though, to make sure symlinks are never resolved and we do not end up using a path on the host.

    Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>

  * volumes: switch order of checks (Giuseppe Scrivano, 2022-06-27)
    Avoid any I/O operation on the volume if the source directory is empty. This is useful on network file systems (since CAP_DAC_OVERRIDE is not honored) where the root user might not have enough privileges to perform an I/O operation on the NFS mount, but the user running inside the container does.

    [NO NEW TESTS NEEDED] It needs a setup with a network file system.

    Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>

* Merge pull request #14713 from Luap99/volume-plugin (openshift-ci[bot], 2022-06-27)
  add podman volume reload to sync volume plugins

  * add podman volume reload to sync volume plugins (Paul Holzinger, 2022-06-23)
    Libpod requires that all volumes are stored in the libpod db. Because volumes can be created by plugins outside of podman, the db will not show all available volumes. This podman volume reload command allows users to sync the libpod db with their external volume plugins. All new volumes from the plugin are also created in the libpod db, and when a volume from the db no longer exists it will be removed if possible.

    There are some problems:
    - naming conflicts: in this case we only use the first volume we found. This is not deterministic.
    - race conditions: we have no control over the volume plugins. It is possible that the volumes changed while we ran this command.

    Fixes #14207

    Signed-off-by: Paul Holzinger <pholzing@redhat.com>

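    Typical usage is the bare command; the output shape below is illustrative, with placeholder volume names:

    ```
    $ podman volume reload
    Added:
    vol-from-plugin
    Removed:
    stale-vol
    ```
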
  * libpod: volume plugin sendRequest remove body bool (Paul Holzinger, 2022-06-23)
    There is no need for an extra parameter if the body is set. We can just check whether the interface is non-nil.

    Signed-off-by: Paul Holzinger <pholzing@redhat.com>

* Merge pull request #14705 from jakecorrenti/show-health-status-event (openshift-ci[bot], 2022-06-27)
  Show Health Status events

  * Show Health Status events (Jake Correnti, 2022-06-27)
    Previously, health status events were not being generated at all. Both the API and `podman events` will generate health_status events.

    ```
    {"status":"health_status","id":"ae498ac3aa6c63db8b69a37583a6eae1a9cefbdbdbeeadcf8e1d66d745f0df63","from":"localhost/healthcheck-demo:latest","Type":"container","Action":"health_status","Actor":{"ID":"ae498ac3aa6c63db8b69a37583a6eae1a9cefbdbdbeeadcf8e1d66d745f0df63","Attributes":{"containerExitCode":"0","image":"localhost/healthcheck-demo:latest","io.buildah.version":"1.26.1","maintainer":"NGINX Docker Maintainers \u003cdocker-maint@nginx.com\u003e","name":"healthcheck-demo"}},"scope":"local","time":1656082205,"timeNano":1656082205882271276,"HealthStatus":"healthy"}
    ```

    ```
    2022-06-24 11:06:04.886238493 -0400 EDT container health_status ae498ac3aa6c63db8b69a37583a6eae1a9cefbdbdbeeadcf8e1d66d745f0df63 (image=localhost/healthcheck-demo:latest, name=healthcheck-demo, health_status=healthy, io.buildah.version=1.26.1, maintainer=NGINX Docker Maintainers <docker-maint@nginx.com>)
    ```

    Signed-off-by: Jake Correnti <jcorrenti13@gmail.com>

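    One way to watch only these new events, using the standard `podman events` filter syntax:

    ```
    $ podman events --filter event=health_status
    ```
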
* Merge pull request #14732 from dfr/criu (openshift-ci[bot], 2022-06-27)
  Add missing criu symbols to criu_unsupported.go

  * Fix spelling of GetCriuVersion (Doug Rabson, 2022-06-27)
    Signed-off-by: Doug Rabson <dfr@rabson.org>

* Merge pull request #14654 from cdoern/cgroup (openshift-ci[bot], 2022-06-27)
  podman cgroup enhancement

  * podman cgroup enhancement (cdoern, 2022-06-24)
    Currently, setting any sort of resource limit in a pod does nothing. With the newly refactored creation process in c/common, podman can now set resources at a pod level, meaning that resource-related flags can now be exposed to podman pod create.

    cgroupfs and systemd are both supported with varying degrees of completeness. cgroupfs is a much simpler process and one that is virtually complete for all resource types; the flags now just need to be added. systemd, on the other hand, has to be handled via the dbus api, meaning that the limits need to be passed as recognized properties to systemd. The properties added so far are the ones that podman pod create supports, as well as `cpuset-mems`, as this will be the next flag I work on.

    Signed-off-by: Charlie Doern <cdoern@redhat.com>

* Two fixes for DB exit code handling (Matthew Heon, 2022-06-23)
  Firstly: don't prune exit codes after a refresh - instead, clear the table entirely. We are guaranteed that all containers are gone after a refresh, so we should not worry about exit codes given this.

  Secondly: alter the way pruning was done. We were updating the DB by calling Update from within an existing View, and stacking an RW transaction on top of an existing RO one seems dodgy; further, modifying a bucket while iterating over it with ForEach is undefined behavior. Hopefully this will resolve our CI issues.

  Signed-off-by: Matthew Heon <mheon@redhat.com>

* libpod: fix wait and exit-code logic (Valentin Rothberg, 2022-06-23)
  This commit addresses three intertwined bugs to fix an issue when using Gitlab runner on Podman. The three bug fixes are not split into separate commits as tests won't pass otherwise; avoidable noise when bisecting future issues.

  1) Podman conflated states: even when asking to wait for the `exited` state, Podman returned as soon as a container transitioned to `stopped`. The issue surfaced as failing Gitlab tests [1], since `conmon`'s buffers have not (yet) been emptied when attaching to a container right after a wait. The race window was extremely narrow, and I only managed to reproduce it with the Gitlab runner [1] unit tests.

  2) The clearer separation between `exited` and `stopped` revealed a race condition predating the changes. If a container is configured for autoremoval (e.g., via `run --rm`), the "run" process competes with the "cleanup" process running in the background. The window of the race condition was sufficiently large that the "cleanup" process had already removed the container and storage before the "run" process could read the exit code, and hence waited indefinitely.

  Address the exit-code race condition by recording exit codes in the main libpod database. Exit codes can now be read from the database. When waiting for a container to exit, Podman first waits for the container to transition to `exited` and will then query the database for its exit code. Outdated exit codes are pruned during cleanup (i.e., non-performance critical) and when refreshing the database after a reboot. An exit code is considered outdated when it is older than 5 minutes.

  While the race condition predates this change, the waiting process has apparently always been fast enough in catching the exit code due to issue 1): `exited` and `stopped` were conflated. The waiting process hence caught the exit code after the container transitioned to `stopped` but before it `exited` and got removed.

  3) With 1) and 2), Podman is now waiting for a container to properly transition to the `exited` state. Some tests did not pass after 1) and 2), which revealed the third bug: `conmon` was executed with its working directory pointing to the OCI runtime bundle of the container. The changed working directory broke resolving relative paths in the "cleanup" process. The "cleanup" process errored before actually cleaning up the container, and the waiting "main" process ran indefinitely - or until hitting a timeout. Fix the issue by executing `conmon` with the same working directory as Podman.

  Note that fixing 3) *may* address a number of issues we have seen in the past where for *some* reason cleanup processes did not fire.

  [1] https://gitlab.com/gitlab-org/gitlab-runner/-/issues/27119#note_970712864

  Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
  [MH: Minor reword of commit message]
  Signed-off-by: Matthew Heon <mheon@redhat.com>

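  With the two states now separated, a wait can target the final state explicitly. A sketch, with the container name as a placeholder and the printed exit code illustrative:

  ```
  $ podman run -d --name runner alpine true
  $ podman wait --condition exited runner
  0
  ```
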
* conmon: silence json-file error (Valentin Rothberg, 2022-06-23)
  We should just silently fall through. The log was flooding the system-service logs when running Gitlab runner.

  Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>

* Fix spelling "setup" -> "set up" and similar (Erik Sjölund, 2022-06-22)
  - Replace "setup", "lookup", "cleanup", "backup" with "set up", "look up", "clean up", "back up" when used as verbs. Replace also variations of those.
  - Improve language in a few places.

  Signed-off-by: Erik Sjölund <erik.sjolund@gmail.com>

* Merge pull request #14659 from eriksjolund/setup_to_set_up_in_code (openshift-ci[bot], 2022-06-21)
  [CI:DOCS] "setup" -> "set up" in source code comments

  * [CI:DOCS] "setup" -> "set up" in source code comments (Erik Sjölund, 2022-06-19)
    - Replace "setup", "lookup" with "set up", "look up" when used as verbs.

    Signed-off-by: Erik Sjölund <erik.sjolund@gmail.com>

* podman pod create --shm-size (cdoern, 2022-06-20)
  expose the --shm-size flag to podman pod create and add proper handling and inheritance for the option.

  resolves #14609

  Signed-off-by: Charlie Doern <cdoern@redhat.com>

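  Example usage; the pod name and size are placeholders, and containers joining the pod inherit the setting:

  ```
  $ podman pod create --name mypod --shm-size 256m
  $ podman run --pod mypod alpine df -h /dev/shm
  ```
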
* Merge pull request #14299 from cdoern/podClone (openshift-ci[bot], 2022-06-16)
  implement podman pod clone

  * podman pod clone (cdoern, 2022-06-10)
    Implement podman pod clone, a command to create an exact copy of a pod while changing certain config elements. Currently supported flags are:
    - --name: change the pod name
    - --destroy: remove the original pod
    - --start: run the new pod on creation
    and all infra-container related flags from podman pod create (namespaces etc.); see the sketch after this list.

    resolves #12843

    Signed-off-by: cdoern <cdoern@redhat.com>

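    Putting the listed flags together, a clone-and-replace might look like this (pod names are placeholders):

    ```
    $ podman pod clone --name mypod2 --start --destroy mypod
    ```
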
* Merge pull request #14596 from giuseppe/move-conmon-different-cgroup-system-service (openshift-ci[bot], 2022-06-15)
  libpod: improve check to create conmon cgroup

  * libpod: improve check to create conmon cgroup (Giuseppe Scrivano, 2022-06-15)
    Commit 1951ff168a63157fa2f4711fde283edfc4981ed3 introduced a check so that conmon is not moved to a new cgroup when podman is running inside of a systemd service. This is helpful for integrating podman in systemd, so that the spawned conmon lives in the same cgroup as the service that created it.

    Unfortunately this breaks when the podman daemon is running in a systemd service, since the same check is in place; thus all the conmon processes end up in the same cgroup as the podman daemon. When the podman daemon systemd service stops, the conmon processes are also terminated, as well as the containers they monitor. Improve the check to exclude podman running as a daemon.

    Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2052697

    Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>

* fix CI: golangci-lint is broken on main (Paul Holzinger, 2022-06-15)
  The merge of both 528739cef3d2 and 1b62e4543845 at the same time created a lint error on main.

  Signed-off-by: Paul Holzinger <pholzing@redhat.com>

* Merge pull request #14584 from Luap99/lint-systemd (OpenShift Merge Robot, 2022-06-15)
  golangci-lint: add systemd build tag

  * golangci-lint: add systemd build tag (Paul Holzinger, 2022-06-14)
    Lint the systemd code and fix the reported problems. The remoteclient tag is no longer used so I just removed it.

    [NO NEW TESTS NEEDED]

    Signed-off-by: Paul Holzinger <pholzing@redhat.com>

* Merge pull request #14585 from Luap99/nolint (openshift-ci[bot], 2022-06-14)
  golangci-lint: enable nolintlint

  * golangci-lint: enable nolintlint (Paul Holzinger, 2022-06-14)
    The nolintlint linter does not deny the use of `//nolint`. Instead, it allows us to enforce a common nolint style:
    - force that a linter name must be specified
    - do not add a space between `//` and `nolint`
    - make sure nolint is only used when there is actually a problem

    Signed-off-by: Paul Holzinger <pholzing@redhat.com>

* Merge pull request #14582 from giuseppe/no-create-containerenv-if-run-volume (openshift-ci[bot], 2022-06-14)
  container: do not create .containerenv with -v SRC:/run

  * container: do not create .containerenv with -v SRC:/run (Giuseppe Scrivano, 2022-06-14)
    If /run is on a volume, do not create the file /run/.containerenv, as it would leak outside of the container.

    Closes: https://github.com/containers/podman/issues/14577

    Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>

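    A quick check of the fixed behavior (the host path is illustrative):

    ```
    $ podman run --rm -v /tmp/hostrun:/run alpine ls /run/.containerenv
    ls: /run/.containerenv: No such file or directory
    ```
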
* Merge pull request #14580 from jakecorrenti/stats-on-non-running-container (openshift-ci[bot], 2022-06-14)
  Non-running containers now report statistics via the `podman stats` command

  * Non-running containers now report statistics via the `podman stats` command (Jake Correnti, 2022-06-13)
    Previously, if a container was not running and the user ran the `podman stats` command, an error would be reported: `Error: container state improper`.

    Podman now reports stats as the fields' default values for their respective type if the container is not running:

    ```
    $ podman stats --no-stream demo
    ID            NAME    CPU %   MEM USAGE / LIMIT   MEM %   NET IO    BLOCK IO   PIDS   CPU TIME   AVG CPU %
    4b4bf8ce84ed  demo    0.00%   0B / 0B             0.00%   0B / 0B   0B / 0B    0      0s         0.00%
    ```

    Closes: #14498

    Signed-off-by: Jake Correnti <jcorrenti13@gmail.com>

* podman cp: do not overwrite non-dirs with dirs and vice versa (Valentin Rothberg, 2022-06-10)
  Add a new `--overwrite` flag to `podman cp` to allow for overwriting in case existing users depend on the behavior; they will have a workaround. By default, the flag is turned off to be compatible with Docker and to have a more sane behavior.

  Fixes: #14420

  Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>

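  For example, copying a directory over an existing file of the same name now requires the new flag (names are placeholders):

  ```
  $ podman cp mydir ctr:/tmp/target             # errors if /tmp/target is a file
  $ podman cp --overwrite mydir ctr:/tmp/target
  ```
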
* Merge pull request #14480 from cdoern/infra (OpenShift Merge Robot, 2022-06-09)
  patch for pod host networking & other host namespace handling

  * patch for pod host networking & other host namespace handling (cdoern, 2022-06-09)
    This patch includes additional host namespace checks when creating a ctr, as well as fixes to the tests to check /proc/self/ns/net.

    see #14461

    Signed-off-by: cdoern <cdoern@redhat.com>

* Do not error on signalling a just-stopped container (Matthew Heon, 2022-06-09)
  Previous PR #12394 tried to address this, but made a mistake: containers that have just exited do not move to the Exited state but rather the Stopped state - as such, the code would never have run (there is no way we start `podman kill`, and the container transitions to Exited while we are doing it - that requires holding the container lock, which Kill already does). Fix the code to check Stopped as well (we omit Exited entirely, but it's a cheap check and our state logic could change in the future).

  Also, return an error, instead of exiting cleanly - the Kill failed, after all. ErrCtrStateInvalid is already handled by the sig-proxy logic so there won't be issues.

  [NO NEW TESTS NEEDED] This fixes a race that I cannot reproduce myself, and I have no idea how we'd repro it in CI.

  Signed-off-by: Matthew Heon <mheon@redhat.com>

* Merge pull request #14220 from Luap99/resolvconf (OpenShift Merge Robot, 2022-06-07)
  use resolvconf package from c/common/libnetwork

  * use resolvconf package from c/common/libnetwork (Paul Holzinger, 2022-06-07)
    Podman and Buildah should use the same code to generate the resolv.conf file. This mostly moved the podman code into c/common and created a better API for it so buildah can use it as well.

    [NO NEW TESTS NEEDED] All existing tests should continue to pass.

    Fixes #13599 (There is no way to test this in CI without breaking the host's resolv.conf)

    Signed-off-by: Paul Holzinger <pholzing@redhat.com>

* Merge pull request #14483 from jakecorrenti/restart-privelaged-containers-after-host-device-change (OpenShift Merge Robot, 2022-06-07)
  Privileged containers can now restart if the host devices change

  * Privileged containers can now restart if the host devices change (Jake Correnti, 2022-06-06)
    If a privileged container is running, stops, and the devices on the host change, such as a USB device being unplugged, then the container would no longer start. Previously, the devices from the host were only being added to the container once: when the container was created. Now, this happens every time the container starts.

    I did this by adding a boolean to the container config that indicates whether to mount all of the devices or not, which can be set via an option. During spec generation, if the `MountAllDevices` option is set in the container config, all host devices are added to the container.

    Additionally, a couple of functions from `pkg/specgen/generate/config_linux.go` were moved into `pkg/util/utils_linux.go` as they were needed in multiple packages.

    Closes #13899

    Signed-off-by: Jake Correnti <jcorrenti13@gmail.com>

* libpod: store network status when userns is used (Paul Holzinger, 2022-06-07)
  When a container with a userns is created, the network setup is special. Normally the netns is set up before the oci runtime container is created; however, with a userns the container is created first and then the network is set up. In the second case we never saved the container state afterwards. Because of this, podman inspect would not show the network info, and network teardown would not happen.

  This worked with local podman because there was a save() call later in the code path which then also saved the network status. But in the podman API code path this save never happened, thus all containers started via API had this problem.

  Fixes #14465

  Signed-off-by: Paul Holzinger <pholzing@redhat.com>

* Merge pull request #14499 from giuseppe/make-error-clearer (OpenShift Merge Robot, 2022-06-07)
  runtime: make error clearer