path: root/libpod
* Merge pull request #13597 from Luap99/stats  (OpenShift Merge Robot, 2022-03-23)
      podman stats: calc CPU percentage correctly
| * podman stats: improve cpu average calc  (Paul Holzinger, 2022-03-22)
      We can just calculate the cpu percent for the time the container is
      running. There is no need to use datapoints.

      Signed-off-by: Paul Holzinger <pholzing@redhat.com>
| * podman stats: calc CPU percentage correctly  (Paul Holzinger, 2022-03-22)
      When you run podman stats, the first interval always shows the wrong
      cpu usage. To calculate the cpu percentage we get the cpu time from
      the cgroup and compare it against the system time between two stats.
      Since there are no previous stats for the first interval, an empty
      struct is used instead. Thus we do not use the actual running time of
      the container but the current unix timestamp (time since Jan 1 1970).

      To fix this we make sure that the previous stats time is set to the
      container start time when it is empty.

      [NO NEW TESTS NEEDED] No idea how I could create a test which would
      have a predictable cpu usage. See the linked bugzilla for a
      reproducer.

      Fixes https://bugzilla.redhat.com/show_bug.cgi?id=2066145

      Signed-off-by: Paul Holzinger <pholzing@redhat.com>
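The calculation described in this commit can be sketched in Go as follows. This is an illustrative example with hypothetical names, not the libpod code: CPU percentage is the delta of container CPU time divided by the delta of wall-clock time, and when there is no previous sample the container start time stands in for the zero timestamp.

```go
package main

import (
	"fmt"
	"time"
)

// sample is a hypothetical snapshot of a container's CPU usage.
type sample struct {
	cpuNano uint64    // cumulative CPU time from the cgroup, in nanoseconds
	readAt  time.Time // wall-clock time the sample was taken
}

// cpuPercent computes CPU usage between two samples. If prev has a zero
// timestamp (first interval), start is used as the previous point in time,
// mirroring the fix described above.
func cpuPercent(prev, cur sample, start time.Time) float64 {
	prevTime := prev.readAt
	if prevTime.IsZero() {
		prevTime = start
	}
	cpuDelta := float64(cur.cpuNano - prev.cpuNano)
	sysDelta := float64(cur.readAt.Sub(prevTime).Nanoseconds())
	if sysDelta <= 0 {
		return 0
	}
	return cpuDelta / sysDelta * 100
}

func main() {
	start := time.Now().Add(-10 * time.Second)
	cur := sample{cpuNano: uint64(2 * time.Second), readAt: time.Now()}
	// First interval: no previous sample, so the container start time is used.
	fmt.Printf("%.1f%%\n", cpuPercent(sample{}, cur, start)) // roughly 20%
}
```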
* | Fix a potential race around the exec cleanup process  (Matthew Heon, 2022-03-23)
      Every exec session run attached will, on exit, do two things: it will
      signal the associated `podman exec` that it is finished (to allow
      Podman to collect the exit code and exit), and spawn a cleanup process
      to clean up the exec session (in case the `podman exec` process died,
      we still need to clean up).

      If an exec session is created that exits almost instantly, but
      generates a large amount of output (e.g. prints thousands of lines),
      the cleanup process can potentially execute before `podman exec` has a
      chance to read the exit code, resulting in errors. Handle this by
      detecting if the cleanup process has already removed the exec session
      before handling the error from reading the exec exit code.

      [NO NEW TESTS NEEDED] I have no idea how to test this in CI.

      Fixes #13227

      Signed-off-by: Matthew Heon <mheon@redhat.com>
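A rough sketch of the error-handling pattern this commit describes; `errNoSuchSession` and `readExitCode` are hypothetical stand-ins for libpod internals. The point is that a "session already removed" error caused by losing the race against the cleanup process is tolerated rather than surfaced.

```go
package main

import (
	"errors"
	"fmt"
)

// errNoSuchSession is a hypothetical sentinel returned when an exec
// session has already been removed by the cleanup process.
var errNoSuchSession = errors.New("no such exec session")

// readExitCode is a stand-in for reading the exec session's exit code
// from the database; it may race with the cleanup process.
func readExitCode(sessionID string) (int, error) {
	return 0, errNoSuchSession // simulate losing the race
}

// execExitCode tolerates the session having been cleaned up already:
// instead of surfacing an error, it falls back to a best-effort result.
func execExitCode(sessionID string) (int, error) {
	code, err := readExitCode(sessionID)
	if errors.Is(err, errNoSuchSession) {
		// The cleanup process won the race; the session is gone,
		// so do not treat this as a failure.
		return 0, nil
	}
	return code, err
}

func main() {
	code, err := execExitCode("abc123")
	fmt.Println(code, err) // 0 <nil>
}
```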
* | Merge pull request #13398 from giuseppe/fix-warning-pod-create-rm  (OpenShift Merge Robot, 2022-03-22)
      libpod: drop warning if cgroup doesn't exist
| * | libpod: drop warning if cgroup doesn't exist  (Giuseppe Scrivano, 2022-03-02)
      do not print a warning on cgroup removal if it doesn't exist.

      Closes: https://github.com/containers/podman/issues/13382

      [NO NEW TESTS NEEDED]

      Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
* | | Merge pull request #13580 from vrothberg/enable-linters  (OpenShift Merge Robot, 2022-03-22)
      enable linters
| * | fix a number of errcheck issues  (Valentin Rothberg, 2022-03-22)
      Numerous issues remain, especially in tests/e2e.

      Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
| * | fix a number of `godot` issues  (Valentin Rothberg, 2022-03-22)
      Still an unknown number remains but I am running out of patience.
      Adding dots is not the best use of my time.

      Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
| * | linter: enable makezero  (Valentin Rothberg, 2022-03-22)
      Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
| * | linter: enable nilerr  (Valentin Rothberg, 2022-03-22)
      A number of cases looked suspicious, so I marked them with `FIXME`s
      to leave some breadcrumbs.

      Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
| * | linter: enable wastedassign  (Valentin Rothberg, 2022-03-22)
      Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
* | | Merge pull request #13585 from flouthoc/fix-no-healthcheck  (OpenShift Merge Robot, 2022-03-22)
      healthcheck: stop showing wrong status when `--no-healthcheck` is set
| * | healthcheck: stop showing wrong status when --no-healthcheck is set  (Aditya R, 2022-03-22)
      Containers started with `--no-healthcheck` are configured to contain
      no healthcheck, and the test is configured as `NONE`. Podman shows
      the wrong status in such cases. This commit fixes the faulty behavior
      of the status field for containers started with `--no-healthcheck`.

      Signed-off-by: Aditya R <arajan@redhat.com>
* | | Merge pull request #13577 from giuseppe/drop-fedora-31-warning  (OpenShift Merge Robot, 2022-03-22)
      libpod: drop warning for Fedora 31
| * | libpod: drop warning for Fedora 31  (Giuseppe Scrivano, 2022-03-21)
      drop a warning for runc not supporting cgroup v2 on Fedora 31.

      [NO NEW TESTS NEEDED]

      Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
* | | bump golangci-lint to v1.45.0  (Valentin Rothberg, 2022-03-21)
      * supports Go 1.18
      * disable a number of new linters
      * fix minor stylecheck issues

      [NO NEW TESTS NEEDED]

      Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
* | Merge pull request #13552 from vrothberg/go1.18  (OpenShift Merge Robot, 2022-03-18)
      go fmt: use go 1.18 conditional-build syntax
| * | go fmt: use go 1.18 conditional-build syntax  (Valentin Rothberg, 2022-03-18)
      Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
* | | podman machine: remove hostip from port  (Paul Holzinger, 2022-03-17)
      Inside the podman machine vm we always remove the hostip from the
      port mapping because this should only be used on the actual host.
      Otherwise you run into issues when we would bind 127.0.0.1 or try to
      bind a host address that is not available in the VM.

      This was already done for cni/netavark ports and slirp4netns but not
      for the port bindings inside libpod, which are only used as root.

      [NO NEW TESTS NEEDED] We still do not have machine tests!

      Fixes #13543

      Signed-off-by: Paul Holzinger <pholzing@redhat.com>
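The idea can be illustrated with a small Go sketch (the types and the `insideMachineVM` helper are hypothetical, not podman's actual code): when running inside the machine VM, the host IP of each port binding is cleared so the bind succeeds regardless of which addresses exist in the VM.

```go
package main

import "fmt"

// portMapping is a simplified port binding: HostIP may name an address
// that only exists on the real host, not inside the machine VM.
type portMapping struct {
	HostIP        string
	HostPort      uint16
	ContainerPort uint16
}

// insideMachineVM is a hypothetical stand-in for detecting that we are
// running inside a podman machine VM.
func insideMachineVM() bool { return true }

// stripHostIPs clears the host IP from every mapping so the port is bound
// on all addresses inside the VM; the host-side forwarder applies the
// original address on the real host.
func stripHostIPs(ports []portMapping) []portMapping {
	if !insideMachineVM() {
		return ports
	}
	for i := range ports {
		ports[i].HostIP = ""
	}
	return ports
}

func main() {
	ports := []portMapping{{HostIP: "127.0.0.1", HostPort: 8080, ContainerPort: 80}}
	fmt.Printf("%+v\n", stripHostIPs(ports))
}
```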
* | Merge pull request #13450 from jwhonce/bz/2052697  (OpenShift Merge Robot, 2022-03-16)
      Exit code change BZ #2052697
| * | Exit with 0 when receiving SIGTERM  (Jhon Honce, 2022-03-15)
      * systemctl stop podman.service will now return exit code 0
      * Update test framework to support JSON boolean and numeric values

      Signed-off-by: Jhon Honce <jhonce@redhat.com>
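A minimal Go sketch of the behavior change, assuming nothing about the real service code: catching SIGTERM lets the process shut down and report exit code 0, which is what allows `systemctl stop podman.service` to report success.

```go
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

func main() {
	sigs := make(chan os.Signal, 1)
	// Without a handler, SIGTERM terminates the process with a non-zero
	// status; catching it lets us shut down and exit cleanly.
	signal.Notify(sigs, syscall.SIGTERM)

	go func() {
		<-sigs
		fmt.Println("received SIGTERM, shutting down cleanly")
		os.Exit(0)
	}()

	// ... the service loop would run here ...
	select {}
}
```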
* | | move k8s deps into podman  (Paul Holzinger, 2022-03-15)
      We only need a small part of the k8s dependencies but they are the
      biggest dependencies in podman by far. Moving them into podman allows
      us to remove the unnecessary parts.

      Signed-off-by: Paul Holzinger <pholzing@redhat.com>
* | slirp: fix setup on ipv6 disabled systems  (Paul Holzinger, 2022-03-14)
      When enable_ipv6=true is set for slirp4netns (default since podman
      v4), we will try to set the accept sysctl. This sysctl will not exist
      on systems that have ipv6 disabled. In this case we should not error
      and just ignore the extra ipv6 setup.

      Also, the current logic to wait for the slirp4netns setup was kinda
      broken: it did not actually wait until the sysctl was set before
      starting slirp. This should now be fixed by using two
      `sync.WaitGroup`s.

      [NO NEW TESTS NEEDED]

      Fixes #13388

      Signed-off-by: Paul Holzinger <pholzing@redhat.com>
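The ordering described above can be sketched with two `sync.WaitGroup`s; the goroutine below stands in for the process that configures the network namespace, and the prints stand in for the real work. This is a generic illustration, not the podman code.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var sysctlReady, slirpReady sync.WaitGroup
	sysctlReady.Add(1)
	slirpReady.Add(1)
	done := make(chan struct{})

	// Stand-in for the setup goroutine that enters the new network
	// namespace and applies the ipv6 sysctl there.
	go func() {
		defer close(done)
		fmt.Println("netns: applying ipv6 sysctl")
		sysctlReady.Done() // signal: the sysctl is in place
		slirpReady.Wait()  // wait until slirp4netns has been started
		fmt.Println("netns: setup finished")
	}()

	// The launcher must not start slirp4netns before the sysctl is set.
	sysctlReady.Wait()
	fmt.Println("launcher: starting slirp4netns")
	slirpReady.Done()

	<-done
}
```

Using two wait groups rather than one lets each side block on the other, so neither the sysctl step nor the slirp4netns launch can overtake its counterpart.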
* | Add support for --chrootdirs  (LStandman, 2022-03-14)
      Signed-off-by: LStandman <65296484+LStandman@users.noreply.github.com>
* | Inspect network info of a joined network namespace  (😎 Mostafa Emami, 2022-03-08)
      Closes: https://github.com/containers/podman/issues/13150

      Signed-off-by: 😎 Mostafa Emami <mustafaemami@gmail.com>
* | Merge pull request #13413 from giuseppe/pod-no-use-cgroups-if-disabled  (OpenShift Merge Robot, 2022-03-04)
      libpod: pods do not use cgroups if --cgroups=disabled
| * | libpod: pods do not use cgroups if --cgroups=disabled  (Giuseppe Scrivano, 2022-03-03)
      do not attempt to use cgroups with pods if the cgroups are disabled.
      A similar check is already in place for containers.

      Closes: https://github.com/containers/podman/issues/13411

      Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
* / container: workdir resolution must consider symlink if explicitly configured  (Aditya R, 2022-03-02)
      While resolving `workdir` we usually create the `workdir` when `stat`
      fails with `ENOENT` or `ErrNotExist`. However, this does not apply
      when the user explicitly specifies a `workdir` via `--workdir`, which
      tells `podman` to only use the workdir if it already exists in the
      container. The same configuration is implicitly set by other `run`
      mechanisms such as `podman play kube`.

      The problem with an explicit `--workdir` (or the similar implicit
      config in `podman play kube`) is that podman currently ignores the
      fact that the workdir can also be a `symlink` whose actual `link`
      target is valid. Hence this commit ensures that, when a `workdir` is
      not found and cannot be created, podman checks whether the `workdir`
      is a `symlink` that resolves successfully to a path present in the
      container, and if so returns it as is. Docker behaves similarly.

      Signed-off-by: Aditya R <arajan@redhat.com>
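Roughly, the added check can be pictured like this in Go; the paths and the helper are illustrative only, and the real resolution in podman is more involved since it works on the container's mounted root.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// workdirUsable reports whether an explicitly configured workdir can be
// used: either it exists directly, or it is a symlink whose target
// (re-anchored inside the container root) exists.
func workdirUsable(containerRoot, workdir string) bool {
	abs := filepath.Join(containerRoot, workdir)

	if _, err := os.Stat(abs); err == nil {
		return true // workdir exists as-is
	}

	// Stat follows symlinks, so a missing-looking entry may still be a
	// symlink; check the link itself and then its resolved target.
	if fi, err := os.Lstat(abs); err == nil && fi.Mode()&os.ModeSymlink != 0 {
		target, err := os.Readlink(abs)
		if err != nil {
			return false
		}
		if !filepath.IsAbs(target) {
			target = filepath.Join(filepath.Dir(workdir), target)
		}
		// Re-anchor the link target inside the container root.
		resolved := filepath.Join(containerRoot, target)
		_, err = os.Stat(resolved)
		return err == nil
	}
	return false
}

func main() {
	fmt.Println(workdirUsable("/var/lib/containers/mnt", "/app"))
}
```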
* Merge pull request #13362 from keonchennl/pod-logs-add-flag  (OpenShift Merge Robot, 2022-03-01)
      Add the names flag for pod logs
| * Add the names flag for pod logs  (Xueyuan Chen, 2022-03-01)
      Fixes containers#13261

      Signed-off-by: Xueyuan Chen <X.Chen-47@student.tudelft.nl>
* | Add podman volume mount support  (Daniel J Walsh, 2022-02-28)
      Fixes: https://github.com/containers/podman/issues/12768

      Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* Show version of the deb package in info output  (Anders F Björklund, 2022-02-24)
      Previously this showed just the name of the package, followed by the
      path repeated again (already stated on the line above).

      [NO NEW TESTS NEEDED]

      Signed-off-by: Anders F Björklund <anders.f.bjorklund@gmail.com>
* Merge pull request #13314 from flouthoc/container-commit-squash  (OpenShift Merge Robot, 2022-02-23)
      container-commit: support `--squash` to squash layers into one if users want.
| * container-commit: support --squash to squash layers into one  (Aditya R, 2022-02-23)
      Allow users to commit containers into a single layer.

      Usage:

      ```bash
      podman container commit --squash <name>
      ```

      Signed-off-by: Aditya R <arajan@redhat.com>
* | Merge pull request #13232 from rhatdan/volumes  (OpenShift Merge Robot, 2022-02-23)
      Don't log errors on removing volumes in use, if container --volumes-from
| * | Don't log errors on removing volumes in use, if container --volumes-from  (Daniel J Walsh, 2022-02-21)
      When removing a container created with --volumes-from a container
      that was created with a built-in volume, we complain if the original
      container still exists. Since this is an expected state, we should
      not complain about it.

      Fixes: https://github.com/containers/podman/issues/12808

      Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* | Remove the runtime lock  (Matthew Heon, 2022-02-22)
      This primarily served to protect us against shutting down the Libpod
      runtime while operations (like creating a container) were happening.
      However, it was very inconsistently implemented (a lot of our
      longer-lived functions, like pulling images, just didn't implement it
      at all...) and I'm not sure how much we really care about this
      very-specific error case?

      Removing it also removes a lot of potential deadlocks, which is nice.

      [NO NEW TESTS NEEDED]

      Signed-off-by: Matthew Heon <mheon@redhat.com>
* | Merge pull request #13059 from cdoern/clone  (OpenShift Merge Robot, 2022-02-22)
      Implement Podman Container Clone
| * Implement Podman Container Clone  (cdoern, 2022-02-20)
      podman container clone takes the id of an existing container and
      creates a specgen from the given container's config, recreating all
      proper namespaces and overriding spec options like resource limits
      and the container name if given in the cli options.

      This command utilizes the common function DefineCreateFlags, meaning
      that we can funnel as many create options as we want into clone over
      time, allowing the user to clone with as much or as little of the
      original config as they want.

      container clone takes a second argument which is a new name and a
      third argument which is an image name to use instead of the original
      container's.

      The currently supported flags are:

      --destroy (remove the original container)
      --name (new ctr name)
      --cpus (sets cpu period and quota)
      --cpuset-cpus
      --cpu-period
      --cpu-rt-period
      --cpu-rt-runtime
      --cpu-shares
      --cpuset-mems
      --memory
      --run

      resolves #10875

      Signed-off-by: cdoern <cdoern@redhat.com>
      Signed-off-by: cdoern <cbdoer23@g.holycross.edu>
* | Propagate $CONTAINERS_CONF to conmon  (David Gibson, 2022-02-18)
      The CONTAINERS_CONF environment variable can be used to override the
      configuration file, which is useful for testing. However, at the
      moment this variable is not propagated to conmon. That means, in
      particular, that conmon can't propagate it back to podman when
      invoking its --exit-command. The mismatch in configuration between
      the starting and cleaning up podman instances can cause a variety of
      errors.

      This patch also adds two related test cases. One checks explicitly
      that the correct CONTAINERS_CONF value appears in conmon's
      environment. The other checks for a possible specific impact of this
      bug: if we use a nonstandard name for the runtime (even if its path
      is just a regular crun), then the podman container cleanup invoked at
      container exit will fail. That has the effect of meaning that a
      container started with -d --rm won't be correctly removed once
      complete.

      Fixes #12917

      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
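Propagating the variable is plain environment handling when building the conmon command; a simplified sketch (not libpod's actual conmon launcher) might look like this:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// conmonEnv builds a deliberately minimal environment for the conmon
// child process and forwards CONTAINERS_CONF when it is set, so that the
// cleanup podman started via conmon's --exit-command sees the same
// configuration file as the podman that created the container.
func conmonEnv() []string {
	env := []string{"PATH=" + os.Getenv("PATH")}
	if v, ok := os.LookupEnv("CONTAINERS_CONF"); ok {
		env = append(env, "CONTAINERS_CONF="+v)
	}
	return env
}

func main() {
	// `env` is used here only to make the forwarded variable visible.
	cmd := exec.Command("env")
	cmd.Env = conmonEnv()
	out, err := cmd.Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Print(string(out))
}
```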
* Fix checkpoint/restore pod tests  (Adrian Reber, 2022-02-11)
      Checkpoint/restore pod tests do not run with an older runc, and now
      that runc 1.1.0 appears in the repositories it was detected that the
      tests were failing. This was not detected in CI because CI was not
      yet using runc 1.1.0.

      Signed-off-by: Adrian Reber <areber@redhat.com>
* Modify /etc/resolv.conf when connecting/disconnecting  (Matthew Heon, 2022-02-10)
      The `podman network connect` and `podman network disconnect` commands
      give containers access to different networks than the ones they were
      created with; these networks can also have DNS servers associated
      with them. Until now, however, we did not modify resolv.conf as
      network membership changed.

      With this PR, `podman network connect` will add any new nameservers
      supported by the new network to the container's /etc/resolv.conf, and
      `podman network disconnect` will do the opposite, removing the
      network's nameservers from `/etc/resolv.conf`.

      Fixes #9603

      Signed-off-by: Matthew Heon <matthew.heon@pm.me>
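The connect/disconnect behavior amounts to adding and removing `nameserver` lines; a purely illustrative sketch of that string manipulation follows (podman's real implementation works on the container's resolv.conf path and handles more cases):

```go
package main

import (
	"fmt"
	"strings"
)

// addNameservers appends "nameserver" entries for servers not already
// present in the resolv.conf contents (network connect).
func addNameservers(conf string, servers []string) string {
	for _, s := range servers {
		line := "nameserver " + s
		if !strings.Contains(conf, line) {
			conf = strings.TrimRight(conf, "\n") + "\n" + line + "\n"
		}
	}
	return conf
}

// removeNameservers drops the "nameserver" entries belonging to a network
// the container has just left (network disconnect).
func removeNameservers(conf string, servers []string) string {
	drop := make(map[string]bool, len(servers))
	for _, s := range servers {
		drop["nameserver "+s] = true
	}
	var kept []string
	for _, line := range strings.Split(conf, "\n") {
		if !drop[strings.TrimSpace(line)] {
			kept = append(kept, line)
		}
	}
	return strings.Join(kept, "\n")
}

func main() {
	conf := "search example.com\nnameserver 10.88.0.1\n"
	conf = addNameservers(conf, []string{"10.89.0.1"})
	fmt.Print(conf)
	fmt.Println("---")
	fmt.Print(removeNameservers(conf, []string{"10.89.0.1"}))
}
```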
* Merge pull request #13163 from myml/myml/fix-duration  (OpenShift Merge Robot, 2022-02-08)
      fix: Multiplication of durations
| * fix: Multiplication of durations  (myml, 2022-02-08)
      `killContainerTimeout` is already 5 seconds.

      [NO NEW TESTS NEEDED]

      Signed-off-by: myml <wurongjie1@gmail.com>
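The class of bug being fixed here is multiplying a value that is already a `time.Duration` by `time.Second` again, which scales it by a factor of one billion. A minimal illustration (the constant mirrors the name quoted in the commit message):

```go
package main

import (
	"fmt"
	"time"
)

// killContainerTimeout is already a duration of 5 seconds.
const killContainerTimeout = 5 * time.Second

func main() {
	// Bug: multiplying a duration by time.Second scales it by 1e9,
	// so "5 seconds" becomes roughly 158 years.
	wrong := killContainerTimeout * time.Second
	fmt.Println(wrong) // 1388888h53m20s

	// Fix: the constant is already a time.Duration; use it directly.
	fmt.Println(killContainerTimeout) // 5s
}
```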
* | Merge pull request #13159 from Luap99/slirp4-scope  (OpenShift Merge Robot, 2022-02-08)
      move rootless netns slirp4netns process to systemd user.slice
| * move rootless netns slirp4netns process to systemd user.slice  (Paul Holzinger, 2022-02-07)
      When running podman inside systemd user units, it is possible that
      systemd kills the rootless netns slirp4netns process because it was
      started in the default unit cgroup. When the unit is stopped all
      processes in that cgroup are killed. Since the slirp4netns process is
      run once for all containers it should not be killed. To make sure
      systemd will not kill the process we move it to the user.slice.

      Fixes #13153

      Signed-off-by: Paul Holzinger <pholzing@redhat.com>
* | Merge pull request #13129 from flouthoc/healthcheck-session-read-from-pipe  (OpenShift Merge Robot, 2022-02-07)
      healthcheck, libpod: Read healthcheck event output from os pipe
| * healthcheck, libpod: Read healthcheck event output from os pipe  (Aditya R, 2022-02-04)
      It seems we are ignoring output from the healthcheck session. Open a
      valid pipe to the healthcheck session in order to read its output.
      Use a common pipe for both `stdout/stderr` since that was the
      previous behaviour as well.

      Signed-off-by: Aditya R <arajan@redhat.com>
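The general pattern of using a single OS pipe as both stdout and stderr of a child process and reading the combined output can be sketched like this; it illustrates the technique, not the libpod healthcheck code:

```go
package main

import (
	"fmt"
	"io"
	"os"
	"os/exec"
)

func main() {
	// A single pipe serves as both stdout and stderr of the healthcheck
	// command, matching the previous behaviour of mixing both streams.
	r, w, err := os.Pipe()
	if err != nil {
		panic(err)
	}

	cmd := exec.Command("sh", "-c", "echo healthy; echo warning >&2")
	cmd.Stdout = w
	cmd.Stderr = w

	if err := cmd.Start(); err != nil {
		panic(err)
	}
	// Close the parent's copy of the write end so the read below sees
	// EOF once the child exits.
	w.Close()

	out, err := io.ReadAll(r)
	if err != nil {
		panic(err)
	}
	if err := cmd.Wait(); err != nil {
		panic(err)
	}
	fmt.Printf("healthcheck output: %q\n", string(out))
}
```

Closing the parent's copy of the write end is what lets the read terminate; forgetting it is a common source of hangs with this pattern.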
* | Fix: Do not print error when parsing journald log fails  (myml, 2022-02-07)
      `foramtError` was written as `err`.

      [NO NEW TESTS NEEDED]

      Signed-off-by: myml <wurongjie1@gmail.com>