Every exec session run attached will, on exit, do two things: it
will signal the associated `podman exec` that it is finished
(allowing Podman to collect the exit code and exit), and spawn a
cleanup process to clean up the exec session (in case the `podman
exec` process died, we still need to clean up). If an exec
session is created that exits almost instantly but generates a
large amount of output (e.g. prints thousands of lines), the
cleanup process can potentially execute before `podman exec` has
a chance to read the exit code, resulting in errors. Handle this
by detecting whether the cleanup process has already removed the
exec session before handling the error from reading the exec exit
code.
[NO NEW TESTS NEEDED] I have no idea how to test this in CI.
Fixes #13227
Signed-off-by: Matthew Heon <mheon@redhat.com>
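A minimal Go sketch of the race handling described above; the error value and helper are illustrative stand-ins for podman's internal exec API, not its real signatures:
```go
package main

import (
	"errors"
	"fmt"
)

// errNoSuchExecSession stands in for podman's "no such exec session"
// error; the real handling lives in libpod and is more involved.
var errNoSuchExecSession = errors.New("no such exec session")

// waitExecExitCode sketches the fix: if reading the exit code fails
// because the cleanup process already removed the session, fall back to
// the exit code stored before removal instead of reporting an error.
func waitExecExitCode(readExitCode func() (int, error), storedExitCode int) (int, error) {
	code, err := readExitCode()
	if err != nil {
		if errors.Is(err, errNoSuchExecSession) {
			// Session already cleaned up; its exit code was
			// recorded before removal, so use that instead.
			return storedExitCode, nil
		}
		return -1, fmt.Errorf("reading exec exit code: %w", err)
	}
	return code, nil
}

func main() {
	gone := func() (int, error) { return 0, errNoSuchExecSession }
	fmt.Println(waitExecExitCode(gone, 0)) // 0 <nil>
}
```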
libpod: drop warning if cgroup doesn't exist
do not print a warning on cgroup removal if it doesn't exist.
Closes: https://github.com/containers/podman/issues/13382
[NO NEW TESTS NEEDED]
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
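A short Go sketch of the idea, with a plain file removal standing in for the real cgroup teardown: only warn when removal fails for a reason other than the cgroup already being gone.
```go
package main

import (
	"errors"
	"io/fs"
	"log"
	"os"
)

// removeCgroup sketches the change: warn only when removal fails for a
// reason other than the cgroup already being gone. The path handling is
// simplified; the real code removes a cgroup via libpod.
func removeCgroup(path string) {
	if err := os.Remove(path); err != nil && !errors.Is(err, fs.ErrNotExist) {
		log.Printf("warning: failed to remove cgroup %s: %v", path, err)
	}
}

func main() {
	removeCgroup("/sys/fs/cgroup/no-such-cgroup") // silent: already gone
}
```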
enable linters
Numerous issues remain, especially in tests/e2e.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Still an unknown number remains but I am running out of patience.
Adding dots is not the best use of my time.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
A number of cases looked suspicious, so I marked them with `FIXME`s to
leave some breadcrumbs.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
healthcheck: stop showing wrong status when `--no-healthcheck` is set
Containers started with `--no-healthcheck` are configured to contain no
healthcheck, with the test configured as `NONE`. Podman shows a wrong
status in such cases.
This commit fixes the faulty behavior of the status field for
containers started with `--no-healthcheck`.
Signed-off-by: Aditya R <arajan@redhat.com>
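A hedged Go sketch of the status logic described above; the config type and status strings are illustrative, not podman's actual types:
```go
package main

import "fmt"

// healthConfig is an illustrative stand-in for the container's
// healthcheck configuration.
type healthConfig struct {
	Test []string
}

// healthStatus sketches the fix: containers created with
// --no-healthcheck get their test configured as NONE, so no health
// status should be reported for them at all.
func healthStatus(hc *healthConfig, lastStatus string) string {
	if hc == nil || (len(hc.Test) == 1 && hc.Test[0] == "NONE") {
		return "" // no healthcheck configured: show no status
	}
	return lastStatus
}

func main() {
	fmt.Printf("%q\n", healthStatus(&healthConfig{Test: []string{"NONE"}}, "starting")) // ""
}
```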
libpod: drop warning for Fedora 31
drop a warning for runc not supporting cgroup v2 on Fedora 31.
[NO NEW TESTS NEEDED]
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
* supports Go 1.18
* disable a number of new linters
* fix minor stylecheck issues
[NO NEW TESTS NEEDED]
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
go fmt: use go 1.18 conditional-build syntax
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
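For reference, this is what the Go 1.17+ conditional-build syntax looks like next to the old form that gofmt keeps in sync; the specific `linux && !remote` constraint here is only an example:
```go
// The new Go 1.17+ conditional-build syntax that gofmt now emits; gofmt
// keeps the old // +build line below in sync with it until the old form
// is dropped. In the old syntax a comma means AND, so these two lines
// express the same constraint.
//go:build linux && !remote
// +build linux,!remote

package mypkg
```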
Inside the podman machine VM we always remove the host IP from the port
mapping, because it should only be used on the actual host. Otherwise
we run into issues when binding 127.0.0.1 or trying to bind a
host address that is not available in the VM.
This was already done for cni/netavark ports and slirp4netns, but not for
the port bindings inside libpod, which are only used as root.
[NO NEW TESTS NEEDED] We still do not have machine tests!
Fixes #13543
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
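A minimal Go sketch of the idea; the port-mapping type is an illustrative stand-in for the real networking types used by libpod:
```go
package main

import "fmt"

// portMapping mirrors the shape of a port binding for illustration; the
// real type comes from the common networking types used by libpod.
type portMapping struct {
	HostIP        string
	HostPort      uint16
	ContainerPort uint16
}

// stripHostIPs sketches the fix: inside a podman machine VM the host IP
// recorded on the real host may be unbindable (e.g. an address that does
// not exist in the VM), so it is cleared and the port binds to all
// addresses instead.
func stripHostIPs(ports []portMapping) {
	for i := range ports {
		ports[i].HostIP = ""
	}
}

func main() {
	ports := []portMapping{{HostIP: "127.0.0.1", HostPort: 8080, ContainerPort: 80}}
	stripHostIPs(ports)
	fmt.Printf("%+v\n", ports[0]) // HostIP cleared
}
```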
Exit code change BZ #2052697
* systemctl stop podman.service will now return exit code 0
* Update test framework to support JSON boolean and numeric values
Signed-off-by: Jhon Honce <jhonce@redhat.com>
We only need a small part of the k8s dependencies but they are the
biggest dependencies in podman by far. Moving them into podman allows us
to remove the unnecessary parts.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
When enable_ipv6=true is set for slirp4netns (the default since podman v4),
we try to set the accept sysctl. This sysctl does not exist on
systems that have ipv6 disabled. In this case we should not error and
should just skip the extra ipv6 setup.
Also, the logic to wait for the slirp4netns setup was broken: it did
not actually wait until the sysctl was set before starting slirp4netns.
This is now fixed by using two `sync.WaitGroup`s.
[NO NEW TESTS NEEDED]
Fixes #13388
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
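An illustrative Go sketch of ordering the setup with two `sync.WaitGroup`s, simplified from what the real fix would need: the sysctl step must complete before slirp4netns starts, and the caller waits until slirp4netns is up.
```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var sysctlReady, slirpReady sync.WaitGroup
	sysctlReady.Add(1)
	slirpReady.Add(1)

	go func() {
		defer sysctlReady.Done()
		fmt.Println("setting ipv6 sysctl (skipped if ipv6 is disabled)")
	}()

	go func() {
		defer slirpReady.Done()
		sysctlReady.Wait() // do not start slirp4netns before the sysctl is set
		fmt.Println("starting slirp4netns")
	}()

	slirpReady.Wait()
}
```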
Signed-off-by: LStandman <65296484+LStandman@users.noreply.github.com>
Closes: https://github.com/containers/podman/issues/13150
Signed-off-by: 😎 Mostafa Emami <mustafaemami@gmail.com>
libpod: pods do not use cgroups if --cgroups=disabled
do not attempt to use cgroups with pods if cgroups are disabled.
A similar check is already in place for containers.
Closes: https://github.com/containers/podman/issues/13411
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
While resolving `workdir` we normally create the `workdir` when `stat`
fails with `ENOENT` or `ErrNotExist`. However, this does not hold when
the user explicitly specifies a `workdir` while `running` using
`--workdir`, which tells `podman` to use the workdir only if it exists
in the container. The same configuration is set implicitly by other
`run` mechanisms like `podman play kube`.
The problem with an explicit `--workdir`, or the equivalent implicit
config in `podman play kube`, is that currently podman ignores the fact
that the workdir can also be a `symlink` whose actual `link` target
could be valid.
Hence this commit ensures that in such scenarios, when a `workdir` is
not found and we cannot create it, podman checks whether the `workdir`
is a `symlink`; if the `link` resolves successfully and the resolved
target is present in the container, we return it as is.
Docker behaves similarly.
Signed-off-by: Aditya R <arajan@redhat.com>
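A simplified Go sketch of the behaviour described above, assuming a single absolute symlink and ignoring the secure path resolution the real code would need:
```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
)

// resolveWorkdir returns nil if the workdir exists in the container, or
// if it is a symlink whose absolute target exists inside the container
// root rather than on the host.
func resolveWorkdir(root, workdir string) error {
	abs := filepath.Join(root, workdir)
	if _, err := os.Stat(abs); err == nil {
		return nil // workdir exists in the container
	} else if !errors.Is(err, fs.ErrNotExist) {
		return err
	}
	// The workdir may be a symlink whose absolute target must be
	// resolved relative to the container root, not the host.
	if fi, err := os.Lstat(abs); err == nil && fi.Mode()&os.ModeSymlink != 0 {
		target, err := os.Readlink(abs)
		if err != nil {
			return err
		}
		if filepath.IsAbs(target) {
			if _, err := os.Stat(filepath.Join(root, target)); err == nil {
				return nil // link target exists inside the container
			}
		}
	}
	return fmt.Errorf("workdir %q does not exist on container", workdir)
}

func main() {
	fmt.Println(resolveWorkdir("/tmp/rootfs-example", "/app"))
}
```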
Add the names flag for pod logs
Fixes containers#13261
Signed-off-by: Xueyuan Chen <X.Chen-47@student.tudelft.nl>
Fixes: https://github.com/containers/podman/issues/12768
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Previously this just showed the name of the package, followed by
the path repeated again (already stated on the line above).
[NO NEW TESTS NEEDED]
Signed-off-by: Anders F Björklund <anders.f.bjorklund@gmail.com>
container-commit: support `--squash` to squash layers into one if users want.
Allow users to commit containers into a single layer.
Usage
```bash
podman container commit --squash <name>
```
Signed-off-by: Aditya R <arajan@redhat.com>
Don't log errors on removing volumes in use, if container --volumes-from
When removing a container created with --volumes-from pointing at a
container created with a built-in volume, we complain if the original
container still exists. Since this is an expected state, we should not
complain about it.
Fixes: https://github.com/containers/podman/issues/12808
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
This primarily served to protect us against shutting down the
Libpod runtime while operations (like creating a container) were
happening. However, it was very inconsistently implemented (a lot
of our longer-lived functions, like pulling images, just didn't
implement it at all...) and I'm not sure how much we really care
about this very-specific error case?
Removing it also removes a lot of potential deadlocks, which is
nice.
[NO NEW TESTS NEEDED]
Signed-off-by: Matthew Heon <mheon@redhat.com>
Implement Podman Container Clone
podman container clone takes the ID of an existing container and creates a specgen
from the given container's config, recreating all proper namespaces and overriding
spec options like resource limits and the container name, if given in the CLI options.
This command utilizes the common function DefineCreateFlags, meaning that we can
funnel as many create options as we want into clone over time, allowing the user to
clone with as much or as little of the original config as they want.
container clone takes a second argument, which is a new name, and a third argument,
which is an image name to use instead of the original container's.
The currently supported flags are:
--destroy (remove the original container)
--name (new ctr name)
--cpus (sets cpu period and quota)
--cpuset-cpus
--cpu-period
--cpu-rt-period
--cpu-rt-runtime
--cpu-shares
--cpuset-mems
--memory
--run
resolves #10875
Signed-off-by: cdoern <cdoern@redhat.com>
Signed-off-by: cdoern <cbdoer23@g.holycross.edu>
The CONTAINERS_CONF environment variable can be used to override the
configuration file, which is useful for testing. However, at the moment
this variable is not propagated to conmon. That means, in particular, that
conmon can't propagate it back to podman when invoking its --exit-command.
The mismatch in configuration between the starting and cleaning-up podman
instances can cause a variety of errors.
This patch also adds two related test cases. One checks explicitly that
the correct CONTAINERS_CONF value appears in conmon's environment. The
other checks for a possible specific impact of this bug: if we use a
nonstandard name for the runtime (even if its path is just a regular crun),
then the podman container cleanup invoked at container exit will fail.
That means a container started with -d --rm won't be correctly removed
once complete.
Fixes #12917
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
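A hedged Go sketch of the propagation: conmon is started with a minimal environment rather than inheriting podman's, so the override has to be forwarded explicitly. The PATH value and function shape are illustrative, not podman's actual code:
```go
package main

import (
	"fmt"
	"os"
)

// conmonEnv sketches the fix: since conmon gets a minimal environment,
// an override like CONTAINERS_CONF must be forwarded explicitly, or
// conmon cannot hand it back to the podman --exit-command it invokes
// for cleanup.
func conmonEnv() []string {
	env := []string{"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"}
	if conf, ok := os.LookupEnv("CONTAINERS_CONF"); ok {
		env = append(env, "CONTAINERS_CONF="+conf)
	}
	return env
}

func main() {
	fmt.Println(conmonEnv())
}
```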
Checkpoint/restore pod tests do not run with an older runc, and now
that runc 1.1.0 appears in the repositories it was detected that the
tests were failing. This was not detected in CI, as CI was not yet
using runc 1.1.0.
Signed-off-by: Adrian Reber <areber@redhat.com>
The `podman network connect` and `podman network disconnect`
commands give containers access to different networks than the
ones they were created with; these networks can also have DNS
servers associated with them. Until now, however, we did not
modify resolv.conf as network membership changed.
With this PR, `podman network connect` will add any new
nameservers supported by the new network to the container's
/etc/resolv.conf, and `podman network disconnect` will do
the opposite, removing the network's nameservers from
`/etc/resolv.conf`.
Fixes #9603
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
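Illustrative Go helpers for the behaviour above: connect adds the new network's nameservers to the container's set without duplicates, disconnect removes them. The real code edits /etc/resolv.conf; these slice helpers are only a sketch:
```go
package main

import "fmt"

func connectNameservers(current, network []string) []string {
	seen := make(map[string]bool, len(current))
	for _, ns := range current {
		seen[ns] = true
	}
	for _, ns := range network {
		if !seen[ns] { // add only nameservers not already present
			current = append(current, ns)
		}
	}
	return current
}

func disconnectNameservers(current, network []string) []string {
	drop := make(map[string]bool, len(network))
	for _, ns := range network {
		drop[ns] = true
	}
	out := make([]string, 0, len(current))
	for _, ns := range current {
		if !drop[ns] { // keep everything the network did not provide
			out = append(out, ns)
		}
	}
	return out
}

func main() {
	ns := connectNameservers([]string{"10.0.2.3"}, []string{"10.89.0.1"})
	fmt.Println(ns)                                               // [10.0.2.3 10.89.0.1]
	fmt.Println(disconnectNameservers(ns, []string{"10.89.0.1"})) // [10.0.2.3]
}
```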
fix: Multiplication of durations
'killContainerTimeout' is already 5 seconds
[NO NEW TESTS NEEDED]
Signed-off-by: myml <wurongjie1@gmail.com>
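A small Go demonstration of the duration-multiplication pitfall this commit fixes; the constant value matches the commit message, the rest is illustrative:
```go
package main

import (
	"fmt"
	"time"
)

// killContainerTimeout is already a time.Duration of five seconds, as
// the commit message notes.
const killContainerTimeout = 5 * time.Second

func main() {
	// Buggy: multiplying an existing Duration by time.Second again
	// yields 5e9 * 1e9 nanoseconds, roughly 158 years.
	fmt.Println(killContainerTimeout * time.Second)

	// Fixed: the constant is already in the right unit.
	fmt.Println(killContainerTimeout)
}
```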
move rootless netns slirp4netns process to systemd user.slice
When running podman inside systemd user units, it is possible that
systemd kills the rootless netns slirp4netns process because it was
started in the default unit cgroup. When the unit is stopped all
processes in that cgroup are killed. Since the slirp4netns process is
run once for all containers it should not be killed. To make sure
systemd will not kill the process we move it to the user.slice.
Fixes #13153
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
healthcheck, libpod: Read healthcheck event output from os pipe
It seems we were ignoring output from the healthcheck session.
Open a valid pipe to the healthcheck session in order to read its output.
Use a common pipe for both `stdout` and `stderr`, since that was the
previous behaviour as well.
Signed-off-by: Aditya R <arajan@redhat.com>
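A minimal Go sketch of the approach using `os.Pipe`, with an arbitrary command standing in for the healthcheck session; for brevity the pipe is read only after the command exits, so large output would need a concurrent reader:
```go
package main

import (
	"fmt"
	"io"
	"os"
	"os/exec"
)

// runHealthcheck opens a real OS pipe and hands its write end to the
// session for both stdout and stderr so the output can be read back.
func runHealthcheck(cmd *exec.Cmd) (string, error) {
	r, w, err := os.Pipe()
	if err != nil {
		return "", err
	}
	defer r.Close()
	cmd.Stdout = w // one common pipe for stdout and stderr,
	cmd.Stderr = w // matching the previous behaviour
	runErr := cmd.Run()
	w.Close() // close the write end so the read below sees EOF
	out, readErr := io.ReadAll(r)
	if runErr != nil {
		return string(out), runErr
	}
	return string(out), readErr
}

func main() {
	out, err := runHealthcheck(exec.Command("echo", "healthy"))
	fmt.Printf("output=%q err=%v\n", out, err)
}
```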
foramtError was written as err
[NO NEW TESTS NEEDED]
Signed-off-by: myml <wurongjie1@gmail.com>
append podman dns search domain
Append the podman dns search domain to the host search domains when we
use the dnsname/aardvark server. Previously it would only use podman
search domains and discard the host domains.
Fixes #13103
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
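A one-function Go sketch of the merge; treating "dns.podman" as the podman search domain is an assumption here, and the ordering in the real fix may differ:
```go
package main

import "fmt"

// mergeSearchDomains keeps the host's search domains and adds podman's
// own instead of discarding the host entries.
func mergeSearchDomains(hostDomains []string) []string {
	return append(hostDomains, "dns.podman") // "dns.podman" assumed
}

func main() {
	fmt.Println(mergeSearchDomains([]string{"example.com"})) // [example.com dns.podman]
}
```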
Podman pod create --share-parent vs --share=cgroup
|