Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
libpod: drop warning for Fedora 31
drop a warning for runc not supporting cgroup v2 on Fedora 31.
[NO NEW TESTS NEEDED]
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
* supports Go 1.18
* disable a number of new linters
* fix minor stylecheck issues
[NO NEW TESTS NEEDED]
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
go fmt: use go 1.18 conditional-build syntax
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Inside the podman machine VM we always remove the host IP from the port
mapping, because it should only be used on the actual host. Otherwise you
run into issues when we bind 127.0.0.1 or try to bind a host address that
is not available in the VM.
This was already done for cni/netavark ports and slirp4netns, but not for
the port bindings inside libpod, which are only used as root.
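For illustration, this is the kind of mapping affected (image and host IP are only an example): on the host the published port is still bound to 127.0.0.1, while inside the machine VM the host-IP part is now stripped before binding.
```console
$ podman --remote run -d -p 127.0.0.1:8080:80 docker.io/library/nginx
```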
[NO NEW TESTS NEEDED] We still do not have machine tests!
Fixes #13543
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Exit code change BZ #2052697
* systemctl stop podman.service will now return exit code 0
* Update test framework to support JSON boolean and numeric values
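For example (assuming a systemd-managed podman.service unit), stopping the service should now report success:
```console
$ systemctl stop podman.service
$ echo $?
0
```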
Signed-off-by: Jhon Honce <jhonce@redhat.com>
We only need a small part of the k8s dependencies but they are the
biggest dependencies in podman by far. Moving them into podman allows us
to remove the unnecessary parts.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
When enable_ipv6=true is set for slirp4netns (the default since podman v4),
we try to set the accept sysctl. This sysctl will not exist on systems
that have ipv6 disabled. In this case we should not error and should just
skip the extra ipv6 setup.
Also, the current logic to wait for the slirp4netns setup was broken: it
did not actually wait until the sysctl was set before starting slirp4netns.
This should now be fixed by using two `sync.WaitGroup`s.
[NO NEW TESTS NEEDED]
Fixes #13388
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Signed-off-by: LStandman <65296484+LStandman@users.noreply.github.com>
Closes: https://github.com/containers/podman/issues/13150
Signed-off-by: 😎 Mostafa Emami <mustafaemami@gmail.com>
libpod: pods do not use cgroups if --cgroups=disabled
do not attempt to use cgroups with pods if the cgroups are disabled.
A similar check is already in place for containers.
Closes: https://github.com/containers/podman/issues/13411
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
While resolving `workdir` we usually create it when `stat` fails with
`ENOENT`/`ErrNotExist`. However, this is not the case when the user
explicitly specifies a `workdir` with `--workdir`, which tells `podman`
to only use the workdir if it already exists in the container. The same
configuration is set implicitly by other `run` mechanisms such as
`podman play kube`.
The problem with an explicit `--workdir` (or the similar implicit config
in `podman play kube`) is that podman currently ignores the fact that the
workdir can also be a `symlink` whose target is valid.
Hence this commit ensures that, when a `workdir` is not found and cannot
be created, podman checks whether the `workdir` is a `symlink`; if the
link resolves successfully and the resolved target is present in the
container, we return it as is.
Docker behaves similarly.
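A hypothetical illustration (image and path are made up): if `/workdir-link` is a symlink in `myimage` whose target directory exists, this now succeeds instead of failing with a missing-workdir error.
```console
$ podman run --rm --workdir /workdir-link myimage pwd
```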
Signed-off-by: Aditya R <arajan@redhat.com>
Add the names flag for pod logs
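A brief usage sketch (pod name is hypothetical); with `--names`, each log line is prefixed with the name of the container it came from.
```console
$ podman pod logs --names mypod
```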
Fixes containers#13261
Signed-off-by: Xueyuan Chen <X.Chen-47@student.tudelft.nl>
Fixes: https://github.com/containers/podman/issues/12768
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Previously this showed just the name of the package, followed by the path
repeated again (already stated on the line above).
[NO NEW TESTS NEEDED]
Signed-off-by: Anders F Björklund <anders.f.bjorklund@gmail.com>
container-commit: support `--squash` to squash layers into one if users want.
Allow users to commit containers into a single layer.
Usage
```bash
podman container commit --squash <name>
```
Signed-off-by: Aditya R <arajan@redhat.com>
Don't log errors on removing volumes in use, if container --volumes-from
When removing a container that was created with --volumes-from pointing
at a container with a built-in volume, we complain if the original
container still exists. Since this is an expected state, we should not
complain about it.
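The scenario, roughly (assuming `volimage` is a hypothetical image that declares a VOLUME):
```console
$ podman create --name data volimage
$ podman run --name app --volumes-from data alpine true
$ podman rm app   # should no longer log a spurious "volume is being used" error
```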
Fixes: https://github.com/containers/podman/issues/12808
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
This primarily served to protect us against shutting down the
Libpod runtime while operations (like creating a container) were
happening. However, it was very inconsistently implemented (a lot
of our longer-lived functions, like pulling images, just didn't
implement it at all...) and I'm not sure how much we really care
about this very-specific error case?
Removing it also removes a lot of potential deadlocks, which is
nice.
[NO NEW TESTS NEEDED]
Signed-off-by: Matthew Heon <mheon@redhat.com>
Implement Podman Container Clone
podman container clone takes the id of an existing container and creates a specgen from
the given container's config, recreating all proper namespaces and overriding spec options
like resource limits and the container name if given in the cli options.
this command utilizes the common function DefineCreateFlags, meaning that we can funnel as
many create options as we want into clone over time, allowing the user to clone with as
much or as little of the original config as they want.
container clone takes a second argument, which is a new name, and a third argument, which
is an image name to use instead of the original container's.
the current supported flags are:
--destroy (remove the original container)
--name (new ctr name)
--cpus (sets cpu period and quota)
--cpuset-cpus
--cpu-period
--cpu-rt-period
--cpu-rt-runtime
--cpu-shares
--cpuset-mems
--memory
--run
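A hedged usage sketch based on the description above (container and image names are made up):
```console
$ podman container clone --destroy --cpus 2 oldctr
$ podman container clone oldctr newctr alpine
```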
resolves #10875
Signed-off-by: cdoern <cdoern@redhat.com>
Signed-off-by: cdoern <cbdoer23@g.holycross.edu>
Signed-off-by: cdoern <cdoern@redhat.com>
The CONTAINERS_CONF environment variable can be used to override the
configuration file, which is useful for testing. However, at the moment
this variable is not propagated to conmon. That means, in particular, that
conmon can't propagate it back to podman when invoking its --exit-command.
The mismatch in configuration between the starting and cleaning up podman
instances can cause a variety of errors.
This patch also adds two related test cases. One checks explicitly that
the correct CONTAINERS_CONF value appears in conmon's environment. The
other checks for a possible specific impact of this bug: if we use a
nonstandard name for the runtime (even if its path is just a regular crun),
then the podman container cleanup invoked at container exit will fail.
That has the effect of meaning that a container started with -d --rm won't
be correctly removed once complete.
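A rough sketch of the affected scenario (config path is made up); with the fix, conmon and the cleanup process see the same CONTAINERS_CONF as the podman instance that started the container, so a `-d --rm` container is removed once it exits:
```console
$ export CONTAINERS_CONF=/tmp/test-containers.conf
$ podman run -d --rm alpine sleep 2
$ tr '\0' '\n' < /proc/$(pgrep -n conmon)/environ | grep CONTAINERS_CONF
CONTAINERS_CONF=/tmp/test-containers.conf
```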
Fixes #12917
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Checkpoint/restore pod tests are not run with an older runc, and now that
runc 1.1.0 appears in the repositories it was detected that the tests were
failing. This was not caught in CI because CI was not yet using runc 1.1.0.
Signed-off-by: Adrian Reber <areber@redhat.com>
The `podman network connect` and `podman network disconnect`
commands give containers access to different networks than the
ones they were created with; these networks can also have DNS
servers associated with them. Until now, however, we did not
modify resolv.conf as network membership changed.
With this PR, `podman network connect` will add any new
nameservers supported by the new network to the container's
/etc/resolv.conf, and `podman network disconnect` command will do
the opposite, removing the network's nameservers from
`/etc/resolv.conf`.
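For illustration (network and container names are hypothetical):
```console
$ podman network connect net2 web
$ podman exec web cat /etc/resolv.conf    # now also lists net2's nameservers
$ podman network disconnect net2 web      # removes them again
```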
Fixes #9603
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
fix: Multiplication of durations
'killContainerTimeout' is already 5 seconds
[NO NEW TESTS NEEDED]
Signed-off-by: myml <wurongjie1@gmail.com>
move rootless netns slirp4netns process to systemd user.slice
When running podman inside systemd user units, it is possible that
systemd kills the rootless netns slirp4netns process because it was
started in the default unit cgroup. When the unit is stopped all
processes in that cgroup are killed. Since the slirp4netns process is
run once for all containers it should not be killed. To make sure
systemd will not kill the process we move it to the user.slice.
Fixes #13153
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
healthcheck, libpod: Read healthcheck event output from os pipe
It seems we are ignoring the output from the healthcheck session.
Open a valid pipe to the healthcheck session in order to read its output.
Use a common pipe for both `stdout` and `stderr`, since that was the
previous behaviour as well.
Signed-off-by: Aditya R <arajan@redhat.com>
formatError was written as err
[NO NEW TESTS NEEDED]
Signed-off-by: myml <wurongjie1@gmail.com>
append podman dns search domain
Append the podman dns search domain to the host search domains when we
use the dnsname/aardvark server. Previously it would only use the podman
search domains and discard the host domains.
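A sketch of the expected result, assuming the host's resolv.conf contains `search example.com` (addresses and domains will differ):
```console
$ podman network create mynet
$ podman run --rm --network mynet alpine cat /etc/resolv.conf
search dns.podman example.com
nameserver 10.89.0.1
```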
Fixes #13103
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Podman pod create --share-parent vs --share=cgroup
separated cgroupNS sharing from setting the pod as the cgroup parent, and
made a new flag --share-parent which sets the pod as the cgroup parent for
all containers entering the pod.
remove cgroup from the default kernel namespaces since we want the same
default behavior as before, which is just the cgroup parent.
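A sketch of the two now-separate options (pod names are arbitrary):
```console
$ podman pod create --name p1 --share-parent=false   # do not use the pod as cgroup parent
$ podman pod create --name p2 --share=cgroup         # share the cgroup namespace instead
```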
resolves #12765
Signed-off-by: cdoern <cdoern@redhat.com>
Signed-off-by: cdoern <cbdoer23@g.holycross.edu>
Signed-off-by: cdoern <cdoern@redhat.com>
podman system prune should also remove all networks. When we want users to
migrate to the new network stack we recommend running podman system reset.
However this did not remove networks, and if there were still networks
around we would continue to use cni since this was considered an old
system.
There is one exception for the default network. It should not be removed
since this could cause other issues when it no longer exists. The
network backend detection logic ignores the default network so this is
fine.
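For example, after a reset only the default network should remain (a sketch; `--force` just skips the confirmation prompt):
```console
$ podman system reset --force
$ podman network ls   # only the default "podman" network is left
```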
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
these mount flags are already used for the /dev/shm mount on the host,
but they are not set for the bind mount itself.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
by default slirp4netns uses the tap0 device. When slirp4netns is
used, use that device by default instead of eth0.
Closes: https://github.com/containers/podman/issues/11695
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Often users want their overlay volumes to be `non-volatile`, meaning that
the same `upper` dir can be re-used by one or more containers, while the
overall nature of the volume is still `overlay`, so work is still done on
an overlay and not on the actual volume.
The following PR adds support for more advanced options, i.e. a custom
`workdir` and `upperdir` for overlay volumes, so that users can re-use the
`workdir` and `upperdir` across new containers as well.
Usage
```console
$ podman run -it -v myvol:/data:O,upperdir=/path/persistant/upper,workdir=/path/persistant/work alpine sh
```
Signed-off-by: Aditya R <arajan@redhat.com>
exec: retry rm -rf on ENOTEMPTY and EBUSY
when running on NFS, a RemoveAll could cause EBUSY because of some
unlinked files that are still kept open and "silly renamed" to
.nfs$ID.
This is only half of the fix, as conmon needs to be fixed too.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2040379
Related: https://github.com/containers/conmon/pull/319
[NO NEW TESTS NEEDED] as it requires NFS as the underlying storage.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
the config.json file for the OCI runtime is never closed. This is a
problem when running on NFS, since it leaves stale files around that
cannot be unlinked.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
We should not check if the network supports dns when we create a
container with network aliases. This could be the case for containers
created by docker-compose, for example, if the dnsname plugin is not
installed or the user uses a macvlan config where we do not support dns.
Fixes #12972
Signed-off-by: Paul Holzinger <pholzing@redhat.com>