| Commit message | Author | Age |
| |
While trying to match the permissions of the target directory, podman adds
an extra `0111`, which should not be needed if the target path does not have
execute permission.
Signed-off-by: Aditya Rajan <arajan@redhat.com>
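For illustration, a minimal Go sketch of the idea, assuming a hypothetical `copyDirPerms` helper (this is not podman's actual code):

```go
package main

import "os"

// copyDirPerms is a hypothetical helper: mirror the target directory's
// permission bits onto the destination without forcing 0111 on. Execute
// bits end up set only if the target itself has them.
func copyDirPerms(target, dest string) error {
	st, err := os.Stat(target)
	if err != nil {
		return err
	}
	return os.Chmod(dest, st.Mode().Perm()) // no extra "| 0o111"
}
```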
| |
... at least within a single service.
[NO NEW TESTS NEEDED]
because testing RNGs is problematic. (We _could_
probably inject a mock RNG implementation that always
returns the same value, or something like that.)
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
| |
Add an error return to it and its affected callers.
This should not affect behavior; the function can't currently fail.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
| |
Use a private RNG with the desired seed; don't interfere
with the other uses.
Introducing the servicePortState type is rather overkill
for the single member, but we'll add another one immediately.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
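A rough Go sketch of the approach, reusing the `servicePortState` name from the commit but with illustrative fields and methods:

```go
package main

import "math/rand"

// servicePortState holds per-service port assignment state.
type servicePortState struct {
	rng *rand.Rand // private, seeded source; leaves the global math/rand alone
}

func newServicePortState(seed int64) *servicePortState {
	return &servicePortState{rng: rand.New(rand.NewSource(seed))}
}

// nextPort draws from the private RNG, so seeding for reproducible port
// assignment cannot interfere with other users of the global source.
func (s *servicePortState) nextPort() int {
	return 32768 + s.rng.Intn(28232) // dynamic port range 32768-60999
}
```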
| |
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
| |
If the /proc/$PID/cgroup file doesn't exist, it is likely the
container was terminated in the meantime, so report ErrCtrStopped, which
is already handled, instead of ENOENT.
Commit a66f40b4df039e94572fa38c070207a435cfa466 introduced the regression.
Closes: https://github.com/containers/podman/issues/12457
[NO NEW TESTS NEEDED] it solves a race in the CI that is difficult to reproduce.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
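Illustrative sketch of the error translation, with a local `errCtrStopped` standing in for podman's real sentinel error:

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// Stand-in for podman's define.ErrCtrStopped sentinel.
var errCtrStopped = errors.New("container is stopped")

// readCgroupPath is a hypothetical helper: a missing /proc/$PID/cgroup means
// the process is gone, so report "stopped" rather than a confusing ENOENT.
func readCgroupPath(pid int) (string, error) {
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
	if errors.Is(err, os.ErrNotExist) {
		return "", errCtrStopped
	}
	if err != nil {
		return "", err
	}
	return string(data), nil
}
```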
| |
The arguments of ps(1) should be shlexed.
Fixes: #12452
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
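A hedged sketch of what shlexing the arguments could look like; `github.com/google/shlex` is one common shell-words splitter and may not be the exact package podman uses:

```go
package main

import (
	"fmt"

	"github.com/google/shlex" // assumed dependency for this sketch
)

// psArgs splits user-supplied ps(1) options into separate argv entries
// instead of handing them over as one opaque string.
func psArgs(userInput string) ([]string, error) {
	args, err := shlex.Split(userInput)
	if err != nil {
		return nil, fmt.Errorf("parsing ps arguments: %w", err)
	}
	return append([]string{"ps"}, args...), nil
}
```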
| |
Improve the heuristic to detect the scope that was created for the container.
This is necessary with systemd running as PID 1 in the container, since it
moves itself to a different sub-cgroup; otherwise stats would not account for
other processes in the same container.
Closes: https://github.com/containers/podman/issues/12400
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
<MH: Fixed cherry-pick conflicts>
Signed-off-by: Matthew Heon <mheon@redhat.com>
| |
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
| |
OCI runtimes may set the memory limits in different ways, e.g., crun
creates a sub-cgroup where the limits are applied, while runc applies
them directly on the created cgroup. Since there is no standardization
on the cgroup path to use, just use the limit specified in the spec
file.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
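For illustration, a sketch of reading the limit from the OCI spec; the field names come from the runtime-spec Go bindings, while the helper itself is hypothetical:

```go
package main

import (
	spec "github.com/opencontainers/runtime-spec/specs-go"
)

// memLimitFromSpec reads the memory limit from the container's OCI spec
// instead of guessing which cgroup the runtime applied it to (crun and runc
// use different layouts).
func memLimitFromSpec(s *spec.Spec, hostTotal uint64) uint64 {
	if s != nil && s.Linux != nil && s.Linux.Resources != nil &&
		s.Linux.Resources.Memory != nil && s.Linux.Resources.Memory.Limit != nil {
		return uint64(*s.Linux.Resources.Memory.Limit)
	}
	// No explicit limit in the spec: fall back to the host's total memory.
	return hostTotal
}
```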
| |
`crun status ctrid` outputs `No such file or directory` when the container
is not there, so podman must handle it.
[NO NEW TESTS NEEDED]
Signed-off-by: Aditya Rajan <arajan@redhat.com>
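A rough sketch of the handling; the exact command invocation and string matching in podman may differ:

```go
package main

import (
	"bytes"
	"errors"
	"os/exec"
	"strings"
)

// Stand-in for podman's "container does not exist" error.
var errCtrDeleted = errors.New("container is not present in the runtime")

// runtimeStatus runs `crun status <id>` and translates the
// "No such file or directory" message into a typed error callers can handle.
func runtimeStatus(id string) (string, error) {
	var stdout, stderr bytes.Buffer
	cmd := exec.Command("crun", "status", id)
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		if strings.Contains(stderr.String(), "No such file or directory") {
			return "", errCtrDeleted
		}
		return "", err
	}
	return stdout.String(), nil
}
```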
| |
While trying to kill a container with a `signal`, we can't do anything if the
container is already dead, so `exit` gracefully instead of trying to
delete the container again. Get the container status from the runtime.
[ NO NEW TESTS NEEDED ]
Signed-off-by: Aditya Rajan <arajan@redhat.com>
| |
Honor custom `target` if specified while running or creating containers
with secret `type=mount`.
Example:
`podman run -it --secret token,type=mount,target=TOKEN ubi8/ubi:latest
bash`
Signed-off-by: Aditya Rajan <arajan@redhat.com>
| |
When reading logs from the journal, keep going after the container
exits, in case it gets restarted.
Events logged to the journal via the normal paths don't include
CONTAINER_ID_FULL, so don't bother adding it to the "history" event we
use to force at least one entry for the container to show up in the log.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
| |
Fixes: https://github.com/containers/podman/issues/11255
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
| |
Make sure the /etc/mtab symlink is created inside the rootfs when /etc
is a symlink.
Closes: https://github.com/containers/podman/issues/12189
[NO NEW TESTS NEEDED] there is already a test case
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
| |
Let the gvproxy DNS server handle the host.containers.internal entry.
Support for this has already been added to gvproxy. [1]
To make sure the container uses the DNS response from gvproxy, we should
not add host.containers.internal to /etc/hosts in this case.
[NO NEW TESTS NEEDED] podman machine has no tests
Fixes #11642
[1] https://github.com/containers/gvisor-tap-vsock/commit/1108ea45162281046d239047a6db9bc187e64b08
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
| |
Patch originally by Paul Holzinger (sourced from [1]).
This is necessary to get the tests to pass in order to include a
batch of other, related journald fixes in `podman logs`.
[1] https://github.com/containers/podman/pull/12274#issuecomment-967168173
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
| |
[NO NEW TESTS NEEDED]
Signed-off-by: Boaz Shuster <boaz.shuster.github@gmail.com>
| |
When starting a container, libpod/runtime_pod_linux.go:NewPod calls
libpod/lock/lock.go:AllocateLock and ends up in here. If you exceed
num_locks, in response to a "podman run ..." you will see:
Error: error allocating lock for new container: no space left on device
As noted inline, this error is technically true as it is talking about
the SHM area, but for anyone who has not dug into the source (i.e. me,
before a few hours ago :) your initial thought is going to be that
your disk is full. I spent quite a bit of time trying to diagnose
what disk, partition, overlay, etc. was filling up before I realised
this was actually due to leaking from failing containers.
This change overrides this case to give a more explicit message that
hopefully puts people on the right track to fixing this faster. You
will now see:
$ ./bin/podman run --rm -it fedora bash
Error: error allocating lock for new container: allocation failed; exceeded num_locks (20)
[NO NEW TESTS NEEDED] (just changes an existing error message)
Signed-off-by: Ian Wienand <iwienand@redhat.com>
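Illustrative sketch of the message override, assuming the underlying failure surfaces as ENOSPC from the SHM lock pool (the helper name is hypothetical):

```go
package main

import (
	"errors"
	"fmt"
	"syscall"
)

// wrapLockAllocError rewrites the exhausted-lock-pool error: the raw ENOSPC
// ("no space left on device") reads like a full disk, so point at num_locks
// instead.
func wrapLockAllocError(err error, numLocks int) error {
	if errors.Is(err, syscall.ENOSPC) {
		return fmt.Errorf("allocation failed; exceeded num_locks (%d)", numLocks)
	}
	return err
}
```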
| |
Address the TOCTOU race when generating random names by making at most 10
attempts to assign a random name when creating a pod or container.
[NO TESTS NEEDED] since I do not know a way to force a conflict with
randomly generated names in a reasonable time frame.
Fixes: #11735
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
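A minimal sketch of the retry loop, with stand-in `generate`/`reserve` callbacks rather than podman's real name-registry API:

```go
package main

import (
	"errors"
	"fmt"
)

// Stand-in sentinel; podman's real one lives in libpod.
var errNameInUse = errors.New("name is in use")

// assignName retries a handful of times to narrow the window between
// generating a random name and reserving it (the TOCTOU the commit mentions).
func assignName(generate func() string, reserve func(string) error) (string, error) {
	const maxAttempts = 10
	for i := 0; i < maxAttempts; i++ {
		name := generate()
		err := reserve(name)
		if err == nil {
			return name, nil
		}
		if !errors.Is(err, errNameInUse) {
			return "", err
		}
		// Name collided with a concurrently created container/pod; try again.
	}
	return "", fmt.Errorf("giving up after %d attempts to find a free name", maxAttempts)
}
```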
| |
A restored container still had its state set to 'Checkpointed: true',
which seems wrong if it is running again.
[NO NEW TESTS NEEDED]
Signed-off-by: Adrian Reber <areber@redhat.com>
| |
Commit 6b3b0a17c625bdf71b0ec8b783b288886d8e48d7 introduced a check for
the PID file before attempting to move the PID to a new scope.
This is still vulnerable to a TOCTOU race condition, though, since the
PID file can be removed, or the PID killed, after the check was
successful but before it was used.
Closes: https://github.com/containers/podman/issues/12065
[NO NEW TESTS NEEDED] it fixes a CI flake
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
| |
Check that the pause PID exists before trying to move it to a separate
scope.
Closes: https://github.com/containers/podman/issues/12065
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
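For illustration, a sketch of such a check, assuming a hypothetical `pausePidAlive` helper:

```go
package main

import (
	"os"
	"strconv"
	"strings"
	"syscall"
)

// pausePidAlive reads the pause-process PID file and probes the PID with
// signal 0. Both steps can still race with process exit, which is why callers
// must also tolerate ESRCH later on.
func pausePidAlive(pidFile string) (int, bool) {
	data, err := os.ReadFile(pidFile)
	if err != nil {
		return 0, false // no PID file, nothing to move
	}
	pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
	if err != nil {
		return 0, false
	}
	// Signal 0 performs the existence check without delivering anything.
	if err := syscall.Kill(pid, 0); err != nil {
		return 0, false
	}
	return pid, true
}
```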
| |
Duplicate Address Detection slows the IPv6 setup down by 1-2 seconds.
Since slirp4netns is run in its own namespace and is not directly routed,
we can skip this to make the IPv6 address immediately available.
We change the default to make sure the slirp tap interface gets the
correct value assigned, so DAD is disabled for it.
Also make sure to change this value back to the original after slirp4netns
is ready, in case users rely on this sysctl.
Fixes #11062
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
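A hedged sketch of the flip-and-restore, assuming the sysctl in question is net.ipv6.conf.default.accept_dad, as the description of "the default" suggests:

```go
package main

import "os"

const dadSysctl = "/proc/sys/net/ipv6/conf/default/accept_dad" // assumed path

// withDADDisabled sets the default accept_dad to 0 so the slirp4netns tap
// device comes up without Duplicate Address Detection, then restores the
// original value once slirp4netns is ready.
func withDADDisabled(setupSlirp func() error) error {
	orig, err := os.ReadFile(dadSysctl)
	if err != nil {
		// IPv6 disabled or sysctl missing: just run the setup as-is.
		return setupSlirp()
	}
	if err := os.WriteFile(dadSysctl, []byte("0"), 0o644); err != nil {
		return err
	}
	// Restore the original value in case users rely on this sysctl.
	defer os.WriteFile(dadSysctl, orig, 0o644)
	return setupSlirp()
}
```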
| |
The following problems regarding `logs --tail` with the journald log
driver are fixed:
- One more line than the specified value is displayed.
- '--tail 0' displays all lines, while the other log drivers display
nothing.
- Partial lines are not considered.
- If the journald events backend is used and a container has exited,
nothing is displayed.
Integration tests that should have detected the bugs are also fixed. The
tests are executed with the json-file log driver three times without this
fix.
Signed-off-by: Hironori Shiina <shiina.hironori@jp.fujitsu.com>
| |
Setting these environment variables can cause issues with custom CNI
plugins, see #12083.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
| |
LGTM alert:
Off-by-one index comparison against length may lead to out-of-bounds read.
Signed-off-by: Stefan Weil <sw@weilnetz.de>
| |
When looking for a cursor that matches the first journal entry for a
given container, wait and try to find it using exponential backoff.
[NO NEW TESTS NEEDED]
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
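A minimal sketch of exponential backoff around a cursor lookup; the callback and timings are illustrative, not podman's actual values:

```go
package main

import (
	"fmt"
	"time"
)

// waitForCursor polls for the first journal entry of a just-started container
// with exponential backoff rather than failing on the first miss.
func waitForCursor(find func() (string, error)) (string, error) {
	delay := 10 * time.Millisecond
	var lastErr error
	for i := 0; i < 8; i++ { // roughly 2.5s in total with these numbers
		cursor, err := find()
		if err == nil {
			return cursor, nil
		}
		lastErr = err
		time.Sleep(delay)
		delay *= 2
	}
	return "", fmt.Errorf("timed out waiting for journal cursor: %w", lastErr)
}
```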
| |
Made changes so that if all of a pod's containers have exited and only the infra container is running, the pod is removed.
resolves #11713
Signed-off-by: cdoern <cdoern@redhat.com>
| |
If we interrupt pod removal between removing containers and
removing the whole pod, the infra ID was still in the DB, and
most pod operations would try to retrieve the infra container
(and would thus fail). Clear the infra ID from the DB just before
we remove all containers to prevent this.
Fixes #12034
[NO NEW TESTS NEEDED] This is a very narrow race and I have no
idea how to repro it.
Signed-off-by: Matthew Heon <mheon@redhat.com>
| |
If Podman uses Workdir="/" or the workdir specified in the image, it
should not be added to the yaml.
If Podman finds environment variables in the image, they should not
get added to the yaml.
If the container or pod does not have changes to SELinux, we should not
print seLinuxOpt{}.
If the container or pod does not change any dns options, the yaml should
not have a dnsOption={}.
If the container is not privileged, it should not have privileged=false
in the yaml.
Fixes: https://github.com/containers/podman/issues/11995
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
| |
Signed-off-by: Stefan Weil <sw@weilnetz.de>
| |
When run as rootless, the podman network reload command tries to reload
the rootlessport ports because the childIP could have changed.
However, if the container has no ports, we should skip this instead of
printing a warning.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
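Illustrative guard, with a simplified container type standing in for podman's:

```go
package main

// container is a simplified stand-in with just the field we need.
type container struct {
	ports []string // published port mappings
}

// reloadRootlessPorts sketches the guard: with no published ports there is
// no rootlessport process to reconfigure, so return early instead of
// emitting a warning.
func reloadRootlessPorts(c *container, reload func() error) error {
	if len(c.ports) == 0 {
		return nil
	}
	return reload()
}
```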
| |
Make Podman more tolerant when parsing image volumes during container
creation and further fix an infinite loop when checking them.
Consider `VOLUME ['/etc/foo', '/etc/bar']` in a Containerfile. While
it looks correct to the human eye, the single quotes are wrong and yield
the two volumes to be `[/etc/foo,` and `/etc/bar]` in Podman and Docker.
When running the container, it'll create a directory `bar]` in `/etc`
and a directory `[` in `/` with two subdirectories `etc/foo,`. This
behavior is surprising to me, but it is how Docker behaves. We may improve on
that in the future. Note that the correct syntax for volumes in
a Containerfile is `VOLUME /A /B /C` or `VOLUME ["/A", "/B", "/C"]`;
single quotes are not supported.
This change restores this behavior without breaking container creation
or ending up in an infinite loop.
BZ: https://bugzilla.redhat.com/show_bug.cgi?id=2014149
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
| |
When the targetPort is not defined, it is supposed to
be set to the port value according to the k8s docs.
Add tests for targetPort.
Update tests to be able to check the Service yaml that
is generated.
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
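Illustrative sketch of the defaulting rule, with a simplified ServicePort type (the real k8s field is an intstr.IntOrString):

```go
package main

// ServicePort is a simplified stand-in for the k8s service port fields.
type ServicePort struct {
	Port       int32
	TargetPort int32
}

// defaultTargetPort applies the documented k8s behavior: an unset targetPort
// defaults to the service's port value.
func defaultTargetPort(sp ServicePort) ServicePort {
	if sp.TargetPort == 0 {
		sp.TargetPort = sp.Port
	}
	return sp
}
```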
| |
Checkpoint is blowing up when you use --log-driver=none
[NO NEW TESTS NEEDED] No way currently to test checkpoint restore.
Fixes: https://github.com/containers/podman/issues/11974
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
| |
As the default protocol in k8s is TCP, don't add it
to the generated yaml when TCP is being used.
Add UDP to the protocol of the generated yaml when UDP
is being used.
Add tests for this as well.
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
| |
[NO NEW TESTS NEEDED]
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
| |
If no entrypoint or command is set in the podman create
command, and the image command or entrypoint is being
used as the default, then do not add the image command or
entrypoint to the generated kube yaml.
Kubernetes knows to default to the image command and/or
entrypoint settings when not defined in the kube yaml.
Add and modify tests for this case.
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
| |
Removed the inclusion of RunAsUser or RunAsGroup unless a container is run with the --user flag. When building from an image,
the user will be pulled from there anyway.
resolves #11914
Signed-off-by: cdoern <cdoern@redhat.com>
| |
Kubernetes fails to deal with an annotation that has a space in it.
Trim these strings to remove spaces.
Fixes: #11929
Signed-off-by: Brent Baude <bbaude@redhat.com>
[NO TESTS NEEDED]
| |
The backend for `ps --sync` has been nonfunctional for a long
while now - probably since v2.0. It's questionable how useful the
flag is in modern Podman (the original case it was intended to
catch, Conmon gone via SIGKILL, should be handled now via pinging
the process with a signal to ensure it's still alive) but having
the ability to force a refresh of container state from the OCI
runtime is still useful.
Signed-off-by: Matthew Heon <mheon@redhat.com>
| |
This allows you to stop a container after a `podman stop` process
started, but did not finish, stopping the container (probably an
ignored stop signal, with no time to SIGKILL?). This is a very
narrow case, but once you're in it the only way to recover is a
`podman rm -f` of the container or extensive manual remediation
(you'd have to kill the container yourself, manually, and then
force a `podman ps --all --sync` to update its status from the
OCI runtime).
[NO NEW TESTS NEEDED] I have no idea how to verify this one -
we need to test that it actually started *during* the other stop
command, and that's nontrivial.
Signed-off-by: Matthew Heon <mheon@redhat.com>
| |
There is a race where `conn.Close()` was called before `conn.CloseWrite()`.
In this case `CloseWrite()` will fail and a useless error is printed. To
fix this we move the `CloseWrite()` call to the same goroutine to
remove the race. This ensures that `CloseWrite()` is called before
`Close()` and never afterwards.
Also fixed podman-remote run, where STDIN was never closed.
This is causing flakes in CI testing.
[NO TESTS NEEDED]
Fixes #11856
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
<MH: Fixed cherry-pick conflicts>
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
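A rough sketch of the ordering fix, using a TCP connection as a stand-in for whatever connection type podman-remote actually attaches with:

```go
package main

import (
	"io"
	"net"
	"os"
)

// streamStdin shows the ordering fix: CloseWrite runs in the same goroutine
// that finishes copying STDIN, so it can never race with, and land after,
// the caller's final conn.Close().
func streamStdin(conn *net.TCPConn, done chan<- error) {
	go func() {
		_, err := io.Copy(conn, os.Stdin)
		// Half-close the write side here, before anyone calls Close().
		if cerr := conn.CloseWrite(); err == nil {
			err = cerr
		}
		done <- err
	}()
}
```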
| |
When using play kube and generate kube, we need to support if bind
mounts have selinux options. As kubernetes does not support selinux in
this way, we tuck the selinux values into a pod annotation for
generation of the kube yaml. Then on play, we check annotations to see
if a value for the mount exists and apply it.
Fixes BZ #1984081
Signed-off-by: Brent Baude <bbaude@redhat.com>
| |
When generating a kube yaml and there is a port configuration
add the configuration to the first regular container in the pod
and not to the init container.
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
| |
As we were not updating the pod ID bucket, removing a pod with
containers still in it (including the infra container, which will
always suffer from this) will not properly update the name
registry to remove the name of any renamed containers. This
patch ensures that does not happen - all containers will be fully
removed, even if renamed.
Fixes #11750
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
| |
Access the container's config field directly inside of libpod instead of
calling `Config()` which in turn creates expensive JSON deep copies.
Accessing the field directly drops memory consumption of a simple
`podman run --rm busybox true` from 1245kB to 410kB.
[NO TESTS NEEDED]
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
<MH: Fixed cherry-pick conflicts>
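For illustration, a simplified sketch of why the accessor is expensive and what the direct access avoids; the types are heavily reduced from libpod's real ones:

```go
package main

import "encoding/json"

// ContainerConfig and Container are simplified stand-ins for the libpod types.
type ContainerConfig struct {
	Name string `json:"name"`
	// ... many more fields in the real type
}

type Container struct {
	config *ContainerConfig
}

// Config returns a deep copy so external callers cannot mutate libpod state;
// the JSON round trip is what makes it comparatively expensive.
func (c *Container) Config() *ContainerConfig {
	data, err := json.Marshal(c.config)
	if err != nil {
		return nil
	}
	out := new(ContainerConfig)
	if err := json.Unmarshal(data, out); err != nil {
		return nil
	}
	return out
}

// Inside libpod, reading the field directly skips the copy entirely.
func (c *Container) name() string { return c.config.Name }
```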
| |
The dnsname plugin tries to use XDG_RUNTIME_DIR to store files.
podman run will have XDG_RUNTIME_DIR set and thus the cni plugin can use
it. The problem is that XDG_RUNTIME_DIR is unset for the conmon process
for rootful users. This causes issues since the cleanup process is spawned
by conmon and thus does not have XDG_RUNTIME_DIR set to the same value as
podman run. Because of this, dnsname will not find the config files and
cannot clean up correctly.
To fix this we should also unset XDG_RUNTIME_DIR for the cni plugins when
running as rootful.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
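A minimal sketch of the environment filtering, assuming a hypothetical `cniEnv` helper:

```go
package main

import (
	"os"
	"strings"
)

// cniEnv strips XDG_RUNTIME_DIR from the environment passed to CNI plugins
// when running rootful, so `podman run` and the conmon-spawned cleanup
// process resolve the same (absent) value and the dnsname plugin stores its
// files in one place.
func cniEnv(rootless bool) []string {
	env := os.Environ()
	if rootless {
		return env
	}
	out := make([]string, 0, len(env))
	for _, kv := range env {
		if strings.HasPrefix(kv, "XDG_RUNTIME_DIR=") {
			continue
		}
		out = append(out, kv)
	}
	return out
}
```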