Commit log
Patch originally by Paul Holzinger (sourced from [1]).
This change is needed to get the tests passing so that a batch of
other, related journald fixes to `podman logs` can be included.
[1] https://github.com/containers/podman/pull/12274#issuecomment-967168173
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
[NO NEW TESTS NEEDED]
Signed-off-by: Boaz Shuster <boaz.shuster.github@gmail.com>
When starting a container, libpod/runtime_pod_linux.go:NewPod calls
libpod/lock/lock.go:AllocateLock, which ends up in here. If you exceed
num_locks, then in response to a "podman run ..." you will see:
Error: error allocating lock for new container: no space left on device
As noted inline, this error is technically true as it is talking about
the SHM area, but for anyone who has not dug into the source (i.e. me,
before a few hours ago :) your initial thought is going to be that
your disk is full. I spent quite a bit of time trying to diagnose
what disk, partition, overlay, etc. was filling up before I realised
this was actually due to locks leaked by failing containers.
This change overrides this case to give a more explicit message that
hopefully puts people on the right track to fixing this faster. You
will now see:
$ ./bin/podman run --rm -it fedora bash
Error: error allocating lock for new container: allocation failed; exceeded num_locks (20)
[NO NEW TESTS NEEDED] (just changes an existing error message)
Signed-off-by: Ian Wienand <iwienand@redhat.com>
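
As a rough sketch of the reworded error path (not the actual libpod code): assuming a hypothetical allocateSHMLock helper and a num_locks of 20, the ENOSPC from the SHM lock pool is translated into the clearer message shown above.

```go
package main

import (
	"errors"
	"fmt"
	"syscall"
)

const numLocks = 20 // assumed num_locks value, for illustration only

// allocateSHMLock stands in for the real SHM lock allocator and
// simulates an exhausted lock pool by returning ENOSPC.
func allocateSHMLock() (uint32, error) {
	return 0, syscall.ENOSPC
}

// allocateLock rewords the misleading "no space left on device" errno
// text into a message that points at num_locks instead of a full disk.
func allocateLock() (uint32, error) {
	id, err := allocateSHMLock()
	if errors.Is(err, syscall.ENOSPC) {
		return 0, fmt.Errorf("allocation failed; exceeded num_locks (%d)", numLocks)
	}
	return id, err
}

func main() {
	if _, err := allocateLock(); err != nil {
		fmt.Println("Error: error allocating lock for new container:", err)
	}
}
```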
Address the TOCTOU race when generating random names by making at most
10 attempts to assign a random name when creating a pod or container.
[NO TESTS NEEDED] since I do not know a way to force a conflict with
randomly generated names in a reasonable time frame.
Fixes: #11735
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
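
The bounded-retry pattern the fix describes could look roughly like this; randomName and the registry map are hypothetical stand-ins for podman's name generator and name registry:

```go
package main

import (
	"fmt"
	"math/rand"
)

var registry = map[string]bool{} // stand-in for the container/pod name registry

// randomName is a hypothetical generator; the real one draws from a
// word list, but any source of random names shows the same pattern.
func randomName() string {
	return fmt.Sprintf("ctr_%04d", rand.Intn(10000))
}

// assignName retries instead of assuming a single free-name check
// stays valid until the name is registered (the TOCTOU window).
func assignName() (string, error) {
	for attempt := 0; attempt < 10; attempt++ {
		name := randomName()
		if !registry[name] {
			registry[name] = true
			return name, nil
		}
	}
	return "", fmt.Errorf("no free name found after 10 attempts")
}

func main() {
	name, err := assignName()
	fmt.Println(name, err)
}
```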
A restored container still had its state set to 'Checkpointed: true',
which seems wrong if it is running again.
[NO NEW TESTS NEEDED]
Signed-off-by: Adrian Reber <areber@redhat.com>
commit 6b3b0a17c625bdf71b0ec8b783b288886d8e48d7 introduced a check for
the PID file before attempting to move the PID to a new scope.
This is still vulnerable to a TOCTOU race condition though, since the
PID file or the PID can be removed/killed after the check was
successful but before it was used.
Closes: https://github.com/containers/podman/issues/12065
[NO NEW TESTS NEEDED] it fixes a CI flake
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
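
A minimal sketch of tolerating that window, with moveToScope as a hypothetical stand-in for the real systemd scope call: a PID that vanished after the check is treated as already handled rather than as an error.

```go
package main

import (
	"errors"
	"fmt"
	"syscall"
)

// moveToScope stands in for the real systemd call; signal 0 merely
// checks that the target process still exists.
func moveToScope(pid int) error {
	return syscall.Kill(pid, 0)
}

// movePauseProcess treats "process already gone" as success: the PID
// may legitimately vanish between the existence check and the move.
func movePauseProcess(pid int) error {
	if err := moveToScope(pid); err != nil {
		if errors.Is(err, syscall.ESRCH) {
			return nil // raced with process exit; nothing left to move
		}
		return err
	}
	return nil
}

func main() {
	// A PID this large is almost certainly unused, so this prints <nil>.
	fmt.Println(movePauseProcess(4194304))
}
```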
Check that the pause pid exists before trying to move it to a separate
scope.
Closes: https://github.com/containers/podman/issues/12065
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Duplicate Address Detection slows the ipv6 setup down by 1-2 seconds.
Since slirp4netns runs in its own namespace and is not directly routed,
we can skip this to make the ipv6 address immediately available.
We change the default to make sure the slirp tap interface gets the
correct value assigned so DAD is disabled for it.
Also make sure to change this value back to the original after slirp4netns
is ready in case users rely on this sysctl.
Fixes #11062
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
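
A sketch of the save/disable/restore dance, assuming the host default accept_dad path; the real code applies the sysctl where the slirp tap interface is created:

```go
package main

import (
	"fmt"
	"os"
)

// Path assumed for illustration; the real code toggles the sysctl in
// the namespace where the slirp tap interface will be created.
const dadSysctl = "/proc/sys/net/ipv6/conf/default/accept_dad"

func main() {
	orig, err := os.ReadFile(dadSysctl)
	if err != nil {
		fmt.Println("ipv6 sysctl not available:", err)
		return
	}
	// Disable DAD so the tap device's ipv6 address is usable immediately.
	if err := os.WriteFile(dadSysctl, []byte("0"), 0o644); err != nil {
		fmt.Println("cannot disable DAD:", err)
		return
	}
	// ... start slirp4netns and wait for it to signal readiness ...

	// Restore the original value in case other tooling relies on it.
	if err := os.WriteFile(dadSysctl, orig, 0o644); err != nil {
		fmt.Println("cannot restore DAD setting:", err)
	}
}
```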
The following problems regarding `logs --tail` with the journald log
driver are fixed:
- One more line than a specified value is displayed.
- '--tail 0' displays all lines while the other log drivers display
nothing.
- Partial lines are not considered.
- If the journald events backend is used and a container has exited,
nothing is displayed.
Integration tests that should have detected the bugs are also fixed. The
tests are executed with the json-file log driver three times without this
fix.
Signed-off-by: Hironori Shiina <shiina.hironori@jp.fujitsu.com>
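
The corrected --tail semantics can be modeled without the journald API; entry and its partial flag here are simplified stand-ins for journald records carrying a partial-message marker:

```go
package main

import "fmt"

// entry models a journal log record; partial marks a line that was
// split across records.
type entry struct {
	text    string
	partial bool
}

// tail joins partial records into complete lines first, then returns
// exactly the last n complete lines.
func tail(entries []entry, n int) []string {
	var lines []string
	var current string
	for _, e := range entries {
		current += e.text
		if !e.partial {
			lines = append(lines, current)
			current = ""
		}
	}
	if n >= len(lines) {
		return lines
	}
	return lines[len(lines)-n:]
}

func main() {
	records := []entry{{"hel", true}, {"lo", false}, {"world", false}}
	fmt.Println(tail(records, 1)) // [world]
	fmt.Println(tail(records, 0)) // []  (matches the other log drivers)
}
```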
Setting these environment variables can cause issues with custom CNI
plugins, see #12083.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
LGTM alert:
Off-by-one index comparison against length may lead to out-of-bounds read.
Signed-off-by: Stefan Weil <sw@weilnetz.de>
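
A tiny illustration of the bug class the alert flags; the fix is tightening the loop bound:

```go
package main

import "fmt"

func main() {
	s := []int{10, 20, 30}

	// Buggy form flagged by the alert: "<=" runs the loop once too
	// often, and s[len(s)] panics with an out-of-range error.
	//   for i := 0; i <= len(s); i++ { _ = s[i] }

	// Correct bound:
	for i := 0; i < len(s); i++ {
		fmt.Println(s[i])
	}
}
```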
When looking for a cursor that matches the first journal entry for a
given container, wait and try to find it using exponential backoff.
[NO NEW TESTS NEEDED]
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
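
A minimal sketch of that exponential backoff, with findCursor as a hypothetical stand-in for the journal lookup:

```go
package main

import (
	"fmt"
	"time"
)

// findCursor stands in for looking up the journal cursor of the
// container's first entry; it reports whether the entry exists yet.
func findCursor() (string, bool) {
	return "", false
}

// waitForCursor polls with exponentially growing delays instead of
// failing on the first miss: the journal entry may simply not have
// been written yet.
func waitForCursor() (string, error) {
	delay := 50 * time.Millisecond
	for attempt := 0; attempt < 6; attempt++ {
		if cursor, ok := findCursor(); ok {
			return cursor, nil
		}
		time.Sleep(delay)
		delay *= 2
	}
	return "", fmt.Errorf("journal entry for container never appeared")
}

func main() {
	fmt.Println(waitForCursor())
}
```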
Made changes so that if all of a pod's containers have exited and only
the infra container is running, the pod is removed.
resolves #11713
Signed-off-by: cdoern <cdoern@redhat.com>
If we interrupt pod removal between removing containers and
removing the whole pod, the infra ID was still in the DB, and
most pod operations would try to retrieve the infra container
(and would thus fail). Clear the infra ID from the DB just before
we remove all containers to prevent this.
Fixes #12034
[NO NEW TESTS NEEDED] This is a very narrow race and I have no
idea how to repro it.
Signed-off-by: Matthew Heon <mheon@redhat.com>
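
A toy sketch of the reordering, with state standing in for the bolt-backed pod state; the point is only that clearInfraID happens before any container removal can be interrupted:

```go
package main

import "fmt"

// state is a toy stand-in for the bolt-backed pod state.
type state struct {
	infraID map[string]string // pod ID -> infra container ID
}

func (s *state) clearInfraID(podID string) {
	delete(s.infraID, podID)
}

// removePod clears the infra ID first, so an interruption between the
// two phases can no longer leave the pod referencing a container that
// is about to be (or already was) deleted.
func removePod(s *state, podID string, ctrs []string) {
	s.clearInfraID(podID)
	for _, c := range ctrs {
		fmt.Println("removing container", c)
	}
	fmt.Println("removing pod", podID)
}

func main() {
	s := &state{infraID: map[string]string{"pod1": "infra1"}}
	removePod(s, "pod1", []string{"infra1", "ctr2"})
}
```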
If Podman uses Workdir="/" or the workdir specified in the image, it
should not add it to the yaml.
If Podman finds environment variables in the image, they should not
get added to the yaml.
If the container or pod does not change SELinux settings, we should not
print seLinuxOpt{}.
If the container or pod does not change any dns options, the yaml should
not have a dnsOption={}.
If the container is not privileged, it should not have privileged=false
in the yaml.
Fixes: https://github.com/containers/podman/issues/11995
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Signed-off-by: Stefan Weil <sw@weilnetz.de>
When run as rootless, the podman network reload command tries to reload
the rootlessport ports because the childIP could have changed.
However, if the container has no ports, we should skip this instead of
printing a warning.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Make Podman more tolerant when parsing image volumes during container
creation and further fix an infinite loop when checking them.
Consider `VOLUME ['/etc/foo', '/etc/bar']` in a Containerfile. While
it looks correct to the human eye, the single quotes are wrong and cause
the two volumes to be `[/etc/foo,` and `/etc/bar]` in Podman and Docker.
When running the container, it'll create a directory `bar]` in `/etc`
and a directory `[` in `/` with the nested subdirectories `etc/foo,`.
This behavior is surprising to me but it is how Docker behaves. We may
improve on that in the future. Note that the correct syntax for volumes
in a Containerfile is `VOLUME /A /B /C` or `VOLUME ["/A", "/B", "/C"]`;
single quotes are not supported.
This change restores this behavior without breaking container creation
or ending up in an infinite loop.
BZ: https://bugzilla.redhat.com/show_bug.cgi?id=2014149
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
When the targetPort is not defined, it is supposed to
be set to the port value according to the k8s docs.
Add tests for targetPort.
Update tests to be able to check the Service yaml that
is generated.
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
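
A small sketch of that defaulting rule; servicePort is a pared-down stand-in for the k8s ServicePort type:

```go
package main

import "fmt"

// servicePort is a pared-down version of the k8s ServicePort fields
// relevant here; zero means "targetPort not defined".
type servicePort struct {
	Port       int32
	TargetPort int32
}

// defaultTargetPort applies the documented k8s rule: an unset
// targetPort defaults to the port value.
func defaultTargetPort(sp *servicePort) {
	if sp.TargetPort == 0 {
		sp.TargetPort = sp.Port
	}
}

func main() {
	sp := servicePort{Port: 8080}
	defaultTargetPort(&sp)
	fmt.Println(sp.TargetPort) // 8080
}
```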
Checkpoint is blowing up when you use --log-driver=none
[NO NEW TESTS NEEDED] No way currently to test checkpoint restore.
Fixes: https://github.com/containers/podman/issues/11974
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
As the default protocol in k8s is TCP, don't add it
to the generated yaml when tcp is being used.
Add UDP to the protocol of the generated yaml when udp
is being used.
Add tests for this as well.
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
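
Sketch of the omission rule, using a hypothetical protocolForYAML helper:

```go
package main

import "fmt"

// protocolForYAML returns the value to record in the generated yaml:
// empty for TCP (the k8s default is omitted), explicit for anything
// else such as UDP.
func protocolForYAML(proto string) string {
	if proto == "" || proto == "TCP" {
		return ""
	}
	return proto
}

func main() {
	fmt.Printf("tcp -> %q, udp -> %q\n",
		protocolForYAML("TCP"), protocolForYAML("UDP"))
}
```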
[NO NEW TESTS NEEDED]
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
If no entrypoint or command is set in the podman create
command, and the image command or entrypoint is being
used as the default, then do not add the image command or
entrypoint to the generated kube yaml.
Kubernetes knows to default to the image command and/or
entrypoint settings when not defined in the kube yaml.
Add and modify tests for this case.
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
Removed the inclusion of RunAsUser or RunAsGroup unless a container is
run with the --user flag. When building from an image, the user will be
pulled from there anyway.
resolves #11914
Signed-off-by: cdoern <cdoern@redhat.com>
Kubernetes fails to deal with an annotation that has a space in it.
Trim these strings to remove spaces.
Fixes: #11929
Signed-off-by: Brent Baude <bbaude@redhat.com>
[NO TESTS NEEDED]
The backend for `ps --sync` has been nonfunctional for a long
while now - probably since v2.0. It's questionable how useful the
flag is in modern Podman (the original case it was intended to
catch, Conmon gone via SIGKILL, should be handled now via pinging
the process with a signal to ensure it's still alive) but having
the ability to force a refresh of container state from the OCI
runtime is still useful.
Signed-off-by: Matthew Heon <mheon@redhat.com>
This allows you to stop a container after a `podman stop` process
started, but did not finish, stopping the container (probably an
ignored stop signal, with no time to SIGKILL?). This is a very
narrow case, but once you're in it the only way to recover is a
`podman rm -f` of the container or extensive manual remediation
(you'd have to kill the container yourself, manually, and then
force a `podman ps --all --sync` to update its status from the
OCI runtime).
[NO NEW TESTS NEEDED] I have no idea how to verify this one -
we need to test that it actually started *during* the other stop
command, and that's nontrivial.
Signed-off-by: Matthew Heon <mheon@redhat.com>
There is a race where `conn.Close()` was called before `conn.CloseWrite()`.
In this case `CloseWrite` will fail and a useless error is printed. To
fix this we move the `CloseWrite()` call to the same goroutine to
remove the race. This ensures that `CloseWrite()` is called before
`Close()` and never afterwards.
Also fixed podman-remote run, where STDIN was never closed.
This is causing flakes in CI testing.
[NO TESTS NEEDED]
Fixes #11856
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
<MH: Fixed cherry-pick conflicts>
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
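
A runnable sketch of the ordering fix using a local loopback connection: CloseWrite runs in the writer goroutine, and Close only after that goroutine signals completion, so the two calls can no longer race:

```go
package main

import (
	"fmt"
	"io"
	"net"
	"strings"
)

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	go func() { // throwaway peer that drains the connection
		c, _ := ln.Accept()
		io.Copy(io.Discard, c)
		c.Close()
	}()

	conn, err := net.Dial("tcp", ln.Addr().String())
	if err != nil {
		panic(err)
	}
	tcp := conn.(*net.TCPConn)

	done := make(chan struct{})
	go func() {
		defer close(done)
		// Stream the input, then half-close from this same goroutine.
		// Close() below only runs after we signal done, so CloseWrite()
		// can no longer race against it.
		io.Copy(tcp, strings.NewReader("stdin data"))
		tcp.CloseWrite()
	}()

	<-done
	tcp.Close()
	fmt.Println("CloseWrite always ran before Close")
}
```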
When using play kube and generate kube, we need to support bind
mounts that have selinux options. As kubernetes does not support selinux in
this way, we tuck the selinux values into a pod annotation for
generation of the kube yaml. Then on play, we check annotations to see
if a value for the mount exists and apply it.
Fixes BZ #1984081
Signed-off-by: Brent Baude <bbaude@redhat.com>
When generating a kube yaml and there is a port configuration
add the configuration to the first regular container in the pod
and not to the init container.
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
As we were not updating the pod ID bucket, removing a pod with
containers still in it (including the infra container, which will
always suffer from this) will not properly update the name
registry to remove the name of any renamed containers. This
patch ensures that does not happen - all containers will be fully
removed, even if renamed.
Fixes #11750
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Access the container's config field directly inside of libpod instead of
calling `Config()` which in turn creates expensive JSON deep copies.
Accessing the field directly drops memory consumption of a simple
`podman run --rm busybox true` from 1245kB to 410kB.
[NO TESTS NEEDED]
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
<MH: Fixed cherry-pick conflicts>
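
An illustration (with simplified stand-in types) of why the accessor is expensive: it deep-copies via a JSON round trip, while a direct field read allocates nothing:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type containerConfig struct {
	Name string
}

type container struct {
	config *containerConfig
}

// Config mimics the expensive accessor: a JSON round trip produces a
// deep copy on every call.
func (c *container) Config() *containerConfig {
	data, err := json.Marshal(c.config)
	if err != nil {
		return nil
	}
	copied := new(containerConfig)
	if err := json.Unmarshal(data, copied); err != nil {
		return nil
	}
	return copied
}

func main() {
	c := &container{config: &containerConfig{Name: "busybox"}}
	fmt.Println(c.Config().Name) // deep copy: safe to hand out
	fmt.Println(c.config.Name)   // direct read: cheap, used internally
}
```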
The dnsname plugin tries to use XDG_RUNTIME_DIR to store files.
podman run will have XDG_RUNTIME_DIR set and thus the cni plugin can use
it. The problem is that XDG_RUNTIME_DIR is unset for the conmon process
for rootful users. This causes issues since the cleanup process is spawned
by conmon and thus does not have XDG_RUNTIME_DIR set to the same value
as podman run. Because of this, dnsname will not find the config files
and cannot correctly clean up.
To fix this we should also unset XDG_RUNTIME_DIR for the cni plugins
when running rootful.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
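
A sketch of the environment filtering, with cniEnviron as a hypothetical helper; the real change lives where libpod executes the CNI plugins:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// cniEnviron builds the environment for the CNI plugin processes.
// When running rootful, XDG_RUNTIME_DIR is dropped so podman run and
// the conmon-spawned cleanup process agree on its (unset) value.
func cniEnviron(rootless bool) []string {
	env := os.Environ()
	if rootless {
		return env
	}
	filtered := make([]string, 0, len(env))
	for _, kv := range env {
		if strings.HasPrefix(kv, "XDG_RUNTIME_DIR=") {
			continue
		}
		filtered = append(filtered, kv)
	}
	return filtered
}

func main() {
	fmt.Println(len(cniEnviron(false)), "variables passed to CNI plugins")
}
```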
If the command came from the underlying image, then we should
not include it in the generated yaml file.
Fixes: https://github.com/containers/podman/issues/11672
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Fixes: https://github.com/containers/podman/issues/11207
[NO TESTS NEEDED] Since I don't know how to get into this situation.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Currently we add the default PATH, TERM, and container environment
variables from Podman to every kubernetes.yaml file. These values
should not be recorded in the yaml files.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
<MH: Fixed cherry-pick conflicts>
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
The health check result is stored in the container state. Since the
state can change or might not even be set, we have to retrieve the
current state before we try to read the health check result.
Fixes #11687
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
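
A sketch with hypothetical stand-ins of reading the result only after refreshing the state:

```go
package main

import "fmt"

type containerState struct {
	HealthStatus string
}

type container struct {
	state *containerState
}

// syncState stands in for re-reading the state from the database; the
// in-memory copy may be stale or not populated at all.
func (c *container) syncState() {
	c.state = &containerState{HealthStatus: "healthy"}
}

// healthCheckStatus refreshes the state first, then reads the result.
func (c *container) healthCheckStatus() (string, error) {
	c.syncState()
	if c.state == nil || c.state.HealthStatus == "" {
		return "", fmt.Errorf("health check results not available")
	}
	return c.state.HealthStatus, nil
}

func main() {
	c := &container{} // state not set yet
	fmt.Println(c.healthCheckStatus())
}
```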
The libpod package should only compile on linux. The remote client
should never try to import this package.
Since these files do not add any value we should remove them; this
prevents people from accidentally importing this package, because it
would fail to compile on windows/macos.
[NO TESTS NEEDED]
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
The check for net=none was wrong. It just assumed that the none mode
is used whenever we do not create the netns but have one set. This
however also applies to a container which joins the pod netns.
To correctly check for the none mode use `config.NetMode.IsNone()`.
Fixes #11596
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
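
A small model of why the predicate matters: NetMode here mirrors the shape of the real type, and both "none" and "container:" modes skip netns creation even though only one of them is the none mode:

```go
package main

import (
	"fmt"
	"strings"
)

// NetMode mirrors the shape of the real type: a string with predicate
// helpers, so callers ask for the mode instead of inferring it.
type NetMode string

func (n NetMode) IsNone() bool      { return n == "none" }
func (n NetMode) IsContainer() bool { return strings.HasPrefix(string(n), "container:") }

func main() {
	for _, m := range []NetMode{"none", "container:abc123"} {
		// Both modes skip netns creation, but only one is "none".
		fmt.Printf("%-18s IsNone=%-5v IsContainer=%v\n",
			m, m.IsNone(), m.IsContainer())
	}
}
```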
Make sure the pause process is moved to its own scope, matching what
we do when we join an existing user+mount namespace.
Closes: https://github.com/containers/podman/issues/11560
[NO TESTS NEEDED]
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
(cherry picked from commit a2c8b5d9d6d6e46679fe9540619d4303d4b4601d)
Honor --cgroups=split also when the container is running in a pod.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Along with the name (id) and the version (_id),
but only show the information if it is available.
Examples: Fedora CoreOS, Ubuntu Focal
[NO TESTS NEEDED]
Signed-off-by: Anders F Björklund <anders.f.bjorklund@gmail.com>
For rootful users ports are forwarded via iptables. To make sure no
other process tries to use them, libpod will bind the ports and pass the
fds to conmon. There seems to be a race when a container is restarted,
because libpod tries to bind the port before the conmon process exited.
The problem only happens with the podman service because it keeps the
connection open. Once we have the fd and passed it to conmon, the
podman service should close the connection.
To verify run `sudo ss -tulpn` and check that only the conmon process
keeps the port open. Previously you would also see the podman server
process listed.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
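
A runnable sketch of the fd handoff, with sleep standing in for conmon: after ExtraFiles passes the duplicated listener fd to the child, the parent closes its own copies so only the child holds the port:

```go
package main

import (
	"fmt"
	"net"
	"os"
	"os/exec"
)

func main() {
	// Reserve the published port so nothing else can grab it.
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}

	// Duplicate the listener's fd to hand to the child process.
	f, err := ln.(*net.TCPListener).File()
	if err != nil {
		panic(err)
	}

	// "sleep" stands in for conmon; ExtraFiles passes the fd as fd 3.
	cmd := exec.Command("sleep", "1")
	cmd.ExtraFiles = []*os.File{f}
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	// The parent closes its copies: only the child keeps the port open,
	// so a container restart can re-bind without colliding with us.
	f.Close()
	ln.Close()

	fmt.Println("port now held by the child only")
	cmd.Wait()
}
```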
When a container is automatically restarted due to its restart policy and
the container uses rootless cni networking with ports forwarded we have
to start a new rootlessport process since it exits with conmon.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Also fix it when running with runc.
Closes: https://github.com/containers/podman/issues/11165
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
These are not presently functional - we need a rewrite of how the
pod cgroup is handled first.
Signed-off-by: Matthew Heon <mheon@redhat.com>
This reverts commit 21f396de6f5024abbf6edd2ca63edcb1525eefcc.
Changing the log endpoint is a breaking change we should not do in 3.4.
[NO TESTS NEEDED]
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Add init containers to generate and play kube
Kubernetes has a concept of init containers that run and exit before
the regular containers in a pod are started. We added init containers
to podman pods as well. This patch adds support for generating init
containers in the kube yaml when a pod we are converting had init
containers. When playing a kube yaml, it detects an init container
and creates such a container in podman accordingly.
Note, only init containers created with the init type set to "always"
will be generated, as the "once" option deletes the init container after
it has run and exited. Play kube will always create init containers
with the "always" init container type.
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
Merge branch 'containers/dependabot/go_modules/github.com/containers/psgo-1.6.0':
Bump github.com/containers/psgo from 1.5.2 to 1.6.0