Some CI tests are hanging, timing out in 60 or 120 minutes.
I wonder if it's #7316, the bug where all podman commands
hang forever if NOTIFY_SOCKET is set?
Signed-off-by: Ed Santiago <santiago@redhat.com>
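
A minimal illustrative sketch of the suspected failure mode and a workaround (hypothetical, not part of this change): if podman inherits NOTIFY_SOCKET it can block forever, so clear it and bound the call with `timeout`.
```
# Hypothetical CI guard, not from this commit: keep podman from
# inheriting a systemd notify socket (#7316) and cap its runtime.
unset NOTIFY_SOCKET
timeout 60 podman info
```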

Wait for reexec to finish when fileOutput is nil
Currently, we're not cleaning up after ourselves when fileOutput is nil.
This patch fixes that.
Signed-off-by: Jonathan Dieter <jonathan.dieter@spearline.com>

Ensure pod infra containers have an exit command
This should help alleviate races where the pod is not fully
cleaned up before subsequent API calls happen.
Signed-off-by: Matthew Heon <mheon@redhat.com>

Most Libpod containers are made via `pkg/specgen/generate` which
includes code to generate an appropriate exit command which will
handle unmounting the container's storage, cleaning up the
container's network, etc. There is one notable exception: pod
infra containers, which are made entirely within Libpod and do
not touch pkg/specgen. As such, they get no cleanup process, their
network is never cleaned up, and bad things can happen.
There is good news, though - it's not that difficult to add this,
and it's done in this PR. Generally speaking, we don't allow
passing options directly to the infra container at create time,
but we do (optionally) proxy a pre-approved set of options into
it when we create it. Add ExitCommand to these options, and set
it at time of pod creation using the same code we use to generate
exit commands for normal containers.
Fixes #7103
Signed-off-by: Matthew Heon <mheon@redhat.com>
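
A minimal runnable sketch of the idea, with assumed names (`generateExitCommand` is an illustrative stand-in, not the actual pkg/specgen helper); podman's exit command conventionally invokes container cleanup:
```
package main

import "fmt"

// generateExitCommand builds the cleanup invocation the runtime runs
// when a container exits (illustrative stand-in for the real helper).
func generateExitCommand(podmanPath, ctrID string) []string {
	return []string{podmanPath, "container", "cleanup", ctrID}
}

func main() {
	// Infra containers now get the same exit command as regular
	// containers, so storage unmount and network teardown happen.
	fmt.Println(generateExitCommand("/usr/bin/podman", "abc123"))
}
```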

Use `bash` binary from env instead of /bin/bash for scripts
It's not possible to run any of the scripts on distributions where
`bash` is not in `/bin`. This is fixed by using `/usr/bin/env bash`
instead.
Signed-off-by: Sascha Grunert <sgrunert@suse.com>
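
For illustration, the portable shebang form the commit describes:
```
#!/usr/bin/env bash
# resolves bash via PATH instead of assuming it lives at /bin/bash
echo "running with bash ${BASH_VERSION}"
```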

system tests: enable sdnotify tests
Oops. PR #6693 (sdnotify) added tests, but they were disabled
due to broken crun on f31. I tried for three weeks to get a
magic CI:IMG PR to update crun on the CI VMs ... but in that
time I forgot to actually enable those new tests.
This PR removes a 'skip', replacing it with a check that systemd
is running plus one more to make sure our runtime is crun. It
looks like sdnotify just doesn't work on Ubuntu (it hangs), and
my guess is that it's a crun/runc issue.
I also changed the test image from fedora:latest to :31, because,
sigh, fedora:latest removed the systemd-notify tool.
WARNING WARNING WARNING: the symptom of a missing systemd-notify
is that podman will hang forever, not even stopped by the timeout
command in podman_run! (Filed: #7316). This means that if the
sdnotify-in-container test ever fails, the symptom will be that
Cirrus itself will time out (2 hours?). This is horrible. I
don't know what to do about it other than push for a fix for 7316.
Signed-off-by: Ed Santiago <santiago@redhat.com>
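
A hedged sketch of the described guard (names follow BATS conventions; this is not the actual test code):
```
# Skip unless systemd is running and the OCI runtime is crun.
if [[ ! -d /run/systemd/system ]]; then
    skip "systemd does not appear to be running"
fi
runtime=$(podman info --format '{{.Host.OCIRuntime.Name}}')
if [[ "$runtime" != "crun" ]]; then
    skip "sdnotify hangs under $runtime; crun required"
fi
```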

TomSweeneyRedHat/dev/tsweeney/knownissuetoissuetemp
Add pointer to troubleshooting in issue template
Add pointers to the Troubleshooting guide, including a new question
in the issue template displayed on GitHub asking whether the reporter
has checked the guide.
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

IPv6 default route
podman containers using IPv6 were missing the default route, breaking
deployments trying to use them.
The problem was that the default route was hardcoded to IPv4. This
change takes the podman subnet's IP family into consideration to
generate the corresponding default route.
Signed-off-by: Antonio Ojea <aojea@redhat.com>
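
A minimal Go sketch of the family check (illustrative, not the actual networking code): derive the default-route destination from the subnet's address family.
```
package main

import (
	"fmt"
	"net"
)

func main() {
	_, subnet, _ := net.ParseCIDR("fd00::/64")

	defaultRoute := "0.0.0.0/0"
	if subnet.IP.To4() == nil { // nil To4() means an IPv6 subnet
		defaultRoute = "::/0"
	}
	fmt.Println(defaultRoute) // prints ::/0 for the subnet above
}
```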

Bump k8s.io/api from 0.18.6 to 0.18.8
Bumps [k8s.io/api](https://github.com/kubernetes/api) from 0.18.6 to 0.18.8.
- [Release notes](https://github.com/kubernetes/api/releases)
- [Commits](https://github.com/kubernetes/api/compare/v0.18.6...v0.18.8)
Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

containers/dependabot/go_modules/github.com/containers/storage-1.23.0
Bump github.com/containers/storage from 1.21.2 to 1.23.0
Bumps [github.com/containers/storage](https://github.com/containers/storage) from 1.21.2 to 1.23.0.
- [Release notes](https://github.com/containers/storage/releases)
- [Changelog](https://github.com/containers/storage/blob/master/docs/containers-storage-changes.md)
- [Commits](https://github.com/containers/storage/compare/v1.21.2...v1.23.0)
Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

[CI:DOCS] Update podmanimages README.md
Updates the README.md for the contrib/podmanimages directory.
This completes the changes to answer this Buildah issue: https://github.com/containers/buildah/issues/1693
and also adds the quay.io/containers/podman images to the list of images.
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

containers/dependabot/go_modules/k8s.io/apimachinery-0.18.8
Bump k8s.io/apimachinery from 0.18.6 to 0.18.8
Bumps [k8s.io/apimachinery](https://github.com/kubernetes/apimachinery) from 0.18.6 to 0.18.8.
- [Release notes](https://github.com/kubernetes/apimachinery/releases)
- [Commits](https://github.com/kubernetes/apimachinery/compare/v0.18.6...v0.18.8)
Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

podman.service: use sdnotify
Commit 2b6dd3fb4384 set the killmode of the podman.service to the
systemd default, which ultimately led to the problem that systemd
will kill *all* processes inside the unit's cgroup, and hence kill
all containers, whenever the service is stopped.
Fix it by setting the type to sdnotify and the killmode to process.
`podman system service` will send the necessary notify messages
when NOTIFY_SOCKET is set, and unsets it right after to prevent
the backend and container runtimes from jumping in between and
sending messages as well.
Fixes: #7294
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
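
A minimal sketch of the described unit settings (systemd spells the sdnotify type as `Type=notify`; the ExecStart line is an assumption added for completeness):
```
[Service]
Type=notify
KillMode=process
ExecStart=/usr/bin/podman system service
```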

run, create: add new security-opt proc-opts
It allows customizing the options passed down to the OCI runtime for
setting up the /proc mount.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
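
A hedged usage sketch (the option values are ordinary mount flags chosen as examples; check `podman run --help` for the accepted set):
```
podman run --security-opt proc-opts=nosuid,noexec,nodev fedora cat /proc/mounts
```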

Fix hang when `path` doesn't exist
I'm not sure if this is an OS-specific issue, but on CentOS 8, if `path`
doesn't exist, this hangs while waiting to read from this socket, even
though the socket is closed by `reexec_in_user_namespace`. Switching
to a pipe fixes the problem, and pipes shouldn't be an issue since this is
Linux-specific code.
Signed-off-by: Jonathan Dieter <jonathan.dieter@spearline.com>
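
A minimal Go illustration of the pattern being swapped in (the real change lives in the rootless re-exec code, not here): the parent blocks reading a pipe and gets a clean EOF once the write end closes.
```
package main

import (
	"fmt"
	"io"
	"os"
)

func main() {
	r, w, err := os.Pipe()
	if err != nil {
		panic(err)
	}
	go func() {
		// Stand-in for the child: closing the write end reliably
		// delivers EOF to the reader, so the parent cannot hang.
		w.Close()
	}()
	_, err = io.ReadAll(r) // returns once the writer is closed
	fmt.Println("child finished:", err == nil)
}
```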

podman save use named pipe
podman save uses a named pipe as the output path instead of directly
using /dev/stdout.
Fixes #7017
Signed-off-by: Qi Wang <qiwan@redhat.com>
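
A hedged usage sketch of saving through a named pipe (paths and image name are placeholders):
```
mkfifo /tmp/image.fifo
podman save -o /tmp/image.fifo alpine &
cat /tmp/image.fifo > alpine.tar
wait
```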

Change /sys/fs/cgroup/systemd mount to rprivate
I used the wrong propagation the first time around because I forgot
that rprivate is the default propagation. Oops. Switch to
rprivate so we're using the default.
Signed-off-by: Matthew Heon <mheon@redhat.com>

Add support for setting the CIDR when using slirp4netns
This adds support for the --cidr parameter that has been supported
by slirp4netns since v0.3.0. This allows the user to change
the IP range that is used for the network inside the container.
Signed-off-by: Adis Hamzić <adis@hamzadis.com>
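
A hedged usage sketch (the `slirp4netns:cidr=` network-option spelling is an assumption about how the parameter is exposed; the CIDR is a placeholder):
```
podman run --network slirp4netns:cidr=192.168.59.0/24 alpine ip addr
```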

add xz as a recommended pkg
The xz package is required by buildah and podman when building an
image and ADD is used with a tar.xz file archive.
See https://github.com/containers/buildah/issues/2525
Signed-off-by: Job Cespedes Ortiz <jobcespedes@gmail.com>
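
A minimal Containerfile illustration of the case described (names are placeholders):
```
FROM registry.fedoraproject.org/fedora
# ADD auto-extracts local tar archives; a .tar.xz needs the xz tool
# available on the host doing the build.
ADD sources.tar.xz /opt/sources/
```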

podman-remote fixes for msi and client
Correct a small typo that sets the path on Windows via the MSI XML.
In the remote client, prompt for the SSH password when no identity or
alternate means of authentication is provided.
Signed-off-by: Brent Baude <bbaude@redhat.com>

Makefile: use full path for ginkgo
Without this change, I get:
```
ginkgo \
-r \
\
--skipPackage test/e2e,pkg/apparmor,test/endpoint,pkg/bindings,hack \
--cover \
--covermode atomic \
--coverprofile coverprofile \
--outputdir .coverage \
--tags " selinux systemd exclude_graphdriver_devicemapper seccomp" \
--succinct
/bin/sh: line 1: ginkgo: command not found
```
Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
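
A minimal sketch of the kind of fix described (variable names are assumptions, not necessarily the actual Makefile change): call ginkgo by its install path under GOBIN instead of relying on PATH.
```
GOBIN ?= $(shell go env GOPATH)/bin
GINKGO ?= $(GOBIN)/ginkgo

.PHONY: localunit
localunit:
	$(GINKGO) -r --cover --covermode atomic --succinct
```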

add event for image build
Upon image build completion, a new image-type event is written for
"build". More intricate details that build might perform, like pulling
an image, must be implemented in the respective vendored packages, and
only after libpod is split from podman.
Signed-off-by: Brent Baude <bbaude@redhat.com>

Add parameter verification for API network creation
Signed-off-by: zhangguanzhang <zhangguanzhang@qq.com>

Replace deepcopy on history results
The deepcopy in the remote history code path was throwing an uncaught
error on a type mismatch. We now do the conversion manually and fix
the type mismatch on the fly.
Signed-off-by: Brent Baude <bbaude@redhat.com>

Update nix pin with `make nixpkgs`
Also sync nix `packageOverrides` across skopeo/buildah/podman/cri-o for
utilizing local build cache.
Signed-off-by: Wong Hoi Sing Edison <hswong3i@gmail.com>

Ensure correct propagation for cgroupsv1 systemd cgroup
On cgroups v1 systems, we need to mount /sys/fs/cgroup/systemd
into the container. We were doing this with no explicit mount
propagation tag, which means that, under some circumstances, the
shared mount propagation could be chosen. Combined with the fact
that we need a mount to mask /sys/fs/cgroup/systemd/release_agent
in the container, this meant we would leak a never-ending set of
mounts under /sys/fs/cgroup/systemd/ on container restart.
Fortunately, the fix is very simple - hardcode mount propagation
to something that won't leak.
Signed-off-by: Matthew Heon <mheon@redhat.com>
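
A sketch of the resulting mount entry in OCI-spec form (illustrative, not the exact generated spec; it reflects the follow-up switch to rprivate noted earlier in this log):
```
{
  "destination": "/sys/fs/cgroup/systemd",
  "type": "bind",
  "source": "/sys/fs/cgroup/systemd",
  "options": ["bind", "rprivate"]
}
```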