runtime: check for pause pid existence
Check that the pause pid exists before trying to move it to a separate
scope.
Closes: https://github.com/containers/podman/issues/12065
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
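
A minimal sketch of the kind of existence check described above, assuming a Linux host; pausePidExists is an illustrative name, not podman's actual helper:

    // pausePidExists reports whether a process with the given pid is present.
    // Probing with signal 0 performs the existence/permission check without
    // delivering a signal.
    package main

    import (
        "fmt"
        "syscall"
    )

    func pausePidExists(pid int) bool {
        if pid <= 0 {
            return false
        }
        err := syscall.Kill(pid, 0)
        // ESRCH means no such process; EPERM means it exists but is not ours.
        return err == nil || err == syscall.EPERM
    }

    func main() {
        fmt.Println(pausePidExists(1)) // pid 1 always exists
    }
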
Fix systemd PID1 test
Previously this test used an ad-hoc timeout mechanism to synchronize
with output of the container ID. However, depending on runtime
conditions this may not correctly correspond with complete startup
of the systemd process. Consequently this test fails under some
conditions with an error like:
`System has not been booted with systemd as init system (PID 1). Can't
operate. Failed to connect to bus: Host is down`
Fix this by using the more appropriate `WaitContainerReady()` to match
output emitted near the end of system startup. This way, the test's
status command cannot run until systemd is fully operational.
Signed-off-by: Chris Evich <cevich@redhat.com>
remove need to download pause image
So far, the infra containers of pods required pulling down an image,
rendering pods unusable in disconnected environments. Instead, build
an image locally that uses the local pause binary.
Fixes: #10354
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Slirp4netns with ipv6 set net.ipv6.conf.default.accept_dad=0
Duplicate Address Detection (DAD) slows the ipv6 setup down by 1-2 seconds.
Since slirp4netns runs in its own namespace and is not directly routed,
we can skip this to make the ipv6 address immediately available.
We change the default so that the slirp tap interface gets the
correct value assigned and DAD is disabled for it.
Also make sure to restore the original value after slirp4netns
is ready, in case users rely on this sysctl.
Fixes #11062
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
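
A minimal sketch of the sysctl toggle described above, assuming the standard /proc/sys path and enough privilege to write it; withDadDisabled is an illustrative helper, not podman's code:

    // Save the current default, disable DAD while slirp4netns sets up its
    // tap device, then restore the original value.
    package main

    import (
        "fmt"
        "os"
    )

    const acceptDadSysctl = "/proc/sys/net/ipv6/conf/default/accept_dad"

    func withDadDisabled(setup func() error) error {
        orig, err := os.ReadFile(acceptDadSysctl)
        if err != nil {
            return err
        }
        if err := os.WriteFile(acceptDadSysctl, []byte("0"), 0o644); err != nil {
            return err
        }
        // Restore the original value once setup (slirp4netns) is ready.
        defer os.WriteFile(acceptDadSysctl, orig, 0o644)
        return setup()
    }

    func main() {
        err := withDadDisabled(func() error {
            fmt.Println("start slirp4netns here")
            return nil
        })
        if err != nil {
            fmt.Println("error:", err)
        }
    }
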
Fix a few problems in 'podman logs --tail' with journald driver
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
The following problems regarding `logs --tail` with the journald log
driver are fixed:
- One more line than a specified value is displayed.
- '--tail 0' displays all lines, while the other log drivers display nothing.
- Partial lines are not considered.
- If the journald events backend is used and a container has exited,
nothing is displayed.
Integration tests that should have detected the bugs are also fixed. The
tests are executed three times with the json-file log driver without this
fix.
Signed-off-by: Hironori Shiina <shiina.hironori@jp.fujitsu.com>
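
A minimal sketch of tail selection that covers the cases above (exact count, '--tail 0' showing nothing, partial lines grouped into their full line); logLine and its Partial field are illustrative stand-ins, not podman's types:

    package main

    import "fmt"

    // logLine is an illustrative stand-in for a journald log entry.
    type logLine struct {
        Text    string
        Partial bool // entry is a fragment of a longer line
    }

    // tailLines returns the last n logical lines, where a logical line is a
    // run of partial entries terminated by a complete one. n <= 0 returns
    // nothing, matching the behaviour of the other log drivers.
    func tailLines(lines []logLine, n int) []logLine {
        if n <= 0 {
            return nil
        }
        count := 0
        start := 0
        for i := len(lines) - 1; i >= 0; i-- {
            if !lines[i].Partial {
                count++
                if count > n {
                    // lines[i] ends the (n+1)-th logical line from the back,
                    // so the requested tail starts just after it.
                    start = i + 1
                    break
                }
            }
        }
        return lines[start:]
    }

    func main() {
        lines := []logLine{
            {Text: "first line"},
            {Text: "second ", Partial: true},
            {Text: "line"},
        }
        for _, l := range tailLines(lines, 1) {
            fmt.Print(l.Text)
        }
        fmt.Println()
    }
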
Allow 'container restore' with '--ipc host'
Trying to restore a container that was started with '--ipc host' fails
with:
Error: error creating container storage: ProcessLabel and Mountlabel must either not be specified or both specified
We already fixed this exact same error message for containers started
with '--privileged'. The previous fix was to check whether the container
to be restored is a privileged container (c.config.Privileged).
Unfortunately this does not work for containers started with '--ipc host'.
This commit changes the check for a privileged container to instead check
whether both the ProcessLabel and the MountLabel are actually set, and
only then re-uses those labels.
Signed-off-by: Adrian Reber <areber@redhat.com>
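
A minimal sketch of the changed condition, with containerConfig standing in for the real config struct; only the label check is shown, not the restore plumbing:

    // Reuse the stored SELinux labels only when both are actually set;
    // this replaces the earlier check on config.Privileged alone.
    package main

    import "fmt"

    type containerConfig struct {
        Privileged   bool
        ProcessLabel string
        MountLabel   string
    }

    func labelsToReuse(c containerConfig) (process, mount string, ok bool) {
        if c.ProcessLabel != "" && c.MountLabel != "" {
            return c.ProcessLabel, c.MountLabel, true
        }
        // Either label missing: let storage pick labels, avoiding the
        // "ProcessLabel and Mountlabel must either not be specified or
        // both specified" error.
        return "", "", false
    }

    func main() {
        c := containerConfig{MountLabel: "system_u:object_r:container_file_t:s0"}
        _, _, ok := labelsToReuse(c)
        fmt.Println("reuse labels:", ok)
    }
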
Fixes: https://github.com/containers/podman/issues/11727
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Generate Kube should not print default structs
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
If podman uses Workdir="/" or the workdir specified in the image, it
should not add it to the yaml.
If Podman finds environment variables in the image, they should not
get added to the yaml.
If the container or pod does not change SELinux settings, we should not
print seLinuxOpt{}.
If the container or pod does not change any dns options, the yaml should
not have a dnsOption={}.
If the container is not privileged, it should not have privileged=false
in the yaml.
Fixes: https://github.com/containers/podman/issues/11995
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
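
A minimal sketch of the idea, setting only non-default values so they are the only ones that reach the generated output; kubeContainer is an illustrative stand-in for the real generator types:

    // Only set fields that differ from the defaults and rely on `omitempty`
    // so untouched fields never appear in the generated output.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type kubeContainer struct {
        Name       string   `json:"name"`
        WorkingDir string   `json:"workingDir,omitempty"`
        Env        []string `json:"env,omitempty"`
        Privileged *bool    `json:"privileged,omitempty"`
    }

    func generate(name, workdir, imageWorkdir string, privileged bool) kubeContainer {
        c := kubeContainer{Name: name}
        // Skip "/" and anything simply inherited from the image.
        if workdir != "/" && workdir != imageWorkdir {
            c.WorkingDir = workdir
        }
        // Only emit privileged when it is actually true.
        if privileged {
            c.Privileged = &privileged
        }
        return c
    }

    func main() {
        out, _ := json.Marshal(generate("demo", "/", "/", false))
        fmt.Println(string(out)) // {"name":"demo"}
    }
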
tag: Support tagging manifest list instead of resolving to images
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
The following commit makes sure that when buildah tag is invoked on a
manifest list, it tags that same manifest list instead of resolving it to
an image and tagging the image.
Port of: https://github.com/containers/buildah/pull/3483
Signed-off-by: Aditya Rajan <arajan@redhat.com>
Add test for system connection
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
First a basic (connectionless) one to make sure 'add', 'ls',
and 'rm' work; then an actual one with a service; then (if
ssh to localhost is set up and works) an ssh test.
This requires a little trickery to work around the CI definition
of $PODMAN, which includes "--url /path/to/sock" and therefore
overrides podman's detection of whether to use a connection
or not.
Signed-off-by: Ed Santiago <santiago@redhat.com>
...and fix one instance where there was no check
Signed-off-by: Ed Santiago <santiago@redhat.com>
Pod Rm Infra Handling Improvements
Made changes so that if all of a pod's containers have exited and only the infra container is still running, the pod is removed.
resolves #11713
Signed-off-by: cdoern <cdoern@redhat.com>
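
A minimal sketch of the removal check, with illustrative stand-in types rather than podman's pod and container state:

    // A pod whose only running container is the infra container (everything
    // else has exited) may be removed.
    package main

    import "fmt"

    type ctr struct {
        IsInfra bool
        Running bool
    }

    func onlyInfraRunning(ctrs []ctr) bool {
        for _, c := range ctrs {
            if c.Running && !c.IsInfra {
                return false
            }
        }
        return true
    }

    func main() {
        pod := []ctr{{IsInfra: true, Running: true}, {Running: false}}
        fmt.Println("safe to remove pod:", onlyInfraRunning(pod))
    }
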
system tests: CONTAINER_* and --help: cleanup
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
A small part of this test was written in a confusing and fragile
way: it was very hard to understand, and in fact only worked
through pure luck (using 'echo $output', which emitted everything
in one long line, vs the standard quoted 'echo "$output"' which
would've kept the formatting and caused the test to pass,
incorrectly, no matter whether --remote was in the output
or not). Plus, the '$?' check in the next line would never
trigger on failure anyway, so the failure message would've
been unhelpful if the test were ever to fail.
Anyhow. Make it readable and make it work.
(Followup to #11990)
Signed-off-by: Ed Santiago <santiago@redhat.com>
On Docker this is ignored, and it should be on Podman as
well. This is documented in the man page.
Fixes: https://github.com/containers/podman/issues/12002
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
libpod: change mountpoint ownership when creating overlays on top of external rootfs
Allow changing ownership of the mountpoint created on top of an external
overlay rootfs, to support use-cases where custom --uidmap and --gidmap
are specified.
Signed-off-by: Aditya Rajan <arajan@redhat.com>
Stop using "*" to indicate the default. Add a default field to make
it more obvious and the json field more machine-usable.
Fixes: https://github.com/containers/podman/issues/12019
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
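
A minimal sketch of an explicit Default boolean in JSON output instead of a "*" marker; the connection struct and field names are illustrative:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type connection struct {
        Name    string `json:"Name"`
        URI     string `json:"URI"`
        Default bool   `json:"Default"` // explicit flag instead of a "*" suffix
    }

    func main() {
        conns := []connection{
            {Name: "prod", URI: "ssh://core@host/run/podman/podman.sock", Default: true},
            {Name: "dev", URI: "unix:///run/user/1000/podman/podman.sock"},
        }
        out, _ := json.MarshalIndent(conns, "", "  ")
        fmt.Println(string(out))
    }
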
We should only be relabeling on the first run
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
On subsequent runs, the labels should be the same, so there is no
need to relabel.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=2013548
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
system tests: socket activation: clean up
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
Multiarch folks are seeing flakes in this test. I can't reproduce
them, but I did notice that the test isn't doing the best possible
job of reporting failures nor of confirming what it purports to test.
The major fix here is to check the exit status of each curl: if we
see the flake again, that will help us track down the failure.
Other fixes are just refactoring, cleanup, and disambiguation
(using the random service name consistently).
Signed-off-by: Ed Santiago <santiago@redhat.com>
The following commit ensures no dangling mounts are left behind when we
are creating an overlay on top of an external rootfs.
Co-authored-by: Valentin Rothberg <rothberg@redhat.com>
Signed-off-by: Aditya Rajan <arajan@redhat.com>
The current code does not check early enough.
Follow-up to https://github.com/containers/podman/pull/11978
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Checkpoint/Restore test fixes
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Moving to Fedora 35 showed test failures (timeouts) in the test
"podman checkpoint and restore container with different port mappings".
The test starts a container and maps the internal port 6379 to the local
port 1234 ('-p 1234:6379') and then tries to connect to localhost:1234.
On Fedora 35 this failed and blocked the test because the container was
not yet ready. The test was trying to connect to localhost:1234 but
nothing was running there, so the error was not checkpoint-related.
Before trying to connect to the container, the test now waits for
the container to be ready.
Another problem with this test when running ginkgo in parallel was that
the port could already be in use. Now a random port is selected for each
run to decrease the chance of collisions.
Signed-off-by: Adrian Reber <areber@redhat.com>
Set targetPort to the port value in the kube yaml
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
When the targetPort is not defined, it is supposed to
be set to the port value according to the k8s docs.
Add tests for targetPort.
Update tests to be able to check the Service yaml that
is generated.
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
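
A minimal sketch of the defaulting rule, with a plain int standing in for the Kubernetes IntOrString type:

    // When targetPort is unset, fall back to the service port, matching the
    // documented Kubernetes defaulting behaviour.
    package main

    import "fmt"

    type servicePort struct {
        Port       int32
        TargetPort int32 // 0 means "not defined"
    }

    func defaultTargetPort(p servicePort) servicePort {
        if p.TargetPort == 0 {
            p.TargetPort = p.Port
        }
        return p
    }

    func main() {
        fmt.Println(defaultTargetPort(servicePort{Port: 6379}))
    }
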
If the CONTAINER_HOST env variable is set, default podman to --remote=true
Users setting CONTAINER_HOST to a path are indicating to podman that they
intend to use the remote functionality.
Fixes: https://github.com/containers/podman/issues/11196
Update man pages to document all of the environment variables.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
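
A minimal sketch of the intended default, assuming an explicit --remote flag still wins; helper and parameter names are illustrative:

    // Treat a non-empty CONTAINER_HOST as an implicit --remote=true unless
    // the user set the flag explicitly.
    package main

    import (
        "fmt"
        "os"
    )

    func remoteDefault(flagSet, flagValue bool) bool {
        if flagSet {
            return flagValue // an explicit --remote wins
        }
        return os.Getenv("CONTAINER_HOST") != ""
    }

    func main() {
        os.Setenv("CONTAINER_HOST", "ssh://core@example.com/run/podman/podman.sock")
        fmt.Println("remote:", remoteDefault(false, false))
    }
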
Test-hang fix: Wait for ready + timeout on connect.
It was observed during initial F35 testing that this test can cause Ginkgo
to "hang" by attempting to connect before redis is up and listening.
Fix this by confirming the ready-state before attempting to connect.
Also, force IPv4 and timeout on any connection fault - to allow other
tests to run.
Thanks to Adrian Reber for help on this and related fixes.
Signed-off-by: Chris Evich <cevich@redhat.com>
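
A minimal sketch of a bounded, IPv4-only connect using the Go standard library; the address and timeout are illustrative values:

    // Force IPv4 and bound the wait so a dead endpoint cannot hang the suite.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp4", "127.0.0.1:6379", 3*time.Second)
        if err != nil {
            fmt.Println("redis not reachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("connected")
    }
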
As the default protocol in k8s is TCP, don't add it to the generated
yaml when tcp is being used.
Add UDP to the protocol of the generated yaml when udp
is being used.
Add tests for this as well.
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
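
A minimal sketch of the protocol handling, with an illustrative helper rather than podman's generator code:

    // Leave the protocol field empty for TCP (the Kubernetes default) and
    // only emit it when the mapping is UDP.
    package main

    import "fmt"

    func kubeProtocol(proto string) string {
        if proto == "udp" {
            return "UDP"
        }
        // TCP is the Kubernetes default, so omit it from the yaml.
        return ""
    }

    func main() {
        fmt.Printf("tcp -> %q, udp -> %q\n", kubeProtocol("tcp"), kubeProtocol("udp"))
    }
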
Fix codespell errors
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
Along with a couple of nits found by Ed.
[NO NEW TESTS NEEDED]
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Fix panic in container create compat api
The bind and tmpfs options can be nil; we have to check that before we
try to use them.
Fixes #11961
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
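
A minimal sketch of the nil guard, with local stand-in types shaped like the compat mount options:

    // Dereference the per-type option structs only after a nil check.
    package main

    import "fmt"

    type bindOptions struct{ Propagation string }
    type tmpfsOptions struct{ SizeBytes int64 }

    type mount struct {
        Type         string
        BindOptions  *bindOptions
        TmpfsOptions *tmpfsOptions
    }

    func describe(m mount) string {
        switch m.Type {
        case "bind":
            if m.BindOptions != nil {
                return "bind propagation: " + m.BindOptions.Propagation
            }
        case "tmpfs":
            if m.TmpfsOptions != nil {
                return fmt.Sprintf("tmpfs size: %d", m.TmpfsOptions.SizeBytes)
            }
        }
        return "no per-type options set"
    }

    func main() {
        fmt.Println(describe(mount{Type: "bind"})) // nil BindOptions no longer panics
    }
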
If no entrypoint or command is set in the podman create
command, and the image command or entrypoint is being
used as the default, then do not add the image command or
entrypoint to the generated kube yaml.
Kubernetes knows to default to the image command and/or
entrypoint settings when not defined in the kube yaml.
Add and modify tests for this case.
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
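
A minimal sketch of the rule: only user-supplied values are emitted, so Kubernetes falls back to the image's ENTRYPOINT/CMD; types and names are illustrative:

    // Emit command/args in the kube yaml only when the user overrode them.
    package main

    import "fmt"

    type kubeCtr struct {
        Command []string `json:"command,omitempty"`
        Args    []string `json:"args,omitempty"`
    }

    func commandForYAML(userEntrypoint, userCmd []string) kubeCtr {
        var c kubeCtr
        if len(userEntrypoint) > 0 {
            c.Command = userEntrypoint
        }
        if len(userCmd) > 0 {
            c.Args = userCmd
        }
        return c
    }

    func main() {
        // Neither entrypoint nor command was set on `podman create`,
        // so nothing is added and the image defaults apply in k8s.
        fmt.Printf("%+v\n", commandForYAML(nil, nil))
    }
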
Kube Gen run as user/group issues
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
Removed the inclusion of RunAsUser and RunAsGroup unless the container is run with the --user flag. When building from an image,
the user will be pulled from there anyway.
resolves #11914
Signed-off-by: cdoern <cdoern@redhat.com>
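
A minimal sketch of the rule, with an illustrative security-context stand-in; only a --user override populates RunAsUser/RunAsGroup:

    // Leave RunAsUser/RunAsGroup nil unless --user was passed, so the image's
    // own USER setting applies.
    package main

    import "fmt"

    type securityContext struct {
        RunAsUser  *int64 `json:"runAsUser,omitempty"`
        RunAsGroup *int64 `json:"runAsGroup,omitempty"`
    }

    func contextFor(userFlagSet bool, uid, gid int64) securityContext {
        var sc securityContext
        if userFlagSet {
            sc.RunAsUser = &uid
            sc.RunAsGroup = &gid
        }
        return sc
    }

    func main() {
        fmt.Printf("%+v\n", contextFor(false, 0, 0)) // both nil: defer to the image
    }
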