Allow users to add annotations in the podman play kube command.
This PR also fixes the fact that annotations in the pod spec were
not being passed down to containers.
Fixes: https://github.com/containers/podman/issues/12968
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
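A usage sketch of the new behaviour; the flag spelling, annotation key, and file name are assumptions rather than taken from the commit:
```bash
# Assumption: the new flag is spelled --annotation and takes key=value pairs.
# Annotations set here should also be passed down to the pod's containers.
podman play kube --annotation owner=qa-team pod.yaml
```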
Add --context-dir option to podman play kube
This option was requested so that users could specify alternate
locations to find context directories for each image build. It
requires the --build option to be set.
Partial fix for: https://github.com/containers/podman/issues/12485
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
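A hedged sketch of combining the option with --build; the directory layout shown is hypothetical:
```bash
# Assumption: ./contexts/<image-name>/ holds the build context for each image
# referenced in kube.yaml; --context-dir requires --build to be set.
podman play kube --build --context-dir ./contexts kube.yaml
```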
Fix handling of tmpfs-mode for tmpfs creation in compat mode
The permissions on disk were wrong since we were not converting to
octal.
Fixes: https://github.com/containers/podman/issues/13108
[NO NEW TESTS NEEDED] Since we don't currently test using the docker
client
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
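A minimal reproduction sketch using the Docker CLI against Podman's compat socket; the socket path and mount options are assumptions:
```bash
# Assumption: `podman system service` is listening on this socket and the
# client accepts mode= among the --tmpfs options. With the fix, the mode is
# parsed as octal, so stat should report 1777 rather than a mangled value.
docker --host unix:///run/podman/podman.sock run --rm \
  --tmpfs /scratch:rw,mode=1777,size=64m alpine stat -c '%a' /scratch
```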
Set default rule at the head of device configuration
The default rule should be set at the head of the device configuration.
Otherwise, rules for user devices are overridden by the default rule, so
that any access to the user devices is denied.
Signed-off-by: Hironori Shiina <shiina.hironori@jp.fujitsu.com>
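A hypothetical check of the user-visible effect; the device and image are placeholders:
```bash
# Opening the explicitly added device exercises the device-cgroup "read" rule;
# with the default deny rule placed first, this access is no longer denied.
podman run --rm --device /dev/fuse:/dev/fuse:rwm alpine sh -c ': </dev/fuse'
```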
pprof tests are way too flaky, and are causing problems for
community contributors who don't have privs to press Re-run.
There has been no activity or interest in fixing the bug,
and it's not something I can fix. So, just disable the test.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Move secret-verify-leak Containerfile into its own directory
Secret-verify-leak is causing flakes when running tests in parallel.
This is because remote secrets are copied into the context directory to
send to the API server, and secret-verify-leak is doing a COPY * and
then checking if the temporary secret file ends up in the container or
not. Since all the temporary files are prefixed with
"podman-build-secret", this test checks if podman-build-secret is in the
image. However, when run in parallel with other tests, other temporary
podman-build-secrets might be in the context dir. Moving
secret-verify-leak into its own directory makes sure that the context
dir is used only by this one test.
Also renamed Dockerfile -> Containerfile and cleaned up unused
Containerfiles.
Signed-off-by: Ashley Cui <acui@redhat.com>
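A sketch of the kind of build the test performs; the paths and secret name are hypothetical and only illustrate why an isolated context directory helps:
```bash
# The Containerfile under test does a broad "COPY *"; building from a dedicated
# context directory means no other test's podman-build-secret* temp file can
# leak into the image.
podman build --secret id=mysecret,src=./mysecret.txt \
    -f secret-verify-leak/Containerfile secret-verify-leak/
```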
Closes: https://github.com/containers/podman/issues/13150
Signed-off-by: 😎 Mostafa Emami <mustafaemami@gmail.com>
Move all python tests to pytest
* Add configuration to add report header for python client used in tests
* Move report headers into the individual test runners vs runner.sh
Signed-off-by: Jhon Honce <jhonce@redhat.com>
Error out if the kube yaml passed to play kube has more
than one container or init container with the same name.
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
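A hypothetical YAML that should now be rejected up front; the pod name, container names, and images are placeholders:
```bash
# Two containers deliberately share the name "app"; play kube should now
# error out instead of silently producing ambiguous containers.
cat > dup-names.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: app
    image: alpine
    command: ["sleep", "100"]
  - name: app
    image: alpine
    command: ["sleep", "100"]
EOF
podman play kube dup-names.yaml
```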
libpod: pods do not use cgroups if --cgroups=disabled
Do not attempt to use cgroups with pods if cgroups are disabled.
A similar check is already in place for containers.
Closes: https://github.com/containers/podman/issues/13411
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
deps: bump to race-free `c/image` and `c/storage` along with test to verify `concurrent/parallel` builds
Parallel/concurrent builds invoked from podman used to race against each
other. This behaviour was fixed in
https://github.com/containers/storage/pull/1153 and https://github.com/containers/image/pull/1480.
The test verifies that this bug is fixed by the new race-free API.
For more background see bz 2055487, as well as
https://github.com/containers/buildah/pull/3794 and https://github.com/containers/podman/pull/13339.
Co-authored-by: Ed Santiago <santiago@redhat.com>
Signed-off-by: Aditya R <arajan@redhat.com>
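A rough reproduction sketch of the scenario the new test covers; the tag names and Containerfile are assumptions:
```bash
# Several concurrent builds from the same context used to race inside
# c/storage and c/image; with the bumped dependencies they should all succeed.
for i in 1 2 3 4; do
  podman build -q -t race-check:$i -f Containerfile . &
done
wait
```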
While resolving `workdir` we normally create the `workdir` when `stat`
fails with `ENOENT` or `ErrNotExist`. However, this does not hold when the
user explicitly specifies a `workdir` at `run` time via `--workdir`, which
tells `podman` to use the workdir only if it already exists in the
container. The same configuration is set implicitly by other `run`
mechanisms such as `podman play kube`.
The problem with an explicit `--workdir`, or the equivalent implicit config
in `podman play kube`, is that podman currently ignores the fact that the
workdir can also be a `symlink` whose `link` target is valid.
Hence this commit ensures that in such scenarios, when a `workdir` is not
found and we cannot create it, podman checks whether the `workdir` is a
`symlink`; if the link resolves successfully and the resolved target is
present in the container, we return it as is.
Docker behaves similarly.
Signed-off-by: Aditya R <arajan@redhat.com>
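A hypothetical image demonstrating the case this commit handles; the names are placeholders:
```bash
# /workdir-link is a symlink to an existing directory; an explicit --workdir
# pointing at it used to fail but should now be accepted.
cat > Containerfile <<'EOF'
FROM alpine
RUN mkdir /data && ln -s /data /workdir-link
EOF
podman build -t workdir-symlink-test .
podman run --rm --workdir /workdir-link workdir-symlink-test pwd
```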
Refactor docker-py compatibility tests
* Add which python client is being used to run tests, see "python
client" below.
* Remove redundant code from test classes
* Update/Add comments to modules and classes
======================================================= test session starts ========================================================
platform linux -- Python 3.10.0, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
python client -- DockerClient
rootdir: /home/jhonce/Projects/go/src/github.com/containers/podman
plugins: requests-mock-1.8.0
collected 33 items
test/python/docker/compat/test_containers.py ...s.............. [ 54%]
test/python/docker/compat/test_images.py ............ [ 90%]
test/python/docker/compat/test_system.py ... [100%]
Note: Follow-up PRs will verify the test results and expand the tests.
Signed-off-by: Jhon Honce <jhonce@redhat.com>
Add the names flag for pod logs
Fixes containers#13261
Signed-off-by: Xueyuan Chen <X.Chen-47@student.tudelft.nl>
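A short usage sketch; the long-option spelling and pod name are assumptions based on the subject line:
```bash
# Assumption: the flag is --names; it prefixes each log line with the
# name of the container it came from.
podman pod logs --names mypod
```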
Fixes: https://github.com/containers/podman/issues/12768
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
container-commit: support `--squash` to squash layers into one if users want.
Allow users to commit containers into a single layer.
Usage
```bash
podman container commit --squash <name>
```
Signed-off-by: Aditya R <arajan@redhat.com>
Don't log errors when removing in-use volumes if the container used --volumes-from
When removing a container created with --volumes-from pointing at a
container that was created with a built-in volume, we complain if the
original container still exists. Since this is an expected state, we
should not complain about it.
Fixes: https://github.com/containers/podman/issues/12808
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
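A hypothetical reproduction; alpine stands in here for an image with a built-in volume:
```bash
# Removing the consumer while the source container still exists should no
# longer log a spurious "volume is in use" error.
podman create --name source -v /data alpine true
podman run --name consumer --volumes-from source alpine true
podman rm consumer
```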
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Implement Podman Container Clone
podman container clone takes the ID of an existing container and creates a
specgen from the given container's config, recreating all proper namespaces
and overriding spec options like resource limits and the container name if
given in the CLI options.
This command utilizes the common function DefineCreateFlags, meaning that we
can funnel as many create options as we want into clone over time, allowing
the user to clone with as much or as little of the original config as they
want. container clone takes a second argument which is a new name and a
third argument which is an image name to use instead of the original
container's.
The currently supported flags are:
--destroy (remove the original container)
--name (new ctr name)
--cpus (sets cpu period and quota)
--cpuset-cpus
--cpu-period
--cpu-rt-period
--cpu-rt-runtime
--cpu-shares
--cpuset-mems
--memory
--run
resolves #10875
Signed-off-by: cdoern <cdoern@redhat.com>
Signed-off-by: cdoern <cbdoer23@g.holycross.edu>
Signed-off-by: cdoern <cdoern@redhat.com>
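A brief usage sketch; old-ctr, new-ctr, and the fedora image are placeholders:
```bash
podman container clone old-ctr                           # clone, keep the original
podman container clone --name new-ctr --cpus 2 old-ctr   # clone with a new name and CPU limit
podman container clone --destroy old-ctr new-ctr fedora  # rename, swap image, remove original
```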
When a test which creates a network fails, it will not remove the network.
The teardown logic should remove these networks. Since there is no --all
option for network rm, we use network prune --force.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
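The teardown boils down to something like the following:
```bash
# Remove any networks left behind by failed tests, without a confirmation prompt.
podman network prune --force
```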
kube: honor `--build=false` if specified.
`podman play kube` tries to build images even if `--build` is set to
false, so let's honor that flag and make `--build` default to `true` so it
matches the original behaviour.
Signed-off-by: Aditya R <arajan@redhat.com>
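A usage sketch of the two modes:
```bash
podman play kube kube.yaml                 # --build defaults to true
podman play kube --build=false kube.yaml   # now honored: never build, use existing images
```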
Option --url and --connection should imply --remote.
(merged from branch Romain-Geissler-1A/url-and-connection-implies-remote)
Closes #13242
Signed-off-by: Romain Geissler <romain.geissler@amadeus.com>
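A quick illustration; the socket URL is only an example value:
```bash
# --url now implies --remote, so no explicit --remote flag (or podman-remote
# binary) is needed for this to talk to the API service.
podman --url unix:///run/podman/podman.sock ps
```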
provide better error on invalid flag
Add an extra `See 'podman command --help'` line to the error output.
With this patch you now get:
```
$ podman run -h
Error: flag needs an argument: 'h' in -h
See 'podman run --help'
```
Fixes #13082
Fixes #13002
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
We're running into problems that are impossible to diagnose
because we have no idea if the SUT is using netavark or CNI.
We've previously run into similar problems with runc/crun,
or cgroups 1/2.
This adds a one-line 'echo' with important system info. Now,
when viewing a full test log, it will be possible to see these
system settings at a glance.
Signed-off-by: Ed Santiago <santiago@redhat.com>
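One way to surface the same information manually; the format path is assumed from `podman info` output:
```bash
# Assumption: the network backend is reported under host.networkBackend.
podman info --format '{{.Host.NetworkBackend}}'
```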
The CONTAINERS_CONF environment variable can be used to override the
configuration file, which is useful for testing. However, at the moment
this variable is not propagated to conmon. That means in particular, that
conmon can't propagate it back to podman when invoking its --exit-command.
The mismatch in configuration between the starting and cleaning up podman
instances can cause a variety of errors.
This patch also adds two related test cases. One checks explicitly that
the correct CONTAINERS_CONF value appears in conmon's environment. The
other checks for a possible specific impact of this bug: if we use a
nonstandard name for the runtime (even if its path is just a regular crun),
then the podman container cleanup invoked at container exit will fail.
That means a container started with -d --rm won't be correctly removed
once it completes.
Fixes #12917
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
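A rough manual check mirroring the new test; the config path and the pid lookup are illustrative assumptions:
```bash
CONTAINERS_CONF=/tmp/test-containers.conf podman run -d --rm alpine sleep 5
# conmon should now carry the same CONTAINERS_CONF in its environment,
# so the --exit-command cleanup podman sees the same configuration.
tr '\0' '\n' < /proc/$(pgrep -n conmon)/environ | grep CONTAINERS_CONF
```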
This comment refers to overriding $PODMAN although the code below does
nothing of the sort. Presumably the comment has been made obsolete by the
switch to altering containers.conf / $CONTAINERS_CONF instead.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We could remove the container running the volume plugins before
the containers whose volumes come from those plugins; this could cause
unmounting the volumes to fail because the plugin could not be
contacted.
Signed-off-by: Matthew Heon <mheon@redhat.com>
Merge the two tests to speed up testing. Both built the exact same
images.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
It looks like some descriptions have changed on the docker registry
where we had been searching for images that include 'alpine'. We are
now seeing an image in the initial list that has 'alpine' in its
description.
Signed-off-by: Brent Baude <bbaude@redhat.com>
For the since and after image filter tests, instead of using the
read-only cache of images, we just use the empty r/w store. We then
build three images that are strictly predictable.
Signed-off-by: Brent Baude <bbaude@redhat.com>
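A sketch of the filters being exercised; the image names stand in for the three predictable builds mentioned above:
```bash
# Assumption: img1, img2, img3 are the three predictable images the test builds.
podman images --filter since=img1   # images built after img1
podman images --filter after=img1   # "after" behaves like "since"
```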
enable netavark specific tests
These are copies of the CNI tests with modifications wherever
necessary.
Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
Fix checkpoint/restore pod tests
Checkpoint/restore pod tests do not run with an older runc, and now that
runc 1.1.0 appears in the repositories it was detected that the tests were
failing. This was not caught in CI because CI was not yet using runc
1.1.0.
Signed-off-by: Adrian Reber <areber@redhat.com>
Fixes: https://github.com/containers/podman/issues/12763
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>