Fix podman network IDs handling
The libpod network logic knows about network IDs but OCICNI
does not, so we cannot pass a network ID to OCICNI. Instead we
need to make sure we only use network names internally. This
is also important for libpod since we only store the network
names in the state; if we added an ID there, the same network
could accidentally be added twice.
Fixes #9451
Signed-off-by: Paul Holzinger <paul.holzinger@web.de>
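A minimal sketch of the idea, with hypothetical names rather than the actual libpod code: resolve any user-supplied ID to its name before it is stored in the state or handed to OCICNI.

    package network

    import (
        "fmt"
        "strings"
    )

    // resolveNetworkName maps a user-supplied reference (a name or an ID
    // prefix) to the canonical network name. Hypothetical helper; "networks"
    // is assumed to map name -> full ID.
    func resolveNetworkName(ref string, networks map[string]string) (string, error) {
        for name, id := range networks {
            if ref == name || strings.HasPrefix(id, ref) {
                // Only the name is stored and passed on to OCICNI, so the
                // same network cannot be added twice under two spellings.
                return name, nil
            }
        }
        return "", fmt.Errorf("unable to find network for %q", ref)
    }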
No header info for systemd generation
Signed-off-by: Jakub Guzik <jakubmguzik@gmail.com>
[NO TESTS NEEDED] Allow podman play kube to read yaml file from stdin
Fixes: https://github.com/containers/podman/issues/8996
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
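A minimal sketch of the behavior, assuming the conventional "-" argument selects stdin (hypothetical helper, not the actual CLI code):

    package kube

    import (
        "io"
        "os"
    )

    // readKubeYAML returns the YAML to play, reading from stdin when the
    // given path is "-".
    func readKubeYAML(path string) ([]byte, error) {
        if path == "-" {
            return io.ReadAll(os.Stdin)
        }
        return os.ReadFile(path)
    }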
Add missing params for podman-remote build
Fixes: https://github.com/containers/podman/issues/9290
Currently we still have a hard-coded --isolation=chroot for podman-remote build.
Implement the missing arguments for podman build:
--jobs, --disable-compression, --excludes
Also fixes MaxPullPushRetries and RetryDuration.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
bump go module to v3
We missed bumping the go module, so let's do it now :)
* Go code updated automatically with github.com/sirkon/go-imports-rename
* The rest updated manually via `vgrep podman/v2`
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Signed-off-by: Jakub Guzik <jakubmguzik@gmail.com>
Fix journald logs
Signed-off-by: Ashley Cui <acui@redhat.com>
When unlimited (-1) was passed to --memory-swap, podman threw a
segfault.
Fixes #9429
Signed-off-by: baude <bbaude@redhat.com>
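A minimal sketch of the guard (hypothetical helper, not the actual Podman code): -1 and 0 are handled explicitly instead of being fed into the normal size handling that caused the crash.

    package specgen

    // swapLimit converts a --memory-swap value into the pointer stored in
    // the resource limits.
    func swapLimit(value int64) *int64 {
        switch {
        case value == -1: // unlimited: pass through as-is
            v := int64(-1)
            return &v
        case value == 0: // not specified
            return nil
        default:
            return &value
        }
    }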
Instead of using the container's mountpoint as the base of the
chroot and indexing from there by the volume directory, use the
full path of what we want to copy as the base of the chroot and
copy everything in it. This resolves the bug, ends up being a bit
simpler code-wise (no string concatenation, as we already have the
full path calculated for other checks), and seems more
understandable than trying to resolve things on the destination
side of the copy-up.
Fixes #9354
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Fix an issue where copyup could fail with ENOENT
This one is rather bizarre because it triggers only on some
systems. I've included a CI test, for example, but I'm 99% sure
we use images in CI that have volumes over empty directories, and
the earlier patch to change the copy-up implementation passed CI
without complaint.
I can reproduce this on a stock F33 VM, but that's the only place
I have been able to see it.
Regardless, the issue: under certain as-yet-unidentified
environmental conditions, the copier.Get method returns ENOENT
when asked to stream a directory that is empty. Work around this
by avoiding the copy altogether in this case.
Signed-off-by: Matthew Heon <mheon@redhat.com>
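A minimal sketch of the workaround, with a hypothetical helper around the copy: an empty (or missing) source directory is skipped entirely, so the copier is never asked to stream it.

    package copyup

    import "os"

    // shouldCopyUp reports whether the image directory at srcPath has
    // anything to copy into the volume.
    func shouldCopyUp(srcPath string) (bool, error) {
        entries, err := os.ReadDir(srcPath)
        if err != nil {
            if os.IsNotExist(err) {
                return false, nil
            }
            return false, err
        }
        return len(entries) > 0, nil
    }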
podman ps --format '{{ .Size }}' requires --size option
podman ps crashes when the user specifies the '{{ .Size }}' format
without also specifying the --size option.
This PR stops the crash and prints a logrus.Error stating that
the caller should add the --size option.
Fixes: https://github.com/containers/podman/issues/9408
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
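A minimal sketch of the check, with hypothetical names and message wording: if the Go template references .Size but --size was not given, log an error instead of crashing.

    package ps

    import (
        "strings"

        "github.com/sirupsen/logrus"
    )

    // warnMissingSize guards the case where the format references .Size but
    // sizes were never computed because --size was not set.
    func warnMissingSize(format string, sizeRequested bool) {
        if strings.Contains(format, ".Size") && !sizeRequested {
            logrus.Error("the --size option must be set to display the {{.Size}} field")
        }
    }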
We received an issue with an image that was built with
entrypoint=[""]
This blows up on Podman, but works on Docker.
When we set up the OCI runtime, we should drop the
entrypoint if it is == [""].
https://github.com/containers/podman/issues/9377
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
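A minimal sketch of the fix (hypothetical helper, not the actual Podman code): an entrypoint that is exactly one empty string carries no command, so drop it before building the spec, matching Docker.

    package generate

    // normalizeEntrypoint drops an entrypoint of the form [""] so an empty
    // string never becomes argv[0] in the runtime spec.
    func normalizeEntrypoint(entrypoint []string) []string {
        if len(entrypoint) == 1 && entrypoint[0] == "" {
            return nil
        }
        return entrypoint
    }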
Do not reset storage when running inside of a container
Currently if the host shares container storage with a container
running podman, the podman inside of the container resets the
storage on the host. This can cause issues on the host, and also
causes the podman command that ran the container to fail to
unmount /dev/shm.
podman run -ti --rm --privileged -v /var/lib/containers:/var/lib/containers quay.io/podman/stable podman run alpine echo hello
* unlinkat /var/lib/containers/storage/overlay-containers/a7f3c9deb0656f8de1d107e7ddff2d3c3c279c11c1635f233a0bffb16051fb2c/userdata/shm: device or resource busy
* unlinkat /var/lib/containers/storage/overlay-containers/a7f3c9deb0656f8de1d107e7ddff2d3c3c279c11c1635f233a0bffb16051fb2c/userdata/shm: device or resource busy
Since podman volume-mounts in the graph root, it will add a flag to
/run/.containerenv to tell the podman inside the container whether to reset storage or not.
Since the inner podman is running inside of a container, there is no reason
to assume this is a fresh reboot, so if the "container" environment variable
is set then skip the reset of storage.
Also added tests to make sure /run/.containerenv is working correctly.
Fixes: https://github.com/containers/podman/issues/9191
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
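A minimal sketch of the decision, with an assumed .containerenv key name (not necessarily what Podman actually writes): skip the storage reset when we are evidently inside a container.

    package util

    import (
        "os"
        "strings"
    )

    // skipStorageReset reports whether c/storage should be left untouched:
    // either the "container" environment variable is set, or the outer
    // podman left a hint in /run/.containerenv.
    func skipStorageReset() bool {
        if os.Getenv("container") != "" {
            return true // inside a container, so this is not a fresh boot
        }
        data, err := os.ReadFile("/run/.containerenv")
        if err != nil {
            return false
        }
        return strings.Contains(string(data), "graphRootMounted=1") // assumed key
    }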
Docker always reports back the user's input, not the full
ID; we should do the same.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
When stopping a container, print rawInput
When we stop a container we print the full ID, which does not
match Docker behavior or our own start behavior. We should print
the user's rawInput when we successfully stop the container.
Fixes: https://github.com/containers/podman/issues/9386
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Fix panic in pod creation
When creating a pod with --infra-image and using an untagged image
for the infra-image (<none>/<none>), the lookup of the image's name
caused a panic.
Fixes: #9374
Signed-off-by: baude <bbaude@redhat.com>
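A minimal sketch of the guard (hypothetical helper, not the actual libpod code): an untagged image has no names, so indexing the first name unconditionally panics.

    package pods

    // infraImageName falls back to the image ID when the image carries no
    // names (shown as <none>/<none> in listings).
    func infraImageName(names []string, imageID string) string {
        if len(names) > 0 {
            return names[0]
        }
        return imageID
    }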
Currently podman always chowns the WORKDIR to root:root.
This PR returns early if the WORKDIR already exists.
Fixes: https://github.com/containers/podman/issues/9387
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
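A minimal sketch of the early return (hypothetical helper and parameters, not the actual Podman code): only a directory we create ourselves gets chowned.

    package container

    import (
        "errors"
        "io/fs"
        "os"
    )

    // ensureWorkDir leaves an existing working directory untouched and only
    // chowns one that had to be created.
    func ensureWorkDir(resolvedPath string, uid, gid int) error {
        if _, err := os.Stat(resolvedPath); err == nil {
            return nil // already present: do not chown to root:root
        } else if !errors.Is(err, fs.ErrNotExist) {
            return err
        }
        if err := os.MkdirAll(resolvedPath, 0o755); err != nil {
            return err
        }
        return os.Chown(resolvedPath, uid, gid)
    }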
Ubuntu's DNS seems a little odd and requires a fully qualified name in its tests.
Signed-off-by: baude <bbaude@redhat.com>
The logic in the e2e test for multiple network aliases indicates the
test should wait for the containerized nginx to be ready. As this may
take some time, the test does an exponential backoff starting at 2050ms.
Fix the logic by removing the `Expect(...)` call during the exponential
backoff; otherwise, the test errors immediately.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
The timestamps of some images must have changed, altering the number of
expected filtered images. The test conditions seem fragile, but for now
it's more important to get CI back.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
utils: takes the longest path on cgroup v1
Now getCgroupProcess takes the longest path on cgroup v1, instead of
complaining when the paths are different.
This should help when --cgroups=split is used on cgroup v1 and the
process cgroups look like:
$ cat /proc/self/cgroup
11:pids:/user.slice/user-0.slice/session-4.scope
10:blkio:/
9:cpuset:/
8:devices:/user.slice
7:freezer:/
6:memory:/user.slice/user-0.slice/session-4.scope
5:net_cls,net_prio:/
4:hugetlb:/
3:cpu,cpuacct:/
2:perf_event:/
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
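A minimal sketch of the selection (hypothetical helper, not the actual getCgroupProcess code): parse the cgroup file and keep the longest per-controller path, which on cgroup v1 is the most specific one (the session scope in the listing above).

    package utils

    import (
        "bufio"
        "os"
        "strings"
    )

    // longestCgroupPath reads /proc/<pid>/cgroup and returns the longest of
    // the per-controller paths.
    func longestCgroupPath(pid string) (string, error) {
        f, err := os.Open("/proc/" + pid + "/cgroup")
        if err != nil {
            return "", err
        }
        defer f.Close()

        longest := ""
        scanner := bufio.NewScanner(f)
        for scanner.Scan() {
            // Lines look like "6:memory:/user.slice/user-0.slice/session-4.scope".
            parts := strings.SplitN(scanner.Text(), ":", 3)
            if len(parts) == 3 && len(parts[2]) > len(longest) {
                longest = parts[2]
            }
        }
        return longest, scanner.Err()
    }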
When printing the JSON format, we mistakenly changed the Created field
output to be a time.Time in a different commit. This allows for an
override of the Created field to a Unix timestamp of type int64.
Fixes: #9315
Signed-off-by: baude <bbaude@redhat.com>
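A minimal sketch of the override (hypothetical type, not Podman's own): keep a time.Time internally but expose Created as an int64 Unix timestamp in the JSON output.

    package images

    import "time"

    // listEntry is the shape serialized to JSON.
    type listEntry struct {
        ID      string `json:"Id"`
        Created int64  `json:"Created"`
    }

    func newListEntry(id string, created time.Time) listEntry {
        return listEntry{
            ID:      id,
            Created: created.Unix(), // seconds since the epoch, not a time.Time
        }
    }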
Rewrite copy-up to use buildah Copier
The old copy-up implementation was very unhappy with symlinks,
which could cause containers to fail to start for unclear reasons
when a directory we wanted to copy-up contained one. Rewrite to
use the Buildah Copier, which is more recent and should be both
safer and less likely to blow up over links.
At the same time, fix a deadlock in copy-up for volumes requiring
mounting - the Mountpoint() function tried to take the
already-acquired volume lock.
Fixes #6003
Signed-off-by: Matthew Heon <mheon@redhat.com>
When doing a container inspect on a container with unlimited ulimits,
the value should be -1. But because the OCI spec requires the ulimit
value to be uint64, we were displaying the inspect values as a uint64 as
well. Simple change to display as an int64.
Fixes: #9303
Signed-off-by: baude <bbaude@redhat.com>
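A minimal sketch of the conversion (hypothetical helper, not the actual Podman code): the OCI spec stores rlimit values as uint64, so an unlimited value entered as -1 would display as 18446744073709551615 unless it is cast back to int64 for inspect output.

    package inspect

    // ulimitForInspect converts the spec's unsigned limits back to signed
    // values for display; the maximum uint64 wraps back to -1.
    func ulimitForInspect(soft, hard uint64) (int64, int64) {
        return int64(soft), int64(hard)
    }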
Support annotations from containers.conf
The service needs to be restarted in order to read the CONTAINERS_CONF file.
Not resetting this can lead to lots of flakes, since the test will use
whatever the host system has set in its containers.conf.
Fixes: https://github.com/containers/podman/issues/9286
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Currently podman does not use the annotations specified in
containers.conf. This PR fixes that.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
generate kube: do not set caps with --privileged
Do not play with capabilities for privileged containers where all
capabilities will be set implicitly.
Also, avoid the device check when running privileged since all of /dev/*
will be mounted in any case.
Fixes: #8897
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
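A minimal sketch of the short-circuit (hypothetical helper, not the actual generate-kube code): a privileged container implicitly holds every capability, so the generated SecurityContext should not list add/drop entries for it.

    package kubegen

    // capsForSecurityContext drops the add/drop lists for privileged
    // containers, which already have all capabilities implicitly.
    func capsForSecurityContext(privileged bool, add, drop []string) ([]string, []string) {
        if privileged {
            return nil, nil // privileged already grants everything
        }
        return add, drop
    }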
Implement Secrets
Implement podman secret create, inspect, ls, rm.
Implement podman run/create --secret.
Secrets are blobs of data that are sensitive.
Currently, the only secret driver supported is filedriver, which means
creating a secret stores it base64-encoded and unencrypted in a file.
After creating a secret, a user can use the --secret flag to expose the
secret inside the container at /run/secrets/[secretname].
This secret will not be committed to an image on a podman commit.
Signed-off-by: Ashley Cui <acui@redhat.com>
Fix handling of --iidfile to happen on the client side.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Implement missing arguments for podman build
Buildah bud passes a bunch more flags than podman build.
We need to hook up all of these flags to get full functionality.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
make `podman rmi` more robust
The c/storage library is subject to TOCTOUs as the central container and
image storage may be shared by many instances of many tools. As shown
in #6510, it's fairly easy to have multiple instances of Podman running
in parallel and hit image-lookup errors when removing images.
The underlying issue is the TOCTOU of removal being split into multiple
stages: first reading the local images and then removing them. Some
images may already have been removed between the two stages. To make
image removal more robust, handle errors at stage two when a given image
is no longer present in the storage.
Fixes: #6510
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
add network prune