Currently if the host shares container storage with a container
running podman, the podman inside of the container resets the
storage on the host. This can cause issues on the host, as well
as cause the podman command running the container to fail to
unmount /dev/shm.
podman run -ti --rm --privileged -v /var/lib/containers:/var/lib/containers quay.io/podman/stable podman run alpine echo hello
* unlinkat /var/lib/containers/storage/overlay-containers/a7f3c9deb0656f8de1d107e7ddff2d3c3c279c11c1635f233a0bffb16051fb2c/userdata/shm: device or resource busy
* unlinkat /var/lib/containers/storage/overlay-containers/a7f3c9deb0656f8de1d107e7ddff2d3c3c279c11c1635f233a0bffb16051fb2c/userdata/shm: device or resource busy
Since podman is volume mounting in the graphroot, it will add a flag to
/run/.containerenv to tell the podman inside of the container whether to reset storage or not.
Since the inner podman is running inside of a container, there is no reason to assume this is a
fresh boot, so if the "container" environment variable is set then skip the reset of storage.
Also added tests to make sure /run/.containerenv is working correctly.
Fixes: https://github.com/containers/podman/issues/9191
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
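As a rough Go sketch of the guard described above (illustrative function name, not the actual Podman code), checking the "container" environment variable before resetting shared storage:

```go
package main

import (
	"fmt"
	"os"
)

// shouldResetStorage sketches the guard described above: when podman is
// itself running inside a container (container engines set the "container"
// environment variable), missing graphroot state must not be treated as a
// fresh boot, so the shared storage is left alone.
func shouldResetStorage() bool {
	return os.Getenv("container") == ""
}

func main() {
	if shouldResetStorage() {
		fmt.Println("fresh boot: resetting container storage")
	} else {
		fmt.Println("running inside a container: skipping storage reset")
	}
}
```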
do not set empty $HOME
Make sure to not set an empty $HOME for containers and let it default to
"/".
https://github.com/containers/crun/pull/599 is required to fully
address #9378.
Partially-Fixes: #9378
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
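A minimal sketch of the intent (illustrative helper, not libpod's code): skip adding HOME when the value would be empty so the runtime can default it to "/".

```go
package main

import "fmt"

// addHome appends a HOME entry only when a non-empty value is known;
// an empty value is skipped so the OCI runtime can default HOME to "/".
func addHome(env []string, home string) []string {
	if home == "" {
		return env
	}
	return append(env, "HOME="+home)
}

func main() {
	fmt.Println(addHome([]string{"PATH=/usr/bin"}, ""))         // HOME left unset
	fmt.Println(addHome([]string{"PATH=/usr/bin"}, "/home/me")) // HOME=/home/me added
}
```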
Currently podman always chowns the WORKDIR to root:root.
This PR makes it return early if the WORKDIR already exists.
Fixes: https://github.com/containers/podman/issues/9387
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Use the whitespace linter and fix the reported problems.
[NO TESTS NEEDED]
Signed-off-by: Paul Holzinger <paul.holzinger@web.de>
Use the stylecheck linter and fix the reported problems.
[NO TESTS NEEDED]
Signed-off-by: Paul Holzinger <paul.holzinger@web.de>
The old copy-up implementation was very unhappy with symlinks,
which could cause containers to fail to start for unclear reasons
when a directory we wanted to copy-up contained one. Rewrite to
use the Buildah Copier, which is more recent and should be both
safer and less likely to blow up over links.
At the same time, fix a deadlock in copy-up for volumes requiring
mounting - the Mountpoint() function tried to take the
already-acquired volume lock.
Fixes #6003
Signed-off-by: Matthew Heon <mheon@redhat.com>
Implement podman secret create, inspect, ls, rm
Implement podman run/create --secret
Secrets are blobs of sensitive data.
Currently, the only secret driver supported is the file driver, which means creating a secret stores it base64-encoded, unencrypted, in a file.
After creating a secret, a user can use the --secret flag to expose the secret inside the container at /run/secrets/[secretname].
This secret will not be committed to an image on a podman commit.
Signed-off-by: Ashley Cui <acui@redhat.com>
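For illustration only, a toy file-backed secret store along the lines described above (base64-encoded, unencrypted); the type and file layout are assumptions, not Podman's actual driver:

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// fileDriver is a toy secret store: secrets are kept base64-encoded (not
// encrypted) in a single JSON file, mirroring the behavior described above.
type fileDriver struct{ path string }

func (d fileDriver) load() (map[string]string, error) {
	data := map[string]string{}
	b, err := os.ReadFile(d.path)
	if os.IsNotExist(err) {
		return data, nil
	} else if err != nil {
		return nil, err
	}
	return data, json.Unmarshal(b, &data)
}

func (d fileDriver) Store(name string, secret []byte) error {
	data, err := d.load()
	if err != nil {
		return err
	}
	data[name] = base64.StdEncoding.EncodeToString(secret)
	b, err := json.Marshal(data)
	if err != nil {
		return err
	}
	return os.WriteFile(d.path, b, 0o600)
}

func (d fileDriver) Lookup(name string) ([]byte, error) {
	data, err := d.load()
	if err != nil {
		return nil, err
	}
	enc, ok := data[name]
	if !ok {
		return nil, fmt.Errorf("no secret named %q", name)
	}
	return base64.StdEncoding.DecodeString(enc)
}

func main() {
	d := fileDriver{path: filepath.Join(os.TempDir(), "secretsdata.json")}
	if err := d.Store("mysecret", []byte("hunter2")); err != nil {
		panic(err)
	}
	v, _ := d.Lookup("mysecret")
	fmt.Printf("%s\n", v) // a container would see this at /run/secrets/mysecret
}
```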
When resolving the workdir of a container, we may need to create it unless
the user set it explicitly on the command line. Otherwise, we just do a
presence check. Unfortunately, there was a missing return that led us
to fall through into attempting to create and chown the workdir. That
caused a regression when running on a read-only root fs.
Fixes: #9230
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
A container's workdir can be specified via the CLI via `--workdir` and
via an image config with the CLI having precedence.
Since images have a tendency to specify workdirs without necessarily
shipping the paths with the root FS, make sure that Podman creates the
workdir. When specified via the CLI, do not create the path, but check
for its existence and return a human-friendly error.
NOTE: `crun` is performing a similar check that would yield exit code
127. With this change, however, Podman performs the check and yields
exit code 126. Since this is specific to `crun`, I do not consider it
to be a breaking change of Podman.
Fixes: #9040
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
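A simplified sketch of the workdir handling described in this and the related commits above; resolveWorkDir here is illustrative, not the actual libpod function:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// resolveWorkDir sketches the behavior described above: a workdir coming
// from the CLI must already exist (friendly error otherwise), while a
// workdir coming only from the image config is created on demand.
func resolveWorkDir(rootfs, workdir string, setByUser bool) error {
	resolved := filepath.Join(rootfs, workdir)

	if setByUser {
		// User asked for this path explicitly: do a presence check only.
		st, err := os.Stat(resolved)
		if err != nil {
			return fmt.Errorf("workdir %q does not exist on container rootfs", workdir)
		}
		if !st.IsDir() {
			return fmt.Errorf("workdir %q is not a directory", workdir)
		}
		return nil
	}

	// Workdir comes from the image config: create it if the image did not
	// ship the path, and return early if it already exists.
	if _, err := os.Stat(resolved); err == nil {
		return nil
	}
	return os.MkdirAll(resolved, 0o755)
}

func main() {
	root, _ := os.MkdirTemp("", "rootfs")
	defer os.RemoveAll(root)

	fmt.Println(resolveWorkDir(root, "/app", false))    // <nil>: created on demand
	fmt.Println(resolveWorkDir(root, "/missing", true)) // friendly error
}
```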
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
This implements support for mounting and unmounting volumes
backed by volume plugins. Support for actually retrieving
plugins requires a pull request to land in containers.conf and
then be vendored, and as such is not yet ready. Given
this, the code is only compile-tested. However, the code for
everything past retrieving the plugin has been written - there is
support for creating, removing, mounting, and unmounting volumes,
which should allow full functionality once the c/common PR is
merged.
A major change is the signature of the MountPoint function for
volumes, which now, by necessity, returns an error. Named volumes
managed by a plugin do not have a mountpoint we control; instead,
it is managed entirely by the plugin. As such, we need to cache
the path in the DB, and calls to retrieve it now need to access
the DB (and may fail as such).
Notably absent is support for SELinux relabelling and chowning
these volumes. Given that we don't manage the mountpoint for
these volumes, I am extremely reluctant to try and modify it - we
could easily break the plugin trying to chown or relabel it.
Also, we had no less than *5* separate implementations of
inspecting a volume floating around in pkg/infra/abi and
pkg/api/handlers/libpod. And none of them used volume.Inspect(),
the only correct way of inspecting volumes. Remove them all and
consolidate to using the correct way. Compat API is likely still
doing things the wrong way, but that is an issue for another day.
Fixes #4304
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
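A hedged sketch of the signature change described above (illustrative types, not libpod's actual ones): MountPoint now needs database access for plugin-backed volumes and can therefore fail.

```go
package main

import (
	"errors"
	"fmt"
)

// Volume is an illustrative stand-in for libpod's volume type.
type Volume struct {
	name       string
	usesPlugin bool
	localPath  string
	db         map[string]string // stand-in for the state database
}

// MountPoint now returns an error as well: for plugin-managed volumes the
// path is owned by the plugin and has to be looked up in the database,
// which may fail.
func (v *Volume) MountPoint() (string, error) {
	if !v.usesPlugin {
		return v.localPath, nil
	}
	path, ok := v.db[v.name]
	if !ok {
		return "", errors.New("no cached mountpoint for plugin volume " + v.name)
	}
	return path, nil
}

func main() {
	v := &Volume{name: "data", usesPlugin: true, db: map[string]string{"data": "/plugin/mnt/data"}}
	mp, err := v.MountPoint()
	fmt.Println(mp, err)
}
```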
Signed-off-by: zhangguanzhang <zhangguanzhang@qq.com>
Fix problems reported by staticcheck
`staticcheck` is a golang code analysis tool. https://staticcheck.io/
This commit fixes a lot of problems found in our code. Common problems are:
- unnecessary use of fmt.Sprintf
- duplicated imports with different names
- unnecessary check that a key exists before a delete call
There are still a lot of reported problems in the test files but I have
not looked at those.
Signed-off-by: Paul Holzinger <paul.holzinger@web.de>
Signed-off-by: Zhuohan Chen <chen_zhuohan@163.com>
Add support for checkpoint/restore of containers with volumes
When migrating a container with associated volumes, the content of
these volumes should be made available on the destination machine.
This patch enables container checkpoint/restore with named volumes
by including the content of volumes in the checkpoint file. On restore,
volumes associated with the container are created and their content is
restored.
The --ignore-volumes option is introduced to disable this feature.
Example:
# podman container checkpoint --export checkpoint.tar.gz <container>
The content of all volumes associated with the container is included
in `checkpoint.tar.gz`
# podman container checkpoint --export checkpoint.tar.gz --ignore-volumes <container>
The content of volumes is not included in `checkpoint.tar.gz`. This is
useful, for example, when the checkpoint/restore is performed on the
same machine.
# podman container restore --import checkpoint.tar.gz
The associated volumes will be created and their content will be
restored. Podman will exit with an error if volumes with the same
name already exist on the system or if the content of volumes is not
included in checkpoint.tar.gz.
# podman container restore --ignore-volumes --import checkpoint.tar.gz
Volumes associated with the container must already exist. Podman will
not create them or restore their content.
Signed-off-by: Radostin Stoyanov <rstoyanov@fedoraproject.org>
Instead of individual values from ContainerCheckpointOptions,
provide the options object.
This is a preparation for the next patch where one more value
of the options object is required in exportCheckpoint().
Signed-off-by: Radostin Stoyanov <rstoyanov@fedoraproject.org>
Writing to the id map fails when an extent overlaps multiple mappings
in the parent user namespace:
$ cat /proc/self/uid_map
0 1000 1
1 100000 65536
$ unshare -U sleep 100 &
[1] 1029703
$ printf "0 0 100\n" | tee /proc/$!/uid_map
0 0 100
tee: /proc/1029703/uid_map: Operation not permitted
This limitation is particularly annoying when working with rootless
containers as each container runs in the rootless user namespace, so a
command like:
$ podman run --uidmap 0:0:2 --rm fedora echo hi
Error: writing file `/proc/664087/gid_map`: Operation not permitted: OCI permission denied
would fail since the specified mapping overlaps both the first
mapping (where the user id is mapped to root) and the second extent
with the additional IDs available.
Detect such cases and automatically split the specified mapping with
the equivalent of:
$ podman run --uidmap 0:0:1 --uidmap 1:1:1 --rm fedora echo hi
hi
A fix has already been proposed for the kernel[1], but even if it is
accepted it will take time until it is available in a released kernel,
so fix it also in pkg/rootless.
[1] https://lkml.kernel.org/lkml/20201203150252.1229077-1-gscrivan@redhat.com/
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
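A rough sketch (not the pkg/rootless implementation) of the splitting step: each requested extent is broken at the boundaries of the mappings available in the parent user namespace.

```go
package main

import "fmt"

// idMap is one extent of an ID mapping: Size IDs starting at ContainerID
// map to Size IDs starting at HostID.
type idMap struct {
	ContainerID int
	HostID      int
	Size        int
}

// splitExtent breaks a requested extent into pieces so that each piece's
// "host" side ([HostID, HostID+Size)) falls entirely inside a single extent
// of the parent user namespace's own map, which is what the kernel
// currently requires.
func splitExtent(req idMap, parent []idMap) []idMap {
	var out []idMap
	remaining := req
	for remaining.Size > 0 {
		piece := remaining
		for _, p := range parent {
			start, end := p.ContainerID, p.ContainerID+p.Size
			if remaining.HostID >= start && remaining.HostID < end {
				if avail := end - remaining.HostID; piece.Size > avail {
					piece.Size = avail
				}
				break
			}
		}
		out = append(out, piece)
		remaining.ContainerID += piece.Size
		remaining.HostID += piece.Size
		remaining.Size -= piece.Size
	}
	return out
}

func main() {
	// The rootless namespace in the example above maps ID 0 in one extent
	// and IDs 1-65536 in another.
	parent := []idMap{
		{ContainerID: 0, HostID: 1000, Size: 1},
		{ContainerID: 1, HostID: 100000, Size: 65536},
	}
	// --uidmap 0:0:2 crosses the boundary between those extents and gets
	// split into the equivalent of 0:0:1 and 1:1:1.
	fmt.Println(splitExtent(idMap{ContainerID: 0, HostID: 0, Size: 2}, parent))
}
```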
When adding the HOSTNAME environment variable, only do so if it
is not already present in the spec. If it is already present, it
was likely added by the user, and we should honor their requested
value.
Fixes #8886
Signed-off-by: Matthew Heon <mheon@redhat.com>
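A small illustration of the check (illustrative helper name):

```go
package main

import (
	"fmt"
	"strings"
)

// addHostnameEnv appends HOSTNAME only when the spec's environment does not
// already carry one, so a user-provided value is honored.
func addHostnameEnv(env []string, hostname string) []string {
	for _, e := range env {
		if strings.HasPrefix(e, "HOSTNAME=") {
			return env // user already set it; keep their value
		}
	}
	return append(env, "HOSTNAME="+hostname)
}

func main() {
	fmt.Println(addHostnameEnv([]string{"HOSTNAME=custom"}, "abc123"))
	fmt.Println(addHostnameEnv([]string{"PATH=/usr/bin"}, "abc123"))
}
```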
Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
Do not check whether the specified ID is valid in the user namespace.
crun handles this case[1], so the check in Podman prevents the request
from ever getting to the OCI runtime at all.
$ podman run --user 10:0 --uidmap 0:0:1 --rm -ti fedora:33 sh -c 'id; cat /proc/self/uid_map'
uid=10(10) gid=0(root) groups=0(root),65534(nobody)
10 0 1
[1] https://github.com/containers/crun/pull/556
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Implement pod-network-reload
This adds a new command, 'podman network reload', to reload the
networks of existing containers, forcing recreation of firewall
rules after e.g. `firewall-cmd --reload` wipes them out.
Under the hood, this works by calling CNI to tear down the
existing network, then recreate it using identical settings. We
request that CNI preserve the old IP and MAC address in most
cases (where the container only had 1 IP/MAC), but there will be
some downtime inherent to the teardown/bring-up approach. The
architecture of CNI doesn't really make doing this without
downtime easy (or maybe even possible...).
At present, this only works for root Podman, and only locally.
I don't think there is much of a point to adding remote support
(this is very much a local debugging command), but I think adding
rootless support (to kill/recreate slirp4netns) could be
valuable.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Signed-off-by: Paul Holzinger <paul.holzinger@web.de>
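An outline of the teardown/bring-up flow described above; the helper signatures are hypothetical stand-ins for the real CNI calls:

```go
package main

import "fmt"

// netOpts captures the settings we ask to have preserved across the reload:
// for containers with a single interface, the old IP and MAC are requested
// again when the network is recreated.
type netOpts struct {
	StaticIP  string
	StaticMAC string
}

// reloadNetwork is a hypothetical outline of the approach described above;
// teardown and setup stand in for the real CNI teardown/setup calls.
func reloadNetwork(ctr string, current netOpts,
	teardown func(string) error, setup func(string, netOpts) error) error {

	if err := teardown(ctr); err != nil {
		return fmt.Errorf("tearing down network for %s: %w", ctr, err)
	}
	// There is unavoidable downtime between these two steps.
	return setup(ctr, current)
}

func main() {
	teardown := func(ctr string) error { fmt.Println("teardown", ctr); return nil }
	setup := func(ctr string, o netOpts) error { fmt.Println("setup", ctr, "with", o); return nil }
	_ = reloadNetwork("web", netOpts{StaticIP: "10.88.0.5", StaticMAC: "aa:bb:cc:dd:ee:ff"}, teardown, setup)
}
```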
Add containerenv information to /run/.containerenv
We have been asked to leak some information into the container
to indicate:
* The name and id of the container
* The version of podman used to launch the container
* The image name and ID the container is based on.
* Whether the container engine is running in rootless mode.
Fixes: https://github.com/containers/podman/issues/6192
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
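A sketch of generating such a file; the key names and quoting below are assumptions for illustration, not necessarily the exact format Podman writes:

```go
package main

import "fmt"

// containerEnvInfo holds the fields the commit above says should be leaked
// into the container via /run/.containerenv.
type containerEnvInfo struct {
	engine   string // podman version used to launch the container
	name, id string // container name and ID
	image    string // image name the container is based on
	imageID  string
	rootless bool // whether the engine runs rootless
}

// render produces a key="value" style body; the exact keys and quoting are
// illustrative guesses, not Podman's canonical format.
func (c containerEnvInfo) render() string {
	rootless := 0
	if c.rootless {
		rootless = 1
	}
	return fmt.Sprintf("engine=%q\nname=%q\nid=%q\nimage=%q\nimageid=%q\nrootless=%d\n",
		c.engine, c.name, c.id, c.image, c.imageID, rootless)
}

func main() {
	info := containerEnvInfo{
		engine: "podman-3.0.0", name: "demo", id: "a7f3c9de",
		image: "registry.fedoraproject.org/fedora:33", imageID: "deadbeef",
		rootless: true,
	}
	// Podman would write this content to /run/.containerenv inside the container.
	fmt.Print(info.render())
}
```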
Our users are missing certain warning messages that would
make debugging issues with Podman easier.
For example if you do a podman build with a Containerfile
that contains the SHELL directive, the directive is silently
ignored.
If you run with log-level warn you get a warning message explaining
what happened.
$ podman build --no-cache -f /tmp/Containerfile1 /tmp/
STEP 1: FROM ubi8
STEP 2: SHELL ["/bin/bash", "-c"]
STEP 3: COMMIT
--> 7a207be102a
7a207be102aa8993eceb32802e6ceb9d2603ceed9dee0fee341df63e6300882e
$ podman --log-level=warn build --no-cache -f /tmp/Containerfile1 /tmp/
STEP 1: FROM ubi8
STEP 2: SHELL ["/bin/bash", "-c"]
STEP 3: COMMIT
WARN[0000] SHELL is not supported for OCI image format, [/bin/bash -c] will be ignored. Must use `docker` format
--> 7bd96fd25b9
7bd96fd25b9f755d8a045e31187e406cf889dcf3799357ec906e90767613e95f
These messages will no longer be lost once we default to the warning level.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
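The WARN[0000] output shown above appears to come from logrus; a minimal example of how a warning-level default makes such messages visible (assumes the github.com/sirupsen/logrus module):

```go
package main

import "github.com/sirupsen/logrus"

func main() {
	// With the default level at Warn (the behavior argued for above), this
	// message reaches the user without passing --log-level=warn explicitly.
	logrus.SetLevel(logrus.WarnLevel)
	logrus.Warn("SHELL is not supported for OCI image format, it will be ignored")
	// Debug output stays hidden unless the user raises the level.
	logrus.Debug("this is only shown with --log-level=debug")
}
```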
The buildah/pkg/secrets package was moved to
containers/common/pkg/subscriptions.
Switch to using it by default.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
use container cgroups path
When looking up a container's cgroup path, parse /proc/[PID]/cgroup.
This will work across all cgroup managers and configurations and is
supported on cgroups v1 and v2.
Fixes: #8265
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
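A minimal example of that lookup; on the cgroups v2 unified hierarchy the file has a single `0::<path>` line, while on v1 there is one line per controller:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// cgroupPath parses /proc/<pid>/cgroup and returns the cgroup path of the
// process. On cgroups v2 there is a single "0::<path>" line; on v1 we fall
// back to the first entry's path.
func cgroupPath(pid string) (string, error) {
	f, err := os.Open("/proc/" + pid + "/cgroup")
	if err != nil {
		return "", err
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	first := ""
	for scanner.Scan() {
		// Each line looks like "hierarchy-ID:controllers:path".
		fields := strings.SplitN(scanner.Text(), ":", 3)
		if len(fields) != 3 {
			continue
		}
		if fields[0] == "0" && fields[1] == "" {
			return fields[2], nil // unified (v2) hierarchy
		}
		if first == "" {
			first = fields[2]
		}
	}
	if err := scanner.Err(); err != nil {
		return "", err
	}
	return first, nil
}

func main() {
	path, err := cgroupPath("self")
	fmt.Println(path, err)
}
```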
The --hostname and container name should always be added to containers.
Added some tests to make sure you can always ping the hostname and container
name from within the container.
Fixes: https://github.com/containers/podman/issues/8095
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Allow users to specify unbindable on volume command line
Switch internal mounts to rprivate to help prevent leaks.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Stop excessive wrapping of errors
Most of the builtin golang functions like os.Stat and
os.Open report errors including the file system object
path. We should not wrap these errors and include the file path
a second time, causing stuttering when the errors
get presented to the user.
This patch tries to clean up a bunch of these errors.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
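A before/after illustration of the stuttering (assumes the github.com/pkg/errors module referenced elsewhere in this log):

```go
package main

import (
	"fmt"
	"os"

	"github.com/pkg/errors"
)

func openStuttering(path string) error {
	f, err := os.Open(path)
	if err != nil {
		// os.Open already reports the path, so this prints it twice:
		// "unable to open /no/such/file: open /no/such/file: no such file or directory"
		return errors.Wrapf(err, "unable to open %s", path)
	}
	defer f.Close()
	return nil
}

func openClean(path string) error {
	f, err := os.Open(path)
	if err != nil {
		// The *os.PathError already names the file and the operation.
		return err
	}
	defer f.Close()
	return nil
}

func main() {
	fmt.Println(openStuttering("/no/such/file"))
	fmt.Println(openClean("/no/such/file"))
}
```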
fixes #7661
Signed-off-by: Andy Librian <andylibrian@gmail.com>
Add a new "image" mount type to `--mount`. The source of the mount is
the name or ID of an image. The destination is the path inside the
container. Image mounts further support an optional `rw,readwrite`
parameter which, if set to "true", makes the mount writable inside
the container. Note that no changes are propagated to the image mount
on the host (which in any case is read only).
Mounts are overlay mounts. To support read-only overlay mounts, vendor
a non-release version of Buildah.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
If the resolv.conf file is empty we provide default dns servers.
If the file does not exist we error out and don't create the
container. We should also provide the default entries in this
case. This is also what docker does.
Fixes #8089
Signed-off-by: Paul Holzinger <paul.holzinger@web.de>
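A sketch of the fallback; the default server addresses below are an assumption for illustration, not necessarily the ones Podman ships:

```go
package main

import (
	"fmt"
	"strings"
)

// defaultNameservers is an assumption for illustration; the actual defaults
// used by Podman/Docker may differ.
var defaultNameservers = []string{"8.8.8.8", "8.8.4.4"}

// nameservers returns the nameserver entries from a resolv.conf body, or
// the defaults when the file has none (e.g. it is empty), mirroring the
// behavior described above.
func nameservers(resolvConf string) []string {
	var out []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			out = append(out, fields[1])
		}
	}
	if len(out) == 0 {
		return defaultNameservers
	}
	return out
}

func main() {
	fmt.Println(nameservers(""))                       // falls back to defaults
	fmt.Println(nameservers("nameserver 192.168.1.1")) // uses the host entry
}
```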
This does not match Docker, which does not add the hostname in this
case, but it seems harmless enough.
Fixes #8095
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
When a container uses --net=host the default hostname is set to
the host's hostname. However, we were not creating any entries
in `/etc/hosts` despite having a hostname, which is incorrect.
This hostname, for Docker compat, will always be the hostname of
the host system, not the container, and will be assigned to IP
127.0.1.1 (not the standard localhost address).
Also, when `--hostname` and `--net=host` are both passed, still
use the hostname from `--hostname`, not the host's hostname (we
still use the host's hostname by default in this case if the
`--hostname` flag is not passed).
Fixes #8054
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
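A tiny sketch of the resulting entry (the helper name is illustrative):

```go
package main

import "fmt"

// hostNetworkHostsEntry builds the /etc/hosts line described above for a
// container using --net=host: the hostname is pinned to 127.0.1.1 rather
// than the standard localhost address.
func hostNetworkHostsEntry(hostname string) string {
	return fmt.Sprintf("127.0.1.1 %s\n", hostname)
}

func main() {
	fmt.Print("127.0.0.1 localhost\n")
	fmt.Print(hostNetworkHostsEntry("myhost.example.com"))
}
```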
We need to do a length check before we can access the
networkStatus slice by index to prevent a runtime panic.
Fixes #8026
Signed-off-by: Paul Holzinger <paul.holzinger@web.de>
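The pattern, in miniature (the slice type here is a stand-in for the real network status data):

```go
package main

import "fmt"

// firstIP guards the index access: indexing an empty networkStatus slice
// without a length check causes a runtime panic.
func firstIP(networkStatus []string) (string, bool) {
	if len(networkStatus) == 0 {
		return "", false // avoid index-out-of-range panic
	}
	return networkStatus[0], true
}

func main() {
	fmt.Println(firstIP(nil))
	fmt.Println(firstIP([]string{"10.88.0.2"}))
}
```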
Currently the HOME environment variable is set to /root if
the user does not override it.
Also walk the parent directories of the user's home directory
to see if it is volume mounted into the container;
if so, set HOME accordingly.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
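A sketch of the parent-directory walk (the mount list is modeled as plain destination paths for illustration):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// homeIsMounted walks the parent directories of the user's home directory
// and reports whether any of them is a volume mount destination inside the
// container.
func homeIsMounted(home string, mountDests []string) bool {
	for dir := filepath.Clean(home); ; dir = filepath.Dir(dir) {
		for _, dest := range mountDests {
			if filepath.Clean(dest) == dir {
				return true
			}
		}
		if dir == "/" {
			return false
		}
	}
}

func main() {
	mounts := []string{"/home"}
	fmt.Println(homeIsMounted("/home/dwalsh", mounts)) // true: parent /home is mounted
	fmt.Println(homeIsMounted("/root", mounts))        // false: fall back to the default
}
```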
When we create a container, we assign a cgroup parent based on
the current cgroup manager in use. This parent is only usable
with the cgroup manager the container is created with, so if the
default cgroup manager is later changed or overridden, the
container will not be able to start.
To solve this, store the cgroup manager that created the
container in container configuration, so we can guarantee a
container with a systemd cgroup parent will always be started
with systemd cgroups.
Unfortunately, this is very difficult to test in CI, due to the
fact that we hard-code cgroup manager on all invocations of
Podman in CI.
Fixes #7830
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
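An illustrative sketch of the idea (types and field names are made up, not libpod's):

```go
package main

import (
	"errors"
	"fmt"
)

// ContainerConfig is illustrative: the commit above stores the cgroup
// manager used at creation time so the container is always started with it.
type ContainerConfig struct {
	Name          string
	CgroupManager string // "systemd" or "cgroupfs"
}

func startCgroupManager(cfg ContainerConfig, runtimeDefault string) (string, error) {
	switch cfg.CgroupManager {
	case "":
		// Containers created before this change fall back to the runtime default.
		return runtimeDefault, nil
	case "systemd", "cgroupfs":
		return cfg.CgroupManager, nil
	default:
		return "", errors.New("unknown cgroup manager " + cfg.CgroupManager)
	}
}

func main() {
	mgr, _ := startCgroupManager(ContainerConfig{Name: "db", CgroupManager: "systemd"}, "cgroupfs")
	fmt.Println(mgr) // systemd, even though the runtime default changed
}
```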
We do not populate the hostname field with the IP Address
when running within a user namespace.
Fixes https://github.com/containers/podman/issues/7490
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
This commit is courtesy of
```
for f in $(git ls-files *.go | grep -v ^vendor/); do \
sed -i 's/\(errors\..*\)"Error /\1"error /' $f;
done
for f in $(git ls-files *.go | grep -v ^vendor/); do \
sed -i 's/\(errors\..*\)"Failed to /\1"failed to /' $f;
done
```
etc.
Self-reviewed using `git diff --word-diff`, found no issues.
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
In case os.Open[File], os.Mkdir[All], ioutil.ReadFile and the like
fail, the error message already contains the file name and the
operation that failed, so there is no need to wrap the error with
something like "open %s failed".
While at it:
- replace a few os.Open + ioutil.ReadAll combinations with
ioutil.ReadFile.
- replace errors.Wrapf with errors.Wrap for cases where there
are no %-style arguments.
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Check there are enough gids in the user namespace before adding
supplementary gids from /etc/group.
Follow-up for baede7cd2776b1f722dcbb65cff6228eeab5db44
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
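A toy version of the check (the user namespace is modeled as a flat count of available IDs, which is a simplification):

```go
package main

import "fmt"

// filterSupplementaryGIDs drops supplementary groups from /etc/group whose
// GID cannot be mapped into the current user namespace.
func filterSupplementaryGIDs(gids []int, availableIDs int) []int {
	var out []int
	for _, gid := range gids {
		if gid < availableIDs {
			out = append(out, gid)
		}
	}
	return out
}

func main() {
	// Only one ID mapped: supplementary groups 10 and 100 are skipped.
	fmt.Println(filterSupplementaryGIDs([]int{0, 10, 100}, 1))
}
```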
There is a risk here that if the GID does not exist
within the user namespace the container will fail to start.
This is only likely to happen in HPC environments, and I think
we should add a field to disable it for this environment.
Added a FIXME for this issue.
We currently have this problem with running a rootful container within
a user namespace; it will fail if the GID is not available.
I looked at potentially checking the user namespace that you are assigned
to, but I believe this will be very difficult to code up and to figure out.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
The kernel will not allow you to modify existing mount flags on a volume
when bind mounting it to another place. Since /sys/fs/cgroup/systemd is
mounted noexec on the host, it needs to be mounted with the same flags
in the rootless container.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Signed-off-by: Eduardo Vega <edvegavalerio@gmail.com>