Signed-off-by: zhangguanzhang <zhangguanzhang@qq.com>
Because a pod's network information is dictated by the infra container at creation, a container cannot be created with its own network attributes. This has been difficult for users to understand. We now return an error when a container is being created inside a pod and passes any of the following attributes:
* static IP (v4 and v6)
* static MAC
* ports -p (e.g. -p 8080:80)
* exposed ports (e.g. 222-225)
* publish ports from image -P
Signed-off-by: Brent Baude <bbaude@redhat.com>
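For illustration (a hedged sketch, not part of the original commit; pod and image names are placeholders), the network attributes now have to be given to the pod, and passing them to a member container is rejected:
    podman pod create --name web -p 8080:80         # ports belong to the pod's infra container
    podman create --pod web --name app nginx         # fine: no network attributes on the container
    podman create --pod web -p 8080:80 nginx         # now errors: ports must be set on the pod
    podman create --pod web --ip 10.88.0.42 nginx    # now errors: static IP must be set on the pod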
Unmount c/storage containers before removing them
When `podman rmi --force` is run, it will remove any containers
that depend on the image. This includes Podman containers, but
also any other c/storage users who may be using it. With Podman
containers, we use the standard Podman removal function for
containers, which handles all edge cases nicely, shutting down
running containers, ensuring they're unmounted, etc.
Unfortunately, no such convenient function exists (or can exist)
for all c/storage containers. Identifying the PID of a Buildah,
CRI-O, or Podman container works very differently for each, and
those are just the implementations under the containers org. We
can't reasonably know whether a c/storage container is *in use*
and safe for removal if it's not a Podman container.
At the very least, though, we can attempt to unmount a storage
container before removing it. If it is in use, this will fail
(probably with a not-particularly-helpful error message), but if
it is not in use but not fully cleaned up, this should make our
removing it much more robust than it normally is.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
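As a rough illustration (assuming Buildah is installed; not taken from the commit), a non-Podman c/storage container can be left behind by a build tool and will now be unmounted before removal:
    buildah from --name scratch-ctr docker.io/library/alpine   # leaves a c/storage container Podman does not manage
    podman rmi --force docker.io/library/alpine                 # now unmounts that container before removing it with the image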
Don't limit the size on /run for systemd based containers
We had a customer incident where they ran out of space on /run.
If you don't specify a size, it will still be limited to 50% of
the memory available in the cgroup the container is running in.
If the cgroup is unlimited, then /run will be limited to 50% of
the total memory on the system.
Also /run is mounted on the host as exec, so no reason for us to mount
it noexec.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
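A quick way to inspect the result (a sketch; "my-systemd-image" is a placeholder for any image whose entrypoint is systemd/init, and findmnt requires util-linux in the image):
    podman run -d --name sysd --systemd=always my-systemd-image
    podman exec sysd findmnt --target /run    # the tmpfs on /run no longer carries a size= limit and is mounted exec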
Support reloading configuration files (containers.conf, registries.conf, storage.conf) when the podman service receives SIGHUP.
Signed-off-by: Qi Wang <qiwan@redhat.com>
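For example (a hedged sketch; the socket path and flags are illustrative):
    podman system service --time=0 unix:///run/podman/podman.sock &
    kill -HUP $!    # the running service re-reads containers.conf, registries.conf and storage.conf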
Ensure pod infra containers have an exit command
This should help alleviate races where the pod is not fully
cleaned up before subsequent API calls happen.
Signed-off-by: Matthew Heon <mheon@redhat.com>
Most Libpod containers are made via `pkg/specgen/generate` which
includes code to generate an appropriate exit command which will
handle unmounting the container's storage, cleaning up the
container's network, etc. There is one notable exception: pod
infra containers, which are made entirely within Libpod and do
not touch pkg/specgen. As such, they get no cleanup process, their
network is never cleaned up, and bad things can happen.
There is good news, though - it's not that difficult to add this,
and it's done in this PR. Generally speaking, we don't allow
passing options directly to the infra container at create time,
but we do (optionally) proxy a pre-approved set of options into
it when we create it. Add ExitCommand to these options, and set
it at time of pod creation using the same code we use to generate
exit commands for normal containers.
Fixes #7103
Signed-off-by: Matthew Heon <mheon@redhat.com>
Change /sys/fs/cgroup/systemd mount to rprivate
I used the wrong propagation the first time around because I forgot
that rprivate is the default propagation. Oops. Switch to
rprivate so we're using the default.
Signed-off-by: Matthew Heon <mheon@redhat.com>
Add support for setting the CIDR when using slirp4netns
This adds support for the --cidr parameter that is supported
by slirp4netns since v0.3.0. This allows the user to change
the IP range that is used for the network inside the container.
Signed-off-by: Adis Hamzić <adis@hamzadis.com>
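A hedged usage sketch (the option-string form below is how the setting is exposed on podman run in later documentation; the exact syntax may vary by version):
    podman run --rm --network slirp4netns:cidr=192.168.200.0/24 alpine ip addr show tap0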
Upon image build completion, a new image-type event is written for "build". More intricate details that build might perform, like pulling an image, must be implemented in different vendored packages and only after libpod is split from Podman.
Fixes: #7022
Signed-off-by: Brent Baude <bbaude@redhat.com>
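To observe the new event (illustrative; run the watcher in one shell and the build in another, or background it as below):
    podman events --filter type=image &   # watch image events
    podman build -t demo-build .          # assumes a Containerfile in the current directory; completion emits an image event for "build"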
On cgroups v1 systems, we need to mount /sys/fs/cgroup/systemd
into the container. We were doing this with no explicit mount
propagation tag, which means that, under some circumstances, the
shared mount propagation could be chosen - which, combined with
the fact that we need a mount to mask
/sys/fs/cgroup/systemd/release_agent in the container, means we
would leak a never-ending set of mounts under
/sys/fs/cgroup/systemd/ on container restart.
Fortunately, the fix is very simple - hardcode mount propagation
to something that won't leak.
Signed-off-by: Matthew Heon <mheon@redhat.com>
The ListContainers API previously had a Pod parameter, which
determined if pod name was returned (but, notably, not Pod ID,
which was returned unconditionally). This was fairly confusing,
so we decided to deprecate/remove the parameter and return the
pod name unconditionally.
To do this without serious performance implications, we need to
avoid expensive JSON decodes of pod configuration in the DB. The
way our Bolt tables are structured, retrieving name given ID is
actually quite cheap, but we did not expose this via the Libpod
API. Add a new GetName API to do this.
Fixes #7214
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
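A hedged way to see the result over the REST API (the socket path is an assumption; the version prefix in the URL is optional):
    curl -s --unix-socket /run/podman/podman.sock http://d/libpod/containers/json
    # each listed container now carries the pod ID and pod name without any extra query parameter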
support outbound-addr
Fixes #6064
Signed-off-by: Bohumil Cervenka <5eraph@protonmail.com>
images: speed up lists
Listing images has shown increasing performance penalties with an
increasing number of images. Unless `--all` is specified, Podman
will filter intermediate images. Determining intermediate images
has been done by finding (and comparing!) parent images which is
expensive. We had to query the storage many times which turned it
into a bottleneck.
Instead, create a layer tree and assign one or more images to nodes that
match the images' top layer. Determining the children of an image is
now exponentially faster as we already know the child images from the
layer graph and the images using the same top layer, which may also be
considered child images based on their history.
On my system with 510 images, a rootful image list drops from 6 secs
down to 0.3 secs.
Also use the tree to compute parent nodes, and to filter intermediate
images for pruning.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
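The user-visible behaviour is unchanged; only the timing differs (sketch):
    time podman images > /dev/null    # intermediate images are filtered, now via the layer tree
    podman images --all               # still includes intermediate images as before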
fix podman logs --tail when log is bigger than pagesize
Signed-off-by: Paul Holzinger <paul.holzinger@web.de>
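The affected invocation, for reference (the container name is a placeholder):
    podman logs --tail 100 mycontainer    # previously misbehaved once the log grew past the kernel page size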
Ensure that exec errors write exit codes to the DB
In local Podman, the frontend interprets the error and exit code
given by the Exec API to determine the appropriate exit code to
set for Podman itself; special cases like a missing executable
receive special exit codes.
Exec for the remote API, however, has to do this inside Libpod
itself, as Libpod will be directly queried (via the Inspect API
for exec sessions) to get the exit code. This was done correctly
when the exec session started properly, but we did not properly
handle cases where the OCI runtime fails before the exec session
can properly start. Fixing two error-return paths that previously
did not set an exit code, so that they now do, should resolve the issue.
Fixes #6893
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
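The observable convention (assuming the usual Podman exit-code rules) is that the exec session's exit code reaches the client even when the runtime fails before the session starts, for example:
    podman exec mycontainer /no/such/binary
    echo $?    # 127: command not found inside the container; 126 would mean it could not be invoked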
Ensure WORKDIR from images is created
A recent crun change stopped the creation of the container's
working directory if it does not exist. This is arguably correct
for user-specified directories, to protect against typos; it is
definitely not correct for image WORKDIR, where the image author
definitely intended for the directory to be used.
This makes Podman create the working directory and chown it to
container root, if it does not already exist, and only if it was
specified by an image, not the user.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Wrap the inner helper in the retry function. Image pulls that fail with a retriable error will be retried, by default up to 3 times, using exponential backoff.
Signed-off-by: Qi Wang <qiwan@redhat.com>
Check if there is a pod or container and return
the appropriate error message instead of blindly
returning 'container exists' for `podman create` and
'pod exists' for `podman pod create`.
Signed-off-by: Paul Holzinger <paul.holzinger@web.de>
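For example (names are placeholders):
    podman pod create --name web
    podman create --name web alpine    # now reports that the name is already in use by a pod, not just "container exists"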
API returns 500 in case network is not found instead of 404
Signed-off-by: zhangguanzhang <zhangguanzhang@qq.com>
Some calls to `Sprintf("%s")` can be avoided by using direct string
type assertions.
Signed-off-by: Sascha Grunert <sgrunert@suse.com>
The define package under Libpod is intended to be an extremely
minimal package, including constants and very little else.
However, as a result of some legacy code, it was dragging in all
of libpod/image (and, less significantly, the util package).
Fortunately, this was just to ensure that error constants were
not duplicated, and there's nothing preventing us from
importing in the other direction and keeping libpod/define free
of dependencies.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Fix close fds of exec --preserve-fds
Fix the closing of fds from --preserve-fds to avoid operating on unrelated fds.
Signed-off-by: Qi Wang <qiwan@redhat.com>
Fix close fds of run --preserve-fds
Test flakes mentioned in #6987 might be caused by incorrect closing of file descriptors.
Fix the code that closes file descriptors for podman run, since it may close those used by other processes.
Signed-off-by: Qi Wang <qiwan@redhat.com>
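A minimal sketch of the flag's semantics (fd numbering follows the OCI runtimes: --preserve-fds N passes fds 3 through 3+N-1 into the container):
    exec 3< /etc/hostname                                          # open an extra fd in the calling shell
    podman run --rm --preserve-fds 1 alpine cat /proc/self/fd/3    # fd 3 is passed through; unrelated fds stay untouched
    exec 3<&-                                                       # close it again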
Keep the file ownership when chowning and honor the user namespace
mappings.
Closes: https://github.com/containers/podman/issues/7130
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
The logic for `podman rmi --force` includes a bit of code that
will remove Libpod containers using Libpod's container removal
logic - this ensures that they're cleanly and completely removed.
For other containers (Buildah, CRI-O, etc) we fall back to
manually removing the containers using the image from c/storage.
Unfortunately, our logic for invoking the Podman removal function
had an error, and it did not properly handle cases where we were
force-removing an image with >1 name. Force-removing such images
by ID guarantees their removal, not just an untag of a single
name; our code for identifying whether to remove containers did
not properly detect this case, so we fell through and deleted the
Podman containers as storage containers, leaving traces of them
in the Libpod DB.
Fixes #7153
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
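The case in question looks roughly like this (image and container names are placeholders):
    podman pull alpine
    podman tag alpine myalias                                   # the image now has more than one name
    podman run --name keeper alpine true
    podman rmi --force $(podman images --format '{{.ID}}' alpine | head -n1)
    # removing by ID deletes the image and its dependent container "keeper" through the Libpod removal path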
Make changes to /etc/passwd on disk for non-read only
Bind-mounting /etc/passwd into the container is problematic
because of how system utilities like `useradd` work. They want
to make a copy and then rename to try to prevent breakage; this
is, unfortunately, impossible when the file they want to rename
is a bind mount. The current behavior is fine for read-only
containers, though, because we expect useradd to fail in those
cases.
Instead of bind-mounting, we can edit /etc/passwd in the
container's rootfs. This is kind of gross, because the change
will show up in `podman diff` and similar tools, and will be
included in images made by `podman commit`. However, it's a lot
better than breaking important system tools.
Fixes #6953
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
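One way the new behaviour becomes visible (a sketch, assuming the UID passed to --user is absent from the image's /etc/passwd):
    podman run --name pwtest --user 12345:0 alpine id
    podman diff pwtest    # expected to show /etc/passwd as changed, since the entry is now written into the rootfs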
There are many use cases where you want to just mount an image
without creating a container on it. For example you might want
to just examine the content in an image after you pull it for
security analysis. Or you might want to just use the executables
in the image without running it in a container.
The image is mounted read-only since we do not want people changing
images.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
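Basic usage (rootless users need to run these inside `podman unshare`):
    mnt=$(podman image mount alpine)    # mounts the image read-only and prints the mount point
    ls "$mnt"/etc
    podman image unmount alpine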
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
When chowning we should not follow symbolic link
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
This commit handles the TODO task of breaking the Container
config into smaller sub-configs.
Signed-off-by: ldelossa <ldelossa@redhat.com>
Currently you cannot apply an AppArmor profile if you specify
--privileged. This patch allows both to be specified
simultaneously.
By default, AppArmor should be disabled if the user
specifies --privileged, but if the user specifies --security-opt apparmor=PROFILE
together with --privileged, we should do both.
Added e2e run_apparmor_test.go
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
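Illustrative usage (the profile name is a placeholder and must already be loaded on the host):
    podman run --rm --privileged --security-opt apparmor=my-profile alpine true
    # --privileged alone still disables AppArmor; adding an explicit profile now applies it as well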
Add --umask flag for create, run
--umask sets the umask inside the container. It defaults to 0022.
Co-authored-by: Daniel J Walsh <dwalsh@redhat.com>
Signed-off-by: Ashley Cui <acui@redhat.com>
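For example:
    podman run --rm --umask 0077 alpine sh -c umask    # prints 0077 instead of the 0022 default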
This was added with an earlier exec rework, and honestly is very
confusing. Podman is printing an error message, but the error had
nothing to do with Podman; it was the executable we ran inside
the container that errored, and per `podman run` convention we
should set the Podman exit code to the process's exit code and
print no error.
Signed-off-by: Matthew Heon <mheon@redhat.com>