We want to modify /etc/passwd to add an entry for the user in
question, but at the same time we don't want to require that the
container provide a /etc/passwd (a container with a single,
statically linked binary and nothing else is perfectly fine and
should be allowed, for example). We could create the passwd file
if it does not exist, but if the container doesn't provide one,
it's probably better not to make one at all. Gate changes to
/etc/passwd behind a stat() of the file in the container
returning cleanly.
Fixes #7515
Signed-off-by: Matthew Heon <mheon@redhat.com>
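
A minimal sketch of the stat()-gated approach in Go (paths and helper names are illustrative, not the actual Libpod code):

```go
package passwdsketch

import (
	"fmt"
	"os"
	"path/filepath"
)

// maybeAddPasswdEntry appends an entry to the container's /etc/passwd,
// but only if the image actually provides the file.
func maybeAddPasswdEntry(containerRoot, entry string) error {
	passwdPath := filepath.Join(containerRoot, "etc", "passwd")

	// Gate the change behind a clean stat(): a container holding a
	// single static binary and no /etc/passwd stays untouched.
	if _, err := os.Stat(passwdPath); err != nil {
		if os.IsNotExist(err) {
			return nil // no passwd file, nothing to do
		}
		return fmt.Errorf("checking %s: %w", passwdPath, err)
	}

	f, err := os.OpenFile(passwdPath, os.O_APPEND|os.O_WRONLY, 0o644)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = f.WriteString(entry + "\n")
	return err
}
```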
We added code to create a `/etc/passwd` file that we bind-mount
into the container in some cases (most notably,
`--userns=keep-id` containers). This, unfortunately, was not
persistent, so users added at runtime would be dropped on container
restart. Changing where we store the file should fix this.
Further, we want to ensure that lookups of users in the container
use the right /etc/passwd if we replaced it. There was already
logic to do this, but it only worked for user-added mounts; it's
easy enough to alter it to use our mounts as well.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
This should help alleviate races where the pod is not fully
cleaned up before subsequent API calls happen.
Signed-off-by: Matthew Heon <mheon@redhat.com>
Most Libpod containers are made via `pkg/specgen/generate`, which
includes code to generate an appropriate exit command that handles
unmounting the container's storage, cleaning up the container's
network, and so on. There is one notable exception: pod infra
containers, which are made entirely within Libpod and do not touch
pkg/specgen. As such, they had no cleanup process: the network was
never cleaned up, and bad things could happen.
There is good news, though - it's not that difficult to add this,
and it's done in this PR. Generally speaking, we don't allow
passing options directly to the infra container at create time,
but we do (optionally) proxy a pre-approved set of options into
it when we create it. Add ExitCommand to these options, and set
it at time of pod creation using the same code we use to generate
exit commands for normal containers.
Fixes #7103
Signed-off-by: Matthew Heon <mheon@redhat.com>
<MH: Fixed cherry-pick conflicts>
Signed-off-by: Matthew Heon <mheon@redhat.com>
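
Roughly the shape of the change, as a sketch (the option struct and helper are assumptions, not the real pkg/specgen/generate API):

```go
package infrasketch

// InfraContainerOptions is the pre-approved set of options proxied into
// the infra container at pod creation.
type InfraContainerOptions struct {
	// ExitCommand runs when the container exits, unmounting storage,
	// tearing down the network, and so on.
	ExitCommand []string
}

// infraExitCommand reuses the same cleanup command generated for normal
// containers: `podman container cleanup <id>`.
func infraExitCommand(podmanPath, ctrID string) InfraContainerOptions {
	return InfraContainerOptions{
		ExitCommand: []string{podmanPath, "container", "cleanup", ctrID},
	}
}
```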
Because a pod's network information is dictated by the infra container at creation, a container cannot be created with network attributes inside a pod. This has been difficult for users to understand, so we now return an error when a container being created inside a pod passes any of the following attributes:
* static IP (v4 and v6)
* static mac
* ports -p (e.g. -p 8080:80)
* exposed ports (e.g. 222-225)
* publish ports from image -P
Signed-off-by: Brent Baude <bbaude@redhat.com>
<MH: Fixed cherry pick conflicts and compile>
Signed-off-by: Matthew Heon <mheon@redhat.com>
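
A sketch of the validation this implies (field names are stand-ins for the real specgen fields):

```go
package netsketch

import "errors"

// NetConfig is an illustrative stand-in for the specgen fields involved.
type NetConfig struct {
	InPod      bool
	StaticIP   string
	StaticMAC  string
	Ports      []string // -p 8080:80
	Expose     []string // exposed port ranges
	PublishAll bool     // -P
}

var errNetOnPodContainer = errors.New(
	"network settings must be configured on the pod's infra container, not on a container joining the pod")

func validateNetConfig(c NetConfig) error {
	if !c.InPod {
		return nil
	}
	if c.StaticIP != "" || c.StaticMAC != "" || len(c.Ports) > 0 ||
		len(c.Expose) > 0 || c.PublishAll {
		return errNetOnPodContainer
	}
	return nil
}
```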
When `podman rmi --force` is run, it will remove any containers
that depend on the image. This includes Podman containers, but
also any other c/storage users who may be using it. With Podman
containers, we use the standard Podman removal function for
containers, which handles all edge cases nicely, shutting down
running containers, ensuring they're unmounted, etc.
Unfortunately, no such convenient function exists (or can exist)
for all c/storage containers. Identifying the PID of a Buildah,
CRI-O, or Podman container works completely differently for each,
and those are just the implementations under the containers org.
We can't reasonably know whether a c/storage container is *in use*
and safe for removal if it's not a Podman container.
At the very least, though, we can attempt to unmount a storage
container before removing it. If it is in use, this will fail
(probably with a not-particularly-helpful error message), but if
it is not in use but not fully cleaned up, this should make
removing it much more robust than it normally is.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
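
The removal flow, sketched (the interface stands in for the parts of c/storage used; treat the method shapes as assumptions):

```go
package rmisketch

import "fmt"

// storageStore stands in for the parts of c/storage used here.
type storageStore interface {
	Unmount(id string, force bool) (bool, error)
	DeleteContainer(id string) error
}

func removeStorageContainer(store storageStore, id string) error {
	// Best-effort unmount first: if the container is genuinely in use
	// this fails (with a not-particularly-helpful error), but if it is
	// merely not fully cleaned up, removal now succeeds.
	if _, err := store.Unmount(id, false); err != nil {
		return fmt.Errorf("unmounting container %s: %w", id, err)
	}
	return store.DeleteContainer(id)
}
```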
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
The ListContainers API previously had a Pod parameter, which
determined if pod name was returned (but, notably, not Pod ID,
which was returned unconditionally). This was fairly confusing,
so we decided to deprecate/remove the parameter and return the
pod name unconditionally.
To do this without serious performance implications, we need to
avoid expensive JSON decodes of pod configuration in the DB. The
way our Bolt tables are structured, retrieving name given ID is
actually quite cheap, but we did not expose this via the Libpod
API. Add a new GetName API to do this.
Fixes #7214
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
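
A sketch of the cheap name-by-ID lookup using bbolt (the "id-to-name" bucket is an assumed stand-in for Libpod's actual Bolt schema):

```go
package dbsketch

import (
	"errors"
	"fmt"

	bolt "go.etcd.io/bbolt"
)

// GetName resolves an ID to a name with a single key lookup, avoiding an
// expensive JSON decode of the full configuration.
func GetName(db *bolt.DB, id string) (string, error) {
	var name string
	err := db.View(func(tx *bolt.Tx) error {
		bkt := tx.Bucket([]byte("id-to-name"))
		if bkt == nil {
			return errors.New("id-to-name bucket missing")
		}
		raw := bkt.Get([]byte(id))
		if raw == nil {
			return fmt.Errorf("no pod or container with ID %s", id)
		}
		name = string(raw)
		return nil
	})
	return name, err
}
```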
A recent crun change stopped the creation of the container's
working directory if it does not exist. This is arguably correct
for user-specified directories, to protect against typos; it is
definitely not correct for image WORKDIR, where the image author
clearly intended for the directory to be used.
This makes Podman create the working directory and chown it to
container root, if it does not already exist, and only if it was
specified by an image, not the user.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
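
A sketch of the intended behavior (the helper shape is illustrative, not Podman's actual code):

```go
package workdirsketch

import (
	"fmt"
	"os"
	"path/filepath"
)

// ensureWorkDir creates the working directory only when it came from the
// image, and chowns it to container root.
func ensureWorkDir(containerRoot, workDir string, fromImage bool, uid, gid int) error {
	resolved := filepath.Join(containerRoot, workDir)
	if _, err := os.Stat(resolved); err == nil {
		return nil // already exists, nothing to do
	} else if !os.IsNotExist(err) {
		return err
	}
	if !fromImage {
		// User-specified workdir: don't create it, to protect
		// against typos; error out instead.
		return fmt.Errorf("workdir %q does not exist on container", workDir)
	}
	// Image WORKDIR: the author intended it to exist, so create it.
	if err := os.MkdirAll(resolved, 0o755); err != nil {
		return err
	}
	return os.Chown(resolved, uid, gid)
}
```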
Upon image build completion, a new image-type event is written for "build". More intricate details that build might perform, like pulling an image, must be implemented in different vendored packages, and only after libpod is split from Podman.
Fixes: #7022
Signed-off-by: Brent Baude <bbaude@redhat.com>
<MH: Fixed imports during cherry-pick>
Signed-off-by: Matt Heon <matthew.heon@pm.me>
I used the wrong propagation the first time around because I forgot
that rprivate is the default propagation. Oops. Switch to
rprivate so we're using the default.
Signed-off-by: Matthew Heon <mheon@redhat.com>
Enable pagination until the search results reach the limit, instead
of returning the registry API's default limit of 100.
BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1866153
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
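
The pagination loop, roughly (the fetch callback is a hypothetical stand-in for one registry round-trip):

```go
package searchsketch

// searchAll paginates until we have `limit` results, instead of stopping
// at the registry's default page of 100.
func searchAll(fetch func(page, pageSize int) ([]string, error), limit int) ([]string, error) {
	const pageSize = 100
	var results []string
	for page := 1; len(results) < limit; page++ {
		batch, err := fetch(page, pageSize)
		if err != nil {
			return nil, err
		}
		results = append(results, batch...)
		if len(batch) < pageSize {
			break // registry has no more results
		}
	}
	if len(results) > limit {
		results = results[:limit]
	}
	return results, nil
}
```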
Bind-mounting /etc/passwd into the container is problematic
because of how system utilities like `useradd` work. They want
to make a copy and then rename to try to prevent breakage; this
is, unfortunately, impossible when the file they want to rename
is a bind mount. The current behavior is fine for read-only
containers, though, because we expect useradd to fail in those
cases.
Instead of bind-mounting, we can edit /etc/passwd in the
container's rootfs. This is kind of gross, because the change
will show up in `podman diff` and similar tools, and will be
included in images made by `podman commit`. However, it's a lot
better than breaking important system tools.
Fixes #6953
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
If I enter a container with --userns keep-id, my UID will be present
inside of the container, but most likely my user will not be defined.
This patch will take information about the user and stick it into the
container.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Fix the closing of fds from --preserve-fds to avoid operating on unrelated fds.
Signed-off-by: Qi Wang <qiwan@redhat.com>
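
A sketch of the fix's idea, assuming fds 3 through 3+N-1 are the ones the user asked to preserve:

```go
package fdsketch

import "golang.org/x/sys/unix"

// closeExtraFds marks unrelated inherited fds close-on-exec while
// leaving stdio (0-2) and the first preserveFds user fds alone.
// The upper bound is illustrative; real code inspects /proc/self/fd.
func closeExtraFds(preserveFds, maxFd int) {
	for fd := 3 + preserveFds; fd < maxFd; fd++ {
		unix.CloseOnExec(fd)
	}
}
```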
Backported-by: Valentin Rothberg <rothberg@redhat.com>
Signed-off-by: zhangguanzhang <zhangguanzhang@qq.com>
In local Podman, the frontend interprets the error and exit code
given by the Exec API to determine the appropriate exit code to
set for Podman itself; special cases like a missing executable
receive special exit codes.
Exec for the remote API, however, has to do this inside Libpod
itself, as Libpod will be directly queried (via the Inspect API
for exec sessions) to get the exit code. This was done correctly
when the exec session started properly, but we did not properly
handle cases where the OCI runtime fails before the exec session
can properly start. Making the two error returns that would
otherwise not set an exit code actually do so should resolve the
issue.
Fixes #6893
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
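
A sketch of the exit-code mapping, assuming Docker-compatible conventions (125 for generic runtime failures, 127 for a missing executable):

```go
package execsketch

import (
	"errors"
	"os/exec"
)

const (
	execErrCodeGeneric  = 125 // assumed convention
	execErrCodeNotFound = 127
)

// exitCodeForError picks an exit code even when the OCI runtime failed
// before the exec session started, so inspect reports a real value.
func exitCodeForError(err error) int {
	if err == nil {
		return 0
	}
	if errors.Is(err, exec.ErrNotFound) {
		return execErrCodeNotFound
	}
	return execErrCodeGeneric
}
```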
Signed-off-by: Paul Holzinger <paul.holzinger@web.de>
Listing images has shown increasing performance penalties with an
increasing number of images. Unless `--all` is specified, Podman
will filter intermediate images. Determining intermediate images
has been done by finding (and comparing!) parent images which is
expensive. We had to query the storage many times, which turned it
into a bottleneck.
Instead, create a layer tree and assign one or more images to nodes that
match the images' top layer. Determining the children of an image is
now exponentially faster as we already know the child images from the
layer graph and the images using the same top layer, which may also be
considered child images based on their history.
On my system with 510 images, a rootful image list drops from 6 secs
down to 0.3 secs.
Also use the tree to compute parent nodes, and to filter intermediate
images for pruning.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
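
A simplified sketch of the layer-tree construction (types are illustrative, not the real libpod code):

```go
package treesketch

type layer struct{ ID, Parent string }

type layerNode struct {
	id       string
	children []*layerNode
	images   []string // images whose top layer is this node
}

// buildTree indexes layers once so child lookups are cheap map hits
// instead of parent comparisons across the entire store per image.
func buildTree(layers []layer) map[string]*layerNode {
	nodes := make(map[string]*layerNode, len(layers))
	for _, l := range layers {
		nodes[l.ID] = &layerNode{id: l.ID}
	}
	for _, l := range layers {
		if parent, ok := nodes[l.Parent]; ok {
			parent.children = append(parent.children, nodes[l.ID])
		}
	}
	return nodes
}
```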
Keep the file ownership when chowning, and honor the user namespace
mappings.
Closes: https://github.com/containers/podman/issues/7130
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
<MH: Fixed conflicts from cherry pick>
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
The logic for `podman rmi --force` includes a bit of code that
will remove Libpod containers using Libpod's container removal
logic - this ensures that they're cleanly and completely removed.
For other containers (Buildah, CRI-O, etc) we fall back to
manually removing the containers using the image from c/storage.
Unfortunately, our logic for invoking the Podman removal function
had an error, and it did not properly handle cases where we were
force-removing an image with >1 name. Force-removing such images
by ID guarantees their removal, not just an untag of a single
name; our code for identifying whether to remove containers did
not proper detect this case, so we fell through and deleted the
Podman containers as storage containers, leaving traces of them
in the Libpod DB.
Fixes #7153
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Add a `context.Context` to the log APIs to allow for cancelling
streaming (e.g., via `podman logs -f`). This fixes issues for
the remote API where some goroutines of the server will continue
writing and produce nothing but heat and waste CPU cycles.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
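
A sketch of context-aware streaming (the channel-based shape is an assumption, not Libpod's exact log API):

```go
package logsketch

import (
	"context"
	"io"
)

// streamLogs stops as soon as the request context is cancelled, instead
// of leaving a server goroutine writing into the void.
func streamLogs(ctx context.Context, lines <-chan string, w io.Writer) error {
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case line, ok := <-lines:
			if !ok {
				return nil // log source finished
			}
			if _, err := io.WriteString(w, line+"\n"); err != nil {
				return err
			}
		}
	}
}
```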
This was inspired by https://github.com/cri-o/cri-o/pull/3934 and
much of the logic for it is contained there. However, in brief,
a named return called "err" can cause lots of code confusion and
encourages using the wrong err variable in defer statements,
which can make them work incorrectly. Using a separate name which
is not used elsewhere makes it very clear what the defer should
be doing.
As part of this, remove a large number of named returns that were
not used anywhere. Most of them were once needed, but are no
longer necessary after previous refactors (but were accidentally
retained).
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
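
A small illustration of why the dedicated name helps (stub functions, not Podman code):

```go
package errsketch

import "errors"

func teardown()   {}
func step() error { return errors.New("boom") }

// Risky: the named return shares its name with every other "err", so an
// inner ":=" quietly shadows it and the defer tests the wrong variable.
func doRisky() (err error) {
	defer func() {
		if err != nil {
			teardown() // meant to run on failure
		}
	}()
	if true {
		err := step() // shadows the named return
		_ = err       // failure lives only in the shadow; defer sees nil
	}
	return nil
}

// Clear: a dedicated name can't collide with ordinary "err" variables,
// so the defer's intent is unambiguous.
func doClear() (retErr error) {
	defer func() {
		if retErr != nil {
			teardown()
		}
	}()
	if err := step(); err != nil {
		retErr = err
	}
	return retErr
}
```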
In `podman inspect` output for containers and pods, we include
the command that was used to create the container. This is also
used by `podman generate systemd --new` to generate unit files.
With remote podman, the generated create commands were incorrect
since we sourced directly from os.Args on the server side, which
was guaranteed to be `podman system service` (or some variant
thereof). The solution is to pass the command along in the
Specgen or PodSpecgen, where we can source it from the client's
os.Args.
This will still be VERY iffy for mixed local/remote use (doing a
`podman --remote run ...` on a remote client then a
`podman generate systemd --new` on the server on the same
container will not work, because the `--remote` flag will slip
in) but at the very least the output of `podman inspect` will be
correct. We can look into properly handling `--remote` (parsing
it out would be a little iffy) in a future PR.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
<MH: Fixed build after cherry-pick>
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
This allows us to determine if the container auto-detected that
systemd was in use, and correctly activated systemd integration.
Use this to wire up some integration tests to verify that systemd
integration is working properly.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
<MH: Fixed Compile after cherry-pick>
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Signed-off-by: Matthew Heon <mheon@redhat.com>
We were hard-coding two fields to false, instead of grabbing
their value from the pod config, which means that `pod inspect`
would always print the wrong value.
Fixes #6968
Signed-off-by: Matthew Heon <mheon@redhat.com>
We had a field for this in the inspect data, but it was never
being populated. Because of this, `podman pod inspect` stopped
showing port bindings (and other infra container settings). Add
code to populate the infra container inspect data, and add a test
to ensure we don't regress again.
Signed-off-by: Matthew Heon <mheon@redhat.com>
The code got lost in the migration to Podman 2.0; reintroduce it.
Closes: https://github.com/containers/podman/issues/6989
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
<MH: Fixed build>
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Currently you cannot apply an ApparmorProfile if you specify
--privileged. This patch allows both to be specified
simultaneously.
By default, AppArmor should be disabled if the user specifies
--privileged, but if the user specifies both --security-opt
apparmor=PROFILE and --privileged, we should honor both.
Added e2e run_apparmor_test.go
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
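
The decision logic, sketched (the default profile name here is an assumption; Podman generates its real default profile name elsewhere):

```go
package aasketch

// apparmorProfile picks the profile to apply to the container.
func apparmorProfile(privileged bool, userProfile string) string {
	if userProfile != "" {
		// An explicitly requested profile wins, even with --privileged.
		return userProfile
	}
	if privileged {
		// Default behavior: --privileged disables AppArmor confinement.
		return "unconfined"
	}
	return "containers-default" // assumed default profile name
}
```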
[2.0] events fixes
Fix a potential panic in the events endpoint when parsing the filters
parameter. Values of the filters map might be empty, so we need to
account for that instead of unconditionally accessing the first item.
Also apply a similar fix for race conditions as done in commit
f4a2d25c0fca:
Fix a race that could cause read errors to be masked. Masking
such errors is likely to report red herrings, since users don't
see that reading failed for some reason, but only that a given
event could not be found.
Another race was the handler closing the event channel, which could
lead to two kinds of panics: a double close, or a send on a closed
channel. The backend now takes care of that. Also make sure that the
backend stops working once the context has been cancelled.
Fixes: #6899
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
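
The guard for empty filter values, roughly:

```go
package filtersketch

// firstFilterValue guards against empty filter values; the old code did
// the equivalent of vals[0] unconditionally and could panic.
func firstFilterValue(filters map[string][]string, key string) (string, bool) {
	vals, ok := filters[key]
	if !ok || len(vals) == 0 {
		return "", false
	}
	return vals[0], true
}
```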
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Allow wildcards in the search term. Note that not all registries
support wildcards and it may only work with v1 registries.
Note that searching implies figuring out whether the specified search
term includes a registry. If no registry is detected, the search term
will be used against all configured "unqualified-search-registries"
in registries.conf. The parsing logic considers a registry to be the
substring before the first slash `/`.
With these changes we now not only support wildcards but arbitrary
input; ultimately it's up to the registries to decide whether they
support given input or not.
Fixes: bugzilla.redhat.com/show_bug.cgi?id=1846629
Cherry-pick-of: commit b05888a97dbb
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
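
The parsing rule, as a sketch:

```go
package termsketch

import "strings"

// splitSearchTerm treats everything before the first slash as the
// registry; with no slash, the term is tried against all configured
// unqualified-search-registries.
func splitSearchTerm(term string) (registry, remainder string) {
	if i := strings.Index(term, "/"); i != -1 {
		return term[:i], term[i+1:]
	}
	return "", term
}
```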
Fix a race that could cause read errors to be masked. Masking such
errors is likely to report red herrings, since users don't see that
reading failed for some reason, but only that a given event could
not be found.
Backport-of: commit f4a2d25c0fca
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Move the chown for newly created volumes after the spec generation so
the correct UID/GID are known.
Closes: https://github.com/containers/libpod/issues/5698
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
We weren't actually halting the goroutine that sent events, so it
would continue sending even when the channel closed (the most
notable cause being early hangup - e.g. Control-c on a curl
session). Use a context to cancel the events goroutine and stop
sending events.
Fixes #6805
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
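
A sketch of the sender-side cancellation (types are illustrative):

```go
package evsketch

import "context"

type Event struct{ Type, Status string }

// sendEvents exits when the request context is cancelled (e.g. early
// hangup via Control-C on a curl session), so nothing keeps sending
// into a channel nobody reads.
func sendEvents(ctx context.Context, in <-chan Event, out chan<- Event) {
	defer close(out)
	for {
		select {
		case <-ctx.Done():
			return
		case ev, ok := <-in:
			if !ok {
				return
			}
			select {
			case out <- ev:
			case <-ctx.Done():
				return
			}
		}
	}
}
```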
The infra/abi code for pods was written in a flawed way, assuming
that the map[string]error containing individual container errors
was only set when the global error for the pod function was nil;
that is not accurate, and we are actually *guaranteed* to set the
global error when any individual container errors. Thus, we'd
never actually include individual container errors, because the
infra code assumed that err being set meant everything failed and
no container operations were attempted.
We were originally setting the cause of the error to something
nonsensical ("container already exists"), so I made a new error
indicating that some containers in the pod failed. We can then
ignore that error when building the report on the pod operation
and actually return errors from individual containers.
Unfortunately, this exposed another weakness of the infra code,
which was discarding the container IDs. Errors from individual
containers are not guaranteed to identify which container they
came from, hence the use of map[string]error in the Pod API
functions. Rather than restructuring the structs we return from
pkg/infra, I just wrapped the returned errors with a message
including the ID of the container.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
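
A sketch of the ID-wrapping approach (names are illustrative, not the real pkg/infra structs):

```go
package podsketch

import (
	"errors"
	"fmt"
)

// ErrPodPartialFail signals that some (not all) containers failed.
var ErrPodPartialFail = errors.New("error stopping some containers")

// stopPodContainers wraps each failure with the container's ID, since
// the underlying errors don't identify which container they came from.
func stopPodContainers(ids []string, stop func(id string) error) (map[string]error, error) {
	ctrErrs := make(map[string]error)
	for _, id := range ids {
		if err := stop(id); err != nil {
			ctrErrs[id] = fmt.Errorf("container %s: %w", id, err)
		}
	}
	if len(ctrErrs) > 0 {
		return ctrErrs, ErrPodPartialFail
	}
	return nil, nil
}
```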
With the advent of Podman 2.0.0 we crossed the magical barrier of go
modules. While we were able to continue importing all packages inside
of the project, the project could not be vendored anymore from the
outside.
Move the go module to a new major version and change all imports to
github.com/containers/libpod/v2. The renaming of the imports
was done via gomove [1].
[1] https://github.com/KSubedi/gomove
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
When going through the output of `podman inspect` to try and
identify another issue, I noticed that Podman 2.0 was setting
StopSignal to 0 on containers by default. After chasing it
through the command line and SpecGen, I determined that we were
actually not setting a default in Libpod, which is strange
because I swear we used to do that. I re-added the disappeared
default and now all is well again.
Also, while I was looking for the bug in SpecGen, I found a bunch
of TODOs that have already been done. Eliminate the comments for
these.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
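
The restored default, roughly:

```go
package sigsketch

import "syscall"

// stopSignal falls back to SIGTERM (15) when no stop signal was
// configured, instead of leaving the field at 0.
func stopSignal(configured uint) uint {
	if configured == 0 {
		return uint(syscall.SIGTERM)
	}
	return configured
}
```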
Throw an error if a specified tag does not exist. Also make sure that
the user input is normalized as we already do for `podman tag`.
To prevent regressions, add a set of end-to-end and system tests.
Last but not least, update the docs and add bash completions.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Reformat the ports in inspect's network settings to be compatible with docker inspect. Closes #5380
Signed-off-by: Qi Wang <qiwan@redhat.com>
Specify the mappings in the container configuration to the storage
when creating the container so that the correct mappings can be
configured.
Regression introduced with Podman 2.0.
Closes: https://github.com/containers/libpod/issues/6735
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Signed-off-by: jgallucci32 <john.gallucci.iv@gmail.com>
This incorporates code from PRs #6591 and #6614, but does not use
event channels to detect container state; instead it uses timers
with a defined wait duration before calling t.StopAtEOF() to
ensure the last log entry is output before a container exits.
The polling interval is set to 250 milliseconds, based on the polling
interval defined in hpcloud/tail here:
https://github.com/hpcloud/tail/blob/v1.0.0/watch/polling.go#L117
Co-authored-by: Qi Wang <qiwan@redhat.com>
Signed-off-by: jgallucci32 <john.gallucci.iv@gmail.com>
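
A sketch of the approach using hpcloud/tail as the commits describe (the exited channel is a hypothetical signal; the library calls shown exist in hpcloud/tail v1, to the best of my knowledge):

```go
package tailsketch

import (
	"fmt"
	"time"

	"github.com/hpcloud/tail"
)

// followLog drains the log to EOF after the container exits, waiting
// one poll cycle (250ms, hpcloud/tail's polling interval) so the final
// lines are flushed before stopping.
func followLog(logPath string, exited <-chan struct{}) error {
	t, err := tail.TailFile(logPath, tail.Config{
		Follow: true,
		Poll:   true, // inotify proved unreliable with many watchers
	})
	if err != nil {
		return err
	}
	go func() {
		<-exited
		time.Sleep(250 * time.Millisecond)
		t.StopAtEOF() // read through EOF, then stop
	}()
	for line := range t.Lines {
		fmt.Println(line.Text)
	}
	return nil
}
```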
When multiple connections are monitoring events via the remote API, the inotify watcher in the hpcloud library seems unable to deliver events consistently. Switching from inotify to polling seems to clear this up.
Fixes: #6664
Signed-off-by: Brent Baude <bbaude@redhat.com>
As part of APIv2 Attach, we need to be able to attach to freshly
created containers (in ContainerStateConfigured). This isn't
something Libpod is interested in supporting, so we use Init() to
get the container into ContainerStateCreated, in which attach is
possible. Problem: Init() will fail if dependencies are not
started, so a fresh container in a fresh pod will fail. The
simplest solution is to extend the existing recursive start code
from Start() to Init(), allowing dependency containers to be
started when we initialize the container (optionally, controlled
via bool).
Also, update some comments in container_api.go to make it more
clear how some of our major API calls work.
Fixes #6646
Signed-off-by: Matthew Heon <mheon@redhat.com>
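
A sketch of the recursive-start extension (the interface abstracts what Libpod's Container provides; names are illustrative):

```go
package initsketch

import "fmt"

// container abstracts the pieces of Libpod's Container we need here.
type container interface {
	ID() string
	Running() bool
	Dependencies() []container
	Start(recursive bool) error
	Init() error
}

// initContainer optionally starts stopped dependencies first, so a
// fresh container in a fresh pod can be initialized for attach.
func initContainer(c container, recursive bool) error {
	if recursive {
		for _, dep := range c.Dependencies() {
			if dep.Running() {
				continue
			}
			if err := dep.Start(true); err != nil {
				return fmt.Errorf("starting dependency %s: %w", dep.ID(), err)
			}
		}
	}
	return c.Init()
}
```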
Do not share container log driver for exec