Fix up errors found by codespell
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Add global option --runtime-flags
Add the global option --runtime-flags for passing options to the container runtime.
Signed-off-by: Qi Wang <qiwan@redhat.com>
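
A usage sketch (the option value here is hypothetical; how values map onto the runtime's command line is up to the runtime): `podman --runtime-flags debug run fedora true` would hand `debug` through to the configured OCI runtime's invocation.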
When joining an existing container user namespace, read the existing
mappings so the storage can be created with the correct ownership.
Closes: https://github.com/containers/podman/issues/7547
Signed-off-by: Giuseppe Scrivano <giuseppe@scrivano.org>
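
A minimal sketch of the idea (not Podman's actual code): the mappings of an existing user namespace can be read back from `/proc/<pid>/uid_map` and `/proc/<pid>/gid_map`, one `inside outside length` triple per line.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
)

// idMap mirrors one line of /proc/<pid>/uid_map: an ID inside the
// namespace, the corresponding ID outside it, and the range length.
type idMap struct {
	ContainerID, HostID, Size int
}

func readIDMap(path string) ([]idMap, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var maps []idMap
	s := bufio.NewScanner(f)
	for s.Scan() {
		var m idMap
		if _, err := fmt.Sscanf(s.Text(), "%d %d %d", &m.ContainerID, &m.HostID, &m.Size); err != nil {
			return nil, err
		}
		maps = append(maps, m)
	}
	return maps, s.Err()
}

func main() {
	// "self" is illustrative; for a joined container you would use the
	// PID of a process already inside the target user namespace.
	maps, err := readIDMap("/proc/self/uid_map")
	if err != nil {
		panic(err)
	}
	fmt.Println(maps)
}
```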
Most Libpod containers are made via `pkg/specgen/generate` which
includes code to generate an appropriate exit command which will
handle unmounting the container's storage, cleaning up the
container's network, etc. There is one notable exception: pod
infra containers, which are made entirely within Libpod and do
not touch pkg/specgen. As such, they get no cleanup process, their
network is never cleaned up, and bad things can happen.
There is good news, though - it's not that difficult to add this,
and it's done in this PR. Generally speaking, we don't allow
passing options directly to the infra container at create time,
but we do (optionally) proxy a pre-approved set of options into
it when we create it. Add ExitCommand to these options, and set
it at time of pod creation using the same code we use to generate
exit commands for normal containers.
Fixes #7103
Signed-off-by: Matthew Heon <mheon@redhat.com>
A recent crun change stopped the creation of the container's
working directory if it does not exist. This is arguably correct
for user-specified directories, to protect against typos; it is
definitely not correct for an image's WORKDIR, which the image
author clearly intended to be used.
This makes Podman create the working directory and chown it to
container root, if it does not already exist, and only if it was
specified by an image, not the user.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
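
A minimal sketch of the behavior described above (`workdir`, `rootUID`, and `rootGID` are illustrative names, not Podman's):

```go
package main

import "os"

// ensureWorkDir creates the image-specified working directory if it is
// missing and chowns it to the container's root user. User-specified
// workdirs are deliberately left alone, so typos still fail loudly.
func ensureWorkDir(workdir string, rootUID, rootGID int) error {
	if _, err := os.Stat(workdir); err == nil {
		return nil // already exists; leave ownership untouched
	} else if !os.IsNotExist(err) {
		return err
	}
	if err := os.MkdirAll(workdir, 0o755); err != nil {
		return err
	}
	return os.Chown(workdir, rootUID, rootGID)
}

func main() {
	_ = ensureWorkDir("/tmp/example-workdir", 0, 0)
}
```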
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
--umask sets the umask inside the container.
It defaults to 0022.
Co-authored-by: Daniel J Walsh <dwalsh@redhat.com>
Signed-off-by: Ashley Cui <acui@redhat.com>
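
Usage sketch: `podman run --rm --umask 0077 fedora sh -c umask` should print `0077`, while omitting the flag leaves the `0022` default in place.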
Add support for -v overlay volume mounts in podman.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Signed-off-by: Qi Wang <qiwan@redhat.com>
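
Usage sketch: `podman run -v /var/lib/data:/data:O fedora touch /data/scratch` mounts the host directory as an overlay, so the write lands in the overlay's upper layer rather than in /var/lib/data on the host.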
Do not pass network-specific options through the network namespace.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
In `podman inspect` output for containers and pods, we include
the command that was used to create the container. This is also
used by `podman generate systemd --new` to generate unit files.
With remote podman, the generated create commands were incorrect
since we sourced directly from os.Args on the server side, which
was guaranteed to be `podman system service` (or some variant
thereof). The solution is to pass the command along in the
Specgen or PodSpecgen, where we can source it from the client's
os.Args.
This will still be VERY iffy for mixed local/remote use (doing a
`podman --remote run ...` on a remote client then a
`podman generate systemd --new` on the server on the same
container will not work, because the `--remote` flag will slip
in) but at the very least the output of `podman inspect` will be
correct. We can look into properly handling `--remote` (parsing
it out would be a little iffy) in a future PR.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
If I enter a container with --userns keep-id, my UID will be present
inside of the container, but most likely my user will not be defined.
This patch will take information about the user and stick it into the
container.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
--sdnotify container|conmon|ignore
With "conmon", we send the MAINPID, and clear the NOTIFY_SOCKET so the OCI
runtime doesn't pass it into the container. We also advertise "ready" when the
OCI runtime finishes, to mark the service as ready.
With "container", we send the MAINPID, and leave the NOTIFY_SOCKET so the OCI
runtime passes it into the container for initialization, and let the container advertise further metadata.
This is the default, and is closest to podman's past behavior.
The "ignore" option removes NOTIFY_SOCKET from the environment, so neither podman nor
any child processes will talk to systemd.
This removes the need for hardcoded CID and PID files in the command line, and
the PIDFile directive, as the pid is advertised directly through sd-notify.
Signed-off-by: Joseph Gooch <mrwizard@dok.org>
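
In systemd terms (a sketch, not generated output): a unit whose `ExecStart=` runs `podman run --sdnotify=conmon ...` can declare `Type=notify` and drop both `PIDFile=` and the conmon PID-file plumbing, because the MAINPID and the ready notification arrive over the notify socket.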
Add --tz flag to create, run
The --tz flag sets the timezone inside the container.
It can be set to an IANA timezone as well as `local` to match the host machine.
Signed-off-by: Ashley Cui <acui@redhat.com>
With the advent of Podman 2.0.0 we crossed the magical barrier of Go
modules. While we were able to continue importing all packages inside
of the project, the project could not be vendored anymore from the
outside.
Move the Go module to a new major version and change all imports to
`github.com/containers/libpod/v2`. The renaming of the imports
was done via `gomove` [1].
[1] https://github.com/KSubedi/gomove
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
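
A sketch of what the rename means for consumers (the subpackage and symbol below are one example; every import gains the `/v2` path segment):

```go
package main

import (
	"fmt"

	// Before the move: "github.com/containers/libpod/libpod/define"
	"github.com/containers/libpod/v2/libpod/define"
)

func main() {
	// The consumer's go.mod correspondingly requires
	// github.com/containers/libpod/v2 rather than the unversioned path.
	fmt.Println(define.ErrNoSuchCtr)
}
```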
container: move volume chown after spec generation
Move the chown for newly created volumes after the spec generation so
the correct UID/GID are known.
Closes: https://github.com/containers/libpod/issues/5698
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
When running under systemd there is no need to create yet another
cgroup for the container.
With conmon-delegated, the current cgroup will be split into two
sub-cgroups:
- supervisor
- container
The supervisor cgroup will hold conmon and the podman process, while
the container cgroup is used by the OCI runtime (using the cgroupfs
backend).
Closes: https://github.com/containers/libpod/issues/6400
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
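
Sketched layout (path is illustrative) relative to the unit's cgroup:

    .../session.scope/supervisor   <- conmon and the podman process
    .../session.scope/container    <- OCI runtime payload (cgroupfs backend)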
Add --preservefds to podman run. Closes: https://github.com/containers/libpod/issues/6458
Signed-off-by: Qi Wang <qiwan@redhat.com>
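
Usage sketch (keeping the commit's spelling of the flag; the value is the count of extra descriptors): something like `podman run --preservefds 2 ...` would keep two additional file descriptors (fds 3 and 4) open for the container process.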
Do not share container log driver for exec
When the container uses journald logging, we don't want to
automatically use the same driver for its exec sessions. If we do
we will pollute the journal (particularly in the case of
healthchecks) with large amounts of undesired logs. Instead,
force exec session logs to file for now; we can add a log-driver
flag later (we'll probably want to add a `podman logs` command
that reads exec session logs at the same time).
As part of this, add support for the new 'none' logs driver in
Conmon. It will be the default log driver for exec sessions, and
can be optionally selected for containers.
Great thanks to Joe Gooch (mrwizard@dok.org) for adding support
to Conmon for a null log driver, and wiring it in here.
Fixes #6555
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
We initially believed that implementing this required support for
restarting containers after reboot, but this is not the case.
The unless-stopped restart policy acts identically to the always
restart policy except in cases related to reboot (which we do not
support yet), but it does not require that support for us to
implement it.
The changes themselves are quite simple: we need a new restart policy
constant, we need to remove existing checks that block creation
of containers when unless-stopped was used, and we need to update
the manpages.
Fixes #6508
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
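
Usage sketch: `podman run -d --restart unless-stopped nginx`; per the note above, until reboot support lands this behaves identically to `--restart always`.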
Add an `--infra-conmon-pidfile` flag to `podman-pod-create` to write the
infra container's conmon process ID to a specified path. Several
container sub-commands already support `--conmon-pidfile` which is
especially helpful to allow for systemd to access and track the conmon
processes. This allows for easily tracking the conmon process of a
pod's infra container.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
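
Usage sketch (the path is illustrative): `podman pod create --name mypod --infra-conmon-pidfile /run/mypod-infra-conmon.pid`, which a systemd unit can then reference via `PIDFile=` to track the infra container's conmon process.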
Add a `CreateCommand` field to the pod config which includes the entire
`os.Args` at pod-creation. Similar to the already existing field in a
container config, we need this information to properly generate generic
systemd unit files for pods. It's a prerequisite to support the `--new`
flag for pods.
Also add the `CreateCommand` to the pod-inspect data, which can come in
handy for debugging, general inspection and certainly for the tests that
are added along with the other changes.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
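
Inspection sketch (field placement is an assumption): `podman pod inspect --format '{{ .CreateCommand }}' mypod` should print the full `os.Args` recorded at pod creation.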
* Support the `X-Registry-Auth` http-request header.
* The content of the header is a base64-encoded JSON payload which can
either be a single auth config or a map of auth configs (user+pw or
token) with the corresponding registries being the keys. Vanilla
Docker, projectatomic Docker, and the bindings are transparently
supported.
* Add a hidden `--registries-conf` flag. Buildah exposes the same
flag, mostly for testing purposes.
* Do all credential parsing in the client (i.e., `cmd/podman`) and pass
the username and password to the backend instead of unparsed
credentials.
* Add a `pkg/auth` which handles most of the heavy lifting.
* Go through the authentication-handling code of most commands, bindings
and endpoints. Migrate them to the new code and fix issues as seen.
A final evaluation and more tests are still required *after* this
change.
* The manifest-push endpoint is missing certain parameters and should
use the ABI function instead. Adding auth-support isn't really
possible without these parts working.
* The container commands and endpoints (i.e., create and run) have not
been changed yet. The APIs don't yet account for the authfile.
* Add authentication tests to `pkg/bindings`.
Fixes: #6384
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
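
A minimal client-side sketch of how such a header is built (the struct fields follow the single-auth-config case described above; treat the exact JSON schema as an assumption):

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"net/http"
)

// authConfig is an illustrative single auth config (user+password or token).
type authConfig struct {
	Username      string `json:"username,omitempty"`
	Password      string `json:"password,omitempty"`
	IdentityToken string `json:"identitytoken,omitempty"`
}

// withRegistryAuth serializes the config as JSON, base64-encodes it
// (URL-safe, as Docker clients traditionally do), and sets the header.
func withRegistryAuth(req *http.Request, auth authConfig) error {
	buf, err := json.Marshal(auth)
	if err != nil {
		return err
	}
	req.Header.Set("X-Registry-Auth", base64.URLEncoding.EncodeToString(buf))
	return nil
}

func main() {
	req, err := http.NewRequest(http.MethodPost, "http://example.invalid/images/pull", nil)
	if err != nil {
		panic(err)
	}
	if err := withRegistryAuth(req, authConfig{Username: "user", Password: "secret"}); err != nil {
		panic(err)
	}
	fmt.Println(req.Header.Get("X-Registry-Auth"))
}
```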
By moving a couple of variables from libpod/libpod to libpod/libpod/define
I am able to shrink the podman-remote-* executables by another megabyte.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
This one was a massive pain to track down.
The original symptom was an error message from rootless Podman
trying to make a container in a pod. I unfortunately did not look
at the error message closely enough to realize that the namespace
in question was the cgroup namespace (the reproducer pod was
explicitly set to only share the network namespace), else this
would have been quite a bit shorter.
I spent considerable effort trying to track down differences
between the inspect output of the two containers, and when that
failed I was forced to resort to diffing the OCI specs. That
finally proved fruitful, and I was able to determine what should
have been obvious all along: the container was joining the cgroup
namespace of the infra container when it really ought not to
have.
From there, I discovered a variable collision in pod config. The
UsePodCgroup variable means "create a parent cgroup for the pod
and join containers in the pod to it". Unfortunately, it is very
similar to UsePodUTS, UsePodNet, etc, which mean "the pod shares
this namespace", so an accessor was accidentally added for it
that indicated the pod shared the cgroup namespace when it really
did not. Once I realized that, it was a quick fix - add a bool to
the pod's configuration to indicate whether the cgroup ns was
shared (distinct from UsePodCgroup) and use that for the
accessor.
Also included are fixes for `podman inspect` and
`podman pod inspect` that fix them to actually display the state
of the cgroup namespace (for container inspect) and what
namespaces are shared (for pod inspect). Either of those would
have made tracking this down considerably quicker.
Fixes #6149
Signed-off-by: Matthew Heon <mheon@redhat.com>
Enabled integration tests for volumes. There are two exceptions that still need work because of something not yet implemented.
Also, add code to deal with the fact that containers.conf appears to set a local volume driver where it used to be simply blank.
Signed-off-by: Brent Baude <bbaude@redhat.com>
Rid ourselves of libpod references in the v2 client.
Signed-off-by: Brent Baude <bbaude@redhat.com>
Instead of getting mount options from /proc/self/mountinfo, which is
very costly to read/parse (and can even be unreliable), let's use
statfs(2) to figure out the flags we need.
[v2: move getting default options to pkg/util, make it linux-specific]
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
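
A minimal Linux-only sketch of the technique (the flag set checked here is illustrative):

```go
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// mountOptions derives common mount options for the filesystem backing
// path from statfs(2) flags, avoiding a walk of /proc/self/mountinfo.
func mountOptions(path string) ([]string, error) {
	var st unix.Statfs_t
	if err := unix.Statfs(path, &st); err != nil {
		return nil, err
	}
	var opts []string
	for _, f := range []struct {
		flag int64
		name string
	}{
		{unix.ST_NODEV, "nodev"},
		{unix.ST_NOEXEC, "noexec"},
		{unix.ST_NOSUID, "nosuid"},
		{unix.ST_RDONLY, "ro"},
	} {
		if int64(st.Flags)&f.flag != 0 {
			opts = append(opts, f.name)
		}
	}
	return opts, nil
}

func main() {
	opts, err := mountOptions("/dev/shm")
	if err != nil {
		panic(err)
	}
	fmt.Println(opts)
}
```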
Vendor in the c/common config package for containers.conf.
Signed-off-by: Qi Wang <qiwan@redhat.com>
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Also adds some basic tests for these two. More tests are needed
but will have to wait for state to be finished.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Add support to auto-update containers running in systemd units as
generated with `podman generate systemd --new`.
`podman auto-update` looks up containers with a specified
"io.containers.autoupdate" label (i.e., the auto-update policy).
If the label is present and set to "image", Podman reaches out to the
corresponding registry to check if the image has been updated. We
consider an image to be updated if the digest in the local storage is
different from that of the remote image. If an image must be
updated, Podman pulls it down and restarts the container. Note that the
restarting sequence relies on systemd.
At container-creation time, Podman looks up the "PODMAN_SYSTEMD_UNIT"
environment variable and stores it verbatim in the container's labels.
This variable is now set by all systemd units generated by
`podman-generate-systemd` and is set to `%n` (i.e., the name of the
systemd unit starting the container). This data is then used in the
auto-update sequence to instruct systemd (via DBUS) to restart the unit
and hence to restart the container.
Note that this implementation of auto-updates relies on systemd and
requires a fully-qualified image reference to be used to create the
container. This enforcement is necessary to know which image to
actually check and pull. If we used an image ID, we would not know
which image to check/pull anymore.
Fixes: #3575
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
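
Usage sketch: create the container from a fully-qualified reference with the policy label, e.g. `podman create --name web --label io.containers.autoupdate=image registry.example.com/web:latest`, generate and enable a unit via `podman generate systemd --new web`, then run `podman auto-update` to pull changed images and restart the unit.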
Until now, we've been validating every part of container
configuration through the With... functions that set the options.
This is fine when we are just validating the options to an
individual function, but things get complicated once we need to
validate conflicts between different options. We don't know the
order in which things were passed, so we need the validation on
both of the potential options that can conflict, resulting in
significant code duplication. To solve this, add a validate()
function for containers, and use this to check whether everything
is in a good state.
We can probably move more into this function (there are other
parts of container creation that also do validation of a sort)
but this is a good start to simplifying our options.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
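
A generic sketch of the pattern (the type and the conflict rule are illustrative, not Podman's actual config):

```go
package main

import (
	"errors"
	"fmt"
)

type containerConfig struct {
	NetNS     string
	StaticIP  bool
	StaticMAC bool
}

// validate checks cross-option conflicts once, after all With...
// options have run; individual setters cannot do this reliably
// because they execute in an arbitrary, caller-chosen order.
func (c *containerConfig) validate() error {
	if c.NetNS == "host" && (c.StaticIP || c.StaticMAC) {
		return errors.New("static IP/MAC are not compatible with host networking")
	}
	return nil
}

func main() {
	cfg := containerConfig{NetNS: "host", StaticIP: true}
	fmt.Println(cfg.validate())
}
```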
Before Libpod supported named volumes, we approximated image
volumes by bind-mounting in per-container temporary directories.
This was handled by Libpod, and had a corresponding database
entry to enable/disable it.
However, when we enabled named volumes, we completely rewrote the
old implementation; none of the old bind mount implementation
still exists, save one flag in the database. With nothing
remaining to use it, it has no further purpose.
Signed-off-by: Matthew Heon <mheon@redhat.com>
Enables most of the network-related functionality from
`podman run` in `podman pod create`. Custom CNI networks can be
specified, host networking is supported, and DNS options can be
configured.
Also enables host networking in `podman play kube`.
Fixes #2808
Fixes #3837
Fixes #4432
Fixes #4718
Fixes #4770
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
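
Usage sketches (flag spellings mirror `podman run`, per the description above): `podman pod create --network mynet`, `podman pod create --network host`, or `podman pod create --dns 192.168.0.1`.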
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
This adds network-related options to the pod in the database. We
are going to add the CLI frontend in further patches.
In short, this should greatly improve the ability of pods to
configure networking, once the CLI parsing is added.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
In Podman 1.6.3, we added support for anonymous volumes - fixing
our old, broken support for named volumes that were created with
containers. Unfortunately, this reused the database field we used
for the old implementation, and toggled volume removal on for
`podman run --rm` - so now, we were removing *named* volumes
created with older versions of Podman.
We can't modify these old volumes in the DB, so the next-safest
thing to do is swap to a new field to indicate volumes should be
removed. Problem: volumes created from 1.6.3 up until this change
lands, even anonymous volumes, will not be removed. However, this
is safer than removing too many volumes, as we were doing before.
Fixes #5009
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
It allows disabling cgroup creation for the conmon process only.
A new cgroup is created for the container payload.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
`gocritic` is a powerful linter that helps in preventing certain kinds
of errors as well as enforcing a coding style.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Support a custom tag to add to each log entry for the container.
It is currently supported only by the journald backend.
Closes: https://github.com/containers/libpod/issues/3653
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
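
Usage sketch (the CLI surface is an assumption; the commit describes only the backend support): something like `podman run --log-driver journald --log-opt tag=myapp ...` would attach `myapp` to each journal entry for the container.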
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Store the full command plus arguments of the process the container has
been created with. Expose this data as a `Config.CreateCommand` field
in the container-inspect data as well.
This information can be useful for debugging, as we can find out which
command created the container and, if it was created via the Podman
CLI, exactly which flags it was created with.
The immediate motivation for this change is to use this information for
`podman-generate-systemd` to generate systemd-service files that allow
for creating new containers (in contrast to only starting existing
ones).
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
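
Inspection sketch: `podman inspect --format '{{ .Config.CreateCommand }}' <container>` should print the stored argv, e.g. `[podman run -d nginx]` for a container created that way.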
Add ContainerStateRemoving
When Libpod removes a container, there is the possibility that
removal will not fully succeed. The most notable problems are
storage issues, where the container cannot be removed from
c/storage.
When this occurs, we are faced with a choice. We can keep the
container in the state, appearing in `podman ps` and available for
other API operations, but likely unable to do any of them as it's
been partially removed. Or we can remove it very early and clean
up after it's already gone. We have, until now, used the second
approach.
The problem that arises is intermittent failures removing
storage. We end up removing a container, failing to remove its
storage, and ending up with a container permanently stuck in
c/storage that we can't remove with the normal Podman CLI, can't
use the name of, and generally can't interact with. A notable
cause is when Podman is hit by a SIGKILL midway through removal,
which can consistently cause `podman rm` to fail to remove
storage.
We now add a new state for containers that are in the process of
being removed, ContainerStateRemoving. We set this at the
beginning of the removal process. It notifies Podman that the
container cannot be used anymore, but preserves it in the DB
until it is fully removed. This will allow Remove to be run on
these containers again, which should successfully remove storage
if the first attempt failed.
Fixes #3906
Signed-off-by: Matthew Heon <mheon@redhat.com>
These only conflict when joining more than one network. We can
still set a single CNI network and set a static IP and/or static
MAC.
Fixes #4500
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
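
Usage sketch: `podman run --network mynet --ip 10.88.0.42 --mac-address 92:d0:c6:0a:29:33 fedora ip addr` is now accepted with a single CNI network; the conflict check still applies once a second network is joined.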