This enables the ability to connect and disconnect a container from a
given network. It is only for the compatibility layer. Some code had to
be refactored to avoid circular imports.
Additionally, tests are temporarily deferred due to an
incompatibility or bug in either docker-py or our stack.
Signed-off-by: baude <bbaude@redhat.com>
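
A minimal sketch of exercising the new compat endpoints, assuming they mirror Docker's /networks/{name}/connect and /networks/{name}/disconnect API; the socket path, API version, and names are illustrative:
```
# Serve the Docker-compatible API on a unix socket.
podman system service --time=0 unix:///run/podman/podman.sock &

# Connect a running container to a network through the compat layer.
curl --unix-socket /run/podman/podman.sock \
     -H "Content-Type: application/json" \
     -d '{"Container": "mycontainer"}' \
     http://d/v1.40/networks/mynet/connect

# Disconnect it again.
curl --unix-socket /run/podman/podman.sock \
     -H "Content-Type: application/json" \
     -d '{"Container": "mycontainer"}' \
     http://d/v1.40/networks/mynet/disconnect
```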
Signed-off-by: zhangguanzhang <zhangguanzhang@qq.com>
Convert the existing network aliases set/remove code to network
connect and disconnect. We can no longer modify aliases for an
existing network, but we can add and remove entire networks. As
part of this, we need to add a new function to retrieve current
aliases the container is connected to (we had a table for this
as of the first aliases PR, but it was not externally exposed).
At the same time, remove all deconflicting logic for aliases.
Docker does absolutely no checks of this nature, and allows two
containers to have the same aliases, aliases that conflict with
container names, etc - it's just left to DNS to return all the
IP addresses, and presumably we round-robin from there? Most
tests for the existing code had to be removed because of this.
Convert all uses of the old container config.Networks field,
which previously included all networks in the container, to use
the new DB table. This ensures we actually get an up-to-date list
of in-use networks. Also, add network aliases to the output of
`podman inspect`.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
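
A short CLI sketch of the resulting behavior (not part of the commit; names are illustrative):
```
podman network create net1
podman network create net2
podman run -d --name web --network net1 nginx

# Add and remove entire networks rather than editing aliases in place.
podman network connect net2 web
podman network disconnect net2 web

# Aliases now appear in inspect output.
podman inspect web --format '{{.NetworkSettings.Networks}}'
```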

Add network aliases for containers to DB

This adds the database backend for network aliases. Aliases are
additional names for a container that are used with the CNI
dnsname plugin - the container will be accessible by these names
in addition to its name. Aliases are allowed to change over time
as the container connects to and disconnects from networks.
Aliases are implemented as another bucket in the database to
register all aliases, plus two buckets for each container (one to
hold connected CNI networks, a second to hold its aliases). The
aliases are only unique per network, so the global and
per-container aliases buckets have a sub-bucket for each CNI
network that has aliases, and the aliases are stored within that
sub-bucket. Aliases are formatted as alias (key) to container ID
(value) in both cases.
Three DB functions are defined for aliases: retrieving current
aliases for a given network, setting aliases for a given network,
and removing all aliases for a given network.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>

Podman often reports OCI Runtime does not exist, even if it does

When the OCI runtime tries to set certain settings in cgroups
it can get the error "no such file or directory", and the wrapper
ends up reporting a bogus error like:
```
Request Failed(Internal Server Error): open io.max: No such file or directory: OCI runtime command not found error
{"cause":"OCI runtime command not found error","message":"open io.max: No such file or directory: OCI runtime command not found error","response":500}
```
On first reading, you would think the OCI runtime (crun or runc) was not found. But the error is actually reporting
"open io.max: No such file or directory",
which is what we want the user to concentrate on.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

NewFromLocal can return multiple images

If you use additional stores and pull the same image into
writable stores, you can end up with the same image stored
twice. This causes `image exists` to return the wrong answer:
it should return true in this situation rather than an error.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
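
The intended behavior, as a quick transcript (image names are arbitrary):
```
podman pull docker.io/library/busybox
podman image exists docker.io/library/busybox; echo $?   # 0: image exists
podman image exists docker.io/library/nosuch;  echo $?   # 1: image does not exist
```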
Make a distinction between pods that are completely running (all
containers running) and those that have some containers going,
but not all, by introducing an intermediate state between Stopped
and Running called Degraded. A Degraded pod has at least one, but
not all, containers running; a Running pod has all containers
running.
First step to a solution for #7213.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
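
A sketch of how the new state surfaces (illustrative names):
```
podman pod create --name mypod
podman run -d --pod mypod --name c1 alpine sleep 1000
podman run -d --pod mypod --name c2 alpine sleep 1000
podman stop c2

# One of two containers running: the pod should now report Degraded.
podman pod ps --format '{{.Name}} {{.Status}}'
```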
When we create a container, we assign a cgroup parent based on
the current cgroup manager in use. This parent is only usable
with the cgroup manager the container is created with, so if the
default cgroup manager is later changed or overridden, the
container will not be able to start.
To solve this, store the cgroup manager that created the
container in container configuration, so we can guarantee a
container with a systemd cgroup parent will always be started
with systemd cgroups.
Unfortunately, this is very difficult to test in CI, because we
hard-code the cgroup manager on all invocations of Podman in CI.
Fixes #7830
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
In `podman container rm` and `podman image rm`, the commands
exit with error code 1 if the object does not exist.
This PR implements similar functionality for volumes, networks, and pods.
Similarly, if volumes or networks are in use by other containers, the
commands exit with code 2.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
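
An illustrative transcript of the exit codes (volume names are hypothetical):
```
podman volume rm nosuchvolume; echo $?   # 1: volume does not exist

podman volume create myvol
podman run -d --name uses-vol -v myvol:/data alpine sleep 1000
podman volume rm myvol; echo $?          # 2: volume is in use by a container
```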

Add support for slirp network for pods

Add the --network=slirp4netns[options] flag for root and rootless pods.
Signed-off-by: Ashley Cui <acui@redhat.com>
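
A brief usage sketch (names and port mapping are illustrative):
```
# The pod's infra container uses slirp4netns networking; ports are
# published through the usermode network stack.
podman pod create --name webpod --network slirp4netns -p 8080:80
podman run -d --pod webpod nginx
```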
This is very useful for debugging cgroups v2, especially on
rootless - we need to ensure people are correctly using systemd
cgroups in these cases.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
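
Assuming this surfaces in `podman info` (as it does in current Podman), a quick check:
```
# Shows whether Podman is using cgroupfs or systemd to manage cgroups.
podman info --format '{{.Host.CgroupManager}}'
```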
* Move from simple string to semver objects
* Change client API Version from '1' to 2.0.0
Signed-off-by: Jhon Honce <jhonce@redhat.com>
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
The `podman ps --all` command will now show containers that
are under the control of other c/storage container systems, and
the new `ps --external` option will show only containers that are
in c/storage but are not controlled by libpod.
In the examples below, the '*working-container' entries were created
by Buildah.
```
podman ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9257ef8c786c docker.io/library/busybox:latest ls /etc 8 hours ago Exited (0) 8 hours ago gifted_jang
d302c81856da docker.io/library/busybox:latest buildah 30 hours ago storage busybox-working-container
7a5a7b099d33 localhost/tom:latest ls -alF 30 hours ago Exited (0) 30 hours ago hopeful_hellman
01d601fca090 localhost/tom:latest ls -alf 30 hours ago Exited (1) 30 hours ago determined_panini
ee58f429ff26 localhost/tom:latest buildah 33 hours ago storage alpine-working-container
podman ps --external
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d302c81856da docker.io/library/busybox:latest buildah 30 hours ago external busybox-working-container
ee58f429ff26 localhost/tom:latest buildah 33 hours ago external alpine-working-container
```
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
This allows manually tweaking the configuration for cgroup v2.
We will expose some of the options in the future as individual
options (e.g. the new memory knobs), but for now add the more generic
--cgroup-conf mechanism for maximum control over the cgroup
configuration.
OCI specs change: https://github.com/opencontainers/runtime-spec/pull/1040
Requires: https://github.com/containers/crun/pull/459
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
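
A usage sketch (the knob and value are illustrative):
```
# Set a cgroup v2 setting directly on the container's cgroup.
podman run --rm --cgroup-conf=memory.high=1073741824 alpine \
    cat /sys/fs/cgroup/memory.high
```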
Because a pod's network information is dictated by the infra container at creation, a container cannot be created with network attributes. This has been difficult for users to understand. We now return an error when a container being created inside a pod passes any of the following attributes:
* static IP (v4 and v6)
* static MAC
* ports -p (i.e. -p 8080:80)
* exposed ports (i.e. 222-225)
* publish ports from image -P
Signed-off-by: Brent Baude <bbaude@redhat.com>
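
For example, the following should now fail up front instead of silently ignoring the mapping (error text paraphrased):
```
podman pod create --name mypod
podman run -d --pod mypod -p 8080:80 nginx
# Error: ports must be published by the pod's infra container at pod creation
```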
Signed-off-by: zhangguanzhang <zhangguanzhang@qq.com>
The define package under Libpod is intended to be an extremely
minimal package, including constants and very little else.
However, as a result of some legacy code, it was dragging in all
of libpod/image (and, less significantly, the util package).
Fortunately, this was just to ensure that error constants were
not duplicated, and there's nothing preventing us from
importing in the other direction and keeping libpod/define free
of dependencies.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
--umask sets the umask inside the container
Defaults to 0022
Co-authored-by: Daniel J Walsh <dwalsh@redhat.com>
Signed-off-by: Ashley Cui <acui@redhat.com>
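
A quick check of the flag (illustrative):
```
# Files created in the container honor the configured umask.
podman run --rm --umask 0077 alpine sh -c 'umask; touch /tmp/f; stat -c %a /tmp/f'
```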
The code got lost in the migration to Podman 2.0; reintroduce it.
Closes: https://github.com/containers/podman/issues/6989
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>

Include infra container information in `pod inspect`

We had a field for this in the inspect data, but it was never
being populated. Because of this, `podman pod inspect` stopped
showing port bindings (and other infra container settings). Add
code to populate the infra container inspect data, and add a test
to ensure we don't regress again.
Signed-off-by: Matthew Heon <mheon@redhat.com>
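
The repopulated data can be queried directly (format path follows the inspect schema; illustrative):
```
podman pod create --name webpod -p 8080:80
podman pod inspect webpod --format '{{.InfraConfig.PortBindings}}'
```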
This allows us to determine if the container auto-detected that
systemd was in use, and correctly activated systemd integration.
Use this to wire up some integration tests to verify that systemd
integration is working properly.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
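
A hedged way to check the new field, assuming it is exposed as `SystemdMode` in container inspect output (image and command are illustrative):
```
podman run -d --name init-ctr fedora /sbin/init
podman inspect init-ctr --format '{{.Config.SystemdMode}}'   # expected: true
```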
--sdnotify container|conmon|ignore
With "conmon", we send the MAINPID, and clear the NOTIFY_SOCKET so the OCI
runtime doesn't pass it into the container. We also advertise "ready" when the
OCI runtime finishes to advertise the service as ready.
With "container", we send the MAINPID, and leave the NOTIFY_SOCKET so the OCI
runtime passes it into the container for initialization, and let the container advertise further metadata.
This is the default, which is closest to the behavior podman has done in the past.
The "ignore" option removes NOTIFY_SOCKET from the environment, so neither podman nor
any child processes will talk to systemd.
This removes the need for hardcoded CID and PID files in the command line, and
the PIDFile directive, as the pid is advertised directly through sd-notify.
Signed-off-by: Joseph Gooch <mrwizard@dok.org>
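
A sketch of a unit relying on this (the unit options are standard systemd; names are illustrative):
```
# In a systemd unit, Type=notify pairs with --sdnotify:
#   [Service]
#   Type=notify
#   ExecStart=/usr/bin/podman run --rm --sdnotify=conmon --name mysvc myimage

# From the CLI, the mode is chosen per container:
podman run -d --sdnotify=ignore alpine sleep 1000
```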

Add --tz flag to create, run

--tz flag sets timezone inside container
Can be set to IANA timezone as well as `local` to match host machine
Signed-off-by: Ashley Cui <acui@redhat.com>
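
Usage sketch (timezone values are illustrative):
```
podman run --rm --tz=Asia/Tokyo alpine date
podman run --rm --tz=local alpine date   # match the host's timezone
```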
With the advent of Podman 2.0.0 we crossed the magical barrier of go
modules. While we were able to continue importing all packages inside
of the project, the project could not be vendored anymore from the
outside.
Move the go module to new major version and change all imports to
`github.com/containers/libpod/v2`. The renaming of the imports
was done via `gomove` [1].
[1] https://github.com/KSubedi/gomove
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
The infra/abi code for pods was written in a flawed way, assuming
that the map[string]error containing individual container errors
was only set when the global error for the pod function was nil;
that is not accurate, and we are actually *guaranteed* to set the
global error when any individual container errors. Thus, we'd
never actually include individual container errors, because the
infra code assumed that err being set meant everything failed and
no container operations were attempted.
We were originally setting the cause of the error to something
nonsensical ("container already exists"), so I made a new error
indicating that some containers in the pod failed. We can then
ignore that error when building the report on the pod operation
and actually return errors from individual containers.
Unfortunately, this exposed another weakness of the infra code,
which was discarding the container IDs. Errors from individual
containers are not guaranteed to identify which container they
came from, hence the use of map[string]error in the Pod API
functions. Rather than restructuring the structs we return from
pkg/infra, I just wrapped the returned errors with a message
including the ID of the container.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>

podman untag: error if tag doesn't exist

Throw an error if a specified tag does not exist. Also make sure that
the user input is normalized as we already do for `podman tag`.
To prevent regressions, add a set of end-to-end and system tests.
Last but not least, update the docs and add bash completions.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Reformat the ports in inspect's network settings to be compatible with `docker inspect`. Closes #5380
Signed-off-by: Qi Wang <qiwan@redhat.com>
When the container uses journald logging, we don't want to
automatically use the same driver for its exec sessions. If we do
we will pollute the journal (particularly in the case of
healthchecks) with large amounts of undesired logs. Instead,
force exec sessions logs to file for now; we can add a log-driver
flag later (we'll probably want to add a `podman logs` command
that reads exec session logs at the same time).
As part of this, add support for the new 'none' logs driver in
Conmon. It will be the default log driver for exec sessions, and
can be optionally selected for containers.
Great thanks to Joe Gooch (mrwizard@dok.org) for adding support
to Conmon for a null log driver, and wiring it in here.
Fixes #6555
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
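
The 'none' driver can also be selected explicitly for a container (illustrative):
```
# Discard all container logs; podman logs will have nothing to read.
podman run -d --log-driver=none --name quiet alpine sh -c 'echo hello; sleep 1000'
```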
- misspell
- prealloc
- unparam
- nakedret
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Add a `CreateCommand` field to the pod config which includes the entire
`os.Args` at pod-creation. Similar to the already existing field in a
container config, we need this information to properly generate generic
systemd unit files for pods. It's a prerequisite to support the `--new`
flag for pods.
Also add the `CreateCommand` to the pod-inspect data, which can come in
handy for debugging, general inspection and certainly for the tests that
are added along with the other changes.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
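
The recorded command is then visible in inspect output (illustrative):
```
podman pod create --name mypod -p 8080:80
podman pod inspect mypod --format '{{.CreateCommand}}'
```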
This came out of a conversation with Valentin about
systemd-managed Podman. He discovered that unit files did not
properly handle cases where Conmon was dead - the ExecStopPost
`podman rm --force` line was not actually removing the container,
but interestingly, adding a `podman cleanup --rm` line would
remove it. Both of these commands do the same thing (minus the
`podman cleanup --rm` command not force-removing running
containers).
Without a running Conmon instance, the container process is still
running (assuming you killed Conmon with SIGKILL and it had no
chance to kill the container it managed), but you can still kill
the container itself with `podman stop` - Conmon is not involved,
only the OCI Runtime. (`podman rm --force` and `podman stop` use
the same code to kill the container). The problem comes when we
want to get the container's exit code - we expect Conmon to make
us an exit file, which it's obviously not going to do, being
dead. The first `podman rm` would fail because of this, but
importantly, it would (after failing to retrieve the exit code
correctly) set container status to Exited, so that the second
`podman cleanup` process would succeed.
To make sure the first `podman rm --force` succeeds, we need to
catch the case where Conmon is already dead and, instead of
waiting for an exit file that will never come, immediately set
the Stopped state and return an error that can be caught and
handled.
Signed-off-by: Matthew Heon <mheon@redhat.com>
This is step 1 toward self-discovery of remote ssh connections. We add a RemoteSocket struct to `info` to detect what the socket path might be.
Co-authored-by: Jhon Honce <jhonce@redhat.com>
Signed-off-by: Brent Baude <bbaude@redhat.com>
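
The new field should be queryable like any other info field (the path shown is typical for rootful installs; illustrative):
```
podman info --format '{{.Host.RemoteSocket.Path}}'
# e.g. /run/podman/podman.sock
```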
This patch fixes the podman --version --format command.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Currently we are displaying the seconds since the epoch;
this will change to displaying a date, similar to `podman version`.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Remove github.com/libpod/libpod from cmd/pkg/podman

By moving a couple of variables from libpod/libpod to libpod/libpod/define
I am able to shrink the podman-remote-* executables by another megabyte.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
This enables the one remaining test that was still skipped due to the missing exec function.
Signed-off-by: Brent Baude <bbaude@redhat.com>
We’re now able to build a static Podman binary based on a custom nix
derivation. This is integrated in Cirrus CI as well; a later target
would be to provide a self-contained static binary bundle which can be
installed on any x86-64 Linux system.
Fixes: https://github.com/containers/libpod/issues/1399
Signed-off-by: Sascha Grunert <sgrunert@suse.com>

Fix bug where pods would unintentionally share cgroupns

This one was a massive pain to track down.
The original symptom was an error message from rootless Podman
trying to make a container in a pod. I unfortunately did not look
at the error message closely enough to realize that the namespace
in question was the cgroup namespace (the reproducer pod was
explicitly set to only share the network namespace), else this
would have been quite a bit shorter.
I spent considerable effort trying to track down differences
between the inspect output of the two containers, and when that
failed I was forced to resort to diffing the OCI specs. That
finally proved fruitful, and I was able to determine what should
have been obvious all along: the container was joining the cgroup
namespace of the infra container when it really ought not to
have.
From there, I discovered a variable collision in pod config. The
UsePodCgroup variable means "create a parent cgroup for the pod
and join containers in the pod to it". Unfortunately, it is very
similar to UsePodUTS, UsePodNet, etc, which mean "the pod shares
this namespace", so an accessor was accidentally added for it
that indicated the pod shared the cgroup namespace when it really
did not. Once I realized that, it was a quick fix - add a bool to
the pod's configuration to indicate whether the cgroup ns was
shared (distinct from UsePodCgroup) and use that for the
accessor.
Also included are fixes for `podman inspect` and
`podman pod inspect` that fix them to actually display the state
of the cgroup namespace (for container inspect) and what
namespaces are shared (for pod inspect). Either of those would
have made tracking this down considerably quicker.
Fixes #6149
Signed-off-by: Matthew Heon <mheon@redhat.com>
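
The improved inspect output makes the shared set visible (field name follows the inspect schema; illustrative):
```
podman pod create --name netonly --share net
podman pod inspect netonly --format '{{.SharedNamespaces}}'   # lists net, not cgroup
```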
Signed-off-by: Jhon Honce <jhonce@redhat.com>