[NO TESTS NEEDED] Since we are just testing the default.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
When a network ID is used to create a container, we translate it to the
network name for internal use in the db. The network aliases are also
stored with the network name as the key, so we have to translate them
for the db as well.
Also removed some outdated skips from the e2e tests.
Fixes #11285
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Fixes: https://github.com/containers/podman/issues/11124
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Signed-off-by: flouthoc <flouthoc.git@gmail.com>
Fixes: #10262
Signed-off-by: Vikas Goel <vikas.goel@gmail.com>
After #8906, there is a potential race condition in container
removal of running containers with `--rm`. Running containers
must first be stopped, which was changed to unlock the container
to allow commands like `podman ps` to continue to run while
stopping; however, this also means that the cleanup process can
potentially run before we re-lock, and remove the container from
under us, resulting in error messages from `podman rm`. The end
result is unchanged (the container is still cleanly removed), but
the `podman rm` command will seem to have failed.
Work around this by pinging the database after we stop the
container to make sure it still exists. If it doesn't, our job is
done and we can exit cleanly.
Signed-off-by: Matthew Heon <mheon@redhat.com>
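
A rough illustration of the flow involved (a sketch; the container name is arbitrary):
```
# A container with --rm is cleaned up and removed automatically on exit.
podman run -d --rm --name demo alpine sleep 300

# Force-removal first stops the container; the cleanup process racing
# with us could previously remove it out from under `podman rm`.
podman rm --force demo
```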
Support UID, GID, Mode options for mount type secrets. Also, change
the default secret permissions to 444 so all users can read the secret.
Signed-off-by: Ashley Cui <acui@redhat.com>
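
A sketch of how the new options might be used, assuming the `source=...,type=mount` option syntax of the `--secret` flag (names and values here are illustrative):
```
printf 's3cr3t' | podman secret create mysecret -

# Mount the secret with explicit ownership and permissions.
podman run --rm \
  --secret source=mysecret,type=mount,uid=1001,gid=1001,mode=440 \
  alpine ls -l /run/secrets/mysecret
```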
Extend to pods the existing check of whether the cgroup is usable
when running rootless with cgroupfs.
commit 17ce567c6827abdcd517699bc07e82ccf48f7619 introduced the
regression.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
If --cgroup-parent is specified, always honor it without trying to
detect whether cgroups are supported.
Closes: https://github.com/containers/podman/issues/10173
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
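
For example, the parent should now be honored unconditionally (a minimal sketch; the cgroup path is arbitrary):
```
podman run --rm --cgroup-parent=/mygroup alpine cat /proc/self/cgroup
```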
Signed-off-by: chenkang <kongchen28@gmail.com>
Signed-off-by: chenkang <kongchen28@gmail.com>
Ten lines above we had
// Set ContainerStateRemoving
c.state.State = define.ContainerStateRemoving
which causes the state to be neither of the two checked states. Since
the c.cleanup call has already deleted the OCI state, this meant that
we were calling cleanup, and hence the postHook hook, twice.
Fixes: https://github.com/containers/podman/issues/9983
[NO TESTS NEEDED] Since it would be difficult to test this. Main tests
should verify that the container is deleted successfully.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
As part of a fix for an earlier bug (#5698) we added the ability
for Podman to chown volumes to correctly match the user running
in the container, even in adverse circumstances (where we don't
know the right UID/GID until very late in the process). However,
we only did this for volumes created automatically by a
`podman run` or `podman create`. Volumes made by
`podman volume create` do not get this chown, so their
permissions may not be correct. I've looked, and I don't think
there's a good reason not to do this chown for all volumes the
first time the container is started.
I would prefer to do this as part of volume copy-up, but I don't
think that's really possible (copy-up happens earlier in the
process and we don't have a spec). There is a small chance, as
things stand, that a copy-up happens for one container and then
a chown for a second, unrelated container, but the odds of this
are astronomically small (we'd need a very close race between two
starting containers).
Fixes #9608
Signed-off-by: Matthew Heon <mheon@redhat.com>
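
The effect can be observed with a pre-created volume (a sketch; names and IDs are illustrative):
```
podman volume create data

# On the container's first start, the volume mount point should now be
# chowned to match the container user.
podman run --rm --user 1234:1234 -v data:/data alpine ls -ldn /data
```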
Currently we were over-wrapping the error returned from removal
of a nonexistent container:
$ podman rm bogus -f
Error: failed to evict container: "": failed to find container "bogus" in state: no container with name or ID bogus found: no such container
Removing the wraps gets us to:
./bin/podman rm bogus -f
Error: no container with name or ID "bogus" found: no such container
Finally, also added quotes around the container name to help it stand
out in the error message; currently it gets lost in the error.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
The --trace flag helped analyze Podman code in the project's early
stages. However, it contributes to dependency and binary bloat, and the
standard Go tooling can also help with profiling, so let's turn
`--trace` into a NOP.
[NO TESTS NEEDED]
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Move the core of renaming logic into the DB. This guarantees a
lot more atomicity than we have right now (our current solution,
removing the container from the DB and re-creating it, is *VERY*
non-atomic and prone to leaving a corrupted state behind if
things go wrong). Moving things into the DB allows us to remove
most, but not all, of this - there's still a potential scenario
where the c/storage rename fails but the Podman rename succeeds,
and we end up with a mismatched state.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
The libpod network logic knows about networks IDs but OCICNI
does not. We cannot pass the network ID to OCICNI. Instead we
need to make sure we only use network names internally. This
is also important for libpod since we also only store the
network names in the state. If we added an ID there, the same
network could accidentally be added twice.
Fixes #9451
Signed-off-by: Paul Holzinger <paul.holzinger@web.de>
We missed bumping the go module, so let's do it now :)
* Automated the Go code changes with github.com/sirkon/go-imports-rename
* Did the rest manually, via `vgrep podman/v2`
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Use the whitespace linter and fix the reported problems.
[NO TESTS NEEDED]
Signed-off-by: Paul Holzinger <paul.holzinger@web.de>
Implement podman secret create, inspect, ls, rm.
Implement podman run/create --secret.
Secrets are sensitive blobs of data.
Currently, the only secret driver supported is filedriver, which means creating a secret stores it base64-encoded, unencrypted, in a file.
After creating a secret, a user can use the --secret flag to expose the secret inside the container at /run/secrets/[secretname].
This secret will not be committed to an image on a podman commit.
Signed-off-by: Ashley Cui <acui@redhat.com>
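
A sketch of the resulting workflow (secret and file names are examples):
```
# Create a secret from a file, then list and inspect it.
echo s3cr3t > ./token.txt
podman secret create mytoken ./token.txt
podman secret ls
podman secret inspect mytoken

# Expose the secret inside a container, then remove the secret.
podman run --rm --secret mytoken alpine cat /run/secrets/mytoken
podman secret rm mytoken
```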
Basic theory: We remove the container, but *only from the DB*.
We leave it in c/storage, we leave the lock allocated, we leave
it running (if it is). Then we create an identical container with
an altered name, and add that back to the database. Theoretically
we now have a renamed container.
The advantage of this approach is that it doesn't just apply to
rename - we can use this to make *any* configuration change to a
container that does not alter its container ID.
Potential problems are numerous. This process is *THOROUGHLY*
non-atomic at present - if you `kill -9` Podman mid-rename things
will be in a bad place, for example. Also, we can't rename
containers that can't be removed normally - IE, containers with
dependencies (pod infra containers, for example).
The largest potential improvement will be to move the majority of
the work into the DB, with a `RecreateContainer()` method - that
will add atomicity, and let us remove the container without
worrying about dependencies and similar issues.
Potential problems: long-running processes that edit the DB and
may have an older version of the configuration around. Most
notable example is `podman run --rm` - the removal command needed
to be manually edited to avoid this one. This begins to get at
the heart of me not wanting to do this in the first place...
This provides CLI and API implementations for frontend, but no
tunnel implementation. It will be added in a future release (just
held back for time now - we need this in 3.0 and are running low
on time).
This is honestly kind of horrifying, but I think it will work.
Signed-off-by: Matthew Heon <mheon@redhat.com>
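
The CLI surface looks roughly like this (container names are examples):
```
podman create --name oldname alpine top
podman rename oldname newname
podman ps -a --filter name=newname
```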
This change adds code to report the reclaimed space after a prune.
Reclaimed space from volumes, images, and containers is recorded
during the prune call in a PruneReport struct. These structs are
collected into a slice during a system prune and processed afterwards
to calculate the total reclaimed space.
Closes #8658
Signed-off-by: Baron Lenardson <lenardson.baron@gmail.com>
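
After this change a prune reports the space it freed, along these lines (the exact output format is an assumption):
```
podman system prune --force
# ...
# Total reclaimed space: <size>
```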
When making containers, we want to lock all named volumes we are
adding the container to, to ensure they aren't removed from under
us while we are working. Unfortunately, this code did not account
for a container having the same volume mounted in multiple places
so it could deadlock. Add a map to ensure that we don't lock the
same name more than once to resolve this.
Fixes #8221
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
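
The previously deadlocking case is one container mounting the same named volume at two paths (a minimal reproducer sketch):
```
podman volume create shared

# Before the fix, taking the same volume lock twice could deadlock here.
podman run --rm -v shared:/a -v shared:/b alpine true
```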
Continue progress on use of external containers.
This PR adds the ability to mount, umount and list the
storage containers whether they are in libpod or not.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Expand the use of the Shutdown package such that we now use it
to handle signals any time we run Libpod. From there, add code to
container creation to use the Inhibit function to prevent a
shutdown from occurring during the critical parts of container
creation.
We also need to turn off signal handling when --sig-proxy is
invoked - we don't want to catch the signals ourselves then, but
instead to forward them into the container via the existing
sig-proxy handler.
Fixes #7941
Signed-off-by: Matthew Heon <mheon@redhat.com>
When we create a container, we assign a cgroup parent based on
the current cgroup manager in use. This parent is only usable
with the cgroup manager the container is created with, so if the
default cgroup manager is later changed or overridden, the
container will not be able to start.
To solve this, store the cgroup manager that created the
container in container configuration, so we can guarantee a
container with a systemd cgroup parent will always be started
with systemd cgroups.
Unfortunately, this is very difficult to test in CI, due to the
fact that we hard-code the cgroup manager on all invocations of
Podman in CI.
Fixes #7830
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
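
Roughly the scenario this addresses, using the global `--cgroup-manager` flag (exact behavior depends on the environment):
```
# Create a container under the systemd cgroup manager...
podman --cgroup-manager=systemd create --name c1 alpine sleep 300

# ...then start it with a different default manager. The manager
# recorded in the container's configuration should still be used.
podman --cgroup-manager=cgroupfs start c1
```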
This commit is courtesy of
```
for f in $(git ls-files *.go | grep -v ^vendor/); do \
sed -i 's/\(errors\..*\)"Error /\1"error /' $f;
done
for f in $(git ls-files *.go | grep -v ^vendor/); do \
sed -i 's/\(errors\..*\)"Failed to /\1"failed to /' $f;
done
```
etc.
Self-reviewed using `git diff --word-diff`, found no issues.
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
In case os.Open[File], os.Mkdir[All], ioutil.ReadFile and the like
fail, the error message already contains the file name and the
operation that failed, so there is no need to wrap the error with
something like "open %s failed".
While at it
- replace a few places that used os.Open + ioutil.ReadAll with
ioutil.ReadFile.
- replace errors.Wrapf with errors.Wrap for cases where there
are no %-style arguments.
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
rootless: support `podman network create` (CNI-in-slirp4netns)
Usage:
```
$ podman network create foo
$ podman run -d --name web --hostname web --network foo nginx:alpine
$ podman run --rm --network foo alpine wget -O - http://web.dns.podman
Connecting to web.dns.podman (10.88.4.6:80)
...
<h1>Welcome to nginx!</h1>
...
```
See contrib/rootless-cni-infra for the design.
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
The `podman ps --all` command will now show containers that
are under the control of other c/storage container systems and
the new `ps --external` option will show only containers that are
in c/storage but are not controlled by libpod.
In the below examples, the '*working-container' entries were created
by Buildah.
```
podman ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9257ef8c786c docker.io/library/busybox:latest ls /etc 8 hours ago Exited (0) 8 hours ago gifted_jang
d302c81856da docker.io/library/busybox:latest buildah 30 hours ago storage busybox-working-container
7a5a7b099d33 localhost/tom:latest ls -alF 30 hours ago Exited (0) 30 hours ago hopeful_hellman
01d601fca090 localhost/tom:latest ls -alf 30 hours ago Exited (1) 30 hours ago determined_panini
ee58f429ff26 localhost/tom:latest buildah 33 hours ago storage alpine-working-container
podman ps --external
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d302c81856da docker.io/library/busybox:latest buildah 30 hours ago external busybox-working-container
ee58f429ff26 localhost/tom:latest buildah 33 hours ago external alpine-working-container
```
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
This was inspired by https://github.com/cri-o/cri-o/pull/3934 and
much of the logic for it is contained there. However, in brief,
a named return called "err" can cause lots of code confusion and
encourages using the wrong err variable in defer statements,
which can make them work incorrectly. Using a separate name which
is not used elsewhere makes it very clear what the defer should
be doing.
As part of this, remove a large number of named returns that were
not used anywhere. Most of them were once needed, but are no
longer necessary after previous refactors (but were accidentally
retained).
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
With the advent of Podman 2.0.0 we crossed the magical barrier of go
modules. While we were able to continue importing all packages inside
of the project, the project could not be vendored anymore from the
outside.
Move the go module to a new major version and change all imports to
`github.com/containers/libpod/v2`. The renaming of the imports
was done via `gomove` [1].
[1] https://github.com/KSubedi/gomove
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
container: move volume chown after spec generation
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Move the chown for newly created volumes to after spec generation, so
that the correct UID/GID are known.
Closes: https://github.com/containers/libpod/issues/5698
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
When running under systemd there is no need to create yet another
cgroup for the container.
With conmon-delegated, the current cgroup will be split into two
sub-cgroups:
- supervisor
- container
The supervisor cgroup will hold conmon and the podman process, while
the container cgroup is used by the OCI runtime (using the cgroupfs
backend).
Closes: https://github.com/containers/libpod/issues/6400
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
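
If this corresponds to the `--cgroups=split` mode (an assumption on my part), usage under a transient systemd scope would look something like:
```
# Run inside a user scope so the current cgroup is systemd-managed,
# then let Podman split it into supervisor and container sub-cgroups.
systemd-run --user --scope podman run --rm --cgroups=split alpine cat /proc/self/cgroup
```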
When going through the output of `podman inspect` to try and
identify another issue, I noticed that Podman 2.0 was setting
StopSignal to 0 on containers by default. After chasing it
through the command line and SpecGen, I determined that we were
actually not setting a default in Libpod, which is strange
because I swear we used to do that. I re-added the disappeared
default and now all is well again.
Also, while I was looking for the bug in SpecGen, I found a bunch
of TODOs that have already been done. Eliminate the comments for
these.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
When the container uses journald logging, we don't want to
automatically use the same driver for its exec sessions. If we do
we will pollute the journal (particularly in the case of
healthchecks) with large amounts of undesired logs. Instead,
force exec session logs to file for now; we can add a log-driver
flag later (we'll probably want to add a `podman logs` command
that reads exec session logs at the same time).
As part of this, add support for the new 'none' logs driver in
Conmon. It will be the default log driver for exec sessions, and
can be optionally selected for containers.
Great thanks to Joe Gooch (mrwizard@dok.org) for adding support
to Conmon for a null log driver, and wiring it in here.
Fixes #6555
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
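
The new driver can also be selected explicitly for a container (a small sketch):
```
# Discard the container's logs entirely; `podman logs` will have
# nothing to read for this container.
podman run --rm --log-driver=none alpine echo hello
```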
- misspell
- prealloc
- unparam
- nakedret
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
This came out of a conversation with Valentin about
systemd-managed Podman. He discovered that unit files did not
properly handle cases where Conmon was dead - the ExecStopPost
`podman rm --force` line was not actually removing the container,
but interestingly, adding a `podman cleanup --rm` line would
remove it. Both of these commands do the same thing (minus the
`podman cleanup --rm` command not force-removing running
containers).
Without a running Conmon instance, the container process is still
running (assuming you killed Conmon with SIGKILL and it had no
chance to kill the container it managed), but you can still kill
the container itself with `podman stop` - Conmon is not involved,
only the OCI Runtime. (`podman rm --force` and `podman stop` use
the same code to kill the container). The problem comes when we
want to get the container's exit code - we expect Conmon to make
us an exit file, which it's obviously not going to do, being
dead. The first `podman rm` would fail because of this, but
importantly, it would (after failing to retrieve the exit code
correctly) set container status to Exited, so that the second
`podman cleanup` process would succeed.
To make sure the first `podman rm --force` succeeds, we need to
catch the case where Conmon is already dead, and instead of
waiting for an exit file that will never come, immediately set
the Stopped state and return an error that can be caught and
handled.
Signed-off-by: Matthew Heon <mheon@redhat.com>
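
A rough reproduction of the scenario (note that pkill like this is blunt and targets every conmon process on the system):
```
podman run -d --name web alpine sleep 600

# Simulate a dead conmon for the container.
pkill -9 conmon

# Previously the first forced removal failed while waiting for the exit
# file; it should now succeed even though conmon is gone.
podman rm --force web
```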
The cleanup command creation logic is made public as part of this
and wired such that we can call it both within SpecGen (to make
container exit commands) and from the ABI detached exec handler.
Exit commands are presently only used for detached exec, but
theoretically could be turned on for all exec sessions if we
wanted (I'm declining to do this because of potential overhead).
I also forgot to copy the exit command from the exec config into
the ExecOptions struct used by the OCI runtime, so it was not
being added.
There are also two significant bugfixes for exec in here. One is
for updating the status of running exec sessions - this was
always failing as I had coded it to remove the exit file *before*
reading it, instead of after (oops). The second was that removing
a running exec session would always fail because I inverted the
check to see if it was running.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Cleaning up the OCI runtime is not allowed in the Removing state.
To ensure it is actually cleaned up, when calling cleanup() as
part of removing a container, do so before we set the Removing
state, so we can successfully remove.
Signed-off-by: Matthew Heon <mheon@redhat.com>
Fixes container prune to prune created and configured containers.
Disables a couple of system prune tests that are not yet supported in v2.
Signed-off-by: Sujil02 <sushah@redhat.com>
Rid ourselves of libpod references in the v2 client.
Signed-off-by: Brent Baude <bbaude@redhat.com>
Adds the ability to prune containers for v2.
Adds a client-side prompt with the force flag, and filter options, to prune.
Signed-off-by: Sujil02 <sushah@redhat.com>
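
A sketch of the CLI surface described (the `until` filter is an assumed example of a supported filter):
```
# Prompts for confirmation unless --force is given.
podman container prune --force

# Filters can restrict what gets pruned.
podman container prune --force --filter until=24h
```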
vendor in c/common config pkg for containers.conf
Signed-off-by: Qi Wang <qiwan@redhat.com>
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Start and Resize require further implementation work.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Begin exec rework
As part of the rework of exec sessions, we need to address them
independently of containers. In the new API, we need to be able
to fetch them by their ID, regardless of what container they are
associated with. Unfortunately, our existing exec sessions are
tied to individual containers; there's no way to tell what
container a session belongs to and retrieve it without getting
every exec session for every container.
This adds a pointer to the container an exec session is
associated with to the database. The sessions themselves are
still stored in the container.
Exec-related APIs have been restructured to work with the new
database representation. The originally monolithic API has been
split into a number of smaller calls to allow more fine-grained
control of lifecycle. Support for legacy exec sessions has been
retained, but in a deprecated fashion; we should remove this in
a few releases.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>