| Commit message | Author | Age |
k8systemd: run k8s workloads in systemd
Support running `podman play kube` in systemd by exploiting the
previously added "service containers". During `play kube`, a service
container is started before all the pods and containers, and is stopped
last. The service container communicates its conmon PID via sdnotify.
Add a new systemd template to dispatch such k8s workloads. The argument
of the template is the path to the k8s file. Note that the path must be
escaped for systemd not to bark:
Let's assume we have a `top.yaml` file in the home directory:
```
$ escaped=$(systemd-escape ~/top.yaml)
$ systemctl --user start podman-play-kube@$escaped.service
```
Closes: https://issues.redhat.com/browse/RUN-1287
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
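The workload can be stopped again through the same escaped unit name (a usage sketch, reusing the template name from the example above):
```
$ systemctl --user stop podman-play-kube@$escaped.service
```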
Removing exec sessions is guaranteed to evict them from the DB,
but in the case of a zombie process (or similar) it may error and
block removal of the container. A subsequent run of `podman rm`
would succeed (because the exec sessions have been purged from
the DB), which is potentially confusing to users. So let's just
continue, instead of erroring out, if removing exec sessions
fails.
[NO NEW TESTS NEEDED] I wouldn't want to spawn a zombie in our
test VMs even if I could.
Fixes #14252
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
The errcheck linter makes sure that errors are always checked and not
ignored by accident. It spotted a lot of unchecked errors, mostly in the
tests but also some real problems in the code.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
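For reference, errcheck can also be run standalone against the whole module (a usage sketch; the tool lives at github.com/kisielk/errcheck):
```
$ go install github.com/kisielk/errcheck@latest
$ errcheck ./...
```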
The unparam linter is useful to detect unused function parameters and
return values.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
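A usage sketch for the standalone tool (mvdan.cc/unparam):
```
$ go install mvdan.cc/unparam@latest
$ unparam ./...
```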
Fix many problems reported by the staticcheck linter, including many
real bugs!
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
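staticcheck can likewise be run directly (honnef.co/go/tools):
```
$ go install honnef.co/go/tools/cmd/staticcheck@latest
$ staticcheck ./...
```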
Fixes: #13265
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
`--mount` should allow setting driver-specific options using
`volume-opt` when `type=volume` is set.
This ensures parity with docker's `volume-opt`.
Signed-off-by: Aditya R <arajan@redhat.com>
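A usage sketch, assuming the local volume driver's `type`/`device`/`o` options:
```
$ podman run --rm \
    --mount type=volume,source=tmpvol,destination=/data,volume-opt=type=tmpfs,volume-opt=device=tmpfs,volume-opt=o=size=2M \
    alpine df -h /data
```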
This commit ensures that podman generates a valid event on `podman
container rename`, where the event specifies that it is a rename event
and that the container name switched to the latest name.
Signed-off-by: Aditya R <arajan@redhat.com>
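The event can be observed like so (a sketch; `--stream=false` makes `podman events` return instead of following):
```
$ podman container rename myctr newname
$ podman events --filter event=rename --since 1m --stream=false
```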
Don't log errors on removing volumes in use, if container --volumes-from
When removing a container created with --volumes-from pointing at a
container created with a built-in volume, we complain if the original
container still exists. Since this is an expected state, we should not
complain about it.
Fixes: https://github.com/containers/podman/issues/12808
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
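To illustrate the scenario (a sketch):
```
$ podman create --name data -v /data alpine true      # container with a built-in (anonymous) volume
$ podman run --name app --volumes-from data alpine ls /data
$ podman rm app                                       # no longer complains that 'data' still exists
```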
This primarily served to protect us against shutting down the
Libpod runtime while operations (like creating a container) were
happening. However, it was very inconsistently implemented (a lot
of our longer-lived functions, like pulling images, just didn't
implement it at all...) and I'm not sure how much we really care
about this very-specific error case?
Removing it also removes a lot of potential deadlocks, which is
nice.
[NO NEW TESTS NEEDED]
Signed-off-by: Matthew Heon <mheon@redhat.com>
Checkpoint/restore pod tests are not running with an older runc, and now
that runc 1.1.0 appears in the repositories it was detected that the
tests were failing. This was not detected in CI, as CI was not yet using
runc 1.1.0.
Signed-off-by: Adrian Reber <areber@redhat.com>
exec: retry rm -rf on ENOTEMPTY and EBUSY
When running on NFS, a RemoveAll could cause EBUSY because of some
unlinked files that are still kept open and "silly renamed" to
.nfs$ID.
This is only half of the fix, as conmon needs to be fixed too.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2040379
Related: https://github.com/containers/conmon/pull/319
[NO NEW TESTS NEEDED] as it requires NFS as the underlying storage.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
We should not check if the network supports DNS when we create a
container with network aliases. This could be the case for containers
created by docker-compose, for example, if the dnsname plugin is not
installed or the user uses a macvlan config where we do not support DNS.
Fixes #12972
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
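For example (a sketch; macvlan networks provide no name resolution):
```
$ podman network create -d macvlan -o parent=eth0 mvnet
$ podman create --network mvnet --network-alias web alpine top   # no longer rejected at create time
```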
Automated for .go files via gomove [1]:
`gomove github.com/containers/podman/v3 github.com/containers/podman/v4`
Remaining files via vgrep [2]:
`vgrep github.com/containers/podman/v3`
[1] https://github.com/KSubedi/gomove
[2] https://github.com/vrothberg/vgrep
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
libpod: drop check for empty pod cgroup
rootless containers do not use cgroups on cgroupv1 or if using
cgroupfs, so improve the check to account for such a configuration.
Closes: https://github.com/containers/podman/issues/10800
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2028243
[NO NEW TESTS NEEDED] it requires rebooting and the rundir on a
non-tmpfs file system.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
podman container rm: remove pod
Support removing the entire pod when --depend is used on an infra
container. --all now implies --depend to properly support removing all
containers and not error out when hitting infra containers.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
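A usage sketch (assuming the `InfraContainerID` field name in `podman pod inspect` output):
```
$ podman pod create --name mypod
$ infra=$(podman pod inspect mypod --format '{{.InfraContainerID}}')
$ podman rm --force --depend $infra   # removing the infra container removes the whole pod
```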
The libpod/network packages were moved to c/common so that buildah can
use them as well. To prevent duplication, use them in podman as well and
remove them from here.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
This directory needs to be world searchable so users can access it from
different user namespaces.
Fixes: https://github.com/containers/podman/issues/12779
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
This option causes Podman to remove not only the specified containers
but also all of the containers that depend on them.
Fixes: https://github.com/containers/podman/issues/10360
Also ran codespell on the code
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
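For example (a sketch):
```
$ podman run -d --name web alpine top
$ podman run -d --name client --network container:web alpine top   # 'client' depends on 'web'
$ podman rm --force --depend web                                   # removes 'client' as well
```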
Make sure we create new containers in the db with the correct structure.
Also remove some unneeded code for alias handling. We no longer need
these functions.
The specgen format has not been changed for now.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
[NO NEW TESTS NEEDED] This is just moving pkg/cgroups out so
existing tests should be fine.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
There is a problem with creating and storing the exit command when the
container was created. It only contains the options the container was
created with but NOT the options the container is started with. One
example would be a CNI network config. If I start a container once, then
change the cni config dir with `--cni-config-dir` and start it a second
time, it will start successfully. However, the exit command still
contains the wrong `--cni-config-dir` because it was not updated.
To fix this we do not want to store the exit command at all. Instead we
create it every time the conmon process for the container is started.
This guarantees us that the container cleanup process is started with
the correct settings.
[NO NEW TESTS NEEDED]
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Address the TOCTOU when generating random names by having at most 10
attempts to assign a random name when creating a pod or container.
[NO TESTS NEEDED] since I do not know a way to force a conflict with
randomly generated names in a reasonable time frame.
Fixes: #11735
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
This will change the mount of /dev within the container to noexec,
making containers slightly more secure.
[NO NEW TESTS NEEDED]
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Add --time flag to podman container rm
Add --time flag to podman pod rm
Add --time flag to podman volume rm
Add --time flag to podman network rm
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
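A usage sketch (the `--time` value is honored when paired with `--force` on resources that are still in use):
```
$ podman container rm --force --time 5 myctr
$ podman pod rm --force --time 5 mypod
```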
logging: new mode -l passthrough
It allows passing the current std streams down to the container.
conmon support: https://github.com/containers/conmon/pull/289
[NO TESTS NEEDED] it needs a new conmon.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
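On the podman side this surfaces as the passthrough log driver, which is meant to be used under systemd (a sketch, assuming systemd-run to provide the unit environment):
```
$ systemd-run --user --pty podman run --log-driver=passthrough alpine echo hello
```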
Do not create an expensive deep copy for the provided spec.Spec
when creating a container. No API should be expected to create
deep copies of arguments unless explicitly documented.
This removes the last call to JSONDeepCopy in a simple
`podman run --rm -d busybox true`.
[NO TESTS NEEDED]
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Podman 4.0 currently errors when you use network aliases for a network
which has dns disabled. Because the error happens on network setup, this
can cause regressions for old, working containers. The network backend
should not validate this. Instead, podman should check this at container
create time and also for network connect.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Remove `ERROR: Error` stutter from logrus messages also.
[NO TESTS NEEDED] This is just code cleanup.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Make use of the new network interface in libpod.
This commit contains several breaking changes:
- podman network create only outputs the new network name and not the
  file path.
- podman network ls shows the network driver instead of the cni version
  and plugins.
- podman network inspect outputs the new network struct and not the cni
  conflist.
- The bindings and libpod api endpoints have been changed to use the new
  network structure.
The container network status is stored in a new field in the state. The
status should be received with the new `c.getNetworkStatus`. This will
migrate the old status to the new format. Therefore old containers should
continue to work correctly in all cases, even when network connect/
disconnect is used.
New features:
- podman network reload keeps the ip and mac for more than one network.
- podman container restore keeps the ip and mac for more than one
  network.
- The network create compat endpoint can now use more than one ipam
  config.
The man pages and the swagger doc are updated to reflect the latest
changes.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
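The user-visible CLI changes can be checked directly (a sketch):
```
$ podman network create mynet    # now prints just the network name
mynet
$ podman network ls              # now lists the driver instead of the cni version
$ podman network inspect mynet   # now emits the new network structure
```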
InfraContainer should go through the same creation process as regular
containers. This change reaches from the cmd level down, involving new
container CLI opts and specgen-creating functions. What now happens is
that both container and pod CLI options are populated in cmd and used to
create a podSpecgen and a containerSpecgen. The process then goes as
follows:
FillOutSpecGen (infra) -> MapSpec (podOpts -> infraOpts) -> PodCreate -> MakePod -> createPodOptions -> NewPod -> CompleteSpec (infra) -> MakeContainer -> NewContainer -> newContainer -> AddInfra (to pod state)
Signed-off-by: cdoern <cdoern@redhat.com>
[NO TESTS NEEDED] Since we are just testing the default.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
When a network id is used to create a container we translate it to use the
name internally for the db. The network aliases are also stored with the
network name as key so we have to also translate them for the db.
Also removed some outdated skips from the e2e tests.
Fixes #11285
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Fixes: https://github.com/containers/podman/issues/11124
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Signed-off-by: Daniel J Walsh <dwalsh@localhost.localdomain>
Signed-off-by: flouthoc <flouthoc.git@gmail.com>
Fixes: #10262
Signed-off-by: Vikas Goel <vikas.goel@gmail.com>
After #8906, there is a potential race condition in container
removal of running containers with `--rm`. Running containers
must first be stopped, which was changed to unlock the container
to allow commands like `podman ps` to continue to run while
stopping; however, this also means that the cleanup process can
potentially run before we re-lock, and remove the container from
under us, resulting in error messages from `podman rm`. The end
result is unchanged: the container is still cleanly removed, but
the `podman rm` command will seem to have failed.
Work around this by pinging the database after we stop the
container to make sure it still exists. If it doesn't, our job is
done and we can exit cleanly.
Signed-off-by: Matthew Heon <mheon@redhat.com>
Support UID, GID, Mode options for mount type secrets. Also, change the
default secret permissions to 444 so all users can read the secret.
Signed-off-by: Ashley Cui <acui@redhat.com>
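Usage sketch (assuming the default /run/secrets mount location):
```
$ printf hunter2 | podman secret create mysecret -
$ podman run --rm --secret source=mysecret,type=mount,uid=1001,gid=1001,mode=0440 \
    alpine ls -ln /run/secrets/mysecret
```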
Extend to pods the existing check for whether the cgroup is usable when
running as rootless with cgroupfs.
Commit 17ce567c6827abdcd517699bc07e82ccf48f7619 introduced the
regression.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
If --cgroup-parent is specified, always honor it without doing any
detection of whether cgroups are supported or not.
Closes: https://github.com/containers/podman/issues/10173
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
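For example (a sketch; a .slice parent assumes the systemd cgroup manager):
```
$ podman run --rm --cgroup-parent user-workload.slice alpine cat /proc/self/cgroup
```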
Signed-off-by: chenkang <kongchen28@gmail.com>
Signed-off-by: chenkang <kongchen28@gmail.com>
Ten lines above we had
    // Set ContainerStateRemoving
    c.state.State = define.ContainerStateRemoving
which causes the state to not be either of the two checked states. Since
the c.cleanup call already deleted the OCI state, this meant that we were
calling cleanup, and hence the postHook hook, twice.
Fixes: https://github.com/containers/podman/issues/9983
[NO TESTS NEEDED] Since it would be difficult to test this. Main tests
should handle that the container is being deleted successfully.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>