It solves a race where a container cleanup process, launched because the container process exited normally, would hang.
It also solves a problem when running rootless on cgroup v1, since there it is not possible to force pids.max = 1 on conmon to limit spawning of the cleanup process.
Partially copied from https://github.com/containers/podman/pull/13403
Related to: https://github.com/containers/podman/issues/14057
[NO NEW TESTS NEEDED] since it doesn't add any new functionality.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
libpod: drop warning if cgroup doesn't exist
Do not print a warning on cgroup removal if the cgroup doesn't exist.
Closes: https://github.com/containers/podman/issues/13382
[NO NEW TESTS NEEDED]
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Do not attempt to use cgroups with pods if cgroups are disabled.
A similar check is already in place for containers.
Closes: https://github.com/containers/podman/issues/13411
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
This primarily served to protect us against shutting down the
Libpod runtime while operations (like creating a container) were
happening. However, it was very inconsistently implemented (a lot of our longer-lived functions, like pulling images, just didn't implement it at all...), and I'm not sure how much we really care about this very specific error case.
Removing it also removes a lot of potential deadlocks, which is
nice.
[NO NEW TESTS NEEDED]
Signed-off-by: Matthew Heon <mheon@redhat.com>
Automated for .go files via gomove [1]:
`gomove github.com/containers/podman/v3 github.com/containers/podman/v4`
Remaining files via vgrep [2]:
`vgrep github.com/containers/podman/v3`
[1] https://github.com/KSubedi/gomove
[2] https://github.com/vrothberg/vgrep
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
[NO NEW TESTS NEEDED] This is just moving pkg/cgroups out so
existing tests should be fine.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Address the TOCTOU race when generating random names by making at most 10 attempts to assign a random name when creating a pod or container.
[NO TESTS NEEDED] since I do not know a way to force a conflict with randomly generated names in a reasonable time frame.
Fixes: #11735
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
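
A minimal sketch of that retry loop (illustrative names and a toy generator, not libpod's actual API - the database insert is what acts as the atomic name reservation):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
    )

    // errNameInUse stands in for the "name already taken" error the
    // database layer would return on a conflicting insert.
    var errNameInUse = errors.New("name is already in use")

    // assignRandomName generates a name and lets the atomic reservation
    // detect collisions, retrying up to 10 times as described above.
    func assignRandomName(reserve func(string) error) (string, error) {
        for i := 0; i < 10; i++ {
            name := fmt.Sprintf("ctr_%04d", rand.Intn(10000)) // toy generator
            err := reserve(name)
            switch {
            case err == nil:
                return name, nil
            case errors.Is(err, errNameInUse):
                continue // lost the TOCTOU race; try another name
            default:
                return "", err
            }
        }
        return "", errors.New("no free random name after 10 attempts")
    }

    func main() {
        taken := map[string]bool{}
        reserve := func(name string) error {
            if taken[name] {
                return errNameInUse
            }
            taken[name] = true
            return nil
        }
        fmt.Println(assignRandomName(reserve))
    }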
If we interrupt pod removal between removing containers and removing the whole pod, the infra ID was still in the DB, and most pod operations would try to retrieve the infra container (and would thus fail). Clear the infra ID from the DB just before we remove all containers to prevent this.
Fixes #12034
[NO NEW TESTS NEEDED] This is a very narrow race and I have no
idea how to repro it.
Signed-off-by: Matthew Heon <mheon@redhat.com>
Pod Rm Infra Handling Improvements
Made changes so that if all containers in the pod have exited and only the infra container is running, the pod is removed.
resolves #11713
Signed-off-by: cdoern <cdoern@redhat.com>
Add --time flag to podman container rm
Add --time flag to podman pod rm
Add --time flag to podman volume rm
Add --time flag to podman network rm
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Also remove "ERROR: Error" stutter from logrus messages.
[NO TESTS NEEDED] This is just code cleanup.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
InfraContainer should go through the same creation process as regular containers. This change reaches from the cmd level down, involving new container CLI opts and specgen-creating functions. Both container and pod CLI options are now populated in cmd and used to create a podSpecgen and a containerSpecgen. The process then goes as follows:
FillOutSpecGen (infra)
-> MapSpec (podOpts -> infraOpts)
-> PodCreate
-> MakePod
-> createPodOptions
-> NewPod
-> CompleteSpec (infra)
-> MakeContainer
-> NewContainer
-> newContainer
-> AddInfra (to pod state)
Signed-off-by: cdoern <cdoern@redhat.com>
Added support for the --pid flag. Users can specify ns:file, pod, private, or host. container returns an error, since you cannot point the namespace of the pod's infra container at a container outside of the pod.
Signed-off-by: cdoern <cdoern@redhat.com>
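
For example, with the flag as described above, `podman pod create --pid host` shares the host PID namespace with the infra container, while `podman pod create --pid ns:/proc/$PID/ns/pid` points it at an existing namespace file (the exact path is up to the user).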
Extend to pods the existing check of whether the cgroup is usable when running rootless with cgroupfs.
Commit 17ce567c6827abdcd517699bc07e82ccf48f7619 introduced the regression.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
We missed bumping the go module, so let's do it now :)
* Go code automated with github.com/sirkon/go-imports-rename
* The rest done manually via `vgrep podman/v2`
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Our users are missing certain warning messages that would make debugging issues with Podman easier.
For example, if you do a podman build with a Containerfile that contains the SHELL directive, the directive is silently ignored.
If you run with log level "warn", you get a warning message explaining what happened.
$ podman build --no-cache -f /tmp/Containerfile1 /tmp/
STEP 1: FROM ubi8
STEP 2: SHELL ["/bin/bash", "-c"]
STEP 3: COMMIT
--> 7a207be102a
7a207be102aa8993eceb32802e6ceb9d2603ceed9dee0fee341df63e6300882e
$ podman --log-level=warn build --no-cache -f /tmp/Containerfile1 /tmp/
STEP 1: FROM ubi8
STEP 2: SHELL ["/bin/bash", "-c"]
STEP 3: COMMIT
WARN[0000] SHELL is not supported for OCI image format, [/bin/bash -c] will be ignored. Must use `docker` format
--> 7bd96fd25b9
7bd96fd25b9f755d8a045e31187e406cf889dcf3799357ec906e90767613e95f
These messages will no longer be lost once we default to the warning level.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
With the advent of Podman 2.0.0 we crossed the magical barrier of go
modules. While we were able to continue importing all packages inside
of the project, the project could not be vendored anymore from the
outside.
Move the go module to new major version and change all imports to
`github.com/containers/libpod/v2`. The renaming of the imports
was done via `gomove` [1].
[1] https://github.com/KSubedi/gomove
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
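
By analogy with the later v3-to-v4 bump, the invocation was presumably along the lines of `gomove github.com/containers/libpod github.com/containers/libpod/v2`.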
- misspell
- prealloc
- unparam
- nakedret
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
vendor in c/common config pkg for containers.conf
Signed-off-by: Qi Wang <qiwan@redhat.com>
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
If there are no running containers - for example, if the pod was just created - the cgroup in question may not exist (under certain circumstances that we're not 100% sure about). However, we don't need to set a PID limit regardless, as nothing will be spawning cleanup processes (there are no running conmon processes), so not changing the cgroup is safe.
Fixes #5072
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
In Podman 1.6.3, we added support for anonymous volumes - fixing
our old, broken support for named volumes that were created with
containers. Unfortunately, this reused the database field we used
for the old implementation, and toggled volume removal on for
`podman run --rm` - so now, we were removing *named* volumes
created with older versions of Podman.
We can't modify these old volumes in the DB, so the next-safest thing to do is swap to a new field to indicate that volumes should be removed. Problem: volumes created with 1.6.3 and later, up until this lands (even anonymous volumes), will not be removed. However, this is safer than removing too many volumes, as we were doing before.
Fixes #5009
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
`gocritic` is a powerful linter that helps in preventing certain kinds
of errors as well as enforcing a coding style.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
When trying to reproduce #4704 I noticed that the named volumes from the Postgres containers in the reproducer weren't being removed by `podman pod rm -f`, which claimed the container they were attached to was still in use. This was rather odd, considering they were only in use by one container, and that container was in the process of being removed with the pod.
After a bit of tracing, I realized that the cause is the ordering
of container removal when we remove a pod. Normally, it's done
in removeContainer() before volume removal (which is the last
thing in that function). However, when we are removing a pod, we
remove containers all at once, after removeContainer has already
finished - meaning the container still exists when we try to
remove its volumes, and thus the volume can't be removed.
Solution: collect a list of all named volumes in use by the pod,
and remove them all at once after every container in the pod is
gone. This ensures that there are no dependency issues.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
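
A condensed sketch of that ordering (hypothetical types and helpers, not libpod's actual API):

    package main

    import "fmt"

    type Container struct {
        Name         string
        NamedVolumes []string
    }

    type Pod struct{ Containers []*Container }

    // removePod collects the named volumes up front, removes every
    // container, and only then removes the volumes - so no volume is
    // still "in use" when its removal is attempted.
    func removePod(p *Pod, removeCtr func(*Container) error, removeVol func(string) error) error {
        volumes := map[string]struct{}{}
        for _, c := range p.Containers {
            for _, v := range c.NamedVolumes {
                volumes[v] = struct{}{}
            }
        }
        for _, c := range p.Containers {
            if err := removeCtr(c); err != nil {
                return err
            }
        }
        for v := range volumes {
            if err := removeVol(v); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        pod := &Pod{Containers: []*Container{{Name: "db", NamedVolumes: []string{"pgdata"}}}}
        err := removePod(pod,
            func(c *Container) error { fmt.Println("removed container", c.Name); return nil },
            func(v string) error { fmt.Println("removed volume", v); return nil },
        )
        fmt.Println(err)
    }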
Refactor the `RuntimeConfig` along with related code from libpod into libpod/config. Note that this is a first step of consolidating code into more coherent packages to make the code more maintainable and less prone to regressions in the long run.
Some libpod definitions were moved to `libpod/define` to resolve
circular dependencies.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
podman stats does not work in rootless environments with cgroups V1.
Fix error message and document this fact.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* set hostname in pod yaml file
* set --hostname in pod create command
Signed-off-by: Chen Zhiwei <zhiweik@gmail.com>
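
Usage, per the two bullets above: `podman pod create --hostname mypod` on the CLI, or a `hostname` field in the pod's YAML spec (following the Kubernetes pod spec).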
This is the third round of preparing to use golangci-lint on our code base.
Signed-off-by: baude <bbaude@redhat.com>
| |
If we don't do this, we can leak locks on every failure, and that
is very, very bad - can render Podman unusable without a 'system
renumber' being run.
Signed-off-by: Matthew Heon <mheon@redhat.com>
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
use the new implementation for dealing with cgroups.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
The compilation demands of having libpod in main are a burden for remote client compilation. To combat this, we should move the use of libpod structs, vars, constants, and functions into the adapter code, where it will only be compiled by the local client.
This should result in cleaner code organization and smaller binaries. It should also help if we ever need to compile the remote client natively on non-Linux operating systems (not cross-compiled).
Signed-off-by: baude <bbaude@redhat.com>
Without this we leak allocated locks, which is definitely not a
good thing.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
When using CGroupfs, we see races during pod removal between
removing the CGroup and the cleanup process starting (in the
CGroup, thus preventing removal).
The simplest way to avoid this is to prevent the forking of the
cleanup process. Conveniently, we can do this via the CGroup that
we already created for Conmon - we just need to update the PID
limit to 0, which completely inhibits new forks.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
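
A minimal sketch of that trick (the cgroup path is illustrative; libpod computes the real conmon CGroup path at runtime):

    package main

    import (
        "fmt"
        "os"
    )

    // inhibitForks writes "0" to pids.max in the given cgroup; the pids
    // controller then refuses any new fork in that cgroup, while already
    // running processes are unaffected.
    func inhibitForks(cgroupPath string) error {
        if err := os.WriteFile(cgroupPath+"/pids.max", []byte("0"), 0o644); err != nil {
            return fmt.Errorf("setting pids.max=0 in %s: %w", cgroupPath, err)
        }
        return nil
    }

    func main() {
        // Hypothetical cgroup v1 path for a pod's conmon cgroup.
        if err := inhibitForks("/sys/fs/cgroup/pids/libpod_pod_example/conmon"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }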
Instead of rewriting the logic, reuse the standard logic we use
for removing containers, which is much better tested.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Ensure that, if an error occurs somewhere along the way when we
remove a pod, it's preserved until the end and returned, even as
we continue to remove the pod.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Removing a pod must first remove all containers in the pod. Libpod requires the state to remain consistent at all times, so references to a deleted pod must all be cleansed first.
Pods can have many containers in them. We presently iterate
through all of them, and if an error occurs trying to clean up
and remove any single container, we abort the entire operation
(but cannot recover anything already removed - pod removal is not
an atomic operation).
Because of this, if a removal error occurs partway through, we
can end up with a pod in an inconsistent state that is no longer
usable. What's worse, if the error is in the infra container, and
it's persistent, we get zombie pods - completely unable to be
removed.
When we saw some of these same issues with containers not in
pods, we modified the removal code there to aggressively purge
containers from the database, then try to clean up afterwards.
Take the same approach here, and make cleanup errors nonfatal.
Once we've gone ahead and removed containers, we need to see
pod deletion through to the end - we'll log errors but keep
going.
Also, fix some other small things (most notably, we didn't generate events for the removed containers).
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
In libpod, we now log major events that occur. These events can be displayed using the `podman events` command. Each event contains:
* Type (container, image, volume, pod...)
* Status (create, rm, stop, kill, ....)
* Timestamp in RFC3339Nano format
* Name (if applicable)
* Image (if applicable)
The format of the event and the varlink endpoint are not to be considered stable until cockpit has done its enablement.
Signed-off-by: baude <bbaude@redhat.com>
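
Usage is simply `podman events`, which streams these records as they occur; filters (e.g. `podman events --filter type=pod`) narrow the stream by the fields listed above.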
Before, a pod create would fail if the pod was set to share no namespaces but had an infra container. While inefficient (you add a container for no reason), that shouldn't be a fatal failure. Fix this by only failing if the pod was set to share namespaces but had no infra container, and writing a warning in the reverse case.
Signed-off-by: Peter Hunt <pehunt@redhat.com>
or attached.
Prior to this, a pod had to be started immediately when created, leading to confusion about what a pod's state should be immediately after creation. The problem was that podman run --pod ... would error out if the infra container wasn't started (as it is a dependency). Fix this by allowing recursive start, where each of the container's dependencies is started prior to the new container. This is only applied to the case where a new container is attached to a pod.
Also rework the container_api Start, StartAndAttach, and Init functions, as there was some duplicated code, which made the problem easier to address.
Signed-off-by: Peter Hunt <pehunt@redhat.com>
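
A sketch of recursive start under those rules (hypothetical types; start order only, ignoring locking and state checks):

    package main

    import "fmt"

    type Container struct {
        ID      string
        Deps    []*Container
        started bool
    }

    // startWithDeps starts every dependency (e.g. the pod's infra
    // container) depth-first before the container itself, which is what
    // lets a container join a created-but-not-started pod.
    func startWithDeps(c *Container) error {
        if c.started {
            return nil
        }
        for _, dep := range c.Deps {
            if err := startWithDeps(dep); err != nil {
                return err
            }
        }
        fmt.Println("starting", c.ID) // stand-in for the real start
        c.started = true
        return nil
    }

    func main() {
        infra := &Container{ID: "infra"}
        app := &Container{ID: "app", Deps: []*Container{infra}}
        _ = startWithDeps(app) // prints "starting infra" then "starting app"
    }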
Remove runtime's lockDir as it is no longer needed after the lock
rework.
Add a trivial in-memory lock manager for unit testing.
Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
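
A trivial in-memory lock manager of that kind could look roughly like this (illustrative names, not libpod's actual interface):

    package locks

    import "sync"

    // InMemoryLockManager hands out numbered locks backed by plain
    // mutexes - enough for unit tests, with no shared memory involved.
    type InMemoryLockManager struct {
        mu    sync.Mutex
        locks map[uint32]*sync.Mutex
        next  uint32
    }

    func NewInMemoryLockManager() *InMemoryLockManager {
        return &InMemoryLockManager{locks: make(map[uint32]*sync.Mutex)}
    }

    // AllocateLock reserves and returns the next free lock ID.
    func (m *InMemoryLockManager) AllocateLock() uint32 {
        m.mu.Lock()
        defer m.mu.Unlock()
        id := m.next
        m.next++
        m.locks[id] = &sync.Mutex{}
        return id
    }

    // get fetches a lock under the manager mutex so concurrent
    // allocations cannot race with lookups.
    func (m *InMemoryLockManager) get(id uint32) *sync.Mutex {
        m.mu.Lock()
        defer m.mu.Unlock()
        return m.locks[id]
    }

    func (m *InMemoryLockManager) Lock(id uint32)   { m.get(id).Lock() }
    func (m *InMemoryLockManager) Unlock(id uint32) { m.get(id).Unlock() }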
Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
For pods using cgroupfs, we were seeing some error messages in CI
from an inability to remove the pod CGroup, which was traced down
to the conmon cgroup still being present as a child. Try to
remove these error messages and ensure successful CGroup deletion
by removing the conmon CGroup first, then the pod cgroup.
Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
Unfortunately, the papr CI system cannot test Ubuntu as a VM; therefore, this PR still keeps Travis. But it does include fixes that will be required for running on modern versions of Ubuntu.
Signed-off-by: baude <bbaude@redhat.com>
To work better with Kata containers, we need to delete() from the
OCI runtime as a part of cleanup, to ensure resources aren't
retained longer than they need to be.
To enable this, we need to add a new state to containers,
ContainerStateExited. Containers transition from
ContainerStateStopped to ContainerStateExited via cleanupRuntime
which is invoked as part of cleanup(). A container in the Exited
state is identical to Stopped, except it has been removed from
the OCI runtime and thus will be handled differently when
initializing the container.
Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
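
In sketch form, the new transition looks like this (names mirror the commit text; the surrounding types are illustrative stubs):

    package main

    import "fmt"

    type ContainerState int

    const (
        ContainerStateStopped ContainerState = iota
        ContainerStateExited
    )

    type ociRuntime struct{}

    // Delete stands in for removing the container from the OCI runtime.
    func (o *ociRuntime) Delete(id string) error {
        fmt.Println("oci delete", id)
        return nil
    }

    type Container struct {
        ID      string
        state   ContainerState
        runtime *ociRuntime
    }

    // cleanupRuntime moves a Stopped container to Exited: identical in
    // every other way, except it no longer exists in the OCI runtime.
    func (c *Container) cleanupRuntime() error {
        if c.state != ContainerStateStopped {
            return nil
        }
        if err := c.runtime.Delete(c.ID); err != nil {
            return err
        }
        c.state = ContainerStateExited
        return nil
    }

    func main() {
        c := &Container{ID: "abc123", state: ContainerStateStopped, runtime: &ociRuntime{}}
        _ = c.cleanupRuntime()
        fmt.Println("state is Exited:", c.state == ContainerStateExited)
    }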