Tie the filtering logic for podman stop and podman start to the same place, getContainersAndInputByContext(), to reduce code redundancy.
Signed-off-by: Karthik Elango <kelango@redhat.com>
Fixes https://github.com/containers/podman/issues/15049
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
<MH: Fixed cherry-pick conflicts>
Signed-off-by: Matthew Heon <mheon@redhat.com>
A --filter flag is added for podman stop and podman --remote stop. The filtering logic is implemented in
getContainersAndInputByContext(); podman start filtering can be adapted to use this logic as well to limit redundancy.
Signed-off-by: Karthik Elango <kelango@redhat.com>
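For illustration, a hedged sketch of how the new flag might be invoked; the filter keys are assumed to follow the existing `podman ps --filter` syntax:
```bash
# Stop only containers matching a label filter (key syntax assumed
# to mirror podman ps --filter).
podman stop --filter label=app=web

# The same flag should apply against a remote Podman service.
podman --remote stop --filter name=web
```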
Podman wait should not default to just stopped. By default the
wait API waits for stopped and exited. We should not override this on
the client side.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
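A usage sketch of the behavior described above; `--condition` is an existing option of podman wait:
```bash
# With no --condition, the server-side default (stopped and exited)
# should now apply rather than a client-side override.
podman wait mycontainer

# An explicit condition still narrows the wait to one state.
podman wait --condition stopped mycontainer
```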
We now use the Go error wrapping format specifier `%w` instead of
the deprecated github.com/pkg/errors package.
[NO NEW TESTS NEEDED]
Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
This commit addresses three intertwined bugs to fix an issue when using
Gitlab runner on Podman. The three bug fixes are not split into
separate commits as tests won't pass otherwise; avoidable noise when
bisecting future issues.
1) Podman conflated states: even when asking to wait for the `exited`
state, Podman returned as soon as a container transitioned to
`stopped`. The issue surfaced in failing Gitlab runner tests [1], as
`conmon`'s buffers have not (yet) been emptied when attaching to a
container right after a wait. The race window was extremely narrow,
and I only managed to reproduce with the Gitlab runner [1] unit
tests.
2) The clearer separation between `exited` and `stopped` revealed a race
condition predating the changes. If a container is configured for
autoremoval (e.g., via `run --rm`), the "run" process competes with
the "cleanup" process running in the background. The window of the
race condition was sufficiently large that the "cleanup" process had
already removed the container and storage before the "run" process
could read the exit code and hence waited indefinitely.
Address the exit-code race condition by recording exit codes in the
main libpod database. Exit codes can now be read from a database.
When waiting for a container to exit, Podman first waits for the
container to transition to `exited` and will then query the database
for its exit code. Outdated exit codes are pruned during cleanup
(i.e., non-performance critical) and when refreshing the database
after a reboot. An exit code is considered outdated when it is older
than 5 minutes.
While the race condition predates this change, the waiting process
has apparently always been fast enough in catching the exit code due
to issue 1): `exited` and `stopped` were conflated. The waiting
process hence caught the exit code after the container transitioned
to `stopped` but before it `exited` and got removed.
3) With 1) and 2), Podman is now waiting for a container to properly
transition to the `exited` state. Some tests did not pass after 1)
and 2) which revealed the third bug: `conmon` was executed with its
working directory pointing to the OCI runtime bundle of the
container. The changed working directory broke resolving relative
paths in the "cleanup" process. The "cleanup" process errored
before actually cleaning up the container, and the waiting "main" process
ran indefinitely - or until hitting a timeout. Fix the issue by
executing `conmon` with the same working directory as Podman.
Note that fixing 3) *may* address a number of issues we have seen in the
past where for *some* reason cleanup processes did not fire.
[1] https://gitlab.com/gitlab-org/gitlab-runner/-/issues/27119#note_970712864
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
[MH: Minor reword of commit message]
Signed-off-by: Matthew Heon <mheon@redhat.com>
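A minimal sketch of the scenario these fixes target, assuming an autoremoved, short-lived container:
```bash
# "run --rm" races the background cleanup process: the exit code must
# be read (now from the libpod database) before container and storage
# are removed.
podman run --rm --name short alpine sh -c 'exit 7'
echo $?  # expected: 7, instead of hanging indefinitely
```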
* Replace "setup", "lookup", "cleanup", "backup" with
"set up", "look up", "clean up", "back up"
when used as verbs. Also replace variations of those.
* Improve language in a few places.
Signed-off-by: Erik Sjölund <erik.sjolund@gmail.com>
Implement podman pod clone, a command to create an exact copy of a pod while
changing certain config elements.
Currently supported flags are:
--name change the pod name
--destroy remove the original pod
--start run the new pod on creation
and all infra-container related flags from podman pod create (namespaces etc)
resolves #12843
Signed-off-by: cdoern <cdoern@redhat.com>
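A usage sketch assembled from the flags listed above:
```bash
# Clone "mypod" under a new name, remove the original,
# and start the copy right away.
podman pod clone --name mypod-copy --destroy --start mypod
```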
`podman container restore --file-locks` does not restore file locks
because this option is not passed to the OCI runtime. This patch fixes the
issue.
Signed-off-by: Hironori Shiina <shiina.hironori@jp.fujitsu.com>
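A sketch of the round trip this fixes, assuming the usual `--export`/`--import` checkpoint flow:
```bash
# Checkpoint a container that holds file locks, then restore it;
# --file-locks must now be passed through to the OCI runtime.
podman container checkpoint --file-locks --export=/tmp/ckpt.tar.gz mydb
podman container restore --file-locks --import=/tmp/ckpt.tar.gz
```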
Detects unnecessary type conversions and helps keep the code base
cleaner.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Support running `podman play kube` in systemd by exploiting the
previously added "service containers". During `play kube`, a service
container is started before all the pods and containers, and is stopped
last. The service container communicates its conmon PID via sdnotify.
Add a new systemd template to dispatch such k8s workloads. The argument
of the template is the path to the k8s file. Note that the path must be
escaped for systemd not to bark:
Let's assume we have a `top.yaml` file in the home directory:
```
$ escaped=$(systemd-escape ~/top.yaml)
$ systemctl --user start podman-play-kube@$escaped.service
```
Closes: https://issues.redhat.com/browse/RUN-1287
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
pass networks to container clone
Since the network config is a string map, json.Unmarshal does not recognize
the config and the spec as the same entity, so this option needs to be mapped manually.
resolves #13713
Signed-off-by: cdoern <cbdoer23@g.holycross.edu>
The errcheck linter makes sure that errors are always checked and not
ignored by accident. It spotted a lot of unchecked errors, mostly in the
tests but also some real problems in the code.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
podman container clone -f
Add the option -f to force-remove the parent container if --destroy is specified.
resolves #13917
Signed-off-by: cdoern <cbdoer23@g.holycross.edu>
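A usage sketch of the combined flags:
```bash
# Clone "oldctr" and force-remove the original, even if it is running.
podman container clone --destroy -f oldctr
```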
This is an enhancement proposal for the checkpoint / restore feature of
Podman that enables container migration across multiple systems with
standard image distribution infrastructure.
A new option `--create-image <image>` has been added to the
`podman container checkpoint` command. This option tells Podman to
create a container image. This is a standard image with a single layer, a
tar archive that contains all checkpoint files. This is similar to
the current approach with checkpoint `--export`/`--import`.
This image can be pushed to a container registry and pulled on a
different system. It can also be exported locally with `podman image
save` and inspected with `podman inspect`. Inspecting the image would
display additional information about the host and the versions of
Podman, CRIU, crun/runc, the kernel, etc.
`podman container restore` has also been extended to accept an image
name or ID as input.
Suggested-by: Adrian Reber <areber@redhat.com>
Signed-off-by: Radostin Stoyanov <radostin@redhat.com>
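A hedged sketch of the proposed migration flow; the registry path is a placeholder:
```bash
# Checkpoint into a single-layer image and push it to a registry.
podman container checkpoint --create-image example.com/me/ckpt:latest myctr
podman push example.com/me/ckpt:latest

# On a different system: pull the image and restore from it.
podman pull example.com/me/ckpt:latest
podman container restore example.com/me/ckpt:latest
```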
pod logs enhancements: option to color logs
Signed-off-by: Krzysztof Baran <krysbaran@gmail.com>
Signed-off-by: gcalin <caling@protonmail.com>
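A usage sketch; the flag spelling `--color` is an assumption based on the PR title:
```bash
# Interleave the pod's container logs, one color per container.
podman pod logs --color mypod
```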
The infra Inherit function was not properly passing pod volume information to new containers.
Alter the Inherit function and struct to use the new `ConfigToSpec` function used in clone;
pick and choose the proper entities from a temp spec and validate them on the specgen side
rather than passing them directly to a config.
resolves #13548
Signed-off-by: cdoern <cbdoer23@g.holycross.edu>
Signed-off-by: cdoern <cdoern@redhat.com>
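A sketch of the inheritance this fixes, assuming pod-level volumes created with `-v` on podman pod create:
```bash
# A volume defined at the pod level should now be inherited by
# containers joining the pod.
podman pod create --name mypod -v /srv/data:/data
podman run --pod mypod alpine ls /data
```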
Closes: https://github.com/containers/podman/issues/3979
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
When you run podman stats, the first interval always shows the wrong cpu
usage. To calculate cpu percentage we get the cpu time from the cgroup
and compare this against the system time between two stats. Since we do
not have previous stats for the first interval, an empty struct is used
instead. Thus we do not use the actual running time of the container but
the current unix timestamp (time since Jan 1 1970).
To fix this, we make sure that the previous stats time is set to the
container start time when it is empty.
[NO NEW TESTS NEEDED] No idea how I could create a test which would have
a predictable cpu usage.
See the linked bugzilla for a reproducer.
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=2066145
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
container-commit: support `--squash` to squash layers into one if users want.
Allow users to commit containers into a single layer.
Usage
```bash
podman container commit --squash <name>
```
Signed-off-by: Aditya R <arajan@redhat.com>
podman container clone takes the ID of an existing container and creates a specgen from the
given container's config, recreating all proper namespaces and overriding spec options like
resource limits and the container name if given in the CLI options.
This command utilizes the common function DefineCreateFlags, meaning that we can funnel as
many create options as we want into clone over time, allowing the user to clone with as much
or as little of the original config as they want.
container clone takes a second argument, which is a new name, and a third argument, which is
an image name to use instead of the original container's.
The currently supported flags are:
--destroy (remove the original container)
--name (new ctr name)
--cpus (sets cpu period and quota)
--cpuset-cpus
--cpu-period
--cpu-rt-period
--cpu-rt-runtime
--cpu-shares
--cpuset-mems
--memory
--run
resolves #10875
Signed-off-by: cdoern <cdoern@redhat.com>
Signed-off-by: cdoern <cbdoer23@g.holycross.edu>
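A usage sketch combining the positional arguments and a few of the listed flags:
```bash
# Clone "oldctr" into "newctr", overriding its resource limits.
podman container clone --cpus 2 --memory 1g oldctr newctr

# A third argument swaps in a different image for the clone.
podman container clone oldctr newctr alpine:latest
```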
Automated for .go files via gomove [1]:
`gomove github.com/containers/podman/v3 github.com/containers/podman/v4`
Remaining files via vgrep [2]:
`vgrep github.com/containers/podman/v3`
[1] https://github.com/KSubedi/gomove
[2] https://github.com/vrothberg/vgrep
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Support removing the entire pod when --depend is used on an infra
container. --all now implies --depend to properly support removing all
containers and not error out when hitting infra containers.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
This option causes Podman to remove not only the specified containers
but also all of the containers that depend on them.
Fixes: https://github.com/containers/podman/issues/10360
Also ran codespell on the code
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
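A usage sketch of the new flag:
```bash
# Remove "base" together with every container that depends on it
# (for example, containers joined to its network namespace).
podman rm --depend base
```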
Added support for a new flag, --passwd, which when false prohibits Podman from creating entries in
/etc/passwd and /etc/group, allowing users to modify those files in the container entrypoint.
resolves #11805
Signed-off-by: cdoern <cdoern@redhat.com>
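A sketch of the flag; the explicit boolean form `--passwd=false` is an assumption:
```bash
# Skip creating /etc/passwd and /etc/group entries so the entrypoint
# can manage those files itself.
podman run --passwd=false --user 1234 myimage
```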
[NO NEW TESTS NEEDED] This is just moving pkg/cgroups out so
existing tests should be fine.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Do not store the exit command in container config
There is a problem with creating and storing the exit command when the
container was created. It only contains the options the container was
created with but NOT the options the container is started with. One
example would be a CNI network config. If I start a container once, then
change the cni config dir with `--cni-config-dir` and start it a second
time, it will start successfully. However, the exit command still contains
the wrong `--cni-config-dir` because it was not updated.
To fix this we do not want to store the exit command at all. Instead we
create it every time the conmon process for the container is started.
This guarantees us that the container cleanup process is started with
the correct settings.
[NO NEW TESTS NEEDED]
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
CRIU supports checkpoint/restore of file locks. This feature is
required to checkpoint/restore containers running applications
such as MySQL.
Signed-off-by: Radostin Stoyanov <radostin@redhat.com>
This adds the parameter '--print-stats' to 'podman container restore'.
With '--print-stats' Podman will measure how long Podman itself, the OCI
runtime and CRIU require to restore a checkpoint and print out this
information. CRIU already creates process restore statistics which are
just read in addition to the added measurements. In contrast to just
printing out the ID of the restored container, Podman will now print
out JSON:
# podman container restore --latest --print-stats
{
"podman_restore_duration": 305871,
"container_statistics": [
{
"Id": "47b02e1d474b5d5fe917825e91ac653efa757c91e5a81a368d771a78f6b5ed20",
"runtime_restore_duration": 140614,
"criu_statistics": {
"forking_time": 5,
"restore_time": 67672,
"pages_restored": 14
}
}
]
}
The output contains 'podman_restore_duration' which contains the
number of microseconds Podman required to restore the checkpoint. The
output also includes 'runtime_restore_duration' which is the time
the runtime needed to restore that specific container. Each container
also includes 'criu_statistics' which displays the timing information
collected by CRIU.
Signed-off-by: Adrian Reber <areber@redhat.com>
This adds the parameter '--print-stats' to 'podman container checkpoint'.
With '--print-stats' Podman will measure how long Podman itself, the OCI
runtime and CRIU require to create a checkpoint and print out this
information. CRIU already creates checkpointing statistics which are
just read in addition to the added measurements. In contrast to just
printing out the ID of the checkpointed container, Podman will now print
out JSON:
# podman container checkpoint --latest --print-stats
{
"podman_checkpoint_duration": 360749,
"container_statistics": [
{
"Id": "25244244bf2efbef30fb6857ddea8cb2e5489f07eb6659e20dda117f0c466808",
"runtime_checkpoint_duration": 177222,
"criu_statistics": {
"freezing_time": 100657,
"frozen_time": 60700,
"memdump_time": 8162,
"memwrite_time": 4224,
"pages_scanned": 20561,
"pages_written": 2129
}
}
]
}
The output contains 'podman_checkpoint_duration' which contains the
number of microseconds Podman required to create the checkpoint. The
output also includes 'runtime_checkpoint_duration' which is the time
the runtime needed to checkpoint that specific container. Each container
also includes 'criu_statistics' which displays the timing information
collected by CRIU.
Signed-off-by: Adrian Reber <areber@redhat.com>
Podman stats is not supported for rootless cgroupv1 setups. The check
for this must be on the server side and not the client.
[NO NEW TESTS NEEDED] we cannot test this because remote and server are
always on the same machine in CI
Fixes #11909
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Add --time flag to podman container rm
Add --time flag to podman pod rm
Add --time flag to podman volume rm
Add --time flag to podman network rm
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
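A sketch of the flag across the listed commands; pairing --time with --force is an assumption, since the timeout only matters when the removal must stop something first:
```bash
# Allow up to 5 seconds before the removal is forced through.
podman container rm --force --time 5 myctr
podman pod rm --force --time 5 mypod
podman volume rm --force --time 5 myvol
podman network rm --force --time 5 mynet
```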
The following commit ensures we silently return the container ID on `stop`
if the container was never created in the OCI runtime.
This behaviour keeps us at parity with Docker.
Signed-off-by: Aditya Rajan <arajan@redhat.com>
Also remove "ERROR: Error" stutter from logrus messages.
[NO TESTS NEEDED] This is just code cleanup.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
There's a potential race around extremely short-running
containers and events with journald. Events may not be written
for some time (small, but appreciable) after they are received,
and as such we can fail to retrieve an event if there is a sufficiently
short time between writing it and trying to read it.
Work around this by just retrying, with a 0.25 second delay
between retries, up to 4 times.
[NO TESTS NEEDED] because I have no idea how to reproduce this
race in CI.
Fixes #11633
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Improve the error handling of `container inspect` to properly handle
when the container has been removed _between_ the lookup and the
inspect. That will yield the correct "no such object" error message in
`inspect`.
[NO TESTS NEEDED] since I do not have a reliable and cheap
reproducer. It's fixing a CI flake, so there's already an indicator.
Fixes: #11392
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
When a container is configured for auto removal, podman stop should still
do cleanup; there is no guarantee that the cleanup process spawned by
conmon will be successful. Also, a user expects after podman stop that
the network/mounts are cleaned up. Therefore podman stop should not return
early and instead do the cleanup and ignore errors if the container was
already removed.
[NO TESTS NEEDED] I don't know how to test this.
Fixes #11384
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
InfraContainer should go through the same creation process as regular containers. This change reaches from the cmd level
down, involving new container CLI opts and specgen-creating functions. What now happens is that both container and pod
CLI options are populated in cmd and used to create a podSpecgen and a containerSpecgen. The process then goes as follows:
FillOutSpecGen (infra) -> MapSpec (podOpts -> infraOpts) -> PodCreate -> MakePod -> createPodOptions -> NewPod -> CompleteSpec (infra) -> MakeContainer -> NewContainer -> newContainer -> AddInfra (to pod state)
Signed-off-by: cdoern <cdoern@redhat.com>
Currently, if you execute podman unpause --all, Podman attempts to
unpause containers that are not paused and prints an error. This PR
catches this error and only prints errors if a paused container could
not be unpaused.
Likewise, if you execute podman pause --all or podman kill --all, Podman
attempts to pause or kill containers that are not running and prints an
error. This PR catches this error and only prints errors if a running
container could not be paused or killed.
Also change printing of multiple errors to go to stderr and to prefix
"Error: " in front to match the output of the last error.
Fixes: https://github.com/containers/podman/issues/11098
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
stats: add an interval parameter to cli and api stats streaming
podman stats polled by default at a 1 second interval.
This can put quite some load on a machine if you run many containers.
The default value is now 5 seconds.
You can change this interval with a new, optional --interval, -i cli flag.
The API request also got an interval query parameter for the same purpose.
Additionally, an unused const was removed.
API and CLI will fail the request if a zero or negative value is passed in.
Signed-off-by: Thomas Weber <towe75@googlemail.com>
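A usage sketch:
```bash
# Poll every 10 seconds instead of the 5 second default.
podman stats --interval 10

# Short form; zero or negative values are rejected.
podman stats -i 10
```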
This adds support to checkpoint containers out of pods and restore
containers into pods.
It is only possible to restore a container into a pod if it has been
checkpointed out of a pod. It is also not possible to restore a non-pod
container into a pod.
The main reason this does not work is the PID namespace. If a non-pod
container is being restored in a pod with a shared PID namespace, at
least one process in the restored container uses PID 1, which is already
in use by the infrastructure container. If someone tries to restore a
container from a pod with a shared PID namespace into a setup without a
shared PID namespace, it will also fail because the resulting PID
namespace will not have a PID 1.
Signed-off-by: Adrian Reber <areber@redhat.com>
Commit 341e6a1 made sure that all exec sessions are getting cleaned up.
But it also came with a performance penalty. Fix that penalty by
spawning the cleanup process to really only clean up the exec session
without attempting to remove the container.
[NO TESTS NEEDED] since we have no means to test such performance
issues in CI.
Fixes: #10701
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
The compat containers/logs endpoint was missing actual usage of the until query param.
This led me to implement the until param for libpod's container logs as well. Added e2e tests.
Signed-off-by: cdoern <cdoern@redhat.com>
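A CLI-level sketch of the parameter; the accepted time formats are assumed to match --since:
```bash
# Show only log lines written before the given timestamp.
podman logs --until 2021-06-01T12:00:00 myctr

# A relative duration is assumed to work as it does for --since.
podman logs --until 10m myctr
```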