When you run podman stats, the first interval always shows the wrong cpu
usage. To calculate the cpu percentage we get the cpu time from the cgroup
and compare it against the system time between two stats reads. Since we
have no previous stats the first time, an empty struct was used instead.
Thus we did not use the actual running time of the container but the
current unix timestamp (the time since Jan 1 1970).
To fix this, make sure that the previous stats time is set to the
container start time when it is empty.
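Schematically (a sketch with illustrative names and numbers, not actual Podman code):

    cpu% = cpu_time_delta / system_time_delta * 100
    before fix: system_time_delta = now - Jan 1 1970  (decades), so cpu% is diluted toward 0
    after fix:  system_time_delta = now - container start time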
[NO NEW TESTS NEEDED] No idea how I could create a test which would have
a predictable cpu usage.
See the linked bugzilla for a reproducer.
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=2066145
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Automated for .go files via gomove [1]:
`gomove github.com/containers/podman/v3 github.com/containers/podman/v4`
Remaining files via vgrep [2]:
`vgrep github.com/containers/podman/v3`
[1] https://github.com/KSubedi/gomove
[2] https://github.com/vrothberg/vgrep
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Support removing the entire pod when --depend is used on an infra
container. --all now implies --depend to properly support removing all
containers and not error out when hitting infra containers.
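A hedged sketch of the new behaviour (names hypothetical):

    $ podman pod create --name mypod
    $ podman rm --depend <infra-id>   # removing the infra container now removes the whole pod
    $ podman rm --all                 # --all implies --depend, so infra containers no longer error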
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
This option causes Podman to remove not only the specified containers
but also all of the containers that depend on the specified containers.
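A hedged usage sketch (container names hypothetical; proxy depends on web through its network namespace):

    $ podman create --name web nginx
    $ podman create --name proxy --net container:web alpine top
    $ podman rm --depend web   # removes web and, because of the dependency, proxy as well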
Fixes: https://github.com/containers/podman/issues/10360
Also ran codespell on the code
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Added support for a new flag, --passwd, which, when set to false,
prohibits Podman from creating entries in /etc/passwd and /etc/group,
allowing users to modify those files in the container entrypoint.
Resolves #11805
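For illustration, a hedged invocation (image name hypothetical):

    $ podman run --rm --passwd=false --user 1234:1234 myimage cat /etc/passwd
    # Podman adds no entry for uid 1234; the entrypoint may manage the file itself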
Signed-off-by: cdoern <cdoern@redhat.com>
[NO NEW TESTS NEEDED] This is just moving pkg/cgroups out so
existing tests should be fine.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Do not store the exit command in container config
There is a problem with creating and storing the exit command when the
container is created: it only contains the options the container was
created with but NOT the options the container is started with. One
example would be a CNI network config: if I start a container once, then
change the cni config dir with `--cni-config-dir` and start it a second
time, it will start successfully. However, the exit command still contains
the wrong `--cni-config-dir` because it was not updated.
To fix this we do not want to store the exit command at all. Instead we
create it every time the conmon process for the container is started.
This guarantees that the container cleanup process is started with
the correct settings.
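A hedged reproducer sketch for the stale exit command (directories hypothetical):

    $ podman --cni-config-dir /etc/cni/custom run -d --name c1 alpine top
    $ podman stop c1
    $ podman --cni-config-dir /tmp/other-cni start c1
    # before this fix the stored cleanup command still pointed at /etc/cni/custom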
[NO NEW TESTS NEEDED]
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
CRIU supports checkpoint/restore of file locks. This feature is
required to checkpoint/restore containers running applications
such as MySQL.
Signed-off-by: Radostin Stoyanov <radostin@redhat.com>
This adds the parameter '--print-stats' to 'podman container restore'.
With '--print-stats' Podman will measure how long Podman itself, the OCI
runtime, and CRIU require to restore a checkpoint, and print out this
information. CRIU already creates process restore statistics which are
simply read in addition to the added measurements. In contrast to just
printing out the ID of the restored container, Podman will now print
out JSON:

    # podman container restore --latest --print-stats
    {
      "podman_restore_duration": 305871,
      "container_statistics": [
        {
          "Id": "47b02e1d474b5d5fe917825e91ac653efa757c91e5a81a368d771a78f6b5ed20",
          "runtime_restore_duration": 140614,
          "criu_statistics": {
            "forking_time": 5,
            "restore_time": 67672,
            "pages_restored": 14
          }
        }
      ]
    }

The output contains 'podman_restore_duration', which is the number of
microseconds Podman required to restore the checkpoint. The output also
includes 'runtime_restore_duration', which is the time the runtime needed
to restore that specific container. Each container also includes
'criu_statistics', which displays the timing information collected by CRIU.
Signed-off-by: Adrian Reber <areber@redhat.com>
This adds the parameter '--print-stats' to 'podman container checkpoint'.
With '--print-stats' Podman will measure how long Podman itself, the OCI
runtime, and CRIU require to create a checkpoint, and print out this
information. CRIU already creates checkpointing statistics which are
simply read in addition to the added measurements. In contrast to just
printing out the ID of the checkpointed container, Podman will now print
out JSON:

    # podman container checkpoint --latest --print-stats
    {
      "podman_checkpoint_duration": 360749,
      "container_statistics": [
        {
          "Id": "25244244bf2efbef30fb6857ddea8cb2e5489f07eb6659e20dda117f0c466808",
          "runtime_checkpoint_duration": 177222,
          "criu_statistics": {
            "freezing_time": 100657,
            "frozen_time": 60700,
            "memdump_time": 8162,
            "memwrite_time": 4224,
            "pages_scanned": 20561,
            "pages_written": 2129
          }
        }
      ]
    }

The output contains 'podman_checkpoint_duration', which is the number of
microseconds Podman required to create the checkpoint. The output also
includes 'runtime_checkpoint_duration', which is the time the runtime needed
to checkpoint that specific container. Each container also includes
'criu_statistics', which displays the timing information collected by CRIU.
Signed-off-by: Adrian Reber <areber@redhat.com>
Podman stats is not supported for rootless cgroupv1 setups. The check
for this must be on the server side and not the client.
[NO NEW TESTS NEEDED] we cannot test this because remote and server are
always on the same machine in CI
Fixes #11909
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Add --time flag to podman container rm
Add --time flag to podman pod rm
Add --time flag to podman volume rm
Add --time flag to podman network rm
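A hedged usage sketch of the new flag:

    $ podman rm --force --time 5 mycontainer   # allow up to 5 seconds for the stop before removal
    $ podman pod rm --force --time 5 mypod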
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
This commit ensures we silently return the container id on `stop` if the
container was never created in the OCI runtime.
This behaviour ensures that we are in parity with Docker.
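For illustration (container name hypothetical):

    $ podman create --name never-started alpine
    $ podman stop never-started   # now prints the container id and exits 0 instead of erroring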
Signed-off-by: Aditya Rajan <arajan@redhat.com>
Also remove "ERROR: Error" stutter from logrus messages.
[NO TESTS NEEDED] This is just code cleanup.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
There's a potential race around extremely short-running
containers and events with journald. Events may not be written
for some time (small, but appreciable) after they are received,
and as such we can fail to retrieve an event if there is a sufficiently
short time between us writing the event and trying to read it.
Work around this by just retrying, with a 0.25 second delay
between retries, up to 4 times.
[NO TESTS NEEDED] because I have no idea how to reproduce this
race in CI.
Fixes #11633
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Improve the error handling of `container inspect` to properly handle
when the container has been removed _between_ the lookup and the
inspect. That will yield the correct "no such object" error message in
`inspect`.
[NO TESTS NEEDED] since I do not have a reliable and cheap
reproducer. It's fixing a CI flake, so there's already an indicator.
Fixes: #11392
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
When a container is configured for auto removal, podman stop should still
do cleanup; there is no guarantee that the cleanup process spawned by
conmon will be successful. Also, a user expects the network/mounts to be
cleaned up after podman stop. Therefore podman stop should not return
early, and instead should do the cleanup and ignore errors if the
container was already removed.
[NO TESTS NEEDED] I don't know how to test this.
Fixes #11384
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
InfraContainer should go through the same creation process as regular
containers. This change works from the cmd level down, involving new
container CLI opts and specgen creating functions. What now happens is
that both container and pod cli options are populated in cmd and used to
create a podSpecgen and a containerSpecgen. The process then goes as follows:
FillOutSpecGen (infra) -> MapSpec (podOpts -> infraOpts) -> PodCreate -> MakePod -> createPodOptions -> NewPod -> CompleteSpec (infra) -> MakeContainer -> NewContainer -> newContainer -> AddInfra (to pod state)
Signed-off-by: cdoern <cdoern@redhat.com>
Currently, if you execute podman unpause --all, Podman attempts to
unpause containers that are not paused and prints an error. This PR
catches this error and only prints errors if a paused container was not
able to be unpaused.
Likewise, if you execute podman pause --all or podman kill --all, Podman
attempts to pause or kill containers that are not running and prints an
error. This PR catches this error and only prints errors if a running
container was not able to be paused or killed.
Also change printing of multiple errors to go to stderr and prefix them
with "Error: " to match the output of the last error.
Fixes: https://github.com/containers/podman/issues/11098
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
stats: add an interval parameter to cli and api stats streaming
podman stats polled by default at a 1 second interval. This can put
quite some load on a machine if you run many containers. The default
value is now 5 seconds.
You can change this interval with a new, optional --interval/-i cli flag.
The API request also got an interval query parameter for the same purpose.
Additionally, an unused const was removed.
The API and cli will fail the request if a 0 or negative value is passed in.
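A hedged sketch of the new flag and query parameter (socket path and API version illustrative):

    $ podman stats --interval 10
    $ curl --unix-socket /run/podman/podman.sock \
          'http://d/v3.0.0/libpod/containers/stats?interval=10'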
Signed-off-by: Thomas Weber <towe75@googlemail.com>
This adds support for checkpointing containers out of pods and restoring
containers into pods.
It is only possible to restore a container into a pod if it has been
checkpointed out of a pod. It is also not possible to restore a non-pod
container into a pod.
The main reason this does not work is the PID namespace: if a non-pod
container is restored in a pod with a shared PID namespace, at least one
process in the restored container uses PID 1, which is already in use by
the infrastructure container. If someone tries to restore a container
from a pod with a shared PID namespace into a pod without a shared PID
namespace, it will also fail because the resulting PID namespace will not
have a PID 1.
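A hedged sketch of the intended flow (flag spellings assumed; names hypothetical):

    $ podman container checkpoint --export /tmp/chkpt.tar.gz ctr-in-pod
    $ podman pod create --name newpod --share pid
    $ podman container restore --pod newpod --import /tmp/chkpt.tar.gz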
Signed-off-by: Adrian Reber <areber@redhat.com>
Commit 341e6a1 made sure that all exec sessions are getting cleaned up.
But it also came with a performance penalty. Fix that penalty by
spawning the cleanup process to really only clean up the exec session
without attempting to remove the container.
[NO TESTS NEEDED] since we have no means to test such performance
issues in CI.
Fixes: #10701
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
The compat containers/logs endpoint was missing actual usage of the until
query param. This led me to implement the until param for libpod's
container logs as well. Added e2e tests.
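A hedged example of the compat endpoint with the new parameter (socket path and timestamp illustrative):

    $ curl --unix-socket /run/podman/podman.sock \
          'http://d/containers/mycontainer/logs?stdout=true&until=1624000000'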
Signed-off-by: cdoern <cdoern@redhat.com>
Make sure that containers configured for auto removal
(e.g., via `podman create --rm`) are removed in `podman start`
if starting the container failed.
Fixes: #10935
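For illustration (name hypothetical; assume the start fails for some reason):

    $ podman create --rm --name c alpine top
    $ podman start c    # if the start fails...
    $ podman ps -a      # ...c has been auto-removed rather than left behind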
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
First, make podman diff optionally accept a second argument. This allows
the user to specify a second image/container to compare the first with.
If it is not set, the parent layer will be used as before.
Second, podman container diff should only use containers and podman
image diff should only use images. Previously, podman container diff
would use the image when both an image and a container with this name
exist.
To make this work, two new parameters have been added to the API. If they
are not used, the previous behaviour is used. The same applies to the
bindings.
Fixes #10649
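A hedged usage sketch (names hypothetical):

    $ podman diff ctr1 ctr2             # compare two containers
    $ podman image diff img:v1 img:v2   # resolves images only
    $ podman container diff web         # resolves containers only, even if an image 'web' exists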
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
[NO TESTS NEEDED]
Signed-off-by: flouthoc <flouthoc.git@gmail.com>
Add podman-restart systemd unit file
* Add podman-restart systemd unit file and add it to podman RPM package
* Fix podman start to filter all containers + unit test
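For illustration, enabling the new unit (unit name from this change; the ExecStart shown is a rough paraphrase):

    $ systemctl enable podman-restart.service
    $ systemctl cat podman-restart.service | grep ExecStart
    ExecStart=/usr/bin/podman start --all --filter restart-policy=always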
Signed-off-by: Boaz Shuster <boaz.shuster.github@gmail.com>
We were previously only doing this for detached exec. I don't
know why we did that, but I don't see any reason not to extend it
to all exec sessions - it guarantees that we will always clean up
exec sessions, even if the original `podman exec` process died.
[NO TESTS NEEDED] because I don't really know how to test this
one.
Signed-off-by: Matthew Heon <mheon@redhat.com>
The checkpoint archive compression was hardcoded to `archive.Gzip`.
There have been requests to make the used compression algorithm
selectable. There was especially the request to not compress the
checkpoint archive to be able to create faster checkpoints when not
compressing it.
This also changes the default from `gzip` to `zstd`. This change should
not break anything as the restore code path automatically handles
whatever compression the user provides during restore.
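A hedged example of selecting the compression algorithm (flag name --compress assumed from this change):

    $ podman container checkpoint --export /tmp/cp.tar --compress none mycontainer
    $ podman container checkpoint --export /tmp/cp.tar.gz --compress gzip mycontainer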
Signed-off-by: Adrian Reber <areber@redhat.com>
We have race conditions where a container can be removed by two
different processes, e.g., when running containers with --rm.
It can be cleaned up in the API or by the conmon-executed
podman container cleanup.
When we fail to remove a container that does not exist, we should
not be printing errors or warnings; we should just log the fact at
debug level.
[NO TESTS NEEDED] Since this is a race condition it is difficult to
test.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Signed-off-by: Boaz Shuster <boaz.shuster.github@gmail.com>
Migrate the Podman code base over to `common/libimage` which replaces
`libpod/image` and a lot of glue code entirely.
Note that I tried to leave bread crumbs for changed tests.
Miscellaneous changes:
* Some errors yield different messages, which required altering some
tests.
* I fixed some pre-existing issues in the code. Others were marked as
`//TODO`s to prevent the PR from exploding.
* The `NamesHistory` of an image is returned as is from the storage.
Previously, we did some filtering which I think is undesirable.
Instead we should return the data as stored in the storage.
* Touched handlers use the ABI interfaces where possible.
* Local image resolution: previously Podman would match "foo" on
"myfoo". This behaviour has been changed and Podman will now
only match on repository boundaries, such that "foo" would match
"my/foo" but not "myfoo". I consider the old behaviour to be a
bug, at the very least an exotic corner case.
* Furthermore, "foo:none" does *not* resolve to a local image "foo"
without a tag anymore. It's a hill I am (almost) willing to die on.
* `image prune` prints the IDs of pruned images. Previously, in some
cases, the names were printed instead. The API clearly states ID,
so we should stick to it.
* Compat endpoint image removal with _force_ deletes the entire image,
not only the specified tag.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Signed-off-by: Boaz Shuster <boaz.shuster.github@gmail.com>
Co-authored-by: Ed Santiago <santiago@redhat.com>
Add --requires flag to podman run/create
Podman has, for a long time, had an internal concept of
dependency management, used mainly to ensure that pod infra
containers are started before any other container in the pod. We
also have the ability to recursively start these dependencies,
which we use to ensure that `podman start` on a container in a
pod will not fail because the infra container is stopped. We have
not, however, exposed these via the command line until now.
Add a `--requires` flag to `podman run` and `podman create` to
allow users to manually specify dependency containers. These
containers must be running before the container will start. Also,
make recursive starting the default for `podman start` so we can
start these containers and their dependencies easily.
Fixes #9250
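A hedged usage sketch (names and images hypothetical):

    $ podman create --name db postgres
    $ podman create --name app --requires db myapp
    $ podman start app   # recursively starts db first, then app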
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Container endpoints for the HTTP compat and libpod APIs allowed usage of
the list HTTP endpoint filter funcs. The documentation for both the
libpod and compat APIs does not allow that.
This commit aligns the code with the documentation.
Signed-off-by: Jakub Guzik <jakubmguzik@gmail.com>
Currently we are over-wrapping the error returned from removal
of a non-existent container:

    $ podman rm bogus -f
    Error: failed to evict container: "": failed to find container "bogus" in state: no container with name or ID bogus found: no such container

Removing the wraps gets us to:

    $ ./bin/podman rm bogus -f
    Error: no container with name or ID "bogus" found: no such container

Finally, also added quotes around the container name to help make it
stand out when you get an error; currently it gets lost in the error.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Since commit d54478d8eaec, a container's lock is released before
attempting to stop it via the OCI runtime. This opened the window
for various kinds of race conditions. One of them led to #9479 where
the removal+cleanup sequences of a `run --rm` session overlapped with
`rm -af`. Make both execution paths more robust by handling the case of
an already removed container.
Fixes: #9479
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
We missed bumping the go module, so let's do it now :)
* Automated go code with github.com/sirkon/go-imports-rename
* Manually via `vgrep podman/v2` the rest
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Docker always reports back the user's input, not the full
id; we should do the same.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
When we stop a container we print the full id; this does not match the
Docker behaviour or the start behaviour. We should print the user's
rawInput when we successfully stop the container.
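For illustration (abbreviated id; output hedged):

    $ podman stop 47b02e
    47b02e

Previously the full 64-character id was printed.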
Fixes: https://github.com/containers/podman/issues/9386
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Signed-off-by: Matej Vasek <mvasek@redhat.com>
Signed-off-by: Matej Vasek <mvasek@redhat.com>
Change the API handlers to use the same functions that local podman uses.
At the same time:
implement the remote API for the --all and --ignore flags for podman stop
implement the remote API for the --all flag for podman stop
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
I found several problems with container remove:
podman-remote rm --all was not handled.
podman-remote rm --ignore was not handled.
Return better errors when attempting to remove an --external container.
Currently we return that the container does not exist, as opposed to the
container being an external container that is in use.
This patch also consolidates the tunnel code to use the same code for
removing the container as the local API, removing duplication of code
and potential problems.
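A hedged illustration of the newly handled remote flags:

    $ podman-remote rm --all            # now handled
    $ podman-remote rm --ignore bogus   # no error if 'bogus' does not exist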
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Basic theory: We remove the container, but *only from the DB*.
We leave it in c/storage, we leave the lock allocated, we leave
it running (if it is). Then we create an identical container with
an altered name, and add that back to the database. Theoretically
we now have a renamed container.
The advantage of this approach is that it doesn't just apply to
rename - we can use this to make *any* configuration change to a
container that does not alter its container ID.
Potential problems are numerous. This process is *THOROUGHLY*
non-atomic at present - if you `kill -9` Podman mid-rename things
will be in a bad place, for example. Also, we can't rename
containers that can't be removed normally - IE, containers with
dependencies (pod infra containers, for example).
The largest potential improvement will be to move the majority of
the work into the DB, with a `RecreateContainer()` method - that
will add atomicity, and let us remove the container without
worrying about dependencies and similar issues.
Potential problems: long-running processes that edit the DB and
may have an older version of the configuration around. Most
notable example is `podman run --rm` - the removal command needed
to be manually edited to avoid this one. This begins to get at
the heart of me not wanting to do this in the first place...
This provides CLI and API implementations for frontend, but no
tunnel implementation. It will be added in a future release (just
held back for time now - we need this in 3.0 and are running low
on time).
This is honestly kind of horrifying, but I think it will work.
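A hedged sketch of the resulting frontend (names hypothetical):

    $ podman rename oldname newname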
Signed-off-by: Matthew Heon <mheon@redhat.com>