bindings: Add `Delete` for deleting a manifest list from local storage.
The bindings already support `Remove`, which removes a single manifest from the
list. The following function adds support for removing the entire manifest list
from local storage.
Similar functionality can also be used indirectly via `Remove` as defined in the
image bindings.
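For context, the CLI exposes the same pair of operations; a hedged illustration (the list name and digest are placeholders):
```
# remove a single image digest from a manifest list (bindings: Remove)
$ podman manifest remove mylist:v1 sha256:<digest>
# delete the entire manifest list from local storage (bindings: Delete)
$ podman manifest rm mylist:v1
```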
Signed-off-by: Aditya R <arajan@redhat.com>
fix volume reporting in system df
Currently, `podman system df` incorrectly calculates the reclaimable storage for
volumes: it uses a cumulative reclaimable variable that is incremented and written
into each report entry, causing values to rise above 100%.
Switch this variable to the scope of the loop so that it resets per volume, just like the size variable does.
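A hedged sketch of where the fix is visible; the exact column layout may differ:
```
# the "Local Volumes" row should never report reclaimable space above 100%
$ podman system df
```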
resolves #13516
Signed-off-by: Charlie Doern <cdoern@redhat.com>
Add support for `podman-remote image scp` as well as direct access via the API. This
entailed a full rework of the layering of the image scp functions, along with the usual API plumbing and type creation.
Also implement tagging for `podman image scp`, which makes the syntax much more readable and lets users tag the new image
they are loading onto the local/remote machine:
users can pass a "new name" for the image they are transferring.
`podman tag`, as implemented, creates a new entry in `image list` when tagging, so this does the same,
meaning that when transferring images with tags, podman on the remote machine/user will load two images.
Example: `podman image scp computer1::alpine computer2::foobar` creates alpine:latest and localhost/foobar on the remote host.
Implementing tags means removing the flexible syntax. In the currently released podman image scp, the user can specify either
`podman image scp source::img dest::` or `podman image scp dest:: source::img`. With tags, however, it becomes very hard to determine
which argument is the image (src) and which is the new tag (dst). Removing that flexibility streamlines the argument parsing.
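Usage sketch taken from the description above (host names are placeholders):
```
$ podman image scp computer1::alpine computer2::foobar
# computer2 now has both alpine:latest and localhost/foobar
```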
Signed-off-by: Charlie Doern <cdoern@redhat.com>
pod: ps does not race with rm
the "pod ps" command first retrieves the list of all pods, then
iterates over the list to inspect each pod. This introduce a race
since a pod could be deleted in the meanwhile by another process.
Solve it by ignoring the define.ErrNoSuchPod error.
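A hedged reproducer sketch of the race (not part of the original change):
```
# list pods while another process creates and removes them concurrently
$ while :; do podman pod create >/dev/null; podman pod rm -af >/dev/null; done &
$ podman pod ps   # should no longer fail with "no such pod"
```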
Closes: https://github.com/containers/podman/issues/14736
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
This is a preparatory change for the next commit.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Signed-off-by: Boaz Shuster <boaz.shuster.github@gmail.com>
add podman volume reload to sync volume plugins
Libpod requires that all volumes are stored in the libpod db. Because
volumes backed by volume plugins can be created outside of Podman, the
db does not list all available volumes. The new podman volume reload
command allows users to sync the libpod db with their external volume
plugins: all new volumes from the plugins are created in the libpod db,
and volumes in the db that no longer exist are removed if possible.
There are some known problems:
- naming conflicts: in this case we only use the first volume found,
which is not deterministic.
- race conditions: we have no control over the volume plugins, so it is
possible that the volumes change while this command runs.
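A hedged usage sketch of the new command:
```
# sync the libpod db with the volumes known to external volume plugins
$ podman volume reload
```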
Fixes #14207
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Show Health Status events
Previously, health status events were not being generated at all. Both
the API and `podman events` now generate health_status events.
```
{"status":"health_status","id":"ae498ac3aa6c63db8b69a37583a6eae1a9cefbdbdbeeadcf8e1d66d745f0df63","from":"localhost/healthcheck-demo:latest","Type":"container","Action":"health_status","Actor":{"ID":"ae498ac3aa6c63db8b69a37583a6eae1a9cefbdbdbeeadcf8e1d66d745f0df63","Attributes":{"containerExitCode":"0","image":"localhost/healthcheck-demo:latest","io.buildah.version":"1.26.1","maintainer":"NGINX Docker Maintainers \u003cdocker-maint@nginx.com\u003e","name":"healthcheck-demo"}},"scope":"local","time":1656082205,"timeNano":1656082205882271276,"HealthStatus":"healthy"}
```
```
2022-06-24 11:06:04.886238493 -0400 EDT container health_status ae498ac3aa6c63db8b69a37583a6eae1a9cefbdbdbeeadcf8e1d66d745f0df63 (image=localhost/healthcheck-demo:latest, name=healthcheck-demo, health_status=healthy, io.buildah.version=1.26.1, maintainer=NGINX Docker Maintainers <docker-maint@nginx.com>)
```
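A hedged sketch for watching only these events (filter syntax assumed from `podman events`):
```
$ podman events --filter event=health_status
```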
Signed-off-by: Jake Correnti <jcorrenti13@gmail.com>
This commit addresses three intertwined bugs to fix an issue when using
Gitlab runner on Podman. The three bug fixes are not split into
separate commits as tests won't pass otherwise; avoidable noise when
bisecting future issues.
1) Podman conflated states: even when asked to wait for the `exited`
state, Podman returned as soon as a container transitioned to
`stopped`. The issue surfaced in failing Gitlab runner tests [1], as
`conmon`'s buffers had not (yet) been emptied when attaching to a
container right after a wait. The race window was extremely narrow,
and I only managed to reproduce it with the Gitlab runner [1] unit
tests.
2) The clearer separation between `exited` and `stopped` revealed a race
condition predating the changes. If a container is configured for
autoremoval (e.g., via `run --rm`), the "run" process competes with
the "cleanup" process running in the background. The window of the
race condition was sufficiently large that the "cleanup" process has
already removed the container and storage before the "run" process
could read the exit code and hence waited indefinitely.
Address the exit-code race condition by recording exit codes in the
main libpod database. Exit codes can now be read from the database.
When waiting for a container to exit, Podman first waits for the
container to transition to `exited` and will then query the database
for its exit code. Outdated exit codes are pruned during cleanup
(i.e., non-performance critical) and when refreshing the database
after a reboot. An exit code is considered outdated when it is older
than 5 minutes.
While the race condition predates this change, the waiting process
has apparently always been fast enough in catching the exit code due
to issue 1): `exited` and `stopped` were conflated. The waiting
process hence caught the exit code after the container transitioned
to `stopped` but before it `exited` and got removed.
3) With 1) and 2), Podman is now waiting for a container to properly
transition to the `exited` state. Some tests did not pass after 1)
and 2) which revealed the third bug: `conmon` was executed with its
working directory pointing to the OCI runtime bundle of the
container. The changed working directory broke resolving relative
paths in the "cleanup" process. The "cleanup" process error'ed
before actually cleaning up the container and waiting "main" process
ran indefinitely - or until hitting a timeout. Fix the issue by
executing `conmon` with the same working directory as Podman.
Note that fixing 3) *may* address a number of issues we have seen in the
past where for *some* reason cleanup processes did not fire.
[1] https://gitlab.com/gitlab-org/gitlab-runner/-/issues/27119#note_970712864
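A hedged illustration of the behavior this enables (image name is a placeholder):
```
# the exit code of an auto-removed container can be read even after cleanup,
# because it is now recorded in the libpod database (for up to 5 minutes)
$ id=$(podman run -d --rm alpine false)
$ podman wait $id
1
```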
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
[MH: Minor reword of commit message]
Signed-off-by: Matthew Heon <mheon@redhat.com>
* Replace "setup", "lookup", "cleanup", "backup" with
"set up", "look up", "clean up", "back up"
when used as verbs. Also replace variations of those.
* Improve language in a few places.
Signed-off-by: Erik Sjölund <erik.sjolund@gmail.com>
Update to use gopkg.in/yaml.v3
[NO NEW TESTS NEEDED]
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
allow filtering networks by dangling status
Add the ability to filter networks by their dangling status via:
`network ls --filter dangling=true/false`
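Usage sketch from the description above:
```
# list only networks that are not referenced by any container
$ podman network ls --filter dangling=true
```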
Fixes: #14595
Signed-off-by: Carlo Lobrano <c.lobrano@gmail.com>
When multiple paths are specified, attempt to join them all before
returning an error. Previously we failed on the first pid found.
[NO NEW TESTS NEEDED]
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
podman system prune: support pruning unused networks
This is an enhancement for the podman system prune feature. The issue
https://github.com/containers/podman/issues/8673 mentions that
'network prune' should be wired into 'podman system prune'.
Therefore, add the function to remove unused networks.
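A hedged usage sketch:
```
# unused networks are now removed as part of the prune
$ podman system prune --force
```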
Signed-off-by: Toshiki Sonoda <sonoda.toshiki@fujitsu.com>
[NO NEW TESTS NEEDED] podman pod clone somehow snuck past the new linter
code that went in while it was in flight; fix that here.
Signed-off-by: cdoern <cdoern@redhat.com>
implement podman pod clone
implement podman pod clone, a command to create an exact copy of a pod
while changing certain config elements.
Currently supported flags are:
--name change the pod name
--destroy remove the original pod
--start run the new pod on creation
and all infra-container related flags from podman pod create (namespaces etc)
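A hedged usage sketch built from the flags above (pod names are placeholders):
```
# clone "mypod" into "mypod-copy" and start the copy immediately
$ podman pod clone --name mypod-copy --start mypod
```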
resolves #12843
Signed-off-by: cdoern <cdoern@redhat.com>
The nolintlint linter does not deny the use of `//nolint`. Instead,
it allows us to enforce a common nolint style:
- force that a linter name must be specified
- do not add a space between `//` and `nolint`
- make sure nolint is only used when there is actually a problem
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
podman-remote push --remove-signatures support
I don't see a reason why we don't support --remove-signatures
from remote push, so adding support.
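A hedged usage sketch (image name and registry are placeholders):
```
# push without copying existing signatures
$ podman-remote push --remove-signatures myimage registry.example.com/myimage
```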
Fixes: https://github.com/containers/podman/issues/14558
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Add a new `--overwrite` flag to `podman cp` to allow for overwriting an
existing destination, so that users who depend on the previous behavior
have a workaround. By default, the flag is turned off to be compatible
with Docker and to have saner behavior.
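A hedged usage sketch (container and paths are placeholders):
```
# explicitly allow the copy to replace an existing destination
$ podman cp --overwrite ./config.json mycontainer:/etc/app/config.json
```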
Fixes: #14420
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
`podman container restore --file-locks` does not restore file locks
because this option is not passed to the OCI runtime. This patch fixes this
issue.
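A hedged usage sketch (container name is a placeholder; the matching checkpoint flag is assumed):
```
$ podman container checkpoint --file-locks mycontainer
$ podman container restore --file-locks mycontainer
```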
Signed-off-by: Hironori Shiina <shiina.hironori@jp.fujitsu.com>
Improve robustness of `podman system reset`
Firstly, reset is now managed by the runtime itself as a part of
initialization. This ensures that it can be used even with
runtimes that would otherwise fail to be created - most notably,
when the user has changed a core path
(runroot/root/tmpdir/staticdir).
Secondly, we now attempt a best-effort removal even if the store
completely fails to be configured.
Third, we now hold the alive lock for the entire reset operation.
This ensures that no other Podman process can start while we are
running a system reset, and removes any possibility of a race
where a user tries to create containers or pull images while we
are trying to perform a reset.
[NO NEW TESTS NEEDED] we do not test reset last I checked.
Fixes #9075
Signed-off-by: Matthew Heon <mheon@redhat.com>
Libpod or packages under /pkg should never import from /cmd/...
This will quickly result in import cycles and weird code paths.
Also, there is no reason to use this special code; we can just use
syscall.SIGHUP as the signal.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Update method signatures and structs to pass option to buildah code
```release-note
NONE
```
[NO NEW TESTS NEEDED]
Signed-off-by: Jhon Honce <jhonce@redhat.com>
* Option left in the images/diff.go CLI, as a comment implies it is needed
for backwards compatibility.
```release-note
NONE
```
[NO NEW TESTS NEEDED]
Signed-off-by: Jhon Honce <jhonce@redhat.com>
Use containers/common/pkg/util.StringToSlice
[NO NEW TESTS NEEDED] Just code cleanup for better reuse
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Remove the `ConfigDigest` field from `entities.ImageSummary`, which has
never been populated (or documented) so far. Unless there is a specific
request or need to support it, also remove the TODO that was added
during the libimage migration.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Make sure that `podman image mount` prints a pretty table unless only
one argument is passed and no custom format is set. Fixing a TODO item
brought me to the specific code location and revealed the flaw in the
logic.
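A hedged sketch of the affected invocation:
```
# with no arguments and no custom format, a table of mounted images is printed
$ podman image mount
```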
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
* Remove duplicate or unused types and constants
* Move all documentation-only models and responses into the swagger package
* Remove all unnecessary names, go-swagger will determine names from
struct declarations
* Use Libpod suffix to differentiate between compat and libpod models
and responses. Taken from swagger:operation declarations.
* Models and responses that start with lowercase are for swagger use
only, while uppercase ones are used "as is" in the code and swagger comments
* Used gofumpt on new code
```release-note
```
Signed-off-by: Jhon Honce <jhonce@redhat.com>
Detects unnecessary type conversions and helps in keeping the code base
cleaner.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Support running `podman play kube` in systemd by exploiting the
previously added "service containers". During `play kube`, a service
container is started before all the pods and containers, and is stopped
last. The service container communicates its conmon PID via sdnotify.
Add a new systemd template to dispatch such k8s workloads. The argument
of the template is the path to the k8s file. Note that the path must be
escaped for systemd not to bark:
Let's assume we have a `top.yaml` file in the home directory:
```
$ escaped=$(systemd-escape ~/top.yaml)
$ systemctl --user start podman-play-kube@$escaped.service
```
Closes: https://issues.redhat.com/browse/RUN-1287
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Add the notion of a "service container" to play kube. A service
container is started before the pods in play kube and is (reverse)
linked to them. The service container is stopped/removed *after*
all pods it is associated with are stopped/removed.
In other words, a service container tracks the entire life cycle
of a service started via `podman play kube`. This is required to
enable `play kube` in a systemd unit file.
The service container is only used when the `--service-container`
flag is set on the CLI. This flag has been marked as hidden as it
is not meant to be used outside the context of `play kube`. It is
further not supported on the remote client.
The wiring with systemd will be done in a later commit.
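A hedged usage sketch (the flag is hidden and intended for the systemd integration; the file name is a placeholder):
```
$ podman play kube --service-container=true top.yaml
```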
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
add support to override the user namespace to use for the pod.
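A hedged usage sketch, assuming the override mirrors the container-level `--userns` flag (pod name and mode are placeholders):
```
$ podman pod create --name mypod --userns=auto
```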
Closes: https://github.com/containers/podman/issues/7504
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
If the RunAsUser, RunAsGroup, SupplementalGroups settings are not
overridden in the container security context, then take the values from
the pod security context.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
pass networks to container clone
Since the network config is a string map, json.Unmarshal does not recognize
the config and the spec as the same entity, so this option needs to be mapped manually.
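A hedged illustration (container, image, and network names are placeholders):
```
$ podman network create mynet
$ podman run -d --name web --network mynet alpine sleep 300
$ podman container clone web
# the clone now carries over the "mynet" network configuration
```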
resolves #13713
Signed-off-by: cdoern <cbdoer23@g.holycross.edu>
Report correct RemoteURI