Add "podman volume" command
Add support for podman volume and its subcommands.
The commands supported are:
podman volume create
podman volume inspect
podman volume ls
podman volume rm
podman volume prune
This is a tool to manage volumes used by podman. For now it only handles
named volumes, but eventually it will handle all volumes used by podman.
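For illustration, a quick tour of the new subcommands (the volume name is
made up):
$ podman volume create myvol
$ podman volume ls
$ podman volume inspect myvol
$ podman volume rm myvol
$ podman volume prune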
Signed-off-by: umohnani8 <umohnani@redhat.com>
Add ability to prune containers and images
Allow users to prune unused/unnamed images, i.e. the intermediate layer
images left over from building, via podman rmi --prune.
Allow users to prune stopped/exited containers via podman rm --prune.
This should resolve #1910
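For illustration, the two invocations described above:
$ podman rmi --prune
$ podman rm --prune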
Signed-off-by: baude <bbaude@redhat.com>
Per discussion with Dan, it would be better to handle potential runtime
errors by syncing automatically when they occur. Retaining the flag for
`ps` makes sense, as we won't even be calling the OCI runtime and as such
won't see errors if the state desyncs, but rm can be handled automatically.
The automatic desync handling code will take some additional work, so
we'll land this as-is (sync on ps is enough to solve most desync issues).
Signed-off-by: Matthew Heon <mheon@redhat.com>
The previous commit added support for --sync to podman rm to
ensure state inconsistencies would not prevent containers from
being removed.
Add the flag to podman ps as well, so that all containers can be
forcibly synced and all state inconsistencies resolved.
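For illustration, a sketch of forcing a full state sync while listing:
$ podman ps -a --sync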
Signed-off-by: Matthew Heon <mheon@redhat.com>
With the changes made recently to ensure Podman does not hit the
OCI runtime as often to sync state, we can find ourselves in a
situation where the runtime's state does not match ours.
Add a --sync flag to podman rm to ensure we can still remove
containers when this happens.
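A usage sketch (the container name is illustrative):
$ podman rm --sync mycontainer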
Signed-off-by: Matthew Heon <mheon@redhat.com>
Adding more varlink endpoints
* runlabel
* checkpoint
* restore
* container|image exists
* mount
* unmount
Signed-off-by: baude <bbaude@redhat.com>
Use paths written in the DB if they differ from our defaults
Previous commits ensured that we would use database-configured paths if
not explicitly overridden. However, our runtime generation unconditionally
overrode the storage config, which made this useless.
Move rootless storage configuration setup to libpod, and change storage
setup so we only override when a setting is explicitly set, preserving our
ability to override what we want.
Signed-off-by: Matthew Heon <mheon@redhat.com>
libpod/container_internal: Deprecate implicit hook directories
Part of the motivation for 800eb863 (Hooks supports two directories,
process default and override, 2018-09-17, #1487) was [1]:
> We only use this for override. The reason this was caught is people
> are trying to get hooks to work with CoreOS. You are not allowed to
> write to /usr/share... on CoreOS, so they wanted podman to also look
> at /etc, where users and third parties can write.
But we'd also been disabling hooks completely for rootless users. And
even for root users, the override logic was tricky when folks actually
had content in both directories. For example, if you wanted to
disable a hook from the default directory, you'd have to add a no-op
hook to the override directory.
Also, the previous implementation failed to handle the case where
there were hooks defined in the override directory but the default
directory did not exist:
$ podman version
Version: 0.11.2-dev
Go Version: go1.10.3
Git Commit: "6df7409cb5a41c710164c42ed35e33b28f3f7214"
Built: Sun Dec 2 21:30:06 2018
OS/Arch: linux/amd64
$ ls -l /etc/containers/oci/hooks.d/test.json
-rw-r--r--. 1 root root 184 Dec 2 16:27 /etc/containers/oci/hooks.d/test.json
$ podman --log-level=debug run --rm docker.io/library/alpine echo 'successful container' 2>&1 | grep -i hook
time="2018-12-02T21:31:19-08:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d"
time="2018-12-02T21:31:19-08:00" level=warning msg="failed to load hooks: {}%!(EXTRA *os.PathError=open /usr/share/containers/oci/hooks.d: no such file or directory)"
With this commit:
$ podman --log-level=debug run --rm docker.io/library/alpine echo 'successful container' 2>&1 | grep -i hook
time="2018-12-02T21:33:07-08:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d"
time="2018-12-02T21:33:07-08:00" level=debug msg="reading hooks from /etc/containers/oci/hooks.d"
time="2018-12-02T21:33:07-08:00" level=debug msg="added hook /etc/containers/oci/hooks.d/test.json"
time="2018-12-02T21:33:07-08:00" level=debug msg="hook test.json matched; adding to stages [prestart]"
time="2018-12-02T21:33:07-08:00" level=warning msg="implicit hook directories are deprecated; set --hooks-dir="/etc/containers/oci/hooks.d" explicitly to continue to load hooks from this directory"
time="2018-12-02T21:33:07-08:00" level=error msg="container create failed: container_linux.go:336: starting container process caused "process_linux.go:399: container init caused \"process_linux.go:382: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: oh, noes!\\\\n\\\"\""
(I'd set up the hook to error out). You can see that it's silently
ignoring the ENOENT for /usr/share/containers/oci/hooks.d and
continuing on to load hooks from /etc/containers/oci/hooks.d.
When it loads the hook, it also logs a warning-level message
suggesting that callers explicitly configure their hook directories.
That will help consumers migrate, so we can drop the implicit hook
directories in some future release. When folks *do* explicitly
configure hook directories (via the newly-public --hooks-dir and
hooks_dir options), we error out if they're missing:
$ podman --hooks-dir /does/not/exist run --rm docker.io/library/alpine echo 'successful container'
error setting up OCI Hooks: open /does/not/exist: no such file or directory
I've dropped the trailing "path" from the old, hidden --hooks-dir-path
and hooks_dir_path because I think "dir(ectory)" is already enough
context for "we expect a path argument". I consider this name change
non-breaking because the old forms were undocumented.
Coming back to rootless users, I've enabled hooks now. I expect they
were previously disabled because users had no way to avoid
/usr/share/containers/oci/hooks.d which might contain hooks that
required root permissions. But now rootless users will have to
explicitly configure hook directories, and since their default config
is from ~/.config/containers/libpod.conf, it's a misconfiguration if
it contains hooks_dir entries which point at directories with hooks
that require root access. We error out so they can fix their
libpod.conf.
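For completeness, a sketch of the explicit, non-deprecated configuration
described above (the directory is illustrative):
$ podman --hooks-dir /etc/containers/oci/hooks.d run --rm docker.io/library/alpine echo 'successful container'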
[1]: https://github.com/containers/libpod/pull/1487#discussion_r218149355
Signed-off-by: W. Trevor King <wking@tremily.us>
correct algorithm for deleting all images
when deleting all images, we need to iterate over all the images, deleting
those that don't have children first, and then reiterate until they are all gone.
This resolves #1926
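For illustration, a sketch of the delete-all operation this fixes:
$ podman rmi --all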
Signed-off-by: baude <bbaude@redhat.com>
when a user specifies --pod to podman create|run, we should create that pod
automatically. the port bindings from the container are then inherited by
the infra container. this significantly improves the workflow of running
containers inside pods with podman. the user is still encouraged to use
podman pod create to have more granular control of the pod create options.
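A usage sketch (pod and image names are illustrative):
$ podman run -dt --pod mypod -p 8080:80 docker.io/library/nginx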
Signed-off-by: baude <bbaude@redhat.com>
like containers and images, users would benefit from being able to check
if a pod exists in local storage. if the pod exists, the return code is 0.
if the pod does not exist, the return code is 1. Any other return code
indicates a real error, such as a permissions or runtime failure.
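A usage sketch (the pod name is illustrative):
$ podman pod exists mypod
$ echo $?
0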
Signed-off-by: baude <bbaude@redhat.com>
podman logs already supports the latest command line switch. users should be able
to use the short options combined (i.e. podman logs -lf).
Signed-off-by: baude <bbaude@redhat.com>
podman ps has a flag --pod; simply adding a short option of -p
Signed-off-by: baude <bbaude@redhat.com>
until the kube commands are ironed out, we don't want them drawing
attention in any release
Signed-off-by: baude <bbaude@redhat.com>
Fix golang formatting issues
When running unit tests on newer Golang versions, we observe failures with
some formatting types when not declared correctly.
Signed-off-by: baude <bbaude@redhat.com>
Stopping a stopped container is not an error for Podman
Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
Add tcp-established to checkpoint/restore
podman container restore -a was using the wrong filter to restore
checkpointed containers. This switches from 'running' containers to
'exited' containers.
Restoring with -a only works if all exited containers have been
checkpointed. Maybe it would make sense to track which containers have
actually been checkpointed. This just fixes '-a' to work, at least when
all exited containers have been checkpointed.
Signed-off-by: Adrian Reber <areber@redhat.com>
CRIU can checkpoint and restore processes/containers with established
TCP connections if the correct option is specified. To implement
checkpoint and restore with support for established TCP connections with
Podman this commit adds the necessary options to runc during checkpoint
and also tells conmon during restore to use 'runc restore' with
'--tcp-established'.
For this Podman feature to work a corresponding conmon change is
required.
Example:
$ podman run --tmpfs /tmp --name podman-criu-test -d docker://docker.io/yovfiatbeb/podman-criu-test
$ nc `podman inspect -l | jq -r '.[0].NetworkSettings.IPAddress'` 8080
GET /examples/servlets/servlet/HelloWorldExample
Connection: keep-alive
1
GET /examples/servlets/servlet/HelloWorldExample
Connection: keep-alive
2
$ # Using HTTP keep-alive, multiple requests are sent to the server in the container
$ # Different terminal:
$ podman container checkpoint -l
criu failed: type NOTIFY errno 0
$ # Looking at the log file would show errors because of established TCP connections
$ podman container checkpoint -l --tcp-established
$ # This works now and after the restore the same connection as above can be used for requests
$ podman container restore -l --tcp-established
The restore would fail without '--tcp-established' as the checkpoint image
contains established TCP connections.
Signed-off-by: Adrian Reber <areber@redhat.com>
This is basically the same change as
ff47a4c2d5485fc49f937f3ce0c4e2fd6bdb1956 (Use a struct to pass options to Checkpoint())
just for the Restore() function. It is used to pass multiple restore
options to the API and down to conmon which is used to restore
containers. This is for the upcoming changes to support checkpointing
and restoring containers with '--tcp-established'.
Signed-off-by: Adrian Reber <areber@redhat.com>
so that inspect reports the correct network configuration.
Closes: https://github.com/containers/libpod/issues/1453
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
libpod should know if the network is disabled
/etc/resolv.conf and /etc/hosts should not be created and mounted when the
network is disabled.
We should not be calling the network setup and cleanup functions when it is
disabled either.
In doing this patch, I found that all of the bind mounts were particular to
Linux along with the generate functions, so I moved them to
container_internal_linux.go
Since we are checking if we are using a network namespace, we need to check
after the network namespace has been created in the spec.
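For illustration, a sketch of running with networking disabled (the image
name is illustrative); with this change no /etc/resolv.conf or /etc/hosts
bind mounts are created for the container:
$ podman run --network none --rm docker.io/library/alpine ip addr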
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Add an exists subcommand to podman container and podman image that allows
users to verify the existence of a container or image by ID or name. The return
code can be 0 (success), 1 (failed to find), or 125 (failed to work with runtime).
Issue #1845
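A usage sketch (names illustrative):
$ podman container exists mycontainer && echo present
$ podman image exists docker.io/library/alpine || echo absent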
Signed-off-by: baude <bbaude@redhat.com>
Signed-off-by: Tomas Tomecek <ttomecek@redhat.com>
Added option to keep container running after checkpointing
CRIU supports leaving processes running after checkpointing:
-R|--leave-running leave tasks in running state after checkpoint
runc also supports leaving containers running after checkpointing:
--leave-running leave the process running after checkpointing
With this commit the support to leave a container running after
checkpointing is brought to Podman:
--leave-running, -R leave the container running after writing checkpoint to disk
Now it is possible to checkpoint a container at some point in time
without stopping the container. This can be used to roll back the
container to an earlier state:
$ podman run --tmpfs /tmp --name podman-criu-test -d docker://docker.io/yovfiatbeb/podman-criu-test
$ curl 10.88.64.253:8080/examples/servlets/servlet/HelloWorldExample
3
$ podman container checkpoint -R -l
$ curl 10.88.64.253:8080/examples/servlets/servlet/HelloWorldExample
4
$ curl 10.88.64.253:8080/examples/servlets/servlet/HelloWorldExample
5
$ podman stop -l
$ podman container restore -l
$ curl 10.88.64.253:8080/examples/servlets/servlet/HelloWorldExample
4
So after checkpointing the container kept running and was stopped after
some time. Restoring this container will restore the state right at the
checkpoint.
Signed-off-by: Adrian Reber <areber@redhat.com>
For upcoming changes to the Checkpoint() functions this commit switches
checkpoint options from a boolean to a struct, so that additional
options can be passed easily to Checkpoint() without changing the
function parameters all the time.
Signed-off-by: Adrian Reber <areber@redhat.com>
generate kubernetes YAML from a libpod container
scope out a new kube subcommand where we can add generate. you can now generate
kubernetes YAML that will allow you to run the container in a kubernetes
environment. The YAML description will always "wrap" a container in a simple
v1.Pod description.
Tests and further documentation will be added in additional PRs.
This function should be considered very much "under heavy development" at
this point.
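Assuming the subcommand surfaces as podman generate kube (a sketch; the
exact CLI shape may still change while this is under heavy development):
$ podman generate kube mycontainer > mycontainer-pod.yaml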
Signed-off-by: baude <bbaude@redhat.com>
we need to allow users to expose ports to the host for the purposes
of networking, like a webserver. the port exposure must be done at
the time the pod is created.
strictly speaking, the port exposure occurs on the infra container.
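A usage sketch (names and ports are illustrative):
$ podman pod create --name web -p 8080:80
$ podman run -dt --pod web docker.io/library/nginx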
Signed-off-by: baude <bbaude@redhat.com>
Use github.com/google/shlex for splitting commands instead of splitting
at whitespace. This way, we avoid accidentally splitting single string
arguments into multiple ones.
Signed-off-by: Valentin Rothberg <vrothberg@suse.com>
Set --force-rm for podman build to true by default
Since we use buildah containers for the build process, the
user will not know if we have any buildah containers lingering
due to a failed build. Setting this to true by default till
we figure out a better way to solve this.
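To opt back out of the new default, a sketch:
$ podman build --force-rm=false -t myimage .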
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
Signed-off-by: Qi Wang <qiwan@redhat.com>
We can now remove a paused container by sending it a kill signal while it
is paused. We then unpause the container and it is immediately killed.
Also, reworked how the parallelWorker results are handled to provide a
more consistent approach to how each subcommand implements it. It also
fixes a bug where if one container errors, the error message is duplicated
when printed out.
Signed-off-by: baude <bbaude@redhat.com>
Addressing:
podman run -it -a STDERR --rm alpine /bin/ash
hanging, as we dropped stdin as soon as -a was used. Notice this is contrary to
what D-tool does and contrary to what podman help implies:
podman run --help | grep interact
--interactive, -i Keep STDIN open even if not attached
Signed-off-by: Šimon Lukašík <slukasik@redhat.com>
Operations like kill, pause, and unpause -- which can operate on one or
more containers -- can greatly benefit from parallelizing their main job
(e.g. kill). In the case of pause and unpause, an --all option was added.
pause --all will pause all **running** containers, and unpause --all will
unpause all **paused** containers.
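For illustration:
$ podman pause --all
$ podman unpause --all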
Signed-off-by: baude <bbaude@redhat.com>
When attempting to restart many containers, we can benefit from making
the restarts parallel. For convenience, two new options are added:
--all attempts to restart all containers
--run-only when used with --all will attempt to restart only running containers
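For illustration, the new options described above:
$ podman restart --all
$ podman restart --all --run-only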
Signed-off-by: baude <bbaude@redhat.com>
Fix setting of version information
It was setting the wrong variable (CamelCase)
in the wrong module ("main", not "libpod")...
Signed-off-by: Anders F Björklund <anders.f.bjorklund@gmail.com>
attach: fix attach when cuid is too long