Vendor Buildah 1.7 into Podman, along with the latest imagebuilder and the changes needed for `build --target`.
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>
podman: --runtime takes priority over runtime_path
If --runtime is specified, it takes priority over the runtime_path option, which was added for backward compatibility.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
podman-remote pod inspect|exists
Enable the remote client to inspect a pod. As a bonus, also enable the `podman pod exists` command, which returns 0 or 1 depending on whether the given pod exists.
Signed-off-by: baude <bbaude@redhat.com>
Add lock renumbering
The original intent behind the requirement was to ensure that, if
two SHM lock structs were open at the same time, we would not
make such a runtime available to the user, and would clean it up
instead.
It turns out that we don't even need to open a second SHM lock
struct - if we get an error mapping the first one due to a lock
count mismatch, we can just delete it, and it cleans itself up
when it errors. So there's no reason not to return a valid
runtime.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
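
A minimal, self-contained sketch of the open-or-recreate pattern the commit describes; `shmLocks`, `openSHM`, and the mismatch error are illustrative stubs, not libpod's actual lock-manager API:

```go
package main

import (
	"errors"
	"fmt"
)

// shmLocks is an illustrative stand-in for libpod's SHM lock struct.
type shmLocks struct{ numLocks uint32 }

var errLockCountMismatch = errors.New("lock count mismatch")

// openSHM simulates mapping an existing segment; a count mismatch fails,
// and (as in the real code) the struct cleans itself up on error.
func openSHM(existing, want uint32) (*shmLocks, error) {
	if existing != want {
		return nil, errLockCountMismatch
	}
	return &shmLocks{numLocks: want}, nil
}

// openOrRecreate applies the commit's logic: on a lock count mismatch,
// delete the old segment and create a fresh one instead of refusing to
// return a valid runtime.
func openOrRecreate(existing, want uint32) (*shmLocks, error) {
	if locks, err := openSHM(existing, want); err == nil {
		return locks, nil
	} else if !errors.Is(err, errLockCountMismatch) {
		return nil, err
	}
	// The failed open already cleaned up the old segment; make a new one.
	return &shmLocks{numLocks: want}, nil
}

func main() {
	locks, _ := openOrRecreate(2048, 8192)
	fmt.Println(locks.numLocks) // 8192
}
```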
When we're renumbering locks, we're destroying all existing
allocations anyway, so destroying the old lock struct is not a
particularly big deal. Existing long-lived libpod instances will
continue to use the old locks, but that will be solved in a
follow-on.
Also, solve an issue with returning error values in the C code.
There were a few places where we returned ERRNO when it was not
set, so make them return actual error codes.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
We can't do renumbering after init - we need to open a
potentially invalid locks file (too many/too few locks), and then
potentially delete the old locks and make new ones.
We need to be in init to bypass the checks that would otherwise
make this impossible.
This leaves us with two choices: make RenumberLocks a separate
entrypoint from NewRuntime, duplicating a lot of configuration-loading
code (we need to know where the locks live, how many there
are, etc.), or modify NewRuntime to allow renumbering during it.
Previous experience says the first is not really a viable option
and produces massive code bloat, so the second it is.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
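
A sketch of the second option, assuming a functional-option constructor in the style libpod uses elsewhere; `WithRenumber` and the `doRenumber` field are illustrative names, not the actual API:

```go
package main

import "fmt"

// Runtime is a stub standing in for libpod's Runtime (illustrative only).
type Runtime struct {
	doRenumber bool
}

type RuntimeOption func(*Runtime) error

// WithRenumber is a hypothetical option that flags the runtime to
// renumber locks during construction, before the usual validity
// checks would reject a mismatched locks file.
func WithRenumber() RuntimeOption {
	return func(r *Runtime) error {
		r.doRenumber = true
		return nil
	}
}

// NewRuntime reuses all the normal configuration loading and only
// branches into renumbering when the option was set.
func NewRuntime(opts ...RuntimeOption) (*Runtime, error) {
	r := &Runtime{}
	for _, opt := range opts {
		if err := opt(r); err != nil {
			return nil, err
		}
	}
	if r.doRenumber {
		fmt.Println("renumbering locks before completing init")
	}
	return r, nil
}

func main() {
	_, _ = NewRuntime(WithRenumber())
}
```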
I was looking into why we have locks in volumes, and I'm fairly
convinced they're unnecessary.
We don't have a state whose accesses we need to guard with locks
and syncs. The only real purpose for the lock was to prevent
concurrent removal of the same volume.
Looking at the code, concurrent removal ought to be fine with a
bit of reordering - one or the other might fail, but we will
successfully evict the volume from the state.
Also, remove the 'prune' bool from RemoveVolume. None of our
other API functions accept it, and it only served to toggle off
more verbose error messages.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
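
A self-contained sketch of the reordering, with a toy state standing in for libpod's: evict the volume from the state first, then remove it from disk, so concurrent removers may race but the state always ends up clean:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"sync"
)

// state is a toy stand-in for libpod's volume state (illustrative only).
type state struct {
	mu        sync.Mutex
	volumes   map[string]bool
	volumeDir string
}

// evict removes the volume from the state, failing if it is already gone.
func (s *state) evict(name string) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if !s.volumes[name] {
		return fmt.Errorf("volume %s not in state", name)
	}
	delete(s.volumes, name)
	return nil
}

// removeVolume evicts from the state first, then removes the data on
// disk; one of two concurrent callers may get an error, but the volume
// is always successfully evicted from the state.
func removeVolume(s *state, name string) error {
	if err := s.evict(name); err != nil {
		return err // lost the race to a concurrent remover
	}
	return os.RemoveAll(filepath.Join(s.volumeDir, name))
}

func main() {
	s := &state{volumes: map[string]bool{"vol1": true}, volumeDir: os.TempDir()}
	fmt.Println(removeVolume(s, "vol1")) // <nil>
	fmt.Println(removeVolume(s, "vol1")) // volume vol1 not in state
}
```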
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Necessary for rewriting lock IDs as part of renumber.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Renumber is a way of renumbering container locks after the number
of locks available has changed.
For now, renumber only works with containers.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Enable loading an image into remote storage using the remote client.
Signed-off-by: baude <bbaude@redhat.com>
Add the ability to delete a pod from the remote client.
Signed-off-by: baude <bbaude@redhat.com>
Add the ability to save an image from the remote host to the
remote client.
Signed-off-by: baude <bbaude@redhat.com>
Use an `image.SearchFilter` instead of a `[]string` in the SearchImages
API.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Refactor the image-search logic from cmd/podman/search.go to
libpod/image/search.go and update podman-search and the Varlink API to
use it.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
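
Roughly what the typed filter buys over `[]string`, as a sketch; the `SearchFilter` fields here are assumptions modeled on the usual registry search filters (stars, official, automated), not necessarily the exact libpod definitions:

```go
package main

import "fmt"

// SearchResult is a trimmed, illustrative version of a search hit.
type SearchResult struct {
	Index, Name, Description string
	Stars                    int
	Official, Automated      bool
}

// SearchFilter replaces the old []string filters with typed fields;
// nil pointers mean "don't filter on this".
type SearchFilter struct {
	Stars       int // minimum star count
	IsOfficial  *bool
	IsAutomated *bool
}

// matches applies the typed filter to one result.
func (f SearchFilter) matches(r SearchResult) bool {
	if r.Stars < f.Stars {
		return false
	}
	if f.IsOfficial != nil && r.Official != *f.IsOfficial {
		return false
	}
	if f.IsAutomated != nil && r.Automated != *f.IsAutomated {
		return false
	}
	return true
}

func main() {
	official := true
	f := SearchFilter{Stars: 10, IsOfficial: &official}
	fmt.Println(f.matches(SearchResult{Stars: 50, Official: true})) // true
}
```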
Add status for remote users and podman remote-client pull.
Signed-off-by: baude <bbaude@redhat.com>
Before, a container being run or started in a pod always restarted the infra container. This was because we didn't take running dependencies into account. Fix this by filtering for dependencies in the running state.
Signed-off-by: Peter Hunt <pehunt@redhat.com>
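
A sketch of the fix with stub types: filter the container's dependencies down to the ones that are not already running before deciding to (re)start anything:

```go
package main

import "fmt"

type ctrState int

const (
	stateStopped ctrState = iota
	stateRunning
)

type container struct {
	id    string
	state ctrState
}

// notRunningDeps keeps only the dependencies that actually need a start;
// previously the infra container was restarted even when already running.
func notRunningDeps(deps []*container) []*container {
	var need []*container
	for _, dep := range deps {
		if dep.state != stateRunning {
			need = append(need, dep)
		}
	}
	return need
}

func main() {
	infra := &container{id: "infra", state: stateRunning}
	fmt.Println(len(notRunningDeps([]*container{infra}))) // 0: nothing to restart
}
```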
Drop context.Context field from cli.Context
Signed-off-by: Sebastian Jug <sejug@redhat.com>
or attached.
Prior, a pod would have to be started immediately when created, leading to confusion about what a pod's state should be immediately after creation. The problem was that `podman run --pod ...` would error out if the infra container wasn't started (as it is a dependency). Fix this by allowing for recursive start, where each of the container's dependencies is started prior to the new container. This is only applied to the case where a new container is attached to a pod.
Also rework the container_api Start, StartAndAttach, and Init functions, as there was some duplicated code, which made the problem easier to address.
Signed-off-by: Peter Hunt <pehunt@redhat.com>
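
The recursive-start idea in miniature, using a toy dependency map rather than libpod's real container graph:

```go
package main

import "fmt"

// running tracks container state; deps maps a container to what it needs.
var (
	running = map[string]bool{"infra": false}
	deps    = map[string][]string{"ctr": {"infra"}}
)

// start recursively starts a container's dependencies before the container
// itself, so attaching a new container to a stopped pod "just works".
func start(id string) {
	for _, dep := range deps[id] {
		if !running[dep] {
			start(dep)
		}
	}
	if !running[id] {
		fmt.Println("starting", id)
		running[id] = true
	}
}

func main() {
	start("ctr") // starting infra, then starting ctr
}
```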
libpod.conf: add backward compatibility for runtime_path
Add backward compatibility for `runtime_path`, which was used by older
versions of Podman.
The issue was introduced with 650cf122e1b33f4d8f4426ee1cc1a4bf00c14798.
If `runtime_path` is specified, it overrides any other configuration
and a warning is printed.
It should be considered deprecated and will be removed in the future.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
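
A sketch of the resulting precedence rule (field names are illustrative, not the actual libpod.conf schema): an explicit --runtime flag wins, otherwise a non-empty `runtime_path` overrides the newer settings and triggers a deprecation warning:

```go
package main

import "log"

// config is an illustrative slice of libpod's runtime configuration.
type config struct {
	RuntimePath []string // deprecated runtime_path from older libpod.conf
	OCIRuntime  string   // newer configuration
}

// resolveRuntime applies the compatibility rule: an explicit --runtime
// flag beats everything; otherwise a non-empty runtime_path overrides
// the newer option and prints a deprecation warning.
func resolveRuntime(c config, cliRuntime string) string {
	if cliRuntime != "" {
		return cliRuntime
	}
	if len(c.RuntimePath) > 0 {
		log.Println("the runtime_path option is deprecated and will be removed in the future")
		return c.RuntimePath[0]
	}
	return c.OCIRuntime
}

func main() {
	c := config{RuntimePath: []string{"/usr/bin/runc"}, OCIRuntime: "crun"}
	log.Println(resolveRuntime(c, ""))          // /usr/bin/runc (with warning)
	log.Println(resolveRuntime(c, "/bin/crun")) // /bin/crun
}
```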
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Add tlsVerify bool to SearchImage for varlink
Cockpit wants to be able to search images on systems without
tlsverify turned on.
tlsverify should be an optional parameter; if it is not set, we default
to the system defaults defined in /etc/containers/registries.conf.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
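
One common way to model a tri-state option like this in Go is a `*bool`, with nil meaning "not set, fall back to the registries.conf default"; a sketch, not the actual varlink plumbing:

```go
package main

import "fmt"

// tlsVerifyPolicy resolves the effective TLS-verification setting: an
// explicit true/false from the caller wins; nil falls back to the
// per-registry default (here stubbed as a plain parameter).
func tlsVerifyPolicy(requested *bool, registryDefault bool) bool {
	if requested != nil {
		return *requested
	}
	return registryDefault // from /etc/containers/registries.conf
}

func main() {
	off := false
	fmt.Println(tlsVerifyPolicy(&off, true)) // false: caller disabled it
	fmt.Println(tlsVerifyPolicy(nil, true))  // true: system default applies
}
```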
volume prune
Allow users to remotely prune volumes.
This is the last volume command for remote enablement. As such, the
volume commands are being folded back into main, because they are
supported for both local and remote clients.
Also, enable all volume tests that do not use containers, as containers
are not enabled for the remote client yet.
Signed-off-by: baude <bbaude@redhat.com>
Fix builtin volumes to work with podman volume.
Currently builtin volumes are not recorded in podman volumes when
they are created automatically. This patch fixes this.
Remove container volumes when requested.
Currently the --volume option on podman remove does nothing. This
will implement the changes needed to remove the volumes if the user
requests it.
When removing a volume, make sure that no container uses the volume.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
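
A toy sketch of the in-use check: refuse to remove a volume while any container still references it. Types and fields are illustrative:

```go
package main

import "fmt"

type volume struct{ name string }

type container struct {
	id      string
	volumes []string
}

// removeVolume fails if any container still uses the volume, mirroring
// the "make sure that no container uses the volume" rule above.
func removeVolume(v volume, ctrs []container) error {
	for _, c := range ctrs {
		for _, used := range c.volumes {
			if used == v.name {
				return fmt.Errorf("volume %s is being used by container %s", v.name, c.id)
			}
		}
	}
	// ... actual removal from state and disk would happen here ...
	return nil
}

func main() {
	ctrs := []container{{id: "abc", volumes: []string{"data"}}}
	fmt.Println(removeVolume(volume{name: "data"}, ctrs))
}
```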
podman-remote build
Add the ability to build images using files local to the remote client,
but over a varlink interface to a "remote" server.
Signed-off-by: baude <bbaude@redhat.com>
Fix manual detach from containers to not wait for exit
When cleaning up containers, we presently remove the exit file
created by Conmon, to ensure that if we restart the container, we
won't have conflicts when Conmon tries writing a new exit file.
Unfortunately, we need to retain that exit file (at least until
we get a workable events system), so we can read it in cases
where the container has been removed before 'podman run' can read
its exit code.
So instead of removing it, rename it, so there's no conflict with
Conmon, and we can still read it later.
Fixes: #1640
Signed-off-by: Matthew Heon <mheon@redhat.com>
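
The rename-instead-of-remove idea in miniature; the exit-file paths and the `-old` suffix are illustrative, not Podman's actual layout:

```go
package main

import (
	"log"
	"os"
	"path/filepath"
)

// preserveExitFile renames Conmon's exit file out of the way instead of
// deleting it: a restarted container gets a fresh exit file with no
// conflict, while the old exit code can still be read later.
func preserveExitFile(exitDir, ctrID string) error {
	exitFile := filepath.Join(exitDir, ctrID)
	oldExitFile := filepath.Join(exitDir, ctrID+"-old")
	return os.Rename(exitFile, oldExitFile)
}

func main() {
	dir, _ := os.MkdirTemp("", "exits")
	defer os.RemoveAll(dir)
	_ = os.WriteFile(filepath.Join(dir, "abc123"), []byte("0"), 0o600)
	if err := preserveExitFile(dir, "abc123"); err != nil {
		log.Fatal(err)
	}
}
```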
At present, when manually detaching from an attached container
(using the detach hotkeys, default C-p C-q), Podman will still
wait for the container to exit to obtain its exit code (so we can
set Podman's exit code to match). This is correct in the case
where attach finished because the container exited, but very
wrong for the manual detach case.
As a result of this, we can no longer guarantee that the cleanup
and --rm functions will fire at the end of 'podman run' - we may
be exiting before we get that far. Cleanup is easy enough - we
swap to unconditionally using the cleanup processes we've used
for detached and rootless containers all along. To duplicate --rm
we need to also teach 'podman cleanup' to optionally remove
containers instead of cleaning them up.
(There is an argument for just using 'podman rm' instead of
'podman cleanup --rm', but cleanup does have different semantics
given that we only ever expect it to run when the container has
just exited. I think it might be useful to keep the two separate
for things like 'podman events'...)
Signed-off-by: Matthew Heon <mheon@redhat.com>
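
A sketch of teaching cleanup to optionally remove, with stub helpers; the split keeps cleanup's "container just exited" semantics while letting --rm delegate removal to the cleanup process:

```go
package main

import "fmt"

type container struct{ id string }

func (c *container) cleanup() error {
	fmt.Println("unmounting and tearing down network for", c.id)
	return nil
}

func (c *container) remove() error {
	fmt.Println("removing", c.id)
	return nil
}

// cleanupContainer mirrors 'podman container cleanup [--rm]': always
// clean up, and additionally remove when the container was run with --rm.
func cleanupContainer(c *container, rm bool) error {
	if err := c.cleanup(); err != nil {
		return err
	}
	if rm {
		return c.remove()
	}
	return nil
}

func main() {
	_ = cleanupContainer(&container{id: "abc"}, true)
}
```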
show container ports of network namespace
In cases where a container is part of a network namespace, we should
show the network namespace's ports when dealing with ports. This
impacts ps, kube, and port.
Fixes: #846
Signed-off-by: baude <bbaude@redhat.com>
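
A sketch of the lookup with stub types: when a container joined another container's network namespace, report the ports of the namespace owner instead of its own:

```go
package main

import "fmt"

type container struct {
	id       string
	ports    []string
	netNsCtr string // ID of the container whose netns we joined, if any
}

// effectivePorts resolves ports through the network-namespace owner,
// which is what ps, kube, and port should display.
func effectivePorts(c *container, lookup map[string]*container) []string {
	if c.netNsCtr != "" {
		if owner, ok := lookup[c.netNsCtr]; ok {
			return owner.ports
		}
	}
	return c.ports
}

func main() {
	infra := &container{id: "infra", ports: []string{"8080/tcp"}}
	ctr := &container{id: "web", netNsCtr: "infra"}
	all := map[string]*container{"infra": infra}
	fmt.Println(effectivePorts(ctr, all)) // [8080/tcp]
}
```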
Add the ability to list and inspect volumes using the remote client
and varlink.
Signed-off-by: baude <bbaude@redhat.com>
Parse fq name correctly for images
When parsing a string name for repo and tag (for images output), we
should be using `reference.ParseNormalizedNamed` and `reference.Canonical`
to get the proper output.
Resolves: #2175
Signed-off-by: baude <bbaude@redhat.com>
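
For reference, a short example of what normalized parsing gives you, using the docker/distribution reference package (the upstream source of these helpers); the example image name is arbitrary:

```go
package main

import (
	"fmt"

	"github.com/docker/distribution/reference"
)

// repoAndTag splits an image name into its normalized repository and tag.
func repoAndTag(name string) (string, string, error) {
	named, err := reference.ParseNormalizedNamed(name)
	if err != nil {
		return "", "", err
	}
	named = reference.TagNameOnly(named) // adds :latest if no tag/digest
	tag := ""
	if tagged, ok := named.(reference.Tagged); ok {
		tag = tagged.Tag()
	}
	return named.Name(), tag, nil
}

func main() {
	repo, tag, _ := repoAndTag("fedora")
	fmt.Println(repo, tag) // docker.io/library/fedora latest
}
```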
Enable podman-remote push so that users can push images from a
remote client.
This changes the push API to deal with the need to see output over
the varlink connection.
Signed-off-by: baude <bbaude@redhat.com>
Image more clearly describes what the type represents.
Also, only include the image name in the `ImageNotFound` error returned
by `GetImage()`, not the full error message.
Signed-off-by: Lars Karlitski <lars@karlitski.net>
| |
This is more consistent and eaiser to parse than the format that
golang's time.String() returns.
Fixes #2260
Signed-off-by: Lars Karlitski <lars@karlitski.net>
Add the ability to remove/delete volumes with the podman remote
client.
Signed-off-by: baude <bbaude@redhat.com>
podman-remote volume create
Create a volume using the remote client over varlink.
Signed-off-by: baude <bbaude@redhat.com>
Remove container from storage on --force
Currently we can get into a state where a container exists in
storage but does not exist in libpod. If the user forces a
removal of this container, then we should remove it from storage
even if the container is owned by another tool.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
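
The fallback in sketch form, with illustrative errors and helpers: if the container is unknown to libpod but --force was given, remove it from storage anyway:

```go
package main

import (
	"errors"
	"fmt"
)

var errNoSuchCtr = errors.New("no such container") // illustrative sentinel

// lookupContainer stands in for a libpod state lookup that fails because
// the container exists only in storage.
func lookupContainer(id string) error { return errNoSuchCtr }

func removeFromStorage(id string) error {
	fmt.Println("removing storage container", id)
	return nil
}

// removeContainer falls back to a storage-only removal when force is set
// and the container exists in storage but not in libpod's state.
func removeContainer(id string, force bool) error {
	err := lookupContainer(id)
	if err == nil {
		return nil // normal libpod removal path would run here
	}
	if force && errors.Is(err, errNoSuchCtr) {
		return removeFromStorage(id)
	}
	return err
}

func main() {
	_ = removeContainer("abc123", true)
}
```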
When checking for a container's mountpoint, you must lock and sync
the container, or the result may be "".
Fixes: #2304
Signed-off-by: baude <bbaude@redhat.com>
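
The pattern in sketch form, modeled on libpod's internal lock-and-sync convention; `syncContainer` and `state.Mountpoint` are stubs here, not the exported API:

```go
package main

import (
	"fmt"
	"sync"
)

type containerState struct{ Mountpoint string }

type container struct {
	lock  sync.Mutex
	state containerState
}

// syncContainer stands in for re-reading state from the database so the
// in-memory copy is current before Mountpoint is read.
func (c *container) syncContainer() error {
	c.state.Mountpoint = "/var/lib/containers/storage/overlay/merged"
	return nil
}

// mountPoint locks and syncs before reading, so the caller never sees a
// stale, empty mountpoint.
func (c *container) mountPoint() (string, error) {
	c.lock.Lock()
	defer c.lock.Unlock()
	if err := c.syncContainer(); err != nil {
		return "", err
	}
	return c.state.Mountpoint, nil
}

func main() {
	c := &container{}
	mp, _ := c.mountPoint()
	fmt.Println(mp)
}
```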