path: root/libpod
* Only remove image volumes when removing containers (Matthew Heon, 2019-02-26)

    When removing volumes with rm --volumes we want to only remove
    volumes that were created with the container. Volumes created
    separately via 'podman volume create' should not be removed.

    Also ensure that --rm implies volumes will be removed.

    Fixes #2441

    Signed-off-by: Matthew Heon <matthew.heon@pm.me>
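A minimal Go sketch of the removal policy described in this commit; the Volume type and its CreatedWithContainer field are illustrative stand-ins, not libpod's actual types:

```go
package main

import "fmt"

// Volume is an illustrative stand-in for libpod's volume type; the
// CreatedWithContainer flag is an assumption, not the real field name.
type Volume struct {
	Name                 string
	CreatedWithContainer bool
}

// volumesToRemove keeps only volumes that were auto-created with the
// container; volumes made via 'podman volume create' are left alone.
func volumesToRemove(vols []Volume, removeVolumes bool) []Volume {
	if !removeVolumes {
		return nil
	}
	var out []Volume
	for _, v := range vols {
		if v.CreatedWithContainer {
			out = append(out, v)
		}
	}
	return out
}

func main() {
	vols := []Volume{
		{Name: "ctr-anon-vol", CreatedWithContainer: true},
		{Name: "user-vol", CreatedWithContainer: false},
	}
	// Prints only ctr-anon-vol.
	fmt.Println(volumesToRemove(vols, true))
}
```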
* Record when volume path is explicitly set in config (Matthew Heon, 2019-02-26)

    This ensures we won't overwrite it when it's set in the config we
    load from disk.

    Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* Add debug information when overriding paths with the DB (Matthew Heon, 2019-02-26)

    Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* Add path for named volumes to `podman info` (Matthew Heon, 2019-02-26)

    Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* Validate VolumePath against DB configuration (Matthew Heon, 2019-02-26)

    If this doesn't match, we end up not being able to access named
    volumes mounted into containers, which is bad. Use the same
    validation that we use for other critical paths to ensure this one
    also matches.

    Signed-off-by: Matthew Heon <matthew.heon@pm.me>
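The validation amounts to comparing the path stored in the database with the configured one and refusing to proceed on a mismatch. A hedged sketch (function name and error message are illustrative):

```go
package main

import "fmt"

// validateVolumePath is an illustrative version of the check described
// above: refuse to start when the volume path recorded in the DB
// disagrees with the one in the loaded configuration.
func validateVolumePath(dbPath, cfgPath string) error {
	if dbPath != "" && cfgPath != "" && dbPath != cfgPath {
		return fmt.Errorf("database volume path %q does not match configured volume path %q", dbPath, cfgPath)
	}
	return nil
}

func main() {
	err := validateVolumePath("/var/lib/containers/storage/volumes", "/tmp/volumes")
	fmt.Println(err)
}
```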
* When location of c/storage root changes, set VolumePath (Matthew Heon, 2019-02-26)

    We want named volumes to be created in a subdirectory of the
    c/storage graph root, the same as the libpod root directory is now.
    As such, we need to adjust its location when the graph root changes
    location.

    Also, make a change to how we set the default. There's no need to
    explicitly set it every time we initialize via an option - that
    might conflict with WithStorageConfig setting it based on graph
    root changes. Instead, just initialize it in the default config
    like our other settings.

    Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* Merge pull request #2382 from adrianreber/selinux (OpenShift Merge Robot, 2019-02-26)

    Fix one (of two) SELinux denials during checkpointing
| * Label CRIU log files correctly (Adrian Reber, 2019-02-26)

    CRIU creates a log file during checkpointing in
    .../userdata/dump.log. The problem with this file is that CRIU
    injects parasite code into the container processes, and this
    parasite code also writes to the same log file. At this point a
    process from the inside of the container is trying to access the
    log file on the outside of the container, and SELinux prohibits
    this.

    To enable writing to the log file from the injected parasite code,
    this commit creates an empty log file and labels it with
    c.MountLabel(). CRIU uses existing files when writing its logs, so
    the label persists and, now that it is correct, SELinux no longer
    blocks access to the log file.

    Signed-off-by: Adrian Reber <areber@redhat.com>
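A rough sketch of the fix described above, assuming the opencontainers/selinux label package; the real libpod code may differ in names and error handling:

```go
package main

import (
	"os"
	"path/filepath"

	"github.com/opencontainers/selinux/go-selinux/label"
)

// precreateDumpLog creates an empty dump.log in the checkpoint
// directory and relabels it with the container's mount label, so the
// parasite code CRIU injects into container processes can write to it.
// The function name and directory layout are assumptions.
func precreateDumpLog(checkpointDir, mountLabel string) error {
	logPath := filepath.Join(checkpointDir, "dump.log")
	f, err := os.OpenFile(logPath, os.O_CREATE|os.O_WRONLY, 0600)
	if err != nil {
		return err
	}
	if err := f.Close(); err != nil {
		return err
	}
	// shared=false: give the file the container's private label.
	return label.Relabel(logPath, mountLabel, false)
}

func main() {
	_ = precreateDumpLog("/tmp/checkpoint", "system_u:object_r:container_file_t:s0:c1,c2")
}
```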
* | oci: improve error message when the OCI runtime is not found (Giuseppe Scrivano, 2019-02-26)

    We were previously returning the not-so-nice error directly from
    conmon.

    Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
* Merge pull request #2358 from rhatdan/namespace (OpenShift Merge Robot, 2019-02-25)

    Fix up handling of user defined network namespaces
| * Fix up handling of user defined network namespaces (Daniel J Walsh, 2019-02-23)

    If the user specifies a network namespace and
    /etc/netns/XXX/resolv.conf exists, we should use it rather than
    /etc/resolv.conf.

    Also fail more cleanly if the user specifies an invalid network
    namespace.

    Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
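The lookup order this describes is small enough to sketch; the paths follow the ip-netns(8) convention:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// resolvConfPath prefers the per-namespace resolv.conf that ip-netns(8)
// maintains when the container joined a named network namespace, and
// falls back to the host's /etc/resolv.conf otherwise.
func resolvConfPath(netnsName string) string {
	if netnsName != "" {
		p := filepath.Join("/etc/netns", netnsName, "resolv.conf")
		if _, err := os.Stat(p); err == nil {
			return p
		}
	}
	return "/etc/resolv.conf"
}

func main() {
	fmt.Println(resolvConfPath("myns"))
}
```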
* | Merge pull request #2417 from rhatdan/resolv.conf (OpenShift Merge Robot, 2019-02-25)

    In shared networkNS /etc/resolv.conf & /etc/hosts should be shared
| * | In shared networkNS /etc/resolv.conf & /etc/hosts should be shared (Daniel J Walsh, 2019-02-23)

    We should just bind mount the original container's /etc/resolv.conf
    and /etc/hosts into the new container. Changes in resolv.conf and
    hosts should be seen by all containers; this matches Docker
    behaviour.

    In order to make this work, these files need to have a shared
    SELinux label.

    Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
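A sketch of the resulting mounts, assuming the OCI runtime-spec Mount type; the exact source paths and mount options libpod uses are assumptions:

```go
package main

import (
	"fmt"

	spec "github.com/opencontainers/runtime-spec/specs-go"
)

// sharedNetFileMounts bind mounts the original container's resolv.conf
// and hosts into a container joining its network namespace, so edits
// are visible in both containers.
func sharedNetFileMounts(srcResolvConf, srcHosts string) []spec.Mount {
	return []spec.Mount{
		{Destination: "/etc/resolv.conf", Type: "bind", Source: srcResolvConf, Options: []string{"rbind", "rw"}},
		{Destination: "/etc/hosts", Type: "bind", Source: srcHosts, Options: []string{"rbind", "rw"}},
	}
}

func main() {
	// Illustrative per-container paths, not libpod's real layout.
	mounts := sharedNetFileMounts("/tmp/ctr1/resolv.conf", "/tmp/ctr1/hosts")
	fmt.Println(mounts)
}
```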
* | Merge pull request #2413 from baude/remotepodstop (OpenShift Merge Robot, 2019-02-24)

    Enable more podman-remote pod commands
| * Enable more podman-remote pod commands (baude, 2019-02-22)

    Enable the pod start, stop, and kill subcommands for the
    remote-client.

    Signed-off-by: baude <bbaude@redhat.com>
* | Vendor Buildah v1.7 (TomSweeneyRedHat, 2019-02-22)

    Vendors Buildah 1.7 into Podman, along with the latest imagebuilder
    and changes for `build --target`.

    Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>
* Merge pull request #2403 from giuseppe/fix-runtime (OpenShift Merge Robot, 2019-02-22)

    podman: --runtime takes priority over runtime_path
| * podman: --runtime takes priority over runtime_path (Giuseppe Scrivano, 2019-02-22)

    If --runtime is specified, it takes priority over the runtime_path
    option, which was added for backward compatibility.

    Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
* | Merge pull request #2402 from baude/remotepodinspect (OpenShift Merge Robot, 2019-02-22)

    podman-remote pod inspect|exists
| * podman-remote pod inspect|exists (baude, 2019-02-22)

    Enable the remote client to inspect a pod. As a bonus, enable the
    podman pod exists command, which returns 0 or 1 depending on
    whether the given pod exists.

    Signed-off-by: baude <bbaude@redhat.com>
* | Merge pull request #2350 from mheon/lock_renumber (OpenShift Merge Robot, 2019-02-21)

    Add lock renumbering
| * Do not make renumber shut down the runtime (Matthew Heon, 2019-02-21)

    The original intent behind the requirement was to ensure that, if
    two SHM lock structs were open at the same time, we should not make
    such a runtime available to the user, and should clean it up
    instead.

    It turns out that we don't even need to open a second SHM lock
    struct - if we get an error mapping the first one due to a lock
    count mismatch, we can just delete it, and it cleans itself up when
    it errors. So there's no reason not to return a valid runtime.

    Signed-off-by: Matthew Heon <matthew.heon@pm.me>
| * Recreate SHM locks when renumbering on count mismatch (Matthew Heon, 2019-02-21)

    When we're renumbering locks, we're destroying all existing
    allocations anyway, so destroying the old lock struct is not a
    particularly big deal. Existing long-lived libpod instances will
    continue to use the old locks, but that will be solved in a
    follow-on.

    Also, solve an issue with returning error values in the C code.
    There were a few places where we returned ERRNO where it was not
    set, so make them return actual error codes.

    Signed-off-by: Matthew Heon <matthew.heon@pm.me>
| * Move RenumberLocks into runtime init (Matthew Heon, 2019-02-21)

    We can't do renumbering after init - we need to open a potentially
    invalid locks file (too many/too few locks), and then potentially
    delete the old locks and make new ones. We need to be in init to
    bypass the checks that would otherwise make this impossible.

    This leaves us with two choices: make RenumberLocks a separate
    entrypoint from NewRuntime, duplicating a lot of configuration load
    code (we need to know where the locks live, how many there are,
    etc.) - or modify NewRuntime to allow renumbering during it.
    Previous experience says the first is not really a viable option
    and produces massive code bloat, so the second it is.

    Signed-off-by: Matthew Heon <matthew.heon@pm.me>
| * Remove locks from volumes (Matthew Heon, 2019-02-21)

    I was looking into why we have locks in volumes, and I'm fairly
    convinced they're unnecessary. We don't have a state whose accesses
    we need to guard with locks and syncs. The only real purpose for
    the lock was to prevent concurrent removal of the same volume.
    Looking at the code, concurrent removal ought to be fine with a bit
    of reordering - one or the other might fail, but we will
    successfully evict the volume from the state.

    Also, remove the 'prune' bool from RemoveVolume. None of our other
    API functions accept it, and it only served to toggle off more
    verbose error messages.

    Signed-off-by: Matthew Heon <matthew.heon@pm.me>
| * Expand renumber to also renumber pod locks (Matthew Heon, 2019-02-21)

    Signed-off-by: Matthew Heon <matthew.heon@pm.me>
| * Add ability to rewrite pod configs in the database (Matthew Heon, 2019-02-21)

    Necessary for rewriting lock IDs as part of renumber.

    Signed-off-by: Matthew Heon <matthew.heon@pm.me>
| * Add initial version of renumber backend (Matthew Heon, 2019-02-21)

    Renumber is a way of renumbering container locks after the number
    of locks available has changed. For now, renumber only works with
    containers.

    Signed-off-by: Matthew Heon <matthew.heon@pm.me>
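In outline, renumbering frees every existing lock allocation, then walks all containers, hands each a fresh lock, and persists the new lock ID. A sketch against assumed interfaces (none of these names are libpod's):

```go
package sketch

// Assumed interfaces standing in for libpod's lock manager and DB.
type LockManager interface {
	FreeAllLocks() error
	AllocateLock() (uint32, error)
}

type Container struct {
	ID     string
	LockID uint32
}

type Store interface {
	AllContainers() ([]*Container, error)
	RewriteContainerConfig(c *Container) error
}

// renumberLocks frees every existing allocation, then walks all
// containers, gives each a fresh lock, and writes the new lock ID back
// to the database.
func renumberLocks(lm LockManager, db Store) error {
	if err := lm.FreeAllLocks(); err != nil {
		return err
	}
	ctrs, err := db.AllContainers()
	if err != nil {
		return err
	}
	for _, c := range ctrs {
		id, err := lm.AllocateLock()
		if err != nil {
			return err
		}
		c.LockID = id
		if err := db.RewriteContainerConfig(c); err != nil {
			return err
		}
	}
	return nil
}
```

This is also why the surrounding commits add the ability to rewrite container and pod configs in the database: renumbering has to persist the new lock IDs somewhere.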
| * Add a function for overwriting container config (Matthew Heon, 2019-02-21)

    Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* | podman-remote load image (baude, 2019-02-21)

    Enable loading an image into remote storage using the remote
    client.

    Signed-off-by: baude <bbaude@redhat.com>
* enable podman-remote pod rm (baude, 2019-02-21)

    Add the ability to delete a pod from the remote client.

    Signed-off-by: baude <bbaude@redhat.com>
* podman-remote save [image] (baude, 2019-02-20)

    Add the ability to save an image from the remote host to the
    remote client.

    Signed-off-by: baude <bbaude@redhat.com>
* image.SearchImages: use SearchFilter type (Valentin Rothberg, 2019-02-20)

    Use an `image.SearchFilter` instead of a `[]string` in the
    SearchImages API.

    Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
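Compared to a bare []string, a dedicated filter type makes the accepted search filters explicit. A plausible, purely hypothetical shape; the real definition lives in libpod/image/search.go and may differ:

```go
package sketch

// SearchFilter is an assumed shape for the type mentioned above.
type SearchFilter struct {
	// Stars keeps only images with at least this many stars.
	Stars int
	// IsAutomated, when non-nil, filters on automated builds.
	IsAutomated *bool
	// IsOfficial, when non-nil, filters on official images.
	IsOfficial *bool
}
```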
* podman-search: refactor code to libpod/image/search.go (Valentin Rothberg, 2019-02-20)

    Refactor the image-search logic from cmd/podman/search.go to
    libpod/image/search.go and update podman-search and the Varlink API
    to use it.

    Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
* podman-remote pull (baude, 2019-02-19)

    Add status for remote users and podman remote-client pull.

    Signed-off-by: baude <bbaude@redhat.com>
* Don't start running dependencies (Peter Hunt, 2019-02-19)

    Before, a container being run or started in a pod always restarted
    the infra container. This was because we didn't take running
    dependencies into account. Fix this by filtering for dependencies
    in the running state.

    Signed-off-by: Peter Hunt <pehunt@redhat.com>
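The filter this describes is straightforward; a sketch with an illustrative state model:

```go
package main

import "fmt"

// Minimal state model for illustration; libpod's is richer.
type State int

const (
	StateCreated State = iota
	StateRunning
	StateStopped
)

type Ctr struct {
	ID    string
	State State
}

// dependenciesToStart filters out dependencies that are already
// running, so starting a container in a pod no longer restarts a
// running infra container.
func dependenciesToStart(deps []Ctr) []Ctr {
	var notRunning []Ctr
	for _, d := range deps {
		if d.State != StateRunning {
			notRunning = append(notRunning, d)
		}
	}
	return notRunning
}

func main() {
	deps := []Ctr{{ID: "infra", State: StateRunning}, {ID: "db", State: StateStopped}}
	fmt.Println(dependenciesToStart(deps)) // only "db"
}
```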
* OpenTracing support added to start, stop, run, create, pull, and ps (Sebastian Jug, 2019-02-18)

    Drop context.Context field from cli.Context.

    Signed-off-by: Sebastian Jug <sejug@redhat.com>
* pod infra container is started before a container in a pod is run, started, or attached (Peter Hunt, 2019-02-15)

    Prior, a pod would have to be started immediately when created,
    leading to confusion about what a pod's state should be immediately
    after creation. The problem was that podman run --pod ... would
    error out if the infra container wasn't started (as it is a
    dependency). Fix this by allowing for recursive start, where each
    of the container's dependencies is started prior to the new
    container. This is only applied to the case where a new container
    is attached to a pod.

    Also rework the container_api Start, StartAndAttach, and Init
    functions, as there was some duplicated code; removing it made the
    problem easier to address.

    Signed-off-by: Peter Hunt <pehunt@redhat.com>
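The recursive start described here can be sketched as a depth-first walk over the dependency graph; all names are illustrative, and it assumes an acyclic graph (which libpod enforces elsewhere):

```go
package sketch

import "fmt"

type State int

const (
	StateCreated State = iota
	StateRunning
)

// Ctr is an illustrative container with its dependency edges.
type Ctr struct {
	ID            string
	State         State
	DependencyIDs []string
}

// startWithDependencies starts each not-yet-running dependency (depth
// first) before the container itself.
func startWithDependencies(c *Ctr, lookup func(id string) *Ctr, start func(*Ctr) error) error {
	for _, depID := range c.DependencyIDs {
		dep := lookup(depID)
		if dep.State == StateRunning {
			continue
		}
		if err := startWithDependencies(dep, lookup, start); err != nil {
			return fmt.Errorf("starting dependency %s of %s: %v", dep.ID, c.ID, err)
		}
	}
	return start(c)
}
```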
* Merge pull request #2346 from giuseppe/fix-runtime-lookup (OpenShift Merge Robot, 2019-02-15)

    libpod.conf: add backward compatibility for runtime_path
| * libpod: honor runtime_path from libpod.conf (Giuseppe Scrivano, 2019-02-15)

    Add backward compatibility for `runtime_path`, which was used by
    older versions of Podman. The issue was introduced with
    650cf122e1b33f4d8f4426ee1cc1a4bf00c14798.

    If `runtime_path` is specified, it overrides any other
    configuration and a warning is printed. It should be considered
    deprecated and will be removed in the future.

    Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
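Taken together with the later --runtime commit, the precedence can be sketched as follows (names are illustrative; logrus is assumed for the warning):

```go
package sketch

import "github.com/sirupsen/logrus"

// selectRuntime: an explicit --runtime flag wins, a legacy runtime_path
// entry is honored with a deprecation warning, and otherwise the
// configured default runtime is used.
func selectRuntime(cliRuntime string, legacyRuntimePath []string, defaultRuntime string) string {
	if cliRuntime != "" {
		return cliRuntime
	}
	if len(legacyRuntimePath) > 0 {
		logrus.Warning("the runtime_path option is deprecated and will be removed in the future")
		return legacyRuntimePath[0]
	}
	return defaultRuntime
}
```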
| * rootless: open the correct file (Daniel J Walsh, 2019-02-15)

    Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
* | Merge pull request #2305 from rhatdan/tlsverify (OpenShift Merge Robot, 2019-02-15)

    Add tlsVerify bool to SearchImage for varlink
| * Add tlsVerify bool to SearchImage for varlink (Daniel J Walsh, 2019-02-14)

    Cockpit wants to be able to search images on systems without
    tlsverify turned on. tlsverify should be an optional parameter; if
    it is not set, we default to the system defaults defined in
    /etc/containers/registries.conf.

    Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
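An optional boolean is naturally modeled as a nil-able pointer on the varlink side, mapped onto containers/image's OptionalBool; the exact wiring here is an assumption:

```go
package sketch

import "github.com/containers/image/types"

// systemContextForSearch: a nil pointer means the caller did not set
// tlsVerify, so the system defaults from registries.conf apply. Note
// the inversion: the SystemContext field stores "skip verification".
func systemContextForSearch(tlsVerify *bool) *types.SystemContext {
	sc := &types.SystemContext{}
	if tlsVerify != nil {
		sc.DockerInsecureSkipTLSVerify = types.NewOptionalBool(!*tlsVerify)
	}
	return sc
}
```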
* | Merge pull request #2332 from baude/remotevolumeprune (OpenShift Merge Robot, 2019-02-14)

    volume prune
| * | enable podman-remote volume prune (baude, 2019-02-14)

    Allow users to remotely prune volumes. This is the last volume
    command for remote enablement. As such, the volume commands are
    being folded back into main because they are supported for both
    local and remote clients.

    Also, enable all volume tests that do not use containers, as
    containers are not enabled for the remote client yet.

    Signed-off-by: baude <bbaude@redhat.com>
* / Fix volume handling in podman (Daniel J Walsh, 2019-02-14)

    Fix builtin volumes to work with podman volume. Currently builtin
    volumes are not recorded in podman volumes when they are created
    automatically. This patch fixes this.

    Remove container volumes when requested. Currently the --volume
    option on podman remove does nothing. This implements the changes
    needed to remove the volumes if the user requests it.

    When removing a volume, make sure that no container uses the
    volume.

    Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* Merge pull request #2321 from baude/remotebuild (OpenShift Merge Robot, 2019-02-14)

    podman-remote build
| * podman-remote build (baude, 2019-02-13)

    Add the ability to build images using files local to the
    remote-client but over a varlink interface to a "remote" server.

    Signed-off-by: baude <bbaude@redhat.com>
* | Merge pull request #2319 from mheon/unconditional_cleanup (OpenShift Merge Robot, 2019-02-13)

    Fix manual detach from containers to not wait for exit
| * | Retain a copy of container exit file on cleanup (Matthew Heon, 2019-02-12)

    When cleaning up containers, we presently remove the exit file
    created by Conmon, to ensure that if we restart the container, we
    won't have conflicts when Conmon tries writing a new exit file.

    Unfortunately, we need to retain that exit file (at least until we
    get a workable events system), so we can read it in cases where the
    container has been removed before 'podman run' can read its exit
    code.

    So instead of removing it, rename it, so there's no conflict with
    Conmon, and we can still read it later.

    Fixes: #1640

    Signed-off-by: Matthew Heon <mheon@redhat.com>
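A sketch of the rename-instead-of-remove idea; the path layout and "-old" suffix are illustrative, not necessarily what libpod uses:

```go
package sketch

import (
	"fmt"
	"os"
	"path/filepath"
)

// retainExitFile moves Conmon's exit file aside instead of deleting it:
// a restarted Conmon can write a fresh file under the original name,
// while 'podman run' can still read the exit code from the renamed
// copy.
func retainExitFile(exitDir, ctrID string) error {
	oldPath := filepath.Join(exitDir, ctrID)
	newPath := filepath.Join(exitDir, ctrID+"-old")
	if err := os.Rename(oldPath, newPath); err != nil {
		if os.IsNotExist(err) {
			return nil // no exit file to retain
		}
		return fmt.Errorf("retaining exit file for container %s: %v", ctrID, err)
	}
	return nil
}
```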