path: root/libpod
* Fixup issues found by golint (Daniel J Walsh, 2020-06-10)
  Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

* Ensure Conmon is alive before waiting for exit file (Matthew Heon, 2020-06-08)
  This came out of a conversation with Valentin about systemd-managed Podman. He discovered that unit files did not properly handle cases where Conmon was dead - the ExecStopPost `podman rm --force` line was not actually removing the container, but interestingly, adding a `podman cleanup --rm` line would remove it. Both of these commands do the same thing (minus the `podman cleanup --rm` command not force-removing running containers).

  Without a running Conmon instance, the container process is still running (assuming you killed Conmon with SIGKILL and it had no chance to kill the container it managed), but you can still kill the container itself with `podman stop` - Conmon is not involved, only the OCI Runtime. (`podman rm --force` and `podman stop` use the same code to kill the container.) The problem comes when we want to get the container's exit code - we expect Conmon to make us an exit file, which it's obviously not going to do, being dead.

  The first `podman rm` would fail because of this, but importantly, it would (after failing to retrieve the exit code correctly) set container status to Exited, so that the second `podman cleanup` process would succeed. To make sure the first `podman rm --force` succeeds, we need to catch the case where Conmon is already dead and, instead of waiting for an exit file that will never come, immediately set the Stopped state and return an error that can be caught and handled.

  Signed-off-by: Matthew Heon <mheon@redhat.com>

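A minimal sketch of what such a liveness probe can look like in Go, assuming a hypothetical conmonAlive helper and that the conmon PID is already known; sending signal 0 performs only the existence check and delivers nothing:

```go
package example

import "golang.org/x/sys/unix"

// conmonAlive reports whether the process with the given PID still exists.
// Signal 0 does error checking only; ESRCH means the process is gone, which
// is the "Conmon is dead, don't wait for an exit file" case described above.
func conmonAlive(pid int) (bool, error) {
	err := unix.Kill(pid, 0)
	if err == unix.ESRCH {
		return false, nil
	}
	if err != nil {
		return false, err
	}
	return true, nil
}
```
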
* Ensure that containers in pods properly set hostname (Matthew Heon, 2020-06-04)
  When we moved to the new Namespace types in Specgen, we made a distinction between taking a namespace from a pod, and taking it from another container. Due to this new distinction, some code that previously worked for both `--pod=$ID` and `--uts=container:$ID` has accidentally become conditional on only the latter case. This happened for Hostname - we weren't properly setting it in cases where the container joined a pod.

  Fortunately, this is an easy fix once we know to check the condition. Also, ensure that `podman pod inspect` actually prints hostname.

  Fixes #6494

  Signed-off-by: Matthew Heon <matthew.heon@pm.me>

* Merge pull request #6470 from mheon/fix_stats_nonet (OpenShift Merge Robot, 2020-06-04)
  Properly follow linked namespace container for stats

* Properly follow linked namespace container for stats (Matthew Heon, 2020-06-02)
  Podman containers can specify that they get their network namespace from another container. This is automatic in pods, but any container can do it.

  The problem is that these containers are not guaranteed to have a network namespace of their own; it is perfectly valid to join the network namespace of a --net=host container, and both containers will end up in the host namespace. The code for obtaining network stats did not account for this, and could cause segfaults as a result.

  Fortunately, the fix is simple - the function we use to get said stats already performs appropriate checks, so we just need to recursively call it.

  Fixes #5652

  Signed-off-by: Matthew Heon <matthew.heon@pm.me>

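The recursion described here is easy to picture with pared-down types; the struct and function names below are illustrative stand-ins, not the actual libpod code:

```go
package example

import "fmt"

// Pared-down stand-ins for the real libpod types.
type Container struct {
	ID       string
	NetNsCtr string // ID of the container whose network namespace we join, if any
	HostNet  bool   // true when the container uses the host's namespace
	RxBytes  uint64
	TxBytes  uint64
}

type Runtime struct{ ctrs map[string]*Container }

func (r *Runtime) lookup(id string) (*Container, error) {
	c, ok := r.ctrs[id]
	if !ok {
		return nil, fmt.Errorf("no such container %s", id)
	}
	return c, nil
}

// netStats recursively follows NetNsCtr links, so a container that joins
// another container's namespace re-runs the same checks against the target
// instead of assuming it owns a namespace of its own.
func netStats(r *Runtime, c *Container) (rx, tx uint64, err error) {
	if c.NetNsCtr != "" {
		target, err := r.lookup(c.NetNsCtr)
		if err != nil {
			return 0, 0, err
		}
		return netStats(r, target)
	}
	if c.HostNet {
		return 0, 0, nil // host namespace: no per-container interface stats
	}
	return c.RxBytes, c.TxBytes, nil
}
```
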
* add socket information to podman info (Brent Baude, 2020-06-03)
  This is step 1 toward self-discovery of remote ssh connections. We add a remotesocket struct to info to detect what the socket path might be.

  Co-authored-by: Jhon Honce <jhonce@redhat.com>
  Signed-off-by: Brent Baude <bbaude@redhat.com>

* Merge pull request #6468 from mheon/remote_detached_exec (OpenShift Merge Robot, 2020-06-03)
  Enable detached exec for remote

* Enable detached exec for remote (Matthew Heon, 2020-06-02)
  The biggest obstacle here was cleanup - we needed a way to remove detached exec sessions after they exited, but there's no way to tell if an exec session will be attached or detached when it's created, and that's when we must add the exit command that would do the removal.

  The solution was adding a delay to the exit command (5 minutes), which gives sufficient time for attached exec sessions to retrieve the exit code of the session after it exits, but still guarantees that they will be removed, even for detached sessions. This requires Conmon 2.0.17, which has the new `--exit-delay` flag.

  As part of the exit command rework, we can drop the hack we were using to clean up exec sessions (remove them as part of inspect). This is a lot cleaner, and I'm a lot happier about it.

  Otherwise, this is just plumbing - we need a bindings call for detached exec, and that needed to be added to the tunnel mode backend for entities.

  Signed-off-by: Matthew Heon <matthew.heon@pm.me>

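A rough sketch of how the delayed exit command might be wired into conmon's argument list. The layout of the cleanup command is illustrative; `--exit-command` and `--exit-command-arg` are existing conmon flags, `--exit-delay` is the new flag named above, and the assumption that it takes a value in seconds is mine:

```go
package example

// execExitArgs builds the conmon arguments that run a cleanup command after
// the exec session exits, delayed long enough for an attached client to read
// the exit code first (5 minutes, per the commit message above).
func execExitArgs(podmanPath, ctrID, sessionID string) []string {
	exitCmd := []string{
		podmanPath, "container", "cleanup",
		"--exec", sessionID, // flag added in the "cleanup --exec" commit below
		ctrID,
	}
	args := []string{"--exit-command", exitCmd[0]}
	for _, a := range exitCmd[1:] {
		args = append(args, "--exit-command-arg", a)
	}
	return append(args, "--exit-delay", "300")
}
```
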
* check --user range for rootless containers (Qi Wang, 2020-06-02)
  Check the --user value, if it is a UID, against the range available to rootless containers, and return an error if it is out of that range. From https://github.com/containers/libpod/issues/6431#issuecomment-636124686

  Signed-off-by: Qi Wang <qiwan@redhat.com>

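The check is essentially a range test against the user-namespace ID mappings; a simplified sketch under that assumption, with a pared-down mapping type rather than the real specgen structures:

```go
package example

import (
	"fmt"
	"strconv"
)

// IDMapping is a minimal stand-in for a user-namespace mapping entry
// (container-side start ID and length) configured for a rootless container.
type IDMapping struct {
	ContainerID uint32
	Size        uint32
}

// checkRootlessUID returns an error when --user was given as a numeric UID
// that no mapping entry covers; non-numeric users are resolved elsewhere.
func checkRootlessUID(user string, mappings []IDMapping) error {
	uid, err := strconv.ParseUint(user, 10, 32)
	if err != nil {
		return nil // not a plain UID (e.g. "nginx"); nothing to range-check
	}
	for _, m := range mappings {
		if uint32(uid) >= m.ContainerID && uint32(uid) < m.ContainerID+m.Size {
			return nil
		}
	}
	return fmt.Errorf("requested user %d is not mapped in the rootless user namespace", uid)
}
```
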
* compat handlers: add X-Registry-Auth header support (Valentin Rothberg, 2020-05-29)
  * Support the `X-Registry-Auth` http-request header.
  * The content of the header is a base64-encoded JSON payload which can either be a single auth config or a map of auth configs (user+pw or token) with the corresponding registries being the keys. Vanilla Docker, projectatomic Docker and the bindings are transparently supported.
  * Add a hidden `--registries-conf` flag. Buildah exposes the same flag, mostly for testing purposes.
  * Do all credential parsing in the client (i.e., `cmd/podman`) and pass the username and password to the backend instead of unparsed credentials.
  * Add a `pkg/auth` which handles most of the heavy lifting.
  * Go through the authentication-handling code of most commands, bindings and endpoints. Migrate them to the new code and fix issues as seen. A final evaluation and more tests are still required *after* this change.
  * The manifest-push endpoint is missing certain parameters and should use the ABI function instead. Adding auth support isn't really possible without these parts working.
  * The container commands and endpoints (i.e., create and run) have not been changed yet. The APIs don't yet account for the authfile.
  * Add authentication tests to `pkg/bindings`.

  Fixes: #6384

  Signed-off-by: Valentin Rothberg <rothberg@redhat.com>

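For reference, producing the header on the client side amounts to base64-encoding a JSON document; a minimal sketch with an illustrative authConfig type, using URL-safe base64 as conventionally used for this header (the map-of-registries form is marshalled the same way):

```go
package example

import (
	"encoding/base64"
	"encoding/json"
	"net/http"
)

// authConfig mirrors the fields a registry auth entry typically carries;
// treat it as an illustration of the header format, not the pkg/auth types.
type authConfig struct {
	Username      string `json:"username,omitempty"`
	Password      string `json:"password,omitempty"`
	IdentityToken string `json:"identitytoken,omitempty"`
}

// setRegistryAuth encodes a single auth config as base64-encoded JSON and
// attaches it as the X-Registry-Auth request header.
func setRegistryAuth(req *http.Request, cfg authConfig) error {
	payload, err := json.Marshal(cfg)
	if err != nil {
		return err
	}
	req.Header.Set("X-Registry-Auth", base64.URLEncoding.EncodeToString(payload))
	return nil
}
```
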
* podman version --format ... was not working (Daniel J Walsh, 2020-05-21)
  This patch fixes the `podman version --format` command.

  Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

* Display human build date in podman info (Daniel J Walsh, 2020-05-21)
  Currently we are displaying the seconds since the epoch; this changes to displaying a date, similar to `podman version`.

  Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

* Merge pull request #6323 from rhatdan/shrink (OpenShift Merge Robot, 2020-05-21)
  Remove github.com/libpod/libpod from cmd/pkg/podman

* Remove github.com/libpod/libpod from cmd/pkg/podman (Daniel J Walsh, 2020-05-21)
  By moving a couple of variables from libpod/libpod to libpod/libpod/define I am able to shrink the podman-remote-* executables by another megabyte.

  Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

* Merge pull request #6312 from rhatdan/image (OpenShift Merge Robot, 2020-05-21)
  Fix remote handling of podman images calls

* Fix remote handling of podman images calls (Daniel J Walsh, 2020-05-21)
  Enable three more tests. Fix handling of image filters.

  Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

* Merge pull request #6281 from rhatdan/fips (OpenShift Merge Robot, 2020-05-21)
  Fix mountpoint in SecretMountsWithUIDGID

* Fix mountpoint in SecretMountsWithUIDGID (Daniel J Walsh, 2020-05-19)
  In FIPS Mode we expect to work off of the Mountpath, not the Rundir path. This is causing FIPS Mode checks to fail.

  Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

* vendor: update seccomp/containers-golang to v0.4.1 (Giuseppe Scrivano, 2020-05-21)
  Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>

* Merge pull request #6304 from baude/v2remotehctests (OpenShift Merge Robot, 2020-05-21)
  Fix remote integration for healthchecks

* Fix remote integration for healthchecks (Brent Baude, 2020-05-20)
  The one remaining test is still skipped due to a missing exec function.

  Signed-off-by: Brent Baude <bbaude@redhat.com>

* Enable cleanup processes for detached exec (Matthew Heon, 2020-05-20)
  The cleanup command creation logic is made public as part of this and wired such that we can call it both within SpecGen (to make container exit commands) and from the ABI detached exec handler. Exit commands are presently only used for detached exec, but theoretically could be turned on for all exec sessions if we wanted (I'm declining to do this because of potential overhead).

  I also forgot to copy the exit command from the exec config into the ExecOptions struct used by the OCI runtime, so it was not being added.

  There are also two significant bugfixes for exec in here. One is for updating the status of running exec sessions - this was always failing as I had coded it to remove the exit file *before* reading it, instead of after (oops). The second was that removing a running exec session would always fail because I inverted the check to see if it was running.

  Signed-off-by: Matthew Heon <matthew.heon@pm.me>

* Add ability to clean up exec sessions with cleanup (Matthew Heon, 2020-05-20)
  We need to be able to use cleanup processes to remove exec sessions as part of detached exec. This PR adds that ability. A new flag is added to `podman container cleanup`, `--exec`, to specify an exec session to be cleaned up.

  As part of this, ensure that `ExecCleanup` can clean up exec sessions that were running, but have since exited. This ensures that we can come back to an exec session that was running but has since stopped, and clean it up.

  Signed-off-by: Matthew Heon <matthew.heon@pm.me>

* Add backend code for detached exec (Matthew Heon, 2020-05-20)
  As part of the massive exec rework, I stubbed out a function for detached exec, which is implemented here. It's largely similar to the existing exec functions, but missing a few pieces.

  This also involves implementing a new OCI runtime call for detached exec. Again, very similar to the other functions, but with a few missing pieces.

  Signed-off-by: Matthew Heon <matthew.heon@pm.me>

* Add exit commands to exec sessions (Matthew Heon, 2020-05-20)
  These are required for detached exec, where they will be used to clean up and remove exec sessions when they exit.

  As part of this, move all Exec related functionality for the Conmon OCI runtime into a separate file; the existing one was around 2000 lines.

  Signed-off-by: Matthew Heon <matthew.heon@pm.me>

* oci conmon: tell conmon to log container name (Peter Hunt, 2020-05-20)
  Specifying `-n=ctr-name` tells conmon to log CONTAINER_NAME=name if the log driver is journald. Add this, and a test!

  Also, refactor the args slice creation to not append() unnecessarily.

  Signed-off-by: Peter Hunt <pehunt@redhat.com>

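A sketch of the pattern described (flag set abbreviated, names illustrative): build the fixed part of the argument slice as one literal and append only the optional pieces, adding `-n` so journald records carry CONTAINER_NAME:

```go
package example

// conmonLogArgs shows the shape of the refactor: a single literal slice for
// the always-present flags, with conditional appends only where needed.
func conmonLogArgs(ctrID, ctrName, bundle string, journald bool) []string {
	args := []string{
		"-c", ctrID,
		"-b", bundle,
	}
	if journald {
		// conmon logs CONTAINER_NAME=<name> to journald when given -n.
		args = append(args, "-n", ctrName)
	}
	return args
}
```
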
* Merge pull request #6231 from mheon/fix_coverity (OpenShift Merge Robot, 2020-05-17)
  Fix two coverity issues (unchecked null return)

* Fix two coverity issues (unchecked null return) (Matthew Heon, 2020-05-14)
  Theoretically these should never happen, but it never hurts to be sure and check. Add a check to one, make the other one a create-if-not-exist (it was just adding, not checking the contents).

  Signed-off-by: Matthew Heon <mheon@redhat.com>

* Drop a debug line which could print very large messages (Matthew Heon, 2020-05-15)
  Signed-off-by: Matthew Heon <matthew.heon@pm.me>

* Remove duplicated exec handling code (Matthew Heon, 2020-05-14)
  During the initial workup of HTTP exec, I duplicated most of the existing exec handling code so I could work on it without breaking normal exec (and compare what I was doing to the normal version). Now that it's done and working, we can switch over to the refactored version and ditch the original, removing a lot of duplicated code.

  Signed-off-by: Matthew Heon <matthew.heon@pm.me>

* Fix lint (Matthew Heon, 2020-05-14)
  Signed-off-by: Matthew Heon <matthew.heon@pm.me>

* Prune stale exec sessions on inspect (Matthew Heon, 2020-05-14)
  The usual flow for exec is going to be:
  - Create exec session
  - Start and attach to exec session
  - Exec session exits, attach session terminates
  - Client does an exec inspect to pick up exit code

  The safest point to remove the exec session, without doing any database changes to track stale sessions, is to remove during the last part of this - the single inspect after the exec session exits. This is definitely different from Docker (which would retain the exec session for up to 10 minutes after it exits, where we will immediately discard) but should be close enough to be not noticeable in regular usage.

  Signed-off-by: Matthew Heon <matthew.heon@pm.me>

* Remove exec sessions on container restart (Matthew Heon, 2020-05-14)
  With APIv2, we cannot guarantee that exec sessions will be removed cleanly on exit (Docker does not include an API for removing exec sessions, instead using a timer-based reaper which we cannot easily replicate). This is part 1 of a 2-part approach to providing a solution to this. This ensures that exec sessions will be reaped, at the very least, on container restart, which takes care of any that were not properly removed during the run of a container.

  Signed-off-by: Matthew Heon <matthew.heon@pm.me>

* Fix start order for APIv2 exec start endpoint (Matthew Heon, 2020-05-14)
  This makes the endpoint (mostly) functional.

  Signed-off-by: Matthew Heon <matthew.heon@pm.me>

* Don't fail when saving exec status fails on removed ctr (Matthew Heon, 2020-05-14)
  We can't save the exec session, but it's because the container is entirely gone, so there is no point erroring.

  Signed-off-by: Matthew Heon <mheon@redhat.com>

* Ensure that Streams are set to defaults for HTTP attach (Matthew Heon, 2020-05-14)
  If not overridden, we should use the attach configuration given when the exec session was first created. Also, setting streams should not conflict with a TTY - the two are allowed together with Attach and should be allowed together here.

  Signed-off-by: Matthew Heon <mheon@redhat.com>

* Add an initial implementation of HTTP-forwarded exec (Matthew Heon, 2020-05-14)
  This is heavily based off the existing exec implementation, but does not presently share code with it, to try and ensure we don't break anything.

  Still to do:
  - Add code sharing with existing exec implementation
  - Wire in the frontend (exec HTTP endpoint)
  - Move all exec-related code in oci_conmon_linux.go into a new file
  - Investigate code sharing between HTTP attach and HTTP exec

  Signed-off-by: Matthew Heon <mheon@redhat.com>

* Ensure that cleanup runs before we set Removing state (Matthew Heon, 2020-05-14)
  Cleaning up the OCI runtime is not allowed in the Removing state. To ensure it is actually cleaned up, when calling cleanup() as part of removing a container, do so before we set the Removing state, so we can successfully remove.

  Signed-off-by: Matthew Heon <mheon@redhat.com>

* Cleanup OCI runtime before storage (Matthew Heon, 2020-05-14)
  Some runtimes (e.g. Kata containers) seem to object to having us unmount storage before the container is removed from the runtime. This is an easy fix (change the order of operations in cleanup) and seems to make more sense than the way we were doing things.

  Signed-off-by: Matthew Heon <mheon@redhat.com>

* WIP V2 attach bindings and test (Jhon Honce, 2020-05-13)
  * Add ErrLostSync to report loss of sync when de-mux'ing the stream
  * Add logrus.SetLevel(logrus.DebugLevel) when `go test -v` is given
  * Add context to debugging messages

  Signed-off-by: Jhon Honce <jhonce@redhat.com>

* Merge pull request #6169 from vrothberg/fix-6164 (OpenShift Merge Robot, 2020-05-11)
  shm_lock_test: add nil check

* shm_lock_test: add nil check (Valentin Rothberg, 2020-05-11)
  Fixes: #6164

  Signed-off-by: Valentin Rothberg <rothberg@redhat.com>

* Add podman static build (Sascha Grunert, 2020-05-11)
  We're now able to build a static podman binary based on a custom nix derivation. This is integrated in Cirrus as well; a later target would be to provide a self-contained static binary bundle which can be installed on any 64-bit Linux system.

  Fixes: https://github.com/containers/libpod/issues/1399

  Signed-off-by: Sascha Grunert <sgrunert@suse.com>

* Merge pull request #6152 from mheon/fix_pod_join_cgroupns (OpenShift Merge Robot, 2020-05-09)
  Fix bug where pods would unintentionally share cgroupns

* Ensure `podman inspect` output for NetworkMode is right (Matthew Heon, 2020-05-08)
  I realized that setting NetworkMode to private when we are making a network namespace but not configuring it with CNI or Slirp is wrong; that's considered `--net=none`, not `--net=private`. At the same time, I realized that we actually store whether Slirp is in use, so we can be more specific than just "default" and instead say slirp4netns or bridge.

  Signed-off-by: Matthew Heon <mheon@redhat.com>

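The mapping the commit describes reads like a small decision table; the sketch below uses invented parameter names and adds the obvious host/container cases for completeness, so treat it as an illustration rather than the actual inspect code:

```go
package example

// networkMode picks the string reported by `podman inspect` from how the
// container's network namespace was set up.
func networkMode(madeNetNS, configuredCNI, usesSlirp bool, netNsCtr string) string {
	switch {
	case netNsCtr != "":
		return "container:" + netNsCtr
	case !madeNetNS:
		return "host"
	case usesSlirp:
		return "slirp4netns"
	case configuredCNI:
		return "bridge"
	default:
		// Namespace created but never configured with CNI or slirp: --net=none.
		return "none"
	}
}
```
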
* Fix bug where pods would unintentionally share cgroupns (Matthew Heon, 2020-05-08)
  This one was a massive pain to track down. The original symptom was an error message from rootless Podman trying to make a container in a pod. I unfortunately did not look at the error message closely enough to realize that the namespace in question was the cgroup namespace (the reproducer pod was explicitly set to only share the network namespace), else this would have been quite a bit shorter.

  I spent considerable effort trying to track down differences between the inspect output of the two containers, and when that failed I was forced to resort to diffing the OCI specs. That finally proved fruitful, and I was able to determine what should have been obvious all along: the container was joining the cgroup namespace of the infra container when it really ought not to have.

  From there, I discovered a variable collision in pod config. The UsePodCgroup variable means "create a parent cgroup for the pod and join containers in the pod to it". Unfortunately, it is very similar to UsePodUTS, UsePodNet, etc, which mean "the pod shares this namespace", so an accessor was accidentally added for it that indicated the pod shared the cgroup namespace when it really did not. Once I realized that, it was a quick fix - add a bool to the pod's configuration to indicate whether the cgroup ns was shared (distinct from UsePodCgroup) and use that for the accessor.

  Also included are fixes for `podman inspect` and `podman pod inspect` that fix them to actually display the state of the cgroup namespace (for container inspect) and what namespaces are shared (for pod inspect). Either of those would have made tracking this down considerably quicker.

  Fixes #6149

  Signed-off-by: Matthew Heon <mheon@redhat.com>

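A minimal sketch of the shape of the fix, with invented field names: keep "create a pod cgroup" and "share the cgroup namespace" as two separate booleans, and have the accessor read the right one.

```go
package example

// podConfig keeps the two concerns apart: UsePodCgroup controls whether a
// parent cgroup is created for the pod, while ShareCgroupNS records whether
// containers join the infra container's cgroup namespace.
type podConfig struct {
	UsePodCgroup  bool
	ShareCgroupNS bool
}

// SharesCgroup must answer from ShareCgroupNS; keying it off UsePodCgroup is
// exactly the collision the commit above describes.
func (p *podConfig) SharesCgroup() bool {
	return p.ShareCgroupNS
}
```
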
* V2 Implement tunnelled podman version (Jhon Honce, 2020-05-08)
  Signed-off-by: Jhon Honce <jhonce@redhat.com>

* Fix handling of overridden paths from database (Daniel J Walsh, 2020-05-08)
  If the first time you run podman in a user account is via `su - USER`, and the second time you run it as the logged-in USER, podman fails because it is not handling the tmpdir definition in the database. This PR fixes this problem.

  Also vendor containers/common v0.11.1. This should fix a couple of issues we have seen in podman 1.9.1 with handling of libpod.conf.

  Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

* Merge pull request #6091 from rhatdan/v2 (OpenShift Merge Robot, 2020-05-06)
  Eliminate race condition on podman info

* Eliminate race condition on podman info (Daniel J Walsh, 2020-05-05)
  There is a potential race condition where a container is removed while podman is looking up information on the total containers. This can cause podman info to fail with a "no such container" error. This change ignores the failure.

  Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>