path: root/libpod/define
Commit message (Author, Age)

* Add SystemdMode to inspect for containers (Matthew Heon, 2020-07-22)
This allows us to determine if the container auto-detected that systemd was in use, and correctly activated systemd integration. Use this to wire up some integration tests to verify that systemd integration is working properly.

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
<MH: Fixed Compile after cherry-pick>
Signed-off-by: Matthew Heon <matthew.heon@pm.me>

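A minimal sketch of the kind of field this adds to the inspect output; only the SystemdMode name comes from the commit, the surrounding struct is trimmed and hypothetical:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // InspectContainerConfig sketches inspect output exposing whether
    // systemd integration was activated for the container.
    type InspectContainerConfig struct {
        SystemdMode bool `json:"SystemdMode"`
    }

    func main() {
        cfg := InspectContainerConfig{SystemdMode: true}
        out, _ := json.MarshalIndent(cfg, "", "  ")
        fmt.Println(string(out)) // prints the SystemdMode field as JSON
    }
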
* Include infra container information in `pod inspect` (Matthew Heon, 2020-07-22)
We had a field for this in the inspect data, but it was never being populated. Because of this, `podman pod inspect` stopped showing port bindings (and other infra container settings). Add code to populate the infra container inspect data, and add a test to ensure we don't regress again.

Signed-off-by: Matthew Heon <mheon@redhat.com>

* abi: set default umask and rlimits (Giuseppe Scrivano, 2020-07-22)
The code got lost in the migration to Podman 2.0; reintroduce it.

Closes: https://github.com/containers/podman/issues/6989

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
<MH: Fixed build>
Signed-off-by: Matthew Heon <matthew.heon@pm.me>

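At the syscall level, setting a default umask and rlimits boils down to calls like these; a rough sketch, where the default values are illustrative assumptions rather than what containers.conf actually ships:

    package main

    import (
        "fmt"
        "syscall"

        "golang.org/x/sys/unix"
    )

    func main() {
        // Set the process umask before any container is spawned.
        old := syscall.Umask(0o022)
        fmt.Printf("previous umask: %#o\n", old)

        // Raise the soft open-files limit toward the hard limit.
        var lim unix.Rlimit
        if err := unix.Getrlimit(unix.RLIMIT_NOFILE, &lim); err == nil {
            lim.Cur = lim.Max
            _ = unix.Setrlimit(unix.RLIMIT_NOFILE, &lim)
        }
    }
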
* Print errors from individual containers in pods (Matthew Heon, 2020-07-06)
The infra/abi code for pods was written in a flawed way, assuming that the map[string]error containing individual container errors was only set when the global error for the pod function was nil; that is not accurate, and we are actually *guaranteed* to set the global error when any individual container errors. Thus, we'd never actually include individual container errors, because the infra code assumed that err being set meant everything failed and no container operations were attempted.

We were originally setting the cause of the error to something nonsensical ("container already exists"), so I made a new error indicating that some containers in the pod failed. We can then ignore that error when building the report on the pod operation and actually return errors from individual containers.

Unfortunately, this exposed another weakness of the infra code, which was discarding the container IDs. Errors from individual containers are not guaranteed to identify which container they came from, hence the use of map[string]error in the Pod API functions. Rather than restructuring the structs we return from pkg/infra, I just wrapped the returned errors with a message including the ID of the container.

Signed-off-by: Matthew Heon <matthew.heon@pm.me>

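A sketch of the wrapping step described above, turning a map[string]error into errors that identify the failing container (the sentinel name and helper are illustrative, not the actual libpod code):

    package main

    import (
        "errors"
        "fmt"
    )

    // ErrPodPartialFail stands in for the new "some containers in the
    // pod failed" error described in the commit.
    var ErrPodPartialFail = errors.New("some containers in the pod failed")

    // wrapContainerErrors attaches each container's ID to its error, since
    // the errors themselves may not say which container they came from.
    func wrapContainerErrors(ctrErrs map[string]error) []error {
        var out []error
        for id, err := range ctrErrs {
            out = append(out, fmt.Errorf("container %s: %w", id, err))
        }
        return out
    }

    func main() {
        errs := wrapContainerErrors(map[string]error{
            "abc123": errors.New("failed to start"),
        })
        fmt.Println(errs[0]) // container abc123: failed to start
    }
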
* move go module to v2 (Valentin Rothberg, 2020-07-06)
With the advent of Podman 2.0.0 we crossed the magical barrier of go modules. While we were able to continue importing all packages inside of the project, the project could not be vendored anymore from the outside.

Move the go module to the new major version and change all imports to `github.com/containers/libpod/v2`. The renaming of the imports was done via gomove [1].

[1] https://github.com/KSubedi/gomove

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>

* podman untag: error if tag doesn't exist (Valentin Rothberg, 2020-06-24)
Throw an error if a specified tag does not exist. Also make sure that the user input is normalized, as we already do for `podman tag`.

To prevent regressions, add a set of end-to-end and system tests. Last but not least, update the docs and add bash completions.

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>

* Reformat inspect network settings (Qi Wang, 2020-06-24)
Reformat the ports in inspect network settings to be compatible with docker inspect.

Close #5380

Signed-off-by: Qi Wang <qiwan@redhat.com>

* Do not share container log driver for exec (Matthew Heon, 2020-06-17)
When the container uses journald logging, we don't want to automatically use the same driver for its exec sessions. If we do, we will pollute the journal (particularly in the case of healthchecks) with large amounts of undesired logs. Instead, force exec session logs to file for now; we can add a log-driver flag later (we'll probably want to add a `podman logs` command that reads exec session logs at the same time).

As part of this, add support for the new 'none' log driver in Conmon. It will be the default log driver for exec sessions, and can be optionally selected for containers.

Great thanks to Joe Gooch (mrwizard@dok.org) for adding support to Conmon for a null log driver, and wiring it in here.

Fixes #6555

Signed-off-by: Matthew Heon <matthew.heon@pm.me>

* Turn on More linters (Daniel J Walsh, 2020-06-15)
- misspell
- prealloc
- unparam
- nakedret

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

* pod config: add a `CreateCommand` field (Valentin Rothberg, 2020-06-11)
Add a `CreateCommand` field to the pod config which includes the entire `os.Args` at pod-creation. Similar to the already existing field in a container config, we need this information to properly generate generic systemd unit files for pods. It's a prerequisite to support the `--new` flag for pods.

Also add the `CreateCommand` to the pod-inspect data, which can come in handy for debugging, general inspection and certainly for the tests that are added along with the other changes.

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>

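Conceptually the new field is just a copy of os.Args taken at creation time; a minimal sketch (the struct and constructor are simplified stand-ins, not the real libpod types):

    package main

    import (
        "fmt"
        "os"
    )

    type PodConfig struct {
        // CreateCommand records the full command line used to create the
        // pod, later consumed when generating systemd unit files (--new).
        CreateCommand []string
    }

    func newPodConfig() *PodConfig {
        cmd := make([]string, len(os.Args))
        copy(cmd, os.Args) // copy so later changes to os.Args are harmless
        return &PodConfig{CreateCommand: cmd}
    }

    func main() {
        fmt.Println(newPodConfig().CreateCommand)
    }
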
* Ensure Conmon is alive before waiting for exit file (Matthew Heon, 2020-06-08)
This came out of a conversation with Valentin about systemd-managed Podman. He discovered that unit files did not properly handle cases where Conmon was dead - the ExecStopPost `podman rm --force` line was not actually removing the container, but interestingly, adding a `podman cleanup --rm` line would remove it. Both of these commands do the same thing (minus the `podman cleanup --rm` command not force-removing running containers).

Without a running Conmon instance, the container process is still running (assuming you killed Conmon with SIGKILL and it had no chance to kill the container it managed), but you can still kill the container itself with `podman stop` - Conmon is not involved, only the OCI Runtime. (`podman rm --force` and `podman stop` use the same code to kill the container). The problem comes when we want to get the container's exit code - we expect Conmon to make us an exit file, which it's obviously not going to do, being dead. The first `podman rm` would fail because of this, but importantly, it would (after failing to retrieve the exit code correctly) set container status to Exited, so that the second `podman cleanup` process would succeed.

To make sure the first `podman rm --force` succeeds, we need to catch the case where Conmon is already dead, and instead of waiting for an exit file that will never come, immediately set the Stopped state and return an error that can be caught and handled.

Signed-off-by: Matthew Heon <mheon@redhat.com>

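The standard way to catch a dead Conmon on Linux is kill with signal 0, which checks for existence without delivering anything; a sketch, assuming Conmon's PID is on hand:

    package main

    import (
        "fmt"

        "golang.org/x/sys/unix"
    )

    // isProcessAlive reports whether a process with the given PID exists.
    // kill(pid, 0) sends no signal; it only performs the error checking.
    func isProcessAlive(pid int) bool {
        err := unix.Kill(pid, 0)
        // ESRCH: no such process. EPERM: exists but owned by someone
        // else, which still counts as alive.
        return err == nil || err == unix.EPERM
    }

    func main() {
        fmt.Println(isProcessAlive(1)) // PID 1 always exists
    }
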
* add socket information to podman info (Brent Baude, 2020-06-03)
This is step 1 toward self-discovery of remote ssh connections. We add a remotesocket struct to info to detect what the socket path might be.

Co-authored-by: Jhon Honce <jhonce@redhat.com>
Signed-off-by: Brent Baude <bbaude@redhat.com>

* podman version --format ... was not working (Daniel J Walsh, 2020-05-21)
This patch fixes the `podman version --format` command.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

* Display human build date in podman info (Daniel J Walsh, 2020-05-21)
Currently we are displaying the seconds since the epoch; this will change to displaying a human-readable date, similar to `podman version`.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

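Rendering epoch seconds as a human-readable date is a one-liner with the time package; a sketch with a made-up build timestamp:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        buildTime := int64(1590000000) // illustrative epoch seconds
        fmt.Println(time.Unix(buildTime, 0).Format(time.RFC1123))
    }
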
* Merge pull request #6323 from rhatdan/shrink (OpenShift Merge Robot, 2020-05-21)
Remove github.com/libpod/libpod from cmd/pkg/podman

| * Remove github.com/libpod/libpod from cmd/pkg/podman (Daniel J Walsh, 2020-05-21)
By moving a couple of variables from libpod/libpod to libpod/libpod/define I am able to shrink the podman-remote-* executables by another megabyte.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

* | Fix remote integration for healthchecks (Brent Baude, 2020-05-20)
The one remaining test is still skipped due to the missing exec function.

Signed-off-by: Brent Baude <bbaude@redhat.com>

* Add podman static build (Sascha Grunert, 2020-05-11)
We're now able to build a static podman binary based on a custom nix derivation. This is integrated in Cirrus as well; a later target would be to provide a self-contained static binary bundle which can be installed on any 64-bit Linux system.

Fixes: https://github.com/containers/libpod/issues/1399

Signed-off-by: Sascha Grunert <sgrunert@suse.com>

* Merge pull request #6152 from mheon/fix_pod_join_cgroupns (OpenShift Merge Robot, 2020-05-09)
Fix bug where pods would unintentionally share cgroupns

| * Fix bug where pods would unintentionally share cgroupns (Matthew Heon, 2020-05-08)
This one was a massive pain to track down. The original symptom was an error message from rootless Podman trying to make a container in a pod. I unfortunately did not look at the error message closely enough to realize that the namespace in question was the cgroup namespace (the reproducer pod was explicitly set to only share the network namespace), else this would have been quite a bit shorter.

I spent considerable effort trying to track down differences between the inspect output of the two containers, and when that failed I was forced to resort to diffing the OCI specs. That finally proved fruitful, and I was able to determine what should have been obvious all along: the container was joining the cgroup namespace of the infra container when it really ought not to have.

From there, I discovered a variable collision in pod config. The UsePodCgroup variable means "create a parent cgroup for the pod and join containers in the pod to it". Unfortunately, it is very similar to UsePodUTS, UsePodNet, etc, which mean "the pod shares this namespace", so an accessor was accidentally added for it that indicated the pod shared the cgroup namespace when it really did not. Once I realized that, it was a quick fix - add a bool to the pod's configuration to indicate whether the cgroup ns was shared (distinct from UsePodCgroup) and use that for the accessor.

Also included are fixes for `podman inspect` and `podman pod inspect` that fix them to actually display the state of the cgroup namespace (for container inspect) and what namespaces are shared (for pod inspect). Either of those would have made tracking this down considerably quicker.

Fixes #6149

Signed-off-by: Matthew Heon <mheon@redhat.com>

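The heart of the fix is keeping "pod has a parent cgroup" and "pod shares a cgroup namespace" in separate booleans, with the accessor reading the right one; a simplified sketch, with field and method names modeled loosely on the description above rather than taken from the source:

    package main

    import "fmt"

    type PodConfig struct {
        // UsePodCgroup: create a parent cgroup for the pod and join
        // the pod's containers to it.
        UsePodCgroup bool
        // UsePodCgroupNS: containers share the infra container's
        // cgroup namespace. Distinct from UsePodCgroup.
        UsePodCgroupNS bool
    }

    // SharesCgroup must consult the namespace bool, not the
    // parent-cgroup bool; conflating the two caused the bug.
    func (c *PodConfig) SharesCgroup() bool { return c.UsePodCgroupNS }

    func main() {
        p := PodConfig{UsePodCgroup: true}
        fmt.Println(p.SharesCgroup()) // false: parent cgroup != shared ns
    }
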
* | V2 Implement tunnelled podman version (Jhon Honce, 2020-05-08)
Signed-off-by: Jhon Honce <jhonce@redhat.com>

* add {generate,play} kube (Valentin Rothberg, 2020-05-06)
Add the `podman generate kube` and `podman play kube` command. The code has largely been copied from Podman v1 but restructured to not leak the K8s core API into the (remote) client. Both commands are added in the same commit to allow for enabling the tests at the same time.

Move some exports from `cmd/podman/common` to the appropriate places in the backend to avoid circular dependencies. Move definitions of label annotations to `libpod/define` and set the security-opt labels in the frontend to make kube tests pass.

Implement rest endpoints, bindings and the tunnel interface.

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>

* v2 podman stats (baude, 2020-05-05)
Signed-off-by: baude <bbaude@redhat.com>

* V2 Restore rmi tests (Jhon Honce, 2020-04-22)
* Introduced define.ErrImageInUse to assist in determining the exit code without resorting to string searches.

Signed-off-by: Jhon Honce <jhonce@redhat.com>

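A sketch of how a typed sentinel replaces string matching when mapping errors to an exit code; the exit code values here are assumptions for illustration:

    package main

    import (
        "errors"
        "fmt"
    )

    // ErrImageInUse mirrors the sentinel the commit introduces in define.
    var ErrImageInUse = errors.New("image is in use by a container")

    func exitCode(err error) int {
        switch {
        case err == nil:
            return 0
        case errors.Is(err, ErrImageInUse):
            return 2 // illustrative "in use" exit code
        default:
            return 125
        }
    }

    func main() {
        err := fmt.Errorf("removing image abc: %w", ErrImageInUse)
        fmt.Println(exitCode(err)) // 2, detected even through wrapping
    }
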
* Update podman to use containers.conf (Daniel J Walsh, 2020-04-20)
Add more default options parsing. Switch to using `--time` as opposed to `--timeout` to better match Docker.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

* Update pod inspect report to hold current pod status. (Sujil02, 2020-04-20)
Added status field in pod inspect report. Fixed pod tests to use it.

Signed-off-by: Sujil02 <sushah@redhat.com>

* podman v2 remove bloat v2 (Brent Baude, 2020-04-16)
Rid ourselves of libpod references in the v2 client.

Signed-off-by: Brent Baude <bbaude@redhat.com>

* Add version to podman info command (Daniel J Walsh, 2020-04-15)
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

* Add basic structure of output for APIv2 pod inspect (Matthew Heon, 2020-04-15)
This will replace the structs in use in libpod, which cannot be used as they are also directly involved in the database representation of pods and cannot be moved out of Libpod.

Signed-off-by: Matthew Heon <mheon@redhat.com>

* refactor info (Brent Baude, 2020-04-06)
The current implementation of info, while typed, is very loosely so. We need stronger types for our APIv2 implementation and bindings.

Signed-off-by: Brent Baude <bbaude@redhat.com>

* v2podman attach and exec (Brent Baude, 2020-04-05)
Add the ability to attach to a running container. The tunnel side of this is not enabled yet, as we still have work to do on the endpoints and plumbing.

Add the ability to exec a command in a running container. The tunnel side is also being deferred for the same reason.

Signed-off-by: Brent Baude <bbaude@redhat.com>

* podmanv2 save image (Brent Baude, 2020-04-03)
Add the ability to save an image for podman v2.

Signed-off-by: Brent Baude <bbaude@redhat.com>

* Add support for containers.conf (Daniel J Walsh, 2020-03-27)
Vendor in the c/common config pkg for containers.conf.

Signed-off-by: Qi Wang <qiwan@redhat.com>
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

* use `pause:3.2` image for infra containers (Valentin Rothberg, 2020-03-27)
The `pause:3.1` image has wrong configs for non-amd64 architectures, as they all claim to be amd64. The issue has now been fixed in the latest `pause:3.2` [1].

[1] https://github.com/kubernetes/kubernetes/issues/87325

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>

* podmanv2 container inspect (Brent Baude, 2020-03-26)
Add the ability to inspect a container.

Signed-off-by: Brent Baude <bbaude@redhat.com>

* Implement APIv2 Exec Create and Inspect Endpoints (Matthew Heon, 2020-03-23)
Start and Resize require further implementation work.

Signed-off-by: Matthew Heon <matthew.heon@pm.me>

* Add structure for new exec session tracking to DB (Matthew Heon, 2020-03-18)
As part of the rework of exec sessions, we need to address them independently of containers. In the new API, we need to be able to fetch them by their ID, regardless of what container they are associated with. Unfortunately, our existing exec sessions are tied to individual containers; there's no way to tell what container a session belongs to and retrieve it without getting every exec session for every container.

This adds a pointer to the container an exec session is associated with to the database. The sessions themselves are still stored in the container.

Exec-related APIs have been restructured to work with the new database representation. The originally monolithic API has been split into a number of smaller calls to allow more fine-grained control of lifecycle. Support for legacy exec sessions has been retained, but in a deprecated fashion; we should remove this in a few releases.

Signed-off-by: Matthew Heon <matthew.heon@pm.me>

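A sketch of the back-pointer idea: sessions carry the ID of their owning container, and an index lets them be fetched by session ID alone (all names, and the map standing in for the database, are illustrative):

    package main

    import "fmt"

    type ExecSession struct {
        ID          string
        ContainerID string // back-pointer stored in the database
        PID         int
    }

    // sessionIndex maps session ID to owning container ID, so a session
    // can be found without scanning every container's sessions.
    var sessionIndex = map[string]string{}

    func registerSession(s *ExecSession) {
        sessionIndex[s.ID] = s.ContainerID
    }

    func main() {
        registerSession(&ExecSession{ID: "exec1", ContainerID: "ctr42"})
        fmt.Println(sessionIndex["exec1"]) // ctr42
    }
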
* Populate ExecSession with all required fields (Matthew Heon, 2020-03-18)
As part of the rework of exec sessions, we want to split Create and Start - and, as a result, we need to keep everything needed to start exec sessions in the struct, not just the bare minimum for tracking running ones.

Signed-off-by: Matthew Heon <matthew.heon@pm.me>

* Revert "Exec: use ErrorConmonRead"Matthew Heon2020-03-09
| | | | | | | | | This reverts commit d3d97a25e8c87cf741b2e24ac01ef84962137106. This does not resolve the issues we expected it would, and has some unexpected side effects with the upcoming exec rework. Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* Exec: use ErrorConmonRead (Peter Hunt, 2020-03-03)
Before, we were using -1 as a bogus value in podman to signify something went wrong when reading from a conmon pipe. However, conmon uses negative values to indicate the runtime failed and to return the runtime's exit code. Instead, we should use a bogus value that is actually bogus. Define that value in the define package as MinInt32 (-1<<31 - 1), which is outside of the range of possible pids (-1<<31).

Signed-off-by: Peter Hunt <pehunt@redhat.com>

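A sketch of the sentinel and its intended use; the constant follows the definition quoted above, while the surrounding code is illustrative:

    package main

    import "fmt"

    // MinInt32, per the commit, is a value no pid or runtime exit code
    // read from the conmon pipe can ever take.
    const MinInt32 int64 = -1<<31 - 1

    // readConmonPipe stands in for reading a value from the conmon pipe;
    // MinInt32 means "nothing was read", leaving other negative values
    // free to carry runtime exit codes.
    func readConmonPipe(gotData bool) int64 {
        if !gotData {
            return MinInt32
        }
        return 0
    }

    func main() {
        if v := readConmonPipe(false); v == MinInt32 {
            fmt.Println("bogus sentinel: no data from conmon")
        }
    }
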
* Add basic deadlock detection for container start/remove (Matthew Heon, 2020-02-24)
We can easily tell if we're going to deadlock by comparing lock IDs before actually taking the lock. Add a few checks for this in common places where deadlocks might occur.

This does not yet cover pod operations, where detection is more difficult (and costly) due to the number of locks being involved being higher than 2.

Also, add some error wrapping on the Podman side, so we can tell people to use `system renumber` when it occurs.

Signed-off-by: Matthew Heon <matthew.heon@pm.me>

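Comparing lock IDs up front is cheap because no lock is taken; a sketch of the two-object check, assuming locks are identified by integer IDs as described:

    package main

    import (
        "errors"
        "fmt"
    )

    var errWillDeadlock = errors.New(
        "objects share a lock; `podman system renumber` may fix this")

    type lockable interface{ LockID() uint32 }

    // checkDeadlock refuses to proceed when two objects map to the same
    // underlying lock, since taking it twice would deadlock immediately.
    func checkDeadlock(a, b lockable) error {
        if a.LockID() == b.LockID() {
            return errWillDeadlock
        }
        return nil
    }

    type ctr struct{ lock uint32 }

    func (c ctr) LockID() uint32 { return c.lock }

    func main() {
        fmt.Println(checkDeadlock(ctr{lock: 7}, ctr{lock: 7}))
    }
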
* APIv2 review corrections #3 (Brent Baude, 2020-01-25)
The third pass of corrections for the APIv2.

Signed-off-by: Brent Baude <bbaude@redhat.com>

* Add ContainerStateRemoving (Matthew Heon, 2019-11-19)
When Libpod removes a container, there is the possibility that removal will not fully succeed. The most notable problems are storage issues, where the container cannot be removed from c/storage. When this occurs, we were faced with a choice. We can keep the container in the state, appearing in `podman ps` and available for other API operations, but likely unable to do any of them as it's been partially removed. Or we can remove it very early and clean up after it's already gone.

We have, until now, used the second approach. The problem that arises is intermittent problems removing storage. We end up removing a container, failing to remove its storage, and ending up with a container permanently stuck in c/storage that we can't remove with the normal Podman CLI, can't use the name of, and generally can't interact with. A notable cause is when Podman is hit by a SIGKILL midway through removal, which can consistently cause `podman rm` to fail to remove storage.

We now add a new state for containers that are in the process of being removed, ContainerStateRemoving. We set this at the beginning of the removal process. It notifies Podman that the container cannot be used anymore, but preserves it in the DB until it is fully removed. This will allow Remove to be run on these containers again, which should successfully remove storage if it fails.

Fixes #3906

Signed-off-by: Matthew Heon <mheon@redhat.com>

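A sketch of where a Removing state fits into a container status enum; ContainerStateRemoving comes from the commit, while the neighboring states are typical but assumed:

    package main

    import "fmt"

    type ContainerStatus int

    const (
        ContainerStateUnknown ContainerStatus = iota
        ContainerStateConfigured
        ContainerStateRunning
        ContainerStateStopped
        // ContainerStateRemoving: removal has begun. The container can
        // no longer be used, but it stays in the DB until removal fully
        // succeeds, so a failed removal can simply be retried.
        ContainerStateRemoving
    )

    // usable gates normal operations on containers mid-removal.
    func usable(s ContainerStatus) bool {
        return s != ContainerStateRemoving
    }

    func main() {
        fmt.Println(usable(ContainerStateRemoving)) // false
    }
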
* add libpod/config (Valentin Rothberg, 2019-10-31)
Refactor the `RuntimeConfig` along with related code from libpod into libpod/config. Note that this is a first step of consolidating code into more coherent packages to make the code more maintainable and less prone to regressions in the long run.

Some libpod definitions were moved to `libpod/define` to resolve circular dependencies.

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>

* Ensure volumes can be removed when they fail to unmount (Matthew Heon, 2019-10-14)
Also, ensure that we don't try to mount them without root - it appears that it can somehow not error and report that mount was successful when it clearly did not succeed, which can induce this case.

We reuse the `--force` flag to indicate that a volume should be removed even after unmount errors. It seems fairly natural to expect that --force will remove a volume that is otherwise presenting problems.

Finally, ignore EINVAL on unmount - if the mount point no longer exists our job is done.

Fixes: #4247
Fixes: #4248

Signed-off-by: Matthew Heon <matthew.heon@pm.me>

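The EINVAL rule translates to treating that errno as success during teardown; a sketch:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/sys/unix"
    )

    // unmountVolume unmounts path, treating EINVAL (path exists but is
    // not a mount point) as success: our job is already done.
    func unmountVolume(path string) error {
        if err := unix.Unmount(path, 0); err != nil && err != unix.EINVAL {
            return err
        }
        return nil
    }

    func main() {
        dir, _ := os.MkdirTemp("", "vol")
        defer os.RemoveAll(dir)
        fmt.Println(unmountVolume(dir)) // <nil>: EINVAL is ignored
    }
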
* rm: add containers eviction with `rm --force` (Marco Vedovati, 2019-09-25)
Add ability to evict a container when it becomes unusable. This may happen when the host setup changes after a container creation, making it impossible for that container to be used or removed. Evicting a container is done using the `rm --force` command.

Signed-off-by: Marco Vedovati <mvedovati@suse.com>

* Fix exit code failure (Daniel J Walsh, 2019-09-17)
Be less precise on the exit code, and log the exit code to the journal when it fails.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

* Use exit code constants (Daniel J Walsh, 2019-09-12)
We have leaked exit code numbers all over the code; this patch replaces the numbers with constants.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

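A sketch of gathering the scattered numbers into named constants; the values follow common Docker/Podman conventions (125/126/127), but treat them as assumptions here:

    package main

    import (
        "fmt"
        "os"
    )

    const (
        ExecErrorCodeGeneric      = 125 // podman itself failed
        ExecErrorCodeCannotInvoke = 126 // command found, not invocable
        ExecErrorCodeNotFound     = 127 // command not found
    )

    func main() {
        fmt.Println("exiting with a named constant instead of a literal")
        os.Exit(ExecErrorCodeGeneric)
    }
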
* Add support for launching containers without CGroups (Matthew Heon, 2019-09-10)
This is mostly used with Systemd, which really wants to manage CGroups itself when managing containers via unit file.

Signed-off-by: Matthew Heon <matthew.heon@pm.me>

* Correctly report errors on unmounting SHM (Matthew Heon, 2019-09-05)
When we fail to remove a container's SHM, that's an error, and we need to report it as such. This may be part of our lingering storage woes.

Also, remove MNT_DETACH. It may be another cause of the storage removal failures.

Signed-off-by: Matthew Heon <matthew.heon@pm.me>