path: root/libpod/define
* Implement Secrets (Ashley Cui, 2021-02-09)
  Implement podman secret create, inspect, ls, rm. Implement podman run/create --secret.
  Secrets are blobs of data that are sensitive. Currently, the only secret driver supported is filedriver, which means creating a secret stores it base64-encoded and unencrypted in a file. After creating a secret, a user can use the --secret flag to expose the secret inside the container at /run/secrets/[secretname]. This secret will not be committed to an image on a podman commit.
  Signed-off-by: Ashley Cui <acui@redhat.com>
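For illustration, a minimal usage sketch of the commands named above; the secret name, file path, and images are placeholder assumptions:
```
# Create a secret from a local file (name and path are examples)
printf 'hunter2' > ./dbpass.txt
podman secret create dbpass ./dbpass.txt

# List and inspect it
podman secret ls
podman secret inspect dbpass

# Expose it inside a container at /run/secrets/dbpass
podman run --rm --secret dbpass alpine cat /run/secrets/dbpass

# Remove it when no longer needed
podman secret rm dbpass
```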
* Fix per review request (Matej Vasek, 2021-02-04)
  Signed-off-by: Matej Vasek <mvasek@redhat.com>
* Initial implementation of volume plugins (Matthew Heon, 2021-01-14)
  This implements support for mounting and unmounting volumes backed by volume plugins. Support for actually retrieving plugins requires a pull request to land in containers.conf and then that to be vendored, and as such is not yet ready. Given this, this code is only compile tested. However, the code for everything past retrieving the plugin has been written - there is support for creating, removing, mounting, and unmounting volumes, which should allow full functionality once the c/common PR is merged.
  A major change is the signature of the MountPoint function for volumes, which now, by necessity, returns an error. Named volumes managed by a plugin do not have a mountpoint we control; instead, it is managed entirely by the plugin. As such, we need to cache the path in the DB, and calls to retrieve it now need to access the DB (and may fail as such).
  Notably absent is support for SELinux relabelling and chowning these volumes. Given that we don't manage the mountpoint for these volumes, I am extremely reluctant to try and modify it - we could easily break the plugin trying to chown or relabel it.
  Also, we had no less than *5* separate implementations of inspecting a volume floating around in pkg/infra/abi and pkg/api/handlers/libpod. And none of them used volume.Inspect(), the only correct way of inspecting volumes. Remove them all and consolidate to using the correct way. Compat API is likely still doing things the wrong way, but that is an issue for another day.
  Fixes #4304
  Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* Merge pull request #8906 from vrothberg/fix-8501 (OpenShift Merge Robot, 2021-01-14)
  container stop: release lock before calling the runtime
* container stop: release lock before calling the runtime (Valentin Rothberg, 2021-01-14)
  Podman defers stopping the container to the runtime, which can take some time. Keeping the lock while waiting for the runtime to complete the stop procedure prevents other commands from acquiring the lock, as shown in #8501.
  To improve the user experience, release the lock before invoking the runtime, and re-acquire the lock when the runtime is finished. Also introduce an intermediate "stopping" state to properly distinguish it from "stopped" containers etc.
  Fixes: #8501
  Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
* Merge pull request #8950 from mheon/exorcise_driver (OpenShift Merge Robot, 2021-01-12)
  Exorcise Driver code from libpod/define
* Exorcise Driver code from libpod/define (Matthew Heon, 2021-01-12)
  The libpod/define code should not import any large dependencies, as it is intended to be structures and definitions only. It included the libpod/driver package for information on the storage driver, though, which brought in all of c/storage. Split the driver package so that define has the struct, and thus does not need to import Driver. And simplify the driver code while we're at it.
  Signed-off-by: Matthew Heon <mheon@redhat.com>
* Expose security attribute errors with their own messages (Juan Antonio Osorio Robles, 2021-01-12)
  This creates error objects for runtime errors that might come from the runtime, thus indicating to users that the place to debug should be in the security attributes of the container.
  When creating a container with a SELinux label that doesn't exist, we get a fairly cryptic error message:
  ```
  $ podman run --security-opt label=type:my_container.process -it fedora bash
  Error: OCI runtime error: write file `/proc/thread-self/attr/exec`: Invalid argument
  ```
  This instead handles any errors coming from LSM's `/proc` API and enhances the error message with a relevant indicator that it's related to the container's security attributes. A sample run looks as follows:
  ```
  $ bin/podman run --security-opt label=type:my_container.process -it fedora bash
  Error: `/proc/thread-self/attr/exec`: OCI runtime error: unable to assign security attribute
  ```
  With `debug` log level enabled it would be:
  ```
  Error: write file `/proc/thread-self/attr/exec`: Invalid argument: OCI runtime error: unable to assign security attribute
  ```
  Note that these errors wrap ErrOCIRuntime, so it's still possible to compare these errors with `errors.Is/errors.As`. One advantage of this approach is that we could start handling these errors in a more efficient manner in the future, e.g. if a SELinux label doesn't exist (yet), we could retry until it becomes available.
  Signed-off-by: Juan Antonio Osorio Robles <jaosorior@redhat.com>
* Spelling (Josh Soref, 2020-12-22)
  Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
* Add Security information to podman info (Daniel J Walsh, 2020-12-22)
  When debugging issues, it would be helpful to know the security settings of the system running into the problem. Adding security info to `podman info` is also useful to users.
  Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* Add LogSize to container inspect (Daniel J Walsh, 2020-12-15)
  Other log options are available, so we need to add the ability to look up LogSize.
  Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* add network connect|disconnect compat endpoints (baude, 2020-11-19)
  this enables the ability to connect and disconnect a container from a given network. it is only for the compatibility layer. some code had to be refactored to avoid circular imports.
  additionally, tests are being deferred temporarily due to some incompatibility/bug in either docker-py or our stack.
  Signed-off-by: baude <bbaude@redhat.com>
* Fix podman pod inspect show wrong MAC string (zhangguanzhang, 2020-11-18)
  Signed-off-by: zhangguanzhang <zhangguanzhang@qq.com>
* Add support for network connect / disconnect to DB (Matthew Heon, 2020-11-11)
  Convert the existing network aliases set/remove code to network connect and disconnect. We can no longer modify aliases for an existing network, but we can add and remove entire networks. As part of this, we need to add a new function to retrieve current aliases the container is connected to (we had a table for this as of the first aliases PR, but it was not externally exposed).
  At the same time, remove all deconflicting logic for aliases. Docker does absolutely no checks of this nature, and allows two containers to have the same aliases, aliases that conflict with container names, etc - it's just left to DNS to return all the IP addresses, and presumably we round-robin from there? Most tests for the existing code had to be removed because of this.
  Convert all uses of the old container config.Networks field, which previously included all networks in the container, to use the new DB table. This ensures we actually get an up-to-date list of in-use networks.
  Also, add network aliases to the output of `podman inspect`.
  Signed-off-by: Matthew Heon <matthew.heon@pm.me>
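As a hedged illustration of what connect/disconnect means in practice (network and container names are placeholders; the CLI front end for this landed separately from the DB work described above):
```
# Create a container attached to one network, then attach/detach a second one
podman network create net-a
podman network create net-b
podman run -d --name web --network net-a nginx
podman network connect net-b web      # container is now reachable on both networks
podman network disconnect net-b web   # detach it again
```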
* Merge pull request #8156 from mheon/add_net_aliases_db (OpenShift Merge Robot, 2020-11-04)
  Add network aliases for containers to DB
* Add network aliases for containers to DB (Matthew Heon, 2020-10-27)
  This adds the database backend for network aliases. Aliases are additional names for a container that are used with the CNI dnsname plugin - the container will be accessible by these names in addition to its name. Aliases are allowed to change over time as the container connects to and disconnects from networks.
  Aliases are implemented as another bucket in the database to register all aliases, plus two buckets for each container (one to hold connected CNI networks, a second to hold its aliases). The aliases are only unique per-network, so the global and per-container aliases buckets have a sub-bucket for each CNI network that has aliases, and the aliases are stored within that sub-bucket. Aliases are formatted as alias (key) to container ID (value) in both cases.
  Three DB functions are defined for aliases: retrieving current aliases for a given network, setting aliases for a given network, and removing all aliases for a given network.
  Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* Merge pull request #8174 from rhatdan/errors (OpenShift Merge Robot, 2020-10-29)
  Podman often reports OCI Runtime does not exist, even if it does
* Podman often reports OCI Runtime does not exist, even if it does (Daniel J Walsh, 2020-10-29)
  When the OCI Runtime tries to set certain settings in cgroups it can get the error "no such file or directory"; the wrapper ends up reporting a bogus error like:
  ```
  Request Failed(Internal Server Error): open io.max: No such file or directory: OCI runtime command not found error
  {"cause":"OCI runtime command not found error","message":"open io.max: No such file or directory: OCI runtime command not found error","response":500}
  ```
  On first reading of this, you would think the OCI Runtime (crun or runc) was not found. But the error is actually reporting
  "message":"open io.max: No such file or directory"
  which is what we want the user to concentrate on.
  Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* Merge pull request #8178 from rhatdan/exists (OpenShift Merge Robot, 2020-10-29)
  NewFromLocal can return multiple images
* NewFromLocal can return multiple images (Daniel J Walsh, 2020-10-28)
  If you use additional stores and pull the same image into writable stores, you can end up with the situation where you have the same image twice. This causes image exists to return the wrong error. It should return true in this situation rather than an error.
  Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* Add a Degraded state to pods (Matthew Heon, 2020-10-21)
  Make a distinction between pods that are completely running (all containers running) and those that have some containers going, but not all, by introducing an intermediate state between Stopped and Running called Degraded. A Degraded pod has at least one, but not all, containers running; a Running pod has all containers running.
  First step to a solution for #7213.
  Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* Store cgroup manager on a per-container basis (Matthew Heon, 2020-10-08)
  When we create a container, we assign a cgroup parent based on the current cgroup manager in use. This parent is only usable with the cgroup manager the container is created with, so if the default cgroup manager is later changed or overridden, the container will not be able to start.
  To solve this, store the cgroup manager that created the container in container configuration, so we can guarantee a container with a systemd cgroup parent will always be started with systemd cgroups.
  Unfortunately, this is very difficult to test in CI, due to the fact that we hard-code cgroup manager on all invocations of Podman in CI.
  Fixes #7830
  Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* Fix handling of remove of bogus volumes, networks and Pods (Daniel J Walsh, 2020-09-29)
  In podman container rm and podman image rm, the commands exit with error code 1 if the object does not exist. This PR implements similar functionality for volumes, networks, and pods. Similarly, if volumes or networks are in use by other containers, the commands return exit code 2.
  Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
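A rough sketch of the exit-code behavior described above (object names are placeholders, and the exact error text may differ):
```
podman volume rm no-such-volume; echo $?     # object does not exist -> exit code 1
podman network rm net-still-in-use; echo $?  # network used by a container -> exit code 2
```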
* Merge pull request #7783 from ashley-cui/slirp (OpenShift Merge Robot, 2020-09-29)
  Add support for slirp network for pods
* Add support for slirp network for pods (Ashley Cui, 2020-09-25)
  Add the flag --network=slirp4netns[options] for root and rootless pods.
  Signed-off-by: Ashley Cui <acui@redhat.com>
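For illustration, a sketch of the flag applied to a pod (pod name, port mapping, and image are assumptions):
```
# Create a rootless pod whose network is provided by slirp4netns
podman pod create --name web --network slirp4netns -p 8080:80
podman run -d --pod web nginx
```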
* Include cgroup manager in `podman info` output (Matthew Heon, 2020-09-22)
  This is very useful for debugging cgroups v2, especially on rootless - we need to ensure people are correctly using systemd cgroups in these cases.
  Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* Refactor version handling in cmd tree (Jhon Honce, 2020-09-18)
  * Move from simple string to semver objects
  * Change client API Version from '1' to 2.0.0
  Signed-off-by: Jhon Honce <jhonce@redhat.com>
* Fix up errors found by codespell (Daniel J Walsh, 2020-09-11)
  Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* Show c/storage (Buildah/CRI-O) containers in ps (Daniel J Walsh, 2020-09-09)
  The `podman ps --all` command will now show containers that are under the control of other c/storage container systems and the new `ps --storage` option will show only containers that are in c/storage but are not controlled by libpod. In the below examples, the '*working-container' entries were created by Buildah.
  ```
  podman ps -a
  CONTAINER ID  IMAGE                             COMMAND  CREATED       STATUS                   PORTS  NAMES
  9257ef8c786c  docker.io/library/busybox:latest  ls /etc  8 hours ago   Exited (0) 8 hours ago          gifted_jang
  d302c81856da  docker.io/library/busybox:latest  buildah  30 hours ago  storage                         busybox-working-container
  7a5a7b099d33  localhost/tom:latest              ls -alF  30 hours ago  Exited (0) 30 hours ago         hopeful_hellman
  01d601fca090  localhost/tom:latest              ls -alf  30 hours ago  Exited (1) 30 hours ago         determined_panini
  ee58f429ff26  localhost/tom:latest              buildah  33 hours ago  storage                         alpine-working-container

  podman ps --external
  CONTAINER ID  IMAGE                             COMMAND  CREATED       STATUS    PORTS  NAMES
  d302c81856da  docker.io/library/busybox:latest  buildah  30 hours ago  external         busybox-working-container
  ee58f429ff26  localhost/tom:latest              buildah  33 hours ago  external         alpine-working-container
  ```
  Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>
  Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* podman: add option --cgroup-conf (Giuseppe Scrivano, 2020-08-21)
  it allows manually tweaking the configuration for cgroup v2.
  we will expose some of the options in future as single options (e.g. the new memory knobs), but for now add the more generic --cgroup-conf mechanism for maximum control over the cgroup configuration.
  OCI specs change: https://github.com/opencontainers/runtime-spec/pull/1040
  Requires: https://github.com/containers/crun/pull/459
  Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
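A minimal sketch, assuming a cgroup v2 host; the memory.high value is an arbitrary example:
```
# Pass a cgroup v2 knob straight through to the container's cgroup;
# the cat should print the configured value when run on a cgroup v2 host
podman run --rm --cgroup-conf memory.high=1073741824 fedora cat /sys/fs/cgroup/memory.high
```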
* error when adding container to pod with network information (Brent Baude, 2020-08-21)
  because a pod's network information is dictated by the infra container at creation, a container cannot be created with network attributes. this has been difficult for users to understand. we now return an error when a container is being created inside a pod and passes any of the following attributes:
  * static IP (v4 and v6)
  * static mac
  * ports -p (i.e. -p 8080:80)
  * exposed ports (i.e. 222-225)
  * publish ports from image -P
  Signed-off-by: Brent Baude <bbaude@redhat.com>
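A hedged sketch of the new behavior (pod name and image are placeholders; the exact error message is not quoted here):
```
podman pod create --name mypod -p 8080:80
podman run --pod mypod -p 9090:90 nginx   # rejected: port mappings belong to the pod's infra container
podman run --pod mypod nginx              # fine: no network attributes on the container
```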
* API returns 500 in case network is not found instead of 404 (zhangguanzhang, 2020-08-02)
  Signed-off-by: zhangguanzhang <zhangguanzhang@qq.com>
* Ensure libpod/define does not include libpod/image (Matthew Heon, 2020-07-31)
  The define package under Libpod is intended to be an extremely minimal package, including constants and very little else. However, as a result of some legacy code, it was dragging in all of libpod/image (and, less significantly, the util package). Fortunately, this was just to ensure that error constants were not duplicated, and there's nothing preventing us from importing in the other direction and keeping libpod/define free of dependencies.
  Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* Switch all references to github.com/containers/libpod -> podman (Daniel J Walsh, 2020-07-28)
  Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* Add --umask flag for create, run (Ashley Cui, 2020-07-21)
  --umask sets the umask inside the container. Defaults to 0022.
  Co-authored-by: Daniel J Walsh <dwalsh@redhat.com>
  Signed-off-by: Ashley Cui <acui@redhat.com>
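For illustration, a small sketch (the umask value and image are examples):
```
# Files created inside the container honor the requested umask (here 0077 -> mode 600)
podman run --rm --umask 0077 fedora sh -c 'umask && touch /tmp/f && stat -c %a /tmp/f'
```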
* abi: set default umask and rlimits (Giuseppe Scrivano, 2020-07-17)
  the code got lost in the migration to podman 2.0, reintroduce it.
  Closes: https://github.com/containers/podman/issues/6989
  Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
* Merge pull request #6956 from mheon/add_ports_to_pod_inspect (OpenShift Merge Robot, 2020-07-15)
  Include infra container information in `pod inspect`
* Include infra container information in `pod inspect` (Matthew Heon, 2020-07-14)
  We had a field for this in the inspect data, but it was never being populated. Because of this, `podman pod inspect` stopped showing port bindings (and other infra container settings). Add code to populate the infra container inspect data, and add a test to ensure we don't regress again.
  Signed-off-by: Matthew Heon <mheon@redhat.com>
* Add SystemdMode to inspect for containers (Matthew Heon, 2020-07-14)
  This allows us to determine if the container auto-detected that systemd was in use, and correctly activated systemd integration. Use this to wire up some integration tests to verify that systemd integration is working properly.
  Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* Implement --sdnotify cmdline option to control sd-notify behavior (Joseph Gooch, 2020-07-06)
  --sdnotify container|conmon|ignore
  With "conmon", we send the MAINPID, and clear the NOTIFY_SOCKET so the OCI runtime doesn't pass it into the container. We also advertise "ready" when the OCI runtime finishes, to advertise the service as ready.
  With "container", we send the MAINPID, and leave the NOTIFY_SOCKET so the OCI runtime passes it into the container for initialization, and let the container advertise further metadata. This is the default, which is closest to the behavior podman has done in the past.
  The "ignore" option removes NOTIFY_SOCKET from the environment, so neither podman nor any child processes will talk to systemd.
  This removes the need for hardcoded CID and PID files in the command line, and the PIDFile directive, as the pid is advertised directly through sd-notify.
  Signed-off-by: Joseph Gooch <mrwizard@dok.org>
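A sketch of the three modes described above; the image name is a placeholder:
```
podman run -d --sdnotify=conmon registry.example.com/myapp     # conmon sends MAINPID and "ready"
podman run -d --sdnotify=container registry.example.com/myapp  # the containerized app talks to systemd itself (default)
podman run -d --sdnotify=ignore registry.example.com/myapp     # NOTIFY_SOCKET is stripped entirely
```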
* Merge pull request #6836 from ashley-cui/tzlibpod (OpenShift Merge Robot, 2020-07-06)
  Add --tz flag to create, run
* Add --tz flag to create, run (Ashley Cui, 2020-07-02)
  The --tz flag sets the timezone inside the container. It can be set to an IANA timezone as well as `local` to match the host machine.
  Signed-off-by: Ashley Cui <acui@redhat.com>
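For illustration (timezone values and image are examples):
```
podman run --rm --tz Europe/Paris fedora date   # an IANA timezone name
podman run --rm --tz local fedora date          # match the host machine's timezone
```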
* move go module to v2 (Valentin Rothberg, 2020-07-06)
  With the advent of Podman 2.0.0 we crossed the magical barrier of go modules. While we were able to continue importing all packages inside of the project, the project could not be vendored anymore from the outside.
  Move the go module to the new major version and change all imports to `github.com/containers/libpod/v2`. The renaming of the imports was done via `gomove` [1].
  [1] https://github.com/KSubedi/gomove
  Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
* Print errors from individual containers in pods (Matthew Heon, 2020-07-02)
  The infra/abi code for pods was written in a flawed way, assuming that the map[string]error containing individual container errors was only set when the global error for the pod function was nil; that is not accurate, and we are actually *guaranteed* to set the global error when any individual container errors. Thus, we'd never actually include individual container errors, because the infra code assumed that err being set meant everything failed and no container operations were attempted.
  We were originally setting the cause of the error to something nonsensical ("container already exists"), so I made a new error indicating that some containers in the pod failed. We can then ignore that error when building the report on the pod operation and actually return errors from individual containers.
  Unfortunately, this exposed another weakness of the infra code, which was discarding the container IDs. Errors from individual containers are not guaranteed to identify which container they came from, hence the use of map[string]error in the Pod API functions. Rather than restructuring the structs we return from pkg/infra, I just wrapped the returned errors with a message including the ID of the container.
  Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* Merge pull request #6746 from vrothberg/untag (OpenShift Merge Robot, 2020-06-24)
  podman untag: error if tag doesn't exist
* podman untag: error if tag doesn't exist (Valentin Rothberg, 2020-06-24)
  Throw an error if a specified tag does not exist. Also make sure that the user input is normalized as we already do for `podman tag`.
  To prevent regressions, add a set of end-to-end and system tests. Last but not least, update the docs and add bash completions.
  Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
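A hedged sketch of the changed behavior (image and tag names are placeholders):
```
podman tag alpine localhost/myalpine:v1
podman untag alpine localhost/myalpine:v1   # succeeds: the tag exists
podman untag alpine localhost/myalpine:v2   # now errors out instead of silently succeeding
```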
* Reformat inspect network settings (Qi Wang, 2020-06-23)
  Reformat the ports in inspect network settings to be compatible with docker inspect.
  Close #5380
  Signed-off-by: Qi Wang <qiwan@redhat.com>
* Do not share container log driver for exec (Matthew Heon, 2020-06-17)
  When the container uses journald logging, we don't want to automatically use the same driver for its exec sessions. If we do, we will pollute the journal (particularly in the case of healthchecks) with large amounts of undesired logs. Instead, force exec session logs to file for now; we can add a log-driver flag later (we'll probably want to add a `podman logs` command that reads exec session logs at the same time).
  As part of this, add support for the new 'none' log driver in Conmon. It will be the default log driver for exec sessions, and can be optionally selected for containers.
  Great thanks to Joe Gooch (mrwizard@dok.org) for adding support to Conmon for a null log driver, and wiring it in here.
  Fixes #6555
  Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* Turn on More linters (Daniel J Walsh, 2020-06-15)
  - misspell
  - prealloc
  - unparam
  - nakedret
  Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* pod config: add a `CreateCommand` field (Valentin Rothberg, 2020-06-11)
  Add a `CreateCommand` field to the pod config which includes the entire `os.Args` at pod-creation. Similar to the already existing field in a container config, we need this information to properly generate generic systemd unit files for pods. It's a prerequisite to support the `--new` flag for pods.
  Also add the `CreateCommand` to the pod-inspect data, which can come in handy for debugging, general inspection and certainly for the tests that are added along with the other changes.
  Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
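As a sketch of how the stored command surfaces (pod name, port, and the inspect format template are assumptions; the field name comes from the commit above):
```
podman pod create --name mypod -p 8080:80
podman pod inspect mypod --format '{{.CreateCommand}}'   # should show the full pod-creation command line
```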