path: root/libpod/boltdb_state_internal.go
Commit history (message, author, date):
* Add --requires flag to podman run/create (Matthew Heon, 2021-04-06)
  Podman has, for a long time, had an internal concept of dependency management, used mainly to ensure that pod infra containers are started before any other container in the pod. We also have the ability to recursively start these dependencies, which we use to ensure that `podman start` on a container in a pod will not fail because the infra container is stopped. We have not, however, exposed these via the command line until now.
  Add a `--requires` flag to `podman run` and `podman create` to allow users to manually specify dependency containers. These containers must be running before the container will start. Also, make recursive starting the default with `podman start` so we can start these containers and their dependencies easily.
  Fixes #9250
  Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* Removing a non-existing container via the API should return 404 (Daniel J Walsh, 2021-03-10)
  Currently we were over-wrapping the error returned from removal of a non-existing container:
    $ podman rm bogus -f
    Error: failed to evict container: "": failed to find container "bogus" in state: no container with name or ID bogus found: no such container
  Removing the extra wrapping gets us to:
    ./bin/podman rm bogus -f
    Error: no container with name or ID "bogus" found: no such container
  Finally, also added quotes around the container name to help make it stand out when you get an error; currently it gets lost in the error text.
  Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* bump go module to v3 (Valentin Rothberg, 2021-02-22)
  We missed bumping the go module, so let's do it now :)
  - Automated go code with github.com/sirkon/go-imports-rename
  - Manually via `vgrep podman/v2` the rest
  Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
* Initial implementation of volume plugins (Matthew Heon, 2021-01-14)
  This implements support for mounting and unmounting volumes backed by volume plugins. Support for actually retrieving plugins requires a pull request to land in containers.conf and then be vendored, and as such is not yet ready. Given this, this code is only compile tested. However, the code for everything past retrieving the plugin has been written - there is support for creating, removing, mounting, and unmounting volumes, which should allow full functionality once the c/common PR is merged.
  A major change is the signature of the MountPoint function for volumes, which now, by necessity, returns an error. Named volumes managed by a plugin do not have a mountpoint we control; instead, it is managed entirely by the plugin. As such, we need to cache the path in the DB, and calls to retrieve it now need to access the DB (and may fail as such).
  Notably absent is support for SELinux relabelling and chowning these volumes. Given that we don't manage the mountpoint for these volumes, I am extremely reluctant to try and modify it - we could easily break the plugin trying to chown or relabel it.
  Also, we had no less than *5* separate implementations of inspecting a volume floating around in pkg/infra/abi and pkg/api/handlers/libpod. And none of them used volume.Inspect(), the only correct way of inspecting volumes. Remove them all and consolidate to using the correct way. The compat API is likely still doing things the wrong way, but that is an issue for another day.
  Fixes #4304
  Signed-off-by: Matthew Heon <matthew.heon@pm.me>
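
The MountPoint signature change described above is easiest to see in a sketch. This is not the libpod implementation - the types and the refreshState hook below are illustrative stand-ins for the real volume struct and its DB lookup - but it shows why retrieving a plugin-managed mount point can now fail:

```go
// Hypothetical sketch: a volume whose mount point may be plugin-managed,
// so retrieving it can require a DB read and can fail.
package main

import (
	"errors"
	"fmt"
)

// volumeState mirrors the idea of caching a plugin-provided mount point.
type volumeState struct {
	MountPoint string // path reported by the plugin, cached in the DB
}

type Volume struct {
	name         string
	usesPlugin   bool
	localPath    string
	refreshState func() (*volumeState, error) // stands in for a DB lookup
}

// MountPoint now returns an error: plugin-backed volumes must consult the
// cached state, and that lookup can fail.
func (v *Volume) MountPoint() (string, error) {
	if !v.usesPlugin {
		return v.localPath, nil
	}
	st, err := v.refreshState()
	if err != nil {
		return "", fmt.Errorf("retrieving state of volume %s: %w", v.name, err)
	}
	if st.MountPoint == "" {
		return "", errors.New("volume is not mounted by its plugin")
	}
	return st.MountPoint, nil
}

func main() {
	v := &Volume{
		name:       "myvol",
		usesPlugin: true,
		refreshState: func() (*volumeState, error) {
			return &volumeState{MountPoint: "/run/plugin/myvol"}, nil
		},
	}
	path, err := v.MountPoint()
	fmt.Println(path, err)
}
```
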
* Add support for network connect / disconnect to DB (Matthew Heon, 2020-11-11)
  Convert the existing network aliases set/remove code to network connect and disconnect. We can no longer modify aliases for an existing network, but we can add and remove entire networks. As part of this, we need to add a new function to retrieve current aliases the container is connected to (we had a table for this as of the first aliases PR, but it was not externally exposed).
  At the same time, remove all deconflicting logic for aliases. Docker does absolutely no checks of this nature, and allows two containers to have the same aliases, aliases that conflict with container names, etc - it's just left to DNS to return all the IP addresses, and presumably we round-robin from there? Most tests for the existing code had to be removed because of this.
  Convert all uses of the old container config.Networks field, which previously included all networks in the container, to use the new DB table. This ensures we actually get an up-to-date list of in-use networks.
  Also, add network aliases to the output of `podman inspect`.
  Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* Merge pull request #8156 from mheon/add_net_aliases_db (OpenShift Merge Robot, 2020-11-04)
|\  Add network aliases for containers to DB
| * Add tests for network aliases (Matthew Heon, 2020-11-03)
|   As part of this, we need two new functions, for retrieving all aliases for a network and removing all aliases for a network, both required to test. Also, rework handling for some things the tests discovered were broken (notably conflicts between container name and existing aliases).
|   Signed-off-by: Matthew Heon <matthew.heon@pm.me>
| * Add network aliases for containers to DB (Matthew Heon, 2020-10-27)
|   This adds the database backend for network aliases. Aliases are additional names for a container that are used with the CNI dnsname plugin - the container will be accessible by these names in addition to its name. Aliases are allowed to change over time as the container connects to and disconnects from networks.
|   Aliases are implemented as another bucket in the database to register all aliases, plus two buckets for each container (one to hold connected CNI networks, a second to hold its aliases). The aliases are only unique per network, so the global and per-container aliases buckets have a sub-bucket for each CNI network that has aliases, and the aliases are stored within that sub-bucket. Aliases are formatted as alias (key) to container ID (value) in both cases.
|   Three DB functions are defined for aliases: retrieving current aliases for a given network, setting aliases for a given network, and removing all aliases for a given network.
|   Signed-off-by: Matthew Heon <matthew.heon@pm.me>
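
A rough bbolt sketch of the bucket layout described above (the bucket name, helper names, and schema details are illustrative, not the actual libpod code), showing per-network sub-buckets whose alias keys map to container IDs:

```go
package main

import (
	"fmt"
	"os"

	bolt "go.etcd.io/bbolt"
)

var aliasesBkt = []byte("aliases") // illustrative name, not the real bucket key

func setAlias(db *bolt.DB, network, alias, ctrID string) error {
	return db.Update(func(tx *bolt.Tx) error {
		all, err := tx.CreateBucketIfNotExists(aliasesBkt)
		if err != nil {
			return err
		}
		// One sub-bucket per CNI network keeps aliases unique per network only.
		netBkt, err := all.CreateBucketIfNotExists([]byte(network))
		if err != nil {
			return err
		}
		return netBkt.Put([]byte(alias), []byte(ctrID))
	})
}

func getAliases(db *bolt.DB, network string) (map[string]string, error) {
	out := map[string]string{}
	err := db.View(func(tx *bolt.Tx) error {
		all := tx.Bucket(aliasesBkt)
		if all == nil {
			return nil
		}
		netBkt := all.Bucket([]byte(network))
		if netBkt == nil {
			return nil
		}
		return netBkt.ForEach(func(k, v []byte) error {
			out[string(k)] = string(v)
			return nil
		})
	})
	return out, err
}

func main() {
	f, _ := os.CreateTemp("", "aliases-*.db")
	db, err := bolt.Open(f.Name(), 0o600, nil)
	if err != nil {
		panic(err)
	}
	defer db.Close()
	_ = setAlias(db, "mynet", "web", "abc123")
	aliases, _ := getAliases(db, "mynet")
	fmt.Println(aliases) // map[web:abc123]
}
```
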
* | libpod: clean paths before check (Giuseppe Scrivano, 2020-10-28)
|/  Clean the paths before checking whether their values differ from what is stored in the db.
    Closes: https://github.com/containers/podman/issues/8160
    Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
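
The fix amounts to normalizing both sides before comparing. A minimal sketch of the idea, using a hypothetical helper rather than the actual libpod validation function:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// checkPathMatch compares a configured path against the one stored in the DB,
// cleaning both first so that equivalent spellings (trailing slashes, "..",
// "//") do not trigger a spurious mismatch.
func checkPathMatch(name, configured, stored string) error {
	if filepath.Clean(configured) != filepath.Clean(stored) {
		return fmt.Errorf("%s %q does not match DB value %q", name, configured, stored)
	}
	return nil
}

func main() {
	// Trailing slash is harmless after cleaning: prints <nil>.
	fmt.Println(checkPathMatch("volume path",
		"/var/lib/containers/storage/volumes/",
		"/var/lib/containers/storage/volumes"))
}
```
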
* Re-create OCI runtimes by path when it is missing (Matthew Heon, 2020-10-20)
  When an OCI runtime is given by full path, we need to ensure we use the same runtime on subsequent use. Unfortunately, users are often not considerate enough to use the same `--runtime` flag every time they invoke the runtime - and if the runtime was not in containers.conf, that means we don't have it stored in the libpod Runtime. Fortunately, since we have the full path, we can initialize the OCI runtime for use at the point where we pull the container from the database.
  Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* Improve error message when creating a pod/ctr with the same name (Paul Holzinger, 2020-08-04)
  Check if there is a pod or container and return the appropriate error message, instead of blindly returning 'container exists' with `podman create` and 'pod exists' with `podman pod create`.
  Signed-off-by: Paul Holzinger <paul.holzinger@web.de>
* Switch all references to github.com/containers/libpod -> podman (Daniel J Walsh, 2020-07-28)
  Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* move go module to v2 (Valentin Rothberg, 2020-07-06)
  With the advent of Podman 2.0.0 we crossed the magical barrier of go modules. While we were able to continue importing all packages inside of the project, the project could not be vendored anymore from the outside.
  Move the go module to the new major version and change all imports to `github.com/containers/libpod/v2`. The renaming of the imports was done via `gomove` [1].
  [1] https://github.com/KSubedi/gomove
  Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
* Turn on More linters (Daniel J Walsh, 2020-06-15)
  - misspell
  - prealloc
  - unparam
  - nakedret
  Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* Fix two coverity issues (unchecked null return) (Matthew Heon, 2020-05-14)
  Theoretically these should never happen, but it never hurts to be sure and check. Add a check to one, make the other one a create-if-not-exist (it was just adding, not checking the contents).
  Signed-off-by: Matthew Heon <mheon@redhat.com>
* Update vendor of boltdb and containers/image (Daniel J Walsh, 2020-03-29)
  Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* Add support for containers.conf (Daniel J Walsh, 2020-03-27)
  Vendor in the c/common config pkg for containers.conf.
  Signed-off-by: Qi Wang <qiwan@redhat.com>
  Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* Add structure for new exec session tracking to DB (Matthew Heon, 2020-03-18)
  As part of the rework of exec sessions, we need to address them independently of containers. In the new API, we need to be able to fetch them by their ID, regardless of what container they are associated with. Unfortunately, our existing exec sessions are tied to individual containers; there's no way to tell what container a session belongs to and retrieve it without getting every exec session for every container.
  This adds a pointer to the container an exec session is associated with to the database. The sessions themselves are still stored in the container.
  Exec-related APIs have been restructured to work with the new database representation. The originally monolithic API has been split into a number of smaller calls to allow more fine-grained control of lifecycle. Support for legacy exec sessions has been retained, but in a deprecated fashion; we should remove this in a few releases.
  Signed-off-by: Matthew Heon <matthew.heon@pm.me>
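
An in-memory sketch of the new lookup direction (a plain map stands in for the DB bucket, and all names are illustrative): exec session IDs resolve directly to their owning container, instead of scanning every container's sessions:

```go
package main

import "fmt"

// execRegistry sketches the new lookup: exec session ID -> owning container ID,
// so a session can be resolved without scanning every container.
type execRegistry struct {
	sessionToCtr map[string]string
}

func newExecRegistry() *execRegistry {
	return &execRegistry{sessionToCtr: map[string]string{}}
}

// AddExecSession records which container owns a session.
func (r *execRegistry) AddExecSession(sessionID, ctrID string) {
	r.sessionToCtr[sessionID] = ctrID
}

// GetExecSessionContainer resolves a session ID to its container ID.
func (r *execRegistry) GetExecSessionContainer(sessionID string) (string, error) {
	ctr, ok := r.sessionToCtr[sessionID]
	if !ok {
		return "", fmt.Errorf("no exec session with ID %s found", sessionID)
	}
	return ctr, nil
}

// RemoveExecSession drops the pointer once the session is gone.
func (r *execRegistry) RemoveExecSession(sessionID string) {
	delete(r.sessionToCtr, sessionID)
}

func main() {
	reg := newExecRegistry()
	reg.AddExecSession("exec1", "ctrABC")
	fmt.Println(reg.GetExecSessionContainer("exec1"))
}
```
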
* make lint: enable gocritic (Valentin Rothberg, 2020-01-13)
  `gocritic` is a powerful linter that helps in preventing certain kinds of errors as well as enforcing a coding style.
  Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
* Add a MissingRuntime implementation (Matthew Heon, 2019-10-15)
  When a container is created with a given OCI runtime, but then it is uninstalled or removed from the configuration file, Libpod presently reacts very poorly. The EvictContainer code can potentially remove these containers, but we still can't see them in `podman ps` (aside from the massive logrus.Errorf messages they create).
  Providing a minimal OCI runtime implementation for missing runtimes allows us to behave better. We'll be able to retrieve containers from the database, though we still pop up an error for each missing runtime. For containers which are stopped, we can remove them as normal.
  Signed-off-by: Matthew Heon <matthew.heon@pm.me>
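
A toy sketch of the idea behind a MissingRuntime: a type that satisfies the runtime interface but errors on operations needing the real binary, while letting cleanup of stopped containers proceed. The interface and method set below are heavily reduced and illustrative, not libpod's actual OCIRuntime interface:

```go
package main

import (
	"errors"
	"fmt"
)

// ociRuntime is a minimal stand-in for the runtime interface; the real one is larger.
type ociRuntime interface {
	Name() string
	StartContainer(ctrID string) error
	DeleteContainer(ctrID string) error
}

// missingRuntime fills in for a runtime that was in use when a container was
// created but is no longer installed or configured.
type missingRuntime struct {
	name string
}

var errRuntimeMissing = errors.New("OCI runtime is missing from the system")

func (m *missingRuntime) Name() string { return m.name }

// Operations that need the real binary fail with a clear error...
func (m *missingRuntime) StartContainer(ctrID string) error {
	return fmt.Errorf("cannot start container %s: runtime %s: %w", ctrID, m.name, errRuntimeMissing)
}

// ...but operations that only touch on-disk state (e.g. deleting a stopped
// container) can be treated as a no-op so the container stays removable.
func (m *missingRuntime) DeleteContainer(ctrID string) error {
	return nil
}

func main() {
	var r ociRuntime = &missingRuntime{name: "kata"}
	fmt.Println(r.StartContainer("abc"))
	fmt.Println(r.DeleteContainer("abc"))
}
```
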
* rm: add container eviction with `rm --force` (Marco Vedovati, 2019-09-25)
  Add the ability to evict a container when it becomes unusable. This may happen when the host setup changes after a container's creation, making it impossible for that container to be used or removed. Evicting a container is done using the `rm --force` command.
  Signed-off-by: Marco Vedovati <mvedovati@suse.com>
* Add ability for volumes with options to mount/umount (Matthew Heon, 2019-09-05)
  When volume options and the local volume driver are specified, the volume is intended to be mounted using the 'mount' command. Supported options will be used to mount the volume before the first container using it starts, and unmount the volume after the last container using it dies. This should work for any local filesystem, though at present I've only tested with tmpfs and btrfs.
  Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* Add volume state (Matthew Heon, 2019-09-05)
  We need to be able to track the number of times a volume has been mounted for tmpfs/nfs/etc volumes. As such, we need a mutable state for volumes. Add one, with the expected update/save methods in both states.
  There is backwards compat here, in that older volumes without a state will still be accepted.
  Signed-off-by: Matthew Heon <matthew.heon@pm.me>
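
A small sketch of what a mount counter in mutable volume state buys us (the types and the save hook are illustrative, not the actual libpod structs): the backing filesystem is mounted by the first consumer and unmounted by the last:

```go
package main

import "fmt"

// volumeState sketches the mutable state added for volumes: a mount counter
// so a volume backed by a real filesystem is mounted by the first user and
// unmounted by the last.
type volumeState struct {
	MountCount uint
}

type volume struct {
	name  string
	state volumeState
	save  func(volumeState) error // stands in for writing state back to the DB
}

func (v *volume) mount() error {
	v.state.MountCount++
	if v.state.MountCount == 1 {
		fmt.Printf("first user: actually mounting volume %s\n", v.name)
	}
	return v.save(v.state)
}

func (v *volume) unmount() error {
	if v.state.MountCount == 0 {
		return fmt.Errorf("volume %s is not mounted", v.name)
	}
	v.state.MountCount--
	if v.state.MountCount == 0 {
		fmt.Printf("last user gone: unmounting volume %s\n", v.name)
	}
	return v.save(v.state)
}

func main() {
	v := &volume{name: "data", save: func(volumeState) error { return nil }}
	_ = v.mount()
	_ = v.mount()
	_ = v.unmount()
	_ = v.unmount()
}
```
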
* Re-add locks to volumes (Matthew Heon, 2019-08-28)
  This will require a 'podman system renumber' after being applied to get lock numbers for existing volumes.
  Add the DB backend code for rewriting volume configs and use it for updating lock numbers as part of 'system renumber'.
  Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* first pass of corrections for golangci-lint (baude, 2019-07-10)
  Signed-off-by: baude <bbaude@redhat.com>
* code cleanup (baude, 2019-07-08)
  Clean up code identified as problematic by GoLand's inspection.
  Signed-off-by: baude <bbaude@redhat.com>
* remove libpod from main (baude, 2019-06-25)
  The compilation demands of having libpod in main are a burden for the remote client compilations. To combat this, we should move the use of libpod structs, vars, constants, and functions into the adapter code, where it will only be compiled by the local client.
  This should result in cleaner code organization and smaller binaries. It should also help if we ever need to compile the remote client on non-Linux operating systems natively (not cross-compiled).
  Signed-off-by: baude <bbaude@redhat.com>
* Merge pull request #3378 from mheon/multiple_runtimes (OpenShift Merge Robot, 2019-06-21)
|\  Begin adding support for multiple OCI runtimes
| * Handle containers whose OCIRuntime fields are paths (Matthew Heon, 2019-06-20)
|   Try and locate the right runtime by using the basename of the path.
|   Signed-off-by: Matthew Heon <matthew.heon@pm.me>
| * Begin adding support for multiple OCI runtimes (Matthew Heon, 2019-06-19)
|   Allow Podman containers to request to use a specific OCI runtime if multiple runtimes are configured. This is the first step to properly supporting containers in a multi-runtime environment.
|   The biggest changes are that all OCI runtimes are now initialized when Podman creates its runtime, and containers now use the runtime requested in their configuration (instead of always the default runtime).
|   Signed-off-by: Matthew Heon <matthew.heon@pm.me>
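
A sketch of the lookup this enables, combined with the basename fallback from the neighboring commit; the runtime table and helper below are illustrative, not libpod's actual runtime registry:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// runtimes stands in for the set of OCI runtimes initialized at startup,
// keyed by name. Names and paths here are illustrative.
var runtimes = map[string]string{
	"runc": "/usr/bin/runc",
	"crun": "/usr/bin/crun",
}

// lookupRuntime resolves the runtime recorded in a container's config. If the
// field is a full path, fall back to matching on the path's basename.
func lookupRuntime(requested string) (string, error) {
	if path, ok := runtimes[requested]; ok {
		return path, nil
	}
	if filepath.IsAbs(requested) {
		base := filepath.Base(requested)
		if path, ok := runtimes[base]; ok {
			return path, nil
		}
	}
	return "", fmt.Errorf("no configured OCI runtime matches %q", requested)
}

func main() {
	fmt.Println(lookupRuntime("crun"))                 // found by name
	fmt.Println(lookupRuntime("/usr/local/sbin/runc")) // found by basename
	fmt.Println(lookupRuntime("kata"))                 // not configured
}
```
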
* | Make configuration validation not require a DB commit (Matthew Heon, 2019-06-20)
|/  If there are missing fields, we still require a commit, but that should not happen often.
    Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* Update vendor of buildah and containers/image (Daniel J Walsh, 2019-05-20)
  Mainly adds support for podman build using --overlay mounts. Updating containers/image also adds better support for the new registries.conf file.
  Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* Switch Libpod over to new explicit named volumes (Matthew Heon, 2019-04-04)
  This swaps the previous handling (parse all volume mounts on the container and look for ones that might refer to named volumes) for the new, explicit named volume lists stored per-container.
  It also deprecates force-removing volumes that are in use. I don't know how we want to handle this yet, but leaving containers that depend on a volume that no longer exists is definitely not correct.
  Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* vendor buildah, image, storage, cni (Valentin Rothberg, 2019-03-28)
  Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
* Validate VolumePath against DB configuration (Matthew Heon, 2019-02-26)
  If this doesn't match, we end up not being able to access named volumes mounted into containers, which is bad. Use the same validation that we use for other critical paths to ensure this one also matches.
  Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* Remove locks from volumes (Matthew Heon, 2019-02-21)
  I was looking into why we have locks in volumes, and I'm fairly convinced they're unnecessary. We don't have a state whose accesses we need to guard with locks and syncs. The only real purpose for the lock was to prevent concurrent removal of the same volume. Looking at the code, concurrent removal ought to be fine with a bit of reordering - one or the other might fail, but we will successfully evict the volume from the state.
  Also, remove the 'prune' bool from RemoveVolume. None of our other API functions accept it, and it only served to toggle off more verbose error messages.
  Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* Move all libpod/ JSON references over to jsoniter (Matthew Heon, 2019-01-10)
  Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* Remove runtime lockDir and add in-memory lock manager (Matthew Heon, 2019-01-04)
  Remove runtime's lockDir as it is no longer needed after the lock rework. Add a trivial in-memory lock manager for unit testing.
  Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
* Convert pods to SHM locks (Matthew Heon, 2019-01-04)
  Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
* Convert containers to SHM locking (Matthew Heon, 2019-01-04)
  Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
* condition fixed for adding volume to boltdb (Kunal Kushwaha, 2018-12-13)
  Signed-off-by: Kunal Kushwaha <kushwaha_kunal_v7@lab.ntt.co.jp>
* Add "podman volume" commandumohnani82018-12-06
| | | | | | | | | | | | | | | Add support for podman volume and its subcommands. The commands supported are: podman volume create podman volume inspect podman volume ls podman volume rm podman volume prune This is a tool to manage volumes used by podman. For now it only handle named volumes, but eventually it will handle all volumes used by podman. Signed-off-by: umohnani8 <umohnani@redhat.com>
* Use runtime lockDir in BoltDB state (Matthew Heon, 2018-12-04)
  Instead of storing the runtime's file lock dir in the BoltDB state, refer to the runtime inside the Bolt state instead, and use the path stored in the runtime. This is necessary since we moved DB initialization very far up in runtime init, before the locks dir is properly initialized (and it must happen before the locks dir can be created, as we use the DB to retrieve the proper path for the locks dir now).
  Signed-off-by: Matthew Heon <mheon@redhat.com>
* Add better descriptions for validation errors in DB (Matthew Heon, 2018-12-03)
  When validating fields against the DB, report more verbosely the name of the field being validated if it fails. Specifically, add the name used in config files, so people will actually know what to change if errors happen.
  Signed-off-by: Matthew Heon <mheon@redhat.com>
* Add ability to retrieve runtime configuration from DB (Matthew Heon, 2018-12-02)
  When we create a Libpod database, we store a number of runtime configuration fields in it. If we can retrieve those, we can use them to configure the runtime to match the DB instead of inbuilt defaults, helping to ensure that we don't error in cases where our compiled-in defaults changed.
  Signed-off-by: Matthew Heon <mheon@redhat.com>
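
A sketch of how stored configuration can override compiled-in defaults; the field and function names here are illustrative, not libpod's actual config fields:

```go
package main

import "fmt"

// dbConfig sketches the runtime settings recorded in the database when it is
// first created; field names are illustrative.
type dbConfig struct {
	StorageRoot string
	GraphDriver string
	VolumePath  string
}

type runtimeConfig struct {
	StorageRoot string
	GraphDriver string
	VolumePath  string
}

// mergeDBConfig overrides compiled-in defaults with whatever the DB recorded,
// so a changed default does not make an existing store unreachable.
func mergeDBConfig(defaults runtimeConfig, stored dbConfig) runtimeConfig {
	if stored.StorageRoot != "" {
		defaults.StorageRoot = stored.StorageRoot
	}
	if stored.GraphDriver != "" {
		defaults.GraphDriver = stored.GraphDriver
	}
	if stored.VolumePath != "" {
		defaults.VolumePath = stored.VolumePath
	}
	return defaults
}

func main() {
	defaults := runtimeConfig{StorageRoot: "/var/lib/containers/storage", GraphDriver: "overlay"}
	stored := dbConfig{GraphDriver: "vfs", VolumePath: "/var/lib/containers/storage/volumes"}
	// The stored driver and volume path win over the compiled-in defaults.
	fmt.Printf("%+v\n", mergeDBConfig(defaults, stored))
}
```
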
* Do not fetch pod and ctr State on retrieval in Bolt (Matthew Heon, 2018-07-31)
  It's not necessary to fill in state immediately, as we'll be overwriting it on any API call accessing it thanks to syncContainer(). It is also causing races when we fetch it without holding the container lock (which syncContainer() does). As such, just don't retrieve the state on initial pull from the database with Bolt.
  Also, refactor some Linux-specific netns handling functions out of container_internal_linux.go into boltdb_linux.go.
  Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
  Closes: #1186
  Approved by: rhatdan
* Use the Linux version of BoltState.getContainerFromDB on all platforms (Miloslav Trmač, 2018-07-26)
  This just moves the Linux implementation, unchanged, to the platform-agnostic file. It should not change behavior on Linux.
  On non-Linux platforms, reading containers from BoltDB now works (and rejects containers with namespace data). The checkRuntimeConfig validation ensures that each BoltDB database is only used on one platform, so network namespaces should never exist in non-Linux BoltDB files.
  Signed-off-by: Miloslav Trmač <mitr@redhat.com>
  Closes: #1115
  Approved by: rhatdan
* Add a mutex to BoltDB state to prevent lock issues (Matthew Heon, 2018-07-25)
  Per https://www.sqlite.org/src/artifact/c230a7a24?ln=994-1081, POSIX file advisory locks are unsafe to use within a single process if multiple file descriptors are open for the same file. Unfortunately, this has a strong potential to happen for multithreaded usage of libpod, and could result in DB corruption.
  To prevent this, wrap all access to BoltDB within a single libpod instance in a mutex to ensure concurrent access cannot occur.
  Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
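
A minimal sketch of the serialization pattern, assuming getDBCon/closeDBCon-style helpers that bracket every state method; the struct and helpers here are illustrative rather than the actual libpod state code:

```go
package main

import (
	"fmt"
	"os"
	"sync"

	bolt "go.etcd.io/bbolt"
)

// boltState sketches serializing all DB access within one process: POSIX
// advisory locks are per-process, so two open file descriptors on the same
// Bolt file from concurrent goroutines could bypass Bolt's own locking.
type boltState struct {
	dbLock sync.Mutex
	dbPath string
}

// getDBCon opens the DB while holding the mutex; closeDBCon releases it.
// Every state method brackets its work with this pair.
func (s *boltState) getDBCon() (*bolt.DB, error) {
	s.dbLock.Lock()
	db, err := bolt.Open(s.dbPath, 0o600, nil)
	if err != nil {
		s.dbLock.Unlock()
		return nil, fmt.Errorf("opening database %s: %w", s.dbPath, err)
	}
	return db, nil
}

func (s *boltState) closeDBCon(db *bolt.DB) error {
	err := db.Close()
	s.dbLock.Unlock()
	return err
}

func main() {
	f, _ := os.CreateTemp("", "state-*.db")
	s := &boltState{dbPath: f.Name()}
	db, err := s.getDBCon()
	if err != nil {
		panic(err)
	}
	defer s.closeDBCon(db)
	fmt.Println("DB opened with process-wide serialization")
}
```
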
* Enforce namespace checks on container add (Matthew Heon, 2018-07-24)
  Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
* Untested implementation of namespaced BoltDB access (Matthew Heon, 2018-07-24)
  All BoltDB access and update functions now understand namespaces. Accessing containers outside of your namespace will produce errors, except for Lookup and All functions, which will perform their tasks only on containers within your namespace.
  The "" namespace remains a reserved, no-restrictions namespace.
  Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
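
A sketch of the namespace rule described above, with an in-memory map standing in for the DB and illustrative names throughout:

```go
package main

import (
	"errors"
	"fmt"
)

// A sketch of the namespace rule: retrieving an object registered in another
// namespace fails, while the empty namespace ("") sees everything.
type state struct {
	namespace string
	// container ID -> namespace it was created in
	ctrNamespaces map[string]string
}

var errNSMismatch = errors.New("cannot access container, it is in a different namespace")

func (s *state) checkNamespace(ctrID string) error {
	ns, ok := s.ctrNamespaces[ctrID]
	if !ok {
		return fmt.Errorf("no container with ID %s found", ctrID)
	}
	// "" is the reserved, unrestricted namespace.
	if s.namespace != "" && ns != s.namespace {
		return fmt.Errorf("container %s: %w", ctrID, errNSMismatch)
	}
	return nil
}

func main() {
	s := &state{namespace: "cni", ctrNamespaces: map[string]string{"abc": "cni", "def": "other"}}
	fmt.Println(s.checkNamespace("abc")) // <nil>
	fmt.Println(s.checkNamespace("def")) // namespace error
	s.namespace = ""
	fmt.Println(s.checkNamespace("def")) // <nil>: unrestricted namespace
}
```
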