path: root/libpod/boltdb_state.go
Commit message (Author, Date)
* libpod: switch to golang native error wrapping (Sascha Grunert, 2022-07-05)
We now use the golang error wrapping format specifier `%w` instead of the deprecated github.com/pkg/errors package.
[NO NEW TESTS NEEDED]
Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
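As a minimal before/after sketch of what this migration looks like (the `errNoSuchCtr` sentinel and `lookupContainer` helper are hypothetical, not Podman's actual definitions):

```go
package main

import (
	"errors"
	"fmt"
)

// errNoSuchCtr is a hypothetical sentinel standing in for a libpod error definition.
var errNoSuchCtr = errors.New("no such container")

func lookupContainer(id string) error {
	// Before: return errors.Wrapf(errNoSuchCtr, "no container with ID %s found", id)
	// After, with native wrapping:
	return fmt.Errorf("no container with ID %s found: %w", id, errNoSuchCtr)
}

func main() {
	err := lookupContainer("deadbeef")
	// errors.Is still matches through the %w chain, so callers keep working.
	fmt.Println(errors.Is(err, errNoSuchCtr)) // true
}
```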
* Two fixes for DB exit code handling (Matthew Heon, 2022-06-23)
Firstly: don't prune exit codes after a refresh - instead, clear the table entirely. We are guaranteed that all containers are gone after a refresh, so we should not worry about exit codes given this.
Secondly: alter the way pruning was done. We were updating the DB by calling Update from within an existing View, and stacking an RW transaction on top of an existing RO one seems dodgy; further, modifying a bucket while iterating over it with ForEach is undefined behavior. Hopefully this will resolve our CI issues.
Signed-off-by: Matthew Heon <mheon@redhat.com>
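The safer pruning pattern described above can be sketched as follows, assuming the maintained bbolt fork (go.etcd.io/bbolt) and a hypothetical "exit-codes" bucket with a caller-supplied staleness predicate: stale keys are collected in a read-only pass and deleted afterwards in a separate read-write transaction, so no bucket is modified mid-ForEach and no Update is stacked on a View.

```go
package prune

import (
	bolt "go.etcd.io/bbolt"
)

// pruneExitCodes deletes entries matching stale from the (hypothetical)
// "exit-codes" bucket without ever mutating a bucket during iteration.
func pruneExitCodes(db *bolt.DB, stale func(k, v []byte) bool) error {
	var toRemove [][]byte

	// Pass 1: read-only; only collect keys.
	err := db.View(func(tx *bolt.Tx) error {
		b := tx.Bucket([]byte("exit-codes"))
		if b == nil {
			return nil
		}
		return b.ForEach(func(k, v []byte) error {
			if stale(k, v) {
				// Keys are only valid for the life of the transaction, so copy.
				key := make([]byte, len(k))
				copy(key, k)
				toRemove = append(toRemove, key)
			}
			return nil
		})
	})
	if err != nil {
		return err
	}

	// Pass 2: a separate RW transaction does the deletes - no Update
	// stacked on top of an open View.
	return db.Update(func(tx *bolt.Tx) error {
		b := tx.Bucket([]byte("exit-codes"))
		if b == nil {
			return nil
		}
		for _, k := range toRemove {
			if err := b.Delete(k); err != nil {
				return err
			}
		}
		return nil
	})
}
```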
* libpod: fix wait and exit-code logic (Valentin Rothberg, 2022-06-23)
This commit addresses three intertwined bugs to fix an issue when using Gitlab runner on Podman. The three bug fixes are not split into separate commits as tests won't pass otherwise; avoidable noise when bisecting future issues.
1) Podman conflated states: even when asked to wait for the `exited` state, Podman returned as soon as a container transitioned to `stopped`. The issue surfaced as failures in Gitlab tests [1], since `conmon`'s buffers have not (yet) been emptied when attaching to a container right after a wait. The race window was extremely narrow, and I only managed to reproduce it with the Gitlab runner [1] unit tests.
2) The clearer separation between `exited` and `stopped` revealed a race condition predating the changes. If a container is configured for autoremoval (e.g., via `run --rm`), the "run" process competes with the "cleanup" process running in the background. The window of the race condition was sufficiently large that the "cleanup" process could remove the container and storage before the "run" process read the exit code, leaving it waiting indefinitely.
Address the exit-code race condition by recording exit codes in the main libpod database. Exit codes can now be read from the database. When waiting for a container to exit, Podman first waits for the container to transition to `exited` and will then query the database for its exit code. Outdated exit codes are pruned during cleanup (i.e., non-performance critical) and when refreshing the database after a reboot. An exit code is considered outdated when it is older than 5 minutes.
While the race condition predates this change, the waiting process has apparently always been fast enough in catching the exit code due to issue 1): `exited` and `stopped` were conflated. The waiting process hence caught the exit code after the container transitioned to `stopped` but before it `exited` and got removed.
3) With 1) and 2), Podman is now waiting for a container to properly transition to the `exited` state. Some tests did not pass after 1) and 2), which revealed the third bug: `conmon` was executed with its working directory pointing to the OCI runtime bundle of the container. The changed working directory broke resolving relative paths in the "cleanup" process. The "cleanup" process errored before actually cleaning up the container, and the waiting "main" process ran indefinitely - or until hitting a timeout. Fix the issue by executing `conmon` with the same working directory as Podman.
Note that fixing 3) *may* address a number of issues we have seen in the past where for *some* reason cleanup processes did not fire.
[1] https://gitlab.com/gitlab-org/gitlab-runner/-/issues/27119#note_970712864
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
[MH: Minor reword of commit message]
Signed-off-by: Matthew Heon <mheon@redhat.com>
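A sketch of the exit-code persistence described in 2), assuming bbolt and hypothetical buckets "exit-codes" and "exit-code-timestamps" keyed by container ID; the stored timestamp is what a 5-minute pruning rule would consult:

```go
package exitcodes

import (
	"encoding/binary"
	"fmt"
	"time"

	bolt "go.etcd.io/bbolt"
)

// saveExitCode records a container's exit code plus a timestamp, so a
// later pruning pass can drop entries older than five minutes.
func saveExitCode(db *bolt.DB, ctrID string, code int32) error {
	return db.Update(func(tx *bolt.Tx) error {
		codes, err := tx.CreateBucketIfNotExists([]byte("exit-codes"))
		if err != nil {
			return err
		}
		stamps, err := tx.CreateBucketIfNotExists([]byte("exit-code-timestamps"))
		if err != nil {
			return err
		}
		codeBuf := make([]byte, 4)
		binary.BigEndian.PutUint32(codeBuf, uint32(code))
		if err := codes.Put([]byte(ctrID), codeBuf); err != nil {
			return err
		}
		stampBuf := make([]byte, 8)
		binary.BigEndian.PutUint64(stampBuf, uint64(time.Now().Unix()))
		return stamps.Put([]byte(ctrID), stampBuf)
	})
}

// getExitCode is what the waiting process calls once the container has
// transitioned to the `exited` state.
func getExitCode(db *bolt.DB, ctrID string) (int32, error) {
	var code int32
	err := db.View(func(tx *bolt.Tx) error {
		codes := tx.Bucket([]byte("exit-codes"))
		if codes == nil {
			return fmt.Errorf("no exit codes recorded")
		}
		v := codes.Get([]byte(ctrID))
		if v == nil {
			return fmt.Errorf("no exit code recorded for container %s", ctrID)
		}
		code = int32(binary.BigEndian.Uint32(v))
		return nil
	})
	return code, err
}
```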
* Instead of erroring, clean up after dangling IDs in DB (Matthew Heon, 2022-05-23)
For various (mostly legacy) reasons, Podman presently maintains a unified namespace for pods and containers - IE, we cannot have both a pod and a container named "test" at the same time. To implement this, we use a global database table of every pod and container ID (and another of every pod and container name). These entries should be added when containers/pods are added, and removed when containers/pods are removed, with the database's transactional integrity providing a guarantee that this is batched with the overall removal and that the DB should remain sane and consistent no matter what. As such, we treat a dangling ID as a hard error that stops the use of Podman.
Unfortunately, we had someone run into this last Friday. I'm still not certain how exactly their DB got into this state, but without further clarification there, we can consider removing the error and making Podman instead clean up and remove any dangling IDs, which should restore Podman to a serviceable state. Drop an error message if we do this, though, because people should know that the DB is in a bad state.
[NO NEW TESTS NEEDED] It is deliberately impossible to produce a configuration that would test this without hex-editing the DB file.
Signed-off-by: Matthew Heon <mheon@redhat.com>
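A hypothetical sketch of the recovery path (bucket names and layout are assumptions, not Podman's actual schema): the dangling registry entry is logged loudly and deleted instead of aborting.

```go
package dangling

import (
	"github.com/sirupsen/logrus"
	bolt "go.etcd.io/bbolt"
)

// removeDanglingID drops an ID from the all-IDs registry when neither a
// container nor a pod owns it, instead of returning a hard error.
func removeDanglingID(tx *bolt.Tx, id []byte) error {
	idReg := tx.Bucket([]byte("id-registry"))
	ctrs := tx.Bucket([]byte("ctrs"))
	pods := tx.Bucket([]byte("pods"))
	if idReg == nil {
		return nil
	}
	ownedByCtr := ctrs != nil && ctrs.Bucket(id) != nil
	ownedByPod := pods != nil && pods.Bucket(id) != nil
	if !ownedByCtr && !ownedByPod {
		// Log loudly: the DB was in a bad state and we are cleaning up after it.
		logrus.Errorf("Database error: dangling ID %s found and removed", string(id))
		return idReg.Delete(id)
	}
	return nil
}
```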
* fix a number of errcheck issues (Valentin Rothberg, 2022-03-22)
Numerous issues remain, especially in tests/e2e.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
* linter: enable nilerr (Valentin Rothberg, 2022-03-22)
A number of cases looked suspicious, so I marked them with `FIXME`s to leave some breadcrumbs.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
* bump go module to version 4 (Valentin Rothberg, 2022-01-18)
Automated for .go files via gomove [1]: `gomove github.com/containers/podman/v3 github.com/containers/podman/v4`
Remaining files via vgrep [2]: `vgrep github.com/containers/podman/v3`
[1] https://github.com/KSubedi/gomove
[2] https://github.com/vrothberg/vgrep
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
* Standardize on capitalized Cgroups (Daniel J Walsh, 2022-01-14)
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* use libnetwork from c/common (Paul Holzinger, 2022-01-12)
The libpod/network packages were moved to c/common so that buildah can use them as well. To prevent duplication, use them in podman too and remove the packages from here.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
* network db: add new structure to container create (Paul Holzinger, 2021-12-14)
Make sure we create new containers in the db with the correct structure. Also remove some unneeded code for alias handling; we no longer need these functions. The specgen format has not been changed for now.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
* network db rewrite: migrate existing settings (Paul Holzinger, 2021-12-14)
The new network db structure stores everything in the networks bucket. Previously, some network settings were not written to the network bucket and were only stored in the container config.
Instead of the old format, which used the container ID as the value in the networks bucket, we now store the PerNetworkOptions struct there.
To migrate existing users we use the state.GetNetworks() function. If it fails to read the new format it will automatically migrate the old config format to the new one. This allows a flawless migration path.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
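A sketch of how such a read-with-fallback migration could look, assuming bbolt and a simplified stand-in for the PerNetworkOptions struct (field set and bucket layout are assumptions):

```go
package netmigrate

import (
	"encoding/json"

	bolt "go.etcd.io/bbolt"
)

// PerNetworkOptions is a simplified stand-in for the c/common struct.
type PerNetworkOptions struct {
	Aliases   []string `json:"aliases,omitempty"`
	StaticIPs []string `json:"static_ips,omitempty"`
	StaticMAC string   `json:"static_mac,omitempty"`
}

// getNetworks reads the per-container networks bucket, migrating any
// old-format entries (value == raw container ID) to JSON-encoded structs.
func getNetworks(db *bolt.DB, ctrID string) (map[string]PerNetworkOptions, error) {
	out := map[string]PerNetworkOptions{}
	var migrate [][]byte

	err := db.Update(func(tx *bolt.Tx) error {
		root := tx.Bucket([]byte("networks"))
		if root == nil {
			return nil
		}
		nets := root.Bucket([]byte(ctrID))
		if nets == nil {
			return nil
		}
		// Pass 1: read everything, noting keys still in the old format.
		if err := nets.ForEach(func(name, v []byte) error {
			var opts PerNetworkOptions
			if err := json.Unmarshal(v, &opts); err != nil {
				// Old format: the value was the container ID, not JSON.
				key := make([]byte, len(name))
				copy(key, name)
				migrate = append(migrate, key)
				opts = PerNetworkOptions{}
			}
			out[string(name)] = opts
			return nil
		}); err != nil {
			return err
		}
		// Pass 2: rewrite old-format entries only after iteration ends,
		// never while a ForEach is in flight.
		empty, err := json.Marshal(PerNetworkOptions{})
		if err != nil {
			return err
		}
		for _, name := range migrate {
			if err := nets.Put(name, empty); err != nil {
				return err
			}
		}
		return nil
	})
	return out, err
}
```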
* Ensure pod ID bucket is properly updated on rename (Matthew Heon, 2021-09-28)
As we were not updating the pod ID bucket, removing a pod with containers still in it (including the infra container, which will always suffer from this) will not properly update the name registry to remove the name of any renamed containers. This patch ensures that does not happen - all containers will be fully removed, even if renamed.
Fixes #11750
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* standardize logrus messages to upper case (Daniel J Walsh, 2021-09-22)
Also remove "ERROR: Error" stutter from logrus messages.
[NO TESTS NEEDED] This is just code cleanup.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* Wire network interface into libpod (Paul Holzinger, 2021-09-15)
Make use of the new network interface in libpod. This commit contains several breaking changes:
- podman network create only outputs the new network name and not the file path.
- podman network ls shows the network driver instead of the cni version and plugins.
- podman network inspect outputs the new network struct and not the cni conflist.
- The bindings and libpod api endpoints have been changed to use the new network structure.
The container network status is stored in a new field in the state. The status should be received with the new `c.getNetworkStatus`. This will migrate the old status to the new format. Therefore old containers should continue to work correctly in all cases, even when network connect/disconnect is used.
New features:
- podman network reload keeps the ip and mac for more than one network.
- podman container restore keeps the ip and mac for more than one network.
- The network create compat endpoint can now use more than one ipam config.
The man pages and the swagger doc are updated to reflect the latest changes.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
* Removing a non-existent container via the API should return 404 (Daniel J Walsh, 2021-03-10)
Currently we were overwrapping the error returned from removal of a non-existent container:
$ podman rm bogus -f
Error: failed to evict container: "": failed to find container "bogus" in state: no container with name or ID bogus found: no such container
Removing the wraps gets us to:
./bin/podman rm bogus -f
Error: no container with name or ID "bogus" found: no such container
Finally, also added quotes around the container name to help make it stand out when you get an error; currently it gets lost in the error.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* Rewrite Rename backend in a more atomic fashion (Matthew Heon, 2021-03-02)
Move the core of the renaming logic into the DB. This guarantees a lot more atomicity than we have right now (our current solution, removing the container from the DB and re-creating it, is *VERY* not atomic and prone to leaving a corrupted state behind if things go wrong). Moving things into the DB allows us to remove most, but not all, of this - there's still a potential scenario where the c/storage rename fails but the Podman rename succeeds, and we end up with a mismatched state.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
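A sketch of why moving the rename into the DB buys atomicity: with bbolt, every step happens inside one Update transaction, so the name registry can never end up half-renamed. The "names" bucket (name -> ID) is hypothetical, and the real change would also rewrite the container's stored config in the same transaction.

```go
package rename

import (
	"fmt"

	bolt "go.etcd.io/bbolt"
)

// renameContainer updates the name registry in a single transaction.
func renameContainer(db *bolt.DB, id, oldName, newName string) error {
	return db.Update(func(tx *bolt.Tx) error {
		names := tx.Bucket([]byte("names"))
		if names == nil {
			return fmt.Errorf("names bucket missing")
		}
		if taken := names.Get([]byte(newName)); taken != nil {
			return fmt.Errorf("name %q already in use", newName)
		}
		// Both registry writes commit or roll back together - the
		// atomicity the old remove-and-recreate approach lacked.
		if err := names.Delete([]byte(oldName)); err != nil {
			return err
		}
		return names.Put([]byte(newName), []byte(id))
	})
}
```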
* bump go module to v3 (Valentin Rothberg, 2021-02-22)
We missed bumping the go module, so let's do it now :)
* Automated go code with github.com/sirkon/go-imports-rename
* Manually via `vgrep podman/v2` the rest
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
* Enable whitespace linter (Paul Holzinger, 2021-02-11)
Use the whitespace linter and fix the reported problems.
[NO TESTS NEEDED]
Signed-off-by: Paul Holzinger <paul.holzinger@web.de>
* Enable stylecheck linter (Paul Holzinger, 2021-02-11)
Use the stylecheck linter and fix the reported problems.
[NO TESTS NEEDED]
Signed-off-by: Paul Holzinger <paul.holzinger@web.de>
* Initial implementation of volume plugins (Matthew Heon, 2021-01-14)
This implements support for mounting and unmounting volumes backed by volume plugins. Support for actually retrieving plugins requires a pull request to land in containers.conf and then that to be vendored, and as such is not yet ready. Given this, this code is only compile tested. However, the code for everything past retrieving the plugin has been written - there is support for creating, removing, mounting, and unmounting volumes, which should allow full functionality once the c/common PR is merged.
A major change is the signature of the MountPoint function for volumes, which now, by necessity, returns an error. Named volumes managed by a plugin do not have a mountpoint we control; instead, it is managed entirely by the plugin. As such, we need to cache the path in the DB, and calls to retrieve it now need to access the DB (and may fail as such).
Notably absent is support for SELinux relabelling and chowning these volumes. Given that we don't manage the mountpoint for these volumes, I am extremely reluctant to try and modify it - we could easily break the plugin trying to chown or relabel it.
Also, we had no fewer than *5* separate implementations of inspecting a volume floating around in pkg/infra/abi and pkg/api/handlers/libpod. And none of them used volume.Inspect(), the only correct way of inspecting volumes. Remove them all and consolidate to using the correct way. The compat API is likely still doing things the wrong way, but that is an issue for another day.
Fixes #4304
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* add network connect|disconnect compat endpoints (baude, 2020-11-17)
This enables the ability to connect and disconnect a container from a given network. It is only for the compatibility layer. Some code had to be refactored to avoid circular imports.
Additionally, tests are being deferred temporarily due to some incompatibility/bug in either docker-py or our stack.
Signed-off-by: baude <bbaude@redhat.com>
* Add support for network connect / disconnect to DB (Matthew Heon, 2020-11-11)
Convert the existing network aliases set/remove code to network connect and disconnect. We can no longer modify aliases for an existing network, but we can add and remove entire networks. As part of this, we need to add a new function to retrieve current aliases the container is connected to (we had a table for this as of the first aliases PR, but it was not externally exposed).
At the same time, remove all deconflicting logic for aliases. Docker does absolutely no checks of this nature, and allows two containers to have the same aliases, aliases that conflict with container names, etc - it's just left to DNS to return all the IP addresses, and presumably we round-robin from there? Most tests for the existing code had to be removed because of this.
Convert all uses of the old container config.Networks field, which previously included all networks in the container, to use the new DB table. This ensures we actually get an up-to-date list of in-use networks.
Also, add network aliases to the output of `podman inspect`.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* Add tests for network aliases (Matthew Heon, 2020-11-03)
As part of this, we need two new functions, for retrieving all aliases for a network and removing all aliases for a network, both required for testing.
Also, rework handling for some things the tests discovered were broken (notably conflicts between container names and existing aliases).
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* Add a way to retrieve all network aliases for a ctr (Matthew Heon, 2020-10-27)
The original interface only allowed retrieving aliases for a specific network, not for all networks. This will allow aliases to be retrieved for every network the container is present in, in a single DB operation.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* Add network aliases for containers to DB (Matthew Heon, 2020-10-27)
This adds the database backend for network aliases. Aliases are additional names for a container that are used with the CNI dnsname plugin - the container will be accessible by these names in addition to its name. Aliases are allowed to change over time as the container connects to and disconnects from networks.
Aliases are implemented as another bucket in the database to register all aliases, plus two buckets for each container (one to hold connected CNI networks, a second to hold its aliases). The aliases are only unique per-network, so the global and per-container aliases buckets have a sub-bucket for each CNI network that has aliases, and the aliases are stored within that sub-bucket. Aliases are formatted as alias (key) to container ID (value) in both cases.
Three DB functions are defined for aliases: retrieving current aliases for a given network, setting aliases for a given network, and removing all aliases for a given network.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
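A sketch of the described layout, assuming bbolt: a global "aliases" bucket holding one sub-bucket per CNI network, each mapping alias (key) to container ID (value). Bucket names are hypothetical.

```go
package aliases

import (
	bolt "go.etcd.io/bbolt"
)

// setAlias registers an alias for a container on one network.
func setAlias(db *bolt.DB, network, alias, ctrID string) error {
	return db.Update(func(tx *bolt.Tx) error {
		all, err := tx.CreateBucketIfNotExists([]byte("aliases"))
		if err != nil {
			return err
		}
		netBkt, err := all.CreateBucketIfNotExists([]byte(network))
		if err != nil {
			return err
		}
		// Aliases are only unique per network, so the same alias may
		// exist under a different network's sub-bucket.
		return netBkt.Put([]byte(alias), []byte(ctrID))
	})
}

// getAliases returns all alias -> container ID pairs for one network.
func getAliases(db *bolt.DB, network string) (map[string]string, error) {
	out := map[string]string{}
	err := db.View(func(tx *bolt.Tx) error {
		all := tx.Bucket([]byte("aliases"))
		if all == nil {
			return nil
		}
		netBkt := all.Bucket([]byte(network))
		if netBkt == nil {
			return nil
		}
		return netBkt.ForEach(func(k, v []byte) error {
			out[string(k)] = string(v)
			return nil
		})
	})
	return out, err
}
```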
* Unconditionally retrieve pod names via API (Matthew Heon, 2020-08-10)
The ListContainers API previously had a Pod parameter, which determined if the pod name was returned (but, notably, not the pod ID, which was returned unconditionally). This was fairly confusing, so we decided to deprecate/remove the parameter and return the name unconditionally.
To do this without serious performance implications, we need to avoid expensive JSON decodes of pod configuration in the DB. The way our Bolt tables are structured, retrieving a name given an ID is actually quite cheap, but we did not expose this via the Libpod API. Add a new GetName API to do this.
Fixes #7214
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
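A sketch of the cheap ID-to-name lookup, under the assumption that each pod or container has its own sub-bucket keyed by full ID with a raw "name" key inside (layout hypothetical); no JSON decode of the whole config is needed.

```go
package getname

import (
	"fmt"

	bolt "go.etcd.io/bbolt"
)

// getName resolves an ID to a name with two bucket lookups and one Get.
func getName(db *bolt.DB, id string) (string, error) {
	var name string
	err := db.View(func(tx *bolt.Tx) error {
		pods := tx.Bucket([]byte("pods"))
		if pods == nil {
			return fmt.Errorf("pods bucket missing")
		}
		pod := pods.Bucket([]byte(id))
		if pod == nil {
			return fmt.Errorf("no pod with ID %s", id)
		}
		v := pod.Get([]byte("name"))
		if v == nil {
			return fmt.Errorf("pod %s has no name key", id)
		}
		name = string(v)
		return nil
	})
	return name, err
}
```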
* Improve error message when creating a pod/ctr with the same name (Paul Holzinger, 2020-08-04)
Check whether a pod or a container holds the name and return the appropriate error message, instead of blindly returning 'container exists' with `podman create` and 'pod exists' with `podman pod create`.
Signed-off-by: Paul Holzinger <paul.holzinger@web.de>
* Switch all references to github.com/containers/libpod -> podman (Daniel J Walsh, 2020-07-28)
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* move go module to v2 (Valentin Rothberg, 2020-07-06)
With the advent of Podman 2.0.0 we crossed the magical barrier of go modules. While we were able to continue importing all packages inside of the project, the project could not be vendored anymore from the outside.
Move the go module to a new major version and change all imports to `github.com/containers/libpod/v2`. The renaming of the imports was done via `gomove` [1].
[1] https://github.com/KSubedi/gomove
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
* Turn on more linters (Daniel J Walsh, 2020-06-15)
- misspell
- prealloc
- unparam
- nakedret
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* Update vendor of boltdb and containers/image (Daniel J Walsh, 2020-03-29)
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* Add support for containers.conf (Daniel J Walsh, 2020-03-27)
Vendor in c/common config pkg for containers.conf.
Signed-off-by: Qi Wang <qiwan@redhat.com>
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* Add structure for new exec session tracking to DB (Matthew Heon, 2020-03-18)
As part of the rework of exec sessions, we need to address them independently of containers. In the new API, we need to be able to fetch them by their ID, regardless of what container they are associated with. Unfortunately, our existing exec sessions are tied to individual containers; there's no way to tell what container a session belongs to and retrieve it without getting every exec session for every container.
This adds a pointer to the container an exec session is associated with to the database. The sessions themselves are still stored in the container.
Exec-related APIs have been restructured to work with the new database representation. The originally monolithic API has been split into a number of smaller calls to allow more fine-grained control of lifecycle. Support for legacy exec sessions has been retained, but in a deprecated fashion; we should remove this in a few releases.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
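A sketch of such a registry, assuming bbolt and a hypothetical top-level "exec-registry" bucket mapping exec session ID to owning container ID, so a session can be located without scanning every container:

```go
package execsessions

import (
	"fmt"

	bolt "go.etcd.io/bbolt"
)

// addExecSession records which container owns a session.
func addExecSession(db *bolt.DB, sessionID, ctrID string) error {
	return db.Update(func(tx *bolt.Tx) error {
		reg, err := tx.CreateBucketIfNotExists([]byte("exec-registry"))
		if err != nil {
			return err
		}
		return reg.Put([]byte(sessionID), []byte(ctrID))
	})
}

// getExecSessionContainer answers "which container does this session
// belong to?" with a single Get instead of a full scan.
func getExecSessionContainer(db *bolt.DB, sessionID string) (string, error) {
	var ctrID string
	err := db.View(func(tx *bolt.Tx) error {
		reg := tx.Bucket([]byte("exec-registry"))
		if reg == nil {
			return fmt.Errorf("no exec sessions registered")
		}
		v := reg.Get([]byte(sessionID))
		if v == nil {
			return fmt.Errorf("no exec session with ID %s", sessionID)
		}
		ctrID = string(v)
		return nil
	})
	return ctrID, err
}
```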
* codespell: spelling corrections (Dmitry Smirnov, 2019-11-13)
Signed-off-by: Dmitry Smirnov <onlyjob@member.fsf.org>
* add libpod/config (Valentin Rothberg, 2019-10-31)
Refactor the `RuntimeConfig` along with related code from libpod into libpod/config. Note that this is a first step of consolidating code into more coherent packages to make the code more maintainable and less prone to regressions in the long run. Some libpod definitions were moved to `libpod/define` to resolve circular dependencies.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
* refresh: do not access network ns if not in the namespace (Giuseppe Scrivano, 2019-10-09)
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
* rm: add container eviction with `rm --force` (Marco Vedovati, 2019-09-25)
Add the ability to evict a container when it becomes unusable. This may happen when the host setup changes after a container's creation, making it impossible for that container to be used or removed. Evicting a container is done using the `rm --force` command.
Signed-off-by: Marco Vedovati <mvedovati@suse.com>
* Volume lookup needs to include state to unmarshal into (Matthew Heon, 2019-09-11)
Lookup was written before volume states merged, but merged after, and CI didn't catch the obvious failure here. Without a valid state, we try to unmarshal into a null pointer, and 'volume rm' is completely broken because of it.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* Add function for looking up volumes by partial name (Matthew Heon, 2019-09-09)
This isn't included in Docker, but seems handy enough. Use the new API for 'volume rm' and 'volume inspect'.
Fixes #3891
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
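A sketch of partial-name matching with a bbolt cursor prefix scan (the "volumes" bucket keyed by name is hypothetical): an exact name wins outright, a unique prefix matches, and an ambiguous prefix errors.

```go
package volumelookup

import (
	"bytes"
	"fmt"

	bolt "go.etcd.io/bbolt"
)

// lookupVolume resolves a full or partial volume name to a full name.
func lookupVolume(db *bolt.DB, partial string) (string, error) {
	var match string
	err := db.View(func(tx *bolt.Tx) error {
		vols := tx.Bucket([]byte("volumes"))
		if vols == nil {
			return fmt.Errorf("volumes bucket missing")
		}
		// Exact matches win without any traversal.
		if vols.Get([]byte(partial)) != nil {
			match = partial
			return nil
		}
		prefix := []byte(partial)
		c := vols.Cursor()
		for k, _ := c.Seek(prefix); k != nil && bytes.HasPrefix(k, prefix); k, _ = c.Next() {
			if match != "" {
				return fmt.Errorf("volume name %q is ambiguous", partial)
			}
			match = string(k)
		}
		if match == "" {
			return fmt.Errorf("no volume with name %q found", partial)
		}
		return nil
	})
	return match, err
}
```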
* Correctly report errors on unmounting SHM (Matthew Heon, 2019-09-05)
When we fail to remove a container's SHM, that's an error, and we need to report it as such. This may be part of our lingering storage woes.
Also, remove MNT_DETACH. It may be another cause of the storage removal failures.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* Add ability for volumes with options to mount/umount (Matthew Heon, 2019-09-05)
When volume options and the local volume driver are specified, the volume is intended to be mounted using the 'mount' command. Supported options will be used to mount the volume before the first container using it starts, and to unmount the volume after the last container using it dies. This should work for any local filesystem, though at present I've only tested with tmpfs and btrfs.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* Add volume state (Matthew Heon, 2019-09-05)
We need to be able to track the number of times a volume has been mounted for tmpfs/nfs/etc volumes. As such, we need a mutable state for volumes. Add one, with the expected update/save methods in both states.
There is backwards compat here, in that older volumes without a state will still be accepted.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* Re-add locks to volumes (Matthew Heon, 2019-08-28)
This will require a 'podman system renumber' after being applied to get lock numbers for existing volumes.
Add the DB backend code for rewriting volume configs and use it for updating lock numbers as part of 'system renumber'.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* first pass of corrections for golangci-lint (baude, 2019-07-10)
Signed-off-by: baude <bbaude@redhat.com>
* libpod: discern partial IDs between containers and pods (Marco Vedovati, 2019-07-03)
When specifying a podman command with a partial ID, container and pod commands now match only container or pod IDs, respectively, in the BoltDB.
Fixes: #3487
Signed-off-by: Marco Vedovati <mvedovati@suse.com>
* remove libpod from main (baude, 2019-06-25)
The compilation demands of having libpod in main are a burden for the remote client compilations. To combat this, we should move the use of libpod structs, vars, constants, and functions into the adapter code, where it will only be compiled by the local client.
This should result in cleaner code organization and smaller binaries. It should also help if we ever need to compile the remote client on non-Linux operating systems natively (not cross-compiled).
Signed-off-by: baude <bbaude@redhat.com>
* Avoid a read-write transaction on DB init (Matthew Heon, 2019-06-20)
Instead, use a less expensive read-only transaction to see if the DB is ready for use (it probably is), and only fire the expensive RW transaction if absolutely necessary.
Signed-off-by: Matthew Heon <mheon@redhat.com>
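A sketch of the check-cheaply-first pattern, assuming bbolt and a hypothetical "runtime-config" bucket as the readiness marker:

```go
package dbinit

import (
	bolt "go.etcd.io/bbolt"
)

// ensureInitialized probes with a read-only transaction and only opens
// a read-write one when setup is actually needed.
func ensureInitialized(db *bolt.DB) error {
	initialized := false
	if err := db.View(func(tx *bolt.Tx) error {
		// A read-only transaction is far cheaper than an RW one, and
		// on every start after the first this is all we need.
		initialized = tx.Bucket([]byte("runtime-config")) != nil
		return nil
	}); err != nil {
		return err
	}
	if initialized {
		return nil
	}
	// First run only: pay for the RW transaction.
	return db.Update(func(tx *bolt.Tx) error {
		_, err := tx.CreateBucketIfNotExists([]byte("runtime-config"))
		return err
	})
}
```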
* Update vendor of buildah and containers/image (Daniel J Walsh, 2019-05-20)
Mainly adds support for podman build using --overlay mounts. Updating containers/image also adds better support for the new registries.conf file.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* Switch Libpod over to new explicit named volumes (Matthew Heon, 2019-04-04)
This swaps the previous handling (parse all volume mounts on the container and look for ones that might refer to named volumes) for the new, explicit named volume lists stored per-container.
It also deprecates force-removing volumes that are in use. I don't know how we want to handle this yet, but leaving containers that depend on a volume that no longer exists is definitely not correct.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* Change LookupContainer logic to match Docker (Matthew Heon, 2019-03-06)
When looking up a container or pod from user input, we handle collisions between names and IDs differently than Docker at present. In Docker, when there is a container with an ID starting with "c1" and a container named "c1", commands on "c1" will always act on the container named "c1". For the same scenario in podman, we throw an error about a name collision.
Change Podman to follow Docker, by returning the named container or pod instead of erroring. This should also have a positive effect on performance in the lookup-by-full-name case, which no longer needs to fully traverse the list of all pods or containers.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
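A sketch of the Docker-style precedence, assuming bbolt registry buckets "names" (name to ID) and "ids" (keyed by full ID), both hypothetical: an exact name match returns immediately without traversal, then an exact full ID, then an ID-prefix scan.

```go
package lookup

import (
	"bytes"
	"fmt"

	bolt "go.etcd.io/bbolt"
)

// lookupContainerID resolves user input to a full container ID,
// preferring names over IDs as Docker does.
func lookupContainerID(db *bolt.DB, input string) (string, error) {
	var id string
	err := db.View(func(tx *bolt.Tx) error {
		names := tx.Bucket([]byte("names"))
		ids := tx.Bucket([]byte("ids"))
		if names == nil || ids == nil {
			return fmt.Errorf("registry buckets missing")
		}
		// 1. A named container always wins, as in Docker.
		if v := names.Get([]byte(input)); v != nil {
			id = string(v)
			return nil
		}
		// 2. An exact full-ID match needs no traversal at all.
		if ids.Get([]byte(input)) != nil {
			id = input
			return nil
		}
		// 3. Fall back to an ID-prefix scan.
		prefix := []byte(input)
		c := ids.Cursor()
		for k, _ := c.Seek(prefix); k != nil && bytes.HasPrefix(k, prefix); k, _ = c.Next() {
			if id != "" {
				return fmt.Errorf("more than one result for ID %s", input)
			}
			id = string(k)
		}
		if id == "" {
			return fmt.Errorf("no container with name or ID %q found", input)
		}
		return nil
	})
	return id, err
}
```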