path: root/libpod/runtime_renumber.go
* libpod/runtime: switch to golang native error wrapping (Sascha Grunert, 2022-07-04)
  We now use the golang error wrapping format specifier `%w` instead of the
  deprecated github.com/pkg/errors package.
  [NO NEW TESTS NEEDED]
  Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
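  A minimal before/after sketch of the change this migration implies (the
  identifiers below are illustrative, not taken from the actual diff):

      package main

      import (
          "errors"
          "fmt"
      )

      var errNoSuchCtr = errors.New("no such container")

      // Before: errors.Wrapf(errNoSuchCtr, "container %s", id) via the
      // deprecated github.com/pkg/errors package.
      // After: the standard library's %w verb wraps the error natively.
      func lookupContainer(id string) error {
          return fmt.Errorf("container %s: %w", id, errNoSuchCtr)
      }

      func main() {
          err := lookupContainer("deadbeef")
          // errors.Is unwraps the %w chain, so callers behave the same.
          fmt.Println(errors.Is(err, errNoSuchCtr)) // prints: true
      }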
* auto update: create an event (Valentin Rothberg, 2022-05-23)
  Create an auto-update event for each invocation, independent of whether
  images and containers are updated or not. The updates already show up in
  the events, but users will now know why they happened.
  Fixes: #14283
  Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
* bump go module to version 4 (Valentin Rothberg, 2022-01-18)
  Automated for .go files via gomove [1]:
    `gomove github.com/containers/podman/v3 github.com/containers/podman/v4`
  Remaining files via vgrep [2]:
    `vgrep github.com/containers/podman/v3`
  [1] https://github.com/KSubedi/gomove
  [2] https://github.com/vrothberg/vgrep
  Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
* bump go module to v3 (Valentin Rothberg, 2021-02-22)
  We missed bumping the go module, so let's do it now :)
  * Automated go code with github.com/sirkon/go-imports-rename
  * Manually via `vgrep podman/v2` the rest
  Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
* Switch all references to github.com/containers/libpod -> podman (Daniel J Walsh, 2020-07-28)
  Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* move go module to v2 (Valentin Rothberg, 2020-07-06)
  With the advent of Podman 2.0.0 we crossed the magical barrier of go
  modules. While we were able to continue importing all packages inside of
  the project, the project could not be vendored anymore from the outside.
  Move the go module to the new major version and change all imports to
  `github.com/containers/libpod/v2`. The renaming of the imports was done
  via `gomove` [1].
  [1] https://github.com/KSubedi/gomove
  Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
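  The vendoring problem comes from Go's semantic import versioning: once a
  module tags v2.0.0 or later, the major version must appear in the module
  path, or consumers outside the repository cannot resolve it. A sketch of
  the go.mod side of the change (the commit also rewrites every import):

      // go.mod before: fine for in-tree imports, but unvendorable from
      // the outside once v2.0.0 was tagged.
      module github.com/containers/libpod

      // go.mod after: external projects can vendor the v2 release, and
      // in-tree imports become github.com/containers/libpod/v2/...
      module github.com/containers/libpod/v2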
* Re-add locks to volumes. (Matthew Heon, 2019-08-28)
  This will require a 'podman system renumber' after being applied to get
  lock numbers for existing volumes.
  Add the DB backend code for rewriting volume configs and use it for
  updating lock numbers as part of 'system renumber'.
  Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* Add System event type and renumber, refresh events (Matthew Heon, 2019-04-25)
  Also, re-add locking to file eventer Write() to protect against concurrent
  events.
  Signed-off-by: Matthew Heon <mheon@redhat.com>
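  A sketch of the kind of locking this re-adds, using illustrative names
  rather than libpod's actual eventer type:

      package events

      import (
          "os"
          "sync"
      )

      // EventLogFile is a stand-in for a file-backed eventer.
      type EventLogFile struct {
          mu   sync.Mutex
          path string
      }

      // Write appends one serialized event. The mutex keeps concurrent
      // writers from interleaving partial lines in the log file.
      func (e *EventLogFile) Write(event string) error {
          e.mu.Lock()
          defer e.mu.Unlock()
          f, err := os.OpenFile(e.path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o600)
          if err != nil {
              return err
          }
          defer f.Close()
          _, err = f.WriteString(event + "\n")
          return err
      }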
* Do not make renumber shut down the runtime (Matthew Heon, 2019-02-21)
  The original intent behind the requirement was to ensure that, if two SHM
  lock structs were open at the same time, we should not make such a runtime
  available to the user, and should clean it up instead.
  It turns out that we don't even need to open a second SHM lock struct - if
  we get an error mapping the first one due to a lock count mismatch, we can
  just delete it, and it cleans itself up when it errors. So there's no
  reason not to return a valid runtime.
  Signed-off-by: Matthew Heon <matthew.heon@pm.me>
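  The resulting control flow, sketched with illustrative stand-ins (libpod's
  real SHM lock code differs in detail):

      package shmsketch

      import "errors"

      // SHMLocks and the helpers below are stand-ins, not libpod's API.
      type SHMLocks struct{ numLocks uint32 }

      var errCountMismatch = errors.New("lock count mismatch")

      // openExisting maps the existing segment; stubbed here to simulate a
      // segment that was created with a different lock count.
      func openExisting(path string, n uint32) (*SHMLocks, error) {
          return nil, errCountMismatch
      }

      // createNew creates and maps a fresh segment sized for n locks.
      func createNew(path string, n uint32) (*SHMLocks, error) {
          return &SHMLocks{numLocks: n}, nil
      }

      // If mapping the existing segment fails (e.g. a lock count mismatch),
      // the failed open has already cleaned itself up, so we simply create a
      // fresh segment - and can still hand back a valid runtime.
      func openOrRecreate(path string, n uint32) (*SHMLocks, error) {
          if locks, err := openExisting(path, n); err == nil {
              return locks, nil
          }
          return createNew(path, n)
      }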
* Move RenumberLocks into runtime init (Matthew Heon, 2019-02-21)
  We can't do renumbering after init - we need to open a potentially invalid
  locks file (too many/too few locks), and then potentially delete the old
  locks and make new ones. We need to be in init to bypass the checks that
  would otherwise make this impossible.
  This leaves us with two choices: make RenumberLocks a separate entrypoint
  from NewRuntime, duplicating a lot of configuration load code (we need to
  know where the locks live, how many there are, etc) - or modify NewRuntime
  to allow renumbering during it. Previous experience says the first is not
  really a viable option and produces massive code bloat, so the second it is.
  Signed-off-by: Matthew Heon <matthew.heon@pm.me>
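  A heavily simplified sketch of the shape this gives the constructor (the
  names are illustrative, not podman's actual API):

      package initsketch

      // Runtime stands in for libpod's Runtime.
      type Runtime struct{}

      func (r *Runtime) loadConfig() error    { return nil } // lock dir, lock count, ...
      func (r *Runtime) renumberLocks() error { return nil } // delete old locks, make new ones
      func (r *Runtime) finishInit() error    { return nil } // remaining init steps

      // newRuntime sketches the chosen design: renumbering rides through the
      // normal init path behind a flag, reusing the configuration load
      // instead of duplicating it in a second entrypoint.
      func newRuntime(renumber bool) (*Runtime, error) {
          r := &Runtime{}
          if err := r.loadConfig(); err != nil {
              return nil, err
          }
          if renumber {
              // Still inside init: the usual lock validity checks have not
              // run yet, so the old locks can be replaced here.
              if err := r.renumberLocks(); err != nil {
                  return nil, err
              }
          }
          return r, r.finishInit()
      }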
* Remove locks from volumes (Matthew Heon, 2019-02-21)
  I was looking into why we have locks in volumes, and I'm fairly convinced
  they're unnecessary. We don't have a state whose accesses we need to guard
  with locks and syncs. The only real purpose for the lock was to prevent
  concurrent removal of the same volume. Looking at the code, concurrent
  removal ought to be fine with a bit of reordering - one or the other might
  fail, but we will successfully evict the volume from the state.
  Also, remove the 'prune' bool from RemoveVolume. None of our other API
  functions accept it, and it only served to toggle off more verbose error
  messages.
  Signed-off-by: Matthew Heon <matthew.heon@pm.me>
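  The reordering argument, sketched with illustrative types (the real
  RemoveVolume does considerably more):

      package volsketch

      import "os"

      // Volume and State are stand-ins for libpod's types.
      type Volume struct{ MountPoint string }

      type State interface{ RemoveVolume(v *Volume) error }

      // removeVolume evicts the volume from the state first: of two racing
      // removals exactly one wins that step, the loser fails cleanly, and
      // either way the volume ends up evicted - no per-volume lock needed.
      func removeVolume(db State, vol *Volume) error {
          if err := db.RemoveVolume(vol); err != nil {
              return err // a concurrent removal already evicted it
          }
          return os.RemoveAll(vol.MountPoint) // tear down the backing data
      }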
* Expand renumber to also renumber pod locks (Matthew Heon, 2019-02-21)
  Signed-off-by: Matthew Heon <matthew.heon@pm.me>
* Add initial version of renumber backend (Matthew Heon, 2019-02-21)
  Renumber is a way of renumbering container locks after the number of locks
  available has changed. For now, renumber only works with containers.
  Signed-off-by: Matthew Heon <matthew.heon@pm.me>
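  A sketch of what such a backend plausibly does, with illustrative names
  and signatures (the real implementation lives in this file,
  runtime_renumber.go):

      package renumbersketch

      import "fmt"

      // Container, LockManager, and State are stand-ins for libpod's types.
      type Container struct {
          ID     string
          LockID uint32
      }

      type LockManager interface{ AllocateLock() (uint32, error) }

      type State interface{ RewriteContainerConfig(c *Container) error }

      // renumberLocks walks every container, hands it a lock from the
      // freshly sized pool, and persists the new number so it survives
      // restarts.
      func renumberLocks(ctrs []*Container, locks LockManager, db State) error {
          for _, ctr := range ctrs {
              id, err := locks.AllocateLock()
              if err != nil {
                  return fmt.Errorf("allocating lock for container %s: %w", ctr.ID, err)
              }
              ctr.LockID = id
              if err := db.RewriteContainerConfig(ctr); err != nil {
                  return err
              }
          }
          return nil
      }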