| Commit message | Author | Age |
|
Also, do a general cleanup of all the timeout code. Changes
include:
- Convert from int to *uint where possible. Timeouts cannot be
negative, hence the uint change; and a timeout of 0 is valid,
so we need a new way to detect that the user set a timeout
(hence, pointer).
- Change the name in the database to avoid conflicts between the
new data type and the old one. This will cause timeouts set with
4.2.0 to be lost, but considering nobody is using the feature at
present (and the lack of validation means we could have invalid,
negative timeouts in the DB), this feels safe.
- Ensure volume plugin timeouts can only be used with volumes
created using a plugin. Timeouts on the local driver are
nonsensical.
- Remove the existing test, as it did not use a volume plugin.
Write a new test that does.
The actual plumbing-in of the containers.conf timeout is one line
in volume_api.go; the remainder is the cleanup described above.
Signed-off-by: Matthew Heon <mheon@redhat.com>
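As an illustration of the pointer trick above (a minimal sketch with hypothetical names, not the actual Podman code): a nil *uint means "the user never set a timeout", while 0 remains a valid, explicit value.

    package main

    import "fmt"

    // volumeConfig is a hypothetical stand-in for the real config struct.
    type volumeConfig struct {
        // Timeout is nil when the user never set a timeout; 0 is a valid,
        // explicit value.
        Timeout *uint
    }

    // effectiveTimeout returns the user's timeout if one was set,
    // otherwise the given default.
    func effectiveTimeout(cfg volumeConfig, defaultSecs uint) uint {
        if cfg.Timeout != nil {
            return *cfg.Timeout
        }
        return defaultSecs
    }

    func main() {
        zero := uint(0)
        fmt.Println(effectiveTimeout(volumeConfig{}, 5))               // 5: unset, use default
        fmt.Println(effectiveTimeout(volumeConfig{Timeout: &zero}, 5)) // 0: explicitly set
    }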
|
add an option to configure the driver timeout when creating a volume.
The default is 5 seconds, but this value is too small for some custom drivers.
Signed-off-by: cdoern <cdoern@redhat.com>
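A rough sketch of how a configurable driver timeout with a 5-second default might wrap a plugin call (illustrative only; callDriver and the 10-second override are hypothetical, not Podman's code):

    package main

    import (
        "context"
        "fmt"
        "time"
    )

    // defaultDriverTimeout mirrors the 5-second default mentioned above.
    const defaultDriverTimeout = 5 * time.Second

    // callDriver is a hypothetical placeholder for a volume-plugin request.
    func callDriver(ctx context.Context, op string) error {
        select {
        case <-time.After(1 * time.Second): // pretend the driver takes 1s
            return nil
        case <-ctx.Done():
            return ctx.Err()
        }
    }

    func main() {
        // Start from the default and let a user-configured value for a
        // slow custom driver override it (10s here, purely illustrative).
        timeout := defaultDriverTimeout
        if userTimeout := 10 * time.Second; userTimeout != 0 {
            timeout = userTimeout
        }
        ctx, cancel := context.WithTimeout(context.Background(), timeout)
        defer cancel()
        fmt.Println(callDriver(ctx, "Create")) // <nil>
    }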
|
add a new option to completely disable xfs quota usage for a volume.
An xfs quota set on a volume, even just for tracking disk usage, can
cause weird errors if the volume is later re-used by a container with
a different quota projid. More specifically, link(2) and rename(2)
might fail with EXDEV if the source file has a projid that differs
from that of the parent directory.
To prevent this kind of issue, the volume should be created
beforehand with `podman volume create -o o=noquota $ID`
Closes: https://github.com/containers/podman/issues/14049
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
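A minimal sketch of honoring such an option when deciding whether to set up quota tracking (hypothetical helper and option parsing, not the actual Podman code):

    package main

    import (
        "fmt"
        "strings"
    )

    // quotaDisabled reports whether the volume's mount options ask to skip
    // xfs quota handling entirely.
    func quotaDisabled(mountOpts string) bool {
        for _, opt := range strings.Split(mountOpts, ",") {
            if strings.TrimSpace(opt) == "noquota" {
                return true
            }
        }
        return false
    }

    func main() {
        opts := "noquota,nodev"
        if quotaDisabled(opts) {
            fmt.Println("skipping xfs project quota for this volume")
            return
        }
        fmt.Println("assigning a projid / setting up quota tracking")
    }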
|
WalkDir should be faster than Walk, since we often do
not need to stat files.
[NO NEW TESTS NEEDED] Existing tests should find errors.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
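For reference, filepath.WalkDir (Go 1.16+) hands the callback a fs.DirEntry instead of a fs.FileInfo, so the walk does not have to lstat every entry the way filepath.Walk does. A small standalone example:

    package main

    import (
        "fmt"
        "io/fs"
        "path/filepath"
    )

    func main() {
        // WalkDir passes a fs.DirEntry, so name/type checks need no extra
        // stat call per file.
        err := filepath.WalkDir(".", func(path string, d fs.DirEntry, err error) error {
            if err != nil {
                return err
            }
            if d.IsDir() && d.Name() == ".git" {
                return filepath.SkipDir
            }
            fmt.Println(path)
            return nil
        })
        if err != nil {
            fmt.Println("walk failed:", err)
        }
    }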
|
Fixes: https://github.com/containers/podman/issues/12768
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
|
Automated for .go files via gomove [1]:
`gomove github.com/containers/podman/v3 github.com/containers/podman/v4`
Remaining files via vgrep [2]:
`vgrep github.com/containers/podman/v3`
[1] https://github.com/KSubedi/gomove
[2] https://github.com/vrothberg/vgrep
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
|
Adds support for transferring data between systems and backing up systems.
Use cases: recover from disasters or move data between machines.
Signed-off-by: flouthoc <flouthoc.git@gmail.com>
|
[NO TESTS NEEDED] Since it is difficult to set up xfs quota
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1982164
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
|
We missed bumping the go module, so let's do it now :)
* Automated go code with github.com/sirkon/go-imports-rename
* The rest manually via `vgrep podman/v2`
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
|
The old copy-up implementation was very unhappy with symlinks,
which could cause containers to fail to start for unclear reasons
when a directory we wanted to copy-up contained one. Rewrite to
use the Buildah Copier, which is more recent and should be both
safer and less likely to blow up over links.
At the same time, fix a deadlock in copy-up for volumes requiring
mounting - the Mountpoint() function tried to take the
already-acquired volume lock.
Fixes #6003
Signed-off-by: Matthew Heon <mheon@redhat.com>
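The deadlock described above (a public accessor re-taking a lock the caller already holds) is usually avoided by splitting out an unlocked helper. A sketch of that pattern with hypothetical names, not the actual libpod code:

    package main

    import (
        "fmt"
        "sync"
    )

    // volume is a hypothetical stand-in for the real libpod type.
    type volume struct {
        lock sync.Mutex
        path string
    }

    // MountPoint is the public accessor: it takes the volume lock itself.
    func (v *volume) MountPoint() string {
        v.lock.Lock()
        defer v.lock.Unlock()
        return v.mountPoint()
    }

    // mountPoint is the unlocked internal helper. Code that already holds
    // v.lock (such as a copy-up routine) calls this one, avoiding the
    // self-deadlock of re-acquiring the same, non-reentrant lock.
    func (v *volume) mountPoint() string {
        return v.path
    }

    func (v *volume) copyUp() {
        v.lock.Lock()
        defer v.lock.Unlock()
        // Calling v.MountPoint() here would deadlock; use the helper.
        fmt.Println("copying image content into", v.mountPoint())
    }

    func main() {
        v := &volume{path: "/var/lib/containers/storage/volumes/demo/_data"}
        v.copyUp()
        fmt.Println(v.MountPoint())
    }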
|
This implements support for mounting and unmounting volumes
backed by volume plugins. Support for actually retrieving
plugins requires a pull request to land in containers.conf and
then be vendored here, and as such is not yet ready. Given
this, the code is only compile-tested. However, the code for
everything past retrieving the plugin has been written - there is
support for creating, removing, mounting, and unmounting volumes,
which should allow full functionality once the c/common PR is
merged.
A major change is the signature of the MountPoint function for
volumes, which now, by necessity, returns an error. Named volumes
managed by a plugin do not have a mountpoint we control; instead,
it is managed entirely by the plugin. As such, we need to cache
the path in the DB, and calls to retrieve it now need to access
the DB (and may fail as such).
Notably absent is support for SELinux relabelling and chowning
these volumes. Given that we don't manage the mountpoint for
these volumes, I am extremely reluctant to try and modify it - we
could easily break the plugin trying to chown or relabel it.
Also, we had no fewer than *5* separate implementations of
inspecting a volume floating around in pkg/infra/abi and
pkg/api/handlers/libpod, and none of them used volume.Inspect(),
the only correct way of inspecting volumes. Remove them all and
consolidate on the correct one. The compat API is likely still
doing things the wrong way, but that is an issue for another day.
Fixes #4304
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
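A sketch of the signature change described above (hypothetical, simplified types, not the real libpod code): MountPoint now returns an error because, for plugin-managed volumes, the path has to come from a cached DB entry and the lookup can fail.

    package main

    import (
        "errors"
        "fmt"
    )

    // db is a hypothetical stand-in for the libpod state/DB layer.
    type db map[string]string

    func (d db) volumePath(name string) (string, error) {
        p, ok := d[name]
        if !ok {
            return "", errors.New("no cached mountpoint for volume " + name)
        }
        return p, nil
    }

    type volume struct {
        name   string
        plugin bool   // true when a volume plugin manages the mountpoint
        local  string // path we control for non-plugin volumes
        state  db
    }

    // MountPoint returns an error: for plugin volumes the path is not
    // ours, so it must be read back from the DB.
    func (v *volume) MountPoint() (string, error) {
        if v.plugin {
            return v.state.volumePath(v.name)
        }
        return v.local, nil
    }

    func main() {
        v := &volume{name: "plugvol", plugin: true, state: db{"plugvol": "/run/plugin/plugvol"}}
        path, err := v.MountPoint()
        fmt.Println(path, err)
    }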
|
This change adds code to report the reclaimed space after a prune.
Reclaimed space from volumes, images, and containers is recorded
during the prune call in a PruneReport struct. These structs are
collected into a slice during a system prune and processed afterwards
to calculate the total reclaimed space.
Closes #8658
Signed-off-by: Baron Lenardson <lenardson.baron@gmail.com>
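A simplified sketch of the reporting flow (the struct here is illustrative, not necessarily the exact Podman type): collect one report per pruned object and sum the sizes afterwards.

    package main

    import "fmt"

    // pruneReport is a simplified version of the report described above:
    // one entry per removed object, with the bytes it freed.
    type pruneReport struct {
        ID   string
        Size uint64
        Err  error
    }

    func totalReclaimed(reports []pruneReport) uint64 {
        var total uint64
        for _, r := range reports {
            if r.Err == nil {
                total += r.Size
            }
        }
        return total
    }

    func main() {
        reports := []pruneReport{
            {ID: "volume1", Size: 4096},
            {ID: "image:sha256:abc", Size: 72 << 20},
            {ID: "container1", Size: 1 << 20},
        }
        fmt.Printf("Total reclaimed space: %d bytes\n", totalReclaimed(reports))
    }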
|
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
|
With the advent of Podman 2.0.0 we crossed the magical barrier of go
modules. While we were able to continue importing all packages inside
of the project, the project could not be vendored anymore from the
outside.
Move the go module to the new major version and change all imports to
`github.com/containers/libpod/v2`. The renaming of the imports
was done via `gomove` [1].
[1] https://github.com/KSubedi/gomove
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
|
container: move volume chown after spec generation
move the chown for newly created volumes to after spec generation so
the correct UID/GID are known.
Closes: https://github.com/containers/libpod/issues/5698
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
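The ordering matters because the UID/GID only become known once the spec is generated. A minimal sketch of that sequence with hypothetical names (not the actual libpod code):

    package main

    import (
        "fmt"
        "os"
    )

    // specUser is a hypothetical stand-in for the UID/GID resolved while
    // generating the OCI spec (e.g. from --user or the image's USER).
    type specUser struct {
        UID, GID int
    }

    // chownNewVolume is only safe to call *after* spec generation, because
    // only then are the container's UID and GID actually known.
    func chownNewVolume(mountPoint string, u specUser) error {
        return os.Chown(mountPoint, u.UID, u.GID)
    }

    func main() {
        dir, err := os.MkdirTemp("", "volume-data-")
        if err != nil {
            panic(err)
        }
        defer os.RemoveAll(dir)

        // 1. generate the spec (elided), which yields the user...
        u := specUser{UID: os.Getuid(), GID: os.Getgid()}
        // 2. ...and only then chown the newly created volume.
        fmt.Println(chownNewVolume(dir, u))
    }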
|
The dangling filter determines whether a volume is dangling - i.e.,
it has no containers attached using it. Unlike our other filters,
this one is a boolean - it must be true or false, not arbitrary
values.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
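A sketch of a strictly boolean filter of this kind (hypothetical helper, not the real libpod filter code): the value must parse as true/false, and "dangling" means no containers use the volume.

    package main

    import (
        "fmt"
        "strconv"
    )

    // danglingFilter returns a predicate over "number of containers using
    // the volume". Unlike other filters, the value must be a boolean.
    func danglingFilter(value string) (func(usedBy int) bool, error) {
        wantDangling, err := strconv.ParseBool(value)
        if err != nil {
            return nil, fmt.Errorf("dangling filter requires true/false, got %q: %w", value, err)
        }
        return func(usedBy int) bool {
            isDangling := usedBy == 0
            return isDangling == wantDangling
        }, nil
    }

    func main() {
        match, err := danglingFilter("true")
        if err != nil {
            panic(err)
        }
        fmt.Println(match(0), match(2)) // true false

        if _, err := danglingFilter("maybe"); err != nil {
            fmt.Println(err) // arbitrary values are rejected
        }
    }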
|
add system df, info, load, renumber, and migrate
Refactor for specialized libpod engines
add the ability to prune images, volumes, containers, and pods
Signed-off-by: baude <bbaude@redhat.com>
|
add binding tests for volumes: inspect (get), create, remove, prune, and list
implement filtering ability for volumes
Signed-off-by: Brent Baude <bbaude@redhat.com>
|
In Podman 1.6.3, we added support for anonymous volumes - fixing
our old, broken support for named volumes that were created with
containers. Unfortunately, this reused the database field we used
for the old implementation, and toggled volume removal on for
`podman run --rm` - so now, we were removing *named* volumes
created with older versions of Podman.
We can't modify these old volumes in the DB, so the next-safest
thing to do is swap to a new field to indicate volumes should be
removed. Problem: volumes created with 1.6.3 and later, up until this
lands (even anonymous volumes), will not be removed. However, this
is safer than removing too many volumes, as we were doing before.
Fixes #5009
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
|
Previously, we only did this for volumes created at the same time
as the container. However, this is not correct behavior - Docker
does so for all named volumes, even those made with
'podman volume create' and mounted into a container later.
Fixes #3945
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
|
When volume options and the local volume driver are specified,
the volume is intended to be mounted using the 'mount' command.
Supported options will be used to mount the volume before the
first container using it starts, and unmount the volume after the
last container using it dies.
This should work for any local filesystem, though at present I've
only tested with tmpfs and btrfs.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
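A rough sketch of the first-mount/last-unmount idea with a use counter (hypothetical types; the unix.Mount/Unmount calls need root and a real device, so this is illustrative only):

    package main

    import (
        "fmt"

        "golang.org/x/sys/unix"
    )

    // localVolume is a hypothetical sketch of a local-driver volume with
    // mount options; the real bookkeeping lives in the volume state.
    type localVolume struct {
        device, mountPoint, fsType, options string
        mountCount                          int
    }

    // mount is called when a container using the volume starts: only the
    // first user actually performs the mount(2). Requires root.
    func (v *localVolume) mount() error {
        if v.mountCount == 0 {
            if err := unix.Mount(v.device, v.mountPoint, v.fsType, 0, v.options); err != nil {
                return err
            }
        }
        v.mountCount++
        return nil
    }

    // unmount is called when a container using the volume dies: only the
    // last user unmounts.
    func (v *localVolume) unmount() error {
        if v.mountCount == 1 {
            if err := unix.Unmount(v.mountPoint, 0); err != nil {
                return err
            }
        }
        if v.mountCount > 0 {
            v.mountCount--
        }
        return nil
    }

    func main() {
        v := &localVolume{device: "tmpfs", mountPoint: "/mnt/demo", fsType: "tmpfs", options: "size=64m"}
        fmt.Println(v.mount(), v.unmount()) // needs root and /mnt/demo; errors otherwise
    }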
|
We need to be able to track the number of times a volume has been
mounted for tmpfs/nfs/etc volumes. As such, we need a mutable
state for volumes. Add one, with the expected update/save methods
in both states.
There is backwards compat here, in that older volumes without a
state will still be accepted.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
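A minimal sketch of a mutable volume state with the update/save round trip (hypothetical types; the real state lives in the libpod DB backends):

    package main

    import (
        "fmt"
        "sync"
    )

    // volumeState is the mutable, runtime-changing part of a volume,
    // kept separate from its immutable config.
    type volumeState struct {
        MountCount uint
        MountPoint string
    }

    // store mimics the DB: save writes the state back, update re-reads it.
    type store struct {
        mu     sync.Mutex
        states map[string]volumeState
    }

    func (s *store) save(name string, st volumeState) {
        s.mu.Lock()
        defer s.mu.Unlock()
        s.states[name] = st
    }

    func (s *store) update(name string) volumeState {
        s.mu.Lock()
        defer s.mu.Unlock()
        // Older volumes created before states existed simply have no entry;
        // returning the zero value keeps them working (backwards compat).
        return s.states[name]
    }

    func main() {
        db := &store{states: map[string]volumeState{}}
        st := db.update("nfsvol") // zero value for a pre-existing volume
        st.MountCount++
        db.save("nfsvol", st)
        fmt.Printf("%+v\n", db.update("nfsvol"))
    }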
|
In upcoming commits, we're going to turn on the backends for
these fields. Volumes with these set will act fundamentally
differently from other volumes. There will probably be validation
required for each field.
Until now, though, we've freely allowed creation of volumes with
these set - they just did nothing. So we have no idea what could
be in the DB with old volumes.
Change the struct tags so we don't have to worry about old,
unvalidated data. We'll start fresh with new volumes.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
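An illustration of the struct-tag trick (field and tag names are hypothetical): renaming the JSON tag means old, unvalidated DB values simply no longer deserialize into the field, so new volumes start fresh.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Old config: options were stored under the "options" key and never
    // validated.
    type volumeConfigOld struct {
        Options map[string]string `json:"options,omitempty"`
    }

    // New config: same Go field, but a different JSON tag. Old DB entries
    // that still carry "options" simply don't populate it anymore.
    type volumeConfigNew struct {
        Options map[string]string `json:"volumeOptions,omitempty"`
    }

    func main() {
        oldBlob, _ := json.Marshal(volumeConfigOld{Options: map[string]string{"o": "garbage"}})

        var cfg volumeConfigNew
        _ = json.Unmarshal(oldBlob, &cfg)
        fmt.Printf("%+v\n", cfg) // Options is nil: stale data is ignored
    }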
|
This will require a 'podman system renumber' after being applied
to get lock numbers for existing volumes.
Add the DB backend code for rewriting volume configs and use it
for updating lock numbers as part of 'system renumber'.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
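Roughly, the renumber step for a volume looks like this (all names hypothetical, not the actual libpod code): allocate a fresh lock, store its number in the config, and rewrite the config in the DB.

    package main

    import "fmt"

    // volumeConfig and lockAllocator are hypothetical stand-ins for the
    // real libpod types; the point is only the renumbering flow.
    type volumeConfig struct {
        Name    string
        LockNum uint32
    }

    type lockAllocator struct{ next uint32 }

    func (a *lockAllocator) allocate() uint32 {
        n := a.next
        a.next++
        return n
    }

    // renumberVolume allocates a fresh lock for the volume and rewrites
    // its config in the DB (modelled here as a simple map).
    func renumberVolume(cfg *volumeConfig, locks *lockAllocator, db map[string]volumeConfig) {
        cfg.LockNum = locks.allocate()
        db[cfg.Name] = *cfg // the "rewrite volume config" step
    }

    func main() {
        db := map[string]volumeConfig{}
        locks := &lockAllocator{}
        stale := &volumeConfig{Name: "myvol", LockNum: 42} // number from before the renumber
        renumberVolume(stale, locks, db)
        fmt.Printf("%+v\n", db["myvol"])
    }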
|
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
|
Begin to separate the internal structures and frontend for
inspect on volumes. We can't rely on keeping internal data
structures for external presentation - separating presentation
and internal data format is good practice.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
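A sketch of the separation (hypothetical, simplified types): the internal, DB-backed struct is converted into a dedicated presentation struct for inspect, so the two can evolve independently.

    package main

    import (
        "encoding/json"
        "os"
        "time"
    )

    // volumeConfig is the internal, DB-backed structure, including
    // details that should never leak to users.
    type volumeConfig struct {
        Name       string
        MountPoint string
        CreatedAt  time.Time
        internalID uint64
    }

    // InspectVolumeData is the presentation type: only what inspect
    // should print.
    type InspectVolumeData struct {
        Name       string    `json:"Name"`
        Mountpoint string    `json:"Mountpoint"`
        CreatedAt  time.Time `json:"CreatedAt"`
    }

    // inspect converts the internal format into the presentation format.
    func inspect(cfg volumeConfig) InspectVolumeData {
        return InspectVolumeData{Name: cfg.Name, Mountpoint: cfg.MountPoint, CreatedAt: cfg.CreatedAt}
    }

    func main() {
        cfg := volumeConfig{Name: "myvol", MountPoint: "/volumes/myvol/_data", CreatedAt: time.Now(), internalID: 7}
        enc := json.NewEncoder(os.Stdout)
        enc.SetIndent("", "  ")
        _ = enc.Encode(inspect(cfg))
    }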
|
We're no longer using either of these JSON libraries; we dropped
them in favor of jsoniter. We can't completely remove ffjson as
c/storage uses it and can't easily migrate, but we can make sure
that libpod itself isn't doing anything with them anymore.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
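jsoniter is designed as a drop-in for encoding/json; the usual replacement pattern looks like this (a small standalone example, not libpod's exact code):

    package main

    import (
        "fmt"

        jsoniter "github.com/json-iterator/go"
    )

    // A package-level drop-in replacement for encoding/json: callers keep
    // using json.Marshal / json.Unmarshal unchanged.
    var json = jsoniter.ConfigCompatibleWithStandardLibrary

    type volume struct {
        Name   string            `json:"name"`
        Labels map[string]string `json:"labels,omitempty"`
    }

    func main() {
        out, err := json.Marshal(volume{Name: "myvol", Labels: map[string]string{"app": "db"}})
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out))
    }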
|
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
|
When removing volumes with rm --volumes, we want to remove only
volumes that were created with the container. Volumes created
separately via 'podman volume create' should not be removed.
Also ensure that --rm implies volumes will be removed.
Fixes #2441
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
|
I was looking into why we have locks in volumes, and I'm fairly
convinced they're unnecessary.
We don't have a state whose accesses we need to guard with locks
and syncs. The only real purpose for the lock was to prevent
concurrent removal of the same volume.
Looking at the code, concurrent removal ought to be fine with a
bit of reordering - one or the other might fail, but we will
successfully evict the volume from the state.
Also, remove the 'prune' bool from RemoveVolume. None of our
other API functions accept it, and it only served to toggle off
more verbose error messages.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
|
Remove runtime's lockDir as it is no longer needed after the lock
rework.
Add a trivial in-memory lock manager for unit testing.
Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
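A trivial in-memory lock manager of the kind usable in unit tests could look like this (hypothetical interface and names, not the actual libpod lock API): numbered locks backed by plain mutexes, with no SHM or files involved.

    package main

    import (
        "fmt"
        "sync"
    )

    // inMemoryLockManager hands out numbered locks backed by plain mutexes.
    type inMemoryLockManager struct {
        mu    sync.Mutex
        locks map[uint32]*sync.Mutex
        next  uint32
    }

    func newInMemoryLockManager() *inMemoryLockManager {
        return &inMemoryLockManager{locks: map[uint32]*sync.Mutex{}}
    }

    // AllocateLock reserves a new lock and returns its number.
    func (m *inMemoryLockManager) AllocateLock() uint32 {
        m.mu.Lock()
        defer m.mu.Unlock()
        id := m.next
        m.next++
        m.locks[id] = &sync.Mutex{}
        return id
    }

    // Lock / Unlock operate on a previously allocated lock.
    func (m *inMemoryLockManager) Lock(id uint32)   { m.locks[id].Lock() }
    func (m *inMemoryLockManager) Unlock(id uint32) { m.locks[id].Unlock() }

    func main() {
        mgr := newInMemoryLockManager()
        id := mgr.AllocateLock()
        mgr.Lock(id)
        fmt.Println("holding lock", id)
        mgr.Unlock(id)
    }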
|
Add support for podman volume and its subcommands.
The commands supported are:
podman volume create
podman volume inspect
podman volume ls
podman volume rm
podman volume prune
This is a tool to manage volumes used by podman. For now it only handles
named volumes, but eventually it will handle all volumes used by podman.
Signed-off-by: umohnani8 <umohnani@redhat.com>
|