| Commit message | Author | Age |
|
|
|
|
|
|
|
|
|
|
|
|
| |
Because we were not updating the pod ID bucket, removing a pod
with containers still in it (including the infra container, which
is always affected) did not properly update the name registry to
remove the names of any renamed containers. This patch ensures
that all containers are fully removed, even if they have been
renamed.
Fixes #11750
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
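The fix amounts to cleaning up the name registry for every container still registered to the pod. A minimal sketch of that cleanup in a single Bolt transaction, using illustrative bucket names rather than Podman's actual BoltDB schema:

    package state

    import bolt "go.etcd.io/bbolt"

    // removePodContainerNames drops the name-registry entry for every
    // container still recorded under a pod, whatever the container has
    // been renamed to. Bucket names here are hypothetical.
    func removePodContainerNames(db *bolt.DB, podID []byte) error {
        return db.Update(func(tx *bolt.Tx) error {
            names := tx.Bucket([]byte("names"))
            podCtrs := tx.Bucket([]byte("pod-containers"))
            if names == nil || podCtrs == nil {
                return nil
            }
            ctrs := podCtrs.Bucket(podID)
            if ctrs == nil {
                return nil
            }
            // Keys are container IDs, values are their current names.
            return ctrs.ForEach(func(ctrID, ctrName []byte) error {
                return names.Delete(ctrName)
            })
        })
    }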
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We were over-wrapping the error returned when removing a
nonexistent container:
$ podman rm bogus -f
Error: failed to evict container: "": failed to find container "bogus" in state: no container with name or ID bogus found: no such container
Removing the extra wrapping gets us to:
./bin/podman rm bogus -f
Error: no container with name or ID "bogus" found: no such container
Finally, quotes were added around the container name so that it
stands out in the error message; previously it got lost in the
surrounding text.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
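A self-contained illustration of the pattern being removed (not the actual Podman code): the inner lookup error already carries the full context, so wrapping it again only adds noise, and %q quoting makes the name stand out.

    package main

    import (
        "errors"
        "fmt"
    )

    var errNoSuchCtr = errors.New("no such container")

    // lookup already returns a complete, user-readable error.
    func lookup(name string) error {
        return fmt.Errorf("no container with name or ID %q found: %w", name, errNoSuchCtr)
    }

    func main() {
        // Over-wrapped: repeats what the inner error already says.
        if err := lookup("bogus"); err != nil {
            fmt.Println(fmt.Errorf("failed to evict container %q: %w", "", err))
        }
        // Better: return the inner error unchanged.
        if err := lookup("bogus"); err != nil {
            fmt.Println(err)
        }
    }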
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Move the core of the renaming logic into the DB. This guarantees
far more atomicity than we have right now: our current solution,
removing the container from the DB and re-creating it, is *very*
much not atomic and is prone to leaving corrupted state behind if
things go wrong. Moving the logic into the DB removes most, but
not all, of this risk - there's still a potential scenario where
the c/storage rename fails but the Podman rename succeeds, and we
end up with a mismatched state.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
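The atomicity gain comes from doing the whole rename inside one Bolt transaction, which either commits completely or not at all. A sketch under assumed bucket names (not Podman's real schema):

    package state

    import (
        "fmt"

        bolt "go.etcd.io/bbolt"
    )

    // renameContainer swaps the name-registry entry for a container in a
    // single transaction; a failure at any point rolls everything back.
    func renameContainer(db *bolt.DB, ctrID []byte, oldName, newName string) error {
        return db.Update(func(tx *bolt.Tx) error {
            names := tx.Bucket([]byte("names"))
            if names == nil {
                return fmt.Errorf("name registry bucket missing")
            }
            if names.Get([]byte(newName)) != nil {
                return fmt.Errorf("name %q is already in use", newName)
            }
            if err := names.Delete([]byte(oldName)); err != nil {
                return err
            }
            return names.Put([]byte(newName), ctrID)
        })
    }

The remaining gap noted above is that the c/storage rename still happens outside this transaction, so the two can disagree if one of them fails.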
|
|
|
|
|
|
|
|
|
| |
We missed bumping the Go module, so let's do it now :)
* Go code updated automatically with github.com/sirkon/go-imports-rename
* The rest updated manually via `vgrep podman/v2`
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
|
|
|
|
|
|
|
|
| |
Use the whitespace linter and fix the reported problems.
[NO TESTS NEEDED]
Signed-off-by: Paul Holzinger <paul.holzinger@web.de>
|
|
|
|
|
|
|
|
| |
Use the stylecheck linter and fix the reported problems.
[NO TESTS NEEDED]
Signed-off-by: Paul Holzinger <paul.holzinger@web.de>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This implements support for mounting and unmounting volumes
backed by volume plugins. Support for actually retrieving
plugins requires a pull request to land in containers.conf and
then be vendored, and as such is not yet ready. Given this, the
code here is only compile-tested. However, everything past
retrieving the plugin has been written - there is support for
creating, removing, mounting, and unmounting volumes, which
should allow full functionality once the c/common PR is merged.
A major change is the signature of the MountPoint function for
volumes, which now, by necessity, returns an error. Named volumes
managed by a plugin do not have a mountpoint we control; it is
managed entirely by the plugin. As such, we need to cache the
path in the DB, and calls to retrieve it now need to access the
DB (and may fail in doing so).
Notably absent is support for SELinux relabelling and chowning
these volumes. Given that we don't manage the mountpoint for
these volumes, I am extremely reluctant to modify it - we could
easily break the plugin by trying to chown or relabel it.
Also, we had no fewer than *5* separate implementations of
inspecting a volume floating around in pkg/infra/abi and
pkg/api/handlers/libpod, and none of them used volume.Inspect(),
the only correct way of inspecting volumes. Remove them all and
consolidate on the correct way. The compat API is likely still
doing things the wrong way, but that is an issue for another day.
Fixes #4304
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
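A self-contained sketch of the MountPoint change described above, with illustrative types and field names (not libpod's actual definitions): plugin-managed volumes get their path from the DB cache, so the accessor has to be able to fail.

    package main

    import (
        "errors"
        "fmt"
    )

    // Volume is an illustrative stand-in for libpod's volume type.
    type Volume struct {
        pluginManaged bool
        localPath     string
        cachedDBPath  string // recorded in the DB when the plugin mounts the volume
    }

    // MountPoint now returns (string, error) instead of just a string.
    func (v *Volume) MountPoint() (string, error) {
        if v.pluginManaged {
            if v.cachedDBPath == "" {
                return "", errors.New("mountpoint has not been reported by the plugin")
            }
            return v.cachedDBPath, nil
        }
        return v.localPath, nil
    }

    func main() {
        v := &Volume{pluginManaged: true}
        if _, err := v.MountPoint(); err != nil {
            fmt.Println("error:", err)
        }
    }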
|
|
|
|
|
|
|
|
|
|
|
| |
This enables connecting and disconnecting a container from a
given network. It is only for the compatibility layer. Some code
had to be refactored to avoid circular imports.
Additionally, tests are deferred temporarily due to an
incompatibility/bug in either docker-py or our stack.
Signed-off-by: baude <bbaude@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Convert the existing network aliases set/remove code to network
connect and disconnect. We can no longer modify aliases for an
existing network, but we can add and remove entire networks. As
part of this, we need a new function to retrieve the current
aliases for the networks a container is connected to (we had a
table for this as of the first aliases PR, but it was not
externally exposed).
At the same time, remove all deconflicting logic for aliases.
Docker does absolutely no checks of this nature and allows two
containers to have the same aliases, aliases that conflict with
container names, etc. - it's just left to DNS to return all the
IP addresses, and presumably things round-robin from there. Most
tests for the existing code had to be removed because of this.
Convert all uses of the old container config.Networks field,
which previously included all networks in the container, to use
the new DB table. This ensures we actually get an up-to-date list
of in-use networks. Also, add network aliases to the output of
`podman inspect`.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
|
|
|
|
|
|
|
|
|
|
|
|
| |
As part of this, we need two new functions: one for retrieving
all aliases for a network and one for removing all aliases for a
network, both required for testing.
Also, rework handling of some things the tests revealed were
broken (notably conflicts between container names and existing
aliases).
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
|
|
|
|
|
|
|
|
|
| |
The original interface only allowed retrieving aliases for a
specific network, not for all networks. This will allow aliases
to be retrieved for every network the container is present in,
in a single DB operation.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This adds the database backend for network aliases. Aliases are
additional names for a container that are used with the CNI
dnsname plugin - the container will be accessible by these names
in addition to its name. Aliases are allowed to change over time
as the container connects to and disconnects from networks.
Aliases are implemented as another bucket in the database to
register all aliases, plus two buckets for each container (one to
hold connected CNI networks, a second to hold its aliases). The
aliases are only unique per network, so the global and
per-container alias buckets have a sub-bucket for each CNI
network that has aliases, and the aliases are stored within that
sub-bucket. Aliases are formatted as alias (key) to container ID
(value) in both cases.
Three DB functions are defined for aliases: retrieving current
aliases for a given network, setting aliases for a given network,
and removing all aliases for a given network.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
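A minimal sketch of the per-network sub-bucket layout described above, with illustrative bucket names rather than Podman's actual schema:

    package state

    import bolt "go.etcd.io/bbolt"

    // setAliasesForNetwork stores alias (key) -> container ID (value)
    // entries under a sub-bucket named after the CNI network.
    func setAliasesForNetwork(db *bolt.DB, network string, ctrID []byte, aliases []string) error {
        return db.Update(func(tx *bolt.Tx) error {
            all, err := tx.CreateBucketIfNotExists([]byte("aliases"))
            if err != nil {
                return err
            }
            netBkt, err := all.CreateBucketIfNotExists([]byte(network))
            if err != nil {
                return err
            }
            for _, alias := range aliases {
                if err := netBkt.Put([]byte(alias), ctrID); err != nil {
                    return err
                }
            }
            return nil
        })
    }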
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The ListContainers API previously had a Pod parameter, which
determined whether the pod name was returned (but, notably, not
the pod ID, which was returned unconditionally). This was fairly
confusing, so we decided to deprecate/remove the parameter and
return the pod name unconditionally.
To do this without serious performance implications, we need to
avoid expensive JSON decodes of pod configuration in the DB. The
way our Bolt tables are structured, retrieving a name given an ID
is actually quite cheap, but we did not expose this via the
Libpod API. Add a new GetName API to do this.
Fixes #7214
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
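A sketch of why name-by-ID is cheap in this layout, assuming a hypothetical ID-to-name bucket (the real Podman schema differs): it is a single key lookup in a read-only transaction, with no JSON decoding of the pod's configuration.

    package state

    import (
        "fmt"

        bolt "go.etcd.io/bbolt"
    )

    // getName resolves a pod or container name directly from an
    // ID->name index.
    func getName(db *bolt.DB, id []byte) (string, error) {
        var name string
        err := db.View(func(tx *bolt.Tx) error {
            idx := tx.Bucket([]byte("id-to-name"))
            if idx == nil {
                return fmt.Errorf("registry bucket missing")
            }
            v := idx.Get(id)
            if v == nil {
                return fmt.Errorf("no entry for ID %x", id)
            }
            name = string(v)
            return nil
        })
        return name, err
    }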
|
|
|
|
|
|
|
|
|
| |
Check if there is a pod or container and return the appropriate
error message, instead of blindly returning 'container exists'
with `podman create` and 'pod exists' with `podman pod create`.
Signed-off-by: Paul Holzinger <paul.holzinger@web.de>
|
|
|
|
| |
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
With the advent of Podman 2.0.0 we crossed the magical barrier of
Go modules. While we were able to continue importing all packages
inside the project, the project could no longer be vendored from
the outside.
Move the Go module to a new major version and change all imports
to `github.com/containers/libpod/v2`. The renaming of the imports
was done via `gomove` [1].
[1] https://github.com/KSubedi/gomove
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
|
|
|
|
|
|
|
|
|
| |
- misspell
- prealloc
- unparam
- nakedret
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
|
|
|
|
| |
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
|
|
|
|
|
|
|
| |
vendor in c/common config pkg for containers.conf
Signed-off-by: Qi Wang qiwan@redhat.com
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
As part of the rework of exec sessions, we need to address them
independently of containers. In the new API, we need to be able
to fetch them by their ID, regardless of what container they are
associated with. Unfortunately, our existing exec sessions are
stored inside individual containers; given only a session ID,
there is no way to tell which container it belongs to without
scanning every exec session of every container.
This adds a pointer to the associated container to each exec
session in the database. The sessions themselves are still stored
in the container.
Exec-related APIs have been restructured to work with the new
database representation. The originally monolithic API has been
split into a number of smaller calls to allow more fine-grained
control of the lifecycle. Support for legacy exec sessions has
been retained, but in a deprecated fashion; we should remove it
in a few releases.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
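The DB change described amounts to a reverse index from exec session ID to container ID, so a session can be located without scanning containers. A sketch with illustrative bucket names:

    package state

    import (
        "fmt"

        bolt "go.etcd.io/bbolt"
    )

    // registerExecSession records which container owns an exec session.
    func registerExecSession(db *bolt.DB, sessionID, ctrID []byte) error {
        return db.Update(func(tx *bolt.Tx) error {
            bkt, err := tx.CreateBucketIfNotExists([]byte("exec-registry"))
            if err != nil {
                return err
            }
            return bkt.Put(sessionID, ctrID)
        })
    }

    // containerForSession resolves a session ID back to its container ID.
    func containerForSession(db *bolt.DB, sessionID []byte) ([]byte, error) {
        var ctrID []byte
        err := db.View(func(tx *bolt.Tx) error {
            bkt := tx.Bucket([]byte("exec-registry"))
            if bkt == nil {
                return fmt.Errorf("exec registry bucket missing")
            }
            v := bkt.Get(sessionID)
            if v == nil {
                return fmt.Errorf("no container found for session %x", sessionID)
            }
            ctrID = append([]byte{}, v...)
            return nil
        })
        return ctrID, err
    }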
|
|
|
|
| |
Signed-off-by: Dmitry Smirnov <onlyjob@member.fsf.org>
|
|
|
|
|
|
|
|
|
|
|
|
| |
Refactor the `RuntimeConfig` along with related code from libpod into
libpod/config. Note that this is a first step toward consolidating
code into more coherent packages to make the code more maintainable
and less prone to regressions in the long run.
Some libpod definitions were moved to `libpod/define` to resolve
circular dependencies.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
|
|
|
|
| |
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
|
|
|
|
|
|
|
|
|
| |
Add the ability to evict a container when it becomes unusable. This
may happen when the host setup changes after a container is created,
making it impossible for that container to be used or removed.
Evicting a container is done using the `rm --force` command.
Signed-off-by: Marco Vedovati <mvedovati@suse.com>
|
|
|
|
|
|
|
|
|
| |
Volume lookup was written before volume states merged but landed
after them, and CI didn't catch the obvious failure here. Without a
valid state, we try to unmarshal into a nil pointer, and 'volume rm'
is completely broken because of it.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
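A minimal, self-contained illustration of the failure mode (not the actual libpod code): decoding JSON into a nil pointer fails outright instead of populating the state.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type volumeState struct {
        MountCount uint `json:"mountCount"`
    }

    func main() {
        data := []byte(`{"mountCount": 2}`)

        // Broken: the target pointer was never allocated, so Unmarshal errors out.
        var broken *volumeState
        fmt.Println(json.Unmarshal(data, broken))

        // Fixed: allocate the state before decoding into it.
        fixed := new(volumeState)
        fmt.Println(json.Unmarshal(data, fixed), fixed.MountCount)
    }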
|
|
|
|
|
|
|
|
|
|
| |
This isn't included in Docker, but seems handy enough.
Use the new API for 'volume rm' and 'volume inspect'.
Fixes #3891
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
|
|
|
|
|
|
|
|
|
|
|
| |
When we fail to remove a container's SHM, that's an error, and we
need to report it as such. This may be part of our lingering
storage woes.
Also, remove MNT_DETACH. It may be another cause of the storage
removal failures.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
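For reference, a sketch of the unmount behavior being changed, using golang.org/x/sys/unix (illustrative, not the exact libpod call site): dropping MNT_DETACH makes the unmount synchronous, so a busy or failing SHM mount is reported immediately instead of lingering until storage removal.

    package shm

    import (
        "fmt"

        "golang.org/x/sys/unix"
    )

    // cleanupSHM unmounts a container's /dev/shm mount. The previous code
    // passed unix.MNT_DETACH (lazy detach), which can hide a busy mount;
    // a plain synchronous unmount surfaces the error right away.
    func cleanupSHM(shmPath string) error {
        if err := unix.Unmount(shmPath, 0); err != nil {
            return fmt.Errorf("cleaning up container SHM mount %q: %w", shmPath, err)
        }
        return nil
    }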
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When volume options and the local volume driver are specified,
the volume is intended to be mounted using the 'mount' command.
Supported options will be used to mount the volume before the
first container using it starts, and to unmount the volume after
the last container using it dies.
This should work for any local filesystem, though at present I've
only tested with tmpfs and btrfs.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
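As an illustrative usage example, a tmpfs-backed named volume of this kind could be created with something like `podman volume create --opt type=tmpfs --opt device=tmpfs --opt o=size=100m myvol`; the exact options accepted depend on the filesystem being mounted.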
|
|
|
|
|
|
|
|
|
|
|
|
| |
We need to be able to track the number of times a volume has been
mounted for tmpfs/nfs/etc. volumes. As such, we need a mutable
state for volumes. Add one, with the expected update/save methods
in both state implementations.
There is backwards compatibility here, in that older volumes
without a state will still be accepted.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
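A sketch of the kind of mutable state being introduced, with illustrative field names (not libpod's actual definitions): the mount count lets the runtime mount on first use and unmount only when the last user goes away.

    package state

    // VolumeState holds the fields that change at runtime, kept separate
    // from the immutable volume configuration.
    type VolumeState struct {
        // MountCount is how many containers currently have the volume
        // mounted; the volume is unmounted when this drops back to zero.
        MountCount uint `json:"mountCount"`
        // MountPoint may be filled in at mount time for driver-managed
        // volumes.
        MountPoint string `json:"mountPoint,omitempty"`
    }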
|
|
|
|
|
|
|
|
|
|
| |
This will require a 'podman system renumber' after being applied
to get lock numbers for existing volumes.
Add the DB backend code for rewriting volume configs and use it
for updating lock numbers as part of 'system renumber'.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
|
|
|
|
| |
Signed-off-by: baude <bbaude@redhat.com>
|
|
|
|
|
|
|
|
| |
When a podman command is given a partial ID, container and pod
commands now match only container or pod IDs, respectively, in the
BoltDB.
Fixes: #3487
Signed-off-by: Marco Vedovati <mvedovati@suse.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The compilation demands of having libpod in main are a burden for
remote client compilation. To combat this, we should move the use of
libpod structs, vars, constants, and functions into the adapter code,
where it will only be compiled by the local client.
This should result in cleaner code organization and smaller binaries.
It should also help if we ever need to compile the remote client
natively on non-Linux operating systems (not cross-compiled).
Signed-off-by: baude <bbaude@redhat.com>
|
|
|
|
|
|
|
|
| |
Instead, use a less expensive read-only transaction to see if the
DB is ready for use (it probably is), and only fire the expensive
RW transaction if absolutely necessary.
Signed-off-by: Matthew Heon <mheon@redhat.com>
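A sketch of the check-then-upgrade pattern described, using bbolt's View/Update transactions with an illustrative bucket name:

    package state

    import bolt "go.etcd.io/bbolt"

    // ensureDBReady verifies the schema with a cheap read-only transaction
    // and only opens a read-write transaction if setup is actually needed.
    func ensureDBReady(db *bolt.DB) error {
        ready := false
        if err := db.View(func(tx *bolt.Tx) error {
            ready = tx.Bucket([]byte("config")) != nil
            return nil
        }); err != nil {
            return err
        }
        if ready {
            return nil
        }
        return db.Update(func(tx *bolt.Tx) error {
            _, err := tx.CreateBucketIfNotExists([]byte("config"))
            return err
        })
    }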
|
|
|
|
|
|
|
|
|
| |
Mainly adds support for podman build using --overlay mounts.
Updating containers/image also adds better support for the new
registries.conf file.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This swaps the previous handling (parse all volume mounts on the
container and look for ones that might refer to named volumes)
for the new, explicit named volume lists stored per-container.
It also deprecates force-removing volumes that are in use. I
don't know how we want to handle this yet, but leaving containers
that depend on a volume that no longer exists is definitely not
correct.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When looking up a container or pod from user input, we currently
handle collisions between names and IDs differently than Docker.
In Docker, when there is a container with an ID starting with
"c1" and a container named "c1", commands on "c1" will always act
on the container named "c1". For the same scenario in Podman, we
throw an error about a name collision.
Change Podman to follow Docker by returning the named container
or pod instead of erroring.
This should also have a positive effect on performance in the
lookup-by-full-name case, which no longer needs to fully traverse
the list of all pods or containers.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
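A sketch of the lookup order this implies, with illustrative bucket names (not the real schema): an exact name match wins outright, and only then is the input treated as a full or partial ID.

    package state

    import (
        "fmt"
        "strings"

        bolt "go.etcd.io/bbolt"
    )

    func lookupContainerID(db *bolt.DB, input string) ([]byte, error) {
        var id []byte
        err := db.View(func(tx *bolt.Tx) error {
            names := tx.Bucket([]byte("names"))
            ids := tx.Bucket([]byte("ids"))
            if names == nil || ids == nil {
                return fmt.Errorf("registry buckets missing")
            }
            // 1. An exact name match always wins, matching Docker.
            if v := names.Get([]byte(input)); v != nil {
                id = append([]byte{}, v...)
                return nil
            }
            // 2. Otherwise treat the input as a full or partial ID.
            c := ids.Cursor()
            for k, _ := c.Seek([]byte(input)); k != nil && strings.HasPrefix(string(k), input); k, _ = c.Next() {
                if id != nil {
                    return fmt.Errorf("more than one container matches ID prefix %q", input)
                }
                id = append([]byte{}, k...)
            }
            if id == nil {
                return fmt.Errorf("no container with name or ID %q found", input)
            }
            return nil
        })
        return id, err
    }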
|
|
|
|
|
|
|
|
|
| |
If this doesn't match, we end up not being able to access named
volumes mounted into containers, which is bad. Use the same
validation that we use for other critical paths to ensure this
one also matches.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
I was looking into why we have locks in volumes, and I'm fairly
convinced they're unnecessary.
We don't have a state whose accesses we need to guard with locks
and syncs. The only real purpose for the lock was to prevent
concurrent removal of the same volume.
Looking at the code, concurrent removal ought to be fine with a
bit of reordering - one or the other might fail, but we will
successfully evict the volume from the state.
Also, remove the 'prune' bool from RemoveVolume. None of our
other API functions accept it, and it only served to toggle off
more verbose error messages.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
|
|
|
|
|
|
| |
Necessary for rewriting lock IDs as part of renumber.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
|
|
|
|
| |
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
|
|
|
|
|
|
| |
base enablement of the inspect command.
Signed-off-by: baude <bbaude@redhat.com>
|
|
|
|
| |
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
|
|
|
|
|
|
|
|
|
|
| |
During an earlier bugfix, we swapped all instances of
ContainerConfig to Config, which was meant to fix some data we
were returning from Inspect. This unfortunately also renamed a
libpod internal struct for container configs. Undo the rename
here.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
|
|
|
|
|
|
| |
This will more closely match what Docker is doing.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Add support for podman volume and its subcommands.
The commands supported are:
podman volume create
podman volume inspect
podman volume ls
podman volume rm
podman volume prune
This is a tool to manage volumes used by podman. For now it only
handles named volumes, but eventually it will handle all volumes
used by podman.
Signed-off-by: umohnani8 <umohnani@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Instead of storing the runtime's file lock dir in the BoltDB
state, keep a reference to the runtime inside the Bolt state and
use the path stored in the runtime.
This is necessary since we moved DB initialization very far up in
runtime init, before the locks dir is properly initialized (and
DB init must happen before the locks dir can be created, as we
now use the DB to retrieve the proper path for the locks dir).
Signed-off-by: Matthew Heon <mheon@redhat.com>
|
|
|
|
|
|
|
| |
We already create the locks directory as part of the libpod
runtime's init - no need to do it again as part of BoltDB's init.
Signed-off-by: Matthew Heon <mheon@redhat.com>
|
|
|
|
|
|
|
|
|
|
| |
Previously, we implicitly validated runtime configuration against
what was stored in the database as part of database init. Make
this an explicit step, so we can call it after the database has
been initialized. This will allow us to retrieve paths from the
database and use them to overwrite our defaults if they differ.
Signed-off-by: Matthew Heon <mheon@redhat.com>
|
|
|
|
|
|
|
|
|
|
| |
When we create a Libpod database, we store a number of runtime
configuration fields in it. If we can retrieve those, we can use
them to configure the runtime to match the DB instead of built-in
defaults, helping to ensure that we don't error in cases where
our compiled-in defaults have changed.
Signed-off-by: Matthew Heon <mheon@redhat.com>
|