Commit messages

Convert the existing network aliases set/remove code to network
connect and disconnect. We can no longer modify aliases for an
existing network, but we can add and remove entire networks. As
part of this, we need to add a new function to retrieve current
aliases the container is connected to (we had a table for this
as of the first aliases PR, but it was not externally exposed).
At the same time, remove all deconflicting logic for aliases.
Docker does absolutely no checks of this nature, and allows two
containers to have the same aliases, aliases that conflict with
container names, etc - it's just left to DNS to return all the
IP addresses, and presumably we round-robin from there? Most
tests for the existing code had to be removed because of this.
Convert all uses of the old container config.Networks field,
which previously included all networks in the container, to use
the new DB table. This ensures we actually get an up-to-date list
of in-use networks. Also, add network aliases to the output of
`podman inspect`.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
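
A minimal sketch of what reading the new per-container networks table could look like with bbolt; the bucket names ("containers", "networks") and the helper are assumptions for illustration, not libpod's actual schema.

```go
package sketch

import (
	"fmt"

	bolt "go.etcd.io/bbolt"
)

// connectedNetworks lists the CNI networks a container is currently joined
// to, reading a per-container networks sub-bucket in one read-only transaction.
func connectedNetworks(db *bolt.DB, ctrID string) ([]string, error) {
	var networks []string
	err := db.View(func(tx *bolt.Tx) error {
		ctrs := tx.Bucket([]byte("containers"))
		if ctrs == nil {
			return fmt.Errorf("no containers bucket")
		}
		ctr := ctrs.Bucket([]byte(ctrID))
		if ctr == nil {
			return fmt.Errorf("container %s not found", ctrID)
		}
		// Hypothetical sub-bucket holding one key per connected CNI network.
		netBkt := ctr.Bucket([]byte("networks"))
		if netBkt == nil {
			return nil // not connected to any networks
		}
		return netBkt.ForEach(func(name, _ []byte) error {
			networks = append(networks, string(name))
			return nil
		})
	})
	return networks, err
}
```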
As part of this, we need two new functions: one for retrieving all
aliases for a network and one for removing all aliases for a network,
both required for testing.
Also, rework handling for some issues the tests uncovered as broken
(notably, conflicts between container names and existing aliases).
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
The original interface only allowed retrieving aliases for a
specific network, not for all networks. This will allow aliases
to be retrieved for every network the container is present in,
in a single DB operation.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
This adds the database backend for network aliases. Aliases are
additional names for a container that are used with the CNI
dnsname plugin - the container will be accessible by these names
in addition to its name. Aliases are allowed to change over time
as the container connects to and disconnects from networks.
Aliases are implemented as another bucket in the database to
register all aliases, plus two buckets for each container (one to
hold connected CNI networks, a second to hold its aliases). The
aliases are only unique per-network, so the global and
per-container aliases buckets have a sub-bucket for each CNI
network that has aliases, and the aliases are stored within that
sub-bucket. Aliases are formatted as alias (key) to container ID
(value) in both cases.
Three DB functions are defined for aliases: retrieving current
aliases for a given network, setting aliases for a given network,
and removing all aliases for a given network.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
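
The bucket layout described above can be pictured with a small, hedged bbolt sketch; the bucket names and helpers below are illustrative, not the real libpod schema.

```go
package sketch

import (
	bolt "go.etcd.io/bbolt"
)

var aliasesBkt = []byte("aliases")

// setAliases registers aliases for ctrID on the given network, storing them
// as alias (key) -> container ID (value) under a per-network sub-bucket.
func setAliases(db *bolt.DB, network, ctrID string, aliases []string) error {
	return db.Update(func(tx *bolt.Tx) error {
		allAliases, err := tx.CreateBucketIfNotExists(aliasesBkt)
		if err != nil {
			return err
		}
		netBkt, err := allAliases.CreateBucketIfNotExists([]byte(network))
		if err != nil {
			return err
		}
		for _, alias := range aliases {
			// Aliases are only unique within one network's sub-bucket.
			if err := netBkt.Put([]byte(alias), []byte(ctrID)); err != nil {
				return err
			}
		}
		return nil
	})
}

// getAliases returns every alias registered on the given network.
func getAliases(db *bolt.DB, network string) (map[string]string, error) {
	out := make(map[string]string)
	err := db.View(func(tx *bolt.Tx) error {
		allAliases := tx.Bucket(aliasesBkt)
		if allAliases == nil {
			return nil
		}
		netBkt := allAliases.Bucket([]byte(network))
		if netBkt == nil {
			return nil
		}
		return netBkt.ForEach(func(alias, ctrID []byte) error {
			out[string(alias)] = string(ctrID)
			return nil
		})
	})
	return out, err
}
```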
The ListContainers API previously had a Pod parameter, which
determined if pod name was returned (but, notably, not Pod ID,
which was returned unconditionally). This was fairly confusing,
so we decided to deprecate/remove the parameter and return the pod name
unconditionally.
To do this without serious performance implications, we need to
avoid expensive JSON decodes of pod configuration in the DB. The
way our Bolt tables are structured, retrieving name given ID is
actually quite cheap, but we did not expose this via the Libpod
API. Add a new GetName API to do this.
Fixes #7214
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
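
A hedged sketch of why such a lookup is cheap in Bolt: with an ID-to-name mapping bucket, resolving a name needs only a key get inside a read-only transaction, with no JSON decode. The bucket name here is an assumption.

```go
package sketch

import (
	"fmt"

	bolt "go.etcd.io/bbolt"
)

// getName resolves a full container or pod ID to its name with a single
// read-only transaction and no JSON unmarshalling.
func getName(db *bolt.DB, id string) (string, error) {
	var name string
	err := db.View(func(tx *bolt.Tx) error {
		reg := tx.Bucket([]byte("id-to-name"))
		if reg == nil {
			return fmt.Errorf("registry bucket missing")
		}
		val := reg.Get([]byte(id))
		if val == nil {
			return fmt.Errorf("no container or pod with ID %s", id)
		}
		name = string(val)
		return nil
	})
	return name, err
}
```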
Check if there is a pod or container and return
the appropriate error message instead of blindly
returning 'container exists' with `podman create` and
'pod exists' with `podman pod create`.
Signed-off-by: Paul Holzinger <paul.holzinger@web.de>
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
With the advent of Podman 2.0.0 we crossed the magical barrier of go
modules. While we were able to continue importing all packages inside
of the project, the project could not be vendored anymore from the
outside.
Move the go module to a new major version and change all imports to
`github.com/containers/libpod/v2`. The renaming of the imports
was done via `gomove` [1].
[1] https://github.com/KSubedi/gomove
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
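
For consumers, the visible effect is that the major version becomes part of the import path; a minimal illustration (everything below is just for show):

```go
// Illustrative only: the same packages, imported via the new /v2
// major-version path required by Go modules.
//
//   before: import "github.com/containers/libpod/libpod"
//   after:  import "github.com/containers/libpod/v2/libpod"
package sketch

import _ "github.com/containers/libpod/v2/libpod" // blank import just to show the new path
```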
- misspell
- prealloc
- unparam
- nakedret
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
vendor in c/common config pkg for containers.conf
Signed-off-by: Qi Wang <qiwan@redhat.com>
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
As part of the rework of exec sessions, we need to address them
independently of containers. In the new API, we need to be able
to fetch them by their ID, regardless of what container they are
associated with. Unfortunately, our existing exec sessions are
tied to individual containers; there's no way to tell what
container a session belongs to and retrieve it without getting
every exec session for every container.
This adds a pointer to the container an exec session is
associated with to the database. The sessions themselves are
still stored in the container.
Exec-related APIs have been restructured to work with the new
database representation. The originally monolithic API has been
split into a number of smaller calls to allow more fine-grained
control of lifecycle. Support for legacy exec sessions has been
retained, but in a deprecated fashion; we should remove this in
a few releases.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
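
One way to picture the new indirection is a registry bucket mapping exec session IDs to their owning container; the sketch below is an assumption-laden illustration, not libpod's actual schema.

```go
package sketch

import (
	"fmt"

	bolt "go.etcd.io/bbolt"
)

var execRegistry = []byte("exec-sessions")

// addExecSession records which container owns the given exec session.
func addExecSession(db *bolt.DB, sessionID, ctrID string) error {
	return db.Update(func(tx *bolt.Tx) error {
		bkt, err := tx.CreateBucketIfNotExists(execRegistry)
		if err != nil {
			return err
		}
		return bkt.Put([]byte(sessionID), []byte(ctrID))
	})
}

// containerForSession resolves an exec session ID to its container ID
// without scanning every container's own sessions.
func containerForSession(db *bolt.DB, sessionID string) (string, error) {
	var ctrID string
	err := db.View(func(tx *bolt.Tx) error {
		bkt := tx.Bucket(execRegistry)
		if bkt == nil {
			return fmt.Errorf("no exec session registry")
		}
		id := bkt.Get([]byte(sessionID))
		if id == nil {
			return fmt.Errorf("no exec session with ID %s", sessionID)
		}
		ctrID = string(id)
		return nil
	})
	return ctrID, err
}
```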
Signed-off-by: Dmitry Smirnov <onlyjob@member.fsf.org>
Refactor the `RuntimeConfig` along with related code from libpod into
libpod/config. Note that this is a first step of consolidating code
into more coherent packages to make the code more maintainable and less
prone to regressions in the long run.
Some libpod definitions were moved to `libpod/define` to resolve
circular dependencies.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Add ability to evict a container when it becomes unusable. This may
happen when the host setup changes after a container is created, making it
impossible for that container to be used or removed.
Evicting a container is done using the `rm --force` command.
Signed-off-by: Marco Vedovati <mvedovati@suse.com>
Lookup was written before volume states were merged but landed after them,
and CI didn't catch the obvious failure here. Without a valid
state, we try to unmarshal into a nil pointer, and 'volume rm'
is completely broken because of it.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
This isn't included in Docker, but seems handy enough.
Use the new API for 'volume rm' and 'volume inspect'.
Fixes #3891
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
When we fail to remove a container's SHM, that's an error, and we
need to report it as such. This may be part of our lingering
storage woes.
Also, remove MNT_DETACH. It may be another cause of the storage
removal failures.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
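
A minimal sketch of the intended behavior, assuming a hypothetical cleanup helper: unmount without MNT_DETACH and surface the failure instead of swallowing it.

```go
package sketch

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// cleanupSHM unmounts a container's /dev/shm mount, reporting any failure.
func cleanupSHM(shmPath string) error {
	// No unix.MNT_DETACH here: a lazy detach can hide mounts that are still
	// busy and lead to later storage-removal failures.
	if err := unix.Unmount(shmPath, 0); err != nil && err != unix.EINVAL {
		return fmt.Errorf("unmounting container SHM at %s: %w", shmPath, err)
	}
	return nil
}
```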
When volume options and the local volume driver are specified,
the volume is intended to be mounted using the 'mount' command.
Supported options will be used to mount the volume before the
first container using it starts, and unmount the volume after the
last container using it dies.
This should work for any local filesystem, though at present I've
only tested with tmpfs and btrfs.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
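
Roughly, the local driver builds a mount(8) invocation from the volume's options; the helper below is a hedged sketch of that idea, not the actual implementation.

```go
package sketch

import (
	"fmt"
	"os/exec"
)

// mountLocalVolume mounts device at mountpoint using the given filesystem
// type and comma-separated options, e.g. ("tmpfs", "tmpfs", "size=100m", dir).
func mountLocalVolume(fsType, device, options, mountpoint string) error {
	args := []string{"-t", fsType}
	if options != "" {
		args = append(args, "-o", options)
	}
	args = append(args, device, mountpoint)
	out, err := exec.Command("mount", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("mounting volume: %v: %s", err, string(out))
	}
	return nil
}
```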
We need to be able to track the number of times a volume has been
mounted for tmpfs/nfs/etc volumes. As such, we need a mutable
state for volumes. Add one, with the expected update/save methods
in both states.
There is backwards compat here, in that older volumes without a
state will still be accepted.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
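
A sketch of what such a mutable, persisted volume state could look like, including the backwards-compat path for volumes created before states existed; the field, key, and helper names are assumptions.

```go
package sketch

import (
	"encoding/json"

	bolt "go.etcd.io/bbolt"
)

// VolumeState is the mutable, persisted portion of a volume.
type VolumeState struct {
	// MountCount tracks how many containers currently have the volume
	// mounted; the volume is unmounted when it drops back to zero.
	MountCount uint   `json:"mountCount"`
	MountPoint string `json:"mountPoint,omitempty"`
}

// loadVolumeState reads a volume's state, tolerating volumes created before
// states existed by returning a zero-value state when none is stored.
func loadVolumeState(bkt *bolt.Bucket, volName string) (*VolumeState, error) {
	state := new(VolumeState)
	data := bkt.Get([]byte(volName + "-state"))
	if data == nil {
		return state, nil // legacy volume: no stored state, use defaults
	}
	if err := json.Unmarshal(data, state); err != nil {
		return nil, err
	}
	return state, nil
}
```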
This will require a 'podman system renumber' after being applied
to get lock numbers for existing volumes.
Add the DB backend code for rewriting volume configs and use it
for updating lock numbers as part of 'system renumber'.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Signed-off-by: baude <bbaude@redhat.com>
When specifying a podman command with a partial ID, container and pod
commands match only container or pod IDs, respectively, in the BoltDB.
Fixes: #3487
Signed-off-by: Marco Vedovati <mvedovati@suse.com>
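
Conceptually, this scopes partial-ID resolution to a prefix scan over the type's own bucket; the sketch below (bucket name and errors assumed) shows the idea for containers.

```go
package sketch

import (
	"bytes"
	"fmt"

	bolt "go.etcd.io/bbolt"
)

// resolveContainerID expands a partial container ID, erroring if it is
// ambiguous or matches nothing in the containers bucket.
func resolveContainerID(db *bolt.DB, partial string) (string, error) {
	var match string
	err := db.View(func(tx *bolt.Tx) error {
		ctrs := tx.Bucket([]byte("containers"))
		if ctrs == nil {
			return fmt.Errorf("no containers bucket")
		}
		c := ctrs.Cursor()
		prefix := []byte(partial)
		// Seek to the prefix and walk forward while keys still match it.
		for k, _ := c.Seek(prefix); k != nil && bytes.HasPrefix(k, prefix); k, _ = c.Next() {
			if match != "" {
				return fmt.Errorf("more than one container matches %q", partial)
			}
			match = string(k)
		}
		if match == "" {
			return fmt.Errorf("no container with ID prefix %q", partial)
		}
		return nil
	})
	return match, err
}
```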
The compilation demands of having libpod in main are a burden for the
remote client compilations. To combat this, we should move the use of
libpod structs, vars, constants, and functions into the adapter code,
where it will only be compiled by the local client.
This should result in cleaner code organization and smaller binaries. It
should also help if we ever need to compile the remote client on
non-Linux operating systems natively (not cross-compiled).
Signed-off-by: baude <bbaude@redhat.com>
Instead, use a less expensive read-only transaction to see if the
DB is ready for use (it probably is), and only fire the expensive
RW transaction if absolutely necessary.
Signed-off-by: Matthew Heon <mheon@redhat.com>
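
The pattern, sketched with bbolt (bucket name assumed): a read-only View() to confirm the DB is already set up, falling back to a read-write Update() only when it is not.

```go
package sketch

import (
	bolt "go.etcd.io/bbolt"
)

func ensureSetup(db *bolt.DB) error {
	ready := false
	// Cheap check: read-only transactions do not block each other.
	if err := db.View(func(tx *bolt.Tx) error {
		ready = tx.Bucket([]byte("runtime-config")) != nil
		return nil
	}); err != nil {
		return err
	}
	if ready {
		return nil
	}
	// Expensive path: Bolt allows a single writer at a time, so only open a
	// read-write transaction when setup is actually missing.
	return db.Update(func(tx *bolt.Tx) error {
		_, err := tx.CreateBucketIfNotExists([]byte("runtime-config"))
		return err
	})
}
```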
Mainly add support for podman build using --overlay mounts.
Updating containers/image also adds better support for the new
registries.conf file.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
This swaps the previous handling (parse all volume mounts on the
container and look for ones that might refer to named volumes)
for the new, explicit named volume lists stored per-container.
It also deprecates force-removing volumes that are in use. I
don't know how we want to handle this yet, but leaving containers
that depend on a volume that no longer exists is definitely not
correct.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
When looking up a container or pod from user input, we handle
collisions between names and IDs differently than Docker at
present.
In Docker, when there is a container with an ID starting with
"c1" and a container named "c1", commands on "c1" will always act
on the container named "c1". For the same scenario in podman, we
throw an error about name collision.
Change Podman to follow Docker, by returning the named container
or pod instead of erroring.
This should also have a positive effect on performance in the
lookup-by-full-name case, which no longer needs to fully traverse
the list of all pods or containers.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
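
A hedged sketch of the new precedence, with assumed registry bucket names: an exact name match wins, and ID resolution only runs when no name matches.

```go
package sketch

import (
	"fmt"

	bolt "go.etcd.io/bbolt"
)

// lookupContainerID resolves user input to a full container ID, preferring
// an exact name match over an ID match, as Docker does.
func lookupContainerID(db *bolt.DB, nameOrID string) (string, error) {
	var id string
	err := db.View(func(tx *bolt.Tx) error {
		names := tx.Bucket([]byte("name-registry"))
		if names != nil {
			if full := names.Get([]byte(nameOrID)); full != nil {
				// Named container wins, even if an ID also starts with this.
				id = string(full)
				return nil
			}
		}
		ids := tx.Bucket([]byte("id-registry"))
		if ids != nil && ids.Get([]byte(nameOrID)) != nil {
			id = nameOrID
			return nil
		}
		return fmt.Errorf("no container with name or ID %q", nameOrID)
	})
	return id, err
}
```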
If this doesn't match, we end up not being able to access named
volumes mounted into containers, which is bad. Use the same
validation that we use for other critical paths to ensure this
one also matches.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
I was looking into why we have locks in volumes, and I'm fairly
convinced they're unnecessary.
We don't have a state whose accesses we need to guard with locks
and syncs. The only real purpose for the lock was to prevent
concurrent removal of the same volume.
Looking at the code, concurrent removal ought to be fine with a
bit of reordering - one or the other might fail, but we will
successfully evict the volume from the state.
Also, remove the 'prune' bool from RemoveVolume. None of our
other API functions accept it, and it only served to toggle off
more verbose error messages.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Necessary for rewriting lock IDs as part of renumber.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Base enablement of the inspect command.
Signed-off-by: baude <bbaude@redhat.com>
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
During an earlier bugfix, we swapped all instances of
ContainerConfig to Config, which was meant to fix some data we
were returning from Inspect. This unfortunately also renamed a
libpod internal struct for container configs. Undo the rename
here.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
This will more closely match what Docker is doing.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Add support for podman volume and its subcommands.
The commands supported are:
podman volume create
podman volume inspect
podman volume ls
podman volume rm
podman volume prune
This is a tool to manage volumes used by podman. For now it only handles
named volumes, but eventually it will handle all volumes used by podman.
Signed-off-by: umohnani8 <umohnani@redhat.com>
Instead of storing the runtime's file lock dir in the BoltDB
state, refer to the runtime inside the Bolt state instead, and
use the path stored in the runtime.
This is necessary since we moved DB initialization very far up in
runtime init, before the locks dir is properly initialized (and
it must happen before the locks dir can be created, as we use the
DB to retrieve the proper path for the locks dir now).
Signed-off-by: Matthew Heon <mheon@redhat.com>
We already create the locks directory as part of the libpod
runtime's init - no need to do it again as part of BoltDB's init.
Signed-off-by: Matthew Heon <mheon@redhat.com>
Previously, we implicitly validated runtime configuration against
what was stored in the database as part of database init. Make
this an explicit step, so we can call it after the database has
been initialized. This will allow us to retrieve paths from the
database and use them to overwrite our defaults if they differ.
Signed-off-by: Matthew Heon <mheon@redhat.com>
When we create a Libpod database, we store a number of runtime
configuration fields in it. If we can retrieve those, we can use
them to configure the runtime to match the DB instead of inbuilt
defaults, helping to ensure that we don't error in cases where
our compiled-in defaults changed.
Signed-off-by: Matthew Heon <mheon@redhat.com>
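
Schematically, the runtime can read those stored fields back and prefer them over compiled-in defaults; the keys and struct below are assumptions for illustration, not the real layout.

```go
package sketch

import (
	bolt "go.etcd.io/bbolt"
)

// StoredConfig holds runtime paths persisted when the DB was first created.
type StoredConfig struct {
	StorageRoot string
	TmpDir      string
}

// readStoredConfig pulls previously written path fields out of the DB; empty
// fields mean "nothing stored, keep the compiled-in default".
func readStoredConfig(db *bolt.DB) (*StoredConfig, error) {
	cfg := new(StoredConfig)
	err := db.View(func(tx *bolt.Tx) error {
		bkt := tx.Bucket([]byte("runtime-config"))
		if bkt == nil {
			return nil // fresh DB: nothing to override
		}
		if v := bkt.Get([]byte("storage-root")); v != nil {
			cfg.StorageRoot = string(v)
		}
		if v := bkt.Get([]byte("tmp-dir")); v != nil {
			cfg.TmpDir = string(v)
		}
		return nil
	})
	return cfg, err
}
```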
This ensures that we can still use Podman even if a container or
pod with bad config JSON makes it into the state. We still can't
remove these containers, but at least we can do our best to make
things usable.
Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
Closes: #1294
Approved by: rhatdan
It's not necessary to fill in state immediately, as we'll be
overwriting it on any API call accessing it thanks to
syncContainer(). It is also causing races when we fetch it
without holding the container lock (which syncContainer() does).
As such, just don't retrieve the state on initial pull from the
database with Bolt.
Also, refactor some Linux-specific netns handling functions out
of container_internal_linux.go into boltdb_linux.go.
Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
Closes: #1186
Approved by: rhatdan
Per https://www.sqlite.org/src/artifact/c230a7a24?ln=994-1081,
POSIX file advisory locks are unsafe to use within a single
process if multiple file descriptors are open for the same file.
Unfortunately, this has a strong potential to happen for
multithreaded usage of libpod, and could result in DB corruption.
To prevent this, wrap all access to BoltDB within a single
libpod instance in a mutex to ensure concurrent access cannot
occur.
Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
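
The guard can be pictured as a per-state mutex held around every open/use/close of the Bolt file, so a second file descriptor for the same DB never exists in the process; the names below are illustrative.

```go
package sketch

import (
	"sync"

	bolt "go.etcd.io/bbolt"
)

type boltState struct {
	dbPath string
	dbLock sync.Mutex
}

// withDB opens the database, runs fn, and closes it again, all while holding
// the per-state mutex so concurrent goroutines cannot open a second FD and
// trip over POSIX advisory lock semantics.
func (s *boltState) withDB(fn func(db *bolt.DB) error) error {
	s.dbLock.Lock()
	defer s.dbLock.Unlock()

	db, err := bolt.Open(s.dbPath, 0o600, nil)
	if err != nil {
		return err
	}
	defer db.Close()

	return fn(db)
}
```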
Better explain the inner workings of both state types in comments
to make reviews and changes easier.
Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
All BoltDB access and update functions now understand namespaces.
Accessing containers outside of your namespace will produce
errors, except for Lookup and All functions, which will perform
their tasks only on containers within your namespace.
The "" namespace remains a reserved, no-restrictions namespace.
Signed-off-by: Matthew Heon <matthew.heon@gmail.com>
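
A sketch of what the per-access check might look like (bucket key and parameter names assumed): anything outside the configured namespace errors, while the reserved "" namespace bypasses the check.

```go
package sketch

import (
	"fmt"

	bolt "go.etcd.io/bbolt"
)

// checkNamespace errors if a container stored under ctrBkt belongs to a
// different namespace than the one this state was configured with.
func checkNamespace(ctrBkt *bolt.Bucket, stateNamespace string) error {
	if stateNamespace == "" {
		return nil // "" is the reserved, no-restrictions namespace
	}
	ns := ctrBkt.Get([]byte("namespace"))
	if string(ns) != stateNamespace {
		return fmt.Errorf("container is in namespace %q, not %q", string(ns), stateNamespace)
	}
	return nil
}
```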
Dependency containers must be in the same namespace, to ensure
there are never problems resolving a dependency.
Signed-off-by: Matthew Heon <matthew.heon@gmail.com>