| Commit message | Author | Age |
|
Added the following flags and handling for podman pod create:
--memory-swap
--cpuset-mems
--device-read-bps
--device-write-bps
--blkio-weight
--blkio-weight-device
--cpu-shares
Given the new backend for systemd in c/common, all of these can now be exposed to pod create.
Most of the heavy lifting (nearly all of it) is done within c/common; however, some rewiring was needed here as well.
Signed-off-by: Charlie Doern <cdoern@redhat.com>
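A rough usage sketch of the newly exposed flags (the pod name, device path, and limit values are illustrative, not taken from the commit):
```
# create a pod with the newly exposed resource limits; values are only examples
podman pod create --name limited-pod \
    --cpu-shares 512 \
    --memory-swap 1g \
    --cpuset-mems 0 \
    --blkio-weight 100 \
    --device-read-bps /dev/sda:1mb
```
Because the limits live on the pod-level cgroup, containers joining the pod are constrained by them unless they set tighter limits of their own.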
|
libpod: create /etc/passwd if missing
|
Create the /etc/passwd and /etc/group files if they are missing in the
image.
Closes: https://github.com/containers/podman/issues/14966
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
|
fix goroutine leaks in events and logs backend
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
When running a single podman logs command this is not really important, since we
exit when we finish reading the logs. For the system service, however, it is very
important: leaking goroutines will cause increased memory and CPU usage over time.
Both the event and the log backend have goroutine leaks, with both the
file and journald drivers.
The journald backend has the problem that journal.Wait(IndefiniteWait)
blocks until we get a new journald event. So when a client closes
the connection, the goroutine would still wait until there is a new
journal entry. To fix this we wait for a maximum of 5 seconds;
after that we can check whether the client connection was closed and exit
correctly in that case.
For the file backend we can fix this by waiting for either the log line
or the context cancellation at the same time. Previously it would block waiting for
new log lines and only check afterwards whether the client had closed the
connection, and thus hang forever if there were no new log lines.
[NO NEW TESTS NEEDED] I am open to ideas on how we can test for memory leaks in
CI.
To test manually, run a container like this:
`podman run --log-driver $driver --name test -d alpine sh -c 'i=1; while [ "$i" -ne 1000 ]; do echo "line $i"; i=$((i + 1)); done; sleep inf'`
where `$driver` is either `journald` or `k8s-file`.
Then start the podman system service and use:
`curl -m 1 --output - --unix-socket $XDG_RUNTIME_DIR/podman/podman.sock -v 'http://d/containers/test/logs?follow=1&since=0&stderr=1&stdout=1' &>/dev/null`
to get the logs from the API; curl closes the connection after 1 second.
Run the curl command several times and check the memory usage of the service.
Fixes #14879
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
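A rough way to script the manual check described above (the iteration count and the `ps` call are one possible approach, not part of the commit):
```
# start the service, hit the logs endpoint repeatedly, then watch the service's memory
podman system service --time=0 &
for i in $(seq 1 20); do
  curl -m 1 --output - --unix-socket $XDG_RUNTIME_DIR/podman/podman.sock \
    'http://d/containers/test/logs?follow=1&since=0&stderr=1&stdout=1' &>/dev/null
done
# the RSS of the service process should stay roughly flat once the leaks are fixed
ps -o rss,comm -C podman
```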
|
pkg,libpod: remove `pkg/hooks` and use `hooks` from `c/common`
|
PR https://github.com/containers/common/pull/1071 moved `pkg/hooks` to
`c/common`, hence remove it from podman and use `pkg/hooks` from
`c/common` instead.
[NO NEW TESTS NEEDED]
[NO TESTS NEEDED]
Signed-off-by: Aditya R <arajan@redhat.com>
|
Update the init container type default to "once" instead
of "always" to match k8s behavior.
Add a new annotation that can be used to change the init
container type in the kube yaml.
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
|
NFS servers throw an ENOTSUPP error if you attempt to
chown a directory to the same UID and GID the directory
already has. If volumes are stored on NFS directories, this
throws an ugly error and then works on the next try.
Bottom line: don't chown directories that already have the correct
UID and GID.
Fixes: https://github.com/containers/podman/issues/14766
[NO NEW TESTS NEEDED] Difficult to set up an NFS server in testing.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
|
[NO NEW TESTS NEEDED]
An empty path to the runtime binary was printed instead of the real path.
Before fix:
TRAC[0000] found runtime ""
TRAC[0000] found runtime ""
After:
TRAC[0000] found runtime "/usr/bin/crun"
TRAC[0000] found runtime "/usr/bin/runc"
Signed-off-by: Mikhail Khachayants <khachayants@arrival.com>
|
* Correct spelling and typos.
* Improve language.
Co-authored-by: Ed Santiago <santiago@redhat.com>
Signed-off-by: Erik Sjölund <erik.sjolund@gmail.com>
|
Add ports and hostname correctly in kube yaml
|
If a pod is created without net sharing, allow adding
separate ports for each container to the kube yaml,
and also set the pod-level hostname correctly if the
uts namespace is not being shared.
Add a warning if the default namespace sharing options
have been modified by the user.
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
|
While for some call paths we may be doing this redundantly, we need to
make sure the exit code is always read at this point.
[NO NEW TESTS NEEDED] as I have not managed to reproduce the issue, which
is very likely caused by a code path not writing the exit code when
running concurrently.
Fixes: #14859
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
|
Make sure to return/exit with 0 when waiting for a container that never
ran.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
|
use c/common code for resize and CopyDetachable
|
Since conmon-rs also uses this code we moved it to c/common. Now podman
should use this as well to prevent duplication.
[NO NEW TESTS NEEDED]
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
|
podman pod create --uts support
|
Add support for the --uts flag in pod create, allowing users to avoid
issues with default values in containers.conf.
--uts follows the same format as other namespace flags:
--uts=private (default), --uts=host, --uts=ns:PATH
resolves #13714
Signed-off-by: Charlie Doern <cdoern@redhat.com>
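For example (pod names are illustrative):
```
# share the host's UTS namespace instead of the containers.conf default
podman pod create --uts=host --name host-uts-pod
# explicitly request a private UTS namespace (the default)
podman pod create --uts=private --name private-uts-pod
```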
|
We now use the golang error wrapping format specifier `%w` instead of
the deprecated github.com/pkg/errors package.
[NO NEW TESTS NEEDED]
Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
|
Docker-compose disable healthcheck properly handled
|
Previously, if a container had healthchecks disabled in the
docker-compose.yml file and the user ran `podman inspect <container>`,
they would see incorrect output:
```
"Healthcheck":{
"Test":[
"CMD-SHELL",
"NONE"
],
"Interval":30000000000,
"Timeout":30000000000,
"Retries":3
}
```
After the fix, the output is now correct:
```
"Healthcheck":{
"Test":[
"NONE"
]
}
```
Additionally, I extracted the hard-coded strings that were used for
comparisons into constants in `libpod/define` to prevent a similar issue
from recurring.
Closes: #14493
Signed-off-by: Jake Correnti <jcorrenti13@gmail.com>
|
Make sure `Sync()` handles state transitions and exit codes correctly.
The function was only being called when batching, which could leave
containers in an unusable state when running concurrently with other
state-altering functions/commands, since the state must be re-read from
the database before acting upon it.
Fixes: #14761
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
|
libpod/runtime: switch to golang native error wrapping
|
We now use the golang error wrapping format specifier `%w` instead of
the deprecated github.com/pkg/errors package.
[NO NEW TESTS NEEDED]
Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
|
[CI:DOCS] Fix spelling "read only" -> "read-only"
|
Signed-off-by: Erik Sjölund <erik.sjolund@gmail.com>
|
Using the new resource backend, implement podman pod create --memory, which enables
users to modify memory.max inside the parent cgroup (the pod), implicitly impacting all
children unless overridden.
Signed-off-by: Charlie Doern <cdoern@redhat.com>
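A minimal sketch of the new flag (pod name and limit value are illustrative):
```
# cap memory.max on the pod's parent cgroup; containers in the pod are
# constrained by it unless they set a tighter limit of their own
podman pod create --name mem-limited --memory 512m
```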
|
PR containers/podman/pull/14449 had an outdated base. Merging it broke
builds.
[NO NEW TESTS NEEDED]
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
|
podman volume create --opt=o=timeout...
|
Add an option to configure the driver timeout when creating a volume.
The default is 5 seconds, but this value is too small for some custom drivers.
Signed-off-by: cdoern <cdoern@redhat.com>
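A sketch of the new option, assuming a custom volume plugin named `mydriver` (driver name, timeout value, and volume name are illustrative):
```
# give a slow volume plugin 10 seconds instead of the default 5
podman volume create --driver mydriver --opt o=timeout=10 myvolume
```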
|
Fix: Prevent OCI runtime directory from remaining
|
This bug was introduced in https://github.com/containers/podman/pull/8906.
When we run a 'podman rm/restart/stop/kill etc...' command against a
container running with --rm, the OCI runtime directory
remains at /run/<runtime name> (root user) or
/run/user/<user id>/<runtime name> (rootless user).
This bug could cause other bugs.
For example, when we checkpoint a container running with
--rm (podman checkpoint --export) and restore it
(podman restore --import) with crun, the error message
"Error: OCI runtime error: crun: container `<container id>`
already exists" is output.
This error is caused by an attempt to restore the container with
the same container ID as the leftover OCI runtime's container ID.
Therefore, fix it so that the cleanupRuntime() function runs and
removes the OCI runtime directory,
even if the container has already been removed by the --rm option.
Signed-off-by: Toshiki Sonoda <sonoda.toshiki@fujitsu.com>
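An illustrative reproduction of the checkpoint/restore symptom described above (container name and archive path are made up; it assumes crun and a CRIU-capable setup):
```
podman run -d --rm --name cr-demo alpine sleep inf
podman container checkpoint --export=/tmp/cr-demo.tar.gz cr-demo
# before this fix, the restore could fail with
# "crun: container `<container id>` already exists"
podman container restore --import=/tmp/cr-demo.tar.gz
```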
|
limit cgroupfs when rootless
|
[NO NEW TESTS NEEDED] Now that podman's cgroup config tries to initialize controllers, cgroupfs errors out on pod creation.
We need to mimic the behavior that used to exist and only create the cgroup when running as rootful.
Signed-off-by: Charlie Doern <cdoern@redhat.com>
|
The new version of runc has the same check in place and it
automatically resumes the container if it is paused. So when Podman
tries to resume it again, it fails since the container is no longer in the
paused state.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2100740
[NO NEW TESTS NEEDED] the CI doesn't use a new runc on cgroup v1 systems.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
|
volume: add two new options copy and nocopy
|
Add two new options to the volume create command: copy and nocopy.
When nocopy is specified, the files from the container image are not
copied up to the volume.
Closes: https://github.com/containers/podman/issues/14722
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
|
The two operations are equivalent since securejoin.SecureJoin() has
already resolved the symlinks. Prefer the Lstat version though, to make sure
symlinks are never resolved and we do not end up using a path on the
host.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
|
Avoid any I/O operation on the volume if the source directory is empty.
This is useful on network file systems (since CAP_DAC_OVERRIDE is not
honored) where the root user might not have enough privileges to
perform an I/O operation on the NFS mount but the user running inside
the container does.
[NO NEW TESTS NEEDED] It needs a setup with a network file system.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
|
add podman volume reload to sync volume plugins
|
Libpod requires that all volumes are stored in the libpod db. Because
volumes can be created outside of podman by volume plugins, podman will
not show all available volumes. The new podman volume reload command allows
users to sync the libpod db with their external volume plugins. All new
volumes from the plugin are also created in the libpod db, and when a volume
from the db no longer exists it will be removed if possible.
There are some problems:
- Naming conflicts: in this case we only use the first volume we found.
  This is not deterministic.
- Race conditions: we have no control over the volume plugins. It is
  possible that the volumes changed while we run this command.
Fixes #14207
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
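Illustrative usage (setting up an external volume plugin is outside the scope of this sketch):
```
# after volumes were added or removed directly through an external volume plugin
podman volume reload
# the libpod db should now match what the plugins report
podman volume ls
```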
|
There is no need for an extra parameter if the body is set. We can just
check whether the interface is not nil.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
|
Show Health Status events
|
Previously, health status events were not being generated at all. Both
the API and `podman events` now generate health_status events.
```
{"status":"health_status","id":"ae498ac3aa6c63db8b69a37583a6eae1a9cefbdbdbeeadcf8e1d66d745f0df63","from":"localhost/healthcheck-demo:latest","Type":"container","Action":"health_status","Actor":{"ID":"ae498ac3aa6c63db8b69a37583a6eae1a9cefbdbdbeeadcf8e1d66d745f0df63","Attributes":{"containerExitCode":"0","image":"localhost/healthcheck-demo:latest","io.buildah.version":"1.26.1","maintainer":"NGINX Docker Maintainers \u003cdocker-maint@nginx.com\u003e","name":"healthcheck-demo"}},"scope":"local","time":1656082205,"timeNano":1656082205882271276,"HealthStatus":"healthy"}
```
```
2022-06-24 11:06:04.886238493 -0400 EDT container health_status ae498ac3aa6c63db8b69a37583a6eae1a9cefbdbdbeeadcf8e1d66d745f0df63 (image=localhost/healthcheck-demo:latest, name=healthcheck-demo, health_status=healthy, io.buildah.version=1.26.1, maintainer=NGINX Docker Maintainers <docker-maint@nginx.com>)
```
Signed-off-by: Jake Correnti <jcorrenti13@gmail.com>
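An illustrative way to watch for the new events, reusing the image name from the sample output above (the `event=health_status` filter value is an assumption, not taken from the commit):
```
podman run -d --name healthcheck-demo localhost/healthcheck-demo:latest
# stream events; health_status entries appear each time the healthcheck runs
podman events --filter event=health_status
```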
|
Add missing criu symbols to criu_unsupported.go
|
Signed-off-by: Doug Rabson <dfr@rabson.org>
|
podman cgroup enhancement
|
Currently, setting any sort of resource limit in a pod does nothing. With the newly refactored creation process in c/common, podman can now set resources at the pod level,
meaning that resource-related flags can now be exposed to podman pod create.
cgroupfs and systemd are both supported, with varying completeness. cgroupfs is a much simpler process and one that is virtually complete for all resource types; the flags now just need to be added. systemd, on the other hand,
has to be handled via the dbus API, meaning that the limits need to be passed as recognized properties to systemd. The properties added so far are the ones that podman pod create supports, as well as `cpuset-mems`, as this will
be the next flag I work on.
Signed-off-by: Charlie Doern <cdoern@redhat.com>
|
Firstly: don't prune exit codes after a refresh - instead, clear
the table entirely. We are guaranteed that all containers are
gone after a refresh, so we should not worry about exit codes.
Secondly: alter the way pruning was done. We were updating the DB
by calling Update from within an existing View, and stacking an
RW transaction on top of an existing RO one seems dodgy; further,
modifying a bucket while iterating over it with ForEach is
undefined behavior.
Hopefully this will resolve our CI issues.
Signed-off-by: Matthew Heon <mheon@redhat.com>
|