-rw-r--r--  Makefile  1
-rwxr-xr-x  contrib/cirrus/runner.sh  6
-rw-r--r--  docs/source/markdown/podman-build.1.md  331
-rw-r--r--  pkg/api/handlers/compat/containers_create.go  4
-rw-r--r--  pkg/api/handlers/compat/unsupported.go  3
-rw-r--r--  test/apiv2/rest_api/__init__.py  1
-rw-r--r--  test/apiv2/rest_api/test_rest_v2_0_0.py  2
-rw-r--r--  test/python/docker/README.md  38
-rw-r--r--  test/python/docker/__init__.py  157
-rw-r--r--  test/python/docker/common.py  21
-rw-r--r--  test/python/docker/constant.py  6
-rw-r--r--  test/python/docker/test_containers.py  214
-rw-r--r--  test/python/docker/test_images.py  169
-rw-r--r--  test/python/docker/test_system.py  66
-rw-r--r--  test/python/dockerpy/README.md  40
-rw-r--r--  test/python/dockerpy/__init__.py  0
-rw-r--r--  test/python/dockerpy/tests/__init__.py  0
-rw-r--r--  test/python/dockerpy/tests/common.py  105
-rw-r--r--  test/python/dockerpy/tests/constant.py  13
-rw-r--r--  test/python/dockerpy/tests/test_containers.py  193
-rw-r--r--  test/python/dockerpy/tests/test_images.py  162
-rw-r--r--  test/python/dockerpy/tests/test_info_version.py  44
-rw-r--r--  test/system/030-run.bats  10
-rw-r--r--  test/system/035-logs.bats  10
24 files changed, 923 insertions, 673 deletions
diff --git a/Makefile b/Makefile
index 75b2e9833..d147987d4 100644
--- a/Makefile
+++ b/Makefile
@@ -358,6 +358,7 @@ remotesystem:
localapiv2:
env PODMAN=./bin/podman ./test/apiv2/test-apiv2
env PODMAN=./bin/podman ${PYTHON} -m unittest discover -v ./test/apiv2/rest_api/
+ env PODMAN=./bin/podman ${PYTHON} -m unittest discover -v ./test/python/docker
.PHONY: remoteapiv2
remoteapiv2:
diff --git a/contrib/cirrus/runner.sh b/contrib/cirrus/runner.sh
index 084b196a9..b7e7ab852 100755
--- a/contrib/cirrus/runner.sh
+++ b/contrib/cirrus/runner.sh
@@ -63,6 +63,12 @@ function _run_unit() {
}
function _run_apiv2() {
+ # TODO: Remove once VM images ship with the python3-docker dependency
+ if [[ "$OS_RELEASE_ID" == "fedora" ]]; then
+ dnf install -y python3-docker
+ else
+ apt-get -qq -y install python3-docker
+ fi
make localapiv2 |& logformatter
}
diff --git a/docs/source/markdown/podman-build.1.md b/docs/source/markdown/podman-build.1.md
index 1e1e1d27e..405628912 100644
--- a/docs/source/markdown/podman-build.1.md
+++ b/docs/source/markdown/podman-build.1.md
@@ -9,21 +9,39 @@ podman\-build - Build a container image using a Containerfile
**podman image build** [*options*] [*context*]
## DESCRIPTION
-**podman build** Builds an image using instructions from one or more Containerfiles or Dockerfiles and a specified build context directory. A Containerfile uses the same syntax as a Dockerfile internally. For this document, a file referred to as a Containerfile can be a file named either 'Containerfile' or 'Dockerfile'.
+**podman build** Builds an image using instructions from one or more
+Containerfiles or Dockerfiles and a specified build context directory. A
+Containerfile uses the same syntax as a Dockerfile internally. For this
+document, a file referred to as a Containerfile can be a file named either
+'Containerfile' or 'Dockerfile'.
-The build context directory can be specified as the http(s) URL of an archive, git repository or Containerfile.
+The build context directory can be specified as the http(s) URL of an archive,
+git repository or Containerfile.
-If no context directory is specified, then Podman will assume the current working directory as the build context, which should contain the Containerfile.
+If no context directory is specified, then Podman will assume the current
+working directory as the build context, which should contain the Containerfile.
-Containerfiles ending with a ".in" suffix will be preprocessed via CPP(1). This can be useful to decompose Containerfiles into several reusable parts that can be used via CPP's **#include** directive. Notice, a Containerfile.in file can still be used by other tools when manually preprocessing them via `cpp -E`.
+Containerfiles ending with a ".in" suffix will be preprocessed via CPP(1). This
+can be useful to decompose Containerfiles into several reusable parts that can
+be used via CPP's **#include** directive. Notice, a Containerfile.in file can
+still be used by other tools when manually preprocessing them via `cpp -E`.
-When the URL is an archive, the contents of the URL is downloaded to a temporary location and extracted before execution.
+When the URL is an archive, the contents of the URL are downloaded to a temporary
+location and extracted before execution.
-When the URL is an Containerfile, the Containerfile is downloaded to a temporary location.
+When the URL is a Containerfile, the Containerfile is downloaded to a temporary
+location.
-When a Git repository is set as the URL, the repository is cloned locally and then set as the context.
+When a Git repository is set as the URL, the repository is cloned locally and
+then set as the context.
-NOTE: `podman build` uses code sourced from the `buildah` project to build container images. This `buildah` code creates `buildah` containers for the `RUN` options in container storage. In certain situations, when the `podman build` crashes or users kill the `podman build` process, these external containers can be left in container storage. Use the `podman ps --all --storage` command to see these contaienrs. External containers can be removed with the `podman rm --storage` command.
+NOTE: `podman build` uses code sourced from the `buildah` project to build
+container images. This `buildah` code creates `buildah` containers for the
+`RUN` options in container storage. In certain situations, when the
+`podman build` crashes or users kill the `podman build` process, these external
+containers can be left in container storage. Use the `podman ps --all --storage`
+command to see these containers. External containers can be removed with the
+`podman rm --storage` command.
## OPTIONS
@@ -31,25 +49,32 @@ NOTE: `podman build` uses code sourced from the `buildah` project to build conta
Add a custom host-to-IP mapping (host:ip)
-Add a line to /etc/hosts. The format is hostname:ip. The **--add-host** option can be set multiple times.
+Add a line to /etc/hosts. The format is hostname:ip. The **--add-host** option
+can be set multiple times.
**--annotation**=*annotation*
-Add an image *annotation* (e.g. annotation=*value*) to the image metadata. Can be used multiple times.
+Add an image *annotation* (e.g. annotation=*value*) to the image metadata. Can
+be used multiple times.
-Note: this information is not present in Docker image formats, so it is discarded when writing images in Docker formats.
+Note: this information is not present in Docker image formats, so it is
+discarded when writing images in Docker formats.
**--arch**=*arch*
-Set the ARCH of the image to the provided value instead of the architecture of the host.
+Set the ARCH of the image to the provided value instead of the architecture of
+the host.
**--authfile**=*path*
-Path of the authentication file. Default is ${XDG\_RUNTIME\_DIR}/containers/auth.json, which is set using `podman login`.
-If the authorization state is not found there, $HOME/.docker/config.json is checked, which is set using `docker login`.
+Path of the authentication file. Default is
+${XDG\_RUNTIME\_DIR}/containers/auth.json, which is set using `podman login`.
+If the authorization state is not found there, $HOME/.docker/config.json is
+checked, which is set using `docker login`.
-Note: You can also override the default path of the authentication file by setting the REGISTRY\_AUTH\_FILE
-environment variable. `export REGISTRY_AUTH_FILE=path`
+Note: You can also override the default path of the authentication file by
+setting the REGISTRY\_AUTH\_FILE environment variable.
+`export REGISTRY_AUTH_FILE=path`
**--build-arg**=*arg=value*
@@ -60,7 +85,8 @@ resulting image's configuration.
**--cache-from**
-Images to utilize as potential cache sources. Podman does not currently support caching so this is a NOOP.
+Images to utilize as potential cache sources. Podman does not currently support
+caching so this is a NOOP.
**--cap-add**=*CAP\_xxx*
@@ -85,11 +111,14 @@ given.
**--cert-dir**=*path*
Use certificates at *path* (\*.crt, \*.cert, \*.key) to connect to the registry.
-Default certificates directory is _/etc/containers/certs.d_. (Not available for remote commands)
+Default certificates directory is _/etc/containers/certs.d_. (Not available for
+remote commands)
**--cgroup-parent**=*path*
-Path to cgroups under which the cgroup for the container will be created. If the path is not absolute, the path is considered to be relative to the cgroups path of the init process. Cgroups will be created if they do not already exist.
+Path to cgroups under which the cgroup for the container will be created. If the
+path is not absolute, the path is considered to be relative to the cgroups path
+of the init process. Cgroups will be created if they do not already exist.
**--compress**
@@ -137,9 +166,9 @@ https://github.com/containers/podman/blob/master/troubleshooting.md#26-running-c
CPU shares (relative weight)
-By default, all containers get the same proportion of CPU cycles. This proportion
-can be modified by changing the container's CPU share weighting relative
-to the weighting of all other running containers.
+By default, all containers get the same proportion of CPU cycles. This
+proportion can be modified by changing the container's CPU share weighting
+relative to the weighting of all other running containers.
To modify the proportion from the default of 1024, use the **--cpu-shares**
flag to set the weighting to 2 or higher.
@@ -162,7 +191,8 @@ use 100% of each individual CPU core.
For example, consider a system with more than three cores. If you start one
container **{C0}** with **-c=512** running one process, and another container
-**{C1}** with **-c=1024** running two processes, this can result in the following
+**{C1}** with **-c=1024** running two processes, this can result in the
+following
division of CPU shares:
PID container CPU CPU share
@@ -176,7 +206,8 @@ division of CPU shares:
**--cpuset-mems**=*nodes*
-Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems.
+Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on
+NUMA systems.
If you have four memory nodes on your system (0-3), use `--cpuset-mems=0,1`
then processes in your container will only use memory from the first
@@ -185,8 +216,8 @@ two memory nodes.
**--creds**=*creds*
The [username[:password]] to use to authenticate with the registry if required.
-If one or both values are not supplied, a command line prompt will appear and the
-value can be entered. The password is entered without echo.
+If one or both values are not supplied, a command line prompt will appear and
+the value can be entered. The password is entered without echo.
**--device**=_host-device_[**:**_container-device_][**:**_permissions_]
@@ -201,7 +232,8 @@ The container will only store the major and minor numbers of the host device.
Note: if the user only has access rights via a group, accessing the device
from inside a rootless container will fail. The **crun**(1) runtime offers a
-workaround for this by adding the option **--annotation run.oci.keep_original_groups=1**.
+workaround for this by adding the option
+**--annotation run.oci.keep_original_groups=1**.
**--disable-compression**, **-D**
@@ -222,9 +254,14 @@ solely for scripting compatibility.
Set custom DNS servers
-This option can be used to override the DNS configuration passed to the container. Typically this is necessary when the host DNS configuration is invalid for the container (e.g., 127.0.0.1). When this is the case the `--dns` flag is necessary for every run.
+This option can be used to override the DNS configuration passed to the
+container. Typically this is necessary when the host DNS configuration is
+invalid for the container (e.g., 127.0.0.1). When this is the case the `--dns`
+flag is necessary for every run.
-The special value **none** can be specified to disable creation of /etc/resolv.conf in the container by Podman. The /etc/resolv.conf file in the image will be used without changes.
+The special value **none** can be specified to disable creation of
+/etc/resolv.conf in the container by Podman. The /etc/resolv.conf file in the
+image will be used without changes.
**--dns-option**=*option*
@@ -249,7 +286,8 @@ If you specify `-f -`, the Containerfile contents will be read from stdin.
**--force-rm**=*true|false*
-Always remove intermediate containers after a build, even if the build fails (default false).
+Always remove intermediate containers after a build, even if the build fails
+(default false).
**--format**
@@ -296,11 +334,15 @@ Note: You can also override the default isolation type by setting the
BUILDAH\_ISOLATION environment variable. `export BUILDAH_ISOLATION=oci`
**--jobs**=*number*
-How many stages to run in parallel (default 1)
+
+Run up to N concurrent stages in parallel. If the number of jobs is greater
+than 1, stdin will be read from /dev/null. If 0 is specified, then there is
+no limit on the number of jobs that run in parallel.
**--label**=*label*
-Add an image *label* (e.g. label=*value*) to the image metadata. Can be used multiple times.
+Add an image *label* (e.g. label=*value*) to the image metadata. Can be used
+multiple times.
Users can set a special LABEL **io.containers.capabilities=CAP1,CAP2,CAP3** in
a Containerfile that specified the list of Linux capabilities required for the
@@ -316,8 +358,8 @@ print an error message and will run the container with the default capabilities.
Cache intermediate images during the build process (Default is `true`).
-Note: You can also override the default value of layers by setting the BUILDAH\_LAYERS
-environment variable. `export BUILDAH_LAYERS=true`
+Note: You can also override the default value of layers by setting the
+BUILDAH\_LAYERS environment variable. `export BUILDAH_LAYERS=true`
**--logfile**=*filename*
@@ -331,13 +373,15 @@ with 3 being roughly equivalent to using the global *--debug* option, and
values below 0 omitting even error messages which accompany fatal errors.
**--memory**, **-m**=*LIMIT*
-Memory limit (format: <number>[<unit>], where unit = b (bytes), k (kilobytes), m (megabytes), or g (gigabytes))
+Memory limit (format: <number>[<unit>], where unit = b (bytes), k (kilobytes),
+m (megabytes), or g (gigabytes))
Allows you to constrain the memory available to a container. If the host
supports swap memory, then the **-m** memory setting can be larger than physical
RAM. If a limit of 0 is specified (not using **-m**), the container's memory is
not limited. The actual limit may be rounded up to a multiple of the operating
-system's page size (the value would be very large, that's millions of trillions).
+system's page size (the value would be very large, that's millions of
+trillions).
**--memory-swap**=*LIMIT*
@@ -353,19 +397,25 @@ unit, `b` is used. Set LIMIT to `-1` to enable unlimited swap.
**--net**, **--network**=*string*
Sets the configuration for network namespaces when handling `RUN` instructions.
-The configured value can be "" (the empty string) or "container" to indicate
-that a new network namespace should be created, or it can be "host" to indicate
-that the network namespace in which `podman` itself is being run should be
-reused, or it can be the path to a network namespace which is already in use by
-another process.
+
+Valid _mode_ values are:
+
+- **none**: no networking.
+- **host**: use the Podman host network stack. Note: the host mode gives the
+container full access to local system services such as D-bus and is therefore
+considered insecure.
+- **ns:**_path_: path to a network namespace to join.
+- **private**: create a new namespace for the container (default).
**--no-cache**
-Do not use existing cached images for the container build. Build from the start with a new set of cached layers.
+Do not use existing cached images for the container build. Build from the start
+with a new set of cached layers.
**--os**=*string*
-Set the OS to the provided value instead of the current operating system of the host.
+Set the OS to the provided value instead of the current operating system of the
+host.
**--pid**=*pid*
@@ -384,23 +434,24 @@ not required for Buildah as it supports only Linux.
**--pull**
-When the option is specified or set to "true", pull the image from the first registry
-it is found in as listed in registries.conf. Raise an error if not found in the
-registries, even if the image is present locally.
+When the option is specified or set to "true", pull the image from the first
+registry it is found in as listed in registries.conf. Raise an error if not
+found in the registries, even if the image is present locally.
-If the option is disabled (with *--pull=false*), or not specified, pull the image
-from the registry only if the image is not present locally. Raise an error if the image
-is not found in the registries.
+If the option is disabled (with *--pull=false*), or not specified, pull the
+image from the registry only if the image is not present locally. Raise an
+error if the image is not found in the registries.
**--pull-always**
-Pull the image from the first registry it is found in as listed in registries.conf.
-Raise an error if not found in the registries, even if the image is present locally.
+Pull the image from the first registry it is found in as listed in
+registries.conf. Raise an error if not found in the registries, even if the
+image is present locally.
**--pull-never**
-Do not pull the image from the registry, use only the local version. Raise an error
-if the image is not present locally.
+Do not pull the image from the registry, use only the local version. Raise an
+error if the image is not present locally.
**--quiet**, **-q**
@@ -425,7 +476,8 @@ environment variable. `export BUILDAH_RUNTIME=/usr/local/bin/runc`
Security Options
- `apparmor=unconfined` : Turn off apparmor confinement for the container
-- `apparmor=your-profile` : Set the apparmor confinement profile for the container
+- `apparmor=your-profile` : Set the apparmor confinement profile for the
+container
- `label=user:USER` : Set the label user for the container processes
- `label=role:ROLE` : Set the label role for the container processes
@@ -433,15 +485,19 @@ Security Options
- `label=level:LEVEL` : Set the label level for the container processes
- `label=filetype:TYPE` : Set the label file type for the container files
- `label=disable` : Turn off label separation for the container
+- `no-new-privileges` : Not supported
- `seccomp=unconfined` : Turn off seccomp confinement for the container
-- `seccomp=profile.json` : White listed syscalls seccomp Json file to be used as a seccomp filter
+- `seccomp=profile.json` : JSON file of allowed (white-listed) syscalls to be
+used as a seccomp filter
**--shm-size**=*size*
-Size of `/dev/shm`. The format is `<number><unit>`. `number` must be greater than `0`.
-Unit is optional and can be `b` (bytes), `k` (kilobytes), `m`(megabytes), or `g` (gigabytes).
-If you omit the unit, the system uses bytes. If you omit the size entirely, the system uses `64m`.
+Size of `/dev/shm`. The format is `<number><unit>`. `number` must be greater
+than `0`.
+Unit is optional and can be `b` (bytes), `k` (kilobytes), `m` (megabytes), or
+`g` (gigabytes). If you omit the unit, the system uses bytes. If you omit the
+size entirely, the system uses `64m`.
**--sign-by**=*fingerprint*
@@ -449,41 +505,48 @@ Sign the image using a GPG key with the specified FINGERPRINT.
**--squash**
-Squash all of the image's new layers into a single new layer; any preexisting layers
-are not squashed.
+Squash all of the image's new layers into a single new layer; any preexisting
+layers are not squashed.
**--squash-all**
-Squash all of the new image's layers (including those inherited from a base image) into a single new layer.
+Squash all of the new image's layers (including those inherited from a base
+image) into a single new layer.
**--tag**, **-t**=*imageName*
Specifies the name which will be assigned to the resulting image if the build
process completes successfully.
-If _imageName_ does not include a registry name, the registry name *localhost* will be prepended to the image name.
+If _imageName_ does not include a registry name, the registry name *localhost*
+will be prepended to the image name.
**--target**=*stageName*
-Set the target build stage to build. When building a Containerfile with multiple build stages, --target
-can be used to specify an intermediate build stage by name as the final stage for the resulting image.
-Commands after the target stage will be skipped.
+Set the target build stage to build. When building a Containerfile with
+multiple build stages, --target can be used to specify an intermediate build
+stage by name as the final stage for the resulting image. Commands after the
+target stage will be skipped.
**--timestamp** *seconds*
-Set the create timestamp to seconds since epoch to allow for deterministic builds (defaults to current time).
-By default, the created timestamp is changed and written into the image manifest with every commit,
-causing the image's sha256 hash to be different even if the sources are exactly the same otherwise.
-When --timestamp is set, the created timestamp is always set to the time specified and therefore not changed, allowing the image's sha256 to remain the same. All files committed to the layers of the image will be created with the timestamp.
+Set the create timestamp to seconds since epoch to allow for deterministic
+builds (defaults to current time). By default, the created timestamp is changed
+and written into the image manifest with every commit, causing the image's
+sha256 hash to be different even if the sources are exactly the same otherwise.
+When --timestamp is set, the created timestamp is always set to the time
+specified and therefore not changed, allowing the image's sha256 hash to
+remain the same. All files committed to the layers of the image will be
+created with the timestamp.
**--tls-verify**=*true|false*
-Require HTTPS and verify certificates when talking to container registries (defaults to true).
+Require HTTPS and verify certificates when talking to container registries
+(defaults to true).
**--ulimit**=*type*=*soft-limit*[:*hard-limit*]
-Specifies resource limits to apply to processes launched when processing `RUN` instructions.
-This option can be specified multiple times. Recognized resource types
-include:
+Specifies resource limits to apply to processes launched when processing `RUN`
+instructions. This option can be specified multiple times. Recognized resource
+types include:
"core": maximum core dump size (ulimit -c)
"cpu": maximum CPU time (ulimit -t)
"data": maximum size of a process's data segment (ulimit -d)
@@ -622,49 +685,65 @@ Only the current container can use a private volume.
`Overlay Volume Mounts`
- The `:O` flag tells Podman to mount the directory from the host as a temporary storage using the Overlay file system. The `RUN` command containers are allowed to modify contents within the mountpoint and are stored in the container storage in a separate directory. In Overlay FS terms the source directory will be the lower, and the container storage directory will be the upper. Modifications to the mount point are destroyed when the `RUN` command finishes executing, similar to a tmpfs mount point.
+ The `:O` flag tells Podman to mount the directory from the host as a
+temporary storage using the Overlay file system. Containers created for `RUN`
+commands are allowed to modify contents within the mountpoint; these changes
+are stored in container storage in a separate directory. In Overlay FS terms,
+the source directory will be the lower, and the container storage directory
+will be the upper. Modifications to the mount point are destroyed when the
+`RUN` command finishes executing, similar to a tmpfs mount point.
- Any subsequent execution of `RUN` commands sees the original source directory content, any changes from previous RUN commands no longer exists.
+ Any subsequent execution of `RUN` commands sees the original source directory
+content; any changes from previous RUN commands no longer exist.
- One use case of the `overlay` mount is sharing the package cache from the host into the container to allow speeding up builds.
+ One use case of the `overlay` mount is sharing the package cache from the
+host into the container to allow speeding up builds.
Note:
- Overlay mounts are not currently supported in rootless mode.
- - The `O` flag is not allowed to be specified with the `Z` or `z` flags. Content mounted into the container is labeled with the private label.
- On SELinux systems, labels in the source directory needs to be readable by the container label. If not, SELinux container separation must be disabled for the container to work.
- - Modification of the directory volume mounted into the container with an overlay mount can cause unexpected failures. It is recommended that you do not modify the directory until the container finishes running.
+ - The `O` flag is not allowed to be specified with the `Z` or `z` flags.
+Content mounted into the container is labeled with the private label.
+ On SELinux systems, labels in the source directory need to be readable
+by the container label. If not, SELinux container separation must be disabled
+for the container to work.
+ - Modification of the directory volume mounted into the container with an
+overlay mount can cause unexpected failures. It is recommended that you do not
+modify the directory until the container finishes running.
By default bind mounted volumes are `private`. That means any mounts done
-inside container will not be visible on the host and vice versa. This behavior can
-be changed by specifying a volume mount propagation property.
-
-When the mount propagation policy is set to `shared`, any mounts completed inside
-the container on that volume will be visible to both the host and container. When
-the mount propagation policy is set to `slave`, one way mount propagation is enabled
-and any mounts completed on the host for that volume will be visible only inside of the container.
-To control the mount propagation property of volume use the `:[r]shared`,
-`:[r]slave` or `:[r]private` propagation flag. The propagation property can
-be specified only for bind mounted volumes and not for internal volumes or
-named volumes. For mount propagation to work on the source mount point (mount point
-where source dir is mounted on) has to have the right propagation properties. For
-shared volumes, the source mount point has to be shared. And for slave volumes,
-the source mount has to be either shared or slave. <sup>[[1]](#Footnote1)</sup>
+inside containers will not be visible on the host and vice versa. This behavior
+can be changed by specifying a volume mount propagation property.
+
+When the mount propagation policy is set to `shared`, any mounts completed
+inside the container on that volume will be visible to both the host and
+container. When the mount propagation policy is set to `slave`, one way mount
+propagation is enabled and any mounts completed on the host for that volume will
+be visible only inside of the container. To control the mount propagation
+property of a volume, use the `:[r]shared`, `:[r]slave` or `:[r]private`
+propagation flag. The propagation property can be specified only for bind
+mounted volumes and not for internal volumes or named volumes. For mount
+propagation to work, the source mount point (the mount point where the source
+dir is mounted) has to have the right propagation properties. For shared
+volumes, the source mount point has to be shared. And for slave volumes, the
+source mount has to be either shared or slave. <sup>[[1]](#Footnote1)</sup>
Use `df <source-dir>` to determine the source mount and then use
`findmnt -o TARGET,PROPAGATION <source-mount-dir>` to determine propagation
-properties of source mount, if `findmnt` utility is not available, the source mount point
-can be determined by looking at the mount entry in `/proc/self/mountinfo`. Look
-at `optional fields` and see if any propagation properties are specified.
-`shared:X` means the mount is `shared`, `master:X` means the mount is `slave` and if
-nothing is there that means the mount is `private`. <sup>[[1]](#Footnote1)</sup>
+properties of the source mount. If the `findmnt` utility is not available, the
+source mount point can be determined by looking at the mount entry in
+`/proc/self/mountinfo`. Look at `optional fields` and see if any propagation
+properties are specified.
+`shared:X` means the mount is `shared`, `master:X` means the mount is `slave`
+and if nothing is there that means the mount is `private`. <sup>[[1]](#Footnote1)</sup>
To change propagation properties of a mount point use the `mount` command. For
example, to bind mount the source directory `/foo` do
`mount --bind /foo /foo` and `mount --make-private --make-shared /foo`. This
-will convert /foo into a `shared` mount point. The propagation properties of the source
-mount can be changed directly. For instance if `/` is the source mount for
-`/foo`, then use `mount --make-shared /` to convert `/` into a `shared` mount.
+will convert /foo into a `shared` mount point. The propagation properties of
+the source mount can be changed directly. For instance if `/` is the source
+mount for `/foo`, then use `mount --make-shared /` to convert `/` into a
+`shared` mount.
## EXAMPLES
@@ -712,11 +791,18 @@ $ podman build --no-cache --rm=false -t imageName .
### Building an image using a URL, Git repo, or archive
- The build context directory can be specified as a URL to a Containerfile, a Git repository, or URL to an archive. If the URL is a Containerfile, it is downloaded to a temporary location and used as the context. When a Git repository is set as the URL, the repository is cloned locally to a temporary location and then used as the context. Lastly, if the URL is an archive, it is downloaded to a temporary location and extracted before being used as the context.
+ The build context directory can be specified as a URL to a Containerfile, a
+Git repository, or URL to an archive. If the URL is a Containerfile, it is
+downloaded to a temporary location and used as the context. When a Git
+repository is set as the URL, the repository is cloned locally to a temporary
+location and then used as the context. Lastly, if the URL is an archive, it is
+downloaded to a temporary location and extracted before being used as the
+context.
#### Building an image using a URL to a Containerfile
- Podman will download the Containerfile to a temporary location and then use it as the build context.
+ Podman will download the Containerfile to a temporary location and then use
+it as the build context.
```
$ podman build https://10.10.10.1/podman/Containerfile
@@ -724,7 +810,9 @@ $ podman build https://10.10.10.1/podman/Containerfile
#### Building an image using a Git repository
- Podman will clone the specified GitHub repository to a temporary location and use it as the context. The Containerfile at the root of the repository will be used and it only works if the GitHub repository is a dedicated repository.
+ Podman will clone the specified GitHub repository to a temporary location and
+use it as the context. The Containerfile at the root of the repository will be
+used and it only works if the GitHub repository is a dedicated repository.
```
$ podman build git://github.com/scollier/purpletest
@@ -732,13 +820,18 @@ $ podman build git://github.com/scollier/purpletest
#### Building an image using a URL to an archive
- Podman will fetch the archive file, decompress it, and use its contents as the build context. The Containerfile at the root of the archive and the rest of the archive will get used as the context of the build. If you pass `-f PATH/Containerfile` option as well, the system will look for that file inside the contents of the archive.
+ Podman will fetch the archive file, decompress it, and use its contents as the
+build context. The Containerfile at the root of the archive and the rest of the
+archive will get used as the context of the build. If you pass the
+`-f PATH/Containerfile` option as well, the system will look for that file
+inside the contents of the archive.
```
$ podman build -f dev/Containerfile https://10.10.10.1/podman/context.tar.gz
```
- Note: supported compression formats are 'xz', 'bzip2', 'gzip' and 'identity' (no compression).
+ Note: supported compression formats are 'xz', 'bzip2', 'gzip' and 'identity'
+(no compression).
## Files
@@ -766,7 +859,8 @@ src
```
`*/*.c`
-Excludes files and directories whose names ends with .c in any top level subdirectory. For example, the source file include/rootless.c.
+Excludes files and directories whose names end with .c in any top level
+subdirectory. For example, the source file include/rootless.c.
`**/output*`
Excludes files and directories starting with `output` from any directory.
@@ -784,21 +878,29 @@ mechanism:
Exclude all doc files except Help.doc from the image.
-This functionality is compatible with the handling of .dockerignore files described here:
+This functionality is compatible with the handling of .dockerignore files
+described here:
https://docs.docker.com/engine/reference/builder/#dockerignore-file
**registries.conf** (`/etc/containers/registries.conf`)
-registries.conf is the configuration file which specifies which container registries should be consulted when completing image names which do not include a registry or domain portion.
+registries.conf is the configuration file which specifies which container
+registries should be consulted when completing image names which do not include
+a registry or domain portion.
## Troubleshooting
### lastlog sparse file
-If you are using a useradd command within a Containerfile with a large UID/GID, it will create a large sparse file `/var/log/lastlog`. This can cause the build to hang forever. Go language does not support sparse files correctly, which can lead to some huge files being created in your container image.
+If you are using a useradd command within a Containerfile with a large UID/GID,
+it will create a large sparse file `/var/log/lastlog`. This can cause the
+build to hang forever. Go language does not support sparse files correctly,
+which can lead to some huge files being created in your container image.
-If you are using `useradd` within your build script, you should pass the `--no-log-init or -l` option to the `useradd` command. This option tells useradd to stop creating the lastlog file.
+If you are using `useradd` within your build script, you should pass the
+`--no-log-init` or `-l` option to the `useradd` command. This option tells
+useradd to stop creating the lastlog file.
## SEE ALSO
podman(1), buildah(1), containers-registries.conf(5), crun(8), runc(8), useradd(8), podman-ps(1), podman-rm(1)
@@ -811,4 +913,9 @@ May 2018, Minor revisions added by Joe Doss <joe@solidadmin.com>
December 2017, Originally compiled by Tom Sweeney <tsweeney@redhat.com>
## FOOTNOTES
-<a name="Footnote1">1</a>: The Podman project is committed to inclusivity, a core value of open source. The `master` and `slave` mount propagation terminology used here is problematic and divisive, and should be changed. However, these terms are currently used within the Linux kernel and must be used as-is at this time. When the kernel maintainers rectify this usage, Podman will follow suit immediately.
+<a name="Footnote1">1</a>: The Podman project is committed to inclusivity, a
+core value of open source. The `master` and `slave` mount propagation
+terminology used here is problematic and divisive, and should be changed.
+However, these terms are currently used within the Linux kernel and must be
+used as-is at this time. When the kernel maintainers rectify this usage,
+Podman will follow suit immediately.
diff --git a/pkg/api/handlers/compat/containers_create.go b/pkg/api/handlers/compat/containers_create.go
index f9407df1a..4efe770b3 100644
--- a/pkg/api/handlers/compat/containers_create.go
+++ b/pkg/api/handlers/compat/containers_create.go
@@ -54,6 +54,9 @@ func CreateContainer(w http.ResponseWriter, r *http.Request) {
return
}
+ // Add the container name to the input struct
+ input.Name = query.Name
+
// Take input structure and convert to cliopts
cliOpts, args, err := common.ContainerCreateToContainerCLIOpts(input, rtc.Engine.CgroupManager)
if err != nil {
@@ -65,6 +68,7 @@ func CreateContainer(w http.ResponseWriter, r *http.Request) {
utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "fill out specgen"))
return
}
+
ic := abi.ContainerEngine{Libpod: runtime}
report, err := ic.ContainerCreate(r.Context(), sg)
if err != nil {
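The hunk above copies the `name` query parameter into the create spec, so containers created through the Docker-compatible API keep the name the client asked for. A minimal sketch of what this enables, assuming a Podman service listening on tcp://127.0.0.1:8080 (the same endpoint the new test suite below starts):

```python
# Minimal sketch (assumed endpoint): docker-py sends the container name as the
# ?name= query parameter, which the compat handler now carries into the create
# spec, so the container can afterwards be looked up by that name.
from docker import APIClient

client = APIClient(base_url="tcp://127.0.0.1:8080", timeout=15)
client.pull("quay.io/libpod/alpine:latest")
ctr = client.create_container(
    "quay.io/libpod/alpine:latest", command="top", detach=True, tty=True, name="top"
)
# inspecting by name should resolve to the container just created
assert ctr.get("Id") in client.inspect_container("top")["Id"]
client.remove_container("top", force=True)
client.close()
```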
diff --git a/pkg/api/handlers/compat/unsupported.go b/pkg/api/handlers/compat/unsupported.go
index 659c15328..e5ff266f9 100644
--- a/pkg/api/handlers/compat/unsupported.go
+++ b/pkg/api/handlers/compat/unsupported.go
@@ -14,6 +14,5 @@ func UnsupportedHandler(w http.ResponseWriter, r *http.Request) {
msg := fmt.Sprintf("Path %s is not supported", r.URL.Path)
log.Infof("Request Failed: %s", msg)
- utils.WriteJSON(w, http.StatusInternalServerError,
- entities.ErrorModel{Message: msg})
+ utils.WriteJSON(w, http.StatusNotFound, entities.ErrorModel{Message: msg})
}
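With the handler now writing `http.StatusNotFound`, requests to paths Podman registers as unsupported get a 404 with a JSON error body instead of a 500. A quick hedged check, using the `requests` library (not part of this change) and a stand-in path:

```python
# Hedged example: the path below is a hypothetical placeholder for an endpoint
# routed to UnsupportedHandler; the service address matches the test suite.
import requests

resp = requests.get("http://127.0.0.1:8080/v1.40/some-unsupported-endpoint")
print(resp.status_code)            # expected: 404 (previously 500)
print(resp.json().get("message"))  # expected: "Path ... is not supported"
```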
diff --git a/test/apiv2/rest_api/__init__.py b/test/apiv2/rest_api/__init__.py
index 5f0777d58..8100a4df5 100644
--- a/test/apiv2/rest_api/__init__.py
+++ b/test/apiv2/rest_api/__init__.py
@@ -3,7 +3,6 @@ import json
import os
import shutil
import subprocess
-import sys
import tempfile
diff --git a/test/apiv2/rest_api/test_rest_v2_0_0.py b/test/apiv2/rest_api/test_rest_v2_0_0.py
index 5dfd1fc02..0ac4fde75 100644
--- a/test/apiv2/rest_api/test_rest_v2_0_0.py
+++ b/test/apiv2/rest_api/test_rest_v2_0_0.py
@@ -62,7 +62,7 @@ class TestApi(unittest.TestCase):
TestApi.podman = Podman()
TestApi.service = TestApi.podman.open(
- "system", "service", "tcp:localhost:8080", "--log-level=debug", "--time=0"
+ "system", "service", "tcp:localhost:8080", "--time=0"
)
# give the service some time to be ready...
time.sleep(2)
diff --git a/test/python/docker/README.md b/test/python/docker/README.md
new file mode 100644
index 000000000..c10fd636d
--- /dev/null
+++ b/test/python/docker/README.md
@@ -0,0 +1,38 @@
+# Docker regression test
+
+Python test suite to validate Podman endpoints using the docker Python library (aka docker-py).
+See [Docker SDK for Python](https://docker-py.readthedocs.io/en/stable/index.html).
+
+## Running Tests
+
+To run the tests locally in your sandbox (Fedora 32, 33), first install the python3-docker package:
+
+```shell
+# dnf install python3-docker
+```
+
+### Run the entire test suite
+
+```shell
+# python3 -m unittest discover test/python/docker
+```
+
+Passing the `-v` option instructs unittest to run with a higher level of verbosity and produce detailed output:
+
+```shell
+# python3 -m unittest -v discover test/python/docker
+```
+
+### Run a specific test class
+
+```shell
+# python3 -m unittest -v test.python.docker.test_images
+```
+
+### Run a specific test within the test class
+
+```shell
+# python3 -m unittest test.python.docker.test_images.TestImages.test_import_image
+```
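For adding a new test, a minimal module skeleton following the conventions of the suites added in this change might look like the sketch below; the module and test names are illustrative only.

```python
# Illustrative skeleton; mirrors the setUp/tearDown pattern used by
# test_containers.py, test_images.py and test_system.py in this change.
import subprocess
import time
import unittest

from docker import APIClient

from test.python.docker import Podman, common, constant


class TestExample(unittest.TestCase):
    podman = None   # initialized podman configuration for tests
    service = None  # podman service instance

    @classmethod
    def setUpClass(cls):
        cls.podman = Podman()
        cls.service = cls.podman.open(
            "system", "service", "tcp:127.0.0.1:8080", "--time=0"
        )
        time.sleep(2)  # give the service some time to be ready
        if cls.service.poll() is not None:
            raise subprocess.CalledProcessError(
                cls.service.returncode, "podman system service"
            )

    @classmethod
    def tearDownClass(cls):
        cls.service.terminate()
        cls.podman.tear_down()

    def setUp(self):
        self.client = APIClient(base_url="tcp://127.0.0.1:8080", timeout=15)
        TestExample.podman.restore_image_from_cache(self.client)

    def tearDown(self):
        common.remove_all_containers(self.client)
        common.remove_all_images(self.client)
        self.client.close()

    def test_alpine_is_present(self):
        image = self.client.inspect_image(constant.ALPINE)
        self.assertIn(constant.ALPINE, image["RepoTags"])
```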
diff --git a/test/python/docker/__init__.py b/test/python/docker/__init__.py
new file mode 100644
index 000000000..0e10676b9
--- /dev/null
+++ b/test/python/docker/__init__.py
@@ -0,0 +1,157 @@
+import configparser
+import json
+import os
+import pathlib
+import shutil
+import subprocess
+import tempfile
+
+from test.python.docker import constant
+
+
+class Podman(object):
+ """
+ Instances hold the configuration and setup for running podman commands
+ """
+
+ def __init__(self):
+ """Initialize a Podman instance with global options"""
+ binary = os.getenv("PODMAN", "bin/podman")
+ self.cmd = [binary, "--storage-driver=vfs"]
+
+ cgroupfs = os.getenv("CGROUP_MANAGER", "systemd")
+ self.cmd.append(f"--cgroup-manager={cgroupfs}")
+
+ # No support for tmpfs (/tmp) or extfs (/var/tmp)
+ # self.cmd.append("--storage-driver=overlay")
+
+ if os.getenv("DEBUG"):
+ self.cmd.append("--log-level=debug")
+ self.cmd.append("--syslog=true")
+
+ self.anchor_directory = tempfile.mkdtemp(prefix="podman_docker_")
+
+ self.image_cache = os.path.join(self.anchor_directory, "cache")
+ os.makedirs(self.image_cache, exist_ok=True)
+
+ self.cmd.append("--root=" + os.path.join(self.anchor_directory, "crio"))
+ self.cmd.append("--runroot=" + os.path.join(self.anchor_directory, "crio-run"))
+
+ os.environ["REGISTRIES_CONFIG_PATH"] = os.path.join(
+ self.anchor_directory, "registry.conf"
+ )
+ p = configparser.ConfigParser()
+ p.read_dict(
+ {
+ "registries.search": {"registries": "['quay.io', 'docker.io']"},
+ "registries.insecure": {"registries": "[]"},
+ "registries.block": {"registries": "[]"},
+ }
+ )
+ with open(os.environ["REGISTRIES_CONFIG_PATH"], "w") as w:
+ p.write(w)
+
+ os.environ["CNI_CONFIG_PATH"] = os.path.join(
+ self.anchor_directory, "cni", "net.d"
+ )
+ os.makedirs(os.environ["CNI_CONFIG_PATH"], exist_ok=True)
+ self.cmd.append("--cni-config-dir=" + os.environ["CNI_CONFIG_PATH"])
+ cni_cfg = os.path.join(
+ os.environ["CNI_CONFIG_PATH"], "87-podman-bridge.conflist"
+ )
+ # json decoded and encoded to ensure legal json
+ buf = json.loads(
+ """
+ {
+ "cniVersion": "0.3.0",
+ "name": "podman",
+ "plugins": [{
+ "type": "bridge",
+ "bridge": "cni0",
+ "isGateway": true,
+ "ipMasq": true,
+ "ipam": {
+ "type": "host-local",
+ "subnet": "10.88.0.0/16",
+ "routes": [{
+ "dst": "0.0.0.0/0"
+ }]
+ }
+ },
+ {
+ "type": "portmap",
+ "capabilities": {
+ "portMappings": true
+ }
+ }
+ ]
+ }
+ """
+ )
+ with open(cni_cfg, "w") as w:
+ json.dump(buf, w)
+
+ def open(self, command, *args, **kwargs):
+ """Podman initialized instance to run a given command
+
+ :param self: Podman instance
+ :param command: podman sub-command to run
+ :param args: arguments and options for command
+ :param kwargs: See subprocess.Popen() for shell keyword
+ :return: subprocess.Popen() instance configured to run podman instance
+ """
+ cmd = self.cmd.copy()
+ cmd.append(command)
+ cmd.extend(args)
+
+ shell = kwargs.get("shell", False)
+
+ return subprocess.Popen(
+ cmd,
+ shell=shell,
+ stdin=subprocess.DEVNULL,
+ stdout=subprocess.DEVNULL,
+ stderr=subprocess.DEVNULL,
+ )
+
+ def run(self, command, *args, **kwargs):
+ """Podman initialized instance to run a given command
+
+ :param self: Podman instance
+ :param command: podman sub-command to run
+ :param args: arguments and options for command
+ :param kwargs: See subprocess.Popen() for shell and check keywords
+ :return: subprocess.CompletedProcess from running the podman command
+ """
+ cmd = self.cmd.copy()
+ cmd.append(command)
+ cmd.extend(args)
+
+ check = kwargs.get("check", False)
+ shell = kwargs.get("shell", False)
+
+ return subprocess.run(
+ cmd,
+ shell=shell,
+ check=check,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ )
+
+ def tear_down(self):
+ shutil.rmtree(self.anchor_directory, ignore_errors=True)
+
+ def restore_image_from_cache(self, client):
+ img = os.path.join(self.image_cache, constant.ALPINE_TARBALL)
+ if not os.path.exists(img):
+ client.pull(constant.ALPINE)
+ image = client.get_image(constant.ALPINE)
+ with open(img, mode="wb") as tarball:
+ for frame in image:
+ tarball.write(frame)
+ else:
+ self.run("load", "-i", img, check=True)
+
+ def flush_image_cache(self):
+ for f in pathlib.Path(self.image_cache).glob("*.tar"):
+ f.unlink()
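As a usage note, the `Podman` helper above can also be driven directly outside of unittest; a small sketch, assuming it is run from the repository root with `PODMAN` pointing at a built binary:

```python
# Sketch of driving the Podman helper directly: start the API service with
# open(), run a one-off podman command with run(), then clean up.
from test.python.docker import Podman

p = Podman()
service = p.open("system", "service", "tcp:127.0.0.1:8080", "--time=0")
try:
    done = p.run("info", check=True)   # subprocess.CompletedProcess
    print(done.stdout.decode("utf-8")[:200])
finally:
    service.terminate()
    p.tear_down()
```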
diff --git a/test/python/docker/common.py b/test/python/docker/common.py
new file mode 100644
index 000000000..2828d2d20
--- /dev/null
+++ b/test/python/docker/common.py
@@ -0,0 +1,21 @@
+from docker import APIClient
+
+from test.python.docker import constant
+
+
+def run_top_container(client: APIClient):
+ c = client.create_container(
+ constant.ALPINE, command="top", detach=True, tty=True, name="top"
+ )
+ client.start(c.get("Id"))
+ return c.get("Id")
+
+
+def remove_all_containers(client: APIClient):
+ for ctnr in client.containers(quiet=True):
+ client.remove_container(ctnr, force=True)
+
+
+def remove_all_images(client: APIClient):
+ for image in client.images(quiet=True):
+ client.remove_image(image, force=True)
diff --git a/test/python/docker/constant.py b/test/python/docker/constant.py
new file mode 100644
index 000000000..892293c97
--- /dev/null
+++ b/test/python/docker/constant.py
@@ -0,0 +1,6 @@
+ALPINE = "quay.io/libpod/alpine:latest"
+ALPINE_SHORTNAME = "alpine"
+ALPINE_TARBALL = "alpine.tar"
+BB = "quay.io/libpod/busybox:latest"
+NGINX = "quay.io/libpod/alpine_nginx:latest"
+infra = "k8s.gcr.io/pause:3.2"
diff --git a/test/python/docker/test_containers.py b/test/python/docker/test_containers.py
new file mode 100644
index 000000000..1c4c9ab53
--- /dev/null
+++ b/test/python/docker/test_containers.py
@@ -0,0 +1,214 @@
+import subprocess
+import sys
+import time
+import unittest
+
+from docker import APIClient, errors
+
+from test.python.docker import Podman, common, constant
+
+
+class TestContainers(unittest.TestCase):
+ podman = None # initialized podman configuration for tests
+ service = None # podman service instance
+ topContainerId = ""
+
+ def setUp(self):
+ super().setUp()
+ self.client = APIClient(base_url="tcp://127.0.0.1:8080", timeout=15)
+ TestContainers.podman.restore_image_from_cache(self.client)
+ TestContainers.topContainerId = common.run_top_container(self.client)
+ self.assertIsNotNone(TestContainers.topContainerId)
+
+ def tearDown(self):
+ common.remove_all_containers(self.client)
+ common.remove_all_images(self.client)
+ self.client.close()
+ return super().tearDown()
+
+ @classmethod
+ def setUpClass(cls):
+ super().setUpClass()
+ TestContainers.podman = Podman()
+ TestContainers.service = TestContainers.podman.open(
+ "system", "service", "tcp:127.0.0.1:8080", "--time=0"
+ )
+ # give the service some time to be ready...
+ time.sleep(2)
+
+ rc = TestContainers.service.poll()
+ if rc is not None:
+ raise subprocess.CalledProcessError(rc, "podman system service")
+
+ @classmethod
+ def tearDownClass(cls):
+ TestContainers.service.terminate()
+ stdout, stderr = TestContainers.service.communicate(timeout=0.5)
+ if stdout:
+ sys.stdout.write("\nContainers Service Stdout:\n" + stdout.decode("utf-8"))
+ if stderr:
+ sys.stderr.write("\nContainers Service Stderr:\n" + stderr.decode("utf-8"))
+
+ TestContainers.podman.tear_down()
+ return super().tearDownClass()
+
+ def test_inspect_container(self):
+ # Inspect bogus container
+ with self.assertRaises(errors.NotFound) as error:
+ self.client.inspect_container("dummy")
+ self.assertEqual(error.exception.response.status_code, 404)
+
+ # Inspect valid container by Id
+ container = self.client.inspect_container(TestContainers.topContainerId)
+ self.assertIn("top", container["Name"])
+
+ # Inspect valid container by name
+ container = self.client.inspect_container("top")
+ self.assertIn(TestContainers.topContainerId, container["Id"])
+
+ def test_create_container(self):
+ # Run a container with detach mode
+ container = self.client.create_container(image="alpine", detach=True)
+ self.assertEqual(len(container), 2)
+
+ def test_start_container(self):
+ # Start bogus container
+ with self.assertRaises(errors.NotFound) as error:
+ self.client.start("dummy")
+ self.assertEqual(error.exception.response.status_code, 404)
+
+ # Podman docs say it should give a 304 but it returns with no response
+ # # Starting an already started container should return 304
+ # response = self.client.start(container=TestContainers.topContainerId)
+ # self.assertEqual(error.exception.response.status_code, 304)
+
+ # Create a new container and validate the count
+ self.client.create_container(image=constant.ALPINE, name="container2")
+ containers = self.client.containers(quiet=True, all=True)
+ self.assertEqual(len(containers), 2)
+
+ def test_stop_container(self):
+ # Stop bogus container
+ with self.assertRaises(errors.NotFound) as error:
+ self.client.stop("dummy")
+ self.assertEqual(error.exception.response.status_code, 404)
+
+ # Validate the container state
+ container = self.client.inspect_container("top")
+ self.assertEqual(container["State"]["Status"], "running")
+
+ # Stop a running container and validate the state
+ self.client.stop(TestContainers.topContainerId)
+ container = self.client.inspect_container("top")
+ self.assertIn(
+ container["State"]["Status"],
+ "stopped exited",
+ )
+
+ def test_restart_container(self):
+ # Restart bogus container
+ with self.assertRaises(errors.NotFound) as error:
+ self.client.restart("dummy")
+ self.assertEqual(error.exception.response.status_code, 404)
+
+ # Validate the container state
+ self.client.stop(TestContainers.topContainerId)
+ container = self.client.inspect_container("top")
+ self.assertEqual(container["State"]["Status"], "stopped")
+
+ # restart a running container and validate the state
+ self.client.restart(TestContainers.topContainerId)
+ container = self.client.inspect_container("top")
+ self.assertEqual(container["State"]["Status"], "running")
+
+ def test_remove_container(self):
+ # Remove bogus container
+ with self.assertRaises(errors.NotFound) as error:
+ self.client.remove_container("dummy")
+ self.assertEqual(error.exception.response.status_code, 404)
+
+ # Remove container by ID with force
+ self.client.remove_container(TestContainers.topContainerId, force=True)
+ containers = self.client.containers()
+ self.assertEqual(len(containers), 0)
+
+ def test_remove_container_without_force(self):
+ # Validate current container count
+ containers = self.client.containers()
+ self.assertEqual(len(containers), 1)
+
+ # Removing a running container should throw an error
+ with self.assertRaises(errors.APIError) as error:
+ self.client.remove_container(TestContainers.topContainerId)
+ self.assertEqual(error.exception.response.status_code, 500)
+
+ # Remove container by ID with force
+ self.client.stop(TestContainers.topContainerId)
+ self.client.remove_container(TestContainers.topContainerId)
+ containers = self.client.containers()
+ self.assertEqual(len(containers), 0)
+
+ def test_pause_container(self):
+ # Pause bogus container
+ with self.assertRaises(errors.NotFound) as error:
+ self.client.pause("dummy")
+ self.assertEqual(error.exception.response.status_code, 404)
+
+ # Validate the container state
+ container = self.client.inspect_container("top")
+ self.assertEqual(container["State"]["Status"], "running")
+
+ # Pause a running container and validate the state
+ self.client.pause(container["Id"])
+ container = self.client.inspect_container("top")
+ self.assertEqual(container["State"]["Status"], "paused")
+
+ def test_pause_stopped_container(self):
+ # Stop the container
+ self.client.stop(TestContainers.topContainerId)
+
+ # Pausing an exited container should throw an error
+ with self.assertRaises(errors.APIError) as error:
+ self.client.pause(TestContainers.topContainerId)
+ self.assertEqual(error.exception.response.status_code, 500)
+
+ def test_unpause_container(self):
+ # Unpause bogus container
+ with self.assertRaises(errors.NotFound) as error:
+ self.client.unpause("dummy")
+ self.assertEqual(error.exception.response.status_code, 404)
+
+ # Validate the container state
+ self.client.pause(TestContainers.topContainerId)
+ container = self.client.inspect_container("top")
+ self.assertEqual(container["State"]["Status"], "paused")
+
+ # Unpause the paused container and validate the state
+ self.client.unpause(TestContainers.topContainerId)
+ container = self.client.inspect_container("top")
+ self.assertEqual(container["State"]["Status"], "running")
+
+ def test_list_container(self):
+ # Add container and validate the count
+ self.client.create_container(image="alpine", detach=True)
+ containers = self.client.containers(all=True)
+ self.assertEqual(len(containers), 2)
+
+ def test_filters(self):
+ self.skipTest("TODO Endpoint does not yet support filters")
+
+ # List container with filter by id
+ filters = {"id": TestContainers.topContainerId}
+ ctnrs = self.client.containers(all=True, filters=filters)
+ self.assertEqual(len(ctnrs), 1)
+
+ # List container with filter by name
+ filters = {"name": "top"}
+ ctnrs = self.client.containers(all=True, filters=filters)
+ self.assertEqual(len(ctnrs), 1)
+
+ def test_rename_container(self):
+ # rename bogus container
+ with self.assertRaises(errors.APIError) as error:
+ self.client.rename(container="dummy", name="newname")
+ self.assertEqual(error.exception.response.status_code, 404)
diff --git a/test/python/docker/test_images.py b/test/python/docker/test_images.py
new file mode 100644
index 000000000..f049da96f
--- /dev/null
+++ b/test/python/docker/test_images.py
@@ -0,0 +1,169 @@
+import collections
+import os
+import subprocess
+import sys
+import time
+import unittest
+
+from docker import APIClient, errors
+
+from test.python.docker import Podman, common, constant
+
+
+class TestImages(unittest.TestCase):
+ podman = None # initialized podman configuration for tests
+ service = None # podman service instance
+
+ def setUp(self):
+ super().setUp()
+ self.client = APIClient(base_url="tcp://127.0.0.1:8080", timeout=15)
+
+ TestImages.podman.restore_image_from_cache(self.client)
+
+ def tearDown(self):
+ common.remove_all_images(self.client)
+ self.client.close()
+ return super().tearDown()
+
+ @classmethod
+ def setUpClass(cls):
+ super().setUpClass()
+ TestImages.podman = Podman()
+ TestImages.service = TestImages.podman.open(
+ "system", "service", "tcp:127.0.0.1:8080", "--time=0"
+ )
+ # give the service some time to be ready...
+ time.sleep(2)
+
+ returncode = TestImages.service.poll()
+ if returncode is not None:
+ raise subprocess.CalledProcessError(returncode, "podman system service")
+
+ @classmethod
+ def tearDownClass(cls):
+ TestImages.service.terminate()
+ stdout, stderr = TestImages.service.communicate(timeout=0.5)
+ if stdout:
+ sys.stdout.write("\nImages Service Stdout:\n" + stdout.decode("utf-8"))
+ if stderr:
+ sys.stderr.write("\nImAges Service Stderr:\n" + stderr.decode("utf-8"))
+
+ TestImages.podman.tear_down()
+ return super().tearDownClass()
+
+ def test_inspect_image(self):
+ """Inspect Image"""
+ # Check for error with wrong image name
+ with self.assertRaises(errors.NotFound):
+ self.client.inspect_image("dummy")
+ alpine_image = self.client.inspect_image(constant.ALPINE)
+ self.assertIn(constant.ALPINE, alpine_image["RepoTags"])
+
+ def test_tag_invalid_image(self):
+ """Tag Image
+
+ Validates if invalid image name is given a bad response is encountered
+ """
+ with self.assertRaises(errors.NotFound):
+ self.client.tag("dummy", "demo")
+
+ def test_tag_valid_image(self):
+ """Validates if the image is tagged successfully"""
+ self.client.tag(constant.ALPINE, "demo", constant.ALPINE_SHORTNAME)
+ alpine_image = self.client.inspect_image(constant.ALPINE)
+ for x in alpine_image["RepoTags"]:
+ self.assertIn("alpine", x)
+
+ # @unittest.skip("doesn't work now")
+ def test_retag_valid_image(self):
+ """Validates if name updates when the image is retagged"""
+ self.client.tag(constant.ALPINE_SHORTNAME, "demo", "rename")
+ alpine_image = self.client.inspect_image(constant.ALPINE)
+ self.assertNotIn("demo:test", alpine_image["RepoTags"])
+
+ def test_list_images(self):
+ """List images"""
+ all_images = self.client.images()
+ self.assertEqual(len(all_images), 1)
+ # Add more images
+ self.client.pull(constant.BB)
+ all_images = self.client.images()
+ self.assertEqual(len(all_images), 2)
+
+ # List images with filter
+ filters = {"reference": "alpine"}
+ all_images = self.client.images(filters=filters)
+ self.assertEqual(len(all_images), 1)
+
+ def test_search_image(self):
+ """Search for image"""
+ response = self.client.search("libpod/alpine")
+ for i in response:
+ self.assertIn("quay.io/libpod/alpine", i["Name"])
+
+ def test_remove_image(self):
+ """Remove image"""
+ # Check for error with wrong image name
+ with self.assertRaises(errors.NotFound):
+ self.client.remove_image("dummy")
+ all_images = self.client.images()
+ self.assertEqual(len(all_images), 1)
+
+ alpine_image = self.client.inspect_image(constant.ALPINE)
+ self.client.remove_image(alpine_image["Id"])
+ all_images = self.client.images()
+ self.assertEqual(len(all_images), 0)
+
+ def test_image_history(self):
+ """Image history"""
+ # Check for error with wrong image name
+ with self.assertRaises(errors.NotFound):
+ self.client.history("dummy")
+
+ # NOTE: history() has incorrect return type hint
+ history = self.client.history(constant.ALPINE)
+ alpine_image = self.client.inspect_image(constant.ALPINE)
+ image_id = (
+ alpine_image["Id"][7:]
+ if alpine_image["Id"].startswith("sha256:")
+ else alpine_image["Id"]
+ )
+
+ found = False
+ for change in history:
+ found |= image_id in change.values()
+ self.assertTrue(found, f"image id {image_id} not found in history")
+
+ def test_get_image_exists_not(self):
+ """Negative test for get image"""
+ with self.assertRaises(errors.NotFound):
+ response = self.client.get_image("image_does_not_exists")
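+            # get_image() streams its result; consume it so the NotFound error is actually raised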
+ collections.deque(response)
+
+ def test_export_image(self):
+ """Export Image"""
+ self.client.pull(constant.BB)
+ image = self.client.get_image(constant.BB)
+
+ file = os.path.join(TestImages.podman.image_cache, "busybox.tar")
+ with open(file, mode="wb") as tarball:
+ for frame in image:
+ tarball.write(frame)
+ sz = os.path.getsize(file)
+ self.assertGreater(sz, 0)
+
+ def test_import_image(self):
+ """Import|Load Image"""
+ all_images = self.client.images()
+ self.assertEqual(len(all_images), 1)
+
+ file = os.path.join(TestImages.podman.image_cache, constant.ALPINE_TARBALL)
+ self.client.import_image_from_file(filename=file)
+
+ all_images = self.client.images()
+ self.assertEqual(len(all_images), 2)
+
+
+if __name__ == "__main__":
+ # Setup temporary space
+ unittest.main()
diff --git a/test/python/docker/test_system.py b/test/python/docker/test_system.py
new file mode 100644
index 000000000..f911baee4
--- /dev/null
+++ b/test/python/docker/test_system.py
@@ -0,0 +1,66 @@
+import subprocess
+import sys
+import time
+import unittest
+
+from docker import APIClient
+
+from test.python.docker import Podman, common, constant
+
+
+class TestSystem(unittest.TestCase):
+ podman = None # initialized podman configuration for tests
+ service = None # podman service instance
+ topContainerId = ""
+
+ def setUp(self):
+ super().setUp()
+ self.client = APIClient(base_url="tcp://127.0.0.1:8080", timeout=15)
+
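+        # Every test starts with the cached image restored and one running "top" container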
+ TestSystem.podman.restore_image_from_cache(self.client)
+ TestSystem.topContainerId = common.run_top_container(self.client)
+
+ def tearDown(self):
+ common.remove_all_containers(self.client)
+ common.remove_all_images(self.client)
+ self.client.close()
+ return super().tearDown()
+
+ @classmethod
+ def setUpClass(cls):
+ super().setUpClass()
+ TestSystem.podman = Podman()
+ TestSystem.service = TestSystem.podman.open(
+ "system", "service", "tcp:127.0.0.1:8080", "--time=0"
+ )
+ # give the service some time to be ready...
+ time.sleep(2)
+
+ returncode = TestSystem.service.poll()
+ if returncode is not None:
+ raise subprocess.CalledProcessError(returncode, "podman system service")
+
+ @classmethod
+ def tearDownClass(cls):
+ TestSystem.service.terminate()
+ stdout, stderr = TestSystem.service.communicate(timeout=0.5)
+ if stdout:
+            sys.stdout.write("\nSystem Service Stdout:\n" + stdout.decode("utf-8"))
+        if stderr:
+            sys.stderr.write("\nSystem Service Stderr:\n" + stderr.decode("utf-8"))
+
+ TestSystem.podman.tear_down()
+ return super().tearDownClass()
+
+ def test_Info(self):
+ self.assertIsNotNone(self.client.info())
+
+ def test_info_container_details(self):
+ info = self.client.info()
+ self.assertEqual(info["Containers"], 1)
+ self.client.create_container(image=constant.ALPINE)
+ info = self.client.info()
+ self.assertEqual(info["Containers"], 2)
+
+ def test_version(self):
+ self.assertIsNotNone(self.client.version())
diff --git a/test/python/dockerpy/README.md b/test/python/dockerpy/README.md
deleted file mode 100644
index 22908afc6..000000000
--- a/test/python/dockerpy/README.md
+++ /dev/null
@@ -1,40 +0,0 @@
-# Dockerpy regression test
-
-Python test suite to validate Podman endpoints using dockerpy library
-
-## Running Tests
-
-To run the tests locally in your sandbox (Fedora 32):
-
-```shell script
-# dnf install python3-docker
-```
-
-### Run the entire test suite
-
-```shell
-# cd test/python/dockerpy
-# PYTHONPATH=/usr/bin/python python -m unittest discover .
-```
-
-Passing the -v option to your test script will instruct unittest.main() to enable a higher level of verbosity, and produce detailed output:
-
-```shell
-# cd test/python/dockerpy
-# PYTHONPATH=/usr/bin/python python -m unittest -v discover .
-```
-
-### Run a specific test class
-
-```shell
-# cd test/python/dockerpy
-# PYTHONPATH=/usr/bin/python python -m unittest -v tests.test_images
-```
-
-### Run a specific test within the test class
-
-```shell
-# cd test/python/dockerpy
-# PYTHONPATH=/usr/bin/python python -m unittest tests.test_images.TestImages.test_import_image
-
-```
diff --git a/test/python/dockerpy/__init__.py b/test/python/dockerpy/__init__.py
deleted file mode 100644
index e69de29bb..000000000
--- a/test/python/dockerpy/__init__.py
+++ /dev/null
diff --git a/test/python/dockerpy/tests/__init__.py b/test/python/dockerpy/tests/__init__.py
deleted file mode 100644
index e69de29bb..000000000
--- a/test/python/dockerpy/tests/__init__.py
+++ /dev/null
diff --git a/test/python/dockerpy/tests/common.py b/test/python/dockerpy/tests/common.py
deleted file mode 100644
index f83f4076f..000000000
--- a/test/python/dockerpy/tests/common.py
+++ /dev/null
@@ -1,105 +0,0 @@
-import os
-import pathlib
-import subprocess
-import sys
-import time
-
-from docker import APIClient
-
-from . import constant
-
-alpineDict = {
- "name": "docker.io/library/alpine:latest",
- "shortName": "alpine",
- "tarballName": "alpine.tar"
-}
-
-
-def get_client():
- client = APIClient(base_url="http://localhost:8080", timeout=15)
- return client
-
-
-client = get_client()
-
-
-def podman():
- binary = os.getenv("PODMAN_BINARY")
- if binary is None:
- binary = "../../../bin/podman"
- return binary
-
-
-def restore_image_from_cache(TestClass):
- alpineImage = os.path.join(constant.ImageCacheDir,
- alpineDict["tarballName"])
- if not os.path.exists(alpineImage):
- os.makedirs(constant.ImageCacheDir, exist_ok=True)
- client.pull(constant.ALPINE)
- image = client.get_image(constant.ALPINE)
- tarball = open(alpineImage, mode="wb")
- for frame in image:
- tarball.write(frame)
- tarball.close()
- else:
- subprocess.run(
- [podman(), "load", "-i", alpineImage],
- shell=False,
- stdin=subprocess.DEVNULL,
- stdout=subprocess.DEVNULL,
- stderr=subprocess.DEVNULL,
- check=True,
- )
-
-
-def flush_image_cache(TestCase):
- for f in pathlib.Path(constant.ImageCacheDir).glob("*"):
- f.unlink(f)
-
-
-def run_top_container():
- c = client.create_container(image=constant.ALPINE,
- command='/bin/sleep 5',
- name=constant.TOP)
- client.start(container=c.get("Id"))
- return c.get("Id")
-
-
-def enable_sock(TestClass):
- TestClass.podman = subprocess.Popen(
- [
- podman(), "system", "service", "tcp:localhost:8080",
- "--log-level=debug", "--time=0"
- ],
- shell=False,
- stdin=subprocess.DEVNULL,
- stdout=subprocess.DEVNULL,
- stderr=subprocess.DEVNULL,
- )
- time.sleep(2)
-
-
-def terminate_connection(TestClass):
- TestClass.podman.terminate()
- stdout, stderr = TestClass.podman.communicate(timeout=0.5)
- if stdout:
- print("\nService Stdout:\n" + stdout.decode('utf-8'))
- if stderr:
- print("\nService Stderr:\n" + stderr.decode('utf-8'))
-
- if TestClass.podman.returncode > 0:
- sys.stderr.write("podman exited with error code {}\n".format(
- TestClass.podman.returncode))
- sys.exit(2)
-
-
-def remove_all_containers():
- containers = client.containers(quiet=True)
- for c in containers:
- client.remove_container(container=c.get("Id"), force=True)
-
-
-def remove_all_images():
- allImages = client.images()
- for image in allImages:
- client.remove_image(image, force=True)
diff --git a/test/python/dockerpy/tests/constant.py b/test/python/dockerpy/tests/constant.py
deleted file mode 100644
index b44442d02..000000000
--- a/test/python/dockerpy/tests/constant.py
+++ /dev/null
@@ -1,13 +0,0 @@
-BB = "docker.io/library/busybox:latest"
-NGINX = "docker.io/library/nginx:latest"
-ALPINE = "docker.io/library/alpine:latest"
-ALPINE_SHORTNAME = "alpine"
-ALPINELISTTAG = "docker.io/library/alpine:3.10.2"
-ALPINELISTDIGEST = "docker.io/library/alpine@sha256:72c42ed48c3a2db31b7dafe17d275b634664a708d901ec9fd57b1529280f01fb"
-ALPINEAMD64DIGEST = "docker.io/library/alpine@sha256:acd3ca9941a85e8ed16515bfc5328e4e2f8c128caa72959a58a127b7801ee01f"
-ALPINEAMD64ID = "961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4"
-ALPINEARM64DIGEST = "docker.io/library/alpine@sha256:db7f3dcef3d586f7dd123f107c93d7911515a5991c4b9e51fa2a43e46335a43e"
-ALPINEARM64ID = "915beeae46751fc564998c79e73a1026542e945ca4f73dc841d09ccc6c2c0672"
-infra = "k8s.gcr.io/pause:3.2"
-TOP = "top"
-ImageCacheDir = "/tmp/podman/imagecachedir"
diff --git a/test/python/dockerpy/tests/test_containers.py b/test/python/dockerpy/tests/test_containers.py
deleted file mode 100644
index 6b89688d4..000000000
--- a/test/python/dockerpy/tests/test_containers.py
+++ /dev/null
@@ -1,193 +0,0 @@
-import os
-import time
-import unittest
-
-import requests
-
-from . import common, constant
-
-client = common.get_client()
-
-
-class TestContainers(unittest.TestCase):
- topContainerId = ""
-
- def setUp(self):
- super().setUp()
- common.restore_image_from_cache(self)
- TestContainers.topContainerId = common.run_top_container()
-
- def tearDown(self):
- common.remove_all_containers()
- common.remove_all_images()
- return super().tearDown()
-
- @classmethod
- def setUpClass(cls):
- super().setUpClass()
- common.enable_sock(cls)
-
- @classmethod
- def tearDownClass(cls):
- common.terminate_connection(cls)
- common.flush_image_cache(cls)
- return super().tearDownClass()
-
- def test_inspect_container(self):
- # Inspect bogus container
- with self.assertRaises(requests.HTTPError) as error:
- client.inspect_container("dummy")
- self.assertEqual(error.exception.response.status_code, 404)
- # Inspect valid container by name
- container = client.inspect_container(constant.TOP)
- self.assertIn(TestContainers.topContainerId, container["Id"])
- # Inspect valid container by Id
- container = client.inspect_container(TestContainers.topContainerId)
- self.assertIn(constant.TOP, container["Name"])
-
- def test_create_container(self):
- # Run a container with detach mode
- container = client.create_container(image="alpine", detach=True)
- self.assertEqual(len(container), 2)
-
- def test_start_container(self):
- # Start bogus container
- with self.assertRaises(requests.HTTPError) as error:
- client.start("dummy")
- self.assertEqual(error.exception.response.status_code, 404)
-
- # Podman docs says it should give a 304 but returns with no response
- # # Start a already started container should return 304
- # response = client.start(container=TestContainers.topContainerId)
- # self.assertEqual(error.exception.response.status_code, 304)
-
- # Create a new container and validate the count
- client.create_container(image=constant.ALPINE, name="container2")
- containers = client.containers(quiet=True, all=True)
- self.assertEqual(len(containers), 2)
-
- def test_stop_container(self):
- # Stop bogus container
- with self.assertRaises(requests.HTTPError) as error:
- client.stop("dummy")
- self.assertEqual(error.exception.response.status_code, 404)
-
- # Validate the container state
- container = client.inspect_container(constant.TOP)
- self.assertEqual(container["State"]["Status"], "running")
-
- # Stop a running container and validate the state
- client.stop(TestContainers.topContainerId)
- container = client.inspect_container(constant.TOP)
- self.assertIn(
- container["State"]["Status"],
- "stopped exited",
- )
-
- def test_restart_container(self):
- # Restart bogus container
- with self.assertRaises(requests.HTTPError) as error:
- client.restart("dummy")
- self.assertEqual(error.exception.response.status_code, 404)
-
- # Validate the container state
- client.stop(TestContainers.topContainerId)
- container = client.inspect_container(constant.TOP)
- self.assertEqual(container["State"]["Status"], "stopped")
-
- # restart a running container and validate the state
- client.restart(TestContainers.topContainerId)
- container = client.inspect_container(constant.TOP)
- self.assertEqual(container["State"]["Status"], "running")
-
- def test_remove_container(self):
- # Remove bogus container
- with self.assertRaises(requests.HTTPError) as error:
- client.remove_container("dummy")
- self.assertEqual(error.exception.response.status_code, 404)
-
- # Remove container by ID with force
- client.remove_container(TestContainers.topContainerId, force=True)
- containers = client.containers()
- self.assertEqual(len(containers), 0)
-
- def test_remove_container_without_force(self):
- # Validate current container count
- containers = client.containers()
- self.assertTrue(len(containers), 1)
-
- # Remove running container should throw error
- with self.assertRaises(requests.HTTPError) as error:
- client.remove_container(TestContainers.topContainerId)
- self.assertEqual(error.exception.response.status_code, 500)
-
- # Remove container by ID with force
- client.stop(TestContainers.topContainerId)
- client.remove_container(TestContainers.topContainerId)
- containers = client.containers()
- self.assertEqual(len(containers), 0)
-
- def test_pause_container(self):
- # Pause bogus container
- with self.assertRaises(requests.HTTPError) as error:
- client.pause("dummy")
- self.assertEqual(error.exception.response.status_code, 404)
-
- # Validate the container state
- container = client.inspect_container(constant.TOP)
- self.assertEqual(container["State"]["Status"], "running")
-
- # Pause a running container and validate the state
- client.pause(container)
- container = client.inspect_container(constant.TOP)
- self.assertEqual(container["State"]["Status"], "paused")
-
- def test_pause_stoped_container(self):
- # Stop the container
- client.stop(TestContainers.topContainerId)
-
- # Pause exited container should trow error
- with self.assertRaises(requests.HTTPError) as error:
- client.pause(TestContainers.topContainerId)
- self.assertEqual(error.exception.response.status_code, 500)
-
- def test_unpause_container(self):
- # Unpause bogus container
- with self.assertRaises(requests.HTTPError) as error:
- client.unpause("dummy")
- self.assertEqual(error.exception.response.status_code, 404)
-
- # Validate the container state
- client.pause(TestContainers.topContainerId)
- container = client.inspect_container(constant.TOP)
- self.assertEqual(container["State"]["Status"], "paused")
-
- # Pause a running container and validate the state
- client.unpause(TestContainers.topContainerId)
- container = client.inspect_container(constant.TOP)
- self.assertEqual(container["State"]["Status"], "running")
-
- def test_list_container(self):
-
- # Add container and validate the count
- client.create_container(image="alpine", detach=True)
- containers = client.containers(all=True)
- self.assertEqual(len(containers), 2)
-
- # Not working for now......checking
- # # List container with filter by id
- # filters = {'id':TestContainers.topContainerId}
- # filteredContainers = client.containers(all=True,filters = filters)
- # self.assertEqual(len(filteredContainers) , 1)
-
- # # List container with filter by name
- # filters = {'name':constant.TOP}
- # filteredContainers = client.containers(all=True,filters = filters)
- # self.assertEqual(len(filteredContainers) , 1)
-
- @unittest.skip("Not Supported yet")
- def test_rename_container(self):
- # rename bogus container
- with self.assertRaises(requests.HTTPError) as error:
- client.rename(container="dummy", name="newname")
- self.assertEqual(error.exception.response.status_code, 404)
diff --git a/test/python/dockerpy/tests/test_images.py b/test/python/dockerpy/tests/test_images.py
deleted file mode 100644
index 602a86de2..000000000
--- a/test/python/dockerpy/tests/test_images.py
+++ /dev/null
@@ -1,162 +0,0 @@
-import os
-import stat
-import unittest
-from os import remove
-from stat import ST_SIZE
-
-import docker
-import requests
-
-from . import common, constant
-
-client = common.get_client()
-
-
-class TestImages(unittest.TestCase):
- def setUp(self):
- super().setUp()
- common.restore_image_from_cache(self)
-
- def tearDown(self):
- common.remove_all_images()
- return super().tearDown()
-
- @classmethod
- def setUpClass(cls):
- super().setUpClass()
- common.enable_sock(cls)
-
- @classmethod
- def tearDownClass(cls):
- common.terminate_connection(cls)
- common.flush_image_cache(cls)
- return super().tearDownClass()
-
-# Inspect Image
-
- def test_inspect_image(self):
- # Check for error with wrong image name
- with self.assertRaises(requests.HTTPError):
- client.inspect_image("dummy")
- alpine_image = client.inspect_image(constant.ALPINE)
- self.assertIn(constant.ALPINE, alpine_image["RepoTags"])
-
-# Tag Image
-
-# Validates if invalid image name is given a bad response is encountered.
-
- def test_tag_invalid_image(self):
- with self.assertRaises(requests.HTTPError):
- client.tag("dummy", "demo")
-
- # Validates if the image is tagged successfully.
- def test_tag_valid_image(self):
- client.tag(constant.ALPINE, "demo", constant.ALPINE_SHORTNAME)
- alpine_image = client.inspect_image(constant.ALPINE)
- for x in alpine_image["RepoTags"]:
- if ("demo:alpine" in x):
- self.assertTrue
- self.assertFalse
-
- # Validates if name updates when the image is retagged.
- @unittest.skip("doesn't work now")
- def test_retag_valid_image(self):
- client.tag(constant.ALPINE_SHORTNAME, "demo", "rename")
- alpine_image = client.inspect_image(constant.ALPINE)
- self.assertNotIn("demo:test", alpine_image["RepoTags"])
-
-# List Image
-# List All Images
-
- def test_list_images(self):
- allImages = client.images()
- self.assertEqual(len(allImages), 1)
- # Add more images
- client.pull(constant.BB)
- allImages = client.images()
- self.assertEqual(len(allImages), 2)
-
- # List images with filter
- filters = {'reference': 'alpine'}
- allImages = client.images(filters=filters)
- self.assertEqual(len(allImages), 1)
-
-# Search Image
-
- def test_search_image(self):
- response = client.search("alpine")
- for i in response:
- # Alpine found
- if "docker.io/library/alpine" in i["Name"]:
- self.assertTrue
- self.assertFalse
-
-# Image Exist (No docker-py support yet)
-
-# Remove Image
-
- def test_remove_image(self):
- # Check for error with wrong image name
- with self.assertRaises(requests.HTTPError):
- client.remove_image("dummy")
- allImages = client.images()
- self.assertEqual(len(allImages), 1)
- alpine_image = client.inspect_image(constant.ALPINE)
- client.remove_image(alpine_image)
- allImages = client.images()
- self.assertEqual(len(allImages), 0)
-
-# Image History
-
- def test_image_history(self):
- # Check for error with wrong image name
- with self.assertRaises(requests.HTTPError):
- client.history("dummy")
-
- imageHistory = client.history(constant.ALPINE)
- alpine_image = client.inspect_image(constant.ALPINE)
- for h in imageHistory:
- if h["Id"] in alpine_image["Id"]:
- self.assertTrue
- self.assertFalse
-
-# Prune Image (No docker-py support yet)
-
- def test_get_image_dummy(self):
- # FIXME: seems to be an error in the library
- self.skipTest("Documentation and library do not match")
- # Check for error with wrong image name
- with self.assertRaises(docker.errors.ImageNotFound):
- client.get_image("dummy")
-
-# Export Image
-
- def test_export_image(self):
- client.pull(constant.BB)
- if not os.path.exists(constant.ImageCacheDir):
- os.makedirs(constant.ImageCacheDir)
-
- image = client.get_image(constant.BB)
-
- file = os.path.join(constant.ImageCacheDir, "busybox.tar")
- tarball = open(file, mode="wb")
- for frame in image:
- tarball.write(frame)
- tarball.close()
- sz = os.path.getsize(file)
- self.assertGreater(sz, 0)
-
-
-# Import|Load Image
-
- def test_import_image(self):
- allImages = client.images()
- self.assertEqual(len(allImages), 1)
- file = os.path.join(constant.ImageCacheDir, "alpine.tar")
- client.import_image_from_file(filename=file)
- allImages = client.images()
- self.assertEqual(len(allImages), 2)
-
-if __name__ == '__main__':
- # Setup temporary space
- unittest.main()
diff --git a/test/python/dockerpy/tests/test_info_version.py b/test/python/dockerpy/tests/test_info_version.py
deleted file mode 100644
index e3ee18ec7..000000000
--- a/test/python/dockerpy/tests/test_info_version.py
+++ /dev/null
@@ -1,44 +0,0 @@
-import unittest
-
-from . import common, constant
-
-client = common.get_client()
-
-
-class TestInfo_Version(unittest.TestCase):
-
- podman = None
- topContainerId = ""
-
- def setUp(self):
- super().setUp()
- common.restore_image_from_cache(self)
- TestInfo_Version.topContainerId = common.run_top_container()
-
- def tearDown(self):
- common.remove_all_containers()
- common.remove_all_images()
- return super().tearDown()
-
- @classmethod
- def setUpClass(cls):
- super().setUpClass()
- common.enable_sock(cls)
-
- @classmethod
- def tearDownClass(cls):
- common.terminate_connection(cls)
- return super().tearDownClass()
-
- def test_Info(self):
- self.assertIsNotNone(client.info())
-
- def test_info_container_details(self):
- info = client.info()
- self.assertEqual(info["Containers"], 1)
- client.create_container(image=constant.ALPINE)
- info = client.info()
- self.assertEqual(info["Containers"], 2)
-
- def test_version(self):
- self.assertIsNotNone(client.version())
diff --git a/test/system/030-run.bats b/test/system/030-run.bats
index b0c855d81..12df966e2 100644
--- a/test/system/030-run.bats
+++ b/test/system/030-run.bats
@@ -436,6 +436,16 @@ json-file | f
@test "podman run --log-driver journald" {
skip_if_remote "We cannot read journalctl over remote."
+ # We can't use journald on RHEL as rootless, either: rhbz#1895105
+ if is_rootless; then
+ run journalctl -n 1
+ if [[ $status -ne 0 ]]; then
+ if [[ $output =~ permission ]]; then
+ skip "Cannot use rootless journald on this system"
+ fi
+ fi
+ fi
+
msg=$(random_string 20)
pidfile="${PODMAN_TMPDIR}/$(random_string 20)"
diff --git a/test/system/035-logs.bats b/test/system/035-logs.bats
index 130bc5243..a3d6a5800 100644
--- a/test/system/035-logs.bats
+++ b/test/system/035-logs.bats
@@ -51,6 +51,16 @@ ${cid[0]} d" "Sequential output from logs"
}
@test "podman logs over journald" {
+ # We can't use journald on RHEL as rootless: rhbz#1895105
+ if is_rootless; then
+ run journalctl -n 1
+ if [[ $status -ne 0 ]]; then
+ if [[ $output =~ permission ]]; then
+ skip "Cannot use rootless journald on this system"
+ fi
+ fi
+ fi
+
msg=$(random_string 20)
run_podman run --name myctr --log-driver journald $IMAGE echo $msg