Diffstat (limited to 'docs/tutorials')
-rw-r--r--  docs/tutorials/README.md               4
-rw-r--r--  docs/tutorials/image_signing.md      194
-rw-r--r--  docs/tutorials/remote_client.md       37
-rw-r--r--  docs/tutorials/rootless_tutorial.md   55
4 files changed, 244 insertions, 46 deletions
diff --git a/docs/tutorials/README.md b/docs/tutorials/README.md
index bcd1b01d9..191d7a4b5 100644
--- a/docs/tutorials/README.md
+++ b/docs/tutorials/README.md
@@ -23,3 +23,7 @@ A brief how-to on using the Podman remote-client.
**[How to use libpod for custom/derivative projects](podman-derivative-api.md)**
How the libpod API can be used within your own project.
+
+**[Image Signing](image_signing.md)**
+
+Learn how to set up and use image signing with Podman.
diff --git a/docs/tutorials/image_signing.md b/docs/tutorials/image_signing.md
new file mode 100644
index 000000000..f0adca9af
--- /dev/null
+++ b/docs/tutorials/image_signing.md
@@ -0,0 +1,194 @@
+# How to sign and distribute container images using Podman
+
+The motivation for signing container images is to trust only dedicated image
+providers and thereby mitigate man-in-the-middle (MITM) attacks and attacks on
+container registries. One way to sign images is to utilize a GNU Privacy Guard
+([GPG][0]) key. This technique is generally compatible with any OCI-compliant
+container registry, such as [Quay.io][1]. It is worth mentioning that the
+OpenShift integrated container registry supports this signing mechanism out of
+the box, which makes separate signature storage unnecessary.
+
+[0]: https://gnupg.org
+[1]: https://quay.io
+
+From a technical perspective, we can utilize Podman to sign an image before
+pushing it to a remote registry. After that, every system running Podman has to
+be configured to retrieve the signatures from a remote server, which can be any
+simple web server. This means that every unsigned image will be rejected during
+an image pull operation. But how does this work?
+
+First of all, we have to create a GPG key pair or select an already locally
+available one. To generate a new GPG key, just run `gpg --full-gen-key` and
+follow the interactive dialog. Now we should be able to verify that the key
+exists locally:
+
+```bash
+> gpg --list-keys sgrunert@suse.com
+pub rsa2048 2018-11-26 [SC] [expires: 2020-11-25]
+ XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+uid [ultimate] Sascha Grunert <sgrunert@suse.com>
+sub rsa2048 2018-11-26 [E] [expires: 2020-11-25]
+```
+
+Now let’s assume that we run a container registry. For example, we could simply
+start one on our local machine:
+
+```bash
+> sudo podman run -d -p 5000:5000 docker.io/registry
+```
+
+The registry does not know anything about image signing; it just provides the
+remote storage for the container images. This means that if we want to sign an
+image, we have to take care of how to distribute the signatures.
+
+Let’s choose a standard `alpine` image for our signing experiment:
+
+```bash
+> sudo podman pull docker://docker.io/alpine:latest
+```
+
+```bash
+> sudo podman images alpine
+REPOSITORY TAG IMAGE ID CREATED SIZE
+docker.io/library/alpine latest e7d92cdc71fe 6 weeks ago 5.86 MB
+```
+
+Now we can re-tag the image to point it to our local registry:
+
+```bash
+> sudo podman tag alpine localhost:5000/alpine
+```
+
+```bash
+> sudo podman images alpine
+REPOSITORY TAG IMAGE ID CREATED SIZE
+localhost:5000/alpine latest e7d92cdc71fe 6 weeks ago 5.86 MB
+docker.io/library/alpine latest e7d92cdc71fe 6 weeks ago 5.86 MB
+```
+
+Podman would now be able to push the image and sign it in one command. But for
+this to work, we have to modify our system-wide registries configuration at
+`/etc/containers/registries.d/default.yaml`:
+
+```yaml
+default-docker:
+ sigstore: http://localhost:8000 # Added by us
+ sigstore-staging: file:///var/lib/containers/sigstore
+```
+
+We can see that we have two signature stores configured:
+
+- `sigstore`: referencing a web server for signature reading
+- `sigstore-staging`: referencing a file path for signature writing
+
+Now, let’s push and sign the image:
+
+```bash
+> sudo -E GNUPGHOME=$HOME/.gnupg \
+ podman push \
+ --tls-verify=false \
+ --sign-by sgrunert@suse.com \
+ localhost:5000/alpine
+…
+Storing signatures
+```
+
+If we now take a look at the system's signature storage, we see that a new
+signature is available, which was created by the image push:
+
+```bash
+> sudo ls /var/lib/containers/sigstore
+'alpine@sha256=e9b65ef660a3ff91d28cc50eba84f21798a6c5c39b4dd165047db49e84ae1fb9'
+```
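The directory name above follows the lookaside layout `<repo>@<algo>=<digest>`, with the individual signatures stored as `signature-<n>` inside it. As a minimal sketch, a hypothetical helper (not part of Podman) that builds the URL a client would fetch from our web server could look like this:

```shell
# Hypothetical helper: build the lookaside URL for a signature, following the
# layout <base>/<repo>@<algo>=<digest>/signature-<n> seen in the listing above.
sigstore_url() {
  local base=$1 repo=$2 digest=$3 n=$4
  # the on-disk/URL form replaces the ':' of "sha256:..." with '='
  printf '%s/%s@%s/signature-%s\n' "$base" "$repo" "${digest/:/=}" "$n"
}

sigstore_url http://localhost:8000 alpine \
  sha256:e9b65ef660a3ff91d28cc50eba84f21798a6c5c39b4dd165047db49e84ae1fb9 1
```

This matches the request path we will see in the web server logs later on.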
+
+The default signature store in our edited version of
+`/etc/containers/registries.d/default.yaml` references a web server listening at
+`http://localhost:8000`. For our experiment, we simply start a new server inside
+the local staging signature store:
+
+```bash
+> sudo bash -c 'cd /var/lib/containers/sigstore && python3 -m http.server'
+Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...
+```
+
+Let’s remove the local images for our verification test:
+
+```bash
+> sudo podman rmi docker.io/alpine localhost:5000/alpine
+```
+
+We have to write a policy to enforce that the signature is valid. This can be
+done by adding a new rule in `/etc/containers/policy.json`. From the example
+below, copy the `"docker"` entry into the `"transports"` section of your
+`policy.json`.
+
+```json
+{
+ "default": [{ "type": "insecureAcceptAnything" }],
+ "transports": {
+ "docker": {
+ "localhost:5000": [
+ {
+ "type": "signedBy",
+ "keyType": "GPGKeys",
+ "keyPath": "/tmp/key.gpg"
+ }
+ ]
+ }
+ }
+}
+```
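Since a malformed `policy.json` can break image pulls entirely, it can be worth syntax-checking the merged file before installing it. A small sketch, assuming `python3` is available and using a hypothetical scratch path:

```shell
# Sketch: write a candidate policy to a scratch file and syntax-check it
# before copying it to /etc/containers/policy.json. Paths are hypothetical.
cat > /tmp/policy-check.json <<'EOF'
{
  "default": [{ "type": "insecureAcceptAnything" }],
  "transports": {
    "docker": {
      "localhost:5000": [
        { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/tmp/key.gpg" }
      ]
    }
  }
}
EOF
python3 -m json.tool /tmp/policy-check.json >/dev/null && echo "policy is valid JSON"
```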
+
+The file referenced by `keyPath` does not exist yet, so we have to export the GPG public key there:
+
+```bash
+> gpg --output /tmp/key.gpg --armor --export sgrunert@suse.com
+```
+
+If we now pull the image:
+
+```bash
+> sudo podman pull --tls-verify=false localhost:5000/alpine
+…
+Storing signatures
+e7d92cdc71feacf90708cb59182d0df1b911f8ae022d29e8e95d75ca6a99776a
+```
+
+Then we can see in the logs of the web server that the signature has been
+accessed:
+
+```
+127.0.0.1 - - [04/Mar/2020 11:18:21] "GET /alpine@sha256=e9b65ef660a3ff91d28cc50eba84f21798a6c5c39b4dd165047db49e84ae1fb9/signature-1 HTTP/1.1" 200 -
+```
+
+As a counterexample, if we export the wrong key to `/tmp/key.gpg`:
+
+```bash
+> gpg --output /tmp/key.gpg --armor --export mail@saschagrunert.de
+File '/tmp/key.gpg' exists. Overwrite? (y/N) y
+```
+
+Then a pull is no longer possible:
+
+```bash
+> sudo podman pull --tls-verify=false localhost:5000/alpine
+Trying to pull localhost:5000/alpine...
+Error: error pulling image "localhost:5000/alpine": unable to pull localhost:5000/alpine: unable to pull image: Source image rejected: Invalid GPG signature: …
+```
+
+In general, there are four main things to take into consideration when signing
+container images with Podman and GPG:
+
+1. We need a valid private GPG key on the signing machine and the corresponding
+   public key on every system that will pull the image
+2. A web server has to run somewhere with access to the signature storage
+3. The web server has to be configured in an
+   `/etc/containers/registries.d/*.yaml` file
+4. Every image-pulling system has to be configured to contain the enforcing
+   policy configuration in `policy.json`
+
+That’s it for image signing and GPG. The cool thing is that this setup works out
+of the box with [CRI-O][2] as well and can be used to sign container images in
+Kubernetes environments.
+
+[2]: https://cri-o.io
diff --git a/docs/tutorials/remote_client.md b/docs/tutorials/remote_client.md
index 197ff3d26..36d429417 100644
--- a/docs/tutorials/remote_client.md
+++ b/docs/tutorials/remote_client.md
@@ -18,7 +18,7 @@ installed on it and the varlink service activated. You will also need to be abl
system as a user with privileges to the varlink socket (more on this later).
## Building the remote client
-At this time, the remote-client is not being packaged for any distribution. It must be built from
+At this time, the Podman remote-client is not being packaged for any distribution. It must be built from
source. To set up your build environment, see [Installation notes](https://github.com/containers/libpod/blob/master/install.md) and follow the
section [Building from scratch](https://github.com/containers/libpod/blob/master/install.md#building-from-scratch). Once you can successfully
build the regular Podman binary, you can now build the remote-client.
@@ -34,7 +34,14 @@ To use the remote-client, you must perform some setup on both the remote and Pod
the remote node refers to where the remote-client is being run; and the Podman node refers to where
Podman and its storage reside.
+
### Podman node setup
+
+Varlink bridge support is provided by the varlink CLI and can be installed using:
+```
+$ sudo dnf install varlink-cli
+```
+
The Podman node must have Podman (not the remote-client) installed as normal. If your system uses systemd,
then simply start the Podman varlink socket.
```
@@ -54,24 +61,28 @@ access to the remote system. This limitation is being worked on.
### Remote node setup
#### Initiate an ssh session to the Podman node
-To use the remote client, we must establish an ssh connection to the Podman server. We will also use
-that session to bind the remote varlink socket locally.
+To use the remote client, an ssh connection to the Podman server must be established.
-```
-$ ssh -L 127.0.0.1:1234:/run/podman/io.podman root@remotehost
-```
-Note here we are binding the Podman socket to a local TCP socket on port 1234.
-
-#### Running the remote client
-With the ssh session established, we can now run the remote client in a different terminal window. You
-must inform Podman where to look for the bound socket you created in the previous step using an
-environment variable.
+Using the varlink bridge, an ssh tunnel must be initiated to connect to the server. Podman must then be informed of the location of the sshd server on the target host:
```
-$ PODMAN_VARLINK_ADDRESS="tcp:127.0.0.1:1234" bin/podman-remote images
+$ export PODMAN_VARLINK_BRIDGE=$'ssh -T -p22 root@remotehost -- "varlink -A \'podman varlink \$VARLINK_ADDRESS\' bridge"'
+$ bin/podman-remote images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/library/ubuntu latest 47b19964fb50 2 weeks ago 90.7 MB
docker.io/library/alpine latest caf27325b298 3 weeks ago 5.8 MB
quay.io/cevich/gcloud_centos latest 641dad61989a 5 weeks ago 489 MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 14 months ago 747 kB
```
+
+The PODMAN_VARLINK_BRIDGE variable may be added to your login settings. It does not change per connection.
+
+If connecting from a Windows machine, the PODMAN_VARLINK_BRIDGE variable is formatted as:
+```
+set PODMAN_VARLINK_BRIDGE=C:\Windows\System32\OpenSSH\ssh.exe -T -p22 root@remotehost -- varlink -A "podman varlink $VARLINK_ADDRESS" bridge
+```
+
+The arguments before the `--` are passed to ssh, while the arguments after are for the varlink CLI. The varlink arguments should be copied verbatim.
+ - `-p` is the port on the remote host for the ssh tunnel. `22` is the default.
+ - `root` is the currently supported user, while `remotehost` is the name or IP address of the host providing the Podman service.
+ - `-i` may be added to select an identity file.
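Putting the bullet points above together, a bridge definition that uses a non-default port and an identity file might look as follows (the host name and key path are hypothetical):

```shell
# Example bridge definition: -p selects the ssh port, -i an identity file.
# "remotehost" and the key path are placeholders for your own values.
export PODMAN_VARLINK_BRIDGE=$'ssh -T -p2222 -i ~/.ssh/podman_key root@remotehost -- "varlink -A \'podman varlink \$VARLINK_ADDRESS\' bridge"'

# Inspect the resulting value; the varlink part must stay verbatim.
echo "$PODMAN_VARLINK_BRIDGE"
```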
diff --git a/docs/tutorials/rootless_tutorial.md b/docs/tutorials/rootless_tutorial.md
index 9a31826bd..8e048c746 100644
--- a/docs/tutorials/rootless_tutorial.md
+++ b/docs/tutorials/rootless_tutorial.md
@@ -31,9 +31,26 @@ The [slirp4netns](https://github.com/rootless-containers/slirp4netns) package pr
### Ensure fuse-overlayfs is installed
-When using Podman in a rootless environment, it is recommended to use fuse-overlayfs rather than the VFS file system. Installing the fuse3-devel package gives Podman the dependencies it needs to install, build and use fuse-overlayfs in a rootless environment for you. The fuse-overlayfs project is also available from [GitHub](https://github.com/containers/fuse-overlayfs). This especially needs to be checked on Ubuntu distributions as fuse-overlayfs is not generally installed by default.
+When using Podman in a rootless environment, it is recommended to use fuse-overlayfs rather than the VFS file system. For that you need the `fuse-overlayfs` executable available in `$PATH`.
-If Podman is installed before fuse-overlayfs, it may be necessary to change the `driver` option under `[storage]` to `"overlay"`.
+Your distribution might already provide it in the `fuse-overlayfs` package, but be aware that you need at least version **0.7.6**. This especially needs to be checked on Ubuntu distributions as `fuse-overlayfs` is not generally installed by default and the 0.7.6 version is not available natively on Ubuntu releases prior to **20.04**.
+
+The fuse-overlayfs project is available from [GitHub](https://github.com/containers/fuse-overlayfs), and provides instructions for easily building a static `fuse-overlayfs` executable.
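To check the installed version against the 0.7.6 minimum mentioned above, a small version comparison can help. A sketch using a hypothetical helper (assumes GNU `sort -V`); compare it against the version printed by `fuse-overlayfs --version`:

```shell
# Hypothetical helper: succeed when version $1 is at least version $2.
# Relies on GNU sort's -V (version sort) option.
version_ge() {
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

version_ge 0.7.6 0.7.6 && echo "fuse-overlayfs is new enough"
version_ge 0.4.1 0.7.6 || echo "too old, please upgrade"
```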
+
+If Podman is used before fuse-overlayfs is installed, it may be necessary to adjust the `storage.conf` file (see "User Configuration Files" below) to change the `driver` option under `[storage]` to `"overlay"` and point the `mount_program` option in `[storage.options]` to the path of the `fuse-overlayfs` executable:
+
+```
+[storage]
+ driver = "overlay"
+
+ (...)
+
+ [storage.options]
+
+ (...)
+
+ mount_program = "/usr/bin/fuse-overlayfs"
+```
### Enable user namespaces (on RHEL7 machines)
@@ -87,39 +104,11 @@ The majority of the work necessary to run Podman in a rootless environment is on
Once the Administrator has completed the setup on the machine and then the configurations for the user in /etc/subuid and /etc/subgid, the user can just start using any Podman command that they wish.
-### User Configuration Files.
-
-The Podman configuration files for root reside in /usr/share/containers with overrides in /etc/containers. In the rootless environment they reside in ${XDG\_CONFIG\_HOME}/containers and are owned by each individual user. The main files are libpod.conf and storage.conf and the user can modify these files as they wish.
-
-The default authorization file used by the `podman login` and `podman logout` commands reside in ${XDG\_RUNTIME\_DIR}/containers/auth.json.
-
-## Systemd unit for rootless container
-
-```
-[Unit]
-Description=nginx
-Requires=user@1001.service
-After=user@1001.service
-[Service]
-Type=simple
-KillMode=none
-MemoryMax=200M
-ExecStartPre=-/usr/bin/podman rm -f nginx
-ExecStartPre=/usr/bin/podman pull nginx
-ExecStart=/usr/bin/podman run --name=nginx -p 8080:80 -v /home/nginx/html:/usr/share/nginx/html:Z nginx
-ExecStop=/usr/bin/podman stop nginx
-Restart=always
-User=nginx
-Group=nginx
-[Install]
-WantedBy=multi-user.target
-```
-
-This example unit will launch a nginx container using the existing user nginx with id 1001, serving static content from /home/nginx/html and limited to 200MB of RAM.
+### User Configuration Files
-You can use all the usual systemd flags to control the process, including capabilities and cgroup directives to limit memory or CPU.
+The Podman configuration files for root reside in `/usr/share/containers` with overrides in `/etc/containers`. In the rootless environment they reside in `${XDG_CONFIG_HOME}/containers` (usually `~/.config/containers`) and are owned by each individual user. The main files are `libpod.conf` and `storage.conf` and the user can modify these files as they wish.
-See #3866 for more details.
+The default authorization file used by the `podman login` and `podman logout` commands resides in `${XDG_RUNTIME_DIR}/containers/auth.json`.
## More information