-rw-r--r--   contrib/cirrus/README.md               |  62
-rw-r--r--   contrib/cirrus/swagger_stack_trace.png | bin 0 -> 42799 bytes
-rw-r--r--   docs/Readme.md                         |  30
-rw-r--r--   test/dockerpy/README.md                |   5
-rw-r--r--   test/dockerpy/__init__.py              |   0
-rw-r--r--   test/dockerpy/common.py                |  64
-rw-r--r--   test/dockerpy/constant.py              |   2
-rw-r--r--   test/dockerpy/containers.py            |  46
-rw-r--r--   test/dockerpy/images.py                |  40
9 files changed, 211 insertions, 38 deletions
diff --git a/contrib/cirrus/README.md b/contrib/cirrus/README.md
index 541cf2f54..c8ec766e7 100644
--- a/contrib/cirrus/README.md
+++ b/contrib/cirrus/README.md
@@ -167,26 +167,50 @@ env:
 
 ### `docs` Task
 
-Builds swagger API documentation YAML and uploads to google storage for both
-PR's (for testing the process) and after a merge into any branch.  For PR's
+Builds swagger API documentation YAML and uploads to google storage (an online
+service for storing unstructured data) for both
+PR's (for testing the process) and the master branch.  For PR's
 the YAML is uploaded into a [dedicated short-pruning cycle
-bucket.](https://storage.googleapis.com/libpod-pr-releases/) For branches,
-a [separate bucket is
-used.](https://storage.googleapis.com/libpod-master-releases)
-In both cases the filename includes the source
-PR number or branch name.
-
-***Note***: [The online documentation](http://docs.podman.io/en/latest/_static/api.html)
-is presented through javascript on the client-side.  This requires CORS to be properly
-configured on the bucket, for the `http://docs.podman.io` origin.  Please see
-[Configuring CORS on a bucket](https://cloud.google.com/storage/docs/configuring-cors#configure-cors-bucket)
-for details.  This may be performed by anybody with admin access to the google storage bucket,
-using the following JSON:
+bucket.](https://storage.googleapis.com/libpod-pr-releases/) for testing purposes
+only.  For the master branch, a [separate bucket is
+used](https://storage.googleapis.com/libpod-master-releases) and provides the
+content rendered on [the API Reference page](https://docs.podman.io/en/latest/_static/api.html).
+
+The online API reference is presented by javascript to the client.  To prevent hijacking
+of the client by malicious data, the [javascript utilises CORS](https://cloud.google.com/storage/docs/cross-origin).
+This CORS metadata is served by `https://storage.googleapis.com` when configured correctly.
+It will appear in [the request and response headers from the
+client](https://cloud.google.com/storage/docs/configuring-cors#troubleshooting) when accessing
+the API reference page.
+
+However, when the CORS metadata is missing or incorrectly configured, clients will receive an
+error message similar to:
+
+![Javascript Stack Trace Image](swagger_stack_trace.png)
+
+For documentation built by Read The Docs from the master branch, CORS metadata is
+set on the `libpod-master-releases` storage bucket.  Viewing or setting the CORS
+metadata on the bucket requires having locally [installed and
+configured the google-cloud SDK](https://cloud.google.com/sdk/docs).  It also requires having
+admin access to the google-storage bucket.  Contact a project owner for help if you are
+unsure of your permissions or need help resolving an error similar to the picture above.
+
+Assuming the SDK is installed, and you have the required admin access, the following command
+will display the current CORS metadata:
+
+```
+gsutil cors get gs://libpod-master-releases
+```
+
+To function properly (allow client "trust" of content from `storage.googleapis.com`) the following
+metadata JSON should be used.  Following the JSON is an example of the command used to set this
+metadata on the libpod-master-releases bucket.  For additional information about configuring CORS
+please refer to [the google-storage documentation](https://cloud.google.com/storage/docs/configuring-cors).
 ```JSON
 [
     {
-      "origin": ["http://docs.podman.io"],
+      "origin": ["http://docs.podman.io", "https://docs.podman.io"],
       "responseHeader": ["Content-Type"],
       "method": ["GET"],
       "maxAgeSeconds": 600
@@ -194,6 +218,14 @@ using the following JSON:
 ]
 ```
 
+```
+gsutil cors set /path/to/file.json gs://libpod-master-releases
+```
+
+***Note:*** The CORS metadata does _NOT_ change after the `docs` task uploads a new swagger YAML
+file.  Therefore, if it is not functioning or misconfigured, a person must have altered it or
+changes were made to the referring site (e.g. `docs.podman.io`).
+
 ## Base-images
 
 Base-images are VM disk-images specially prepared for executing as GCE VMs.
diff --git a/contrib/cirrus/swagger_stack_trace.png b/contrib/cirrus/swagger_stack_trace.png
new file mode 100644
index 000000000..6aa063bab
--- /dev/null
+++ b/contrib/cirrus/swagger_stack_trace.png
Binary files differ
diff --git a/docs/Readme.md b/docs/Readme.md
index 987a5b8e4..9d3b9d06f 100644
--- a/docs/Readme.md
+++ b/docs/Readme.md
@@ -30,10 +30,26 @@ link on that page.
 ## API Reference
 
 The [latest online documentation](http://docs.podman.io/en/latest/_static/api.html) is
-automatically generated from committed upstream sources.  There is a short-duration
-cache involved, in case old content or an error is returned, try clearing your browser
-cache or returning to the site after 10-30 minutes.
-
-***Maintainers Note***: Please refer to [the Cirrus-CI tasks
-documentation](../contrib/cirrus/README.md#docs-task) for
-important operational details.
+automatically generated by two cooperating automation systems based on committed upstream
+source code.  First, [the Cirrus-CI docs task](../contrib/cirrus/README.md#docs-task) builds
+`pkg/api/swagger.yaml` and uploads it to a public-facing location (Google Storage Bucket -
+an online service for storing unstructured data).  Second, [Read The Docs](https://readthedocs.org)
+reacts to the github.com repository change, building the content for the [libpod documentation
+site](https://podman.readthedocs.io/).  For the API section, this site includes
+some javascript which consumes the uploaded `swagger.yaml` file directly from the Google
+Storage Bucket.
+
+Since there are multiple systems and local caching involved, it's possible that updates to
+documentation (especially the swagger/API docs) will lag by 10-or-so minutes.  However,
+because the client (i.e. your web browser) is fetching content from multiple locations that
+do not share a common domain, accessing the API section may show a stack-trace similar to
+the following:
+
+![Javascript Stack Trace Image](../contrib/cirrus/swagger_stack_trace.png)
+
+If reloading the page, or clearing your local cache does not fix the problem, it is
+likely caused by broken metadata needed to protect clients from cross-site-scripting
+style attacks.  Please [notify a maintainer](https://github.com/containers/libpod#communications)
+so they may investigate how/why the swagger.yaml file's CORS metadata is incorrect.  See
+[the Cirrus-CI tasks documentation](../contrib/cirrus/README.md#docs-task) for
+details regarding this situation.
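The CORS arrangement described in the two documents above can be spot-checked without the google-cloud SDK. The following sketch is an illustration only, not part of this diff: it assumes a hypothetical object name in the `libpod-master-releases` bucket, issues a GET with an `Origin` header the way the API Reference javascript would, and reports whether `storage.googleapis.com` returns the `Access-Control-Allow-Origin` header the page relies on.

```python
# Illustrative CORS spot-check (not part of this diff).  The object name is a
# placeholder -- substitute a swagger YAML file actually uploaded by the `docs` task.
import requests

BUCKET_URL = "https://storage.googleapis.com/libpod-master-releases"
OBJECT_NAME = "swagger-master.yaml"  # hypothetical object name


def check_cors(origin="https://docs.podman.io"):
    """GET the object with an Origin header, as the API Reference page's
    javascript would, and report the Access-Control-Allow-Origin response."""
    resp = requests.get("{}/{}".format(BUCKET_URL, OBJECT_NAME),
                        headers={"Origin": origin}, timeout=15)
    allow = resp.headers.get("Access-Control-Allow-Origin")
    if allow in (origin, "*"):
        print("CORS OK for {}: Access-Control-Allow-Origin={}".format(origin, allow))
    else:
        print("CORS metadata missing or misconfigured for {} "
              "(status={}, header={})".format(origin, resp.status_code, allow))


if __name__ == "__main__":
    check_cors()
```

If the header comes back empty for `https://docs.podman.io`, the bucket's CORS metadata likely needs to be re-applied with the `gsutil cors set` command shown in the README diff above.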
diff --git a/test/dockerpy/README.md b/test/dockerpy/README.md
index 2894fc8ab..32e426d58 100644
--- a/test/dockerpy/README.md
+++ b/test/dockerpy/README.md
@@ -6,11 +6,6 @@ Running tests
 =============
 To run the tests locally in your sandbox:
 
-#### Make sure that the Podman system service is running to do so
-
-```
-sudo podman --log-level=debug system service -t0 unix:/run/podman/podman.sock
-```
 #### Run the entire test
 
 ```
diff --git a/test/dockerpy/__init__.py b/test/dockerpy/__init__.py
new file mode 100644
index 000000000..e69de29bb
--- /dev/null
+++ b/test/dockerpy/__init__.py
diff --git a/test/dockerpy/common.py b/test/dockerpy/common.py
index 767a94ec0..fdacb49be 100644
--- a/test/dockerpy/common.py
+++ b/test/dockerpy/common.py
@@ -1,6 +1,68 @@
 import docker
+import subprocess
+import os
+import sys
+import time
 
 from docker import Client
+from . import constant
+
+alpineDict = {
+    "name": "docker.io/library/alpine:latest",
+    "shortName": "alpine",
+    "tarballName": "alpine.tar"}
 
 def get_client():
-    return docker.Client(base_url="unix:/run/podman/podman.sock")
+    client = docker.Client(base_url="http://localhost:8080", timeout=15)
+    return client
+
+client = get_client()
+
+def podman():
+    binary = os.getenv("PODMAN_BINARY")
+    if binary is None:
+        binary = "bin/podman"
+    return binary
+
+def restore_image_from_cache():
+    client.load_image(constant.ImageCacheDir+alpineDict["tarballName"])
+
+def run_top_container():
+    client.pull(constant.ALPINE)
+    c = client.create_container(constant.ALPINE, name=constant.TOP)
+    client.start(container=c.get("Id"))
+
+def enable_sock(TestClass):
+    TestClass.podman = subprocess.Popen(
+        [
+            podman(), "system", "service", "tcp:localhost:8080",
+            "--log-level=debug", "--time=0"
+        ],
+        shell=False,
+        stdin=subprocess.DEVNULL,
+        stdout=subprocess.DEVNULL,
+        stderr=subprocess.DEVNULL,
+    )
+    time.sleep(2)
+
+def terminate_connection(TestClass):
+    TestClass.podman.terminate()
+    stdout, stderr = TestClass.podman.communicate(timeout=0.5)
+    if stdout:
+        print("\nService Stdout:\n" + stdout.decode('utf-8'))
+    if stderr:
+        print("\nService Stderr:\n" + stderr.decode('utf-8'))
+
+    if TestClass.podman.returncode > 0:
+        sys.stderr.write("podman exited with error code {}\n".format(
+            TestClass.podman.returncode))
+        sys.exit(2)
+
+def remove_all_containers():
+    containers = client.containers(quiet=True)
+    for c in containers:
+        client.remove_container(container=c.get("Id"), force=True)
+
+def remove_all_images():
+    allImages = client.images()
+    for image in allImages:
+        client.remove_image(image, force=True)
diff --git a/test/dockerpy/constant.py b/test/dockerpy/constant.py
index e00457442..8a3f1d984 100644
--- a/test/dockerpy/constant.py
+++ b/test/dockerpy/constant.py
@@ -9,3 +9,5 @@ ALPINEAMD64ID = "961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e
 ALPINEARM64DIGEST = "docker.io/library/alpine@sha256:db7f3dcef3d586f7dd123f107c93d7911515a5991c4b9e51fa2a43e46335a43e"
 ALPINEARM64ID = "915beeae46751fc564998c79e73a1026542e945ca4f73dc841d09ccc6c2c0672"
 infra = "k8s.gcr.io/pause:3.2"
+TOP = "top"
+ImageCacheDir = "/tmp/podman/imagecachedir"
diff --git a/test/dockerpy/containers.py b/test/dockerpy/containers.py
new file mode 100644
index 000000000..d70ec932c
--- /dev/null
+++ b/test/dockerpy/containers.py
@@ -0,0 +1,46 @@
+
+import unittest
+import docker
+import requests
+import os
+from docker import Client
+from . import constant
+from . import common
+
+client = common.get_client()
+
+class TestContainers(unittest.TestCase):
+
+    podman = None
+
+    def setUp(self):
+        super().setUp()
+        common.run_top_container()
+
+    def tearDown(self):
+        common.remove_all_containers()
+        common.remove_all_images()
+        return super().tearDown()
+
+    @classmethod
+    def setUpClass(cls):
+        super().setUpClass()
+        common.enable_sock(cls)
+
+    @classmethod
+    def tearDownClass(cls):
+        common.terminate_connection(cls)
+        return super().tearDownClass()
+
+    def test_inspect_container(self):
+        # Inspect bogus container
+        with self.assertRaises(requests.HTTPError):
+            client.inspect_container("dummy")
+        # Inspect valid container
+        container = client.inspect_container(constant.TOP)
+        self.assertIn(constant.TOP, container["Name"])
+
+
+if __name__ == '__main__':
+    # Setup temporary space
+    unittest.main()
diff --git a/test/dockerpy/images.py b/test/dockerpy/images.py
index 07ea6c0f8..1e07d25c7 100644
--- a/test/dockerpy/images.py
+++ b/test/dockerpy/images.py
@@ -11,19 +11,29 @@ client = common.get_client()
 
 class TestImages(unittest.TestCase):
 
+    podman = None
+
     def setUp(self):
         super().setUp()
         client.pull(constant.ALPINE)
 
     def tearDown(self):
-        allImages = client.images()
-        for image in allImages:
-            client.remove_image(image, force=True)
+        common.remove_all_images()
         return super().tearDown()
 
-# Inspect Image
+    @classmethod
+    def setUpClass(cls):
+        super().setUpClass()
+        common.enable_sock(cls)
+
+
+    @classmethod
+    def tearDownClass(cls):
+        common.terminate_connection(cls)
+        return super().tearDownClass()
 
+# Inspect Image
+
     def test_inspect_image(self):
         # Check for error with wrong image name
         with self.assertRaises(requests.HTTPError):
@@ -79,8 +89,8 @@ class TestImages(unittest.TestCase):
         for i in response:
             # Alpine found
            if "docker.io/library/alpine" in i["Name"]:
-                self.assertTrue(True, msg="Image found")
-        self.assertFalse(False, msg="Image not found")
+                self.assertTrue
+        self.assertFalse
 
 # Image Exist (No docker-py support yet)
 
@@ -105,19 +115,22 @@ class TestImages(unittest.TestCase):
         alpine_image = client.inspect_image(constant.ALPINE)
         for h in imageHistory:
             if h["Id"] in alpine_image["Id"]:
-                self.assertTrue(True, msg="Image History validated")
-        self.assertFalse(False, msg="Unable to get image history")
+                self.assertTrue
+        self.assertFalse
 
 # Prune Image (No docker-py support yet)
 
 # Export Image
     def test_export_image(self):
-        file = "/tmp/alpine-latest.tar"
+        client.pull(constant.BB)
+        file = os.path.join(constant.ImageCacheDir, "busybox.tar")
+        if not os.path.exists(constant.ImageCacheDir):
+            os.makedirs(constant.ImageCacheDir)
         # Check for error with wrong image name
         with self.assertRaises(requests.HTTPError):
             client.get_image("dummy")
-        response = client.get_image(constant.ALPINE)
+        response = client.get_image(constant.BB)
         image_tar = open(file, mode="wb")
         image_tar.write(response.data)
         image_tar.close()
@@ -125,6 +138,13 @@ class TestImages(unittest.TestCase):
 
 # Import|Load Image
 
+    def test_import_image(self):
+        allImages = client.images()
+        self.assertEqual(len(allImages), 1)
+        file = os.path.join(constant.ImageCacheDir, "busybox.tar")
+        client.import_image_from_file(filename=file)
+        allImages = client.images()
+        self.assertEqual(len(allImages), 2)
 
 if __name__ == '__main__':
     # Setup temporary space
     unittest.main()
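Taken together, the new `common.py`, `containers.py`, and `images.py` pieces follow one pattern: start `podman system service` on `tcp:localhost:8080`, point docker-py's low-level `docker.Client` at it, and exercise the REST API. The sketch below condenses that pattern into a standalone script; it is an illustration rather than part of this diff, and it assumes a `podman` binary on `$PATH` plus a docker-py release that still provides `docker.Client` (as these tests use).

```python
# Illustrative standalone version of the service/client pattern used by
# test/dockerpy (not part of this diff).  Assumes a `podman` binary on $PATH
# and a docker-py release that provides the low-level docker.Client API.
import subprocess
import time

import docker


def main():
    # Start the Podman REST service, mirroring common.enable_sock().
    service = subprocess.Popen(
        ["podman", "system", "service", "tcp:localhost:8080",
         "--log-level=debug", "--time=0"],
        stdin=subprocess.DEVNULL,
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    time.sleep(2)  # give the listener a moment to come up
    try:
        # Connect the same way common.get_client() does.
        client = docker.Client(base_url="http://localhost:8080", timeout=15)
        client.pull("docker.io/library/alpine:latest")
        for image in client.images():
            print(image.get("Id"), image.get("RepoTags"))
    finally:
        service.terminate()  # mirrors common.terminate_connection()
        service.wait(timeout=5)


if __name__ == "__main__":
    main()
```

Running it should print the ID and tags of the pulled alpine image, which is essentially the state `TestImages.setUp()` establishes before each test.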