path: root/test/e2e/run_entrypoint_test.go
Commit message  (Author, Date)
* e2e tests: use Should(Exit()) and ExitWithError()  (Ed Santiago, 2021-07-15)

  e2e test failures are rife with messages like:

      Expected 1 to equal 0

  These make me cry. They're anti-helpful, requiring the reader to dive
  into the source code to figure out what those numbers mean.

  Solution: Go tests have a '.Should(Exit(NNN))' mechanism. I don't know
  if it spits out a better diagnostic (I have no way to run e2e tests on
  my laptop), but I have to fantasize that it will, and given the state
  of our flakes I assume that at least one test will fail and give me the
  opportunity to see what the error message looks like.

  THIS IS NOT REVIEWABLE CODE. There is no way for a human to review it.
  Don't bother. Maybe look at a few random ones for sanity. If you want
  to really review, here is a reproducer of what I did:

      cd test/e2e

      ! positive assertions. The second is the same as the first,
      ! with the addition of (unnecessary) parentheses because
      ! some invocations were written that way. The third is BeZero().
      perl -pi -e 's/Expect\((\S+)\.ExitCode\(\)\)\.To\(Equal\((\d+)\)\)/Expect($1).Should(Exit($2))/' *_test.go
      perl -pi -e 's/Expect\((\S+)\.ExitCode\(\)\)\.To\(\(Equal\((\d+)\)\)\)/Expect($1).Should(Exit($2))/' *_test.go
      perl -pi -e 's/Expect\((\S+)\.ExitCode\(\)\)\.To\(BeZero\(\)\)/Expect($1).Should(Exit(0))/' *_test.go

      ! Same as above, but handles three non-numeric exit codes
      ! in run_exit_test.go
      perl -pi -e 's/Expect\((\S+)\.ExitCode\(\)\)\.To\(Equal\((\S+)\)\)/Expect($1).Should(Exit($2))/' *_test.go

      ! negative assertions. Difference is the spelling of 'To(Not)',
      ! 'ToNot', and 'NotTo'. I assume those are all the same.
      perl -pi -e 's/Expect\((\S+)\.ExitCode\(\)\)\.To\(Not\(Equal\((0)\)\)\)/Expect($1).To(ExitWithError())/' *_test.go
      perl -pi -e 's/Expect\((\S+)\.ExitCode\(\)\)\.ToNot\(Equal\((0)\)\)/Expect($1).To(ExitWithError())/' *_test.go
      perl -pi -e 's/Expect\((\S+)\.ExitCode\(\)\)\.NotTo\(Equal\((0)\)\)/Expect($1).To(ExitWithError())/' *_test.go

      ! negative, old use of BeZero()
      perl -pi -e 's/Expect\((\S+)\.ExitCode\(\)\)\.ToNot\(BeZero\(\)\)/Expect($1).Should(ExitWithError())/' *_test.go

  Run those on a clean copy of main branch (at the same branch point as
  my PR, of course), then diff against a checked-out copy of my PR. There
  should be no differences. Then all you have to review is that my
  replacements above are sane.

  UPDATE: nope, that's not enough, you also need to add gomega/gexec to
  the files that don't have it:

      perl -pi -e '$_ .= "$1/gexec\"\n" if m!^(.*/onsi/gomega)"!' $(grep -L gomega/gexec $(git log -1 --stat | awk '$1 ~ /test\/e2e\// { print $1}'))

  UPDATE 2: hand-edit run_volume_test.go

  UPDATE 3: sigh, add WaitWithDefaultTimeout() to a couple of places

  UPDATE 4: skip a test due to bug #10935 (race condition)

  Signed-off-by: Ed Santiago <santiago@redhat.com>
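  As an illustration, here is a minimal sketch of the before/after assertion
  style, assuming the usual podman e2e helpers (podmanTest.Podman,
  WaitWithDefaultTimeout, ALPINE) and the gomega gexec matcher; this is not
  the actual suite code:

      import (
          . "github.com/onsi/gomega"
          . "github.com/onsi/gomega/gexec" // supplies the Exit() matcher
      )

      It("podman run exits cleanly", func() {
          session := podmanTest.Podman([]string{"run", ALPINE, "true"})
          session.WaitWithDefaultTimeout()

          // Old style: raw integer comparison; a failure prints
          // "Expected 1 to equal 0" with no further context.
          // Expect(session.ExitCode()).To(Equal(0))

          // New style: the Exit() matcher reports the command and its
          // exit code in the failure message.
          Expect(session).Should(Exit(0))
      })

  Non-zero exits use the suite's ExitWithError() matcher instead:
  Expect(session).To(ExitWithError()).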
* bump go module to v3  (Valentin Rothberg, 2021-02-22)

  We missed bumping the go module, so let's do it now :)

   * Automated go code with github.com/sirkon/go-imports-rename
   * Manually via `vgrep podman/v2` the rest

  Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
* Ignore entrypoint=[""]  (Daniel J Walsh, 2021-02-17)

  We received an issue with an image that was built with entrypoint=[""].
  This blows up on Podman, but works on Docker. When we set up the OCI
  runtime, we should drop the entrypoint if it is == [""].

  https://github.com/containers/podman/issues/9377

  Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
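  As an illustration, a minimal sketch of the check described above, assuming
  the image entrypoint arrives as a []string (the function and variable names
  are made up for the example, not the actual Podman code):

      // dropEmptyEntrypoint clears an entrypoint recorded as [""] so the
      // container falls back to CMD, matching Docker's behavior.
      func dropEmptyEntrypoint(entrypoint []string) []string {
          if len(entrypoint) == 1 && entrypoint[0] == "" {
              return nil
          }
          return entrypoint
      }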
* Fix typo in tests  (Daniel J Walsh, 2020-12-01)

  Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* move from docker.io  (Ed Santiago, 2020-10-28)

  Follow-on to #7965 (mirror registry). mirror.gcr.io doesn't cache all
  the images we need, and I can't find a way to add to its cache, so
  let's just use quay.io for those images that it can't serve. Tools
  used:

      skopeo copy --all docker://docker.io/library/alpine:3.10.2 \
                        docker://quay.io/libpod/alpine:3.10.2

  ...and also:

      docker.io/library/alpine:3.2
      docker.io/library/busybox:latest
      docker.io/library/busybox:glibc
      docker.io/library/busybox:1.30.1
      docker.io/library/redis:alpine
      docker.io/libpod/alpine-with-bogus-seccomp:label
      docker.io/libpod/alpine-with-seccomp:label
      docker.io/libpod/alpine_healthcheck:latest
      docker.io/libpod/badhealthcheck:latest

  Since most of those were new quay.io/libpod images, they required going
  in through the quay.io GUI, image, settings, Make Public.

  Signed-off-by: Ed Santiago <santiago@redhat.com>
* Attempt to turn on some more remote tests  (Daniel J Walsh, 2020-10-07)

  Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* e2e tests: SkipIfRemote(): add a reason  (Ed Santiago, 2020-09-23)

  Now that Dan has added helpful comments to each SkipIfRemote, let's
  take the next step and include those messages in the Skip() output so
  someone viewing test results can easily see if a remote test is skipped
  for a real reason or for a FIXME.

  This commit is the result of a simple:

      perl -pi -e 's;(SkipIfRemote)\(\)(\s+//\s+(.*))?;$1("$3");' *.go

  in the test/e2e directory, with a few minor (manual) changes in
  wording.

  Signed-off-by: Ed Santiago <santiago@redhat.com>
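  As an illustration of what that one-liner does to a single call site (the
  comment text here is made up for the example):

      // before: the reason lives only in a source comment
      SkipIfRemote() // FIXME: does not work with the remote client yet

      // after: the reason travels into the Skip() output seen in test results
      SkipIfRemote("FIXME: does not work with the remote client yet")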
* Examine all SkipIfRemote functions  (Daniel J Walsh, 2020-09-22)

  Remove ones that are not needed.
  Document those that should be there.
  Document those that should be fixed.

  Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* Switch all references to github.com/containers/libpod -> podman  (Daniel J Walsh, 2020-07-28)

  Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* Turn on a bunch more remote tests  (Daniel J Walsh, 2020-07-22)

  We need to be more specific about the remote tests we turn off.

  Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* Fix handling of entrypoint  (Daniel J Walsh, 2020-07-14)

  If a user specifies an entrypoint of "", then we should not use the
  image's entrypoint.

  Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
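  As an illustration, a hedged sketch of the behavior being fixed, written in
  the style of the e2e tests in this file (the image, command, and expected
  output are made up for the example):

      It("podman run with --entrypoint \"\" ignores the image entrypoint", func() {
          // The image's own ENTRYPOINT must not run; only the CLI command should.
          session := podmanTest.Podman([]string{"run", "--entrypoint", "", ALPINE, "echo", "hello"})
          session.WaitWithDefaultTimeout()
          Expect(session).Should(Exit(0))
          Expect(session.OutputToString()).To(Equal("hello"))
      })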
* Change buildtag for remoteclient to remote for testing  (Daniel J Walsh, 2020-07-06)

  Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* move go module to v2  (Valentin Rothberg, 2020-07-06)

  With the advent of Podman 2.0.0 we crossed the magical barrier of go
  modules. While we were able to continue importing all packages inside
  of the project, the project could not be vendored anymore from the
  outside.

  Move the go module to new major version and change all imports to
  `github.com/containers/libpod/v2`. The renaming of the imports was done
  via `gomove` [1].

  [1] https://github.com/KSubedi/gomove

  Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
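  For concreteness, the rewrite changes import paths roughly like this (one
  package path picked as an example):

      // before
      import "github.com/containers/libpod/libpod"

      // after
      import "github.com/containers/libpod/v2/libpod"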
* test: enable entrypoint tests  (Giuseppe Scrivano, 2020-04-30)

  Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
* Force integration tests to pass  (Brent Baude, 2020-04-21)

  Failing tests are now skipped and we should work from this.

  Signed-off-by: Brent Baude <bbaude@redhat.com>
* use imagecaches for local tests  (baude, 2019-05-29)

  When doing localized tests (not varlink), we can use secondary image
  stores as read-only image caches. This cuts down on test time
  significantly because each test does not need to restore the images
  from a tarball anymore.

  Signed-off-by: baude <bbaude@redhat.com>
* ginkgo status improvements  (baude, 2019-03-08)

  A series of improvements to our ginkgo test framework so we can get a
  better idea of what's going on when run in CI.

  Signed-off-by: baude <bbaude@redhat.com>
* Run integration tests with remote-client  (baude, 2019-01-14)

  Add the ability to run the integration (ginkgo) suite using the remote
  client. Only the images_test.go file is run right now; all the rest are
  isolated with a // +build !remotelinux. As more content is developed
  for the remote client, we can unblock the files and just block single
  tests as needed.

  Signed-off-by: baude <bbaude@redhat.com>
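  As an illustration, the gating works through a standard Go build constraint
  at the top of each excluded file; a hedged sketch using the tag name quoted
  above (the package name is an assumption about the e2e suite):

      // +build !remotelinux

      package integration

      // Tests in this file compile only when the remote tag is not set.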
* Separate commonly used test functions and structs to test/utils  (Yiqiao Pu, 2018-11-16)

  Put commonly used test functions and structs into a separate package so
  we can use them for more test suites.

  Signed-off-by: Yiqiao Pu <ypu@redhat.com>
* Make podman ps fast  (baude, 2018-10-23)

  Like Ricky Bobby, we want to go fast.

  Signed-off-by: baude <bbaude@redhat.com>
* Show duration for each ginkgo test and test speed improvements  (baude, 2018-07-28)

  Because our tests are getting so long, we want to be able to audit
  which tests are taking the longest to complete. This may indicate a bad
  test, bad CI, bad code, etc, and therefore should be auditable.

  Also, make speed improvements to tests by making sure we only unpack
  cached images that actually get used.

  Signed-off-by: baude <bbaude@redhat.com>

  Closes: #1178
  Approved by: mheon
* Add --all,-a flag to podman images  (umohnani8, 2018-06-18)

  podman images will not show intermediate images by default. To view all
  images, including intermediate images created during a build, use the
  --all flag.

  Signed-off-by: umohnani8 <umohnani@redhat.com>

  Closes: #947
  Approved by: rhatdan
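  As an illustration, a hedged sketch in the style of the e2e tests of how the
  flag's behavior could be exercised (image contents and exact counts are not
  asserted; only the relative lengths of the two listings):

      It("podman images --all includes intermediate images", func() {
          all := podmanTest.Podman([]string{"images", "--all"})
          all.WaitWithDefaultTimeout()
          Expect(all).Should(Exit(0))

          defaults := podmanTest.Podman([]string{"images"})
          defaults.WaitWithDefaultTimeout()
          Expect(defaults).Should(Exit(0))

          // With --all, intermediate build layers are listed too, so the
          // listing should never be shorter than the default one.
          Expect(len(all.OutputToStringArray())).To(BeNumerically(">=", len(defaults.OutputToStringArray())))
      })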
* No entrypoint, cmd, or command  (baude, 2018-02-15)

  When an image does not have an ENTRYPOINT nor a CMD and the user does
  not provide a command on the CLI, we should fail gracefully.

  This resolves issue #328.

  Signed-off-by: baude <bbaude@redhat.com>

  Closes: #333
  Approved by: mheon
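  As an illustration, a minimal sketch of the graceful check described above
  (the names are made up for the example, not the actual Podman code):

      import "fmt"

      // validateCommand fails cleanly when there is nothing to run: no CLI
      // command, no image CMD, and no image ENTRYPOINT.
      func validateCommand(cliArgs, imageCmd, imageEntrypoint []string) error {
          if len(cliArgs) == 0 && len(imageCmd) == 0 && len(imageEntrypoint) == 0 {
              return fmt.Errorf("no command or entrypoint provided, and no CMD or ENTRYPOINT in image")
          }
          return nil
      }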
* Honor ENTRYPOINT in image  (baude, 2018-02-11)

  When an image has an ENTRYPOINT defined, we should be honoring it. The
  problem is described in issue #321.

  Also added the buildah binary to the test runtimes for testing
  entrypoint; this will also allow us to test podman build.

  Signed-off-by: baude <bbaude@redhat.com>

  Closes: #322
  Approved by: rhatdan
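  As an illustration, a minimal sketch of the composition rule being honored,
  i.e. how ENTRYPOINT, the image CMD, and a user-supplied command combine into
  the process that actually runs (helper name made up for the example):

      // buildCommand composes the container's argv: ENTRYPOINT always comes
      // first; the user-supplied command replaces the image CMD and is
      // appended after it.
      func buildCommand(entrypoint, imageCmd, userCmd []string) []string {
          cmd := imageCmd
          if len(userCmd) > 0 {
              cmd = userCmd
          }
          return append(append([]string{}, entrypoint...), cmd...)
      }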