path: root/test/e2e/wait_test.go
Commit message / Author / Date
* enable errcheck linter (Paul Holzinger, 2022-04-29)
    The errcheck linter makes sure that errors are always checked and not
    ignored by accident. It spotted a lot of unchecked errors, mostly in
    the tests, but also some real problems in the code.

    Signed-off-by: Paul Holzinger <pholzing@redhat.com>
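For illustration, a minimal, self-contained sketch of the kind of change errcheck forces; this is not code from the commit, and the file name and cleanup call are made up:

        package main

        import (
                "log"
                "os"
        )

        func main() {
                // Unchecked call: errcheck flags this because the returned
                // error is silently dropped.
                //     os.Remove("/tmp/scratch-file")

                // Checked call: the error is handled (or, in a test, asserted
                // on), so failures are no longer ignored by accident.
                if err := os.Remove("/tmp/scratch-file"); err != nil && !os.IsNotExist(err) {
                        log.Printf("cleanup failed: %v", err)
                }
        }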
* bump go module to version 4 (Valentin Rothberg, 2022-01-18)
    Automated for .go files via gomove [1]:
        `gomove github.com/containers/podman/v3 github.com/containers/podman/v4`

    Remaining files via vgrep [2]:
        `vgrep github.com/containers/podman/v3`

    [1] https://github.com/KSubedi/gomove
    [2] https://github.com/vrothberg/vgrep

    Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
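The practical effect on a test file is a mechanical rewrite of the import path's major-version suffix. A hedged sketch follows; the blank-identifier import is only there to keep the fragment compilable, and the path shown is the shared test/utils package these e2e tests import:

        package integration

        import (
                // Before the bump this read github.com/containers/podman/v3/test/utils;
                // gomove rewrites the /v3 suffix to /v4 across all .go files.
                _ "github.com/containers/podman/v4/test/utils"
        )

The module line in go.mod changes the same way, to `module github.com/containers/podman/v4`, which is what makes the new suffix mandatory for importers under Go's semantic import versioning.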
* e2e tests: use Should(Exit()) and ExitWithError() (Ed Santiago, 2021-07-15)
    e2e test failures are rife with messages like:

        Expected 1 to equal 0

    These make me cry. They're anti-helpful, requiring the reader to dive
    into the source code to figure out what those numbers mean.

    Solution: Go tests have a '.Should(Exit(NNN))' mechanism. I don't know
    if it spits out a better diagnostic (I have no way to run e2e tests on
    my laptop), but I have to fantasize that it will, and given the state
    of our flakes I assume that at least one test will fail and give me
    the opportunity to see what the error message looks like.

    THIS IS NOT REVIEWABLE CODE. There is no way for a human to review it.
    Don't bother. Maybe look at a few random ones for sanity. If you want
    to really review, here is a reproducer of what I did:

        cd test/e2e

        ! The following commands handle the simple case of
        ! positive assertions. The second is the same as the first,
        ! with the addition of (unnecessary) parentheses because
        ! some invocations were written that way. The third is BeZero().
        perl -pi -e 's/Expect\((\S+)\.ExitCode\(\)\)\.To\(Equal\((\d+)\)\)/Expect($1).Should(Exit($2))/' *_test.go
        perl -pi -e 's/Expect\((\S+)\.ExitCode\(\)\)\.To\(\(Equal\((\d+)\)\)\)/Expect($1).Should(Exit($2))/' *_test.go
        perl -pi -e 's/Expect\((\S+)\.ExitCode\(\)\)\.To\(BeZero\(\)\)/Expect($1).Should(Exit(0))/' *_test.go

        ! Same as above, but handles three non-numeric exit codes
        ! in run_exit_test.go
        perl -pi -e 's/Expect\((\S+)\.ExitCode\(\)\)\.To\(Equal\((\S+)\)\)/Expect($1).Should(Exit($2))/' *_test.go

        ! negative assertions. Difference is the spelling of 'To(Not)',
        ! 'ToNot', and 'NotTo'. I assume those are all the same.
        perl -pi -e 's/Expect\((\S+)\.ExitCode\(\)\)\.To\(Not\(Equal\((0)\)\)\)/Expect($1).To(ExitWithError())/' *_test.go
        perl -pi -e 's/Expect\((\S+)\.ExitCode\(\)\)\.ToNot\(Equal\((0)\)\)/Expect($1).To(ExitWithError())/' *_test.go
        perl -pi -e 's/Expect\((\S+)\.ExitCode\(\)\)\.NotTo\(Equal\((0)\)\)/Expect($1).To(ExitWithError())/' *_test.go

        ! negative, old use of BeZero()
        perl -pi -e 's/Expect\((\S+)\.ExitCode\(\)\)\.ToNot\(BeZero\(\)\)/Expect($1).Should(ExitWithError())/' *_test.go

    Run those on a clean copy of main branch (at the same branch point as
    my PR, of course), then diff against a checked-out copy of my PR.
    There should be no differences. Then all you have to review is that my
    replacements above are sane.

    UPDATE: nope, that's not enough, you also need to add gomega/gexec to
    the files that don't have it:

        perl -pi -e '$_ .= "$1/gexec\"\n" if m!^(.*/onsi/gomega)"!' $(grep -L gomega/gexec $(git log -1 --stat | awk '$1 ~ /test\/e2e\// { print $1}'))

    UPDATE 2: hand-edit run_volume_test.go

    UPDATE 3: sigh, add WaitWithDefaultTimeout() to a couple of places

    UPDATE 4: skip a test due to bug #10935 (race condition)

    Signed-off-by: Ed Santiago <santiago@redhat.com>
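For readers unfamiliar with the gexec matcher, here is a minimal, self-contained sketch of the before/after assertion styles using the Ginkgo v1 / Gomega imports the suite used at the time. This is not code from the commit: the spec name and the `true` command are made up, and podman's ExitWithError() matcher for the nonzero case lives in test/utils rather than in gexec.

        package e2e

        import (
                "os/exec"
                "testing"

                . "github.com/onsi/ginkgo"
                . "github.com/onsi/gomega"
                . "github.com/onsi/gomega/gexec"
        )

        func TestExitMatcherSketch(t *testing.T) {
                RegisterFailHandler(Fail)
                RunSpecs(t, "exit-code matcher sketch")
        }

        var _ = It("reports exit codes readably", func() {
                session, err := Start(exec.Command("true"), GinkgoWriter, GinkgoWriter)
                Expect(err).ToNot(HaveOccurred())
                session.Wait()

                // Old style: on failure Gomega prints "Expected 1 to equal 0".
                //     Expect(session.ExitCode()).To(Equal(0))

                // New style: the gexec matcher reports the command and its
                // actual exit code, which is far easier to debug.
                Expect(session).Should(Exit(0))
        })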
* bump go module to v3 (Valentin Rothberg, 2021-02-22)
    We missed bumping the go module, so let's do it now :)

    * Automated go code with github.com/sirkon/go-imports-rename
    * Manually via `vgrep podman/v2` the rest

    Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
* Increase timeouts in some tests (Matej Vasek, 2021-02-03)
    Signed-off-by: Matej Vasek <mvasek@redhat.com>
* Fix handling and documentation of podman wait --interval (Daniel J Walsh, 2020-10-21)
    In older versions of podman, we supported decimal numbers defaulting
    to microseconds. This PR fixes the flag so that users can continue to
    specify only digits. Also cleaned up the documentation to fully
    describe the accepted input for the --interval flag.

    Finally, improved testing of podman wait to actually make sure the
    command succeeded, and fixed the tests to work with podman-remote.

    Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
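The behavior being preserved, digits-only values falling back to a default unit while suffixed values are parsed as durations, can be sketched like this. It is an illustration only, not podman's actual parsing code, and the helper name is made up:

        package main

        import (
                "fmt"
                "strconv"
                "time"
        )

        // parseInterval is a hypothetical helper: bare digits get a default
        // unit, anything else goes through time.ParseDuration.
        func parseInterval(value string, defaultUnit time.Duration) (time.Duration, error) {
                if n, err := strconv.Atoi(value); err == nil {
                        return time.Duration(n) * defaultUnit, nil
                }
                return time.ParseDuration(value)
        }

        func main() {
                d, _ := parseInterval("250", time.Microsecond) // digits only: default unit applies
                fmt.Println(d)                                 // 250µs
                d, _ = parseInterval("2s", time.Microsecond)   // explicit unit wins
                fmt.Println(d)                                 // 2s
        }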
* podman wait accept args > 1 (Paul Holzinger, 2020-09-15)
    Signed-off-by: Paul Holzinger <paul.holzinger@web.de>
* Switch all references to github.com/containers/libpod -> podman (Daniel J Walsh, 2020-07-28)
    Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* move go module to v2 (Valentin Rothberg, 2020-07-06)
    With the advent of Podman 2.0.0 we crossed the magical barrier of go
    modules. While we were able to continue importing all packages inside
    of the project, the project could not be vendored anymore from the
    outside.

    Move the go module to the new major version and change all imports to
    `github.com/containers/libpod/v2`. The renaming of the imports was
    done via `gomove` [1].

    [1] https://github.com/KSubedi/gomove

    Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
* use imagecaches for local tests (baude, 2019-05-29)
    When doing localized tests (not varlink), we can use secondary image
    stores as read-only image caches. This cuts down on test time
    significantly because each test does not need to restore the images
    from a tarball anymore.

    Signed-off-by: baude <bbaude@redhat.com>
* enable integration tests for remote-client (baude, 2019-05-07)
    First pass at enabling a swath of integration tests for the
    remote-client.

    Signed-off-by: baude <bbaude@redhat.com>
* ginkgo status improvements (baude, 2019-03-08)
    A series of improvements to our ginkgo test framework so we can get a
    better idea of what's going on when running in CI.

    Signed-off-by: baude <bbaude@redhat.com>
* Add tests to make sure podman container and podman image commands work (Daniel J Walsh, 2019-03-02)
    We have little to no testing to make sure we don't break podman image
    and podman container commands that wrap traditional commands. This PR
    adds tests for each of the commands.

    Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* Run integration tests with remote-client (baude, 2019-01-14)
    Add the ability to run the integration (ginkgo) suite using the remote
    client. Only the images_test.go file is run right now; all the rest
    are isolated with a // +build !remotelinux. As more content is
    developed for the remote client, we can unblock the files and just
    block single tests as needed.

    Signed-off-by: baude <bbaude@redhat.com>
* Separate commonly used test functions and structs to test/utils (Yiqiao Pu, 2018-11-16)
    Put commonly used test functions and structs into a separate package
    so we can use them for more test suites.

    Signed-off-by: Yiqiao Pu <ypu@redhat.com>
* Speed up test results (baude, 2018-07-30)
    Stop all containers with a zero timeout prior to trying to rm -fa.
    This results in quicker teardown times by not waiting for timeouts.

    Also, with wait tests, no need to wait the full 10 second sleep.
    1 will do.

    Signed-off-by: baude <bbaude@redhat.com>
    Closes: #1181
    Approved by: rhatdan
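A hedged sketch of the teardown ordering described here, written as a standalone Go helper rather than the suite's actual cleanup code; it only assumes a podman binary on $PATH:

        package main

        import (
                "log"
                "os/exec"
        )

        // cleanup mirrors the ordering above: stop everything with a
        // zero-second timeout first, then force-remove, so rm never has to
        // sit out a stop timeout itself. Illustration only, not podman's
        // test code.
        func cleanup() {
                for _, args := range [][]string{
                        {"stop", "--all", "--time", "0"},
                        {"rm", "--all", "--force"},
                } {
                        if out, err := exec.Command("podman", args...).CombinedOutput(); err != nil {
                                log.Printf("podman %v: %v (%s)", args, err, out)
                        }
                }
        }

        func main() { cleanup() }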
* Show duration for each ginkgo test and test speed improvements (baude, 2018-07-28)
    Because our tests are getting so long, we want to be able to audit
    which tests are taking the longest to complete. This may indicate a
    bad test, bad CI, bad code, etc., and therefore should be auditable.

    Also, make speed improvements to tests by making sure we only unpack
    cached images that actually get used.

    Signed-off-by: baude <bbaude@redhat.com>
    Closes: #1178
    Approved by: mheon
* Initial ginkgo work (baude, 2018-01-29)
    This implements the ginkgo integration test framework for podman. As
    tests are migrated from bats to ginkgo, we will still run both
    integration suites. When a test is migrated, we remove the tests from
    bats at that time. All new tests should be written just for the ginkgo
    framework.

    One exception is that we only run the ginkgo suite in the
    travis/ubuntu environment. The CentOS and Fedora PAPR nodes will more
    than cover those.

    Signed-off-by: baude <bbaude@redhat.com>
    Closes: #261
    Approved by: baude