path: root/test/e2e/toolbox_test.go
* bump go module to version 4 (Valentin Rothberg, 2022-01-18)

Automated for .go files via gomove [1]:
    `gomove github.com/containers/podman/v3 github.com/containers/podman/v4`

Remaining files via vgrep [2]:
    `vgrep github.com/containers/podman/v3`

[1] https://github.com/KSubedi/gomove
[2] https://github.com/vrothberg/vgrep

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
* Standardize on capitalized Cgroups (Daniel J Walsh, 2022-01-14)

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* e2e tests: enable golint (Ed Santiago, 2021-11-29)

...and fix problems found therewith.

Signed-off-by: Ed Santiago <santiago@redhat.com>
* e2e tests: use Should(Exit()) and ExitWithError() (Ed Santiago, 2021-07-15)

e2e test failures are rife with messages like:

    Expected 1 to equal 0

These make me cry. They're anti-helpful, requiring the reader to dive into
the source code to figure out what those numbers mean.

Solution: Go tests have a '.Should(Exit(NNN))' mechanism. I don't know if it
spits out a better diagnostic (I have no way to run e2e tests on my laptop),
but I have to fantasize that it will, and given the state of our flakes I
assume that at least one test will fail and give me the opportunity to see
what the error message looks like.

THIS IS NOT REVIEWABLE CODE. There is no way for a human to review it. Don't
bother. Maybe look at a few random ones for sanity. If you want to really
review, here is a reproducer of what I did:

    cd test/e2e

    ! positive assertions. The second is the same as the first,
    ! with the addition of (unnecessary) parentheses because
    ! some invocations were written that way. The third is BeZero().
    perl -pi -e 's/Expect\((\S+)\.ExitCode\(\)\)\.To\(Equal\((\d+)\)\)/Expect($1).Should(Exit($2))/' *_test.go
    perl -pi -e 's/Expect\((\S+)\.ExitCode\(\)\)\.To\(\(Equal\((\d+)\)\)\)/Expect($1).Should(Exit($2))/' *_test.go
    perl -pi -e 's/Expect\((\S+)\.ExitCode\(\)\)\.To\(BeZero\(\)\)/Expect($1).Should(Exit(0))/' *_test.go

    ! Same as above, but handles three non-numeric exit codes
    ! in run_exit_test.go
    perl -pi -e 's/Expect\((\S+)\.ExitCode\(\)\)\.To\(Equal\((\S+)\)\)/Expect($1).Should(Exit($2))/' *_test.go

    ! negative assertions. Difference is the spelling of 'To(Not)',
    ! 'ToNot', and 'NotTo'. I assume those are all the same.
    perl -pi -e 's/Expect\((\S+)\.ExitCode\(\)\)\.To\(Not\(Equal\((0)\)\)\)/Expect($1).To(ExitWithError())/' *_test.go
    perl -pi -e 's/Expect\((\S+)\.ExitCode\(\)\)\.ToNot\(Equal\((0)\)\)/Expect($1).To(ExitWithError())/' *_test.go
    perl -pi -e 's/Expect\((\S+)\.ExitCode\(\)\)\.NotTo\(Equal\((0)\)\)/Expect($1).To(ExitWithError())/' *_test.go

    ! negative, old use of BeZero()
    perl -pi -e 's/Expect\((\S+)\.ExitCode\(\)\)\.ToNot\(BeZero\(\)\)/Expect($1).Should(ExitWithError())/' *_test.go

Run those on a clean copy of main branch (at the same branch point as my PR,
of course), then diff against a checked-out copy of my PR. There should be
no differences. Then all you have to review is that my replacements above
are sane.

UPDATE: nope, that's not enough, you also need to add gomega/gexec to the
files that don't have it:

    perl -pi -e '$_ .= "$1/gexec\"\n" if m!^(.*/onsi/gomega)"!' $(grep -L gomega/gexec $(git log -1 --stat | awk '$1 ~ /test\/e2e\// { print $1}'))

UPDATE 2: hand-edit run_volume_test.go

UPDATE 3: sigh, add WaitWithDefaultTimeout() to a couple of places

UPDATE 4: skip a test due to bug #10935 (race condition)

Signed-off-by: Ed Santiago <santiago@redhat.com>
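For illustration, a minimal, self-contained sketch of the assertion style the
rewrite targets. It uses Gomega's gexec matcher directly; ExitWithError() is
Podman's own test-utils matcher and is only referenced in a comment. The
package and test names here are hypothetical, not part of the Podman tree:

    package example_test

    import (
        "os/exec"
        "testing"

        . "github.com/onsi/gomega"
        . "github.com/onsi/gomega/gexec"
    )

    // TestExitMatcher shows the style the conversion moves to: on failure,
    // the gexec.Exit matcher reports the observed exit code instead of the
    // bare "Expected 1 to equal 0" produced by comparing raw integers.
    func TestExitMatcher(t *testing.T) {
        RegisterTestingT(t)

        session, err := Start(exec.Command("true"), nil, nil) // assumes "true" is in $PATH
        Expect(err).ToNot(HaveOccurred())
        session.Wait() // let the command finish before asserting on its exit code

        // Old style (unhelpful on failure): Expect(session.ExitCode()).To(Equal(0))
        Expect(session).Should(Exit(0))

        // Non-zero exits are asserted with Podman's ExitWithError() matcher,
        // e.g. Expect(session).To(ExitWithError()).
    }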
* podman image tree: restore previous behavior (Valentin Rothberg, 2021-05-12)

The initial version of libimage changed the order of layers, which has now
been restored to remain backwards compatible.

Further changes:

* Fix a bug in the journald logging which requires stripping trailing
  newlines from the message. The system tests did not pass due to empty new
  lines. Triggered by changing the default logger to journald in
  containers/common.

* Fix another bug in the journald logging which embedded the container ID
  inside the message rather than the specified field. That surfaced as a
  preceding whitespace in each log line which broke the system tests.

* Alter the system tests to make sure that the k8s-file and the journald
  logging drivers are executed.

* A number of e2e tests have been changed to force the k8s-file driver to
  make them pass when running inside a root container.

* Increase the timeout in a kill test which seems to take longer now.
  Reasons are unknown; tests passed earlier and no signal-related changes
  happened. It may be CI VM flakiness, since other system tests flaked as
  well.

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
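As a rough illustration of the two journald fixes, a minimal sketch using the
go-systemd journal bindings; the function and the CONTAINER_ID_FULL field
name are assumptions for the example, not Podman's actual logging code:

    package main

    import (
        "strings"

        "github.com/coreos/go-systemd/v22/journal"
    )

    // writeContainerLog strips the trailing newline from the message (so no
    // empty lines end up in the journal) and carries the container ID in a
    // dedicated journal field instead of prepending it to the message text.
    func writeContainerLog(containerID, line string) error {
        msg := strings.TrimRight(line, "\n")
        return journal.Send(msg, journal.PriInfo, map[string]string{
            "CONTAINER_ID_FULL": containerID, // assumed field name for the example
        })
    }

    func main() {
        _ = writeContainerLog("deadbeef", "hello from the container\n")
    }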
* speed up CI handling of images (baude, 2021-04-07)

Now that CI uses cached images, putting the large toolbox image into cache
should help speed up tests.

Signed-off-by: baude <bbaude@redhat.com>
* bump go module to v3 (Valentin Rothberg, 2021-02-22)

We missed bumping the go module, so let's do it now :)

* Automated go code with github.com/sirkon/go-imports-rename
* Manually via `vgrep podman/v2` the rest

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
* Disable incompatible rootless + CGroupsV1 tests (Chris Evich, 2020-12-15)

These tests simply will not work under these conditions.

Note: Recently updated F32 (prior-fedora) and Ubuntu 20.04 (prior-ubuntu)
VMs always use CGroupsV1 with runc. F33 and Ubuntu 20.10 were updated to
always use CGroupsV2 with crun.

Signed-off-by: Chris Evich <cevich@redhat.com>
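A standalone sketch of the kind of guard such tests rely on, using plain
`testing` rather than Podman's Ginkgo helpers; the cgroup probe and the test
name are assumptions for the example:

    package example_test

    import (
        "os"
        "testing"
    )

    // cgroupsV2 probes for the unified cgroup v2 hierarchy; the presence of
    // cgroup.controllers at the cgroup root is the usual indicator.
    func cgroupsV2() bool {
        _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers")
        return err == nil
    }

    // TestNeedsRootlessCgroupsV2 skips itself in the unsupported combination
    // described above: running rootless on a CGroupsV1 host.
    func TestNeedsRootlessCgroupsV2(t *testing.T) {
        if os.Geteuid() != 0 && !cgroupsV2() {
            t.Skip("rootless + CGroupsV1 is not supported; skipping")
        }
        // ... test body that requires rootless + CGroupsV2 would go here ...
    }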
* podman logs honor stderr correctly (Paul Holzinger, 2020-12-10)

Make the ContainerLogsOptions support two io.Writers, one for stdout and the
other for stderr. The log line already includes the information about which
Writer it has to be written to.

Signed-off-by: Paul Holzinger <paul.holzinger@web.de>
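A simplified, self-contained sketch of the idea; the LogLine and
ContainerLogsOptions shown here, and their field names, are stand-ins for
the real Podman types, which carry more fields:

    package main

    import (
        "fmt"
        "io"
        "os"
    )

    // LogLine records which stream a line was captured from.
    type LogLine struct {
        Device string // "stdout" or "stderr"
        Msg    string
    }

    // ContainerLogsOptions carries one writer per stream.
    type ContainerLogsOptions struct {
        StdoutWriter io.Writer
        StderrWriter io.Writer
    }

    // Write routes a log line to the writer matching the stream it was
    // originally produced on, which is the behaviour the fix introduces.
    func (o ContainerLogsOptions) Write(l LogLine) error {
        w := o.StdoutWriter
        if l.Device == "stderr" {
            w = o.StderrWriter
        }
        _, err := fmt.Fprintln(w, l.Msg)
        return err
    }

    func main() {
        opts := ContainerLogsOptions{StdoutWriter: os.Stdout, StderrWriter: os.Stderr}
        _ = opts.Write(LogLine{Device: "stdout", Msg: "hello from stdout"})
        _ = opts.Write(LogLine{Device: "stderr", Msg: "hello from stderr"})
    }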
* use lookaside storage for remote tests (baude, 2020-11-16)

In an effort to speed up the remote testing, we should be using lookaside
storage to avoid pulling images as well as importing multiple images into
the RW store.

One test was removed and added to the system tests by Ed in #8325.

Signed-off-by: baude <bbaude@redhat.com>
* Fix issues found with codespell (Daniel J Walsh, 2020-11-12)

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* Test $HOME when its parent is bind mounted with --userns=keep-id (Debarshi Ray, 2020-11-03)

When --userns=keep-id is used, Podman is supposed to set up the home
directory of the user inside the container to match that on the host, as
long as the home directory or any of its parents are marked as volumes to
be bind mounted into the container.

Currently, the test only considers the case where the home directory itself
is bind mounted into the container. It doesn't cover the Podman code that
walks through all the bind mounts looking for ancestors in case the home
directory itself wasn't specified as a bind mount.

Therefore, this improves the existing test added in commit 6ca8067956128585
("Setup HOME environment when using --userns=keep-id").

Note that this test can't be run as root. The home directory of the root
user is /root, and its parent is /. Bind mounting the entire / from the
host into the container prevents it from starting:

    Error: openat2 ``: No such file or directory: OCI not found

Signed-off-by: Debarshi Ray <rishi@fedoraproject.org>
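A rough standalone approximation of the scenario this test exercises (not the
actual Ginkgo test); the alpine image and the shell invocation are
illustrative choices:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        home, _ := os.UserHomeDir()
        parent := filepath.Dir(home) // bind mount the parent, not $HOME itself

        out, err := exec.Command("podman", "run", "--rm",
            "--userns=keep-id",
            "-v", parent+":"+parent,
            "alpine", "sh", "-c", "echo $HOME").Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, "podman run failed:", err)
            os.Exit(1)
        }

        // With an ancestor of $HOME mounted, keep-id should point HOME at
        // the host location rather than falling back to a default.
        if strings.TrimSpace(string(out)) == home {
            fmt.Println("HOME matches the host home directory")
        } else {
            fmt.Println("HOME was not preserved:", strings.TrimSpace(string(out)))
        }
    }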
* Tests: Fix common flakes, and improve apiv2 test log (Ed Santiago, 2020-10-20)

- apiv2 - the 'ten /info requests' test is flaking often, taking ~8 seconds
  (our limit is 7, up from 5 a few weeks ago). Brent suggested that the
  first /info call might be expensive, because it needs to access storage.
  So, let's prime it by running one /info outside the timing loop. And,
  because even that continues to fail, bump it up to 10 seconds and file
  #8076 to track the slowdown.

- toolbox test - WaitForReady() has timed out, even on one occasion causing
  a run failure because it failed 3 times. Solution: bump up timeout from
  2s to 5s. Not really great, but CI systems are underpowered, and it's not
  unreasonable that 2s might be too low.

- sdnotify test - add a 'podman wait' between stop & rm. This may prevent a
  "cannot rm container as it is running" race condition.

While working on this, Brent and I noticed a few ways that test-apiv2
logging can be improved:

- test name: when request is POST, display the jsonified parameters, not
  the original input ones. This should make it much easier to reproduce
  failures.

- use curl's "--write-out" option to capture http code, content type, and
  request time. We were getting the first two via grep from logged headers;
  this is cleaner. And there was no other way to get timing. We now include
  the timing as X-Response-Time in the log file.

- abort on *any* curl error, not just 7 (cannot connect). Any error at all
  from curl is bad news.

Signed-off-by: Ed Santiago <santiago@redhat.com>
* Setup HOME environment when using --userns=keep-id (Daniel J Walsh, 2020-10-14)

Currently the HOME environment is set to /root if the user does not
override it.

Also walk the parent directories of the user's homedir to see if it is
volume mounted into the container, and if so, set HOME correctly.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
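A simplified sketch of that parent-directory walk (not Podman's actual
implementation); the function name and the flat list of mount sources are
assumptions for the example:

    package main

    import (
        "fmt"
        "path/filepath"
    )

    // homeUnderMounts reports whether the home directory, or any of its
    // parent directories, is the source of a bind mount; in that case HOME
    // inside the container can keep the host path.
    func homeUnderMounts(home string, mountSources []string) bool {
        mounted := make(map[string]bool, len(mountSources))
        for _, src := range mountSources {
            mounted[filepath.Clean(src)] = true
        }
        for dir := filepath.Clean(home); ; dir = filepath.Dir(dir) {
            if mounted[dir] {
                return true
            }
            if dir == "/" {
                return false
            }
        }
    }

    func main() {
        // The parent of /home/user is bind mounted, so HOME can be preserved.
        fmt.Println(homeUnderMounts("/home/user", []string{"/home"})) // true
        fmt.Println(homeUnderMounts("/home/user", []string{"/srv"}))  // false
    }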
* tests/e2e: Add Toolbox-specific test cases (Ondřej Míchal, 2020-10-09)

In the past, Toolbox[0] has been affected by several of Podman's
bugs/changes of behaviour. This is one of the steps to assure that as
Podman progresses, Podman itself and subsequently Toolbox do not regress.
One of the other steps is including Toolbox's system tests in Podman's
gating systems (which and to what extent is yet to be decided on).

The tests are trying to stress parts of Podman that Toolbox needs for its
functionality: permission to handle some system files, correct
values/permissions/limits in certain parts, management of users and groups,
mounting of paths, ... The list is most likely longer and therefore more
commits will be needed to control every aspect of the Toolbox/Podman
relationship :).

Some test cases in test/e2e/toolbox_test.go rely on some tools being
present in the base image[1]. That is not the case with the common ALPINE
image or the basic Fedora image.

Some tests might be duplicates of already existing tests. I'm more in
favour of having those duplicates. Thanks to that it will be clear what
functionality/behaviour Toolbox requires.

[0] https://github.com/containers/toolbox
[1] https://github.com/containers/toolbox/#image-requirements

Signed-off-by: Ondřej Míchal <harrymichal@seznam.cz>