| Commit message | Author | Age |
| |
Commit 2 of 2: there were (and may still be) a bunch of string
checks that didn't have a corresponding Expect(). IIUC that
means they were NOPs. Try to identify and fix those.
The first few were caught by Go linting ("ok is defined
but not used"). Once I realized the problem, I looked for
more using:
    $ ack -A2 LineInOutputStartsWith
...and tediously eyeballed the results, looking for matches
in which the next line was not an Expect(). Where the test
itself was wrong (e.g. "server" should've been "nameserver"),
fix that too.
Also: remove the remove-betrue script. We don't need it in
the repo; I just wanted to preserve it for posterity.
Signed-off-by: Ed Santiago <santiago@redhat.com>
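To illustrate the class of bug being fixed, a minimal sketch (the
session variable is illustrative; "nameserver" is the example string
mentioned above):

    // Before: the boolean result is silently discarded, so this check
    // can never fail -- it is a NOP.
    session.LineInOutputStartsWith("nameserver")

    // After: wrap the result in an assertion so a mismatch fails the test.
    Expect(session.LineInOutputStartsWith("nameserver")).To(BeTrue())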
| |
Many ginkgo tests have been written to use this evil form:
    GrepString("foo")
    Expect(that).To(BeTrue())
...which yields horrible, useless messages on failure:
    false is not true
Identify those (automatically, via script) and convert to:
    Expect(output).To(ContainSubstring("foo"))
...which yields:
    "this output" does not contain substring "foo"
There are still many BeTrue()s left; this is just a start.
This is commit 1 of 2. It includes the script I used, and
all changes to *.go are those computed by the script.
Commit 2 will apply some manual fixes.
Signed-off-by: Ed Santiago <santiago@redhat.com>
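Concretely, the conversion looks roughly like this (a sketch; the
session variable is illustrative, and GrepString/OutputToString are the
test-utility helpers assumed here):

    // Old form: on failure Gomega can only say "false is not true".
    match, _ := session.GrepString("foo")
    Expect(match).To(BeTrue())

    // New form: on failure Gomega prints the actual output along with
    // the substring it failed to find.
    Expect(session.OutputToString()).To(ContainSubstring("foo"))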
| |
e2e test failures are rife with messages like:
Expected 1 to equal 0
These make me cry. They're anti-helpful, requiring the reader
to dive into the source code to figure out what those numbers
mean.
Solution: Gomega's gexec package has a '.Should(Exit(NNN))'
matcher. I don't know if it spits out a better diagnostic (I
have no way to run e2e tests on my laptop), but I have to
fantasize that it will, and given the state of our flakes I
assume that at least one test will fail and give me the
opportunity to see what the error message looks like.
THIS IS NOT REVIEWABLE CODE. There is no way for a human
to review it. Don't bother. Maybe look at a few random
ones for sanity. If you want to really review, here is
a reproducer of what I did:
cd test/e2e
! The first three commands convert positive assertions. The
! second is the same as the first, plus (unnecessary)
! parentheses, because some invocations were written that way.
! The third handles BeZero().
perl -pi -e 's/Expect\((\S+)\.ExitCode\(\)\)\.To\(Equal\((\d+)\)\)/Expect($1).Should(Exit($2))/' *_test.go
perl -pi -e 's/Expect\((\S+)\.ExitCode\(\)\)\.To\(\(Equal\((\d+)\)\)\)/Expect($1).Should(Exit($2))/' *_test.go
perl -pi -e 's/Expect\((\S+)\.ExitCode\(\)\)\.To\(BeZero\(\)\)/Expect($1).Should(Exit(0))/' *_test.go
! Same as above, but handles three non-numeric exit codes
! in run_exit_test.go
perl -pi -e 's/Expect\((\S+)\.ExitCode\(\)\)\.To\(Equal\((\S+)\)\)/Expect($1).Should(Exit($2))/' *_test.go
! negative assertions. Difference is the spelling of 'To(Not)',
! 'ToNot', and 'NotTo'. I assume those are all the same.
perl -pi -e 's/Expect\((\S+)\.ExitCode\(\)\)\.To\(Not\(Equal\((0)\)\)\)/Expect($1).To(ExitWithError())/' *_test.go
perl -pi -e 's/Expect\((\S+)\.ExitCode\(\)\)\.ToNot\(Equal\((0)\)\)/Expect($1).To(ExitWithError())/' *_test.go
perl -pi -e 's/Expect\((\S+)\.ExitCode\(\)\)\.NotTo\(Equal\((0)\)\)/Expect($1).To(ExitWithError())/' *_test.go
! negative, old use of BeZero()
perl -pi -e 's/Expect\((\S+)\.ExitCode\(\)\)\.ToNot\(BeZero\(\)\)/Expect($1).Should(ExitWithError())/' *_test.go
Run those on a clean copy of main branch (at the same branch
point as my PR, of course), then diff against a checked-out
copy of my PR. There should be no differences. Then all you
have to review is that my replacements above are sane.
UPDATE: nope, that's not enough, you also need to add gomega/gexec
to the files that don't have it:
perl -pi -e '$_ .= "$1/gexec\"\n" if m!^(.*/onsi/gomega)"!' $(grep -L gomega/gexec $(git log -1 --stat | awk '$1 ~ /test\/e2e\// { print $1}'))
UPDATE 2: hand-edit run_volume_test.go
UPDATE 3: sigh, add WaitWithDefaultTimeout() to a couple of places
UPDATE 4: skip a test due to bug #10935 (race condition)
Signed-off-by: Ed Santiago <santiago@redhat.com>
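For reference, a sketch of the shape these substitutions produce (the
session variable is illustrative; Exit comes from gomega/gexec, and
ExitWithError is assumed to be the matcher from our test utilities):

    import . "github.com/onsi/gomega/gexec"

    // Before: failure reads "Expected 1 to equal 0".
    Expect(session.ExitCode()).To(Equal(0))

    // After: the Exit matcher can report the session and its exit code.
    Expect(session).Should(Exit(0))

    // Negative case: expect any failing (non-zero) exit.
    Expect(session).To(ExitWithError())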
| |
We missed bumping the go module, so let's do it now :)
* Go code updated automatically with github.com/sirkon/go-imports-rename
* The rest updated manually, found via `vgrep podman/v2`
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
| |
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
| |
Remove ones that are not needed.
Document those that should be there.
Document those that should be fixed.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
| |
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
| |
We need to be more specific about the remote tests we turn off.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
| |
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
| |
With the advent of Podman 2.0.0 we crossed the magical barrier of go
modules. While we were able to continue importing all packages inside
of the project, the project could not be vendored anymore from the
outside.
Move the go module to a new major version and change all imports to
`github.com/containers/libpod/v2`. The renaming of the imports
was done via `gomove` [1].
[1] https://github.com/KSubedi/gomove
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
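As a sketch, the mechanical effect on the module and on import paths
(using the libpod package as an example):

    // go.mod
    module github.com/containers/libpod/v2

    // before
    import "github.com/containers/libpod/libpod"

    // after
    import "github.com/containers/libpod/v2/libpod"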
| |
Add the port subcommand for containers.
Update the no-args check for when the all flag is set.
Signed-off-by: Sujil02 <sushah@redhat.com>
| |
Failing tests are now skipped; we should work forward from this.
Signed-off-by: Brent Baude <bbaude@redhat.com>
| |
Rather than checking for non-zero, we need to check for >0 to
distinguish between timeouts and error exit codes.
Signed-off-by: Jhon Honce <jhonce@redhat.com>
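A minimal sketch of the distinction (assuming ExitCode() returns a
negative sentinel value, e.g. -1, when the command timed out rather
than exiting):

    // Accepts any non-zero value, including the timeout sentinel.
    Expect(session.ExitCode()).ToNot(Equal(0))

    // Accepts only a genuine error exit code; a timed-out command
    // (negative code) still fails the assertion.
    Expect(session.ExitCode()).To(BeNumerically(">", 0))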
| |
Add '.To(BeTrue())' to 'Expect(' statements in unit tests that
are missing them. Those expectations weren't comparing against
anything, and thus reported false positives.
Signed-off-by: gabi beyer <gabrielle.n.beyer@intel.com>
| |
When listing multiple ports on a container with podman port, an early
return was limiting the results.
Fixes: #3747
Signed-off-by: baude <bbaude@redhat.com>
| |
Many of the port tests use our nginx container image. In some cases we
have timing issues between when nginx (and thus the container) is
actually running and when the port -l command is run, causing test
flakes. We now use the container image's built-in healthcheck to ensure
that nginx is running (and consequently the container itself) before
running the port command.
Fixes: #3309
Signed-off-by: Brent Baude <bbaude@redhat.com>
Signed-off-by: baude <bbaude@redhat.com>
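A hedged sketch of the idea (not the repo's actual helper; the function
name and retry policy are illustrative):

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForHealthy polls the image's built-in healthcheck until nginx
    // reports healthy, so "podman port" only runs against a ready container.
    func waitForHealthy(ctr string) error {
        for i := 0; i < 10; i++ {
            // "podman healthcheck run" exits 0 once the container is healthy.
            if exec.Command("podman", "healthcheck", "run", ctr).Run() == nil {
                return nil
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("container %s never became healthy", ctr)
    }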
| |
When doing localized tests (not varlink), we can use secondary image
stores as read-only image caches. This cuts down test time
significantly because each test no longer needs to restore the images
from a tarball.
Signed-off-by: baude <bbaude@redhat.com>
| |
A series of improvements to our ginkgo test framework so we can get a
better idea of what's going on when it runs in CI.
Signed-off-by: baude <bbaude@redhat.com>
| |
We have little to no testing to make sure we don't break podman image and
podman container commands that wrap traditional commands.
This PR adds tests for each of the commands.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
| |
Add the ability to run the integration (ginkgo) suite using
the remote client.
Only the images_test.go file is run right now; all the rest are
isolated with a // +build !remotelinux tag. As more content is
developed for the remote client, we can unblock the files and
just block single tests as needed.
Signed-off-by: baude <bbaude@redhat.com>
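A sketch of the isolation mechanism (file and package names are
illustrative; the build tag is the one named above):

    // pause_test.go
    // +build !remotelinux

    package integration

    // Everything in this file is excluded from builds that set the
    // remotelinux tag, so these tests do not yet run against the
    // remote client.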
| |
Put commonly used test functions and structs into a separate package
so we can use them for more test suites.
Signed-off-by: Yiqiao Pu <ypu@redhat.com>
| |
Also fix the issue of exposing both tcp/udp ports even if
only one protocol was specified.
Signed-off-by: Kunal Kushwaha <kushwaha_kunal_v7@lab.ntt.co.jp>
Closes: #1325
Approved by: mheon
| |
This is the second round of performance improvements for our
integration tests.
Signed-off-by: baude <bbaude@redhat.com>
Closes: #1190
Approved by: rhatdan
| |
Because our tests are getting so long, we want to be able to audit which
tests take the longest to complete. Slowness may indicate a bad test, bad
CI, bad code, etc., and therefore should be auditable.
Also, speed up the tests by making sure we only unpack cached images that
actually get used.
Signed-off-by: baude <bbaude@redhat.com>
Closes: #1178
Approved by: mheon
| |
Completion of the migration from bats to ginkgo. This includes:
* load
* mount
* pause
* port
* run_networking
* search
Note: build will be done within a different PR
Signed-off-by: baude <bbaude@redhat.com>
|