
Allow failing a test immediately if before-each fails #189

Open
chrahunt opened this issue Aug 1, 2020 · 4 comments
chrahunt commented Aug 1, 2020

It often does not make sense to execute tests if there are errors in any of the setup steps. #179 discusses a related issue of ignored errors in before-all. This issue targets the expected behavior when an error occurs while executing before-each.

Currently, if an error occurs in a before-each then the corresponding test still executes. To reproduce, see:

script.sh
#!/bin/sh
cd "$(mktemp -d)"
cat <<EOF > Cask
(source gnu)
(source melpa-stable)

(development
 (depends-on "buttercup"))
EOF

cat <<EOF > test-example.el
(setq i 0)
(describe "Running a test"
  (before-each
    (print "before-each")
    (setq i (+ i 1))
    (when (> i 1)
      (expect t :to-be nil)))

  (before-each
    (print "before-each 2"))

  (after-each
    (print "after-each"))

  (it "should not execute"
    (print "1"))

  (it "should not execute 2"
    (print "2")))
EOF

cask
cask exec buttercup -L .

which gives this output:

output
Loading package information... done
Package operations: 1 installs, 0 removals
  - Installing [ 1/1] buttercup (latest)... done
Running 2 specs.

Running a test
  should not execute
"before-each"

"before-each 2"

"1"

"after-each"
  should not execute (0.03ms)
  should not execute 2
"before-each"

"before-each 2"

"2"

"after-each"
  should not execute 2  FAILED (4.24ms)

========================================
Running a test should not execute 2

Traceback (most recent call last):
  (buttercup-fail "%s" "Expected `t' to be `eq' to `nil', but instead it was...
  (signal buttercup-failed "Expected `t' to be `eq' to `nil', but instead it...
FAILED: Expected `t' to be `eq' to `nil', but instead it was `t'.

Ran 2 specs, 1 failed, in 5.56ms.

The desired behavior is:

  1. No before-each methods execute after the first one fails
  2. The test is not executed
  3. after-each methods are executed
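As a sketch of that flow, here is a hypothetical minimal runner (the names `runSpec`, `beforeEachFns`, etc. are mine for illustration, not anything from Buttercup or Jasmine):

```javascript
// Hypothetical runner sketch: run before-each hooks in order, stop at the
// first failure, skip the spec body, but always run every after-each hook.
function runSpec(beforeEachFns, specFn, afterEachFns) {
  const log = [];
  let failure = null;

  // 1. No before-each hooks execute after the first one fails.
  for (const hook of beforeEachFns) {
    try {
      hook(log);
    } catch (err) {
      failure = err;
      break;
    }
  }

  // 2. The spec body only runs if every before-each hook succeeded.
  if (failure === null) {
    try {
      specFn(log);
    } catch (err) {
      failure = err;
    }
  }

  // 3. after-each hooks always run, even after a failure.
  for (const hook of afterEachFns) {
    hook(log);
  }

  return { status: failure === null ? "passed" : "failed", log };
}
```

With a failing first hook, `log` would contain only `"before-each"` and `"after-each"`: the second before-each and the spec body are skipped, and the spec is reported as failed.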

This aligns with the behavior of Jasmine when using --stop-on-failure=true (it is false by default). To reproduce, see:

script.sh
#!/bin/sh

cd "$(mktemp -d)"
npm init -y
npm install --save-dev jasmine
npx jasmine init
cat <<EOF > spec/test_spec.js
var i = 0;
describe("Running a test", function() {
    beforeEach(function() {
        console.log("beforeEach");
        i++;
        if (i > 1) {
            throw "Failure";
        }
    });

    beforeEach(function() {
        console.log("beforeEach2");
    });

    afterEach(function() {
        console.log("afterEach");
    });

    it("should not execute", function() {
        console.log("1");
    });

    it("should not execute 2", function() {
        console.log("2");
    });
});
EOF
npx jasmine --stop-on-failure=true

which gives this output:

output
Wrote to /tmp/user/1000/tmp.m5b4SFGP8y/package.json:

{
  "name": "tmp.m5b4SFGP8y",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC"
}


npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN [email protected] No description
npm WARN [email protected] No repository field.

+ [email protected]
added 121 packages from 101 contributors and audited 121 packages in 0.961s
found 0 vulnerabilities

Randomized with seed 91165
Started
beforeEach
beforeEach2
1
afterEach
.beforeEach
afterEach
F

Failures:
1) Running a test should not execute 2
  Message:
    Failure thrown
  Stack:


2 specs, 1 failure
Finished in 0.005 seconds
Randomized with seed 91165 (jasmine --random=true --seed=91165)
snogge commented Aug 1, 2020

Thank you for a very thorough issue description.

The problem I see with not running the rest of the before-all, before-each, test code, after-each, after-all is that cleanup code may be skipped. If you have some setup in a before-all and then some necessary cleanup in an after-all, how do you make sure it is executed?

chrahunt commented Aug 2, 2020

I think we should execute the after-each and after-all like normal, even if any before-each fails (item 3 under desired behavior).

snogge commented Aug 2, 2020

Ah, I missed that in your original report. Do you think we should tie this to the same option as #188? If I understand you correctly that is what Jasmine does. (I've never used Jasmine myself, I just know that buttercup is inspired by it.)

chrahunt commented Aug 2, 2020

Jasmine continues executing the remaining tests if a before-each fails, even with --stop-on-failure=true (and personally this is what I want to happen by default). Only the specific test that the failing before-each preceded is skipped. Here's an updated

script
#!/bin/sh

cd "$(mktemp -d)"
npm init -y
npm install --save-dev jasmine
npx jasmine init
cat <<EOF > spec/test_spec.js
var i = 0;
describe("Running a test", function() {
    beforeEach(function() {
        console.log("beforeEach");
        i++;
        if (i > 1) {
            throw "Failure";
        }
    });

    beforeEach(function() {
        console.log("beforeEach2");
    });

    afterEach(function() {
        console.log("afterEach");
    });

    it("should not execute", function() {
        console.log("1");
    });

    it("should not execute 2", function() {
        console.log("2");
    });

    it("should not execute 3", function() {
        console.log("3");
    });
});
EOF
npx jasmine --stop-on-failure=true --random=false

and

output
Wrote to /tmp/user/1000/tmp.9wT06oYJtr/package.json:

{
  "name": "tmp.9wT06oYJtr",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC"
}


npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN [email protected] No description
npm WARN [email protected] No repository field.

+ [email protected]
added 121 packages from 101 contributors and audited 121 packages in 0.91s
found 0 vulnerabilities

Started
beforeEach
beforeEach2
1
afterEach
.beforeEach
afterEach
FbeforeEach
afterEach
F

Failures:
1) Running a test should not execute 2
  Message:
    Failure thrown
  Stack:


2) Running a test should not execute 3
  Message:
    Failure thrown
  Stack:


3 specs, 2 failures
Finished in 0.005 seconds

if that makes it a little clearer.

#188 sounds closer to the --maxfail/-x option in pytest. I checked the Jasmine docs and I couldn't find an equivalent flag, but we can always take inspiration from more than one place. 😄
