Less self-writing tests, more generated tests? #48
An easier approach might be to create a smaller suite of "representative tests" that are fully written out, such that you are pretty likely to fail the representative test for a given section if you fail the generated tests. Then people could use the representative tests as debugging aids. Sometimes they might miss a subtlety, and pass the representative test but fail the relevant generated ones, in which case, they have to do the extra legwork of digging through our levels of indirection. But ideally, the representative tests alone should be enough to guide you toward a correct implementation.
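A minimal sketch of what one such hand-written representative test might look like, assuming Mocha and native promises; the section number, the `resolved` helper, and the test titles are all illustrative, not taken from the suite:

```js
// A fully written-out "representative" test for spec section 2.2.2:
// onFulfilled must be called with the fulfillment value, exactly once.
var assert = require("assert");

// Hypothetical adapter helper: a promise already fulfilled with `value`.
function resolved(value) {
  return Promise.resolve(value);
}

describe("2.2.2 (representative)", function () {
  it("calls onFulfilled once with the fulfillment value", function (done) {
    var callCount = 0;
    resolved("sentinel").then(function (value) {
      callCount += 1;
      assert.strictEqual(value, "sentinel");
      assert.strictEqual(callCount, 1);
      done();
    });
  });
});
```

A developer failing dozens of generated 2.2.2 permutations could debug against this single readable case first.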
That's a good point; it might not be necessary to test every combination of factors. Pair-wise testing could work instead: start with one 'canonical' or 'happy path' test, and have one test that varies each factor in turn. E.g. if you're testing a form validator, you check it passes with a valid set of inputs, then write a test for each input, setting it to something invalid and checking the whole thing is invalid as a result. I know I'm probably over-simplifying, but I've used this approach a few times and it tends to produce good-enough code with massively reduced testing costs. Also, thanks for listening despite my snark <3
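A rough sketch of the vary-one-factor-at-a-time idea, with a made-up `validate` function standing in for the form validator:

```js
var assert = require("assert");

// Hypothetical validator: every field must pass its own check.
function validate(form) {
  return {
    valid: typeof form.name === "string" && form.name.length > 0 &&
           /^[^@]+@[^@]+$/.test(form.email) &&
           Number.isInteger(form.age) && form.age >= 0
  };
}

var validForm = { name: "Ada", email: "ada@example.com", age: 36 };

describe("form validator", function () {
  // One canonical happy-path test with all-valid inputs.
  it("accepts a fully valid form", function () {
    assert.strictEqual(validate(validForm).valid, true);
  });

  // Then vary one factor at a time, instead of every combination.
  var invalidByField = { name: "", email: "not-an-email", age: 1.5 };
  Object.keys(invalidByField).forEach(function (field) {
    it("rejects a form with an invalid " + field, function () {
      var form = Object.assign({}, validForm);
      form[field] = invalidByField[field];
      assert.strictEqual(validate(form).valid, false);
    });
  });
});
```

For n fields this needs n + 1 tests rather than the 2^n an exhaustive combination sweep would take.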
To @jcoglan's point, there are a few generative testing tools for JS, like Crockford's JSCheck and gent. Typically they deal with inputs that are easy to generate like numbers, strings, etc. and they explore the input space (usually randomly) for some number of iterations. Not sure how easy it would be to use such a thing for these tests, but when I've used them, they've drastically cut down on the sheer volume manual test writing. Sometimes it takes longer to devise how to generate the inputs, tho, like when you need more custom things like strings in a specific format, etc. Anyway, it could be a good direction, but seems challenging for promises :) It'd sure be interesting to try it, though. |
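A hand-rolled loop in the spirit of those tools (this is not JSCheck's or gent's actual API, just the general shape: generate random inputs for some number of iterations and assert a property for each):

```js
var assert = require("assert");

// Random-string generator: 0-19 printable ASCII characters.
function randomString() {
  var length = Math.floor(Math.random() * 20);
  var chars = [];
  for (var i = 0; i < length; i++) {
    chars.push(String.fromCharCode(32 + Math.floor(Math.random() * 95)));
  }
  return chars.join("");
}

// Property under test: trimming is idempotent.
describe("trim is idempotent (generated)", function () {
  for (var i = 0; i < 100; i++) {
    (function (input) {
      it("holds for " + JSON.stringify(input), function () {
        assert.strictEqual(input.trim().trim(), input.trim());
      });
    })(randomString());
  }
});
```

The hard part the comment alludes to is exactly the generator: random strings are trivial, but generating "a thenable whose then throws after calling resolvePromise" and the like takes real design work.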
There are two clear styles for programmatic testing, the modern functional …

Each suite has its benefits and failings, and there is a strong range of …

That said, it does have failings worthy of note: …
My experience with bringing a promises-1.0-compliant suite like covenant to …

Then I realized that I had only spent a bit of time doing the work that …

So, the first thing I did was add one or two tests to my tdd suite, getting …

After only a few passes, I was reduced to a handful of test categories, …

In other words, I used the suite to generate some noise, parsed the noise, …

It is a terrible mistake to use the suite for tdd-style development. This …

Could the experience be made better? Absolutely. A breakout of "big …

I believe that thinking about the suite as a functional-test-style suite …

I am looking at the angular.js $q library and thinking about what it will …
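The "generate some noise, parse the noise" step could be automated with a small script along these lines; everything here is an assumption about mocha's spec-reporter output (failing lines like "  1) 2.3.3: …", section numbers in test titles), not part of the suite:

```js
// Pipe the suite's output through this script to tally failures by spec
// section, then pick one category at a time to drive your own tdd tests.
// Usage (hypothetical): mocha --reporter spec 2>&1 | node tally.js
var counts = {};
require("readline")
  .createInterface({ input: process.stdin, terminal: false })
  .on("line", function (line) {
    // Match failing lines ("  1) ...") and grab the first section number.
    var match = /^\s*\d+\)\s.*?(\d+\.\d+(?:\.\d+)?)/.exec(line);
    if (match) {
      counts[match[1]] = (counts[match[1]] || 0) + 1;
    }
  })
  .on("close", function () {
    Object.keys(counts).sort().forEach(function (section) {
      console.log(section + ": " + counts[section] + " failing");
    });
  });
```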
@wizardwerdna This chimes entirely with my experience. From an impl that passed the 1.3.x tests, I started out passing < 150 of the 2.0.x tests. Replacing …

It's certainly a case of each missing feature causing 100s of test failures, and it's hard to narrow down which tests to run to drive out your implementation. I had a much better experience with 1.x.
If you are a TDD-er, my advice would be to ignore the suite for the moment, …

One piece of guidance is that, for the assimilation portions of …

How you do that is up to you. I actually wrote a functor, "once", with a …

And those of you who wrote those killer tests, you are very sneaky, nasty …
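The comment is truncated, but a "once" functor for the assimilation logic (spec 2.3.3.3, where resolvePromise and rejectPromise may be called at most once between them) plausibly looks like this sketch; it is not the author's actual code:

```js
// Wrap a group of callbacks so that after any one of them fires,
// all of them become no-ops: the first call wins for the whole group.
function makeOnce() {
  var called = false;
  return function once(fn) {
    return function () {
      if (called) return;   // 2.3.3.3.3: ignore any later calls
      called = true;
      return fn.apply(this, arguments);
    };
  };
}

// Hypothetical usage inside an assimilation step:
// var once = makeOnce();
// thenable.then.call(thenable, once(resolvePromise), once(rejectPromise));
```

Many of the "sneaky" generated tests hammer exactly this guarantee, with thenables that call both callbacks, call one twice, or throw after calling one.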
This makes me smile :D
@domenic I am also glad. Some (or lots) of those scenarios are derived from real breakage in real apps; I am glad that the effort of the few can benefit the masses.
Indeed. These tests are b@#2-breakers, digging deep into the nuances of …
I found that using mocha's grep feature to isolate slabs of tests has made it easier to debug issues and to update the implementation I maintain when the spec changes.
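For example (mocha's `--grep` flag filters tests by title; that the titles here start with spec section numbers is how this suite happens to name them):

```sh
# Run only the tests for one section of the spec.
mocha --reporter spec --grep "2.3.3" test/
```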
A lot of the tests, especially for 2.3, are created programmatically via various permutations of the objects and possibilities involved. This has always been problematic, and recently @jcoglan has run into it and discussed it on Twitter.
It might be nicer if we generated test cases that were readable by themselves, somehow. Or just wrote them all out manually, I dunno.
This is probably necessary for adaptation to test-262 as well; I can't imagine the test harness there is nearly as flexible as Mocha, from what I've seen.
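For context, the generated style at issue looks roughly like this; it is a simplified sketch in the spirit of the suite's helpers, not the actual code:

```js
var assert = require("assert");

// A helper loops over the ways a promise can reach a given state,
// stamping out one test per permutation. Failure output then shows
// generated titles rather than hand-written test bodies.
function testFulfilled(value, test) {
  specify("already-fulfilled", function (done) {
    test(Promise.resolve(value), done);
  });
  specify("eventually-fulfilled", function (done) {
    var fulfill;
    var promise = new Promise(function (resolve) { fulfill = resolve; });
    setTimeout(function () { fulfill(value); }, 10);
    test(promise, done);
  });
}

describe("2.2.2: onFulfilled receives the value", function () {
  testFulfilled("sentinel", function (promise, done) {
    promise.then(function (value) {
      assert.strictEqual(value, "sentinel");
      done();
    });
  });
});
```

Writing each permutation out manually would make individual failures readable, at the cost of a much larger and more repetitive suite; generating readable source (rather than generating tests at runtime) might be a middle ground.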