
Custom Runner and TestCase #122

Closed

rodrigondec opened this issue Jul 28, 2021 · 19 comments
rodrigondec commented Jul 28, 2021

Summary

I want to use a custom Runner and a custom TestCase.

I can't override or configure these classes in the current lib.

I'm willing to open PR.

Runner

The Behave runner for the lib is located in

class SimpleTestRunner(DiscoverRunner, BehaveHooksMixin):

The SimpleTestRunner inherits from DiscoverRunner.

CustomRunner

I want SimpleTestRunner to inherit from MyTestSuiteRunner instead.

My runner adds a performance measurement. This could be related to #61.

from time import time
from unittest.runner import TextTestResult, TextTestRunner

from django.conf import settings
from django.test.runner import DiscoverRunner


class TimedTextTestResult(TextTestResult):
    def __init__(self, *args, **kwargs):
        super(TimedTextTestResult, self).__init__(*args, **kwargs)
        self.clocks = dict()

    def startTest(self, test):
        self.clocks[test] = time()
        # Deliberately skip TextTestResult.startTest: we write the
        # description to the stream ourselves below.
        super(TextTestResult, self).startTest(test)
        if self.showAll:
            self.stream.write(self.getDescription(test))
            self.stream.write(" ... ")
            self.stream.flush()

    def addSuccess(self, test):
        # Deliberately skip TextTestResult.addSuccess so we can print
        # the elapsed time instead of the plain "ok".
        super(TextTestResult, self).addSuccess(test)
        if self.showAll:
            self.stream.writeln("time spent: %.6fs" % (time() - self.clocks[test]))
        elif self.dots:
            self.stream.write(".")
            self.stream.flush()


class TimedTextTestRunner(TextTestRunner):
    resultclass = TimedTextTestResult


class MyTestSuiteRunner(DiscoverRunner):
    def __init__(self, *args, **kwargs):
        super(MyTestSuiteRunner, self).__init__(*args, **kwargs)
        settings.TEST = True

Possibilities

I can see two possible ways to achieve this.

  1. we can pass a --behave-test-runner arg to the behave management command and add a BEHAVE_TEST_RUNNER setting in settings.py (Django settings)
  2. we can reuse Django's get_runner function (see its code and documentation)

Within 5 minutes I made possibility 2 work.
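The core of possibility 2 is resolving a dotted path from settings into a class, the way Django's get_runner does. A self-contained sketch, where a plain dict stands in for django.conf.settings and unittest's TextTestRunner stands in for the project's configured runner:

```python
import importlib


def get_runner(settings, runner_name=None):
    """Resolve a dotted class path, mirroring django.test.utils.get_runner."""
    runner_name = runner_name or settings["TEST_RUNNER"]
    module_path, _, classname = runner_name.rpartition(".")
    module = importlib.import_module(module_path)
    return getattr(module, classname)


# Hypothetical stand-in for django.conf.settings
settings = {"TEST_RUNNER": "unittest.runner.TextTestRunner"}
Runner = get_runner(settings)
print(Runner.__name__)  # TextTestRunner
```

With real Django settings, the same call would return whatever class TEST_RUNNER points to, so behave-django could build its own runner on top of it.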

TestCase

The behave test cases for the lib are located in
https://github.com/behave/behave-django/blob/6fccd9b7dc2c61c9a894fa915cf87a7758581c69/behave_django/testcase.py

They are a little trickier to work with because Django has no TEST_CASE setting (unlike TEST_RUNNER).

Possibilities

  1. add 3 settings, one for each test case in the lib, and let the runners use each specific test case
  2. add a single BEHAVE_TEST_CASE setting. The runners could then use a get_test_case function (which would work similarly to get_runner)
  3. 'expose' the django_test_runner on the context (in the method BehaveHooksMixin.patch_context) and fire a custom behave hook, behave_run_hook(self, 'before_django_ready', context), before calling django_test_runner.setup_testclass. This way we could monkey-patch the attribute django_test_runner.testcase_class however we want inside the before_django_ready hook

Within 5 minutes I almost made possibility 3 work.
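Possibility 3 can be simulated without behave or Django at all. Everything here is a stand-in: the hook name before_django_ready comes from the proposal above, and the tiny classes mimic the runner/testcase wiring:

```python
class DjangoBehaveTestCase:
    """Stand-in for the lib's default testcase class."""


class MyCustomTestCase(DjangoBehaveTestCase):
    """The testcase class the user wants instead."""


class Context:
    """Stand-in for behave's context object."""


class DjangoTestRunner:
    testcase_class = DjangoBehaveTestCase

    def setup_testclass(self):
        return self.testcase_class()


def before_django_ready(context):
    # User-supplied hook: monkey-patch the testcase class.
    context.django_test_runner.testcase_class = MyCustomTestCase


# What patch_context would do, in miniature:
context = Context()
context.django_test_runner = DjangoTestRunner()
before_django_ready(context)  # fire the hypothetical hook
test = context.django_test_runner.setup_testclass()
print(type(test).__name__)  # MyCustomTestCase
```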

_pre_setup and _post_teardown

I need to understand why the methods _pre_setup and _post_teardown take an additional run=False flag when they are always called with run=True.

def _pre_setup(self, run=False):

context.test._pre_setup(run=True)

context.test._post_teardown(run=True)

test.__call__

I need to understand why the TestCase is called at all, considering that __call__ does nothing more than _pre_setup, run, and _post_teardown, and the runTest method is empty!

def runTest(self):

https://github.com/django/django/blob/acde91745656a852a15db7611c08cabf93bb735b/django/test/testcases.py#L233

https://github.com/django/django/blob/acde91745656a852a15db7611c08cabf93bb735b/django/test/testcases.py#L246

bittner (Member) commented Jul 29, 2021

Thank you for the effort you put in this issue!

Can you please check whether #78 is a duplicate of your request?

Custom runners are now supported by behave itself, out-of-the-box. Please install behave from the master branch to take advantage of the new feature.

rodrigondec (Author) commented Jul 29, 2021

#78 is almost a duplicate indeed.

The difference is in the objective, approach, and consequences.

I could use the manual integration explained in the behave docs, and it would work for me.

But I would prefer to use behave_django to avoid extra lines and heavy lifting.

My mindset was not to change ANY interface or CLI of the lib. The goal is to offer a slightly different implementation that could be more extensible/customizable, instead of a new CLI arg or a new feature.

For example, with an extra custom behave hook we could provide a point in the code to do a monkey patch (if the dev wanted).

Main difference when I say "custom runner"

behave_django uses its own 'django runner' for compatibility between Django and Behave.

When I say custom runner, I mean it in comparison with Django: I don't want to use the default Django runner.

I don't want to overwrite the runner for behave django.

I don't want to pass a custom runner to the behave django CLI (management command).

SimpleTestRunner could simply inherit from the Django runner dynamically instead of using DiscoverRunner directly.

That's the approach Django itself takes in its own documentation.

90% (my guess) of Django projects don't override DiscoverRunner or use the TEST_RUNNER setting. The others override and configure it.

My point on this is

why use a hardcoded Django runner instead of getting it dynamically, as Django itself does?

This way it doesn't matter whether a Django project uses DiscoverRunner or a MyCustomDjangoCompliantRunner! The lib automagically uses the configured runner to create the corresponding behave_django runner.
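One way to sketch that idea: assemble the behave_django runner class at import time from whatever runner the project configures. TextTestRunner and the stub mixin below are stand-ins so the snippet runs without Django; in the lib, the dotted path would come from settings.TEST_RUNNER:

```python
import importlib


class BehaveHooksMixin:
    """Stand-in for behave_django's BehaveHooksMixin."""


def resolve(dotted_path):
    """Load a class from a dotted module path."""
    module_path, _, classname = dotted_path.rpartition(".")
    return getattr(importlib.import_module(module_path), classname)


# In behave-django this would be resolve(settings.TEST_RUNNER).
BaseRunner = resolve("unittest.runner.TextTestRunner")

# Build the runner class dynamically instead of hardcoding DiscoverRunner.
SimpleTestRunner = type("SimpleTestRunner", (BaseRunner, BehaveHooksMixin), {})
```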

I need to use a custom TestCase as well

I use a CustomTestCase instead of Django's TestCase.

For this to be extensible/configurable, a monkey patch entry point would suffice.

bittner (Member) commented Jul 30, 2021

Note that you can configure a custom runner directly with Behave; this doesn't need to be a CLI option. You can use the runner_class option in a configuration file that Behave recognizes. This feature was added with PR behave/behave#795.
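For reference, that configuration looks something like this (the dotted path is a placeholder for your own runner class):

```ini
# behave.ini — requires a behave version that supports runner_class
[behave]
runner_class = myproject.runners.MyCustomRunner
```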

For everything else that you'd like to make more automatic, more dynamic - yes! - please go forward and try a PR. I'm a big fan of avoiding excessive setup code. 👍

Be sure to put everything that is not strictly related to Django (or very specific to integrating with it) into the Behave project itself where possible. Ideally we keep our integration layer as slim as possible.

rodrigondec (Author) commented

Update?

bittner (Member) commented Oct 3, 2021

@rodrigondec I don't want to disappoint you. If we have your use case covered by a feature of Behave itself, though, it would be unwise to add code to this repository to support doing the same thing. What do you think?

Can you please verify hands-on whether you can use your custom runner by simply adding runner_class = MyCustomRunner to the behave configuration? See behave.readthedocs.io/en/latest/behave.html for related docs. Make sure you have Behave installed from the master branch (not from PyPI).

danickfort commented

@bittner Using a custom runner in the behave conf files doesn't fully work, because behave-django does monkey patching before behave is called

bittner (Member) commented Oct 3, 2021

@bittner Using a custom runner in the behave conf files doesn't fully work, because behave-django does monkey patching before behave is called

@kingbuzzman can you confirm this? Why does it work for your use case?

kingbuzzman (Contributor) commented

Using a custom runner in the behave conf files doesn't fully work

@kingbuzzman can you confirm this?

I can

because behave-django does monkey patching before behave is called

Not because of this, though. It's because we're using Django's TestRunner classes in order to set up the DB... and whatever else Django does internally, AND we monkey-patch the TestRunner class we receive from behave.

Why does it work for your use case?

Because I needed to modify the way behave was working, i.e. the way it was loading the feature files and step file locations, and my use case excluded Django entirely. I never needed to change anything in TestCase.

The greatest issue of all is that django's TestRunner is absolutely 100% incompatible with behave's TestRunner.

https://github.com/django/django/blob/stable/4.0.x/django/test/runner.py#L550 (django's "TestRunner")
https://github.com/behave/behave/blob/v1.2.7.dev2/behave/runner.py#L728 (behave's "TestRunner")

As you can clearly see, there is very little in common.

rodrigondec (Author) commented

About this

@rodrigondec I don't want to disappoint you. If we have your use case covered by a feature of Behave itself, though, it would be unwise to add code to this repository to support doing the same thing. What do you think?

@danickfort-liip nailed the motivation pretty well.

@bittner Using a custom runner in the behave conf files doesn't fully work, because behave-django does monkey patching before behave is called

I'm not looking for extending or customizing the behave aspect. I want to load my super complex Django TestCase onto behave-django.

It's not about behave itself. It's about the django DB setup and so on...

danickfort commented Oct 6, 2021

@kingbuzzman and @rodrigondec are 100% right, it's about having the ability to run a custom Django TestCase. In my particular case, I have to overload the behavior of Django's LiveServerTestCase, which can't be achieved with behave's TestRunners.

kingbuzzman (Contributor) commented Oct 6, 2021

I wrote a VERY basic PR to get the ball rolling. It should work.

@bittner any idea when behave 1.2.7 will be released? The issue is I'm having a hell of a hard time testing this through tox: it validates against PyPI, and since v1.2.7.dev2 is not pushed there, it fails. This line works fine locally, but tox hates it.

P.S. @bittner what's going on with the behave-django CI? My tests are not running for some reason (not that they would pass, due to the comment above).

-- Yes, the PR needs a little love.

rodrigondec (Author) commented

@kingbuzzman and @rodrigondec are 100% right, it's about having the ability to run a custom Django TestCase. In my particular case, I have to overload the behavior of Django's LiveServerTestCase, which can't be achieved with behave's TestRunners.

That's exactly my situation too. I created my own DjangoTestCase and use it across my whole system.

And just to keep you informed, I opened PR #123 about this too!

kingbuzzman (Contributor) commented

@bittner friendly ping ;)

kingbuzzman (Contributor) commented

@bittner poke.

kingbuzzman (Contributor) commented

@bittner ??

bittner (Member) commented Jan 8, 2022

I'm willing to merge the simpler solution that solves this issue. Simple in the sense of both a low footprint in the code and simplicity for the user / the developer.

kingbuzzman (Contributor) commented Jan 10, 2022

I think #130 fixes this issue, can this be closed now @rodrigondec ?

bittner (Member) commented Jan 10, 2022

Let's be sure that the setup we now have satisfies everyone's use case here. Can you check, @rodrigondec?

See the updated docs for usage instructions for the new feature. 📘 🔖

rodrigondec (Author) commented Jan 11, 2022

I think #130 fixes this issue, can this be closed now @rodrigondec ?

This indeed solves the issue with the Runner class. Consequently, it should resolve the TestCase issue, as @bittner mentioned in #123 (comment).

At this point I'm no longer using behave-django. We decided to use the manual integration from behave, seeing as my case may be off the charts compared to the "common" way.

Therefore, for me, this issue can be closed.

@kingbuzzman ty for the PR 🙏
