[WIP] Create a custom Logger to gather the data from the test run #271
base: master
Conversation
This is coming along nicely. I'm currently refactoring the things I stumble upon, e.g. the … I've had to switch from .NET Standard 2.0 to .NET Core 3.0 though, since the former does not have a JSON formatter and I got into DLL hell while trying to make Newtonsoft.Json work - vstest seems to mess with the assembly loading logic. Maybe I'll try again tomorrow, or I'll file an issue with the vstest people. If that doesn't work out: are we trying to remain compatible with the .NET Core 2.x SDK? People who need their code to target netcoreapp2.x could still do so; only the SDK would need to be upgraded. Otherwise, we might want to keep the TRX parsing as a fallback for a while for the old users, before v2 becomes unsupported by MS anyway (August 2021). My impression is that VSCode users are usually quite willing to upgrade regularly. @stefanforsberg
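(For what it's worth, netcoreapp3.0 ships a JSON serializer in the shared framework, which is what makes the Newtonsoft.Json dependency avoidable. A minimal sketch; the payload shape is made up for illustration:)

```csharp
using System.Text.Json;

static class Payloads
{
    // System.Text.Json is part of the netcoreapp3.0 shared framework,
    // so nothing extra has to be resolved inside the vstest process.
    public static string Serialize(string fullyQualifiedName, string outcome) =>
        JsonSerializer.Serialize(new { fullyQualifiedName, outcome });
}
```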
Following this with great interest! Does that data collector also have events for test discovery? In general, I would say that this change is so big that it might warrant a 1.0 release version. If it for various reasons is not compatible with .NET Core below 3, we can always point people to a previous version of the extension (which they can choose to install in the VSCode UI nowadays). It might not be ideal, but I feel that it's hard enough to keep up with one moving target; having to keep track of the TRX side of things as well seems like a bit too much to chew on, IMHO. On that note, .NET 5 is around the corner. Will we end up in similar territory there? I'm just slightly scared of ending up with different versions of the data collector that we need to keep up to date, as well as having to figure out which one to use depending on the environment the extension runs in. Hope I'm not sounding too negative here, just trying to get as much information as possible 👍
That's nice to hear!
I wondered that too. Haven't seen them yet :-/ I know; right now we're dependent on the human-readable output of …
Agreed.
Let's be honest: the TRX file was pretty stable. The fact that XUnit was emitting the entire name in the name field was more of a bug that got fixed. I would expect the API to be similarly stable, if not more so. But the real benefit of this change is the "streaming" and the fact that we don't have to write the results to disk. Right now, the data collector is about 40 lines of code. It has literally one single dependency apart from the standard library: …
Sure, transparency never hurts.
I almost wrote a long rant here, since the vstest system is really over-engineered. It turns out that we need a logger instead of a data collector. This also lets us gather the data from the test discovery.
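For reference, a minimal vstest logger is just a class implementing `ITestLogger` and subscribing to the events it cares about. A sketch, assuming a reference to the `Microsoft.TestPlatform.ObjectModel` package; the class name, friendly name, and extension URI are placeholders:

```csharp
using System;
using Microsoft.VisualStudio.TestPlatform.ObjectModel;
using Microsoft.VisualStudio.TestPlatform.ObjectModel.Client;

// vstest locates the logger via the two attributes below.
[FriendlyName("SampleLogger")]
[ExtensionUri("logger://SampleLogger/v1")]
public class SampleLogger : ITestLogger
{
    public void Initialize(TestLoggerEvents events, string testRunDirectory)
    {
        // Raised once per finished test - this is what enables streaming
        // results instead of waiting for a TRX file on disk.
        events.TestResult += (sender, e) =>
            Console.WriteLine($"{e.Result.TestCase.FullyQualifiedName}: {e.Result.Outcome}");

        // Raised during test discovery - the part a data collector never sees.
        events.DiscoveredTests += (sender, e) =>
        {
            foreach (TestCase testCase in e.DiscoveredTestCases)
                Console.WriteLine($"discovered: {testCase.FullyQualifiedName}");
        };

        events.TestRunComplete += (sender, e) => Console.WriteLine("run complete");
    }
}
```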
That sounds great. Currently, in addition to parsing the cmd output, if the user is running NUnit or MSTest we have to resort to an undocumented parameter you can send to vstest to make it create a file from which we read the test names. This is done because NUnit and MSTest do not report the FQN of the test, only the method name of the discovered test. I would be very glad to get rid of that code.
Does this mean that we are able to update the tree more in "real time" as tests are run, rather than waiting for the whole test run to complete before updating the tree?
Exactly.
Okay, I'm almost done - all that's left is testing stuff. I'd be glad if you could pull the branch and try it out.
Yeah, I stumbled upon the same issue. There's now some code in the C# logger that covers this inconsistency. While I was working on the project, I also stumbled upon a lot of …
Great! In general, I love the direction of moving code from parsing cmd output in Node to C# code, and the streaming of test results. One thing I'm curious about: there is (at least when I try it on my machine) quite a significant difference in speed. When I run all tests for the NUnit project, it takes ~40 seconds before the tree is completely updated with this branch, compared to the ~4 seconds it takes if I run the same test project on the master branch. Is there something I'm missing on my end?
Is that speed difference from the NUnit project from the repository? Anyway, I already suspected that it would be a little inefficient, since the test results are currently sent synchronously over the TCP connection. That, and a new connection is made for each test. Both of these things could be trimmed down easily.
"Is that speed difference from the NUnit project from the repository?" Yep!
Ok, weird - that one takes more like the 4 seconds you were talking about on my machine. I've changed the test results to be sent asynchronously; can you try again now?
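A rough sketch of the idea, assuming a single persistent TCP connection for the whole run and writes that don't block the logger callbacks; the newline-delimited framing and the serial-callback assumption are illustrative, not necessarily what this branch does:

```csharp
using System;
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;

// Hypothetical sender: one connection for the whole run instead of one
// per test, with fire-and-forget writes.
class ResultStream : IDisposable
{
    private readonly TcpClient _client;
    private readonly NetworkStream _stream;
    private Task _pending = Task.CompletedTask;

    public ResultStream(string host, int port)
    {
        // The port would be handed to the logger by the extension,
        // e.g. as a logger parameter.
        _client = new TcpClient(host, port);
        _stream = _client.GetStream();
    }

    // Assumes logger events arrive one at a time (a simplification);
    // chaining keeps the writes ordered without blocking the caller.
    public void Send(string json)
    {
        byte[] payload = Encoding.UTF8.GetBytes(json + "\n");
        _pending = _pending
            .ContinueWith(_ => _stream.WriteAsync(payload, 0, payload.Length))
            .Unwrap();
    }

    public void Dispose()
    {
        _pending.Wait(); // drain outstanding writes before closing
        _stream.Dispose();
        _client.Dispose();
    }
}
```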
Now it's more like 4 seconds =)
I tried aggregating messages before sending them to the extension, but it adds quite a bit of complexity and potentially subtle bugs. It would, however, probably improve performance quite a bit. It's on a branch of mine, if you want to have a look.
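For illustration, aggregation could look roughly like the sketch below; the batch size and flush interval are made-up numbers, and this is not the code on that branch:

```csharp
using System;
using System.Collections.Concurrent;
using System.Text;
using System.Threading;

// Hypothetical aggregation: buffer serialized results and flush them in
// one write, either when the batch fills up or when the timer fires.
class BatchingSender : IDisposable
{
    private const int MaxBatch = 50;                       // made-up limit
    private readonly ConcurrentQueue<string> _pending = new ConcurrentQueue<string>();
    private readonly Action<string> _send;                 // e.g. a TCP write
    private readonly Timer _timer;

    public BatchingSender(Action<string> send)
    {
        _send = send;
        _timer = new Timer(_ => Flush(), null, 200, 200);  // flush every 200 ms
    }

    public void Enqueue(string message)
    {
        _pending.Enqueue(message);
        if (_pending.Count >= MaxBatch)
            Flush();
    }

    private void Flush()
    {
        var batch = new StringBuilder();
        while (_pending.TryDequeue(out string message))
            batch.AppendLine(message);
        if (batch.Length > 0)
            _send(batch.ToString());
    }

    public void Dispose()
    {
        _timer.Dispose();
        Flush(); // send whatever is left at the end of the run
    }
}
```

The timer is where the subtle bugs come in: flushes can now race with the end of the run, which is the complexity being weighed against the performance gain.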
The current solution for adding test errors as problems in VSCode is also built around getting an aggregated test result. Before adding the problems, it clears all previous problems, which with the streaming solution means that the next test result clears the potential problem from the last result.
Also, I'm getting errors when I try to run a specific test (that is, either right-clicking on a test in the tree and selecting "run test", or clicking the run icon at the far right of a test).
Wow, XUnit is terrible. For context: …
I'm seriously considering using a proxy like …
Haha, now you're starting to sound like me =) But yes, if there were some sort of specification and the different test providers adhered to it, things would have been easier.
The different behaviour is causing me some headaches.
What do you think about the following: …
I think this should really cover everyone's use cases.
Ah, so we don't really get around the problem of different test frameworks implementing the "spec" in slightly different ways by moving to the logger approach; we just need to handle them there instead, if I follow you. Do you still feel that getting away from processing files (which seems way more stable now that we don't rely on chokidar to watch files for us, btw) and streaming test results is worth it in terms of the work needed (I realise that it's you that does the work)?
Since most of the work is already done and the remaining work is not that much, I'm rather sure that it is worth it. We also get around the issue of detecting language-dependent strings in the … The question to me is more whether we can get the building and testing to be as ergonomic as it should be. I'll have to have a look at different projects with a similar mixture of languages and how they set things up. However, I'm rather busy right now, so I'm probably not going to work on this until the weekend.
See the proposal in #269.
TODO: `--list-tests` can be replaced.
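For illustration, discovery and results could then come from a logger attached to the normal invocation instead of from parsing `--list-tests` output; the logger name and path below are placeholders:

```sh
# Hypothetical invocation: vstest discovers custom loggers from the
# test adapter path, then activates the one named by --logger.
dotnet test --test-adapter-path:/path/to/logger/output --logger:SampleLogger
```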