GUI or Ruby console? #106

Open
MSP-Greg opened this issue Feb 22, 2018 · 7 comments

@MSP-Greg
Contributor

I'm experimenting with pushing newer versions of Ruby into SU, so I started working with testup-2 for two reasons: first, using the tests to verify everything works as before, and second, to create a test file to check that the Ruby installation is set up correctly.

Having used (and written tests for) Minitest with stand-alone Ruby, RubyGems, several gems, and my own code, I was frustrated with the speed of running the SU tests. Also, the SU tests didn't seem to be stable. Using Minitest from the command line can be 'test method' based, but it (with rake) is better suited to running subsets of tests based on files or folders. At present, testup-2 is more 'test method' based.

Thanks to Thomas for the GUI, but, for instance, I rarely want to run subsets of tests from several test files. I want to run a few files, or a group, and a folder-based file system can implement that.

Anyway, I went hunting for stability and speed. Stability testing requires running tests (or the whole suite) several times (multiple runs may also help identify other issues, like resource/memory leaks, etc). Due to that and the speed issues, I wrote a command-based system for Minitest. Without the testup-2 dialog and all the regex matching, it ran considerably faster.

Then I started on the tests. Some tests saw speed improvements by starting an operation, performing the test, then aborting it. Also, I re-wrote the Observer code and assertions and started on Observer-specific tests.
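In the SketchUp Ruby API that start/abort pattern maps to Model#start_operation and Model#abort_operation; a rough sketch of such a helper (the name with_aborted_operation is my illustration, not SUMT's actual code):

```ruby
# Wrap a destructive test body in a model operation, then abort it so the
# model is restored in one step instead of cleaning up entities one by one.
# Helper name is hypothetical; start_operation/abort_operation are the
# real SketchUp API calls.
def with_aborted_operation(model, name = 'TestUp')
  model.start_operation(name, true) # true = disable UI updates (faster)
  yield
ensure
  model.abort_operation
end
```

Inside SketchUp this would be called as `with_aborted_operation(Sketchup.active_model) { ... }`; the `ensure` guarantees the abort even when an assertion raises.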

A good example is the three Observer tests I've done. Running them from the console takes about one fifth the time it takes to run them with the GUI.

Sorry for the long story, but now I've got a lot of code changes, and I'm wondering what to do with them. The GUI is set up for a 'flat' set of files, whereas I'd rather the files be organized with folders based on a combination of namespaces and/or subclasses.

Some examples of console statements (SUMT = SketchUp MiniTest):

# run all the files in the Sketchup/Observers folder
SUMT.run %w[Sketchup/Observers]

# run all the Geom files
SUMT.run %w[Geom]

# run Entities and Faces
SUMT.run %w[TC_Entities.rb TC_Faces.rb]

Since stability is often an issue, one can simply expand the command to run several sets (with a different seed each time) and go afk:

1.upto(10).each { SUMT.run %w[Sketchup/Observers] }

Options would be handled (somewhat mimicking standard CLI style) with keyword arguments, so a seed could be specified with s:2160 or seed:2160. Logging works, but I think I'd prefer to make it optional. Other options: the root folder of the test suite, the logging folder, logging the test order before the run starts, logging time by test file, etc. Maybe some standards for auto-loading suite helpers and misc test files...
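A minimal sketch of how such short/long keyword options could be normalized (the module name SUMTOptions, the alias table, and the specific keys are my illustration, not SUMT's actual implementation):

```ruby
# Hypothetical option normalizer: maps short keys (s:, td:, ld:) to their
# long forms and merges them over defaults, mimicking CLI-style flags
# with Ruby keyword arguments.
module SUMTOptions
  ALIASES  = { s: :seed, td: :test_dir, ld: :log_dir }.freeze
  DEFAULTS = { seed: nil, test_dir: nil, log_dir: nil, log: false }.freeze

  def self.normalize(**opts)
    opts.each_with_object(DEFAULTS.dup) do |(key, val), merged|
      merged[ALIASES.fetch(key, key)] = val
    end
  end
end

# Both spellings resolve to the same canonical option:
SUMTOptions.normalize(s: 2160)
SUMTOptions.normalize(seed: 2160)
```

With this shape, `SUMT.run td:'C:/sumt_temp'` and `SUMT.run test_dir:'C:/sumt_temp'` would be interchangeable.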

Note that the test suite folder and logging folder can be anywhere, so you can have plugin tests within the main plugin folder or have plugin test suites grouped together elsewhere. Same for logging.

So, any thoughts?

Thanks,

Greg

@thomthom
Member

thomthom commented Mar 9, 2018

This sounds like it fits into our TestUp automation project. We want to set up continuous integration testing against the SketchUp Ruby API tests.

Do you have a branch/repo with the experiments you mentioned above?

thomthom added this to the TestUp 2.4 milestone Mar 9, 2018
@MSP-Greg
Contributor Author

MSP-Greg commented Mar 9, 2018

We want to set up CI testing against the SketchUp Ruby API tests.

I thought that might be of interest. How would results be returned?

Anyway, I started with a fresh set of files (the tests and some files copied from testup-2), and it's almost presentable. Calling it SUMT (SketchUp Minitest). I'll try to post it later today or tomorrow am (-0600). Like a lot of code projects, it started small and grew...

Greg

@MSP-Greg
Contributor Author

MSP-Greg commented Mar 9, 2018

Screw it. Just posted the code at:

https://github.com/MSP-Greg/SUMT

The docs (with the README rendered correctly; GitHub markdown is somewhat restrictive) are at:

https://msp-greg.github.io/sumt/

Of interest - create a temp folder for testing, maybe C:/sumt_temp, and maybe one for logs C:/sumt_logs, then run:

SUMT.run td:'C:/sumt_temp'

Then check out all the artifacts left over from testing.

Note - to save the folders to config, issue:

SUMT.run td:'C:/sumt_temp', ld:'C:/sumt_logs', save_opts:true

Greg

@thomthom
Member

I'm not sure how the results will be consumed. I'll have to work with the QA department on that. Given how easy it is to create Minitest formatters, they can have it in whatever format they want.
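For illustration, a bare-bones custom reporter in stock Minitest 5 looks roughly like this (the class name and output format are made up, and the registration hook is commented as an assumption):

```ruby
require 'minitest'

# Bare-bones custom reporter: collects results, then prints a one-line
# summary. Subclasses of Minitest::AbstractReporter implement record
# (called per test) and report (called once at the end).
class SummaryReporter < Minitest::AbstractReporter
  def initialize(io = $stdout)
    super()
    @io = io
    @results = []
  end

  def record(result)
    @results << result
  end

  def report
    failures = @results.count { |r| !r.passed? && !r.skipped? }
    @io.puts "#{@results.size} runs, #{failures} failures"
  end
end

# Typically registered from a Minitest plugin's plugin_*_init hook,
# e.g. reporter << SummaryReporter.new (exact wiring depends on setup).
```

A JUnit-XML or JSON formatter for QA's tooling would follow the same two-method shape.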

@MSP-Greg
Contributor Author

Okay, so you're thinking file-based results. In a lot of CI, it's exit codes, but I've never checked whether that could be done running in SU; I assume it can't.

But, one could start a process that starts SU and the testing, and have the testing code pass the info back to the starting process, which could then use an exit code that CI would pick up. I think that made sense...

Also, if QA wants CI (which certainly sounds like a good idea), we may need to think about a means of classifying failures and errors such that they can hopefully stay 'as is' in the tests, but the classification would determine whether they're included in the results parsed by the CI.

An example might be some of the things that are currently failing in the tests, but not expected to be fixed in the next release. For CI, you need to account for those by having some means of ignoring their failures.

Should that be on the CI side or the test side?
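One possible test-side mechanism (purely a sketch; neither TestUp nor stock Minitest ships this helper) is a marker that converts a known failure into a skip, so the assertion stays in the test while CI ignores it, and the test fails loudly once the underlying bug is fixed:

```ruby
require 'minitest'
require 'minitest/test'

# Hypothetical marker for assertions expected to fail in the current
# release. CI sees a skip instead of a failure; if the bug gets fixed,
# the test flunks so the marker can be removed.
module KnownFailure
  def known_failure(reference)
    failed = begin
      yield
      false
    rescue Minitest::Assertion
      true
    end
    if failed
      skip "known failure: #{reference}"
    else
      flunk "#{reference} now passes - remove the known_failure marker"
    end
  end
end
```

In a test this would read `known_failure('SU-1234') { assert_equal expected, actual }`, keeping the real assertion in place.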

@thomthom
Member

I don't have all the requirements yet. I just set up a Project to start tracking all of this last Friday: https://github.com/SketchUp/testup-2/projects/2

@MSP-Greg
Contributor Author

Re testing & CI: I ran the following cmd file with a slight modification to SUMT.

start "" "C:\Program Files\SketchUp\SketchUp 2018_23\SketchUp.exe" -RubyStartup E:/GitHub/SUMT/lib/sumt/sumt_runner.rb
ruby E:\GitHub\SUMT\lib\sumt\udp_receiver.rb

Works as expected, in that SU output (via UDPReporter) is directed to the cmd window. It would be relatively easy to modify udp_receiver.rb to check for the finish summary string, then exit with 0 or the fail/error count. Not sure how to close SU; never looked into it.
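The exit-code check could look roughly like this (a sketch only; the summary format, port, and helper name are assumptions, not what UDPReporter actually emits):

```ruby
require 'socket'

# Sketch of the receiver side: parse the final Minitest-style summary
# line and derive a process exit code from it. The regex assumes the
# standard "N runs, N assertions, N failures, N errors" wording.
SUMMARY = /(\d+) runs, \d+ assertions, (\d+) failures, (\d+) errors/

def exit_code_for(line)
  return nil unless (m = line.match(SUMMARY))
  m[2].to_i + m[3].to_i # 0 when everything passed
end

# Usage (blocking loop; port 9998 is an arbitrary example):
# sock = UDPSocket.new
# sock.bind('127.0.0.1', 9998)
# loop do
#   line, = sock.recvfrom(4096)
#   puts line
#   code = exit_code_for(line)
#   exit(code) if code
# end
```

A CI runner invoking the cmd file would then see a nonzero exit status whenever the suite had failures or errors.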

Took a break, tired of Puma... Greg
