Our unit test coverage isn't high. That's not necessarily a bad thing, though. It allows us to develop faster, and if there's a problem, we typically catch it quickly because of Bugsnag error monitoring. Still, I find myself manually testing our Streamlit apps when, for example, refactoring internal code (such as the recent SQLAlchemy upgrade). It's also frustrating for our users when things don't work, and no one is around to fix them.
I don't think that blindly increasing unit test coverage is the answer. The problem is that Streamlit apps are complex, and they interact with databases, which are non-trivial to mock. Is there any low-hanging fruit that would increase our coverage, keep tests simple (without requiring changes with every update), and be non-invasive? Some ideas:
- Spin up a real MySQL database from our S3 backup and run integration tests against it (first sketch below).
- Fill a MySQL database with mock data automatically from our SQLAlchemy ORM and run integration tests on it (second sketch below).
- Test that all Wizard apps can start without an error (in the past, we've had import errors and trivial initialization issues that this would have detected; third sketch below).
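For the first idea, here's a minimal sketch of what I'm picturing, using `testcontainers` to start a disposable MySQL (requires Docker on the test machine). The dump path, the naive statement splitting, and the `orders` table are all placeholders; in practice we'd pipe the dump pulled from S3 through the `mysql` CLI client instead:

```python
# conftest.py -- names and paths here are illustrative, not our real layout.
import pytest
import sqlalchemy
from testcontainers.mysql import MySqlContainer

BACKUP_DUMP = "backup.sql"  # hypothetical: a dump already pulled down from the S3 backup


@pytest.fixture(scope="session")
def backup_engine():
    """Start a throwaway MySQL container and load the backup into it."""
    with MySqlContainer("mysql:8.0") as mysql:
        engine = sqlalchemy.create_engine(mysql.get_connection_url())
        # Naive loader for the sketch: split the dump on statement boundaries.
        # A real dump is better piped through the `mysql` CLI client.
        with engine.begin() as conn, open(BACKUP_DUMP) as dump:
            for statement in dump.read().split(";\n"):
                if statement.strip():
                    conn.execute(sqlalchemy.text(statement))
        yield engine


def test_backup_has_data(backup_engine):
    # `orders` is a stand-in table name for whatever we'd actually assert on.
    with backup_engine.connect() as conn:
        rows = conn.execute(sqlalchemy.text("SELECT COUNT(*) FROM orders")).scalar()
    assert rows > 0
```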
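For the second idea, a sketch assuming our declarative `Base` and a `User` model live in `ourapp.models` (module, model, and the connection URL are hypothetical). The ORM metadata builds the full schema and a session seeds deterministic mock rows, pointed at whatever scratch MySQL we use, e.g. the container from the previous sketch:

```python
import pytest
import sqlalchemy
from sqlalchemy.orm import Session

from ourapp.models import Base, User  # hypothetical module and model

# Placeholder URL: a scratch database, or the testcontainers URL from above.
MYSQL_URL = "mysql+pymysql://test:test@localhost/test"


@pytest.fixture()
def seeded_engine():
    """Build the schema from the ORM metadata and seed mock rows."""
    engine = sqlalchemy.create_engine(MYSQL_URL)
    Base.metadata.create_all(engine)
    with Session(engine) as session:
        session.add_all(User(name=f"user-{i}") for i in range(10))
        session.commit()
    yield engine
    Base.metadata.drop_all(engine)  # leave the scratch DB clean between runs
```

The nice property here is that the schema tracks the ORM automatically, so the fixture shouldn't need changes with every model update.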
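For the third idea, Streamlit's built-in testing framework (`streamlit.testing.v1.AppTest`, available since 1.28) makes this close to free, assuming each Wizard app has an entry-point script under `apps/` (hypothetical layout):

```python
from pathlib import Path

import pytest
from streamlit.testing.v1 import AppTest

APP_SCRIPTS = sorted(Path("apps").glob("*.py"))  # hypothetical: one entry point per app


@pytest.mark.parametrize("script", APP_SCRIPTS, ids=lambda p: p.name)
def test_app_starts_cleanly(script):
    """Execute each app's script once and fail on any uncaught exception."""
    at = AppTest.from_file(str(script)).run()
    # at.exception collects exception elements rendered during the run.
    assert not at.exception, f"{script.name} raised on startup: {at.exception}"
```

Even without asserting anything about the UI, this alone would have caught the import errors and trivial initialization issues mentioned above.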
It'd be interesting to hear others' thoughts. As I mentioned, it's also possible that testing just isn't worth our time and we should focus on something else.