
Creating Proper Test Cases #1

Open · chamilbuddhima opened this issue Nov 19, 2024 · 5 comments
chamilbuddhima commented Nov 19, 2024

No description provided.

chamilbuddhima self-assigned this Nov 19, 2024
chamilbuddhima commented:

Framework for Test Cases

Objectives

• Ensure all parts of the pipeline function correctly under various conditions.
• Handle unexpected issues (e.g. sensor failures, missing data) without crashing, recovering where possible or reporting clear error messages.

Key areas for testing

• Data synchronization - Verify that camera and LiDAR data align correctly.
• Real-time functionality - Make sure the system can handle live data without delays.
• Data export - Check that results are saved in the correct formats (e.g. depth maps, point clouds) and exported to the intended save location with proper file naming.
• Edge cases - Test how the system recovers from problems like sensor dropouts or corrupted data (a sketch follows this list).
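As an illustration of the edge-case area, here is a minimal pytest sketch; process_frame and its ValueError behaviour are hypothetical stand-ins for the real pipeline code, not something that exists in this repository.

```python
import pytest

def process_frame(frame: bytes) -> bytes:
    # Hypothetical stand-in for a pipeline step: reject corrupted input loudly.
    if len(frame) < 4:
        raise ValueError("corrupted or truncated frame")
    return frame

def test_corrupted_frame_raises_clear_error():
    # Edge case: a truncated frame should produce a clear error, not a crash.
    with pytest.raises(ValueError, match="corrupted"):
        process_frame(b"\x00")

def test_valid_frame_passes_through():
    frame = b"\x01\x02\x03\x04"
    assert process_frame(frame) == frame
```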

Implementation plan

• Design test cases - Write tests for normal, stress, and failure scenarios.
• Automate testing - Use scripts to run tests and log results (a possible runner is sketched below).
• Document results - Record test details and results in a shared repository on GitHub.
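One possible shape for the automation step, assuming tests live under tests/ and logs go into a directory that can be pushed to the shared GitHub repository; all paths and names here are illustrative.

```python
import datetime
import pathlib
import subprocess

def run_tests(log_dir: str = "test_logs") -> int:
    # Run pytest, keeping a timestamped console log and a JUnit XML report.
    logs = pathlib.Path(log_dir)
    logs.mkdir(exist_ok=True)
    stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    result = subprocess.run(
        ["pytest", "tests/", f"--junitxml={logs / f'report_{stamp}.xml'}"],
        capture_output=True,
        text=True,
    )
    (logs / f"run_{stamp}.log").write_text(result.stdout + result.stderr)
    return result.returncode  # 0 means every test passed

if __name__ == "__main__":
    raise SystemExit(run_tests())
```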


chamilbuddhima commented Nov 29, 2024

Choosing a Testing Framework

Decided to go with pytest after comparing it with unittest.

Reasons:
• pytest discovers tests automatically based on file names, while unittest requires creating test-case classes and following its naming conventions.
• When a test fails, pytest gives a more detailed, informative report on why it failed.
• pytest's built-in support for parameterized testing allows efficient testing of multiple input combinations, significantly reducing redundancy; this is not natively available in unittest and requires additional setup (illustrated after this list).
• pytest includes numerous built-in conveniences such as monkeypatching (modifying code at runtime for testing), temporary-directory handling, and more, often reducing the need for external libraries.
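For instance, this is what the parameterized-testing point looks like in practice; make_filename is a made-up helper used only to give @pytest.mark.parametrize something to exercise.

```python
import pytest

def make_filename(stem: str, fmt: str) -> str:
    # Made-up helper, only here to demonstrate parametrization.
    return f"{stem}.{fmt}"

@pytest.mark.parametrize(
    "stem, fmt, expected",
    [
        ("scan", "ply", "scan.ply"),
        ("scan", "pcd", "scan.pcd"),
        ("depth", "png", "depth.png"),
    ],
)
def test_make_filename(stem, fmt, expected):
    # A single test function runs once per (stem, fmt, expected) row above.
    assert make_filename(stem, fmt) == expected
```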

chamilbuddhima commented:

PyTest examples

1. Data export
Verify that results are saved in the correct format and location.

  • Normal case - Test that the function saves a file in the correct format, to the specified directory, and with the expected name.
  • Failure case - Test that the function raises an error when asked to save in an unsupported file format (both cases are sketched below).
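A sketch of both cases, assuming a save_results function with this signature (the real export code may differ); tmp_path is pytest's built-in temporary-directory fixture.

```python
import pytest

SUPPORTED_FORMATS = {"ply", "pcd", "png"}

def save_results(data: bytes, out_dir, name: str, fmt: str):
    # Hypothetical exporter standing in for the real pipeline's save function.
    if fmt not in SUPPORTED_FORMATS:
        raise ValueError(f"unsupported format: {fmt}")
    path = out_dir / f"{name}.{fmt}"
    path.write_bytes(data)
    return path

def test_export_normal_case(tmp_path):
    # Normal case: file lands in the given directory with the expected name.
    path = save_results(b"points", tmp_path, "frame_001", "ply")
    assert path == tmp_path / "frame_001.ply"
    assert path.read_bytes() == b"points"

def test_export_unsupported_format(tmp_path):
    # Failure case: an unsupported format must raise, not silently write junk.
    with pytest.raises(ValueError, match="unsupported"):
        save_results(b"points", tmp_path, "frame_001", "exe")
```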

2. Real-time data handling
Test the ability to process live data streams without delays.

  • Normal case - Simulate live data with acceptable processing delays; the function should process the data without raising an error.
  • Failure case - Simulate live data with excessive delays; the function should raise a TimeoutError or otherwise handle the issue (sketched below).
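And a sketch of the real-time cases, with an assumed per-frame latency budget; process_live_frame is illustrative, not the actual pipeline entry point.

```python
import time
import pytest

MAX_DELAY_S = 0.05  # Assumed latency budget for one frame, for illustration.

def process_live_frame(produce_frame):
    # Hypothetical stand-in: time a single frame and enforce the budget.
    start = time.monotonic()
    frame = produce_frame()
    if time.monotonic() - start > MAX_DELAY_S:
        raise TimeoutError("frame arrived too late")
    return frame

def test_live_data_within_budget():
    # Normal case: a fast producer is processed without error.
    assert process_live_frame(lambda: b"frame") == b"frame"

def test_live_data_excessive_delay():
    # Failure case: a slow producer should trigger TimeoutError.
    def slow():
        time.sleep(MAX_DELAY_S * 2)
        return b"frame"
    with pytest.raises(TimeoutError):
        process_live_frame(slow)
```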

chamilbuddhima commented:

Example 1

[Screenshots in the original issue show the code under test, the pytest test code, and the results after running.]


chamilbuddhima commented Dec 10, 2024

Test case example for MVP

Verify Image Conversion
Ensure the RGB to BGR conversion logic works as expected.
• Send a sample Image message with known RGB values and verify that the output Image message has correctly converted BGR values (a sketch follows).

Code to be tested - imageconversion.py
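A minimal sketch of that test using plain NumPy arrays in place of ROS Image messages; the rgb_to_bgr stand-in mirrors what imageconversion.py presumably does (a channel swap) and is an assumption here, not the module's actual API.

```python
import numpy as np

def rgb_to_bgr(image: np.ndarray) -> np.ndarray:
    # Stand-in for the conversion in imageconversion.py: reverse channel order.
    return image[..., ::-1]

def test_rgb_to_bgr_swaps_channels():
    # One pure-red, one pure-green, and one pure-blue pixel with known values.
    rgb = np.array([[[255, 0, 0], [0, 255, 0], [0, 0, 255]]], dtype=np.uint8)
    bgr = rgb_to_bgr(rgb)
    expected = np.array([[[0, 0, 255], [0, 255, 0], [255, 0, 0]]], dtype=np.uint8)
    assert np.array_equal(bgr, expected)
```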
