Test script for updateRecommendation API #836
Conversation
@msvinaykumar - Please include functional tests for updateRecommendations. I have listed a few scenarios below; add anything that is missing. Test scenarios for updateRecommendations, invoking updateRecommendations as below:
- Post a valid experiment name and a valid end time, and validate that the recommendations are as expected. (What is the valid range for the end time?)
- Validate the status code and expected error messages for the negative cases below.
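A minimal sketch of how these scenarios could be table-driven; the `expected_validation` helper, status codes, and messages below are illustrative stand-ins for the API's mandatory-parameter behavior, not the suite's actual constants.

```python
# Illustrative stand-in for the API's mandatory-parameter validation;
# the status codes and messages are assumptions, not Kruize's actual output.
def expected_validation(experiment_name, interval_end_time):
    """Return the (status_code, message) a request with these params should get."""
    if experiment_name is None:
        return 400, "experiment_name is mandatory"
    if interval_end_time is None:
        return 400, "interval_end_time is mandatory"
    return 200, "Recommendations generated successfully"

# Scenario table: (experiment_name, interval_end_time, expected status code)
SCENARIOS = [
    (None, None, 400),                                    # everything missing
    ("quarkus-demo_0", None, 400),                        # end time missing
    (None, "2023-01-02T00:00:00.000Z", 400),              # experiment name missing
    ("quarkus-demo_0", "2023-01-02T00:00:00.000Z", 200),  # valid request
]

for name, end_time, expected in SCENARIOS:
    status, _ = expected_validation(name, end_time)
    assert status == expected
```

The real tests would of course drive these cases through `update_recommendations()` against a running Kruize instance rather than a local helper.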
assert notification["type"] != "error"

response = update_recommendations(experiment_name, interval_start_time, end_time)
data = response.json()
@msvinaykumar - Please include validation of the json response of update_recommendations for all the success cases
I think we should add jsonschema validation for the output of the updateRecommendations JSON too. Please include this in all tests where the updateRecommendations API is invoked.
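A minimal sketch of what such a jsonschema check could look like, using the `jsonschema` package; the schema fragment here is hypothetical, and the real one should mirror what the listRecommendations tests already validate against.

```python
# Hypothetical schema fragment for the updateRecommendations response;
# the real schema should mirror the listRecommendations json schema.
from jsonschema import Draft7Validator

update_reco_schema = {
    "type": "array",
    "items": {
        "type": "object",
        "required": ["experiment_name", "kubernetes_objects"],
        "properties": {
            "experiment_name": {"type": "string"},
            "kubernetes_objects": {"type": "array"},
        },
    },
}

def validate_update_reco_json(payload):
    """Return a list of validation error messages (empty means valid)."""
    return [err.message for err in Draft7Validator(update_reco_schema).iter_errors(payload)]

# A well-formed response yields no errors; a malformed one is reported.
assert validate_update_reco_json([{"experiment_name": "quarkus-demo_0", "kubernetes_objects": []}]) == []
assert validate_update_reco_json([{}]) != []
```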
done
@pytest.mark.sanity
def test_update_valid_recommendations_after_results_after_create_exp(cluster_type):
    input_json_file = "../json_files/create_exp.json"
Include a description for each of the added tests, along with the expected behavior.
done
# Sleep for 1 sec to get recommendations
time.sleep(1)

response = update_recommendations(experiment_name, interval_start_time, end_time)
Please update all the tests in test_list_recommendations.py to invoke update_recommendations
done
response = update_recommendations(None, None, None)
data = response.json()
assert response.status_code == ERROR_STATUS_CODE
assert data['message'] == UPDATE_RECOMMENDATIONS_MANDATORY_DEFAULT_MESSAGE
Shouldn't the error messages returned for the negative tests match what is mentioned in monitoringAPI.md? #854
done
response = update_recommendations(experiment_name, None, None)
data = response.json()
assert response.status_code == ERROR_STATUS_CODE
assert data['message'] == UPDATE_RECOMMENDATIONS_MANDATORY_INTERVAL_END_DATE
Here the experiment_name doesn't exist, I think. Will it still return an error message related to the interval end date, or would it return that the experiment_name is invalid?
Update accordingly. In general it will display 'data not found'.
response = update_recommendations(experiment_name, start_time, end_time)
data = response.json()
assert response.status_code == ERROR_STATUS_CODE
assert data['message'] == UPDATE_RECOMMENDATIONS_START_TIME_PRECEDE_END_TIME
Please update the API doc with this error message
Can you please share the results for all these tests.
Include these tests too:
- Test with boundary values for the timestamps; I see 15 days as the acceptable range in the md file.
- Include tests with an invalid time format for the start and end times.
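A sketch of the boundary and format checks these tests would exercise; the 15-day window and the timestamp format are taken from the md file and the example commands in this PR, not from the Kruize source, so treat them as assumptions.

```python
# Assumed: ISO-8601 timestamps with milliseconds and a trailing Z, and a
# maximum 15-day recommendation window, per the md file referenced above.
from datetime import datetime, timedelta

TIME_FMT = "%Y-%m-%dT%H:%M:%S.%fZ"

def interval_is_valid(start, end, max_window=timedelta(days=15)):
    """True when both timestamps parse, start precedes end, and the window fits."""
    try:
        s = datetime.strptime(start, TIME_FMT)
        e = datetime.strptime(end, TIME_FMT)
    except (ValueError, TypeError):
        return False
    return s < e and (e - s) <= max_window

assert interval_is_valid("2023-01-01T00:00:00.000Z", "2023-01-16T00:00:00.000Z")      # exactly 15 days
assert not interval_is_valid("2023-01-01T00:00:00.000Z", "2023-01-17T00:00:00.000Z")  # over the limit
assert not interval_is_valid("2023-01-01 00:00:00", "2023-01-02T00:00:00.000Z")       # bad format
```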
Done
Can you please share the results for all these tests.
https://drive.google.com/drive/folders/17e1KE3eZKi8Waf3o7uvlMQ1SqJC8cpfW?usp=sharing
@@ -165,12 +185,13 @@ def test_list_recommendations_invalid_exp(cluster_type):
    response = delete_experiment(input_json_file)
    print("delete exp = ", response.status_code)


@pytest.mark.sanity
def test_list_recommendations_without_results(cluster_type):
Invoke the updateRecommendations API in this test too, to check its behavior when no results exist.
@@ -112,7 +131,7 @@ def test_list_recommendations_without_parameters(cluster_type):
    # Validate the json values
    create_exp_json = read_json_data_from_file(input_json_file)
    update_results_json = []
-    update_results_json.append(result_json_arr[len(result_json_arr)-1])
+    update_results_json.append(result_json_arr[len(result_json_arr) - 1])
Does validate_reco_json work with this update_results_json?
assert data['message'] == "Bulk entries are currently unsupported!"
assert response.status_code == SUCCESS_STATUS_CODE
assert data['status'] == SUCCESS_STATUS
assert data['message'] == UPDATE_RESULTS_SUCCESS_MSG
Also update the below code in this test to invoke the updateRecommendations API and validate recommendations using the update results json instead of None.
# Uncomment the below lines when bulk entries are allowed
# update_results_json = read_json_data_from_file(result_json_file)
# Since bulk entries are not supported passing None for update results json
update_results_json = None
validate_reco_json(create_exp_json[0], update_results_json, list_reco_json[0])
@@ -453,17 +471,23 @@ def test_update_results_with_same_result(cluster_type):
    response = update_results(result_json_file)

    data = response.json()
-    assert response.status_code == ERROR_409_STATUS_CODE
+    assert response.status_code == ERROR_STATUS_CODE
Have we changed the response code to 400 now when a result with an existing timestamp is posted again? We will need to discuss this with Suraj too, as they might have some logic based on the 409 error code. Also, if there is a change in the error code, we will need to update UpdateResults.md.
assert response.status_code == ERROR_STATUS_CODE
assert data['status'] == ERROR_STATUS
assert data['message'] == 'Out of a total of 1 records, 1 failed to save'
assert data['data'][0]['errorReasons'][0] == UPDATE_RESULTS_DATE_PRECEDE_ERROR_MSG
UpdateResults.md needs to be updated with the errorReasons and error messages
@@ -18,359 +18,511 @@
import datetime
import requests
import argparse
import multiprocessing
@msvinaykumar - As discussed, please raise a separate PR for this on top of PR 832
@msvinaykumar - Tests did not run completely; the extended and negative tests were not run due to the below error. Did it work for you? Looking into this.
Cmd used:
Error:
The grep at line 124 seems to be matching these prints in the test output - "Out of a total of 1 records, 1 failed to save"
@@ -651,6 +708,7 @@ def test_list_recommendations_multiple_exps_with_missing_metrics(cluster_type):
    response = delete_experiment(create_exp_json_file)
    print("delete exp = ", response.status_code)


@pytest.mark.extended
@pytest.mark.parametrize("latest", ["true", "false"])
def test_list_recommendations_with_only_latest(latest, cluster_type):
@msvinaykumar - rest_apis/test_list_recommendations.py::test_list_recommendations_with_only_latest[false] this test failed with the below error, can you please check
# Validate the json against the json schema
errorMsg = validate_list_reco_json(list_reco_json, list_reco_json_schema)
> assert errorMsg == ""
E assert [<ValidationError: "'monitoring_start_time' is a required property">, <ValidationError: "'monitoring_end_time' is a re...Error: "'variation' is a required property">, <ValidationError: "'monitoring_start_time' is a required property">, ...] == ''
test_list_recommendations.py:805: AssertionError
@msvinaykumar does this need a rebase as well?
…ous uploading of results and recommendations. Additionally, we have introduced extra flags to selectively post results or update recommendations only. Moreover, it is now possible to specify a start date as well. Signed-off-by: msvinaykumar <[email protected]>
@msvinaykumar - Please check the timestamp format in the error message: {"message":"Out of a total of 1 records, 1 failed to save","httpcode":400,"documentationLink":"","status":"ERROR","data":[{"interval_start_time":"Aug 17, 2023, 4:35:46 PM","interval_end_time":"Aug 17, 2023, 4:50:46 PM",
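The error message echoes timestamps in a locale-style format rather than the ISO form used by --startdate elsewhere in this thread. A quick sanity check of the conversion; the format strings are my reading of the example message, not Kruize's actual formatter settings.

```python
from datetime import datetime

# Assumed locale-style format seen in the error message, e.g. "Aug 17, 2023, 4:35:46 PM".
LOCALE_FMT = "%b %d, %Y, %I:%M:%S %p"

ts = datetime.strptime("Aug 17, 2023, 4:35:46 PM", LOCALE_FMT)
# Rebuild the ISO form with millisecond precision and a trailing Z.
iso = ts.strftime("%Y-%m-%dT%H:%M:%S.") + f"{ts.microsecond // 1000:03d}Z"
assert iso == "2023-08-17T16:35:46.000Z"
```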
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
LGTM
Added the updateRecommendation API to the scale test and enabled simultaneous uploading of results and recommendation updates. Additionally, extra flags have been introduced to selectively post results or update recommendations only. Moreover, it is now possible to specify a start date.
Syntax example:
python3 -u quickTestScalability.py --ip master-0.kruizevin.lab.psi.pnq2.redhat.com --port 30792 --count 1,1440 --minutesjump=15 --generaterecommendation --parallelresultcount 1 --startdate=2023-01-01T00:00:00.000Z --name temp24Hrs3 --postresults
The above example creates an experiment named temp24Hrs3_1 with 15 days of result data and also triggers the updateRecommendations API.
Signed-off-by: msvinaykumar [email protected]
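The flag names below come from the example command above; their types, defaults, and help strings are assumptions for illustration, not quickTestScalability.py's actual argument definitions.

```python
import argparse

# Sketch of the scale-script CLI; flag names match the example command,
# but types and defaults are illustrative assumptions.
parser = argparse.ArgumentParser(description="quickTestScalability.py options (sketch)")
parser.add_argument("--ip", required=True, help="Kruize service host")
parser.add_argument("--port", type=int, required=True, help="Kruize service port")
parser.add_argument("--count", help="experiments,results - e.g. '1,1440'")
parser.add_argument("--minutesjump", type=int, help="minutes between result intervals")
parser.add_argument("--parallelresultcount", type=int, default=1)
parser.add_argument("--startdate", help="first interval start, e.g. 2023-01-01T00:00:00.000Z")
parser.add_argument("--name", help="experiment name prefix")
parser.add_argument("--postresults", action="store_true", help="post results")
parser.add_argument("--generaterecommendation", action="store_true",
                    help="trigger updateRecommendations")

args = parser.parse_args(
    "--ip master-0.kruizevin.lab.psi.pnq2.redhat.com --port 30792 --count 1,1440 "
    "--minutesjump=15 --generaterecommendation --parallelresultcount 1 "
    "--startdate=2023-01-01T00:00:00.000Z --name temp24Hrs3 --postresults".split()
)
exp_count, result_count = (int(x) for x in args.count.split(","))
assert (exp_count, result_count, args.minutesjump) == (1, 1440, 15)
assert args.generaterecommendation and args.postresults
```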