1.0.6.1-GA #18
Merged
manjudr merged 44 commits into Sunbird-Obsrv:main from Sanketika-Obsrv:open-source-release on Aug 20, 2024
Conversation
feat: added descriptions for default configurations
feat: modified kafka connector input topic
#0 - Refactor Dockerfile and GitHub Actions workflow
Co-authored-by: Santhosh Vasabhaktula <[email protected]>, ManojCKrishna <[email protected]>
default config for streaming tasks
* testing new images; build new image with bug fixes; update dockerfile
* #0 fix: upgrade packages
* #0 feat: add flink dockerfiles
* #0 fix: add individual extraction
* Issue #0 fix: upgrade ubuntu packages for vulnerabilities
* #0 fix: update github actions release condition
Co-authored-by: ManojKrishnaChintaluri <[email protected]>, Praveen <[email protected]>, Sowmya N Dixit <[email protected]>
Issue #33 feat: add documentation for Dataset, Datasources, Data In and Query APIs
Issue #2 feat: JDBC Connector - add connector config and connector stats update functions
feat: add function to get all datasets
* testing new images; build new image with bug fixes; update dockerfile
* #0 fix: upgrade packages; #0 feat: add flink dockerfiles; #0 fix: add individual extraction
* feat: update all failed, invalid and duplicate topic names; update kafka topic names in test cases
* feat: update failed event; Update ErrorConstants.scala
* Issue #0 fix: upgrade ubuntu packages for vulnerabilities
* feat: add exception handling for json deserialization; update invalid json exception handling; Update BaseProcessFunction.scala
* feat: update batch failed event generation; Update ExtractionFunction.scala
* Issue #46 feat: update batch failed event; add error reasons; add exception stack trace; fix: remove cloning object
* #0 fix: update github actions release condition
* Release 1.3.1 Changes (#42):
  * Dataset enhancements (#38): feat: add connector config and connector stats update functions; Issue #33 feat: add documentation for Dataset, Datasources, Data In and Query APIs; Update DatasetModels.scala
  * #0000 [SV] - fall back to a local redis instance if embedded redis does not start
  * #0000 - refactored the denormalization logic: do not fail the denormalization if the denorm key is missing; report clearly whether the denorm was successful, failed, or partially successful; handle denorm for both text and number fields (see the sketch after this list)
  * #0000 - refactor: created an enum for dataset status and ignore events if the dataset is not in Live status; created an output tag for denorm-failed stats; parse event validation failure messages into a case class
  * #0000 - refactor: updated the DruidRouter job to publish data to router topics dynamically; updated the framework to create a dynamicKafkaSink object
  * #0000 - mega refactoring: getAllDatasets and getAllDatasetSources now always query postgres; created BaseDatasetProcessFunction for all flink functions to extend, which dynamically resolves dataset config, initializes metrics and handles common failures; refactored serde by merging map and string serialization into one parameterized function; moved failed-event sinking into a common base class; the master dataset processor can now denormalize against another master dataset
  * #0000 - mega refactoring: added validation that the event has a timestamp key that is neither blank nor invalid; added timezone handling to store data in druid in the TZ specified by the dataset
  * #0000 - minor refactoring: renamed DatasetRegistry.getDatasetSourceConfig to getAllDatasetSourceConfig
  * #0000 - mega refactoring: refactored logs, error messages and metrics; fixed unit tests
  * #0000 - refactoring: introduced a transformation mode to enable lenient transformations; proper exception handling for the transformer job
  * #0000 - refactoring: upgraded embedded redis to work with macOS Sonoma on M2
  * #0000 - refactoring: test cases and bug fixes for the denormalizer, router, validator, framework, kafka connector and transformer; code coverage is now 100%
  * #0000 - refactoring: organize imports
* #000 feat: removed the provided scope of the kafka-client in the framework (#40)
* #0000 - feat: add dataset-type to system events (#41); modify tests for dataset-type; remove the unused getDatasetType function and unused pom test dependencies
* Main conflicts fixes (#44): resolve conflicts from Release 1.3.0 into Main branch (#34); Update DatasetModels.scala; Issue #2 feat: remove kafka connector code; feat: add function to get all datasets
Co-authored-by: shiva-rakshith, Aniket Sakinala, Manjunath Davanam, ManojKrishnaChintaluri, Praveen, Sowmya N Dixit, Santhosh, Anand Parthasarathy, Ravi Mula (all <[email protected]>)
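The denormalization refactor above reports success, partial success or failure instead of throwing when a denorm key is missing. A minimal sketch of that status logic in Scala, assuming hypothetical event and cache shapes rather than the actual Obsrv classes:

```scala
// Hypothetical stand-ins for the Obsrv denorm types; illustrative only.
sealed trait DenormStatus
case object DenormSuccess extends DenormStatus
case object DenormPartialSuccess extends DenormStatus
case object DenormFailed extends DenormStatus

object DenormSketch {

  // Resolve one denorm field; a missing or blank key yields None instead of an exception.
  def lookup(event: Map[String, Any], denormKey: String,
             cache: Map[String, Map[String, Any]]): Option[Map[String, Any]] =
    event.get(denormKey) match {
      case Some(v: String) if v.nonEmpty => cache.get(v)
      case Some(v: Number)               => cache.get(v.toString) // number keys handled like text
      case _                             => None                  // missing/blank key: do not fail
    }

  // Merge whatever resolved and classify the outcome as success/partial/failed.
  def denormalize(event: Map[String, Any], denormKeys: List[String],
                  cache: Map[String, Map[String, Any]]): (Map[String, Any], DenormStatus) = {
    val resolved = denormKeys.map(k => k -> lookup(event, k, cache))
    val merged   = resolved.collect { case (k, Some(data)) => s"${k}_denorm" -> data }.toMap
    val status =
      if (resolved.forall(_._2.isDefined)) DenormSuccess
      else if (resolved.exists(_._2.isDefined)) DenormPartialSuccess
      else DenormFailed
    (event ++ merged, status)
  }
}
```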
Release squash of the 1.3.1 changes listed above, plus:
* #0000 - fix: Fix null dataset_type in DruidRouterFunction (#48)
Co-authored-by: ManojKrishnaChintaluri, Praveen, shiva-rakshith, Sowmya N Dixit, Santhosh, Aniket Sakinala, Anand Parthasarathy, Ravi Mula (all <[email protected]>)
Squash repeating the 1.3.1 changes above, plus the system-configuration series:
* #67 feat: query system configurations from the meta store
* #67 fix: refactor system configuration retrieval and update the dynamic router function
* #67 fix: update system config according to review; update test cases and default values
* #67 fix: add a get-all-system-settings method and update test cases
* #67 fix: add a test case covering the exception path; fix data types in test cases
* #67 fix: refactor event indexing in DynamicRouterFunction
* Issue #67 refactor: SystemConfig read from DB implementation (see the sketch below)
* #226 fix: update test cases according to the refactor
Co-authored-by: Manjunath Davanam, ManojKrishnaChintaluri, shiva-rakshith, Sowmya N Dixit, Santhosh, Aniket Sakinala, Anand Parthasarathy (all <[email protected]>)
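The #67 series reads system configuration from the meta store, with defaults as a fallback. A hedged sketch of that read path, assuming a hypothetical system_settings table with key/value columns; the actual Obsrv schema and method names may differ:

```scala
import java.sql.Connection
import scala.collection.mutable

object SystemConfigSketch {

  // Read one system setting, falling back to a default when the key is
  // absent (mirrors the review feedback on default values in #67).
  def getSystemSetting(conn: Connection, key: String, default: String): String = {
    val ps = conn.prepareStatement("SELECT value FROM system_settings WHERE key = ?")
    try {
      ps.setString(1, key)
      val rs = ps.executeQuery()
      if (rs.next()) rs.getString("value") else default
    } finally ps.close()
  }

  // Read all settings at once, as added later in the #67 series.
  def getAllSystemSettings(conn: Connection): Map[String, String] = {
    val ps = conn.prepareStatement("SELECT key, value FROM system_settings")
    try {
      val rs  = ps.executeQuery()
      val buf = mutable.Map.empty[String, String]
      while (rs.next()) buf += (rs.getString("key") -> rs.getString("value"))
      buf.toMap
    } finally ps.close()
  }
}
```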
Squash repeating the 1.3.1 and #67 changes above, plus:
* Dataset Registry Update (#57): Issue #0000 feat: updateConnectorStats method includes the last-run timestamp; Issue #0000 fix: updateConnectorStats sql query updated
Co-authored-by: Manjunath Davanam, ManojKrishnaChintaluri, Praveen, shiva-rakshith, Sowmya N Dixit, Santhosh, Aniket Sakinala, Anand Parthasarathy, Shreyas Bhaktharam (all <[email protected]>)
Squash repeating the changes above, plus:
* Dataset Registry Update (#57), as above
* #0000 - fix: Fix Postgres connection issue with defaultDatasetID (#64)
* Metrics implementation for MasterDataIndexerJob (#55):
  * Issue #50 fix: Kafka metrics implementation for MasterDataIndexerJob; changed 'ets' to UTC; added log statements; fixed the update query; refactored the implementation of the 'createDataFile' method
  * Issue #50 test: test cases for MasterDataIndexer and data-products
  * Issue #50 fix: fixed the jackson-databind issue; code structure modifications, refactoring and formatting; modified class declaration; test case fixes; added missing tests
  * modified README file, then reverted the README and dataset-registry changes
* Remove the kafka connector, as it has moved to an independent repository
Signed-off-by: SurabhiAngadi <[email protected]>
Co-authored-by: Manjunath Davanam, ManojKrishnaChintaluri, Praveen, shiva-rakshith, Sowmya N Dixit, Santhosh, Aniket Sakinala, Anand Parthasarathy, Shreyas Bhaktharam, SurabhiAngadi (all <[email protected]>)
* Sanketika-Obsrv/issue-tracker#106 fix: Fix postgres connection issue with dataset read and handle errors while parsing the message
* Sanketika-Obsrv/issue-tracker#107 fix: Denorm job fix to handle the error when the denorm field node contains an empty value
* Sanketika-Obsrv/issue-tracker#106 fix: Review comments fix - changed the generic exception to the actual exception (NullPointer)
fix: #0000: update datasourceRef only if dataset has records
Sanketika-Obsrv/issue-tracker#180 fix: Datasource DB schema changes to include type. (#79)
Co-authored-by: sowmya-dixit <[email protected]>
* feat: Hudi Flink implementation; local working with metastore and localstack
* #0000 - feat: Hudi Sink implementation; initialize dataset RowType during job startup
* refactor: integrate the hudi connector with the dataset registry
* Sanketika-Obsrv/issue-tracker#141 refactor: enable timestamp-based partitioning; fix the Hudi connector job to handle an empty datasets list for lakehouse
* Sanketika-Obsrv/issue-tracker#141 fix: set timestamp-based partition configurations only if the partition key is of timestamp type (see the sketch below)
* Sanketika-Obsrv/issue-tracker#170 fix: resolve timestamp-based partitions without using TimestampBasedAvroKeyGenerator
* Sanketika-Obsrv/issue-tracker#177 fix: lakehouse connector flink job fixes; Dockerfile changes for hudi-connector; remove unused and commented code
Pipeline Bug fixes (#74) and Hudi connector flink job implementation (#80): squashed merge of the pipeline fixes and Hudi connector commits listed above.
Co-authored-by: Manjunath Davanam, SurabhiAngadi, Sowmya N Dixit, sowmya-dixit (all <[email protected]>)
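The lakehouse commits above apply timestamp-based key-generator settings only when the dataset's partition key is a timestamp. A sketch of that conditional, assuming a hypothetical DatasetPartition model; the hoodie.keygen.timebased.* option keys vary across Hudi versions, so treat them as illustrative:

```scala
import java.util.Properties

// Hypothetical partition descriptor; not the actual Obsrv dataset model.
case class DatasetPartition(field: String, fieldType: String, format: String = "yyyy-MM-dd")

object HudiPartitionConfigSketch {

  // Apply timestamp-based key-generator settings only when the partition
  // key is a timestamp, as described in issue-tracker#141.
  def partitionProps(p: DatasetPartition): Properties = {
    val props = new Properties()
    props.setProperty("hoodie.datasource.write.partitionpath.field", p.field)
    if (p.fieldType == "timestamp") {
      // Illustrative keys; exact names depend on the Hudi version in use.
      props.setProperty("hoodie.keygen.timebased.timestamp.type", "EPOCHMILLISECONDS")
      props.setProperty("hoodie.keygen.timebased.output.dateformat", p.format)
    }
    props
  }
}
```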
Sanketika-Obsrv/issue-tracker#228 Updated github actions for lakehouse job
…ractor job (#76) Co-authored-by: Santhosh Vasabhaktula <[email protected]>
…ort retire workflow
Develop to Main
* #OBS-I148: Migrated SQL queries to prepared statements to avoid SQL injection
* #OBS-I148: Removed unwanted imports
* #OBS-I148: System Config changes - converted raw queries to prepared statements
Co-authored-by: Ravi Mula <[email protected]>
This pull request migrates the raw SQL statements in the database interaction code to prepared statements. Parameterized queries close the SQL-injection vector and let the server reuse query plans, improving both security and performance.
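As a before/after illustration of the migration, a minimal JDBC sketch in Scala; updateConnectorStats, the table and the column names are hypothetical stand-ins for the queries touched in #OBS-I148:

```scala
import java.sql.{Connection, PreparedStatement}

object PreparedStatementSketch {

  // BEFORE (vulnerable): values are concatenated into the SQL string,
  // so a crafted datasetId can inject arbitrary SQL.
  def updateConnectorStatsRaw(conn: Connection, datasetId: String, records: Long): Int = {
    val sql  = s"UPDATE connector_stats SET records = records + $records WHERE dataset_id = '$datasetId'"
    val stmt = conn.createStatement()
    try stmt.executeUpdate(sql) finally stmt.close()
  }

  // AFTER: a parameterized query; the driver sends values separately from
  // the SQL text, which neutralizes injection and allows plan reuse.
  def updateConnectorStats(conn: Connection, datasetId: String, records: Long): Int = {
    val sql = "UPDATE connector_stats SET records = records + ? WHERE dataset_id = ?"
    val ps: PreparedStatement = conn.prepareStatement(sql)
    try {
      ps.setLong(1, records)
      ps.setString(2, datasetId)
      ps.executeUpdate()
    } finally ps.close()
  }
}
```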