[EPIC]: Track migration process and display it in a dashboard #2074
During team discussion we talked about this in the context of refreshing the inventory, which is slightly different to the proposed solution above but maybe represents an acceptable alternative. A complication that we discussed is that in the current crawling code there's not always a clear separation between inventory and findings: sometimes findings are attached to inventory items during the crawling process and these are stored in the same table. Teasing out how to separate these is likely to be quite time-consuming.
## Changes

This PR is stacked on top of #2402 and further refactors the crawler snapshotting pattern to use common code without changing the existing system behaviour. This is intended as groundwork for the next step, whereby `snapshot()` will be updated to support refreshing the inventory.

### Linked issues

Progresses #2074, stacked on top of #2402.

### Tests

- existing unit tests (with minor updates)
- existing integration tests
After some more deliberation, we've narrowed the requirements for this a bit. The way this feature is going to work, in blunt terms:
There's still an open question about the groups and permissions crawlers: these can be time-consuming and are intended to be an early (and relatively simple) part of the migration process. Once that part is complete, is there any point in re-running those crawlers as part of the assessment? At the moment I'm leaning towards not re-running them by default, but I think there needs to be a configuration option to include them.
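A minimal sketch of what such an opt-in flag could look like; the flag name and its placement on the installation config are assumptions for illustration only, not the agreed design:

```python
from dataclasses import dataclass


@dataclass
class WorkspaceConfig:
    """Illustrative subset of the installation config (not the real class definition)."""

    inventory_database: str
    # Hypothetical opt-in flag: when True, the groups and permissions crawlers are
    # re-run as part of the progress-tracking workflow; by default they are skipped
    # because their output is not expected to change once groups have been migrated.
    include_group_and_permission_crawlers: bool = False
```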
The idea of keeping a history so that progress can be tracked still needs to be refined.
…#2402)

## Changes

This PR:

- Refactors the existing permissions crawler so that it uses the same fetch/crawl/snapshot pattern as the rest of the crawlers.
- Removes the permissions crawler's bespoke `.cleanup()` implementation in favour of the `.reset()` method already available on the base class.
- Removes some dead code from the permissions crawler: `.load_for_all()`.
- Specifies `.snapshot()` as an interface (on `CrawlerBase`) that all crawlers need to support.

### Linked issues

Progresses #2074 by laying the groundwork for crawlers to support refreshing. Note that this PR and #2392 have a behaviour conflict that will trigger a failure in `test_runtime_crawl_permissions` once either is merged; resolving this is trivial.

### Functionality

- modified existing workflow: `assessment`

### Tests

- [X] updated unit tests
- [X] updated integration tests
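For readers unfamiliar with the pattern being converged on, here is a rough sketch of the fetch/crawl/snapshot shape described above. This is a simplified illustration, not the actual `CrawlerBase` implementation; method names and signatures in the real code may differ, and the backend object is assumed to expose generic `execute`/`save_table` helpers:

```python
from abc import ABC, abstractmethod
from collections.abc import Iterable


class CrawlerBase(ABC):
    """Simplified sketch of the shared crawler contract (illustrative only)."""

    def __init__(self, backend, schema: str, table: str):
        self._backend = backend  # assumed to expose execute() and save_table()
        self._schema = schema
        self._table = table

    @abstractmethod
    def _try_fetch(self) -> Iterable:
        """Read previously crawled rows from the inventory table, if any."""

    @abstractmethod
    def _crawl(self) -> Iterable:
        """Scan the workspace and yield freshly crawled records."""

    def reset(self) -> None:
        """Drop existing rows so that the next snapshot() re-crawls from scratch."""
        self._backend.execute(f"DELETE FROM {self._schema}.{self._table}")

    def snapshot(self) -> list:
        """Return the cached inventory when present, otherwise crawl and persist it."""
        cached = list(self._try_fetch())
        if cached:
            return cached
        records = list(self._crawl())
        self._backend.save_table(f"{self._schema}.{self._table}", records, mode="append")
        return records
```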
…run. (#2560)

## Changes

Following on from some discussions yesterday, this PR updates the project documentation to clarify that the `assessment` job is only intended to be run once and should not be re-run to refresh the inventory and/or findings. The intent is that UCX should be reinstalled first if the assessment needs to be re-run.

### Linked issues

As part of #2074 a new (daily) job will be introduced that refreshes parts of the inventory; however the `assessment` job will remain as-is. (These documentation changes are intended to reflect the status quo.)

### Functionality

- updated user documentation
The top comment has been updated according to our design session. The other comments in between the top and this one might be outdated.
## Changes

This PR updates the crawler internals so that a refresh can be forced when requesting a snapshot. In addition, the default semantics for crawlers have been updated so that they use an overwrite operation to write the data rather than appending (with the assumption that the table is already empty).

### Linked issues

- Progresses: #2074
- Resolves: #2576
- Closes: #2597

### Functionality

- modifies all existing crawlers

### Tests

- updated unit tests
- existing integration tests
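Building on the sketch above, the updated semantics described in this PR could look roughly like this (illustrative only; the `force_refresh` keyword follows the PR description, but the exact signature and the `mode="overwrite"` call are assumptions):

```python
class CrawlerBase:  # continuing the simplified sketch from earlier (illustrative only)
    def snapshot(self, *, force_refresh: bool = False) -> list:
        """Return the inventory, optionally forcing a re-crawl.

        With force_refresh=True the cached rows are ignored and the fresh crawl
        result overwrites the inventory table instead of appending to it.
        """
        if not force_refresh:
            cached = list(self._try_fetch())
            if cached:
                return cached
        records = list(self._crawl())
        # Overwrite rather than append: safe to run repeatedly, e.g. from a daily job.
        self._backend.save_table(f"{self._schema}.{self._table}", records, mode="overwrite")
        return records
```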
## Changes

This is a refactoring PR that renames some of the classes used for tracking the state of table migration. This is partly to avoid confusion with the upcoming migration progress code, but also to better reflect the scope of responsibility.

### Linked issues

Relates to #2074.

### Functionality

- Functionality is not intended to change. The existing `migration_status` table has been left alone.

### Tests

- Existing unit and integration tests.
## Changes

This PR introduces a new workflow, `migration-progress-experimental`, along with a new `migration-progress` CLI command for triggering it. This new pipeline duplicates a lot (but not all) of the `assessment` pipeline and refreshes the inventory for:

- Clusters
- Grants
- Jobs
- Pipelines
- Policies
- Tables[^1]
- TableMigrationStatus[^2]
- UDFs

This job will eventually run on a schedule; that is out of scope for this PR.

### Linked issues

Resolves #2574, progresses #2074.

### Functionality

- [x] added relevant user documentation
- [X] added new CLI command: `databricks labs ucx migration-progress`
- [X] added a new workflow: `migration-progress-experimental`

### Tests

- [X] added unit tests
- [X] added integration tests
- [X] verified on staging environment (screenshot attached)

Staging screenshot: <img width="832" alt="Successful run of the migration-progress-experimental workflow" src="https://github.com/user-attachments/assets/7ba2e4e3-daff-4c6b-a7bb-fc7bbae22c1f">

[^1]: Differs from `assessment` by using `TablesCrawler`: the Scala-based crawler is about to be replaced by a new implementation, but that's not ready yet.
[^2]: Not part of the `assessment` workflow.
* Added Py4j implementation of tables crawler to retrieve a list of HMS tables in the assessment workflow ([#2579](#2579)). In this release, we have added a Py4j implementation of a tables crawler to retrieve a list of Hive Metastore tables in the assessment workflow. A new `FasterTableScanCrawler` class has been introduced, which can be used in the Assessment Job based on a feature flag to replace the old Scala code, allowing for better logging during table scans. The existing `assessment.crawl_tables` workflow now utilizes the new py4j crawler instead of the scala one. Integration tests have been added to ensure the functionality works correctly. The commit also includes a new method for listing table names in the specified database and improvements to error handling and logging mechanisms. The new Py4j tables crawler enhances the functionality of the assessment workflow by improving error handling, resulting in better logging and faster table scanning during the assessment process. This change is part of addressing issue [#2190](#2190) and was co-authored by Serge Smertin. * Added `create-ucx-catalog` cli command ([#2694](#2694)). A new CLI command, `create-ucx-catalog`, has been added to create a catalog for migration tracking that can be used across multiple workspaces. The command creates a UCX catalog for tracking migration status and artifacts, and is created by running `databricks labs ucx create-ucx-catalog` and specifying the storage location for the catalog. Relevant user documentation, unit tests, and integration tests have been added for this command. The `assign-metastore` command has also been updated to allow for the selection of a metastore when multiple metastores are available in the workspace region. This change improves the migration tracking feature and enhances the user experience. * Added experimental `migration-progress-experimental` workflow ([#2658](#2658)). This commit introduces an experimental workflow, `migration-progress-experimental`, which refreshes the inventory for various resources such as clusters, grants, jobs, pipelines, policies, tables, TableMigrationStatus, and UDFs. The workflow can be triggered using the `databricks labs ucx migration-progress` CLI command and uses a new implementation of a Scala-based crawler, `TablesCrawler`, which will eventually replace the current implementation. The new workflow is a duplicate of most of the `assessment` pipeline's functionality but with some differences, such as the use of `TablesCrawler`. Relevant user documentation has been added, along with unit tests, integration tests, and a screenshot of a successful staging environment run. The new workflow is expected to run on a schedule in the future. This change resolves [#2574](#2574) and progresses [#2074](#2074). * Added handling for `InternalError` in `Listing.__iter__` ([#2697](#2697)). This release introduces improved error handling in the `Listing.__iter__` method of the `Generic` class, located in the `workspace_access/generic.py` file. Previously, only `NotFound` exceptions were handled, but now both `InternalError` and `NotFound` exceptions are caught and logged appropriately. This change enhances the robustness of the method, which is responsible for listing objects of a specific type and returning them as `GenericPermissionsInfo` objects. To ensure the correct functionality, we have added new unit tests and manual testing. 
The logging of the `InternalError` exception is properly handled in the `GenericPermissionsSupport` class when listing serving endpoints. This behavior is verified by the newly added test function `test_internal_error_in_serving_endpoints_raises_warning` and the updated `test_serving_endpoints_not_enabled_raises_warning`. * Added handling for `PermissionDenied` when listing accessible workspaces ([#2733](#2733)). A new `can_administer` method has been added to the `Workspaces` class in the `workspaces.py` file, which allows for more fine-grained control over which users can administer workspaces. This method checks if the user has access to a given workspace and is a member of the workspace's `admins` group, indicating that the user has administrative privileges for that workspace. If the user does not have access to the workspace or is not a member of the `admins` group, the method returns `False`. Additionally, error handling in the `get_accessible_workspaces` method has been improved by adding a `PermissionDenied` exception to the list of exceptions that are caught and logged. New unit tests have been added for the `AccountWorkspaces` class of the `databricks.labs.blueprint.account` module to ensure that the new method is functioning as intended, specifically checking if a user is a workspace administrator based on whether they belong to the `admins` group. The linked issue [#2732](#2732) is resolved by this change. All changes have been manually and unit tested. * Added static code analysis results to assessment dashboard ([#2696](#2696)). This commit introduces two new tasks, `assess_dashboards` and `assess_workflows`, to the existing assessment dashboard for identifying migration problems in dashboards and workflows. These tasks analyze embedded queries and notebooks for migration issues and collect direct filesystem access patterns requiring attention. Upon completion, the results are stored in the inventory database and displayed on the Migration dashboard. Additionally, two new widgets, job/query problem widgets and directfs access widgets, have been added to enhance the dashboard's functionality by providing additional information related to code compatibility and access control. Integration tests using mock data have been added and manually tested to ensure the proper functionality of these new features. This update improves the overall assessment and compatibility checking capabilities of the dashboard, making it easier for users to identify and address issues related to Unity Catalog compatibility in their workflows and dashboards. * Added unskip CLI command to undo a skip on schema or a table ([#2727](#2727)). This pull request introduces a new CLI command, "unskip", which allows users to reverse a previously applied `skip` on a schema or table. The `unskip` command accepts a required `--schema` parameter and an optional `--table` parameter. A new function, also named "unskip", has been added, which takes the same parameters as the `skip` command. The function checks for the required `--schema` parameter and creates a new WorkspaceContext object to call the appropriate method on the table_mapping object. Two new methods, `unskip_schema` and "unskip_table_or_view", have been added to the HiveMapping class. These methods remove the skip mark from a schema or table, respectively, and handle exceptions such as NotFound and BadRequest. The get_tables_to_migrate method has been updated to consider the unskipped tables or schemas. 
Currently, the feature is tested manually and has not been added to the user documentation. * Added unskip CLI command to undo a skip on schema or a table ([#2734](#2734)). A new `unskip` CLI command has been added to the project, which allows users to remove the `skip` mark set by the existing `skip` command on a specified schema or table. This command takes an optional `--table` flag, and if not provided, it will unskip the entire schema. The new functionality is accompanied by a unit test and relevant user documentation, and addresses issue [#1938](#1938). The implementation includes the addition of the `unskip_table_or_view` method, which generates the appropriate `ALTER TABLE/VIEW` statement to remove the skip marker, and updates to the `unskip_schema` method to include the schema name in the `ALTER SCHEMA` statement. Additionally, exception handling has been updated to include `NotFound` and `BadRequest` exceptions. This feature simplifies the process of undoing a skip on a schema, table, or view in the Hive metastore, which previously required manual editing of the Hive metastore properties. * Assess source code as part of the assessment ([#2678](#2678)). This commit introduces enhancements to the assessment workflow, including the addition of two new tasks for evaluating source code from SQL queries in dashboards and from notebooks/files in jobs and tasks. The existing `databricks labs install ucx` command has been modified to incorporate linting during the assessment. The `QueryLinter` class has been updated to accept an additional argument for linting source code. These changes have been thoroughly tested through integration tests to ensure proper functionality. Co-authored by Eric Vergnaud. * Bump astroid version, pylint version and drop our f-string workaround ([#2746](#2746)). In this update, we have bumped the versions of astroid and pylint to 3.3.1 and removed workarounds related to f-string inference limitations in previous versions of astroid (< 3.3). These workarounds were necessary for handling issues such as uninferrable sys.path values and the lack of f-string inference in loops. We have also updated corresponding tests to reflect these changes and improve the overall code quality and maintainability of the project. These changes are part of a larger effort to update dependencies and simplify the codebase by leveraging the latest features of updated tools and removing obsolete workarounds. * Delete temporary files when running solacc ([#2750](#2750)). This commit includes changes to the `solacc.py` script to improve the linting process for the `solacc` repository, specifically targeting the issue of excessive temporary files that were exceeding CI storage capacity. The modifications include linting the repository on a per-top-level `solution` basis, where each solution resides within the top folders and is independent of others. Post-linting, temporary files and directories registered in `PathLookup` are deleted to enhance storage efficiency. Additionally, this commit prepares for improving false positive detection and introduces a new `SolaccContext` class that tracks various aspects of the linting process, providing more detailed feedback on the linting results. This change does not introduce new functionality or modify existing functionality, but rather optimizes the linting process for the `solacc` repository, maintaining CI storage capacity levels within acceptable limits. * Don't report direct filesystem access for API calls ([#2689](#2689)). 
This release introduces enhancements to the Direct File System Access (DFSA) linter, resolving false positives in API call reporting. The `ws.api_client.do` call previously triggered inaccurate direct filesystem access alerts, which have been addressed by adding new methods to identify HTTP call parameters and specific API calls. The linter now disregards DFSA patterns within known API calls, eliminating false positives with relative URLs and duplicate advice from SparkSqlPyLinter. Additionally, improvements in the `python_ast.py` and `python_infer.py` files include the addition of `is_instance_of` and `is_from_module` methods, along with safer inference methods to prevent infinite recursion and enhance value inference. These changes significantly improve the DFSA linter's accuracy and effectiveness when analyzing code containing API calls. * Enables cli cmd `databricks labs ucx create-catalog-schemas` to apply catalog/schema acl from legacy hive_metastore ([#2676](#2676)). The new release introduces a `databricks labs ucx create-catalog-schemas` command, which applies catalog/schema Access Control List (ACL) from a legacy hive_metastore. This command modifies the existing `table_mapping` method to include a new `grants_crawler` parameter in the `CatalogSchema` constructor, enabling the application of ACLs from the legacy hive_metastore. A corresponding unit test is included to ensure proper functionality. The `CatalogSchema` class in the `databricks.labs.ucx.hive_metastore.catalog_schema` module has been updated with a new argument `hive_acl` and the integration of the `GrantsCrawler` class. The `GrantsCrawler` class is responsible for crawling the Hive metastore and retrieving grants for catalogs, schemas, and tables. The `prepare_test` function has been updated to include the `hive_acl` argument and the `test_catalog_schema_acl` function has been updated to test the new functionality, ensuring that the correct grant statements are generated for a wider range of principals and catalogs/schemas. These changes improve the functionality and usability of the `databricks labs ucx create-catalog-schemas` command, allowing for a more seamless transition from a legacy hive metastore. * Fail `make test` on coverage below 90% ([#2682](#2682)). A new change has been introduced to the pyproject.toml file to enhance the codebase's quality and robustness by ensuring that the test coverage remains above 90%. This has been accomplished by adding the `--cov-fail-under=90` flag to the `test` and `coverage` scripts in the `[tool.hatch.envs.default.scripts]` section. This flag will cause the `make test` command to fail if the coverage percentage falls below the specified value of 90%, ensuring that all new changes are thoroughly tested and that the codebase maintains a minimum coverage threshold. This is a best practice for maintaining code coverage and improving the overall quality and reliability of the codebase. * Fixed DFSA false positives from f-string fragments ([#2679](#2679)). This commit addresses false positive DataFrame API Scanning Antipattern (DFSA) reports in Python code, specifically in f-string fragments containing forward slashes and curly braces. The linter has been updated to accurately detect DFSA paths while avoiding false positives, and it now checks for `JoinedStr` fragments in string constants. Additionally, the commit rectifies issues with duplicate advices reported by `SparkSqlPyLinter`. 
No new features or major functionality changes have been introduced; instead, the focus has been on improving the reliability and accuracy of DFSA detection. Co-authored by Eric Vergnaud, this commit includes new unit tests and refinements to the DFSA linter, specifically addressing false positive patterns like `f"/Repos/{thing1}/sdk-{thing2}-{thing3}"`. To review these changes, consult the updated tests in the `tests/unit/source_code/linters/test_directfs.py` file, such as the new test case for the f-string pattern causing false positives. By understanding these improvements, you'll ensure your project adheres to the latest updates, maintaining quality and accurate DFSA detection. * Fixed failing integration tests that perform a real assessment ([#2736](#2736)). In this release, we have made significant improvements to the integration tests in the `assessment` workflow, by reducing the scope of the assessment and improving efficiency and reliability. We have removed several object creation functions and added a new function `populate_for_linting` for linting purposes. The `populate_for_linting` function adds necessary information to the installation context, and is used to ensure that the integration tests still have the required data for linting. We have also added a pytest fixture `populate_for_linting` to set up a minimal amount of data in the workspace for linting purposes. These changes have been implemented in the `test_workflows.py` file in the integration/assessment directory. This will help to ensure that the tests are not unnecessarily extensive, and that they are able to accurately assess the functionality of the library. * Fixed sqlglot crasher with 'drop schema ...' statement ([#2758](#2758)). In this release, we have addressed a crash issue in the `sqlglot` library caused by the `drop schema` statement. A new method, `_unsafe_lint_expression`, has been introduced to prevent the crash by checking if the current expression is a `Use`, `Create`, or `Drop` statement and updating the `schema` attribute accordingly. The library now correctly handles the `drop schema` statement and returns a `Deprecation` warning if the table being processed is in the `hive_metastore` catalog and has been migrated to the Unity Catalog. Unit tests have been added to ensure the correct behavior of this code, and the linter for `from table` SQL has been updated to parse and handle the `drop schema` statement without raising any errors. These changes improve the library's overall reliability and stability, allowing it to operate smoothly with the `drop schema` statement. * Fixed test failure: `test_table_migration_job_refreshes_migration_status[regular-migrate-tables]` ([#2625](#2625)). In this release, we have addressed two issues ([#2621](#2621) and [#2537](#2537)) and fixed a test failure in `test_table_migration_job_refreshes_migration_status[regular-migrate-tables]`. The `index` and `index_full_refresh` methods in `table_migrate.py` have been updated to accept a new `force_refresh` flag. When set to `True`, these methods will ensure that the migration status is up-to-date. This change also affects the `ViewsMigrationSequencer` class, which now passes `force_refresh=True` to the `index` method. Additionally, we have fixed a test failure by reusing the `force_refresh` flag to ensure the migration status is up-to-date. 
The `TableMigrationStatus` class in `table_migration_status.py` has been modified to accept an optional `force_refresh` parameter in the `index` method, and a unit test has been updated to assert the correct behavior when updating the migration status. * Fixes error message ([#2759](#2759)). The `load` method of the `mapping.py` file in the `databricks/labs/ucx/hive_metastore` package has been updated to correct an error message displayed when a `NotFound` exception is raised. The previous message suggested running an incorrect command, which has been updated to the correct one: "Please run: databricks labs ucx create-table-mapping". This change does not add any new methods or alter existing functionality, but instead focuses on improving the user experience by providing accurate information when an error occurs. The scope of this change is limited to updating the error message, and no other modifications have been made. * Fixes issue of circular dependency of migrate-location ACL ([#2741](#2741)). In this release, we have resolved two issues ([#274](#274) * Fixes source table alias dissapearance during migrate_views ([#2726](#2726)). This release introduces a fix to preserve the alias for the source table during the conversion of CREATE VIEW SQL from the legacy Hive metastore to the Unity Catalog. The issue was addressed by adding a new test case, `test_migrate_view_alias_test`, to verify the correct handling of table aliases during migration. The changes also include a fix for the SQL conversion and new test cases to ensure the correct handling of table aliases, reflected in accurate SQL conversion. A new parameter, `alias`, has been added to the Table class, and the `apply` method in the `from_table.py` file has been updated. The migration process has been updated to retain the original alias of the table. Unit tests have been added and thoroughly tested to confirm the correctness of the changes, including handling potential intermittent failures caused by external dependencies. * Py4j table crawler: suggestions/fixes for describing tables ([#2684](#2684)). This release introduces significant improvements and fixes to the Py4J-based table crawler, enhancing its capability to describe tables effectively. The code for fetching table properties over the bridge has been updated, and error tracing has been improved through individual fetching of each table property and providing python backtrace on JVM side errors. Scala `Option` values unboxing issues have been resolved, and a small optimization has been implemented to detect partitioned tables without materializing the collection. The table's `.viewText()` property is now properly handled as a Scala `Option`. The `catalog` argument is now explicitly verified to be `hive_metastore`, and a new static method `_option_as_python` has been introduced for safely extracting values from Scala `Option`. The `_describe` method has been refactored to handle exceptions more gracefully and improved code readability. These changes result in better functionality, error handling, logging, and performance when describing tables within a specified catalog and database. The linked issues [#2658](#2658) and [#2579](#2579) are progressed through these updates, and appropriate testing has been conducted to ensure the improvements' effectiveness. * Speedup assessment workflow by making DBFS root table size calculation parallel ([#2745](#2745)). 
In this release, the assessment workflow for calculating DBFS root table size has been optimized through the parallelization of the calculation process, resulting in improved performance. This has been achieved by updating the `pipelines_crawler` function in `src/databricks/labs/ucx/contexts/workflow_task.py`, specifically the `cached_property table_size_crawler`, to include an additional argument `self.config.include_databases`. The `TablesCrawler` class has also been modified to include a generic type parameter `Table`, enabling type hinting and more robust type checking. Furthermore, the unit test file `test_table_size.py` in the `hive_metastore` directory has been updated to handle corrupt tables and invalid delta format errors more effectively. Additionally, a new entry `databricks-pydabs` has been added to the "known.json" file, potentially enabling better integration with the `databricks-pydabs` library or providing necessary configuration information for parallel processing. Overall, these changes improve the efficiency and scalability of the codebase and optimize the assessment workflow for calculating DBFS root table size. * Updated databricks-labs-blueprint requirement from <0.9,>=0.8 to >=0.8,<0.10 ([#2747](#2747)). In this update, the requirement for `databricks-labs-blueprint` has been updated to version `>=0.8,<0.10` in the `pyproject.toml` file. This change allows the project to utilize the latest features and bug fixes included in version 0.9.0 of the `databricks-labs-blueprint` library. Notable updates in version 0.9.0 consist of the addition of Databricks CLI version as part of routed command telemetry and support for Unicode Byte Order Mark (BOM) in file upload and download operations. Additionally, various bug fixes and improvements have been implemented for the `WorkspacePath` class, including the addition of `stat()` methods and improved compatibility with different versions of Python. * Updated databricks-labs-lsql requirement from <0.12,>=0.5 to >=0.5,<0.13 ([#2688](#2688)). In this update, the version requirement of the `databricks-labs-lsql` library has been changed from a version greater than or equal to 0.5 and less than 0.12 to a version greater than or equal to 0.5 and less than 0.13. This allows the project to utilize the latest version of 'databricks-labs-lsql', which includes new methods for differentiating between a table that has never been written to and one with zero rows in the MockBackend class. Additionally, the update adds support for various filter types and improves testing coverage and reliability. The release notes and changelog for the updated library are provided in the commit message for reference. * Updated documentation to explain the usage of collections and eligible commands ([#2738](#2738)). The latest update to the Databricks Labs Unified CLI (UCX) tool introduces the `join-collection` command, which enables users to join two or more workspaces into a collection, allowing for streamlined and consolidated command execution across multiple workspaces. This feature is available to Account admins on the Databricks account, Workspace admins on the workspaces to be joined, and requires UCX installation on the workspace. To run collection-eligible commands, users can simply pass the `--run-as-collection=True` flag. This enhancement enhances the UCX tool's functionality, making it easier to manage and execute commands on multiple workspaces. * Updated sqlglot requirement from <25.22,>=25.5.0 to >=25.5.0,<25.23 ([#2687](#2687)). 
In this pull request, we have updated the version requirement for the `sqlglot` library in the pyproject.toml file. The previous requirement specified a version greater than or equal to 25.5.0 and less than 25.22, but we have updated it to allow for versions greater than or equal to 25.5.0 and less than 25.23. This change allows us to use the latest version of 'sqlglot', while still ensuring compatibility with other dependencies. Additionally, this pull request includes a detailed changelog from the `sqlglot` repository, which provides information on the features, bug fixes, and changes included in each version. This can help us understand the scope of the update and how it may impact our project. * [DOCUMENTATION] Improve documentation on using account profile for `sync-workspace-info` cli command ([#2683](#2683)). The `sync-workspace-info` CLI command has been added to the Databricks Labs UCX package, which uploads the workspace configuration to all workspaces in the Databricks account where the `ucx` tool is installed. This feature requires Databricks Account Administrator privileges and is necessary to create an immutable default catalog mapping for the table migration process. It also serves as a prerequisite for the `create-table-mapping` command. To utilize this command, users must configure the Databricks CLI profile with access to the Databricks account console, available at "accounts.cloud.databricks.com" or "accounts.azuredatabricks.net". Additionally, the documentation for using the account profile with the `sync-workspace-info` command has been enhanced, addressing issue [#1762](#1762). * [DOCUMENTATION] Improve documentation when installing UCX from a machine with restricted internet access ([#2690](#2690)). "A new section has been added to the `ADVANCED` installation section of the UCX library documentation, providing detailed instructions for installing UCX with a company-hosted PyPI mirror. This feature is intended for environments with restricted internet access, allowing users to bypass the public PyPI index and use a company-controlled mirror instead. Users will need to add all UCX dependencies to the company-hosted PyPI mirror and set the `PIP_INDEX_URL` environment variable to the mirror URL during installation. The solution also includes a prompt asking the user if their workspace blocks internet access. Additionally, the documentation has been updated to clarify that UCX requires internet access to connect to GitHub for downloading the tool, specifying the necessary URLs that need to be accessible. This update aims to improve the installation process for users with restricted internet access and provide clear instructions and prompts for installing UCX on machines with limited internet connectivity." Dependency updates: * Updated sqlglot requirement from <25.22,>=25.5.0 to >=25.5.0,<25.23 ([#2687](#2687)). * Updated databricks-labs-lsql requirement from <0.12,>=0.5 to >=0.5,<0.13 ([#2688](#2688)). * Updated databricks-labs-blueprint requirement from <0.9,>=0.8 to >=0.8,<0.10 ([#2747](#2747)).
Reopening to add Dashboards.
Problem statement
There's no easy way to know what still needs to be migrated to UC within a given workspace.
Proposed Solution
- `migration-progress` scheduled daily workflow #2578
- `hive_metastore` #2575
- `migration-progress` crawled objects #2574
- `migration-progress` crawled objects in the `ucx.history` table #2573
- `ucx` catalog via a cli-command #2571
- `ucx.history` table #2572
- `ucx.workflow_runs` table #2600
- `ucx.errors` table #2603
- `migration-progress` run when the `ucx` catalog does not exists #2577
- `migration-progress` run when the `assessment` job did not run yet #2816
- `RemoveAfter` property on `ucx` catalogs in integration test #2594
- `migration progress` workflow #2595
- `migration-progress` dashboard #3045
- `UsedTables`
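To make the history-table items above (#2572/#2573) more concrete, a hypothetical shape for one row in the `ucx.history` table might look like the following; all field names here are illustrative assumptions, not an agreed schema:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class HistoricalRecord:
    """Hypothetical shape of one row in a ucx.history table (illustrative only)."""

    workspace_id: int     # workspace the snapshot was taken from
    run_id: int           # identifies the migration-progress workflow run
    snapshot_ts: str      # when the snapshot was taken (ISO-8601 timestamp)
    object_type: str      # e.g. "table", "job", "cluster", "pipeline"
    object_id: str        # stable identifier of the inventory object
    data: str             # JSON-encoded snapshot of the object at that point in time
    failures: list[str]   # outstanding migration problems recorded for the object
```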
Potential challenges
Crawlers `append` to tables (#2597), but for migration progress purposes we need to overwrite the tables, or add another column with a timestamp and modify the "fetch latest" queries to fetch the latest timestamp of the snapshot. Fetching the latest timestamp from the snapshot allows building a bar-chart widget to see how fast migration progresses, but we don't really care about that. If we do fetch-latest-timestamp, all our views and dashboards would become a bit more complicated, but that's fine.

--> Decision is made to keep the current ucx inventory and store history in a separate table (see the proposed solution above).

Let's keep the status of migration progress in HMS (for now), but we can change this decision in a few weeks.

--> Decision is made to store the migration process in a ucx catalog.
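For reference, the fetch-latest-timestamp alternative that was considered (and not adopted) would have meant appending snapshots and filtering every "current state" query on the newest one, roughly as sketched below; `snapshot_ts` and `is_migrated` are hypothetical column names used for illustration:

```python
# Illustrative only: how "fetch latest" queries would look if crawlers appended
# rows tagged with a snapshot timestamp instead of overwriting the table.
LATEST_TABLES_QUERY = """
    SELECT *
    FROM ucx.tables
    WHERE snapshot_ts = (SELECT MAX(snapshot_ts) FROM ucx.tables)
"""

# The same column would also allow a progress-over-time (bar chart) widget:
PROGRESS_OVER_TIME_QUERY = """
    SELECT snapshot_ts, COUNT(*) AS objects_pending_migration
    FROM ucx.tables
    WHERE NOT is_migrated
    GROUP BY snapshot_ts
    ORDER BY snapshot_ts
"""
```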
Migration process crawlers

Assessment tasks that make sense to re-run on the `migration-progress` workflow:

- `crawl_tables`
- `assess_jobs` - potentially harden the code there as well
- `assess_clusters` - potentially harden as well
- `assess_pipelines` - potentially harden as well
- `crawl_cluster_policies`
- `assess_global_init_scripts`

Not to be re-run:

- `crawl_mounts` - we already pre-created external locations
- `setup_tacl` - we don't need to crawl grants
- `crawl_grants` - no need to, I think
- `estimate_table_size_for_migration` - most likely not necessary
- `guess_external_locations` - we already migrated external locations by this point
- `assess_incompatible_submit_runs` - not going to be necessary in September
- `workspace_listing` - we are going to analyse only those notebooks that are part of jobs in the scope of static analysis
- `crawl_permissions` - we expect permissions to already be migrated
- `crawl_groups` - we expect groups to already be migrated