diff --git a/docs/docusaurus/docs/core/configure_project_settings/access_secrets_managers/_aws_secrets_manager.md b/docs/docusaurus/docs/core/configure_project_settings/access_secrets_managers/_aws_secrets_manager.md index 205a0fef7829..c9c08840e43e 100644 --- a/docs/docusaurus/docs/core/configure_project_settings/access_secrets_managers/_aws_secrets_manager.md +++ b/docs/docusaurus/docs/core/configure_project_settings/access_secrets_managers/_aws_secrets_manager.md @@ -1,13 +1,13 @@ import GxData from '../../_core_components/_data.jsx' import PreReqFileDataContext from '../../_core_components/prerequisites/_file_data_context.md' -### Prerequisites +### Prerequisites {#prerequisites-aws} - An AWS Secrets Manager instance. See [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/tutorials_basic.html). - The ability to install Python packages with `pip`. - . -### Procedure +### Procedure {#procedure-aws} 1. Set up AWS Secrets Manager support. diff --git a/docs/docusaurus/docs/core/configure_project_settings/access_secrets_managers/_azure_key_vault.md b/docs/docusaurus/docs/core/configure_project_settings/access_secrets_managers/_azure_key_vault.md index 8a7c75b5ab6c..332badc33e09 100644 --- a/docs/docusaurus/docs/core/configure_project_settings/access_secrets_managers/_azure_key_vault.md +++ b/docs/docusaurus/docs/core/configure_project_settings/access_secrets_managers/_azure_key_vault.md @@ -1,13 +1,13 @@ import GxData from '../../_core_components/_data.jsx' import PreReqFileDataContext from '../../_core_components/prerequisites/_file_data_context.md' -### Prerequisites +### Prerequisites {#prerequisites-azure} - An [Azure Key Vault instance with configured secrets](https://docs.microsoft.com/en-us/azure/key-vault/general/overview). - The ability to install Python packages with `pip`. - . -### Procedure +### Procedure {#procedure-azure} 1. Set up Azure Key Vault support. 
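Once secrets manager support is set up, credentials in `config_variables.yml` can reference stored secrets instead of plain-text values. A minimal sketch of the idea (the vault, region, and secret names below are hypothetical placeholders; GX substitutes config values that begin with the `secret|` prefix):

```python
# Sketch of how a secret reference replaces a plain-text credential.
# All names below are hypothetical placeholders for your own secrets.
plain_value = "my-database-password"
azure_reference = (
    "secret|https://<your-vault-name>.vault.azure.net/secrets/<secret-name>"
)
aws_reference = (
    "secret|arn:aws:secretsmanager:<region>:<account-id>:secret:<secret-name>"
)

# GX only attempts substitution on values carrying the "secret|" prefix.
def is_secret_reference(value: str) -> bool:
    return value.startswith("secret|")
```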
diff --git a/docs/docusaurus/docs/core/configure_project_settings/access_secrets_managers/_gcp_secret_manager.md b/docs/docusaurus/docs/core/configure_project_settings/access_secrets_managers/_gcp_secret_manager.md index e5796027c39f..cf9ffb5c8712 100644 --- a/docs/docusaurus/docs/core/configure_project_settings/access_secrets_managers/_gcp_secret_manager.md +++ b/docs/docusaurus/docs/core/configure_project_settings/access_secrets_managers/_gcp_secret_manager.md @@ -1,13 +1,13 @@ import GxData from '../../_core_components/_data.jsx' import PreReqFileDataContext from '../../_core_components/prerequisites/_file_data_context.md' -### Prerequisites +### Prerequisites {#prerequisites-gcp} - A [GCP Secret Manager instance with configured secrets](https://cloud.google.com/secret-manager/docs/quickstart). - The ability to install Python packages with `pip`. - . -### Procedure +### Procedure {#procedure-gcp} 1. Set up GCP Secret Manager support. diff --git a/docs/docusaurus/docs/core/configure_project_settings/toggle_analytics_events/_tab_context_variable.md b/docs/docusaurus/docs/core/configure_project_settings/toggle_analytics_events/_tab_context_variable.md index b2afed1cd3b3..2e56203a6d23 100644 --- a/docs/docusaurus/docs/core/configure_project_settings/toggle_analytics_events/_tab_context_variable.md +++ b/docs/docusaurus/docs/core/configure_project_settings/toggle_analytics_events/_tab_context_variable.md @@ -8,13 +8,13 @@ import PrereqFileDataContext from '../../_core_components/prerequisites/_file_da The Data Context variable `analytics_enabled` can be used to toggle the collection of analytics information. Because the analytics configuration is loaded when a Data Context is initialized, this method is only suitable when working with a File Data Context.
For other types of Data Context, use the [Environment Variable](/core/configure_project_settings/toggle_analytics_events/toggle_analytics_events.md?config_method=environment_variable#methods-for-toggling-analytics-collection) method for toggling analytics collection. -### Prerequisites +### Prerequisites {#prerequisites-context-variable} - . - . - . -### Procedure +### Procedure {#procedure-context-variable} diff --git a/docs/docusaurus/docs/core/configure_project_settings/toggle_analytics_events/_tab_environment_variable.md b/docs/docusaurus/docs/core/configure_project_settings/toggle_analytics_events/_tab_environment_variable.md index a42f3c627d1d..da12aec31b40 100644 --- a/docs/docusaurus/docs/core/configure_project_settings/toggle_analytics_events/_tab_environment_variable.md +++ b/docs/docusaurus/docs/core/configure_project_settings/toggle_analytics_events/_tab_environment_variable.md @@ -5,13 +5,13 @@ The environment variable `GX_ANALYTICS_ENABLED` can be used to toggle the collec `GX_ANALYTICS_ENABLED` will also work to toggle analytics collection when using a GX Cloud Data Context or a File Data Context. -### Prerequisites +### Prerequisites {#prerequisites-environment-variable} - - - Permissions necessary to set local Environment Variables. -### Procedure +### Procedure {#procedure-environment-variable} 1. Set the environment variable `GX_ANALYTICS_ENABLED`. diff --git a/docs/docusaurus/docs/core/connect_to_data/dataframes/dataframes.md b/docs/docusaurus/docs/core/connect_to_data/dataframes/dataframes.md index 36b5edfc6ead..6f01625a0eba 100644 --- a/docs/docusaurus/docs/core/connect_to_data/dataframes/dataframes.md +++ b/docs/docusaurus/docs/core/connect_to_data/dataframes/dataframes.md @@ -20,14 +20,14 @@ A dataframe is a set of data that resides in-memory and is represented in your c Because the dataframes reside in memory you do not need to specify the location of the data when you create your Data Source. 
Instead, the type of Data Source you create depends on the type of dataframe containing your data. Great Expectations has methods for connecting to both pandas and Spark dataframes. -### Prerequisites +### Prerequisites {#prerequisites-data-source} - - - Optional. . - . These examples assume the variable `context` contains your Data Context. -### Procedure +### Procedure {#procedure-data-source} - @@ -115,7 +115,7 @@ A dataframe Data Asset is used to group your Validation Results. For instance, - . These examples assume the variable `context` contains your Data Context. - A [pandas or Spark dataframe Data Source](#create-a-data-source). -### Procedure +### Procedure {#procedure-data-asset} - @@ -181,7 +181,7 @@ If you use GX Cloud and GX Core together, note that Batch Definitions you create - . These examples assume the variable `context` contains your Data Context. - A [pandas or Spark dataframe Data Asset](#create-a-data-asset). -### Procedure +### Procedure {#procedure-batch-definition} - @@ -246,7 +246,7 @@ Because dataframes exist in memory and cease to exist when a Python session ends - Data in a pandas or Spark dataframe. These examples assume the variable `dataframe` contains your pandas or Spark dataframe. - Optional. A Validation Definition. -### Procedure +### Procedure {#procedure-dataframes} 1. Define the Batch Parameter dictionary. 
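The Batch Parameter dictionary for a dataframe maps the `dataframe` key to the in-memory data to validate. A minimal sketch, assuming your data is in a pandas dataframe (the sample data below is hypothetical):

```python
import pandas as pd

# Hypothetical in-memory data standing in for your dataframe.
dataframe = pd.DataFrame({"passenger_count": [1, 2, 4]})

# The Batch Parameter dictionary passes the dataframe in at runtime,
# e.g. batch_definition.get_batch(batch_parameters=batch_parameters).
batch_parameters = {"dataframe": dataframe}
```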
diff --git a/docs/docusaurus/docs/core/connect_to_data/filesystem_data/_create_a_batch_definition/_tab-directory_batch_definition.md b/docs/docusaurus/docs/core/connect_to_data/filesystem_data/_create_a_batch_definition/_tab-directory_batch_definition.md index afce657d13bf..d4d304382525 100644 --- a/docs/docusaurus/docs/core/connect_to_data/filesystem_data/_create_a_batch_definition/_tab-directory_batch_definition.md +++ b/docs/docusaurus/docs/core/connect_to_data/filesystem_data/_create_a_batch_definition/_tab-directory_batch_definition.md @@ -5,11 +5,11 @@ import PreReqDataContext from '../../../_core_components/prerequisites/_preconfi Batch Definitions for a Directory Data Asset can be configured to return all of the records for the files in the Data Asset, or to subdivide the Data Asset's records on the content of a Datetime field and only return the records that correspond to a specific year, month, or day. -### Prerequisites +### Prerequisites {#prerequisites-batch-definition-directory} - . The variable `context` is used for your Data Context in the following example code. - [A File Data Asset on a Filesystem Data Source](#create-a-data-asset). -### Procedure +### Procedure {#procedure-batch-definition-directory} . The variable `context` is used for your Data Context in the following example code. - [A File Data Asset on a Filesystem Data Source](#create-a-data-asset). 
diff --git a/docs/docusaurus/docs/core/connect_to_data/filesystem_data/_create_a_data_asset/_abs/_tab-directory_data_asset.md b/docs/docusaurus/docs/core/connect_to_data/filesystem_data/_create_a_data_asset/_abs/_tab-directory_data_asset.md index 4b02a297bba7..46d514bbb58b 100644 --- a/docs/docusaurus/docs/core/connect_to_data/filesystem_data/_create_a_data_asset/_abs/_tab-directory_data_asset.md +++ b/docs/docusaurus/docs/core/connect_to_data/filesystem_data/_create_a_data_asset/_abs/_tab-directory_data_asset.md @@ -6,13 +6,13 @@ import PrereqGxInstall from '../../../../_core_components/prerequisites/_gx_inst import PrereqDataContext from '../../../../_core_components/prerequisites/_preconfigured_data_context.md' import PrereqSparkFilesystemDataSource from '../../../../_core_components/prerequisites/_data_source_spark_filesystem.md' -### Prerequisites +### Prerequisites {#prerequisites-data-asset-directory-abs} - . - and [Spark dependencies](/core/set_up_a_gx_environment/install_additional_dependencies.md?dependencies=spark). - . - [A Filesystem Data Source configured to access data files in Azure Blob Storage](/core/connect_to_data/filesystem_data/filesystem_data.md?data_source_type=spark&environment=abs#create-a-data-source). -### Procedure +### Procedure {#procedure-data-asset-directory-abs} . - . - . - Access to data files in Azure Blob Storage. - A pandas or Spark [Filesystem Data Source configured for Azure Blob Storage data files](/core/connect_to_data/filesystem_data/filesystem_data.md?data_source_type=spark&environment=abs#create-a-data-source). -### Procedure +### Procedure {#procedure-data-asset-file-abs} . - and [Spark dependencies](/core/set_up_a_gx_environment/install_additional_dependencies.md?dependencies=spark). - . - [A Filesystem Data Source configured to access data files in Google Cloud Storage](/core/connect_to_data/filesystem_data/filesystem_data.md?data_source_type=spark&environment=gcs#create-a-data-source). 
-### Procedure +### Procedure {#procedure-data-asset-directory-gcs} . - . - . - Access to data files in Google Cloud Storage. - [A pandas](/core/connect_to_data/filesystem_data/filesystem_data.md?data_source_type=pandas&environment=gcs#create-a-data-source) or [Spark Filesystem Data Source configured for Google Cloud Storage data files](/core/connect_to_data/filesystem_data/filesystem_data.md?data_source_type=spark&environment=gcs#create-a-data-source). -### Procedure +### Procedure {#procedure-data-asset-file-gcs} . - . - . - [A Spark Filesystem Data Source configured to access data files in a local or networked folder hierarchy](/core/connect_to_data/filesystem_data/filesystem_data.md?data_source_type=spark&environment=filesystem#create-a-data-source). -### Procedure +### Procedure {#procedure-data-asset-directory-lon} . - . - . - Access to data files (such as `.csv` or `.parquet` files) in a local or networked folder hierarchy. - [A pandas](/core/connect_to_data/filesystem_data/filesystem_data.md?data_source_type=pandas&environment=filesystem#create-a-data-source) or [Spark Filesystem Data Source configured for local or networked data files](/core/connect_to_data/filesystem_data/filesystem_data.md?data_source_type=spark&environment=filesystem#create-a-data-source). -### Procedure +### Procedure {#procedure-data-asset-file-lon} . - and [Spark dependencies](/core/set_up_a_gx_environment/install_additional_dependencies.md?dependencies=spark). - . - [A Filesystem Data Source configured to access data files in S3](/core/connect_to_data/filesystem_data/filesystem_data.md?data_source_type=spark&environment=s3#create-a-data-source). -### Procedure +### Procedure {#procedure-data-asset-directory-s3} . - . - . - Access to data files in S3. - [A Filesystem Data Source configured to access data files in S3](/core/connect_to_data/filesystem_data/filesystem_data.md?data_source_type=spark&environment=s3#create-a-data-source). 
-### Procedure +### Procedure {#procedure-data-asset-file-s3} - - Optional. To create a Spark Filesystem Data Source you will also need to [install the Spark Python dependencies](/core/set_up_a_gx_environment/install_additional_dependencies.md?dependencies=spark). - - Access to data files in Azure Blob Storage. -### Procedure +### Procedure {#procedure-data-source-abs} - - Optional. To create a Spark Filesystem Data Source you will also need to [install the Spark Python dependencies](/core/set_up_a_gx_environment/install_additional_dependencies.md?dependencies=spark). - - Access to data files in Google Cloud Storage. -### Procedure +### Procedure {#procedure-data-source-gcs} - - Optional. To create a Spark Filesystem Data Source you will also need to [install the Spark Python dependencies](/core/set_up_a_gx_environment/install_additional_dependencies.md?dependencies=spark). @@ -18,7 +18,7 @@ import PandasDefault from './_pandas_default.md' ::: -### Procedure +### Procedure {#procedure-data-source-lon} - - Optional. To create a Spark Filesystem Data Source you will also need to [install the Spark Python dependencies](/core/set_up_a_gx_environment/install_additional_dependencies.md?dependencies=spark). - - Access to data files on a S3 bucket. -### Procedure +### Procedure {#procedure-data-source-s3} . The variable `context` is used for your Data Context in the following example code. - [A Data Asset on a SQL Data Source](#create-a-data-asset). -### Procedure +### Procedure {#procedure-batch-definition} . The variable `context` is used for your Data Context in the following example code. - . -### Procedure +### Procedure {#procedure-data-asset} . - - . - . -### Procedure +### Procedure {#procedure-data-source} Prerequisites +## Prerequisites - . - . 
diff --git a/docs/docusaurus/docs/core/customize_expectations/use_sql_to_define_a_custom_expectation.md b/docs/docusaurus/docs/core/customize_expectations/use_sql_to_define_a_custom_expectation.md index 91028741bb14..1da841a44f70 100644 --- a/docs/docusaurus/docs/core/customize_expectations/use_sql_to_define_a_custom_expectation.md +++ b/docs/docusaurus/docs/core/customize_expectations/use_sql_to_define_a_custom_expectation.md @@ -17,7 +17,7 @@ Like any other Expectation, you can instantiate the `UnexpectedRowsExpectation` -

Prerequisites

+## Prerequisites - . - . diff --git a/docs/docusaurus/docs/core/define_expectations/_retrieve_a_batch_of_test_data/_from_a_batch_definition.md b/docs/docusaurus/docs/core/define_expectations/_retrieve_a_batch_of_test_data/_from_a_batch_definition.md index 0bbe4ee99e91..f5b49e3453c4 100644 --- a/docs/docusaurus/docs/core/define_expectations/_retrieve_a_batch_of_test_data/_from_a_batch_definition.md +++ b/docs/docusaurus/docs/core/define_expectations/_retrieve_a_batch_of_test_data/_from_a_batch_definition.md @@ -8,14 +8,14 @@ import PrereqDataSourceAndAssetConnectedToData from '../../_core_components/prer Batch Definitions both organize a Data Asset's records into Batches and provide a method for retrieving those records. Any Batch Definition can be used to retrieve a Batch of records for use in testing Expectations or data exploration. -## Prerequisites +## Prerequisites {#prerequisites-batch-definition} - . - . - . These examples assume the variable `context` contains your Data Context. - . -### Procedure +### Procedure {#procedure-batch-definition} . - . - . These examples assume the variable `context` contains your Data Context. - Data in a file format supported by pandas, such as `.csv` or `.parquet`. -### Procedure +### Procedure {#procedure-pandas-default} Prerequisites +## Prerequisites {#prerequisites-create-expectation} - . - . -### Procedure +### Procedure {#procedure-create-expectation} Prerequisites +## Prerequisites {#prerequisites-expectation-suites} - . - . - Recommended. . - Recommended. . -### Procedure +### Procedure {#procedure-expectation-suites} Prerequisites +## Prerequisites {#prerequisites-test-expectation} - . - . @@ -21,7 +21,7 @@ Data can be validated against individual Expectations. This workflow is general - [A Batch of sample data](/core/define_expectations/retrieve_a_batch_of_test_data.md). This guide assumes the variable `batch` contains your sample data. - . 
This guide assumes the variable `expectation` contains the Expectation to be tested. -### Procedure +### Procedure {#procedure-test-expectation} Prerequisites +## Prerequisites - . - . diff --git a/docs/docusaurus/docs/core/run_validations/run_a_validation_definition.md b/docs/docusaurus/docs/core/run_validations/run_a_validation_definition.md index 266df1381a17..37c1658cd49e 100644 --- a/docs/docusaurus/docs/core/run_validations/run_a_validation_definition.md +++ b/docs/docusaurus/docs/core/run_validations/run_a_validation_definition.md @@ -10,8 +10,7 @@ import PrereqPreconfiguredDataContext from '../_core_components/prerequisites/_p import PrereqValidationDefinition from '../_core_components/prerequisites/_validation_definition.md'; - -

Prerequisites

+## Prerequisites - . - . diff --git a/docs/docusaurus/docs/core/set_up_a_gx_environment/_create_a_data_context/_cloud_data_context.md b/docs/docusaurus/docs/core/set_up_a_gx_environment/_create_a_data_context/_cloud_data_context.md index cb60a6d5e040..9811bd24714d 100644 --- a/docs/docusaurus/docs/core/set_up_a_gx_environment/_create_a_data_context/_cloud_data_context.md +++ b/docs/docusaurus/docs/core/set_up_a_gx_environment/_create_a_data_context/_cloud_data_context.md @@ -4,7 +4,7 @@ import TabItem from '@theme/TabItem' import PrereqPythonInstallation from '../../_core_components/prerequisites/_python_installation.md' import PrereqGxInstallation from '../../_core_components/prerequisites/_gx_installation.md' -## Prerequisites +## Prerequisites {#prerequisites-cloud-data-context} - - diff --git a/docs/docusaurus/docs/core/set_up_a_gx_environment/_create_a_data_context/_ephemeral_data_context.md b/docs/docusaurus/docs/core/set_up_a_gx_environment/_create_a_data_context/_ephemeral_data_context.md index 4dcea518509c..7d2042624764 100644 --- a/docs/docusaurus/docs/core/set_up_a_gx_environment/_create_a_data_context/_ephemeral_data_context.md +++ b/docs/docusaurus/docs/core/set_up_a_gx_environment/_create_a_data_context/_ephemeral_data_context.md @@ -4,7 +4,7 @@ import TabItem from '@theme/TabItem' import PrereqPythonInstallation from '../../_core_components/prerequisites/_python_installation.md' import PrereqGxInstallation from '../../_core_components/prerequisites/_gx_installation.md' -## Prerequisites +## Prerequisites {#prerequisites-ephemeral} - - diff --git a/docs/docusaurus/docs/core/set_up_a_gx_environment/_create_a_data_context/_file_data_context.md b/docs/docusaurus/docs/core/set_up_a_gx_environment/_create_a_data_context/_file_data_context.md index b2446f273875..cec3318a6748 100644 --- a/docs/docusaurus/docs/core/set_up_a_gx_environment/_create_a_data_context/_file_data_context.md +++ 
b/docs/docusaurus/docs/core/set_up_a_gx_environment/_create_a_data_context/_file_data_context.md @@ -4,7 +4,7 @@ import TabItem from '@theme/TabItem' import PrereqPythonInstallation from '../../_core_components/prerequisites/_python_installation.md' import PrereqGxInstallation from '../../_core_components/prerequisites/_gx_installation.md' -## Prerequisites +## Prerequisites {#prerequisites-file-data-context} - - diff --git a/docs/docusaurus/docs/core/set_up_a_gx_environment/_create_a_data_context/_quick_start.md b/docs/docusaurus/docs/core/set_up_a_gx_environment/_create_a_data_context/_quick_start.md index 78f6c616ae82..925ab7bf293d 100644 --- a/docs/docusaurus/docs/core/set_up_a_gx_environment/_create_a_data_context/_quick_start.md +++ b/docs/docusaurus/docs/core/set_up_a_gx_environment/_create_a_data_context/_quick_start.md @@ -4,7 +4,7 @@ import TabItem from '@theme/TabItem' import PrereqPythonInstallation from '../../_core_components/prerequisites/_python_installation.md' import PrereqGxInstallation from '../../_core_components/prerequisites/_gx_installation.md' -## Prerequisites +## Prerequisites {#prerequisites-quick-start} - - diff --git a/docs/docusaurus/docs/core/set_up_a_gx_environment/_install_additional_dependencies/_amazon_s3.md b/docs/docusaurus/docs/core/set_up_a_gx_environment/_install_additional_dependencies/_amazon_s3.md index 17065bff7a2c..02c477b3e0cd 100644 --- a/docs/docusaurus/docs/core/set_up_a_gx_environment/_install_additional_dependencies/_amazon_s3.md +++ b/docs/docusaurus/docs/core/set_up_a_gx_environment/_install_additional_dependencies/_amazon_s3.md @@ -5,14 +5,14 @@ import InfoUsingAVirtualEnvironment from '../../_core_components/admonitions/_if GX Core uses the Python library `boto3` to access objects stored in Amazon S3 buckets, but you must configure your Amazon S3 account and credentials through AWS and the AWS command line interface (CLI). -## Prerequisites +## Prerequisites {#prerequisites-amazon} - The AWS CLI. 
See [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html). - AWS credentials. See [Configuring the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html). - - -## Installation +## Installation {#installation-amazon} Python interacts with AWS through the `boto3` library. GX Core uses the library in the background when working with AWS. Although you won't use `boto3` directly, you must install it in your Python environment. diff --git a/docs/docusaurus/docs/core/set_up_a_gx_environment/_install_additional_dependencies/_azure_blob_storage.md b/docs/docusaurus/docs/core/set_up_a_gx_environment/_install_additional_dependencies/_azure_blob_storage.md index 013eabd73b0b..0cf1d7dd3d7b 100644 --- a/docs/docusaurus/docs/core/set_up_a_gx_environment/_install_additional_dependencies/_azure_blob_storage.md +++ b/docs/docusaurus/docs/core/set_up_a_gx_environment/_install_additional_dependencies/_azure_blob_storage.md @@ -5,14 +5,14 @@ import InfoUsingAVirtualEnvironment from '../../_core_components/admonitions/_if Azure Blob Storage stores unstructured data on the Microsoft cloud data storage platform. To validate Azure Blob Storage data with GX Core you install additional Python libraries and define a connection string. -## Prerequisites +## Prerequisites {#prerequisites-azure} - An [Azure Storage account](https://docs.microsoft.com/en-us/azure/storage). - [Azure storage account access keys](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-keys-manage?tabs=azure-portal). - - -## Installation +## Installation {#installation-azure} 1. Install the Python dependencies for Azure Blob Storage support.
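After the dependencies are installed, GX reads the connection string when you configure an Azure Blob Storage Data Source. One common approach is an environment variable; the variable name `AZURE_STORAGE_CONNECTION_STRING` follows the Azure SDK convention, and the placeholder values are assumptions to replace with your account's access key:

```python
import os

# Hypothetical placeholders; real values come from your storage
# account's access keys in the Azure portal.
os.environ["AZURE_STORAGE_CONNECTION_STRING"] = (
    "DefaultEndpointsProtocol=https;"
    "AccountName=<storage-account-name>;"
    "AccountKey=<storage-account-key>"
)
```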
diff --git a/docs/docusaurus/docs/core/set_up_a_gx_environment/_install_additional_dependencies/_google_cloud_platform.md b/docs/docusaurus/docs/core/set_up_a_gx_environment/_install_additional_dependencies/_google_cloud_platform.md index 7bbb167ba0f4..06b2e0c07da2 100644 --- a/docs/docusaurus/docs/core/set_up_a_gx_environment/_install_additional_dependencies/_google_cloud_platform.md +++ b/docs/docusaurus/docs/core/set_up_a_gx_environment/_install_additional_dependencies/_google_cloud_platform.md @@ -5,7 +5,7 @@ import InfoUsingAVirtualEnvironment from '../../_core_components/admonitions/_if To validate Google Cloud Platform (GCP) data with GX Core, you create your GX Python environment, configure your GCP credentials, and install GX Core locally with the additional dependencies to support GCP. -## Prerequisites +## Prerequisites {#prerequisites-cloud} - A [GCP service account](https://cloud.google.com/iam/docs/service-account-overview) with permissions to access GCP resources and storage Objects. - The `GOOGLE_APPLICATION_CREDENTIALS` environment variable is set. See the Google documentation [Set up Application Default Credentials](https://cloud.google.com/docs/authentication/provide-credentials-adc). @@ -13,7 +13,7 @@ To validate Google Cloud Platform (GCP) data with GX Core, you create your GX Py - - -## Installation +## Installation {#installation-cloud} 1. Ensure your GCP credentials are correctly configured. 
This process includes: diff --git a/docs/docusaurus/docs/core/set_up_a_gx_environment/_install_additional_dependencies/_spark.md b/docs/docusaurus/docs/core/set_up_a_gx_environment/_install_additional_dependencies/_spark.md index e14249209e6c..5a234f650a0e 100644 --- a/docs/docusaurus/docs/core/set_up_a_gx_environment/_install_additional_dependencies/_spark.md +++ b/docs/docusaurus/docs/core/set_up_a_gx_environment/_install_additional_dependencies/_spark.md @@ -5,12 +5,12 @@ import InfoUsingAVirtualEnvironment from '../../_core_components/admonitions/_if To validate data while using Spark to read from dataframes or file formats such as `.csv` and `.parquet` with GX Core, you create your GX Python environment, install GX Core locally, and then configure the necessary dependencies. -## Prerequisites +## Prerequisites {#prerequisites-spark} - - -## Installation +## Installation {#installation-spark} 1. Optional. Activate your virtual environment. diff --git a/docs/docusaurus/docs/core/set_up_a_gx_environment/_install_additional_dependencies/_sql.md b/docs/docusaurus/docs/core/set_up_a_gx_environment/_install_additional_dependencies/_sql.md index d48b9588f8c2..842847e6c614 100644 --- a/docs/docusaurus/docs/core/set_up_a_gx_environment/_install_additional_dependencies/_sql.md +++ b/docs/docusaurus/docs/core/set_up_a_gx_environment/_install_additional_dependencies/_sql.md @@ -6,12 +6,12 @@ import SqlDialectInstallationCommands from './_sql_dialect_installation_commands To validate data stored on SQL databases with GX Core, you create your GX Python environment, install GX Core locally, and then configure the necessary dependencies. -## Prerequisites +## Prerequisites {#prerequisites-sql} - - -## Installation +## Installation {#installation-sql} 1. Run the pip command to install the dependencies for your data's SQL dialect. 
diff --git a/docs/docusaurus/docs/core/set_up_a_gx_environment/_install_gx/_databricks_installation.md b/docs/docusaurus/docs/core/set_up_a_gx_environment/_install_gx/_databricks_installation.md index 240daedb26a7..98a1f9c5dfb9 100644 --- a/docs/docusaurus/docs/core/set_up_a_gx_environment/_install_gx/_databricks_installation.md +++ b/docs/docusaurus/docs/core/set_up_a_gx_environment/_install_gx/_databricks_installation.md @@ -6,12 +6,12 @@ To avoid configuring external resources, you'll use the [Databricks File System DBFS is a distributed file system mounted in a Databricks workspace and available on Databricks clusters. Files on DBFS can be written and read as if they were on a local filesystem by adding the /dbfs/ prefix to the path. It also persists in object storage, so you won’t lose data after terminating a cluster. See the Databricks documentation for best practices, including mounting object stores. -### Additional prerequisites +### Additional prerequisites {#additional-prerequisites-databricks} - A complete Databricks setup, including a running Databricks cluster with an attached notebook - Access to [DBFS](https://docs.databricks.com/dbfs/index.html) -### Installation and setup +### Installation and setup {#installation-setup-databricks} 1. Run the following command in your notebook to install GX as a notebook-scoped library: diff --git a/docs/docusaurus/docs/core/set_up_a_gx_environment/_install_gx/_emr_spark_installation.md b/docs/docusaurus/docs/core/set_up_a_gx_environment/_install_gx/_emr_spark_installation.md index 2a40ec6c86ac..c6f597c57240 100644 --- a/docs/docusaurus/docs/core/set_up_a_gx_environment/_install_gx/_emr_spark_installation.md +++ b/docs/docusaurus/docs/core/set_up_a_gx_environment/_install_gx/_emr_spark_installation.md @@ -1,11 +1,11 @@ Use the information provided here to install GX on an EMR Spark cluster and instantiate a Data Context without a full configuration directory. 
-### Additional prerequisites +### Additional prerequisites {#additional-prerequisites-spark} - An EMR Spark cluster. - Access to the EMR Spark notebook. -### Installation and setup +### Installation and setup {#installation-setup-spark} 1. To install GX on your EMR Spark cluster, copy this code snippet into a cell in your EMR Spark notebook and then run it: diff --git a/docs/docusaurus/docs/core/set_up_a_gx_environment/_install_gx/_gx_cloud_installation.md b/docs/docusaurus/docs/core/set_up_a_gx_environment/_install_gx/_gx_cloud_installation.md index 166a00df75a4..2b163ae83306 100644 --- a/docs/docusaurus/docs/core/set_up_a_gx_environment/_install_gx/_gx_cloud_installation.md +++ b/docs/docusaurus/docs/core/set_up_a_gx_environment/_install_gx/_gx_cloud_installation.md @@ -3,7 +3,7 @@ import GxData from '../../_core_components/_data.jsx' GX Cloud provides a web interface for using GX to validate your data without creating and running complex Python code. However, GX Core can connect to a GX Cloud account if you want to customize or automate your workflows through Python scripts. -### Installation and setup +### Installation and setup {#installation-setup-cloud} To deploy a GX Agent, which serves as an intermediary between GX Cloud's interface and your organization's data stores, see [Connect GX Cloud](/cloud/connect/connect_lp.md). The GX Agent serves all GX Cloud users within your organization. If a GX Agent has already been deployed for your organization, you can use the GX Cloud online application without further installation or setup.
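To connect GX Core to a GX Cloud account from a Python script, the usual pattern is to supply credentials through environment variables and then request a Cloud Data Context. A sketch (the placeholder values are hypothetical; generate real ones in GX Cloud):

```python
import os

# Hypothetical placeholders for credentials generated in GX Cloud.
os.environ["GX_CLOUD_ORGANIZATION_ID"] = "<your-organization-id>"
os.environ["GX_CLOUD_ACCESS_TOKEN"] = "<your-access-token>"

# With the variables set, a Cloud Data Context can be requested:
# import great_expectations as gx
# context = gx.get_context(mode="cloud")
```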
diff --git a/docs/docusaurus/docs/core/set_up_a_gx_environment/_install_gx/_local_installation.md b/docs/docusaurus/docs/core/set_up_a_gx_environment/_install_gx/_local_installation.md index b003f68d2108..ee59866f6632 100644 --- a/docs/docusaurus/docs/core/set_up_a_gx_environment/_install_gx/_local_installation.md +++ b/docs/docusaurus/docs/core/set_up_a_gx_environment/_install_gx/_local_installation.md @@ -3,7 +3,7 @@ import GxData from '../../_core_components/_data.jsx' GX Core is a Python library and as such can be used with a local Python installation to access the functionality of GX through Python scripts. -### Installation and setup +### Installation and setup {#installation-setup-local} 1. Optional. Activate your virtual environment. diff --git a/docs/docusaurus/docs/core/trigger_actions_based_on_results/run_a_checkpoint.md b/docs/docusaurus/docs/core/trigger_actions_based_on_results/run_a_checkpoint.md index 2aed12725620..35cfcef3f42e 100644 --- a/docs/docusaurus/docs/core/trigger_actions_based_on_results/run_a_checkpoint.md +++ b/docs/docusaurus/docs/core/trigger_actions_based_on_results/run_a_checkpoint.md @@ -13,7 +13,8 @@ Running a Checkpoint will cause it to validate all of its Validation Definitions At runtime, a Checkpoint can take in a `batch_parameters` dictionary that selects the Batch to validate from each Validation Definition. A Checkpoint will also accept an `expectation_parameters` dictionary that provides values for the parameters of any Expectations that have been configured to accept parameters at runtime. -

Prerequisites

+## Prerequisites + - . - . - .
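The two runtime dictionaries described above can be sketched as follows (the keys shown are hypothetical; they must match your Batch Definition's partitioning and your Expectations' parameter names):

```python
# Selects which Batch each Validation Definition validates, e.g. for a
# monthly Batch Definition keyed on year and month.
batch_parameters = {"year": 2024, "month": 1}

# Supplies values for Expectations configured with runtime parameters.
expectation_parameters = {"max_fare": 100.0}

# A Checkpoint would then be run as (sketch):
# results = checkpoint.run(
#     batch_parameters=batch_parameters,
#     expectation_parameters=expectation_parameters,
# )
```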