[Bug Fix] Update Quickstart.yml casing #61

Open · wants to merge 6 commits into base: main
25 changes: 8 additions & 17 deletions .quickstart/quickstart.yml
@@ -5,31 +5,22 @@ dbt_versions: ">=1.3.0 <2.0.0"

table_variables:
  salesforce__user_role_enabled:
-   - user_role
+   - UserRole
  salesforce__lead_enabled:
-   - lead
+   - Lead
  salesforce__event_enabled:
-   - event
+   - Event
  salesforce__task_enabled:
-   - task
+   - Task
  salesforce__opportunity_line_item_enabled:
-   - opportunity_line_item
+   - OpportunityLineItem
  salesforce__order_enabled:
-   - order
+   - Order
  salesforce__product_2_enabled:
-   - product_2
+   - Product2

destination_configurations:
  databricks:
    dispatch:
      - macro_namespace: dbt_utils
-       search_order: [ 'spark_utils', 'dbt_utils' ]
-   public_models: [
-     "salesforce__contact_enhanced",
-     "salesforce__daily_activity",
-     "salesforce__manager_performance",
-     "salesforce__opportunity_enhanced",
-     "salesforce__opportunity_line_item_enhanced",
-     "salesforce__owner_performance",
-     "salesforce__sales_snapshot"
-   ]
+       search_order: [ 'spark_utils', 'dbt_utils' ]
11 changes: 11 additions & 0 deletions CHANGELOG.md
@@ -1,3 +1,14 @@
+# dbt_salesforce v1.1.2
+This release includes the following updates:
+
+## Quickstart Fixes
+- Fixed casing in `quickstart.yml` to match the default options in the Salesforce connector schema tab: source table references are updated from snake case to camel case (for example, `user_role` is now `UserRole`). ([#61](https://github.com/fivetran/dbt_salesforce/pull/61))
+- Quickstart customers can now properly reference the source tables and enable or disable the models as they wish.
+
+## Documentation
+- Added Quickstart model counts to README. ([#59](https://github.com/fivetran/dbt_salesforce/pull/59))
+- Corrected references to connectors and connections in the README. ([#59](https://github.com/fivetran/dbt_salesforce/pull/59))
+
# dbt_salesforce v1.1.1
[PR #56](https://github.com/fivetran/dbt_salesforce/pull/56) includes the following updates:
## Bugfix
30 changes: 16 additions & 14 deletions README.md
@@ -45,13 +45,15 @@ You can also refer to the table below for a detailed view of all tables materialized
| [salesforce__opportunity_daily_history](https://fivetran.github.io/dbt_salesforce/#!/model/model.salesforce.salesforce__opportunity_daily_history) | Each record is a daily record in an opportunity, starting with its first active date and updating up toward either the current date (if still active) or its last active date. | No

**Note**: For Quickstart Data Model users only: in addition to the Quickstart-compatible output models above, you will also receive models in your transformation list that replicate **all** of your Salesforce objects, with the relevant formula fields included in the generated output models.
+### Materialized Models
+Each Quickstart transformation job run materializes 23 models if all components of this data model are enabled. This count includes all staging, intermediate, and final models materialized as `view`, `table`, or `incremental`.
<!--section-end-->

## How do I use the dbt package?
### Step 1: Pre-Requisites
- You will need to ensure you have the following before leveraging the dbt package.
- - **Connector**: Have the Fivetran Salesforce connector syncing data into your warehouse.
- - **Database support**: This package has been tested on **BigQuery**, **Snowflake**, **Redshift**, **Databricks**, and **Postgres**. Ensure you are using one of these supported databases.
+ To use this dbt package, you must have the following:
+ - At least one Fivetran Salesforce connection syncing data into your destination.
+ - A BigQuery, Snowflake, Redshift, PostgreSQL, or Databricks destination.

#### Databricks Dispatch Configuration
If you are using a Databricks destination with this package, you will need to add the below (or a variation of the below) dispatch configuration within your `dbt_project.yml`. This is required for the package to search for macros in the `dbt-labs/spark_utils` package and then the `dbt-labs/dbt_utils` package, in that order.
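The README's example block is collapsed in this diff, but a minimal sketch, assuming it mirrors the `dispatch` entry in the `quickstart.yml` above, looks like this:

```yml
# dbt_project.yml
# Sketch only: assumes the same shape as the dispatch entry in quickstart.yml above.
dispatch:
  - macro_namespace: dbt_utils
    search_order: ['spark_utils', 'dbt_utils']
```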
@@ -91,7 +93,7 @@ vars:
```

#### Disabling Models
- It is possible that your Salesforce connector does not sync every table that this package expects. If your syncs exclude certain tables, it is because you either don't use that functionality in Salesforce or actively excluded some tables from your syncs.
+ Your Salesforce connection may not sync every table that this package expects. If your syncs exclude certain tables, it is because you either don't use that functionality in Salesforce or actively excluded some tables from your syncs.

To disable the corresponding functionality in this package, add the corresponding variable(s), listed below, to your `dbt_project.yml`. By default (that is, if none of these variables are added), all variables are assumed to be true. Add variables only for the tables you would like to disable:
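For example, a minimal sketch using variable names taken from the `quickstart.yml` diff above (which variables you set, if any, depends on your own syncs):

```yml
# dbt_project.yml
# Disable only the models for tables you do not sync; all variables default to true.
vars:
  salesforce__user_role_enabled: false
  salesforce__order_enabled: false
  salesforce__product_2_enabled: false
```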

@@ -113,31 +115,31 @@ If you do not have the `OPPORTUNITY` table, there is no variable to turn off opportunity functionality.
However, you may still find value in this package without opportunity data, specifically in the `salesforce__contact_enhanced`, `salesforce__daily_activity`, `salesforce__account_daily_history` and `salesforce__contact_daily_history` (if using History Mode) end models.

For this use case, to ensure the package runs successfully, we recommend leveraging this [Fivetran Feature](https://fivetran.com/docs/using-fivetran/features#syncingemptytablesandcolumns) to create an empty `opportunity` table. To do so, follow these steps:
- 1. Navigate to your Salesforce connector in the "Connectors" tab within the Fivetran UI.
+ 1. Navigate to your Salesforce connection in the "Connectors" tab within the Fivetran UI.
2. Click on the "Schema" tab.
3. Scroll down to `Opportunity` and click on its checkbox to add it into your schema.
4. Click "Save Changes" in the upper righthand corner of the screen.
- 5. Either click "Resync" for the `Opportunity` table specifically or wait for your next connector-level sync.
+ 5. Either click "Resync" for the `Opportunity` table specifically or wait for your next connection-level sync.

> Note that all other end models (`salesforce__opportunity_enhanced`, `salesforce__opportunity_line_item_enhanced`, `salesforce__manager_performance`, `salesforce__owner_performance`, `salesforce__sales_snapshot`, and `salesforce__opportunity_daily_history`) will still materialize after a blanket `dbt run` but will be largely empty/null.

### (Optional) Step 4: Utilizing Salesforce History Mode records
- If you have Salesforce [History Mode](https://fivetran.com/docs/using-fivetran/features#historymode) enabled for your connector, we now include support for the `account`, `contact`, and `opportunity` tables directly. These staging models from our `dbt_salesforce_source` package flow into our daily history models. This will allow you access to your historical data for these tables while taking advantage of incremental loads to help with compute.
+ If you have Salesforce [History Mode](https://fivetran.com/docs/using-fivetran/features#historymode) enabled for your connection, we now include support for the `account`, `contact`, and `opportunity` tables directly. These staging models from our `dbt_salesforce_source` package flow into our daily history models. This will allow you access to your historical data for these tables while taking advantage of incremental loads to help with compute.

#### IMPORTANT: How To Update Your History Models
- To ensure maximum value for these history mode models and avoid messy historical data that could come with picking and choosing which fields you bring in, **all fields in your Salesforce history mode connector are being synced into your end staging models**. That means all custom fields you picked to sync are being brought in to the final models. [See our DECISIONLOG for more details on why we are bringing in all fields](https://github.com/fivetran/dbt_salesforce_source/blob/main/DECISIONLOG.md).
+ To ensure maximum value for these history mode models and avoid messy historical data that could come with picking and choosing which fields you bring in, **all fields in your Salesforce history mode connection are being synced into your end staging models**. That means all custom fields you picked to sync are being brought in to the final models. [See our DECISIONLOG for more details on why we are bringing in all fields](https://github.com/fivetran/dbt_salesforce_source/blob/main/DECISIONLOG.md).

To update the history mode models, you must follow these steps:
- 1) Go to your Fivetran Salesforce History Mode connector page.
+ 1) Go to your Fivetran Salesforce History Mode connection page.
2) Update the fields that you are bringing into the model.
3) Run a `dbt run --full-refresh` on the specific staging models you've updated to bring in these fields and all the historical data available with these fields.
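For example, a sketch of step 3; the staging model names below follow the `dbt_salesforce_source` naming pattern and are assumptions, so substitute whichever staging models you actually updated:

```bash
# Rebuild the updated history staging models from scratch (model names are illustrative)
dbt run --full-refresh --select stg_salesforce__account_history stg_salesforce__contact_history
```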

- We are aware that bringing in additional fields will be very process-heavy, so we do emphasize caution in making changes to your history mode connector. It would be best to batch as many field changes as possible before executing a `--full-refresh` to save on processing.
+ We are aware that bringing in additional fields will be very process-heavy, so we do emphasize caution in making changes to your history mode connection. It would be best to batch as many field changes as possible before executing a `--full-refresh` to save on processing.

#### Configuring Your Salesforce History Mode Database and Schema Variables
- Customers leveraging the Salesforce connector generally fall into one of two categories when taking advantage of History mode. They either have one connector that is syncing non-historical records and a separate connector that syncs historical records, **or** they have one connector that is syncing historical records. We have designed this feature to support both scenarios.
+ Customers with a Salesforce connection generally fall into one of two categories when taking advantage of History mode. They either have one connection that is syncing non-historical records and a separate connection that syncs historical records, **or** they have one connection that is syncing historical records. We have designed this feature to support both scenarios.

- ##### Option 1: Two connectors, one with non-historical data and another with historical data
+ ##### Option 1: Two connections, one with non-historical data and another with historical data
If you are gathering data from both standard Salesforce and Salesforce History Mode, and your target database and schema differ as well, you will need to add an additional configuration for the history schema and database to your `dbt_project.yml`.

```yml
@@ -149,7 +151,7 @@ vars:
salesforce_history_schema: your_history_schema_name
```

- ##### Option 2: One connector being used to sync historical data
+ ##### Option 2: One connection being used to sync historical data
You may want to use only Salesforce History Mode to bring in your data. Because the Salesforce schema points to the default `salesforce` schema and database, you will want to add the following variables to your `dbt_project.yml` to point it to the `salesforce_history` equivalents.

```yml
@@ -161,7 +163,7 @@ vars:
salesforce_history_schema: your_history_schema_name
```

- **IMPORTANT**: If you utilize Option 2, you must sync the equivalent enabled tables and fields in your history mode connector that are being brought into your end reports. Examine your data lineage and the model fields within the `salesforce` folder to see which tables and fields you are using and need to bring in and sync in the history mode connector.
+ **IMPORTANT**: If you utilize Option 2, you must sync the equivalent enabled tables and fields in your history mode connection that are being brought into your end reports. Examine your data lineage and the model fields within the `salesforce` folder to see which tables and fields you are using and need to bring in and sync in the history mode connection.

#### Enabling Salesforce History Mode Models
The History Mode models can get quite expansive since they take in **ALL** historical records, so we've disabled them by default. You can enable the history models you'd like to utilize by adding the below variable configurations (see the sketch that follows) within your `dbt_project.yml` file for the equivalent models.
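The README's full block is collapsed in this diff; a minimal sketch follows. Only `salesforce__opportunity_history_enabled` is confirmed elsewhere in this PR (in `integration_tests/dbt_project.yml`); the other variable names are assumed by analogy with it:

```yml
# dbt_project.yml
vars:
  salesforce__account_history_enabled: true      # assumed name, by analogy
  salesforce__contact_history_enabled: true      # assumed name, by analogy
  salesforce__opportunity_history_enabled: true  # confirmed in this PR's integration tests
```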
2 changes: 1 addition & 1 deletion dbt_project.yml
@@ -1,6 +1,6 @@
config-version: 2
name: 'salesforce'
- version: '1.1.1'
+ version: '1.1.2'
require-dbt-version: [">=1.3.0", "<2.0.0"]
models:
salesforce:
2 changes: 1 addition & 1 deletion docs/catalog.json

Large diffs are not rendered by default.

47 changes: 37 additions & 10 deletions docs/index.html

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion docs/manifest.json

Large diffs are not rendered by default.

1 change: 0 additions & 1 deletion docs/run_results.json

This file was deleted.

10 changes: 5 additions & 5 deletions integration_tests/ci/sample.profiles.yml
@@ -16,13 +16,13 @@ integration_tests:
pass: "{{ env_var('CI_REDSHIFT_DBT_PASS') }}"
dbname: "{{ env_var('CI_REDSHIFT_DBT_DBNAME') }}"
port: 5439
-      schema: salesforce_integrations_tests_3
+      schema: salesforce_integrations_tests_4
threads: 8
bigquery:
type: bigquery
method: service-account-json
project: 'dbt-package-testing'
-      schema: salesforce_integrations_tests_3
+      schema: salesforce_integrations_tests_4
threads: 8
keyfile_json: "{{ env_var('GCLOUD_SERVICE_KEY') | as_native }}"
snowflake:
@@ -33,7 +33,7 @@
role: "{{ env_var('CI_SNOWFLAKE_DBT_ROLE') }}"
database: "{{ env_var('CI_SNOWFLAKE_DBT_DATABASE') }}"
warehouse: "{{ env_var('CI_SNOWFLAKE_DBT_WAREHOUSE') }}"
-      schema: salesforce_integrations_tests_3
+      schema: salesforce_integrations_tests_4
threads: 8
postgres:
type: postgres
@@ -42,13 +42,13 @@
pass: "{{ env_var('CI_POSTGRES_DBT_PASS') }}"
dbname: "{{ env_var('CI_POSTGRES_DBT_DBNAME') }}"
port: 5432
-      schema: salesforce_integrations_tests_3
+      schema: salesforce_integrations_tests_4
threads: 8
databricks:
catalog: "{{ env_var('CI_DATABRICKS_DBT_CATALOG') }}"
host: "{{ env_var('CI_DATABRICKS_DBT_HOST') }}"
http_path: "{{ env_var('CI_DATABRICKS_DBT_HTTP_PATH') }}"
-      schema: salesforce_integrations_tests_3
+      schema: salesforce_integrations_tests_4
threads: 8
token: "{{ env_var('CI_DATABRICKS_DBT_TOKEN') }}"
type: databricks
8 changes: 4 additions & 4 deletions integration_tests/dbt_project.yml
@@ -1,12 +1,12 @@
name: 'salesforce_integration_tests'
- version: '1.1.1'
+ version: '1.1.2'
config-version: 2

profile: 'integration_tests'

models:
+materialized: table
-   +schema: "salesforce_{{ var('directed_schema','dev') }}"
+   # +schema: "salesforce_{{ var('directed_schema','dev') }}" --Used for validation tests

vars:
# enable history models when generating docs!
@@ -15,8 +15,8 @@ vars:
# salesforce__opportunity_history_enabled: true

salesforce_source:
-     salesforce_schema: salesforce_integrations_tests_3
-     salesforce_history_schema: salesforce_integrations_tests_3
+     salesforce_schema: salesforce_integrations_tests_4
+     salesforce_history_schema: salesforce_integrations_tests_4

salesforce_account_identifier: "sf_account_data"
salesforce_opportunity_identifier: "sf_opportunity_data"
4 changes: 2 additions & 2 deletions packages.yml
@@ -1,3 +1,3 @@
packages:
-  - package: fivetran/salesforce_source
-    version: [">=1.1.0", "<1.2.0"]
+  - package: fivetran/salesforce_source
+    version: [">=1.1.0", "<1.2.0"]