
Commit

This branch was auto-updated!
github-actions[bot] authored Jan 23, 2025
2 parents da13496 + 6f6b242 commit 217ade2
Showing 13 changed files with 303 additions and 74 deletions.
6 changes: 2 additions & 4 deletions website/docs/docs/build/metricflow-commands.md
@@ -536,14 +536,12 @@ limit 10
</TabItem>
<TabItem value="eg7" label=" Export to CSV">
Add the `--csv file_name.csv` flag to export the results of your query to a csv.
Add the `--csv file_name.csv` flag to export the results of your query to a CSV file. The `--csv` flag is available in dbt Core only and isn't supported in dbt Cloud.
**Query**
```bash
# In dbt Cloud
dbt sl query --metrics order_total --group-by metric_time,is_food_order --limit 10 --order-by -metric_time --where "is_food_order = True" --start-time '2017-08-22' --end-time '2017-08-27' --csv query_example.csv
# In dbt Core
mf query --metrics order_total --group-by metric_time,is_food_order --limit 10 --order-by -metric_time --where "is_food_order = True" --start-time '2017-08-22' --end-time '2017-08-27' --csv query_example.csv
```
@@ -1,16 +1,18 @@
---
title: "Change your dbt Cloud Theme"
id: dark-mode
title: "Change your dbt Cloud theme"
id: change-your-dbt-cloud-theme
description: "Learn about theme switching in dbt Cloud"
sidebar_label: dbt Cloud dark mode
image: /img/docs/dbt-cloud/using-dbt-cloud/dark-mode.png
sidebar_label: Change your dbt Cloud theme
image: /img/docs/dbt-cloud/using-dbt-cloud/light-vs-dark.png
---

# Change your dbt Cloud theme <Lifecycle status="preview" />

dbt Cloud supports **Light mode** (default), **Dark mode**, and **System mode** (respects your browser's theme for light or dark mode) under the **Theme** section of your user profile. You can seamlessly switch between these modes directly from the profile menu, customizing your viewing experience.

Your selected theme is stored in your user profile, ensuring a consistent experience across dbt Cloud.

Theme selection applies across all areas of dbt Cloud, including the [IDE](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud), [dbt Explorer](/docs/collaborate/explore-projects), [environments](/docs/environments-in-dbt), [jobs](/docs/deploy/jobs), and more. Learn more about customizing themes in [Enable dark mode in dbt Cloud](/docs/cloud/about-cloud/dark-mode#enable-dark-mode-in-dbt-cloud).
Theme selection applies across all areas of dbt Cloud, including the [IDE](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud), [dbt Explorer](/docs/collaborate/explore-projects), [environments](/docs/environments-in-dbt), [jobs](/docs/deploy/jobs), and more. Learn more about customizing themes in [Change themes in dbt Cloud](/docs/cloud/about-cloud/change-your-dbt-cloud-theme#change-themes-in-dbt-cloud).

## Prerequisites

3 changes: 2 additions & 1 deletion website/docs/docs/community-adapters.md
@@ -15,4 +15,5 @@ Community adapters are adapter plugins contributed and maintained by members of
| [MySQL](/docs/core/connect-data-platform/mysql-setup) | [RisingWave](/docs/core/connect-data-platform/risingwave-setup) | [Rockset](/docs/core/connect-data-platform/rockset-setup) |
| [SingleStore](/docs/core/connect-data-platform/singlestore-setup)| [SQL Server & Azure SQL](/docs/core/connect-data-platform/mssql-setup) | [SQLite](/docs/core/connect-data-platform/sqlite-setup) |
| [Starrocks](/docs/core/connect-data-platform/starrocks-setup) | [TiDB](/docs/core/connect-data-platform/tidb-setup)| [TimescaleDB](https://dbt-timescaledb.debruyn.dev/) |
| [Upsolver](/docs/core/connect-data-platform/upsolver-setup) | [Vertica](/docs/core/connect-data-platform/vertica-setup) | [Yellowbrick](/docs/core/connect-data-platform/yellowbrick-setup) |
| [Upsolver](/docs/core/connect-data-platform/upsolver-setup) | [Vertica](/docs/core/connect-data-platform/vertica-setup) | [Watsonx-Presto](/docs/core/connect-data-platform/watsonx-presto-setup) |
| [Yellowbrick](/docs/core/connect-data-platform/yellowbrick-setup) |
42 changes: 30 additions & 12 deletions website/docs/docs/core/connect-data-platform/teradata-setup.md
@@ -38,17 +38,19 @@ import SetUpPages from '/snippets/_setup-pages-intro.md';
| 1.6.x |||||
| 1.7.x |||||
| 1.8.x |||||
| 1.9.x |||||

## dbt dependent packages version compatibility

| dbt-teradata | dbt-core | dbt-teradata-util | dbt-util |
|--------------|------------|-------------------|----------------|
| 1.2.x | 1.2.x | 0.1.0 | 0.9.x or below |
| 1.6.7 | 1.6.7 | 1.1.1 | 1.1.1 |
| 1.7.x | 1.7.x | 1.1.1 | 1.1.1 |
| 1.8.x | 1.8.x | 1.1.1 | 1.1.1 |
| 1.8.x | 1.8.x | 1.2.0 | 1.2.0 |
| 1.8.x | 1.8.x | 1.3.0 | 1.3.0 |
| dbt-teradata | dbt-core | dbt-teradata-util | dbt-util |
|--------------|----------|-------------------|----------------|
| 1.2.x | 1.2.x | 0.1.0 | 0.9.x or below |
| 1.6.7 | 1.6.7 | 1.1.1 | 1.1.1 |
| 1.7.x | 1.7.x | 1.1.1 | 1.1.1 |
| 1.8.x | 1.8.x | 1.1.1 | 1.1.1 |
| 1.8.x | 1.8.x | 1.2.0 | 1.2.0 |
| 1.8.x | 1.8.x | 1.3.0 | 1.3.0 |
| 1.9.x | 1.9.x | 1.3.0 | 1.3.0 |


### Connecting to Teradata
Expand Down Expand Up @@ -95,7 +97,6 @@ Parameter | Default | Type | Description
`browser_tab_timeout` | `"5"` | quoted integer | Specifies the number of seconds to wait before closing the browser tab after Browser Authentication is completed. The default is 5 seconds. The behavior is under the browser's control, and not all browsers support automatic closing of browser tabs.
`browser_timeout` | `"180"` | quoted integer | Specifies the number of seconds that the driver will wait for Browser Authentication to complete. The default is 180 seconds (3 minutes).
`column_name` | `"false"` | quoted boolean | Controls the behavior of cursor `.description` sequence `name` items. Equivalent to the Teradata JDBC Driver `COLUMN_NAME` connection parameter. False specifies that a cursor `.description` sequence `name` item provides the AS-clause name if available, or the column name if available, or the column title. True specifies that a cursor `.description` sequence `name` item provides the column name if available, but has no effect when StatementInfo parcel support is unavailable.
`connect_failure_ttl` | `"0"` | quoted integer | Specifies the time-to-live in seconds to remember the most recent connection failure for each IP address/port combination. The driver subsequently skips connection attempts to that IP address/port for the duration of the time-to-live. The default value of zero disables this feature. The recommended value is half the database restart time. Equivalent to the Teradata JDBC Driver `CONNECT_FAILURE_TTL` connection parameter.
`connect_timeout` | `"10000"` | quoted integer | Specifies the timeout in milliseconds for establishing a TCP socket connection. Specify 0 for no timeout. The default is 10 seconds (10000 milliseconds).
`cop` | `"true"` | quoted boolean | Specifies whether COP Discovery is performed. Equivalent to the Teradata JDBC Driver `COP` connection parameter.
`coplast` | `"false"` | quoted boolean | Specifies how COP Discovery determines the last COP hostname. Equivalent to the Teradata JDBC Driver `COPLAST` connection parameter. When `coplast` is `false` or omitted, or COP Discovery is turned off, then no DNS lookup occurs for the coplast hostname. When `coplast` is `true`, and COP Discovery is turned on, then a DNS lookup occurs for a coplast hostname.
@@ -110,7 +111,7 @@ Parameter | Default | Type | Description
`log` | `"0"` | quoted integer | Controls debug logging. Somewhat equivalent to the Teradata JDBC Driver `LOG` connection parameter. This parameter's behavior is subject to change in the future. This parameter's value is currently defined as an integer in which the 1-bit governs function and method tracing, the 2-bit governs debug logging, the 4-bit governs transmit and receive message hex dumps, and the 8-bit governs timing. Compose the value by adding together 1, 2, 4, and/or 8.
`logdata` | | string | Specifies extra data for the chosen logon authentication method. Equivalent to the Teradata JDBC Driver `LOGDATA` connection parameter.
`logon_timeout` | `"0"` | quoted integer | Specifies the logon timeout in seconds. Zero means no timeout.
`logmech` | `"TD2"` | string | Specifies the logon authentication method. Equivalent to the Teradata JDBC Driver `LOGMECH` connection parameter. Possible values are `TD2` (the default), `JWT`, `LDAP`, `KRB5` for Kerberos, or `TDNEGO`.
`logmech` | `"TD2"` | string | Specifies the logon authentication method. Equivalent to the Teradata JDBC Driver `LOGMECH` connection parameter. Possible values are `TD2` (the default), `JWT`, `LDAP`, `BROWSER`, `KRB5` for Kerberos, or `TDNEGO`.
`max_message_body` | `"2097000"` | quoted integer | Specifies the maximum Response Message size in bytes. Equivalent to the Teradata JDBC Driver `MAX_MESSAGE_BODY` connection parameter.
`partition` | `"DBC/SQL"` | string | Specifies the database partition. Equivalent to the Teradata JDBC Driver `PARTITION` connection parameter.
`request_timeout` | `"0"` | quoted integer | Specifies the timeout for executing each SQL request. Zero means no timeout.
@@ -210,7 +211,9 @@ For using cross-DB macros, teradata-utils as a macro namespace will not be used,

##### <a name="hash"></a>hash

`Hash` macro needs an `md5` function implementation. Teradata doesn't support `md5` natively. You need to install a User Defined Function (UDF):
The `hash` macro needs an `md5` function implementation. Teradata doesn't support `md5` natively, so you need to install a user-defined function (UDF). You can optionally specify the `md5_udf` [variable](/docs/build/project-variables).

If not specified, the code defaults to using `GLOBAL_FUNCTIONS.hash_md5`. Follow these instructions to install the custom UDF:
1. Download the md5 UDF implementation from Teradata (registration required): https://downloads.teradata.com/download/extensibility/md5-message-digest-udf.
1. Unzip the package and go to the `src` directory.
1. Start up `bteq` and connect to your database.
@@ -228,6 +231,12 @@ For using cross-DB macros, teradata-utils as a macro namespace will not be used,
```sql
GRANT EXECUTE FUNCTION ON GLOBAL_FUNCTIONS TO PUBLIC WITH GRANT OPTION;
```
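As a quick sanity check (a sketch assuming the default install location from the steps above), you can call the UDF directly from any SQL client:

```sql
-- Returns the md5 digest of the input string if the UDF installed correctly
SELECT GLOBAL_FUNCTIONS.hash_md5('abc');
```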
To use a custom hash function, add the `md5_udf` variable to your `dbt_project.yml`:
```yaml
vars:
md5_udf: Custom_database_name.hash_method_function
```
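For illustration, a model can then call the cross-database `hash` macro, which resolves to the configured UDF on Teradata (the model and column names here are hypothetical):

```sql
-- models/customers_hashed.sql (illustrative)
select
    customer_id,
    {{ dbt.hash('email') }} as email_hash
from {{ ref('stg_customers') }}
```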

##### <a name="last_day"></a>last_day

`last_day` in `teradata_utils`, unlike the corresponding macro in `dbt_utils`, doesn't support `quarter` datepart.
Expand All @@ -241,6 +250,15 @@ dbt-teradata 1.8.0 and later versions support unit tests, enabling you to valida

## Limitations

### Browser authentication

* When running a dbt job with `logmech` set to `BROWSER`, the initial authentication opens a browser window where you must enter your username and password.
* After authentication, this window remains open, requiring you to manually switch back to the dbt console.
* For every subsequent connection, a new browser tab briefly opens, displays the message "TERADATA BROWSER AUTHENTICATION COMPLETED," and silently reuses the existing session. Because focus stays on the browser window, you need to switch back to the dbt console each time.
* This behavior is the default functionality of the teradatasql driver and can't be avoided at this time.
* To prevent session expiration and having to re-enter credentials, keep the authentication browser window open until the job completes.
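A minimal profile sketch for browser authentication (all values are placeholders; `logmech: BROWSER` follows the parameter table above):

```yaml
my_teradata_project:
  target: dev
  outputs:
    dev:
      type: teradata
      host: mydb.example.com      # placeholder hostname
      user: my_user               # placeholder username
      schema: dbt_dev             # default schema for materializations
      logmech: BROWSER            # opens a browser window for interactive sign-on
      threads: 1
```

Keep the browser window that opens on the first connection open for the duration of the run, as noted above.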

### Transaction mode
Both ANSI and TERA modes are now supported in dbt-teradata. Support for TERA mode was introduced in dbt-teradata 1.7.1 as an initial implementation.

@@ -254,4 +272,4 @@ The adapter was originally created by [Doug Beatty](https://github.com/dbeatty10)

## License

The adapter is published using Apache-2.0 License. Refer to the [terms and conditions](https://github.com/dbt-labs/dbt-core/blob/main/License.md) to understand items such as creating derivative work and the support model.
The adapter is published under the Apache-2.0 license. Refer to the [terms and conditions](https://github.com/dbt-labs/dbt-core/blob/main/License.md) to understand items such as creating derivative work and the support model.
103 changes: 103 additions & 0 deletions website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md
@@ -0,0 +1,103 @@
---
title: "IBM watsonx.data Presto setup"
description: "Read this guide to learn about the IBM watsonx.data Presto setup in dbt."
id: "watsonx-presto-setup"
meta:
maintained_by: IBM
authors: Karnati Naga Vivek, Hariharan Ashokan, Biju Palliyath, Gopikrishnan Varadarajulu, Rohan Pednekar
github_repo: 'IBM/dbt-watsonx-presto'
pypi_package: 'dbt-watsonx-presto'
min_core_version: v1.8.0
cloud_support: 'Not Supported'
min_supported_version: 'n/a'
slack_channel_name:
slack_channel_link:
platform_name: IBM watsonx.data
config_page: /reference/resource-configs/watsonx-presto-config
---

The dbt-watsonx-presto adapter allows you to use dbt to transform and manage data on IBM watsonx.data Presto (Java), leveraging its distributed SQL query engine capabilities. Before proceeding, ensure you have the following:
<ul>
<li>An active IBM watsonx.data Presto (Java) engine with connection details (host, port, catalog, schema) in SaaS/Software.</li>
<li>Authentication credentials: username and password/API key.</li>
<li>For watsonx.data instances, SSL verification is required for secure connections. If the instance host uses HTTPS, there is no need to specify the SSL certificate parameter. However, if the instance host uses an unsecured HTTP connection, ensure you provide the path to the SSL certificate file.</li>
</ul>
Refer to [Configuring dbt-watsonx-presto](https://www.ibm.com/docs/en/watsonx/watsonxdata/2.1.x?topic=presto-configuration-setting-up-your-profile) for guidance on obtaining and organizing these details.


import SetUpPages from '/snippets/_setup-pages-intro.md';

<SetUpPages meta={frontMatter.meta}/>


## Connecting to IBM watsonx.data Presto

To connect dbt with watsonx.data Presto (Java), configure a profile in your `profiles.yml` file, located in the `.dbt/` directory of your home folder. The following is an example configuration for connecting to IBM watsonx.data SaaS and Software instances:

<File name='~/.dbt/profiles.yml'>

```yaml
my_project:
outputs:
software:
type: presto
method: BasicAuth
user: [user]
password: [password]
host: [hostname]
database: [catalog name]
schema: [your dbt schema]
port: [port number]
threads: [1 or more]
ssl_verify: path/to/certificate

saas:
type: presto
method: BasicAuth
user: [user]
password: [api_key]
host: [hostname]
database: [catalog name]
schema: [your dbt schema]
port: [port number]
threads: [1 or more]

target: software

```

</File>

## Host parameters

The following profile fields are required to configure watsonx.data Presto (Java) connections. For IBM watsonx.data SaaS or Software instances, you can get the `hostname` and `port` details by clicking **View connect details** on the Presto (Java) engine details page.

| Option | Required/Optional | Description | Example |
| --------- | ------- | ------- | ----------- |
| `method` | Required | Specifies the authentication method for secure connections. Use `BasicAuth` when connecting to IBM watsonx.data SaaS or Software instances. | `BasicAuth` |
| `user` | Required | Username or email address for authentication. | `user` |
| `password`| Required | Password or API key for authentication. | `password` |
| `host` | Required | Hostname for connecting to Presto. | `127.0.0.1` |
| `database`| Required | The catalog name in your Presto instance. | `Analytics` |
| `schema` | Required | The schema name within your Presto instance catalog. | `my_schema` |
| `port` | Required | The port for connecting to Presto. | `443` |
| `ssl_verify` | Optional (default: **true**) | Specifies the path to the SSL certificate or a boolean value. The SSL certificate path is required if the watsonx.data instance uses an unsecured HTTP connection. | `path/to/certificate` or `true` |


### Schemas and databases
When selecting the catalog and the schema, make sure the user has read and write access to both. This selection does not limit your ability to query the catalog; instead, it serves as the default location where tables and views are materialized. In addition, the Presto connector used in the catalog must support creating tables. This default can be changed later from within your dbt project.

### SSL verification
- If the Presto instance uses an unsecured HTTP connection, you must set `ssl_verify` to the path of the SSL certificate file.
- If the instance uses `HTTPS`, this parameter is not required and can be omitted.

## Additional parameters

The following profile fields are optional. They let you configure your instance session and dbt for your connection.


| Profile field | Description | Example |
| ----------------------------- | ----------------------------------------------------------------------------------------------------------- | ------------------------------------ |
| `threads` | How many threads dbt should use (default is `1`) | `8` |
| `http_headers` | HTTP headers to send alongside requests to Presto, specified as a yaml dictionary of (header, value) pairs. | `X-Presto-Routing-Group: my-instance` |
| `http_scheme` | The HTTP scheme to use for requests to Presto (default: `http`, or `https` if `BasicAuth`) | `https` or `http` |
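For illustration, the optional fields above can be combined with the required ones in a profile like this (hostname, credentials, and the routing-group header are placeholders):

```yaml
software:
  type: presto
  method: BasicAuth
  user: my_user                     # placeholder
  password: my_password             # placeholder
  host: presto.example.com          # placeholder
  database: analytics
  schema: my_schema
  port: 443
  threads: 8                        # run up to 8 models in parallel
  http_scheme: https
  http_headers:
    X-Presto-Routing-Group: my-instance   # example header
```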
