Merge branch 'current' into full-refresh-update
mirnawong1 authored Jan 24, 2025
2 parents ca671af + 30430a0 commit 53cf493
Showing 21 changed files with 515 additions and 37 deletions.
150 changes: 150 additions & 0 deletions website/blog/2025-01-23-levels-of-sql-comprehension.md
@@ -0,0 +1,150 @@
---
title: "The Three Levels of SQL Comprehension: What they are and why you need to know about them"
description: "Parsers, compilers, executors, oh my! What it means when we talk about 'understanding SQL'."
slug: the-levels-of-sql-comprehension

authors: [joel_labes]

tags: [data ecosystem]
hide_table_of_contents: false

date: 2025-01-23
is_featured: true
---


Ever since [dbt Labs acquired SDF Labs last week](https://www.getdbt.com/blog/dbt-labs-acquires-sdf-labs), I've been head-down diving into their technology and making sense of it all. The main thing I knew going in was "SDF understands SQL". It's a nice pithy quote, but the specifics are *fascinating.*

For the next era of Analytics Engineering to be as transformative as the last, dbt needs to move beyond being a [string preprocessor](https://en.wikipedia.org/wiki/Preprocessor) and into fully comprehending SQL. **For the first time, SDF provides the technology necessary to make this possible.** Today we're going to dig into what SQL comprehension actually means, since it's so critical to what comes next.

<!-- truncate -->

## What is SQL comprehension?

Let’s call any tool that can look at a string of text, interpret it as SQL, and extract some meaning from it a *SQL Comprehension tool.*

Put another way, SQL Comprehension tools **recognize SQL code and deduce more information about that SQL than is present in the [tokens](https://www.postgresql.org/docs/current/sql-syntax-lexical.html) themselves**. Here’s a non-exhaustive set of behaviors and capabilities that such a tool might have for a given [dialect](https://blog.sdf.com/p/sql-dialects-and-the-tower-of-babel) of SQL:

- Identify constituent parts of a query.
- Create structured artifacts for their own use or for other tools to consume in turn.
- Check whether the SQL is valid.
- Understand what will happen when the query runs: things like what columns will be created, what datatypes they will have, and what DDL is involved.
- Execute the query and return data (unsurprisingly, your database is a tool that comprehends SQL!)
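
To make that concrete, here’s a minimal sketch of the first two capabilities using the open-source [sqlglot](https://github.com/tobymao/sqlglot) library. To be clear, this is my own illustration rather than the technology SDF uses; it just shows what deducing structure from a raw string of SQL looks like.

```python
import sqlglot
from sqlglot import exp

query = "select dateadd('day', 1, getdate()) as tomorrow"

# Interpret the raw string as SQL by parsing it into a syntax tree.
tree = sqlglot.parse_one(query, read="snowflake")

# Identify constituent parts of the query: every function call it contains...
for func in tree.find_all(exp.Func):
    print(func.sql(dialect="snowflake"))  # e.g. DATEADD('day', 1, GETDATE())

# ...and a structured artifact that other tools could consume in turn.
print(repr(tree))
```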

By building on top of tools that truly understand SQL, it is possible to create systems that are much more capable, resilient and flexible than we’ve seen to date.

## The Levels of SQL Comprehension

When you look at the capabilities above, you can imagine some of those outcomes being achievable with [one line of regex](https://github.com/joellabes/mode-dbt-exposures/blob/main/generate_yaml.py#L52) and some that are only possible if you’ve literally built a database. Given that range of possibilities, we believe that “can you comprehend SQL” is an insufficiently precise question.

A better question is “to what level can you comprehend SQL?” To that end, we have identified three levels of capability. Each level deals with a key artifact (or, more precisely, a specific "[intermediate representation](https://en.wikipedia.org/wiki/Intermediate_representation)"). And in doing so, each level unlocks specific capabilities and more in-depth validation.

| Level | Name | Artifact | Example Capability Unlocked |
| --- | --- | --- | --- |
| 1 | Parsing | Syntax Tree | Know what symbols are used in a query. |
| 2 | Compiling | Logical Plan | Know what types are used in a query, and how they change, regardless of their origin. |
| 3 | Executing | Physical Plan + Query Results | Know how a query will run on your database, all the way to calculating its results. |

At Level 1, you have a baseline comprehension of SQL. By parsing the string of SQL into a Syntax Tree, it’s possible to **reason about the components of a query** and identify whether you've **written syntactically legal code**.

At Level 2, the system produces a complete Logical Plan. A logical plan knows about every function that’s called in your query, the datatypes being passed into them, and what every column will look like as a result (among many other things). Static analysis of this plan makes it possible to **identify almost every error before you run your code**.

Finally, at Level 3, a system can actually **execute a query and modify data**, because it understands all the complexities involved in answering the question "how does the exact data passed into this query get transformed or mutated?"

## Can I see an example?

This can feel pretty theoretical based on descriptions alone, so let’s look at a basic Snowflake query.

A system at each level of SQL comprehension understands progressively more about the query, and that increased understanding enables it to **say with more precision whether the query is valid**.

To tools at lower levels of comprehension, some elements of a query are effectively a black box: their syntax tree contains the contents of the query, but the tool cannot validate whether everything makes sense. **Remember that comprehension is deducing more information than is present in the plain text of the query; by comprehending more, you can validate more.**

### Level 1: Parsing

<Lightbox src="/img/blog/2025-01-23-levels-of-sql-comprehension/level_1.png" width="100%" />

A parser recognizes that a function named `dateadd` is being called with three arguments, and knows the contents of those arguments.

However, without knowledge of the [function signature](https://en.wikipedia.org/wiki/Type_signature#Signature), it has no way to validate whether those arguments have valid types, whether three is the right number of arguments, or even whether `dateadd` is an available function. This also means it can’t know what the datatype of the created column will be.

Parsers are intentionally flexible in what they will consume: their purpose is to make sense of what they're seeing, not nitpick. Most parsers describe themselves as “non-validating”, because true validation requires compilation.
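
For instance, still using sqlglot purely as an illustrative parser, a syntactically well-formed call to a function that doesn’t exist sails straight through, because checking that is a compiler’s job:

```python
import sqlglot

# This parses without complaint; a parser has no idea whether
# `definitely_not_a_function` is actually available on Snowflake.
tree = sqlglot.parse_one(
    "select definitely_not_a_function('day', 1, getdate()) as tomorrow",
    read="snowflake",
)
print(tree.sql(dialect="snowflake"))  # round-trips cleanly, with no validation performed
```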

### Level 2: Compiling

<Lightbox src="/img/blog/2025-01-23-levels-of-sql-comprehension/level_2.png" width="100%" />

Extending beyond a parser, a compiler *does* know the function signatures. It knows that on Snowflake, `dateadd` is a function which takes three arguments: a `datepart`, an `integer`, and an `expression` (in that order).

A compiler also knows what types a function can return without actually running the code (this is called [static analysis](https://en.wikipedia.org/wiki/Static_program_analysis); we’ll get into that another day). In this case, because `dateadd`’s return type depends on the input expression and our expression isn’t explicitly cast, the compiler just knows that the `new_day` column can be [one of three possible datatypes](https://docs.snowflake.com/en/sql-reference/functions/dateadd#returns).
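
As a toy illustration of the kind of lookup involved (the signature table below is hand-written from Snowflake’s documentation purely for this post, not taken from any real compiler), here’s how return-type inference narrows the possibilities for `new_day`:

```python
from typing import Optional

# Hand-written, heavily simplified return-type rules for Snowflake's dateadd.
DATEADD_RETURNS = {
    "date": "date",
    "time": "time",
    "timestamp_ltz": "timestamp_ltz",
}

def infer_dateadd_return_type(expr_type: Optional[str]) -> set[str]:
    """Possible output types of dateadd, given the third argument's inferred type.

    If the expression's type is ambiguous (for example, an uncast literal),
    the best a compiler can do is narrow the answer to a set of candidates.
    """
    if expr_type is None:
        return set(DATEADD_RETURNS.values())
    return {DATEADD_RETURNS[expr_type]}

print(infer_dateadd_return_type(None))             # {'date', 'time', 'timestamp_ltz'}
print(infer_dateadd_return_type("timestamp_ltz"))  # {'timestamp_ltz'}
```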

### Level 3: Executing

<Lightbox src="/img/blog/2025-01-23-levels-of-sql-comprehension/level_3.png" width="100%" />

A tool with execution capabilities knows everything about this query and the data that is passed into it, including how functions are implemented. Therefore it can perfectly represent the results as run on Snowflake. Again, that’s what databases do. A database is a Level 3 tool.
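
You don’t need a cloud warehouse to see this level in action. Any database engine, including an embedded one like [DuckDB](https://duckdb.org/), is a Level 3 tool; the snippet below rewrites the query into DuckDB’s dialect rather than Snowflake’s, just to show execution producing real values:

```python
import duckdb  # an embedded database: a Level 3 tool in its own right

# The same idea as the Snowflake query, adjusted to DuckDB's dialect.
result = duckdb.sql("select current_date + 1 as tomorrow")
print(result.fetchall())  # e.g. [(datetime.date(2025, 1, 24),)]
```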

### Review

Let’s review the increasing validation capabilities unlocked by each level of comprehension, and notice that over time **the black boxes completely disappear**:

<Lightbox src="/img/blog/2025-01-23-levels-of-sql-comprehension/validation_all_levels.png" width="100%" />

In a toy example like this one, the distinctions between the different levels might feel subtle. As you move away from a single query and into a full-scale project, the functionality gaps become more pronounced. That’s hard to demonstrate in a blog post, but fortunately there’s an easier option: look at some failing queries. How a query is broken determines what level of tool is necessary to recognize the error.

## So let’s break things

As the great analytics engineer Tolstoy [once noted](https://en.wikipedia.org/wiki/Anna_Karenina_principle), “All correctly written queries are alike; each incorrectly written query is incorrect in its own way”.

Consider these three invalid queries:

- `selecte dateadd('day', 1, getdate()) as tomorrow` (Misspelled keyword)
- `select dateadd('day', getdate(), 1) as tomorrow` (Wrong order of arguments)
- `select cast('2025-01-32' as date) as tomorrow` (Impossible date)

Tools that comprehend SQL can catch errors. But they can't all catch the same errors! Each subsequent level will catch more subtle errors in addition to those from *all prior levels*. That's because the levels are additive — each level contains and builds on the knowledge of the ones below it.

Each of the above queries requires progressively greater SQL comprehension abilities to identify the mistake.

### Parser (Level 1): Capture Syntax Errors

Example: `selecte dateadd('day', 1, getdate()) as tomorrow`

Parsers know that `selecte` is **not a valid keyword** in Snowflake SQL, and will reject it.
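
With sqlglot again standing in as our parser (the exact error message will differ from tool to tool), that rejection looks like this:

```python
import sqlglot
from sqlglot.errors import ParseError

try:
    sqlglot.parse_one("selecte dateadd('day', 1, getdate()) as tomorrow", read="snowflake")
except ParseError as error:
    print(error)  # the parser reports the unexpected token instead of guessing what we meant
```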

### Compiler (Level 2): Capture Compilation Errors

Example: `select dateadd('day', getdate(), 1) as tomorrow`

To a parser, this looks fine: all the parentheses and commas are in the right places, and we’ve spelled `select` correctly this time.

A compiler, on the other hand, recognizes that the **function arguments are out of order** because:

- It knows that the second argument (`value`) needs to be a number, but that `getdate()` returns a `timestamp_ltz`.
- Likewise, it knows that a number is not a valid date/time expression for the third argument.
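
Continuing the hand-written signature idea from earlier, here’s a toy version of that check. A real compiler resolves these types from a full catalog of function signatures; this sketch just hard-codes the two rules above:

```python
def check_dateadd_arguments(args: list[tuple[str, str]]) -> list[str]:
    """Validate a list of (sql_text, inferred_type) pairs against dateadd's signature."""
    if len(args) != 3:
        return [f"dateadd expects 3 arguments, got {len(args)}"]
    errors = []
    if args[1][1] != "number":
        errors.append(f"argument 2 must be a number, got {args[1][1]}: {args[1][0]}")
    if args[2][1] not in ("date", "time", "timestamp_ltz", "varchar"):  # strings are implicitly cast
        errors.append(f"argument 3 must be a date/time expression, got {args[2][1]}: {args[2][0]}")
    return errors

# select dateadd('day', getdate(), 1): the compiler already knows getdate() returns timestamp_ltz.
print(check_dateadd_arguments([
    ("'day'", "varchar"),
    ("getdate()", "timestamp_ltz"),
    ("1", "number"),
]))
# ['argument 2 must be a number, got timestamp_ltz: getdate()',
#  'argument 3 must be a date/time expression, got number: 1']
```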

### Executor (Level 3): Capture Data Errors

Example: `select cast('2025-01-32' as date) as tomorrow`

Again, the parser signs off on this as valid SQL syntax.

But this time the compiler also thinks everything is fine! Remember that a compiler checks the signature of a function. It knows that `cast` takes a source expression and a target datatype as arguments, and it's checked that both these arguments are of the correct type.

It even has an overload that knows that strings can be cast into dates, but since it can’t do any validation of those strings’ *values*, it doesn’t know **January 32nd isn’t a valid date**.

To actually know whether some data can be processed by a SQL query, you have to, well, process the data. Data errors can only be captured by a Level 3 system.
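
Using DuckDB once more as a stand-in Level 3 tool (its cast rules and error messages differ from Snowflake’s, but the principle is the same), the impossible date only surfaces when the engine actually tries to produce a value:

```python
import duckdb

try:
    duckdb.sql("select cast('2025-01-32' as date) as tomorrow").fetchall()
except duckdb.Error as error:
    print(error)  # a conversion error: January has no 32nd day
```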

## Conclusion

Building your mental model of the levels of SQL comprehension – why they matter, how they're achieved and what they’ll unlock for you – is critical to understanding the coming era of data tooling.

In introducing these concepts, we’re still just scratching the surface. There's a lot more to discuss:

- Going deeper on the specific nuances of each level of comprehension
- How each level actually works, including the technologies and artifacts that power each level
- How this is all going to roll into a step change in the experience of working with data
- What it means for doing great data work

Over the coming days, you'll be hearing more about all of this from the dbt Labs team - both familiar faces and our new friends from SDF Labs.

This is a special moment for the industry and the community. It's alive with possibilities, with ideas, and with new potential. We're excited to navigate this new frontier with all of you.
5 changes: 5 additions & 0 deletions website/blog/ctas.yml
@@ -30,3 +30,8 @@
subheader: Catch up on Coalesce 2024 and register to access a select number of on-demand sessions.
button_text: Register and watch
url: https://coalesce.getdbt.com/register/online
- name: sdf_webinar_2025
header: Accelerating dbt with SDF
subheader: Join leaders from dbt Labs and SDF Labs for insights and a live Q&A.
button_text: Save your seat
url: https://www.getdbt.com/resources/webinars/accelerating-dbt-with-sdf
2 changes: 1 addition & 1 deletion website/blog/metadata.yml
@@ -2,7 +2,7 @@
featured_image: ""

# This CTA lives in right sidebar on blog index
featured_cta: "coalesce_2024_catchup"
featured_cta: "sdf_webinar_2025"

# Show or hide hero title, description, cta from blog index
show_title: true
6 changes: 2 additions & 4 deletions website/docs/docs/build/metricflow-commands.md
@@ -536,14 +536,12 @@ limit 10
</TabItem>
<TabItem value="eg7" label=" Export to CSV">
Add the `--csv file_name.csv` flag to export the results of your query to a csv.
Add the `--csv file_name.csv` flag to export the results of your query to a csv. The `--csv` flag is available in dbt Core only and not supported in dbt Cloud.
**Query**
```bash
# In dbt Cloud
dbt sl query --metrics order_total --group-by metric_time,is_food_order --limit 10 --order-by -metric_time --where "is_food_order = True" --start-time '2017-08-22' --end-time '2017-08-27' --csv query_example.csv
# In dbt Core
mf query --metrics order_total --group-by metric_time,is_food_order --limit 10 --order-by -metric_time --where "is_food_order = True" --start-time '2017-08-22' --end-time '2017-08-27' --csv query_example.csv
12 changes: 3 additions & 9 deletions website/docs/docs/cloud/cloud-cli-installation.md
@@ -313,16 +313,10 @@ This alias will allow you to use the <code>dbt-cloud</code> command to invoke th
</DetailsToggle>
<DetailsToggle alt_header="Why am I receiving a `Session occupied` error?">
If you've run a dbt command and receive a <code>Session occupied</code> error, you can reattach to your existing session with <code>dbt reattach</code> and then press <code>Control-C</code> and choose to cancel the invocation.
</DetailsToggle>
<DetailsToggle alt_header="Why am I receiving a `Stuck session` error when trying to run a new command?">
The dbt Cloud CLI allows only one command that writes to the data warehouse at a time. If you attempt to run multiple write commands simultaneously (for example, `dbt run` and `dbt build`), you will encounter a `stuck session` error. To resolve this, cancel the specific invocation by passing its ID to the cancel command. For more information, refer to [parallel execution](/reference/dbt-commands#parallel-execution).
The Cloud CLI allows only one command that writes to the data warehouse at a time. If you attempt to run multiple write commands simultaneously (for example, `dbt run` and `dbt build`), you will encounter a `stuck session` error. To resolve this, cancel the specific invocation by passing its ID to the cancel command. For more information, refer to [parallel execution](/reference/dbt-commands#parallel-execution).
</DetailsToggle>
</DetailsToggle>
<FAQ path="Troubleshooting/long-sessions-cloud-cli" />
12 changes: 7 additions & 5 deletions website/docs/docs/cloud/configure-cloud-cli.md
@@ -124,7 +124,7 @@ As a tip, most command-line tools have a `--help` flag to show available command
- `dbt run --help`: Lists the flags available for the `run` command
:::

### Lint SQL files
## Lint SQL files

From the dbt Cloud CLI, you can invoke [SQLFluff](https://sqlfluff.com/) which is a modular and configurable SQL linter that warns you of complex functions, syntax, formatting, and compilation errors. Many of the same flags that you can pass to SQLFluff are available from the dbt Cloud CLI.

@@ -154,7 +154,8 @@ When running `dbt sqlfluff` from the dbt Cloud CLI, the following are important
- An SQLFluff command will return an exit code of 0 even if it ran with file violations. This dbt behavior differs from SQLFluff behavior, where a linting violation returns a non-zero exit code. dbt Labs plans on addressing this in a later release.

## FAQs
<Expandable alt_header="How to create a .dbt directory and move your file">

<DetailsToggle alt_header="How to create a .dbt directory and move your file">

If you've never had a `.dbt` directory, you should perform the following recommended steps to create one. If you already have a `.dbt` directory, move the `dbt_cloud.yml` file into it.

@@ -195,11 +196,12 @@ move %USERPROFILE%\Downloads\dbt_cloud.yml %USERPROFILE%\.dbt\dbt_cloud.yml

This command moves the `dbt_cloud.yml` from the `Downloads` folder to the `.dbt` folder. If your `dbt_cloud.yml` file is located elsewhere, adjust the path accordingly.

</Expandable>
</DetailsToggle>

<Expandable alt_header="How to skip artifacts from being downloaded">
<DetailsToggle alt_header="How to skip artifacts from being downloaded">

By default, [all artifacts](/reference/artifacts/dbt-artifacts) are downloaded when you execute dbt commands from the dbt Cloud CLI. To skip these files from being downloaded, add `--download-artifacts=false` to the command you want to run. This can help improve run-time performance but might break workflows that depend on assets like the [manifest](/reference/artifacts/manifest-json).

</DetailsToggle>

</Expandable>
<FAQ path="Troubleshooting/long-sessions-cloud-cli" />
3 changes: 2 additions & 1 deletion website/docs/docs/community-adapters.md
@@ -15,4 +15,5 @@ Community adapters are adapter plugins contributed and maintained by members of
| [MySQL](/docs/core/connect-data-platform/mysql-setup) | [RisingWave](/docs/core/connect-data-platform/risingwave-setup) | [Rockset](/docs/core/connect-data-platform/rockset-setup) |
| [SingleStore](/docs/core/connect-data-platform/singlestore-setup)| [SQL Server & Azure SQL](/docs/core/connect-data-platform/mssql-setup) | [SQLite](/docs/core/connect-data-platform/sqlite-setup) |
| [Starrocks](/docs/core/connect-data-platform/starrocks-setup) | [TiDB](/docs/core/connect-data-platform/tidb-setup)| [TimescaleDB](https://dbt-timescaledb.debruyn.dev/) |
| [Upsolver](/docs/core/connect-data-platform/upsolver-setup) | [Vertica](/docs/core/connect-data-platform/vertica-setup) | [Yellowbrick](/docs/core/connect-data-platform/yellowbrick-setup) |
| [Upsolver](/docs/core/connect-data-platform/upsolver-setup) | [Vertica](/docs/core/connect-data-platform/vertica-setup) | [Watsonx-Presto](/docs/core/connect-data-platform/watsonx-presto-setup) |
| [Yellowbrick](/docs/core/connect-data-platform/yellowbrick-setup) |