
Make MAXIMUM_SEED_SIZE_MIB configurable #11177

Open

jtcohen6 wants to merge 2 commits into main from jerco/redo-pr-7125

Conversation

jtcohen6
Contributor

resolves #7117
resolves #7124

Reapply changes from #7125. (This proved easier than rebasing the commits directly.)

Problem

We apply an arbitrary limit of 1 MiB to seeds (CSVs), for the specific purpose of hashing contents and comparing those hashed contents during state:modified comparison. (That's a mebibyte, not a megabyte, for those who care to distinguish between the two).

That is, dbt doesn't raise an error if it detects a seed larger than MAXIMUM_SEED_SIZE_MIB, but certain features become unavailable and dbt does not make any guarantees about acceptable performance.

We could adjust this for inflation (1 MiB in 2020 is worth ~1.2 MiB today), but this PR takes the more forward-looking approach of making the value configurable by end users: users can instruct dbt to compare the contents of larger seeds, so long as they're willing to "pay the price" of hashing those larger files.

We will need to update docs: https://docs.getdbt.com/reference/node-selection/state-comparison-caveats

Solution

From #7125:

Increasing the maximum size of seed files whose contents are hashed for state comparison enables greater use of deferred runs when seed file contents change. By making this configurable with an environment variable, users are able to override the default 1 MiB limit.
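
As a rough illustration (not dbt's actual flag plumbing), the intended behavior is a 1 MiB default that the DBT_MAXIMUM_SEED_SIZE_MIB environment variable can override; the constant and helper names below are hypothetical:

    import os

    # Hypothetical sketch: a 1 MiB default, overridable via DBT_MAXIMUM_SEED_SIZE_MIB.
    DEFAULT_MAXIMUM_SEED_SIZE_MIB = 1

    def maximum_seed_size_bytes() -> int:
        mib = int(os.environ.get("DBT_MAXIMUM_SEED_SIZE_MIB", DEFAULT_MAXIMUM_SEED_SIZE_MIB))
        return mib * 1024 * 1024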

Furthermore, to support reading larger files in memory-constrained environments, a new method is added that reads and hashes file contents incrementally. Because this operates on raw bytes for better performance, we continue using the previous UTF-8 content method for small files, so as not to invalidate existing state for seed files that are not stored as UTF-8.
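
For the incremental path, a minimal sketch of chunked hashing (assuming SHA-256 and a fixed 1 MiB chunk size; not necessarily the exact implementation in this PR) might look like:

    import hashlib

    def hash_file_incrementally(path: str, chunk_size: int = 1024 * 1024) -> str:
        # Read the file in fixed-size byte chunks so large seeds never need to
        # be held fully in memory, then return the hex digest of the whole file.
        hasher = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                hasher.update(chunk)
        return hasher.hexdigest()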

Checklist

  • I have read the contributing guide and understand what's expected of me.
  • I have run this code in development, and it appears to resolve the stated issue.
  • This PR includes tests, or tests are not required or relevant for this PR.
  • This PR has no interface changes (e.g., macros, CLI, logs, JSON artifacts, config files, adapter interface, etc.) or this PR has already received feedback and approval from Product or DX.
  • This PR includes type annotations for new and modified functions.

Co-authored-by: Noah Holm <[email protected]>
Co-authored-by: Jeremy Cohen <[email protected]>
@jtcohen6 jtcohen6 requested a review from a team as a code owner December 24, 2024 16:26
@cla-bot cla-bot bot added the cla:yes label Dec 24, 2024

codecov bot commented Dec 24, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 88.88%. Comparing base (459d156) to head (20be925).

Additional details and impacted files
@@            Coverage Diff             @@
##             main   #11177      +/-   ##
==========================================
- Coverage   88.93%   88.88%   -0.05%     
==========================================
  Files         186      186              
  Lines       24054    24075      +21     
==========================================
+ Hits        21392    21400       +8     
- Misses       2662     2675      +13     
Flag Coverage Δ
integration 86.19% <93.33%> (-0.14%) ⬇️
unit 62.03% <46.66%> (-0.02%) ⬇️

Flags with carried forward coverage won't be shown.

Components Coverage Δ
Unit Tests 62.03% <46.66%> (-0.02%) ⬇️
Integration Tests 86.19% <93.33%> (-0.14%) ⬇️

Contributor Author

@jtcohen6 jtcohen6 left a comment

🎩 Given a my_small_seed.csv (<1 MB) and my_large_seed.csv (~3 MB)

Using dbt-core@main:

% dbt parse && mv target/manifest.json state && dbt ls -s state:modified --state state    
16:29:42  Running with dbt=1.10.0-a1
16:29:42  Registered adapter: duckdb=1.9.1
16:29:42  Performance info: /Users/jerco/dev/scratch/testy/target/perf_info.json
16:29:43  Running with dbt=1.10.0-a1
16:29:43  Registered adapter: duckdb=1.9.1
16:29:43  Found 1 operation, 1 model, 2 seeds, 424 macros
16:29:43  Found a seed (testy.my_large_seed) >1MB in size at the same path, dbt cannot tell if it has changed: assuming they are the same
16:29:43  The selection criterion 'state:modified' does not match any enabled nodes
16:29:43  No nodes selected!

Switching to dbt-core@jerco/redo-pr-7125 (without recreating state/manifest.json):

% dbt ls -s state:modified --state state    
16:30:03  Running with dbt=1.10.0-a1
16:30:03  Registered adapter: duckdb=1.9.1
16:30:03  Found 1 operation, 1 model, 2 seeds, 424 macros
16:30:03  Found a seed (testy.my_large_seed) >1MiB in size at the same path, dbt cannot tell if it has changed: assuming they are the same
16:30:03  The selection criterion 'state:modified' does not match any enabled nodes
16:30:03  No nodes selected!

This proves that the checksum of my_small_seed has not changed.

Now, let's set the config and redo:

% export DBT_MAXIMUM_SEED_SIZE_MIB=10
% dbt parse && mv target/manifest.json state && dbt ls -s state:modified --state state
16:30:45  Running with dbt=1.10.0-a1
16:30:45  Registered adapter: duckdb=1.9.1
16:30:45  Performance info: /Users/jerco/dev/scratch/testy/target/perf_info.json
16:30:46  Running with dbt=1.10.0-a1
16:30:46  Registered adapter: duckdb=1.9.1
16:30:46  Found 1 operation, 1 model, 2 seeds, 424 macros
16:30:46  The selection criterion 'state:modified' does not match any enabled nodes
16:30:46  No nodes selected!

Manually edit one row in the large seed:

% dbt ls -s state:modified --state state
16:36:58  Running with dbt=1.10.0-a1
16:36:58  Registered adapter: duckdb=1.9.1
16:36:58  Found 1 operation, 1 model, 2 seeds, 424 macros
testy.my_large_seed

    # We don't want to calculate a hash of this file. Use the path.
    source_file = SourceFile.big_seed(match)
else:
    file_contents = load_file_contents(match.absolute_path, strip=True)
    checksum = FileHash.from_contents(file_contents)
    checksum = FileHash.from_path(match.absolute_path)
Contributor Author

I have confirmed that this is not a "breaking" change, insofar as the same seed produces the same checksum before and after this change.

Contributor Author

Update: The failing test on Windows seems to indicate that the seed checksum does indeed change as a result of this PR, but only on Windows. The contributor mentioned this code comment as an indication that we previously expected different checksums on Windows versus macOS/Linux.

In that test, the checksum for 'seed.test.seed':

  • on MacOS + Linux (before/after): '381eb2f...'
  • on Windows (before): '54a28a3...'
  • on Windows (after): '381eb2f...'

I think our options are:

  1. Restore the behavior to return actually different checksums on different systems. That might be easiest, but I agree it's more desirable to return the same checksum regardless of OS.
  2. Place this behind a behavior-change flag, which we very quickly flip to "True" by default (in the next minor version of dbt-core), with a targeted advisory for users on Windows: "When this change rolls out, all your seeds will be detected as modified if you compare against a manifest produced before this code change."
  3. Roll this out in the next minor version of dbt-core, without a behavior-change flag. It won't affect the vast majority of users (who are not running on Windows). We mention it in the upgrade guide.

I lean toward option (2) for thoroughness.
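
If the Windows difference comes down to line endings (the usual culprit, and consistent with the code comment referenced above, though that is an assumption here), the effect is easy to reproduce: hashing raw bytes with CRLF versus LF newlines yields different digests, while hashing after newline normalization does not.

    import hashlib

    lf = b"id,name\n1,alice\n"
    crlf = b"id,name\r\n1,alice\r\n"

    # Hashing raw bytes: CRLF vs LF content produces different checksums.
    print(hashlib.sha256(lf).hexdigest() == hashlib.sha256(crlf).hexdigest())  # False

    def normalized(data: bytes) -> bytes:
        # Decode as UTF-8 and normalize newlines before hashing.
        return data.decode("utf-8").replace("\r\n", "\n").encode("utf-8")

    # After normalization, the checksums match.
    print(hashlib.sha256(normalized(lf)).hexdigest() == hashlib.sha256(normalized(crlf)).hexdigest())  # True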

Contributor

I wonder whether we want to just do something like:

  • if we find the hash is different on Windows (maybe also if the manifest was computed with a previous version of dbt), compute it with the other method to see if it changed; if not, we return "not changed" but persist the new hash in the artifact.

This should provide a seamless upgrade experience, and we wouldn't need a flag at all.
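
As a rough sketch of that suggestion (the wrapper name is hypothetical, and it leans on the FileHash helpers and load_file_contents shown in the diffs above rather than being self-contained), the fallback comparison could look something like:

    def seed_unchanged(path: str, stored: FileHash) -> bool:
        # Compare against the new path/bytes-based checksum first.
        if FileHash.from_path(path).checksum == stored.checksum:
            return True
        # Fall back to the legacy contents-based checksum; a match means only the
        # hashing method changed (e.g. state written on Windows by an older dbt),
        # so treat the seed as unmodified and persist the new hash going forward.
        legacy = FileHash.from_contents(load_file_contents(path, strip=True))
        return legacy.checksum == stored.checksum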

@@ -60,6 +61,27 @@ def from_contents(cls, contents: str, name="sha256") -> "FileHash":
        checksum = hashlib.new(name, data).hexdigest()
        return cls(name=name, checksum=checksum)

    @classmethod
    def from_path(cls, path: str, name="sha256") -> "FileHash":
Contributor Author

Non-breaking change in the dbt/artifacts/ directory, so I will add the artifact_minor_upgrade label to this PR

@jtcohen6 jtcohen6 added the artifact_minor_upgrade To bypass the CI check by confirming that the change is not breaking label Dec 24, 2024
Contributor

@ChenyuLInx ChenyuLInx left a comment

LGTM with one thought around migration.

