diff --git a/astro.config.mjs b/astro.config.mjs
index 06b67e6e..fbda23a4 100644
--- a/astro.config.mjs
+++ b/astro.config.mjs
@@ -196,6 +196,8 @@ export default defineConfig({
]
},
{ label: 'JSON', link: '/extensions/json' },
+ { label: 'Iceberg', link: '/extensions/iceberg', badge: { text: 'New' }},
+ { label: 'Delta Lake', link: '/extensions/delta', badge: { text: 'New' }},
],
autogenerate: { directory: 'reference' },
},
diff --git a/src/content/docs/developer-guide/testing-framework.md b/src/content/docs/developer-guide/testing-framework.md
index efbb1019..c7d94264 100644
--- a/src/content/docs/developer-guide/testing-framework.md
+++ b/src/content/docs/developer-guide/testing-framework.md
@@ -15,7 +15,7 @@ you must specify the dataset to be used and other optional parameters such as
`BUFFER_POOL_SIZE`.

:::caution[Note]
-Avoid using the character `-` in test file names. In the Google Test Framework, `-` has a special meaning that can inadvertently exclude a test case, leading to the test file being silently skipped. To prevent this issue, our `e2e_test` framework will throw an exception if a test file name contains `-`.
+Avoid using the character `-` in test file names and case names. In the Google Test Framework, `-` has a special meaning that can inadvertently exclude a test case, leading to the test file being silently skipped. To prevent this issue, our `e2e_test` framework will throw an exception if a test file name contains `-`.
:::

Here is a basic example of a test:

@@ -36,15 +36,18 @@ Here is a basic example of a test:
The first three lines represent the header, separated by `--`. The testing framework will parse the file and register a [GTest programmatically](http://google.github.io/googletest/advanced.html#registering-tests-programmatically).
+All e2e tests are registered with the prefix `e2e_test_`, which distinguishes them from other internal tests. For example,
an e2e test named `BasicTest` will be registered as a GTest named `e2e_test_BasicTest`. In terms of the test case name, the example above would be equivalent to:

```
-TEST_F(basic, BasicTest) {
+TEST_F(basic, e2e_test_BasicTest) {
...
}
```

-The test group name will be the relative path of the file under the `test/test_files` directory, delimited by `~`, followed by a dot and the test case name.
+For the main source code tests, the test group name will be the relative path of the file under the `test/test_files` directory, delimited by `~`, followed by a dot and the test case name.
+
+For the extension code tests, the test group name will be the relative path of the file under the `extension/name_of_extension/test/test_files` directory, delimited by `~`, followed by a dot and the test case name.

The testing framework will test each logical plan created from the prepared statements and assert the result.

@@ -81,10 +84,29 @@ $ ctest -V -R common~types~interval.DifferentTypesCheck

$ ctest -j 10
```

+To switch from the main tests to the extension tests, set the environment variable `E2E_TEST_FILES_DIRECTORY=extension` when calling ctest.
+
+Example:
+
+```
+# First cd to build/relwithdebinfo/test (after running make extension-test)
+$ cd build/relwithdebinfo/test
+
+# Run all the extension tests (-R e2e_test filters for them, as all extension tests are e2e tests)
+$ E2E_TEST_FILES_DIRECTORY=extension ctest -R e2e_test
+```
+
+:::caution[Note]
+Windows uses a different syntax for setting environment variables. To run all extension tests on Windows, run:
+```
+$ set "E2E_TEST_FILES_DIRECTORY=extension" && ctest -R e2e_test
+```
+:::
+
#### 2. Running directly from `e2e_test` binary

-The test binaries are available in `build/release[or debug]/test/runner`
-folder. You can run `e2e_test` specifying the relative path file inside
+The test binaries are available in the `build/relwithdebinfo[or debug or release]/test/runner`
+folder.
To run any of the main tests, you can run `e2e_test`, specifying the relative path of the file inside
`test_files`:

```
@@ -98,6 +120,19 @@ $ ./e2e_test long_string_pk/long_string_pk.test

$ ./e2e_test .
```

+To run any of the extension tests, you can run `e2e_test` with the environment variable `E2E_TEST_FILES_DIRECTORY=extension` and specify the relative path of the file inside
+`extension`:
+```
+# Run all tests inside extension/duckdb
+$ E2E_TEST_FILES_DIRECTORY=extension ./e2e_test duckdb
+
+# Run all tests in the extension/json/test/copy_to_json.test file
+$ E2E_TEST_FILES_DIRECTORY=extension ./e2e_test json/test/copy_to_json.test
+
+# Run all extension tests
+$ E2E_TEST_FILES_DIRECTORY=extension ./e2e_test .
+```
+
:::caution[Note]
Some test files contain multiple test cases, and sometimes it is not easy to
find the output from a failed test. In this situation, the flag

diff --git a/src/content/docs/extensions/delta.md b/src/content/docs/extensions/delta.md
new file mode 100644
index 00000000..92d7218d
--- /dev/null
+++ b/src/content/docs/extensions/delta.md
@@ -0,0 +1,145 @@
+---
+title: "Delta Lake"
+---
+
+## Usage
+
+The `delta` extension adds support for scanning/copying from the [Delta Lake](https://delta.io/) open-source storage format.
+Delta Lake is an open-source storage framework that enables building a format-agnostic Lakehouse architecture.
+Using this extension, you can interact with Delta tables from within Kùzu using the `LOAD FROM` and `COPY FROM` clauses.
+
+The Delta functionality is not available by default, so you first need to install and load the `DELTA`
+extension by running the following commands:
+
+```sql
+INSTALL DELTA;
+LOAD EXTENSION DELTA;
+```
+
+### Example dataset
+
+Let's look at an example dataset to demonstrate how the Delta extension can be used.
+Firstly, let's create a Delta table containing student information using Python and save it in the `/tmp/student` directory.
+Before running the script, make sure the `deltalake` Python package is properly installed (we will also use Pandas):
+```shell
+pip install deltalake pandas
+```
+
+```python
+# create_delta_table.py
+import pandas as pd
+from deltalake import write_deltalake
+
+student = {
+    "name": ["Alice", "Bob", "Carol"],
+    "ID": [0, 3, 7]
+}
+
+write_deltalake("/tmp/student", pd.DataFrame.from_dict(student))
+```
+
+In the following sections, we will first scan the Delta table to query its contents in Cypher, and
+then proceed to copy the data and construct a node table.
+
+### Scan the Delta table
+`LOAD FROM` is a Cypher clause that scans a file or object element by element, but doesn’t actually
+move the data into a Kùzu table.
+
+To scan the Delta table created above, you can do the following:
+
+```cypher
+LOAD FROM '/tmp/student' (file_format='delta') RETURN *;
+```
+```
+┌────────┬───────┐
+│ name   │ ID    │
+│ STRING │ INT64 │
+├────────┼───────┤
+│ Alice  │ 0     │
+│ Bob    │ 3     │
+│ Carol  │ 7     │
+└────────┴───────┘
+```
+:::note[Note]
+The `file_format` parameter is used to explicitly specify the file format of the given file instead of letting Kùzu autodetect the file format at runtime.
+When scanning a Delta table, the `file_format` option must be provided since Kùzu is not capable of autodetecting Delta tables.
+:::
+
+### Copy the Delta table into a node table
+You can then use a `COPY FROM` statement to directly copy the contents of the Delta table into a Kùzu node table.
+
+```cypher
+CREATE NODE TABLE student (name STRING, ID INT64, PRIMARY KEY(ID));
+COPY student FROM '/tmp/student' (file_format='delta');
+```
+
+As with `LOAD FROM`, the `file_format` parameter is also mandatory in the `COPY FROM` clause.
+
+Result:
+
+```cypher
+// First, create the node table
+CREATE NODE TABLE student (name STRING, ID INT64, PRIMARY KEY(ID));
+```
+```
+┌─────────────────────────────────┐
+│ result                          │
+│ STRING                          │
+├─────────────────────────────────┤
+│ Table student has been created. │
+└─────────────────────────────────┘
+```
+```cypher
+COPY student FROM '/tmp/student' (file_format='delta');
+```
+```
+┌─────────────────────────────────────────────────┐
+│ result                                          │
+│ STRING                                          │
+├─────────────────────────────────────────────────┤
+│ 3 tuples have been copied to the student table. │
+└─────────────────────────────────────────────────┘
+```
+
+### Access Delta tables hosted on S3
+Kùzu also supports scanning/copying a Delta table hosted on S3 in the same way as from a local file system.
+Before reading from S3, you have to configure the connection using the [CALL](https://kuzudb.com/docusaurus/cypher/configuration) statement.
+
+#### Supported options
+
+| Option name | Description |
+|----------|----------|
+| `s3_access_key_id` | S3 access key ID |
+| `s3_secret_access_key` | S3 secret access key |
+| `s3_endpoint` | S3 endpoint |
+| `s3_url_style` | [S3 URL style](https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html) to use (either `vhost` or `path`) |
+| `s3_region` | S3 region |
+
+#### Requirements on the S3 server API
+
+| Feature | Required S3 API features |
+|----------|----------|
+| Public file reads | HTTP Range request |
+| Private file reads | Secret key authentication |
+
+#### Scan Delta table from S3
+Reading or scanning a Delta table that's on S3 is as simple as reading from regular files:
+
+```cypher
+LOAD FROM 's3://kuzu-sample/sample-delta' (file_format='delta')
+RETURN *;
+```
+
+#### Copy Delta table hosted on S3 into a local node table
+
+Copying from Delta tables on S3 is also as simple as copying from regular files:
+
+```cypher
+CREATE NODE TABLE student (name STRING, ID INT64, PRIMARY KEY(ID));
+COPY student FROM
's3://kuzu-sample/student-delta' (file_format='delta');
+```
+
+## Limitations
+
+When using the Delta Lake extension in Kùzu, keep the following limitations in mind.
+
+- Writing (i.e., exporting to) Delta files from Kùzu is currently not supported.

diff --git a/src/content/docs/extensions/iceberg.md b/src/content/docs/extensions/iceberg.md
new file mode 100644
index 00000000..e8a17b28
--- /dev/null
+++ b/src/content/docs/extensions/iceberg.md
@@ -0,0 +1,246 @@
+---
+title: "Iceberg"
+---
+
+The `iceberg` extension adds support for scanning and copying from the [Apache Iceberg format](https://iceberg.apache.org/).
+Iceberg is an open-source table format originally developed at Netflix for large-scale analytical datasets.
+Using this extension, you can interact with Iceberg tables from within Kùzu using the `LOAD FROM` and `COPY FROM` clauses.
+
+The Iceberg functionality is not available by default, so you first need to install and load the `iceberg`
+extension by running the following commands:
+
+```sql
+INSTALL iceberg;
+LOAD EXTENSION iceberg;
+```
+
+At a high level, the `iceberg` extension provides the following functionality:
+
+- Scanning an Iceberg table
+- Copying an Iceberg table into a node table
+- Accessing the Iceberg metadata
+- Listing the Iceberg snapshots
+
+## Usage
+
+To run the examples below, download the [iceberg_tables.zip](https://kuzudb.com/data/iceberg-extension/iceberg_tables.zip) file, unzip it,
+and place the contents in the `/tmp` directory.
+
+### Scan the Iceberg table
+
+`LOAD FROM` is a Cypher clause that scans a file or object element by element, but doesn’t actually
+move the data into a Kùzu table.
+
+Here's how you would scan an Iceberg table:
+
+```cypher
+LOAD FROM '/tmp/iceberg_tables/university' (file_format='iceberg', allow_moved_paths=true) RETURN *;
+```
+```
+┌────────────┬──────┬─────────┐
+│ University │ Rank │ Funding │
+├────────────┼──────┼─────────┤
+│ Stanford   │ 2    │ 250.300 │
+│ Yale       │ 6    │ 190.700 │
+│ Harvard    │ 1    │ 210.500 │
+│ Cambridge  │ 5    │ 280.200 │
+│ MIT        │ 3    │ 170.000 │
+│ Oxford     │ 4    │ 300.000 │
+└────────────┴──────┴─────────┘
+```
+
+:::note[Notes]
+- The `file_format` parameter is used to explicitly specify the file format of the given file instead of
+letting Kùzu autodetect the file format at runtime. When scanning an Iceberg table,
+the `file_format` option must be provided since Kùzu is not capable of autodetecting Iceberg tables.
+- The `allow_moved_paths` option ensures that proper path resolution is performed, which allows scanning
+Iceberg tables that have been moved from their original location.
+:::
+
+### Copy the Iceberg table into a node table
+You can then use a `COPY FROM` statement to directly copy the contents of the Iceberg table into a node table.
+
+```cypher
+CREATE NODE TABLE university (name STRING, rank INT64, fund DOUBLE, PRIMARY KEY(name));
+COPY university FROM '/tmp/iceberg_tables/university' (file_format='iceberg', allow_moved_paths=true);
+```
+
+As with `LOAD FROM`, the `file_format` parameter is also mandatory in the `COPY FROM` clause.
+
+Result:
+```cypher
+// Create the node table
+CREATE NODE TABLE university (name STRING, rank INT64, fund DOUBLE, PRIMARY KEY(name));
+```
+```
+┌─────────────────────────────────────┐
+│ result                              │
+│ STRING                              │
+├─────────────────────────────────────┤
+│ Table university has been created.
│
+└─────────────────────────────────────┘
+```
+```cypher
+COPY university FROM '/tmp/iceberg_tables/university' (file_format='iceberg', allow_moved_paths=true);
+```
+```
+┌────────────────────────────────────────────────────┐
+│ result                                             │
+│ STRING                                             │
+├────────────────────────────────────────────────────┤
+│ 6 tuples have been copied to the university table. │
+└────────────────────────────────────────────────────┘
+```
+
+### Access Iceberg metadata
+At the heart of Iceberg’s table structure is the metadata, which tracks everything from the schema to partition information
+and snapshots of the table's state. This is particularly useful for understanding the underlying structure, tracking data
+changes, and debugging issues in Iceberg datasets.
+
+The `ICEBERG_METADATA` function lists the metadata files for an Iceberg table via the `CALL` function in Kùzu.
+
+:::caution[Note]
+Ensure you use the `:=` operator to set parameters in the `CALL` function, not the `=` operator.
+The `:=` operator is required within `CALL` functions in Kùzu.
+:::
+
+```cypher
+CALL ICEBERG_METADATA(
+  '/tmp/iceberg_tables/lineitem_iceberg',
+  allow_moved_paths := true
+)
+RETURN *;
+```
+
+```
+┌──────────────────────────┬──────────────────────────┬──────────────────┬─────────┬──────────┬──────────────────────────┬─────────────┬──────────────┐
+│ manifest_path            │ manifest_sequence_number │ manifest_content │ status  │ content  │ file_path                │ file_format │ record_count │
+│ STRING                   │ INT64                    │ STRING           │ STRING  │ STRING   │ STRING                   │ STRING      │ INT64        │
+├──────────────────────────┼──────────────────────────┼──────────────────┼─────────┼──────────┼──────────────────────────┼─────────────┼──────────────┤
+│ lineitem_iceberg/meta... │ 2                        │ DATA             │ ADDED   │ EXISTING │ lineitem_iceberg/data... │ PARQUET     │ 51793        │
+│ lineitem_iceberg/meta... │ 2                        │ DATA             │ DELETED │ EXISTING │ lineitem_iceberg/data...
│ PARQUET │ 60175 │ +└──────────────────────────┴──────────────────────────┴──────────────────┴─────────┴──────────┴──────────────────────────┴─────────────┴──────────────┘ +``` + +### List Iceberg snapshots +Iceberg tables maintain a series of snapshots, which are consistent views of the table at a specific point in time. +Snapshots are the core of Iceberg’s versioning system, allowing you to track, query, and manage changes to your table over time. + +The `ICEBERG_SNAPSHOTS` function lists the snapshots for an Iceberg table via the `CALL` function. +Note that for snapshots, you do not need to specify the `allow_moved_paths` option. + +```cypher +CALL ICEBERG_SNAPSHOTS('/tmp/iceberg_tables/lineitem_iceberg') RETURN *; +``` + +``` +┌─────────────────┬─────────────────────┬─────────────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────┐ +│ sequence_number │ snapshot_id │ timestamp_ms │ manifest_list │ +│ UINT64 │ UINT64 │ TIMESTAMP │ STRING │ +├─────────────────┼─────────────────────┼─────────────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────┤ +│ 1 │ 3776207205136740581 │ 2023-02-15 15:07:54.504 │ lineitem_iceberg/metadata/snap-3776207205136740581-1-cf3d0be5-cf70-453d-ad8f-48fdc412e608.avro │ +│ 2 │ 7635660646343998149 │ 2023-02-15 15:08:14.73 │ lineitem_iceberg/metadata/snap-7635660646343998149-1-10eaca8a-1e1c-421e-ad6d-b232e5ee23d3.avro │ +└─────────────────┴─────────────────────┴─────────────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────┘ +``` + +### Optional parameters + +The following optional parameters are supported in the Iceberg extension: + +
+
+| Parameter | Type | Default | Description |
+|----------|----------|----------|----------|
+| `allow_moved_paths` | BOOLEAN | `false` | Allows scanning Iceberg tables that are not located in their original directory |
+| `metadata_compression_codec` | STRING | `''` | Specifies the compression codec used for the metadata files (currently only `gzip` is supported) |
+| `version` | STRING | `'?'` | Provides an explicit Iceberg version number; if not provided, the version is determined from `version-hint.text` |
+| `version_name_format` | STRING | `'v%s%s.metadata.json,%s%s.metadata.json'` | Provides the format string used to locate the correct metadata file |
+
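These parameters can also be combined in a single scan. The sketch below is illustrative only (the table path and version number are hypothetical); it pairs an explicit version with a custom metadata name format:

```cypher
LOAD FROM '/tmp/iceberg_tables/my_table' (
  file_format='iceberg',
  allow_moved_paths=true,
  version='2',
  version_name_format='rev-%s.metadata.json'
)
RETURN *;
```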
+
+More details on usage are provided below.
+
+#### Select metadata version
+By default, the `iceberg` extension will look for a `version-hint.text` file to identify the proper metadata version to use.
+This can be overridden by explicitly supplying a version number via the `version` parameter to Iceberg table functions.
+
+Example:
+```cypher
+LOAD FROM '/tmp/iceberg_tables/lineitem_iceberg' (
+  file_format='iceberg',
+  allow_moved_paths=true,
+  version='2'
+)
+RETURN *;
+```
+
+#### Change metadata compression codec
+By default, this extension will look for both `v{version}.metadata.json` and `{version}.metadata.json` files for metadata, or for `v{version}.gz.metadata.json` and `{version}.gz.metadata.json` when `metadata_compression_codec = 'gzip'` is specified.
+Other compression codecs are NOT supported.
+
+```cypher
+LOAD FROM '/tmp/iceberg_tables/lineitem_iceberg_gz' (
+  file_format='iceberg',
+  allow_moved_paths=true,
+  metadata_compression_codec = 'gzip'
+)
+RETURN *;
+```
+
+#### Change metadata name format
+To change the metadata naming format, use the `version_name_format` option. For example, if your metadata file is named `rev-2.metadata.json`, set `version_name_format = 'rev-%s.metadata.json'` so that the metadata file can be found successfully.
+
+```cypher
+LOAD FROM '/tmp/iceberg_tables/lineitem_iceberg_alter_name' (
+  file_format='iceberg',
+  allow_moved_paths=true,
+  version_name_format = 'rev-%s.metadata.json'
+)
+RETURN *;
+```
+
+### Access an Iceberg table hosted on S3
+Kùzu also supports scanning/copying an Iceberg table hosted on S3 in the same way as from a local file system.
+Before reading from S3, you have to configure the connection using the [CALL](https://kuzudb.com/docusaurus/cypher/configuration) statement.
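For example, the connection can be configured with a sequence of `CALL` statements before scanning. The values below are placeholders; substitute your own credentials and region:

```cypher
CALL s3_access_key_id='your_access_key_id';
CALL s3_secret_access_key='your_secret_access_key';
CALL s3_region='us-east-1';
```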
+
+#### Supported options
+
+| Option name | Description |
+|----------|----------|
+| `s3_access_key_id` | S3 access key ID |
+| `s3_secret_access_key` | S3 secret access key |
+| `s3_endpoint` | S3 endpoint |
+| `s3_url_style` | [S3 URL style](https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html) to use (either `vhost` or `path`) |
+| `s3_region` | S3 region |
+
+#### Requirements on the S3 server API
+
+| Feature | Required S3 API features |
+|----------|----------|
+| Public file reads | HTTP Range request |
+| Private file reads | Secret key authentication |
+
+#### Scan Iceberg table from S3
+Reading or scanning an Iceberg table that's on S3 is as simple as reading from regular files:
+
+```cypher
+LOAD FROM 's3://path/to/iceberg_table' (file_format='iceberg', allow_moved_paths=true)
+RETURN *;
+```
+
+#### Copy Iceberg table hosted on S3 into a local node table
+
+Copying from Iceberg tables on S3 is also as simple as copying from regular files:
+
+```cypher
+CREATE NODE TABLE student (name STRING, ID INT64, PRIMARY KEY(ID));
+COPY student FROM 's3://path/to/iceberg_table' (file_format='iceberg', allow_moved_paths=true);
+```
+
+## Limitations
+
+When using the Iceberg extension in Kùzu, keep the following limitations in mind.
+
+- Writing (i.e., exporting to) Iceberg tables from Kùzu is currently not supported.
+- We currently do not support scanning/copying nested data (i.e., of type `STRUCT`) in Iceberg table columns.
diff --git a/src/content/docs/extensions/index.mdx b/src/content/docs/extensions/index.mdx
index a0554b9f..30a74a48 100644
--- a/src/content/docs/extensions/index.mdx
+++ b/src/content/docs/extensions/index.mdx
@@ -36,6 +36,8 @@ The following extensions are implemented, with more to come:
| [postgres](/extensions/attach) | Scan data from an attached PostgreSQL database |
| [sqlite](/extensions/attach) | Scan data from an attached SQLite database |
| [json](/extensions/json) | Scan and manipulate JSON data |
+| [delta](/extensions/delta) | Scan data from a Delta Lake table |
+| [iceberg](/extensions/iceberg) | Scan data from an Apache Iceberg table |

## Using Extensions in Kùzu

diff --git a/src/content/docs/installation.mdx b/src/content/docs/installation.mdx
index 5e37e73d..501bac9d 100644
--- a/src/content/docs/installation.mdx
+++ b/src/content/docs/installation.mdx
@@ -28,19 +28,19 @@ brew install kuzu
Use a tool like cURL to download the latest version of the Kùzu CLI
to your local machine. Alternatively, simply copy-paste [this URL](https://github.com/kuzudb/kuzu/releases/download/v0/kuzu_cli-linux-x86_64.tar.gz)
-for x86-64 or [this one](https://github.com/kuzudb/kuzu/releases/download/v0.7.0/kuzu_cli-linux-aarch64.tar.gz)
+for x86-64 or [this one](https://github.com/kuzudb/kuzu/releases/download/v0.7.1/kuzu_cli-linux-aarch64.tar.gz)
for aarch64 into your browser.

**x86-64**

```bash
-curl -L -O https://github.com/kuzudb/kuzu/releases/download/v0.7.0/kuzu_cli-linux-x86_64.tar.gz
+curl -L -O https://github.com/kuzudb/kuzu/releases/download/v0.7.1/kuzu_cli-linux-x86_64.tar.gz
```

**aarch64**

```bash
-curl -L -O https://github.com/kuzudb/kuzu/releases/download/v0.7.0/kuzu_cli-linux-aarch64.tar.gz
+curl -L -O https://github.com/kuzudb/kuzu/releases/download/v0.7.1/kuzu_cli-linux-aarch64.tar.gz
```

Double-click the `kuzu_cli-linux-*.tar.gz` file to extract a file named `kuzu`, and give it
Use a tool like cURL to download the latest version of the Kùzu CLI to your local machine. Alternatively, -simply copy-paste [this HTTPS URL](https://github.com/kuzudb/kuzu/releases/download/v0.7.0/kuzu_cli-windows-x86_64.zip) +simply copy-paste [this HTTPS URL](https://github.com/kuzudb/kuzu/releases/download/v0.7.1/kuzu_cli-windows-x86_64.zip) into your browser. ```bash -curl -L -O https://github.com/kuzudb/kuzu/releases/download/v0.7.0/kuzu_cli-windows-x86_64.zip +curl -L -O https://github.com/kuzudb/kuzu/releases/download/v0.7.1/kuzu_cli-windows-x86_64.zip ``` Right-click on the `kuzu_cli-xxx.zip` file and click on **Extract All**. This will create a directory @@ -142,7 +142,7 @@ The latest stable version is available on [Maven Central](https://central.sonaty com.kuzudb kuzu - 0.7.0 + 0.7.1 ``` @@ -154,7 +154,7 @@ The latest stable version is available on [Maven Central](https://central.sonaty com.kuzudb kuzu - 0.7.0 + 0.7.1 ``` @@ -166,7 +166,7 @@ The latest stable version is available on [Maven Central](https://central.sonaty com.kuzudb kuzu - 0.7.0 + 0.7.1 ``` @@ -254,7 +254,7 @@ any additional installation. 
You just need to specify the library search path fo ```bash -curl -L -O https://github.com/kuzudb/kuzu/releases/download/v0.7.0/libkuzu-osx-universal.tar.gz +curl -L -O https://github.com/kuzudb/kuzu/releases/download/v0.7.1/libkuzu-osx-universal.tar.gz ``` @@ -264,19 +264,19 @@ curl -L -O https://github.com/kuzudb/kuzu/releases/download/v0.7.0/libkuzu-osx-u **x86-64** ```bash -curl -L -O https://github.com/kuzudb/kuzu/releases/download/v0.7.0/libkuzu-linux-x86_64.tar.gz +curl -L -O https://github.com/kuzudb/kuzu/releases/download/v0.7.1/libkuzu-linux-x86_64.tar.gz ``` **aarch64** ```bash -curl -L -O https://github.com/kuzudb/kuzu/releases/download/v0.7.0/libkuzu-linux-aarch64.tar.gz +curl -L -O https://github.com/kuzudb/kuzu/releases/download/v0.7.1/libkuzu-linux-aarch64.tar.gz ``` **x86-64** ([old ABI](https://gcc.gnu.org/onlinedocs/libstdc++/manual/using_dual_abi.html)) ```bash -curl -L -O https://github.com/kuzudb/kuzu/releases/download/v0.7.0/libkuzu-linux-old_abi-x86_64.tar.gz +curl -L -O https://github.com/kuzudb/kuzu/releases/download/v0.7.1/libkuzu-linux-old_abi-x86_64.tar.gz ``` @@ -284,7 +284,7 @@ curl -L -O https://github.com/kuzudb/kuzu/releases/download/v0.7.0/libkuzu-linux ```bash -curl -L -O https://github.com/kuzudb/kuzu/releases/download/v0.7.0/libkuzu-windows-x86_64.zip +curl -L -O https://github.com/kuzudb/kuzu/releases/download/v0.7.1/libkuzu-windows-x86_64.zip ```