dbt enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications.
dbt is the T in ELT. Organize, cleanse, denormalize, filter, rename, and pre-aggregate the raw data in your warehouse so that it's ready for analysis.
`dbt-spark` enables dbt to work with Apache Spark. For more information on using dbt with Spark, consult the docs. Also review the repository `README.md`, as most of that information pertains to `dbt-spark`.
A `docker-compose` environment starts a Spark Thrift server and a Postgres database as a Hive Metastore backend.
Note: dbt-spark now supports Spark 3.3.2.
The following command starts two docker containers:
```sh
docker-compose up -d
```
It will take a bit of time for the instance to start; you can check the logs of the two containers to monitor progress. If the instance doesn't start correctly, run the complete reset commands listed below and then try starting it again.
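For example, to follow the logs of both containers while they come up (a minimal check; the service names are whatever the repository's `docker-compose.yml` defines, so following all services is simplest):

```sh
docker-compose logs -f
```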
Create a profile like this one:
```yaml
spark_testing:
  target: local
  outputs:
    local:
      type: spark
      method: thrift
      host: 127.0.0.1
      port: 10000
      user: dbt
      schema: analytics
      connect_retries: 5
      connect_timeout: 60
      retry_all: true
```
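With this profile saved in `~/.dbt/profiles.yml` (and a `dbt_project.yml` that sets `profile: spark_testing`, an assumption about your project setup), you can verify that dbt can reach the Thrift server:

```sh
dbt debug --target local
```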
Connecting to the local Spark instance:
- The Spark UI should be available at http://localhost:4040/sqlserver/
- The endpoint for SQL-based testing is at `http://localhost:10000` and can be referenced with the Hive or Spark JDBC drivers using connection string `jdbc:hive2://localhost:10000` and default credentials `dbt`:`dbt`
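As a quick sanity check, you can query the Thrift endpoint with `beeline` (the JDBC CLI shipped with Spark and Hive distributions), using the connection string and credentials above. This is a sketch, assuming `beeline` is on your PATH:

```sh
beeline -u 'jdbc:hive2://localhost:10000' -n dbt -p dbt -e 'SHOW DATABASES;'
```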
Note that the Hive metastore data is persisted under `./.hive-metastore/`, and the Spark-produced data under `./.spark-warehouse/`. To completely reset your environment, run the following:
```sh
docker-compose down
rm -rf ./.hive-metastore/
rm -rf ./.spark-warehouse/
```
If installing on macOS, use Homebrew to install the required dependencies:

```sh
brew install unixodbc
```
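unixODBC is a dependency of the ODBC connection method. If that is the method you plan to use (an assumption; the local environment above uses the `thrift` method, which does not need it), you can install `dbt-spark` with its ODBC extra:

```sh
pip install "dbt-spark[ODBC]"
```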
- Want to help us build `dbt-spark`? Check out the Contributing Guide.