This project is an example of an interactive 3D density map, with data processed by Databricks and deployed as a Databricks App.
The source data is loaded from HDX (Humanitarian Data Exchange) and is provided by Meta. Please read more about the data here.
Prerequisites:
- Python 3.9+
- Hatch
- Node.js 20.0+
- Databricks CLI
- Databricks workspace
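To confirm the prerequisites are installed, you can check the tool versions from a terminal (a quick sanity check, not part of the setup itself):

```bash
# Verify that the required tools are available on PATH
python --version      # should be 3.9 or newer
hatch --version
node --version        # should be 20.0 or newer
databricks --version  # Databricks CLI
```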
Steps:
- Clone the repository
- Set up the Python environment for the backend:

```bash
hatch env create
```

- Set up the frontend:

```bash
yarn --cwd src/frontend install
```
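If you want to double-check that the backend environment was created, Hatch can list the environments it knows about; this step is optional:

```bash
# List the Hatch environments defined for this project
hatch env show
```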
First, deploy and run the workflow:
```bash
databricks bundle deploy --var="catalog=main" --var="schema=terrametria"
databricks bundle run terrametria
```
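Optionally, you can validate the bundle configuration before deploying and inspect the deployed resources afterwards; this is a sketch using the same variables as above:

```bash
# Check the bundle configuration without deploying anything
databricks bundle validate --var="catalog=main" --var="schema=terrametria"

# After deployment, show a summary of the deployed resources
databricks bundle summary
```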
Note the catalog and schema names; you will need them to run the app. Grant access to this catalog and schema to the principal that you will use to run the app.
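How you grant access depends on how you manage permissions in your workspace. As one hedged example, the Unity Catalog grants command in the Databricks CLI can add the required privileges; the principal ID below is a placeholder, and the privilege list may need adjusting for your setup:

```bash
# Placeholder: the client ID (application ID) of the principal that will run the app
PRINCIPAL="00000000-0000-0000-0000-000000000000"

# Allow the principal to use the catalog, use the schema, and read the data
databricks grants update catalog main \
  --json "{\"changes\": [{\"principal\": \"$PRINCIPAL\", \"add\": [\"USE_CATALOG\"]}]}"
databricks grants update schema main.terrametria \
  --json "{\"changes\": [{\"principal\": \"$PRINCIPAL\", \"add\": [\"USE_SCHEMA\", \"SELECT\"]}]}"
```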
Then, configure your environment variables in the .env file:

```bash
# client ID and secret for a principal that has access to the catalog and schema
DATABRICKS_CLIENT_ID=
DATABRICKS_CLIENT_SECRET=

# Databricks workspace URL, without the HTTP/HTTPS prefix
DATABRICKS_HOST=
DATABRICKS_SQL_WAREHOUSE_ID=

# catalog and schema name from the previous step
TERRAMETRIA_CATALOG=
TERRAMETRIA_SCHEMA=
```
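If you are not sure which warehouse ID to use for DATABRICKS_SQL_WAREHOUSE_ID, you can list the SQL warehouses in the workspace:

```bash
# The "id" field of the warehouse you pick goes into DATABRICKS_SQL_WAREHOUSE_ID
databricks warehouses list
```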
Now open two terminals and run the following commands:

```bash
# Terminal 1
hatch run dev-frontend

# Terminal 2
hatch run dev-backend
```

Go to http://localhost:5173 to see the app in action.
To deploy the app, log in to your Databricks workspace from the Databricks CLI:

```bash
databricks auth login
```
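For example, you can pass the workspace URL explicitly and then verify the login; the host below is a placeholder:

```bash
# Log in to a specific workspace (replace the host with your own)
databricks auth login --host https://<your-workspace-url>

# Verify that authentication works
databricks current-user me
```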
Then, run the following command:

```bash
./deploy-app.sh <app-name> <Workspace-FS-dir-for-app-files>
```

For example:

```bash
./deploy-app.sh terrametria /Workspace/Users/${MY_DATABRICKS_USERNAME}/apps/terrametria
```
During the app deployment, another service principal will be created. Make sure to grant this principal access to the catalog and schema as well.
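To find out which service principal the app runs as, you can inspect the app from the CLI; this is a sketch, and the exact fields in the output may differ between CLI versions:

```bash
# Show the app details, including the service principal it runs as
databricks apps get terrametria
```

Grant that principal the same catalog and schema privileges as in the earlier step.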
After the app deployment, add a SQL warehouse to the app resources via the UI. Give it the key `sql_warehouse`. The value should be the ID of the SQL warehouse that you want to use for the app.
- Frontend:
- Backend: