Mosaic

An extension to the Apache Spark framework that allows easy and fast processing of very large geospatial datasets.

Mosaic provides:

  • easy conversion between common spatial data encodings (WKT, WKB and GeoJSON);

  • constructors to easily generate new geometries from Spark native data types;

  • many of the OGC SQL standard ST_ functions implemented as Spark Expressions for transforming, aggregating and joining spatial datasets;

  • high performance through implementation of Spark code generation within the core Mosaic functions;

  • optimisations for performing point-in-polygon joins using an approach we co-developed with Ordnance Survey (blog post); and

  • the choice of Scala, SQL and Python APIs (a short usage sketch follows the diagram below).

Image 1: Mosaic logical design.
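Once Mosaic is attached and enabled (see "Getting started" below), these capabilities are exposed as ordinary Spark expressions. The following is a minimal sketch of the Python API, assuming the function names documented for Mosaic (st_geomfromwkt, st_area, st_contains); exact signatures may vary between releases, the table and column names are hypothetical, and `spark` is the session object a Databricks notebook provides.

```python
from pyspark.sql import functions as F
from mosaic import st_geomfromwkt, st_area, st_contains

# Hypothetical input tables: `boundaries` has a WKT polygon column `wkt`,
# `observations` has a WKT point column `point_wkt`.
polygons = (
    spark.table("boundaries")
    .withColumn("geom", st_geomfromwkt(F.col("wkt")))   # WKT -> Mosaic geometry
    .withColumn("area", st_area(F.col("geom")))         # OGC-style ST_ function
)

points = (
    spark.table("observations")
    .withColumn("point_geom", st_geomfromwkt(F.col("point_wkt")))
)

# Point-in-polygon join: pair each point with the polygon that contains it.
joined = points.join(polygons, st_contains(polygons["geom"], points["point_geom"]))
```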

Getting started

Requirements

The only requirement to start using Mosaic is a Databricks cluster running Databricks Runtime 10.0 (or later) with one of the following attached:

  • (for Python API users) the Python .whl file;
  • (for Scala or SQL users) the Scala JAR; or
  • (for R users) the Scala JAR and the R library (see the sparkR README).

The .whl, JAR, and R artefacts can be found in the 'Releases' section of the Mosaic GitHub repository.

Instructions for how to attach libraries to a Databricks cluster can be found here.
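As a concrete sketch of what this looks like in practice: after the wheel (or JAR) is attached, Mosaic is enabled per notebook session before any of its functions are used. The `enable_mosaic` entry point below is taken from the Mosaic documentation, `spark` and `dbutils` are the globals Databricks provides in a notebook, and the table queried is hypothetical.

```python
from mosaic import enable_mosaic

# Register Mosaic's functions with the current Spark session; after this call
# the ST_ expressions are also usable from the SQL API.
enable_mosaic(spark, dbutils)

# Example: compute polygon areas directly in SQL.
spark.sql("""
    SELECT ST_Area(ST_GeomFromWKT(wkt)) AS area
    FROM boundaries
""").show()
```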

Releases

You can access the latest artefacts and binaries here.

Ecosystem

Mosaic is intended to augment the existing ecosystem and unlock its potential by integrating Spark, Delta and third-party frameworks into the Lakehouse architecture.

Image 2: Mosaic ecosystem - Lakehouse integration.

Example notebooks

This repository contains several example notebooks in notebooks/examples. You can import them into your Databricks workspace using the instructions here.

Project Support

Please note that all projects in the databrickslabs github space are provided for your exploration only, and are not formally supported by Databricks with Service Level Agreements (SLAs). They are provided AS-IS and we do not make any guarantees of any kind. Please do not submit a support ticket relating to any issues arising from the use of these projects.

Any issues discovered through the use of this project should be filed as GitHub Issues on the Repo. They will be reviewed as time permits, but there are no formal SLAs for support.
