- Fix: SQLAlchemy adapter could not reflect TIMESTAMP or DATETIME columns
- Other: Relax pandas and alembic dependency specifications
- Other: Relax the required sqlalchemy version, which was unnecessarily strict.
- Add support for External Auth providers
- Fix: Python HTTP proxies were broken
- Other: All Thrift requests that time out during connection will be automatically retried
- Less strict numpy and pyarrow dependencies
- Update examples in README to use security best practices
- Update docstring for client.execute() for clarity
- Improve compatibility when installed alongside other Databricks namespace Python packages
- Add SQLAlchemy dialect
- Support staging ingestion commands for DBR 12+
- Support custom oauth client id and redirect port
- Fix: Add none check on _oauth_persistence in DatabricksOAuthProvider
- Add support for Python 3.11
- Bump thrift version to address https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13949
- Add support for lz4 compression
- Introduce experimental OAuth support while Bring Your Own IDP is in Public Preview on AWS
- Add functional examples
- Fix: closing a connection now closes any open cursors from that connection at the server
- Other: Add project links to pyproject.toml (helpful for visitors from PyPI)
- Add support for Python 3.10
- Add unit test matrix for supported Python versions
Huge thanks to @dbaxa for contributing this change!
- Add retry logic for GetOperationStatus requests that fail with an OSError
- Reorganised code to use Poetry for dependency management.
- Better exception handling in automatic connection close
- Fixed Pandas dependency in setup.cfg to be >= 1.2.0
- Initial stable release of V2
- Added better support for complex types, so that in Databricks runtime 10.3+, Arrays, Maps and Structs will get deserialized as lists, lists of tuples and dicts, respectively.
- Changed the name of the metadata arg to http_headers
- Change import of collections.Iterable to collections.abc.Iterable to make the library compatible with Python 3.10
- Fixed bug with .tables method so that .tables works as expected with Unity-Catalog enabled endpoints
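The collections.abc.Iterable change above tracks a Python stdlib deprecation: the abstract base classes moved from collections to collections.abc, and the old aliases were removed entirely in Python 3.10. A minimal illustration of the surviving import path:

```python
# In Python 3.10+, `from collections import Iterable` raises ImportError;
# the ABCs are only importable from collections.abc.
from collections.abc import Iterable

# Lists, tuples, and generators all satisfy the Iterable ABC.
print(isinstance([1, 2, 3], Iterable))  # True
print(isinstance(42, Iterable))         # False
```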
- Fix packaging issue (dependencies were not being installed properly)
- Fetching timestamp results will now return aware instead of naive timestamps
- The client will now default to using simplified error messages
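The aware-vs-naive distinction in the timestamp change above can be shown with the stdlib datetime module (this illustrates the Python concept only, not the connector's internals):

```python
from datetime import datetime, timezone

naive = datetime(2022, 1, 1, 12, 0, 0)                       # no tzinfo attached
aware = datetime(2022, 1, 1, 12, 0, 0, tzinfo=timezone.utc)  # carries its zone

print(naive.tzinfo)  # None
print(aware.tzinfo)  # UTC
# Aware datetimes can be converted between zones unambiguously; naive ones cannot.
print(aware.isoformat())  # 2022-01-01T12:00:00+00:00
```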
- Initial beta release of V2. V2 is an internal re-write of large parts of the connector to use Databricks edge features. All public APIs from V1 remain.
- Added Unity Catalog support (pass catalog and/or schema keyword args to the .connect method to select initial schema and catalog)
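As the entry above notes, an initial catalog and schema can be selected at connect time. A hypothetical helper sketching that call (the catalog and schema names here are placeholders, and the connection parameters are the connector's standard hostname/path/token trio):

```python
def open_unity_catalog_connection(server_hostname, http_path, access_token):
    """Sketch: open a connection with an initial catalog and schema selected."""
    from databricks import sql  # imported lazily; requires databricks-sql-connector

    return sql.connect(
        server_hostname=server_hostname,
        http_path=http_path,
        access_token=access_token,
        catalog="main",     # initial catalog (placeholder name)
        schema="default",   # initial schema (placeholder name)
    )
```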
Note: The code for versions prior to v2.0.0b is not contained in this repository. The entries below are included for reference only.
- Add operations for retrieving metadata
- Add the ability to access columns by name on result rows
- Add the ability to provide configuration settings on connect
- Improved logging and error messages.
- Add retries for 429 and 503 HTTP responses.
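The retry behaviour above targets HTTP 429 (rate limited) and 503 (service unavailable), both of which are transient and safe to retry. A generic backoff loop in the same spirit (an illustrative pattern, not the connector's actual implementation):

```python
import time

RETRYABLE_STATUS = {429, 503}

def call_with_retries(request, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    """Retry `request()` while it returns a retryable HTTP status code."""
    for attempt in range(max_attempts):
        status, body = request()
        if status not in RETRYABLE_STATUS:
            return status, body
        # Exponential backoff: wait 1s, 2s, 4s, ... before the next attempt.
        sleep(base_delay * (2 ** attempt))
    return status, body  # give up after max_attempts

# Fake request that fails twice with 503, then succeeds.
responses = iter([(503, None), (503, None), (200, "ok")])
status, body = call_with_retries(lambda: next(responses), sleep=lambda _: None)
print(status, body)  # 200 ok
```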
- (Bug fix) Increased Thrift requirement from 0.10.0 to 0.13.0 as 0.10.0 was in fact incompatible
- (Bug fix) Fixed error message after query execution failed: SQLSTATE and error message were misplaced
- Public Preview release, Experimental tag removed
- minor updates in internal build/packaging
- no functional changes
- initial (Experimental) release of pyhive-forked connector
- Python DB-API 2.0 (PEP 249), Thrift-based
- see docs for more info: https://docs.databricks.com/dev-tools/python-sql-connector.html
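Because the connector implements DB-API 2.0 (PEP 249), code written against it follows the same connection/cursor shape as any other DB-API driver. The stdlib sqlite3 module, which also implements PEP 249, stands in here purely to illustrate that shared interface:

```python
import sqlite3

# The PEP 249 pattern: connect() -> cursor() -> execute() -> fetch*() -> close()
conn = sqlite3.connect(":memory:")
cursor = conn.cursor()
cursor.execute("CREATE TABLE t (id INTEGER, name TEXT)")
cursor.execute("INSERT INTO t VALUES (1, 'a'), (2, 'b')")
cursor.execute("SELECT id, name FROM t ORDER BY id")
rows = cursor.fetchall()
print(rows)  # [(1, 'a'), (2, 'b')]
cursor.close()
conn.close()
```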