This release contains performance improvements, an improved hypertable DDL API, and bug fixes since the 2.12.2 release. We recommend that you upgrade at the next available opportunity.

In addition, it includes these noteworthy features:

* Full PostgreSQL 16 support for all existing features
* Vectorized aggregation execution for sum()
* Track chunk creation time used in retention/compression policies

**Deprecation notice: Multi-node support**

TimescaleDB 2.13 is the last version that will include multi-node support. Multi-node support in 2.13 is available for PostgreSQL 13, 14 and 15. Learn more about it [here](docs/MultiNodeDeprecation.md).

If you want to migrate from multi-node TimescaleDB to single-node TimescaleDB, read the [migration documentation](https://docs.timescale.com/migrate/latest/multi-node-to-timescale-service/).

**Starting from TimescaleDB 2.13.0:**

* No Amazon Machine Images (AMI) are published. If you previously used AMI, please use another [installation method](https://docs.timescale.com/self-hosted/latest/install/)
* Continuous aggregates are materialized only (non-realtime) by default

**Features**
* #5575 Add chunk-wise sorted paths for compressed chunks
* #5761 Simplify hypertable DDL API
* #5890 Reduce WAL activity by freezing compressed tuples immediately
* #6050 Vectorized aggregation execution for sum()
* #6062 Add metadata for chunk creation time
* #6077 Make continuous aggregates materialized only (non-realtime) by default
* #6177 Change show_chunks/drop_chunks to use chunk creation time
* #6178 Show batches/tuples decompressed during DML operations in EXPLAIN output
* #6185 Keep track of catalog version
* #6227 Use creation time in retention/compression policy
* #6307 Add SQL function cagg_validate_query

**Bugfixes**
* #6188 Add GUC for setting background worker log level
* #6222 Allow enabling compression on hypertable with unique expression index
* #6240 Check if worker registration succeeded
* #6254 Fix exception detail passing in compression_policy_execute
* #6264 Fix missing bms_del_member result assignment
* #6275 Fix negative bitmapset member not allowed in compression
* #6280 Fix potential data loss when compressing a table with a partial index that matches compression order
* #6289 Add support for startup chunk exclusion with aggregates
* #6290 Repair relacl on upgrade
* #6297 Fix segfault when creating a continuous aggregate using a NULL width in the time_bucket function
* #6305 Make timescaledb_functions.makeaclitem strict
* #6332 Fix typmod and collation for segmentby columns

**Thanks**
* @fetchezar for reporting an issue with compression policy error messages
* @jflambert for reporting the background worker log level issue
* @torazem for reporting an issue with compression and large OIDs
* @pdipesh02 for contributing to the implementation of the metadata for chunk creation time, the generalized hypertable API, and show_chunks/drop_chunks using chunk creation time
* @lkshminarayanan for all his work on PG16 support
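As a rough illustration of the simplified hypertable DDL API (#5761), the following is a minimal sketch using a hypothetical `conditions` table. The `by_range()` dimension builder shown here is an assumption based on the generalized hypertable API described for 2.13; verify the exact signature against the documentation for your installed version.

```sql
-- Hypothetical example table (not part of the release notes)
CREATE TABLE conditions (
    time        TIMESTAMPTZ      NOT NULL,
    device_id   INTEGER          NOT NULL,
    temperature DOUBLE PRECISION
);

-- Simplified DDL API: pass a dimension built with by_range()
-- instead of the older positional time-column arguments.
SELECT create_hypertable('conditions', by_range('time', INTERVAL '1 day'));
```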
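The chunk creation time metadata (#6062, #6177, #6227) lets you select and drop chunks by when they were created rather than by the time range of the data they hold. A sketch on the same hypothetical `conditions` hypertable; the parameter names (`created_before`, `drop_created_before`) are assumptions taken from the 2.13 documentation, so double-check them against your version.

```sql
-- Show chunks created more than a week ago (by creation time, not data time)
SELECT show_chunks('conditions', created_before => now() - INTERVAL '7 days');

-- Drop those chunks
SELECT drop_chunks('conditions', created_before => now() - INTERVAL '7 days');

-- Retention policy keyed on chunk creation time
SELECT add_retention_policy('conditions', drop_created_before => INTERVAL '30 days');
```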
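Because continuous aggregates now default to materialized only (#6077), realtime behaviour has to be requested explicitly. A hedged sketch of how that option can be set, again using the hypothetical `conditions` table:

```sql
-- Materialized-only is the default from 2.13; opt back into realtime aggregation
-- by setting timescaledb.materialized_only = false.
CREATE MATERIALIZED VIEW conditions_daily
WITH (timescaledb.continuous, timescaledb.materialized_only = false) AS
SELECT time_bucket(INTERVAL '1 day', time) AS bucket,
       device_id,
       avg(temperature) AS avg_temp
FROM conditions
GROUP BY bucket, device_id
WITH NO DATA;

-- Or change it on an existing continuous aggregate
ALTER MATERIALIZED VIEW conditions_daily SET (timescaledb.materialized_only = false);
```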
## Multi-node Deprecation

Multi-node support has been deprecated.
TimescaleDB 2.13 is the last version that will include multi-node support.
Multi-node support in 2.13 is available for PostgreSQL 13, 14 and 15.

If you want to migrate from multi-node TimescaleDB to single-node TimescaleDB,
read the [migration documentation](https://docs.timescale.com/migrate/latest/multi-node-to-timescale-service/).
### Why we have deprecated multi-node support

We began working on multi-node support in 2018 and released the first version in 2020 to provide higher scalability
for TimescaleDB deployments. Since then, as we’ve continued to evolve our community and cloud products,
we’ve come to realize that multi-node was not the most forward-looking approach,
and we have instead turned our focus to more cloud-native designs that inherently leverage compute/storage separation for higher scalability.

These are the challenges we encountered with our multi-node architecture:
- <b>Lower development speed</b>

  Multi-node was hard to maintain and evolve, and it added a very high tax on any new feature we developed,
  significantly slowing down our development speed.
  Additionally, only ~1% of TimescaleDB deployments use multi-node.

- <b>Inconsistent developer experience</b>

  Multi-node imposed a key initial decision on developers adopting TimescaleDB: start with single-node or start with multi-node.
  Moving from one to the other required a migration.
  On top of that, a number of features supported on single-node were not available on multi-node, making
  the experience inconsistent and the choice even more complicated.
- <b>Expensive price / performance ratio</b>

  We were very far from linear scalability, not least due to the natural “performance hit” of moving from single-node to distributed transactions.
  For example, handling 2x the load a single node could handle required 8 servers.
  At the same time, we’ve been able to dramatically improve the performance of single-node to handle 2 million inserts
  per second, and to improve query performance dramatically with a
  number of [query optimizations](https://www.timescale.com/blog/8-performance-improvements-in-recent-timescaledb-releases-for-faster-query-analytics/)
  and [vectorized query execution](https://www.timescale.com/blog/teaching-postgres-new-tricks-simd-vectorization-for-faster-analytical-queries/).

- <b>Not designed to leverage new cloud architectures</b>

  As we’ve gained experience developing and scaling cloud solutions, we’ve realized that we can solve
  the same problems more easily through new, more modern approaches that leverage cloud technologies,
  which often separate compute and storage to scale them independently.
  For example, we’re now leveraging object storage in our cloud offering to deliver virtually infinite
  storage capacity at a very low cost with [Tiered Storage](https://www.timescale.com/blog/scaling-postgresql-for-cheap-introducing-tiered-storage-in-timescale/),
  and our [columnar compression](https://www.timescale.com/blog/building-columnar-compression-in-a-row-oriented-database/) (built concurrently with multi-node)
  offers effective per-disk capacity that’s 10-20x that of traditional Postgres.
  Similarly, users can scale compute both vertically and horizontally, by either dynamically resizing the compute
  allocated to cloud databases or adding additional server replicas (each of which can use the same tiered storage).
Given all those reasons, we’ve made the difficult decision to deprecate multi-node so we can accelerate feature development
and performance improvements, and deliver a better developer experience for the 99% of our community that is not using multi-node.

### Questions and feedback

We understand this news will be disappointing to users of multi-node. We’d like to help and provide advice on the best path forward for you.

If you have any questions or feedback, you can share them in the #multi-node channel in our [community Slack](https://slack.timescale.com/).