This release contains performance improvements, an improved hypertable DDL API, and bug fixes since the 2.12.2 release. We recommend that you upgrade at the next available opportunity.

In addition, it includes these noteworthy features:

* Full PostgreSQL 16 support for all existing features
* Vectorized aggregation execution for sum()
* Track chunk creation time used in retention/compression policies

**Deprecation notice: Multi-node support**

TimescaleDB 2.13 is the last version that will include multi-node support. Multi-node support in 2.13 is available for PostgreSQL 13, 14 and 15. Learn more about it [here](docs/MultiNodeDeprecation.md).

If you want to migrate from multi-node TimescaleDB to single-node TimescaleDB, read the [migration documentation](https://docs.timescale.com/migrate/latest/multi-node-to-timescale-service/).

**PostgreSQL 13 deprecation announcement**

We will continue supporting PostgreSQL 13 until April 2024. Closer to that time, we will announce the specific version of TimescaleDB in which PostgreSQL 13 support will no longer be included.

**Starting from TimescaleDB 2.13.0**

* No Amazon Machine Images (AMIs) are published. If you previously used an AMI, please use another [installation method](https://docs.timescale.com/self-hosted/latest/install/)
* Continuous Aggregates are materialized only (non-realtime) by default

**Features**

* #5575 Add chunk-wise sorted paths for compressed chunks
* #5761 Simplify hypertable DDL API (see the sketch below)
* #5890 Reduce WAL activity by freezing compressed tuples immediately
* #6050 Vectorized aggregation execution for sum()
* #6062 Add metadata for chunk creation time
* #6077 Make Continuous Aggregates materialized only (non-realtime) by default (see the sketch below)
* #6177 Change show_chunks/drop_chunks to use chunk creation time (see the sketch below)
* #6178 Show batches/tuples decompressed during DML operations in EXPLAIN output
* #6185 Keep track of catalog version
* #6227 Use creation time in retention/compression policy
* #6307 Add SQL function cagg_validate_query

**Bugfixes**

* #6188 Add GUC for setting background worker log level
* #6222 Allow enabling compression on hypertable with unique expression index
* #6240 Check if worker registration succeeded
* #6254 Fix exception detail passing in compression_policy_execute
* #6264 Fix missing bms_del_member result assignment
* #6275 Fix negative bitmapset member not allowed in compression
* #6280 Fix potential data loss when compressing a table with a partial index that matches compression order
* #6289 Add support for startup chunk exclusion with aggs
* #6290 Repair relacl on upgrade
* #6297 Fix segfault when creating a cagg using a NULL width in time bucket function
* #6305 Make timescaledb_functions.makeaclitem strict
* #6332 Fix typmod and collation for segmentby columns
* #6339 Fix tablespace with constraints
* #6343 Enable segmentwise recompression in compression policy

**Thanks**

* @fetchezar for reporting an issue with compression policy error messages
* @jflambert for reporting the background worker log level issue
* @torazem for reporting an issue with compression and large oids
* @fetchezar for reporting an issue in the compression policy
* @lyp-bobi for reporting an issue with tablespace with constraints
* @pdipesh02 for contributing to the implementation of the metadata for chunk creation time, the generalized hypertable API, and show_chunks/drop_chunks using chunk creation time
* @lkshminarayanan for all his work on PG16 support
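To give a feel for the simplified hypertable DDL API (#5761), here is a minimal sketch. It assumes the dimension-builder form of `create_hypertable` with a `by_range()` builder; the exact signature may differ from what ships in 2.13, so check the release documentation.

```sql
-- Hedged sketch of the simplified hypertable DDL API (#5761).
-- The by_range() dimension builder is an assumption about the new form.
CREATE TABLE conditions (
    time        TIMESTAMPTZ NOT NULL,
    device_id   INTEGER,
    temperature DOUBLE PRECISION
);

-- One call with a range-dimension builder replaces the older
-- column-name/interval argument pairs.
SELECT create_hypertable('conditions', by_range('time', INTERVAL '1 day'));
```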
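The chunk-creation-time work (#6062, #6177, #6227) lets you select, drop, and expire chunks by when they were created rather than by the time-column values they cover. A hedged sketch, assuming the `created_before` and `drop_created_before` parameter names:

```sql
-- Hedged sketch: chunk selection by creation time (#6177). The
-- created_before parameter name is an assumption about the 2.13 API.
SELECT show_chunks('conditions', created_before => now() - INTERVAL '3 months');
SELECT drop_chunks('conditions', created_before => now() - INTERVAL '3 months');

-- Retention policy keyed on chunk creation time (#6227); the
-- drop_created_before parameter name is likewise an assumption.
SELECT add_retention_policy('conditions',
                            drop_created_before => INTERVAL '6 months');
```

Creation-time-based selection is useful when the time column holds values that differ from insertion time (for example, backfilled data), since retention then tracks how long a chunk has actually existed.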
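Because continuous aggregates are now materialized-only by default (#6077), real-time behavior must be requested explicitly. A sketch using the long-standing `timescaledb.materialized_only` option:

```sql
-- Continuous aggregates default to materialized-only in 2.13 (#6077).
-- Set materialized_only = false to opt back in to real-time aggregation,
-- which merges the materialized data with not-yet-materialized raw rows.
CREATE MATERIALIZED VIEW conditions_daily
WITH (timescaledb.continuous, timescaledb.materialized_only = false) AS
SELECT time_bucket(INTERVAL '1 day', time) AS day,
       device_id,
       avg(temperature) AS avg_temp
FROM conditions
GROUP BY day, device_id;

-- Existing continuous aggregates can be switched the same way.
ALTER MATERIALIZED VIEW conditions_daily
    SET (timescaledb.materialized_only = false);
```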
## Multi-node Deprecation

Multi-node support has been deprecated.
TimescaleDB 2.13 is the last version that will include multi-node support. Multi-node support in 2.13 is available for PostgreSQL 13, 14 and 15.
This decision was not made lightly, and we want to provide a clear understanding of the reasoning behind this change and the path forward.

### Why we are ending multi-node support

We began work on multi-node support in 2018 and released the first version in 2020 to address the growing
demand for higher scalability in TimescaleDB deployments.
The distributed architecture of multi-node allowed horizontal scalability of writes and reads beyond what a single node could provide.

While we have added many improvements since the initial launch, the architecture of multi-node came with a number of
inherent limitations and challenges that have limited its adoption.
Regrettably, only about 1% of current TimescaleDB deployments use multi-node, and the complexity involved in
maintaining this feature has become a significant obstacle.
Multi-node is not an isolated feature that can be kept in the product with little effort: every new feature required
extra development and testing to ensure it also worked for multi-node.

As we've evolved our single-node product and expanded our cloud offering to serve thousands of customers,
we've identified more efficient solutions to meet the scalability needs of our users.

First, we have made, and will continue to make, big improvements in the write and read performance of single-node.
We've scaled a single-node deployment to process 2 million inserts per second and have seen performance improvements of 10x for common queries.
You can read a summary of the latest query performance improvements [here](https://www.timescale.com/blog/8-performance-improvements-in-recent-timescaledb-releases-for-faster-query-analytics/).

Second, we are leveraging cloud technologies that have become very mature to provide higher scalability in a more accessible way.
For example, our cloud offering uses object storage to deliver virtually [infinite storage capacity at a very low cost](https://www.timescale.com/blog/scaling-postgresql-for-cheap-introducing-tiered-storage-in-timescale/).

For those reasons, we've decided to focus our efforts on improving single-node and leveraging cloud technologies to solve for high scalability, and as a result we've ended support for multi-node.

### What this means for you

We understand that this change may raise questions, and we are committed to supporting you through the transition.

For current TimescaleDB multi-node users, please refer to our [migration documentation](https://docs.timescale.com/migrate/latest/multi-node-to-timescale-service/)
for a step-by-step guide to transitioning to a single-node configuration.

Alternatively, you can continue to use multi-node up to version 2.13. However, please be aware that future versions will no longer include this functionality.

If you have any questions or feedback, you can share them in the #multi-node channel in our [community Slack](https://slack.timescale.com/).