Merge remote-tracking branch 'origin/3269-docs-rfc-update-the-readme-in-the-timescaledb-github-repo-to-match-the-pgai-docs' into 3269-docs-rfc-update-the-readme-in-the-timescaledb-github-repo-to-match-the-pgai-docs

# Conflicts:
#	README.md
atovpeko committed Dec 9, 2024
2 parents f65d1c1 + c205a02 commit 022ea52
Showing 15 changed files with 1,684 additions and 1,306 deletions.
25 changes: 18 additions & 7 deletions README.md
@@ -8,15 +8,18 @@

<div align=center>

<h3>TimescaleDB is an open-source extension for PostgreSQL that enables time-series, events, and analytics workloads, while increasing ingest, query, and storage performance</h3>
<h3>TimescaleDB is an extension for PostgreSQL that enables time-series, events, and real-time analytics workloads, while increasing ingest, query, and storage performance</h3>

[![Docs](https://img.shields.io/badge/Read_the_Timescale_docs-black?style=for-the-badge&logo=readthedocs&logoColor=white)](https://docs.timescale.com/)
[![SLACK](https://img.shields.io/badge/Ask_the_Timescale_community-black?style=for-the-badge&logo=slack&logoColor=white)](https://timescaledb.slack.com/archives/C4GT3N90X)
[![Try TimescaleDB for free](https://img.shields.io/badge/Try_Timescale_for_free-black?style=for-the-badge&logo=timescale&logoColor=white)](https://console.cloud.timescale.com/signup)

</div>

TimescaleDB scales PostgreSQL for ingesting and querying vast amounts of live data. From the perspective of both use and management, TimescaleDB looks and feels just like PostgreSQL, and can be managed and queried as such. However, it provides a wide range of features and optimizations that supercharge your queries - all while keeping the costs down. For example, our hybrid row-columnar engine makes queries up to 350x faster, ingests 44% faster, and reduces storage by 95% over RDS. Visit [timescale.com](https://www.timescale.com) for details, use cases, customer success stories, and more.
TimescaleDB scales PostgreSQL for time-series data with the help of [hypertables](https://docs.timescale.com/use-timescale/latest/hypertables/about-hypertables/). Hypertables are PostgreSQL tables that automatically partition your data by time and space. You interact with a hypertable in the same way as with a regular PostgreSQL table. Behind the scenes, the database performs the work of setting up and maintaining the hypertable's partitions.
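
For example, here is a minimal sketch of how a hypertable is created. It is not part of this change; the table and column names are illustrative and chosen to match the `conditions` examples further down:

```sql
-- Create a regular PostgreSQL table...
CREATE TABLE conditions (
    time        TIMESTAMPTZ      NOT NULL,
    device_id   TEXT,
    temperature DOUBLE PRECISION
);

-- ...then convert it into a hypertable partitioned by the time column.
SELECT create_hypertable('conditions', 'time');
```

From then on, `INSERT` and `SELECT` statements against `conditions` behave as they would on a plain PostgreSQL table, while TimescaleDB creates and maintains the underlying chunks.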

From the perspective of both use and management, TimescaleDB looks and feels like PostgreSQL, and can be managed and queried as
such. However, it provides a range of features and optimizations that make managing your time-series data easier and more efficient. For example, our hybrid row-columnar engine makes queries up to 350x faster, ingests 44% faster, and reduces storage by 95% over RDS.

<table style="width:100%;">
<thead>
@@ -112,16 +115,24 @@ You compress your time-series data to reduce its size by more than 90%. This cut

When you enable compression, the data in your hypertable is compressed chunk by chunk. When a chunk is compressed, multiple records are grouped into a single row. The columns of this row hold an array-like structure that stores all the data. This means that instead of storing the data across many rows, the same data is stored in a single row. Because a single row takes up less disk space than many rows, this decreases the amount of disk space required, and can also speed up your queries. For example:

- Compress a specific hypertable chunk manually:
- Enable compression on a hypertable:

```sql
ALTER TABLE conditions SET (
timescaledb.compress,
timescaledb.compress_segmentby = 'device_id'
);
```
- Compress hypertable chunks manually:

```sql
SELECT compress_chunk( '<chunk_name>');
SELECT compress_chunk(chunk, if_not_compressed => true) FROM show_chunks( 'conditions', now()::timestamp - INTERVAL '1 week', now()::timestamp - INTERVAL '3 weeks' ) AS chunk;
```

- Create a policy to compress chunks that are older than seven days automatically:

```sql
SELECT add_compression_policy('<hypertable_name>', INTERVAL '7 days');
SELECT add_compression_policy('conditions', INTERVAL '7 days');
```
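
- Check how much disk space compression saved. This is a sketch, not part of this change; it assumes the `conditions` hypertable above already has compressed chunks and that your TimescaleDB version provides the `hypertable_compression_stats` function:

```sql
SELECT pg_size_pretty(before_compression_total_bytes) AS before,
       pg_size_pretty(after_compression_total_bytes)  AS after
FROM hypertable_compression_stats('conditions');
```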

See more:
@@ -133,12 +144,12 @@ See more:

Time buckets enable you to aggregate data in hypertables by time interval and calculate summary values.

For example, calculate the average daily temperature in a table named `weather_conditions`. The table has a `time` and `temperature` columns:
For example, calculate the average daily temperature in a table named `conditions`. The table has `time` and `temperature` columns:

```sql
SELECT time_bucket('1 day', time) AS bucket,
avg(temperature) AS avg_temp
FROM weather_conditions
FROM conditions
GROUP BY bucket
ORDER BY bucket ASC;
```
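
The same pattern extends to additional grouping columns. For instance, a sketch of per-device daily averages — it assumes `conditions` also has the `device_id` column used in the compression example above:

```sql
SELECT time_bucket('1 day', time) AS bucket,
       device_id,
       avg(temperature) AS avg_temp
FROM conditions
GROUP BY bucket, device_id
ORDER BY bucket, device_id;
```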
647 changes: 333 additions & 314 deletions test/expected/append-14.out

Large diffs are not rendered by default.

649 changes: 334 additions & 315 deletions test/expected/append-15.out

Large diffs are not rendered by default.

649 changes: 334 additions & 315 deletions test/expected/append-16.out

Large diffs are not rendered by default.

649 changes: 334 additions & 315 deletions test/expected/append-17.out

Large diffs are not rendered by default.

12 changes: 1 addition & 11 deletions test/runner.sh
@@ -148,14 +148,4 @@ ${PSQL} -U ${TEST_PGUSER} \
-v TSL_MODULE_PATHNAME="'timescaledb-tsl-${EXT_VERSION}'" \
-v TEST_SUPPORT_FILE=${TEST_SUPPORT_FILE} \
-v TEST_SUPPORT_FILE_INIT=${TEST_SUPPORT_FILE_INIT} \
"$@" -d ${TEST_DBNAME} 2>&1 | \
sed -e '/<exclude_from_test>/,/<\/exclude_from_test>/d' \
-e 's! Memory: [0-9]\{1,\}kB!!' \
-e 's! Memory Usage: [0-9]\{1,\}kB!!' \
-e 's! Average Peak Memory: [0-9]\{1,\}kB!!' | \
grep -v 'DEBUG: rehashing catalog cache id' | \
grep -v 'DEBUG: compacted fsync request queue from' | \
grep -v 'DEBUG: creating and filling new WAL file' | \
grep -v 'DEBUG: done creating and filling new WAL file' | \
grep -v 'DEBUG: flushed relation because a checkpoint occurred concurrently' | \
grep -v 'NOTICE: cancelling the background worker for job'
"$@" -d ${TEST_DBNAME} 2>&1 | ${CURRENT_DIR}/runner_cleanup_output.sh
30 changes: 30 additions & 0 deletions test/runner_cleanup_output.sh
@@ -0,0 +1,30 @@
#!/usr/bin/env bash

set -u
set -e

RUNNER=${1:-""}

sed -e '/<exclude_from_test>/,/<\/exclude_from_test>/d' \
-e 's! Memory: [0-9]\{1,\}kB!!' \
-e 's! Memory Usage: [0-9]\{1,\}kB!!' \
-e 's! Average Peak Memory: [0-9]\{1,\}kB!!' | \
grep -v 'DEBUG: rehashing catalog cache id' | \
grep -v 'DEBUG: compacted fsync request queue from' | \
grep -v 'DEBUG: creating and filling new WAL file' | \
grep -v 'DEBUG: done creating and filling new WAL file' | \
grep -v 'DEBUG: flushed relation because a checkpoint occurred concurrently' | \
grep -v 'NOTICE: cancelling the background worker for job' | \
if [ "${RUNNER}" = "shared" ]; then \
sed -e '/^-\{1,\}$/d' \
-e 's!_[0-9]\{1,\}_[0-9]\{1,\}_chunk!_X_X_chunk!g' \
-e 's!^ \{1,\}QUERY PLAN \{1,\}$!QUERY PLAN!'; \
else \
cat; \
fi | \
if [ "${RUNNER}" = "isolation" ]; then \
sed -e 's!_[0-9]\{1,\}_[0-9]\{1,\}_chunk!_X_X_chunk!g' \
-e 's!hypertable_[0-9]\{1,\}!hypertable_X!g'; \
else \
cat; \
fi
6 changes: 2 additions & 4 deletions test/runner_isolation.sh
@@ -7,6 +7,7 @@

set -e
set -u
CURRENT_DIR=$(dirname $0)

ISOLATIONTEST=$1
shift
@@ -18,7 +19,4 @@ shift
# the chunk numbers influence the names of indexes if they are long enough to be
# truncated, so the only way to get a stable explain output is to run such a test
# in a separate database.
$ISOLATIONTEST "$@" | \
sed -e 's!_[0-9]\{1,\}_[0-9]\{1,\}_chunk!_X_X_chunk!g' \
-e 's!hypertable_[0-9]\{1,\}!hypertable_X!g'

$ISOLATIONTEST "$@" | ${CURRENT_DIR}/runner_cleanup_output.sh "isolation"
16 changes: 1 addition & 15 deletions test/runner_shared.sh
@@ -79,18 +79,4 @@ ${PSQL} -U ${TEST_PGUSER} \
-v ROLE_DEFAULT_PERM_USER_2=${TEST_ROLE_DEFAULT_PERM_USER_2} \
-v MODULE_PATHNAME="'timescaledb-${EXT_VERSION}'" \
-v TSL_MODULE_PATHNAME="'timescaledb-tsl-${EXT_VERSION}'" \
"$@" -d ${TEST_DBNAME} 2>&1 | \
sed -e '/<exclude_from_test>/,/<\/exclude_from_test>/d' \
-e 's!_[0-9]\{1,\}_[0-9]\{1,\}_chunk!_X_X_chunk!g' \
-e 's!^ \{1,\}QUERY PLAN \{1,\}$!QUERY PLAN!' \
-e 's!: actual rows!: actual rows!' \
-e '/^-\{1,\}$/d' \
-e 's! Memory: [0-9]\{1,\}kB!!' \
-e 's! Memory Usage: [0-9]\{1,\}kB!!' \
-e 's! Average Peak Memory: [0-9]\{1,\}kB!!' | \
grep -v 'DEBUG: rehashing catalog cache id' | \
grep -v 'DEBUG: compacted fsync request queue from' | \
grep -v 'DEBUG: creating and filling new WAL file' | \
grep -v 'DEBUG: done creating and filling new WAL file' | \
grep -v 'DEBUG: flushed relation because a checkpoint occurred concurrently' | \
grep -v 'NOTICE: cancelling the background worker for job'
"$@" -d ${TEST_DBNAME} 2>&1 | ${CURRENT_DIR}/runner_cleanup_output.sh "shared"
14 changes: 8 additions & 6 deletions test/sql/include/append_load.sql
@@ -40,6 +40,7 @@ INSERT INTO append_test VALUES ('2017-03-22T09:18:22', 23.5, 1, '{"a": 1, "b": 2
('2017-05-22T09:18:22', 36.2, 2, '{"c": 3, "b": 2}'),
('2017-05-22T09:18:23', 15.2, 2, '{"c": 3}'),
('2017-08-22T09:18:22', 34.1, 3, '{"c": 4}');
VACUUM (ANALYZE) append_test;

-- Create another hypertable to join with
CREATE TABLE join_test(time timestamptz, temp float, colorid integer);
@@ -48,25 +49,27 @@ SELECT create_hypertable('join_test', 'time', chunk_time_interval => 26280000000
INSERT INTO join_test VALUES ('2017-01-22T09:18:22', 15.2, 1),
('2017-02-22T09:18:22', 24.5, 2),
('2017-08-22T09:18:22', 23.1, 3);
VACUUM (ANALYZE) join_test;

-- Create another table to join with which is not a hypertable.
CREATE TABLE join_test_plain(time timestamptz, temp float, colorid integer, attr jsonb);

INSERT INTO join_test_plain VALUES ('2017-01-22T09:18:22', 15.2, 1, '{"a": 1}'),
('2017-02-22T09:18:22', 24.5, 2, '{"b": 2}'),
('2017-08-22T09:18:22', 23.1, 3, '{"c": 3}');
VACUUM (ANALYZE) join_test_plain;

-- create hypertable with DATE time dimension
CREATE TABLE metrics_date(time DATE NOT NULL);
SELECT create_hypertable('metrics_date','time');
INSERT INTO metrics_date SELECT generate_series('2000-01-01'::date, '2000-02-01'::date, '5m'::interval);
ANALYZE metrics_date;
VACUUM (ANALYZE) metrics_date;

-- create hypertable with TIMESTAMP time dimension
CREATE TABLE metrics_timestamp(time TIMESTAMP NOT NULL);
SELECT create_hypertable('metrics_timestamp','time');
INSERT INTO metrics_timestamp SELECT generate_series('2000-01-01'::date, '2000-02-01'::date, '5m'::interval);
ANALYZE metrics_timestamp;
VACUUM (ANALYZE) metrics_timestamp;

-- create hypertable with TIMESTAMPTZ time dimension
CREATE TABLE metrics_timestamptz(time TIMESTAMPTZ NOT NULL, device_id INT NOT NULL);
@@ -75,7 +78,7 @@ SELECT create_hypertable('metrics_timestamptz','time');
INSERT INTO metrics_timestamptz SELECT generate_series('2000-01-01'::date, '2000-02-01'::date, '5m'::interval), 1;
INSERT INTO metrics_timestamptz SELECT generate_series('2000-01-01'::date, '2000-02-01'::date, '5m'::interval), 2;
INSERT INTO metrics_timestamptz SELECT generate_series('2000-01-01'::date, '2000-02-01'::date, '5m'::interval), 3;
ANALYZE metrics_timestamptz;
VACUUM (ANALYZE) metrics_timestamptz;

-- create space partitioned hypertable
CREATE TABLE metrics_space(time timestamptz NOT NULL, device_id int NOT NULL, v1 float, v2 float, v3 text);
@@ -87,7 +90,7 @@ FROM generate_series('2000-01-01'::timestamptz, '2000-01-14'::timestamptz, '5m':
generate_series(1,10,1) g2(device_id)
ORDER BY time, device_id;

ANALYZE metrics_space;
VACUUM (ANALYZE) metrics_space;

-- test ChunkAppend projection #2661
CREATE TABLE i2661 (
@@ -99,5 +102,4 @@ CREATE TABLE i2661 (
SELECT create_hypertable('i2661', 'timestamp');

INSERT INTO i2661 SELECT 1, 'speed', generate_series('2019-12-31 00:00:00', '2020-01-10 00:00:00', '2m'::interval), 0;
ANALYZE i2661;

VACUUM (ANALYZE) i2661;
3 changes: 2 additions & 1 deletion test/sql/include/append_query.sql
@@ -320,6 +320,7 @@ SELECT time, device_id
FROM generate_series('2000-01-01'::timestamptz,'2000-01-21','30m') g1(time),
generate_series(1,10,1) g2(device_id)
ORDER BY time, device_id;
VACUUM (ANALYZE) join_limit;

-- get 2nd chunk oid
SELECT tableoid AS "CHUNK_OID" FROM join_limit WHERE time > '2000-01-07' ORDER BY time LIMIT 1
@@ -353,7 +354,7 @@ CREATE TABLE i3030(time timestamptz NOT NULL, a int, b int);
SELECT table_name FROM create_hypertable('i3030', 'time', create_default_indexes=>false);
CREATE INDEX ON i3030(a,time);
INSERT INTO i3030 (time,a) SELECT time, a FROM generate_series('2000-01-01'::timestamptz,'2000-01-01 3:00:00'::timestamptz,'1min'::interval) time, generate_series(1,30) a;
ANALYZE i3030;
VACUUM (ANALYZE) i3030;

:PREFIX SELECT * FROM i3030 where time BETWEEN '2000-01-01'::text::timestamptz AND '2000-01-03'::text::timestamptz ORDER BY a,time LIMIT 1;
DROP TABLE i3030;
16 changes: 15 additions & 1 deletion tsl/src/compression/api.c
@@ -14,6 +14,7 @@
#include <catalog/dependency.h>
#include <catalog/index.h>
#include <catalog/indexing.h>
#include <catalog/pg_am.h>
#include <commands/event_trigger.h>
#include <commands/tablecmds.h>
#include <commands/trigger.h>
@@ -777,13 +778,26 @@ set_access_method(Oid relid, const char *amname)
};
bool to_hypercore = strcmp(amname, TS_HYPERCORE_TAM_NAME) == 0;
Oid amoid = ts_get_rel_am(relid);
Oid new_amoid = get_am_oid(amname, false);

/* Setting the same access method is a no-op */
if (amoid == get_am_oid(amname, false))
if (amoid == new_amoid)
return relid;

hypercore_alter_access_method_begin(relid, !to_hypercore);
AlterTableInternal(relid, list_make1(&cmd), false);

#if (PG_VERSION_NUM < 150004)
/* Fix for PostgreSQL bug where pg_depend was not updated to reflect the
* new dependency between AM and relation. See related PG fix here:
* https://github.com/postgres/postgres/commit/97d89101045fac8cb36f4ef6c08526ea0841a596 */
if (changeDependencyFor(RelationRelationId, relid, AccessMethodRelationId, amoid, new_amoid) !=
1)
elog(ERROR,
"could not change access method dependency for relation \"%s.%s\"",
get_namespace_name(get_rel_namespace(relid)),
get_rel_name(relid));
#endif
hypercore_alter_access_method_finish(relid, !to_hypercore);

#else
