TimeScale Notes
Ian.readman (talk | contribs)
== Installation ==
https://docs.timescale.com/self-hosted/latest/install/installation-linux
== Create Hypertable ==
<pre> | |||
CREATE TABLE conditions ( | |||
time TIMESTAMPTZ NOT NULL, | |||
location TEXT NOT NULL, | |||
temperature DOUBLE PRECISION NULL | |||
); | |||
</pre> | |||
<pre> | |||
SELECT create_hypertable('conditions', by_range('time')); | |||
SELECT create_hypertable('conditions', by_range('time', INTERVAL '1 day')); | |||
SELECT set_chunk_time_interval('conditions', INTERVAL '2 days'); | |||
</pre> | |||
The default time interval for a chunk is 7 days. <br/>
The dimension can be either by_range or by_hash.
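The primary dimension is set when the hypertable is created; a secondary dimension can be added afterwards. A sketch, assuming a TimescaleDB version where by_hash is available (2.13+); the 4 partitions is an arbitrary example value:
<pre>
-- Add a secondary hash dimension on location (4 partitions chosen for illustration)
SELECT add_dimension('conditions', by_hash('location', 4));
</pre>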
== Indexes ==
You can create indexes on hypertables as normal. <br/>
However, when creating a unique index you must include the partitioning columns. <br/>
In the example above that would include the <b>time</b> column.
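For example, a unique index on the conditions table above must list the <b>time</b> partitioning column alongside the other key columns (the index name here is just illustrative):
<pre>
-- Valid: the unique index includes the partitioning column (time)
CREATE UNIQUE INDEX idx_conditions_location_time
  ON conditions (location, time);
</pre>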
== Compression ==
=== Check Distinct Values ===
<pre> | |||
-- n_distinct (-1 = Unique, 1 = All The Same) | -- n_distinct (-1 = Unique, 1 = All The Same) | ||
SELECT tablename, attname, most_common_vals, most_common_freqs, n_distinct | SELECT tablename, attname, most_common_vals, most_common_freqs, n_distinct | ||
FROM pg_stats | FROM pg_stats | ||
-- WHERE tablename = '<TableName>'; | -- WHERE tablename = '<TableName>'; | ||
WHERE schemaname = '<SchemaName>'; | WHERE schemaname = '<SchemaName>'; | ||
</pre> | |||
=== Compress Table ===
<pre>
ALTER TABLE crypto_ticks
SET (timescaledb.compress,
     timescaledb.compress_segmentby='symbol',
     timescaledb.compress_orderby='time DESC');
</pre>
<code>SELECT compress_chunk(c) FROM show_chunks('crypto_ticks') c;</code>
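Compression can be undone the same way (for example before a bulk backfill); a sketch using the same crypto_ticks table:
<pre>
-- Decompress every chunk; the second argument (if_compressed => true)
-- skips chunks that are not compressed instead of raising an error
SELECT decompress_chunk(c, true) FROM show_chunks('crypto_ticks') c;
</pre>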
=== Check Compression ===
<pre>
SELECT pg_size_pretty(before_compression_total_bytes) AS before,
       pg_size_pretty(after_compression_total_bytes) AS after
FROM hypertable_compression_stats('crypto_ticks');

SELECT pg_size_pretty(hypertable_size('crypto_ticks'));
</pre>
=== Compression Policy ===
Create a scheduled job that compresses all chunks whose data (by the time column) is older than 8 days. <br/>
<pre>SELECT add_compression_policy('crypto_ticks', INTERVAL '8 days');</pre>
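The policy runs as a background job. Assuming the standard informational views, it can be inspected or removed like this (a sketch):
<pre>
-- List background jobs, including any compression policies
SELECT job_id, proc_name, schedule_interval, config
FROM timescaledb_information.jobs;

-- Drop the compression policy again if needed
SELECT remove_compression_policy('crypto_ticks');
</pre>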
=== Links ===
https://docs.timescale.com/tutorials/latest/financial-tick-data/financial-tick-compress <br/>
Since version 2.18, Hypercore has been used for compression. <br/>
This uses a hybrid of row and columnar storage. <br/>
https://docs.timescale.com/use-timescale/latest/compression (Old) <br/>
https://docs.timescale.com/use-timescale/latest/hypercore (New)
Latest revision as of 08:55, 8 April 2025