Releases · talariadb/talaria
v1.1.15
- Use zlib compression
- Add statsd metrics for keys added and keys deleted
- Improved compaction and key deletion latency
- Expose badger options as follows:

  ```yaml
  badger:
    levelOneSize: 204800000
    maxLevels: 3
    syncWrites: false
  ```
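For context, the three keys above correspond to Badger's functional options. The sketch below is a minimal, assumed wiring using the Badger v2 API (`DefaultOptions`, `WithLevelOneSize`, `WithMaxLevels`, `WithSyncWrites`); the actual storage layer inside Talaria may apply these values differently.

```go
package storage

import (
	badger "github.com/dgraph-io/badger/v2"
)

// openStore illustrates how the exposed settings map onto Badger's
// functional options. The wiring inside Talaria itself may differ;
// this only shows what each key tunes in Badger.
func openStore(dir string) (*badger.DB, error) {
	opts := badger.DefaultOptions(dir).
		WithLevelOneSize(204800000). // bytes held in LSM level one before compaction
		WithMaxLevels(3).            // total number of LSM tree levels
		WithSyncWrites(false)        // trade durability for write throughput
	return badger.Open(opts)
}
```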
v1.1.14
- Added compaction prior to server shutdown
v1.1.13
- Fixed compaction for keys whose hash key value differs from the last merged key.
- Changed the compression to Snappy for faster writes.
- Added stats for errors in ingestion and compaction
v1.1.12
- Added the ability to write to multiple sinks (a generic fan-out sketch follows below)
- Added a net module for Lua, exposing a way to query IP/MAC addresses
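The sketch below is a generic illustration of fanning a block of data out to several sinks. The `Sink` interface and `MultiSink` type here are hypothetical and are not Talaria's actual sink abstraction.

```go
package sinks

import "context"

// Sink is a hypothetical interface used only for this illustration;
// Talaria's real sink abstraction may look different.
type Sink interface {
	Write(ctx context.Context, block []byte) error
}

// MultiSink fans a block out to every configured sink, remembering the
// first error but still attempting the remaining writers.
type MultiSink struct {
	sinks []Sink
}

func (m *MultiSink) Write(ctx context.Context, block []byte) error {
	var firstErr error
	for _, s := range m.sinks {
		if err := s.Write(ctx, block); err != nil && firstErr == nil {
			firstErr = err
		}
	}
	return firstErr
}
```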
v1.1.11
- Added a liveness & readiness probe server endpoint for Envoy
- Fixed BigQuery writer which defaulted to CSV instead of ORC
- Fixed timestamp column in ORC
- Changed `make://timestamp` to output the value as a timestamp instead of int64
v1.1.10
- Fixed an issue where computed columns were not computed for all rows under very high load.
- Changed the Docker image workflows. The `edge` tag is now built only for PRs, `latest` for every master build, and `:v1.x.x` for every release.
v1.1.9
- A unique event identifier can be generated by using the `make://identifier` function in a computed column.
- An ingestion timestamp can be generated by using the `make://timestamp` function in a computed column.

In order to use them, simply declare the computed columns as such:

```yaml
computed:
  - name: ingested_at
    func: "make://timestamp"
  - name: id
    func: "make://identifier"
```
v1.1.8
- Added better initialization logs
v1.1.7
- Added support for loading computed columns and configs from Google Cloud Storage. This can be done by using the `gs://bucket/prefix` scheme when specifying the URL.
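As a rough illustration, a loader for the `gs://` scheme could resolve such a URL with the official GCS Go client as sketched below; the bucket and object names are placeholders, and Talaria's real loader may differ.

```go
package loader

import (
	"context"
	"io"
	"net/url"
	"strings"

	"cloud.google.com/go/storage"
)

// loadFromGCS is a minimal sketch of resolving a gs://bucket/prefix URL
// into the object's bytes. It only shows the gs:// scheme being parsed
// and fetched with the GCS client; Talaria's actual loader may differ.
func loadFromGCS(ctx context.Context, rawURL string) ([]byte, error) {
	u, err := url.Parse(rawURL) // e.g. "gs://my-bucket/config/computed.yaml" (placeholder)
	if err != nil {
		return nil, err
	}

	client, err := storage.NewClient(ctx)
	if err != nil {
		return nil, err
	}
	defer client.Close()

	bucket := u.Host                          // "my-bucket"
	object := strings.TrimPrefix(u.Path, "/") // "config/computed.yaml"

	r, err := client.Bucket(bucket).Object(object).NewReader(ctx)
	if err != nil {
		return nil, err
	}
	defer r.Close()

	return io.ReadAll(r)
}
```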
v1.1.6
- Added ingestion by CSV and URL (for both CSV and ORC files); a generic sketch follows below
- Added certs to the Docker image
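The sketch below is a generic illustration of URL-based CSV ingestion using Go's standard library: fetch the file over HTTP and stream its records. It is not Talaria's actual ingestion API, and the URL is a placeholder.

```go
package ingest

import (
	"encoding/csv"
	"fmt"
	"io"
	"net/http"
)

// IngestCSVFromURL fetches a CSV file over HTTP and streams its records.
// This is only a generic illustration, not Talaria's ingestion API.
func IngestCSVFromURL(rawURL string) error {
	resp, err := http.Get(rawURL) // e.g. "https://example.com/events.csv" (placeholder)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	reader := csv.NewReader(resp.Body)

	header, err := reader.Read() // first row carries the column names
	if err != nil {
		return err
	}
	fmt.Printf("ingesting columns: %v\n", header)

	for {
		record, err := reader.Read()
		if err == io.EOF {
			break
		}
		if err != nil {
			return err
		}
		_ = record // each row would be handed to the ingestion pipeline here
	}
	return nil
}
```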