
It always depends on the data, but we've seen 92.5% and more: https://twitter.com/JeffMealo/status/1368030569557286915



(TimescaleDB person)

TimescaleDB users have seen 98% (i.e., over 50x) compression rates in some real-world cases (e.g., for some IT monitoring datasets), but compression ratio will definitely vary by dataset. (For example, a dataset of just 0s will compress even better! But that's probably not a realistic dataset :-) )
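(For anyone wanting to sanity-check the "98% = over 50x" arithmetic, here's a minimal Python sketch; the byte counts are made-up illustrative values, not real benchmark numbers:)

    # Converting a percentage size reduction into a compression ratio.
    # The byte counts below are hypothetical, purely for illustration.

    def compression_ratio(original_bytes: int, compressed_bytes: int) -> float:
        """Ratio of original size to compressed size (50.0 means 50x)."""
        return original_bytes / compressed_bytes

    def reduction_percent(original_bytes: int, compressed_bytes: int) -> float:
        """Percentage of space saved by compression."""
        return 100.0 * (1 - compressed_bytes / original_bytes)

    original = 1_000_000_000   # hypothetical 1 GB of raw data
    compressed = 20_000_000    # hypothetical 20 MB after compression

    print(compression_ratio(original, compressed))  # 50.0 -> 50x
    print(reduction_percent(original, compressed))  # 98.0 -> 98% smaller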

Citus and TimescaleDB [0][1] take very different approaches to columnar compression, which result in different usability and performance trade-offs. In practice, one should choose the right tool for their workload.

(As an aside, if you have time-series data, no one has spent more time developing an awesome time-series experience on Postgres than the TimescaleDB team has :-) )

Kudos to the Citus team for this launch! I love seeing how different members of the Postgres community keep pushing the state of the art.

[0] Building columnar compression in a row-oriented database (https://blog.timescale.com/blog/building-columnar-compressio...)

[1] Time-series compression algorithms, explained (https://blog.timescale.com/blog/time-series-compression-algo...)



