
Thanks for the feedback! I've opened an issue to track this. [0]

* Levels 1-19 are the "standard" compression levels.

* Levels 20-22 are the "ultra" levels which require --ultra to use on the CLI. They allocate a lot of memory and are very slow.

* Level 0 is the default compression level, which is 3.

* Levels < 0 are the "fast" compression levels. They achieve speed by turning off Huffman compression, and by "accelerating" compression by a factor. Level -1 has acceleration factor 1, -2 has acceleration factor 2, and so on. So the minimum supported negative compression level is -131072, since the maximum acceleration factor is our block size. But in practice, I wouldn't expect a negative level lower than -10 or -20 to be all that useful.

[0] https://github.com/facebook/zstd/issues/3133
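The scheme above can be sketched as a small classifier. This is just an illustration of the mapping described in the comment, not zstd's actual code; the function name and return values are made up for the example.

```python
BLOCK_SIZE = 128 * 1024  # zstd's maximum block size: 131072 bytes

def describe_level(level):
    """Classify a zstd compression level per the scheme described above."""
    if level < -BLOCK_SIZE:
        raise ValueError("below the minimum negative level (-131072)")
    if level < 0:
        # "Fast" levels: acceleration factor is the magnitude of the level.
        return ("fast", -level)
    if level == 0:
        # Level 0 means "use the default", which is 3.
        return ("default", 3)
    if level <= 19:
        return ("standard", level)
    if level <= 22:
        # "Ultra" levels require --ultra on the CLI.
        return ("ultra", level)
    raise ValueError("level too high")

print(describe_level(-2))   # ('fast', 2)
print(describe_level(0))    # ('default', 3)
print(describe_level(21))   # ('ultra', 21)
```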




We're still reserving the right to fiddle around with the meaning of our negative compression levels. We think that we may be able to offer more compression at the same speeds by completely changing our search strategies for very fast compression speeds. But, there is only so much time in the day, and we haven't had time to investigate it yet. So we don't want to lock ourselves into a particular scheme right now.


This is exactly what I was hoping for! If you just copied and pasted this into the documentation directly, that'd be more than enough. Thanks for writing it out so clearly and creating the issue.


> They allocate a lot of memory and are very slow.

Slow and memory-hungry to compress, to decompress, or both?


They are only slow to compress. They are just as fast to decompress, or sometimes even faster. Source: I tested.


Yeah, that's correct. I'll just point out that they use a larger window size, so they will use more memory to decompress, but will still be fast.
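As a rough illustration of the decompression-memory point: the decompressor's memory use is dominated by the window, which is 2^windowLog bytes. The specific windowLog values below are illustrative assumptions, not taken from zstd's level tables.

```python
def window_bytes(window_log):
    """Window size in bytes for a given windowLog (memory the decoder must hold)."""
    return 1 << window_log

# An assumed mid-level window (windowLog = 23) vs. an assumed
# ultra-level window (windowLog = 27):
print(window_bytes(23) // (1024 * 1024))  # 8   (MiB)
print(window_bytes(27) // (1024 * 1024))  # 128 (MiB)
```

So an "ultra" frame can ask the decompressor to keep a much larger window resident, but that extra memory doesn't make decompression slow.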


Both.



