There's a link to a table of benchmarks in the description (http://mattmahoney.net/dc/text.html). LZ4 still looks a little more compelling than LZHAM in terms of compression and decompression times, though the table doesn't say what the intended application is (CPU-bound, disk-bound, etc.).
Are you sure it doesn't compare favorably to LZ4? In exchange for decompression that's only about 1.5x slower, the compression ratio appears to be substantially better: LZHAM compressed the enwik8 set to 23.80 MiB and the enwik9 set to 196.83 MiB.
LZ4, by comparison, compressed them to 40.88 MiB and 362.40 MiB. A database, or a Linux system using zswap/zram/etc., would be able to store roughly 84% more data in the same amount of memory. For caches, that's huge.
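Quick back-of-the-envelope check of that 84% figure, using just the two compressed enwik9 sizes quoted above:

    # Ratio of the two compressed enwik9 sizes quoted above.
    lzham_size = 196.83   # MiB, LZHAM-compressed enwik9
    lz4_size = 362.40     # MiB, LZ4-compressed enwik9
    print(f"{lz4_size / lzham_size - 1:.0%}")   # -> 84%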
One trade-off appears to be that LZHAM often wants a larger in-memory dictionary, but even with smaller dictionary sizes it still looks appreciably more compact than LZ4.
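Rough sketch of what I mean about dictionary size vs. ratio, using Python's stdlib lzma (LZMA, which LZHAM is derived from) as a stand-in since LZHAM has no stdlib binding; the file name and dictionary sizes are just placeholders:

    # Sketch: how dictionary size affects compressed size, with LZMA
    # standing in for LZHAM. Assumes a local copy of enwik8.
    import lzma

    with open("enwik8", "rb") as f:
        data = f.read()

    for dict_size in (1 << 20, 1 << 24, 1 << 26):   # 1 MiB, 16 MiB, 64 MiB
        filters = [{"id": lzma.FILTER_LZMA2, "dict_size": dict_size}]
        packed = lzma.compress(data, format=lzma.FORMAT_XZ, filters=filters)
        print(f"dict {dict_size >> 20:3d} MiB -> {len(packed) / 2**20:7.2f} MiB")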
That was also one of the things I noticed at a quick glance. It would probably be a fairly big win for anything I/O-constrained. I haven't looked at the test data or the algorithm in detail, though.
Nevertheless, an article comparing the speed and compression efficiency of these algorithms could be interesting.
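If someone wants to write that up, a minimal timing harness along these lines would do, assuming a local copy of enwik8; only stdlib codecs are shown here, and LZ4/LZHAM would need third-party bindings (e.g. the python-lz4 package) dropped into the same slots:

    # Minimal compress/decompress timing harness (stdlib codecs only).
    import time
    import zlib
    import lzma

    def bench(name, compress, decompress, data):
        t0 = time.perf_counter()
        packed = compress(data)
        t1 = time.perf_counter()
        assert decompress(packed) == data
        t2 = time.perf_counter()
        ratio = len(data) / len(packed)
        print(f"{name:6s} ratio {ratio:5.2f}  "
              f"compress {t1 - t0:6.2f}s  decompress {t2 - t1:6.2f}s")

    with open("enwik8", "rb") as f:   # placeholder test file
        data = f.read()

    bench("zlib", zlib.compress, zlib.decompress, data)
    bench("lzma", lzma.compress, lzma.decompress, data)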