Sauron looks really great and I for one am super excited about the future of Rust + WASM for web apps (with backend). However, I am a bit concerned that even the minimal example is 1.17 MB of WASM, which seems really high since Rust compiled to WASM should be quite small.
165 KB is still fairly big for a minimal example compared to React, for instance, though it is much smaller than 1.17 MB. Are there ways to analyze what contributes to the size of WASM-compiled projects? I guess wasm-bindgen contributes significantly to the size currently.
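For reference, the rustwasm code-size profiler twiggy can break a module down by function, and binaryen's wasm-opt can usually shave off a bit more; roughly something like this (the pkg/app_bg.wasm path is just an assumed wasm-pack style output name):

    # install the code-size profiler
    cargo install twiggy

    # list the biggest contributors to the module's size
    twiggy top pkg/app_bg.wasm

    # re-optimize the module for size with binaryen's wasm-opt
    wasm-opt -Oz -o pkg/app_bg.opt.wasm pkg/app_bg.wasm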
EDIT: It's 68KB gzipped and 165KB is the uncompressed size. One great benefit of WASM is that, as described in "Making WebAssembly even faster: Firefox’s new streaming and tiering compiler"[0], it's much faster than JS to parse and execute.
I think if you start adding more code, the growth of the release binary will diminish. The todomvc example compiles to only 207KB and has considerably more code than the minimal example.
There are a lot of Rust compilation flags, and I have only tested a few.
I also enabled a lot of wasm-bindgen features in the library; I will eventually tighten that up.
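For anyone curious, the usual size-focused release profile in Cargo.toml looks roughly like this (a sketch of the common settings, not necessarily exactly what the examples above were built with):

    # Cargo.toml -- common size-oriented release settings (illustrative)
    [profile.release]
    opt-level = "z"    # optimize for size rather than speed
    lto = true         # link-time optimization strips more unused code
    codegen-units = 1  # slower builds, better optimization
    panic = "abort"    # drop the panic unwinding machinery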
> EDIT: It's 68KB gzipped and 165KB is the uncompressed size
What's the state of Brotli content-encoding support in browsers? Wasn't that supposed to be the next generation beyond gzip? Would it get a still better compression ratio? And FB is pushing zstd -- is that something that might be supported eventually?
It's still a lot given that when targeting the entire world (especially mobile links or portions of the globe that don't have a CDN a millisecond away), a typical assumed link is on the order of 512 kbps bandwidth with 100 ms RTT. That's two and a half seconds to load just your framework, with a cold cache and without including any content.
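Spelled out with the 165 KB uncompressed figure:

    165 KB * 8 ≈ 1,320 kbit
    1,320 kbit / 512 kbps ≈ 2.6 s, before adding the 100 ms RTT for connection setup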
Yes, many web developers just target their own regions, test on high-speed links, and everything seems fine, but frameworks that aim for widespread use still need to care about every single byte.
Images are not comparable to WASM, and especially not to JS. This is described in "Making WebAssembly even faster: Firefox’s new streaming and tiering compiler", but the TL;DR is that the network is less of a bottleneck than parsing and execution these days.
Further, the notion that 168 KB is nothing is a bit sad when you consider emerging markets and other low-bandwidth/CPU scenarios. One of my hopes for WASM is that it can achieve both smaller bundles and better parse/execute performance. To my understanding, one of the big blockers for this right now is the lack of a direct DOM API for WASM.
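Right now the closest thing is going through web-sys, where every DOM call crosses into JS glue that wasm-bindgen generates; a minimal sketch of what that looks like today (names here are just for illustration):

    // Minimal sketch: DOM access from Rust currently goes through web-sys,
    // i.e. JS glue generated by wasm-bindgen, since WASM has no direct
    // DOM API. The web-sys features "Window", "Document", "Element",
    // "Node" and "HtmlElement" have to be enabled in Cargo.toml, which
    // also adds to the bundle.
    use wasm_bindgen::prelude::*;

    #[wasm_bindgen(start)]
    pub fn run() -> Result<(), JsValue> {
        let document = web_sys::window()
            .expect("no window")
            .document()
            .expect("no document");
        let body = document.body().expect("no body");

        // Each of these calls crosses the WASM/JS boundary.
        let p = document.create_element("p")?;
        p.set_text_content(Some("Hello from WASM"));
        body.append_child(&p)?;
        Ok(())
    }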