Optimizing performance for 1000 units (construct.net)
66 points by AshleysBrain on Nov 30, 2022 | hide | past | favorite | 16 comments



> It's an example of the truly extraordinary performance of JavaScript. It really does get very close to C/C++ performance

Yes, I can attest to that. In my day job I write JS code that processes images in 2D. Processing images, even at high resolution, through a simple algorithm is perfectly doable at 60 fps in JS. You just have to be careful about what you do, but that's pretty much true for any language.

Recently, I implemented bilinear interpolation in JS, C (wasm) and WebGL. The C version was only marginally faster than the JS version, even with the full set of optimizations turned on. Sure, native C would probably be significantly faster, but then you wouldn't be comparing languages, you'd be comparing runtimes.
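For reference, a bilinear sampler of the kind described can be sketched in a few lines of plain JS. This is a hypothetical version (not the commenter's actual code), operating on a flat grayscale pixel array:

```javascript
// Sketch: bilinear sampling of a grayscale image stored as a flat
// array of width * height values. The function name and layout are
// illustrative assumptions, not taken from the comment.
function bilinearSample(pixels, width, height, x, y) {
  const x0 = Math.floor(x), y0 = Math.floor(y);
  const x1 = Math.min(x0 + 1, width - 1);   // clamp at the right edge
  const y1 = Math.min(y0 + 1, height - 1);  // clamp at the bottom edge
  const fx = x - x0, fy = y - y0;           // fractional offsets
  const p00 = pixels[y0 * width + x0];
  const p10 = pixels[y0 * width + x1];
  const p01 = pixels[y1 * width + x0];
  const p11 = pixels[y1 * width + x1];
  // Interpolate horizontally on both rows, then vertically between them.
  const top = p00 + (p10 - p00) * fx;
  const bottom = p01 + (p11 - p01) * fx;
  return top + (bottom - top) * fy;
}
```

Code in this shape (flat typed arrays, monomorphic arithmetic) is exactly what JITs optimize well, which is one plausible reason the gap to the wasm version was small.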


But JavaScript the language is intrinsically linked to JavaScript the runtime. Not sure what you are going for here. I'm not at all surprised that tuned JavaScript can be as performant as pre-compiled JavaScript.

To explain further, I'm also not surprised that hand tuned assembler can get the same performance as compiled C.


> I'm not at all surprised that tuned JavaScript can be as performant as pre-compiled JavaScript.

Well you must be smarter than I am (which is not saying much).

For my grug brain, an untyped memory-managed interpreted language will be at a significant disadvantage against a typed compiled language whatever the use case. It's a testament to the Chrome dev team's hard work.

It is also possible that the WASM VM has not benefited from as much love as the JS VM.


I also have a bit of a grug [0] brain, but this old brain has become accustomed to the idea that JIT and other optimisation passes can infer so much about the running code that they will find the fastest code possible, effectively finding the hand-tuned equivalent. I first got this hunch looking at the JVM: I couldn't hand-code JVM instructions that ran faster than compiled Java code, because both the hand-coded "JVM assembly" and the compiler-generated bytecode benefited from the same JIT passes.

What remains, then, is compiling to native. This is where C shines and JavaScript doesn't. Because of its data model, JavaScript just can't take advantage of what you can do in plain C.

0: This rambling "explanation" is exhibit A


I think the parent's point is more that there is nothing making JavaScript (neither the language nor the runtime) intrinsically slow; rather, it's the DOM that creates the perception that the language/runtime is slow.

When people bitch about JS being so bloated and slow, it's usually because of things interacting with the DOM and everything related to it, or just slow network connections. But then people take out their frustrations on the language instead.


Most of my bitching that JavaScript is slow comes from Node, which is quite far removed from the DOM. Fast and slow are relative terms, of course, but JavaScript's language features necessitate that it will always be slower than most static, compiled languages, regardless of runtime. What blows me away is how unsafe the language is on top of being slow, and yet it's basically eaten the world.


You're clearly doing something wrong if you seriously think Node is slow. V8 is extremely fast.


I think this is a good point - pure JavaScript code is indeed exceptionally fast. DOM calls do have relatively high overhead, and if the change kicks off CSS recalculation or layout, then that can be even more costly. However, that's not representative of the performance of the JavaScript language itself.


> rather, it's the DOM that creates the perception that the language/runtime is slow.

Not sure what you mean by this. Every time I run something like document.getElementById(), it is very fast.


Is this supposed to be a joke? If not, that's not how you measure performance. Try doing it a million times a second, and use actual performance metrics instead of your eyes (which can't tell the difference between 1 nanosecond and 100 milliseconds, so they're completely useless for determining whether something is slow or fast). Even then, just getting the element is clearly not the DOM operation people are referring to when they say the DOM is slow.
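The "do it a million times" approach can be sketched as a tiny micro-benchmark loop. This is a generic, hypothetical harness (the function name and iteration count are assumptions), shown here timing an arbitrary callback rather than `document.getElementById`, which needs a browser:

```javascript
// Sketch of a micro-benchmark: run a function many times and report
// the average time per call, instead of eyeballing a single call.
// performance.now() is available both in browsers and in Node >= 16.
function bench(fn, iterations = 1_000_000) {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) fn();
  const totalMs = performance.now() - start;
  return totalMs / iterations; // average milliseconds per call
}
```

Caveat: real micro-benchmarks also need warm-up runs and care to keep the JIT from eliminating dead code entirely; this sketch shows only the basic shape.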


I'm not sure the DOM is actually that slow. Compared to raw code that can output at any time, manipulating the DOM only has to produce an update within the next frame for an external observer to perceive it as being as fast as possible.

Often the problem is probably that people are making too many DOM nodes (no virtualization), talking to the DOM inefficiently via a slow framework (React) and talking to the framework inefficiently (bad code).


Generally I'd say that WASM can sometimes be a lot slower than native, but if you're using WebGL to do bilinear interpolation, that work runs entirely on the GPU and is basically free, so it's an odd point of reference.

Looking at the unoptimized code itself, it's getting ~150 MFLOPS, which is close to C/C++ only in the sense that it's 10-20x slower than what a single core can do before manually doing any vectorization.


I love gamedev; it's an industry that forces you to write code that runs fast on a wide range of devices.

The craft is getting lost with Unity/Unreal and their asset stores that bastardize it.

But when you are on a mission to make a specific kind of game work, that's when you appreciate the smart people working in that industry: whatever you want on the screen, they'll make it happen, whatever it takes.


The article mentions that you can't use FPS as a yardstick for performance if you're already getting "max", but there is a way around this.

You can measure the frame start and end time to get each frame's duration, then compute 1000 divided by the average duration (in milliseconds) over 1 second. This gives you the effective render speed even if it's not the actual number of rendered frames.
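That calculation can be sketched in a couple of lines (a hypothetical helper; the name and the array-of-durations input are assumptions):

```javascript
// Sketch: given per-frame durations in ms collected over ~1 second,
// report the effective render speed as 1000 / average duration.
// This can exceed the display refresh rate, unlike a raw FPS count.
function effectiveFps(frameDurationsMs) {
  const avg =
    frameDurationsMs.reduce((a, b) => a + b, 0) / frameDurationsMs.length;
  return 1000 / avg;
}
```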


You're right; it's usually called the frame time, and some engines even measure this per frame using GPU events/markers. However, the main reason for not using FPS is that it's not a linear scale: losing 5 FPS when running at 120 FPS isn't a big deal, but losing 5 when running at 10 FPS is a disaster. Comparing frame rates is misleading; comparing frame times is the right way.
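The non-linearity is easy to see if you convert both FPS drops to frame-time deltas (illustrative numbers only):

```javascript
// "Losing 5 FPS" costs wildly different amounts of frame time
// depending on where you start, which is why FPS deltas don't compare.
const frameTimeMs = (fps) => 1000 / fps;

const costHigh = frameTimeMs(115) - frameTimeMs(120); // ~0.36 ms lost
const costLow = frameTimeMs(5) - frameTimeMs(10);     // 100 ms lost
```

Same "5 FPS", roughly a 275x difference in the actual per-frame budget lost.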


One of my greatest joys as a developer was taking a web RTS from 100% DOM to everything but the HUD being canvas. I had a crappy "workstation" with a very old, barely supported video card that even modern phones outperformed with ease thanks to better GPU acceleration. It felt almost like working on a real game, especially once I got the buildings and units going.

Went from 5-15 to 45-60 FPS during battles with hundreds of units. I even enjoyed the little fine-tuning/cheating here and there that reminded me of the real games I've been playing my entire life. I was moderately happy at that thankless, low-paying job.



