Absolutely. Asynchronous signal safety is among the murkiest waters of systems programming. It's pointless to even try to do real work in a signal handler: it's not safe to do anything more complex than setting a flag. The sanest way to handle signals seems to be signalfd. You turn off normal signal delivery and handle signals by epolling a file descriptor instead. Not portable, of course; it's a Linux-only feature.
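For anyone who hasn't used it, a minimal sketch of the signalfd + epoll pattern (Linux-only, error handling mostly omitted): block the signals you care about, then receive them as data on a file descriptor in your normal event loop.

    #include <signal.h>
    #include <stdio.h>
    #include <sys/epoll.h>
    #include <sys/signalfd.h>
    #include <unistd.h>

    int main(void) {
        sigset_t mask;
        sigemptyset(&mask);
        sigaddset(&mask, SIGINT);
        sigaddset(&mask, SIGTERM);
        sigprocmask(SIG_BLOCK, &mask, NULL);   /* turn off normal delivery */

        int sfd  = signalfd(-1, &mask, SFD_CLOEXEC);
        int epfd = epoll_create1(EPOLL_CLOEXEC);
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = sfd };
        epoll_ctl(epfd, EPOLL_CTL_ADD, sfd, &ev);

        for (;;) {
            struct epoll_event out;
            if (epoll_wait(epfd, &out, 1, -1) < 1) continue;

            /* a plain read(2) in ordinary program context, not a handler */
            struct signalfd_siginfo si;
            if (read(sfd, &si, sizeof si) != sizeof si) continue;
            printf("got signal %u\n", si.ssi_signo);
            if (si.ssi_signo == SIGTERM) break;
        }
        return 0;
    }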
If you're using green threads/fibers/coroutines, an interesting technique for making signal handling safer is to run the signal handler asynchronously on a separate fiber/green thread. That way most of the problems of dealing with signals go away, and there's basically no limit on what you can do inside the signal handler.
I've successfully used this technique in Polyphony [1], a fiber-based Ruby gem for writing concurrent programs. When a signal occurs, Polyphony creates a special-purpose fiber that runs the signal handling code. The fiber is put at the head of the run queue and is resumed once the currently running fiber yields control.
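This isn't Polyphony's actual code (that's Ruby internals); just a minimal C sketch of the general pattern using ucontext: the handler itself only records the signal, and a toy scheduler later runs the real handling logic on its own stack, where any call is allowed.

    #include <signal.h>
    #include <stdio.h>
    #include <ucontext.h>

    static volatile sig_atomic_t pending_sig = 0;
    static ucontext_t main_ctx, handler_ctx;
    static char handler_stack[64 * 1024];

    /* the real handler: async-signal-safe, just records the signal */
    static void on_signal(int sig) { pending_sig = sig; }

    /* runs later as its own "fiber", outside signal context */
    static void handler_fiber(void) {
        printf("handling signal %d on a dedicated fiber\n", (int)pending_sig);
        pending_sig = 0;
        /* returning resumes main_ctx via uc_link */
    }

    int main(void) {
        signal(SIGINT, on_signal);
        raise(SIGINT);                 /* simulate a signal arriving */

        if (pending_sig) {             /* the scheduler notices the flag... */
            getcontext(&handler_ctx);
            handler_ctx.uc_stack.ss_sp   = handler_stack;
            handler_ctx.uc_stack.ss_size = sizeof handler_stack;
            handler_ctx.uc_link          = &main_ctx;
            makecontext(&handler_ctx, handler_fiber, 0);
            swapcontext(&main_ctx, &handler_ctx);  /* ...and runs the fiber */
        }
        puts("back on the main fiber");
        return 0;
    }

A real scheduler would, as described above, enqueue the handler fiber at the head of the run queue rather than switching to it on the spot.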
I love that userfaultfd now exists as an alternative to trapping SIGSEGV. I wish it were a bit more flexible, though (I'd like to be able to associate user-defined metadata with the fault depending on the address, for example).
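For comparison, roughly what the userfaultfd flow looks like (Linux-only; may need root or vm.unprivileged_userfaultfd=1 on newer kernels; error handling omitted): a second thread reads fault events from the fd and resolves missing pages itself, no SIGSEGV handler involved.

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <linux/userfaultfd.h>
    #include <poll.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static long page_size;

    static void *fault_loop(void *arg) {
        int uffd = (int)(long)arg;
        for (;;) {
            struct pollfd pfd = { .fd = uffd, .events = POLLIN };
            poll(&pfd, 1, -1);

            struct uffd_msg msg;
            if (read(uffd, &msg, sizeof msg) != sizeof msg) continue;
            if (msg.event != UFFD_EVENT_PAGEFAULT) continue;

            /* Resolve the fault with a zero-filled page. This is where
               per-address metadata would be handy -- today you have to
               keep such a map yourself, keyed on the faulting address. */
            struct uffdio_zeropage zp = {
                .range = {
                    .start = msg.arg.pagefault.address & ~(page_size - 1),
                    .len   = page_size,
                },
            };
            ioctl(uffd, UFFDIO_ZEROPAGE, &zp);
        }
        return NULL;
    }

    int main(void) {
        page_size = sysconf(_SC_PAGESIZE);

        int uffd = syscall(SYS_userfaultfd, O_CLOEXEC | O_NONBLOCK);
        struct uffdio_api api = { .api = UFFD_API };
        ioctl(uffd, UFFDIO_API, &api);

        char *region = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        struct uffdio_register reg = {
            .range = { .start = (unsigned long)region, .len = page_size },
            .mode  = UFFDIO_REGISTER_MODE_MISSING,
        };
        ioctl(uffd, UFFDIO_REGISTER, &reg);

        pthread_t t;
        pthread_create(&t, NULL, fault_loop, (void *)(long)uffd);

        printf("first byte: %d\n", region[0]); /* faults; resolved above */
        return 0;
    }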
That sounds like the solution I was thinking of when reading this: handlers should only record the signal received, to be handled at the program's convenience, except for a kill signal, etc.
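That's the classic record-and-defer pattern; a minimal sketch of what it looks like in C:

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_signal = 0;

    static void handler(int sig) {
        got_signal = sig;   /* the only thing done in signal context */
    }

    int main(void) {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = handler;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGINT, &sa, NULL);
        sigaction(SIGTERM, &sa, NULL);

        for (;;) {
            /* stand-in for the program's real event loop; production code
               would close the check-then-wait race with sigsuspend/pselect
               or just use signalfd as mentioned upthread */
            pause();
            if (got_signal) {
                printf("deferred handling of signal %d\n", (int)got_signal);
                if (got_signal == SIGTERM) break;
                got_signal = 0;
            }
        }
        return 0;
    }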