They can come at any time but they are only delivered to your process by the kernel during a context switch. They're queued until then.
> it could take an infinite amount of time between when the user sent SIGINT to when the program stopped by the very nature of what that utility does
A well-coordinated signal-handling thread and workload thread won't manifest this issue; poorly managed threads, however, will.
>By using longjmp(), a SIGINT becomes a true interrupt, which is what users want.
Hard disagree.
What you've done is turned signals into interrupts, which is .. hokey. And not how signals are intended to be used. It's quite possible to get the behaviour you expect - fast interruption and death of work-code - but you'd have to sort out your issues with threads first.
EDIT: it's decades-old, proven technology: use a semaphore or a mutex to keep your threads in lockstep, and avoid this longjmp() malarkey... signals aren't hard, but maybe they're only a little less hard than threads ..
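Something like this, roughly - a minimal sketch of the dedicated-signal-thread pattern (all names here are illustrative, not from bc):

```c
#include <pthread.h>
#include <signal.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool stop_requested = false;

/* One thread owns signal reception via sigwait(); nothing ever runs in
 * async signal-handler context, so there are no signal-safety worries. */
static void *signal_thread(void *arg)
{
    sigset_t *set = arg;
    int sig;
    sigwait(set, &sig);                  /* sleeps until SIGINT arrives */
    atomic_store(&stop_requested, true);
    return NULL;
}

int main(void)
{
    sigset_t set;
    pthread_t tid;

    sigemptyset(&set);
    sigaddset(&set, SIGINT);
    pthread_sigmask(SIG_BLOCK, &set, NULL); /* new threads inherit this */
    pthread_create(&tid, NULL, signal_thread, &set);

    while (!atomic_load(&stop_requested)) {
        /* ... a bounded chunk of work per iteration ... */
    }
    puts("interrupted; cleaning up with consistent state");
    pthread_join(tid, NULL);
    return 0;
}
```

Build with -pthread; the only cost in the worker's hot path is the atomic load, and no code ever runs in handler context.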
> What you've done is turned signals into interrupts, which is .. hokey
No, that is precisely what they are - userspace interrupts. That's how they work. That's why one has to worry about async-signal safety and signal-safe calls.
I have no idea where you get off on this “allowing a context switch” nonsense. In most of the systems being discussed, if a signal is delivered and the thread is runnable, it will be delivered immediately and asynchronously - there is no queuing going on in that case. If the thread is not runnable/scheduled, that's another story, but it does not square with what you're saying, because it sounds very much like you're saying that signals are delivered synchronously (with context switches), and they are most definitely not, generally.
Also longjmp is an entirely user space concept - the kernel on the most common systems being discussed has no idea of its operation.
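For reference, the pattern being argued over is more or less this (a minimal sketch, not the actual bc code): the kernel delivers SIGINT, and everything after that - the non-local jump included - is plain userspace:

```c
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>

static sigjmp_buf interrupt_buf;

/* The handler never returns normally; it unwinds straight back to the
 * sigsetjmp() point. The kernel only delivered the signal - the jump
 * itself is pure userspace stack manipulation. */
static void on_sigint(int sig)
{
    (void)sig;
    siglongjmp(interrupt_buf, 1);
}

int main(void)
{
    struct sigaction sa;

    if (sigsetjmp(interrupt_buf, 1)) {   /* 1: restore the signal mask */
        puts("interrupted");             /* the "interrupt" path */
        return 0;
    }

    /* install the handler only once the jump target exists */
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigint;
    sigaction(SIGINT, &sa, NULL);

    for (;;) {
        /* ... long-running work; whatever state it holds must be safe
         * to abandon at an arbitrary point ... */
    }
}
```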
> They can come at any time but they are only delivered to your process by the kernel during a context switch.
This is backwards and extremely misleading; a signal being queued can trigger an immediate context switch (effected via an inter-processor interrupt). See `kick_process()` in the Linux kernel [1].
> What you've done is turned signals into interrupts, which is .. hokey. And not how signals are intended to be used.
Signals are userspace interrupts. That's exactly what they are, and they're no more hokey than hardware interrupts (so, pretty hokey).
I used the system you are advocating in my utility at first.
The problem is that you need to constantly check the flag. That is expensive in tight loops. Yes, I checked on every loop iteration unless I knew only a small, constant amount of work had to be done.
That's the thing: if you have a tight loop, do you know how long it's going to take to run through everything? In addition, when a SIGINT comes, can you be sure that your state is correct?
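For context, the abandoned approach looked roughly like this (a simplified sketch, not my actual bc code) - the handler only sets a flag, and the hot loop pays for a check on every iteration:

```c
#include <signal.h>
#include <stdio.h>
#include <string.h>

static volatile sig_atomic_t interrupted = 0;

static void on_sigint(int sig)
{
    (void)sig;
    interrupted = 1;   /* async-signal-safe: just set the flag */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigint;
    sigaction(SIGINT, &sa, NULL);

    while (!interrupted) {   /* one extra branch per iteration */
        /* ... tight numeric work ... */
    }
    puts("state is known-consistent here; clean shutdown");
    return 0;
}
```

The upside is that you only ever stop at a point where your state is consistent; the downside is the per-iteration check, which is exactly the cost I'm objecting to.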
Here's a test: download my utility and run these commands:
In the 'real world', compare-and-swap operations (such as one finds in the atomic types used for communication between worker and handler) are single-cycle operations, if not a hard CPU flag...
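Concretely, the worker/handler handshake is something like this with C11 atomics (an illustrative sketch; the real cost of the check is debated downthread):

```c
#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool should_stop = false;

/* handler/controller side: request a stop */
void request_stop(void)
{
    atomic_store(&should_stop, true);
}

/* worker side: a CAS that both checks and acknowledges the request;
 * a plain atomic load would suffice (and is cheaper) if no
 * acknowledgement is needed */
bool check_and_ack(void)
{
    bool expected = true;
    return atomic_compare_exchange_strong(&should_stop, &expected, false);
}
```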
>I understand why you think the way you do; I did too. But the real world is more complicated.
Please consider the complications of the high-end audio world, where such techniques are well established. Not only must bat-out-of-hell threads have all the gumption they can muster, but they must also be controllable - in as close to realtime as possible - by outside handler threads.
I think boffinaudio is trying to help you improve your code quality. It's not bad advice to rethink this.
If I was in that context, yes, I would use outside handler threads, but that's because I could probably put the compare-and-swap in one place (or very few).
As of 2.7.0, my bc had 77 places where signals were checked, and I was probably missing a few.
Real-time is different from what I was doing, so it required different techniques.
There is a case for signals in strictly real-time code, strictly high-performance code, and strictly real-time+high-performance code.
You have decided to go strictly for high performance, for your well-argued reasons, and you've abandoned a standard practice for your stated claims. But this isn't just about your code - it's about how people can misuse signals, and in your case you're misusing signals by not using them.
It is the advice:
>"Yes be very afraid of signals."
.. which feels not entirely appropriate.
So I took a look at the bc code, and I too am terrified of your use of longjmp.
It appears to me you've ended up with somewhat smelly code because you didn't find the appropriate datatype for your case and decided to roll your own scheduling instead. Ouch.
>As of 2.7.0, my bc had 77 places where signals were checked, and I was probably missing a few.
Refactoring this to a simple CAS operation to see if the thread should terminate doesn't seem too unrealistic to me. Only 77 places to drop a macro that does the op - checking only whether the signal handler has told your high-performance thread to stop, hup, die, etc.
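Something like this is all I mean (names hypothetical - this isn't bc's actual code). A relaxed atomic load is about the cheapest check there is; you don't even need a full CAS just to ask "should I stop?":

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Set by the signal handler (or a dedicated signal thread). */
extern atomic_bool stop_requested;

/* Dropped at each of the ~77 check sites; bails out of the current
 * function with a status the caller propagates upward. */
#define CHECK_STOP(status)                                          \
    do {                                                            \
        if (atomic_load_explicit(&stop_requested,                   \
                                 memory_order_relaxed))             \
            return (status);                                        \
    } while (0)
```

Each call site stays one line, and the hot path pays only an untaken branch in the common case.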
Signals are awesome, and work great - obviously - in many, many high-performance applications. In your high-performance, CPU-bound application this might feel like the only way you could do it - but you can certainly attain the same performance and still handle signals like a well-behaved application that doesn't have to take big leaps just to stay ahead of the scheduler ..
>Also, I didn't have a scheduler. The point of interrupts is that you don't need a scheduler.
I think where we diverge is that I do not think you have a good justification for the statement "be scared of signals" - after all, you are clearly not scared of them, and have decided to bend them to your own ideas of how best to optimize your application. So it's somewhat disingenuous to hold that position having completely wiped "the standard way to do high-performance signal handling" from your slate, putting your own special case forward by example.
I'm calling you out on it because signals are absolutely not scary, but maybe talking about them with other experts can be.
Your case is more an example of how unscary signals are - but you've opted for longjmp()s (which are, imho as a systems programmer, a far more cromulent fear) in your code as a solution to a problem which I don't think is really typical.
Thus, not really scary at all.
Well, it was a fun read of some code, and thanks for bc anyway.
Bus activity makes CAS and any atomic operation far more costly than a single cycle. If they were really that cheap then every operation would just be atomic.
In general you must trade off bandwidth for improved latency. Audio work by its nature can and must do this. It is not the appropriate trade-off in, frankly, most cases of computing (even if it is arguably more interesting).