Suppose you run a real-time critical application that receives formatted messages, which are decoded by some library. Incoming messages can be wrongly formatted, which can throw the decoding library into an infinite loop or make it allocate far too much memory. In short, the library in question is badly written and thus very unsafe. Curing that at the core is out of the question, as the library is too complex to be maintained with the limited manpower available. Yes, it seems odd, but that's how reality is sometimes...

To keep the application from going berserk and ultimately crashing on bad input, I wanted to run the message decoding step as a child process, but with limited CPU and memory resources. If these resources are overshot, the parent process is signaled and the decoded message is dropped. You would believe that Unix-like systems have nice facilities to do that out of the box? The answer is not a resounding "yes". The case of memory limits on Linux is especially vexing, as Linux is happy by default to let every process allocate as much as it wants.

Below is a little program I devised to test limits on:

1. the "brk" memory (-b argument)
1. the malloc'ed memory (-m argument)
1. the CPU time (-c argument)

Notice how each case needs its own strategy to set and enforce the limit; a sketch of the basic pattern follows the link below.

Link to the program: [[limit.tgz]]
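To make the idea concrete, here is a minimal sketch of the fork()/setrlimit() pattern. The two-second CPU cap, the 32 MiB address-space cap, and the decode_message() stub are invented for illustration and are not taken from limit.tgz, which handles each limit with its own dedicated strategy.

```c
#include <stdio.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Hypothetical stand-in for the unsafe decoding library. */
static int decode_message(const char *msg)
{
    (void)msg;          /* the real, untrusted work would happen here */
    return 0;
}

int main(void)
{
    const char *msg = "some formatted message";
    pid_t pid = fork();

    if (pid < 0) {
        perror("fork");
        return 1;
    }

    if (pid == 0) {
        /* Child: cap resources before touching the untrusted library. */
        struct rlimit cpu = { .rlim_cur = 2, .rlim_max = 2 };   /* 2 s of CPU time */
        struct rlimit mem = { .rlim_cur = 32UL << 20,           /* 32 MiB of */
                              .rlim_max = 32UL << 20 };         /* address space */

        if (setrlimit(RLIMIT_CPU, &cpu) != 0 ||
            setrlimit(RLIMIT_AS, &mem) != 0) {
            perror("setrlimit");
            _exit(2);
        }
        _exit(decode_message(msg) == 0 ? 0 : 1);
    }

    /* Parent: wait and inspect how the child ended. */
    int status;
    if (waitpid(pid, &status, 0) < 0) {
        perror("waitpid");
        return 1;
    }
    if (WIFSIGNALED(status)) {
        /* A fatal signal (e.g. SIGXCPU on CPU overrun) means the
         * decoder ran away; drop the message and carry on. */
        fprintf(stderr, "decoder killed by signal %d, message dropped\n",
                WTERMSIG(status));
    } else if (WIFEXITED(status) && WEXITSTATUS(status) != 0) {
        fprintf(stderr, "decoder failed (exit %d), message dropped\n",
                WEXITSTATUS(status));
    }
    return 0;
}
```

Note that the three limits misbehave in different ways, which is why each needs its own strategy: exceeding the soft RLIMIT_CPU delivers SIGXCPU (fatal by default), RLIMIT_DATA caps brk-based growth, and a blown RLIMIT_AS merely makes further allocations fail with ENOMEM, so malloc() returns NULL; if the unsafe library neglects to check for that, it typically crashes with SIGSEGV, which the parent detects just the same. In a real design the child would also hand the successfully decoded message back over a pipe or shared memory rather than just exiting.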