From 197fa384b783bbe2e0571265229cbdf73a2fcad1 Mon Sep 17 00:00:00 2001
From: admin
Date: Wed, 16 Feb 2011 23:09:01 +0000
Subject: [PATCH]

---
 ...ing_a_process_with_limited_ressources.mdwn | 31 +++++++++++++++++++
 1 file changed, 31 insertions(+)
 create mode 100644 running_a_process_with_limited_ressources.mdwn

diff --git a/running_a_process_with_limited_ressources.mdwn b/running_a_process_with_limited_ressources.mdwn
new file mode 100644
index 0000000..489340d
--- /dev/null
+++ b/running_a_process_with_limited_ressources.mdwn
@@ -0,0 +1,31 @@
+Suppose you run a real-time critical application receiving formatted
+messages, which are decoded by some library.
+
+On top of that, the incoming messages can be malformed, which can
+throw the decoding library into an infinite loop or make it allocate
+far too much memory. In short, the library in question is badly
+written and therefore very unsafe.
+
+Curing that at the core is out of the question, as the library is too
+complex to be maintained with the limited manpower available. Yes,
+it seems odd, but that's how reality is sometimes...
+
+To keep the application from going berserk and ultimately crashing
+on bad input, I wanted to run the message decoding step as a child
+process, but with limited CPU and memory resources. If these
+resources are exceeded, the parent process is notified and the
+offending message is dropped.
+
+You would think that Unix-like systems have nice facilities to do
+that out of the box? The answer is not a resounding "yes". The case
+of memory management on Linux is especially vexing, as by default
+Linux happily lets every process allocate as much as it wants.
+
+Below is a little program I have devised to test limits on:
+
+1. the "brk" memory (-b argument)
+1. the malloc'ed memory (-m argument)
+1. the CPU time (-c argument)
+
+Notice how every case has its own strategy to set and enforce the
+limit.
-- 
2.39.2
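
The patch above only adds the prose; the program it mentions is not part of this commit. As a rough illustration of the parent/child arrangement the article describes, here is a minimal, hypothetical C sketch. The `decode_message()` function is a made-up stand-in for the unsafe library call, and the limit values are arbitrary; a real implementation would also ship the decoded result back to the parent (for instance over a pipe), which this sketch omits.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Made-up stand-in for the unsafe decoding library: imagine this one
 * can loop forever or allocate wildly on malformed input. */
static int decode_message(const char *raw)
{
    return (raw && raw[0] != '\0') ? 0 : -1;
}

/* Decode one message in a resource-limited child; return 0 on success,
 * -1 if the message should be dropped. */
static int decode_in_child(const char *raw)
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;

    if (pid == 0) {
        /* Child: cap CPU time and address space *before* calling into
         * the untrusted decoder. */
        struct rlimit cpu = { 2, 2 };               /* seconds */
        struct rlimit mem = { 64 << 20, 64 << 20 }; /* bytes   */
        setrlimit(RLIMIT_CPU, &cpu);
        setrlimit(RLIMIT_AS, &mem);
        _exit(decode_message(raw) == 0 ? 0 : 1);
    }

    /* Parent: an abnormal end (SIGXCPU, SIGKILL, SIGSEGV) or a nonzero
     * exit means a limit was hit or decoding failed: drop the message. */
    int status;
    if (waitpid(pid, &status, 0) < 0)
        return -1;
    return (WIFEXITED(status) && WEXITSTATUS(status) == 0) ? 0 : -1;
}

int main(void)
{
    printf("good message: %s\n", decode_in_child("hello") == 0 ? "kept" : "dropped");
    printf("bad message:  %s\n", decode_in_child("") == 0 ? "kept" : "dropped");
    return 0;
}
```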
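As for the three limits the article lists, each one really does need its own strategy on Linux: the "brk" data segment can be capped with RLIMIT_DATA, malloc'ed memory is largely served by mmap() (which RLIMIT_DATA did not cover on older kernels, before Linux 4.7) and so needs RLIMIT_AS, and CPU time is enforced by the kernel itself, which sends SIGXCPU once the RLIMIT_CPU soft limit is crossed. The sketch below is again hypothetical, not the program the article promises; its -b/-m/-c flags merely mirror that description, exercising one limit per run.

```c
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/resource.h>
#include <unistd.h>

static void on_sigxcpu(int sig)
{
    (void)sig;
    /* write() is async-signal-safe, printf() is not. */
    write(STDOUT_FILENO, "SIGXCPU: CPU limit hit\n", 23);
    _exit(1);
}

static void set_limit(int resource, rlim_t value)
{
    struct rlimit rl = { value, value };
    if (setrlimit(resource, &rl) != 0) {
        perror("setrlimit");
        exit(1);
    }
}

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s -b|-m|-c\n", argv[0]);
        return 1;
    }

    if (strcmp(argv[1], "-b") == 0) {
        /* "brk" memory: cap the data segment, then grow it with sbrk()
         * until the kernel refuses. */
        set_limit(RLIMIT_DATA, 16 * 1024 * 1024);
        while (sbrk(1024 * 1024) != (void *)-1)
            ;
        puts("sbrk refused: data segment limit hit");
    } else if (strcmp(argv[1], "-m") == 0) {
        /* malloc'ed memory: large allocations are usually mmap()ed, so
         * cap the whole address space and watch malloc() return NULL. */
        set_limit(RLIMIT_AS, 64 * 1024 * 1024);
        while (malloc(1024 * 1024) != NULL)
            ;
        puts("malloc returned NULL: address space limit hit");
    } else if (strcmp(argv[1], "-c") == 0) {
        /* CPU time: the kernel enforces this one itself by sending
         * SIGXCPU when the soft limit is crossed. */
        signal(SIGXCPU, on_sigxcpu);
        set_limit(RLIMIT_CPU, 1);
        for (;;)  /* burn CPU until SIGXCPU arrives */
            ;
    }
    return 0;
}
```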