Limit memory usage for a single Linux process

I'm running pdftoppm to convert a user-provided PDF into a 300 DPI image. This works great, except when the user provides a PDF with a very large page size. pdftoppm will allocate enough memory to hold a 300 DPI image of that size, which for a 100-inch-square page is 100 * 300 * 100 * 300 * 4 bytes per pixel = 3.5GB. A malicious user could just give me a silly-huge PDF and cause all kinds of problems.
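As a quick sanity check of that arithmetic (100 inches times 300 DPI per side, 4 bytes per pixel):

```shell
# one side is 100 in * 300 DPI = 30000 pixels; 4 bytes per pixel
bytes=$(( 100 * 300 * 100 * 300 * 4 ))
echo "$bytes bytes"                            # 3600000000 bytes
echo "$(( bytes / 1024 / 1024 / 1024 )) GiB"   # 3 GiB (3.35 GiB before integer truncation)
```

So the ~3.5 GB figure in the question checks out.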

So what I'd like to do is put some kind of hard limit on memory usage for a child process I'm about to run: just have the process die if it tries to allocate more than, say, 500MB of memory. Is that possible?

I don't think ulimit can be used for this, but is there a single-process equivalent?

222
2022-06-08 05:31:11
Answers: 2

There are some problems with ulimit. A useful read on the subject is Limiting time and memory consumption of a program in Linux, which leads to the timeout tool, which lets you cage a process (and its forks) by time or memory consumption.

The timeout tool requires Perl 5+ and the /proc filesystem mounted. After that, you copy the tool to e.g. /usr/local/bin like so:

curl https://raw.githubusercontent.com/pshved/timeout/master/timeout | \
  sudo tee /usr/local/bin/timeout && sudo chmod 755 /usr/local/bin/timeout

Then you can 'cage' your process by memory consumption, as in your question, like so:

timeout -m 500 pdftoppm Sample.pdf

Alternatively, you can use -t <seconds> and -x <hertz> to limit the process by time or CPU constraints, respectively.

The way this tool works is by checking several times per second whether the spawned process has oversubscribed its configured bounds. This means there is a small window in which a process can oversubscribe before timeout notices and kills it.
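That polling approach can be sketched in a few lines of shell. This is a simplified illustration of the idea, not the actual timeout script: it watches a single process's VmSize in /proc and ignores forks, and the function name and the 0.1 s poll interval are my own choices.

```shell
# limit_vm_kb LIMIT_KB CMD...: run CMD in the background and kill it
# if its virtual memory (VmSize in /proc/PID/status) exceeds LIMIT_KB.
limit_vm_kb() {
  limit=$1; shift
  "$@" & pid=$!
  while kill -0 "$pid" 2>/dev/null; do
    vm=$(awk '/^VmSize:/ {print $2}' "/proc/$pid/status" 2>/dev/null)
    if [ -n "$vm" ] && [ "$vm" -gt "$limit" ]; then
      kill -9 "$pid"          # over the cap: kill it, as timeout -m would
    fi
    sleep 0.1                 # poll several times per second
  done
  wait "$pid" 2>/dev/null     # propagate the child's exit status
}
```

For example, `limit_vm_kb 512000 pdftoppm Sample.pdf` roughly mimics `timeout -m`. The gap between polls is exactly the race window described above.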

A more correct approach would therefore likely involve cgroups, but those are considerably more involved to set up, even if you use Docker or runC, which, among other things, offer a more user-friendly abstraction around cgroups.
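On a systemd-based system there is a middle ground that avoids hand-rolling cgroups: systemd-run can place a single command in a transient scope with a kernel-enforced memory cap. A sketch, with illustrative limits (the filename and resolution are just examples):

```shell
# Run pdftoppm inside a transient cgroup with a 500 MB hard memory cap;
# the kernel OOM-kills the process if it exceeds the limit.
# MemorySwapMax=0 stops it from spilling into swap instead of dying.
systemd-run --user --scope -p MemoryMax=500M -p MemorySwapMax=0 \
  pdftoppm -r 300 Sample.pdf out
```

Unlike the polling approach, this is enforced by the kernel with no race window, and the limit covers the whole process tree in the scope.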

79
2022-06-08 06:05:10

If your process does not spawn children that themselves consume the most memory, you can use the setrlimit function. The more common user interface for that is the shell's ulimit command:

$ ulimit -Sv 500000     # set a ~500 MB virtual memory limit
$ pdftoppm ...

This will limit only the "virtual" memory of your process, taking into account, and limiting, the memory the invoked process shares with other processes, and the memory mapped but not reserved (for instance, Java's large heap). Still, virtual memory is the closest approximation for processes that grow really big, making those errors insignificant.
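To see the limit in action, you can run a deliberately memory-hungry command inside a subshell, so the ulimit does not stick to your interactive shell. The sizes here are illustrative, and python3 merely serves as a convenient allocator:

```shell
# Cap virtual memory at ~50 MB in a subshell, then try to allocate 200 MB.
result=$( (
  ulimit -Sv 50000
  python3 -c 'x = bytearray(200 * 1024 * 1024)' 2>/dev/null \
    && echo "allocation succeeded" \
    || echo "allocation failed"
) )
echo "$result"    # allocation failed
```

The allocation fails with a MemoryError because the mapping would push the process past the 50 MB virtual memory cap.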

If your program spawns children, and it is they that allocate the memory, it becomes more complex, and you have to write supporting scripts to run the processes under your control. I wrote in my blog why and how.

96
2022-06-08 05:41:12