Subject: Re: userid partitioned swap spaces.
To: Jukka Marin <jmarin@pyy.jmp.fi>
From: Brian C. Grayson <bgrayson@marvin.ece.utexas.edu>
List: tech-kern
Date: 12/15/1998 19:05:02
On Tue, Dec 15, 1998 at 01:33:42PM +0200, Jukka Marin wrote:
>
> Better ideas? Killing just "the most recently created process" or "the
> process using most memory" wouldn't be that good, IMHO.
One metric (which I doubt the kernel keeps) is
whichever process has recently allocated a fair amount of
memory and is also big. In my case, the problem is usually a
single process (netscape, one of my simulations, etc.) that has
just grown too big. If one looked at recent allocations and saw
that netscape had recently allocated 100KB, it's fairly likely
to allocate more soon, and thus might as well be killed
now. IMO, this is better than "last allocator" or "largest
process", and might be maintainable (a decaying average of pages
allocated per second?).
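A rough sketch of that decaying-average idea, in userland C. Everything here (struct names, the decay constant, the size-weighted score) is made up for illustration; nothing like this exists in the kernel:

```c
#include <stddef.h>

/* Hypothetical per-process stats for the "decaying average of
 * pages allocated per second" metric.  Illustrative only. */
struct proc_stats {
	size_t resident_pages;	/* current process size, in pages */
	double alloc_rate;	/* decayed pages-per-second average */
};

/* Fold the pages allocated during the last interval into the
 * decayed average:  new = old * decay + sample * (1 - decay). */
static void
alloc_rate_update(struct proc_stats *p, size_t pages_this_sec,
    double decay)
{
	p->alloc_rate = p->alloc_rate * decay +
	    (double)pages_this_sec * (1.0 - decay);
}

/* Score each process by recent growth weighted by current size,
 * so a big *and* still-growing process (netscape...) wins over
 * a big-but-stable one.  Returns the index of the likely
 * culprit, or -1 if nobody has allocated recently. */
static int
pick_victim(const struct proc_stats *procs, int n)
{
	int victim = -1;
	double best = 0.0;
	int i;

	for (i = 0; i < n; i++) {
		double score = procs[i].alloc_rate *
		    (double)procs[i].resident_pages;
		if (score > best) {
			best = score;
			victim = i;
		}
	}
	return victim;
}
```

Note that a large-but-idle process scores zero here, which is the point of preferring this over plain "largest process".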
If that stat can't be maintained all the time, maybe once 90%
of VM is used, the VM system could start counting pages allocated
per process, and then use that simpler metric to determine the
most likely culprit. Various daemons would most likely
never be the highest allocator for the last 10% of VM. And that
way, a large-but-stable-sized process wouldn't be killed if it had
stopped allocating.
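That fallback scheme could look something like this sketch: only charge allocations to a process while the system is above a high-water mark, then blame the top allocator. Again, every name and the 90% threshold are illustrative assumptions, not kernel API:

```c
#include <stddef.h>

#define VM_HIGHWATER_PCT 90	/* assumed high-water mark */

/* Per-process count of pages allocated since the system crossed
 * the high-water mark.  Illustrative only. */
struct proc_alloc {
	size_t pages_since_highwater;
};

/* Called on each allocation.  Below the high-water mark this is
 * free; above it, the allocation is charged to the process. */
static void
charge_alloc(struct proc_alloc *p, size_t pages, int vm_used_pct)
{
	if (vm_used_pct >= VM_HIGHWATER_PCT)
		p->pages_since_highwater += pages;
}

/* On VM exhaustion, whoever allocated the most of the last 10%
 * of VM is the likely culprit; a daemon or a big-but-stable
 * process that stopped growing scores zero.  Returns -1 if
 * nobody allocated above the mark. */
static int
top_allocator(const struct proc_alloc *procs, int n)
{
	int worst = -1;
	size_t most = 0;
	int i;

	for (i = 0; i < n; i++) {
		if (procs[i].pages_since_highwater > most) {
			most = procs[i].pages_since_highwater;
			worst = i;
		}
	}
	return worst;
}
```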
Maybe that's more complication than it's worth. I like the signal
idea -- I could rig my simulator to basically save its state
and then kill itself, so that things could be restarted later
when more VM was available.
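For the signal idea, a simulator could be rigged along these lines. No such "low VM" signal exists in NetBSD (AIX's SIGDANGER is the closest precedent I know of), so SIGUSR1 stands in for it here, and the checkpoint is a stub:

```c
#include <signal.h>
#include <stdio.h>

/* The handler only sets a flag; the real work (checkpointing)
 * happens in the main loop, outside async-signal context. */
static volatile sig_atomic_t vm_low;

static void
on_vm_low(int sig)
{
	(void)sig;
	vm_low = 1;
}

/* Stub checkpoint: a real simulator would write out its full
 * state here so a later run could resume from it. */
static void
save_state(const char *path)
{
	FILE *f = fopen(path, "w");

	if (f != NULL) {
		fprintf(f, "checkpoint\n");
		fclose(f);
	}
}

/* One step of the simulation loop.  Returns 1 when the process
 * has checkpointed and should exit(0) to free its VM. */
static int
sim_step(void)
{
	if (vm_low) {
		save_state("sim.ckpt");
		return 1;
	}
	/* ... advance the simulation ... */
	return 0;
}
```

Usage would be `signal(SIGUSR1, on_vm_low)` at startup, then call `sim_step()` each iteration and `exit(0)` once it returns 1, restarting from `sim.ckpt` when more VM is available.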
Brian
--
"What happens to things blowing in the breeze on a dry, winter's day? Static!"
Dr. Bennett, ELEC 426