On 2022-06-15 16:01, Michael van Elst wrote:
> bqt%softjar.se@localhost (Johnny Billquist) writes:
>>> They might be the reason for the memory shortage. You can prefer large
>>> processes as victims or protect system services to keep the system
>>> manageable.
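[As a concrete illustration of the "prefer large victims, protect system services" policy being discussed: a minimal sketch in C. The struct and field names are invented for the example, not any NetBSD kernel interface.]

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical per-process summary for victim selection. */
struct proc_info {
    const char *name;
    size_t      rss_pages;    /* resident set size, in pages */
    bool        is_protected; /* hinted as a system service */
};

/* Pick the unprotected process with the largest resident set.
 * Returns NULL if every candidate is protected. */
static const struct proc_info *
pick_victim(const struct proc_info *procs, size_t n)
{
    const struct proc_info *victim = NULL;
    for (size_t i = 0; i < n; i++) {
        if (procs[i].is_protected)
            continue;
        if (victim == NULL || procs[i].rss_pages > victim->rss_pages)
            victim = &procs[i];
    }
    return victim;
}
```

Note that this only encodes the policy; it says nothing about whether the chosen victim's memory actually relieves the shortage, which is the point contested below.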
>> So when one process tries to grow, you'd kill a process that currently
>> has no issues running?
> All processes have issues on that system, and the goal is to keep things
> alive so that you can recover; a system hang, crash, or reboot is the
> worst outcome.
Maybe, but not definitely.
And the outcome in general is processes being killed, which should
basically never result in an outright crash or reboot. Not even a hang,
although if the wrong process is killed, you might end up unable to
access the system, so it's a bit of a grey area.
> Obviously there is no heuristic that can predict what action will have
> the best outcome and which causes the least damage. Guessing on the
> cost of various kinds of damage is an impossible task by itself as
> that is fairly subjective.
Agreed. But the one thing that is known at a specific point in time is
that there is one process that needed one more page, which could not be
satisfied. All the other processes at that moment in time are not in
trouble. Which also means we do not know if killing another process is
enough to keep this process going, and we do not know if that other
process would ever get into trouble at all. So we are faced with the
choice of killing the one process we know is in trouble, or
speculatively killing something else and hoping that helps.
The suggestion that we'd add some kind of hinting could at least help
some, but it is rather imperfect. And if we don't have any hints, we're
back in the same place again.
> But there can be a heuristic that helps in many cases, and for the rest
> you can hint the system.
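[A score-plus-hint split along those lines, loosely modelled on the way Linux combines a memory-based oom_score with a per-process oom_score_adj, might look like the sketch below. The names and the simple additive formula are made up for illustration, not a real kernel interface.]

```c
#include <stddef.h>

/* Hypothetical OOM candidate: base score driven by memory use,
 * adjusted by an administrator-supplied hint. */
struct candidate {
    size_t rss_pages; /* memory footprint drives the base score */
    long   hint;      /* negative protects, positive exposes */
};

/* Badness = footprint + hint, clamped at zero.
 * A sufficiently negative hint takes the process out of the
 * running entirely (score 0 = never chosen). */
static long
oom_badness(const struct candidate *c)
{
    long score = (long)c->rss_pages + c->hint;
    return score > 0 ? score : 0;
}
```

The hint only biases the ordering; it does not answer the underlying objection that the victim's memory may not help the process that actually faulted.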
If you can come up with some heuristics, it would be interesting to see
them. I don't see any easy ones.
Johnny