Current-Users archive
Re: X server being killed a lot
In article <pr6tbj$a3n$1%serpens.de@localhost>,
Michael van Elst <mlelstv%serpens.de@localhost> wrote:
>mlelstv%serpens.de@localhost (Michael van Elst) writes:
>
>>filemax is not a hard limit on the cache but the level the system
>>tries to keep it at when pressed for memory.
>
>None of these settings is directly responsible for killing a process;
>they just help keep the system from running into the wall.
>
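Both knobs are visible from userland as plain sysctls. A minimal sketch
(assuming a NetBSD userland; the values are percentages of managed
memory) that reads them via sysctlbyname(3):

/* Read vm.filemin and vm.filemax; both are integer percentages. */
#include <sys/param.h>
#include <sys/sysctl.h>
#include <err.h>
#include <stdio.h>

int
main(void)
{
	int filemin, filemax;
	size_t len;

	len = sizeof(filemin);
	if (sysctlbyname("vm.filemin", &filemin, &len, NULL, 0) == -1)
		err(1, "vm.filemin");
	len = sizeof(filemax);
	if (sysctlbyname("vm.filemax", &filemax, &len, NULL, 0) == -1)
		err(1, "vm.filemax");

	printf("vm.filemin=%d%% vm.filemax=%d%%\n", filemin, filemax);
	return 0;
}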
>A process is killed by UVM when it needs to fault in a page but there
>is no free page and UVM thinks none can be reclaimed. As long as there
>is swap, the assumption is that at least one anon page can be reclaimed
>at some point, and nothing is killed. Likewise, as long as the file
>cache exceeds 1/16 of managed memory or 5 MByte, the assumption is that
>at least one file page can be reclaimed, and nothing is killed.
>
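As an illustration of those rules (this is a sketch, not the actual uvm
code, and every name in it is made up), the kill decision boils down to
something like:

/*
 * Illustrative sketch only: the fault path gives up and kills the
 * faulting process only when no page is free and neither of the two
 * "something can be reclaimed eventually" assumptions above holds.
 */
#include <stdbool.h>
#include <stdint.h>

static bool
can_reclaim_something(uint64_t swap_free_pages,
    uint64_t filecache_bytes, uint64_t managed_bytes)
{
	/* Swap left: assume at least one anon page is reclaimable. */
	if (swap_free_pages > 0)
		return true;

	/* File cache above 1/16 of managed memory or 5 MByte. */
	if (filecache_bytes > managed_bytes / 16 ||
	    filecache_bytes > 5UL * 1024 * 1024)
		return true;

	return false;
}

/* On a page-allocation failure during a fault, kill only if nothing
 * is deemed reclaimable. */
static bool
should_kill_faulting_process(bool page_alloc_failed,
    uint64_t swap_free_pages, uint64_t filecache_bytes,
    uint64_t managed_bytes)
{
	return page_alloc_failed &&
	    !can_reclaim_something(swap_free_pages, filecache_bytes,
	        managed_bytes);
}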
>There is one more possibility: even when there is swap and pages
>could be reclaimed, the pager itself can run out of (kernel) memory,
>and that error can kill the process. This also covers a failure to
>allocate kernel address space.
>
>The UVM history should give you the exact reason why the fault
>couldn't be handled.
But in this case we kill the process that faulted, not the process that
likely caused the shortage. We should be keeping stats so that we can
select a better victim, kill that instead, and retry the fault. But this
is easier said than done :-)
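
A rough sketch of the kind of victim selection meant here; the stats
structure, the weights, and all names below are hypothetical, nothing
like this exists in the kernel as written:

#include <stddef.h>
#include <stdint.h>

struct proc_mem_stats {
	int		pid;
	uint64_t	anon_pages;	/* swap-backed pages held */
	uint64_t	file_pages;	/* file cache pages kept hot */
	uint64_t	recent_growth;	/* pages allocated recently */
};

/* Bigger score == better victim; weight recent growth most heavily. */
static uint64_t
oom_score(const struct proc_mem_stats *st)
{
	return st->anon_pages + st->file_pages / 4 + 8 * st->recent_growth;
}

/* Return the pid of the best victim, or -1 if the table is empty. */
static int
select_oom_victim(const struct proc_mem_stats *table, size_t n)
{
	uint64_t best = 0;
	int victim = -1;

	for (size_t i = 0; i < n; i++) {
		uint64_t s = oom_score(&table[i]);
		if (s > best) {
			best = s;
			victim = table[i].pid;
		}
	}
	return victim;
}

The faulting process would then sleep while the victim is killed and its
pages are reclaimed, and the fault would be retried afterwards.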
christos