Subject: Re: Bad response...
To: Jochen Kunz <jkunz@unixag-kl.fh-kl.de>
From: Steven M. Bellovin <smb@research.att.com>
List: current-users
Date: 08/30/2004 09:43:32
In message <20040827134506.GA13999@hoss.unixag-kl.fh-kl.de>, Jochen Kunz writes:
>On Fri, Aug 27, 2004 at 11:20:14AM +0200, Johnny Billquist wrote:
>
>> I'm getting tired of the fact that my system's responsiveness has gone down
>> dramatically over the last year or so.
>> I believe the problem is in the unified memory buffer handling. The disk
>> cache is way too aggressive in grabbing memory.
>Seconded. I have seen the same on other OSes (e.g. Tru64) too. The
>filesystem buffer cache eats up all memory; executable pages are reused,
>resulting in heavy paging when an application process is touched again.
>This is really annoying.
Agreed -- I haven't been really satisfied with interactive performance
since UBC came in. Now that my major machines have 512M apiece, I can
live with it, but it's still not what I'd like.
I think there are at least two problems. First, using arbitrary
amounts of memory for the file cache leaves lots of dirty pages; these need
to be cleaned -- written back to disk -- before the memory can be reused.
By contrast, pages used for process text are clean and can be reused
immediately.
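The clean-versus-dirty distinction can be sketched as a toy model (this is illustrative pseudocode, not NetBSD's actual page daemon; all names are made up for the example):

```python
# Toy model: reclaiming a clean page is free, while a dirty page
# must first be written back to disk before its frame can be reused.

def reclaim(pages, need):
    """Reclaim `need` page frames from `pages` (list of dirty flags).

    Returns (frames_reclaimed, writebacks_required)."""
    reclaimed = 0
    writebacks = 0
    for dirty in pages:
        if reclaimed == need:
            break
        if dirty:
            writebacks += 1  # must flush to disk before reuse
        reclaimed += 1
    return reclaimed, writebacks

# Process-text pages are clean copies of the executable: no write-back.
text_pages = [False] * 8
# File-cache pages from a recent write burst are mostly dirty.
cache_pages = [True] * 6 + [False] * 2

print(reclaim(text_pages, 4))   # -> (4, 0): immediate reuse
print(reclaim(cache_pages, 4))  # -> (4, 4): four disk writes first
```

The point of the toy model is the asymmetry: satisfying the same memory demand out of the file cache costs synchronous disk I/O that text pages never incur.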
But I suspect that a bigger problem is that once there's a demand for
memory to satisfy an interactive process, whether a new command or some
process that has just awakened, it looks like the new process has to
fight with the file-writing processes for the newly-freed memory. I
suspect that the percentage of RAM allocated to file writes needs to
follow a strategy similar to TCP's congestion control -- when there are
competing requests, it needs to *seriously* cut back the percentage
allocatable to such needs -- say, an additive increase/multiplicative
decrease scheme, just like TCP uses.
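An AIMD controller of the kind suggested above might look like the
following sketch (purely hypothetical -- the function name, parameters,
and thresholds are invented for illustration and do not correspond to
any NetBSD tunable):

```python
# Hypothetical AIMD controller for the fraction of RAM the file
# cache may hold, modeled on TCP congestion control.

def aimd_step(share, contention, add=0.02, mult=0.5, lo=0.05, hi=0.95):
    """One control step: additive increase while memory is uncontended,
    multiplicative decrease when other consumers need pages."""
    if contention:
        share *= mult   # back off sharply, like TCP on packet loss
    else:
        share += add    # probe gently for more cache
    return max(lo, min(hi, share))  # clamp to sane bounds

share = 0.90  # file cache currently holds 90% of RAM
for contended in [False, False, True, False, True]:
    share = aimd_step(share, contended)
    print(round(share, 3))
```

The multiplicative decrease is what makes the scheme responsive: a
single burst of competing demand halves the cache's share, so an
awakening interactive process gets memory quickly instead of fighting
the writers page by page.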
--Steve Bellovin, http://www.research.att.com/~smb