Subject: Re: CVS commit: basesrc
To: None <ragge@ludd.luth.se>
From: Ross Harvey <ross@ghs.com>
List: source-changes
Date: 05/06/2001 14:52:31
> From: Anders Magnusson <ragge@ludd.luth.se>
> >
> > Module Name: basesrc
> > Committed By: ross
> > Date: Sun May 6 19:27:08 UTC 2001
> >
> > Modified Files:
> > basesrc/lib/libc/arch/i386/sys: brk.S
> >
> > Log Message:
> > I have no idea why this syscall wrapper does some very un-unix-like
> > argument prefrobbing, in particular, it computes max(addr, __minbrk)
> > and uses that. The code is like this even in the ancient libc/i386 tree,
> > back to the earliest rev 1.2. I did not see it in Lite 1, but I'm not totally
> > sure what the random site I found was serving up.
> >
> The comments from the vax port may explain:
> ENTRY(_brk, 0)
>         cmpl    4(ap),_C_LABEL(__minbrk)        # gtr > _end
>         bgeq    1f                              # is fine
>         movl    _C_LABEL(__minbrk),4(ap)        # shrink back to _end
> 1:      chmk    $ SYS_break                     # do it
> ...
>
> Avoid lowering brk below end of program...
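
(In C, the clamp both wrappers perform comes out to roughly the following.
This is only a sketch; __minbrk is real, but the raw-syscall stub name here
is made up, not the actual libc internal:)

    extern char *__minbrk;                  /* the floor; starts out at &end */
    extern int   __raw_break(void *);       /* made-up name for the raw syscall stub */

    int
    brk(void *addr)
    {
            /* never hand the kernel a break address below the floor */
            if ((char *)addr < __minbrk)
                    addr = __minbrk;
            return __raw_break(addr);
    }
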
Well, duhhh. :-) As you can see, I even described exactly what it did,
and the intention to establish a brk(2) floor is obvious. But other unix
systems generally don't do this, it doesn't seem to be needed, and netbsd
does not use it, so the "why" question remains. The only thing that references it
is gmon, which does not use it but does "maintain" it by bumping it up
after an initial sbrk().
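
That maintenance, as I remember it, looks roughly like the sketch below; it
is not the actual gmon source, and the function name is invented, but the
idea is to grab the profiling buffers with sbrk() and then raise the floor
so a later brk() from the program can't pull the break back below them:

    #include <unistd.h>                     /* sbrk() */

    extern char *__minbrk;

    void
    start_profiling(size_t bufsize)         /* think monstartup() */
    {
            char *buf;

            buf = sbrk(bufsize);            /* carve out the profiling buffers */
            if (buf == (char *)-1)
                    return;                 /* no memory, no profiling */
            __minbrk = sbrk(0);             /* bump the floor past the buffers */
    }
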
Thinking about it some more, my guess would be that it was intended to be used
instead of &end, so that toolchain or system utilities could sbrk() behind
the scenes without destroying the usefulness of &end, and perhaps for use
by the infamous historical program or two that did their memory allocation
by [s]brk(2)'ing until they stopped segfaulting.
// ross