Memory usage of crunchgenned binary vs shared
In an embedded project, I have a kernel with an internal md root. I am then
using compressed vnd images, one of which includes the standard shared
libraries (e.g. libc). By default, all the standard scripting tools (sh,
awk, sed, rm, cp, etc.) are part of the monolithic crunchgenned binary.
As such, they have a VSZ of around 2.5MB when running. I've noticed that
in low-memory situations they are sometimes killed, causing scripts to
fail in odd ways. This led me to think that installing the dynamically
linked versions in the vnd image and symlinking to them might save memory.
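For what it's worth, this is roughly how I've been looking at the
footprint so far, and what the symlink idea would amount to; the paths
are just from my layout, so treat it as a sketch rather than anything
definitive:

    # VSZ/RSS of the running scripting tools, currently all hardlinks
    # into the single crunchgenned binary
    ps -ax -o pid,vsz,rss,comm | grep -E ' (sh|awk|sed)$'

    # the proposed alternative: replace a hardlink with a symlink to a
    # dynamically linked binary installed in the vnd image
    # (mount point /vnd is just an example)
    ln -sf /vnd/bin/sh /bin/sh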
However, every time I think through the logic of this, I come up with a
different answer :-)
My current belief is that using the shared versions will actually make
things worse. The blocks of the md image cannot be run in place, as they
live in a read-only area of kernel memory, so they are copied out block
by block. For a crunchgenned binary, the whole binary (and therefore all
the links to it) is copied once and thus takes up 2.5MB of memory (over
and above the RAM already used by its read-only image in the kernel). If
the shared versions are used, the buffer cache will hold the whole of
libc (which is likely to be there anyway because it will be used by
other things) plus the dynamically linked version of each individual
executable. I'm not sure how this maps to per-process memory usage, though.
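If it helps to see what I mean, the way I was planning to poke at this
is the per-mapping breakdown of a running dynamically linked sh,
assuming pmap(1) is present in the image (again only a sketch, not
something I've measured yet):

    # the libc and ld.elf_so text segments should show up as shared
    # mappings, while data, bss and stack are private to the process
    pmap $(pgrep -nx sh)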
Could any VM experts give me a definitive answer here? :-)
--
Stephen