I took your patch and have been adding comments to help me understand things, as well as debug logging. I also switched how it works, to use an #ifdef for the NetBSD approach vs. others, to make it less confusing -- but it amounts to the same thing. (I understand your intent was to touch as few lines as possible, and I agree that your approach is also sensible.)

I conclude:

- The default behavior is to set the ARC size to all memory except 1 GB.
- Even on a high-memory machine, without memory pressure mechanisms, the current code is dangerous -- even if in practice it is usually ok.
- If the ARC size is more moderate, things are ok.
- The ARC tends to fill with metadata, and I believe this is because the vnode cache holds refs into the ARC, so that data is not evictable.
- We don't have any real evidence that a huge ARC is much better than a biggish ARC on real workloads.

I attach my patch, which I am not really proposing for committing this minute. But I suggest anyone trying to run zfs on 8 GB and below try it. I think it would be interesting to hear how it affects systems with lots of memory.

On a system with 6000 MB (yes, that's not a power of 2 -- xen config), I end up with 750 MB of arc_c. There is often 2+ GB of pool usage, but that's of course also non-zfs. The system is stable, so far. (I have pkgsrc, distfiles, and binary packages in zfs; the OS is in UFS2.)

ARCI 002 arc_abs_min     16777216
ARCI 002 arc_c_min       196608000
ARCI 005 arc_c_max       786432000
ARCI 010 arc_c_min       196608000
ARCI 010 arc_p           393216000
ARCI 010 arc_c           786432000
ARCI 010 arc_c_max       786432000
ARCI 011 arc_meta_limit  196608000
Attachment:
DIFF.arc