Subject: Re: LFSv2 on the trunk
To: Michael K. Sanders <msanders@confusion.net>
From: Konrad Schroder <perseant@hhhh.org>
List: current-users
Date: 07/13/2001 18:02:42
On Fri, 13 Jul 2001, Michael K. Sanders wrote:
> While I agree that would be good, how would that be distinguishing
> from FFS+softdep+snapshots?
One word: undelete. :^)
I'm taking your question to mean "how would LFS distinguish itself" rather
than "how is that feature set different", because the feature set is not
fully representative of LFS' potential... I'm also going to assume that
LFS works, although that's not quite correct either :^)
Soft dependencies are good, but they are an attempt to get LFS-like
features out of an existing installed base of FFS, and no changes to the
FFS disk layout were allowed. (These criteria were explicit in the
softdep paper.) I haven't followed softdeps very closely, so I won't
comment on them beyond that.
LFS *by way of its design* has the potential to be significantly faster
than a conventional filesystem (e.g. FFS) at file creation and removal,
and concurrent file writes. Try comparing a 1.5.1 FFS, with or without
softdeps enabled, against a 1.5.1 LFS, doing two or more simultaneous "tar
xf"s of, say, the (uncompressed) NetBSD 1.5.1 base tar. Multiple
concurrent "bonnie" benchmarks also look good for LFS, though not as good
as the multiple "tar".
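For concreteness, here is a self-contained sketch of that concurrent
"tar xf" comparison. The tiny generated tarball and the temporary
directory are stand-ins for the (much larger) uncompressed 1.5.1 base
set and a mount point on the filesystem under test:

```shell
# Sketch of the concurrent "tar xf" comparison.  The tiny tarball
# built here stands in for the NetBSD 1.5.1 base set; for the real
# test, point at an uncompressed base.tar on an FFS or LFS mount.
work=$(mktemp -d)
cd "$work"
mkdir src
echo "sample data" > src/file1
tar cf base.tar src
mkdir a b
# Two simultaneous extractions of the same archive, timed as a unit.
time sh -c 'tar xf base.tar -C a &
            tar xf base.tar -C b &
            wait'
```

Run this once per filesystem (FFS with and without softdeps, then LFS)
and compare the elapsed times.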
Snapshot support (if I understand what you mean by that, which I might
not) is also already there and working. Basically, if you sync, kill the
cleaner, and run dump_lfs, you will get a guaranteed-consistent version of
the filesystem, while still being able to write to the filesystem in the
normal way and without seeing much performance degradation while you're
dumping (well, until you run out of empty segments; hey, it's not magic).
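In command form, that snapshot-style dump might look something like the
sketch below. This is NetBSD-specific and untested as written: the
mount point, the dump file path, and the exact way the cleaner is
stopped and restarted are illustrative assumptions, not a recipe.

```shell
# Sketch of taking a consistent dump from a live LFS (paths are
# hypothetical).  Guarded so the dump is only attempted on a host
# where the LFS tools actually exist.
if command -v dump_lfs >/dev/null 2>&1; then
    sync                                      # flush dirty data to the log
    pkill lfs_cleanerd                        # stop the cleaner for the duration
    dump_lfs -0u -f /backup/home.dump /home   # dump a consistent view
    lfs_cleanerd /home                        # restart the cleaner afterwards
    snapshot=attempted
else
    echo "dump_lfs not found; this applies only on a NetBSD LFS host"
    snapshot=skipped
fi
```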
Because it exists in user space[*], the cleaner can perform other actions
before making space available for writing. The cleaner could, for
example, write old versions of file data out to secondary storage, or
keep multiple versions on disk if desired. With some work it could also
place seldom-used files at the extremities of the disk and gather busy
files together in the middle, or perform other layout optimizations
after the files are written, possibly even varying the layout from day
to day. None of these cleaner behaviors have been written, but the
log-structured design makes them possible.
[* - There are pros and cons for having the cleaner operate in user space;
the potential interaction with external processes is one of the pros.]
That's all I can think of for now :^)
Konrad Schroder
perseant@hhhh.org