Subject: Re: as long as we're hitting FFS...
To: Chris G. Demetriou <cgd@netbsd.org>
From: Bill Studenmund <wrstuden@nas.nasa.gov>
List: tech-kern
Date: 03/25/1999 17:46:36
On 25 Mar 1999, Chris G. Demetriou wrote:
> Bill Studenmund <wrstuden@nas.nasa.gov> writes:
> > And just what, exactly, are other people going to want to stick in here?
> > It's storage for overlay fs's. I really don't foresee having two overlay
> > fs's running on top of each other at the same time, each needing per-inode
> > storage.
>
> If this is truly where you are going, you _have_ to provide the hooks
> so the Wrong Thing cannot happen. (i.e. multiple overlays, use of
> wrong overlay.) You haven't done that (that I've seen).
As far as I can see, we can only do so much to prevent things from getting
mis-mounted (is that what you're describing?).
My code does two things, both of which are beyond the scope of the
discussion so far (they are in what the layer does, not in how we grow ffs
to support it). At mount time, the mount routine looks to see if there is
opaque data under the mount point. If not, it stores its mount point data
there. If there is anything other than its mount point data under there, it
bails and the mount fails.
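
Sketched in C with made-up names (opaque_get()/opaque_set() stand in for
whatever per-inode interface ffs ends up exporting, and the structs are
invented just for the example), the mount-time check goes something like:

#include <sys/types.h>
#include <string.h>
#include <errno.h>

struct vnode;

#define LAYER_MAGIC     0x4c59524d      /* arbitrary; marks data as ours */

/* Invented mount point record for the sketch. */
struct layer_mountdata {
        u_int32_t       md_magic;
};

/*
 * Stand-ins for the per-inode opaque storage interface: return 0 on
 * success, ENOENT if the inode holds no opaque data, else an errno.
 */
int     opaque_get(struct vnode *, void *, size_t);
int     opaque_set(struct vnode *, const void *, size_t);

int
layer_mount_check(struct vnode *lowervp)
{
        struct layer_mountdata md;
        int error;

        error = opaque_get(lowervp, &md, sizeof(md));
        if (error == ENOENT) {
                /* Nothing under the mount point yet; claim it. */
                memset(&md, 0, sizeof(md));
                md.md_magic = LAYER_MAGIC;
                return (opaque_set(lowervp, &md, sizeof(md)));
        }
        if (error)
                return (error);
        if (md.md_magic != LAYER_MAGIC)
                return (EINVAL);        /* someone else's data; bail */
        return (0);
}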
When each node is brought into service, we check to see whether our data
is already in it. If not, we do nothing. Likewise, if it has someone
else's data, we ignore the file.
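
The per-node check is the same idea (same invented helpers as above, plus
an invented per-node record and flag):

/* More invented types: the per-node record and the layer's node. */
struct layer_nodedata {
        u_int32_t       nd_magic;
};

struct layer_node {
        struct layer_nodedata   ln_data;
        int                     ln_flags;
#define LN_PASSTHROUGH  0x01    /* no (or foreign) data; leave file alone */
};

void
layer_node_init(struct layer_node *lp, struct vnode *lowervp)
{
        struct layer_nodedata nd;

        if (opaque_get(lowervp, &nd, sizeof(nd)) == 0 &&
            nd.nd_magic == LAYER_MAGIC) {
                lp->ln_data = nd;       /* our data; use it */
                lp->ln_flags = 0;
        } else {
                /* absent, or someone else's; ignore the file */
                lp->ln_flags = LN_PASSTHROUGH;
        }
}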
Also, we catch the setting of opaque data, and so know whether we are
gaining or losing data (remembering that this design assumes only one type
of opaque data per file).
It would be legitimate for the layer to refuse to set other types of
opaque data if only one type can be stored.
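
The intercept could look roughly like this (same invented names as above;
a real version would also have to catch removal of the data, which is
where the "losing" case comes from):

int
layer_setopaque(struct layer_node *lp, struct vnode *lowervp,
    const void *data, size_t len)
{
        const struct layer_nodedata *nd = data;
        int error;

        /* Only one type of opaque data fits, so refuse foreign data. */
        if (len != sizeof(*nd) || nd->nd_magic != LAYER_MAGIC)
                return (EPERM);

        error = opaque_set(lowervp, nd, len);
        if (error)
                return (error);

        /* The node just gained (or refreshed) our data. */
        lp->ln_data = *nd;
        lp->ln_flags &= ~LN_PASSTHROUGH;
        return (0);
}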
> It's not enough to say "we make rope" here. It's simply _wrong_ to
> allow a simple, potentially common naive misconfiguration to trash a
> user's file system data.
>
> In other words, even for the problem you propose to solve, your
> solution is not complete.
At no point in this discussion have I said I am presenting a complete
description of how to put an overlay fs which stores info in each node
into service. :-) I've thought about a lot of this, but I was only
discussing how to get the per-node storage going. :-)
Take care,
Bill