Subject: Re: CRITICAL ** Holes in default cron jobs ** CRITICAL
To: der Mouse <mouse@holo.rodents.montreal.qc.ca>
From: John F. Woods <jfw@jfwhome.funhouse.com>
List: current-users
Date: 12/30/1996 16:39:42
> Also, the code to walk the hierarchy properly would be _very_ unsimple.
> (Primarily because you can have _far_ more pathname components - ie,
> directories you have to walk down and back up through - than you have
> file descriptors available. File descriptor limit seems to be 344 on
> the system I have handy, but MAXPATHLEN is more like 10K, meaning you
> can need up to 5K pending directories.)
It is difficult to monkey with ".." links if you aren't already
superuser, so it might be sufficient to keep track of just the current
directory, as long as you don't descend through symbolic links to
directories.  (Hmm, no, there's still a race: leave a directory for find
to stat, then remove it and replace it with a symlink before find calls
chdir; it's just a very, very narrow window.  I guess find could stat
"." after the chdir to see if it has been hosed.)
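
Roughly like this (just a sketch of the check, not the actual find(1)
code): lstat the entry before descending, chdir into it, then stat "."
and compare device and inode numbers; if they differ, somebody swapped
the directory out from under us between the lstat and the chdir.

#include <sys/stat.h>
#include <stdio.h>
#include <unistd.h>

int
descend(const char *name)
{
        struct stat before, after;

        if (lstat(name, &before) == -1 || !S_ISDIR(before.st_mode))
                return -1;      /* gone already, or not a plain directory */
        if (chdir(name) == -1)
                return -1;
        if (stat(".", &after) == -1 ||
            after.st_dev != before.st_dev || after.st_ino != before.st_ino) {
                /*
                 * The entry was replaced (e.g. by a symlink) between the
                 * lstat and the chdir; the caller should fchdir back to a
                 * saved descriptor and skip this directory entirely.
                 */
                fprintf(stderr, "%s: directory changed underneath us\n", name);
                return -1;
        }
        return 0;               /* safe to operate here */
}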
Perhaps an unlink variant which takes a file handle (typically
inode/device/generation number) plus an entry name to remove would help.
Exchanging a directory for a symlink would then cause the operation to
just fail (well, I guess it would succeed spuriously about one time in
4 billion on a UFS filesystem, when the generation number happens to
collide).  The NFS server already uses such a primitive, so it should be
easy to expose it.  (It needs to be privileged, of course, since it
bypasses any directory protection leading from / down to the directory
in question!)
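
Something like this is what I'm picturing; fh_unlink() is entirely
hypothetical (no such syscall exists), but getfh(2) and fhandle_t are
the existing, superuser-only pieces the NFS server code already deals
in, so the kernel side is mostly there.

#include <sys/param.h>
#include <sys/mount.h>          /* fhandle_t, getfh(2) */

int fh_unlink(const fhandle_t *dirfh, const char *name); /* hypothetical */

int
remove_in(const char *dirpath, const char *name)
{
        fhandle_t fh;

        /* A handle names the directory by filesystem + inode + generation. */
        if (getfh(dirpath, &fh) == -1)
                return -1;
        /*
         * If dirpath is later swapped for a symlink, the handle no longer
         * resolves (barring the one-in-2^32 generation collision mentioned
         * above), so the removal just fails instead of following the link.
         */
        return fh_unlink(&fh, name);
}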