Subject: Snapshot Weirdness
To: current-users@NetBSD.org
From: Rhialto <rhialto@falu.nl>
List: current-users
Date: 6 May 2006 19:30:36
I am experimenting with snapshots, since I think some of what is written
in fssconfig(8) is either strange or misleading. In particular, I want
to know whether a snapshot survives a reboot (if it doesn't, it seems
fairly pointless, or at least a lot less useful than the snapshots
described in [McKusick99]).
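For reference, these are the basic fssconfig operations I am using, as
I read the manpage (a sketch; the -l and -u forms are the documented
list and unconfigure variants):

bash-3.00# fssconfig fss0 /scratch /scratch/snapshotbackup  # snapshot /scratch into a backup file
bash-3.00# fssconfig -l                                     # list configured fss devices
bash-3.00# fssconfig -u fss0                                # unconfigure the snapshot device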
Here is what I did:
1. make snapshot #0
2. mount it
3. fill up some extra disk space
4. make snapshot #1
5. mount it
6. look at positively weird output from df.
bash-3.00# cd /scratch
bash-3.00# fssconfig fss0 /scratch /scratch/snapshotbackup
bash-3.00# mount
/dev/wd0e on /scratch type ffs (soft dependencies, NFS exported, local)
bash-3.00# mount /dev/fss0 /mnt/scratch
/dev/fss0: file system not clean (fs_clean=4); please fsck(8)
/dev/fss0: lost blocks 0 files 0
bash-3.00# cd junk/
bash-3.00# dd if=/dev/zero of=junk1 bs=1m count=4096
4294967296 bytes transferred in 240.492 secs (17859085 bytes/sec)
bash-3.00# ls -lh
total 4.0G
-rw-r--r-- 1 root wheel 4.0G May 6 17:43 junk1
bash-3.00# fssconfig fss1 /scratch /scratch/snapshotbackup1
bash-3.00# mount /dev/fss1 /mnt/scratch1
ffs_snapshot_mount: non-snapshot inode 3
/dev/fss1: file system not clean (fs_clean=4); please fsck(8)
/dev/fss1: lost blocks 0 files 0
bash-3.00# df -h
Filesystem    Size   Used   Avail  Capacity  Mounted on
/dev/wd0e      44G    34G    8.1G       80%  /scratch
/dev/fss0      44G    30G     12G       71%  /mnt/scratch
/dev/fss1      44G    30G     12G       71%  /mnt/scratch1
Shouldn't /dev/fss1 show the same Used and Avail figures as /dev/wd0e
at this point?
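Reasoning it through: fss0 was taken before the 4G junk1 was written,
so its 30G used / 12G avail is plausible (34G - 4G = 30G). But fss1 was
taken *after* junk1 was written, so I would expect it to show roughly
34G used / 8.1G avail, matching /dev/wd0e, rather than the same figures
as fss0.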
Another experiment that I did:
1. Make a snapshot of a nearly full file system.
2. Mount the snapshot file system.
3. Remove a large file. One expects that the available space does not
increase, since the file must remain allocated for the snapshot.
4. Create a new file to fill up the disk. This file turns out to be
much larger than step 3 would lead one to expect.
bash-3.00# df -h .
Filesystem    Size   Used   Avail  Capacity  Mounted on
/dev/wd0e      44G    42G     71M       99%  /scratch
bash-3.00# ls -lh
total 12G
-rw-r--r-- 1 root wheel 4.0G May 6 17:43 junk1
-rw-r--r-- 1 root wheel 4.0G May 6 18:03 junk2
-rw-r--r-- 1 root wheel 4.0G May 6 18:07 junk3
bash-3.00# fssconfig fss0 /scratch /scratch/snapshotbackup0
bash-3.00# mount /dev/fss0 /mnt/scratch
/dev/fss0: file system not clean (fs_clean=4); please fsck(8)
/dev/fss0: lost blocks 0 files 0
bash-3.00# ls -lh /mnt/scratch/junk/
total 12G
-rw-r--r-- 1 root wheel 4.0G May 6 17:43 junk1
-rw-r--r-- 1 root wheel 4.0G May 6 18:03 junk2
-rw-r--r-- 1 root wheel 4.0G May 6 18:07 junk3
bash-3.00# rm junk3
bash-3.00# df -h . /mnt/scratch
Filesystem    Size   Used   Avail  Capacity  Mounted on
/dev/wd0e      44G    38G    4.1G       90%  /scratch
/dev/fss0      44G    30G     12G       71%  /mnt/scratch
Huh??? Why is fss0 *emptier* than wd0e?? Removing junk3 freed the full
4G on /dev/wd0e (71M avail -> 4.1G), as if no snapshot were holding on
to the blocks. Is fss0 still confused from my previous experiment? I
did unconfigure it first and remove the snapshot backup files...
bash-3.00# ls -lh /mnt/scratch/junk/ /scratch/junk/
/mnt/scratch/junk/:
total 12G
-rw-r--r-- 1 root wheel 4.0G May 6 17:43 junk1
-rw-r--r-- 1 root wheel 4.0G May 6 18:03 junk2
-rw-r--r-- 1 root wheel 4.0G May 6 18:07 junk3
/scratch/junk/:
total 8.0G
-rw-r--r-- 1 root wheel 4.0G May 6 17:43 junk1
-rw-r--r-- 1 root wheel 4.0G May 6 18:03 junk2
bash-3.00# dd if=/dev/zero of=junk_toomuch bs=1m count=4096
/scratch: write failed, file system is full
bash-3.00# df -h /scratch /mnt/scratch
Filesystem    Size   Used   Avail  Capacity  Mounted on
/dev/wd0e      44G    44G   -2.2G      105%  /scratch
/dev/fss0      44G    30G     12G       71%  /mnt/scratch
Ah, OK, the "extra" space is actually coming from the 5% reserved space
(minfree): 5% of 44G is 2.2G, which matches exactly. But it is funny
that the file system goes from (seemingly) 4.1G available to -2.2G
while writing a file of (less than) 4G.
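As a sanity check on the reserve, something like this should show the
minfree setting (a sketch; I am assuming dumpfs(8) on this version
prints a minfree field in its superblock summary):

bash-3.00# dumpfs /dev/rwd0e | grep -i minfree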
bash-3.00# rm junk_toomuch
bash-3.00# df -h .
Filesystem    Size   Used   Avail  Capacity  Mounted on
/dev/wd0e      44G    42G     87M       99%  /scratch
Ah, now the accounting seems to be somewhat closer to correct, since it
is (almost but not quite) back where we started.
Another experiment:
1. make snapshot.
2. reboot.
3. remove file.
4. create large new file.
5. re-establish and mount snapshot.
bash-3.00# df -h
Filesystem    Size   Used   Avail  Capacity  Mounted on
/dev/wd0e      44G    42G     87M       99%  /scratch
bash-3.00# fssconfig fss0 /scratch /scratch/snapshotbackup0
bash-3.00# reboot
bash-3.00# df -h .
Filesystem    Size   Used   Avail  Capacity  Mounted on
/dev/wd0e      44G    42G     87M       99%  /scratch
bash-3.00# rm junk3
bash-3.00# dd if=/dev/zero of=junk_toomuch bs=1m count=4096
4096+0 records in
4096+0 records out
4294967296 bytes transferred in 241.734 secs (17767328 bytes/sec)
Oops! The 4G write succeeded on a file system that showed only 87M
available before junk3 was removed, so I guess that snapshot was not so
persistent after all!
bash-3.00# df -h .
Filesystem    Size   Used   Avail  Capacity  Mounted on
/dev/wd0e      44G    42G     87M       99%  /scratch
bash-3.00# fssconfig fss0 /scratch /scratch/s
savecore/ slavin/ snapshotbackup0 spam/
bash-3.00# fssconfig fss0 /scratch /scratch/snapshotbackup0
bash-3.00# mount /dev/fss0 /mnt/scratch
/dev/fss0: file system not clean (fs_clean=4); please fsck(8)
/dev/fss0: lost blocks 0 files 0
bash-3.00# ls -lh /mnt/scratch/junk/ /scratch/junk/
/mnt/scratch/junk/:
total 12G
-rw-r--r-- 1 root wheel 4.0G May 6 17:43 junk1
-rw-r--r-- 1 root wheel 4.0G May 6 18:03 junk2
-rw-r--r-- 1 root wheel 4.0G May 6 18:55 junk_toomuch
/scratch/junk/:
total 12G
-rw-r--r-- 1 root wheel 4.0G May 6 17:43 junk1
-rw-r--r-- 1 root wheel 4.0G May 6 18:03 junk2
-rw-r--r-- 1 root wheel 4.0G May 6 18:55 junk_toomuch
No... these "persistent" snapshots don't persist nearly as long as
promised. fssconfig(8) says:

    If backup resides on the snapshotted file system a persistent
    snapshot will be created. This snapshot is active until backup is
    unlinked.

Since I didn't unlink the backup, I was promised the snapshot would
persist. However, when I come back to it after the reboot, a new
snapshot is made instead...
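For reference, this is the sequence I would expect to work, reading the
manpage literally (a sketch; I am assuming fssconfig -u detaches the
device without unlinking the backup file):

bash-3.00# fssconfig fss0 /scratch /scratch/snapshotbackup0  # persistent snapshot
bash-3.00# fssconfig -u fss0                # detach; the backup file stays on disk
bash-3.00# reboot
bash-3.00# fssconfig fss0 /scratch /scratch/snapshotbackup0  # should re-attach the OLD snapshot
bash-3.00# mount -r /dev/fss0 /mnt/scratch  # and show the state at snapshot creation time

Instead, the last fssconfig apparently takes a fresh snapshot.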
More unexpected/misleading/wrong wording from the manpage:

    Otherwise data written through the path will be saved in backup.

Based on [McKusick99], I would expect the *overwritten* data from the
file system to be saved in the backup file. Otherwise, after the next
reboot, or when the snapshot is unconfigured, the file system would
revert to its snapshotted state, and "backup" would be needed
indefinitely for ongoing operation.
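In other words, I would expect copy-on-write to run in this direction
(a sketch of the semantics as I understand [McKusick99], not of the
actual implementation; "somefile" is hypothetical):

bash-3.00# fssconfig fss0 /scratch /scratch/snapshotbackup0
bash-3.00# echo new-data > /scratch/somefile
# Before the new data reaches the disk, the OLD contents of the
# affected blocks should be copied into /scratch/snapshotbackup0, so
# that /scratch keeps the new data across reboots while /dev/fss0
# still presents the file system as it was at snapshot time.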
All this was done with
NetBSD azenomei.falu.nl 3.99.15 NetBSD 3.99.15 (AZENOMEI) #2: Sun Feb 5 16:21:31 CET 2006 rhialto@azenomei.falu.nl:/usr/src/sys/arch/alpha/compile/AZENOMEI alpha
since a recent -current I wanted to install didn't boot properly.
[McKusick99]: see
http://www.usenix.org/publications/library/proceedings/usenix99/mckusick.html
section 6 of the paper (pp. 14 ff.).
In The Design and Implementation of The FreeBSD System, which expands
on the paper, the section "Maintaining a Filesystem Snapshot" on page
352 explicitly states that "Snapshots may live across reboots".
-Olaf.
--
___ Olaf 'Rhialto' Seibert -- You author it, and I'll reader it.
\X/ rhialto/at/xs4all.nl -- Cetero censeo "authored" delendum esse.