Subject: Re: New Documentation: Encrypted CDs/DVDs
To: Alan Barrett <apb@cequrux.com>
From: Florian Stoehr <netbsd@wolfnode.de>
List: netbsd-advocacy
Date: 03/13/2005 13:30:18
On Sun, 13 Mar 2005, Alan Barrett wrote:
> On Sat, 12 Mar 2005, Steven M. Bellovin wrote:
>> Second, it says to create the prototype file by dd'ing /dev/zero.
>> That leaves zeros on the disk in unwritten sectors; these stick out
>> like a sore thumb. You could write /dev/urandom instead, though that
>> can be slow. What I recommend is a little odd. Create the space with
>> /dev/zero, as indicated. When you're finished putting the content you
>> really want on the file system, dd /dev/zero to the cgd partition,
>> until it runs out of space. Run sync, to make sure it's written
>> out. You can then delete that pad file and proceed with the unmount
>> procedure.
>
> (For "dd /dev/zero to the cgd partition" read "dd /dev/zero to a scratch
> file in the filesystem that's in the cgd partition", I presume.) With
> that method, I'd be afraid that there were parts of the disk that
> were reserved for file system meta-data but that were never written.
>
> I recommend one of two methods:
>
> A) First dd from /dev/urandom to the backing file that will be used
> by cgd, then vnconfig, cgdconfig, newfs, write the real data. (This is
> the same as Steve's first suggestion above.)
>
> B) Create the backing file via dd from /dev/zero, then vnconfig and
> cgdconfig, then dd from /dev/zero to the raw partition on the cgd device
> (this will result in random-looking stuff being written to the backing
> file, as cgd encrypts the zeros), then newfs and write the real data.
>
> --apb (Alan Barrett)
>
Hi,
I left this step out of the documentation because it is optional (or
even unnecessary in some cases, see below) and can easily be found by
reading the recommended cgd chapter in the guide.
I have not experienced any stability problems with the vnd/cgd
combination used here (and I use it every day).
dd'ing from /dev/urandom is not that "random". And /dev/random is far
too slow, or may even stall if there is not enough entropy (I think
so, at least; I have never tried dd'ing /dev/random to a 4 GB image).
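(For reference, the step being discussed is essentially the following;
file name and size are only examples:)

  # pre-fill the backing file with pseudo-random data before
  # vnconfig/cgdconfig; /dev/urandom never blocks, but this is slow
  # for CD/DVD-sized images
  dd if=/dev/urandom of=cd.img bs=1m count=700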
Configuring the cgd with a random key (as Alan suggested) and dd'ing
/dev/zero to it is the best solution *IF* you want to fill up the
image, i.e. you always write a full image; I agree.
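A rough sketch of that fill-up variant (only illustrative; the device
and unit names, the throwaway parameter file, and the cipher/key
generation choices are examples on my side -- see cgdconfig(8); the
middle dd simply stops with a "no space" error once the device is
full):

  # zero-filled backing file (fast to create)
  dd if=/dev/zero of=cd.img bs=1m count=700
  vnconfig vnd0 cd.img

  # configure the cgd with a throwaway random key and overwrite the
  # whole device with zeros; cgd encrypts them, so the backing file
  # ends up looking like random data
  # (vnd0d = whole-disk partition on i386; some ports use vnd0c)
  cgdconfig -g -k randomkey -o /tmp/throwaway.cgdparams aes-cbc 256
  cgdconfig cgd0 /dev/vnd0d /tmp/throwaway.cgdparams
  dd if=/dev/zero of=/dev/rcgd0d bs=64k
  cgdconfig -u cgd0

  # then configure with the real (passphrase-based) parameters,
  # label, newfs and mount as described in the guide
  cgdconfig -g -k pkcs5_pbkdf2/sha1 -o cd.cgdparams aes-cbc 256
  cgdconfig -V re-enter cgd0 /dev/vnd0d cd.cgdparams
  disklabel -I -e cgd0
  newfs /dev/rcgd0a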
As I assume users will not always use a 700 MB image for 130 MB of
data, but will instead create an image of appropriate size, this step
is not really necessary for the few bytes left over in the image,
although a large, filled-up image does of course obfuscate the
content even more.
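In other words, for roughly 130 MB of data you would just create a
smaller backing file in the first step, for example (the size is only
an illustration):

  # image sized to the payload plus a little file system overhead
  dd if=/dev/zero of=small.img bs=1m count=150

The rest of the procedure stays the same.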
I personally have a set of "standard" image sizes (200, 400, 700 MB)
for CDs and reuse them, without any dd'ing at all. I just newfs them
and leave the old (raw) file data in them (this is of course NOT that
secure, or even SILLY, since it allows partially recovering the raw
data of the previous CD through the unlocked cgd of the new one). The
stepping of image sizes reduces the amount of left-over old data, of
course.
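What I mean by re-using an image is roughly the following (again only
a sketch with example names; it assumes the image, its parameter file
and its label already exist from a previous burn):

  # re-use an existing 700 MB image: no dd at all, just a fresh file
  # system; blocks that newfs does not overwrite still hold the
  # previous CD's encrypted data
  vnconfig vnd0 cd-700.img
  cgdconfig cgd0 /dev/vnd0d cd-700.cgdparams
  newfs /dev/rcgd0a
  mount /dev/cgd0a /mnt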
I do this because I often use encrypted CDs/DVDs, and even with old
"random" data left in the image, I consider this secure enough for
SOME cases (a not-quite-legal mp3 backup, for example).
I would always use a new, fully randomized 700 MB image for
"top-secure" content.
But this is up to each user and by no means a recommendation.
-Florian