cgd-on-vnd results on Xen
I use big files in my filesystems for file-backed guest domains. I'd
like to set up some cgd(4) devices in my guest domains to protect
against the data being slurped off the dom0 partition and read. To
help decide which cgd setup to use, I wanted to measure the effect of
various cgd-over-vnd configurations in my Xen setup.
I'm running NetBSD-3.0_BETA on a dual-processor 3.2GHz Xeon box with
2GB RAM and Ultra320 SCSI disks; dom0 was allocated 128MB of memory.
I ran the following in dom0:
dd if=/dev/zero of=/dev/rcgd0d bs=... # for writes
dd if=/dev/rcgd0d of=/dev/null bs=... # for reads
using various block sizes for transfer on cgd-on-vnd devices backed
by a 5GB file on my filesystem. The dom0 filesystem is FFSv2 with
16KB blocks and 2KB fragments.
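For anyone wanting to reproduce this, a cgd-on-vnd stack can be put
together roughly as follows. The backing file path, params file path,
and cipher/key length below are just placeholder examples, not
necessarily what I used:

dd if=/dev/zero of=/path/to/backing.img bs=1m count=5120  # create the 5GB backing file
vnconfig vnd0 /path/to/backing.img                        # attach it as vnd0
cgdconfig -g -o /path/to/params aes-cbc 256               # generate a cgd parameters file
cgdconfig cgd0 /dev/vnd0d /path/to/params                 # layer cgd0 on top of the vnd

After that, /dev/rcgd0d is the raw encrypted device that the dd runs
above read from and write to.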
My results for various cgd configurations and block sizes were:
                          32KB            16KB            8KB
algorithm              write    read   write    read   write    read   (MB/s)
-----------------------------------------------------------------------------
none                   80.24   63.47   59.94   54.95   46.66   43.16
aes-cbc 128            33.48   24.24   29.13   22.12   25.21   19.89
aes-cbc 192            30.52   20.90   26.99   20.34   22.65   18.37
aes-cbc 256            27.02   20.20   25.17   18.10   22.15   17.16
3des-cbc 192           14.39   10.94   14.00   12.22   12.64   11.48
blowfish-cbc 128       37.49   31.73   32.45   27.70   27.36   24.46
                          32KB            16KB            8KB
algorithm              write    read   write    read   write    read   (%)
-----------------------------------------------------------------------------
none                  100.00  100.00  100.00  100.00  100.00  100.00
aes-cbc 128            41.72   38.19   48.60   40.25   54.03   46.08
aes-cbc 192            38.04   32.93   45.03   37.02   48.54   42.56
aes-cbc 256            33.67   31.83   41.98   32.94   47.47   39.76
3des-cbc 192           17.93   17.23   23.36   22.24   27.09   26.60
blowfish-cbc 128       46.72   49.99   54.14   50.41   58.64   56.67
The second table shows the same data as the first, but expressed as
percentages of the corresponding "none" figures rather than as raw
throughput numbers.
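For example, with 32KB blocks, the aes-cbc 128 write throughput of
33.48 MB/s works out to 33.48 / 80.24 = 41.72% of the 80.24 MB/s
"none" baseline.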
The "none" line in each table came from running:
dd if=/dev/zero of=/dev/rvnd0a bs=... # for writes
dd if=/dev/rvnd0a of=/dev/null bs=... # for reads
The "none" results are supposed to be baseline measurements for
comparisons against the throughput for the basic vnd disk-on-file
abstraction. I freely admit this is a naive benchmark because I was
too lazy to install and run bonnie. However, I hope this information
is still somewhat useful to other people trying to set up something
similar.
Cheers,
-- Johnny Lam <jlam%pkgsrc.org@localhost>