tg%gmplib.org@localhost (Torbjörn Granlund) writes:

> Greg Troxel <gdt%ir.bbn.com@localhost> writes:
>
>> xen provides fair sharing of cpu among dom0 and domU because it
>> controls CPU.  For disk, it's more complicated, because operations
>> are forwarded to the dom0 to be actually done.  It seems that the
>> fair-sharing of disk among domUs is working.
>
> Do Dom0's own disk operations go via Xen?

I believe not.  The basic issue is that the disk controller is attached
to the dom0 operating system.

>> So I wonder how the domU ops that arrive in dom0 share with the
>> native dom0, and I think that's entirely a netbsd issue, not a xen
>> issue.
>>
>> What is your kern.bufq.strategies?
>
> -bash-4.3# sysctl kern.bufq.strategies
> kern.bufq.strategies = disksort fcfs priocscan

That looks similar to mine.  I suspect this is important but am not
clear on the details.

>> Do you perceive reasonable fair-sharing in the dom0 across processes?
>
> I haven't scrutinized that, but as far as I can tell it works
> perfectly.

What's going on is that there are disk requests from xen domUs which
arrive in the dom0, and in your case they are getting forwarded to a
vnd, which then forwards them to a file, which eventually gets to the
disk.  So it may be that the domU is getting fair-shared against all
the other processes on your dom0, which leads to 1/30th of the disk
bandwidth.

Really there needs to be some sort of hierarchical scheduler that
splits the disk bandwidth evenly among all domains and then shares
properly among the native dom0 usage.

I suspect that if you used a separate disk in the dom0 for the domU
storage, you wouldn't have this issue, but that's avoiding the issue.
I also wonder if raw partitions/LVM would be better.

It is probably not super hard to read the xen code in the dom0 kernel
to see what's going on, and there may be performance counters you can
look at.

> Please encrypt, key id 0xC8601622

I would, but then the list wouldn't be able to read it :-)
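The "hierarchical scheduler" idea can be sketched in a few lines of
Python.  This is purely illustrative -- it is not NetBSD or Xen code,
and the function names are invented -- but it shows the arithmetic:
split disk bandwidth evenly among domains first, then among each
domain's requesters, instead of treating every requester as a flat
peer.

```python
# Illustrative sketch only -- not NetBSD/Xen code.  "requesters" maps
# a domain name to the number of processes with outstanding disk I/O.

def flat_share(total_bw, requesters):
    """Flat fair sharing: every requester is an equal peer, so a lone
    domU requester competing with 29 dom0 processes gets 1/30."""
    n = sum(requesters.values())
    return {dom: total_bw * count / n for dom, count in requesters.items()}

def hierarchical_share(total_bw, requesters):
    """Two-level fair sharing: split evenly among domains first; each
    domain then divides its slice among its own requesters."""
    per_domain = total_bw / len(requesters)
    return {dom: per_domain for dom in requesters}

demand = {"dom0": 29, "domU1": 1}
assert abs(flat_share(1.0, demand)["domU1"] - 1 / 30) < 1e-9  # flat: 1/30
assert hierarchical_share(1.0, demand)["domU1"] == 0.5        # two-level: 1/2
```

Note that bufq strategies like priocscan operate on individual
requests, so a hierarchy like this would have to sit above them and
would need requests tagged with their originating domain.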