Port-xen archive
Re: Xen balloon driver rewrite
On Thu, 7 Apr 2011 20:22:58 +0530, "Cherry G. Mathew"
<cherry.g.mathew%gmail.com@localhost> wrote:
>> If you ask for an inflate then a deflate of 256MiB, or the other way
>> around, you will effectively spawn two workers. Both will work towards
>> the same target though, and in this case, will rapidly exit.
> I'm curious about the effect of lock contention in the case of many
> outstanding threads reading/writing to the same shared variable, and
> also about the load on the scheduler / thread manager for multiple
> outstanding requests.
More than lock contention, it's the concurrent access to
balloon_inflate and balloon_deflate that is problematic :)
Although this is a rare situation, it can happen: one worker is
currently executing a balloon_inflate, while another is created to
handle a deflate. Ouch. Linux fixes that by having a "biglock" mutex
wrapping the balloon handler loop, but I'd like to avoid going down that
route.
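To make the contention point concrete, here is roughly what the "biglock"
approach would look like (a sketch only; function and variable names are
made up, this is not the actual driver code). The whole handler is
serialized by one kernel mutex, so a second worker spawned for the
opposite request simply re-reads the shared target and finds nothing left
to do:

    #include <sys/param.h>
    #include <sys/mutex.h>

    static kmutex_t balloon_biglock;   /* serializes the whole handler */
    static size_t   balloon_target;    /* shared target, in pages */
    static size_t   balloon_current;   /* pages currently in the balloon */

    static void balloon_inflate_one(void);  /* hand one page back to Xen */
    static void balloon_deflate_one(void);  /* reclaim one page from Xen */

    static void
    balloon_worker(void *arg)
    {
            mutex_enter(&balloon_biglock);
            while (balloon_current != balloon_target) {
                    if (balloon_current < balloon_target)
                            balloon_inflate_one();
                    else
                            balloon_deflate_one();
            }
            mutex_exit(&balloon_biglock);
            /* a late worker for the opposite request exits right away */
    }

The price is that every worker, however small its request, serializes
behind whatever operation is already in flight.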
> Can't think of any other objections.
This all depends on the above. If I can't find a clean solution to it
(other than taking an ugly "biglock" at worker handler entry and
releasing it upon return), I'll revert to the idle thread solution.
> In the feedback case, the userland script/tool within the domU and the
> dom0 (via Xenstore) can both see what's going on. Wouldn't this be
> better than leaving both uninformed and under the impression that the
> balloon operation succeeded?
The domU will see it in the error case, because it will be logged to its
console.
Indeed, the dom0/admin domain won't notice it. The memory/target
entry is not modified back. I am not sure it should be anyway: the
pv_ops Linux kernel does not feed such a situation back to Xenstore (I
did not find the code that does it...), so I am not sure that
tools/someone would ever care if memory/target gets modified behind
their back.
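For the record, what I have in mind looks roughly like this (a sketch
only; balloon_target_changed() and balloon_set_target() are made-up
names): the clamped value is logged on the domU console, but the
memory/target node is left untouched, so dom0 keeps seeing whatever the
toolstack wrote:

    #include <sys/types.h>
    #include <sys/systm.h>

    static void balloon_set_target(uint64_t);       /* made-up helper */

    /* sketch: react to a new value read from the memory/target node */
    static void
    balloon_target_changed(uint64_t wanted)
    {
            if (wanted < XEN_RESERVATION_MIN) {
                    /* refuse to go below the safeguard; only the domU sees it */
                    printf("balloon: clamping target %llu -> %llu\n",
                        (unsigned long long)wanted,
                        (unsigned long long)XEN_RESERVATION_MIN);
                    wanted = XEN_RESERVATION_MIN;
            }
            balloon_set_target(wanted); /* no write-back to memory/target */
    }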
> Yes, but balloon.mem-min is a pretty arbitrary/rule-of-thumb value,
> right?
Yes. It permits the domU admin to control what the kernel is doing for
ballooning, independently of the safeguard XEN_RESERVATION_MIN value.
Of course, mem-min cannot go below XEN_RESERVATION_MIN. Think of it as a
way for the domU's admin to say: "I am expecting high memory usage in the
near future, so don't inflate the balloon too much."
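In code terms the effective floor would be something like this (sketch
only, names invented):

    #include <sys/param.h>      /* MAX() */
    #include <sys/types.h>

    /* sketch: the balloon never shrinks the guest below either limit */
    static uint64_t
    balloon_clamp_target(uint64_t target, uint64_t balloon_mem_min)
    {
            uint64_t floor = MAX(balloon_mem_min, XEN_RESERVATION_MIN);

            return (target < floor) ? floor : target;
    }

so mem-min can only ever tighten the limit; it can never relax the
XEN_RESERVATION_MIN safeguard.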
> Whereas the scenario above would be a real-world situation where we
> got feedback that a high mem-pressure situation has been reached?
Yes. However, I wouldn't "signal" it via the memory/target Xenstore
node. Such states are transient. It's the kind of setup where some
stupid guy starts doing
while true; do
        sleep 1 && xm mem-set <whatever>  # Heck NetBSD, stop changing the value, heh
done
because he doesn't have a clue about what is currently happening.
Agreed nonetheless: it's hard to report specific conditions back to
dom0. Except for well-defined nodes, domains put whatever pleases them
in Xenstore...
--
Jean-Yves Migeon
jeanyves.migeon%free.fr@localhost