From: Dan Magenheimer <dan.magenheimer@oracle.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Cc: "Keir (Xen.org)" <keir@xen.org>,
Ian Campbell <Ian.Campbell@citrix.com>,
Andres Lagar-Cavilla <andreslc@gridcentric.ca>,
"Tim (Xen.org)" <tim@xen.org>,
xen-devel@lists.xen.org,
Konrad Rzeszutek Wilk <konrad@kernel.org>,
Jan Beulich <JBeulich@suse.com>,
Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: Re: Proposed XENMEM_claim_pages hypercall: Analysis of problem and alternate solutions
Date: Mon, 14 Jan 2013 10:18:17 -0800 (PST)
Message-ID: <9be877bb-d38b-40c7-bae7-b66497f11abf@default>
In-Reply-To: <50F42827.60507@eu.citrix.com>
> From: George Dunlap [mailto:george.dunlap@eu.citrix.com]
> Subject: Re: [Xen-devel] Proposed XENMEM_claim_pages hypercall: Analysis of problem and alternate
> solutions
Hi George -- I trust we have gotten past the recent unpleasantness?
I do value your technical input to this debate (even when we
disagree), so I thank you for continuing the discussion below.
> On 09/01/13 14:44, Dan Magenheimer wrote:
> >> From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> >> Subject: Re: [Xen-devel] Proposed XENMEM_claim_pages hypercall: Analysis of problem and alternate
> >> solutions
> >>
> >> On Tue, 2013-01-08 at 19:41 +0000, Dan Magenheimer wrote:
> >>> [1] A clarification: In the Oracle model, there is only maxmem;
> >>> i.e. current_maxmem is always the same as lifetime_maxmem;
> >> This is exactly what I am proposing that you change in order to
> >> implement something like the claim mechanism in the toolstack.
> >>
> >> If your model is fixed in stone and cannot accommodate changes of this
> >> type then there isn't much point in continuing this conversation.
> >>
> >> I think we need to agree on this before we consider the rest of your
> >> mail in detail, so I have snipped all that for the time being.
> > Agreed that it is not fixed in stone. I should have said
> > "In the _current_ Oracle model" and that footnote was only for
> > comparison purposes. So, please, do proceed in commenting on the
> > two premises I outlined.
> >
> >>> i.e. d->max_pages is fixed for the life of the domain and
> >>> only d->tot_pages varies; i.e. no intelligence is required
> >>> in the toolstack. AFAIK, the distinction between current_maxmem
> >>> and lifetime_maxmem was added for Citrix DMC support.
> >> I don't believe Xen itself has any such concept, the distinction is
> >> purely internal to the toolstack and which value it chooses to push down
> >> to d->max_pages.
> > Actually I believe a change was committed to the hypervisor specifically
> > to accommodate this. George mentioned it earlier in this thread...
> > I'll have to dig to find the specific changeset but the change allows
> > the toolstack to reduce d->max_pages so that it is (temporarily)
> > less than d->tot_pages. Such a change would clearly be unnecessary
> > if current_maxmem was always the same as lifetime_maxmem.
>
> Not exactly. You could always change d->max_pages; and so there was
> never a concept of "lifetime_maxmem" inside of Xen.
(Well, not exactly "always", but since Aug 2006... changeset 11257.
There being no documentation, it's not clear whether the addition
of a domctl to modify d->max_pages was intended to be used
frequently by the toolstack, as opposed to used only rarely and only
by a responsible host system administrator.)
> The change I think you're talking about is this. While you could always
> change d->max_pages, it used to be the case that if you tried to set
> d->max_pages to a value less than d->tot_pages, it would return
> -EINVAL*. What this meant was that if you wanted to use d->max_pages
> to enforce a ballooning request, you had to do the following:
> 1. Issue a balloon request to the guest
> 2. Wait for the guest to successfully balloon down to the new target
> 3. Set d->max_pages to the new target.
>
> The waiting made the logic more complicated, and also introduced a race
> between steps 2 and 3. So the change was made so that Xen would
> tolerate setting max_pages to less than tot_pages. Then things looked
> like this:
> 1. Set d->max_pages to the new target
> 2. Issue a balloon request to the guest.
>
> The new semantics guaranteed that the guest would not be able to "change
> its mind" and ask for memory back after freeing it without the toolstack
> needing to closely monitor the actual current usage.
>
> But even before the change, it was still possible to change max_pages;
> so the change doesn't have any bearing on the discussion here.
>
> -George
>
> * I may have some of the details incorrect (e.g., maybe it was
> d->tot_pages+something else, maybe it didn't return -EINVAL but failed
> in some other way), but the general idea is correct.
Yes, understood. Ian please correct me if I am wrong, but I believe
your proposal (at least as last stated) does indeed, in some cases,
set d->max_pages less than or equal to d->tot_pages. So AFAICT the
change does very much have a bearing on the discussion here.
> The new semantics guaranteed that the guest would not be able to "change
> its mind" and ask for memory back after freeing it without the toolstack
> needing to closely monitor the actual current usage.
Exactly. So, in your/Ian's model, you are artificially constraining a
guest's memory growth, including any dynamic allocations*. If, by bad luck,
you do that at a moment when the guest is growing and very much in
need of that additional memory, the guest may now swapstorm or OOM, and
the toolstack has seriously impacted a running guest. Oracle considers
this both unacceptable and unnecessary.
In the Oracle model, d->max_pages never gets changed, except possibly
by explicit rare demand by a host administrator. In the Oracle model,
the toolstack has no business arbitrarily changing a constraint for a
guest that can have a serious impact on the guest. In the Oracle model,
each guest shrinks and grows its memory needs self-adaptively, only
constrained by the vm.cfg at the launch of the guest and the physical
limits of the machine (max-of-sums because it is done in the hypervisor,
not sum-of-maxes). All this uses working shipping code upstream in
Xen and Linux... except that you are blocking from open source the
proposed XENMEM_claim_pages hypercall.
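The "max-of-sums, not sum-of-maxes" distinction can be made concrete with the capacity arithmetic alone. The numbers below are hypothetical and this is not Xen code; it just shows how a host that could never admit three guests at their summed maxima can still safely run them when only the sum of current allocations is enforced.

```c
/* Hypothetical page counts illustrating "max-of-sums" vs "sum-of-maxes". */
#include <assert.h>

#define NGUESTS 3

static const unsigned long max_pages[NGUESTS] = { 4096, 4096, 4096 };
static const unsigned long tot_pages[NGUESTS] = { 1024, 2048, 1500 };

/* Sum-of-maxes: the toolstack must reserve every guest's maximum up
 * front, whether or not the guests ever use it. */
unsigned long sum_of_maxes(void)
{
    unsigned long sum = 0;
    for (int i = 0; i < NGUESTS; i++)
        sum += max_pages[i];
    return sum;
}

/* Max-of-sums: the hypervisor only requires that the sum of *current*
 * allocations stays under the physical limit at any instant, so guests
 * can grow and shrink self-adaptively within the slack. */
unsigned long sum_of_current(void)
{
    unsigned long sum = 0;
    for (int i = 0; i < NGUESTS; i++)
        sum += tot_pages[i];
    return sum;
}
```

On an 8192-page host in this example, sum-of-maxes (12288) over-commits and would refuse admission, while the actual current usage (4572) fits comfortably; the claim hypercall's role is to make the admission check against that current total race-free.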
So, I think it is very fair (not snide) to point out that a change was
made to the hypervisor to accommodate your/Ian's memory-management model,
a change that Oracle considers unnecessary, a change explicitly
supporting your/Ian's model, which is a model that has not been
implemented in open source and has no clear (let alone proven) policy
to guide it. Yet you wish to block a minor hypervisor change which
is needed to accommodate Oracle's shipping memory-management model?
Please reconsider.
Thanks,
Dan
* To repeat my definition of that term, "dynamic allocations" means
any increase to d->tot_pages that is unbeknownst to the toolstack,
including specifically in-guest ballooning and certain tmem calls.