xen-devel.lists.xenproject.org archive mirror
From: Dario Faggioli <raistlin@linux.it>
To: George Dunlap <george.dunlap@eu.citrix.com>
Cc: "Zhang, Yang Z" <yang.z.zhang@intel.com>,
	Andre Przywara <andre.przywara@amd.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [PATCH 2 of 2 RFC] xl: allow for moving the domain's memory when changing vcpu affinity
Date: Fri, 06 Jul 2012 15:57:28 +0200	[thread overview]
Message-ID: <1341583048.25268.28.camel@Solace> (raw)
In-Reply-To: <4FF6DFE5.7060403@eu.citrix.com>



On Fri, 2012-07-06 at 13:53 +0100, George Dunlap wrote:
> On 06/07/12 10:54, Dario Faggioli wrote:
> > By introducing a new '-M' option to the `xl vcpu-pin' command. The actual
> > memory "movement" is achieved by suspending the domain to a temporary file
> > and resuming it with the new vcpu-affinity.
> Hmm... this will work and be reliable, but it seems a bit clunky.
>
If I can ask, the idea or the implementation? :-)

> Long 
> term we want to be able to do node migration in the background without 
> shutting down a VM, right?  
>
Definitely, and we also want to do that automatically, according to some
load/performance/whatever run-time measurement, without any user
intervention. However ...

> If that can be done in the 4.3 timeframe, 
> then it seems unnecessary to implement something like this.
>
... I think something like this could still be useful.

IOW, I think it would be worthwhile to have both the automatic memory
migration happening in the background and something like this explicit,
do-it-all-now mechanism, as they serve different purposes.

It's sort of like in cpu scheduling, where you have free tasks/vcpus
that run wherever the scheduler wants, but you also have pinning, if at
some point you want to confine them to some subset of cpus for whatever
reason. Also, as with pinning, you can do it at domain creation time,
but then you might change your mind, and you need something to make the
system reflect your actual needs.

So, if you allow me: initial vcpu pinning is like NUMA automatic
placement, and automatic memory migration happening in the background is
like "free scheduling"; this feature, then, is like what we get when
running `xl vcpu-pin'. (I hope the comparison helped in clarifying my
view rather than making it even more obscure :-)).

Also, as an example, what happens if you want to create a new VM and you
have enough free memory, but not within a single node, because of memory
fragmentation? Ideally that won't happen, thanks to the automatic memory
migration, but that's not guaranteed to be possible (there might be
pinned VMs, cpupools, and all sorts of stuff), or to happen effectively
and soon enough (it will be a heuristic, after all). In such a situation,
I figure this feature could be a useful tool for a system administrator.

Then again, I'm talking about the feature. The implementation might well
change, and I really expect that being able to migrate memory in the
background will help clean it (the implementation) up quite a bit.
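
Just to make it concrete, the whole "clunky" mechanism boils down to
something like the sketch below. Nothing here is taken from the patch:
the domain name, node number and the run() wrapper are made up, run()
only echoes the commands so the sketch can be dry-run without a Xen
host, and the "node:N" cpu-list syntax is illustrative (an explicit
pcpu list works too).

```shell
# Purely illustrative sketch (not the actual patch code) of the
# save/restore dance; drop the echo in run() to do it for real.
DOMAIN=guest0
NODE=1
TMPFILE=$(mktemp /tmp/xl-move.XXXXXX)

CMDS=""
run() {
    # Record and echo the command instead of executing it, so this
    # can be read and tested on a machine without Xen.
    CMDS="$CMDS$*
"
    echo "+ $*"
}

# 1. Pin all vcpus to the pcpus of the target NUMA node.
run xl vcpu-pin "$DOMAIN" all "node:$NODE"
# 2. Suspend the domain to a temporary file...
run xl save "$DOMAIN" "$TMPFILE"
# 3. ...and resume it, so its memory gets re-allocated according to
#    the new vcpu affinity.
run xl restore "$TMPFILE"
rm -f "$TMPFILE"
```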

Sorry for writing so much. :-P

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Thread overview: 13+ messages
2012-07-06  9:54 [PATCH 0 of 2 RFC] xl: move domeins among NUMA nodes Dario Faggioli
2012-07-06  9:54 ` [PATCH 1 of 2 RFC] xl: parse extra_config options even when restoring Dario Faggioli
2012-07-06  9:54 ` [PATCH 2 of 2 RFC] xl: allow for moving the domain's memory when changing vcpu affinity Dario Faggioli
2012-07-06 12:53   ` George Dunlap
2012-07-06 13:25     ` Ian Campbell
2012-07-06 13:30       ` George Dunlap
2012-07-06 13:38         ` Ian Campbell
2012-07-06 14:05       ` Dario Faggioli
2012-07-06 14:07         ` George Dunlap
2012-07-06 14:42         ` Ian Campbell
2012-07-06 13:57     ` Dario Faggioli [this message]
2012-07-06 14:04       ` George Dunlap
2012-07-06 14:14         ` Dario Faggioli
