xen-devel.lists.xenproject.org archive mirror
From: "Zhai, Edwin" <edwin.zhai@intel.com>
To: keir.fraser@eu.citrix.com, andreas.olsowski@uni.leuphana.de,
	xen-devel@lists.xensource.com, Ian.Jackson@eu.citrix.com,
	edwin.zhai@intel.com, jeremy@goop.org
Subject: Re: slow live migration / xc_restore on xen4 pvops
Date: Thu, 03 Jun 2010 16:58:26 +0800
Message-ID: <4C076EB2.9030108@intel.com>
In-Reply-To: <20100603064545.GB52378@zanzibar.kublai.com>

[-- Attachment #1: Type: text/plain, Size: 1676 bytes --]

I assume this is PV domU rather than HVM, right?

1. We need to check whether the superpage logic is the culprit, using 
SP_check1.patch.

2. If that fixes the problem, we can use SP_check2.patch to check further 
where the extra cost comes from: the speculative algorithm or the 
superpage population hypercall.

If SP_check2.patch works, the culprit is the new allocation hypercall (so 
guest creation also suffers); otherwise, it is the speculative algorithm.
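
For illustration, the per-page versus batched allocation patterns Brendan 
describes below look roughly like this. This is a sketch only, not code 
from any tree: the helper names are made up, and the exact prototype of 
xc_domain_memory_populate_physmap (handle type, argument order) may differ 
between versions.

/* Pattern 1: one hypercall per 4K page (nr_extents 1, extent_order 0). */
static int populate_one_by_one(xc_interface *xch, uint32_t dom,
                               xen_pfn_t *pfns, unsigned long count)
{
    unsigned long i;

    for ( i = 0; i < count; i++ )
        if ( xc_domain_memory_populate_physmap(xch, dom, 1, 0, 0, &pfns[i]) )
            return -1;          /* 'count' hypercalls in total */
    return 0;
}

/* Pattern 2: hand the whole batch of 4K extents to a single hypercall. */
static int populate_batched(xc_interface *xch, uint32_t dom,
                            xen_pfn_t *pfns, unsigned long count)
{
    return xc_domain_memory_populate_physmap(xch, dom, count, 0, 0, pfns);
}

If restore ends up doing something like pattern 1 for every normal page, 
the per-hypercall overhead alone could explain the slowdown.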

Does it make sense?

Thanks,
edwin


Brendan Cully wrote:
> On Thursday, 03 June 2010 at 06:47, Keir Fraser wrote:
>   
>> On 03/06/2010 02:04, "Brendan Cully" <Brendan@cs.ubc.ca> wrote:
>>
>>     
>>> I've done a bit of profiling of the restore code and observed the
>>> slowness here too. It looks to me like it's probably related to
>>> superpage changes. The big hit appears to be at the front of the
>>> restore process during calls to allocate_mfn_list, under the
>>> normal_page case. It looks like we're calling
>>> xc_domain_memory_populate_physmap once per page here, instead of
>>> batching the allocation? I haven't had time to investigate further
>>> today, but I think this is the culprit.
>>>       
>> Ccing Edwin Zhai. He wrote the superpage logic for domain restore.
>>     
>
> Here's some data on the slowdown going from 2.6.18 to pvops dom0:
>
> I wrapped the call to allocate_mfn_list in uncanonicalize_pagetable
> to measure the time to do the allocation.
>
> kernel, min call time, max call time
> 2.6.18, 4 us, 72 us
> pvops, 202 us, 10696 us (!)
>
> It looks like pvops is dramatically slower to perform the
> xc_domain_memory_populate_physmap call!
>
> I'll attach the patch and raw data below.
>   
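
For reference, the kind of wrapper Brendan describes is roughly the 
following. This is a sketch only; his actual patch and raw data are 
attached to his mail and not reproduced here, and the allocate_mfn_list 
arguments are elided.

    /* needs <sys/time.h>; wraps the existing allocate_mfn_list() call
     * inside uncanonicalize_pagetable() */
    struct timeval t0, t1;
    long us;
    int rc;

    gettimeofday(&t0, NULL);
    rc = allocate_mfn_list(xch, ctx /* ..., existing arguments */);
    gettimeofday(&t1, NULL);

    us = (t1.tv_sec - t0.tv_sec) * 1000000L + (t1.tv_usec - t0.tv_usec);
    DPRINTF("allocate_mfn_list: rc %d, %ld us\n", rc, us);

If each page really costs a hypercall at the pvops minimum of roughly 
200 us, a guest with, say, 256k 4K pages (about 1 GB, a made-up size) 
would spend on the order of 50 seconds in allocation alone.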

-- 
best rgds,
edwin


[-- Attachment #2: SP_check1.patch --]
[-- Type: text/plain, Size: 433 bytes --]

diff -r 4ab68bf4c37e tools/libxc/xc_domain_restore.c
--- a/tools/libxc/xc_domain_restore.c	Thu Jun 03 07:30:54 2010 +0100
+++ b/tools/libxc/xc_domain_restore.c	Thu Jun 03 16:30:30 2010 +0800
@@ -1392,6 +1392,8 @@ int xc_domain_restore(xc_interface *xch,
     if ( hvm )
         superpages = 1;
 
+    superpages = 0;
+
     if ( read_exact(io_fd, &dinfo->p2m_size, sizeof(unsigned long)) )
     {
         PERROR("read: p2m_size");

[-- Attachment #3: SP_check2.patch --]
[-- Type: text/plain, Size: 629 bytes --]

diff -r 4ab68bf4c37e tools/libxc/xc_domain_restore.c
--- a/tools/libxc/xc_domain_restore.c	Thu Jun 03 07:30:54 2010 +0100
+++ b/tools/libxc/xc_domain_restore.c	Thu Jun 03 16:48:38 2010 +0800
@@ -248,6 +248,7 @@ static int allocate_mfn_list(xc_interfac
     if  ( super_page_populated(xch, ctx, pfn) )
         goto normal_page;
 
+#if 0
     pfn &= ~(SUPERPAGE_NR_PFNS - 1);
     mfn =  pfn;
 
@@ -263,6 +264,7 @@ static int allocate_mfn_list(xc_interfac
     DPRINTF("No 2M page available for pfn 0x%lx, fall back to 4K page.\n",
             pfn);
     ctx->no_superpage_mem = 1;
+#endif
 
 normal_page:
     if ( !batch_buf )

[-- Attachment #4: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

Thread overview: 32+ messages
2010-06-01 17:49 XCP AkshayKumar Mehta
2010-06-01 19:06 ` XCP Jonathan Ludlam
2010-06-01 19:15   ` XCP AkshayKumar Mehta
2010-06-03  3:03   ` XCP AkshayKumar Mehta
2010-06-03 10:24     ` XCP Jonathan Ludlam
2010-06-03 17:20       ` XCP AkshayKumar Mehta
2010-08-31  1:33       ` XCP - issues with XCP .5 AkshayKumar Mehta
2010-06-01 21:17 ` slow live migration / xc_restore on xen4 pvops Andreas Olsowski
2010-06-02  7:11   ` Keir Fraser
2010-06-02 15:46     ` Andreas Olsowski
2010-06-02 15:55       ` Keir Fraser
2010-06-02 16:18   ` Ian Jackson
2010-06-02 16:20     ` Ian Jackson
2010-06-02 16:24     ` Keir Fraser
2010-06-03  1:04       ` Brendan Cully
2010-06-03  4:31         ` Brendan Cully
2010-06-03  5:47         ` Keir Fraser
2010-06-03  6:45           ` Brendan Cully
2010-06-03  6:53             ` Jeremy Fitzhardinge
2010-06-03  6:55             ` Brendan Cully
2010-06-03  7:12               ` Keir Fraser
2010-06-03  8:58             ` Zhai, Edwin [this message]
2010-06-09 13:32               ` Keir Fraser
2010-06-02 16:27     ` Brendan Cully
2010-06-03 10:01       ` Ian Jackson
2010-06-03 15:03         ` Brendan Cully
2010-06-03 15:18           ` Keir Fraser
2010-06-03 17:15           ` Ian Jackson
2010-06-03 17:29             ` Brendan Cully
2010-06-03 18:02               ` Ian Jackson
2010-06-02 22:59   ` Andreas Olsowski
2010-06-10  9:27     ` Keir Fraser
