From: Ian Campbell <ian.campbell@citrix.com>
To: Jaeyong Yoo <jaeyong.yoo@samsung.com>
Cc: 'Stefano Stabellini' <stefano.stabellini@eu.citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [PATCH v3 07/10] xen/arm: Add handling write fault for dirty-page tracing
Date: Sat, 17 Aug 2013 23:16:02 +0100
Message-ID: <1376777762.31937.6.camel@hastur.hellion.org.uk>
In-Reply-To: <005f01ce996f$60b901c0$222b0540$%yoo@samsung.com>

On Thu, 2013-08-15 at 13:24 +0900, Jaeyong Yoo wrote:

> > Why don't we just context switch the slots for now, only for domains where
> > log dirty is enabled, and then we can measure and see how bad it is etc.
> 
> 
> Here are the measurement results:

Wow, that was quick, thanks.

> For a better understanding of the trade-off between vlpt and
> page-table walk in dirty-page handling, let's consider the following
> two cases:
>  - Migrating a single domain at a time
>  - Migrating multiple domains concurrently
> 
> For each case, the metrics we are going to look at are the following:
>  - page-table walk overhead: for handling a single dirty page, a
>     page-table walk requires 6us and vlpt (improved version) requires
>     1.5us. From this, we consider 4.5us of pure overhead compared to
>     vlpt, and it happens for every dirty page.

map_domain_page has a hash table structure in which the PTE entries
are reference counted; however, we don't clear the PTE when the ref
reaches 0, so if we immediately use it again we don't need to flush.
But we may need to flush if there is a hash table collision. So in
practice there will be a bit more overhead; I'm not sure how
significant that will be. I suppose the chance of a collision depends
on the size of the guest.
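
To make the scheme concrete, here is a minimal sketch of a
reference-counted mapping cache of the kind described above. It is
purely illustrative and not Xen's actual map_domain_page
implementation; all names and helpers here are hypothetical:

    /* A minimal sketch of a refcounted mapping cache in the spirit of
     * map_domain_page; NOT Xen's actual implementation, and all names
     * here are hypothetical. */
    #include <stddef.h>

    #define NR_SLOTS    32
    #define INVALID_MFN (~0UL)

    struct map_slot {
        unsigned long mfn;    /* frame currently mapped; PTE left in place */
        unsigned int  refcnt; /* 0 = reusable, but the translation may
                               * still be cached in the TLB */
    };

    static struct map_slot slots[NR_SLOTS];

    void init_slots(void)
    {
        for (unsigned int i = 0; i < NR_SLOTS; i++)
            slots[i].mfn = INVALID_MFN; /* no frame mapped yet */
    }

    /* Hypothetical hooks: a real implementation would write a PTE and
     * issue a TLB-invalidate-by-VA for the slot's fixed virtual address. */
    static void repoint_slot(struct map_slot *s, unsigned long mfn) { s->mfn = mfn; }
    static void flush_slot_tlb(struct map_slot *s) { (void)s; }

    void *map_frame(unsigned long mfn)
    {
        struct map_slot *s = &slots[mfn % NR_SLOTS];

        if (s->mfn == mfn) {
            /* The old PTE was never cleared, so reuse it with no flush. */
            s->refcnt++;
        } else if (s->refcnt == 0) {
            /* Hash collision with an idle slot: rewrite the PTE and
             * flush, since the stale translation may still be cached. */
            repoint_slot(s, mfn);
            flush_slot_tlb(s);
            s->refcnt = 1;
        } else {
            /* Collision with a live mapping: a real implementation would
             * probe other slots or fall back to a slow path. */
            return NULL;
        }
        return (void *)s; /* stands in for the slot's fixed virtual address */
    }

    void unmap_frame(void *va)
    {
        /* Drop the reference but deliberately keep the PTE: remapping
         * the same mfn soon after costs neither a PTE write nor a flush. */
        ((struct map_slot *)va)->refcnt--;
    }

The point of leaving the PTE intact at refcount 0 is visible in
map_frame(): the common re-map case pays for neither a PTE write nor a
TLB flush, and only a genuine collision pays for the flush.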

>  - vlpt overhead: the only vlpt overhead is the flushes at context
>     switch. Flushing a 34MB virtual address range (which is what is
>     needed to support a 16GB domU) requires 130us, and it happens
>     whenever two migrating domUs are context switched.
> 
> Here are the results:
> 
>  - Migrating a domain at a time:
>     * page-table walk overhead: 4.5us * 611 times = 2.7ms
>     * vlpt overhead: 0 (no flush required)
> 
>  - Migrating two domains concurrently:
>     * page-table walk overhead: 4.5us * 8653 times = 39ms
>     * vlpt overhead: 130us * 357 times = 46ms

The 611, 8653 and 357 figures here are from an actual test, right?

Out of interest what was the total time for each case?
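
For concreteness, the context-switch flush being costed above amounts
to a by-VA invalidate loop over the vlpt range. Below is a minimal,
purely illustrative sketch; flush_tlb_entry() stands in for an ARMv7
TLB-invalidate-by-MVA operation and is not Xen's actual primitive:

    /* Sketch of flushing the vlpt's virtual range at context switch.
     * flush_tlb_entry() is a hypothetical stand-in, not Xen's helper. */
    #define PAGE_SIZE 4096UL

    static inline void flush_tlb_entry(unsigned long va)
    {
    #if defined(__arm__)
        asm volatile("mcr p15, 0, %0, c8, c7, 1" : : "r"(va) : "memory"); /* TLBIMVA */
    #else
        (void)va; /* placeholder when built for another architecture */
    #endif
    }

    void flush_vlpt_range(unsigned long start, unsigned long size)
    {
        for (unsigned long va = start; va < start + size; va += PAGE_SIZE)
            flush_tlb_entry(va);
        /* A real implementation also needs DSB/ISB barriers afterwards. */
    }

At 4KB pages, the 34MB range is 8704 entries, so the measured 130us
works out to roughly 15ns per invalidate; the cost only bites when two
migrating domUs ping-pong on the same CPU.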

> Although page-table walk gives slightly better performance when
> migrating two domains, I think it is better to choose vlpt for
> the following reasons:
>  - In the above tests, I did not run any workloads in the migrating
>     domU, and IIRC, when I run gzip or bonnie++ in the domU, the
>     dirty pages grow to a few thousand. Then, the page-table walk
>     overhead becomes a few hundred milliseconds even when migrating
>     a single domain.
>  - I would expect that migrating a single domain will be used more
>     frequently than migrating multiple domains at a time.

Both of those seem like sound arguments to me.

> One more thing: regarding your comment about TLB lockdown, which was:
> > It occurs to me now that with 16 slots changing on context switch and 
> > a further 16 aliasing them (and hence requiring maintenance too) for 
> > the super pages it is possible that the TLB maintenance at context 
> > switch might get prohibitively expensive. We could address this by 
> > firstly only doing it when switching to/from domains which have log 
> > dirty mode enabled and then secondly by seeing if we can make use of 
> > global or locked down mappings for the static Xen .text/.data/.xenheap 
> > mappings and therefore allow us to use a bigger global flush.
> 
> Unfortunately the Cortex A15 does not appear to support TLB lockdown:
> http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0438d/CHDGEDAE.html

Oh well.

> And I am not sure whether setting the global bit of a page table
> entry prevents it from being flushed by a TLB flush operation.
> If it does, we may be able to decrease the vlpt overhead a lot.

Yes, this is something to investigate, but not urgently, I don't think.
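
For reference, the generic architectural mechanism here is the nG
(not-global) bit of LPAE stage-1 block/page descriptors: entries with
nG clear are global, and ASID-scoped invalidations such as TLBIASID
are architected to leave global entries in place. Whether that is
usable from Xen's own translation regime is exactly the open question
above. A purely illustrative sketch of flipping the bit (these helpers
are not Xen's):

    #include <stdint.h>

    /* LPAE stage-1 block/page descriptors carry nG at bit 11. */
    #define LPAE_NG (UINT64_C(1) << 11)

    static inline uint64_t pte_make_global(uint64_t pte)
    {
        return pte & ~LPAE_NG; /* survives per-ASID flushes */
    }

    static inline uint64_t pte_make_nonglobal(uint64_t pte)
    {
        return pte | LPAE_NG;  /* removed by per-ASID invalidation */
    }

If the static Xen .text/.data/.xenheap mappings could be made global
in this sense, an ASID-scoped flush covering the vlpt could in
principle leave them untouched, which is the idea floated earlier in
the thread.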

Ian.
