xen-devel.lists.xenproject.org archive mirror
From: Andres Lagar-Cavilla <andres@lagarcavilla.org>
To: xen-devel@lists.xen.org
Cc: andres@gridcentric.ca, keir.xen@gmail.com, tim@xen.org,
	adin@gridcentric.ca
Subject: [PATCH 0 of 3] RFC: x86 memory sharing performance improvements
Date: Thu, 12 Apr 2012 10:16:11 -0400	[thread overview]
Message-ID: <patchbomb.1334240171@xdev.gridcentric.ca> (raw)

This is an RFC series. I haven't fully tested it yet, but I want the concept to
be known, as I intend this to be merged before the 4.2 window closes.

The sharing subsystem does not scale elegantly with high degrees of page
sharing. The culprit is a reverse map that each shared frame maintains,
resolving to all the domain pages pointing to that shared frame. Because the
rmap is implemented as a linked list with O(n) search, CoW unsharing can
result in prolonged search times.
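
To illustrate the issue, here is a minimal sketch with hypothetical names and
types (not the actual Xen structures): each shared frame keeps a flat list of
<domain, gfn> backreferences, so breaking CoW for one page means walking the
whole list.

/* Sketch only: per-frame rmap as a singly-linked list of backreferences. */
#include <stddef.h>
#include <stdint.h>

struct gfn_ref {                   /* one <domain, gfn> backreference */
    uint16_t domain_id;
    uint64_t gfn;
    struct gfn_ref *next;
};

struct shared_frame {
    unsigned long nr_refs;         /* number of p2m entries pointing here */
    struct gfn_ref *rmap;          /* head of the reverse-map list */
};

/* CoW unshare must first find this domain's entry: an O(nr_refs) walk. */
static struct gfn_ref *rmap_find(struct shared_frame *sf,
                                 uint16_t domain_id, uint64_t gfn)
{
    struct gfn_ref *r;

    for ( r = sf->rmap; r != NULL; r = r->next )
        if ( r->domain_id == domain_id && r->gfn == gfn )
            return r;

    return NULL;
}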

This becomes most obvious during domain destruction, when all shared p2m
entries need to be unshared. Destroying a domain with a lot of sharing could
result in minutes of hypervisor freeze-up!

Solutions proposed:
- Make the p2m cleanup of shared entries part of the preemptible, synchronous
domain_kill domctl (as opposed to executing monolithically in the finalize
destruction RCU callback); a batching sketch follows below.
- When a shared frame exceeds an arbitrary ref count, mutate the rmap from a
linked list to a hash table; see the hash-table sketch below.
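
The first item, roughly: bound the work done per domctl invocation and let the
caller retry. A minimal sketch under hypothetical names (not the actual
domain_kill/relinquish code):

/* Sketch only: unshare a bounded batch per call; report "more to do". */
#include <stddef.h>
#include <stdint.h>

#define UNSHARE_BATCH 512          /* entries processed per invocation */

struct dying_domain {              /* stand-in for the real domain state */
    uint64_t *shared_gfns;         /* gfns still mapping shared frames */
    size_t nr_shared;              /* how many remain to be unshared */
};

static void unshare_one(struct dying_domain *d, uint64_t gfn)
{
    (void)d; (void)gfn;            /* CoW break / refcount drop elided */
}

/* Returns 1 if more work remains (caller re-invokes), 0 when done. */
static int relinquish_shared_pages(struct dying_domain *d)
{
    size_t done = 0;

    while ( d->nr_shared > 0 && done < UNSHARE_BATCH )
    {
        unshare_one(d, d->shared_gfns[--d->nr_shared]);
        done++;
    }

    return d->nr_shared > 0;       /* preemption point: resume later */
}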
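
The second item, roughly: once the backreference count crosses a threshold,
rehash the flat list into buckets so a lookup only touches one bucket. Again a
sketch with hypothetical names and an arbitrary threshold, not the patch
itself:

/* Sketch only: convert the rmap list into a hash table past a threshold. */
#include <stdint.h>
#include <stdlib.h>

#define RMAP_HASH_THRESHOLD 256    /* arbitrary cut-over point */
#define RMAP_HASH_BUCKETS   64

struct gfn_ref {
    uint16_t domain_id;
    uint64_t gfn;
    struct gfn_ref *next;
};

struct shared_frame {
    unsigned long nr_refs;
    int is_hashed;                 /* 0: flat list, 1: bucket array */
    union {
        struct gfn_ref *list;      /* small rmap */
        struct gfn_ref **buckets;  /* large rmap */
    } rmap;
};

static unsigned int rmap_hash(uint16_t domain_id, uint64_t gfn)
{
    return (unsigned int)((gfn ^ domain_id) % RMAP_HASH_BUCKETS);
}

/* Rehash the list into buckets once it has grown past the threshold. */
static int rmap_maybe_convert(struct shared_frame *sf)
{
    struct gfn_ref **buckets, *r, *next;

    if ( sf->is_hashed || sf->nr_refs <= RMAP_HASH_THRESHOLD )
        return 0;

    buckets = calloc(RMAP_HASH_BUCKETS, sizeof(*buckets));
    if ( buckets == NULL )
        return -1;                 /* keep the list if allocation fails */

    for ( r = sf->rmap.list; r != NULL; r = next )
    {
        unsigned int b = rmap_hash(r->domain_id, r->gfn);

        next = r->next;
        r->next = buckets[b];
        buckets[b] = r;
    }

    sf->rmap.buckets = buckets;
    sf->is_hashed = 1;

    return 0;
}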

Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>

 xen/arch/x86/domain.c             |   16 +++-
 xen/arch/x86/mm/mem_sharing.c     |   45 ++++++++++
 xen/arch/x86/mm/p2m.c             |    4 +
 xen/include/asm-arm/mm.h          |    4 +
 xen/include/asm-x86/domain.h      |    1 +
 xen/include/asm-x86/mem_sharing.h |   10 ++
 xen/include/asm-x86/p2m.h         |    4 +
 xen/arch/x86/mm/mem_sharing.c     |  142 +++++++++++++++++++++++--------
 xen/arch/x86/mm/mem_sharing.c     |  170 +++++++++++++++++++++++++++++++++++--
 xen/include/asm-x86/mem_sharing.h |   13 ++-
 10 files changed, 354 insertions(+), 55 deletions(-)


Thread overview: 12+ messages
2012-04-12 14:16 Andres Lagar-Cavilla [this message]
2012-04-12 14:16 ` [PATCH 1 of 3] x86/mm/sharing: Clean ups for relinquishing shared pages on destroy Andres Lagar-Cavilla
2012-04-18 12:42   ` Tim Deegan
2012-04-18 13:06     ` Andres Lagar-Cavilla
2012-04-12 14:16 ` [PATCH 2 of 3] x86/mem_sharing: modularize reverse map for shared frames Andres Lagar-Cavilla
2012-04-18 14:05   ` Tim Deegan
2012-04-18 14:19     ` Andres Lagar-Cavilla
2012-04-12 14:16 ` [PATCH 3 of 3] x86/mem_sharing: For shared pages with many references, use a hash table instead of a list Andres Lagar-Cavilla
2012-04-18 15:35   ` Tim Deegan
2012-04-18 16:18     ` Andres Lagar-Cavilla
2012-04-24 19:33       ` Andres Lagar-Cavilla
