From: Marcelo Tosatti <mtosatti@redhat.com>
To: Takuya Yoshikawa <takuya.yoshikawa@gmail.com>
Cc: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>,
	Avi Kivity <avi@redhat.com>, LKML <linux-kernel@vger.kernel.org>,
	KVM <kvm@vger.kernel.org>
Subject: Re: [PATCH v4 06/10] KVM: MMU: fast path of handling guest page fault
Date: Mon, 30 Apr 2012 23:31:45 -0300	[thread overview]
Message-ID: <20120501023145.GC10142@amt.cnet> (raw)
In-Reply-To: <20120429175004.b54d8c095a60d98c8cdbc942@gmail.com>

On Sun, Apr 29, 2012 at 05:50:04PM +0900, Takuya Yoshikawa wrote:
> On Fri, 27 Apr 2012 11:52:13 -0300
> Marcelo Tosatti <mtosatti@redhat.com> wrote:
> 
> > Yes, but the objective you are aiming for is to read and write sptes
> > without mmu_lock. That is, I am not talking about this patch.
> > Please read carefully the two examples I gave (separated by "example)").
> 
> The real objective is still not clear.
> 
> The ~10% improvement reported before was on macro benchmarks during live
> migration.  At least, that optimization was the initial objective.
> 
> But at some point, the objective suddenly changed to "lock-less" without
> understanding what introduced the original improvement.
> 
> Was the problem really mmu_lock contention?
> 
> If the path being introduced by this patch is really fast, isn't it
> possible to achieve the same improvement still using mmu_lock?

Right. Supposedly, mmu_lock cacheline bouncing is the problem. Hum:

$ pahole -C "kvm" /tmp/kvm.ko
struct kvm {
	spinlock_t                 mmu_lock;             /*     0     2 */

	/* XXX 6 bytes hole, try to pack */

	struct mutex               slots_lock;           /*     8    32 */
	struct mm_struct *         mm;                   /*    40     8 */
	struct kvm_memslots *      memslots;             /*    48     8 */
	struct srcu_struct         srcu;                 /*    56    48 */
	/* --- cacheline 1 boundary (64 bytes) was 40 bytes ago --- */
	u32                        bsp_vcpu_id;          /*   104     4 */

	/* XXX 4 bytes hole, try to pack */

Oops. False sharing?
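
If mmu_lock really shares its cache line with those read-mostly fields
(mm, memslots and srcu are dereferenced on hot paths), one quick
experiment would be to start the read-mostly fields on a fresh cache
line. A minimal sketch, assuming the standard
____cacheline_aligned_in_smp annotation; this is an experiment to rule
false sharing in or out, not a proposed patch:

struct kvm {
	/* hot: dirtied on every lock acquire/release */
	spinlock_t		mmu_lock;

	/*
	 * Read-mostly fields: force them onto the next cache line so
	 * that mmu_lock writers stop invalidating the line that every
	 * fault/lookup path reads.
	 */
	struct mutex		slots_lock ____cacheline_aligned_in_smp;
	struct mm_struct	*mm;
	struct kvm_memslots	*memslots;
	struct srcu_struct	srcu;
	/* ... */
};

Comparing the benchmark before and after such padding would separate
cache-line bouncing from lock hold time as the cause of the contention.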

> Note: During live migration, the fact that the guest gets faulted is
> itself a limitation.  We can easily see a noticeable slowdown of a
> program even if it runs entirely between two GET_DIRTY_LOGs.
> 
> 
> > The rules for code under mmu_lock should be:
> > 
> > 1) Spte updates under mmu_lock must always be atomic and
> > performed with locked instructions.
> > 2) Spte values must be read once, and appropriate action
> > must be taken when writing them back in case their value
> > has changed (remote TLB flush might be required).
> 
> Although I am not certain about what will really be needed in the
> final form, if this kind of possibly-needed overhead is going to be
> added little by little, I worry about regressions.
> 
> Thanks,
> 	Takuya

Yes, that is a possibility.
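
As a concrete illustration of the two rules quoted above, the write-back
step could look roughly like this. This is only a sketch: the helper
name spte_write_back is hypothetical, and ACCESS_ONCE(), cmpxchg64() and
is_writable_pte() are assumed from the kernel of that era; none of it is
taken from the patch series itself.

static bool spte_write_back(u64 *sptep, u64 new_spte)
{
	/* Rule 2: read the spte exactly once. */
	u64 old_spte = ACCESS_ONCE(*sptep);

	/* Rule 1: update atomically, with a locked instruction. */
	while (cmpxchg64(sptep, old_spte, new_spte) != old_spte) {
		/*
		 * The value changed under us (hardware A/D bits, or a
		 * lockless fast-path writer): re-read and retry.  Real
		 * code may need to recompute new_spte here.
		 */
		old_spte = ACCESS_ONCE(*sptep);
	}

	/*
	 * Rule 2, second half: decide follow-up action from the value
	 * actually replaced.  If a writable spte became read-only, the
	 * guest may still hold a stale writable TLB entry.
	 */
	return is_writable_pte(old_spte) && !is_writable_pte(new_spte);
}

A caller would then invoke kvm_flush_remote_tlbs() whenever this
returns true.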

Thread overview: 31+ messages
2012-04-25  4:00 [PATCH v4 00/10] KVM: MMU: fast page fault Xiao Guangrong
2012-04-25  4:01 ` [PATCH v4 01/10] KVM: MMU: return bool in __rmap_write_protect Xiao Guangrong
2012-04-25  4:01 ` [PATCH v4 02/10] KVM: MMU: abstract spte write-protect Xiao Guangrong
2012-04-25  4:02 ` [PATCH v4 03/10] KVM: VMX: export PFEC.P bit on ept Xiao Guangrong
2012-04-25  4:02 ` [PATCH v4 04/10] KVM: MMU: introduce SPTE_MMU_WRITEABLE bit Xiao Guangrong
2012-04-25  4:03 ` [PATCH v4 05/10] KVM: MMU: introduce SPTE_WRITE_PROTECT bit Xiao Guangrong
2012-04-25  4:03 ` [PATCH v4 06/10] KVM: MMU: fast path of handling guest page fault Xiao Guangrong
2012-04-26 23:45   ` Marcelo Tosatti
2012-04-27  5:53     ` Xiao Guangrong
2012-04-27 14:52       ` Marcelo Tosatti
2012-04-28  6:10         ` Xiao Guangrong
2012-05-01  1:34           ` Marcelo Tosatti
2012-05-02  5:28             ` Xiao Guangrong
2012-05-02 21:07               ` Marcelo Tosatti
2012-05-03 11:26                 ` Xiao Guangrong
2012-05-05 14:08                   ` Marcelo Tosatti
2012-05-06  9:36                     ` Avi Kivity
2012-05-07  6:52                     ` Xiao Guangrong
2012-04-29  8:50         ` Takuya Yoshikawa
2012-05-01  2:31           ` Marcelo Tosatti [this message]
2012-05-02  5:39           ` Xiao Guangrong
2012-05-02 21:10             ` Marcelo Tosatti
2012-05-03 12:09               ` Xiao Guangrong
2012-05-03 12:13                 ` Avi Kivity
2012-05-03  0:15             ` Takuya Yoshikawa
2012-05-03 12:23               ` Xiao Guangrong
2012-05-03 12:40                 ` Takuya Yoshikawa
2012-04-25  4:04 ` [PATCH v4 07/10] KVM: MMU: lockless update spte on fast page fault path Xiao Guangrong
2012-04-25  4:04 ` [PATCH v4 08/10] KVM: MMU: trace fast page fault Xiao Guangrong
2012-04-25  4:05 ` [PATCH v4 09/10] KVM: MMU: fix kvm_mmu_pagetable_walk tracepoint Xiao Guangrong
2012-04-25  4:06 ` [PATCH v4 10/10] KVM: MMU: document mmu-lock and fast page fault Xiao Guangrong
