From: jan.glauber@caviumnetworks.com (Jan Glauber)
To: linux-arm-kernel@lists.infradead.org
Subject: RCU stall with high number of KVM vcpus
Date: Tue, 14 Nov 2017 08:49:54 +0100
Message-ID: <20171114074954.GA16731@hc>
In-Reply-To: <5FC3163CFD30C246ABAA99954A238FA83845C7B4@FRAEML521-MBX.china.huawei.com>

On Mon, Nov 13, 2017 at 06:13:08PM +0000, Shameerali Kolothum Thodi wrote:

[...]

> > > > numbers don't look good, see waittime-max:
> > > >
> > > > --------------------------------------------------------------------------------------------------------------
> > > >               class name    con-bounces    contentions   waittime-min   waittime-max waittime-total   waittime-avg    acq-bounces   acquisitions   holdtime-min   holdtime-max holdtime-total   holdtime-avg
> > > > --------------------------------------------------------------------------------------------------------------
> > > >
> > > > &(&kvm->mmu_lock)->rlock:      99346764       99406604           0.14  1321260806.59 710654434972.0        7148.97      154228320      225122857           0.13   917688890.60  3705916481.39          16.46
> > > > ------------------------
> > > > &(&kvm->mmu_lock)->rlock       99365598   [<ffff0000080b43b8>] kvm_handle_guest_abort+0x4c0/0x950
> > > > &(&kvm->mmu_lock)->rlock          25164   [<ffff0000080a4e30>] kvm_mmu_notifier_invalidate_range_start+0x70/0xe8
> > > > &(&kvm->mmu_lock)->rlock          14934   [<ffff0000080a7eec>] kvm_mmu_notifier_invalidate_range_end+0x24/0x68
> > > > &(&kvm->mmu_lock)->rlock            908   [<ffff00000810a1f0>] __cond_resched_lock+0x68/0xb8
> > > > ------------------------
> > > > &(&kvm->mmu_lock)->rlock              3   [<ffff0000080b34c8>] stage2_flush_vm+0x60/0xd8
> > > > &(&kvm->mmu_lock)->rlock       99186296   [<ffff0000080b43b8>] kvm_handle_guest_abort+0x4c0/0x950
> > > > &(&kvm->mmu_lock)->rlock         179238   [<ffff0000080a4e30>] kvm_mmu_notifier_invalidate_range_start+0x70/0xe8
> > > > &(&kvm->mmu_lock)->rlock          19181   [<ffff0000080a7eec>] kvm_mmu_notifier_invalidate_range_end+0x24/0x68
> 
> That looks similar to something we had on our hip07 platform when
> multiple VMs were launched. The issue was tracked down to CONFIG_NUMA
> being set with memory-less nodes. This results in a lot of individual
> 4K pages, and unmap_stage2_ptes() takes a good amount of time, coupled
> with some HW cache flush latencies. I am not sure you are seeing the
> same thing, but it may be worth checking.

Hi Shameer,

thanks for the tip. We don't have memory-less nodes, but it might be
related to NUMA. I've tried putting the guest onto one node, but that
did not help. Per-process NUMA placement (numastat, values in MB):

PID                               Node 0          Node 1           Total
-----------------------  --------------- --------------- ---------------
56753 (qemu-nbd)                    4.48           11.16           15.64
56813 (qemu-system-aar)             2.02         1685.72         1687.75
-----------------------  --------------- --------------- ---------------
Total                               6.51         1696.88         1703.39
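
For completeness, the waittime numbers above come from /proc/lock_stat
(CONFIG_LOCK_STAT=y, enabled with "echo 1 > /proc/sys/kernel/lock_stat").
Below is a minimal sketch of a userspace filter that prints just the
mmu_lock lines plus the column header; it is illustrative only, not an
existing tool:

#include <stdio.h>
#include <string.h>

int main(void)
{
	/* lock statistics live in /proc/lock_stat when CONFIG_LOCK_STAT=y */
	FILE *f = fopen("/proc/lock_stat", "r");
	char line[1024];

	if (!f) {
		perror("fopen /proc/lock_stat");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		/* keep the column header and any line mentioning mmu_lock */
		if (strstr(line, "class name") || strstr(line, "mmu_lock"))
			fputs(line, stdout);
	}
	fclose(f);
	return 0;
}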

I'll try switching to 64K pages in the host next.
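
(Back-of-the-envelope, assuming every guest page is faulted in exactly
once: the ~1688 MB resident above is ~432k stage-2 faults with 4K
pages, but only ~27k with a 64K granule, i.e. 16x fewer trips through
kvm_handle_guest_abort and thus 16x fewer mmu_lock acquisitions in the
best case.)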

thanks,
Jan

Thread overview: 11+ messages
     [not found] <20171113131000.GA10546@hc>
2017-11-13 13:47 ` RCU stall with high number of KVM vcpus Marc Zyngier
2017-11-13 17:35   ` Jan Glauber
2017-11-13 18:11     ` Marc Zyngier
2017-11-13 18:40       ` Jan Glauber
2017-11-14 13:30         ` Marc Zyngier
2017-11-14 14:19           ` Jan Glauber
2017-11-14  7:52       ` Jan Glauber
2017-11-14  8:49         ` Marc Zyngier
2017-11-14 11:34           ` Suzuki K Poulose
2017-11-13 18:13     ` Shameerali Kolothum Thodi
2017-11-14  7:49       ` Jan Glauber [this message]
