linux-arm-kernel.lists.infradead.org archive mirror
From: Marc Zyngier <maz@kernel.org>
To: Kunkun Jiang <jiangkunkun@huawei.com>
Cc: Thomas Gleixner <tglx@linutronix.de>,
	Oliver Upton <oliver.upton@linux.dev>,
	James Morse <james.morse@arm.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	Zenghui Yu <yuzenghui@huawei.com>,
	"open list:IRQ SUBSYSTEM" <linux-kernel@vger.kernel.org>,
	"moderated list:ARM SMMU DRIVERS"
	<linux-arm-kernel@lists.infradead.org>, <kvmarm@lists.linux.dev>,
	"wanghaibin.wang@huawei.com" <wanghaibin.wang@huawei.com>,
	<nizhiqiang1@huawei.com>,
	"tangnianyao@huawei.com" <tangnianyao@huawei.com>,
	<wangzhou1@hisilicon.com>
Subject: Re: [bug report] GICv4.1: multiple vpus execute vgic_v4_load at the same time will greatly increase the time consumption
Date: Wed, 21 Aug 2024 11:59:01 +0100	[thread overview]
Message-ID: <86msl6xhu2.wl-maz@kernel.org> (raw)
In-Reply-To: <a7fc58e4-64c2-77fc-c1dc-f5eb78dbbb01@huawei.com>

On Wed, 21 Aug 2024 10:51:27 +0100,
Kunkun Jiang <jiangkunkun@huawei.com> wrote:
> 
> Hi all,
> 
> Recently I discovered a problem about GICv4.1, the scenario is as follows:
> 1. Enable GICv4.1
> 2. Create multiple VMs. For example, 50 VMs (4U8G)

I don't know what 4U8G means. On how many physical CPUs are you
running 50 VMs? Direct injection of interrupts and over-subscription
are fundamentally incompatible.

> 3. The workload running in the VMs performs frequent MMIO accesses and needs
>    to exit to QEMU for processing.
> 4. Or modify the kvm code so that wfi must trap to kvm
> 5. Then the utilization of the pCPU where the vCPU is located reaches 100%,
>    and it is basically all in sys.

What did you expect? If you trap all the time, your performance will
suck.  Don't do that.

> 6. This problem does not exist in GICv3.

Because GICv3 doesn't have the same constraints.

> 
> According to analysis, this problem is due to the execution of vgic_v4_load.
> vcpu_load or kvm_sched_in
>     kvm_arch_vcpu_load
>     ...
>         vgic_v4_load
>             irq_set_affinity
>             ...
>                 irq_do_set_affinity
>                     raw_spin_lock(&tmp_mask_lock)
>                     chip->irq_set_affinity
>                     ...
>                       its_vpe_set_affinity
> 
> The tmp_mask_lock is the key. This is a global lock. I don't quite
> understand why tmp_mask_lock is needed here. I think there are two
> possible solutions here:
> 1. Remove this tmp_mask_lock

Maybe you could have a look at 33de0aa4bae98 (and 11ea68f553e24)? It
would allow you to understand the nature of the problem.

This can probably be replaced with a per-CPU cpumask, which would
avoid the locking, but potentially result in a larger memory usage.

> 2. Modify the GICv4 driver so that it does not perform a VMOVP via
> irq_set_affinity.

Sure. You could also not use KVM at all if you don't care about interrupts
being delivered to your VM. We do not send a VMOVP just for fun. We
send it because your vcpu has moved to a different CPU, and the ITS
needs to know about that.

You seem to be misunderstanding the use case for GICv4: a partitioned
system, without any over-subscription, no vcpu migration between CPUs.
If that's not your setup, then GICv4 will always be a net loss
compared to SW injection with GICv3 (additional HW interaction,
doorbell interrupts).

	M.

-- 
Without deviation from the norm, progress is not possible.



Thread overview: 9+ messages
2024-08-21  9:51 [bug report] GICv4.1: multiple vpus execute vgic_v4_load at the same time will greatly increase the time consumption Kunkun Jiang
2024-08-21 10:59 ` Marc Zyngier [this message]
2024-08-21 18:23   ` Kunkun Jiang
2024-08-22  8:26     ` Marc Zyngier
2024-08-22 10:59       ` Kunkun Jiang
2024-08-22 12:47         ` Marc Zyngier
2024-08-22 21:20           ` Thomas Gleixner
2024-08-23  8:49             ` Marc Zyngier
2024-08-26  3:10               ` Kunkun Jiang
