Date: Fri, 19 Jul 2024 12:31:16 +0100
Message-ID: <86h6cl39ff.wl-maz@kernel.org>
From: Marc Zyngier <maz@kernel.org>
To: Zhou Wang <wangzhou1@hisilicon.com>
Cc: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	Thomas Gleixner <tglx@linutronix.de>,
	Nianyao Tang <tangnianyao@huawei.com>
Subject: Re: [PATCH 3/3] irqchip/gic-v4: Make sure a VPE is locked when VMAPP is issued
In-Reply-To: <2c9489cc-d276-7c52-5d52-7f234fdc726e@hisilicon.com>
References: <20240705093155.871070-1-maz@kernel.org>
	<20240705093155.871070-4-maz@kernel.org>
	<2c9489cc-d276-7c52-5d52-7f234fdc726e@hisilicon.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
On Fri, 19 Jul 2024 10:42:02 +0100,
Zhou Wang <wangzhou1@hisilicon.com> wrote:
> 
> On 2024/7/5 17:31, Marc Zyngier wrote:
> > In order to make sure that vpe->col_idx is correctly sampled
> > when a VMAPP command is issued, we must hold the lock for the
> > VPE. This is now possible since the introduction of the per-VM
> > vmapp_lock, which can be taken before vpe_lock in the locking
> > order.
> > 
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  drivers/irqchip/irq-gic-v3-its.c | 8 ++++++--
> >  1 file changed, 6 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
> > index b52d60097cad5..951ec140bcea2 100644
> > --- a/drivers/irqchip/irq-gic-v3-its.c
> > +++ b/drivers/irqchip/irq-gic-v3-its.c
> > @@ -1810,7 +1810,9 @@ static void its_map_vm(struct its_node *its, struct its_vm *vm)
> >  		for (i = 0; i < vm->nr_vpes; i++) {
> >  			struct its_vpe *vpe = vm->vpes[i];
> >  
> > -			its_send_vmapp(its, vpe, true);
> > +			scoped_guard(raw_spinlock, &vpe->vpe_lock)
> > +				its_send_vmapp(its, vpe, true);
> > +
> >  			its_send_vinvall(its, vpe);
> >  		}
> >  	}
> > @@ -1827,8 +1829,10 @@ static void its_unmap_vm(struct its_node *its, struct its_vm *vm)
> >  	if (!--vm->vlpi_count[its->list_nr]) {
> >  		int i;
> >  
> > -		for (i = 0; i < vm->nr_vpes; i++)
> > +		for (i = 0; i < vm->nr_vpes; i++) {
> > +			guard(raw_spinlock)(&vm->vpes[i]->vpe_lock);
> >  			its_send_vmapp(its, vm->vpes[i], false);
> > +		}
> >  	}
> >  }
> > 
> 
> Hi Marc,
> 
> It looks like there is an ABBA deadlock after applying this series:
> 
> In its_map_vm: vmapp_lock -> vpe_lock
> In its_vpe_set_affinity: vpe_to_cpuid_lock (vpe_lock) -> its_send_vmovp (vmapp_lock)
> 
> Any idea about this?

Hmmm, well spotted. That's an annoying one.
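To make the inversion concrete, here is a stripped-down sketch of the
two call paths (an illustration only: map_path()/affinity_path() are
made-up names, and the structures are trimmed to just the two locks):

#include <linux/spinlock.h>
#include <linux/cleanup.h>

struct its_vm {
	raw_spinlock_t	vmapp_lock;
};

struct its_vpe {
	struct its_vm	*its_vm;
	raw_spinlock_t	vpe_lock;
};

/* its_map_vm() path: vmapp_lock (A) first, then vpe_lock (B) */
static void map_path(struct its_vm *vm, struct its_vpe *vpe)
{
	guard(raw_spinlock_irqsave)(&vm->vmapp_lock);
	scoped_guard(raw_spinlock, &vpe->vpe_lock) {
		/* its_send_vmapp() runs here */
	}
}

/* its_vpe_set_affinity() path: vpe_lock (B) first, then vmapp_lock (A) */
static void affinity_path(struct its_vpe *vpe)
{
	guard(raw_spinlock)(&vpe->vpe_lock);			/* vpe_to_cpuid_lock() */
	guard(raw_spinlock_irqsave)(&vpe->its_vm->vmapp_lock);	/* its_send_vmovp() */
}

Each path is fine on its own, but run them concurrently and one CPU
can hold A while waiting for B as the other holds B while waiting for
A, which is the classic AB/BA cycle.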
Can you give the below hack a go? I've only lightly tested it, as my
D05 box is on its last legs (it is literally falling apart) and I
don't have any other GICv4.x box to test on.

Thanks,

	M.

diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index 951ec140bcea2..b88c6011c8771 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -1328,12 +1328,6 @@ static void its_send_vmovp(struct its_vpe *vpe)
 		return;
 	}
 
-	/*
-	 * Protect against concurrent updates of the mapping state on
-	 * individual VMs.
-	 */
-	guard(raw_spinlock_irqsave)(&vpe->its_vm->vmapp_lock);
-
 	/*
 	 * Yet another marvel of the architecture. If using the
 	 * its_list "feature", we need to make sure that all ITSs
@@ -3808,7 +3802,7 @@ static int its_vpe_set_affinity(struct irq_data *d,
 	struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
 	unsigned int from, cpu = nr_cpu_ids;
 	struct cpumask *table_mask;
-	unsigned long flags;
+	unsigned long flags, vmapp_flags;
 
 	/*
 	 * Changing affinity is mega expensive, so let's be as lazy as
@@ -3822,7 +3816,14 @@ static int its_vpe_set_affinity(struct irq_data *d,
 	 * protect us, and that we must ensure nobody samples vpe->col_idx
 	 * during the update, hence the lock below which must also be
 	 * taken on any vLPI handling path that evaluates vpe->col_idx.
+	 *
+	 * Finally, we must protect ourselves against concurrent
+	 * updates of the mapping state on this VM should the ITS list
+	 * be in use.
 	 */
+	if (its_list_map)
+		raw_spin_lock_irqsave(&vpe->its_vm->vmapp_lock, vmapp_flags);
+
 	from = vpe_to_cpuid_lock(vpe, &flags);
 	table_mask = gic_data_rdist_cpu(from)->vpe_table_mask;
 
@@ -3852,6 +3853,9 @@ static int its_vpe_set_affinity(struct irq_data *d,
 	irq_data_update_effective_affinity(d, cpumask_of(cpu));
 	vpe_to_cpuid_unlock(vpe, flags);
 
+	if (its_list_map)
+		raw_spin_unlock_irqrestore(&vpe->its_vm->vmapp_lock, vmapp_flags);
+
 	return IRQ_SET_MASK_OK_DONE;
 }

-- 
Without deviation from the norm, progress is not possible.