Date: Tue, 23 Jul 2024 18:56:23 +0100
Message-ID: <86bk2o2drs.wl-maz@kernel.org>
From: Marc Zyngier <maz@kernel.org>
To: Zhou Wang <wangzhou1@hisilicon.com>
Cc: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, Thomas Gleixner <tglx@linutronix.de>,
 Nianyao Tang <tangnianyao@huawei.com>
Subject: Re: [PATCH 3/3] irqchip/gic-v4: Make sure a VPE is locked when VMAPP is issued
In-Reply-To: <4b05e4fe-0906-102f-5697-eec7ee222bef@hisilicon.com>
References: <20240705093155.871070-1-maz@kernel.org>
 <20240705093155.871070-4-maz@kernel.org>
 <2c9489cc-d276-7c52-5d52-7f234fdc726e@hisilicon.com>
 <86h6cl39ff.wl-maz@kernel.org>
 <4b05e4fe-0906-102f-5697-eec7ee222bef@hisilicon.com>
On Tue, 23 Jul 2024 02:51:32 +0100,
Zhou Wang wrote:
>
> On 2024/7/19 19:31, Marc Zyngier wrote:
> > On Fri, 19 Jul 2024 10:42:02 +0100,
> > Zhou Wang wrote:
> >>
> >> On 2024/7/5 17:31, Marc Zyngier wrote:
> >>> In order to make sure that vpe->col_idx is correctly sampled
> >>> when a VMAPP command is issued, we must hold the lock for the
> >>> VPE. This is now possible since the introduction of the per-VM
> >>> vmapp_lock, which can be taken before vpe_lock in the locking
> >>> order.
> >>>
> >>> Signed-off-by: Marc Zyngier <maz@kernel.org>
> >>> ---
> >>>  drivers/irqchip/irq-gic-v3-its.c | 8 ++++++--
> >>>  1 file changed, 6 insertions(+), 2 deletions(-)
> >>>
> >>> diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
> >>> index b52d60097cad5..951ec140bcea2 100644
> >>> --- a/drivers/irqchip/irq-gic-v3-its.c
> >>> +++ b/drivers/irqchip/irq-gic-v3-its.c
> >>> @@ -1810,7 +1810,9 @@ static void its_map_vm(struct its_node *its, struct its_vm *vm)
> >>>  		for (i = 0; i < vm->nr_vpes; i++) {
> >>>  			struct its_vpe *vpe = vm->vpes[i];
> >>>
> >>> -			its_send_vmapp(its, vpe, true);
> >>> +			scoped_guard(raw_spinlock, &vpe->vpe_lock)
> >>> +				its_send_vmapp(its, vpe, true);
> >>> +
> >>>  			its_send_vinvall(its, vpe);
> >>>  		}
> >>>  	}
> >>> @@ -1827,8 +1829,10 @@ static void its_unmap_vm(struct its_node *its, struct its_vm *vm)
> >>>  	if (!--vm->vlpi_count[its->list_nr]) {
> >>>  		int i;
> >>>
> >>> -		for (i = 0; i < vm->nr_vpes; i++)
> >>> +		for (i = 0; i < vm->nr_vpes; i++) {
> >>> +			guard(raw_spinlock)(&vm->vpes[i]->vpe_lock);
> >>>  			its_send_vmapp(its, vm->vpes[i], false);
> >>> +		}
> >>>  	}
> >>>  }
> >>>
> >>
> >> Hi Marc,
> >>
> >> It looks like there is an ABBA deadlock after applying this series:
> >>
> >> In its_map_vm: vmapp_lock -> vpe_lock
> >> In its_vpe_set_affinity: vpe_to_cpuid_lock(vpe_lock) -> its_send_vmovp(vmapp_lock)
> >>
> >> Any idea about this?
> >
> > Hmmm, well spotted. That's an annoying one.
> >
> > Can you give the below hack a go? I've only lightly tested it, as my
> > D05 box is on its last leg (it is literally falling apart) and I don't
> > have any other GICv4.x box to test on.
> >
> > Thanks,
> >
> > 	M.
> >
> > diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
> > index 951ec140bcea2..b88c6011c8771 100644
> > --- a/drivers/irqchip/irq-gic-v3-its.c
> > +++ b/drivers/irqchip/irq-gic-v3-its.c
> > @@ -1328,12 +1328,6 @@ static void its_send_vmovp(struct its_vpe *vpe)
> >  		return;
> >  	}
> >
> > -	/*
> > -	 * Protect against concurrent updates of the mapping state on
> > -	 * individual VMs.
> > -	 */
> > -	guard(raw_spinlock_irqsave)(&vpe->its_vm->vmapp_lock);
> > -
> >  	/*
> >  	 * Yet another marvel of the architecture. If using the
> >  	 * its_list "feature", we need to make sure that all ITSs
> > @@ -3808,7 +3802,7 @@ static int its_vpe_set_affinity(struct irq_data *d,
> >  	struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
> >  	unsigned int from, cpu = nr_cpu_ids;
> >  	struct cpumask *table_mask;
> > -	unsigned long flags;
> > +	unsigned long flags, vmapp_flags;
> >
> >  	/*
> >  	 * Changing affinity is mega expensive, so let's be as lazy as
> > @@ -3822,7 +3816,14 @@ static int its_vpe_set_affinity(struct irq_data *d,
> >  	 * protect us, and that we must ensure nobody samples vpe->col_idx
> >  	 * during the update, hence the lock below which must also be
> >  	 * taken on any vLPI handling path that evaluates vpe->col_idx.
> > +	 *
> > +	 * Finally, we must protect ourselves against concurrent
> > +	 * updates of the mapping state on this VM should the ITS list
> > +	 * be in use.
> >  	 */
> > +	if (its_list_map)
> > +		raw_spin_lock_irqsave(&vpe->its_vm->vmapp_lock, vmapp_flags);
> > +
> >  	from = vpe_to_cpuid_lock(vpe, &flags);
> >  	table_mask = gic_data_rdist_cpu(from)->vpe_table_mask;
> >
> > @@ -3852,6 +3853,9 @@ static int its_vpe_set_affinity(struct irq_data *d,
> >  	irq_data_update_effective_affinity(d, cpumask_of(cpu));
> >  	vpe_to_cpuid_unlock(vpe, flags);
> >
> > +	if (its_list_map)
> > +		raw_spin_unlock_irqrestore(&vpe->its_vm->vmapp_lock, vmapp_flags);
> > +
> >  	return IRQ_SET_MASK_OK_DONE;
> >  }
> >
>
> Hi Marc,
>
> We added the above code and ran the test again; now it is OK.

Great, thanks for giving it a go. I have just posted an actual patch
(with the exact same change) at [1]. It would be good if you could
give it a Tested-by: tag.

Thanks,

	M.

[1] https://lore.kernel.org/r/20240723175203.3193882-1-maz@kernel.org

-- 
Without deviation from the norm, progress is not possible.
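
---

As an aside for readers following the thread: the inversion Zhou reports can be modelled outside the kernel. The sketch below is a self-contained userspace C program in which pthread mutexes stand in for the driver's raw spinlocks; the helper names map_vm_path and set_affinity_path and the trylock-based reporting are purely illustrative, not driver code. One thread takes the locks in the its_map_vm() order (vmapp_lock then vpe_lock), the other in the pre-fix its_vpe_set_affinity() order (vpe_lock then vmapp_lock); whether the inversion is actually hit on a given run is timing dependent. Marc's fix restores a single global order by taking vmapp_lock, when the ITS list is in use, before vpe_to_cpuid_lock() in the affinity path and dropping the acquisition from its_send_vmovp().

	/* Build with: cc -pthread abba-demo.c -o abba-demo */
	#include <pthread.h>
	#include <stdio.h>
	#include <unistd.h>

	/* Stand-ins for the ITS driver's per-VM vmapp_lock and per-VPE vpe_lock. */
	static pthread_mutex_t vmapp_lock = PTHREAD_MUTEX_INITIALIZER;
	static pthread_mutex_t vpe_lock   = PTHREAD_MUTEX_INITIALIZER;

	/* Models the its_map_vm() path: vmapp_lock first, then vpe_lock. */
	static void *map_vm_path(void *unused)
	{
		(void)unused;
		pthread_mutex_lock(&vmapp_lock);
		usleep(1000);			/* widen the race window */
		pthread_mutex_lock(&vpe_lock);	/* blocks if the other path holds vpe_lock */
		puts("map_vm path: vmapp_lock -> vpe_lock");
		pthread_mutex_unlock(&vpe_lock);
		pthread_mutex_unlock(&vmapp_lock);
		return NULL;
	}

	/* Models the pre-fix its_vpe_set_affinity() path: vpe_lock first, then vmapp_lock. */
	static void *set_affinity_path(void *unused)
	{
		(void)unused;
		pthread_mutex_lock(&vpe_lock);
		usleep(1000);
		/*
		 * A real raw spinlock would spin here while the other thread
		 * waits on vpe_lock: that is the ABBA deadlock. trylock is
		 * used only so this demo can report it and still terminate.
		 */
		if (pthread_mutex_trylock(&vmapp_lock)) {
			puts("ABBA: vmapp_lock held elsewhere while we hold vpe_lock");
		} else {
			puts("set_affinity path: vpe_lock -> vmapp_lock (no contention this run)");
			pthread_mutex_unlock(&vmapp_lock);
		}
		pthread_mutex_unlock(&vpe_lock);
		return NULL;
	}

	int main(void)
	{
		pthread_t a, b;

		pthread_create(&a, NULL, map_vm_path, NULL);
		pthread_create(&b, NULL, set_affinity_path, NULL);
		pthread_join(a, NULL);
		pthread_join(b, NULL);
		return 0;
	}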