From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sun, 28 Jul 2024 10:42:32 +0100
Message-ID: <87y15l4zuf.wl-maz@kernel.org>
From: Marc Zyngier <maz@kernel.org>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	Zhou Wang <wangzhou1@hisilicon.com>
Subject: Re: [PATCH] irqchip/gic-v4: Fix ordering between vmapp and vpe locks
In-Reply-To: <875xsrvpt3.ffs@tglx>
References: <20240723175203.3193882-1-maz@kernel.org> <875xsrvpt3.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 26 Jul 2024 21:52:40 +0100,
Thomas Gleixner <tglx@linutronix.de> wrote:
> 
> On Tue, Jul 23 2024 at 18:52, Marc Zyngier wrote:
> > @@ -3808,7 +3802,7 @@ static int its_vpe_set_affinity(struct irq_data *d,
> >  	struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
> >  	unsigned int from, cpu = nr_cpu_ids;
> >  	struct cpumask *table_mask;
> > -	unsigned long flags;
> > +	unsigned long flags, vmapp_flags;
> 
> What's this flags business for? its_vpe_set_affinity() is called with
> interrupts disabled, no?

Duh. Of course.
Cargo-culted braindead logic. I'll fix that.

> >  	/*
> >  	 * Changing affinity is mega expensive, so let's be as lazy as
> > @@ -3822,7 +3816,14 @@ static int its_vpe_set_affinity(struct irq_data *d,
> >  	 * protect us, and that we must ensure nobody samples vpe->col_idx
> >  	 * during the update, hence the lock below which must also be
> >  	 * taken on any vLPI handling path that evaluates vpe->col_idx.
> > +	 *
> > +	 * Finally, we must protect ourselves against concurrent
> > +	 * updates of the mapping state on this VM should the ITS list
> > +	 * be in use.
> >  	 */
> > +	if (its_list_map)
> > +		raw_spin_lock_irqsave(&vpe->its_vm->vmapp_lock, vmapp_flags);
> 
> Confused. This changes the locking from unconditional to
> conditional. What's the rationale here?

I think I'm confused too. I've written this as a mix of the VMOVP lock
(which must be conditional) and the new VMAPP lock, which must be taken
to avoid racing against a new vcpu coming up. And of course, this makes
zero sense.

I'll get some sleep first, and then fix this correctly.

Thanks for spotting it.

	M.

-- 
Without deviation from the norm, progress is not possible.