From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 09 Jan 2026 16:13:02 +0000
Message-ID: <86wm1qlq7l.wl-maz@kernel.org>
From: Marc Zyngier <maz@kernel.org>
To: Thomas Gleixner <tglx@kernel.org>
Cc: Waiman Long <longman@redhat.com>,
	Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
	Clark Williams <clrkwllms@kernel.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	linux-rt-devel@lists.linux.dev
Subject: Re: [PATCH] irqchip/gic-v3-its: Don't acquire rt_spin_lock in allocate_vpe_l1_table()
In-Reply-To: <87ms2nsqju.ffs@tglx>
References: <20260107215353.75612-1-longman@redhat.com>
	<864iowmrx6.wl-maz@kernel.org>
	<87ms2nsqju.ffs@tglx>
User-Agent: Wanderlust/2.15.9 (Almost Unreal) SEMI-EPG/1.14.7 (Harue) FLIM-LB/1.14.9 (Gojō) APEL-LB/10.8 EasyPG/1.0.0 Emacs/30.1 (aarch64-unknown-linux-gnu) MULE/6.0 (HANACHIRUSATO)
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 08 Jan 2026 22:11:33 +0000,
Thomas Gleixner <tglx@kernel.org> wrote:
>
> On Thu, Jan 08 2026 at 08:26, Marc Zyngier wrote:
> > Err, no. That's horrible.
> > I can see three ways to address this in a
> > more appealing way:
> >
> > - you give RT a generic allocator that works for (small) atomic
> >   allocations. I appreciate that's not easy, and even probably
> >   contrary to the RT goals. But I'm also pretty sure that the GIC
> >   code is not the only pile of crap being caught doing that.
> >
> > - you pre-compute upfront how many cpumasks you are going to
> >   require, based on the actual GIC topology. You do that on CPU0,
> >   outside of the hotplug constraints, and allocate what you need.
> >   This is difficult as you need to ensure the RD<->CPU matching
> >   without the CPUs having booted, which means wading through the
> >   DT/ACPI gunk to try and guess what you have.
> >
> > - you delay the allocation of L1 tables to a context where you can
> >   perform allocations, and before we have a chance of running a
> >   guest on this CPU. That's probably the simplest option (though
> >   dealing with late onlining while guests are already running could
> >   be interesting...).
>
> At the point where a CPU is brought up, the topology should be known
> already, which means this can be allocated on the control CPU
> _before_ the new CPU comes up, no?

No. Each CPU finds *itself* in the forest of redistributors, and from
there tries to find whether it has some shared resource with a CPU
that has booted before it. That's because firmware is absolutely awful
and can't present a consistent view of the system.

Anyway, I expect it could be solved by moving this part of the init to
an ONLINE HP callback.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.