Date: Sat, 14 Feb 2026 06:28:43 +0000
From: Matthew Wilcox
To: Vlastimil Babka
Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, Sebastian Andrzej Siewior,
	LKML, linux-mm@kvack.org, Linus Torvalds, Waiman Long, Mel Gorman,
	Steven Rostedt
Subject: Re: [RFC] making nested spin_trylock() work on UP?

On Fri, Feb 13, 2026 at 12:57:43PM +0100, Vlastimil Babka wrote:
> The page allocator has been using a locking scheme for its percpu page
> caches (pcp) for years now, based on spin_trylock() with no _irqsave()
> part. The point is that if we interrupt the locked section, we fail the
> trylock and just fall back to something more expensive, but that's rare,
> so we don't need to pay the irqsave cost all the time in the fastpaths.
>
> It's similar to, but not exactly, local_trylock_t (which is also newer
> anyway), because in some cases we do lock the pcp of a non-local cpu to
> flush it, in a way that's cheaper than an IPI or queue_work_on().
>
> The complication with this scheme has been the UP non-debug spinlock
> implementation, which assumes spin_trylock() can't fail on UP and has
> no state to track it. It just doesn't anticipate this usage scenario.
> So to work around that, we disable IRQs on UP, complicating the
> implementation. Also, we recently found a years-old bug in that
> implementation - see 038a102535eb ("mm/page_alloc: prevent pcp
> corruption with SMP=n").
>
> So my question is whether we could have a spinlock implementation
> supporting this nested spin_trylock() usage, or whether the UP
> optimization is still considered too important to lose. I was thinking:
>
> - remove the UP implementation completely - would it increase the
>   overhead on SMP=n systems too much, and do we still care?
>
> - make the non-debug implementation a bit like the debug one, so we do
>   have the 'locked' state (see include/linux/spinlock_up.h and
>   lock->slock). This also adds some overhead, but not as much as the
>   full SMP implementation?

What if we use an atomic_t on UP to simulate there being a spinlock, but
only for pcp?  Your demo shows pcp_spin_trylock() continuing to exist,
so how about doing something like:

#ifdef CONFIG_SMP
#define pcp_spin_trylock(ptr)						\
({									\
	struct per_cpu_pages *__ret;					\
	__ret = pcpu_spin_trylock(struct per_cpu_pages, lock, ptr);	\
	__ret;								\
})
#else
static atomic_t pcp_UP_lock = ATOMIC_INIT(0);

#define pcp_spin_trylock(ptr)						\
({									\
	struct per_cpu_pages *__ret = NULL;				\
	int __old = 0;							\
	if (atomic_try_cmpxchg(&pcp_UP_lock, &__old, 1))		\
		__ret = (void *)&pcp_UP_lock;				\
	__ret;								\
})
#endif

(obviously you need pcp_spin_lock()/pcp_spin_unlock() defined as well)

That only costs us 4 extra bytes on UP, rather than 4 bytes per
spinlock.  And some people still use routers with tiny amounts of
memory and a single CPU, or retrocomputers with single CPUs.