Date: Wed, 29 Apr 2026 18:01:19 +0000
From: Dmitry Ilvokhin <d@ilvokhin.com>
To: Peter Zijlstra
Cc: dan.j.williams@intel.com, Vlastimil Babka, Steven Rostedt,
    Andrew Morton, David Hildenbrand, Lorenzo Stoakes, "Liam R. Howlett",
    Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
    Johannes Weiner, Zi Yan, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, kernel-team@meta.com
Subject: Re: [PATCH 1/8] mm: use zone lock guard in reserve_highatomic_pageblock()
In-Reply-To: <20260428114721.GV3102624@noisy.programming.kicks-ass.net>

On Tue, Apr 28, 2026 at 01:47:21PM +0200, Peter Zijlstra wrote:
> On Tue, Apr 28, 2026 at 10:58:41AM +0000, Dmitry Ilvokhin wrote:
>
> > I re-tested my original patchset after rebasing and can still reproduce
> > the regression (though smaller). It appears to depend on compiler
> > inlining decisions: in some cases the compiler is able to deduplicate
> > the cleanup path across multiple return sites, while in others it is
> > not.
>
> I'm confused, all this has __always_inline on. And the compilers should
> be able to track the assignment of the variable and eliminate the test
> themselves if value-tracking excludes NULL.
> > Given that, I think we can go further than just removing
> > __GUARD_IS_ERR(). It should be possible to eliminate this branch
> > entirely and simplify the cleanup flow.
> >
> > https://lore.kernel.org/all/20260427165037.205337-1-d@ilvokhin.com/
> >
> > Reposting here to increase visibility, as several people involved in
> > this code have participated in this thread already.
> >
> > Any feedback would be appreciated.
>
> This would require very careful audit of all the current users, it's had
> this behaviour from the start.

I did a bit of code archeology to double-check I was not missing
anything here.

- May 2023: guards were introduced by 54da6a092431 ("locking: Introduce
  __cleanup() based infrastructure"), and DEFINE_GUARD() called _unlock
  unconditionally. __DEFINE_UNLOCK_GUARD() had a NULL check, but it was
  unnecessary at the time: conditional variants did not exist yet, and
  the constructor always set .lock.

- Sep 2023: conditional guards were added by e4ab322fbaaa ("cleanup:
  Add conditional guard support"). This is where the branch became
  meaningful: the conditional destructor delegated directly to the base
  class destructor, and when a trylock failed, .lock was set to NULL.

- Jul 2025: ACQUIRE() was introduced by 857d18f23ab1 ("cleanup:
  Introduce ACQUIRE() and ACQUIRE_ERR() for conditional locks") and
  changed the conditional failure value from NULL to ERR_PTR(_RET).

- Mar 2026: 2deccd5c862a ("cleanup: Optimize guards") introduced
  EXTEND_CLASS_COND() with its own "if (_cond) return;" pre-check
  before calling the base destructor. Since then, the base destructor
  is only ever reached for successfully acquired locks.

Peter, do you think this is accurate? All the commits above are yours,
so you are the best person to spot anything I missed here :)