From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 18 Feb 2026 09:29:15 +0000
From: Catalin Marinas
To: K Prateek Nayak
Cc: Will Deacon, Dev Jain, Jisheng Zhang, Dennis Zhou, Tejun Heo, Christoph Lameter, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, maz@kernel.org
Subject: Re: [PATCH] arm64: remove HAVE_CMPXCHG_LOCAL
Message-ID: 
References: <20260215033944.16374-1-jszhang@kernel.org> <89606308-3c03-4dcf-a89d-479258b710e4@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Type: text/plain; charset=us-ascii

Hi Prateek,

On Wed, Feb 18, 2026 at 09:31:19AM +0530, K Prateek Nayak wrote:
> On 2/17/2026 10:18 PM, Catalin Marinas wrote:
> > Yes, that would be good. It's the preempt_enable_notrace() path that
> > ends up calling preempt_schedule_notrace() -> __schedule() pretty much
> > unconditionally.
>
> What do you mean by unconditionally? We always check
> __preempt_count_dec_and_test() before calling into __schedule().
>
> On x86, we use the MSB of preempt_count to indicate a pending resched,
> and set_preempt_need_resched() would just clear this MSB.
>
> If preempt_count() reaches 0, we either go into schedule immediately,
> or the next preempt_enable() -> __preempt_count_dec_and_test() will
> see the entire preempt_count being clear and will call into schedule.
>
> The arm64 implementation seems to be doing something similar too,
> with a separate "ti->preempt.need_resched" bit that is part of the
> union with "ti->preempt_count", so it isn't really unconditional.

Ah, yes, you are right. I got the polarity of need_resched in
thread_info wrong (we should have named it no_need_to_resched).

So in the common case, the overhead is caused by the additional pointer
chase and preempt_count update, on top of the cpu offset read. Not sure
we can squeeze any more cycles out of these without some large overhaul
like:

https://git.kernel.org/mark/c/84ee5f23f93d4a650e828f831da9ed29c54623c5

or Yang's per-CPU page tables. Well, there are more ideas, like
in-kernel restartable sequences, but they move the overhead elsewhere.

Thanks.

-- 
Catalin