Date: Thu, 23 Apr 2026 14:05:35 +0200
From: Frederic Weisbecker
To: Partha Satapathy
Cc: anna-maria@linutronix.de, tglx@kernel.org, linux-kernel@vger.kernel.org,
	tj@kernel.org, jiangshanlai@gmail.com, notify@kernel.org
Subject: Re: [PATCH 0/2] timers/workqueue: Add support for active CPU
Message-ID:
References: <20260423091914.63645-1-partha.satapathy@oracle.com>
In-Reply-To: <20260423091914.63645-1-partha.satapathy@oracle.com>

Hi,

On Thu, Apr 23, 2026 at 09:19:05AM +0000, Partha Satapathy wrote:
> From: Partha Sarathi Satapathy
>
> Hi,
>
> Timers queued with add_timer_on() and delayed work queued with
> queue_delayed_work_on() currently rely on the caller to ensure that the
> target CPU remains online until the enqueue operation completes. In
> practice, CPU hotplug can still race with that sequence and leave the
> timer queued on an offline CPU, where it will not run until that CPU
> comes back online.
>
> For delayed work, this has a direct knock-on effect: if the backing
> timer is stranded on an offline CPU, the work item is never queued for
> execution until that CPU returns.
>
> In many cases, the target CPU is chosen for locality and cache affinity
> rather than as a strict execution requirement. Falling back to an active
> CPU is preferable to leaving the timer or delayed work blocked on a dead
> CPU. While callers can try to track CPU hotplug state themselves, that
> does not close the race, and taking the hotplug lock around enqueue
> operations is too expensive for this class of use.
>
> This series adds opt-in helpers for that fallback behavior without
> changing the semantics of the existing interfaces:
>
> - add_timer_active_cpu() queues a timer on the requested CPU only if
>   the target CPU's timer base is active; otherwise it falls back to
>   the current CPU.
>
> - queue_delayed_work_active_cpu() uses the new timer helper for the
>   delayed timer path and updates dwork->cpu to reflect the CPU
>   actually selected for the timer, so the work item is queued on the
>   same active CPU.
>
> The existing add_timer_on() and queue_delayed_work_on() behavior is left
> unchanged for callers that require strict CPU placement.

Timers are migrated when CPUs go offline, so the problem is queueing a
timer on an already offline CPU. It should be the responsibility of the
subsystem to synchronize with CPU hotplug in order to avoid that.

As for timers that are queued on a specific CPU for performance rather
than correctness reasons, do we know of such an example?

Thanks.

-- 
Frederic Weisbecker
SUSE Labs