From: Frederic Weisbecker
To: LKML
Cc: Frederic Weisbecker, "Christophe Leroy (CS GROUP)", "Rafael J. Wysocki",
 Alexander Gordeev, Anna-Maria Behnsen, Ben Segall, Boqun Feng,
 Christian Borntraeger, Dietmar Eggemann, Heiko Carstens, Ingo Molnar,
 Jan Kiszka, Joel Fernandes, Juri Lelli, Kieran Bingham,
 Madhavan Srinivasan, Mel Gorman, Michael Ellerman, Neeraj Upadhyay,
 Nicholas Piggin, "Paul E. McKenney", Peter Zijlstra, Sashiko,
 Shrikanth Hegde, Steven Rostedt, Sven Schnelle, Thomas Gleixner,
 Uladzislau Rezki, Valentin Schneider, Vasily Gorbik, Vincent Guittot,
 Viresh Kumar, Xin Zhao, linux-pm@vger.kernel.org,
 linux-s390@vger.kernel.org, linuxppc-dev@lists.ozlabs.org
Subject: [PATCH 11/15] tick/sched: Account tickless idle cputime only when tick is stopped
Date: Fri, 8 May 2026 15:16:43 +0200
Message-ID: <20260508131647.43868-12-frederic@kernel.org>
In-Reply-To: <20260508131647.43868-1-frederic@kernel.org>
References: <20260508131647.43868-1-frederic@kernel.org>
X-Mailing-List: linux-s390@vger.kernel.org

There is no real point in switching to dyntick-idle cputime accounting
mode if the tick is not actually stopped. This just adds overhead,
notably fetching the GTOD on each idle exit and each idle IRQ entry,
for no reason during short idle trips.
Signed-off-by: Frederic Weisbecker
Tested-by: Shrikanth Hegde
---
 kernel/time/tick-sched.c | 50 +++++++++++++++++++++-------------------
 1 file changed, 26 insertions(+), 24 deletions(-)

diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index fa03cf7b3cec..c1ee0b256445 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -1157,8 +1157,10 @@ void tick_nohz_idle_stop_tick(void)
 		ts->idle_sleeps++;
 		ts->idle_expires = expires;
 
-		if (!was_stopped && tick_sched_flag_test(ts, TS_FLAG_STOPPED))
+		if (!was_stopped && tick_sched_flag_test(ts, TS_FLAG_STOPPED)) {
+			kcpustat_dyntick_start(ts->idle_entrytime);
 			nohz_balance_enter_idle(cpu);
+		}
 	} else {
 		tick_nohz_retain_tick(ts);
 	}
@@ -1200,7 +1202,6 @@ void tick_nohz_idle_enter(void)
 	WARN_ON_ONCE(ts->timer_expires_base);
 	tick_sched_flag_set(ts, TS_FLAG_INIDLE);
 	ts->idle_entrytime = ktime_get();
-	kcpustat_dyntick_start(ts->idle_entrytime);
 	tick_nohz_clock_sleep(ts);
 
 	local_irq_enable();
@@ -1230,9 +1231,10 @@ void tick_nohz_irq_exit(void)
 	struct tick_sched *ts = this_cpu_ptr(&tick_cpu_sched);
 
 	if (tick_sched_flag_test(ts, TS_FLAG_INIDLE)) {
-		ts->idle_entrytime = ktime_get();
-		kcpustat_irq_exit(ts->idle_entrytime);
 		tick_nohz_clock_sleep(ts);
+		ts->idle_entrytime = ktime_get();
+		if (tick_sched_flag_test(ts, TS_FLAG_STOPPED))
+			kcpustat_irq_exit(ts->idle_entrytime);
 	} else {
 		tick_nohz_full_update_tick(ts);
 	}
@@ -1333,8 +1335,17 @@ void tick_nohz_idle_restart_tick(void)
 {
 	struct tick_sched *ts = this_cpu_ptr(&tick_cpu_sched);
 
-	if (tick_sched_flag_test(ts, TS_FLAG_STOPPED))
-		tick_nohz_restart_sched_tick(ts, ktime_get());
+	if (tick_sched_flag_test(ts, TS_FLAG_STOPPED)) {
+		/*
+		 * Update entrytime here in case the tick restart is due to temporary
+		 * polling on forced broadcast. The tick may be stopped again later within
+		 * the same idle trip. The idle_entrytime was updated recently but make sure
+		 * no tiny amount of idle time is accounted twice.
+		 */
+		ts->idle_entrytime = ktime_get();
+		kcpustat_dyntick_stop(ts->idle_entrytime);
+		tick_nohz_restart_sched_tick(ts, ts->idle_entrytime);
+	}
 }
 
 static void tick_nohz_idle_update_tick(struct tick_sched *ts, ktime_t now)
@@ -1364,7 +1375,6 @@ static void tick_nohz_idle_update_tick(struct tick_sched *ts, ktime_t now)
 void tick_nohz_idle_exit(void)
 {
 	struct tick_sched *ts = this_cpu_ptr(&tick_cpu_sched);
-	bool idle_active, tick_stopped;
 	ktime_t now;
 
 	local_irq_disable();
@@ -1373,18 +1383,13 @@ void tick_nohz_idle_exit(void)
 	WARN_ON_ONCE(ts->timer_expires_base);
 
 	tick_sched_flag_clear(ts, TS_FLAG_INIDLE);
-	idle_active = tick_sched_flag_test(ts, TS_FLAG_IDLE_ACTIVE);
-	tick_stopped = tick_sched_flag_test(ts, TS_FLAG_STOPPED);
+	tick_nohz_clock_wakeup(ts);
 
-	if (idle_active || tick_stopped)
+	if (tick_sched_flag_test(ts, TS_FLAG_STOPPED)) {
 		now = ktime_get();
-
-	if (idle_active)
-		tick_nohz_clock_wakeup(ts);
-
-	if (tick_stopped)
+		kcpustat_dyntick_stop(now);
 		tick_nohz_idle_update_tick(ts, now);
-	kcpustat_dyntick_stop(now);
+	}
 
 	local_irq_enable();
 }
@@ -1439,15 +1444,13 @@ static inline void tick_nohz_irq_enter(void)
 	struct tick_sched *ts = this_cpu_ptr(&tick_cpu_sched);
 	ktime_t now;
 
-	if (!tick_sched_flag_test(ts, TS_FLAG_STOPPED | TS_FLAG_IDLE_ACTIVE))
+	tick_nohz_clock_wakeup(ts);
+
+	if (!tick_sched_flag_test(ts, TS_FLAG_STOPPED))
 		return;
 
 	now = ktime_get();
-
-	if (tick_sched_flag_test(ts, TS_FLAG_IDLE_ACTIVE)) {
-		tick_nohz_clock_wakeup(ts);
-		kcpustat_irq_enter(now);
-	}
+	kcpustat_irq_enter(now);
 
 	/*
 	 * If all CPUs are idle we may need to update a stale jiffies value.
@@ -1456,8 +1459,7 @@ static inline void tick_nohz_irq_enter(void)
 	 * rare case (typically stop machine). So we must make sure we have a
 	 * last resort.
 	 */
-	if (tick_sched_flag_test(ts, TS_FLAG_STOPPED))
-		tick_nohz_update_jiffies(now);
+	tick_nohz_update_jiffies(now);
 }
 
 #else
-- 
2.53.0