* [PATCH 0/2] Miscellaneous updates for v6.12
@ 2024-08-02  0:30 Paul E. McKenney
  2024-08-02  0:30 ` [PATCH misc 1/2] workqueue: Add check for clocks going backwards to wq_worker_tick() Paul E. McKenney
  2024-08-02  0:30 ` [PATCH misc 2/2] exit: Sleep at TASK_IDLE when waiting for application core dump Paul E. McKenney
  0 siblings, 2 replies; 5+ messages in thread
From: Paul E. McKenney @ 2024-08-02  0:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Tejun Heo, Lai Jiangshan, Breno Leitao, Rik van Riel,
	Anhad Jai Singh, Oleg Nesterov, Jens Axboe, Christian Brauner,
	Andrew Morton, Matthew Wilcox (Oracle), Chris Mason

Hello!

This series contains a couple of miscellaneous updates:

1.	Add check for clocks going backwards to wq_worker_tick().

2.	Sleep at TASK_IDLE when waiting for application core dump.

						Thanx, Paul

------------------------------------------------------------------------

 exit.c      |    2 +-
 workqueue.c |    2 ++
 2 files changed, 3 insertions(+), 1 deletion(-)
* [PATCH misc 1/2] workqueue: Add check for clocks going backwards to wq_worker_tick()
  2024-08-02  0:30 [PATCH 0/2] Miscellaneous updates for v6.12 Paul E. McKenney
@ 2024-08-02  0:30 ` Paul E. McKenney
  2024-08-02 16:06   ` Rik van Riel
  2024-08-02  0:30 ` [PATCH misc 2/2] exit: Sleep at TASK_IDLE when waiting for application core dump Paul E. McKenney
  1 sibling, 1 reply; 5+ messages in thread
From: Paul E. McKenney @ 2024-08-02  0:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Tejun Heo, Lai Jiangshan, Breno Leitao, Rik van Riel,
	Anhad Jai Singh, Oleg Nesterov, Jens Axboe, Christian Brauner,
	Andrew Morton, Matthew Wilcox (Oracle), Chris Mason,
	Paul E. McKenney

Experimental, might never go to mainline.

There has been some evidence of clocks going backwards, producing
"workqueue: kfree_rcu_monitor hogged CPU" diagnostics on idle systems
just after a change in clocksource.  This diagnostic commit checks for
this, ignoring differences that would be negative if interpreted as a
signed 64-bit integer.

Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Lai Jiangshan <jiangshanlai@gmail.com>
Cc: Breno Leitao <leitao@debian.org>
Cc: Rik van Riel <riel@surriel.com>
---
 kernel/workqueue.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 1745ca788ede3..4f7b4b32e6b4e 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1482,6 +1482,7 @@ void wq_worker_tick(struct task_struct *task)
 	 * If the current worker is concurrency managed and hogged the CPU for
 	 * longer than wq_cpu_intensive_thresh_us, it's automatically marked
 	 * CPU_INTENSIVE to avoid stalling other concurrency-managed work items.
+	 * If the time is negative, ignore, assuming a backwards clock.
 	 *
 	 * Set @worker->sleeping means that @worker is in the process of
 	 * switching out voluntarily and won't be contributing to
@@ -1491,6 +1492,7 @@ void wq_worker_tick(struct task_struct *task)
 	 * We probably want to make this prettier in the future.
 	 */
 	if ((worker->flags & WORKER_NOT_RUNNING) || READ_ONCE(worker->sleeping) ||
+	    WARN_ON_ONCE((s64)(worker->task->se.sum_exec_runtime - worker->current_at) < 0) ||
 	    worker->task->se.sum_exec_runtime - worker->current_at <
 	    wq_cpu_intensive_thresh_us * NSEC_PER_USEC)
 		return;
-- 
2.40.1
* Re: [PATCH misc 1/2] workqueue: Add check for clocks going backwards to wq_worker_tick()
  2024-08-02  0:30 ` [PATCH misc 1/2] workqueue: Add check for clocks going backwards to wq_worker_tick() Paul E. McKenney
@ 2024-08-02 16:06   ` Rik van Riel
  2024-08-02 16:39     ` Paul E. McKenney
  0 siblings, 1 reply; 5+ messages in thread
From: Rik van Riel @ 2024-08-02 16:06 UTC (permalink / raw)
  To: Paul E. McKenney, linux-kernel
  Cc: Tejun Heo, Lai Jiangshan, Breno Leitao, Anhad Jai Singh,
	Oleg Nesterov, Jens Axboe, Christian Brauner, Andrew Morton,
	Matthew Wilcox (Oracle), Chris Mason

On Thu, 2024-08-01 at 17:30 -0700, Paul E. McKenney wrote:
>
> +++ b/kernel/workqueue.c
> @@ -1482,6 +1482,7 @@ void wq_worker_tick(struct task_struct *task)
>  	 * If the current worker is concurrency managed and hogged the CPU for
>  	 * longer than wq_cpu_intensive_thresh_us, it's automatically marked
>  	 * CPU_INTENSIVE to avoid stalling other concurrency-managed work items.
> +	 * If the time is negative, ignore, assuming a backwards clock.
>  	 *
>  	 * Set @worker->sleeping means that @worker is in the process of
>  	 * switching out voluntarily and won't be contributing to
> @@ -1491,6 +1492,7 @@ void wq_worker_tick(struct task_struct *task)
>  	 * We probably want to make this prettier in the future.
>  	 */
>  	if ((worker->flags & WORKER_NOT_RUNNING) || READ_ONCE(worker->sleeping) ||
> +	    WARN_ON_ONCE((s64)(worker->task->se.sum_exec_runtime - worker->current_at) < 0) ||
> 	    worker->task->se.sum_exec_runtime - worker->current_at <
> 	    wq_cpu_intensive_thresh_us * NSEC_PER_USEC)
> 		return;

What is the code path by which sum_exec_runtime could go backward
in time, if the TSC and sched_clock() jump backward?

Might it make sense to check in the place where sum_exec_runtime is
updated, instead, and cast a wider net?

On the flip side, the run time increments are "fairly large" in
number of TSC cycles, while most of the negative TSC jumps we have
seen are quite small, so even that wider net might not catch much
because of how coarse these updates typically are...

-- 
All Rights Reversed.
* Re: [PATCH misc 1/2] workqueue: Add check for clocks going backwards to wq_worker_tick()
  2024-08-02 16:06   ` Rik van Riel
@ 2024-08-02 16:39     ` Paul E. McKenney
  0 siblings, 0 replies; 5+ messages in thread
From: Paul E. McKenney @ 2024-08-02 16:39 UTC (permalink / raw)
  To: Rik van Riel
  Cc: linux-kernel, Tejun Heo, Lai Jiangshan, Breno Leitao,
	Anhad Jai Singh, Oleg Nesterov, Jens Axboe, Christian Brauner,
	Andrew Morton, Matthew Wilcox (Oracle), Chris Mason

On Fri, Aug 02, 2024 at 12:06:06PM -0400, Rik van Riel wrote:
> On Thu, 2024-08-01 at 17:30 -0700, Paul E. McKenney wrote:
> >
> > +++ b/kernel/workqueue.c
> > @@ -1482,6 +1482,7 @@ void wq_worker_tick(struct task_struct *task)
> >  	 * If the current worker is concurrency managed and hogged the CPU for
> >  	 * longer than wq_cpu_intensive_thresh_us, it's automatically marked
> >  	 * CPU_INTENSIVE to avoid stalling other concurrency-managed work items.
> > +	 * If the time is negative, ignore, assuming a backwards clock.
> >  	 *
> >  	 * Set @worker->sleeping means that @worker is in the process of
> >  	 * switching out voluntarily and won't be contributing to
> > @@ -1491,6 +1492,7 @@ void wq_worker_tick(struct task_struct *task)
> >  	 * We probably want to make this prettier in the future.
> >  	 */
> >  	if ((worker->flags & WORKER_NOT_RUNNING) || READ_ONCE(worker->sleeping) ||
> > +	    WARN_ON_ONCE((s64)(worker->task->se.sum_exec_runtime - worker->current_at) < 0) ||
> >  	    worker->task->se.sum_exec_runtime - worker->current_at <
> >  	    wq_cpu_intensive_thresh_us * NSEC_PER_USEC)
> >  		return;
>
> What is the code path by which sum_exec_runtime could go backward
> in time, if the TSC and sched_clock() jump backward?
>
> Might it make sense to check in the place where sum_exec_runtime is
> updated, instead, and cast a wider net?
>
> On the flip side, the run time increments are "fairly large" in
> number of TSC cycles, while most of the negative TSC jumps we have
> seen are quite small, so even that wider net might not catch much
> because of how coarse these updates typically are...

Good points!  Even more telling, this patch did not catch anything
during Breno's tests.

I will drop it, leaving a workqueue.2024.08.01 branch in -rcu in case
someone needs it later.

							Thanx, Paul

> --
> All Rights Reversed.
* [PATCH misc 2/2] exit: Sleep at TASK_IDLE when waiting for application core dump
  2024-08-02  0:30 [PATCH 0/2] Miscellaneous updates for v6.12 Paul E. McKenney
  2024-08-02  0:30 ` [PATCH misc 1/2] workqueue: Add check for clocks going backwards to wq_worker_tick() Paul E. McKenney
@ 2024-08-02  0:30 ` Paul E. McKenney
  1 sibling, 0 replies; 5+ messages in thread
From: Paul E. McKenney @ 2024-08-02  0:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Tejun Heo, Lai Jiangshan, Breno Leitao, Rik van Riel,
	Anhad Jai Singh, Oleg Nesterov, Jens Axboe, Christian Brauner,
	Andrew Morton, Matthew Wilcox (Oracle), Chris Mason,
	Paul E. McKenney

Currently, the coredump_task_exit() function sets the task state to
TASK_UNINTERRUPTIBLE|TASK_FREEZABLE, which usually works well.  But a
combination of large memory and slow (and/or highly contended) mass
storage can cause application core dumps to take more than two minutes,
which can cause check_hung_task(), which is invoked by
check_hung_uninterruptible_tasks(), to produce task-blocked splats.
There does not seem to be any reasonable benefit to getting these
splats.

Furthermore, as Oleg Nesterov points out, TASK_UNINTERRUPTIBLE could
be misleading because the task sleeping in coredump_task_exit() really
is killable, albeit indirectly.  See the check of signal->core_state
in prepare_signal() and the check of fatal_signal_pending() in
dump_interrupted(), which bypass the normal unkillability of
TASK_UNINTERRUPTIBLE, resulting in coredump_finish() invoking
wake_up_process() on any threads sleeping in coredump_task_exit().

Therefore, change that TASK_UNINTERRUPTIBLE to TASK_IDLE.

Reported-by: Anhad Jai Singh <ffledgling@meta.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Acked-by: Oleg Nesterov <oleg@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: Chris Mason <clm@fb.com>
Cc: Rik van Riel <riel@surriel.com>
---
 kernel/exit.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/exit.c b/kernel/exit.c
index 7430852a85712..0d62a53605dfc 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -428,7 +428,7 @@ static void coredump_task_exit(struct task_struct *tsk)
 		complete(&core_state->startup);
 		for (;;) {
-			set_current_state(TASK_UNINTERRUPTIBLE|TASK_FREEZABLE);
+			set_current_state(TASK_IDLE|TASK_FREEZABLE);
 			if (!self.task) /* see coredump_finish() */
 				break;
 			schedule();
-- 
2.40.1
end of thread, other threads: [~2024-08-02 16:39 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-08-02  0:30 [PATCH 0/2] Miscellaneous updates for v6.12 Paul E. McKenney
2024-08-02  0:30 ` [PATCH misc 1/2] workqueue: Add check for clocks going backwards to wq_worker_tick() Paul E. McKenney
2024-08-02 16:06   ` Rik van Riel
2024-08-02 16:39     ` Paul E. McKenney
2024-08-02  0:30 ` [PATCH misc 2/2] exit: Sleep at TASK_IDLE when waiting for application core dump Paul E. McKenney