Date: Thu, 17 Sep 2020 11:42:03 +0200
From: Thomas Gleixner
To: LKML
Cc: Sebastian Siewior, Qais Yousef, Scott Wood, "Peter Zijlstra (Intel)",
    Valentin Schneider, Ingo Molnar, Peter Zijlstra, Juri Lelli,
    Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall,
    Mel Gorman, Daniel Bristot de Oliveira, Vincent Donnefort
Subject: [patch 01/10] sched: Fix balance_callback()
Message-Id: <20200917101623.983510598@linutronix.de>
References: <20200917094202.301694311@linutronix.de>

From: Peter Zijlstra

The intent of balance_callback() has always been to delay executing
balancing operations until the end of the current rq->lock section.
This is because balance operations must often drop rq->lock, and that
isn't safe in general.

However, as noted by Scott, there were a few holes in that scheme;
balance_callback() was called after rq->lock was dropped, which means
another CPU can interleave and touch the callback list.

Rework the code to call the balance callbacks before dropping rq->lock
where possible, and otherwise splice the balance list onto a local
stack. This guarantees that the balance list must be empty when
rq->lock is acquired. IOW, this will only ever run balance callbacks
which are queued while holding rq->lock.

Reported-by: Scott Wood
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Thomas Gleixner
Reviewed-by: Valentin Schneider
Link: https://lore.kernel.org/r/20200911082536.470013100@infradead.org
---
 kernel/sched/core.c  | 119 ++++++++++++++++++++++++++++++++-------------------
 kernel/sched/sched.h |   2 ++
 2 files changed, 77 insertions(+), 44 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3489,6 +3489,69 @@ static inline void finish_task(struct ta
 #endif
 }
 
+#ifdef CONFIG_SMP
+
+static void do_balance_callbacks(struct rq *rq, struct callback_head *head)
+{
+        void (*func)(struct rq *rq);
+        struct callback_head *next;
+
+        lockdep_assert_held(&rq->lock);
+
+        while (head) {
+                func = (void (*)(struct rq *))head->func;
+                next = head->next;
+                head->next = NULL;
+                head = next;
+
+                func(rq);
+        }
+}
+
+static inline struct callback_head *splice_balance_callbacks(struct rq *rq)
+{
+        struct callback_head *head = rq->balance_callback;
+
+        lockdep_assert_held(&rq->lock);
+        if (head)
+                rq->balance_callback = NULL;
+
+        return head;
+}
+
+static void __balance_callbacks(struct rq *rq)
+{
+        do_balance_callbacks(rq, splice_balance_callbacks(rq));
+}
+
+static inline void balance_callbacks(struct rq *rq, struct callback_head *head)
+{
+        unsigned long flags;
+
+        if (unlikely(head)) {
+                raw_spin_lock_irqsave(&rq->lock, flags);
+                do_balance_callbacks(rq, head);
+                raw_spin_unlock_irqrestore(&rq->lock, flags);
+        }
+}
+
+#else
+
+static inline void __balance_callbacks(struct rq *rq)
+{
+}
+
+static inline struct callback_head *splice_balance_callbacks(struct rq *rq)
+{
+        return NULL;
+}
+
+static inline void balance_callbacks(struct rq *rq, struct callback_head *head)
+{
+}
+
+#endif
+
 static inline void
 prepare_lock_switch(struct rq *rq, struct task_struct *next, struct rq_flags *rf)
 {
@@ -3514,6 +3577,7 @@ static inline void finish_lock_switch(st
          * prev into current:
          */
         spin_acquire(&rq->lock.dep_map, 0, 0, _THIS_IP_);
+        __balance_callbacks(rq);
         raw_spin_unlock_irq(&rq->lock);
 }
 
@@ -3655,43 +3719,6 @@ static struct rq *finish_task_switch(str
         return rq;
 }
 
-#ifdef CONFIG_SMP
-
-/* rq->lock is NOT held, but preemption is disabled */
-static void __balance_callback(struct rq *rq)
-{
-        struct callback_head *head, *next;
-        void (*func)(struct rq *rq);
-        unsigned long flags;
-
-        raw_spin_lock_irqsave(&rq->lock, flags);
-        head = rq->balance_callback;
-        rq->balance_callback = NULL;
-        while (head) {
-                func = (void (*)(struct rq *))head->func;
-                next = head->next;
-                head->next = NULL;
-                head = next;
-
-                func(rq);
-        }
-        raw_spin_unlock_irqrestore(&rq->lock, flags);
-}
-
-static inline void balance_callback(struct rq *rq)
-{
-        if (unlikely(rq->balance_callback))
-                __balance_callback(rq);
-}
-
-#else
-
-static inline void balance_callback(struct rq *rq)
-{
-}
-
-#endif
-
 /**
  * schedule_tail - first thing a freshly forked thread must call.
  * @prev: the thread we just switched away from.
@@ -3711,7 +3738,6 @@ asmlinkage __visible void schedule_tail(
          */
 
         rq = finish_task_switch(prev);
-        balance_callback(rq);
         preempt_enable();
 
         if (current->set_child_tid)
@@ -4527,10 +4553,11 @@ static void __sched notrace __schedule(b
                 rq = context_switch(rq, prev, next, &rf);
         } else {
                 rq->clock_update_flags &= ~(RQCF_ACT_SKIP|RQCF_REQ_SKIP);
-                rq_unlock_irq(rq, &rf);
-        }
 
-        balance_callback(rq);
+                rq_unpin_lock(rq, &rf);
+                __balance_callbacks(rq);
+                raw_spin_unlock_irq(&rq->lock);
+        }
 }
 
 void __noreturn do_task_dead(void)
@@ -4938,9 +4965,11 @@ void rt_mutex_setprio(struct task_struct
 out_unlock:
         /* Avoid rq from going away on us: */
         preempt_disable();
-        __task_rq_unlock(rq, &rf);
 
-        balance_callback(rq);
+        rq_unpin_lock(rq, &rf);
+        __balance_callbacks(rq);
+        raw_spin_unlock(&rq->lock);
+
         preempt_enable();
 }
 #else
@@ -5214,6 +5243,7 @@ static int __sched_setscheduler(struct t
         int retval, oldprio, oldpolicy = -1, queued, running;
         int new_effective_prio, policy = attr->sched_policy;
         const struct sched_class *prev_class;
+        struct callback_head *head;
         struct rq_flags rf;
         int reset_on_fork;
         int queue_flags = DEQUEUE_SAVE | DEQUEUE_MOVE | DEQUEUE_NOCLOCK;
@@ -5452,6 +5482,7 @@ static int __sched_setscheduler(struct t
 
         /* Avoid rq from going away on us: */
         preempt_disable();
+        head = splice_balance_callbacks(rq);
         task_rq_unlock(rq, p, &rf);
 
         if (pi) {
@@ -5460,7 +5491,7 @@ static int __sched_setscheduler(struct t
         }
 
         /* Run balance callbacks after we've adjusted the PI chain: */
-        balance_callback(rq);
+        balance_callbacks(rq, head);
         preempt_enable();
 
         return 0;
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1220,6 +1220,8 @@ static inline void rq_pin_lock(struct rq
 #ifdef CONFIG_SCHED_DEBUG
         rq->clock_update_flags &= (RQCF_REQ_SKIP|RQCF_ACT_SKIP);
         rf->clock_update_flags = 0;
+
+        SCHED_WARN_ON(rq->balance_callback);
 #endif
 }
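
The heart of the fix is the splice-then-run pattern: pending callbacks
are detached from the runqueue while its lock is still held, and are
only ever executed under that lock. For readers following along, here
is a minimal, self-contained user-space sketch of that pattern. The
struct and function names deliberately echo the kernel's, but the
runqueue, its lock (a pthread mutex standing in for a raw spinlock) and
the helpers queue_callback()/splice_callbacks()/run_callbacks() are
simplified assumptions for the illustration, not the kernel
implementation.

/*
 * Stand-alone sketch of the splice-then-run pattern from the patch
 * above. Compile with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdio.h>
#include <stddef.h>

struct rq;

struct callback_head {
        struct callback_head *next;
        void (*func)(struct rq *rq);
};

struct rq {
        pthread_mutex_t lock;                   /* stand-in for rq->lock */
        struct callback_head *balance_callback; /* pending callback list */
};

/* Queue a callback; the caller must hold rq->lock. */
static void queue_callback(struct rq *rq, struct callback_head *head,
                           void (*func)(struct rq *rq))
{
        head->func = func;
        head->next = rq->balance_callback;
        rq->balance_callback = head;
}

/*
 * Detach the whole list while still holding rq->lock, mirroring
 * splice_balance_callbacks(): once the lock is dropped, no other
 * thread can find (or re-run) these entries.
 */
static struct callback_head *splice_callbacks(struct rq *rq)
{
        struct callback_head *head = rq->balance_callback;

        rq->balance_callback = NULL;
        return head;
}

/* Run a spliced list under rq->lock, mirroring do_balance_callbacks(). */
static void run_callbacks(struct rq *rq, struct callback_head *head)
{
        while (head) {
                struct callback_head *next = head->next;

                head->next = NULL;
                head->func(rq);
                head = next;
        }
}

static void demo_callback(struct rq *rq)
{
        (void)rq;
        printf("balance callback ran under the lock\n");
}

int main(void)
{
        struct rq rq = { PTHREAD_MUTEX_INITIALIZER, NULL };
        struct callback_head cb;
        struct callback_head *head;

        pthread_mutex_lock(&rq.lock);
        queue_callback(&rq, &cb, demo_callback);
        /* Splice before unlocking, as __sched_setscheduler() now does. */
        head = splice_callbacks(&rq);
        pthread_mutex_unlock(&rq.lock);

        /* Re-acquire the lock and run, as balance_callbacks() does. */
        pthread_mutex_lock(&rq.lock);
        run_callbacks(&rq, head);
        pthread_mutex_unlock(&rq.lock);

        return 0;
}

The invariant the patch establishes (and that the new SCHED_WARN_ON()
in rq_pin_lock() asserts) falls out directly: the list is always empty
at lock acquisition, because every path either runs the callbacks
before unlocking or splices them away first.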