From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Wander Lairson Costa, Hu Chunyu, Oleg Nesterov, Valentin Schneider,
    Peter Zijlstra, Sasha Levin, brauner@kernel.org,
    michael.christie@oracle.com, mst@redhat.com, wangkefeng.wang@huawei.com,
    akpm@linux-foundation.org, surenb@google.com, Liam.Howlett@oracle.com,
    mathieu.desnoyers@efficios.com, npiggin@gmail.com, mjguzik@gmail.com,
    avagin@gmail.com
Subject: [PATCH AUTOSEL 5.15 2/9] kernel/fork: beware of __put_task_struct() calling context
Date: Fri, 8 Sep 2023 14:02:33 -0400
Message-Id: <20230908180240.3458469-2-sashal@kernel.org>
In-Reply-To: <20230908180240.3458469-1-sashal@kernel.org>
References: <20230908180240.3458469-1-sashal@kernel.org>
X-stable: review
X-stable-base: Linux 5.15.131

From: Wander Lairson Costa

[ Upstream commit d243b34459cea30cfe5f3a9b2feb44e7daff9938 ]

Under PREEMPT_RT, __put_task_struct() indirectly acquires sleeping
locks. Therefore, it can't be called from a non-preemptible context.

One practical example is the splat inside inactive_task_timer(), which
is called in an interrupt context:

CPU: 1 PID: 2848 Comm: life Kdump: loaded Tainted: G W ---------
Hardware name: HP ProLiant DL388p Gen8, BIOS P70 07/15/2012
Call Trace:
 dump_stack_lvl+0x57/0x7d
 mark_lock_irq.cold+0x33/0xba
 mark_lock+0x1e7/0x400
 mark_usage+0x11d/0x140
 __lock_acquire+0x30d/0x930
 lock_acquire.part.0+0x9c/0x210
 rt_spin_lock+0x27/0xe0
 refill_obj_stock+0x3d/0x3a0
 kmem_cache_free+0x357/0x560
 inactive_task_timer+0x1ad/0x340
 __run_hrtimer+0x8a/0x1a0
 __hrtimer_run_queues+0x91/0x130
 hrtimer_interrupt+0x10f/0x220
 __sysvec_apic_timer_interrupt+0x7b/0xd0
 sysvec_apic_timer_interrupt+0x4f/0xd0
 asm_sysvec_apic_timer_interrupt+0x12/0x20
RIP: 0033:0x7fff196bf6f5

Instead of calling __put_task_struct() directly, we defer it using
call_rcu().
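
For readers unfamiliar with the pattern, the deferral can be summarized
by the following standalone sketch. The my_obj, my_obj_put and
my_obj_free_rcu names are invented for illustration only and do not
appear in the patch below:

  /*
   * Illustration only: a refcounted object whose final release may
   * take sleeping locks (e.g. kfree() under PREEMPT_RT), so the last
   * put defers the release via call_rcu() when it cannot sleep.
   * Assumes obj->usage was initialized with refcount_set(&obj->usage, 1).
   */
  #include <linux/preempt.h>
  #include <linux/rcupdate.h>
  #include <linux/refcount.h>
  #include <linux/slab.h>

  struct my_obj {
          refcount_t usage;
          struct rcu_head rcu;
  };

  static void my_obj_free_rcu(struct rcu_head *rhp)
  {
          /* RCU callback: recover the object and release it. */
          kfree(container_of(rhp, struct my_obj, rcu));
  }

  static void my_obj_put(struct my_obj *obj)
  {
          if (!refcount_dec_and_test(&obj->usage))
                  return;

          if (IS_ENABLED(CONFIG_PREEMPT_RT) && !preemptible())
                  call_rcu(&obj->rcu, my_obj_free_rcu); /* defer */
          else
                  kfree(obj);                           /* release now */
  }

In the actual patch no new field is needed: task_struct already embeds
an rcu_head (->rcu), which the comment below explains can be shared
safely with put_task_struct_rcu_user() and delayed_free_task().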
A more natural approach would be to use a workqueue, but under
PREEMPT_RT we can't allocate dynamic memory from atomic context, so the
code would become more complex: we would need to embed a work_struct
instance in task_struct and initialize it whenever a new task_struct is
allocated.

The issue is reproducible with stress-ng:

while true; do
    stress-ng --sched deadline --sched-period 1000000000 \
            --sched-runtime 800000000 --sched-deadline \
            1000000000 --mmapfork 23 -t 20
done

Reported-by: Hu Chunyu
Suggested-by: Oleg Nesterov
Suggested-by: Valentin Schneider
Suggested-by: Peter Zijlstra
Signed-off-by: Wander Lairson Costa
Signed-off-by: Peter Zijlstra (Intel)
Link: https://lore.kernel.org/r/20230614122323.37957-2-wander@redhat.com
Signed-off-by: Sasha Levin
---
 include/linux/sched/task.h | 28 +++++++++++++++++++++++++++-
 kernel/fork.c              |  8 ++++++++
 2 files changed, 35 insertions(+), 1 deletion(-)

diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
index d23977e9035d4..0c2d008099151 100644
--- a/include/linux/sched/task.h
+++ b/include/linux/sched/task.h
@@ -108,10 +108,36 @@ static inline struct task_struct *get_task_struct(struct task_struct *t)
 }
 
 extern void __put_task_struct(struct task_struct *t);
+extern void __put_task_struct_rcu_cb(struct rcu_head *rhp);
 
 static inline void put_task_struct(struct task_struct *t)
 {
-	if (refcount_dec_and_test(&t->usage))
+	if (!refcount_dec_and_test(&t->usage))
+		return;
+
+	/*
+	 * under PREEMPT_RT, we can't call put_task_struct
+	 * in atomic context because it will indirectly
+	 * acquire sleeping locks.
+	 *
+	 * call_rcu() will schedule delayed_put_task_struct_rcu()
+	 * to be called in process context.
+	 *
+	 * __put_task_struct() is called when
+	 * refcount_dec_and_test(&t->usage) succeeds.
+	 *
+	 * This means that it can't "conflict" with
+	 * put_task_struct_rcu_user() which abuses ->rcu the same
+	 * way; rcu_users has a reference so task->usage can't be
+	 * zero after rcu_users 1 -> 0 transition.
+	 *
+	 * delayed_free_task() also uses ->rcu, but it is only called
+	 * when it fails to fork a process. Therefore, there is no
+	 * way it can conflict with put_task_struct().
+	 */
+	if (IS_ENABLED(CONFIG_PREEMPT_RT) && !preemptible())
+		call_rcu(&t->rcu, __put_task_struct_rcu_cb);
+	else
 		__put_task_struct(t);
 }
 
diff --git a/kernel/fork.c b/kernel/fork.c
index ace0717c71e27..753e641f617bd 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -764,6 +764,14 @@ void __put_task_struct(struct task_struct *tsk)
 }
 EXPORT_SYMBOL_GPL(__put_task_struct);
 
+void __put_task_struct_rcu_cb(struct rcu_head *rhp)
+{
+	struct task_struct *task = container_of(rhp, struct task_struct, rcu);
+
+	__put_task_struct(task);
+}
+EXPORT_SYMBOL_GPL(__put_task_struct_rcu_cb);
+
 void __init __weak arch_task_cache_init(void) { }
 
 /*
-- 
2.40.1