public inbox for linux-kernel@vger.kernel.org
From: Andrea Righi <arighi@nvidia.com>
To: Tejun Heo <tj@kernel.org>
Cc: void@manifault.com, multics69@gmail.com,
	linux-kernel@vger.kernel.org, sched-ext@meta.com
Subject: Re: [PATCH 06/12] sched_ext: Move dsq_hash into scx_sched
Date: Sat, 26 Apr 2025 22:25:09 +0200	[thread overview]
Message-ID: <aA1BJewva-MMTabR@gpd3> (raw)
In-Reply-To: <20250425215840.2334972-7-tj@kernel.org>

Hi Tejun,

On Fri, Apr 25, 2025 at 11:58:21AM -1000, Tejun Heo wrote:
> User DSQs are going to become per scheduler instance. Move dsq_hash into
> scx_sched. This shifts the code that assumes scx_root to be the only
> scx_sched instance up the call stack but doesn't remove them yet.
> 
> Signed-off-by: Tejun Heo <tj@kernel.org>
> ---
...
> @@ -6858,7 +6889,11 @@ __bpf_kfunc s32 scx_bpf_dsq_nr_queued(u64 dsq_id)
>   */
>  __bpf_kfunc void scx_bpf_destroy_dsq(u64 dsq_id)
>  {
> -	destroy_dsq(dsq_id);
> +	struct scx_sched *sch;
> +
> +	sch = rcu_dereference(scx_root);
> +	if (sch)
> +		destroy_dsq(sch, dsq_id);
>  }
>  
>  /**

I just triggered the following lockdep splat while running the create_dsq
selftest. When scx_bpf_destroy_dsq() is called from ops.init() we're
missing rcu_read_lock/unlock() around the scx_root dereference. Should we
just add that?

arighi@virtme-ng~/s/l/t/t/s/sched_ext (scx)> sudo ./runner -t create_dsq
===== START =====
TEST: create_dsq
DESCRIPTION: Create and destroy a dsq in a loop
OUTPUT:
[   72.890532]
[   72.890621] =============================
[   72.890652] WARNING: suspicious RCU usage
[   72.890683] 6.14.0-virtme #33 Not tainted
[   72.890720] -----------------------------
[   72.890754] kernel/sched/ext.c:6879 suspicious rcu_dereference_check() usage!
[   72.890819]
[   72.890819] other info that might help us debug this:
[   72.890819]
[   72.890879]
[   72.890879] rcu_scheduler_active = 2, debug_locks = 1
[   72.890935] 4 locks held by runner/2097:
[   72.890967]  #0: ffffffffb239d968 (update_mutex){+.+.}-{4:4}, at: bpf_struct_ops_link_create+0x112/0x180
[   72.891050]  #1: ffffffffb228aa68 (scx_enable_mutex){+.+.}-{4:4}, at: scx_enable.isra.0+0x65/0x1420
[   72.891141]  #2: ffffffffb2274c90 (cpu_hotplug_lock){++++}-{0:0}, at: scx_enable.isra.0+0x516/0x1420
[   72.891242]  #3: ffffffffb236fb80 (rcu_read_lock_trace){....}-{0:0}, at: __bpf_prog_enter_sleepable+0x27/0xa0
[   72.891331]
[   72.891331] stack backtrace:
[   72.891377] CPU: 1 UID: 0 PID: 2097 Comm: runner Not tainted 6.14.0-virtme #33 PREEMPT(full)
[   72.891379] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Arch Linux 1.16.3-1-1 04/01/2014
[   72.891380] Sched_ext: create_dsq (enabling)
[   72.891381] Call Trace:
[   72.891383]  <TASK>
[   72.891385]  dump_stack_lvl+0x9e/0xe0
[   72.891390]  lockdep_rcu_suspicious+0x14a/0x1b0
[   72.891396]  scx_bpf_destroy_dsq+0x71/0x80
[   72.891401]  bpf_prog_4b98ae790b57e181_create_dsq_init+0xcd/0xe0
[   72.891403]  ? __bpf_prog_enter_sleepable+0x27/0xa0
[   72.891407]  bpf__sched_ext_ops_init+0x40/0xa4
[   72.891411]  ? scx_idle_enable+0xf0/0x130
[   72.891414]  scx_enable.isra.0+0x54b/0x1420
[   72.891440]  bpf_struct_ops_link_create+0x12c/0x180
[   72.891447]  __sys_bpf+0x1fdd/0x2a90
[   72.891470]  __x64_sys_bpf+0x1e/0x30
[   72.891473]  do_syscall_64+0xbb/0x1d0
[   72.891477]  entry_SYSCALL_64_after_hwframe+0x77/0x7f
[   72.891479] RIP: 0033:0x7f82b9508fad
[   72.891481] Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 2b 7d 0c 00 f7 d8 64 89 01 48
[   72.891482] RSP: 002b:00007ffcd032fb58 EFLAGS: 00000206 ORIG_RAX: 0000000000000141
[   72.891483] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f82b9508fad
[   72.891484] RDX: 0000000000000040 RSI: 00007ffcd032fc40 RDI: 000000000000001c
[   72.891484] RBP: 00007ffcd032fb70 R08: 00007ffcd032fc40 R09: 00007ffcd032fc40
[   72.891485] R10: 00007ffcd032f9e0 R11: 0000000000000206 R12: 00007ffcd0330dfc
[   72.891485] R13: 000055e7e8854160 R14: 0000000000000000 R15: 000055e7e8854160
[   72.891495]  </TASK>
[   72.922754] sched_ext: BPF scheduler "create_dsq" enabled
[   72.940151] sched_ext: BPF scheduler "create_dsq" disabled (unregistered from user space)
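
Something like this (untested) on top of your patch seems like it would
address it, by wrapping the lookup in an explicit RCU read-side critical
section:

__bpf_kfunc void scx_bpf_destroy_dsq(u64 dsq_id)
{
	struct scx_sched *sch;

	rcu_read_lock();
	sch = rcu_dereference(scx_root);
	if (sch)
		destroy_dsq(sch, dsq_id);
	rcu_read_unlock();
}

That keeps the kfunc safe regardless of whether the caller (e.g. a
sleepable ops.init() program) already holds rcu_read_lock().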

That's the only issue I found; everything else looks good to me.

Thanks,
-Andrea


Thread overview: 20+ messages
2025-04-25 21:58 [PATCHSET v2 sched_ext/for-6.16] sched_ext: Introduce scx_sched Tejun Heo
2025-04-25 21:58 ` [PATCH 01/12] " Tejun Heo
2025-04-25 21:58 ` [PATCH 02/12] sched_ext: Avoid NULL scx_root deref through SCX_HAS_OP() Tejun Heo
2025-04-25 21:58 ` [PATCH 03/12] sched_ext: Use dynamic allocation for scx_sched Tejun Heo
2025-04-25 21:58 ` [PATCH 04/12] sched_ext: Inline create_dsq() into scx_bpf_create_dsq() Tejun Heo
2025-04-25 21:58 ` [PATCH 05/12] sched_ext: Factor out scx_alloc_and_add_sched() Tejun Heo
2025-04-25 21:58 ` [PATCH 06/12] sched_ext: Move dsq_hash into scx_sched Tejun Heo
2025-04-26 20:25   ` Andrea Righi [this message]
2025-04-28 20:43   ` [PATCH v2 " Tejun Heo
2025-04-28 23:34     ` Changwoo Min
2025-04-25 21:58 ` [PATCH 07/12] sched_ext: Move global_dsqs " Tejun Heo
2025-04-25 21:58 ` [PATCH 08/12] sched_ext: Relocate scx_event_stats definition Tejun Heo
2025-04-25 21:58 ` [PATCH 09/12] sched_ext: Factor out scx_read_events() Tejun Heo
2025-04-25 21:58 ` [PATCH 10/12] sched_ext: Move event_stats_cpu into scx_sched Tejun Heo
2025-04-25 21:58 ` [PATCH 11/12] sched_ext: Move disable machinery " Tejun Heo
2025-04-25 21:58 ` [PATCH 12/12] sched_ext: Clean up SCX_EXIT_NONE handling in scx_disable_workfn() Tejun Heo
2025-04-27  7:04 ` [PATCHSET v2 sched_ext/for-6.16] sched_ext: Introduce scx_sched Changwoo Min
2025-04-28 20:58 ` Andrea Righi
2025-04-29 18:41 ` Tejun Heo
  -- strict thread matches above, loose matches on Subject: below --
2025-04-23 23:44 [PATCHSET " Tejun Heo
2025-04-23 23:44 ` [PATCH 06/12] sched_ext: Move dsq_hash into scx_sched Tejun Heo
