Subject: Re: [PATCH v2] sched_ext: Honor SCX_OPS_ALWAYS_ENQ_IMMED on framework-internal goto-local paths
From: zhidao su
Date: Mon, 20 Apr 2026 13:29:49 +0800
To: Andrea Righi
Cc: Tejun Heo, David Vernet, Changwoo Min, sched-ext@lists.linux.dev, linux-kernel@vger.kernel.org
References: <20260420035646.1715762-1-suzhidao@xiaomi.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Hi Andrea,

Thank you for the careful review -- you're right. I didn't notice the
re-enqueue side effect; I'll drop this patch and think it over again.

Apologies for the noise, including the v1 submission error that sent
unrelated files.

Thanks,
zhidao

> On Apr 20, 2026, at 13:23, Andrea Righi wrote:
> 
> Hi zhidao,
> 
> On Mon, Apr 20, 2026 at 11:56:46AM +0800, zhidao su wrote:
>> SCX_OPS_ALWAYS_ENQ_IMMED promises that SCX_ENQ_IMMED is set on all local
>> DSQ enqueues. scx_vet_enq_flags() enforces this for BPF kfunc callers
>> (scx_bpf_dsq_insert, scx_bpf_dsq_move_*), but the framework-internal
>> goto-local paths in do_enqueue_task() -- PF_EXITING, migration-disabled,
>> and !scx_rq_online fallbacks -- bypass scx_vet_enq_flags() entirely.
>>
>> When a scheduler sets SCX_OPS_ALWAYS_ENQ_IMMED, tasks hitting these
>> goto-local paths arrive at dispatch_enqueue() without SCX_ENQ_IMMED in
>> enq_flags, violating the flag's documented semantics.
>>
>> This can be observed with trace_printk instrumentation at the enqueue:
>> label while running a multi-threaded fork-exit workload under a scheduler
>> with SCX_OPS_ALWAYS_ENQ_IMMED:
>>
>> Before (scx_simple + ALWAYS_ENQ_IMMED, 2 CPUs, mmap-contention exit):
>>   95 PF_EXITING local enqueues, 95/95 IMMED=0 ALW=1          <-- bug
>>
>> After:
>>   1030 PF_EXITING local enqueues, 1030/1030 IMMED=1 ALW=1    <-- fixed
>>
>> Fix by checking SCX_OPS_ALWAYS_ENQ_IMMED at the enqueue: label and
>> setting SCX_ENQ_IMMED when dispatching to a local DSQ. This mirrors
>> what scx_vet_enq_flags() does for BPF callers.
>>
>> Fixes: 3229ac4a5ef5 ("sched_ext: Add SCX_OPS_ALWAYS_ENQ_IMMED ops flag")
>> Signed-off-by: zhidao su
>> ---
>> v2: Resend to correct a submission error in v1 where unrelated files
>>     were accidentally included in the patch. The code change is
>>     identical; only kernel/sched/ext.c is modified. Apologies for
>>     the noise.
>> ---
>>  kernel/sched/ext.c | 15 ++++++++++++++-
>>  1 file changed, 14 insertions(+), 1 deletion(-)
>>
>> diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
>> index 9628c64e5592..0758f5e5a8f0 100644
>> --- a/kernel/sched/ext.c
>> +++ b/kernel/sched/ext.c
>> @@ -1859,7 +1859,9 @@ static void do_enqueue_task(struct rq *rq, struct task_struct *p, u64 enq_flags,
>>  	 * Clear persistent TASK_IMMED for fresh enqueues, see dsq_inc_nr().
>>  	 * Note that exiting and migration-disabled tasks that skip
>>  	 * ops.enqueue() below will lose IMMED protection unless
>> -	 * %SCX_OPS_ENQ_EXITING / %SCX_OPS_ENQ_MIGRATION_DISABLED are set.
>> +	 * %SCX_OPS_ENQ_EXITING / %SCX_OPS_ENQ_MIGRATION_DISABLED are set,
>> +	 * or %SCX_OPS_ALWAYS_ENQ_IMMED is enabled (which re-applies IMMED
>> +	 * at the enqueue: label below).
>>  	 */
>>  	p->scx.flags &= ~SCX_TASK_IMMED;
>>
>> @@ -1949,6 +1951,17 @@ static void do_enqueue_task(struct rq *rq, struct task_struct *p, u64 enq_flags,
>>  	 */
>>  	touch_core_sched(rq, p);
>>  	refill_task_slice_dfl(sch, p);
>> +
>> +	/*
>> +	 * Honor %SCX_OPS_ALWAYS_ENQ_IMMED for framework-internal local DSQ
>> +	 * enqueues (PF_EXITING, migration-disabled, !online fallbacks).
>> +	 * scx_vet_enq_flags() already handles this for BPF kfunc callers,
>> +	 * but the goto-local paths above bypass it.
>> +	 */
>> +	if ((sch->ops.flags & SCX_OPS_ALWAYS_ENQ_IMMED) &&
>> +	    dsq == &rq->scx.local_dsq)
>> +		enq_flags |= SCX_ENQ_IMMED;
>> +
> 
> I'm not sure this should be applied across all fallback cases; it's probably
> safer to avoid triggering re-enqueues for the internal events, especially
> considering that the BPF scheduler doesn't have visibility into them.
> 
> If we do this, we should also update the %SCX_ENQ_IMMED documentation, which
> says:
> 
>  * Exiting and migration-disabled tasks bypass ops.enqueue() and
>  * are placed directly on a local DSQ without IMMED protection
>  * unless %SCX_OPS_ENQ_EXITING and %SCX_OPS_ENQ_MIGRATION_DISABLED
>  * are set respectively.
> 
> But again, do we actually want to do this? If SCX_OPS_ENQ_EXITING and
> SCX_OPS_ENQ_MIGRATION_DISABLED aren't set, these cases are handled internally
> by the sched_ext core, so triggering a re-enqueue seems unnecessary, as the
> BPF scheduler wouldn't have visibility of such events anyway.
> 
>>  	dispatch_enqueue(sch, rq, dsq, p, enq_flags);
>>  }
>>
>> --
>> 2.43.0
>>
> 
> Thanks,
> -Andrea