Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Sun, 22 Mar 2026 19:34:01 -0400
Subject: Re: [PATCH sched_ext/for-7.1] sched_ext: Use irq_work_queue_on() in schedule_deferred()
From: "Emil Tsalapatis"
To: "Tejun Heo", "David Vernet", "Andrea Righi", "Changwoo Min"
X-Mailer: aerc 0.20.1

On Sun Mar 22, 2026 at 4:33 PM EDT, Tejun Heo wrote:
> schedule_deferred() uses irq_work_queue() which always queues on the
> calling CPU. The deferred work can run from any CPU correctly, and the
> _locked() path already processes remote rqs from the calling CPU. However,
> when falling through to the irq_work path, queuing on the target CPU is
> preferable as the work can run sooner via IPI delivery rather than waiting
> for the calling CPU to re-enable IRQs.
>
> Currently, only reenqueue operations use this path - either BPF-initiated
> reenqueue targeting a remote rq, or IMMED reenqueue when the target CPU is
> busy running userspace (not in balance or wakeup, so the _locked() fast
> paths aren't available). Use irq_work_queue_on() to target the owning CPU.
>
> This improves IMMED reenqueue latency when tasks are dispatched to
> remote local DSQs.
> Testing on a 24-CPU AMD Ryzen 3900X with scx_qmap
> -I -F 50 (ALWAYS_ENQ_IMMED, every 50th enqueue forced to prev_cpu's
> local DSQ) under heavy mixed load (2x CPU oversubscription, yield and
> context-switch pressure, SCHED_FIFO bursts, periodic fork storms, mixed
> nice levels, C-states disabled), measuring local DSQ residence time
> (insert to remove) over 5 x 120s runs (~1.2M tasks per set):
>
>   >128us outliers: 71 -> 39 (-45%)
>   >256us outliers: 59 -> 36 (-39%)
>
> Signed-off-by: Tejun Heo

Reviewed-by: Emil Tsalapatis

> ---
>  kernel/sched/ext.c | 14 +++++++++++---
>  1 file changed, 11 insertions(+), 3 deletions(-)
>
> --- a/kernel/sched/ext.c
> +++ b/kernel/sched/ext.c
> @@ -1164,10 +1164,18 @@ static void deferred_irq_workfn(struct i
>  static void schedule_deferred(struct rq *rq)
>  {
>  	/*
> -	 * Queue an irq work. They are executed on IRQ re-enable which may take
> -	 * a bit longer than the scheduler hook in schedule_deferred_locked().
> +	 * This is the fallback when schedule_deferred_locked() can't use
> +	 * the cheaper balance callback or wakeup hook paths (the target
> +	 * CPU is not in balance or wakeup). Currently, this is primarily
> +	 * hit by reenqueue operations targeting a remote CPU.
> +	 *
> +	 * Queue on the target CPU. The deferred work can run from any CPU
> +	 * correctly - the _locked() path already processes remote rqs from
> +	 * the calling CPU - but targeting the owning CPU allows IPI delivery
> +	 * without waiting for the calling CPU to re-enable IRQs and is
> +	 * cheaper as the reenqueue runs locally.
>  	 */
> -	irq_work_queue(&rq->scx.deferred_irq_work);
> +	irq_work_queue_on(&rq->scx.deferred_irq_work, cpu_of(rq));
>  }
>
>  /**
> --
> tejun