From: Peter Zijlstra <peterz@infradead.org>
To: Tobias Huschle <huschle@linux.ibm.com>
Cc: Abel Wu <wuyun.abel@bytedance.com>,
Linux Kernel <linux-kernel@vger.kernel.org>,
kvm@vger.kernel.org, virtualization@lists.linux.dev,
netdev@vger.kernel.org, mst@redhat.com, jasowang@redhat.com
Subject: Re: EEVDF/vhost regression (bisected to 86bfbb7ce4f6 sched/fair: Add lag based placement)
Date: Wed, 22 Nov 2023 11:00:16 +0100
Message-ID: <20231122100016.GO8262@noisy.programming.kicks-ass.net>
In-Reply-To: <ZVyt4UU9+XxunIP7@DESKTOP-2CCOB1S.>
On Tue, Nov 21, 2023 at 02:17:21PM +0100, Tobias Huschle wrote:
> We applied both suggested patch options and ran the test again, so
>
> sched/eevdf: Fix vruntime adjustment on reweight
> sched/fair: Update min_vruntime for reweight_entity() correctly
>
> and
>
> sched/eevdf: Delay dequeue
>
> Unfortunately, both variants do NOT fix the problem.
> The regression remains unchanged.
Thanks for testing.
> I will continue familiarizing myself with how cgroups are scheduled and dig
> deeper here. If there are any other ideas, I'd be happy to use them as a
> starting point for further analysis.
>
> Would additional traces still be of interest? If so, I would be glad to
> provide them.
So, since this got bisected to the placement logic but is a cgroup-related
issue, I was thinking that 'Delay dequeue' might not cut it: that only
works for tasks, not the internal (group) entities.

The below should also work for internal entities, but last time I poked
around with it I saw some regressions elsewhere -- you know how it goes,
fix one situation, wreck another.
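
To illustrate the distinction, a toy userspace model (hypothetical
types, not kernel code): enqueue/placement walks the whole entity
hierarchy, so a task-only delay never reaches the cgroup levels above
the task.

#include <stdio.h>

/* Toy model: placement logic runs at every level of the hierarchy. */
struct entity {
	const char *name;
	struct entity *parent;	/* NULL at the root runqueue */
	int is_task;		/* 1 for a task, 0 for a group entity */
};

int main(void)
{
	struct entity grp  = { "cgroup group entity", NULL, 0 };
	struct entity task = { "vhost worker task", &grp, 1 };

	/* rough analogue of for_each_sched_entity(): walk bottom-up */
	for (struct entity *se = &task; se; se = se->parent)
		printf("placement runs for: %s (%s)\n",
		       se->name, se->is_task ? "task" : "group");
	return 0;
}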
But still, could you please give it a go -- it applies cleanly to Linus'
master and -rc2.
---
Subject: sched/eevdf: Revenge of the Sith^WSleepers
Allow tasks that have received excess service (negative lag) to regain
parity (zero lag) by sleeping: time spent sleeping earns their lag back,
but no further than zero.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 kernel/sched/fair.c     | 36 ++++++++++++++++++++++++++++++++++++
 kernel/sched/features.h |  6 ++++++
 2 files changed, 42 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d7a3c63a2171..b975e4b07a68 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5110,6 +5110,33 @@ static inline void update_misfit_status(struct task_struct *p, struct rq *rq) {}
 
 #endif /* CONFIG_SMP */
 
+static inline u64
+entity_vlag_sleeper(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
+{
+	u64 now, vdelta;
+	s64 delta;
+
+	if (!(flags & ENQUEUE_WAKEUP))
+		return se->vlag;
+
+	if (flags & ENQUEUE_MIGRATED)
+		return 0;
+
+	now = rq_clock_task(rq_of(cfs_rq));
+	delta = now - se->exec_start;
+	if (delta < 0)
+		return se->vlag;
+
+	if (sched_feat(GENTLE_SLEEPER))
+		delta /= 2;
+
+	vdelta = __calc_delta(delta, NICE_0_LOAD, &cfs_rq->load);
+	if (vdelta < -se->vlag)
+		return se->vlag + vdelta;
+
+	return 0;
+}
+
 static void
 place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 {
@@ -5133,6 +5160,15 @@ place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 
 		lag = se->vlag;
 
+		/*
+		 * Allow tasks that have received too much service (negative
+		 * lag) to (re)gain parity (zero lag) by sleeping for the
+		 * equivalent duration. This ensures they will be readily
+		 * eligible.
+		 */
+		if (sched_feat(PLACE_SLEEPER) && lag < 0)
+			lag = entity_vlag_sleeper(cfs_rq, se, flags);
+
 		/*
 		 * If we want to place a task and preserve lag, we have to
 		 * consider the effect of the new entity on the weighted
diff --git a/kernel/sched/features.h b/kernel/sched/features.h
index a3ddf84de430..722282d3ed07 100644
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -7,6 +7,12 @@
 SCHED_FEAT(PLACE_LAG, true)
 SCHED_FEAT(PLACE_DEADLINE_INITIAL, true)
 SCHED_FEAT(RUN_TO_PARITY, true)
+/*
+ * Let sleepers earn back lag, but not more than 0-lag. GENTLE_SLEEPERS earn at
+ * half the speed.
+ */
+SCHED_FEAT(PLACE_SLEEPER, true)
+SCHED_FEAT(GENTLE_SLEEPER, true)
 
 /*
  * Prefer to schedule the task we woke last (assuming it failed
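
To make the earn-back arithmetic concrete, a rough userspace model of
entity_vlag_sleeper() (simplified weights and a hypothetical helper
name, not the kernel's __calc_delta()):

#include <stdio.h>
#include <stdint.h>

#define NICE_0_LOAD 1024	/* nice-0 weight, simplified */

/* Sleeping 'slept_ns' earns negative lag back toward zero; with
 * 'gentle' set it earns at half speed (GENTLE_SLEEPER). */
static int64_t vlag_after_sleep(int64_t vlag, uint64_t slept_ns,
				uint64_t rq_load, int gentle)
{
	uint64_t vdelta;

	if (vlag >= 0)			/* only excess service is paid back */
		return vlag;
	if (gentle)
		slept_ns /= 2;
	/* wall-clock sleep scaled to virtual time by the rq's load */
	vdelta = slept_ns * NICE_0_LOAD / rq_load;
	if (vdelta < (uint64_t)-vlag)	/* not enough sleep: partial payback */
		return vlag + (int64_t)vdelta;
	return 0;			/* capped at zero lag */
}

int main(void)
{
	/* -3ms lag, 10ms asleep, rq at twice nice-0 load, gentle:
	 * (10ms / 2) * 1024 / 2048 = 2.5ms earned -> new lag -0.5ms */
	printf("new vlag: %lld ns\n",
	       (long long)vlag_after_sleep(-3000000, 10000000,
					   2 * NICE_0_LOAD, 1));
	return 0;
}

For testing, both knobs should be runtime-toggleable in the usual way,
e.g.:

  echo NO_PLACE_SLEEPER > /sys/kernel/debug/sched/features

(assuming debugfs is mounted; older kernels expose this as
/sys/kernel/debug/sched_features).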