From: luca abeni <luca.abeni@santannapisa.it>
To: Vineeth Remanan Pillai <vineeth@bitbyteword.org>
Cc: Juri Lelli <juri.lelli@redhat.com>,
Daniel Bristot de Oliveira <bristot@redhat.com>,
Peter Zijlstra <peterz@infradead.org>,
Ingo Molnar <mingo@redhat.com>,
Vincent Guittot <vincent.guittot@linaro.org>,
Steven Rostedt <rostedt@goodmis.org>,
Joel Fernandes <joel@joelfernandes.org>,
Dietmar Eggemann <dietmar.eggemann@arm.com>,
Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
Valentin Schneider <vschneid@redhat.com>,
Jonathan Corbet <corbet@lwn.net>,
linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org
Subject: Re: [PATCH v3 2/5] sched/deadline: Fix reclaim inaccuracy with SMP
Date: Tue, 16 May 2023 09:37:29 +0200
Message-ID: <20230516093729.0771938c@luca64>
In-Reply-To: <CAO7JXPgq8V5yHM6F2+iXf4XJ9cyT30Hn4ot5b2k7srjsaPc3JQ@mail.gmail.com>
On Mon, 15 May 2023 21:47:03 -0400
Vineeth Remanan Pillai <vineeth@bitbyteword.org> wrote:
> Hi Luca,
>
> On Mon, May 15, 2023 at 4:06 AM luca abeni
> <luca.abeni@santannapisa.it> wrote:
>
> >
> > this patch is giving me some headaches:
> >
> Sorry about that... I was also stressing over how to get the
> reclaiming done right for the past couple of days ;-)
Well, this math is hard... :)
> > Vineeth Pillai <vineeth@bitbyteword.org> wrote:
> > [...]
> > > * Uextra: Extra bandwidth not reserved:
> > > - * = Umax - \Sum(u_i / #cpus in the root domain)
> > > + * = Umax - this_bw
> >
> > While I agree that this setting should be OK, it ends up with
> > dq = -Uact / Umax * dt
> > which I remember originally trying, and it gave some issues
> > (I do not remember the details, but I think that with N
> > identical reclaiming tasks on M CPUs, N > M, the reclaimed
> > time is not distributed equally among them?)
> >
> I have noticed this behaviour where the reclaimed time is not equally
> distributed when we have more tasks than available processors. But it
> depended on where the tasks were scheduled: within the same CPU, the
> distribution seemed to be proportional.
Yes, as far as I remember it is due to migrations. IIRC, the problem is
related to the fact that, with "dq = -Uact / Umax * dt", a task running
on a core might end up trying to reclaim some idle time from other
cores (which is obviously not possible).
This is why m-GRUB used "1 - Uinact" instead of "Uact".
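To see the mechanism, here is a toy user-space model (not the kernel's
grub_reclaim(); the names, fixed-point layout, and runqueue values below
are all made-up assumptions for the example) of per-runqueue scaling
with "dq = -Uact / Umax * dt":

/*
 * Toy model, NOT the kernel implementation: per-runqueue GRUB
 * scaling with "dq = -(Uact / Umax) * dt".  Bandwidths are
 * fixed-point fractions of BW_UNIT, mirroring the kernel's
 * convention; the runqueue values are invented for the example.
 */
#include <stdio.h>
#include <stdint.h>

#define BW_SHIFT 20
#define BW_UNIT  (1ULL << BW_SHIFT)

struct toy_rq {
        uint64_t running_bw;    /* Uact of this runqueue only */
};

/* Budget charged for 'delta' ns of execution on runqueue 'rq'. */
static uint64_t toy_reclaim(uint64_t delta, const struct toy_rq *rq,
                            uint64_t umax)
{
        return delta * rq->running_bw / umax;
}

int main(void)
{
        uint64_t umax = BW_UNIT * 95 / 100;     /* e.g. 95% RT limit */
        struct toy_rq busy = { .running_bw = BW_UNIT * 9 / 10 };
        struct toy_rq idle = { .running_bw = BW_UNIT * 2 / 10 };
        uint64_t delta = 1000000;               /* 1 ms of execution */

        /*
         * The scaling factor is computed from the local Uact only,
         * so the same task is charged ~0.95 ms on the busy runqueue
         * but only ~0.21 ms on the nearly idle one.
         */
        printf("charged on busy rq: %llu ns\n",
               (unsigned long long)toy_reclaim(delta, &busy, umax));
        printf("charged on idle rq: %llu ns\n",
               (unsigned long long)toy_reclaim(delta, &idle, umax));
        return 0;
}

The same 1 ms of execution is charged very differently depending only
on the local runqueue's Uact, so where a task happens to migrate
decides how much it reclaims.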
[...]
> > I need to think a little bit more about this...
> >
> Thanks for looking into this.. I have a basic idea why tasks with less
> bandwidth reclaim less in SMP when the number of tasks is less than the
> number of CPUs, but I do not yet have a verifiable fix for it.
I think I can now understand at least part of the problem. In my
understanding, the problem is due to using
dq = -(max{u_i, (Umax - Uinact - Uextra)} / Umax) * dt
It should really be
dq = -(max{u_i, (1 - Uinact - Uextra)} / Umax) * dt
(since we divide by Umax, using "Umax - ..." will lead to reclaiming up
to "Umax / Umax" = 1)
Did you try this equation?
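To make the difference concrete, here is a minimal user-space sketch
(illustrative values only; nothing below is from the patch) that
evaluates the two factors in a fully committed, nothing-inactive case:

/*
 * Illustrative comparison of the two reclaiming factors:
 *
 *   patch v3: f = max{u_i, (Umax - Uinact - Uextra)} / Umax
 *   proposed: f = max{u_i, (1    - Uinact - Uextra)} / Umax
 *
 * With Uinact = Uextra = 0, the first factor tops out at
 * Umax / Umax = 1 (the point made above), while the second
 * reaches 1 / Umax > 1, i.e. only the second can deplete the
 * budget faster than wall-clock time.
 */
#include <stdio.h>

int main(void)
{
        double umax = 0.95, u_i = 0.5;          /* made-up values  */
        double u_inact = 0.0, u_extra = 0.0;    /* fully committed */

        double f_v3  = ((u_i > umax - u_inact - u_extra) ?
                        u_i : umax - u_inact - u_extra) / umax;
        double f_new = ((u_i > 1.0 - u_inact - u_extra) ?
                        u_i : 1.0 - u_inact - u_extra) / umax;

        printf("patch v3 factor: %f\n", f_v3);  /* 1.000000   */
        printf("proposed factor: %f\n", f_new); /* ~1.052632  */
        return 0;
}

Since the factor multiplies the consumed time, a factor capped at 1
means the budget never depletes faster than wall-clock time, which is
what lets the reclaimed bandwidth climb to 100% instead of stopping at
Umax.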
I'll write more about this later... And thanks for coping with all my
comments!
Luca
>
> If patches 1 and 4 look good to you, we shall drop 2 and 3 and fix
> the SMP issue with varying bandwidths separately. Patch 4 would
> differ a bit when I remove 2 and 3, so as to use the formula:
> "dq = -(max{u, (Umax_reclaim - Uinact - Uextra)} / Umax_reclaim) dt"
>
> Thanks for your patience with all this brainstorming :-)
>
> Thanks,
> Vineeth