public inbox for linux-kernel@vger.kernel.org
From: Chengming Zhou <zhouchengming@bytedance.com>
To: Josh Don <joshdon@google.com>, Peter Zijlstra <peterz@infradead.org>
Cc: "Ingo Molnar" <mingo@redhat.com>,
	"Juri Lelli" <juri.lelli@redhat.com>,
	"Vincent Guittot" <vincent.guittot@linaro.org>,
	"Dietmar Eggemann" <dietmar.eggemann@arm.com>,
	"Steven Rostedt" <rostedt@goodmis.org>,
	"Ben Segall" <bsegall@google.com>, "Mel Gorman" <mgorman@suse.de>,
	"Daniel Bristot de Oliveira" <bristot@redhat.com>,
	"Valentin Schneider" <vschneid@redhat.com>,
	linux-kernel@vger.kernel.org, "Tejun Heo" <tj@kernel.org>,
	"Michal Koutný" <mkoutny@suse.com>,
	"Christian Brauner" <brauner@kernel.org>,
	"Zefan Li" <lizefan.x@bytedance.com>
Subject: Re: [PATCH v3] sched: async unthrottling for cfs bandwidth
Date: Sun, 20 Nov 2022 10:22:40 +0800	[thread overview]
Message-ID: <094299a3-f039-04c1-d749-2bea0bc14246@linux.dev> (raw)
In-Reply-To: <CABk29NtSmXVCvkdpymeam7AYmXhZy2JLYLPFTdKpk5g6AN1-zg@mail.gmail.com>

On 2022/11/19 03:25, Josh Don wrote:
> On Fri, Nov 18, 2022 at 4:47 AM Peter Zijlstra <peterz@infradead.org> wrote:
>>
>> preempt_disable() -- through rq->lock -- also holds off rcu. Strictly
>> speaking this here is superfluous. But if you want it as an annotation,
>> that's fine I suppose.
> 
> Yep, I purely added this as extra annotation for future readers.
> 
>> Ideally we'd first queue all the remotes and then process local, but
>> given how all this is organized that doesn't seem trivial to arrange.
>>
>> Maybe have this function return false when local and save that cfs_rq in
>> a local var to process again later, dunno, that might turn messy.
> 
> Maybe something like this? Apologies for inline diff formatting.
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 012ec9d03811..100dae6023da 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5520,12 +5520,15 @@ static bool distribute_cfs_runtime(struct cfs_bandwidth *cfs_b)
>         struct cfs_rq *cfs_rq;
>         u64 runtime, remaining = 1;
>         bool throttled = false;
> +       int this_cpu = smp_processor_id();
> +       struct cfs_rq *local_unthrottle = NULL;
> +       struct rq *rq;
> +       struct rq_flags rf;
> 
>         rcu_read_lock();
>         list_for_each_entry_rcu(cfs_rq, &cfs_b->throttled_cfs_rq,
>                                 throttled_list) {
> -               struct rq *rq = rq_of(cfs_rq);
> -               struct rq_flags rf;
> +               rq = rq_of(cfs_rq);
> 
>                 if (!remaining) {
>                         throttled = true;
> @@ -5556,14 +5559,36 @@ static bool distribute_cfs_runtime(struct cfs_bandwidth *cfs_b)
>                 cfs_rq->runtime_remaining += runtime;
> 
>                 /* we check whether we're throttled above */
> -               if (cfs_rq->runtime_remaining > 0)
> -                       unthrottle_cfs_rq_async(cfs_rq);
> +               if (cfs_rq->runtime_remaining > 0) {
> +                       if (cpu_of(rq) != this_cpu ||
> +                           SCHED_WARN_ON(local_unthrottle)) {
> +                               unthrottle_cfs_rq_async(cfs_rq);
> +                       } else {
> +                               local_unthrottle = cfs_rq;
> +                       }
> +               } else {
> +                       throttled = true;
> +               }

Hello,

I don't get why the local unthrottle is put after all the remote CPUs, since
this list is FIFO (the earliest throttled cfs_rq is at the head).

Should we distribute runtime in FIFO order?

Thanks.

> 
>  next:
>                 rq_unlock_irqrestore(rq, &rf);
>         }
>         rcu_read_unlock();
> 
> +       /*
> +        * We prefer to stage the async unthrottles of all the remote cpus
> +        * before we do the inline unthrottle locally. Note that
> +        * unthrottle_cfs_rq_async() on the local cpu is actually synchronous,
> +        * but it includes extra WARNs to make sure the cfs_rq really is
> +        * still throttled.
> +        */
> +       if (local_unthrottle) {
> +               rq = cpu_rq(this_cpu);
> +               rq_lock_irqsave(rq, &rf);
> +               unthrottle_cfs_rq_async(local_unthrottle);
> +               rq_unlock_irqrestore(rq, &rf);
> +       }
> +
>         return throttled;
>  }
> 
> Note that one change we definitely want is the extra setting of
> throttled = true in the case that cfs_rq->runtime_remaining <= 0, to
> catch the case where we run out of runtime to distribute on the last
> entity in the list.
> 
>>> +
>>> +     /* Already enqueued */
>>> +     if (SCHED_WARN_ON(!list_empty(&cfs_rq->throttled_csd_list)))
>>> +             return;
>>> +
>>> +     list_add_tail(&cfs_rq->throttled_csd_list, &rq->cfsb_csd_list);
>>> +
>>> +     smp_call_function_single_async(cpu_of(rq), &rq->cfsb_csd);
>>
>> Hurmph.. so I was expecting something like:
>>
>>         first = list_empty(&rq->cfsb_csd_list);
>>         list_add_tail(&cfs_rq->throttled_csd_list, &rq->cfsb_csd_list);
>>         if (first)
>>                 smp_call_function_single_async(cpu_of(rq), &rq->cfsb_csd);
>>
>> But I suppose I'm remembering the 'old' version. I don't think it is
>> broken as written. There's a very narrow window where you'll end up
>> sending a second IPI for naught, but meh.
> 
> The CSD doesn't get unlocked until right before we call the func().
> But you're right that that's a (very) narrow window for an extra IPI.
> Please feel free to modify the patch with that diff if you like.
> 
>>
>>> +}
>>
>> Let me go queue this thing, we can always improve upon matters later.
> 
> Thanks! Please add at least the extra assignment of 'throttled = true'
> from the diff above, but feel free to squash both the diffs if it
> makes sense to you.
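
For reference, here is a minimal sketch (not part of the patch or this thread)
of how the list_empty()/'first' suggestion quoted above could slot into the
enqueue helper, reusing the names from the quoted hunks (throttled_csd_list,
cfsb_csd_list, cfsb_csd). The local-CPU shortcut and the assumption that the
caller already holds the target rq's lock are taken from the discussion:

static void unthrottle_cfs_rq_async(struct cfs_rq *cfs_rq)
{
        struct rq *rq = rq_of(cfs_rq);
        bool first;

        /* On the local CPU the unthrottle is done synchronously. */
        if (rq == this_rq()) {
                unthrottle_cfs_rq(cfs_rq);
                return;
        }

        /* Already enqueued for an async unthrottle. */
        if (SCHED_WARN_ON(!list_empty(&cfs_rq->throttled_csd_list)))
                return;

        first = list_empty(&rq->cfsb_csd_list);
        list_add_tail(&cfs_rq->throttled_csd_list, &rq->cfsb_csd_list);

        /* Only kick the remote CPU when its list was previously empty. */
        if (first)
                smp_call_function_single_async(cpu_of(rq), &rq->cfsb_csd);
}

As noted above, the CSD stays locked until just before the callback runs, so
even without the 'first' check a duplicate IPI is only possible in a very
narrow window.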


Thread overview: 21+ messages
2022-11-17  0:54 [PATCH v3] sched: async unthrottling for cfs bandwidth Josh Don
2022-11-18 12:47 ` Peter Zijlstra
2022-11-18 19:25   ` Josh Don
2022-11-20  2:22     ` Chengming Zhou [this message]
2022-11-21 11:58       ` Peter Zijlstra
2022-11-21 19:37         ` Josh Don
2022-11-22 10:35           ` Peter Zijlstra
2022-11-25  8:57             ` Peter Zijlstra
2022-11-25  8:59               ` Peter Zijlstra
2022-11-25  9:12                 ` Peter Zijlstra
2022-11-29  1:38                   ` Josh Don
2022-11-29  1:32               ` Josh Don
2022-11-21 12:34     ` Peter Zijlstra
2022-11-21 18:02       ` Michal Koutný
2022-11-21 19:31       ` Josh Don
2022-11-22  5:55         ` Aaron Lu
2022-11-22 10:30         ` Peter Zijlstra
2022-11-22  6:08     ` Aaron Lu
2022-11-22 19:41       ` Josh Don
2022-11-24  9:12         ` Peter Zijlstra
2022-12-27 12:13 ` [tip: sched/core] sched: Async " tip-bot2 for Josh Don
