From: Ingo Molnar <mingo@kernel.org>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Mike Galbraith <umgwanakikbuti@gmail.com>,
Jason Wang <jasowang@redhat.com>,
davem@davemloft.net, netdev@vger.kernel.org,
linux-kernel@vger.kernel.org,
Peter Zijlstra <peterz@infradead.org>,
Ingo Molnar <mingo@elte.hu>
Subject: Re: [PATCH net-next 2/2] net: exit busy loop when another process is runnable
Date: Wed, 13 Apr 2016 09:20:41 +0200
Message-ID: <20160413072041.GA5709@gmail.com>
In-Reply-To: <20160411182111-mutt-send-email-mst@redhat.com>
* Michael S. Tsirkin <mst@redhat.com> wrote:
> On Fri, Aug 22, 2014 at 09:36:53AM +0200, Ingo Molnar wrote:
> >
> > > > diff --git a/include/net/busy_poll.h b/include/net/busy_poll.h
> > > > index 1d67fb6..8a33fb2 100644
> > > > --- a/include/net/busy_poll.h
> > > > +++ b/include/net/busy_poll.h
> > > > @@ -109,7 +109,8 @@ static inline bool sk_busy_loop(struct sock *sk, int nonblock)
> > > >  		cpu_relax();
> > > > 
> > > >  	} while (!nonblock && skb_queue_empty(&sk->sk_receive_queue) &&
> > > > -		 !need_resched() && !busy_loop_timeout(end_time));
> > > > +		 !need_resched() && !busy_loop_timeout(end_time) &&
> > > > +		 nr_running_this_cpu() < 2);
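
[Editorial sketch: the exit condition in the quoted hunk can be modelled in plain user-space C; the boolean parameters are stand-ins for the kernel helpers, not their real signatures.]

```c
#include <assert.h>
#include <stdbool.h>

/* User-space model of the sk_busy_loop() exit condition above:
 * keep spinning only while nothing has arrived, no reschedule is
 * pending, the poll budget has not expired, and -- with the patch
 * applied -- at most one task (the poller itself) is runnable on
 * this CPU. */
static bool keep_spinning(bool nonblock, bool queue_empty,
                          bool resched_pending, bool timed_out,
                          int nr_running_this_cpu)
{
	return !nonblock && queue_empty &&
	       !resched_pending && !timed_out &&
	       nr_running_this_cpu < 2;
}
```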
> >
> > So it's generally a bad idea to couple to the scheduler through
> > such a low-level, implementation-dependent value as
> > 'nr_running'; it causes various problems:
> >
> > - It misses important work that might be pending on this CPU,
> > like RCU callbacks.
> >
> > - It will also over-credit task contexts that might be
> > runnable, but which are less important than the currently
> > running one: such as a SCHED_IDLE task.
> >
> > - It will also over-credit even regular SCHED_NORMAL tasks, if
> > this current task is more important than them: say
> > SCHED_FIFO. A SCHED_FIFO workload should run just as fast
> > with SCHED_NORMAL tasks around, as a SCHED_NORMAL workload
> > on an otherwise idle system.
> >
> > So what you want is a more sophisticated query to the
> > scheduler, a sched_expected_runtime() method that returns the
> > number of nsecs this task is expected to run in the future,
> > which returns 0 if you will be scheduled away on the next
> > schedule(), and returns infinity for a high prio SCHED_FIFO
> > task, or if this SCHED_NORMAL task is on an otherwise idle CPU.
> >
> > It will return a regular time slice value in other cases, when
> > there's some load on the CPU.
> >
> > The polling logic can then do its decision based on that time
> > value.
> >
> > All this can be done reasonably fast and lockless in most
> > cases, so that it can be called from busy-polling code.
> >
> > An added advantage would be that this approach consolidates the
> > somewhat random need_resched() checks into this method as well.
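
[Editorial sketch: a hypothetical user-space illustration of how polling code might consume a sched_expected_runtime() value as proposed above; the function-pointer plumbing and stub names are illustrative, not a proposed kernel API.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical provider: what the scheduler would report for the
 * current task.  0 means "scheduled away at the next schedule()";
 * UINT64_MAX stands in for "infinity" (a high-prio SCHED_FIFO task,
 * or a SCHED_NORMAL task on an otherwise idle CPU). */
typedef uint64_t (*expected_runtime_fn)(void);

/* Example providers for the two extreme cases described above. */
static uint64_t stub_preempt_soon(void) { return 0; }
static uint64_t stub_idle_cpu(void)     { return UINT64_MAX; }

/* Poll only while the expected runtime covers one more polling
 * interval; an imminent reschedule reports 0, so this also subsumes
 * the scattered need_resched() checks. */
static bool keep_polling(expected_runtime_fn sched_expected_runtime,
                         uint64_t poll_interval_ns)
{
	return sched_expected_runtime() >= poll_interval_ns;
}
```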
> >
> > In any case I don't agree with the nr_running_this_cpu()
> > method.
> >
> > (Please Cc: me and lkml to future iterations of this patchset.)
> >
> > Thanks,
> >
> > Ingo
>
> I tried to look into this: it might be even nicer to add
> sched_expected_to_run(time) which tells us whether we expect the current
> task to keep running for the next XX nsecs.
>
> For the fair scheduler, it seems that it could be as simple as
>
> +static bool expected_to_run_fair(struct cfs_rq *cfs_rq, s64 t)
> +{
> +	struct sched_entity *left;
> +	struct sched_entity *curr = cfs_rq->curr;
> +
> +	if (!curr || !curr->on_rq)
> +		return false;
> +
> +	left = __pick_first_entity(cfs_rq);
> +	if (!left)
> +		return true;
> +
> +	return (s64)(curr->vruntime + calc_delta_fair(t, curr) -
> +		     left->vruntime) < 0;
> +}
>
> The reason it seems easier is because that way we can reuse
> calc_delta_fair and don't have to do the reverse translation
> from vruntime to nsec.
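
[Editorial sketch: a simplified user-space model of the comparison in the quoted snippet; the struct layout and the weight handling are assumptions -- the real kernel scales the delta by the entity's load weight inside calc_delta_fair().]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct entity {
	uint64_t vruntime;	/* virtual runtime, ns */
	uint64_t weight;	/* load weight; nice-0 == 1024 here */
};

#define NICE_0_LOAD 1024

/* Wall-clock delta scaled to virtual time by NICE_0_LOAD/weight,
 * modelling what calc_delta_fair() does for a sched_entity. */
static uint64_t calc_delta_fair(uint64_t delta, const struct entity *se)
{
	return delta * NICE_0_LOAD / se->weight;
}

/* True if curr's vruntime stays below the leftmost entity's even
 * after running for t more nanoseconds, i.e. it keeps the CPU.
 * The subtraction is done in unsigned and compared as signed, so
 * vruntime wraparound is handled the same way the kernel does it. */
static bool expected_to_run_fair(const struct entity *curr,
                                 const struct entity *left, uint64_t t)
{
	return (int64_t)(curr->vruntime + calc_delta_fair(t, curr) -
			 left->vruntime) < 0;
}
```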
>
> And I guess if we do this with interrupts disabled, and only poke
> at the current CPU's rq, we know the first entity
> won't go away, so we don't need locks?
>
> Is this close to what you had in mind?
Yeah, fair enough ;-)
I'm not 100% convinced about the interface, but the model looks good to me.
Let's try it - I don't have fundamental objections anymore.
I also agree that it could be done lockless - although I'd suggest two steps:
first do the dumb thing with the proper scheduler lock(s) held, then another patch
which removes the locks for a bit more performance. That will make any subtle
crashes/races bisectable.
Thanks,
Ingo