From: John Fastabend <john.fastabend@gmail.com>
To: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Cc: daniel@iogearbox.net, eric.dumazet@gmail.com, jhs@mojatatu.com,
	aduyck@mirantis.com, brouer@redhat.com, davem@davemloft.net,
	john.r.fastabend@intel.com, netdev@vger.kernel.org
Subject: Re: [RFC PATCH 06/12] net: sched: support qdisc_reset on NOLOCK qdisc
Date: Sun, 03 Jan 2016 11:37:06 -0800
Message-ID: <56897862.6080803@gmail.com>
In-Reply-To: <20160101023003.GA17287@ast-mbp.thefacebook.com>

On 15-12-31 06:30 PM, Alexei Starovoitov wrote:
> On Wed, Dec 30, 2015 at 09:53:13AM -0800, John Fastabend wrote:
>> The qdisc_reset operation currently depends on the qdisc lock to
>> halt any additions to gso_skb and the statistics while the list is
>> freed and the stats are zeroed.
>>
>> Without the qdisc lock we cannot guarantee that another CPU is not
>> in the process of adding an skb to one of the "cells". There are
>> two cases we have to handle.
>>
>>  case 1: qdisc_graft operation. In this case a "new" qdisc is attached
>> 	 and the qdisc_destroy operation is called on the old qdisc.
>> 	 The destroy operation waits an RCU grace period and then calls
>> 	 qdisc_rcu_free(), at which point gso_cpu_skb is freed along
>> 	 with all the stats, so there is no need to zero the stats and
>> 	 gso_cpu_skb from the reset operation itself.
>>
>> 	 Because we cannot call qdisc_reset before an RCU grace period
>> 	 has detached the qdisc from all CPUs, simply do not call
>> 	 qdisc_reset() at all and let the qdisc_destroy operation clean
>> 	 up the qdisc. Note that a refcnt greater than 1 would cause the
>> 	 destroy operation to be aborted; however, if that ever happened
>> 	 the reference to the qdisc would be lost and we would have a
>> 	 memory leak.
>>
>>  case 2: dev_deactivate sequence. This can come from a user bringing
>> 	 the interface down, which causes the gso_skb list to be flushed
>> 	 and the qlen zeroed. At the moment this is protected by the
>> 	 qdisc lock, so while we clear the qlen/gso_skb fields we are
>> 	 guaranteed no new skbs are added. For the lockless case,
>> 	 though, this is not true. To resolve this, move the qdisc_reset
>> 	 call to after the new qdisc is assigned and a grace period has
>> 	 passed, ensuring no new skbs can be enqueued. Further, the
>> 	 RTNL lock is held, so we cannot get another call to activate
>> 	 the qdisc while the skb lists are being freed.
>>
>> 	 Finally, fix qdisc_reset to handle the per cpu stats and
>> 	 skb lists.
>>
>> Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
> ...
>> -	/* Prune old scheduler */
>> -	if (oqdisc && atomic_read(&oqdisc->refcnt) <= 1)
>> -		qdisc_reset(oqdisc);
>> -
> ...
>> -		sync_needed |= !dev->dismantle;
>> +		sync_needed = true;
> 
> I think killing the above <=1 check and forcing synchronize_net() will
> make qdisc destruction more reliable than it is right now.

I'll probably do that in a follow-up series. It's a bit tricky, and
actually I'm not convinced we aren't already leaking memory, even
before this series, in a few cases when we transition between per-txq
qdiscs and the more traditional single big qdisc. I'll take a look
tomorrow with kmemleak and other tools. My guess is that in practice
people load a qdisc and don't change it much, and the better setup for
those cases is to load your favorite qdisc under mq or mqprio when
using multiqueue devices.
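
To make the ordering that case 2 above relies on concrete, here is a
rough sketch of the shape of the code. This is not the actual patch:
dev_graft_qdisc(), synchronize_net(), qdisc_reset() and noop_qdisc are
existing kernel symbols, but the helper itself, its call site, and how
dev_deactivate() would drive it per txq are made up for illustration
(the usual net/sched headers are assumed).

/* Rough sketch only, not the actual patch. */
static void deactivate_txq_sketch(struct netdev_queue *txq)
{
	/* Swap in the noop qdisc so no new packets can reach the
	 * old qdisc. RTNL is held, so a concurrent activate of the
	 * device cannot race with us.
	 */
	struct Qdisc *old = dev_graft_qdisc(txq, &noop_qdisc);

	/* Wait a grace period so no CPU can still be inside the
	 * old qdisc's enqueue path, i.e. nobody is adding skbs to
	 * its per-cpu gso_skb "cells" any more.
	 */
	synchronize_net();

	/* Only now is it safe to flush the skb lists and zero the
	 * stats without taking the qdisc lock.
	 */
	if (old)
		qdisc_reset(old);
}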

> Your commit log sounds too pessimistic :)
> 
> btw, sync_needed variable can be removed as well.

Yep, thanks, good catch.

> 
> All other patches look good. Great stuff overall!
> 

Thanks. I'm going to add mqprio and multiq support, do some testing
and some perf work, and then submit it. The alf_queue bits can be
optimized further later.
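
For anyone who has not looked at patch 01/12 yet, the concept behind an
array-based lock-free queue reduces to something like the following
single-producer/single-consumer ring. This is only a sketch of the
idea, not the alf_queue code; the real queue handles multiple producers
and consumers plus bulk enqueue/dequeue, and every name below is made
up (READ_ONCE/WRITE_ONCE and the smp_*mb() barriers are the usual
kernel primitives).

/* Concept sketch only; not the alf_queue implementation.
 * RING_SIZE must be a power of two so the index masks work.
 */
#define RING_SIZE 128

struct spsc_ring {
	void		*slots[RING_SIZE];
	unsigned int	head;	/* written only by the producer */
	unsigned int	tail;	/* written only by the consumer */
};

static bool ring_enqueue(struct spsc_ring *r, void *obj)
{
	unsigned int head = READ_ONCE(r->head);

	if (head - READ_ONCE(r->tail) >= RING_SIZE)
		return false;			/* full */

	r->slots[head & (RING_SIZE - 1)] = obj;
	smp_wmb();	/* publish the slot before advancing head */
	WRITE_ONCE(r->head, head + 1);
	return true;
}

static void *ring_dequeue(struct spsc_ring *r)
{
	unsigned int tail = READ_ONCE(r->tail);
	void *obj;

	if (READ_ONCE(r->head) == tail)
		return NULL;			/* empty */

	smp_rmb();	/* order the head read before the slot read */
	obj = r->slots[tail & (RING_SIZE - 1)];
	WRITE_ONCE(r->tail, tail + 1);
	return obj;
}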

Also, once we get lockless multiq support, we can run filters on the
egress side without the qdisc lock overhead.
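
For completeness, the "handle the per cpu stats and skb lists" part of
the reset fix boils down to roughly the shape below. This is a sketch
only: gso_cpu_skb follows the naming used in this series and cpu_qstats
is the existing per-cpu queue stats pointer, but the gso cell layout,
the exact fields touched and the helper name are approximations, not
the actual code.

/* Sketch of a per-cpu aware reset for a NOLOCK qdisc; names and
 * layout are approximations of this RFC, not the real patch.
 */
struct gso_cell {
	struct sk_buff *skb;
};

static void qdisc_reset_percpu_sketch(struct Qdisc *qdisc)
{
	int cpu;

	if (!qdisc->gso_cpu_skb)
		return;

	for_each_possible_cpu(cpu) {
		struct gso_cell *cell;
		struct gnet_stats_queue *qs;

		cell = per_cpu_ptr(qdisc->gso_cpu_skb, cpu);
		qs = per_cpu_ptr(qdisc->cpu_qstats, cpu);

		/* Drop any skb parked in this CPU's gso cell. */
		if (cell->skb) {
			kfree_skb(cell->skb);
			cell->skb = NULL;
		}

		/* Zero the per-cpu queue stats. */
		qs->qlen = 0;
		qs->backlog = 0;
	}
}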

.John

Thread overview: 29+ messages
2015-12-30 17:50 [RFC PATCH 00/12] drop the qdisc lock for pfifo_fast/mq John Fastabend
2015-12-30 17:51 ` [RFC PATCH 01/12] lib: array based lock free queue John Fastabend
2016-01-13 19:28   ` Jesper Dangaard Brouer
2015-12-30 17:51 ` [RFC PATCH 02/12] net: sched: free per cpu bstats John Fastabend
2016-01-04 15:21   ` Daniel Borkmann
2016-01-04 17:32     ` Eric Dumazet
2016-01-04 18:08       ` John Fastabend
2015-12-30 17:51 ` [RFC PATCH 03/12] net: sched: allow qdiscs to handle locking John Fastabend
2015-12-30 17:52 ` [RFC PATCH 04/12] net: sched: provide per cpu qstat helpers John Fastabend
2015-12-30 17:52 ` [RFC PATCH 05/12] net: sched: per cpu gso handlers John Fastabend
2015-12-30 20:26   ` Jesper Dangaard Brouer
2015-12-30 20:42     ` John Fastabend
2015-12-30 17:53 ` [RFC PATCH 06/12] net: sched: support qdisc_reset on NOLOCK qdisc John Fastabend
2016-01-01  2:30   ` Alexei Starovoitov
2016-01-03 19:37     ` John Fastabend [this message]
2016-01-13 16:20   ` David Miller
2016-01-13 18:03     ` John Fastabend
2016-01-15 19:44       ` David Miller
2015-12-30 17:53 ` [RFC PATCH 07/12] net: sched: qdisc_qlen for per cpu logic John Fastabend
2015-12-30 17:53 ` [RFC PATCH 08/12] net: sched: a dflt qdisc may be used with per cpu stats John Fastabend
2015-12-30 17:54 ` [RFC PATCH 09/12] net: sched: pfifo_fast use alf_queue John Fastabend
2016-01-13 16:24   ` David Miller
2016-01-13 18:18     ` John Fastabend
2015-12-30 17:54 ` [RFC PATCH 10/12] net: sched: helper to sum qlen John Fastabend
2015-12-30 17:55 ` [RFC PATCH 11/12] net: sched: add support for TCQ_F_NOLOCK subqueues to sch_mq John Fastabend
2015-12-30 17:55 ` [RFC PATCH 12/12] net: sched: pfifo_fast new option to deque multiple pkts John Fastabend
2015-12-30 18:13   ` John Fastabend
2016-01-06 13:14 ` [RFC PATCH 00/12] drop the qdisc lock for pfifo_fast/mq Jamal Hadi Salim
2016-01-07 23:30   ` John Fastabend
