From: Daniel Borkmann <daniel@iogearbox.net>
To: Cong Wang <xiyou.wangcong@gmail.com>
Cc: Roi Dayan <roid@mellanox.com>,
Linux Kernel Network Developers <netdev@vger.kernel.org>,
Jiri Pirko <jiri@mellanox.com>,
John Fastabend <john.fastabend@gmail.com>
Subject: Re: [Patch net-next] net_sched: move the empty tp check from ->destroy() to ->delete()
Date: Sun, 27 Nov 2016 01:33:39 +0100 [thread overview]
Message-ID: <583A29E3.8030809@iogearbox.net> (raw)
In-Reply-To: <58396D71.8070703@iogearbox.net>
On 11/26/2016 12:09 PM, Daniel Borkmann wrote:
> On 11/26/2016 07:46 AM, Cong Wang wrote:
>> On Thu, Nov 24, 2016 at 7:20 AM, Daniel Borkmann <daniel@iogearbox.net> wrote:
[...]
>>> Ok, strange, qdisc_destroy() calls into ops->destroy(), where ingress
>>> drops its entire chain via tcf_destroy_chain(), so that will be NULL
>>> eventually. The tps are freed by call_rcu() as well as qdisc itself
>>> later on via qdisc_rcu_free(), where it frees per-cpu bstats as well.
>>> Outstanding readers should either bail out due to if (!cl) or can still
>>> process the chain until read section ends, but during that time, cl->q
>>> resp. bstats should be good. Do you happen to know what's at address
>>> ffff880a68b04028? I was wondering wrt call_rcu() vs call_rcu_bh(), but
>>> at least on ingress (netif_receive_skb_internal()) we hold rcu_read_lock()
>>> here. The KASAN report is reliably happening at this location, right?
>>
>> I am confused as well; I don't see how it could be related to my patch yet.
>> I will take a deeper look over the weekend.
>
> Ok, I'm currently on the run. It got too late last night, but I'll
> write up what I found this evening; it's not related to ingress though.
Just pushed out my analysis to netdev under "[PATCH net] net, sched: respect
rcu grace period on cls destruction". My conclusion is that the two issues are
actually separate, and that one is small enough that we could route it via
net. Perhaps that at the same time shrinks your "[PATCH net-next]
net_sched: move the empty tp check from ->destroy() to ->delete()" to a
size that makes it suitable for net as well. Your ->delete()/->destroy()
one is definitely needed, too. The tp->root one is independent of ->delete()/
->destroy(): they are different races, and the tp->root one could also trigger when
you just destroy the whole tp directly. That seems like a good path
forward to me.
Thanks,
Daniel
Thread overview: 16+ messages
2016-11-24 1:58 [Patch net-next] net_sched: move the empty tp check from ->destroy() to ->delete() Cong Wang
2016-11-24 8:29 ` Roi Dayan
2016-11-24 10:14 ` Daniel Borkmann
2016-11-24 11:01 ` Roi Dayan
2016-11-24 15:20 ` Daniel Borkmann
2016-11-24 17:18 ` Daniel Borkmann
2016-11-26 6:46 ` Cong Wang
2016-11-26 11:09 ` Daniel Borkmann
2016-11-27 0:33 ` Daniel Borkmann [this message]
2016-11-27 4:47 ` Roi Dayan
2016-11-27 6:29 ` Roi Dayan
2016-11-28 2:26 ` John Fastabend
2016-11-28 2:51 ` John Fastabend
2016-11-29 6:59 ` Cong Wang
2016-11-28 2:57 ` John Fastabend
2016-11-29 6:57 ` Cong Wang