From: Shawn Bohrer <sbohrer@rgmadvisors.com>
To: Eric Dumazet <eric.dumazet@gmail.com>
Cc: netdev@vger.kernel.org
Subject: Re: Increased multicast packet drops in 3.4
Date: Fri, 7 Sep 2012 17:38:43 -0500
Message-ID: <20120907223843.GA2767@BohrerMBP.rgmadvisors.com>
In-Reply-To: <1346998125.2484.220.camel@edumazet-glaptop>
On Fri, Sep 07, 2012 at 08:08:45AM +0200, Eric Dumazet wrote:
> On Thu, 2012-09-06 at 23:00 -0500, Shawn Bohrer wrote:
> > On Thu, Sep 06, 2012 at 03:21:07PM +0200, Eric Dumazet wrote:
> > > kfree_skb() can free a list of skbs, and we use a generic function to do
> > > so, without forwarding the drop/notdrop status. So it's unfortunate, but
> > > adding extra parameters just for the sake of drop_monitor is not worth
> > > it. skb_drop_fraglist() doesn't know if the parent skb is dropped or
> > > only freed, so it calls kfree_skb(), not consume_skb().
> >
> > I understand that this means that dropwatch or the skb:kfree_skb
> > tracepoint won't know if the packet was really dropped, but do we
> > know in this case from the context of the stack trace? I'm assuming
> > since we didn't receive an error that the packet was delivered and
> > these aren't real drops.
>
> I am starting to believe this is an application error.
>
> This application uses recvmmsg() to fetch a lot of messages in one
> syscall, and it might well be it throws out a batch of 50+ messages
> because of an application bug. Yes, this starts with 3.4, but it can be
> triggered by a timing difference or something that is not a proper
> kernel bug...
Eric, you are absolutely correct. There is at least one bug in the
application. The code that re-orders out-of-order packets would give
up once the gap reached roughly 50+ packets and assume the packets in
between were dropped. I did prove that if we keep reading from the
socket we do receive those packets, so no packets are being dropped in
the kernel. I also proved this is happening on 3.1 as well, though 3.4
does trigger it more often.
I'm still debugging the application because it appears I'm getting
very large batches of packets out of order. It isn't clear to me yet
whether this is another application bug, the kernel, the switch, or
something else.
Thanks for all of your help looking into this (non)-issue. If I have
further questions about the kernel side I'll let you know.
Thanks,
Shawn
--
---------------------------------------------------------------
This email, along with any attachments, is confidential. If you
believe you received this message in error, please contact the
sender immediately and delete all copies of the message.
Thank you.
Thread overview: 10+ messages
2012-09-06 0:11 Increased multicast packet drops in 3.4 Shawn Bohrer
2012-09-06 6:07 ` Eric Dumazet
2012-09-06 6:22 ` Eric Dumazet
2012-09-06 13:03 ` Shawn Bohrer
2012-09-06 13:21 ` Eric Dumazet
2012-09-06 13:31 ` Eric Dumazet
2012-09-07 4:00 ` Shawn Bohrer
2012-09-07 6:08 ` Eric Dumazet
2012-09-07 22:38 ` Shawn Bohrer [this message]
2012-09-06 6:26 ` Eric Dumazet