Intel-Wired-Lan Archive on lore.kernel.org
From: Paolo Abeni <pabeni@redhat.com>
To: Eric Dumazet <edumazet@google.com>,
	Willem de Bruijn <willemb@google.com>
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	Ian Kumlien <ian.kumlien@gmail.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	intel-wired-lan <intel-wired-lan@lists.osuosl.org>,
	Jakub Kicinski <kuba@kernel.org>
Subject: Re: [Intel-wired-lan] bug with rx-udp-gro-forwarding offloading?
Date: Thu, 06 Jul 2023 16:04:23 +0200
Message-ID: <c4e40b45b41d0476afd8989d31e6bab74c51a72a.camel@redhat.com>
In-Reply-To: <CANn89i+F=R71refT8K_8hPaP+uWn15GeHz+FTMYU=VPTG24WFA@mail.gmail.com>

On Thu, 2023-07-06 at 15:56 +0200, Eric Dumazet wrote:
> On Thu, Jul 6, 2023 at 3:02 PM Paolo Abeni <pabeni@redhat.com> wrote:
> > 
> > On Thu, 2023-07-06 at 13:27 +0200, Ian Kumlien wrote:
> > > On Thu, Jul 6, 2023 at 10:42 AM Paolo Abeni <pabeni@redhat.com> wrote:
> > > > On Wed, 2023-07-05 at 15:58 +0200, Ian Kumlien wrote:
> > > > > On Wed, Jul 5, 2023 at 3:29 PM Paolo Abeni <pabeni@redhat.com> wrote:
> > > > > > 
> > > > > > On Wed, 2023-07-05 at 13:32 +0200, Ian Kumlien wrote:
> > > > > > > On Wed, Jul 5, 2023 at 12:28 PM Paolo Abeni <pabeni@redhat.com> wrote:
> > > > > > > > 
> > > > > > > > On Tue, 2023-07-04 at 16:27 +0200, Ian Kumlien wrote:
> > > > > > > > > More stacktraces.. =)
> > > > > > > > > 
> > > > > > > > > cat bug.txt | ./scripts/decode_stacktrace.sh vmlinux
> > > > > > > > > [  411.413767] ------------[ cut here ]------------
> > > > > > > > > [  411.413792] WARNING: CPU: 9 PID: 942 at include/net/udp.h:509
> > > > > > > > > udpv6_queue_rcv_skb (./include/net/udp.h:509 net/ipv6/udp.c:800
> > > > > > > > > net/ipv6/udp.c:787)
> > > > > > > > 
> > > > > > > > I'm really running out of ideas here...
> > > > > > > > 
> > > > > > > > This is:
> > > > > > > > 
> > > > > > > >         WARN_ON_ONCE(UDP_SKB_CB(skb)->partial_cov);
> > > > > > > > 
> > > > > > > > which sort of hints at the skb being shared (skb->users > 1) while
> > > > > > > > enqueued in multiple places (bridge local input and br forward/flood
> > > > > > > > to the tun device): IIRC partial_cov should only ever be set for
> > > > > > > > UDP-Lite, so finding it set here suggests the skb cb was clobbered
> > > > > > > > in flight. I audited the bridge mc flooding code, and I could not
> > > > > > > > find how a shared skb could land in the local input path.
> > > > > > > > 
> > > > > > > > Anyway, the other splats reported here and in later emails are
> > > > > > > > compatible with shared skbs.
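> > > > > > > > 
> > > > > > > > For context, "shared" here is meant in the skb_shared() sense; from
> > > > > > > > memory, the helper in include/linux/skbuff.h looks roughly like:
> > > > > > > > 
> > > > > > > >         /* true when more than one entity holds a reference; a shared
> > > > > > > >          * skb must not be modified without a prior skb_share_check() */
> > > > > > > >         static inline int skb_shared(const struct sk_buff *skb)
> > > > > > > >         {
> > > > > > > >                 return refcount_read(&skb->users) != 1;
> > > > > > > >         }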
> > > > > > > > 
> > > > > > > > The above leads to another bunch of questions:
> > > > > > > > * can you reproduce the issue after disabling 'rx-gro-list' on the
> > > > > > > > ingress device? (while keeping 'rx-udp-gro-forwarding' on).
> > > > > > > 
> > > > > > > With rx-gro-list off, as in never turned on, everything seems to run fine
> > > > > > > 
> > > > > > > > * do you have by chance qdiscs on top of the VM tun devices?
> > > > > > > 
> > > > > > > default qdisc is fq
> > > > > > 
> > > > > > IIRC libvirt could reset the qdisc to noqueue for the tun devices it
> > > > > > owns.
> > > > > > 
> > > > > > Could you please report the output of:
> > > > > > 
> > > > > > tc -d -s qdisc show dev <tun dev name>
> > > > > 
> > > > > I don't have these set:
> > > > > CONFIG_NET_SCH_INGRESS
> > > > > CONFIG_NET_SCHED
> > > > > 
> > > > > so tc just gives an error...
> > > > 
> > > > The above is confusing. As CONFIG_NET_SCH_DEFAULT depends on
> > > > CONFIG_NET_SCHED, you should not have a default qdisc either ;)
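> > > > 
> > > > IIRC the net.core.default_qdisc sysctl itself is compiled out when
> > > > CONFIG_NET_SCHED is unset, and the Kconfig entry sits inside the
> > > > 'if NET_SCHED' block in net/sched/Kconfig, roughly:
> > > > 
> > > >         config NET_SCH_DEFAULT
> > > >                 bool "Allow override default queue discipline"
> > > > 
> > > > so any sysctl.conf setting would just fail to apply.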
> > > 
> > > Well it's still set in sysctl - dunno if it fails
> > > 
> > > > Could you please share your kernel config?
> > > 
> > > Sure...
> > > 
> > > As a side note, it hasn't crashed - no traces since we did the last change
> > 
> > It sounds like an encouraging sign! (famous last words...) I'll wait
> > one more day, then I'll submit formally...
> > 
> > > For reference, this is git diff on the running kernels source tree:
> > > diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> > > index cea28d30abb5..1b2394ebaf33 100644
> > > --- a/net/core/skbuff.c
> > > +++ b/net/core/skbuff.c
> > > @@ -4270,6 +4270,17 @@ struct sk_buff *skb_segment_list(struct sk_buff *skb,
> > > 
> > >         skb_push(skb, -skb_network_offset(skb) + offset);
> > > 
> > > +       if (WARN_ON_ONCE(skb_shared(skb))) {
> > > +               skb = skb_share_check(skb, GFP_ATOMIC);
> > > +               if (!skb)
> > > +                       goto err_linearize;
> > > +       }
> > > +
> > > +       /* later code will clear the gso area in the shared info */
> > > +       err = skb_header_unclone(skb, GFP_ATOMIC);
> > > +       if (err)
> > > +               goto err_linearize;
> > > +
> > >         skb_shinfo(skb)->frag_list = NULL;
> > > 
> > >         while (list_skb) {
> > 
> > ...the above check only, as the other two should only catch side
> > effects of the lack of this one. In any case the above addresses a
> > real issue, so we likely want it no matter what.
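> > 
> > For reference, the skb_share_check() used above hands the caller a
> > private copy when needed; from memory it is roughly:
> > 
> >         static inline struct sk_buff *skb_share_check(struct sk_buff *skb,
> >                                                       gfp_t pri)
> >         {
> >                 if (skb_shared(skb)) {
> >                         struct sk_buff *nskb = skb_clone(skb, pri);
> > 
> >                         /* drop our reference to the shared skb: on
> >                          * success the caller now owns the clone, on
> >                          * failure the original is freed and NULL
> >                          * is returned */
> >                         if (likely(nskb))
> >                                 consume_skb(skb);
> >                         else
> >                                 kfree_skb(skb);
> >                         skb = nskb;
> >                 }
> >                 return skb;
> >         }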
> > 
> 
> Interesting, I wonder if this could also fix some syzbot reports
> Willem and I are investigating.
> 
> Any idea of when the bug was 'added' or 'revealed'?

The issue specifically addressed above should have been present since
the frag_list introduction in commit 3a1296a38d0c ("net: Support
GRO/GSO fraglist chaining."). AFAICS triggering it requires a
non-trivial setup - mcast rx on a bridge with frag-list enabled and
forwarding to multiple ports - so perhaps syzkaller only found it
later due to improvements on its side?!?
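
For anyone trying to reproduce, the relevant offloads can be toggled
per device with ethtool; e.g., with 'eth0' standing in for the bridge
ingress device:

        ethtool -K eth0 rx-gro-list on rx-udp-gro-forwarding on
        ethtool -k eth0 | grep -E 'rx-gro-list|rx-udp-gro-forwarding'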

Cheers,

Paolo

