netdev.vger.kernel.org archive mirror
From: David Miller <davem@davemloft.net>
To: vyasevich@gmail.com
Cc: netdev@vger.kernel.org, vyasevic@redhat.com,
	makita.toshiaki@lab.ntt.co.jp
Subject: Re: [PATCH net 1/3] vlan: Fix tcp checksums offloads for Q-in-Q vlan.
Date: Mon, 22 May 2017 19:59:21 -0400 (EDT)	[thread overview]
Message-ID: <20170522.195921.1033609103219347751.davem@davemloft.net> (raw)
In-Reply-To: <1495114265-23368-2-git-send-email-vyasevic@redhat.com>

From: Vladislav Yasevich <vyasevich@gmail.com>
Date: Thu, 18 May 2017 09:31:03 -0400

> It appears that since commit 8cb65d000, Q-in-Q vlans have been
> broken.  The series that commit is part of enabled TSO and checksum
> offloading on Q-in-Q vlans.  However, most HW we support can't handle
> it.  To work around the issue, the above commit added a function that
> turns off offloads on Q-in-Q devices, but it left the checksum offload.
> That will cause issues with most older devices that support very basic
> checksum offload capabilities as well as some newer devices (we've
> reproduced the problem with both be2net and bnx).
> 
> To solve this for everyone, turn off the checksum offloading feature
> by default when sending Q-in-Q traffic.  Devices that are proven to
> work can provide a corrected ndo_features_check implementation.
> 
> Fixes: 8cb65d000 ("net: Move check for multiple vlans to drivers")
> CC: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
> Signed-off-by: Vladislav Yasevich <vyasevic@redhat.com>

This is a tough one.  I can certainly sympathize with your frustration
trying to track this down.

Clearing NETIF_F_HW_CSUM completely is the most conservative change.

However, for all the (perhaps many) cards upon which the checksumming
does work properly in Q-in-Q situations, this change could be
introducing non-trivial performance regressions.

So I think Toshiaki's suggestion to drop IP_CSUM and IPV6_CSUM is,
on balance, the best way forward.

Thanks.


Thread overview: 15+ messages
2017-05-18 13:31 [PATCH net 0/3] vlan: Offload fixes for Q-in-Q vlans Vladislav Yasevich
2017-05-18 13:31 ` [PATCH net 1/3] vlan: Fix tcp checksums offloads for Q-in-Q vlan Vladislav Yasevich
2017-05-19  1:04   ` Toshiaki Makita
2017-05-19  2:13   ` Toshiaki Makita
2017-05-19  7:09     ` Vlad Yasevich
2017-05-19  8:16       ` Toshiaki Makita
2017-05-19  9:53         ` Vlad Yasevich
2017-05-19 13:31           ` Toshiaki Makita
2017-05-22 23:59   ` David Miller [this message]
2017-05-23 12:59     ` Vlad Yasevich
2017-05-23 16:29     ` Alexander Duyck
2017-05-18 13:31 ` [PATCH net 2/3] be2net: Fix offload features for Q-in-Q packets Vladislav Yasevich
2017-05-18 13:31 ` [PATCH net 3/3] virtio-net: enable TSO/checksum offloads for Q-in-Q vlans Vladislav Yasevich
2017-05-18 15:06   ` Michael S. Tsirkin
2017-05-19 14:18   ` Jason Wang
