netdev.vger.kernel.org archive mirror
From: Tony Nguyen <anthony.l.nguyen@intel.com>
To: Paolo Abeni <pabeni@redhat.com>, Corinna Vinschen <vinschen@redhat.com>
Cc: <linux-kernel@vger.kernel.org>, <netdev@vger.kernel.org>,
	<intel-wired-lan@lists.osuosl.org>,
	Nikolay Aleksandrov <razor@blackwall.org>,
	Jason Xing <kerneljasonxing@gmail.com>,
	Jakub Kicinski <kuba@kernel.org>,
	"David S . Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	Jesse Brandeburg <jesse.brandeburg@intel.com>
Subject: Re: [PATCH net v3] igb: cope with large MAX_SKB_FRAGS.
Date: Tue, 23 Jul 2024 10:16:27 -0700	[thread overview]
Message-ID: <d33c7174-733a-bc70-78cd-dfcfe666f263@intel.com> (raw)
In-Reply-To: <afdb7011-5098-47dd-89af-5ed0096294d8@redhat.com>



On 7/23/2024 1:27 AM, Paolo Abeni wrote:
> On 7/18/24 10:56, Corinna Vinschen wrote:
>> From: Paolo Abeni <pabeni@redhat.com>
>>
>> Sabrina reports that the igb driver does not cope well with large
>> MAX_SKB_FRAGS values: setting MAX_SKB_FRAGS to 45 causes payload
>> corruption on TX.
>>
>> An easy reproducer is to run ssh to connect to the machine.  With
>> MAX_SKB_FRAGS=17 it works, with MAX_SKB_FRAGS=45 it fails.
>>
>> The root cause of the issue is that the driver does not properly
>> take into account the (possibly large) shared info size when
>> selecting the ring layout, and will try to fit two packets inside
>> the same 4K page even when the 1st packet's fraglist would trample
>> over the 2nd packet's head.
>>
>> Address the issue by forcing the driver to fit a single packet per
>> page, leaving enough room there to store the (currently) largest
>> possible skb_shared_info.
>>
>> Fixes: 3948b05950fd ("net: introduce a config option to tweak MAX_SKB_FRAGS")
>> Reported-by: Jan Tluka <jtluka@redhat.com>
>> Reported-by: Jirka Hladky <jhladky@redhat.com>
>> Reported-by: Sabrina Dubroca <sd@queasysnail.net>
>> Tested-by: Sabrina Dubroca <sd@queasysnail.net>
>> Tested-by: Corinna Vinschen <vinschen@redhat.com>
>> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
> 
> @Tony: would you like to take this one in your tree first, or should
> we merge it directly?

Hi Paolo,

I can take it through IWL unless you need to get it in sooner, in
which case feel free to take it directly. If so...

Reviewed-by: Tony Nguyen <anthony.l.nguyen@intel.com>

Thanks,
Tony
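
As context for the fix being discussed, here is a minimal sketch of the
layout check the commit message describes; it is not the applied patch.
The helper name igb_half_page_fits() is hypothetical, while
SKB_WITH_OVERHEAD() and struct skb_shared_info are the real definitions
from <linux/skbuff.h>.

#include <linux/skbuff.h>

/* Sketch only: can one 2K half of a 4K page hold a full frame plus
 * the trailing skb_shared_info? The shared info ends with
 * skb_frag_t frags[MAX_SKB_FRAGS], so its size scales with
 * CONFIG_MAX_SKB_FRAGS; SKB_WITH_OVERHEAD() subtracts the aligned
 * shared-info size from a buffer length.
 */
static bool igb_half_page_fits(unsigned int max_frame)
{
	return max_frame <= SKB_WITH_OVERHEAD(PAGE_SIZE / 2);
}

On a typical 64-bit build, MAX_SKB_FRAGS=17 makes the shared info
roughly 320 bytes, so a 1536-byte frame still fits in a 2K half page
and two packets can share a page. With MAX_SKB_FRAGS=45 it grows to
roughly 768 bytes and the check fails: the first packet's shared info
would overwrite the second packet's head, which is why the patch forces
a single packet per page.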


Thread overview: 5+ messages
2024-07-18  8:56 [PATCH net v3] igb: cope with large MAX_SKB_FRAGS Corinna Vinschen
2024-07-23  8:27 ` Paolo Abeni
2024-07-23 17:16   ` Tony Nguyen [this message]
2024-07-23  8:28 ` Eric Dumazet
2024-08-06 15:13 ` [Intel-wired-lan] " Pucha, HimasekharX Reddy
