From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Dmitry Kravkov"
Subject: Re: [PATCH] bnx2x: fix panic when TX ring is full
Date: Thu, 21 Jun 2012 18:56:22 +0300
Message-ID: <1340294182.18721.30.camel@lb-tlvb-dmitry>
References: <1339616716.22704.661.camel@edumazet-glaptop>
	<20120615.153049.103988387813257203.davem@davemloft.net>
	<504C9EFCA2D0054393414C9CB605C37F1CF19E@SJEXCHMB06.corp.ad.broadcom.com>
	<1340005136.7491.609.camel@edumazet-glaptop>
	<1340281166.15484.16.camel@lb-tlvb-dmitry>
	<1340291526.4604.5710.camel@edumazet-glaptop>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Cc: "Tomas Hruby", "David Miller", "netdev@vger.kernel.org",
	"therbert@google.com", "evansr@google.com", "Eilon Greenstein",
	"Merav Sicron", "Yaniv Rosner", "willemb@google.com"
To: "Eric Dumazet"
Return-path:
Received: from mms3.broadcom.com ([216.31.210.19]:3653 "EHLO mms3.broadcom.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1755812Ab2FUP43 (ORCPT );
	Thu, 21 Jun 2012 11:56:29 -0400
In-Reply-To: <1340291526.4604.5710.camel@edumazet-glaptop>
Sender: netdev-owner@vger.kernel.org
List-ID:

On Thu, 2012-06-21 at 17:12 +0200, Eric Dumazet wrote:
> On Thu, 2012-06-21 at 15:19 +0300, Dmitry Kravkov wrote:
> 
> > The crash happens with default configuration since
> > [4acb41903b2f99f3dffd4c3df9acc84ca5942cb2] "net/tcp: Fix tcp memory
> > limits initialization when !CONFIG_SYSCTL", but it can be hit by
> > increasing values of tcp_wmem even earlier.
> 
> This makes no sense.
I bisected to this commit, and could reproduce the crash before the commit
only after running:

  echo "4096 16384 4194304" > /proc/sys/net/ipv4/tcp_wmem

This raises the max nr_frags from 8 to 17. The test was 40 concurrent
netperfs; I had decreased the rx queue to 200 for the run.

> > From: Dmitry Kravkov
> > Subject: [PATCH net-next] bnx2x: reservation for NEXT tx BDs
> > 
> > Commit [4acb41903b2f99f3dffd4c3df9acc84ca5942cb2]
> > "net/tcp: Fix tcp memory limits initialization when !CONFIG_SYSCTL"
> > provided a new default value for tcp_wmem; with it, heavy tcp
> > traffic may cause a TSO packet to consume 20 BDs + 1 for the next page
> > descriptor.
> 
> This is completely bogus. I have no idea how you came to this.
> 
> A forwarding workload can trigger the same bug, if GRO is enabled.
> 
> Remove this wrong bit, please ?