From mboxrd@z Thu Jan 1 00:00:00 1970
From: Thomas Falcon
Subject: Re: [PATCH net-next] ibmveth: v1 calculate correct gso_size and set gso_type
Date: Thu, 27 Oct 2016 09:44:30 -0500
Message-ID: <10c6ca2b-eb57-2f56-d62c-968dd2d93a9f@linux.vnet.ibm.com>
References: <1477440555-21133-1-git-send-email-jmaxwell37@gmail.com>
In-Reply-To: <1477440555-21133-1-git-send-email-jmaxwell37@gmail.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: 7bit
To: Jon Maxwell
Cc: tom@herbertland.com, netdev@vger.kernel.org, jmaxwell@redhat.com,
 linux-kernel@vger.kernel.org, jarod@redhat.com, paulus@samba.org,
 hofrat@osadl.org, mleitner@redhat.com, linuxppc-dev@lists.ozlabs.org,
 davem@davemloft.net
List-Id: netdev.vger.kernel.org

On 10/25/2016 07:09 PM, Jon Maxwell wrote:
> We recently encountered a bug where a few customers using ibmveth on the
> same LPAR hit an issue where a TCP session hung when large receive was
> enabled. Closer analysis revealed that the session was stuck because one
> side was repeatedly advertising a zero window.
>
> We narrowed this down to the fact that the ibmveth driver did not set
> gso_size, which is translated by TCP into the MSS later up the stack.
> The MSS is used to calculate the TCP window size, and as that was
> abnormally large, TCP was calculating a zero window even though the
> socket's receive buffer was completely empty.
>
> We were able to reproduce this and worked with IBM to fix it. Thanks Tom
> and Marcelo for all your help and review on this.
>
> The patch fixes the issue in both our internal reproduction tests and our
> customers' tests.
>
> Signed-off-by: Jon Maxwell

Thanks, Jon.

Acked-by: Thomas Falcon

> ---
>  drivers/net/ethernet/ibm/ibmveth.c | 20 ++++++++++++++++++++
>  1 file changed, 20 insertions(+)
>
> diff --git a/drivers/net/ethernet/ibm/ibmveth.c b/drivers/net/ethernet/ibm/ibmveth.c
> index 29c05d0..c51717e 100644
> --- a/drivers/net/ethernet/ibm/ibmveth.c
> +++ b/drivers/net/ethernet/ibm/ibmveth.c
> @@ -1182,6 +1182,8 @@ static int ibmveth_poll(struct napi_struct *napi, int budget)
>  	int frames_processed = 0;
>  	unsigned long lpar_rc;
>  	struct iphdr *iph;
> +	bool large_packet = 0;
> +	u16 hdr_len = ETH_HLEN + sizeof(struct tcphdr);
>
>  restart_poll:
>  	while (frames_processed < budget) {
> @@ -1236,10 +1238,28 @@ static int ibmveth_poll(struct napi_struct *napi, int budget)
>  					iph->check = 0;
>  					iph->check = ip_fast_csum((unsigned char *)iph, iph->ihl);
>  					adapter->rx_large_packets++;
> +					large_packet = 1;
>  				}
>  			}
>  		}
>
> +		if (skb->len > netdev->mtu) {
> +			iph = (struct iphdr *)skb->data;
> +			if (be16_to_cpu(skb->protocol) == ETH_P_IP &&
> +			    iph->protocol == IPPROTO_TCP) {
> +				hdr_len += sizeof(struct iphdr);
> +				skb_shinfo(skb)->gso_type = SKB_GSO_TCPV4;
> +				skb_shinfo(skb)->gso_size = netdev->mtu - hdr_len;
> +			} else if (be16_to_cpu(skb->protocol) == ETH_P_IPV6 &&
> +				   iph->protocol == IPPROTO_TCP) {
> +				hdr_len += sizeof(struct ipv6hdr);
> +				skb_shinfo(skb)->gso_type = SKB_GSO_TCPV6;
> +				skb_shinfo(skb)->gso_size = netdev->mtu - hdr_len;
> +			}
> +			if (!large_packet)
> +				adapter->rx_large_packets++;
> +		}
> +
>  		napi_gro_receive(napi, skb);	/* send it up */
>
>  		netdev->stats.rx_packets++;
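
For reference, a minimal standalone sketch (plain userspace C, not the kernel's
__tcp_select_window()) of why an inflated MSS collapses the advertised window
even with an empty receive buffer; advertised_window() is a hypothetical helper
that only mimics the rounding behaviour described in the commit message above:

#include <stdio.h>

/* Hypothetical helper: loosely mimics a receiver rounding its advertised
 * window down to whole segments.  If the free buffer space cannot hold
 * even one MSS-sized segment, the advertised window is zero.
 */
static unsigned int advertised_window(unsigned int free_space, unsigned int mss)
{
	if (free_space < mss)
		return 0;
	return free_space - free_space % mss;
}

int main(void)
{
	unsigned int free_space = 65536;	/* empty receive buffer */

	/* sane MSS derived from a correct gso_size: large window */
	printf("mss=1448  -> win=%u\n", advertised_window(free_space, 1448));
	/* gso_size unset: the aggregated skb length (>64K) is taken as the
	 * MSS and the advertised window collapses to zero */
	printf("mss=65600 -> win=%u\n", advertised_window(free_space, 65600));
	return 0;
}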