From mboxrd@z Thu Jan  1 00:00:00 1970
From: Paolo Abeni
Subject: Re: [PATCH RFC net-next 00/11] udp gso
Date: Fri, 31 Aug 2018 11:09:44 +0200
Message-ID: <8b4de31a06d9bdb69e348f88ad0dcbf7d8576477.camel@redhat.com>
References: <20180417200059.30154-1-willemdebruijn.kernel@gmail.com>
 <20180417201557.GA4080@oracle.com>
 <20180417204829.GK7632@oracle.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
Cc: Network Development , Willem de Bruijn
To: Willem de Bruijn , Sowmini Varadhan
Return-path:
Received: from mx3-rdu2.redhat.com ([66.187.233.73]:59444 "EHLO mx1.redhat.com"
 rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1727438AbeHaNQR
 (ORCPT ); Fri, 31 Aug 2018 09:16:17 -0400
In-Reply-To:
Sender: netdev-owner@vger.kernel.org
List-ID:

Hi,

On Tue, 2018-04-17 at 17:07 -0400, Willem de Bruijn wrote:
> That said, for negotiated flows an inverse GRO feature could
> conceivably be implemented to reduce rx stack traversal, too.
> Though due to interleaving of packets on the wire, aggregation
> would be best effort, similar to TCP TSO and GRO using the
> PSH bit as packetization signal.

Reviving this old thread before I forget again.

I have some local patches implementing UDP GRO as the dual of the
current GSO_UDP_L4 implementation: several datagrams with the same
length are aggregated into a single one, and user space receives a
single larger packet instead of multiple ones. I hope QUIC can
leverage such a scenario, but I really know nothing about the
protocol.

I measure roughly a 50% performance improvement with udpgso_bench
with respect to UDP GSO, and ~100% using a pktgen sender, plus
reduced CPU usage on the receiver[1]. Some additional hacking in the
generic GRO bits is required to avoid useless socket lookups for
ingress UDP packets when UDP_GSO is not enabled.

If there is interest in this topic, I can share some RFC patches
(hopefully sometime next week).
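To make the receive side concrete: if the kernel hands user space one
buffer coalesced from several equal-length datagrams, the application
can recover the original boundaries from the segment size alone. A
minimal sketch follows; `split_segments` and the `gso_size` parameter
are hypothetical names for illustration, not the interface of these
patches.

```c
#include <assert.h>
#include <stddef.h>

/*
 * Given a buffer coalesced from several equal-sized datagrams,
 * recover the original datagram boundaries. 'gso_size' is the
 * per-segment size the receiver would learn out of band; the last
 * segment may be shorter. Returns the number of segments found and
 * fills 'offsets' with each segment's start offset in the buffer.
 */
static size_t split_segments(size_t total, size_t gso_size,
			     size_t *offsets, size_t max_segs)
{
	size_t n = 0, off = 0;

	if (gso_size == 0)
		return 0;
	while (off < total && n < max_segs) {
		offsets[n++] = off;
		off += gso_size;
	}
	return n;
}
```

With e.g. 3000 bytes coalesced at a 1400-byte segment size, this
yields three segments at offsets 0, 1400 and 2800, the last one 200
bytes long.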
Cheers,

Paolo

[1] With udpgso_bench_tx, the bottleneck is again the sender, even
with GSO enabled. With a pktgen sender, the bottleneck becomes the rx
softirqd, and I see a lot of time consumed by retpolines in the GRO
code. In both scenarios skb_release_data() becomes the topmost perf
offender for the user space process.