From mboxrd@z Thu Jan 1 00:00:00 1970
From: Maxime Coquelin
Subject: Re: [PATCH v2] add mtu set in virtio
Date: Thu, 8 Sep 2016 09:50:34 +0200
Message-ID:
References: <20160829230240.20164-1-sodey@sonusnet.com>
 <20160907032547.GG23158@yliu-dev.sh.intel.com>
 <20160908073029.GM23158@yliu-dev.sh.intel.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: 7bit
Cc: souvikdey33, stephen@networkplumber.org, huawei.xie@intel.com, dev@dpdk.org
To: Yuanhan Liu
In-Reply-To: <20160908073029.GM23158@yliu-dev.sh.intel.com>
List-Id: patches and discussions about DPDK
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

On 09/08/2016 09:30 AM, Yuanhan Liu wrote:
> On Wed, Sep 07, 2016 at 11:16:47AM +0200, Maxime Coquelin wrote:
>>
>>
>> On 09/07/2016 05:25 AM, Yuanhan Liu wrote:
>>> On Tue, Aug 30, 2016 at 09:57:39AM +0200, Maxime Coquelin wrote:
>>>> Hi Souvik,
>>>>
>>>> On 08/30/2016 01:02 AM, souvikdey33 wrote:
>>>>> Signed-off-by: Souvik Dey
>>>>>
>>>>> Fixes: 1fb8e8896ca8 ("Signed-off-by: Souvik Dey ")
>>>>> Reviewed-by: Stephen Hemminger
>>>>>
>>>>> Virtio interfaces should also support setting the MTU, as in cloud
>>>>> environments the MTU the DHCP server sends is expected to be
>>>>> consistent across the infrastructure, not hardcoded to the default
>>>>> of 1500.
>>>>> ---
>>>>>  drivers/net/virtio/virtio_ethdev.c | 12 ++++++++++++
>>>>>  1 file changed, 12 insertions(+)
>>>>
>>>> FYI, there are some on-going changes in the VIRTIO specification
>>>> so that the VHOST interface exposes its MTU to its VIRTIO peer.
>>>> It may also be used as an alternative to what your patch achieves.
>>>>
>>>> I am working on its implementation in Qemu/DPDK, our goal being to
>>>> reduce performance drops for small packets with the Rx mergeable
>>>> buffers feature enabled.
>>>
>>> Mind to educate me a bit on how that works?
>>
>> Of course.
>>
>> Basically, this is a way to advise the MTU we want in the guest.
>> In the guest, if GRO is not enabled:
>> - In case of the kernel virtio-net driver, it could be used to size
>>   the SKBs at the expected MTU. If possible, we could disable Rx
>>   mergeable buffers.
>> - In case of the virtio PMD, if the MTU advised by the host is lower
>>   than the pre-allocated mbuf size for the receive queue, then we
>>   should not need mergeable buffers.
>
> Thanks for the explanation!
>
> I see. So, the point is to avoid using mergeable buffers while it is
> enabled.
>
>> Does that sound reasonable?
>
> Yeah, maybe. Just don't know how well it may work in real life. Have
> you got any rough data so far?

The PoC is not done yet; only the Qemu part is implemented.

What we noticed is that for small packets, we see a 50% degradation
with Rx mergeable buffers enabled when running the PVP use-case.

The main part of the degradation is due to an additional cache miss in
the virtio PMD receive path, because we fetch the header to get the
number of buffers. When sending only small packets and removing this
access, we recover 25% of the degradation. The remaining 25% may be
reduced significantly with Zhihong's series.

Hope this answers your questions.

Thanks,
Maxime