Date: Tue, 11 Oct 2016 14:04:56 +0800
From: Yuanhan Liu
Subject: Re: [Qemu-devel] [PATCH 1/2] vhost: enable any layout feature
To: Maxime Coquelin
Cc: "Michael S. Tsirkin", Stephen Hemminger, dev@dpdk.org, qemu-devel@nongnu.org
Message-ID: <20161011060456.GM1597@yliu-dev.sh.intel.com>
In-Reply-To: <18372cc2-19d3-f455-728d-2f2ed405d800@redhat.com>

On Mon, Oct 10, 2016 at 04:54:39PM +0200, Maxime Coquelin wrote:
>
>
> On 10/10/2016 04:42 PM, Yuanhan Liu wrote:
> >On Mon, Oct 10, 2016 at 02:40:44PM +0200, Maxime Coquelin wrote:
> >>>>>At that time, a packet always uses 2 descs. Since indirect desc is
> >>>>>enabled (by default) now, that assumption no longer holds. What's
> >>>>>worse, it might even slow things down a bit. That should also be
> >>>>>part of the reason why performance is slightly worse than before.
> >>>>>
> >>>>>    --yliu
> >>>>
> >>>>I'm not sure I get what you are saying.
> >>>>
> >>>>>commit 1d41d77cf81c448c1b09e1e859bfd300e2054a98
> >>>>>Author: Yuanhan Liu
> >>>>>Date:   Mon May 2 17:46:17 2016 -0700
> >>>>>
> >>>>>    vhost: optimize dequeue for small packets
> >>>>>
> >>>>>    A virtio driver normally uses at least 2 desc buffers for Tx: the
> >>>>>    first for storing the header, and the others for storing the data.
> >>>>>
> >>>>>    Therefore, we could fetch the first data desc buf before the main
> >>>>>    loop, and do the copy first before the check of "are we done yet?".
> >>>>>    This could save one check for small packets that just have one data
> >>>>>    desc buffer and need one mbuf to store it.
> >>>>>
> >>>>>    Signed-off-by: Yuanhan Liu
> >>>>>    Acked-by: Huawei Xie
> >>>>>    Tested-by: Rich Lane
> >>>>
> >>>>This fast-paths the 2-descriptors format but it's not active
> >>>>for indirect descriptors. Is this what you mean?
> >>>
> >>>Yes. It's also not active when ANY_LAYOUT is actually turned on.
> >>>>Should be a simple matter to apply this optimization for indirect.
> >>>
> >>>Might be.
> >>
> >>If I understand the code correctly, indirect descs also benefit from this
> >>optimization, or am I missing something?
> >
> >Aha..., you are right!
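Just to make the fast path above concrete, here is a minimal sketch of
the idea. The struct and helper names (desc_buf, pkt, copy_desc,
dequeue_pkt) are made-up placeholders for illustration, not the real
rte_vhost structures or code:

    #include <stdio.h>
    #include <string.h>

    struct desc_buf {
        const char            *addr;  /* guest buffer, already mapped  */
        unsigned int           len;
        const struct desc_buf *next;  /* NULL on the last descriptor   */
    };

    struct pkt {
        char         data[2048];
        unsigned int len;
    };

    static void
    copy_desc(struct pkt *p, const struct desc_buf *d)
    {
        memcpy(p->data + p->len, d->addr, d->len);
        p->len += d->len;
    }

    static void
    dequeue_pkt(struct pkt *p, const struct desc_buf *hdr_desc)
    {
        /* Skip the virtio-net header desc and fetch the first data
         * desc before the main loop ... */
        const struct desc_buf *d = hdr_desc->next;

        p->len = 0;
        copy_desc(p, d);            /* ... copy it first ...           */

        while (d->next != NULL) {   /* ... then ask "are we done yet?" */
            d = d->next;            /* Packets with one data desc      */
            copy_desc(p, d);        /* never enter this loop.          */
        }
    }

    int
    main(void)
    {
        struct desc_buf data = { "payload", 7, NULL };
        struct desc_buf hdr  = { "", 0, &data };
        struct pkt p;

        dequeue_pkt(&p, &hdr);
        printf("%u bytes copied\n", p.len);   /* 7 bytes copied */
        return 0;
    }

The real code of course works on vring descriptors and rte_mbuf chains,
but the control flow is the same: the common single-data-desc case does
one copy and a single "any more descs?" check.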
>
> The interesting thing is that the patch I sent on Thursday that removes
> header access when no offload has been negotiated[0] seems to reduce
> almost to zero the performance penalty seen with indirect descriptors
> enabled.

Didn't follow that.

> I see this with 64 bytes packets using testpmd on both ends.
>
> When I did the patch, I would have expected the same gain with both
> modes, whereas I measured +1% for direct and +4% for indirect.

IIRC, I did a test before (removing those offload code pieces), and the
performance was basically the same before and after. Well, there might
be some small difference, say 1% as you said, but the result has never
been steady.

Anyway, I think your patch is good to have; I just didn't see v2.

    --yliu