From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <593F8153.7080209@intel.com>
Date: Tue, 13 Jun 2017 14:08:19 +0800
From: Wei Wang
References: <1496653049-44530-1-git-send-email-wei.w.wang@intel.com>
 <20170605182514-mutt-send-email-mst@kernel.org>
 <5937511D.5070205@intel.com>
 <20170608220006-mutt-send-email-mst@kernel.org>
 <593A0F47.5080806@intel.com>
 <593E5F46.5080704@intel.com>
 <20170612234035-mutt-send-email-mst@kernel.org>
 <593F57C0.9080701@intel.com>
 <4639c51b-32a6-3528-2ed6-c5f6296b6300@redhat.com>
 <593F6133.9050804@intel.com>
 <65faf3f9-a839-0b04-c740-6b2a60a51cf6@redhat.com>
In-Reply-To: <65faf3f9-a839-0b04-c740-6b2a60a51cf6@redhat.com>
Subject: Re: [Qemu-devel] [virtio-dev] Re: [PATCH v1] virtio-net: enable
 configurable tx queue size
To: Jason Wang, "Michael S. Tsirkin"
Cc: "virtio-dev@lists.oasis-open.org", "stefanha@gmail.com",
 "qemu-devel@nongnu.org", "jan.scheurich@ericsson.com",
 "armbru@redhat.com", "marcandre.lureau@gmail.com",
 "pbonzini@redhat.com"

On 06/13/2017 11:55 AM, Jason Wang wrote:
>
> On 06/13/2017 11:51, Wei Wang wrote:
>> On 06/13/2017 11:19 AM, Jason Wang wrote:
>>>
>>> On 06/13/2017 11:10, Wei Wang wrote:
>>>> On 06/13/2017 04:43 AM, Michael S. Tsirkin wrote:
>>>>> On Mon, Jun 12, 2017 at 05:30:46PM +0800, Wei Wang wrote:
>>>>>> Ping for comments, thanks.
>>>>> This was only posted a week ago, which might be a bit too short
>>>>> for some people.
>>>> OK, sorry for the push.
>>>>> A couple of weeks is more reasonable before you ping. Also, I
>>>>> sent a bunch of comments on Thu, 8 Jun 2017. You should probably
>>>>> address these.
>>>>>
>>>> I responded to the comments. The main question is that I'm not sure
>>>> why we need the vhost backend to support VIRTIO_F_MAX_CHAIN_SIZE.
>>>> IMHO, that should be a feature proposed to solve the possible issue
>>>> caused by the QEMU-implemented backend.
>>> The issue is: what if there's a mismatch of max #sgs between QEMU
>>> and vhost?
>>>
>> When the vhost backend is used, QEMU is not involved in the data
>> path. The vhost backend directly gets what is offered by the guest
>> from the vq. Why would there be a mismatch of max #sgs between QEMU
>> and vhost, and what is the QEMU-side max #sgs used for? Thanks.
> You need to query the backend's max #sgs in this case at least, no?
> If not, how do you know the value is supported by the backend?
>
> Thanks
>

Here is my thought: the vhost backend already supports 1024 sgs, so I
think it might not be necessary to query the max sgs that the vhost
backend supports. In the setup phase, when QEMU detects that the
backend is vhost, it assumes 1024 max sgs are supported, instead of
making an extra call to query.
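
To illustrate, here is a rough sketch (not the actual patch -- the
helper and both macro names are made up, and only the 1024 figure
comes from this discussion) of how the setup phase could pick the tx
queue size without querying the backend:

    #include <stdbool.h>

    /* Sketch only: when the backend is vhost, assume it supports
     * chains of up to 1024 sgs and skip the extra query call;
     * otherwise fall back to a smaller, hypothetical limit. */
    #define VHOST_ASSUMED_MAX_SGS  1024  /* assumed, per this thread */
    #define NON_VHOST_TX_QUEUE_MAX  256  /* hypothetical placeholder */

    static unsigned clamp_tx_queue_size(bool backend_is_vhost,
                                        unsigned requested)
    {
        unsigned max = backend_is_vhost ? VHOST_ASSUMED_MAX_SGS
                                        : NON_VHOST_TX_QUEUE_MAX;

        return requested <= max ? requested : max;
    }

With something like this, the 1024 tx queue size is only exposed when
the backend is vhost, so no new query message needs to be added to the
backend interface.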
The advantage is that people who are using the vhost backend can
upgrade to the 1024 tx queue size by applying only the QEMU patches.
Adding an extra call to query the size would require them to patch
their vhost backend (e.g. vhost-user) as well, which is difficult for
them.

Best,
Wei