Subject: Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v1] virtio-net: enable configurable tx queue size
From: Wei Wang
Date: Tue, 13 Jun 2017 15:17:27 +0800
To: Jason Wang, "Michael S. Tsirkin"
Cc: virtio-dev@lists.oasis-open.org, stefanha@gmail.com, qemu-devel@nongnu.org, jan.scheurich@ericsson.com, armbru@redhat.com, marcandre.lureau@gmail.com, pbonzini@redhat.com

On 06/13/2017 02:29 PM, Jason Wang wrote:
>>>>> The issue is what if there's a mismatch of max #sgs between qemu and
>>>> When the vhost backend is used, QEMU is not involved in the data path.
>>>> The vhost backend directly gets what is offered by the guest from the
>>>> vq. Why would there be a mismatch of max #sgs between QEMU and vhost,
>>>> and what is the QEMU side max #sgs used for? Thanks.
>>> You need to query the backend max #sgs in this case at least, no? If not,
>>> how do you know the value is supported by the backend?
>>>
>>> Thanks
>>>
>> Here is my thought: the vhost backend has already been supporting 1024 sgs,
>> so I think it might not be necessary to query the max sgs that the vhost
>> backend supports. In the setup phase, when QEMU detects the backend is
>> vhost, it assumes 1024 max sgs is supported, instead of making an extra
>> call to query.
>
> We can probably assume vhost kernel supports up to 1024 sgs. But how
> about other vhost-user backends?

So far, I haven't seen any vhost backend implementation supporting fewer
than 1024 sgs.

> And what you said here makes me ask one of my questions from the past:
>
> Do we have a plan to extend 1024 to a larger value, or does 1024 look good
> for the coming years? If we only care about 1024, there's even no need
> for a new config field; a feature flag is more than enough. If we want
> to extend it to e.g. 2048, we definitely need to query the vhost backend's
> limit (even for vhost-kernel).

According to the virtio spec (e.g. 2.4.4), the guest is discouraged from
using unreasonably large descriptors. If possible, I would suggest using
1024 as the largest number of descriptors that the guest can chain, even
when we have a larger queue size in the future.
That is:

    if (backend == QEMU backend)
        config.max_chain_size = 1023;  (defined by the QEMU backend implementation)
    else if (backend == vhost)
        config.max_chain_size = 1024;

This is transparent to the guest. From the guest's point of view, all it
knows is the value given to it via reading config.max_chain_size.

Best,
Wei
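
A minimal C sketch of the setup-time logic described above, under the
assumption stated in this thread (every vhost backend supports at least
1024 sgs, so the backend is never queried). The struct, constant, and
function names below are illustrative only, not identifiers from the
actual QEMU patch:

    #include <stdbool.h>
    #include <stdint.h>

    /* Assumed limits, per the discussion above (not real QEMU macros). */
    #define QEMU_BACKEND_MAX_CHAIN_SIZE   1023  /* QEMU userspace backend limit */
    #define VHOST_BACKEND_MAX_CHAIN_SIZE  1024  /* assumed vhost backend limit  */

    struct virtio_net_config_sketch {
        uint16_t max_chain_size;   /* proposed new config field */
        /* ... other config fields elided ... */
    };

    /*
     * Called once at device setup: the limit is chosen purely from the
     * backend type, without an extra query call, and the guest simply
     * reads whatever value ends up in config.max_chain_size.
     */
    static void set_max_chain_size(struct virtio_net_config_sketch *cfg,
                                   bool backend_is_vhost)
    {
        cfg->max_chain_size = backend_is_vhost ? VHOST_BACKEND_MAX_CHAIN_SIZE
                                               : QEMU_BACKEND_MAX_CHAIN_SIZE;
    }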