From: Jason Wang
Subject: Re: [Qemu-devel] [PATCH] virtio-net: do not start queues that are not enabled by the guest
Date: Tue, 19 Feb 2019 14:27:35 +0800
To: "Michael S. Tsirkin", Yuri Benditovich
Cc: Yan Vugenfirer, qemu-devel@nongnu.org

On 2019/2/19 上午7:34, Michael S. Tsirkin wrote:
> On Mon, Feb 18, 2019 at 10:49:08PM +0200, Yuri Benditovich wrote:
>> On Mon, Feb 18, 2019 at 6:39 PM Michael S. Tsirkin wrote:
>>> On Mon, Feb 18, 2019 at 11:58:51AM +0200, Yuri Benditovich wrote:
>>>> On Mon, Feb 18, 2019 at 5:49 AM Jason Wang wrote:
>>>>>
>>>>> On 2019/2/13 下午10:51, Yuri Benditovich wrote:
>>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1608226
>>>>>> On startup/link-up in a multiqueue configuration, virtio-net
>>>>>> tries to start all the queues, including those that the guest
>>>>>> will not enable by VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET.
>>>>>> If the guest driver does not allocate the queues that it will not
>>>>>> use (for example, the Windows driver does not) and the number of
>>>>>> actually used queues is less than the maximal number supported by
>>>>>> the device,
>>>>>
>>>>> Is this a requirement of e.g. NDIS? If not, could we simply allocate
>>>>> all queues in this case? This is usually what a normal Linux driver
>>>>> does.
>>>>>
>>>>>
>>>>>> this causes vhost_net_start to fail and actually disables vhost
>>>>>> for all the queues, reducing the performance.
>>>>>> The current commit fixes this: initially only the first queue is
>>>>>> started; upon VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET, all the queues
>>>>>> requested by the guest are started.
>>>>>>
>>>>>> Signed-off-by: Yuri Benditovich
>>>>>> ---
>>>>>>   hw/net/virtio-net.c | 7 +++++--
>>>>>>   1 file changed, 5 insertions(+), 2 deletions(-)
>>>>>>
>>>>>> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
>>>>>> index 3f319ef723..d3b1ac6d3a 100644
>>>>>> --- a/hw/net/virtio-net.c
>>>>>> +++ b/hw/net/virtio-net.c
>>>>>> @@ -174,7 +174,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
>>>>>>   {
>>>>>>       VirtIODevice *vdev = VIRTIO_DEVICE(n);
>>>>>>       NetClientState *nc = qemu_get_queue(n->nic);
>>>>>> -    int queues = n->multiqueue ? n->max_queues : 1;
>>>>>> +    int queues = n->multiqueue ? n->curr_queues : 1;
>>>>>>
>>>>>>       if (!get_vhost_net(nc->peer)) {
>>>>>>           return;
>>>>>> @@ -1016,9 +1016,12 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
>>>>>>           return VIRTIO_NET_ERR;
>>>>>>       }
>>>>>>
>>>>>> -    n->curr_queues = queues;
>>>>>>       /* stop the backend before changing the number of queues to avoid handling a
>>>>>>        * disabled queue */
>>>>>> +    virtio_net_set_status(vdev, 0);
>>>>>
>>>>> Any reason for doing this?
>>>> I think there are 2 reasons:
>>>> 1. The spec does not require guest SW to allocate unused queues.
>>>> 2. We spend the guest's physical memory just to make vhost happy
>>>> when it touches queues that it should not use.
>>>>
>>>> Thanks,
>>>> Yuri Benditovich
>>> The spec also says:
>>>     queue_enable    The driver uses this to selectively prevent the
>>>                     device from executing requests from this
>>>                     virtqueue. 1 - enabled; 0 - disabled.
>>>
>>> While this is not a conformance clause, it strongly implies that
>>> queues which are not enabled are never accessed by the device.
>>>
>>> Yuri, I am guessing you are not enabling these unused queues, right?
>> Of course, we (the Windows driver) do not.
>> The virtio-net code passes max_queues to vhost, and this causes vhost
>> to try accessing all the queues, fail on the unused ones, and finally
>> leave vhost disabled entirely.
>
> Jason, at least for 1.0, accessing disabled queues looks like a spec
> violation. What do you think?

Yes, but there are some issues:

- How can we detect a disabled queue for a 0.9x device? There seems to
be no way according to the spec, so the device must assume all queues
are enabled.

- For 1.0, if we depend on queue_enable, I think we should implement
the callback for vhost. Otherwise it's still buggy.

So it looks tricky to enable and disable queues through set_status.

Thanks

>
>>>
>>>
>>>>> Thanks
>>>>>
>>>>>
>>>>>> +
>>>>>> +    n->curr_queues = queues;
>>>>>> +
>>>>>>       virtio_net_set_status(vdev, vdev->status);
>>>>>>       virtio_net_set_queues(n);
>>>>>>
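
For readers following the thread, here is a minimal standalone C model
of the queue accounting the patch changes. It is not QEMU code: struct
model, vhost_start and allocated_queues are invented for illustration,
while multiqueue, max_queues and curr_queues mirror fields of struct
VirtIONet in hw/net/virtio-net.c.

/*
 * Toy model of the failure described in the bug report: vhost is
 * started over every offered queue pair, including pairs the guest
 * never allocated rings for.
 */
#include <stdbool.h>
#include <stdio.h>

struct model {
    bool multiqueue;       /* VIRTIO_NET_F_MQ negotiated */
    int  max_queues;       /* queue pairs offered by the device */
    int  curr_queues;      /* pairs enabled via VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET */
    int  allocated_queues; /* pairs the guest actually allocated rings for */
};

/* Stand-in for vhost_net_start(): fails if asked to touch a pair the
 * guest never allocated, and then vhost stays disabled for all pairs. */
static bool vhost_start(const struct model *n, int queues)
{
    return queues <= n->allocated_queues;
}

int main(void)
{
    /* A Windows guest: 8 pairs offered, 4 enabled, and rings
     * allocated only for the 4 pairs it actually uses. */
    struct model n = {
        .multiqueue = true,
        .max_queues = 8,
        .curr_queues = 4,
        .allocated_queues = 4,
    };

    int before = n.multiqueue ? n.max_queues : 1;   /* old code */
    int after  = n.multiqueue ? n.curr_queues : 1;  /* patched code */

    printf("old queue count %d -> vhost start %s\n", before,
           vhost_start(&n, before) ? "succeeds" : "fails, vhost disabled");
    printf("new queue count %d -> vhost start %s\n", after,
           vhost_start(&n, after) ? "succeeds" : "fails, vhost disabled");
    return 0;
}

Under these assumptions the old count (8) makes vhost touch four
unallocated pairs and give up, while the patched count (4) only covers
what the guest enabled; Jason's open question above is whether honoring
queue_enable per-queue (rather than counting pairs) would need a
dedicated vhost callback for 1.0 devices.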