From: "Longpeng (Mike)" <longpeng2@huawei.com>
To: Jason Wang <jasowang@redhat.com>
Cc: mst@redhat.com, "Longpeng(Mike)" <longpeng.mike@gmail.com>,
	qemu-devel@nongnu.org, arei.gonglei@huawei.com,
	king.wang@huawei.com, weidong.huang@huawei.com,
	stefanha@redhat.com
Subject: Re: [Qemu-devel] [Question] why need to start all queues in vhost_net_start
Date: Thu, 16 Nov 2017 17:08:55 +0800	[thread overview]
Message-ID: <5A0D55A7.7040905@huawei.com> (raw)
In-Reply-To: <2e0ddd76-3903-d715-9a86-73f1286322e7@redhat.com>



On 2017/11/16 16:54, Jason Wang wrote:

> 
> 
>> On 2017/11/16 13:53, Longpeng (Mike) wrote:
>>> On 2017/11/15 23:54, Longpeng(Mike) wrote:
>>> 2017-11-15 23:05 GMT+08:00 Jason Wang <jasowang@redhat.com>:
>>>> On 2017/11/15 22:55, Longpeng(Mike) wrote:
>>>>> Hi guys,
>>>>>
>>>>> We got a BUG report from our testers yesterday; the testing scenario was
>>>>> migrating a VM (Windows guest, *4 vcpus*, 4GB, vhost-user net: *7
>>>>> queues*).
>>>>>
>>>>> We found the root cause, and we'll report the bug or send a fix patch
>>>>> upstream if necessary (we haven't tested upstream yet, sorry...).
>>>> Could you explain this a little bit more?
>>>>
>>>>> We want to know why vhost_net_start() must start *all queues* (in our
>>>>> VM there are 7 queues) rather than only *the queues currently in use*
>>>>> (in our VM, the guest only uses the first 4 queues because it is
>>>>> limited by the number of vcpus)?
>>>>>
>>>>> Looking forward to your help, thx:)
>>>> The code has been there for years and works well for the kernel
>>>> datapath, so you should really explain what's wrong.
>>>>
>>> OK. :)
>>>
>>> In our scenario, the Windows virtio-net driver only uses the first 4
>>> queues and it *only sets the desc/avail/used tables for the first 4
>>> queues*, so in QEMU the desc/avail/used addresses of the last 3 queues
>>> are ZERO, but unfortunately...
>>> '''
>>> vhost_net_start
>>>    for (i = 0; i < total_queues; i++)
>>>      vhost_net_start_one
>>>        vhost_dev_start
>>>          vhost_virtqueue_start
>>> '''
>>> In vhost_virtqueue_start(), QEMU calculates the HVA of the
>>> desc/avail/used tables, so for the last 3 queues it uses ZERO as the
>>> GPA to calculate the HVA, and then sends the results to the user-mode
>>> backend (we use *vhost-user*) via vhost_virtqueue_set_addr().
>>>
>>> When the EVS gets these addresses, it updates an *idx* which will be
>>> treated as the vq's last_avail_idx when virtio-net stops (please see
>>> vhost_virtqueue_stop()).
>>>
>>> So we get the following result after virtio-net stops:
>>>   the desc/avail/used addresses of the last 3 queues' vqs are all ZERO,
>>>   but these vqs' last_avail_idx is NOT ZERO.
>>>
>>> At last, virtio_load() reports an error:
>>> '''
>>> if (!vdev->vq[i].vring.desc && vdev->vq[i].last_avail_idx) { // <-- will be TRUE
>>>     error_report("VQ %d address 0x0 "
>>>                  "inconsistent with Host index 0x%x",
>>>                  i, vdev->vq[i].last_avail_idx);
>>>     return -1;
>>> }
>>> '''
>>>
>>> BTW, the problem doesn't appear with a Linux guest, because the Linux
>>> virtio-net driver sets all 7 queues' desc/avail/used tables. And the
>>> problem doesn't appear if the VM uses vhost-net, because vhost-net
>>> doesn't update *idx* in the SET_ADDR ioctl.
> 
> Just to make sure I understand here: I thought it was Windows guest +
> vhost_net that hit this issue?
> 


Windows guest + vhost-user hit.
Windows guest + vhost-net is fine.

'''
In vhost_virtqueue_start(), QEMU calculates the HVA of the
desc/avail/used tables, so for the last 3 queues it uses ZERO as the
GPA to calculate the HVA, and then sends the results to the user-mode
backend (we use *vhost-user*) via vhost_virtqueue_set_addr().
'''
I think this is the root cause. It is strange, right?

> Thanks
> 
>>>
>>> Sorry for my poor English. Maybe I could describe the problem in Chinese
>>> for you in private if necessary.
>>>
>>>
>>>> Thanks
>> -- Regards, Longpeng(Mike)
> 
> 
> 


-- 
Regards,
Longpeng(Mike)
