From: Wei Wang <wei.w.wang@intel.com>
To: Jason Wang <jasowang@redhat.com>, "Michael S. Tsirkin" <mst@redhat.com>
Cc: stefanha@gmail.com, marcandre.lureau@gmail.com,
	pbonzini@redhat.com, jan.scheurich@ericsson.com,
	virtio-dev@lists.oasis-open.org, qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [virtio-dev] Re: [PATCH RFC] virtio-net: enable configurable tx queue size
Date: Wed, 24 May 2017 16:18:33 +0800	[thread overview]
Message-ID: <592541D9.3030303@intel.com> (raw)
In-Reply-To: <48ec0c88-801f-fc1f-401f-51cb2ca3fed2@redhat.com>

On 05/24/2017 11:19 AM, Jason Wang wrote:
>
>
> On 05/23/2017 18:36, Wei Wang wrote:
>> On 05/23/2017 02:24 PM, Jason Wang wrote:
>>>
>>>
>>> On 05/23/2017 13:15, Wei Wang wrote:
>>>> On 05/23/2017 10:04 AM, Jason Wang wrote:
>>>>>
>>>>>
>>>>> On 05/22/2017 19:52, Wei Wang wrote:
>>>>>> On 05/20/2017 04:42 AM, Michael S. Tsirkin wrote:
>>>>>>> On Fri, May 19, 2017 at 10:32:19AM +0800, Wei Wang wrote:
>>>>>>>> This patch enables the virtio-net tx queue size to be
>>>>>>>> configurable between 256 (the default queue size) and 1024 by
>>>>>>>> the user. The queue size specified by the user should be a
>>>>>>>> power of 2.
>>>>>>>>
>>>>>>>> Setting the tx queue size to be 1024 requires the guest driver to
>>>>>>>> support the VIRTIO_NET_F_MAX_CHAIN_SIZE feature.
>>>>>>> This should be a generic ring feature, not one specific to 
>>>>>>> virtio net.
>>>>>> OK. How about making two more changes below:
>>>>>>
>>>>>> 1) make the default tx queue size = 1024 (instead of 256).
>>>>>
>>>>> As has been pointed out, you need to keep the default value
>>>>> compatible (e.g. via a compat property) in this case too.
>>>>
>>>> The driver gets the size info from the device, so would it cause
>>>> any compatibility issue if we change the default ring size to 1024
>>>> in the vhost case? In other words, does any software (i.e. any
>>>> virtio-net driver) function based on the assumption of a 256-entry
>>>> queue?
>>>
>>> I don't know. But is it safe if, e.g., we migrate from 1024 to an
>>> older QEMU with 256 as its queue size?
>>
>> Yes, I think it is safe, because the default queue size is only used
>> while the device is being set up (e.g. during feature negotiation).
>> During migration (when the device has already been running), the
>> destination machine will load the device state based on the queue
>> size that is actually in use (i.e. vring.num). The default value is
>> not used any more after the setup phase.
>
> I haven't checked all the cases, but there are two obvious things:
>
> - After migration and after a reset, it will go back to 256 on the dst.

Please let me clarify what we want first: when QEMU boots and realizes
the virtio-net device, if tx_queue_size is not given on the command
line, we want to use 1024 as the queue size, that is,
virtio_add_queue(..., 1024, ...), which sets vring.num=1024 and
vring.num_default=1024.
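
For reference, here is a minimal sketch of the realize-time logic I
have in mind (a sketch only: virtio_net_add_tx_queue() and the
tx_queue_size property are names from this RFC, and error handling and
the bh/timer tx path selection are omitted):

    static void virtio_net_add_tx_queue(VirtIONet *n)
    {
        VirtIODevice *vdev = VIRTIO_DEVICE(n);

        /* Proposed: default to 1024 when the user gives no
         * tx_queue_size on the command line. */
        if (!n->net_conf.tx_queue_size) {
            n->net_conf.tx_queue_size = 1024;
        }
        /* virtio_add_queue() stores the size in both vring.num and
         * vring.num_default. */
        n->tx_vq = virtio_add_queue(vdev, n->net_conf.tx_queue_size,
                                    virtio_net_handle_tx_bh);
    }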

When migration happens, the vring.num variable (which has been 1024) is
sent to the destination machine, where virtio_load() assigns that value
(1024) to the destination-side vring.num. So vring.num=1024 continues
to work on the destination machine even with an old QEMU. I don't see
an issue here.
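
(For completeness, a simplified excerpt of what virtio_load() in
hw/virtio/virtio.c does with the queue sizes; error handling and the
other per-queue fields are dropped:

    for (i = 0; i < num; i++) {
        /* The size saved on the source (1024 here) simply overrides
         * whatever default the destination's realize() picked. */
        vdev->vq[i].vring.num = qemu_get_be32(f);
    }
)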

If a reset happens, I think the device and driver will redo the
initialization steps. So, if they are running on the old QEMU, they use
the old realize() function, which does virtio_add_queue(..., 256, ...),
and the driver redoes its probe() steps and takes vring.num=256, so
everything works fine.
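
(My understanding of the reset path, as a one-line excerpt from
virtio_reset() in hw/virtio/virtio.c:

    vdev->vq[i].vring.num = vdev->vq[i].vring.num_default;

that is, a reset falls back to vring.num_default, which on an old
destination QEMU is 256. This matches your point about going back to
256 on the dst, and it is fine as long as the driver re-reads the size
during its probe.)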


> - ABI is changed, e.g -M pc-q35-2.10 returns 1024 on 2.11
>
I didn't get this. Could you please explain more? Which ABI would be
changed, and why does it affect q35?
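
(If the concern is that the default queue size is guest-visible per
machine type, I guess the usual fix would be a compat property that
pins the old default for the old machine types; a sketch, assuming the
property keeps the tx_queue_size name from this RFC:

    /* e.g. added to the HW_COMPAT entry for the last machine type
     * that should keep the 256 default */
    {
        .driver   = "virtio-net-device",
        .property = "tx_queue_size",
        .value    = "256",
    },

Please correct me if that is not what you meant.)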


>>
>>>
>>>>
>>>> For live migration, the queue size that is being used will also be 
>>>> transferred
>>>> to the destination.
>>>>
>>>>>
>>>>>> We can reduce the size (to 256) if the MAX_CHAIN_SIZE feature
>>>>>> is not supported by the guest.
>>>>>> In this way, people who apply the QEMU patch can directly use the
>>>>>> largest queue size (1024) without adding a command-line option.
>>>>>>
>>>>>> 2) The vhost backend does not use writev, so I think when the
>>>>>> vhost backend is used, using a 1024 queue size should not depend
>>>>>> on the MAX_CHAIN_SIZE feature.
>>>>>
>>>>> But do we need to consider an even larger queue size now?
>>>>
>>>> I need Michael's feedback on this. Meanwhile, I'll get the next
>>>> version of the code ready and check whether a larger queue size
>>>> would cause any corner cases.
>>>
>>> The problem is, do we really need a new config field for this? Or
>>> is it sufficient to just introduce a flag which means "I support up
>>> to 1024 sgs"?
>>>
>>
>> For now, it also works without the new config field, max_chain_size,
>> but I would prefer to keep the new field, because:
>>
>> Without it, the driver will work on an assumed value, 1023.
>
> This is the fact, and it's too late to change the legacy driver.
>
>> If, in the future, QEMU needs to change it to 1022, then how can the
>> new QEMU tell an old driver, which supports the MAX_CHAIN_SIZE
>> feature but works with the old hardcoded value 1023?
>
> Can a config field help in this case? The problem is similar to
> ANY_HEADER_SG; the only thing we can do is clarify the limitation for
> new drivers.
>

I think it helps, because the driver will do

    virtio_cread_feature(vdev, VIRTIO_NET_F_MAX_CHAIN_SIZE,
                         struct virtio_net_config,
                         max_chain_size, &chain_size);

to get max_chain_size from the device. So when a new QEMU has a new
value of max_chain_size, an old driver will get the new value.
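
A hedged sketch of how I expect the driver side to look (note that
VIRTIO_NET_F_MAX_CHAIN_SIZE and the max_chain_size config field are
proposed in this RFC, not upstream, and I am assuming the field is a
u16):

    static u16 virtnet_get_max_chain_size(struct virtio_device *vdev)
    {
        u16 chain_size;

        /* virtio_cread_feature() returns -ENOENT when the feature bit
         * was not negotiated; fall back to the legacy assumption. */
        if (virtio_cread_feature(vdev, VIRTIO_NET_F_MAX_CHAIN_SIZE,
                                 struct virtio_net_config,
                                 max_chain_size, &chain_size) < 0)
            chain_size = 1023;

        return chain_size;
    }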

Best,
Wei

