From: Wei Wang <wei.w.wang@intel.com>
To: Jason Wang <jasowang@redhat.com>, "Michael S. Tsirkin" <mst@redhat.com>
Cc: stefanha@gmail.com, marcandre.lureau@gmail.com,
	pbonzini@redhat.com, jan.scheurich@ericsson.com,
	virtio-dev@lists.oasis-open.org, qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [virtio-dev] Re: [PATCH RFC] virtio-net: enable configurable tx queue size
Date: Tue, 23 May 2017 18:36:15 +0800	[thread overview]
Message-ID: <5924109F.6090608@intel.com> (raw)
In-Reply-To: <ac8a484d-744a-8bdf-eba2-05239c6eefb1@redhat.com>

On 05/23/2017 02:24 PM, Jason Wang wrote:
>
>
>> On 2017-05-23 13:15, Wei Wang wrote:
>> On 05/23/2017 10:04 AM, Jason Wang wrote:
>>>
>>>
>>>> On 2017-05-22 19:52, Wei Wang wrote:
>>>> On 05/20/2017 04:42 AM, Michael S. Tsirkin wrote:
>>>>> On Fri, May 19, 2017 at 10:32:19AM +0800, Wei Wang wrote:
>>>>>> This patch enables the virtio-net tx queue size to be configurable
>>>>>> between 256 (the default queue size) and 1024 by the user. The queue
>>>>>> size specified by the user should be a power of 2.
>>>>>>
>>>>>> Setting the tx queue size to be 1024 requires the guest driver to
>>>>>> support the VIRTIO_NET_F_MAX_CHAIN_SIZE feature.
>>>>> This should be a generic ring feature, not one specific to virtio 
>>>>> net.
>>>> OK. How about making two more changes below:
>>>>
>>>> 1) make the default tx queue size = 1024 (instead of 256).
>>>
>>> As has been pointed out, you also need to handle compatibility for the
>>> default value in this case.
>>
>> The driver gets the size info from the device, so would it cause any
>> compatibility issue if we change the default ring size to 1024 in the
>> vhost case? In other words, does any software (i.e. any virtio-net
>> driver) function based on the assumption of a 256-entry queue size?
>
> I don't know. But is it safe if, e.g., we migrate from 1024 to an older
> qemu with 256 as its queue size?

Yes, I think it is safe, because the default queue size is only used while
the device is being set up (e.g. during feature negotiation).
During migration (when the device is already running), the destination
machine loads the device state based on the queue size that is actually
in use (i.e. vring.num). The default value is no longer used after the
setup phase.
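
To make this concrete, here is a minimal C sketch of the idea (the struct
and function names are invented for illustration; this is not QEMU's
actual virtio code): the default only feeds the setup path, while the
migration load path overwrites vring.num with whatever the source was
using.

/*
 * Minimal sketch, not actual QEMU code: the default (or user-configured)
 * queue size only matters on the device setup path; an incoming migration
 * overwrites vring.num with the value saved on the source.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define TX_QUEUE_DEFAULT_SIZE 256

struct vring_state {
    uint32_t num;   /* ring size actually in use */
};

/* Fresh setup: apply the default or the user-configured size. */
static void tx_queue_setup(struct vring_state *vq, uint32_t configured)
{
    vq->num = configured ? configured : TX_QUEUE_DEFAULT_SIZE;
}

/* Migration load: the size comes from the saved device state, so the
 * destination's default never takes effect for a migrated device. */
static void tx_queue_load(struct vring_state *vq, uint32_t saved_num)
{
    vq->num = saved_num;
}

int main(void)
{
    struct vring_state vq;

    tx_queue_setup(&vq, 0);       /* fresh boot: default size is applied */
    printf("fresh boot:      vring.num = %" PRIu32 "\n", vq.num);

    tx_queue_load(&vq, 1024);     /* incoming migration from a source
                                     that was running with 1024 */
    printf("after migration: vring.num = %" PRIu32 "\n", vq.num);
    return 0;
}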

>
>>
>> For live migration, the queue size that is being used will also be
>> transferred to the destination.
>>
>>>
>>>> We can reduce the size (to 256) if the MAX_CHAIN_SIZE feature
>>>> is not supported by the guest.
>>>> In this way, people who apply the QEMU patch can directly use the
>>>> largest queue size (1024) without adding a boot command-line option.
>>>>
>>>> 2) The vhost backend does not use writev, so I think when the vhost
>>>> backend is used, using a queue size of 1024 should not depend on the
>>>> MAX_CHAIN_SIZE feature.
>>>
>>> But do we need to consider an even larger queue size now?
>>
>> We need Michael's feedback on this. Meanwhile, I'll get the next version
>> of the code ready and check whether a larger queue size would hit any
>> corner cases.
>
> The problem is, do we really need a new config field for this? Or is
> introducing a flag which means "I support up to 1024 sgs" sufficient?
>

For now, it also works without the new config field, max_chain_size,
but I would prefer to keep the new config field, because:

Without it, the driver would work with an assumed value, 1023.
If, in the future, QEMU needs to change that to 1022, how can the
new QEMU tell an old driver which supports the MAX_CHAIN_SIZE
feature but still works with the old hardcoded value of 1023?
So, I think exposing the value through the config field is better for
future updates.
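
To illustrate why I prefer the explicit field, here is a hypothetical C
sketch (the struct layout, field and function names are invented for
illustration, not taken from the virtio spec or the patch): with the
config field the device can advertise whatever limit it enforces (e.g.
1022), while a driver without the field has to keep assuming the
hardcoded 1023.

/*
 * Hypothetical sketch of the argument above; the field layout, names and
 * values are illustrative only, not the virtio spec or the patch.
 */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define ASSUMED_MAX_CHAIN_SIZE 1023   /* old hardcoded driver assumption */

struct virtio_net_config_sketch {
    uint16_t max_chain_size;          /* hypothetical new config field */
};

/* Driver side: prefer the device-provided limit when MAX_CHAIN_SIZE was
 * negotiated; otherwise fall back to the hardcoded assumption. */
static uint16_t max_chain_size(bool feature_negotiated,
                               const struct virtio_net_config_sketch *cfg)
{
    if (feature_negotiated && cfg->max_chain_size != 0) {
        return cfg->max_chain_size;
    }
    return ASSUMED_MAX_CHAIN_SIZE;
}

int main(void)
{
    /* A newer device that wants to lower the limit to 1022 can simply
     * advertise it; a driver without the field still assumes 1023. */
    struct virtio_net_config_sketch cfg = { .max_chain_size = 1022 };

    printf("with config field:  %" PRIu16 "\n", max_chain_size(true, &cfg));
    printf("hardcoded fallback: %" PRIu16 "\n", max_chain_size(false, &cfg));
    return 0;
}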

Best,
Wei
