From: Jason Wang <jasowang@redhat.com>
To: Wei Xu <wexu@redhat.com>
Cc: jfreiman@redhat.com, maxime.coquelin@redhat.com,
qemu-devel@nongnu.org, tiwei.bie@intel.com, mst@redhat.com
Subject: Re: [Qemu-devel] [PATCH v4 09/11] virtio-net: update the head descriptor in a chain lastly
Date: Wed, 20 Feb 2019 10:34:32 +0800 [thread overview]
Message-ID: <9a9751aa-0cfc-151a-890c-ae70a9a37d64@redhat.com> (raw)
In-Reply-To: <20190220015441.GB23868@wei-ubt>
On 2019/2/20 9:54 AM, Wei Xu wrote:
> On Tue, Feb 19, 2019 at 09:09:33PM +0800, Jason Wang wrote:
>> On 2019/2/19 6:51 PM, Wei Xu wrote:
>>> On Tue, Feb 19, 2019 at 03:23:01PM +0800, Jason Wang wrote:
>>>> On 2019/2/14 12:26 PM, wexu@redhat.com wrote:
>>>>> From: Wei Xu <wexu@redhat.com>
>>>>>
>>>>> This is a helper for packed ring.
>>>>>
>>>>> To support packed ring, the head descriptor in a chain should be updated
>>>>> last: unlike the split ring, the packed ring has no 'avail_idx'-style index
>>>>> to explicitly tell the driver side that the whole payload is ready once the
>>>>> chain is done, so the head becomes visible to the driver immediately.
>>>>>
>>>>> This patch fills the head descriptor only after all the other ones are done.
>>>>>
>>>>> Signed-off-by: Wei Xu <wexu@redhat.com>
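Just to restate the constraint above in pseudo code (a sketch only, the
helper names below are made up and this is not the actual code):

    /* A packed ring has no used index, so a buffer becomes visible to the
     * driver as soon as its descriptor flags are written. For a packet that
     * spans several buffers, the head buffer must therefore be marked used
     * only after all the other buffers of the chain. */
    for (j = 1; j < nbufs; j++) {
        mark_used(vq, buf[j], len[j]);   /* tail buffers first */
    }
    smp_wmb();                           /* make the payload visible first */
    mark_used(vq, buf[0], len[0]);       /* head last: publishes the packet */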
>>>> It's really odd to work around an API issue in the device implementation.
>>>> Please introduce batched used-updating helpers instead.
>>> Can you elaborate a bit more? I don't quite get it.
>>>
>>> Exact batching as in vhost-net or the dpdk pmd is not supported by the
>>> userspace backend. The change here only keeps the head descriptor updated
>>> last in the case of chained descriptors, so a helper might not help
>>> much.
>>>
>>> Wei
>>
>> Of course we can add batching support, why not?
> It is always good to improve performance, but this could probably be done
> in another separate patch; also we need to bear in mind that the qemu
> userspace backend is usually not the first option for performance-oriented
> users.
The point is to hide layout-specific things from the device emulation. If it
also helps performance, that can be treated as a nice byproduct.
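Ideally the device code would only collect the elems of a packet and hand
them to the virtio core in one call, roughly like this (caller-side sketch
only, the batched helper below does not exist yet):

    VirtQueueElement *elems[VIRTQUEUE_MAX_SIZE];
    size_t lens[VIRTQUEUE_MAX_SIZE];
    unsigned int nelems = 0;

    /* inside the receive loop, instead of calling virtqueue_fill() per
     * element and special-casing the head: */
    elems[nelems] = elem;
    lens[nelems] = total;
    nelems++;

    /* after the loop, the virtio core picks the update order (head last
     * for the packed ring, any order for the split ring): */
    virtqueue_fill_batched(q->rx_vq, elems, lens, nelems);
    virtqueue_flush(q->rx_vq, nelems);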
>
> AFAICT, virtqueue_fill() is a generic API used by all relevant userspace
> virtio devices and it does not support batching. Supporting batching without
> touching virtqueue_fill() would change the meaning of its 'idx' parameter,
> which should be kept as it is.
>
> To fix it, I have two proposals so far:
> 1). batching support (two APIs needed to keep compatibility)
> 2). save a head elem per vq instead of caching an array of elems like vhost
> does, and introduce a new API (virtqueue_chain_fill()) that adds a
> parameter 'more' to the current virtqueue_fill() to indicate whether more
> descriptor(s) are coming in the chain.
>
> Either way the API changes somehow, and it does not seem as clean and clear
> as wanted.
It's as simple as accepting an array of elems in e.g.
virtqueue_fill_batched()?
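Something like the following minimal sketch (name and signature are made up,
not existing code):

    /* Fill all elements of one packet in a single call, so that only the
     * virtio core needs to know that the packed ring wants the head
     * (elems[0]) updated last. */
    void virtqueue_fill_batched(VirtQueue *vq, VirtQueueElement **elems,
                                const size_t *lens, unsigned int count)
    {
        unsigned int i;

        /* tail elements first ... */
        for (i = 1; i < count; i++) {
            virtqueue_fill(vq, elems[i], lens[i], i);
        }
        /* ... and the head last, so a packed ring driver never sees the
         * head before the rest of the chain is visible. */
        if (count) {
            virtqueue_fill(vq, elems[0], lens[0], 0);
        }
    }

For the split ring this is equivalent to the current per-element calls; for
the packed ring the ordering above is the part that would matter.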
>
> Any better idea?
>
>> Your code assumes the device knows virtio layout-specific details, which
>> breaks the layering. The device should not care about the actual layout.
>>
> Good point, but anyway, changes to the virtio-net receive code path are
> unavoidable to support both split and packed rings, and batching is more
> like a new feature.
It's ok to change the code as a result of introducing a generic helper, but
it's bad to change the code to work around a bad API.
Thanks
>
> Wei
>
>> Thanks
>>
>>
>>>> Thanks
>>>>
>>>>
>>>>> ---
>>>>>  hw/net/virtio-net.c | 11 ++++++++++-
>>>>>  1 file changed, 10 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
>>>>> index 3f319ef..330abea 100644
>>>>> --- a/hw/net/virtio-net.c
>>>>> +++ b/hw/net/virtio-net.c
>>>>> @@ -1251,6 +1251,8 @@ static ssize_t virtio_net_receive_rcu(NetClientState *nc, const uint8_t *buf,
>>>>>      struct virtio_net_hdr_mrg_rxbuf mhdr;
>>>>>      unsigned mhdr_cnt = 0;
>>>>>      size_t offset, i, guest_offset;
>>>>> +    VirtQueueElement head;
>>>>> +    int head_len = 0;
>>>>>
>>>>>      if (!virtio_net_can_receive(nc)) {
>>>>>          return -1;
>>>>> @@ -1328,7 +1330,13 @@ static ssize_t virtio_net_receive_rcu(NetClientState *nc, const uint8_t *buf,
>>>>>          }
>>>>>
>>>>>          /* signal other side */
>>>>> -        virtqueue_fill(q->rx_vq, elem, total, i++);
>>>>> +        if (i == 0) {
>>>>> +            head_len = total;
>>>>> +            head = *elem;
>>>>> +        } else {
>>>>> +            virtqueue_fill(q->rx_vq, elem, len, i);
>>>>> +        }
>>>>> +        i++;
>>>>>          g_free(elem);
>>>>>      }
>>>>>
>>>>> @@ -1339,6 +1347,7 @@ static ssize_t virtio_net_receive_rcu(NetClientState *nc, const uint8_t *buf,
>>>>>                       &mhdr.num_buffers, sizeof mhdr.num_buffers);
>>>>>      }
>>>>>
>>>>> +    virtqueue_fill(q->rx_vq, &head, head_len, 0);
>>>>>      virtqueue_flush(q->rx_vq, i);
>>>>>      virtio_notify(vdev, q->rx_vq);
Thread overview: 41+ messages
2019-02-14 4:26 [Qemu-devel] [PATCH v4 00/11] packed ring virtio-net backends support wexu
2019-02-14 4:26 ` [Qemu-devel] [PATCH v4 01/11] virtio: rename structure for packed ring wexu
2019-02-14 4:26 ` [Qemu-devel] [PATCH v4 02/11] virtio: device/driver area size calculation helper for split ring wexu
2019-02-14 4:26 ` [Qemu-devel] [PATCH v4 03/11] virtio: initialize packed ring region wexu
2019-02-14 4:26 ` [Qemu-devel] [PATCH v4 04/11] virtio: initialize wrap counter for packed ring wexu
2019-02-14 4:26 ` [Qemu-devel] [PATCH v4 05/11] virtio: queue/descriptor check helpers " wexu
2019-02-14 4:26 ` [Qemu-devel] [PATCH v4 06/11] virtio: get avail bytes check " wexu
2019-02-18 7:27 ` Jason Wang
2019-02-18 17:07 ` Wei Xu
2019-02-19 6:24 ` Jason Wang
2019-02-19 8:24 ` Wei Xu
2019-02-14 4:26 ` [Qemu-devel] [PATCH v4 07/11] virtio: fill/flush/pop " wexu
2019-02-18 7:51 ` Jason Wang
2019-02-18 14:46 ` Wei Xu
2019-02-19 6:49 ` Jason Wang
2019-02-19 8:21 ` Wei Xu
2019-02-19 9:33 ` Jason Wang
2019-02-19 11:34 ` Wei Xu
2019-02-14 4:26 ` [Qemu-devel] [PATCH v4 08/11] virtio: event suppression support " wexu
2019-02-19 7:19 ` Jason Wang
2019-02-19 10:40 ` Wei Xu
2019-02-19 13:06 ` Jason Wang
2019-02-20 2:17 ` Wei Xu
2019-02-14 4:26 ` [Qemu-devel] [PATCH v4 09/11] virtio-net: update the head descriptor in a chain lastly wexu
2019-02-19 7:23 ` Jason Wang
2019-02-19 10:51 ` Wei Xu
2019-02-19 13:09 ` Jason Wang
2019-02-20 1:54 ` Wei Xu
2019-02-20 2:34 ` Jason Wang [this message]
2019-02-20 4:01 ` Wei Xu
2019-02-20 7:53 ` Jason Wang
2019-02-14 4:26 ` [Qemu-devel] [PATCH v4 10/11] virtio: migration support for packed ring wexu
2019-02-19 7:30 ` Jason Wang
2019-02-19 11:00 ` Wei Xu
2019-02-19 13:12 ` Jason Wang
2019-02-14 4:26 ` [Qemu-devel] [PATCH v4 11/11] virtio: CLI and provide packed ring feature bit by default wexu
2019-02-19 7:32 ` Jason Wang
2019-02-19 11:23 ` Wei Xu
2019-02-19 13:33 ` Jason Wang
2019-02-20 0:46 ` Wei Xu
2019-02-19 7:35 ` [Qemu-devel] [PATCH v4 00/11] packed ring virtio-net backends support Jason Wang