From: Jason Wang <jasowang@redhat.com>
To: jiangyiwen <jiangyiwen@huawei.com>, stefanha@redhat.com
Cc: netdev@vger.kernel.org, kvm@vger.kernel.org,
virtualization@lists.linux-foundation.org
Subject: Re: [PATCH 1/5] VSOCK: support fill mergeable rx buffer in guest
Date: Wed, 7 Nov 2018 14:17:53 +0800 [thread overview]
Message-ID: <a06cc13c-c525-18da-9156-e6a792e6700d@redhat.com> (raw)
In-Reply-To: <5BE13331.7050901@huawei.com>
On 2018/11/6 2:22 PM, jiangyiwen wrote:
> On 2018/11/6 11:38, Jason Wang wrote:
>> On 2018/11/5 3:45 PM, jiangyiwen wrote:
>>> In driver probing, if the virtio device offers the VIRTIO_VSOCK_F_MRG_RXBUF
>>> feature, the driver fills mergeable rx buffers, so that the host can send
>>> using mergeable rx buffers. It fills one page at a time to accommodate both
>>> small and big packets.
>>>
>>> Signed-off-by: Yiwen Jiang <jiangyiwen@huawei.com>
>>> ---
>>> include/linux/virtio_vsock.h | 3 ++
>>> net/vmw_vsock/virtio_transport.c | 72 +++++++++++++++++++++++++++++-----------
>>> 2 files changed, 56 insertions(+), 19 deletions(-)
>>>
>>> diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
>>> index e223e26..bf84418 100644
>>> --- a/include/linux/virtio_vsock.h
>>> +++ b/include/linux/virtio_vsock.h
>>> @@ -14,6 +14,9 @@
>>> #define VIRTIO_VSOCK_MAX_BUF_SIZE 0xFFFFFFFFUL
>>> #define VIRTIO_VSOCK_MAX_PKT_BUF_SIZE (1024 * 64)
>>>
>>> +/* Virtio-vsock feature */
>>> +#define VIRTIO_VSOCK_F_MRG_RXBUF 0 /* Host can merge receive buffers. */
>>> +
>>> enum {
>>> VSOCK_VQ_RX = 0, /* for host to guest data */
>>> VSOCK_VQ_TX = 1, /* for guest to host data */
>>> diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
>>> index 5d3cce9..2040a9e 100644
>>> --- a/net/vmw_vsock/virtio_transport.c
>>> +++ b/net/vmw_vsock/virtio_transport.c
>>> @@ -64,6 +64,7 @@ struct virtio_vsock {
>>> struct virtio_vsock_event event_list[8];
>>>
>>> u32 guest_cid;
>>> + bool mergeable;
>>> };
>>>
>>> static struct virtio_vsock *virtio_vsock_get(void)
>>> @@ -256,6 +257,25 @@ static int virtio_transport_send_pkt_loopback(struct virtio_vsock *vsock,
>>> return 0;
>>> }
>>>
>>> +static int fill_mergeable_rx_buff(struct virtqueue *vq)
>>> +{
>>> + void *page = NULL;
>>> + struct scatterlist sg;
>>> + int err;
>>> +
>>> + page = (void *)get_zeroed_page(GFP_KERNEL);
>> Any reason to use a zeroed page?
> In the previous version, the entire virtio_vsock_pkt structure is preallocated
> in the guest with kzalloc, so it is contiguous zeroed physical memory, and the
> host only needs to fill in the virtio_vsock_hdr part.
>
> However, in the mergeable rx buffer version, the guest only fills a page into
> the vring descriptor, and on the host side I will reserve the size of
> virtio_vsock_pkt instead of writing the whole virtio_vsock_pkt, so for the
> structure's fields to be correct we should provide a zeroed page in advance.
I may be missing something, but it looks to me like only the header needs to be zeroed.
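If so, something along these lines (a rough sketch only, reusing the fill_mergeable_rx_buff() name from this patch and assuming the host overwrites the payload area anyway) should be sufficient:

static int fill_mergeable_rx_buff(struct virtqueue *vq)
{
	struct scatterlist sg;
	void *buf;
	int err;

	buf = (void *)__get_free_page(GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	/* Only the header area needs to start out zeroed; the payload
	 * area is fully rewritten by the host before we read it.
	 */
	memset(buf, 0, sizeof(struct virtio_vsock_hdr));

	sg_init_one(&sg, buf, PAGE_SIZE);

	err = virtqueue_add_inbuf(vq, &sg, 1, buf, GFP_KERNEL);
	if (err < 0) {
		free_page((unsigned long)buf);
		return err;
	}

	return 0;
}

That avoids zeroing a whole page per descriptor just to initialize a few dozen bytes of header.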
>
>>> + if (!page)
>>> + return -ENOMEM;
>>> +
>>> + sg_init_one(&sg, page, PAGE_SIZE);
>> FYI, for virtio-net we have several optimizations for mergeable rx buffers:
>>
>> - skb_page_frag_refill(), which can use high-order pages and reduce stress on the page allocator
>>
> You're right. Initially I wanted to use a memory pool to manage the rx buffers
> and then use that in a later optimization patch. Your advice is very helpful.
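Just for reference, a rough sketch of such a refill path (purely illustrative; the function name, the page_frag ownership and the fixed length below are my own simplifications, not part of this patch):

static int fill_mergeable_rx_buff_frag(struct virtqueue *vq,
				       struct page_frag *alloc_frag)
{
	struct scatterlist sg;
	unsigned int len = PAGE_SIZE;	/* could come from an EWMA, see below */
	char *buf;
	int err;

	/* Carve the buffer out of a (possibly high-order) cached page. */
	if (unlikely(!skb_page_frag_refill(len, alloc_frag, GFP_KERNEL)))
		return -ENOMEM;

	buf = (char *)page_address(alloc_frag->page) + alloc_frag->offset;
	get_page(alloc_frag->page);
	alloc_frag->offset += len;

	sg_init_one(&sg, buf, len);
	err = virtqueue_add_inbuf(vq, &sg, 1, buf, GFP_KERNEL);
	if (err < 0) {
		put_page(virt_to_head_page(buf));
		return err;
	}

	return 0;
}

Consecutive buffers then share one higher-order page, so the page allocator is hit once per compound page instead of once per descriptor.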
>
>> - we don't use a fixed buffer size; instead we use an EWMA to estimate the likely rx buffer size and avoid internal fragmentation
>>
> Ok, I will analyze the feature and consider adding it to my patches.
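For reference, virtio-net keeps a running EWMA of received packet lengths (DECLARE_EWMA(pkt_len, ...) in drivers/net/virtio_net.c) and sizes the next rx buffer from it. A rough, hypothetical sketch of the same idea for vsock (the names here are made up; linux/average.h provides the macro):

DECLARE_EWMA(vsock_rx_len, 0, 64)	/* generates struct ewma_vsock_rx_len */

static struct ewma_vsock_rx_len rx_len_avg;

/* rx path: record how much of the buffer the host actually used */
static void vsock_record_rx_len(unsigned int len)
{
	ewma_vsock_rx_len_add(&rx_len_avg, len);
}

/* refill path: size the next rx buffer from the running average */
static unsigned int vsock_mergeable_buf_len(void)
{
	unsigned int len = ewma_vsock_rx_len_read(&rx_len_avg);

	return clamp_t(unsigned int, len,
		       sizeof(struct virtio_vsock_hdr), PAGE_SIZE);
}

Small packets then get small buffers, and only large packets spill across several merged buffers.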
>
>> If we can try to reuse the virtio-net driver, we will get those nice features.
>>
> Yes, after all virtio-net has a very healthy ecosystem and already carries many
> performance optimizations, so reusing it is actually a good idea.
>
Yes, so my suggestion is to consider reusing them (unless we find
something that is a real blocker) instead of duplicating code, features,
and bugs.
Thanks
Thread overview: 4+ messages
2018-11-05 7:45 [PATCH 1/5] VSOCK: support fill mergeable rx buffer in guest jiangyiwen
2018-11-06 3:38 ` Jason Wang
2018-11-06 6:22 ` jiangyiwen
2018-11-07 6:17 ` Jason Wang [this message]