From: Wei Wang <wei.w.wang@intel.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: jasowang@redhat.com, stefanha@gmail.com,
	marcandre.lureau@gmail.com, pbonzini@redhat.com,
	virtio-dev@lists.oasis-open.org, qemu-devel@nongnu.org,
	jan.scheurich@ericsson.com
Subject: Re: [Qemu-devel] [virtio-dev] Re: [PATCH] virtio-net: keep the packet layout intact
Date: Tue, 16 May 2017 12:49:24 +0800
Message-ID: <591A84D4.1030006@intel.com>
In-Reply-To: <591A7FCE.1010703@intel.com>

On 05/16/2017 12:27 PM, Wei Wang wrote:
> On 05/15/2017 10:46 PM, Michael S. Tsirkin wrote:
>> On Mon, May 15, 2017 at 05:29:15PM +0800, Wei Wang wrote:
>>> Ping for comments, thanks.
>>>
>>> On 05/11/2017 12:57 PM, Wei Wang wrote:
>>>> The current implementation may change the packet layout when
>>>> vnet_hdr needs an endianness swap. The layout change causes
>>>> one more iov to be added to the iov[] passed from the guest, which
>>>> is a barrier to making the TX queue size 1024 due to the possible
>>>> off-by-one issue.
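
For illustration, here is a minimal sketch of that layout change
(assuming a simplified two-field header; the real struct
virtio_net_hdr and QEMU's conversion helpers differ). Byte-swapping
the header into a local copy splits the guest's first element in two,
which is where the extra iov entry comes from:

#include <endian.h>
#include <stdint.h>
#include <string.h>
#include <sys/uio.h>

/* Hypothetical two-field header, standing in for struct virtio_net_hdr. */
struct vnet_hdr_sketch {
    uint16_t hdr_len;
    uint16_t gso_size;
};

/* Byte-swap the header into a local copy and rebuild the iov[].
 * Assumes in[0] holds at least the whole header. When the guest
 * packed header and data into one element, the output ends up with
 * in_cnt + 1 elements -- the off-by-one mentioned above. */
static int fixup_hdr_iov(const struct iovec *in, int in_cnt,
                         struct iovec *out,
                         struct vnet_hdr_sketch *local)
{
    memcpy(local, in[0].iov_base, sizeof(*local));
    local->hdr_len  = htole16(local->hdr_len);
    local->gso_size = htole16(local->gso_size);

    out[0].iov_base = local;                 /* swapped local copy */
    out[0].iov_len  = sizeof(*local);
    out[1].iov_base = (uint8_t *)in[0].iov_base + sizeof(*local);
    out[1].iov_len  = in[0].iov_len - sizeof(*local);
    memcpy(&out[2], &in[1], (size_t)(in_cnt - 1) * sizeof(in[0]));
    return in_cnt + 1;                       /* one extra element */
}

Avoiding that extra element is exactly what the patch trades a data
copy for.
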
>> It blocks making it 512 but I don't think we can make it 1024
>> as entries might cross page boundaries and get split.
>>
>
> I agree with the performance loss issue you mentioned
> below, thanks. To understand more here, could you please
> shed some light on "entries can't cross page boundaries"?
>
> The virtio spec doesn't seem to mention that vring_desc entries
> must not span two physically contiguous pages, and I didn't find
> such a restriction in the implementation either.
> On the device side, the writev manual does 
typo - "does" -> "doesn't"


> require the iov[]
> array to be in one page only, and the limit on iovcnt is 1024.
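
To make that limit concrete, a minimal sketch (on Linux, IOV_MAX is
1024 and writev(2) fails with EINVAL above it; nothing requires the
iov[] array to fit in a single page):

#include <errno.h>
#include <limits.h>
#include <sys/uio.h>

/* Reject oversized chains before handing them to writev(2). */
static ssize_t checked_writev(int fd, const struct iovec *iov, int iovcnt)
{
    if (iovcnt <= 0 || iovcnt > IOV_MAX) {   /* IOV_MAX == 1024 on Linux */
        errno = EINVAL;
        return -1;
    }
    return writev(fd, iov, iovcnt);
}
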
>
>
>>>> This patch changes the implementation to keep the packet layout
>>>> intact. In this case, the number of iov[] elements passed to
>>>> writev will equal the number obtained from the guest.
>>>>
>>>> Signed-off-by: Wei Wang <wei.w.wang@intel.com>
>> As this is at the cost of a full data copy, I don't think
>> this makes sense. We could limit this to when the sg list does
>> not fit in 1024.
>
> Yes, this would cause a performance loss with a layout where the
> data is adjacent to the vnet_hdr. Since you prefer the other
> solution below, I'll skip discussing my ideas for avoiding that copy.
>
>> But I really think we should just add a max s/g field to virtio
>> and then we'll be free to increase the ring size.
>
> Yes, that's also a good way to solve it. So, add a new device
> property, "max_chain_size", and a feature bit to detect it?
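
Purely as a sketch of that idea -- the feature bit name, bit number,
and config field below are made up for discussion and are not in the
virtio spec:

#include <stdint.h>

/* Hypothetical feature bit: the device advertises a per-packet
 * descriptor chain limit. */
#define VIRTIO_NET_F_MAX_CHAIN_SIZE 31       /* made-up bit number */

struct virtio_net_config_sketch {
    /* ...existing config fields would precede this... */
    uint16_t max_chain_size;   /* max descriptors per packet; valid
                                  only when the feature bit above is
                                  negotiated */
};

The driver would then cap its s/g chains at max_chain_size instead of
assuming anything about the ring size.
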
>
>

Best,
Wei

Thread overview: 5+ messages
2017-05-11  4:57 [Qemu-devel] [PATCH] virtio-net: keep the packet layout intact Wei Wang
2017-05-15  9:29 ` Wei Wang
2017-05-15 14:46   ` Michael S. Tsirkin
2017-05-16  4:27     ` [Qemu-devel] [virtio-dev] " Wei Wang
2017-05-16  4:49       ` Wei Wang [this message]
