From: "Michael S. Tsirkin" <mst@redhat.com>
To: Wei Wang <wei.w.wang@intel.com>
Cc: jasowang@redhat.com, stefanha@gmail.com,
marcandre.lureau@gmail.com, pbonzini@redhat.com,
virtio-dev@lists.oasis-open.org, qemu-devel@nongnu.org,
jan.scheurich@ericsson.com
Subject: Re: [Qemu-devel] [PATCH] virtio-net: keep the packet layout intact
Date: Mon, 15 May 2017 17:46:28 +0300
Message-ID: <20170515174044-mutt-send-email-mst@kernel.org>
In-Reply-To: <591974EB.2010409@intel.com>
On Mon, May 15, 2017 at 05:29:15PM +0800, Wei Wang wrote:
> Ping for comments, thanks.
>
> On 05/11/2017 12:57 PM, Wei Wang wrote:
> > The current implementation may change the packet layout when
> > vnet_hdr needs an endianness swap. The layout change causes
> > one more iov to be added to the iov[] passed from the guest, which
> > is a barrier to making the TX queue size 1024 due to the possible
> > off-by-one issue.
It blocks making it 512, but I don't think we can make it 1024 anyway,
as entries might cross page boundaries and get split.
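
To make the splitting point concrete, here is a standalone sketch (not
QEMU code; it assumes a fixed 4 KiB granularity purely for illustration,
while the real split points depend on how guest memory happens to be
mapped on the host):

/* Standalone sketch: counts how many host iovec entries a guest buffer
 * would need if every 4 KiB boundary forced a split. */
#include <stdio.h>

#define PAGE 4096UL

static unsigned long iov_entries(unsigned long gpa, unsigned long len)
{
    unsigned long n = 0;

    while (len) {
        unsigned long chunk = PAGE - (gpa % PAGE);

        if (chunk > len) {
            chunk = len;
        }
        gpa += chunk;
        len -= chunk;
        n++;
    }
    return n;
}

int main(void)
{
    /* A 1500-byte frame starting 100 bytes before a page boundary spans
     * two pages, so one guest entry may become two iovecs. */
    printf("%lu\n", iov_entries(PAGE - 100, 1500));
    return 0;
}

So even a descriptor count that fits in the ring can translate into more
iovec entries than the ring size once buffers straddle such boundaries.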
> >
> > This patch changes the implementation to keep the packet layout
> > intact. In this case, the number of iov[] entries passed to writev will
> > be equal to the number obtained from the guest.
> >
> > Signed-off-by: Wei Wang <wei.w.wang@intel.com>
As this comes at the cost of a full data copy, I don't think
this makes sense. We could limit the copy to the case where the sg list
does not fit in 1024 entries.
But I really think we should just add a max s/g field to virtio
and then we'll be free to increase the ring size.
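
For the first option, here is a standalone sketch of the decision (the
names only loosely mirror hw/net/virtio-net.c and MAX_SG is a stand-in
for the ring/writev limit; this is an illustration, not a drop-in patch):

/* Sketch only: take the cheap "prepend one iovec for the swapped header"
 * path unless the extra entry would overflow, and fall back to the
 * layout-preserving (but data-copying) path in that rare case. */
#include <stdio.h>

#define MAX_SG 1024u

enum tx_strategy {
    TX_PREPEND_HDR,   /* common: header as one extra iovec, no data copy */
    TX_BOUNCE_FIRST   /* rare: copy the first element, keep the iovec count */
};

static enum tx_strategy pick_strategy(unsigned int out_num)
{
    /* Prepending adds exactly one entry, so it can only overflow when the
     * guest already used every slot. */
    return out_num + 1 <= MAX_SG ? TX_PREPEND_HDR : TX_BOUNCE_FIRST;
}

int main(void)
{
    printf("%d\n", pick_strategy(3));        /* 0: prepend */
    printf("%d\n", pick_strategy(MAX_SG));   /* 1: bounce the first element */
    return 0;
}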
> > ---
> > hw/net/virtio-net.c | 29 ++++++++++++++---------------
> > 1 file changed, 14 insertions(+), 15 deletions(-)
> >
> > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > index 7d091c9..a51e9a8 100644
> > --- a/hw/net/virtio-net.c
> > +++ b/hw/net/virtio-net.c
> > @@ -1287,8 +1287,7 @@ static int32_t virtio_net_flush_tx(VirtIONetQueue *q)
> >      for (;;) {
> >          ssize_t ret;
> >          unsigned int out_num;
> > -        struct iovec sg[VIRTQUEUE_MAX_SIZE], sg2[VIRTQUEUE_MAX_SIZE + 1], *out_sg;
> > -        struct virtio_net_hdr_mrg_rxbuf mhdr;
> > +        struct iovec sg[VIRTQUEUE_MAX_SIZE], mhdr, *out_sg;
> >
> >          elem = virtqueue_pop(q->tx_vq, sizeof(VirtQueueElement));
> >          if (!elem) {
> > @@ -1305,7 +1304,10 @@ static int32_t virtio_net_flush_tx(VirtIONetQueue *q)
> >          }
> >
> >          if (n->has_vnet_hdr) {
> > -            if (iov_to_buf(out_sg, out_num, 0, &mhdr, n->guest_hdr_len) <
> > +            /* Buffer to copy vnet_hdr and the possible adjacent data */
> > +            mhdr.iov_len = out_sg[0].iov_len;
> > +            mhdr.iov_base = g_malloc0(mhdr.iov_len);
> > +            if (iov_to_buf(out_sg, out_num, 0, mhdr.iov_base, mhdr.iov_len) <
> >                  n->guest_hdr_len) {
> >                  virtio_error(vdev, "virtio-net header incorrect");
> >                  virtqueue_detach_element(q->tx_vq, elem, 0);
> > @@ -1313,17 +1315,12 @@ static int32_t virtio_net_flush_tx(VirtIONetQueue *q)
> >                  return -EINVAL;
> >              }
> >              if (n->needs_vnet_hdr_swap) {
> > -                virtio_net_hdr_swap(vdev, (void *) &mhdr);
> > -                sg2[0].iov_base = &mhdr;
> > -                sg2[0].iov_len = n->guest_hdr_len;
> > -                out_num = iov_copy(&sg2[1], ARRAY_SIZE(sg2) - 1,
> > -                                   out_sg, out_num,
> > -                                   n->guest_hdr_len, -1);
> > -                if (out_num == VIRTQUEUE_MAX_SIZE) {
> > -                    goto drop;
> > -                }
> > -                out_num += 1;
> > -                out_sg = sg2;
> > +                virtio_net_hdr_swap(vdev, mhdr.iov_base);
> > +                /* Copy the first iov where the vnet_hdr resides in */
> > +                out_num = iov_copy(sg, 1, &mhdr, 1, 0, mhdr.iov_len);
> > +                out_num += iov_copy(sg + 1, ARRAY_SIZE(sg) - 1, out_sg + 1,
> > +                                    elem->out_num - 1, 0, -1);
> > +                out_sg = sg;
> >              }
> >          }
> >          /*
> > @@ -1345,13 +1342,15 @@ static int32_t virtio_net_flush_tx(VirtIONetQueue *q)
> >
> >          ret = qemu_sendv_packet_async(qemu_get_subqueue(n->nic, queue_index),
> >                                        out_sg, out_num, virtio_net_tx_complete);
> > +        if (n->has_vnet_hdr) {
> > +            g_free(mhdr.iov_base);
> > +        }
> >          if (ret == 0) {
> >              virtio_queue_set_notification(q->tx_vq, 0);
> >              q->async_tx.elem = elem;
> >              return -EBUSY;
> >          }
> >
> > -drop:
> >          virtqueue_push(q->tx_vq, elem, 0);
> >          virtio_notify(vdev, q->tx_vq);
> >          g_free(elem);