From: Shirley Ma <mashirle@us.ibm.com>
To: Rusty Russell <rusty@rustcorp.com.au>
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
Herbert Xu <herbert@gondor.hengli.com.au>,
davem@davemloft.net, kvm@vger.kernel.org, netdev@vger.kernel.org
Subject: Re: [PATCH 2/2] virtio_net: remove send completion interrupts and avoid TX queue overrun through packet drop
Date: Wed, 23 Mar 2011 21:14:32 -0700 [thread overview]
Message-ID: <1300940073.3441.42.camel@localhost.localdomain> (raw)
In-Reply-To: <87r59xbbr6.fsf@rustcorp.com.au>
On Thu, 2011-03-24 at 11:00 +1030, Rusty Russell wrote:
> > With simply removing the notify here, it does help the case when TX
> > overrun hits too often, for example for 1K message size, the single
> > TCP_STREAM performance improved from 2.xGb/s to 4.xGb/s.
>
> OK, we'll be getting rid of the "kick on full", so please delete that
> on all benchmarks.
>
> Now, does the capacity check before add_buf() still win anything? I
> can't see how unless we have some weird bug.
>
> Once we've sorted that out, we should look at the more radical change
> of publishing last_used and using that to intuit whether interrupts
> should be sent. If we're not careful with ordering and barriers that
> could introduce more bugs.
Without the kick, the capacity check is no longer necessary. I am
regenerating the patch to check the add_buf return value instead, and
will submit it after it passes all tests.
> Anything else on the optimization agenda I've missed?
Tom found a small performance gain in the TCP_RR workload from freeing
the used buffers once half of the ring is full. I think we need a new
API in virtio that frees all used buffers at once. I am testing the
performance now; the new API looks like this:
drivers/virtio/virtio_ring.c | 40 +++++++++++++++++++++++++++++++++++
include/linux/virtio.h | 6 +++++
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index cc2f73e..6d2dc16 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -329,6 +329,46 @@ void *virtqueue_get_buf(struct virtqueue *_vq, unsigned int *len)
}
EXPORT_SYMBOL_GPL(virtqueue_get_buf);
+
+int virtqueue_free_used(struct virtqueue *_vq, void (*free)(void *))
+{
+	struct vring_virtqueue *vq = to_vvq(_vq);
+	unsigned int i;
+	void *buf;
+
+	START_USE(vq);
+
+	if (unlikely(vq->broken)) {
+		END_USE(vq);
+		return -EIO;
+	}
+
+	/* Only get used array entries after they have been exposed by host. */
+	virtio_rmb();
+
+	while (vq->last_used_idx != vq->vring.used->idx) {
+		i = vq->vring.used->ring[vq->last_used_idx % vq->vring.num].id;
+
+		if (unlikely(i >= vq->vring.num)) {
+			BAD_RING(vq, "id %u out of range\n", i);
+			return -EINVAL;
+		}
+		if (unlikely(!vq->data[i])) {
+			BAD_RING(vq, "id %u is not a head!\n", i);
+			return -EINVAL;
+		}
+
+		/* detach_buf clears data, so grab it now. */
+		buf = vq->data[i];
+		detach_buf(vq, i);
+		free(buf);
+		vq->last_used_idx++;
+	}
+	END_USE(vq);
+	return vq->num_free;
+}
+EXPORT_SYMBOL_GPL(virtqueue_free_used);
+
void virtqueue_disable_cb(struct virtqueue *_vq)
{
struct vring_virtqueue *vq = to_vvq(_vq);
diff --git a/include/linux/virtio.h b/include/linux/virtio.h
index aff5b4f..19acc66 100644
--- a/include/linux/virtio.h
+++ b/include/linux/virtio.h
@@ -42,6 +42,10 @@ struct virtqueue {
* vq: the struct virtqueue we're talking about.
* len: the length written into the buffer
* Returns NULL or the "data" token handed to add_buf.
+ * virtqueue_free_used: free all used buffers in the queue at once
+ * vq: the struct virtqueue we're talking about.
+ * free: caller's callback used to free each buffer.
+ * Returns remaining capacity of the queue, or a negative errno on error.
* virtqueue_disable_cb: disable callbacks
* vq: the struct virtqueue we're talking about.
* Note that this is not necessarily synchronous, hence unreliable and only
@@ -82,6 +86,8 @@ void virtqueue_kick(struct virtqueue *vq);
void *virtqueue_get_buf(struct virtqueue *vq, unsigned int *len);
+int virtqueue_free_used(struct virtqueue *vq, void (*free)(void *buf));
+
void virtqueue_disable_cb(struct virtqueue *vq);
bool virtqueue_enable_cb(struct virtqueue *vq);
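For context, the drain loop above can be modeled in plain userspace C.
This is a toy sketch only (the ring layout and every name here are
invented for illustration, not the kernel implementation): walk the used
ring from the driver's last_used_idx up to the index published by the
device, hand each buffer to the caller's callback, and return the
remaining capacity in one pass rather than one get_buf call per buffer:

```c
#include <assert.h>
#include <stddef.h>

#define RING_NUM 8

/* Toy model of the used ring: the "device" bumps used_idx as it
 * consumes buffers; the driver drains from last_used_idx up to it. */
struct toy_ring {
	void *data[RING_NUM];		 /* driver-owned tokens, as vq->data[] */
	unsigned int used_ids[RING_NUM]; /* buffer ids in the used ring */
	unsigned int used_idx;		 /* written by the device */
	unsigned int last_used_idx;	 /* driver's drain cursor */
	unsigned int num_free;
};

static unsigned int freed;

static void count_free(void *buf)
{
	(void)buf;
	freed++;
}

/* Modeled on the proposed virtqueue_free_used(): free every used
 * buffer in one batch and report the remaining capacity. */
static int toy_free_used(struct toy_ring *vq, void (*free_fn)(void *))
{
	while (vq->last_used_idx != vq->used_idx) {
		unsigned int i = vq->used_ids[vq->last_used_idx % RING_NUM];
		void *buf = vq->data[i];

		vq->data[i] = NULL;	/* detach_buf() clears the token */
		free_fn(buf);
		vq->num_free++;
		vq->last_used_idx++;
	}
	return vq->num_free;
}
```

With three used entries pending, a single call frees all three and
reports a fully empty ring, which is the batching the TCP_RR result
above is after.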
Thanks
Shirley
Thread overview: 24+ messages
2011-03-17 0:12 [PATCH 2/2] virtio_net: remove send completion interrupts and avoid TX queue overrun through packet drop Shirley Ma
2011-03-17 5:02 ` Michael S. Tsirkin
2011-03-17 15:18 ` Shirley Ma
2011-03-18 3:28 ` Shirley Ma
2011-03-18 13:15 ` Michael S. Tsirkin
2011-03-18 16:54 ` Shirley Ma
2011-03-17 5:10 ` Rusty Russell
2011-03-17 15:10 ` Shirley Ma
2011-03-18 13:33 ` Herbert Xu
2011-03-19 1:41 ` Shirley Ma
2011-03-21 18:03 ` Shirley Ma
2011-03-22 11:36 ` Michael S. Tsirkin
2011-03-23 2:26 ` Shirley Ma
2011-03-24 0:30 ` Rusty Russell
2011-03-24 4:14 ` Shirley Ma [this message]
2011-03-24 14:28 ` Michael S. Tsirkin
2011-03-24 17:46 ` Shirley Ma
2011-03-24 18:10 ` Michael S. Tsirkin
2011-03-25 4:51 ` Rusty Russell
2011-03-25 4:50 ` Rusty Russell
2011-03-27 7:52 ` Michael S. Tsirkin
2011-04-04 6:13 ` Rusty Russell
2011-03-24 0:16 ` Rusty Russell
2011-03-24 6:39 ` Michael S. Tsirkin