* [PATCHv2 RFC 0/4] virtio and vhost-net capacity handling
@ 2011-06-02 15:42 Michael S. Tsirkin
0 siblings, 0 replies; 9+ messages in thread
From: Michael S. Tsirkin @ 2011-06-02 15:42 UTC (permalink / raw)
Cc: Krishna Kumar, Carsten Otte, lguest, Shirley Ma, kvm, linux-s390,
netdev, habanero, Heiko Carstens, linux-kernel, virtualization,
steved, Christian Borntraeger, Tom Lendacky, Martin Schwidefsky,
linux390
OK, here's a new attempt to use the new capacity api. I also added more
comments to clarify the logic. Hope this is more readable. Let me know
pls.
This is on top of the patches applied by Rusty.
Warning: untested. Posting now to give people a chance to
comment on the API.
Changes from v1:
- fix comment in patch 2 to correct confusion noted by Rusty
- rewrite patch 3 along the lines suggested by Rusty
note: it's not exactly the same but I hope it's close
enough, the main difference is that mine does limited
polling even in the unlikely xmit failure case.
- added a patch to not return capacity from add_buf
it always looked like a weird hack
Michael S. Tsirkin (4):
virtio_ring: add capacity check API
virtio_net: fix tx capacity checks using new API
virtio_net: limit xmit polling
Revert "virtio: make add_buf return capacity remaining"
drivers/net/virtio_net.c | 111 ++++++++++++++++++++++++++----------------
drivers/virtio/virtio_ring.c | 10 +++-
include/linux/virtio.h | 7 ++-
3 files changed, 84 insertions(+), 44 deletions(-)
--
1.7.5.53.gc233e
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCHv2 RFC 0/4] virtio and vhost-net capacity handling
[not found] <cover.1307029008.git.mst@redhat.com>
@ 2011-06-02 17:17 ` Michael S. Tsirkin
[not found] ` <20110602171721.GA13215@redhat.com>
` (2 subsequent siblings)
3 siblings, 0 replies; 9+ messages in thread
From: Michael S. Tsirkin @ 2011-06-02 17:17 UTC (permalink / raw)
To: linux-kernel
Cc: Krishna Kumar, Carsten Otte, lguest, Shirley Ma, kvm, linux-s390,
netdev, habanero, Heiko Carstens, virtualization, steved,
Christian Borntraeger, Tom Lendacky, Martin Schwidefsky, linux390
On Thu, Jun 02, 2011 at 06:42:35PM +0300, Michael S. Tsirkin wrote:
> OK, here's a new attempt to use the new capacity api. I also added more
> comments to clarify the logic. Hope this is more readable. Let me know
> pls.
>
> This is on top of the patches applied by Rusty.
>
> Warning: untested. Posting now to give people a chance to
> comment on the API.
>
> Changes from v1:
> - fix comment in patch 2 to correct confusion noted by Rusty
> - rewrite patch 3 along the lines suggested by Rusty
> note: it's not exactly the same but I hope it's close
> enough, the main difference is that mine does limited
> polling even in the unlikely xmit failure case.
> - added a patch to not return capacity from add_buf
> it always looked like a weird hack
>
> Michael S. Tsirkin (4):
> virtio_ring: add capacity check API
> virtio_net: fix tx capacity checks using new API
> virtio_net: limit xmit polling
> Revert "virtio: make add_buf return capacity remaining"
>
> drivers/net/virtio_net.c | 111 ++++++++++++++++++++++++++----------------
> drivers/virtio/virtio_ring.c | 10 +++-
> include/linux/virtio.h | 7 ++-
> 3 files changed, 84 insertions(+), 44 deletions(-)
>
> --
> 1.7.5.53.gc233e
And just FYI, here's a patch (on top) that I considered but
decided against. This does a single get_buf before
xmit. I thought it's not really needed as the capacity
check in add_buf is relatively cheap, and we removed
the kick in xmit_skb. But the point is that the loop
will now be easy to move around if someone manages
to show this benefits speed (which I doubt).
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index b25db1c..75af5be 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -590,6 +590,9 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
struct virtnet_info *vi = netdev_priv(dev);
int ret, i;
+ /* Try to pop an skb, to increase the chance we can add this one. */
+ free_old_xmit_skb(vi);
+
/* We normally do have space in the ring, so try to queue the skb as
* fast as possible. */
ret = xmit_skb(vi, skb);
@@ -627,8 +630,7 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
* This is so that we don't hog the skb memory unnecessarily. *
* Doing this after kick means there's a chance we'll free
* the skb we have just sent, which is hot in cache. */
- for (i = 0; i < 2; i++)
- free_old_xmit_skb(vi);
+ free_old_xmit_skb(vi);
if (likely(free_xmit_capacity(vi)))
return NETDEV_TX_OK;
* Re: [PATCHv2 RFC 0/4] virtio and vhost-net capacity handling
[not found] ` <20110602171721.GA13215@redhat.com>
@ 2011-06-06 3:39 ` Rusty Russell
0 siblings, 0 replies; 9+ messages in thread
From: Rusty Russell @ 2011-06-06 3:39 UTC (permalink / raw)
To: Michael S. Tsirkin, linux-kernel
Cc: Krishna Kumar, Carsten Otte, lguest, Shirley Ma, kvm, linux-s390,
netdev, habanero, Heiko Carstens, virtualization, steved,
Christian Borntraeger, Tom Lendacky, Martin Schwidefsky, linux390
On Thu, 2 Jun 2011 20:17:21 +0300, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> On Thu, Jun 02, 2011 at 06:42:35PM +0300, Michael S. Tsirkin wrote:
> > OK, here's a new attempt to use the new capacity api. I also added more
> > comments to clarify the logic. Hope this is more readable. Let me know
> > pls.
> >
> > This is on top of the patches applied by Rusty.
> >
> > Warning: untested. Posting now to give people a chance to
> > comment on the API.
> >
> > Changes from v1:
> > - fix comment in patch 2 to correct confusion noted by Rusty
> > - rewrite patch 3 along the lines suggested by Rusty
> > note: it's not exactly the same but I hope it's close
> > enough, the main difference is that mine does limited
> > polling even in the unlikely xmit failure case.
> > - added a patch to not return capacity from add_buf
> > it always looked like a weird hack
> >
> > Michael S. Tsirkin (4):
> > virtio_ring: add capacity check API
> > virtio_net: fix tx capacity checks using new API
> > virtio_net: limit xmit polling
> > Revert "virtio: make add_buf return capacity remaining"
> >
> > drivers/net/virtio_net.c | 111 ++++++++++++++++++++++++++----------------
> > drivers/virtio/virtio_ring.c | 10 +++-
> > include/linux/virtio.h | 7 ++-
> > 3 files changed, 84 insertions(+), 44 deletions(-)
> >
> > --
> > 1.7.5.53.gc233e
>
>
> And just FYI, here's a patch (on top) that I considered but
> decided against. This does a single get_buf before
> xmit. I thought it's not really needed as the capacity
> check in add_buf is relatively cheap, and we removed
> the kick in xmit_skb. But the point is that the loop
> will now be easy to move around if someone manages
> to show this benefits speed (which I doubt).
Agreed. The other is clearer.
I like the approach these patches take. Testing is required, but I
think the final result is a neater driver than the current one, as well
as having nicer latency.
Thanks,
Rusty.
* Re: [PATCHv2 RFC 0/4] virtio and vhost-net capacity handling
[not found] <cover.1307029008.git.mst@redhat.com>
2011-06-02 17:17 ` Michael S. Tsirkin
[not found] ` <20110602171721.GA13215@redhat.com>
@ 2011-06-07 16:08 ` Michael S. Tsirkin
[not found] ` <20110607160830.GB17581@redhat.com>
3 siblings, 0 replies; 9+ messages in thread
From: Michael S. Tsirkin @ 2011-06-07 16:08 UTC (permalink / raw)
To: linux-kernel
Cc: Krishna Kumar, Carsten Otte, lguest, Shirley Ma, kvm, linux-s390,
netdev, habanero, Heiko Carstens, virtualization, steved,
Christian Borntraeger, Tom Lendacky, Martin Schwidefsky, linux390
On Thu, Jun 02, 2011 at 06:42:35PM +0300, Michael S. Tsirkin wrote:
> OK, here's a new attempt to use the new capacity api. I also added more
> comments to clarify the logic. Hope this is more readable. Let me know
> pls.
>
> This is on top of the patches applied by Rusty.
>
> Warning: untested. Posting now to give people a chance to
> comment on the API.
OK, this seems to have survived some testing so far,
after I dropped patch 4 and fixed build for patch 3
(build fixup patch sent in reply to the original).
I'll be mostly offline until Sunday, would appreciate
testing reports.
git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git
virtio-net-xmit-polling-v2
git://git.kernel.org/pub/scm/linux/kernel/git/mst/qemu-kvm.git
virtio-net-event-idx-v3
Thanks!
> Changes from v1:
> - fix comment in patch 2 to correct confusion noted by Rusty
> - rewrite patch 3 along the lines suggested by Rusty
> note: it's not exactly the same but I hope it's close
> enough, the main difference is that mine does limited
> polling even in the unlikely xmit failure case.
> - added a patch to not return capacity from add_buf
> it always looked like a weird hack
>
> Michael S. Tsirkin (4):
> virtio_ring: add capacity check API
> virtio_net: fix tx capacity checks using new API
> virtio_net: limit xmit polling
> Revert "virtio: make add_buf return capacity remaining"
>
> drivers/net/virtio_net.c | 111 ++++++++++++++++++++++++++----------------
> drivers/virtio/virtio_ring.c | 10 +++-
> include/linux/virtio.h | 7 ++-
> 3 files changed, 84 insertions(+), 44 deletions(-)
>
> --
> 1.7.5.53.gc233e
* Re: [PATCHv2 RFC 0/4] virtio and vhost-net capacity handling
[not found] ` <20110607160830.GB17581@redhat.com>
@ 2011-06-13 13:32 ` Krishna Kumar2
[not found] ` <OFE36F9C2B.43D9DB04-ON652578AE.0048E1AB-652578AE.004A1462@in.ibm.com>
1 sibling, 0 replies; 9+ messages in thread
From: Krishna Kumar2 @ 2011-06-13 13:32 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: habanero, lguest, Shirley Ma, kvm, Carsten Otte, linux-s390,
Heiko Carstens, linux-kernel, virtualization, steved,
Christian Borntraeger, Tom Lendacky, netdev, Martin Schwidefsky,
linux390
"Michael S. Tsirkin" <mst@redhat.com> wrote on 06/07/2011 09:38:30 PM:
> > This is on top of the patches applied by Rusty.
> >
> > Warning: untested. Posting now to give people a chance to
> > comment on the API.
>
> OK, this seems to have survived some testing so far,
> after I dropped patch 4 and fixed build for patch 3
> (build fixup patch sent in reply to the original).
>
> I'll be mostly offline until Sunday, would appreciate
> testing reports.
Hi Michael,
I ran the latest patches with 1K I/O (guest->local host) and
the results are (60 sec run for each test case):
______________________________
#sessions BW% SD%
______________________________
1 -25.6 47.0
2 -29.3 22.9
4 .8 1.6
8 1.6 0
16 -1.6 4.1
32 -5.3 2.1
48 11.3 -7.8
64 -2.8 .7
96 -6.2 .6
128 -10.6 12.7
______________________________
BW: -4.8 SD: 5.4
I tested it again to see if the regression is fleeting (since
the numbers vary quite a bit for 1K I/O even between guest->
local host), but:
______________________________
#sessions BW% SD%
______________________________
1 14.0 -17.3
2 19.9 -11.1
4 7.9 -15.3
8 9.6 -13.1
16 1.2 -7.3
32 -.6 -13.5
48 -28.7 10.0
64 -5.7 -.7
96 -9.4 -8.1
128 -9.4 .7
______________________________
BW: -3.7 SD: -2.0
With 16K, there was an improvement in SD, but
higher sessions seem to slightly degrade BW/SD:
______________________________
#sessions BW% SD%
______________________________
1 30.9 -25.0
2 16.5 -19.4
4 -1.3 7.9
8 1.4 6.2
16 3.9 -5.4
32 0 4.3
48 -.5 .1
64 32.1 -1.5
96 -2.1 23.2
128 -7.4 3.8
______________________________
BW: 5.0 SD: 7.5
Thanks,
- KK
* Re: [PATCHv2 RFC 0/4] virtio and vhost-net capacity handling
[not found] ` <OFE36F9C2B.43D9DB04-ON652578AE.0048E1AB-652578AE.004A1462@in.ibm.com>
@ 2011-06-13 13:35 ` Michael S. Tsirkin
2011-06-13 13:38 ` Krishna Kumar2
` (2 subsequent siblings)
3 siblings, 0 replies; 9+ messages in thread
From: Michael S. Tsirkin @ 2011-06-13 13:35 UTC (permalink / raw)
To: Krishna Kumar2
Cc: habanero, lguest, Shirley Ma, kvm, Carsten Otte, linux-s390,
Heiko Carstens, linux-kernel, virtualization, steved,
Christian Borntraeger, Tom Lendacky, netdev, Martin Schwidefsky,
linux390
On Mon, Jun 13, 2011 at 07:02:27PM +0530, Krishna Kumar2 wrote:
> "Michael S. Tsirkin" <mst@redhat.com> wrote on 06/07/2011 09:38:30 PM:
>
> > > This is on top of the patches applied by Rusty.
> > >
> > > Warning: untested. Posting now to give people a chance to
> > > comment on the API.
> >
> > OK, this seems to have survived some testing so far,
> > after I dropped patch 4 and fixed build for patch 3
> > (build fixup patch sent in reply to the original).
> >
> > I'll be mostly offline until Sunday, would appreciate
> > testing reports.
>
> Hi Michael,
>
> I ran the latest patches with 1K I/O (guest->local host) and
> the results are (60 sec run for each test case):
Hi!
Did you apply this one:
[PATCHv2 RFC 4/4] Revert "virtio: make add_buf return capacity remaining"
?
It turns out that that patch has a bug and should be reverted,
only patches 1-3 should be applied.
Could you confirm please?
> ______________________________
> #sessions BW% SD%
> ______________________________
> 1 -25.6 47.0
> 2 -29.3 22.9
> 4 .8 1.6
> 8 1.6 0
> 16 -1.6 4.1
> 32 -5.3 2.1
> 48 11.3 -7.8
> 64 -2.8 .7
> 96 -6.2 .6
> 128 -10.6 12.7
> ______________________________
> BW: -4.8 SD: 5.4
>
> I tested it again to see if the regression is fleeting (since
> the numbers vary quite a bit for 1K I/O even between guest->
> local host), but:
>
> ______________________________
> #sessions BW% SD%
> ______________________________
> 1 14.0 -17.3
> 2 19.9 -11.1
> 4 7.9 -15.3
> 8 9.6 -13.1
> 16 1.2 -7.3
> 32 -.6 -13.5
> 48 -28.7 10.0
> 64 -5.7 -.7
> 96 -9.4 -8.1
> 128 -9.4 .7
> ______________________________
> BW: -3.7 SD: -2.0
>
>
> With 16K, there was an improvement in SD, but
> higher sessions seem to slightly degrade BW/SD:
>
> ______________________________
> #sessions BW% SD%
> ______________________________
> 1 30.9 -25.0
> 2 16.5 -19.4
> 4 -1.3 7.9
> 8 1.4 6.2
> 16 3.9 -5.4
> 32 0 4.3
> 48 -.5 .1
> 64 32.1 -1.5
> 96 -2.1 23.2
> 128 -7.4 3.8
> ______________________________
> BW: 5.0 SD: 7.5
>
>
> Thanks,
>
> - KK
* Re: [PATCHv2 RFC 0/4] virtio and vhost-net capacity handling
[not found] ` <OFE36F9C2B.43D9DB04-ON652578AE.0048E1AB-652578AE.004A1462@in.ibm.com>
2011-06-13 13:35 ` Michael S. Tsirkin
@ 2011-06-13 13:38 ` Krishna Kumar2
[not found] ` <20110613133513.GA29884@redhat.com>
2011-06-19 8:53 ` Michael S. Tsirkin
3 siblings, 0 replies; 9+ messages in thread
From: Krishna Kumar2 @ 2011-06-13 13:38 UTC (permalink / raw)
To: Krishna Kumar2
Cc: habanero, lguest, Shirley Ma, kvm, Carsten Otte, linux-s390,
Michael S. Tsirkin, Heiko Carstens, linux-kernel, netdev-owner,
steved, Christian Borntraeger, Tom Lendacky, netdev,
Martin Schwidefsky, linux390, virtualization
> Krishna Kumar2/India/IBM@IBMIN wrote on 06/13/2011 07:02:27 PM:
...
> With 16K, there was an improvement in SD, but
> higher sessions seem to slightly degrade BW/SD:
I meant to say "With 16K, there was an improvement in BW"
above. Again the numbers are not very reproducible,
I will test with remote host also to see if I get more
consistent numbers.
Thanks,
- KK
>
> ______________________________
> #sessions BW% SD%
> ______________________________
> 1 30.9 -25.0
> 2 16.5 -19.4
> 4 -1.3 7.9
> 8 1.4 6.2
> 16 3.9 -5.4
> 32 0 4.3
> 48 -.5 .1
> 64 32.1 -1.5
> 96 -2.1 23.2
> 128 -7.4 3.8
> ______________________________
> BW: 5.0 SD: 7.5
* Re: [PATCHv2 RFC 0/4] virtio and vhost-net capacity handling
[not found] ` <20110613133513.GA29884@redhat.com>
@ 2011-06-13 13:44 ` Krishna Kumar2
0 siblings, 0 replies; 9+ messages in thread
From: Krishna Kumar2 @ 2011-06-13 13:44 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: habanero, lguest, Shirley Ma, kvm, Carsten Otte, linux-s390,
Heiko Carstens, linux-kernel, virtualization, steved,
Christian Borntraeger, Tom Lendacky, netdev, Martin Schwidefsky,
linux390
"Michael S. Tsirkin" <mst@redhat.com> wrote on 06/13/2011 07:05:13 PM:
> > I ran the latest patches with 1K I/O (guest->local host) and
> > the results are (60 sec run for each test case):
>
> Hi!
> Did you apply this one:
> [PATCHv2 RFC 4/4] Revert "virtio: make add_buf return capacity remaining"
> ?
>
> It turns out that that patch has a bug and should be reverted,
> only patches 1-3 should be applied.
>
> Could you confirm please?
No, I didn't apply that patch. I had also seen your mail
earlier on this patch breaking receive buffer processing
if applied.
thanks,
- KK
* Re: [PATCHv2 RFC 0/4] virtio and vhost-net capacity handling
[not found] ` <OFE36F9C2B.43D9DB04-ON652578AE.0048E1AB-652578AE.004A1462@in.ibm.com>
` (2 preceding siblings ...)
[not found] ` <20110613133513.GA29884@redhat.com>
@ 2011-06-19 8:53 ` Michael S. Tsirkin
3 siblings, 0 replies; 9+ messages in thread
From: Michael S. Tsirkin @ 2011-06-19 8:53 UTC (permalink / raw)
To: Krishna Kumar2
Cc: habanero, lguest, Shirley Ma, kvm, Carsten Otte, linux-s390,
Heiko Carstens, linux-kernel, virtualization, steved,
Christian Borntraeger, Tom Lendacky, netdev, Martin Schwidefsky,
linux390
On Mon, Jun 13, 2011 at 07:02:27PM +0530, Krishna Kumar2 wrote:
> "Michael S. Tsirkin" <mst@redhat.com> wrote on 06/07/2011 09:38:30 PM:
>
> > > This is on top of the patches applied by Rusty.
> > >
> > > Warning: untested. Posting now to give people a chance to
> > > comment on the API.
> >
> > OK, this seems to have survived some testing so far,
> > after I dropped patch 4 and fixed build for patch 3
> > (build fixup patch sent in reply to the original).
> >
> > I'll be mostly offline until Sunday, would appreciate
> > testing reports.
>
> Hi Michael,
>
> I ran the latest patches with 1K I/O (guest->local host) and
> the results are (60 sec run for each test case):
>
> ______________________________
> #sessions BW% SD%
> ______________________________
> 1 -25.6 47.0
> 2 -29.3 22.9
> 4 .8 1.6
> 8 1.6 0
> 16 -1.6 4.1
> 32 -5.3 2.1
> 48 11.3 -7.8
> 64 -2.8 .7
> 96 -6.2 .6
> 128 -10.6 12.7
> ______________________________
> BW: -4.8 SD: 5.4
>
> I tested it again to see if the regression is fleeting (since
> the numbers vary quite a bit for 1K I/O even between guest->
> local host), but:
>
> ______________________________
> #sessions BW% SD%
> ______________________________
> 1 14.0 -17.3
> 2 19.9 -11.1
> 4 7.9 -15.3
> 8 9.6 -13.1
> 16 1.2 -7.3
> 32 -.6 -13.5
> 48 -28.7 10.0
> 64 -5.7 -.7
> 96 -9.4 -8.1
> 128 -9.4 .7
> ______________________________
> BW: -3.7 SD: -2.0
>
>
> With 16K, there was an improvement in SD, but
> higher sessions seem to slightly degrade BW/SD:
>
> ______________________________
> #sessions BW% SD%
> ______________________________
> 1 30.9 -25.0
> 2 16.5 -19.4
> 4 -1.3 7.9
> 8 1.4 6.2
> 16 3.9 -5.4
> 32 0 4.3
> 48 -.5 .1
> 64 32.1 -1.5
> 96 -2.1 23.2
> 128 -7.4 3.8
> ______________________________
> BW: 5.0 SD: 7.5
>
>
> Thanks,
>
> - KK
I think I see one scenario where we do extra work:
when the TX ring overflows, the first attempt to
add_buf will fail, so the work to format the s/g
list is wasted. So it might make sense, after all,
to free up buffers up to capacity as the very first
step (typically a no-op), and call add_buf afterwards.
--
MST
^ permalink raw reply [flat|nested] 9+ messages in thread
end of thread, other threads:[~2011-06-19 8:53 UTC | newest]
Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-06-02 15:42 [PATCHv2 RFC 0/4] virtio and vhost-net capacity handling Michael S. Tsirkin
[not found] <cover.1307029008.git.mst@redhat.com>
2011-06-02 17:17 ` Michael S. Tsirkin
[not found] ` <20110602171721.GA13215@redhat.com>
2011-06-06 3:39 ` Rusty Russell
2011-06-07 16:08 ` Michael S. Tsirkin
[not found] ` <20110607160830.GB17581@redhat.com>
2011-06-13 13:32 ` Krishna Kumar2
[not found] ` <OFE36F9C2B.43D9DB04-ON652578AE.0048E1AB-652578AE.004A1462@in.ibm.com>
2011-06-13 13:35 ` Michael S. Tsirkin
2011-06-13 13:38 ` Krishna Kumar2
[not found] ` <20110613133513.GA29884@redhat.com>
2011-06-13 13:44 ` Krishna Kumar2
2011-06-19 8:53 ` Michael S. Tsirkin
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).