netdev.vger.kernel.org archive mirror
* TX from KVM guest virtio_net to vhost issues
@ 2011-03-09 21:46 Shirley Ma
From: Shirley Ma @ 2011-03-09 21:46 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Tom Lendacky, Rusty Russell, Krishna Kumar2, David Miller, kvm,
	netdev, steved

We have had lots of performance discussions about virtio_net and vhost
communication, so I think it's better to reach a common understanding of
the code first; then we can seek the right directions for improving it.
We also need to collect more statistics on both virtio and vhost.

Let's look at TX first: from virtio_net (guest) to vhost (host). The
send vq is shared between guest virtio_net and host vhost, and memory
barriers are used to keep the two sides' view of it in sync.
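
For reference, the guest's publish side looks roughly like this (a
minimal sketch of the vring protocol, not the exact code; notify_host()
is a hypothetical stand-in for the actual PIO kick):

    /* Guest: publish the buffer, then the index, then maybe kick. */
    vq->vring.avail->ring[avail % vq->vring.num] = head;
    wmb();          /* descriptor visible before the index update */
    vq->vring.avail->idx = ++avail;
    mb();           /* index visible before we read the host's flag */
    if (!(vq->vring.used->flags & VRING_USED_F_NO_NOTIFY))
            notify_host(vq);        /* hypothetical kick helper */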

At the start:

The guest virtio_net TX completion interrupt (for freeing used skbs) is
disabled. It is enabled only when the send vq overruns and the guest
has to wait for vhost to consume more of the available skbs.
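
The overrun path looks roughly like this (simplified from start_xmit()
in drivers/net/virtio_net.c of that era; treat it as a sketch, not the
literal code):

    if (capacity < 2 + MAX_SKB_FRAGS) {
            /* Ring (nearly) full: stop the queue and re-enable TX
             * completion interrupts so used skbs wake us up. */
            netif_stop_queue(dev);
            if (unlikely(!virtqueue_enable_cb(vi->svq))) {
                    /* Used skbs appeared while interrupts were off:
                     * reclaim them and keep transmitting. */
                    free_old_xmit_skbs(vi);
                    netif_start_queue(dev);
                    virtqueue_disable_cb(vi->svq);
            }
    }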

Host vhost notification is enabled in the beginning (so the guest's
kick wakes vhost up to consume available skbs). It is disabled whenever
the send vq is not empty, and re-enabled by vhost once the send vq
becomes empty.
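
Enable/disable here is just a flag in the shared used ring; a minimal
sketch along the lines of the vhost helpers (the avail-index re-check
is paraphrased and error handling is dropped):

    /* Host asks the guest not to kick via a flag the host owns. */
    static void vhost_disable_notify(struct vhost_virtqueue *vq)
    {
            vq->used_flags |= VRING_USED_F_NO_NOTIFY;
            put_user(vq->used_flags, &vq->used->flags);
    }

    static bool vhost_enable_notify(struct vhost_virtqueue *vq)
    {
            u16 avail_idx;

            vq->used_flags &= ~VRING_USED_F_NO_NOTIFY;
            put_user(vq->used_flags, &vq->used->flags);
            mb();   /* flag visible before re-checking for work */
            get_user(avail_idx, &vq->avail->idx);
            /* True if the guest queued new skbs while the flag
             * was off, so the caller re-runs its loop. */
            return avail_idx != vq->avail_idx;
    }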

In the guest's start_xmit(), the guest first frees used skbs, then
posts the new skb to vhost. Ideally the guest never enables the TX
completion interrupt to free used skbs, as long as vhost keeps posting
used skbs back into the send vq.
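
Schematically (the function names follow drivers/net/virtio_net.c; the
bodies are elided):

    static netdev_tx_t start_xmit(struct sk_buff *skb,
                                  struct net_device *dev)
    {
            struct virtnet_info *vi = netdev_priv(dev);

            /* 1. Reclaim whatever vhost has already used ... */
            free_old_xmit_skbs(vi);
            /* 2. ... then post the new skb on the send vq ... */
            xmit_skb(vi, skb);
            /* 3. ... and kick vhost; the kick is skipped while the
             *    host has notification disabled. */
            virtqueue_kick(vi->svq);
            /* ... overrun handling as sketched above ... */
            return NETDEV_TX_OK;
    }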

In vhost's handle_tx(): vhost is woken up by the guest whenever the
send vq has an skb to send, and if it exits handle_tx() while the send
vq is still not empty, it does so without enabling notification;
notification is re-enabled only when the send vq is found empty.
Ideally, if the guest keeps xmitting skbs into the send vq, the
notification is never enabled.
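
The shape of the loop (a simplified sketch of handle_tx() in
drivers/vhost/net.c; the sendmsg() hand-off to the tap backend is
elided):

    for (;;) {
            head = vhost_get_vq_desc(&net->dev, vq, vq->iov,
                                     ARRAY_SIZE(vq->iov),
                                     &out, &in, NULL, NULL);
            if (head == vq->num) {          /* send vq is empty */
                    if (unlikely(vhost_enable_notify(vq))) {
                            /* Guest queued an skb just as we
                             * re-armed: disable and keep draining. */
                            vhost_disable_notify(vq);
                            continue;
                    }
                    break;  /* exit with notification enabled */
            }
            /* ... hand the skb to the backend ... */
            vhost_add_used_and_signal(&net->dev, vq, head, 0);
    }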

I don't see any issues with this implementation.

However, in our TCP_STREAM small-message-size test, we found that
somehow the guest couldn't see more used skbs to free, which caused
frequent TX send queue overruns.

In our TCP_RR small-message-size multiple-stream test, we found that
vhost couldn't see more xmit skbs in the send vq, so it enabled
notification too often.

What's the possible cause here in xmit? How are guest and vhost being
scheduled? And is it possible for guest virtio_net to cooperate with
vhost for ideal performance, so that both can keep pace with the send
vq without many notifications and exits?

Thanks
Shirley





Thread overview: 10+ messages
2011-03-09 21:46 TX from KVM guest virtio_net to vhost issues Shirley Ma
2011-03-11  6:19 ` Rusty Russell
2011-03-14 20:29   ` [PATCH 0/2] publish last used index (was Re: TX from KVM guest virtio_net to vhost issues) Michael S. Tsirkin
2011-03-14 20:30     ` [PATCH 1/2] virtio: put last seen used index into ring itself Michael S. Tsirkin
2011-03-15  0:21       ` Shirley Ma
2011-03-17 12:20         ` Michael S. Tsirkin
2011-03-14 20:30     ` [PATCH 2/2] vhost-net: utilize PUBLISH_USED_IDX feature Michael S. Tsirkin
2011-03-17  0:20   ` TX from KVM guest virtio_net to vhost issues Shirley Ma
2011-03-17  6:00     ` Michael S. Tsirkin
2011-03-17 15:07       ` Shirley Ma
