From: Avi Kivity <avi@redhat.com>
To: Mark McLoughlin <markmc@redhat.com>
Cc: kvm@vger.kernel.org
Subject: Re: [PATCH 0/6] Kill off the virtio_net tx mitigation timer
Date: Thu, 06 Nov 2008 19:38:21 +0200	[thread overview]
Message-ID: <49132B8D.2070300@redhat.com> (raw)
In-Reply-To: <1225989962.10879.19.camel@blaa>

Mark McLoughlin wrote:
> Hey,
> 	So, I went off and spent some time gathering more data on this stuff
> and putting it together in a more consumable fashion.
>
> 	Here are some graphs showing the effect some of these changes have on
> throughput, cpu utilization and vmexit rate:
>
>   http://markmc.fedorapeople.org/virtio-netperf/2008-11-06/
>
>   

This is very helpful.

> 	The results are a little surprising, and I'm not sure I've fully
> digested them yet but some conclusions:
>
>   1) Disabling notifications from the guest for longer helps; you see 
>      an increase in cpu utilization and vmexit rate, but that can be 
>      accounted for by the extra data we're transferring
>
>   

Graphing cpu/bandwidth (cycles/bit) will show that nicely.

>   2) Flushing (when the ring is full) in the I/O thread doesn't seem to
>      help anything; strangely, it has a detrimental effect on 
>      host->guest traffic, where I wouldn't expect us to hit this case 
>      at all.
>
>      I suspect we may not actually be hitting the full ring condition
>      in these tests at all.
>
>   

That's good; ring full == stall, especially with smp guests.


>   4) Removing the tx timer doesn't have a huge effect on guest->host, 
>      except for 32 byte buffers where we see a huge increase in vmexits 
>      and a drop in throughput. Bizarrely, we don't see this effect with 
>      64 byte buffers.
>   

Weird.  Cacheline size effects?  The host must copy twice the number of 
cachelines for the same throughput when moving from 64-byte to 32-byte 
buffers.

> 	
>      However, it does have a pretty significant impact on host->guest, 
>      which makes sense: in that case we just have a steady stream of 
>      TCP ACK packets, so if small guest->host packets are affected 
>      badly, so are the ACKs.
>   

no-tx-timer is good for two workloads: streaming gso packets, where the 
packet is so large the vmexit count is low anyway, and small, 
latency-sensitive packets, where you need the vmexits.  I'm worried 
about the workloads in between, which is why I'm pushing for the 
dynamic window.

>   5) The drop-mutex patch is a nice win overall, except for a huge 
>      increase in vmexits for sub-4k guest->host packets. Strange.
>   

What types of vmexits are these?  Virtio pio or mmu?  And what's the 
test length?  (I'm interested in both vmexits/sec and vmexits/bit.)

Maybe the allocator changes its behavior and we're faulting in pages.

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.


