From: Mark McLoughlin <markmc@redhat.com>
To: "David S. Ahern" <daahern@cisco.com>
Cc: Avi Kivity <avi@qumranet.com>, kvm@vger.kernel.org
Subject: Re: [PATCH 5/6] kvm: qemu: virtio-net: handle all tx in I/O thread without timer
Date: Thu, 06 Nov 2008 17:02:56 +0000 [thread overview]
Message-ID: <1225990976.10879.34.camel@blaa> (raw)
In-Reply-To: <491068ED.3020902@cisco.com>
On Tue, 2008-11-04 at 08:23 -0700, David S. Ahern wrote:
>
> Mark McLoughlin wrote:
>
> > Note also that when tuning for a specific workload, which CPU
> > the I/O thread is pinned to is important.
> >
>
> Hi Mark:
>
> Can you give an example of when that has a noticeable effect?
>
> For example, if the guest handles network interrupts on vcpu0 and it is
> pinned to pcpu0 where should the IO thread be pinned for best performance?
Basically, the I/O thread is where packets are copied to and from host
kernel space at the moment.
If the packets get copied anywhere else along the way, you want those
copies to be served from a cache rather than from RAM.
With my netperf guest->host benchmark, you actually have four copies
going on:
1) netperf process in guest copying to guest kernel space
2) qemu process in the host copying between internal buffers
3) qemu process in the host copying to host kernel space
4) netserver process in the host copying into its buffers
My machine has four CPUs, with two 6MB L2 caches - each cache is shared
between two of the CPUs, so I set things up as follows:
  pcpu#3 - netserver, I/O thread, vcpu#0
  pcpu#4 - vcpu#1, virtio_net irq, netperf
which (hopefully) ensures that we're only doing one copy using RAM and
the rest are using the L1/L2 caches.
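For reference, that kind of pinning can be done with taskset(1) or
programmatically via sched_setaffinity(); here is a minimal sketch in
Python that pins the current process as an illustration - in the real
setup you would instead pin the netserver PID, and each qemu thread
(the I/O thread and each vcpu thread, listed under
/proc/<qemu-pid>/task/) individually:

```python
import os

# Pin this process to pcpu 0 as a stand-in; for the layout above you
# would pin netserver, the I/O thread and vcpu#0 to pcpu 3, and the
# vcpu#1 thread to pcpu 4, each by its own PID/TID.
os.sched_setaffinity(0, {0})            # pid 0 means "this process"
print(sorted(os.sched_getaffinity(0)))  # -> [0]
```

Guest-side pinning (the virtio_net irq and netperf) would be done the
same way inside the guest, plus /proc/irq/<n>/smp_affinity for the irq.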
Cheers,
Mark.
Thread overview: 27+ messages
2008-10-30 17:51 [PATCH 0/6] Kill off the virtio_net tx mitigation timer Mark McLoughlin
2008-10-30 17:51 ` [PATCH 1/6] kvm: qemu: virtio: remove unused variable Mark McLoughlin
2008-10-30 17:51 ` [PATCH 2/6] kvm: qemu: dup the qemu_eventfd() return Mark McLoughlin
2008-10-30 17:51 ` [PATCH 3/6] kvm: qemu: add qemu_eventfd_write() and qemu_eventfd_read() Mark McLoughlin
2008-10-30 17:51 ` [PATCH 4/6] kvm: qemu: aggregate reads from eventfd Mark McLoughlin
2008-10-30 17:51 ` [PATCH 5/6] kvm: qemu: virtio-net: handle all tx in I/O thread without timer Mark McLoughlin
2008-10-30 17:51 ` [PATCH 6/6] kvm: qemu: virtio-net: drop mutex during tx tapfd write Mark McLoughlin
2008-11-04 11:43 ` Avi Kivity
2008-10-30 19:24 ` [PATCH 5/6] kvm: qemu: virtio-net: handle all tx in I/O thread without timer Anthony Liguori
2008-10-31 9:16 ` Mark McLoughlin
2008-11-03 15:07 ` Mark McLoughlin
2008-11-02 9:56 ` Avi Kivity
2008-11-04 15:23 ` David S. Ahern
2008-11-06 17:02 ` Mark McLoughlin [this message]
2008-11-06 17:13 ` David S. Ahern
2008-11-06 17:43 ` Avi Kivity
2008-10-30 19:20 ` [PATCH 0/6] Kill off the virtio_net tx mitigation timer Anthony Liguori
2008-11-02 9:48 ` Avi Kivity
2008-11-03 12:23 ` Mark McLoughlin
2008-11-03 12:40 ` Avi Kivity
2008-11-03 15:04 ` Mark McLoughlin
2008-11-03 15:19 ` Avi Kivity
2008-11-06 16:46 ` Mark McLoughlin
2008-11-06 17:38 ` Avi Kivity
2008-11-06 17:45 ` Mark McLoughlin
2008-11-09 11:29 ` Avi Kivity
2008-11-02 9:57 ` Avi Kivity