public inbox for kvm@vger.kernel.org
From: Anthony Liguori <anthony@codemonkey.ws>
To: Mike Murphy <mamurph@cs.clemson.edu>
Cc: kvm@vger.kernel.org
Subject: Re: Interrupt coalescence?
Date: Tue, 27 May 2008 09:20:27 -0500	[thread overview]
Message-ID: <483C18AB.30709@codemonkey.ws> (raw)
In-Reply-To: <5aa163d00805261423m287bf40embc407835e1e3e600@mail.gmail.com>

Mike Murphy wrote:
> Greetings,
>
> I was wondering if KVM/QEMU supports interrupt coalescence for network
> devices, in particular the e1000 NIC.
>
> We have a test cluster set up with 14 Linux hosts, each of which runs
> 2 KVM instances (1 per physical CPU core). Each instance uses bridged
> Ethernet to connect to a physical Gigabit NIC. (We have a 2-port NIC,
> and each VM gets its own port.)
>
> We've observed a tripling of ping RTT's (100 us from host to host, but
> 300 us from VM to VM), a reduction of theoretical bandwidth (940 Mbps
> to around 750 Mbps per Iperf), and a substantial reduction in secure
> copy transfer speeds (36 MB/s down to 8 MB/s) for a large (2 GB) file.
> Looking at /proc/interrupts before and after the transfer tests,
> receiving all 2 GB required 133,341 interrupts on the physical host,
> compared to 1,339,823 interrupts on the guest. (Interrupts at the
> sender side were 326,766 and 1,552,613, respectively.)
>   
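The receive-side counts above can be reduced to a back-of-the-envelope frames-per-interrupt figure. A quick sketch (treating "2 GB" as 2 GiB and assuming full-size 1500-byte frames with a 1448-byte TCP payload; both are assumptions, not from the original mail):

```python
# Receive-side interrupt counts quoted above, for the same 2 GB transfer.
TRANSFER_BYTES = 2 * 2**30   # assuming "2 GB" means 2 GiB
HOST_IRQS = 133_341          # interrupts taken on the physical host
GUEST_IRQS = 1_339_823       # interrupts taken in the KVM guest

MSS = 1448                   # assumed TCP payload of a full 1500-byte frame

def frames_per_interrupt(total_bytes, irqs, mss=MSS):
    """Approximate full-size frames handled per interrupt."""
    return total_bytes / irqs / mss

print(f"host:  ~{frames_per_interrupt(TRANSFER_BYTES, HOST_IRQS):.0f} frames/interrupt")
print(f"guest: ~{frames_per_interrupt(TRANSFER_BYTES, GUEST_IRQS):.0f} frames/interrupt")
```

By this estimate the physical NIC hides roughly eleven frames behind each interrupt, while the emulated e1000 takes close to one interrupt per frame -- about an order of magnitude less coalescing.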

I'm not sure that we're all that aggressive about interrupt mitigation 
with the e1000.  Depending on your network, you may see a good amount of 
benefit when we finally support GSO (generic segmentation offload).  
That should cut the number of interrupts required on RX significantly.
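One reason GSO helps is that a single large buffer can cross the virtualization boundary in one go and only be cut into wire-size frames at the edge, so the per-frame exits and interrupts collapse into roughly one per buffer. A toy ceiling-division sketch (the 64 KB buffer size and 1448-byte MSS are assumptions):

```python
MSS = 1448          # assumed TCP payload per 1500-byte wire frame
GSO_BUF = 65536     # assumed maximum GSO buffer handed down in one piece

def wire_frames(buf_bytes, mss=MSS):
    """Frames a buffer becomes once segmented for the wire (ceiling division)."""
    return -(-buf_bytes // mss)

# Without GSO: roughly one descriptor (and often one interrupt) per wire frame.
# With GSO: one per large buffer, hiding all of these frames behind it.
print(wire_frames(GSO_BUF))   # -> 46
```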

> The primary concern with the large number of interrupts is actually
> more the latency than the throughput, as we would like to run MPI
> applications inside VM's. These applications are exceptionally
> sensitive to latency. For example, the HPL benchmark suite gives us a
> benchmark of around 140 GFLOPS running on the physical hosts, but only
> around 60 GFLOPS (poorly load-balanced) on VM's -- yielding an overhead
> of around 60%. By itself on a single system, we have found KVM to be
> capable of computational overheads less than 9%.
>   

Right now, virtio-net is pretty dumb about TX mitigation, which is going 
to increase ping latency significantly.  If you disable TX mitigation, 
you'll see much better ping latency (at the expense of throughput).
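The latency/throughput trade here can be seen in a toy model of timer-deferred TX completions; the 150 us timer value and the model itself are illustrative sketches, not virtio-net's actual implementation:

```python
def deferred_completions(send_times_us, timer_us):
    """Report each packet's completion at the next periodic timer expiry.

    Returns (per-packet added latency in us, number of notifications raised).
    """
    ticks = set()
    latencies = []
    for t in send_times_us:
        tick = (t // timer_us + 1) * timer_us   # next timer boundary after t
        ticks.add(tick)
        latencies.append(tick - t)
    return latencies, len(ticks)

sends = [0, 10, 20, 400]                            # microsecond send times
lat_mit, n_mit = deferred_completions(sends, 150)   # with a 150 us timer
lat_now, n_now = deferred_completions(sends, 1)     # ~immediate completion
# Mitigation collapses 4 notifications into 2, but each packet waits longer.
```

The batching halves the notification count for this burst, which is exactly the throughput win -- and the added per-packet wait is exactly the ping-latency cost Anthony describes.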

Regards,

Anthony Liguori

> Any insight into ways to enable interrupt coalescence and/or reduce
> network latency would be greatly appreciated... we are running KVM-63
> on the cluster at the moment, but upgrading is possible.
>
> As a side note, preliminary tests with a paravirtualized guest were
> actually worse... the latency increased to about 500 us. However, I
> ran this test probably a month or two ago, so if there have been
> substantial upgrades to the virtio code, I can run this test again.
>
> Thanks,
> Mike
>   
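On the coalescing question above: for a physical e1000 the coalescing knobs live behind ethtool; whether KVM's emulated e1000 (as of KVM-63) honors them is uncertain, so treat this as a bare-metal sketch (the interface name is a placeholder):

```shell
# Show the NIC's current interrupt-coalescing parameters
ethtool -c eth0

# Ask the NIC to delay interrupts so more frames are batched per interrupt
ethtool -C eth0 rx-usecs 100 tx-usecs 100
```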


Thread overview:
2008-05-26 21:23 Interrupt coalescence? Mike Murphy
2008-05-27 14:20 ` Anthony Liguori [this message]
