* Interrupt coalescence?
@ 2008-05-26 21:23 Mike Murphy
From: Mike Murphy @ 2008-05-26 21:23 UTC (permalink / raw)
To: kvm
Greetings,
I was wondering if KVM/QEMU supports interrupt coalescence for network
devices, in particular the e1000 NIC.
We have a test cluster set up with 14 Linux hosts, each of which runs
2 KVM instances (1 per physical CPU core). Each instance uses bridged
Ethernet to connect to a physical Gigabit NIC. (We have a 2-port NIC,
and each VM gets its own port.)
We've observed a tripling of ping RTTs (100 us from host to host, but
300 us from VM to VM), a reduction in achievable bandwidth (940 Mbps
down to around 750 Mbps per iperf), and a substantial reduction in secure
copy transfer speed (36 MB/s down to 8 MB/s) for a large (2 GB) file.
Looking at /proc/interrupts before and after the transfer tests,
receiving all 2 GB required 133,341 interrupts on the physical host,
compared to 1,339,823 interrupts on the guest. (Interrupts at the
sender side were 326,766 and 1,552,613, respectively.)
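For scale (my arithmetic, assuming 2 GB means 2 GiB here): those counts work
out to roughly one interrupt per 16 KB received on the physical host, but only
about 1.6 KB -- essentially one MTU-sized frame -- per interrupt in the guest,
which suggests the emulated NIC is doing effectively no receive coalescing:

```python
# Bytes delivered per receive interrupt, from the /proc/interrupts deltas above.
GIB = 2**30
transferred = 2 * GIB            # the 2 GB test file (assumed binary gigabytes)

host_irqs = 133_341
guest_irqs = 1_339_823

host_bpi = transferred / host_irqs
guest_bpi = transferred / guest_irqs

print(f"host:  {host_bpi:,.0f} bytes/interrupt")   # ~16,105
print(f"guest: {guest_bpi:,.0f} bytes/interrupt")  # ~1,603
```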
The primary concern with the large number of interrupts is actually
the latency more than the throughput, as we would like to run MPI
applications inside VMs. These applications are exceptionally
sensitive to latency. For example, the HPL benchmark suite gives us
around 140 GFLOPS running on the physical hosts, but only
around 60 GFLOPS (poorly load-balanced) on VMs -- an overhead
of around 60%. By itself on a single system, we have found KVM's
computational overhead to be less than 9%.
Any insight into ways to enable interrupt coalescence and/or reduce
network latency would be greatly appreciated... we are running KVM-63
on the cluster at the moment, but upgrading is possible.
As a side note, preliminary tests with a paravirtualized guest were
actually worse... the latency increased to about 500 us. However, I
ran this test probably a month or two ago, so if there have been
substantial upgrades to the virtio code, I can run this test again.
Thanks,
Mike
--
Mike Murphy
Ph.D. Candidate and NSF Graduate Research Fellow
Clemson University School of Computing
201 McAdams Hall
Clemson, SC 29634-0974 USA
Tel: +1 864.656.2838 Fax: +1 864.656.0145
http://www.cs.clemson.edu/~mamurph
* Re: Interrupt coalescence?
From: Anthony Liguori @ 2008-05-27 14:20 UTC (permalink / raw)
To: Mike Murphy; +Cc: kvm
Mike Murphy wrote:
> Greetings,
>
> I was wondering if KVM/QEMU supports interrupt coalescence for network
> devices, in particular the e1000 NIC.
>
> We have a test cluster set up with 14 Linux hosts, each of which runs
> 2 KVM instances (1 per physical CPU core). Each instance uses bridged
> Ethernet to connect to a physical Gigabit NIC. (We have a 2-port NIC,
> and each VM gets its own port.)
>
> We've observed a tripling of ping RTTs (100 us from host to host, but
> 300 us from VM to VM), a reduction in achievable bandwidth (940 Mbps
> down to around 750 Mbps per iperf), and a substantial reduction in secure
> copy transfer speed (36 MB/s down to 8 MB/s) for a large (2 GB) file.
> Looking at /proc/interrupts before and after the transfer tests,
> receiving all 2 GB required 133,341 interrupts on the physical host,
> compared to 1,339,823 interrupts on the guest. (Interrupts at the
> sender side were 326,766 and 1,552,613, respectively.)
>
I'm not sure that we're all that aggressive about interrupt mitigation
with the e1000. Depending on your network, you may see a good amount of
benefit when we finally support GSO. That should cut the number of
interrupts required on RX significantly.
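Rough numbers (assuming a 64 KiB aggregated segment, a typical GSO ceiling --
my assumption, not a measured value): receiving the 2 GB file as MTU-sized
frames versus 64 KiB segments changes the notification count by about 40x:

```python
GIB = 2**30
transferred = 2 * GIB

mtu_frames = transferred // 1500           # one interrupt per wire frame, worst case
gso_segments = transferred // (64 * 1024)  # one notification per aggregated segment

print(f"{mtu_frames:,} frames vs {gso_segments:,} segments")  # 1,431,655 vs 32,768
```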
> The primary concern with the large number of interrupts is actually
> the latency more than the throughput, as we would like to run MPI
> applications inside VMs. These applications are exceptionally
> sensitive to latency. For example, the HPL benchmark suite gives us
> around 140 GFLOPS running on the physical hosts, but only
> around 60 GFLOPS (poorly load-balanced) on VMs -- an overhead
> of around 60%. By itself on a single system, we have found KVM's
> computational overhead to be less than 9%.
>
Right now, virtio-net is pretty dumb with TX mitigation which is going
to increase ping latency significantly. If you disable TX mitigation,
you'll see much better ping latency (at the expense of throughput).
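To illustrate the tradeoff with a toy model (the batch size and timer value
below are made up for illustration, not virtio-net's actual parameters):
deferring the completion interrupt until a batch fills or a timer expires
slashes the interrupt count, but adds delay to every individual completion:

```python
def simulate(arrivals_us, batch, timer_us):
    """Toy TX-completion mitigation: an interrupt fires when `batch`
    completions are pending, or `timer_us` after the oldest pending one,
    whichever comes first. Returns (interrupts, mean_delay_us)."""
    pending, interrupts, delays = [], 0, []
    for t in arrivals_us:
        if pending and t - pending[0] >= timer_us:
            interrupts += 1
            fire_at = pending[0] + timer_us
            delays += [fire_at - a for a in pending]
            pending = []
        pending.append(t)
        if len(pending) == batch:
            interrupts += 1
            delays += [t - a for a in pending]
            pending = []
    if pending:                        # flush whatever remains at the end
        interrupts += 1
        delays += [pending[-1] - a for a in pending]
    return interrupts, sum(delays) / len(delays)

# 10,000 packets, one every 10 us (a steady stream)
stream = [i * 10 for i in range(10_000)]
print(simulate(stream, batch=1, timer_us=0))     # no mitigation: 10,000 interrupts, 0 delay
print(simulate(stream, batch=32, timer_us=250))  # mitigated: 400 interrupts, ~130 us mean delay
```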
Regards,
Anthony Liguori
> Any insight into ways to enable interrupt coalescence and/or reduce
> network latency would be greatly appreciated... we are running KVM-63
> on the cluster at the moment, but upgrading is possible.
>
> As a side note, preliminary tests with a paravirtualized guest were
> actually worse... the latency increased to about 500 us. However, I
> ran this test probably a month or two ago, so if there have been
> substantial upgrades to the virtio code, I can run this test again.
>
> Thanks,
> Mike
>