From: David Ahern
Subject: Re: performance of virtual functions compared to virtio
Date: Mon, 25 Apr 2011 11:39:16 -0600
Message-ID: <4DB5B1C4.4000602@gmail.com>
References: <4DAF8EF0.8010203@gmail.com> <1303353349.3110.181.camel@x201>
Cc: KVM mailing list
To: Alex Williamson
In-Reply-To: <1303353349.3110.181.camel@x201>

On 04/20/11 20:35, Alex Williamson wrote:
> Device assignment via a VF provides the lowest latency and most
> bandwidth for *getting data off the host system*, though virtio/vhost is
> getting better.  If all you care about is VM-VM on the same host or
> VM-host, then virtio is only limited by memory bandwidth/latency and
> host processor cycles.  Your processor has 25GB/s of memory bandwidth.
> On the other hand, the VF has to send data all the way out to the wire
> and all the way back up through the NIC to get to the other VM/host.
> You're using a 1Gb/s NIC.  Your results actually seem to indicate you're
> getting better than wire rate, so maybe you're only passing through an
> internal switch on the NIC, in any case, VFs are not optimal for
> communication within the same physical system.  They are optimal for off
> host communication.  Thanks,

Hi Alex:

Host-host was the next focus for the tests. I have 2 of the
aforementioned servers, each configured identically. As a reminder:

Host:
  Dell R410
  2 quad core E5620@2.40 GHz processors
  16 GB RAM
  Intel 82576 NIC (Gigabit ET Quad Port) - devices eth2, eth3, eth4, eth5
  Fedora 14, kernel 2.6.35.12-88.fc14.x86_64
  qemu-kvm.git, ffce28fe6 (18-April-11)

VMs:
  Fedora 14, kernel 2.6.35.11-83.fc14.x86_64
  2 vcpus
  1GB RAM
  2 NICs - 1 virtio, 1 VF

The virtio network arguments to qemu-kvm are:
  -netdev type=tap,vhost=on,ifname=tap0,id=netdev1
  -device virtio-net-pci,mac=${mac},netdev=netdev1

For this round of tests I have the following setup:

.======================================.
|  Host - A                            |
|                                      |
|    .-------------------------.       |
|    |  Virtual Machine - C    |       |
|    |                         |       |
|    |  .------.     .------.  |       |
|    '--| eth1 |-----| eth0 |--'       |
|       '------'     '------'          |
|      192.168.          |  192.168.103.71
|      102.71            |             |
|                    .------.          |
|                    | tap0 |          |
|                    '------'          |
|                        |             |
|                    .------.          |
|                    |  br  |  192.168.103.79
|                    '------'          |
|      {VF}              |             |
|    .--------.      .------.          |
'====|  eth2  |======| eth3 |=========='
     '--------'      '------'
 192.168.102.79          |
         |               |   point-to-
         |               |   point
         |               |   connections
 192.168.102.80          |
     .--------.      .------.
.====|  eth2  |======| eth3 |==========.
|    '--------'      '------'          |
|      {VF}              |             |
|                    .------.          |
|                    |  br  |  192.168.103.80
|                    '------'          |
|                        |             |
|                    .------.          |
|                    | tap0 |          |
|      192.168.      '------'          |
|      102.81            |  192.168.103.81
|       .------.     .------.          |
|    .--| eth1 |-----| eth0 |--.       |
|    |  '------'     '------'  |       |
|    |                         |       |
|    |  Virtual Machine - D    |       |
|    '-------------------------'       |
|                                      |
|  Host - B                            |
'======================================'

So, basically, 192.168.102 is the network where the VMs have a VF, and
192.168.103 is the network where the VMs use virtio for networking.
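The VF NIC in each VM (the 192.168.102 addresses) is handed in with
plain PCI device assignment rather than virtio. Roughly, the host-side
steps look like the following -- the VF count, the 01:10.0 PCI address
and the 8086:10ca VF device ID below are placeholders for illustration,
not the actual values from these hosts:

  # create the VFs on the host (igb driver for the 82576)
  modprobe igb max_vfs=2

  # move the chosen VF from igbvf to pci-stub so qemu-kvm can assign it
  echo "8086 10ca" > /sys/bus/pci/drivers/pci-stub/new_id
  echo 0000:01:10.0 > /sys/bus/pci/devices/0000:01:10.0/driver/unbind
  echo 0000:01:10.0 > /sys/bus/pci/drivers/pci-stub/bind

  # corresponding qemu-kvm argument for the guest
  -device pci-assign,host=01:10.0,id=vf0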
The netperf commands are all run on either Host-A or VM-C:

  netperf -H $ip -jcC -v 2 -t TCP_RR -- -r 1024 -D L,R
  netperf -H $ip -jcC -v 2 -t TCP_STREAM -- -m 1024 -D L,R

                          latency    throughput
                           (usec)       Mbps
cross-host:
  A-B, eth2                 185         932
  A-B, eth3                 185         935

same host, host-VM:
  A-C, using VF             488        1085   (seen as high as 1280's)
  A-C, virtio               150        4282

cross-host, host-VM:
  A-D, VF                   489         938
  A-D, virtio               288         889

cross-host, VM-VM:
  C-D, VF                   488         934
  C-D, virtio               490         933

While throughput for VFs is fine (near line-rate when crossing hosts),
the latency is horrible. Any options to improve that?

David
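P.S. The matrix above is just those two commands repeated against each
target address from the diagram; a loop along these lines (run from
Host-A or VM-C, assuming netserver is already running on the far end)
covers the cross-host cases:

  for ip in 192.168.102.80 192.168.103.80 192.168.102.81 192.168.103.81; do
      echo "=== $ip ==="
      netperf -H $ip -jcC -v 2 -t TCP_RR -- -r 1024 -D L,R
      netperf -H $ip -jcC -v 2 -t TCP_STREAM -- -m 1024 -D L,R
  done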