Date: Mon, 08 Feb 2010 14:58:05 -0600
From: Anthony Liguori
To: Tom Lendacky
Cc: chrisw@redhat.com, markmc@redhat.com, Anthony Liguori, herbert@gondor.apana.org.au, kvm@vger.kernel.org, qemu-devel@nongnu.org, rek2@binaryfreedom.info, avi@redhat.com
Subject: [Qemu-devel] Re: Network shutdown under load
Message-ID: <4B707ADD.4040401@linux.vnet.ibm.com>
In-Reply-To: <201002081010.03751.tahm@linux.vnet.ibm.com>
References: <201001291406.41559.tahm@linux.vnet.ibm.com> <201002081010.03751.tahm@linux.vnet.ibm.com>
List-Id: qemu-devel.nongnu.org

On 02/08/2010 10:10 AM, Tom Lendacky wrote:
> Fix a race condition where qemu finds that there are not enough virtio
> ring buffers available and the guest makes more buffers available before
> qemu can enable notifications.
>
> Signed-off-by: Tom Lendacky
> Signed-off-by: Anthony Liguori

I've walked through the changes in this series and I'm pretty certain
that this is the only problem.  I'd appreciate it if others could review,
though.

Regards,

Anthony Liguori

>  hw/virtio-net.c |   10 +++++++++-
>  1 files changed, 9 insertions(+), 1 deletions(-)
>
> diff --git a/hw/virtio-net.c b/hw/virtio-net.c
> index 6e48997..5c0093e 100644
> --- a/hw/virtio-net.c
> +++ b/hw/virtio-net.c
> @@ -379,7 +379,15 @@ static int virtio_net_has_buffers(VirtIONet *n, int bufsize)
>          (n->mergeable_rx_bufs &&
>           !virtqueue_avail_bytes(n->rx_vq, bufsize, 0))) {
>          virtio_queue_set_notification(n->rx_vq, 1);
> -        return 0;
> +
> +        /* To avoid a race condition where the guest has made some buffers
> +         * available after the above check but before notification was
> +         * enabled, check for available buffers again.
> +         */
> +        if (virtio_queue_empty(n->rx_vq) ||
> +            (n->mergeable_rx_bufs &&
> +             !virtqueue_avail_bytes(n->rx_vq, bufsize, 0)))
> +            return 0;
>      }
>
>      virtio_queue_set_notification(n->rx_vq, 0);
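The fix follows the standard "re-check after enabling notifications" idiom
used to close event-suppression races.  A minimal, self-contained sketch of
the idiom (all names here are hypothetical, this is not qemu code, and a
production version would also need a memory barrier between the notification
store and the re-check):

    /* Sketch of the "enable notification, then re-check" idiom.
     * All names are hypothetical; this is not qemu source.
     */
    #include <stdbool.h>

    struct ring {
        unsigned avail;    /* producer (guest) publishes buffers here */
        unsigned used;     /* consumer (qemu) consumes up to here */
        bool notify;       /* producer kicks the consumer when this is set */
    };

    static bool ring_empty(const struct ring *r)
    {
        return r->avail == r->used;
    }

    /* Returns true if buffers are available; otherwise arranges for a kick. */
    static bool has_buffers(struct ring *r)
    {
        if (ring_empty(r)) {
            r->notify = true;  /* ask the producer to kick us */

            /* Race window: the producer may have published buffers between
             * the emptiness check above and the store to r->notify.  Without
             * this re-check we would return false, the producer (having seen
             * notify == false) would never kick, and both sides would wait
             * forever -- the stall the patch fixes.  A real implementation
             * needs a memory barrier here before re-reading the ring.
             */
            if (ring_empty(r)) {
                return false;  /* truly empty; the next kick wakes us */
            }
        }
        r->notify = false;     /* we have buffers; suppress further kicks */
        return true;
    }
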
> On Friday 29 January 2010 02:06:41 pm Tom Lendacky wrote:
>> There's been some discussion of this already in the kvm list, but I want
>> to summarize what I've found and also include the qemu-devel list in an
>> effort to find a solution to this problem.
>>
>> Running a netperf test between two kvm guests results in the guest's
>> network interface shutting down.  I originally found this using kvm
>> guests on two different machines that were connected via a 10GbE link.
>> However, I found this problem can be easily reproduced using two guests
>> on the same machine.
>>
>> I am running the 2.6.32 level of the kvm.git tree and the 0.12.1.2 level
>> of the qemu-kvm.git tree.
>>
>> The setup includes two bridges, br0 and br1.
>>
>> The commands used to start the guests are as follows:
>>
>> /usr/local/bin/qemu-system-x86_64 -name cape-vm001 -m 1024 -drive
>> file=/autobench/var/tmp/cape-vm001-
>> raw.img,if=virtio,index=0,media=disk,boot=on -net
>> nic,model=virtio,vlan=0,macaddr=00:16:3E:00:62:51,netdev=cape-vm001-eth0
>> -netdev tap,id=cape-vm001-eth0,script=/autobench/var/tmp/ifup-kvm-
>> br0,downscript=/autobench/var/tmp/ifdown-kvm-br0 -net
>> nic,model=virtio,vlan=1,macaddr=00:16:3E:00:62:D1,netdev=cape-vm001-eth1
>> -netdev tap,id=cape-vm001-eth1,script=/autobench/var/tmp/ifup-kvm-
>> br1,downscript=/autobench/var/tmp/ifdown-kvm-br1 -vnc :1 -monitor
>> telnet::5701,server,nowait -snapshot -daemonize
>>
>> /usr/local/bin/qemu-system-x86_64 -name cape-vm002 -m 1024 -drive
>> file=/autobench/var/tmp/cape-vm002-
>> raw.img,if=virtio,index=0,media=disk,boot=on -net
>> nic,model=virtio,vlan=0,macaddr=00:16:3E:00:62:61,netdev=cape-vm002-eth0
>> -netdev tap,id=cape-vm002-eth0,script=/autobench/var/tmp/ifup-kvm-
>> br0,downscript=/autobench/var/tmp/ifdown-kvm-br0 -net
>> nic,model=virtio,vlan=1,macaddr=00:16:3E:00:62:E1,netdev=cape-vm002-eth1
>> -netdev tap,id=cape-vm002-eth1,script=/autobench/var/tmp/ifup-kvm-
>> br1,downscript=/autobench/var/tmp/ifdown-kvm-br1 -vnc :2 -monitor
>> telnet::5702,server,nowait -snapshot -daemonize
>>
>> The ifup-kvm-br0 script takes the (first) qemu-created tap device, brings
>> it up, and adds it to bridge br0.  The ifup-kvm-br1 script takes the
>> (second) qemu-created tap device, brings it up, and adds it to bridge
>> br1.
>>
>> Each ethernet device within a guest is on its own subnet.  For example:
>>   guest 1 eth0 has addr 192.168.100.32 and eth1 has addr 192.168.101.32
>>   guest 2 eth0 has addr 192.168.100.64 and eth1 has addr 192.168.101.64
>>
>> On one of the guests run netserver:
>>   netserver -L 192.168.101.32 -p 12000
>>
>> On the other guest run netperf:
>>   netperf -L 192.168.101.64 -H 192.168.101.32 -p 12000 -t TCP_STREAM
>>   -l 60 -c -C -- -m 16K -M 16K
>>
>> It may take more than one netperf run (I find that my second run almost
>> always causes the shutdown) but the network on the eth1 links will stop
>> working.
>>
>> I did some debugging and found that in qemu on the guest running
>> netserver:
>>   - the receive_disabled variable is set and never gets reset
>>   - the read_poll event handler for the eth1 tap device is disabled
>>     and never re-enabled
>> These conditions result in no packets being read from the tap device and
>> sent to the guest - effectively shutting down the network.
>> Network connectivity can be restored by shutting down the guest
>> interfaces, unloading and reloading the virtio_net module, and
>> restarting the guest interfaces.
>>
>> I'm continuing to work on debugging this, but would appreciate it if
>> some folks with more qemu network experience could try to recreate and
>> debug this.
>>
>> If my kernel config matters, I can provide that.
>>
>> Thanks,
>> Tom
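The stuck receive_disabled flag described above follows from qemu's
net-layer flow control: a client whose receive handler returns 0 is marked
receive-disabled and gets no more packets until it flushes its queue, which
virtio-net normally does from its rx-queue kick handler.  A stripped-down
sketch of that interaction (illustrative only -- this is not qemu's actual
source; apart from receive_disabled, the names are made up):

    /* Illustrative sketch of net-layer flow control, not qemu code. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <sys/types.h>

    struct net_client {
        bool receive_disabled;   /* set when the peer reports no room */
        ssize_t (*receive)(struct net_client *nc,
                           const uint8_t *buf, size_t len);
    };

    /* Sender path: deliver one packet to a peer, e.g. tap -> virtio-net. */
    static ssize_t deliver_packet(struct net_client *peer,
                                  const uint8_t *buf, size_t len)
    {
        ssize_t ret;

        if (peer->receive_disabled) {
            return 0;            /* hold off until the peer flushes */
        }

        ret = peer->receive(peer, buf, len);
        if (ret == 0) {
            /* The peer had no room (e.g. no virtio rx buffers): stop
             * delivering to it until it reports space again.
             */
            peer->receive_disabled = true;
        }
        return ret;
    }

    /* Receiver path: called when space becomes available, normally from
     * the virtio rx-queue kick handler.  If the kick never arrives --
     * which is exactly what the notification race caused -- this never
     * runs and receive_disabled stays set.
     */
    static void flush_queued_packets(struct net_client *peer)
    {
        peer->receive_disabled = false;
        /* ...then retry any packets queued while disabled... */
    }

Since reading from the tap device is also suspended (the read_poll handler
stays disabled) while the peer cannot accept packets, this matches the
observed symptoms: nothing is read from the tap until the flag is cleared.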