From: Shesha Sreenivasamurthy
To: qemu-devel@nongnu.org
Date: Tue, 10 Nov 2009 17:16:12 -0800
Message-ID: <3540a3280911101716o6a002149ld7b0644a362969a8@mail.gmail.com>
Subject: [Qemu-devel] Multiple NICs on a VLAN

Hi All,

I'm using the following command to give a single VM two NICs in multicast on the same vlan, and I see a storm of ARP requests. Does anyone have any suggestions?

qemu.bin.kvm84 -hda /live_disks/clone-disk.img -snapshot \
  -serial telnet:SERVER:50000,nowait,server \
  -monitor tcp:SERVER:51000,server,nowait,nodelay \
  -p 61000 -m 768m -smp 1 -vnc SERVER:10 \
  -net nic,model=e1000,vlan=0,macaddr=56:48:AA:BB:CC:DD \
  -net tap,vlan=0,script=netscripts/net0-ifup \
  -net nic,model=e1000,vlan=1,macaddr=56:48:AA:BB:CC:EE \
  -net socket,vlan=1,mcast=230.0.0.1:3001 \
  -net nic,model=e1000,vlan=1,macaddr=56:48:AA:BB:CC:FF \
  -net socket,vlan=1,mcast=230.0.0.1:3001 \
  --uuid cc6145a8-cdae-11de-ac18-003048d4fd3e

However, if I launch two QEMU instances with one NIC each in multicast, where eth0 in both guests is connected to vlan 1, then I can ping 1.1.1.1 -> 1.1.1.2 and vice versa. I'm running CentOS inside the VMs.

Thanks,
Shesha
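
P.S. For comparison, the two-instance setup that does work looks roughly like the following. This is only a sketch based on the command above: the disk image names are placeholders, the MACs are reused from my command, and each guest has a single NIC on the same multicast group (guest eth0 configured as 1.1.1.1 and 1.1.1.2 respectively):

# Instance 1 (guest eth0 = 1.1.1.1)
qemu.bin.kvm84 -hda disk1.img -snapshot -m 768m -smp 1 \
  -net nic,model=e1000,vlan=1,macaddr=56:48:AA:BB:CC:EE \
  -net socket,vlan=1,mcast=230.0.0.1:3001

# Instance 2 (guest eth0 = 1.1.1.2)
qemu.bin.kvm84 -hda disk2.img -snapshot -m 768m -smp 1 \
  -net nic,model=e1000,vlan=1,macaddr=56:48:AA:BB:CC:FF \
  -net socket,vlan=1,mcast=230.0.0.1:3001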