From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <55898005.2040600@moose.net>
Date: Tue, 23 Jun 2015 11:49:25 -0400
From: Brian Kress
Subject: [Qemu-devel] vmxnet3, vnet_hdr, and minimum length padding
To: qemu-devel@nongnu.org

When running ESXi under qemu there is an issue with the ESXi guest discarding packets that are too short. The guest discards any packet below the normal minimum Ethernet frame length (60 bytes). This results in odd behaviour where other hosts, or VMs on other hosts, can communicate with the ESXi guest just fine (since there's a physical NIC somewhere doing the padding), but VMs on the same host, and the host itself, cannot, because the ARP request packets are too small for the ESXi guest to accept.
Someone in the past thought this was worth fixing, and added code to the vmxnet3 qemu emulation to pad any received packet shorter than 60 bytes out to 60. Unfortunately this code is wrong (or at least in the wrong place): it pads BEFORE taking into account the vnet_hdr that the tap device prepends to the packet. As a result it may add padding, but it never adds enough. Specifically, it adds 10 bytes (the length of the vnet_hdr) less than it needs to. The following (hopefully "obviously correct") patch simply swaps the order of processing the vnet header and the padding. With this patch an ESXi guest is able to communicate with the host and with other local VMs.

--- a/qemu-2.3.0/hw/net/vmxnet3.c	2015-04-27 10:08:24.000000000 -0400
+++ b/qemu-2.3.0/hw/net/vmxnet3.c	2015-06-23 11:38:48.865728713 -0400
@@ -1879,6 +1879,12 @@
         return -1;
     }
 
+    if (s->peer_has_vhdr) {
+        vmxnet_rx_pkt_set_vhdr(s->rx_pkt, (struct virtio_net_hdr *)buf);
+        buf += sizeof(struct virtio_net_hdr);
+        size -= sizeof(struct virtio_net_hdr);
+    }
+
     /* Pad to minimum Ethernet frame length */
     if (size < sizeof(min_buf)) {
         memcpy(min_buf, buf, size);
@@ -1887,12 +1893,6 @@
         size = sizeof(min_buf);
     }
 
-    if (s->peer_has_vhdr) {
-        vmxnet_rx_pkt_set_vhdr(s->rx_pkt, (struct virtio_net_hdr *)buf);
-        buf += sizeof(struct virtio_net_hdr);
-        size -= sizeof(struct virtio_net_hdr);
-    }
-
     vmxnet_rx_pkt_set_packet_type(s->rx_pkt,
         get_eth_packet_type(PKT_GET_ETH_HDR(buf)));