From: Stefan Hajnoczi
Date: Tue, 7 Jul 2015 13:38:19 +0100
Message-Id: <1436272705-28499-4-git-send-email-stefanha@redhat.com>
In-Reply-To: <1436272705-28499-1-git-send-email-stefanha@redhat.com>
References: <1436272705-28499-1-git-send-email-stefanha@redhat.com>
Subject: [Qemu-devel] [PULL for-2.4 3/9] vmxnet3: Fix incorrect small packet padding
To: qemu-devel@nongnu.org
Cc: Peter Maydell, Brian Kress, Stefan Hajnoczi

From: Brian Kress

When running ESXi under qemu there is an issue with the ESXi guest
discarding packets that are too short.  The guest discards any packet
shorter than the normal minimum length of an Ethernet frame (60 bytes).
This results in odd behaviour where other hosts, or VMs on other hosts,
can communicate with the ESXi guest just fine (since there is a
physical NIC somewhere doing the padding), but VMs on the same host and
the host itself cannot, because their ARP request packets are too small
for the ESXi guest to accept.

Someone in the past thought this was worth fixing, and added code to
the vmxnet3 qemu emulation so that packets smaller than 60 bytes are
padded out to 60.  Unfortunately this code is wrong (or at least in the
wrong place): it pads the packet BEFORE taking into account the
vnet_hdr that the tap device adds to the front of the packet.  As a
result it might add padding, but it never adds enough.  Specifically,
it adds 10 bytes less (the length of the vnet_hdr) than it needs to.

The following (hopefully "obviously correct") patch simply swaps the
order of processing the vnet header and the padding.  With this patch
an ESXi guest is able to communicate with the host or other local VMs.

Signed-off-by: Brian Kress
Reviewed-by: Paolo Bonzini
Reviewed-by: Dmitry Fleytman
Signed-off-by: Stefan Hajnoczi
---
 hw/net/vmxnet3.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/hw/net/vmxnet3.c b/hw/net/vmxnet3.c
index 104a0f5..706e060 100644
--- a/hw/net/vmxnet3.c
+++ b/hw/net/vmxnet3.c
@@ -1879,6 +1879,12 @@ vmxnet3_receive(NetClientState *nc, const uint8_t *buf, size_t size)
         return -1;
     }
 
+    if (s->peer_has_vhdr) {
+        vmxnet_rx_pkt_set_vhdr(s->rx_pkt, (struct virtio_net_hdr *)buf);
+        buf += sizeof(struct virtio_net_hdr);
+        size -= sizeof(struct virtio_net_hdr);
+    }
+
     /* Pad to minimum Ethernet frame length */
     if (size < sizeof(min_buf)) {
         memcpy(min_buf, buf, size);
@@ -1887,12 +1893,6 @@
         size = sizeof(min_buf);
     }
 
-    if (s->peer_has_vhdr) {
-        vmxnet_rx_pkt_set_vhdr(s->rx_pkt, (struct virtio_net_hdr *)buf);
-        buf += sizeof(struct virtio_net_hdr);
-        size -= sizeof(struct virtio_net_hdr);
-    }
-
     vmxnet_rx_pkt_set_packet_type(s->rx_pkt,
                                   get_eth_packet_type(PKT_GET_ETH_HDR(buf)));
 
-- 
2.4.3
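
As a side note, the size arithmetic behind the bug can be sanity-checked
with a small stand-alone C sketch.  This is illustrative only and not
part of the patch: the 42-byte ARP request, the 10-byte virtio-net
header, and the constant names below are assumptions made for the
example rather than values taken from vmxnet3.c.

/*
 * Illustrative sketch only (not part of the patch).
 * Assumes a 42-byte ARP request and a 10-byte struct virtio_net_hdr,
 * which is what the tap device prepends when a vnet header is in use.
 */
#include <stdio.h>

#define MIN_ETH_FRAME_LEN 60   /* minimum Ethernet frame length the guest accepts */
#define VNET_HDR_LEN      10   /* assumed sizeof(struct virtio_net_hdr) */
#define ARP_REQUEST_LEN   42   /* 14-byte Ethernet header + 28-byte ARP payload */

int main(void)
{
    /* Buffer as it arrives from the tap device: vnet header + ARP frame. */
    size_t size = VNET_HDR_LEN + ARP_REQUEST_LEN;           /* 52 bytes */

    /* Old order: pad first, then strip the vnet header. */
    size_t old_order = size;
    if (old_order < MIN_ETH_FRAME_LEN) {
        old_order = MIN_ETH_FRAME_LEN;                      /* 52 -> 60 */
    }
    old_order -= VNET_HDR_LEN;                              /* 60 -> 50: 10 bytes short */

    /* New order (this patch): strip the vnet header, then pad. */
    size_t new_order = size - VNET_HDR_LEN;                 /* 52 -> 42 */
    if (new_order < MIN_ETH_FRAME_LEN) {
        new_order = MIN_ETH_FRAME_LEN;                      /* 42 -> 60 */
    }

    printf("old order: %zu bytes on the wire, new order: %zu bytes\n",
           old_order, new_order);
    return 0;
}

With the old ordering the frame handed to the guest ends up at 50
bytes, still below the 60-byte minimum the ESXi guest enforces; with
the new ordering it ends up at exactly 60.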