From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from eggs.gnu.org ([2001:4830:134:3::10]:35083)
	by lists.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1ZJipZ-0004yd-TM for qemu-devel@nongnu.org;
	Mon, 27 Jul 2015 09:52:38 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)
	(envelope-from ) id 1ZJipV-00068Z-TR for qemu-devel@nongnu.org;
	Mon, 27 Jul 2015 09:52:37 -0400
Received: from mx1.redhat.com ([209.132.183.28]:43415)
	by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1ZJipV-00068V-OG for qemu-devel@nongnu.org;
	Mon, 27 Jul 2015 09:52:33 -0400
From: Stefan Hajnoczi
Date: Mon, 27 Jul 2015 14:52:01 +0100
Message-Id: <1438005121-31153-17-git-send-email-stefanha@redhat.com>
In-Reply-To: <1438005121-31153-1-git-send-email-stefanha@redhat.com>
References: <1438005121-31153-1-git-send-email-stefanha@redhat.com>
Subject: [Qemu-devel] [PULL for-2.4 16/16] axienet: Flush queued packets when rx is done
To: qemu-devel@nongnu.org
Cc: Peter Maydell , Fam Zheng , Stefan Hajnoczi

From: Fam Zheng

eth_can_rx checks s->rxsize and returns false if it is non-zero. Because
of the .can_receive semantics change, this leaves the incoming queue
disabled by the peer until it is explicitly flushed, so we should flush
it when s->rxsize drops back to zero.

Squash the eth_can_rx check into eth_rx and drop the .can_receive()
callback; also flush the queue when the rx buffer becomes available
again after a packet has been queued.

The other conditions, "!axienet_rx_resetting(s) && axienet_rx_enabled(s)",
are fine because enet_write already calls qemu_flush_queued_packets when
those register bits are changed.
Signed-off-by: Fam Zheng
Reviewed-by: Jason Wang
Reviewed-by: Stefan Hajnoczi
Message-id: 1436955553-22791-13-git-send-email-famz@redhat.com
Signed-off-by: Stefan Hajnoczi
---
 hw/net/xilinx_axienet.c | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/hw/net/xilinx_axienet.c b/hw/net/xilinx_axienet.c
index 9205770..d63c423 100644
--- a/hw/net/xilinx_axienet.c
+++ b/hw/net/xilinx_axienet.c
@@ -401,6 +401,9 @@ struct XilinxAXIEnet {
     uint8_t rxapp[CONTROL_PAYLOAD_SIZE];
     uint32_t rxappsize;
+
+    /* Whether axienet_eth_rx_notify should flush incoming queue. */
+    bool need_flush;
 };

 static void axienet_rx_reset(XilinxAXIEnet *s)
@@ -658,10 +661,8 @@ static const MemoryRegionOps enet_ops = {
     .endianness = DEVICE_LITTLE_ENDIAN,
 };

-static int eth_can_rx(NetClientState *nc)
+static int eth_can_rx(XilinxAXIEnet *s)
 {
-    XilinxAXIEnet *s = qemu_get_nic_opaque(nc);
-
     /* RX enabled? */
     return !s->rxsize && !axienet_rx_resetting(s) && axienet_rx_enabled(s);
 }
@@ -701,6 +702,10 @@ static void axienet_eth_rx_notify(void *opaque)
         s->rxpos += ret;
         if (!s->rxsize) {
             s->regs[R_IS] |= IS_RX_COMPLETE;
+            if (s->need_flush) {
+                s->need_flush = false;
+                qemu_flush_queued_packets(qemu_get_queue(s->nic));
+            }
         }
     }
     enet_update_irq(s);
@@ -721,6 +726,11 @@ static ssize_t eth_rx(NetClientState *nc, const uint8_t *buf, size_t size)

     DENET(qemu_log("%s: %zd bytes\n", __func__, size));

+    if (!eth_can_rx(s)) {
+        s->need_flush = true;
+        return 0;
+    }
+
     unicast = ~buf[0] & 0x1;
     broadcast = memcmp(buf, sa_bcast, 6) == 0;
     multicast = !unicast && !broadcast;
@@ -925,7 +935,6 @@ xilinx_axienet_data_stream_push(StreamSlave *obj, uint8_t *buf, size_t size)
 static NetClientInfo net_xilinx_enet_info = {
     .type = NET_CLIENT_OPTIONS_KIND_NIC,
     .size = sizeof(NICState),
-    .can_receive = eth_can_rx,
     .receive = eth_rx,
 };
--
2.4.3