From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <55A71DE2.3060205@redhat.com>
Date: Thu, 16 Jul 2015 10:58:42 +0800
From: Jason Wang
MIME-Version: 1.0
References: <1436955553-22791-1-git-send-email-famz@redhat.com> <1436955553-22791-13-git-send-email-famz@redhat.com>
In-Reply-To: <1436955553-22791-13-git-send-email-famz@redhat.com>
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: 7bit
Subject: Re: [Qemu-devel] [PATCH v2 for-2.4 12/12] axienet: Flush queued packets when rx is done
To: Fam Zheng , qemu-devel@nongnu.org
Cc: Peter Maydell , Peter Crosthwaite , Rob Herring , Michael Walle , Gerd Hoffmann , stefanha@redhat.com, "Edgar E. Iglesias"

On 07/15/2015 06:19 PM, Fam Zheng wrote:
> eth_can_rx checks s->rxsize and returns false if it is non-zero. Because
> of the .can_receive semantics change, this will make the incoming queue
> disabled by the peer until it is explicitly flushed. So we should flush
> it when s->rxsize becomes zero.
>
> Squash the eth_can_rx semantics into eth_rx and drop the .can_receive()
> callback; also flush when the rx buffer becomes available again after a
> packet gets queued.
>
> The other conditions, "!axienet_rx_resetting(s) &&
> axienet_rx_enabled(s)", are OK because enet_write already calls
> qemu_flush_queued_packets when the register bits are changed.
>
> Signed-off-by: Fam Zheng
> ---
>  hw/net/xilinx_axienet.c | 17 +++++++++++++----
>  1 file changed, 13 insertions(+), 4 deletions(-)
>
> diff --git a/hw/net/xilinx_axienet.c b/hw/net/xilinx_axienet.c
> index 9205770..d63c423 100644
> --- a/hw/net/xilinx_axienet.c
> +++ b/hw/net/xilinx_axienet.c
> @@ -401,6 +401,9 @@ struct XilinxAXIEnet {
>
>      uint8_t rxapp[CONTROL_PAYLOAD_SIZE];
>      uint32_t rxappsize;
> +
> +    /* Whether axienet_eth_rx_notify should flush incoming queue. */
> +    bool need_flush;
>  };
>
>  static void axienet_rx_reset(XilinxAXIEnet *s)
> @@ -658,10 +661,8 @@ static const MemoryRegionOps enet_ops = {
>      .endianness = DEVICE_LITTLE_ENDIAN,
>  };
>
> -static int eth_can_rx(NetClientState *nc)
> +static int eth_can_rx(XilinxAXIEnet *s)
>  {
> -    XilinxAXIEnet *s = qemu_get_nic_opaque(nc);
> -
>      /* RX enabled? */
>      return !s->rxsize && !axienet_rx_resetting(s) && axienet_rx_enabled(s);
>  }
> @@ -701,6 +702,10 @@ static void axienet_eth_rx_notify(void *opaque)
>              s->rxpos += ret;
>              if (!s->rxsize) {
>                  s->regs[R_IS] |= IS_RX_COMPLETE;
> +                if (s->need_flush) {
> +                    s->need_flush = false;
> +                    qemu_flush_queued_packets(qemu_get_queue(s->nic));
> +                }
>              }
>          }
>          enet_update_irq(s);
> @@ -721,6 +726,11 @@ static ssize_t eth_rx(NetClientState *nc, const uint8_t *buf, size_t size)
>
>      DENET(qemu_log("%s: %zd bytes\n", __func__, size));
>
> +    if (!eth_can_rx(s)) {
> +        s->need_flush = true;
> +        return 0;
> +    }
> +

axienet_eth_rx_notify() is only called from eth_rx(). So when
s->need_flush is true, we won't ever reach axienet_eth_rx_notify()?
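
For reference, this is the queue contract the commit message relies on,
as I understand it -- only a sketch, and SketchState / sketch_rx /
sketch_unblocked are made-up names, not the axienet code:

#include "net/net.h"    /* NetClientState, qemu_flush_queued_packets */

typedef struct SketchState {
    NICState *nic;      /* created elsewhere with qemu_new_nic() */
    bool busy;          /* stands in for "s->rxsize != 0" */
    bool need_flush;
} SketchState;

/* .receive callback: with the new semantics, returning 0 queues the
 * packet *and* disables the peer's queue, so this function will not
 * be called again until somebody flushes. */
static ssize_t sketch_rx(NetClientState *nc, const uint8_t *buf, size_t size)
{
    SketchState *s = qemu_get_nic_opaque(nc);

    if (s->busy) {
        s->need_flush = true;
        return 0;
    }
    /* ... consume buf/size ... */
    return size;
}

/* Has to run from some path other than .receive (register write, DMA
 * completion notify, ...); otherwise the queue stays disabled forever,
 * which is exactly my question above. */
static void sketch_unblocked(SketchState *s)
{
    s->busy = false;
    if (s->need_flush) {
        s->need_flush = false;
        qemu_flush_queued_packets(qemu_get_queue(s->nic));
    }
}
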
>      unicast = ~buf[0] & 0x1;
>      broadcast = memcmp(buf, sa_bcast, 6) == 0;
>      multicast = !unicast && !broadcast;
> @@ -925,7 +935,6 @@ xilinx_axienet_data_stream_push(StreamSlave *obj, uint8_t *buf, size_t size)
>  static NetClientInfo net_xilinx_enet_info = {
>      .type = NET_CLIENT_OPTIONS_KIND_NIC,
>      .size = sizeof(NICState),
> -    .can_receive = eth_can_rx,
>      .receive = eth_rx,
>  };
>