Date: Thu, 16 Jul 2015 11:32:47 +0800
From: Fam Zheng <famz@redhat.com>
Message-ID: <20150716033247.GB10334@ad.nay.redhat.com>
References: <1436955553-22791-1-git-send-email-famz@redhat.com>
 <1436955553-22791-13-git-send-email-famz@redhat.com>
 <55A71DE2.3060205@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <55A71DE2.3060205@redhat.com>
Subject: Re: [Qemu-devel] [PATCH v2 for-2.4 12/12] axienet: Flush queued packets when rx is done
To: Jason Wang
Cc: Peter Maydell, Peter Crosthwaite, Rob Herring, qemu-devel@nongnu.org,
 Michael Walle, Gerd Hoffmann, stefanha@redhat.com, "Edgar E. Iglesias"

On Thu, 07/16 10:58, Jason Wang wrote:
> 
> 
> On 07/15/2015 06:19 PM, Fam Zheng wrote:
> > eth_can_rx checks s->rxsize and returns false if it is non-zero. Because
> > of the .can_receive semantics change, this will make the incoming queue
> > disabled by the peer until it is explicitly flushed. So we should flush it
> > when s->rxsize becomes zero.
> >
> > Squash eth_can_rx semantics into eth_rx and drop the .can_receive()
> > callback, also add a flush when the rx buffer becomes available again
> > after a packet gets queued.
> >
> > The other conditions, "!axienet_rx_resetting(s) &&
> > axienet_rx_enabled(s)", are OK because enet_write already calls
> > qemu_flush_queued_packets when the register bits are changed.
> >
> > Signed-off-by: Fam Zheng <famz@redhat.com>
> > ---
> >  hw/net/xilinx_axienet.c | 17 +++++++++++++----
> >  1 file changed, 13 insertions(+), 4 deletions(-)
> >
> > diff --git a/hw/net/xilinx_axienet.c b/hw/net/xilinx_axienet.c
> > index 9205770..d63c423 100644
> > --- a/hw/net/xilinx_axienet.c
> > +++ b/hw/net/xilinx_axienet.c
> > @@ -401,6 +401,9 @@ struct XilinxAXIEnet {
> >
> >      uint8_t rxapp[CONTROL_PAYLOAD_SIZE];
> >      uint32_t rxappsize;
> > +
> > +    /* Whether axienet_eth_rx_notify should flush incoming queue. */
> > +    bool need_flush;
> >  };
> >
> >  static void axienet_rx_reset(XilinxAXIEnet *s)
> > @@ -658,10 +661,8 @@ static const MemoryRegionOps enet_ops = {
> >      .endianness = DEVICE_LITTLE_ENDIAN,
> >  };
> >
> > -static int eth_can_rx(NetClientState *nc)
> > +static int eth_can_rx(XilinxAXIEnet *s)
> >  {
> > -    XilinxAXIEnet *s = qemu_get_nic_opaque(nc);
> > -
> >      /* RX enabled? */
> >      return !s->rxsize && !axienet_rx_resetting(s) && axienet_rx_enabled(s);
> >  }
> > @@ -701,6 +702,10 @@ static void axienet_eth_rx_notify(void *opaque)
> >          s->rxpos += ret;
> >          if (!s->rxsize) {
> >              s->regs[R_IS] |= IS_RX_COMPLETE;
> > +            if (s->need_flush) {
> > +                s->need_flush = false;
> > +                qemu_flush_queued_packets(qemu_get_queue(s->nic));
> > +            }
> >          }
> >      }
> >      enet_update_irq(s);
> > @@ -721,6 +726,11 @@ static ssize_t eth_rx(NetClientState *nc, const uint8_t *buf, size_t size)
> >
> >      DENET(qemu_log("%s: %zd bytes\n", __func__, size));
> >
> > +    if (!eth_can_rx(s)) {
> > +        s->need_flush = true;
> > +        return 0;
> > +    }
> > +
> 
> axienet_eth_rx_notify() was only called by eth_rx(). So when
> s->need_flush is true, we won't ever reach axienet_eth_rx_notify()?

We will. If we are here, it means a previous call to axienet_eth_rx_notify
hasn't drained the buffer yet:

static void axienet_eth_rx_notify(void *opaque)
{
    ...
    while (s->rxsize && stream_can_push(s->tx_data_dev, axienet_eth_rx_notify,
                                        s)) {
        size_t ret = stream_push(s->tx_data_dev, (void *)s->rxmem + s->rxpos,
                                 s->rxsize);
        s->rxsize -= ret;
        s->rxpos += ret;
        if (!s->rxsize) {
            s->regs[R_IS] |= IS_RX_COMPLETE;
        }
    }
    ...
}

axienet_eth_rx_notify is passed to stream_can_push, so it will be called
again once s->tx_data_dev can receive more data:

typedef struct StreamSlaveClass {
    InterfaceClass parent;

    /**
     * can push - determine if a stream slave is capable of accepting at least
     * one byte of data. Returns false if cannot accept. If not implemented,
     * the slave is assumed to always be capable of receiving.
     * @notify: Optional callback that the slave will call when the slave is
     * capable of receiving again. Only called if false is returned.
     * @notify_opaque: opaque data to pass to notify call.
     */
    bool (*can_push)(StreamSlave *obj, StreamCanPushNotifyFn notify,
                     void *notify_opaque);
    ...

Am I missing anything?

Fam
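
For reference, a standalone toy model of that handshake -- only a sketch, not
QEMU code. The names (Dev, eth_rx_model, rx_notify, flush_queued_packets,
slave_capacity) are invented for illustration: a one-slot array stands in for
the peer's NetQueue and a byte counter stands in for stream_can_push(), but
the control flow mirrors the patch: eth_rx_model defers a packet and sets
need_flush, and rx_notify flushes the queue once the rx buffer drains.

/*
 * Toy model of the defer/flush handshake discussed above -- not QEMU code.
 * Only the shape of the control flow follows the patch.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

typedef struct Dev {
    size_t rxsize;      /* bytes still to be pushed downstream */
    bool   need_flush;  /* set when a packet had to be refused */
} Dev;

/* Stand-in for the peer's NetQueue: at most one deferred packet. */
static char   queued_pkt[64];
static size_t queued_len;

/* Stand-in for how much the downstream "stream slave" accepts right now. */
static size_t slave_capacity;

static size_t eth_rx_model(Dev *s, const char *buf, size_t len);

/* Stand-in for qemu_flush_queued_packets(): re-deliver the deferred packet. */
static void flush_queued_packets(Dev *s)
{
    if (queued_len) {
        size_t len = queued_len;
        queued_len = 0;
        eth_rx_model(s, queued_pkt, len);
    }
}

/*
 * Stand-in for axienet_eth_rx_notify(): drain rxsize while the slave can
 * accept data; once the buffer is empty, flush the peer's queue so the
 * deferred packet is delivered again.
 */
static void rx_notify(Dev *s)
{
    while (s->rxsize && slave_capacity) {
        size_t chunk = s->rxsize < slave_capacity ? s->rxsize : slave_capacity;
        s->rxsize -= chunk;
        slave_capacity -= chunk;
        if (!s->rxsize && s->need_flush) {
            s->need_flush = false;
            flush_queued_packets(s);
        }
    }
}

/*
 * Stand-in for eth_rx(): refuse the packet (return 0) while a previous one
 * is still draining; the peer keeps it queued until we flush.
 */
static size_t eth_rx_model(Dev *s, const char *buf, size_t len)
{
    if (s->rxsize) {                       /* eth_can_rx() would be false */
        s->need_flush = true;
        if (len > sizeof(queued_pkt)) {
            len = sizeof(queued_pkt);
        }
        memcpy(queued_pkt, buf, len);
        queued_len = len;
        printf("deferred %zu bytes\n", len);
        return 0;
    }
    s->rxsize = len;
    rx_notify(s);
    printf("accepted %zu bytes, %zu left in rx buffer\n", len, s->rxsize);
    return len;
}

int main(void)
{
    Dev s = { 0 };

    slave_capacity = 4;
    eth_rx_model(&s, "12345678", 8); /* partially drained, 4 bytes remain   */
    eth_rx_model(&s, "abcd", 4);     /* refused and deferred, need_flush set */

    slave_capacity = 16;             /* slave can accept again...           */
    rx_notify(&s);                   /* ...notify drains and flushes queue  */
    return 0;
}

Built with any C99 compiler, it prints the deferred packet being re-delivered
once the buffer drains, which is the sequence described above.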