From: Florian Fainelli <f.fainelli@gmail.com>
To: Petri Gynther <pgynther@google.com>
Cc: netdev@vger.kernel.org, David Miller <davem@davemloft.net>
Subject: Re: [PATCH net-next 2/4] net: bcmgenet: return number of packets completed in TX reclaim
Date: Fri, 24 Oct 2014 15:52:22 -0700
Message-ID: <544AD826.5060301@gmail.com>
In-Reply-To: <CAGXr9JHNRFsXXT+_tfM=ye5k5YwjmJ6s5YP+qaTdGMOLKZxPwA@mail.gmail.com>

On 10/24/2014 03:20 PM, Petri Gynther wrote:
> Hi Florian,
> 
> On Fri, Oct 24, 2014 at 1:02 PM, Florian Fainelli <f.fainelli@gmail.com> wrote:
>> In preparation for reclaiming transmitted buffers in NAPI context,
>> update __bcmgenet_tx_reclaim() and bcmgenet_tx_reclaim() to return the
>> number of packets completed per call.
>>
>> Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
>> ---
>>  drivers/net/ethernet/broadcom/genet/bcmgenet.c | 20 +++++++++++++-------
>>  1 file changed, 13 insertions(+), 7 deletions(-)
>>
>> diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
>> index ee4d5baf09b6..70f2fb366375 100644
>> --- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
>> +++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
>> @@ -876,14 +876,14 @@ static inline void bcmgenet_tx_ring_int_disable(struct bcmgenet_priv *priv,
>>  }
>>
>>  /* Unlocked version of the reclaim routine */
>> -static void __bcmgenet_tx_reclaim(struct net_device *dev,
>> -                                 struct bcmgenet_tx_ring *ring)
>> +static unsigned int __bcmgenet_tx_reclaim(struct net_device *dev,
>> +                                         struct bcmgenet_tx_ring *ring)
>>  {
>>         struct bcmgenet_priv *priv = netdev_priv(dev);
>>         int last_tx_cn, last_c_index, num_tx_bds;
>>         struct enet_cb *tx_cb_ptr;
>>         struct netdev_queue *txq;
>> -       unsigned int bds_compl;
>> +       unsigned int bds_compl, pkts_compl = 0;
> 
> bcmgenet_desc_rx() uses "rxpktprocessed", so I'd go with "txpktprocessed" here.

Sure.

> 
>>         unsigned int c_index;
>>
>>         /* Compute how many buffers are transmitted since last xmit call */
>> @@ -928,6 +928,7 @@ static void __bcmgenet_tx_reclaim(struct net_device *dev,
>>                 }
>>                 dev->stats.tx_packets++;
>>                 ring->free_bds += bds_compl;
>> +               pkts_compl += bds_compl;
> 
> This change doesn't look correct. The number of cleaned packets is not
> necessarily equal to the number of cleaned TxBDs.

Right, that should be a +1 increment; it does not really matter in this case,
because all we want to know is whether it is zero or non-zero.
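
Concretely, against the posted hunk that would be something like (untested
sketch):

 		dev->stats.tx_packets++;
 		ring->free_bds += bds_compl;
-		pkts_compl += bds_compl;
+		pkts_compl++;	/* only zero vs. non-zero matters below */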

> 
> I think that the while-loop should be:
> 
>         while (last_tx_cn-- > 0) {
>                 tx_cb_ptr = ring->cbs + last_c_index;
> 
>                 if (tx_cb_ptr->skb) {
>                         pkts_compl++;
>                         dev->stats.tx_packets++;
>                         dev->stats.tx_bytes += tx_cb_ptr->skb->len;
>                         dma_unmap_single(&dev->dev,
>                                          dma_unmap_addr(tx_cb_ptr, dma_addr),
>                                          tx_cb_ptr->skb->len,
>                                          DMA_TO_DEVICE);
>                         bcmgenet_free_cb(tx_cb_ptr);
>                 } else if (dma_unmap_addr(tx_cb_ptr, dma_addr)) {
>                         dev->stats.tx_bytes +=
>                                 dma_unmap_len(tx_cb_ptr, dma_len);
>                         dma_unmap_page(&dev->dev,
>                                        dma_unmap_addr(tx_cb_ptr, dma_addr),
>                                        dma_unmap_len(tx_cb_ptr, dma_len),
>                                        DMA_TO_DEVICE);
>                         dma_unmap_addr_set(tx_cb_ptr, dma_addr, 0);
>                 }
>                 ring->free_bds++;
>                 last_c_index++;
>                 last_c_index &= (num_tx_bds - 1);
>         }
> 
>>
>>                 last_c_index++;
>>                 last_c_index &= (num_tx_bds - 1);
>> @@ -936,20 +937,25 @@ static void __bcmgenet_tx_reclaim(struct net_device *dev,
>>         if (ring->free_bds > (MAX_SKB_FRAGS + 1))
>>                 ring->int_disable(priv, ring);
>>
>> -       if (netif_tx_queue_stopped(txq))
>> +       if (netif_tx_queue_stopped(txq) && pkts_compl)
> 
> bcmgenet_xmit() stops the Tx queue when:
> ring->free_bds <= (MAX_SKB_FRAGS + 1)
> 
> So, shouldn't this check be:
> netif_tx_queue_stopped(txq) && (ring->free_bds > (MAX_SKB_FRAGS + 1))
> 
> Having said that --
> Why does the driver stop the Tx queue when there are still
> (MAX_SKB_FRAGS + 1) TxBDs left?
> It leaves 17 or so TxBDs unused on the Tx ring, and most of the
> packets are not fragmented.

Right, this is something I am still trying to understand. On one hand, it
does make sense to stop the queue at MAX_SKB_FRAGS + 1 just to be safe and
allow for a full-size fragment to be pushed on the next xmit() call. On the
other hand, bcmgenet_xmit() already checks for the actual fragment size
early on.

I think the only check which is really valid is the one at the end of
bcmgenet_xmit(); the one in bcmgenet_tx_reclaim() should just verify that we
have released some packets and that the queue was previously stopped.
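
In other words, keep what the posted hunk does (sketch of the resulting
check, untested):

	/* Wake the queue only if it was stopped and we reclaimed something */
	if (netif_tx_queue_stopped(txq) && pkts_compl)
		netif_tx_wake_queue(txq);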

> 
> I feel that bcmgenet_xmit() should stop the Tx queue only when there
> is 1 TxBD left after putting a new packet on the ring.
> 
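For reference, the two stop conditions being discussed would look roughly
like this on the bcmgenet_xmit() side (illustrative sketch only; the exact
placement and the use of netif_tx_stop_queue() here are assumptions, not the
posted code):

	/* Current behaviour, per the check quoted above: keep headroom for
	 * a fully fragmented packet.
	 */
	if (ring->free_bds <= (MAX_SKB_FRAGS + 1))
		netif_tx_stop_queue(txq);

	/* Suggested alternative: stop only when at most one TxBD remains
	 * after queueing this packet.
	 */
	if (ring->free_bds <= 1)
		netif_tx_stop_queue(txq);
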
>>                 netif_tx_wake_queue(txq);
>>
>>         ring->c_index = c_index;
>> +
>> +       return pkts_compl;
>>  }
>>
>> -static void bcmgenet_tx_reclaim(struct net_device *dev,
>> -                               struct bcmgenet_tx_ring *ring)
>> +static unsigned int bcmgenet_tx_reclaim(struct net_device *dev,
>> +                                       struct bcmgenet_tx_ring *ring)
>>  {
>>         unsigned long flags;
>> +       unsigned int pkts_compl;
>>
>>         spin_lock_irqsave(&ring->lock, flags);
>> -       __bcmgenet_tx_reclaim(dev, ring);
>> +       pkts_compl = __bcmgenet_tx_reclaim(dev, ring);
>>         spin_unlock_irqrestore(&ring->lock, flags);
>> +
>> +       return pkts_compl;
>>  }
>>
>>  static void bcmgenet_tx_reclaim_all(struct net_device *dev)
>> --
>> 1.9.1
>>

Thread overview: 10+ messages
2014-10-24 20:02 [PATCH net-next 0/4] net: bcmgenet: switch to TX NAPI Florian Fainelli
2014-10-24 20:02 ` [PATCH net-next 1/4] net: bcmgenet: consistently use UMAC_IRQ_RXDMA_MASK Florian Fainelli
2014-10-24 21:39   ` Petri Gynther
2014-10-24 20:02 ` [PATCH net-next 2/4] net: bcmgenet: return number of packets completed in TX reclaim Florian Fainelli
2014-10-24 22:20   ` Petri Gynther
2014-10-24 22:52     ` Florian Fainelli [this message]
2014-10-24 20:02 ` [PATCH net-next 3/4] net: bcmgenet: reclaim transmitted buffers in NAPI context Florian Fainelli
2014-10-24 22:30   ` Petri Gynther
2014-10-24 20:02 ` [PATCH net-next 4/4] net: bcmgenet: update ring producer index and buffer count in xmit Florian Fainelli
2014-10-24 22:39   ` Petri Gynther
