From: Ilias Apalodimas <ilias.apalodimas@linaro.org>
To: Andy Gospodarek <andrew.gospodarek@broadcom.com>
Cc: Michael Chan <michael.chan@broadcom.com>,
	davem@davemloft.net, netdev@vger.kernel.org
Subject: Re: [PATCH net-next] bnxt_en: Add page_pool_destroy() during RX ring cleanup.
Date: Tue, 9 Jul 2019 21:55:06 +0300
Message-ID: <20190709185506.GA7854@apalos>
In-Reply-To: <20190709163154.GO87269@C02RW35GFVH8.dhcp.broadcom.net>

On Tue, Jul 09, 2019 at 12:31:54PM -0400, Andy Gospodarek wrote:
> On Tue, Jul 09, 2019 at 06:20:57PM +0300, Ilias Apalodimas wrote:
> > Hi,
> > 
> > > > Add page_pool_destroy() in bnxt_free_rx_rings() during normal RX ring
> > > > cleanup, as Ilias has informed us that the following commit has been
> > > > merged:
> > > > 
> > > > 1da4bbeffe41 ("net: core: page_pool: add user refcnt and reintroduce page_pool_destroy")
> > > > 
> > > > The special error handling code to call page_pool_free() can now be
> > > > removed.  bnxt_free_rx_rings() will always be called during normal
> > > > shutdown or any error paths.
> > > > 
> > > > Fixes: 322b87ca55f2 ("bnxt_en: add page_pool support")
> > > > Cc: Ilias Apalodimas <ilias.apalodimas@linaro.org>
> > > > Cc: Andy Gospodarek <gospo@broadcom.com>
> > > > Signed-off-by: Michael Chan <michael.chan@broadcom.com>
> > > > ---
> > > >  drivers/net/ethernet/broadcom/bnxt/bnxt.c | 8 ++------
> > > >  1 file changed, 2 insertions(+), 6 deletions(-)
> > > > 
> > > > diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> > > > index e9d3bd8..2b5b0ab 100644
> > > > --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> > > > +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> > > > @@ -2500,6 +2500,7 @@ static void bnxt_free_rx_rings(struct bnxt *bp)
> > > >  		if (xdp_rxq_info_is_reg(&rxr->xdp_rxq))
> > > >  			xdp_rxq_info_unreg(&rxr->xdp_rxq);
> > > >  
> > > > +		page_pool_destroy(rxr->page_pool);
> > > >  		rxr->page_pool = NULL;
> > > >  
> > > >  		kfree(rxr->rx_tpa);
> > > > @@ -2560,19 +2561,14 @@ static int bnxt_alloc_rx_rings(struct bnxt *bp)
> > > >  			return rc;
> > > >  
> > > >  		rc = xdp_rxq_info_reg(&rxr->xdp_rxq, bp->dev, i);
> > > > -		if (rc < 0) {
> > > > -			page_pool_free(rxr->page_pool);
> > > > -			rxr->page_pool = NULL;
> > > > +		if (rc < 0)
> > > >  			return rc;
> > > > -		}
> > > >  
> > > >  		rc = xdp_rxq_info_reg_mem_model(&rxr->xdp_rxq,
> > > >  						MEM_TYPE_PAGE_POOL,
> > > >  						rxr->page_pool);
> > > >  		if (rc) {
> > > >  			xdp_rxq_info_unreg(&rxr->xdp_rxq);
> > > > -			page_pool_free(rxr->page_pool);
> > > > -			rxr->page_pool = NULL;
> > > 
> > > Rather than deleting these lines it would also be acceptable to do:
> > > 
> > >                 if (rc) {
> > >                         xdp_rxq_info_unreg(&rxr->xdp_rxq);
> > > -                       page_pool_free(rxr->page_pool);
> > > +                       page_pool_destroy(rxr->page_pool);
> > >                         rxr->page_pool = NULL;
> > >                         return rc;
> > >                 }
> > > 
> > > but any time there is a failure in bnxt_alloc_rx_rings() the driver will
> > > immediately follow it up with a call to bnxt_free_rx_rings(), so
> > > page_pool_destroy() will be called.
> > > 
> > > Thanks for pushing this out so quickly!
> > > 
> > 
> > I also can't find page_pool_release_page() or page_pool_put_page() called when
> > destroying the pool. Can you try to insmod -> do some traffic -> rmmod?
> > If there are stale buffers that haven't been unmapped properly you'll get a
> > WARN_ON for them.
> 
> I did that test a few times with a few different bpf progs but I do not
> see any WARN messages.  Of course this does not mean that the code we
> have is 100% correct.
> 

I'll try to have a closer look as well.

> Presumably you are talking about one of these messages, right?
> 
> 215         /* The distance should not be able to become negative */
> 216         WARN(inflight < 0, "Negative(%d) inflight packet-pages", inflight);
> 
> or
> 
> 356         /* Drivers should fix this, but only problematic when DMA is used */
> 357         WARN(1, "Still in-flight pages:%d hold:%u released:%u",
> 358              distance, hold_cnt, release_cnt);
> 

Yeah, particularly the second one. There's a counter we increase every time a
fresh page is allocated from the pool, and it needs to be decreased again
before the whole pool is freed. page_pool_release_page() will do that, for
example.
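
For illustration, a minimal teardown sketch (hypothetical, not lifted from the
bnxt_en code), assuming the two-argument page_pool_put_page(pool, page) form of
the API:

    #include <linux/mm.h>
    #include <net/page_pool.h>

    /* Hypothetical helper, not the actual bnxt_en path: hand every page the
     * driver still holds back to the pool so the hold/release accounting
     * balances, then destroy the pool.  Destroying it with pages still
     * outstanding is what triggers the "Still in-flight pages" warning above.
     */
    static void example_free_rx_pages(struct page_pool *pool,
                                      struct page **pages, int n)
    {
            int i;

            for (i = 0; i < n; i++) {
                    if (!pages[i])
                            continue;
                    /* Return the page to the pool; it is recycled or unmapped
                     * and freed, so it no longer counts as held by the driver.
                     */
                    page_pool_put_page(pool, pages[i]);
                    pages[i] = NULL;
            }
            /* Only now is it safe to tear the whole pool down. */
            page_pool_destroy(pool);
    }

The helper and the loop above are made up; the point is only the ordering,
i.e. every page the pool handed out has to be released back through it before
page_pool_destroy() runs.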

> 
> > This part was added later on in the API when Jesper fixed in-flight packet
> > handling

Thanks
/Ilias

Thread overview: 6+ messages
2019-07-09  7:50 [PATCH net-next] bnxt_en: Add page_pool_destroy() during RX ring cleanup Michael Chan
2019-07-09 13:18 ` Andy Gospodarek
2019-07-09 15:20   ` Ilias Apalodimas
2019-07-09 16:31     ` Andy Gospodarek
2019-07-09 18:55       ` Ilias Apalodimas [this message]
2019-07-09 19:18 ` David Miller
