From: Jesper Dangaard Brouer <brouer@redhat.com>
To: Ioana Ciornei <ioana.ciornei@nxp.com>
Cc: "davem@davemloft.net" <davem@davemloft.net>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	brouer@redhat.com
Subject: Re: [PATCH net-next 4/4] dpaa2-eth: use bulk enqueue in .ndo_xdp_xmit
Date: Wed, 22 Apr 2020 10:37:41 +0200	[thread overview]
Message-ID: <20200422103741.690765d0@carbon> (raw)
In-Reply-To: <DB8PR04MB6828FCFAC2B6755E046B0B05E0D20@DB8PR04MB6828.eurprd04.prod.outlook.com>

On Wed, 22 Apr 2020 07:51:46 +0000
Ioana Ciornei <ioana.ciornei@nxp.com> wrote:

> > Subject: Re: [PATCH net-next 4/4] dpaa2-eth: use bulk enqueue in
> > .ndo_xdp_xmit
> > 
> > On Tue, 21 Apr 2020 18:21:54 +0300
> > Ioana Ciornei <ioana.ciornei@nxp.com> wrote:
> >   
> > > Take advantage of the bulk enqueue feature in .ndo_xdp_xmit.
> > > We cannot use XDP_XMIT_FLUSH since the architecture is not capable
> > > of storing all the frames dequeued in a NAPI cycle, so we instead
> > > enqueue all the frames received in an ndo_xdp_xmit call right away.
> > >
> > > After setting up all FDs for the xdp_frames received, enqueue multiple
> > > frames at a time until all are sent or the maximum number of retries
> > > is hit.
> > >
> > > Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
> > > ---
> > >  .../net/ethernet/freescale/dpaa2/dpaa2-eth.c | 60 ++++++++++---------
> > >  1 file changed, 32 insertions(+), 28 deletions(-)
> > >
> > > diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
> > > index 9a0432cd893c..08b4efad46fd 100644
> > > --- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
> > > +++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
> > > @@ -1933,12 +1933,12 @@ static int dpaa2_eth_xdp_xmit(struct net_device *net_dev, int n,
> > >  			      struct xdp_frame **frames, u32 flags)
> > >  {
> > >  	struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
> > > +	int total_enqueued = 0, retries = 0, enqueued;
> > >  	struct dpaa2_eth_drv_stats *percpu_extras;
> > >  	struct rtnl_link_stats64 *percpu_stats;
> > > +	int num_fds, i, err, max_retries;
> > >  	struct dpaa2_eth_fq *fq;
> > > -	struct dpaa2_fd fd;
> > > -	int drops = 0;
> > > -	int i, err;
> > > +	struct dpaa2_fd *fds;
> > >
> > >  	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
> > >  		return -EINVAL;
> > > @@ -1946,41 +1946,45 @@ static int dpaa2_eth_xdp_xmit(struct net_device *net_dev, int n,
> > >  	if (!netif_running(net_dev))
> > >  		return -ENETDOWN;
> > >
> > > +	/* create the array of frame descriptors */
> > > +	fds = kcalloc(n, sizeof(*fds), GFP_ATOMIC);  
> > 
> > I don't like that you have an allocation on the transmit fast-path.
> > 
> > There are a number of ways you can avoid this.
> > 
> > Option (1): Given we know that (currently) devmap will bulk at most
> > 16 xdp_frames, we can use a call-stack local array of struct dpaa2_fd
> > with 16 elements; sizeof(struct dpaa2_fd) == 32 bytes times 16 is
> > 512 bytes, so it might be acceptable.  (And add code to allocate if
> > n > 16, to stay compatible with someone increasing the max bulk size
> > in devmap.)
> >   
> 
> I didn't know about the max of 16 xdp_frames. Thanks.
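
To make option (1) concrete, inside dpaa2_eth_xdp_xmit() it would look
roughly like this (completely untested sketch, and the 16 should really
be a shared define rather than a literal):

	struct dpaa2_fd fds_on_stack[16], *fds = fds_on_stack;

	/* Fall back to a dynamic allocation only in the unlikely
	 * case someone grows the devmap max bulk beyond 16.
	 */
	if (n > ARRAY_SIZE(fds_on_stack)) {
		fds = kcalloc(n, sizeof(*fds), GFP_ATOMIC);
		if (!fds)
			return -ENOMEM;
	}

	/* ... set up and bulk-enqueue the FDs as in your patch ... */

	if (fds != fds_on_stack)
		kfree(fds);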
> 
> > Option (2): extend struct dpaa2_eth_priv with an array of 16 struct
> > dpaa2_fd entries that can be used as FD storage.
> 
> I have more of a noob question here before proceeding with one of the
> two options.
> 
> The ndo_xdp_xmit() callback can be called concurrently from multiple
> softirq contexts, right?

Yes. Each invocation runs in softirq context on a single CPU, but
several CPUs can be in ndo_xdp_xmit for the same device at the same
time, so any shared FD storage must be partitioned per CPU (or per
channel).
 
> If the above is true, then I think the dpaa2_eth_ch_xdp is the right
> struct to place the array of 16 FDs.

Good point, and it sounds correct to use dpaa2_eth_ch_xdp.
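
Something like this, I would assume (untested; the existing members of
the struct are elided, and the field name is my invention):

	/* DEV_MAP_BULK_SIZE (currently 16) comes from <linux/bpf.h> */
	struct dpaa2_eth_ch_xdp {
		struct bpf_prog *prog;
		/* ... existing members ... */

		/* Scratch FDs for bulk enqueue from .ndo_xdp_xmit,
		 * so the TX fast-path avoids allocating.  One array
		 * per channel is enough, since each channel is only
		 * serviced from a single softirq context at a time.
		 */
		struct dpaa2_fd fds[DEV_MAP_BULK_SIZE];
	};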

> Also, is there any caveat to just use DEV_MAP_BULK_SIZE when
> declaring the array?

Using DEV_MAP_BULK_SIZE sounds like a good idea.  Even if someone
decides to increase that to 64, this is still only 2048 bytes (32*64).
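
The xmit path can then just reuse that per-channel array; roughly
(untested, and I'm assuming the dpaa2_eth_xdp_create_fd() helper from
your patch 3/4 plus the usual per-CPU channel lookup):

	struct dpaa2_eth_channel *ch = priv->channel[smp_processor_id()];
	struct dpaa2_fd *fds = ch->xdp.fds;	/* no kcalloc() needed */
	int i, err, num_fds;

	/* Defensive cap, in case DEV_MAP_BULK_SIZE grows after the
	 * array was sized; the caller frees whatever we don't send.
	 */
	if (WARN_ON_ONCE(n > DEV_MAP_BULK_SIZE))
		n = DEV_MAP_BULK_SIZE;

	/* Set up one frame descriptor per xdp_frame received. */
	for (i = 0; i < n; i++) {
		err = dpaa2_eth_xdp_create_fd(net_dev, frames[i], &fds[i]);
		if (err)
			break;
	}
	num_fds = i;

	/* ... bulk-enqueue fds[0..num_fds) with retries, as before ... */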


> >   
> > > +	if (!fds)
> > > +		return -ENOMEM;
> > > +  
> > 
> > [...]  
> > > +	kfree(fds);  
> > 
> >   
> 



-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer

