netdev.vger.kernel.org archive mirror
From: Ilias Apalodimas <ilias.apalodimas@linaro.org>
To: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Lorenzo Bianconi <lorenzo@kernel.org>,
	netdev@vger.kernel.org, davem@davemloft.net,
	lorenzo.bianconi@redhat.com, mcroce@redhat.com,
	jonathan.lemon@gmail.com
Subject: Re: [PATCH v4 net-next 2/3] net: page_pool: add the possibility to sync DMA memory for device
Date: Tue, 19 Nov 2019 17:23:40 +0200	[thread overview]
Message-ID: <20191119152340.GA31758@apalos.home> (raw)
In-Reply-To: <20191119161109.7cd83965@carbon>

On Tue, Nov 19, 2019 at 04:11:09PM +0100, Jesper Dangaard Brouer wrote:
> On Tue, 19 Nov 2019 13:33:36 +0200
> Ilias Apalodimas <ilias.apalodimas@linaro.org> wrote:
> 
> > > > diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> > > > index dfc2501c35d9..4f9aed7bce5a 100644
> > > > --- a/net/core/page_pool.c
> > > > +++ b/net/core/page_pool.c
> > > > @@ -47,6 +47,13 @@ static int page_pool_init(struct page_pool *pool,
> > > >  	    (pool->p.dma_dir != DMA_BIDIRECTIONAL))
> > > >  		return -EINVAL;
> > > >  
> > > > +	/* In order to request DMA-sync-for-device the page needs to
> > > > +	 * be mapped
> > > > +	 */
> > > > +	if ((pool->p.flags & PP_FLAG_DMA_SYNC_DEV) &&
> > > > +	    !(pool->p.flags & PP_FLAG_DMA_MAP))
> > > > +		return -EINVAL;
> > > > +  
> > > 
> > > I like that you have moved this check to setup time.
> > > 
> > > There are two other parameters the DMA_SYNC_DEV depend on:
> > > 
> > >  	struct page_pool_params pp_params = {
> > >  		.order = 0,
> > > -		.flags = PP_FLAG_DMA_MAP,
> > > +		.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
> > >  		.pool_size = size,
> > >  		.nid = cpu_to_node(0),
> > >  		.dev = pp->dev->dev.parent,
> > >  		.dma_dir = xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE,
> > > +		.offset = pp->rx_offset_correction,
> > > +		.max_len = MVNETA_MAX_RX_BUF_SIZE,
> > >  	};
> > > 
> > > Can you add a check, that .max_len must not be zero.  The reason is
> > > that I can easily see people misconfiguring this.  And the effect is
> > > that the DMA-sync-for-device is essentially disabled, without user
> > > realizing this. The not-realizing part is really bad, especially
> > > because bugs that can occur from this are very rare and hard to catch.  
> > 
> > +1 we sync based on the min() value of those 
> > 
> > > 
> > > I'm up for discussing if there should be a similar check for .offset.
> > > IMHO we should also check .offset is configured, and then be open to
> > > remove this check once a driver user want to use offset=0.  Does the
> > > mvneta driver already have a use-case for this (in non-XDP mode)?  
> > 
> > Not sure about this, since it does not break anything apart from some
> > performance hit
> 
> I don't follow the 'performance hit' comment.  This is checked at setup
> time (page_pool_init), thus it doesn't affect runtime.

If the offset is 0, you'll end up needlessly syncing a couple of extra bytes
(whatever headroom the buffer reserves, which the device never wrote and so
doesn't need syncing).

> 
> This is a generic optimization principle that I use a lot. Moving code
> checks out of fast-path, and instead do more at setup/load-time, or
> even at shutdown-time (like we do for page_pool e.g. check refcnt
> invariance).  This principle is also heavily used by BPF, that adjust
> BPF-instructions at load-time.  It is core to getting the performance
> we need for high-speed networking.

The offset will affect the fast path running code.

What I am worried about is that the XDP and SKB paths will have different
offset needs for the same pool. In the netsec driver I deal with this by
reserving the same headroom whether the packet ends up as an SKB or an XDP
buffer. If we check the offset at init time, we are practically forcing
people to do something similar.

Thanks
/Ilias
> 
> -- 
> Best regards,
>   Jesper Dangaard Brouer
>   MSc.CS, Principal Kernel Engineer at Red Hat
>   LinkedIn: http://www.linkedin.com/in/brouer
> 


Thread overview: 19+ messages
2019-11-18 13:33 [PATCH v4 net-next 0/3] add DMA-sync-for-device capability to page_pool API Lorenzo Bianconi
2019-11-18 13:33 ` [PATCH v4 net-next 1/3] net: mvneta: rely on page_pool_recycle_direct in mvneta_run_xdp Lorenzo Bianconi
2019-11-18 13:33 ` [PATCH v4 net-next 2/3] net: page_pool: add the possibility to sync DMA memory for device Lorenzo Bianconi
2019-11-19 11:23   ` Jesper Dangaard Brouer
2019-11-19 11:33     ` Ilias Apalodimas
2019-11-19 15:11       ` Jesper Dangaard Brouer
2019-11-19 15:23         ` Ilias Apalodimas [this message]
2019-11-19 12:14     ` Lorenzo Bianconi
2019-11-19 15:13       ` Jesper Dangaard Brouer
2019-11-19 15:25         ` Lorenzo Bianconi
2019-11-19 21:17           ` Jesper Dangaard Brouer
2019-11-18 13:33 ` [PATCH v4 net-next 3/3] net: mvneta: get rid of huge dma sync in mvneta_rx_refill Lorenzo Bianconi
2019-11-19 11:38   ` Jesper Dangaard Brouer
2019-11-19 12:19     ` Lorenzo Bianconi
2019-11-19 14:51       ` Jesper Dangaard Brouer
2019-11-19 15:38         ` Lorenzo Bianconi
2019-11-19 22:23           ` Jonathan Lemon
2019-11-20  9:21             ` Lorenzo Bianconi
2019-11-20 16:29               ` Jonathan Lemon
