From: Jesper Dangaard Brouer <brouer@redhat.com>
To: "Jonathan Lemon" <jonathan.lemon@gmail.com>
Cc: "Lorenzo Bianconi" <lorenzo@kernel.org>,
netdev@vger.kernel.org, davem@davemloft.net,
ilias.apalodimas@linaro.org, lorenzo.bianconi@redhat.com,
mcroce@redhat.com, brouer@redhat.com
Subject: Re: [PATCH v5 net-next 2/3] net: page_pool: add the possibility to sync DMA memory for device
Date: Wed, 20 Nov 2019 20:04:08 +0100
Message-ID: <20191120200408.38b39201@carbon>
In-Reply-To: <3DD728CA-CF0B-4F26-AF64-4E1C357D0F0C@gmail.com>

On Wed, 20 Nov 2019 10:42:47 -0800
"Jonathan Lemon" <jonathan.lemon@gmail.com> wrote:
> On 20 Nov 2019, at 9:49, Jesper Dangaard Brouer wrote:
>
> > On Wed, 20 Nov 2019 16:54:18 +0200
> > Lorenzo Bianconi <lorenzo@kernel.org> wrote:
> >
> >> Introduce the following parameters in order to add the possibility to
> >> sync DMA memory for device before putting allocated pages in the
> >> page_pool caches:
> >> - PP_FLAG_DMA_SYNC_DEV: if set in page_pool_params flags, all pages
> >>   that the driver gets from page_pool will be DMA-synced-for-device
> >>   according to the length provided by the device driver. Please note
> >>   DMA-sync-for-CPU is still device driver responsibility
> >> - offset: DMA address offset where the DMA engine starts copying rx data
> >> - max_len: maximum DMA memory size page_pool is allowed to flush. This
> >>   is currently used in __page_pool_alloc_pages_slow routine when pages
> >>   are allocated from page allocator
> >> These parameters are supposed to be set by device drivers.
> >>
> >> This optimization reduces the length of the DMA-sync-for-device.
> >> The optimization is valid because pages are initially
> >> DMA-synced-for-device as defined via max_len. At RX time, the driver
> >> will perform a DMA-sync-for-CPU on the memory for the packet length.
> >> What is important is the memory occupied by packet payload, because
> >> this is the area CPU is allowed to read and modify. As we don't track
> >> cache-lines written into by the CPU, simply use the packet payload
> >> length as dma_sync_size at page_pool recycle time. This also takes
> >> into account any tail-extension.
> >>
> >> Tested-by: Matteo Croce <mcroce@redhat.com>
> >> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> >> ---
> >
> > Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> >
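
For context, a minimal sketch of how a driver-side Rx setup might fill in
the new page_pool_params fields described in the changelog above. The
device pointer, pool size and headroom below are placeholders for
illustration, not values taken from this series:

	struct page_pool_params pp_params = {
		.order		= 0,
		.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
		.pool_size	= 256,		/* placeholder ring size */
		.nid		= NUMA_NO_NODE,
		.dev		= rx_dev,	/* placeholder struct device */
		.dma_dir	= DMA_FROM_DEVICE,
		/* DMA engine starts writing rx data at this offset... */
		.offset		= XDP_PACKET_HEADROOM,
		/* ...and page_pool never syncs more than max_len back
		 * to the device.
		 */
		.max_len	= PAGE_SIZE - XDP_PACKET_HEADROOM,
	};
	struct page_pool *pool = page_pool_create(&pp_params);

	if (IS_ERR(pool))
		return PTR_ERR(pool);
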
> > [...]
> >> @@ -281,8 +309,8 @@ static bool __page_pool_recycle_direct(struct page *page,
> >>  	return true;
> >>  }
> >>
> >> -void __page_pool_put_page(struct page_pool *pool,
> >> -			  struct page *page, bool allow_direct)
> >> +void __page_pool_put_page(struct page_pool *pool, struct page *page,
> >> +			  unsigned int dma_sync_size, bool allow_direct)
> >>  {
> >>  	/* This allocator is optimized for the XDP mode that uses
> >>  	 * one-frame-per-page, but have fallbacks that act like the
> >> @@ -293,6 +321,10 @@ void __page_pool_put_page(struct page_pool *pool,
> >>  	if (likely(page_ref_count(page) == 1)) {
> >>  		/* Read barrier done in page_ref_count / READ_ONCE */
> >>
> >> +		if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
> >> +			page_pool_dma_sync_for_device(pool, page,
> >> +						      dma_sync_size);
> >> +
> >>  		if (allow_direct && in_serving_softirq())
> >>  			if (__page_pool_recycle_direct(page, pool))
> >>  				return;
> >
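
The helper called from the hunk above is not quoted here; judging from the
call site and the parameters added by this patch, it presumably looks
roughly like the sketch below (an illustration of the mechanism, not the
literal patch contents):

	static void page_pool_dma_sync_for_device(struct page_pool *pool,
						  struct page *page,
						  unsigned int dma_sync_size)
	{
		/* Never sync more than the area the device may write into. */
		dma_sync_size = min(dma_sync_size, pool->p.max_len);

		/* Only [offset, offset + dma_sync_size) goes back to the device. */
		dma_sync_single_range_for_device(pool->p.dev, page->dma_addr,
						 pool->p.offset, dma_sync_size,
						 pool->p.dma_dir);
	}
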
> > I am slightly concerned this touches the fast-path code. But at least
> > on Intel, I don't think this is measurable. And for the ARM64 board it
> > was a huge win... thus I'll accept this.
>
> For the next series:
>
> The "in_serving_softirq()" check shows up in profiling. I'd
> like to remove this and just have a "direct" flag, where the
> caller takes responsibility for being in the correct context.

As far as I can remember, this was added due to a bug in the mlx5 shutdown
path... that needs to be fixed first.
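
For reference, the guard under discussion, plus a sketch of the direction
Jonathan is suggesting (this only illustrates the idea, it is not a
proposed patch):

	/* Current code: recycling into the lockless per-CPU cache is gated
	 * on a runtime context check, which is what shows up in profiles.
	 */
	if (allow_direct && in_serving_softirq())
		if (__page_pool_recycle_direct(page, pool))
			return;

	/* Suggested direction: drop the runtime check and make the caller
	 * responsible for only requesting direct recycling from softirq
	 * (NAPI) context.
	 */
	if (allow_direct)	/* caller guarantees softirq context */
		if (__page_pool_recycle_direct(page, pool))
			return;
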
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer
Thread overview: 14+ messages
2019-11-20 14:54 [PATCH v5 net-next 0/3] add DMA-sync-for-device capability to page_pool API Lorenzo Bianconi
2019-11-20 14:54 ` [PATCH v5 net-next 1/3] net: mvneta: rely on page_pool_recycle_direct in mvneta_run_xdp Lorenzo Bianconi
2019-11-20 15:45 ` Jesper Dangaard Brouer
2019-11-20 14:54 ` [PATCH v5 net-next 2/3] net: page_pool: add the possibility to sync DMA memory for device Lorenzo Bianconi
2019-11-20 17:49 ` Jesper Dangaard Brouer
2019-11-20 18:00 ` Ilias Apalodimas
2019-11-20 18:42 ` Jonathan Lemon
2019-11-20 19:04 ` Jesper Dangaard Brouer [this message]
2019-11-20 14:54 ` [PATCH v5 net-next 3/3] net: mvneta: get rid of huge dma sync in mvneta_rx_refill Lorenzo Bianconi
2019-11-20 17:49 ` Jesper Dangaard Brouer
2019-11-20 15:37 ` [PATCH v5 net-next 0/3] add DMA-sync-for-device capability to page_pool API Jesper Dangaard Brouer
2019-11-20 15:45 ` Lorenzo Bianconi
2019-11-20 18:05 ` Ilias Apalodimas
2019-11-20 20:34 ` David Miller