From: Jakub Kicinski <kuba@kernel.org>
To: Michael Chan <michael.chan@broadcom.com>
Cc: Jesper Dangaard Brouer <hawk@kernel.org>,
davem@davemloft.net, netdev@vger.kernel.org, edumazet@google.com,
pabeni@redhat.com, gospo@broadcom.com, bpf@vger.kernel.org,
somnath.kotur@broadcom.com,
Ilias Apalodimas <ilias.apalodimas@linaro.org>
Subject: Re: [PATCH net-next 3/3] bnxt_en: Let the page pool manage the DMA mapping
Date: Mon, 31 Jul 2023 13:44:30 -0700 [thread overview]
Message-ID: <20230731134430.5e7f9960@kernel.org> (raw)
In-Reply-To: <CACKFLimJO7Wt90O_F3Nk375rABpAQvKBZhNmBkNzzehYHbk_jA@mail.gmail.com>
On Mon, 31 Jul 2023 13:20:04 -0700 Michael Chan wrote:
> I think I am beginning to understand what the confusion is. These 32K
> page fragments within the page may not belong to the same (GRO)
> packet.
Right.
> So we cannot dma_sync the whole page at the same time.
I wouldn't phrase it like that.
> Without setting PP_FLAG_DMA_SYNC_DEV, the driver code should be
> something like this:
>
> mapping = page_pool_get_dma_addr(page) + offset;
> dma_sync_single_for_device(dev, mapping, BNXT_RX_PAGE_SIZE, bp->rx_dir);
>
> offset may be 0, 32K, etc.
>
> Since the PP_FLAG_DMA_SYNC_DEV logic is not aware of this offset, we
> actually must do our own dma_sync and not use PP_FLAG_DMA_SYNC_DEV in
> this case. Does that sound right?
No, no, all I'm saying is that with the current code (in page pool)
you can't be very intelligent about the syncing. Every time a page
enters the pool, the whole page should be synced. But that's fine,
it's still better to let page pool do the syncing than to try to
do it manually in the driver (since freshly allocated pages do not
have to be synced).
I think the confusion comes partially from the fact that the driver
only ever deals with fragments (32k), but internally page pool does
recycling in full pages (64k). And .max_len is part of the recycling
machinery, so to speak, not part of the allocation machinery.
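For reference, the recycle-time sync inside page pool looks roughly
like the below (paraphrased, not quoted verbatim from page_pool.c).
Note it only knows the pool-wide .offset and .max_len, it has no idea
which 32k fragment the driver handed out:

static void page_pool_dma_sync_for_device(struct page_pool *pool,
					   struct page *page,
					   unsigned int dma_sync_size)
{
	dma_addr_t dma_addr = page_pool_get_dma_addr(page);

	/* sync at most .max_len bytes starting at the pool-wide .offset */
	dma_sync_size = min(dma_sync_size, pool->p.max_len);
	dma_sync_single_range_for_device(pool->p.dev, dma_addr,
					 pool->p.offset, dma_sync_size,
					 pool->p.dma_dir);
}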
tl;dr just set .max_len = PAGE_SIZE and all will be right.
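FWIW on the driver side I'd expect something along these lines. Just a
sketch, the ring-struct names and sizing (rxr, bp->rx_ring_size) are
illustrative, not the exact bnxt patch:

	struct page_pool_params pp = { 0 };

	pp.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV;
	pp.pool_size = bp->rx_ring_size;	/* illustrative sizing */
	pp.nid = dev_to_node(&bp->pdev->dev);
	pp.dev = &bp->pdev->dev;
	pp.dma_dir = bp->rx_dir;
	pp.max_len = PAGE_SIZE;		/* sync the full 64k page on recycle */
	pp.offset = 0;

	rxr->page_pool = page_pool_create(&pp);
	if (IS_ERR(rxr->page_pool)) {
		int err = PTR_ERR(rxr->page_pool);

		rxr->page_pool = NULL;
		return err;
	}

With that in place the per-fragment dma_sync_single_for_device() calls
in the driver can go away.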
Thread overview: 14+ messages
2023-07-28 23:18 [PATCH net-next 0/3] bnxt_en: Add support for page pool Michael Chan
2023-07-28 23:18 ` [PATCH net-next 1/3] bnxt_en: Fix page pool logic for page size >= 64K Michael Chan
2023-07-29 0:35 ` Jakub Kicinski
2023-07-28 23:18 ` [PATCH net-next 2/3] bnxt_en: Use the unified RX page pool buffers for XDP and non-XDP Michael Chan
2023-07-28 23:18 ` [PATCH net-next 3/3] bnxt_en: Let the page pool manage the DMA mapping Michael Chan
2023-07-29 0:42 ` Jakub Kicinski
2023-07-31 17:47 ` Jesper Dangaard Brouer
2023-07-31 18:00 ` Jakub Kicinski
2023-07-31 18:16 ` Michael Chan
2023-07-31 18:44 ` Jakub Kicinski
2023-07-31 20:20 ` Michael Chan
2023-07-31 20:44 ` Jakub Kicinski [this message]
2023-07-31 21:11 ` Michael Chan
2023-08-01 17:06 ` Jesper Dangaard Brouer