From: Thierry Reding <thierry.reding@gmail.com>
To: "David S . Miller" <davem@davemloft.net>
Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com>,
Alexandre Torgue <alexandre.torgue@st.com>,
Jose Abreu <joabreu@synopsys.com>,
Florian Fainelli <f.fainelli@gmail.com>,
Jakub Kicinski <jakub.kicinski@netronome.com>,
Jon Hunter <jonathanh@nvidia.com>,
Bitan Biswas <bbiswas@nvidia.com>,
netdev@vger.kernel.org, linux-tegra@vger.kernel.org
Subject: Re: [PATCH net-next] net: stmmac: Fix page pool size
Date: Mon, 23 Sep 2019 12:00:31 +0200 [thread overview]
Message-ID: <20190923100031.GB11084@ulmo> (raw)
In-Reply-To: <20190923095915.11588-1-thierry.reding@gmail.com>
On Mon, Sep 23, 2019 at 11:59:15AM +0200, Thierry Reding wrote:
> From: Thierry Reding <treding@nvidia.com>
>
> The size of individual pages in the page pool is given by an order. The
> order is the binary logarithm of the number of pages that make up one of
> the pages in the pool. However, the driver currently passes the number
> of pages rather than the order, so it ends up wasting quite a bit of
> memory.
>
> Fix this by taking the binary logarithm and passing that in the order
> field.
>
> Fixes: 2af6106ae949 ("net: stmmac: Introducing support for Page Pool")
> Signed-off-by: Thierry Reding <treding@nvidia.com>
> ---
> drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
I fumbled the git format-patch incantation. This should've been marked
v2.
Thierry
> diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> index ecd461207dbc..f8c90dba6db8 100644
> --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> @@ -1550,13 +1550,15 @@ static int alloc_dma_rx_desc_resources(struct stmmac_priv *priv)
> for (queue = 0; queue < rx_count; queue++) {
> struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
> struct page_pool_params pp_params = { 0 };
> + unsigned int num_pages;
>
> rx_q->queue_index = queue;
> rx_q->priv_data = priv;
>
> pp_params.flags = PP_FLAG_DMA_MAP;
> pp_params.pool_size = DMA_RX_SIZE;
> - pp_params.order = DIV_ROUND_UP(priv->dma_buf_sz, PAGE_SIZE);
> + num_pages = DIV_ROUND_UP(priv->dma_buf_sz, PAGE_SIZE);
> + pp_params.order = ilog2(num_pages);
> pp_params.nid = dev_to_node(priv->device);
> pp_params.dev = priv->device;
> pp_params.dma_dir = DMA_FROM_DEVICE;
> --
> 2.23.0
>
Thread overview: 3+ messages
2019-09-23 9:59 [PATCH net-next] net: stmmac: Fix page pool size Thierry Reding
2019-09-23 10:00 ` Thierry Reding [this message]
2019-09-26 7:28 ` David Miller