From: Yunsheng Lin <linyunsheng@huawei.com>
To: Lorenzo Bianconi <lorenzo@kernel.org>, <netdev@vger.kernel.org>
Cc: <lorenzo.bianconi@redhat.com>, <davem@davemloft.net>,
	<kuba@kernel.org>, <edumazet@google.com>, <pabeni@redhat.com>,
	<bpf@vger.kernel.org>, <toke@redhat.com>,
	<willemdebruijn.kernel@gmail.com>, <jasowang@redhat.com>,
	<sdf@google.com>, <hawk@kernel.org>,
	<ilias.apalodimas@linaro.org>
Subject: Re: [PATCH v6 net-next 1/5] net: add generic per-cpu page_pool allocator
Date: Mon, 29 Jan 2024 20:05:24 +0800	[thread overview]
Message-ID: <f6273e01-a826-4182-a5b5-564b51f2d9ae@huawei.com> (raw)
In-Reply-To: <5b0222d3df382c22fe0fa96154ae7b27189f7ecd.1706451150.git.lorenzo@kernel.org>

On 2024/1/28 22:20, Lorenzo Bianconi wrote:

>  #ifdef CONFIG_LOCKDEP
>  /*
>   * register_netdevice() inits txq->_xmit_lock and sets lockdep class
> @@ -11686,6 +11690,27 @@ static void __init net_dev_struct_check(void)
>   *
>   */
>  
> +#define SD_PAGE_POOL_RING_SIZE	256

I might have missed it, but is there a reason 256 was chosen here? Do we
need a different value for different page sizes? With a 64K page size, a
256-entry ring means we might end up reserving up to 16MB of memory for
each CPU.

> +static int net_page_pool_alloc(int cpuid)
> +{
> +#if IS_ENABLED(CONFIG_PAGE_POOL)
> +	struct page_pool_params page_pool_params = {
> +		.pool_size = SD_PAGE_POOL_RING_SIZE,
> +		.nid = NUMA_NO_NODE,
> +	};
> +	struct page_pool *pp_ptr;
> +
> +	pp_ptr = page_pool_create_percpu(&page_pool_params, cpuid);
> +	if (IS_ERR(pp_ptr)) {
> +		pp_ptr = NULL;

Unnecessary NULL setting? pp_ptr is a local variable that goes out of
scope on return.

> +		return -ENOMEM;
> +	}
> +
> +	per_cpu(page_pool, cpuid) = pp_ptr;
> +#endif
> +	return 0;
> +}
> +
>  /*
>   *       This is called single threaded during boot, so no need
>   *       to take the rtnl semaphore.
> @@ -11738,6 +11763,9 @@ static int __init net_dev_init(void)
>  		init_gro_hash(&sd->backlog);
>  		sd->backlog.poll = process_backlog;
>  		sd->backlog.weight = weight_p;
> +
> +		if (net_page_pool_alloc(i))
> +			goto out;
>  	}
>  
>  	dev_boot_phase = 0;
> @@ -11765,6 +11793,18 @@ static int __init net_dev_init(void)
>  	WARN_ON(rc < 0);
>  	rc = 0;
>  out:
> +	if (rc < 0) {
> +		for_each_possible_cpu(i) {
> +			struct page_pool *pp_ptr = this_cpu_read(page_pool);

this_cpu_read() always reads the copy belonging to the CPU executing the
loop, regardless of the loop index i, so every iteration sees the same
pool. Shouldn't this be per_cpu(page_pool, i), matching the assignment
below?

> +
> +			if (!pp_ptr)
> +				continue;
> +
> +			page_pool_destroy(pp_ptr);
> +			per_cpu(page_pool, i) = NULL;
> +		}
> +	}
> +
>  	return rc;
>  }

