public inbox for linux-pci@vger.kernel.org
From: "Danilo Krummrich" <dakr@kernel.org>
To: "Jinhui Guo" <guojinhui.liam@bytedance.com>
Cc: <alexander.h.duyck@linux.intel.com>, <alexanderduyck@fb.com>,
	<bhelgaas@google.com>, <bvanassche@acm.org>,
	<dan.j.williams@intel.com>, <gregkh@linuxfoundation.org>,
	<helgaas@kernel.org>, <rafael@kernel.org>, <tj@kernel.org>,
	<linux-kernel@vger.kernel.org>, <linux-pci@vger.kernel.org>
Subject: Re: [PATCH 2/3] driver core: Add NUMA-node awareness to the synchronous probe path
Date: Wed, 07 Jan 2026 19:22:15 +0100	[thread overview]
Message-ID: <DFIKE4Z23Q0O.2ZC7FR40XO8SR@kernel.org> (raw)
In-Reply-To: <20260107175548.1792-3-guojinhui.liam@bytedance.com>

On Wed Jan 7, 2026 at 6:55 PM CET, Jinhui Guo wrote:
> + * __exec_on_numa_node - Execute a function on a specific NUMA node synchronously
> + * @node: Target NUMA node ID
> + * @func: The wrapper function to execute
> + * @arg1: First argument (void *)
> + * @arg2: Second argument (void *)
> + *
> + * Returns the result of the function execution, or -ENODEV if initialization fails.
> + * If the node is invalid or offline, it falls back to local execution.
> + */
> +static int __exec_on_numa_node(int node, numa_func_t func, void *arg1, void *arg2)
> +{
> +	struct numa_work_ctx ctx;
> +
> +	/* Fallback to local execution if the node is invalid or offline */
> +	if (node < 0 || node >= MAX_NUMNODES || !node_online(node))
> +		return func(arg1, arg2);

Just a quick drive-by comment (I’ll go through it more thoroughly later).

What about the case where we are already on the requested node?

Also, we should probably pin the task's CPU affinity to the node for the
duration of the func() call, to prevent migration off the node mid-probe.
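Roughly something like the following, for the local-execution path (entirely
untested, and glossing over the race where user space changes the task's
affinity concurrently; cpumask_of_node() and set_cpus_allowed_ptr() are the
existing helpers I have in mind):

```c
	const struct cpumask *mask = cpumask_of_node(node);
	cpumask_var_t saved;
	int ret;

	if (!alloc_cpumask_var(&saved, GFP_KERNEL))
		return func(arg1, arg2);	/* best effort fallback */

	/* Remember the current affinity, pin to the target node. */
	cpumask_copy(saved, current->cpus_ptr);
	set_cpus_allowed_ptr(current, mask);

	ret = func(arg1, arg2);

	/* Restore the original affinity. */
	set_cpus_allowed_ptr(current, saved);
	free_cpumask_var(saved);

	return ret;
```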

> +
> +	ctx.func = func;
> +	ctx.arg1 = arg1;
> +	ctx.arg2 = arg2;
> +	ctx.result = -ENODEV;
> +	INIT_WORK_ONSTACK(&ctx.work, numa_work_func);
> +
> +	/* Use system_dfl_wq to allow execution on the specific node. */
> +	queue_work_node(node, system_dfl_wq, &ctx.work);
> +	flush_work(&ctx.work);
> +	destroy_work_on_stack(&ctx.work);
> +
> +	return ctx.result;
> +}


Thread overview: 9+ messages
2026-01-07 17:55 [PATCH 0/3] Add NUMA-node-aware synchronous probing to driver core Jinhui Guo
2026-01-07 17:55 ` [PATCH 1/3] driver core: Introduce helper function __device_attach_driver_scan() Jinhui Guo
2026-01-17 13:36   ` Danilo Krummrich
2026-01-07 17:55 ` [PATCH 2/3] driver core: Add NUMA-node awareness to the synchronous probe path Jinhui Guo
2026-01-07 18:22   ` Danilo Krummrich [this message]
2026-01-08  8:28     ` Jinhui Guo
2026-01-17 14:03   ` Danilo Krummrich
2026-01-20 17:23     ` Jinhui Guo
2026-01-07 17:55 ` [PATCH 3/3] PCI: Clean up NUMA-node awareness in pci_bus_type probe Jinhui Guo
