From: Simon Horman <horms@kernel.org>
To: Jeroen de Borst <jeroendb@google.com>
Cc: netdev@vger.kernel.org, hramamurthy@google.com,
davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
willemb@google.com, pabeni@redhat.com,
Bailey Forrest <bcf@google.com>,
Joshua Washington <joshwash@google.com>
Subject: Re: [PATCH net-next v2] gve: make IRQ handlers and page allocation NUMA aware
Date: Tue, 8 Jul 2025 14:31:32 +0100
Message-ID: <20250708133132.GL452973@horms.kernel.org>
In-Reply-To: <20250707210107.2742029-1-jeroendb@google.com>

On Mon, Jul 07, 2025 at 02:01:07PM -0700, Jeroen de Borst wrote:
> From: Bailey Forrest <bcf@google.com>
>
> All memory in GVE is currently allocated without regard for the NUMA
> node of the device. Because access to NUMA-local memory is
> significantly cheaper than access to a remote node, this change attempts
> to ensure that page frags used in the RX path, including page pool
> frags, are allocated on the NUMA node local to the gVNIC device. Note
> that this attempt is best-effort. If necessary, the driver will still
> allocate non-local memory, as __GFP_THISNODE is not passed. Descriptor
> ring allocations are not updated, as dma_alloc_coherent handles that.
>
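For readers not familiar with the mechanism: the page pool API takes a NUMA
node hint directly, so the best-effort node-local setup described above ends
up looking roughly like the sketch below (illustrative only; the .nid field
is the relevant part, the other values are made up and the real call site is
in the patch):

	struct page_pool_params pp = {
		.order		= 0,
		.pool_size	= ring_size,	/* illustrative value */
		.nid		= dev_to_node(&priv->pdev->dev), /* node-local hint */
		.dev		= &priv->pdev->dev,
		.dma_dir	= DMA_FROM_DEVICE,
	};
	struct page_pool *pool = page_pool_create(&pp);

Non-page-pool frags can likewise be requested with alloc_pages_node() using
the same node id; without __GFP_THISNODE the allocator still falls back to
remote nodes, which is the best-effort behaviour described above.
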
> This change also modifies the IRQ affinity setting to only select CPUs
> from the node local to the device, preserving the behavior that TX and
> RX queues of the same index share CPU affinity.
>
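The gve_get_node_mask() helper used in the hunk below isn't quoted here; for
reference, a node-local CPU mask of this kind is typically derived along the
lines of the sketch below (illustrative only, the driver's actual helper may
differ):

	static const struct cpumask *gve_get_node_mask(struct gve_priv *priv)
	{
		int node = dev_to_node(&priv->pdev->dev);

		/* No node affinity known: fall back to all online CPUs */
		if (node == NUMA_NO_NODE)
			return cpu_online_mask;

		return cpumask_of_node(node);
	}
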
> Signed-off-by: Bailey Forrest <bcf@google.com>
> Signed-off-by: Joshua Washington <joshwash@google.com>
> Reviewed-by: Willem de Bruijn <willemb@google.com>
> Signed-off-by: Harshitha Ramamurthy <hramamurthy@google.com>
> Signed-off-by: Jeroen de Borst <jeroendb@google.com>
> ---
> v1: https://lore.kernel.org/netdev/20250627183141.3781516-1-hramamurthy@google.com/
> v2:
> - Utilize kvcalloc_node instead of kvzalloc_node for array-type
> allocations.
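(kvcalloc_node() takes the usual count/size/flags arguments plus the NUMA
node, so the array-type allocations end up shaped roughly like the
illustrative line below; the names are made up, the exact call sites are in
the patch.)

	blocks = kvcalloc_node(num_blocks, sizeof(*blocks), GFP_KERNEL,
			       dev_to_node(&priv->pdev->dev));
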
Thanks for the update.
I note that this addresses Jakub's review of v1.
I have a minor suggestion below, but I don't think it warrants
blocking progress of this patch.
Reviewed-by: Simon Horman <horms@kernel.org>
...
> diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
...
> @@ -533,6 +540,8 @@ static int gve_alloc_notify_blocks(struct gve_priv *priv)
> }
>
> /* Setup the other blocks - the first n-1 vectors */
> + node_mask = gve_get_node_mask(priv);
> + cur_cpu = cpumask_first(node_mask);
> for (i = 0; i < priv->num_ntfy_blks; i++) {
> struct gve_notify_block *block = &priv->ntfy_blocks[i];
> int msix_idx = i;
> @@ -549,9 +558,17 @@ static int gve_alloc_notify_blocks(struct gve_priv *priv)
> goto abort_with_some_ntfy_blocks;
> }
> block->irq = priv->msix_vectors[msix_idx].vector;
> - irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector,
> - get_cpu_mask(i % active_cpus));
> + irq_set_affinity_and_hint(block->irq,
> + cpumask_of(cur_cpu));
> block->irq_db_index = &priv->irq_db_indices[i].index;
> +
> + cur_cpu = cpumask_next(cur_cpu, node_mask);
> + /* Wrap once CPUs in the node have been exhausted, or when
> + * starting RX queue affinities. TX and RX queues of the same
> + * index share affinity.
> + */
> + if (cur_cpu >= nr_cpu_ids || (i + 1) == priv->tx_cfg.max_queues)
> + cur_cpu = cpumask_first(node_mask);

FWIW, maybe this can be written more succinctly as follows.
(Completely untested!)
/* TX and RX queues of the same index share affinity. */
if (i + 1 == priv->tx_cfg.max_queues)
cur_cpu = cpumask_first(node_mask);
else
cur_cpu = cpumask_next_wrap(cur_cpu, node_mask);
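
The explicit nr_cpu_ids check should then no longer be needed, since
cpumask_next_wrap() wraps back to the start of the mask by itself.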
...