From: Thomas Gleixner <tglx@linutronix.de>
To: 'Guanjun' <guanjun@linux.alibaba.com>,
corbet@lwn.net, axboe@kernel.dk, mst@redhat.com,
jasowang@redhat.com, xuanzhuo@linux.alibaba.com,
eperezma@redhat.com, vgoyal@redhat.com, stefanha@redhat.com,
miklos@szeredi.hu, peterz@infradead.org,
akpm@linux-foundation.org, paulmck@kernel.org, thuth@redhat.com,
rostedt@goodmis.org, bp@alien8.de, xiongwei.song@windriver.com,
linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-block@vger.kernel.org, virtualization@lists.linux.dev,
linux-fsdevel@vger.kernel.org
Cc: guanjun@linux.alibaba.com
Subject: Re: [PATCH RFC v1 1/2] genirq/affinity: add support for limiting managed interrupts
Date: Thu, 31 Oct 2024 11:35:25 +0100
Message-ID: <87v7x8woeq.ffs@tglx>
In-Reply-To: <20241031074618.3585491-2-guanjun@linux.alibaba.com>

On Thu, Oct 31 2024 at 15:46, guanjun@linux.alibaba.com wrote:
> #ifdef CONFIG_SMP
>
> +static unsigned int __read_mostly managed_irqs_per_node;
> +static struct cpumask managed_irqs_cpumsk[MAX_NUMNODES] __cacheline_aligned_in_smp = {
> + [0 ... MAX_NUMNODES-1] = {CPU_BITS_ALL}
> +};
>
> +static void __group_prepare_affinity(struct cpumask *premask,
> + cpumask_var_t *node_to_cpumask)
> +{
> + nodemask_t nodemsk = NODE_MASK_NONE;
> + unsigned int ncpus, n;
> +
> + get_nodes_in_cpumask(node_to_cpumask, premask, &nodemsk);
> +
> + for_each_node_mask(n, nodemsk) {
> + cpumask_and(&managed_irqs_cpumsk[n], &managed_irqs_cpumsk[n], premask);
> + cpumask_and(&managed_irqs_cpumsk[n], &managed_irqs_cpumsk[n], node_to_cpumask[n]);
How is this managed_irqs_cpumsk array protected against concurrency?
> + ncpus = cpumask_weight(&managed_irqs_cpumsk[n]);
> + if (ncpus < managed_irqs_per_node) {
> + /* Reset node n to current node cpumask */
> + cpumask_copy(&managed_irqs_cpumsk[n], node_to_cpumask[n]);
This whole logic is incomprehensible, and aside from the concurrency
problem it's broken when CPUs are made present at run-time, because these
cpu masks are static and represent the stale state of the last
invocation.
Given the limitations of the x86 vector space, which is not going away
anytime soon, there are only two options IMO to handle such a scenario.
1) Tell the nvme/block layer to disable queue affinity management
2) Restrict the devices and queues to the nodes they sit on
Thanks,
tglx
Thread overview: 9+ messages
2024-10-31 7:46 [PATCH RFC v1 0/2] Support for limiting the number of managed interrupts on every node per allocation 'Guanjun'
2024-10-31 7:46 ` [PATCH RFC v1 1/2] genirq/affinity: add support for limiting managed interrupts 'Guanjun'
2024-10-31 10:35 ` Thomas Gleixner [this message]
2024-10-31 10:50 ` Ming Lei
[not found] ` <43FD1116-C188-4729-A3AB-C2A0F5A087D2@linux.alibaba.com>
2024-11-01 3:34 ` Jason Wang
2024-11-01 3:03 ` mapicccy
2024-11-01 23:37 ` Thomas Gleixner
2024-11-01 7:06 ` Jiri Slaby
2024-10-31 7:46 ` [PATCH RFC v1 2/2] genirq/cpuhotplug: Handle managed IRQs when the last CPU hotplug out in the affinity 'Guanjun'