From: Thomas Gleixner <tglx@linutronix.de>
To: Radu Rendec <rrendec@redhat.com>,
Manivannan Sadhasivam <mani@kernel.org>
Cc: "Daniel Tsai" <danielsftsai@google.com>,
"Marek Behún" <kabel@kernel.org>,
"Krishna Chaitanya Chundru" <quic_krichai@quicinc.com>,
"Bjorn Helgaas" <bhelgaas@google.com>,
"Rob Herring" <robh@kernel.org>,
"Krzysztof Wilczyński" <kwilczynski@kernel.org>,
"Lorenzo Pieralisi" <lpieralisi@kernel.org>,
"Jingoo Han" <jingoohan1@gmail.com>,
"Brian Masney" <bmasney@redhat.com>,
"Eric Chanudet" <echanude@redhat.com>,
"Alessandro Carminati" <acarmina@redhat.com>,
"Jared Kangas" <jkangas@redhat.com>,
linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 1/3] genirq: Add interrupt redirection infrastructure
Date: Tue, 07 Oct 2025 22:04:06 +0200
Message-ID: <87ecre32dl.ffs@tglx>
In-Reply-To: <20251006223813.1688637-2-rrendec@redhat.com>
On Mon, Oct 06 2025 at 18:38, Radu Rendec wrote:
> +
> +static bool demux_redirect_remote(struct irq_desc *desc)
> +{
> +#ifdef CONFIG_SMP
Please have a function and a stub for the !SMP case.
> + const struct cpumask *m = irq_data_get_effective_affinity_mask(&desc->irq_data);
> + unsigned int target_cpu = READ_ONCE(desc->redirect.fallback_cpu);
What does fallback_cpu mean in this context? That's confusing at best.
> + if (!cpumask_test_cpu(smp_processor_id(), m)) {
Why cpumask_test?
    if (target != smp_processor_id())
should be good enough and understandable :)
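Something along these lines, as a rough sketch (keeping the struct member
names from the patch purely for illustration):

#ifdef CONFIG_SMP
static bool demux_redirect_remote(struct irq_desc *desc)
{
	unsigned int target = READ_ONCE(desc->redirect.fallback_cpu);

	/* Handle locally when the interrupt already arrived on the target CPU */
	if (target == smp_processor_id())
		return false;

	/* Protect against shutdown */
	if (desc->action)
		irq_work_queue_on(&desc->redirect.work, target);
	return true;
}
#else
/* !SMP: there is no remote CPU to redirect to */
static inline bool demux_redirect_remote(struct irq_desc *desc) { return false; }
#endif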
> + /* Protect against shutdown */
> + if (desc->action)
> + irq_work_queue_on(&desc->redirect.work, target_cpu);
Can you please document why this is correct vs. CPU hotplug (especially unplug)?
I think it is, but I didn't look too carefully.
> +/**
> + * generic_handle_demux_domain_irq - Invoke the handler for a hardware interrupt
> + * of a demultiplexing domain.
> + * @domain: The domain where to perform the lookup
> + * @hwirq: The hardware interrupt number to convert to a logical one
> + *
> + * Returns: True on success, or false if lookup has failed
> + */
> +bool generic_handle_demux_domain_irq(struct irq_domain *domain, unsigned int hwirq)
> +{
> + struct irq_desc *desc = irq_resolve_mapping(domain, hwirq);
> +
> + if (unlikely(!desc))
> + return false;
> +
> + scoped_guard(raw_spinlock, &desc->lock) {
> + if (desc->irq_data.chip->irq_pre_redirect)
> + desc->irq_data.chip->irq_pre_redirect(&desc->irq_data);
I'd rather see that in the redirect function above.
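I.e. (sketch only) something like this inside demux_redirect_remote(),
right before queueing the work:

	/* Let the chip fix up state before handing the interrupt to another CPU */
	if (desc->irq_data.chip->irq_pre_redirect)
		desc->irq_data.chip->irq_pre_redirect(&desc->irq_data);
	if (desc->action)
		irq_work_queue_on(&desc->redirect.work, target);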
> + if (demux_redirect_remote(desc))
> + return true;
> + }
> + return !handle_irq_desc(desc);
> +}
> +EXPORT_SYMBOL_GPL(generic_handle_demux_domain_irq);
> +
> #endif
>
> /* Dynamic interrupt handling */
> diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
> index c94837382037e..ed8f8b2667b0b 100644
> --- a/kernel/irq/manage.c
> +++ b/kernel/irq/manage.c
> @@ -35,6 +35,16 @@ static int __init setup_forced_irqthreads(char *arg)
> early_param("threadirqs", setup_forced_irqthreads);
> #endif
>
> +#ifdef CONFIG_SMP
> +static inline void synchronize_irqwork(struct irq_desc *desc)
> +{
> + /* Synchronize pending or on the fly redirect work */
> + irq_work_sync(&desc->redirect.work);
> +}
> +#else
> +static inline void synchronize_irqwork(struct irq_desc *desc) { }
> +#endif
> +
> static int __irq_get_irqchip_state(struct irq_data *d, enum irqchip_irq_state which, bool *state);
>
> static void __synchronize_hardirq(struct irq_desc *desc, bool sync_chip)
> @@ -43,6 +53,8 @@ static void __synchronize_hardirq(struct irq_desc *desc, bool sync_chip)
> bool inprogress;
>
> do {
> + synchronize_irqwork(desc);
That can't work. irq_work_sync() requires interrupts and preemption
enabled, but __synchronize_hardirq() can be invoked from interrupt or
preemption disabled context.

And it's not required at all because this is no different from a
hardware device interrupt: either it is already handled (IRQ_INPROGRESS)
on some other CPU or it is not. That code can't anticipate an interrupt
which is in flight somewhere in the system but not yet raised and
handled on a CPU.
Though you want to invoke it in __synchronize_irq() _before_ invoking
__synchronize_hardirq().
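I.e. roughly (a sketch, assuming your synchronize_irqwork() helper stays
as in the patch):

static void __synchronize_irq(struct irq_desc *desc)
{
	/* Flush pending or in flight redirect work before waiting for handlers */
	synchronize_irqwork(desc);
	__synchronize_hardirq(desc, true);
	/*
	 * No hardirq handler is running anymore. Now wait for the threaded
	 * handlers to complete.
	 */
	wait_event(desc->wait_for_threads, !atomic_read(&desc->threads_active));
}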
Thanks,
tglx