From: Marc Zyngier <maz@kernel.org>
To: Vignesh Raghavendra <vigneshr@ti.com>
Cc: Nishanth Menon <nm@ti.com>, Tero Kristo <kristo@kernel.org>,
Santosh Shilimkar <ssantosh@kernel.org>,
Thomas Gleixner <tglx@linutronix.de>,
<linux-arm-kernel@lists.infradead.org>,
<linux-kernel@vger.kernel.org>
Subject: Re: [RFC PATCH 2/2] irqchip: irq-ti-sci-inta: Introduce IRQ affinity support
Date: Thu, 26 Jan 2023 14:18:46 +0000
Message-ID: <86y1ppl6pl.wl-maz@kernel.org>
In-Reply-To: <20230122081607.959474-3-vigneshr@ti.com>

On Sun, 22 Jan 2023 08:16:07 +0000,
Vignesh Raghavendra <vigneshr@ti.com> wrote:
>
> Add support for setting IRQ affinity for VINTs which have only one
> event mapped to them. This just involves changing the parent IRQ's
> affinity (GIC/INTR). Flag VINTs whose affinity has been configured so
> that no further events are aggregated/mapped onto them.
>
> Signed-off-by: Vignesh Raghavendra <vigneshr@ti.com>
> ---
> drivers/irqchip/irq-ti-sci-inta.c | 39 +++++++++++++++++++++++++++++++
> 1 file changed, 39 insertions(+)
>
> diff --git a/drivers/irqchip/irq-ti-sci-inta.c b/drivers/irqchip/irq-ti-sci-inta.c
> index f1419d24568e..237cb4707cb8 100644
> --- a/drivers/irqchip/irq-ti-sci-inta.c
> +++ b/drivers/irqchip/irq-ti-sci-inta.c
> @@ -64,6 +64,7 @@ struct ti_sci_inta_event_desc {
> * @events: Array of event descriptors assigned to this vint.
> * @parent_virq: Linux IRQ number that gets attached to parent
> * @vint_id: TISCI vint ID
> + * @affinity_managed: Flag to indicate VINT affinity is managed
> */
> struct ti_sci_inta_vint_desc {
> struct irq_domain *domain;
> @@ -72,6 +73,7 @@ struct ti_sci_inta_vint_desc {
> struct ti_sci_inta_event_desc events[MAX_EVENTS_PER_VINT];
> unsigned int parent_virq;
> u16 vint_id;
> + bool affinity_managed;
> };
>
> /**
> @@ -334,6 +336,8 @@ static struct ti_sci_inta_event_desc *ti_sci_inta_alloc_irq(struct irq_domain *d
> vint_id = ti_sci_get_free_resource(inta->vint);
> if (vint_id == TI_SCI_RESOURCE_NULL) {
> list_for_each_entry(vint_desc, &inta->vint_list, list) {
> + if (vint_desc->affinity_managed)
> + continue;
> free_bit = find_first_zero_bit(vint_desc->event_map,
> MAX_EVENTS_PER_VINT);
> if (free_bit != MAX_EVENTS_PER_VINT)
> @@ -434,6 +438,7 @@ static int ti_sci_inta_request_resources(struct irq_data *data)
> return PTR_ERR(event_desc);
>
> data->chip_data = event_desc;
> + irq_data_update_effective_affinity(data, cpu_online_mask);
>
> return 0;
> }
> @@ -504,11 +509,45 @@ static void ti_sci_inta_ack_irq(struct irq_data *data)
> ti_sci_inta_manage_event(data, VINT_STATUS_OFFSET);
> }
>
> +#ifdef CONFIG_SMP
> +static int ti_sci_inta_set_affinity(struct irq_data *d,
> + const struct cpumask *mask_val, bool force)
> +{
> + struct ti_sci_inta_event_desc *event_desc;
> + struct ti_sci_inta_vint_desc *vint_desc;
> + struct irq_data *parent_irq_data;
> +
> + if (cpumask_equal(irq_data_get_effective_affinity_mask(d), mask_val))
> + return 0;
> +
> + event_desc = irq_data_get_irq_chip_data(d);
> + if (event_desc) {
> + vint_desc = to_vint_desc(event_desc, event_desc->vint_bit);
> +
> + /*
> + * Cannot set affinity if there is more than one event
> + * mapped to same VINT
> + */
> + if (bitmap_weight(vint_desc->event_map, MAX_EVENTS_PER_VINT) > 1)
> + return -EINVAL;
> +
> + vint_desc->affinity_managed = true;
> +
> + irq_data_update_effective_affinity(d, mask_val);
> + parent_irq_data = irq_get_irq_data(vint_desc->parent_virq);
> + if (parent_irq_data->chip->irq_set_affinity)
> + return parent_irq_data->chip->irq_set_affinity(parent_irq_data, mask_val, force);

This looks completely wrong.

You still have a chained irqchip on all paths, and have to do some
horrible probing to work out:

- which parent interrupt this is
- how many interrupts are connected to it

And then the fun begins:

- You have one interrupt that is standalone, so its affinity can be
  moved
- An unrelated driver gets probed, and one of its interrupts gets
  lumped together with the one above
- Now it cannot be moved anymore, and userspace complains (a write to
  /proc/irq/<n>/smp_affinity that used to succeed suddenly fails)

The rule is very simple: chained irqchip, no affinity management.
Either you reserve a pool of direct interrupts that have affinity
management and no muxing, or you keep the current approach.

But I'm strongly opposed to this sort of approach.
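
To be explicit about the pool option, this is roughly the shape I have
in mind (an untested sketch, not even compile-tested; the "direct" flag
and its allocation-time wiring are made up, the rest reuses the
structures from your patch):

        /*
         * Sketch only: whether a VINT is "direct" (exactly one event,
         * never muxed) is decided once, at allocation time, from a
         * reserved pool. Only direct VINTs get a working
         * irq_set_affinity, which simply follows the parent.
         */
        static int ti_sci_inta_set_affinity(struct irq_data *d,
                                            const struct cpumask *mask_val,
                                            bool force)
        {
                struct ti_sci_inta_event_desc *event_desc;
                struct ti_sci_inta_vint_desc *vint_desc;
                struct irq_data *parent_irq_data;

                event_desc = irq_data_get_irq_chip_data(d);
                if (!event_desc)
                        return -EINVAL;

                vint_desc = to_vint_desc(event_desc, event_desc->vint_bit);

                /*
                 * Muxed VINTs refuse affinity changes outright, so
                 * there is nothing to count and nothing to probe.
                 */
                if (!vint_desc->direct)
                        return -EINVAL;

                parent_irq_data = irq_get_irq_data(vint_desc->parent_virq);
                if (!parent_irq_data || !parent_irq_data->chip->irq_set_affinity)
                        return -EINVAL;

                irq_data_update_effective_affinity(d, mask_val);
                return parent_irq_data->chip->irq_set_affinity(parent_irq_data,
                                                               mask_val, force);
        }

The allocator would then skip direct VINTs when looking for a slot to
mux a new event into, instead of flagging them after the fact, so
nothing can later be lumped onto an interrupt whose affinity userspace
has been allowed to move.
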
Thanks,
M.
--
Without deviation from the norm, progress is not possible.