From: Marc Zyngier <maz@kernel.org>
To: Johan Hovold <johan@kernel.org>
Cc: Johan Hovold <johan+linaro@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	x86@kernel.org, platform-driver-x86@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org,
	linux-kernel@vger.kernel.org, Hsin-Yi Wang <hsinyi@chromium.org>,
	Mark-PK Tsai <mark-pk.tsai@mediatek.com>
Subject: Re: [PATCH v5 19/19] irqdomain: Switch to per-domain locking
Date: Fri, 10 Feb 2023 11:38:58 +0000
Message-ID: <86bkm1zr59.wl-maz@kernel.org>
In-Reply-To: <Y+YUs6lzalneLyz7@hovoldconsulting.com>

On Fri, 10 Feb 2023 09:56:03 +0000,
Johan Hovold <johan@kernel.org> wrote:
> 
> On Thu, Feb 09, 2023 at 04:00:55PM +0000, Marc Zyngier wrote:
> > On Thu, 09 Feb 2023 13:23:23 +0000,
> > Johan Hovold <johan+linaro@kernel.org> wrote:
> > > 
> > > The IRQ domain structures are currently protected by the global
> > > irq_domain_mutex. Switch to using more fine-grained per-domain locking,
> > > which can speed up parallel probing by reducing lock contention.
> > > 
> > > On a recent arm64 laptop, the total time spent waiting for the locks
> > > during boot drops from 160 to 40 ms on average, while the maximum
> > > aggregate wait time drops from 550 to 90 ms over ten runs for example.
> > > 
> > > Note that the domain lock of the root domain (innermost domain) must be
> > > used for hierarchical domains. For non-hierarchical domains (as for root
> > > domains), the new root pointer is set to the domain itself so that
> > > domain->root->mutex can be used in shared code paths.
> > > 
> > > Also note that hierarchical domains should be constructed using
> > > irq_domain_create_hierarchy() (or irq_domain_add_hierarchy()) to avoid
> > > poking at irqdomain internals. As a safeguard, the lockdep assertion in
> > > irq_domain_set_mapping() will catch any offenders that fail to set the
> > > root domain pointer.
> > > 
> > > Tested-by: Hsin-Yi Wang <hsinyi@chromium.org>
> > > Tested-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
> > > Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
> > > ---
> > >  include/linux/irqdomain.h |  4 +++
> > >  kernel/irq/irqdomain.c    | 61 +++++++++++++++++++++++++--------------
> > >  2 files changed, 44 insertions(+), 21 deletions(-)
> > > 
> > > diff --git a/include/linux/irqdomain.h b/include/linux/irqdomain.h
> > > index 16399de00b48..cad47737a052 100644
> > > --- a/include/linux/irqdomain.h
> > > +++ b/include/linux/irqdomain.h
> > > @@ -125,6 +125,8 @@ struct irq_domain_chip_generic;
> > >   *		core code.
> > >   * @flags:	Per irq_domain flags
> > >   * @mapcount:	The number of mapped interrupts
> > > + * @mutex:	Domain lock, hierarhical domains use root domain's lock
> > 
> > nit: hierarchical
> > 
> > > + * @root:	Pointer to root domain, or containing structure if non-hierarchical
> 
> > > @@ -226,6 +226,17 @@ struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, unsigned int s
> > >  
> > >  	domain->revmap_size = size;
> > >  
> > > +	/*
> > > +	 * Hierarchical domains use the domain lock of the root domain
> > > +	 * (innermost domain).
> > > +	 *
> > > +	 * For non-hierarchical domains (as for root domains), the root
> > > +	 * pointer is set to the domain itself so that domain->root->mutex
> > > +	 * can be used in shared code paths.
> > > +	 */
> > > +	mutex_init(&domain->mutex);
> > > +	domain->root = domain;
> > > +
> > >  	irq_domain_check_hierarchy(domain);
> > >  
> > >  	mutex_lock(&irq_domain_mutex);
> 
> > > @@ -518,7 +529,11 @@ static void irq_domain_set_mapping(struct irq_domain *domain,
> > >  				   irq_hw_number_t hwirq,
> > >  				   struct irq_data *irq_data)
> > >  {
> > > -	lockdep_assert_held(&irq_domain_mutex);
> > > +	/*
> > > +	 * This also makes sure that all domains point to the same root when
> > > +	 * called from irq_domain_insert_irq() for each domain in a hierarchy.
> > > +	 */
> > > +	lockdep_assert_held(&domain->root->mutex);
> > >  
> > >  	if (irq_domain_is_nomap(domain))
> > >  		return;
> > > @@ -540,7 +555,7 @@ static void irq_domain_disassociate(struct irq_domain *domain, unsigned int irq)
> > >  
> > >  	hwirq = irq_data->hwirq;
> > >  
> > > -	mutex_lock(&irq_domain_mutex);
> > > +	mutex_lock(&domain->mutex);
> > 
> > So you made that point about being able to uniformly use root->mutex,
> > which I think is a good invariant. Yet you hardly make use of it. Why?
> 
> I went back and forth over that a bit, but decided to only use
> domain->root->mutex in paths that can be called for hierarchical
> domains (i.e. the "shared code paths" mentioned above).
> 
> Using it in paths that are clearly only called for non-hierarchical
> domains, where domain->root == domain, felt a bit lazy.

My concern here is that, as this code gets further refactored, it may
become much harder to reason about what the correct level of locking
is.
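
As posted, which mutex a given path takes depends on how that path can
be reached. Roughly (illustration only; both lines are lifted from the
hunks above, not new code):

	/* shared path, may be reached through a hierarchy */
	lockdep_assert_held(&domain->root->mutex);

	/* non-hierarchical only, relies on domain->root == domain */
	mutex_lock(&domain->mutex);

Anyone touching one of these paths first has to work out which of the
two classes it falls into.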

> The counterargument is of course that using domain->root->mutex allows
> people to think less about the code they are changing, but that's not
> necessarily always a good thing.

Eventually, non-hierarchical domains should simply die and be replaced
with a single-level hierarchy. Having unified locking in place will
definitely make the required work clearer.
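
With the domain->root == domain invariant the patch already establishes
in __irq_domain_add(), the unified form is simply (a sketch of the
invariant, not a concrete patch):

	/* correct for both flat and hierarchical domains */
	mutex_lock(&domain->root->mutex);
	...
	mutex_unlock(&domain->root->mutex);

so nobody ever has to ask which lock applies to the path they are
touching.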

> Also note that the lockdep asserts in the revmap helpers would catch
> anyone using domain->mutex where they should not (i.e. using
> domain->mutex for a hierarchical domain).

Lockdep is great, but lockdep is a runtime thing. It doesn't help with
reasoning about what gets locked when changing this code.

> > > @@ -1132,6 +1147,7 @@ struct irq_domain *irq_domain_create_hierarchy(struct irq_domain *parent,
> > >  	else
> > >  		domain = irq_domain_create_tree(fwnode, ops, host_data);
> > >  	if (domain) {
> > > +		domain->root = parent->root;
> > >  		domain->parent = parent;
> > >  		domain->flags |= flags;
> > 
> > So we still have a bug here, as we have published a domain that we
> > keep updating. A parallel probe could find it in that window and do
> > something completely wrong.
> 
> Indeed we do, even if device links should make this harder to hit these
> days.
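> 
> (The window is essentially what the hunk above shows; comments added
> purely for illustration:
> 
> 	domain = irq_domain_create_linear(fwnode, size, ops, host_data);
> 	/* domain is already visible on irq_domain_list here */
> 	if (domain) {
> 		domain->root = parent->root;	/* set after publication */
> 		domain->parent = parent;
> 		domain->flags |= flags;
> 	}
> 
> so a parallel probe can observe the domain before root, parent and
> flags are set.)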
> 
> > Splitting the work would help, as per the following patch.
> 
> Looks good to me. Do you want to submit that as a patch that I'll rebase
> on, or should I submit it as part of a v6?

Just take it directly.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.
