public inbox for linux-arm-kernel@lists.infradead.org
From: Nicolin Chen <nicolinc@nvidia.com>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: Pranjal Shrivastava <praan@google.com>, <will@kernel.org>,
	<jean-philippe@linaro.org>, <robin.murphy@arm.com>,
	<joro@8bytes.org>, <balbirs@nvidia.com>,
	<miko.lenczewski@arm.com>, <peterz@infradead.org>,
	<kevin.tian@intel.com>, <linux-arm-kernel@lists.infradead.org>,
	<iommu@lists.linux.dev>, <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v9 6/7] iommu/arm-smmu-v3: Add arm_smmu_invs based arm_smmu_domain_inv_range()
Date: Tue, 27 Jan 2026 12:14:37 -0800	[thread overview]
Message-ID: <aXkcrXQH8pHgE7Ft@Asurada-Nvidia> (raw)
In-Reply-To: <20260127191938.GR1134360@nvidia.com>

On Tue, Jan 27, 2026 at 03:19:38PM -0400, Jason Gunthorpe wrote:
> On Tue, Jan 27, 2026 at 10:37:44AM -0800, Nicolin Chen wrote:
> > On Tue, Jan 27, 2026 at 02:23:48PM -0400, Jason Gunthorpe wrote:
> > > On Tue, Jan 27, 2026 at 10:07:09AM -0800, Nicolin Chen wrote:
> > > > > My understanding has been that this invalidation can run from an IRQ
> > > > > context - we permit the use of the DMA API from an interrupt handler?
> > > > > 
> > > > > I though that for rwsem the read side does not require the _irqsave,
> > > > > even if it is in an irq context, unless the write side runs from an
> > > > > IRQ. 
> > > > 
> > > > Hmm, is "rwsem" a typo? Because it's an rwlock_t, which is a spinlock :-/
> > > 
> > > Yeah, sorry
> > > 
> > > > > Here the write side always runs from a process context.
> > > > > 
> > > > > So the write side will block the IRQ which ensures we don't spin
> > > > > during read in an IRQ.
> > > > 
> > > > And, does write_lock_irqsave() disable IRQs globally, or only locally?
> > > > 
> > > > Documentation/locking/locktypes.rst mentions "local_irq_disable()"..
> > > 
> > > It will only disable the local IRQs; since it is a spin-type lock, an
> > > IRQ on another CPU can still spin until it is unlocked.
> > > 
> > > The main issue is: if this CPU takes an IRQ while the write side is
> > > locked, and the IRQ handler spins on the read side, then it will
> > > never unlock.
> > 
> > Yea, that sounds unsafe. I'll send a v11 with read_lock_irqsave().
> 
> I'm explaining why it is safe now, the write side takes the irqsave so
> the above can't happen.

Sorry, I misunderstood..

> There is no case where the read side needs to block IRQs, because if
> the read side succeeds and an IRQ then happens and tries to take
> another read lock, it will succeed rather than spin.

Yea, I also went a bit deeper.

It seems to depend on CONFIG_QUEUED_RWLOCKS (arm64 selects
ARCH_USE_QUEUED_RWLOCKS, so it's =y):

config QUEUED_RWLOCKS
        def_bool y if ARCH_USE_QUEUED_RWLOCKS
        depends on SMP && !PREEMPT_RT

where a reader will not get blocked in our particular use case:

void __lockfunc queued_read_lock_slowpath(struct qrwlock *lock)
{
        /*
         * Readers come here when they cannot get the lock without waiting
         */
        if (unlikely(in_interrupt())) {
                /*
                 * Readers in interrupt context will get the lock immediately
                 * if the writer is just waiting (not holding the lock yet),
                 * so spin with ACQUIRE semantics until the lock is available
                 * without waiting in the queue.
                 */
                atomic_cond_read_acquire(&lock->cnts, !(VAL & _QW_LOCKED));
                return;
        ...

And I don't see any non-hacky way to end up with CONFIG_QUEUED_RWLOCKS=n
here, unless CONFIG_PREEMPT_RT=y, which would be a different ball game
that I assume SMMUv3 might not be completely compatible with.

Thanks
Nicolin



Thread overview: 45+ messages
2025-12-19 20:11 [PATCH v9 0/7] iommu/arm-smmu-v3: Introduce an RCU-protected invalidation array Nicolin Chen
2025-12-19 20:11 ` [PATCH v9 1/7] iommu/arm-smmu-v3: Explicitly set smmu_domain->stage for SVA Nicolin Chen
2026-01-23  9:49   ` Pranjal Shrivastava
2025-12-19 20:11 ` [PATCH v9 2/7] iommu/arm-smmu-v3: Add an inline arm_smmu_domain_free() Nicolin Chen
2026-01-23  9:50   ` Pranjal Shrivastava
2025-12-19 20:11 ` [PATCH v9 3/7] iommu/arm-smmu-v3: Introduce a per-domain arm_smmu_invs array Nicolin Chen
2026-01-23  9:53   ` Pranjal Shrivastava
2026-01-23 17:03   ` Will Deacon
2026-01-23 17:35     ` Nicolin Chen
2026-01-23 17:51       ` Will Deacon
2026-01-23 17:56         ` Nicolin Chen
2026-01-23 19:16           ` Jason Gunthorpe
2026-01-23 19:18             ` Nicolin Chen
2026-01-26 14:54             ` Will Deacon
2026-01-26 15:21               ` Jason Gunthorpe
2025-12-19 20:11 ` [PATCH v9 4/7] iommu/arm-smmu-v3: Pre-allocate a per-master invalidation array Nicolin Chen
2026-01-23  9:54   ` Pranjal Shrivastava
2025-12-19 20:11 ` [PATCH v9 5/7] iommu/arm-smmu-v3: Populate smmu_domain->invs when attaching masters Nicolin Chen
2025-12-19 20:11 ` [PATCH v9 6/7] iommu/arm-smmu-v3: Add arm_smmu_invs based arm_smmu_domain_inv_range() Nicolin Chen
2026-01-23  9:48   ` Pranjal Shrivastava
2026-01-23 13:56     ` Jason Gunthorpe
2026-01-27 16:38     ` Nicolin Chen
2026-01-27 17:08       ` Jason Gunthorpe
2026-01-27 18:07         ` Nicolin Chen
2026-01-27 18:23           ` Jason Gunthorpe
2026-01-27 18:37             ` Nicolin Chen
2026-01-27 19:19               ` Jason Gunthorpe
2026-01-27 20:14                 ` Nicolin Chen [this message]
2026-01-28  0:05                   ` Jason Gunthorpe
2026-01-23 17:05   ` Will Deacon
2026-01-23 17:10     ` Will Deacon
2026-01-23 17:43       ` Nicolin Chen
2026-01-23 20:03       ` Jason Gunthorpe
2026-01-26 13:01         ` Will Deacon
2026-01-26 15:20           ` Jason Gunthorpe
2026-01-26 16:02             ` Will Deacon
2026-01-26 16:09               ` Jason Gunthorpe
2026-01-26 18:56                 ` Will Deacon
2026-01-27  3:14                   ` Nicolin Chen
2026-01-26 17:50             ` Nicolin Chen
2025-12-19 20:11 ` [PATCH v9 7/7] iommu/arm-smmu-v3: Perform per-domain invalidations using arm_smmu_invs Nicolin Chen
2026-01-23 17:07   ` Will Deacon
2026-01-23 17:47     ` Nicolin Chen
2026-01-23 19:59     ` Jason Gunthorpe
2026-01-19 17:10 ` [PATCH v9 0/7] iommu/arm-smmu-v3: Introduce an RCU-protected invalidation array Nicolin Chen
