Linux-ARM-Kernel Archive on lore.kernel.org
From: Will Deacon <will@kernel.org>
To: Catalin Marinas <catalin.marinas@arm.com>
Cc: linux-arm-kernel@lists.infradead.org,
	Marc Zyngier <maz@kernel.org>, Oliver Upton <oupton@kernel.org>,
	Lorenzo Pieralisi <lpieralisi@kernel.org>,
	Sudeep Holla <sudeep.holla@kernel.org>,
	James Morse <james.morse@arm.com>,
	Mark Rutland <mark.rutland@arm.com>,
	Mark Brown <broonie@kernel.org>,
	kvmarm@lists.linux.dev
Subject: Re: [PATCH 2/4] arm64: tlb: Pass the corresponding mm to __tlbi_sync_s1ish()
Date: Thu, 12 Mar 2026 15:00:09 +0000	[thread overview]
Message-ID: <abLU-cJLxgdpJ9C4@willie-the-truck> (raw)
In-Reply-To: <aaq3Vun0ZSSdXyXV@arm.com>

On Fri, Mar 06, 2026 at 11:15:34AM +0000, Catalin Marinas wrote:
> On Thu, Mar 05, 2026 at 07:19:15PM +0000, Catalin Marinas wrote:
> > On Thu, Mar 05, 2026 at 02:33:18PM +0000, Will Deacon wrote:
> > > On Mon, Mar 02, 2026 at 04:57:55PM +0000, Catalin Marinas wrote:
> > > > @@ -391,7 +391,7 @@ static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
> > > >   */
> > > >  static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
> > > >  {
> > > > -	__tlbi_sync_s1ish();
> > > > +	__tlbi_sync_s1ish(NULL);
> > > 
> > > Hmm, it seems a bit rubbish to pass NULL here as that means that we'll
> > > deploy the mitigation regardless of the mm flags when finishing the
> > > batch.
> > > 
> > > It also looks like we could end up doing the workaround multiple times
> > > if arch_tlbbatch_add_pending() is passed a large enough region that
> > > the __flush_tlb_range_limit_excess() check fires.
> > > 
> > > So perhaps we should stash the mm in 'struct arch_tlbflush_unmap_batch'
> > > alongside some state to track whether or not we have uncompleted TLB
> > > maintenance in flight?
> > 
> > The problem is that arch_tlbbatch_flush() can be called to synchronise
> > multiple mm structures that were touched by TTU. We can't have the mm in
> > arch_tlbflush_unmap_batch. But we can track if any of the mms had
> > MMCF_SME_DVMSYNC flag set, something like below (needs testing, tidying
> > up). TBH, I did not notice any problems when benchmarking, I guess
> > because we haven't exercised the TTU path much, so I did not bother to
> > optimise it.
> > 
> > For the TTU case, I don't think we need to worry about the excess limit
> > and doing the IPI twice. But I'll double check the code paths tomorrow.
> > 
> > diff --git a/arch/arm64/include/asm/tlbbatch.h b/arch/arm64/include/asm/tlbbatch.h
> > index fedb0b87b8db..e756eaca6cb8 100644
> > --- a/arch/arm64/include/asm/tlbbatch.h
> > +++ b/arch/arm64/include/asm/tlbbatch.h
> > @@ -7,6 +7,8 @@ struct arch_tlbflush_unmap_batch {
> >  	 * For arm64, HW can do tlb shootdown, so we don't
> >  	 * need to record cpumask for sending IPI
> >  	 */
> > +
> > +	bool sme_dvmsync;
> >  };
> > 
> >  #endif /* _ARCH_ARM64_TLBBATCH_H */
> > diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
> > index e3ea0246a4f4..c1141a684854 100644
> > --- a/arch/arm64/include/asm/tlbflush.h
> > +++ b/arch/arm64/include/asm/tlbflush.h
> > @@ -201,10 +201,15 @@ do {										\
> >   * Complete broadcast TLB maintenance issued by the host which invalidates
> >   * stage 1 information in the host's own translation regime.
> >   */
> > -static inline void __tlbi_sync_s1ish(struct mm_struct *mm)
> > +static inline void __tlbi_sync_s1ish_no_sme_dvmsync(void)
> >  {
> >  	dsb(ish);
> >  	__repeat_tlbi_sync(vale1is, 0);
> > +}
> > +
> > +static inline void __tlbi_sync_s1ish(struct mm_struct *mm)
> > +{
> > +	__tlbi_sync_s1ish_no_sme_dvmsync();
> >  	sme_dvmsync(mm);
> >  }
> > 
> > @@ -408,7 +413,11 @@ static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
> >   */
> >  static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
> >  {
> > -	__tlbi_sync_s1ish(NULL);
> > +	__tlbi_sync_s1ish_no_sme_dvmsync();
> > +	if (batch->sme_dvmsync) {
> > +		batch->sme_dvmsync = false;
> > +		sme_dvmsync(NULL);
> > +	}
> >  }
> > 
> >  /*
> > @@ -613,6 +622,8 @@ static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *b
> >  		struct mm_struct *mm, unsigned long start, unsigned long end)
> >  {
> >  	__flush_tlb_range_nosync(mm, start, end, PAGE_SIZE, true, 3);
> > +	if (test_bit(ilog2(MMCF_SME_DVMSYNC), &mm->context.flags))
> > +		batch->sme_dvmsync = true;
> >  }
> 
> While writing a reply to your other comments, I realised why this
> wouldn't work (I had something similar but dropped it) - we can have the
> flag cleared here (or mm_cpumask() if we are to track per-mm) but we
> have not issued the DVMSync yet. The task may start using SME before
> arch_tlbbatch_flush() and we'd just miss it. Any check on whether to
> issue the IPI, such as reading the flags, needs to happen after the
> DVMSync.
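> 
> To illustrate, a hypothetical interleaving:
> 
>   CPU0 (reclaim)                          CPU1 (task)
>   arch_tlbbatch_add_pending():
>     test_bit(MMCF_SME_DVMSYNC) -> clear
>                                           first SME use;
>                                           MMCF_SME_DVMSYNC gets set
>   arch_tlbbatch_flush():
>     batch->sme_dvmsync == false,
>     so sme_dvmsync() is skipped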

Ah, yeah. I wonder if it's worth detecting the change of mm in
arch_tlbbatch_add_pending() and then proactively doing the DSB on CPUs
with the erratum? I suppose it depends on how often SME is being used.
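
Something like the below, perhaps (completely untested sketch; the
'prev_mm' field and the capability name are made up for illustration):

	static inline void
	arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
				  struct mm_struct *mm,
				  unsigned long start, unsigned long end)
	{
		/*
		 * If the batch moves on to a new mm, complete the broadcast
		 * maintenance for the previous one eagerly on CPUs with the
		 * erratum, so the workaround is applied per-mm rather than
		 * unconditionally at arch_tlbbatch_flush() time.
		 */
		if (cpus_have_final_cap(ARM64_WORKAROUND_SME_DVMSYNC) &&
		    batch->prev_mm && batch->prev_mm != mm)
			__tlbi_sync_s1ish(batch->prev_mm);
		batch->prev_mm = mm;

		__flush_tlb_range_nosync(mm, start, end, PAGE_SIZE, true, 3);
	}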

Will


Thread overview: 24+ messages
2026-03-02 16:57 [PATCH 0/4] arm64: Work around C1-Pro erratum 4193714 (CVE-2026-0995) Catalin Marinas
2026-03-02 16:57 ` [PATCH 1/4] arm64: tlb: Use __tlbi_sync_s1ish_kernel() for kernel TLB maintenance Catalin Marinas
2026-03-03 13:12   ` Mark Rutland
2026-03-05 11:27     ` Catalin Marinas
2026-03-09 12:12       ` Mark Rutland
2026-03-02 16:57 ` [PATCH 2/4] arm64: tlb: Pass the corresponding mm to __tlbi_sync_s1ish() Catalin Marinas
2026-03-05 14:33   ` Will Deacon
2026-03-05 19:19     ` Catalin Marinas
2026-03-06 11:15       ` Catalin Marinas
2026-03-12 15:00         ` Will Deacon [this message]
2026-03-13 16:27           ` Catalin Marinas
2026-03-02 16:57 ` [PATCH 3/4] arm64: errata: Work around early CME DVMSync acknowledgement Catalin Marinas
2026-03-05 14:32   ` Will Deacon
2026-03-06 12:00     ` Catalin Marinas
2026-03-06 12:19       ` Catalin Marinas
2026-03-09 10:13       ` Vladimir Murzin
2026-03-10 15:35         ` Catalin Marinas
2026-03-12 14:55           ` Will Deacon
2026-03-13 15:48             ` Catalin Marinas
2026-03-13 15:58               ` Will Deacon
2026-03-17 12:09             ` Mark Rutland
2026-03-02 16:57 ` [PATCH 4/4] KVM: arm64: Add SMC hook for SME dvmsync erratum Catalin Marinas
2026-03-05 14:32   ` Will Deacon
2026-03-06 12:52     ` Catalin Marinas
