From: Dawei Li <dawei.li@linux.dev>
To: Jason Gunthorpe <jgg@ziepe.ca>
Cc: will@kernel.org, robin.murphy@arm.com, joro@8bytes.org,
linux-arm-kernel@lists.infradead.org, iommu@lists.linux.dev,
linux-kernel@vger.kernel.org, set_pte_at@outlook.com,
stable@vger.kernel.org, dawei.li@linux.dev
Subject: Re: [PATCH] iommu/arm-smmu-v3: Maintain valid access attributes for non-coherent SMMU
Date: Sun, 4 Jan 2026 21:35:32 +0800
Message-ID: <20260104133532.GA173992@wendao-VirtualBox>
In-Reply-To: <20260102184113.GA125261@ziepe.ca>
Jason,
On Fri, Jan 02, 2026 at 02:41:13PM -0400, Jason Gunthorpe wrote:
> On Mon, Dec 29, 2025 at 08:23:54AM +0800, Dawei Li wrote:
> > According to SMMUv3 architecture specification, IO-coherent access for
> > SMMU is supported for:
> > - Translation table walks.
> > - Fetches of L1STD, STE, L1CD and CD.
> > - Command queue, Event queue and PRI queue access.
> > - GERROR, CMD_SYNC, Event queue and PRI queue MSIs, if supported
>
> I was recently looking at this too.. IMHO this is not really a clean
> description of what this patch is doing.
>
> I would write this description as:
>
> When the SMMU does a DMA for itself it can set various memory access
> attributes which control how the interconnect should execute the
> DMA. Linux uses these to differentiate DMA that must snoop the cache
> and DMA that must bypass it because Linux has allocated non-coherent
> on the CPU.
>
> In Table "13.8 Attributes for SMMU-originated accesses" each of the
> different types of DMA is categorized and the specific bits
> controlling the memory attribute for the fetch are identified.
It turns out I missed some of the DMA types listed in 13.8, thanks for
the heads-up.
>
> Make this consistent globally. If Linux has cache-flushed the buffer,
> or allocated a DMA-incoherent buffer, then it should set the
> non-caching memory attribute so the DMA matches.
>
> This is important for some of the allocations where Linux is currently
> allocating DMA coherent memory: nothing has made the CPU cache
> coherent, so the SMMU doing any coherent access to that memory may
> result in cache inconsistencies.
>
> This may solve problems on systems where the SMMU driver thinks the
> SMMU is non-coherent but the SMMU and the interconnect in fact
> selectively support coherence, and setting the wrong memory attributes
> will leave cached accesses broken.
>
> [and then if you have a specific SOC that shows an issue please
> describe the HW]
>
> > +static __always_inline bool smmu_coherent(struct arm_smmu_device *smmu)
> > +{
> > + return !!(smmu->features & ARM_SMMU_FEAT_COHERENCY);
> > +}
> > +
> > /* High-level queue accessors */
> > -static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
> > +static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent,
> > + struct arm_smmu_device *smmu)
> > {
> > memset(cmd, 0, 1 << CMDQ_ENT_SZ_SHIFT);
> > cmd[0] |= FIELD_PREP(CMDQ_0_OP, ent->opcode);
> > @@ -358,8 +364,13 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
> > } else {
> > cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_SEV);
> > }
> > - cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSH, ARM_SMMU_SH_ISH);
> > - cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIATTR, ARM_SMMU_MEMATTR_OIWB);
> > + if (smmu_coherent(smmu)) {
> > + cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSH, ARM_SMMU_SH_ISH);
> > + cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIATTR, ARM_SMMU_MEMATTR_OIWB);
> > + } else {
> > + cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSH, ARM_SMMU_SH_OSH);
> > + cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIATTR, ARM_SMMU_MEMATTR_OINC);
> > + }
>
> And then please go through your patch and add comments actually
> explaining what the DMA is and what memory is being reached by it -
> since it is not always very clear from the ARM mnemonics
Acked.
>
> For instance, this is:
> /* DMA for "CMDQ MSI" which targets q->base_dma allocated by arm_smmu_init_one_queue() */
>
> > @@ -1612,11 +1624,18 @@ void arm_smmu_make_cdtable_ste(struct arm_smmu_ste *target,
> > (cd_table->cdtab_dma & STRTAB_STE_0_S1CTXPTR_MASK) |
> > FIELD_PREP(STRTAB_STE_0_S1CDMAX, cd_table->s1cdmax));
> >
> > + if (smmu_coherent(smmu)) {
> > + val = FIELD_PREP(STRTAB_STE_1_S1CIR, STRTAB_STE_1_S1C_CACHE_WBRA) |
> > + FIELD_PREP(STRTAB_STE_1_S1COR, STRTAB_STE_1_S1C_CACHE_WBRA) |
> > + FIELD_PREP(STRTAB_STE_1_S1CSH, ARM_SMMU_SH_ISH);
> > + } else {
> > + val = FIELD_PREP(STRTAB_STE_1_S1CIR, STRTAB_STE_1_S1C_CACHE_NC) |
> > + FIELD_PREP(STRTAB_STE_1_S1COR, STRTAB_STE_1_S1C_CACHE_NC) |
> > + FIELD_PREP(STRTAB_STE_1_S1CSH, ARM_SMMU_SH_OSH);
> > + }
>
> This one is "CD fetch" allocated by arm_smmu_alloc_cd_ptr()
>
> etc
>
> And note that the above will need this hunk too:
>
> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-iommufd.c
> @@ -432,6 +432,14 @@ size_t arm_smmu_get_viommu_size(struct device *dev,
> !(smmu->features & ARM_SMMU_FEAT_S2FWB))
> return 0;
>
> + /*
> + * When running non-coherent we can't support S2FWB since it will also
> + * force a coherent CD fetch, aside from the question of what
> + * S2FWB/CANWBS even does with non-coherent SMMUs.
> + */
> + if (!smmu_coherent(smmu))
> + return 0;
I was wondering why S2FWB could affect CD fetches until I read 13.8.
Does the diff below suffice?
@@ -1614,8 +1619,12 @@ void arm_smmu_make_cdtable_ste(struct arm_smmu_ste *target,
{
struct arm_smmu_ctx_desc_cfg *cd_table = &master->cd_table;
struct arm_smmu_device *smmu = master->smmu;
+ bool fwb;
u64 val;
+ fwb = !!(smmu->features & ARM_SMMU_FEAT_S2FWB);
+
memset(target, 0, sizeof(*target));
target->data[0] = cpu_to_le64(
STRTAB_STE_0_V |
@@ -1624,14 +1633,24 @@ void arm_smmu_make_cdtable_ste(struct arm_smmu_ste *target,
(cd_table->cdtab_dma & STRTAB_STE_0_S1CTXPTR_MASK) |
FIELD_PREP(STRTAB_STE_0_S1CDMAX, cd_table->s1cdmax));
+ /*
+ * DMA for "CD fetch" targets cd_table->linear.table or cd_table->l2.l1tab
+ * allocated by arm_smmu_alloc_cd_ptr().
+ */
if (smmu_coherent(smmu)) {
val = FIELD_PREP(STRTAB_STE_1_S1CIR, STRTAB_STE_1_S1C_CACHE_WBRA) |
FIELD_PREP(STRTAB_STE_1_S1COR, STRTAB_STE_1_S1C_CACHE_WBRA) |
FIELD_PREP(STRTAB_STE_1_S1CSH, ARM_SMMU_SH_ISH);
} else {
- val = FIELD_PREP(STRTAB_STE_1_S1CIR, STRTAB_STE_1_S1C_CACHE_NC) |
- FIELD_PREP(STRTAB_STE_1_S1COR, STRTAB_STE_1_S1C_CACHE_NC) |
- FIELD_PREP(STRTAB_STE_1_S1CSH, ARM_SMMU_SH_OSH);
+ if (!fwb) {
+ val = FIELD_PREP(STRTAB_STE_1_S1CIR, STRTAB_STE_1_S1C_CACHE_NC) |
+ FIELD_PREP(STRTAB_STE_1_S1COR, STRTAB_STE_1_S1C_CACHE_NC) |
+ FIELD_PREP(STRTAB_STE_1_S1CSH, ARM_SMMU_SH_OSH);
+ } else {
+ dev_warn(smmu->dev, "Inconsistency between COHACC and S2FWB\n");
+ /* FIXME: bailing out here leaves the STE only partially built */
+ return;
+ }
}
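
Or, if it reads better, we could resolve the inconsistency once at probe
time so the STE writers never see the COHACC=0 + S2FWB combination. A
rough sketch only (placement in the feature-probing path and the exact
policy are my assumption, not tested):

	/*
	 * Sketch: if the SMMU itself is not IO-coherent, don't advertise
	 * S2FWB at all, so we never have to special-case it when building
	 * STEs with non-cacheable fetch attributes.
	 */
	if (!(smmu->features & ARM_SMMU_FEAT_COHERENCY))
		smmu->features &= ~ARM_SMMU_FEAT_S2FWB;

If I am reading arm_smmu_get_viommu_size() right, that would also cover
the vIOMMU case above through the existing S2FWB check.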
>
> > @@ -3746,7 +3765,7 @@ int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
> > q->cons_reg = page + cons_off;
> > q->ent_dwords = dwords;
> >
> > - q->q_base = Q_BASE_RWA;
> > + q->q_base = smmu_coherent(smmu) ? Q_BASE_RWA : 0;
>
> CMDQ fetch, though do we even need to manage RWA? Isn't it ignored if
> IC/OC/SH are set to their non-cacheable values?
My first thought was "why don't we just whack them all?", yet the spec
reads:
"Cache allocation hints are present in each _BASE register and are ignored
unless a cacheable type is used for the table or queue to which the
register corresponds"
You are right, I will remove it in v2.
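
In other words, revert to the original line and rely on IC/OC/SH making
the hint a no-op for the non-coherent case (a sketch of what I have in
mind for v2, untested):

-	q->q_base = smmu_coherent(smmu) ? Q_BASE_RWA : 0;
+	q->q_base = Q_BASE_RWA;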
>
> etc..
>
> Jason
Thanks,
Dawei