Date: Thu, 7 May 2026 09:22:05 +0000
From: Mostafa Saleh
To: Jason Gunthorpe
Cc: iommu@lists.linux.dev, Jonathan Hunter, Joerg Roedel,
	linux-arm-kernel@lists.infradead.org, linux-tegra@vger.kernel.org,
	Robin Murphy, Thierry Reding, Krishna Reddy, Will Deacon,
	David Matlack, Pasha Tatashin, patches@lists.linux.dev,
	Samiullah Khawaja
Subject: Re: [PATCH 4/9] iommu/arm-smmu-v3: Convert arm_smmu_cmdq_batch cmds
 to struct arm_smmu_cmd
Message-ID:
References: <0-v1-b7dc0a0d4aa0+3723d-smmu_no_cmdq_ent_jgg@nvidia.com>
 <4-v1-b7dc0a0d4aa0+3723d-smmu_no_cmdq_ent_jgg@nvidia.com>
X-Mailing-List: linux-tegra@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <4-v1-b7dc0a0d4aa0+3723d-smmu_no_cmdq_ent_jgg@nvidia.com>

On Fri, May 01, 2026 at 11:29:13AM -0300, Jason Gunthorpe wrote:
> Convert the batch's type to also get the remaining helper functions to
> use the new type and complete replacing naked u64s with the new struct.
>
> The low-level queue_write()/queue_read()/queue_remove_raw() functions
> remain u64-based since they are shared by event and PRI queues which
> have different entry sizes.
>
> Signed-off-by: Jason Gunthorpe

Reviewed-by: Mostafa Saleh

Thanks,
Mostafa

> ---
>  .../arm/arm-smmu-v3/arm-smmu-v3-iommufd.c     | 24 +++---
>  drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c   | 74 ++++++++++---------
>  drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h   |  5 +-
>  .../iommu/arm/arm-smmu-v3/tegra241-cmdqv.c    |  8 +-
>  4 files changed, 58 insertions(+), 53 deletions(-)
>
> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-iommufd.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-iommufd.c
> index ddae0b07c76b50..1e9f7d2de34414 100644
> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-iommufd.c
> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-iommufd.c
> @@ -300,7 +300,7 @@ static int arm_vsmmu_vsid_to_sid(struct arm_vsmmu *vsmmu, u32 vsid, u32 *sid)
>  /* This is basically iommu_viommu_arm_smmuv3_invalidate in u64 for conversion */
>  struct arm_vsmmu_invalidation_cmd {
>  	union {
> -		u64 cmd[2];
> +		struct arm_smmu_cmd cmd;
>  		struct iommu_viommu_arm_smmuv3_invalidate ucmd;
>  	};
>  };
> @@ -316,32 +316,32 @@ static int arm_vsmmu_convert_user_cmd(struct arm_vsmmu *vsmmu,
>  				      struct arm_vsmmu_invalidation_cmd *cmd)
>  {
>  	/* Commands are le64 stored in u64 */
> -	cmd->cmd[0] = le64_to_cpu(cmd->ucmd.cmd[0]);
> -	cmd->cmd[1] = le64_to_cpu(cmd->ucmd.cmd[1]);
> +	cmd->cmd.data[0] = le64_to_cpu(cmd->ucmd.cmd[0]);
> +	cmd->cmd.data[1] = le64_to_cpu(cmd->ucmd.cmd[1]);
>
> -	switch (cmd->cmd[0] & CMDQ_0_OP) {
> +	switch (cmd->cmd.data[0] & CMDQ_0_OP) {
>  	case CMDQ_OP_TLBI_NSNH_ALL:
>  		/* Convert to NH_ALL */
> -		cmd->cmd[0] = CMDQ_OP_TLBI_NH_ALL |
> +		cmd->cmd.data[0] = CMDQ_OP_TLBI_NH_ALL |
>  			      FIELD_PREP(CMDQ_TLBI_0_VMID, vsmmu->vmid);
> -		cmd->cmd[1] = 0;
> +		cmd->cmd.data[1] = 0;
>  		break;
>  	case CMDQ_OP_TLBI_NH_VA:
>  	case CMDQ_OP_TLBI_NH_VAA:
>  	case CMDQ_OP_TLBI_NH_ALL:
>  	case CMDQ_OP_TLBI_NH_ASID:
> -		cmd->cmd[0] &= ~CMDQ_TLBI_0_VMID;
> -		cmd->cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, vsmmu->vmid);
> +		cmd->cmd.data[0] &= ~CMDQ_TLBI_0_VMID;
> +		cmd->cmd.data[0] |=
> +			FIELD_PREP(CMDQ_TLBI_0_VMID, vsmmu->vmid);
>  		break;
>  	case CMDQ_OP_ATC_INV:
>  	case CMDQ_OP_CFGI_CD:
>  	case CMDQ_OP_CFGI_CD_ALL: {
> -		u32 sid, vsid = FIELD_GET(CMDQ_CFGI_0_SID, cmd->cmd[0]);
> +		u32 sid, vsid = FIELD_GET(CMDQ_CFGI_0_SID, cmd->cmd.data[0]);
>
>  		if (arm_vsmmu_vsid_to_sid(vsmmu, vsid, &sid))
>  			return -EIO;
> -		cmd->cmd[0] &= ~CMDQ_CFGI_0_SID;
> -		cmd->cmd[0] |= FIELD_PREP(CMDQ_CFGI_0_SID, sid);
> +		cmd->cmd.data[0] &= ~CMDQ_CFGI_0_SID;
> +		cmd->cmd.data[0] |= FIELD_PREP(CMDQ_CFGI_0_SID, sid);
>  		break;
>  	}
>  	default:
> @@ -386,7 +386,7 @@ int arm_vsmmu_cache_invalidate(struct iommufd_viommu *viommu,
>  			continue;
>
>  		/* FIXME always uses the main cmdq rather than trying to group by type */
> -		ret = arm_smmu_cmdq_issue_cmdlist(smmu, &smmu->cmdq, last->cmd,
> +		ret = arm_smmu_cmdq_issue_cmdlist(smmu, &smmu->cmdq, &last->cmd,
>  						  cur - last, true);
>  		if (ret) {
>  			cur--;
> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> index 67d23e9c54804e..b3ef001ce80d23 100644
> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> @@ -268,9 +268,12 @@ static int queue_remove_raw(struct arm_smmu_queue *q, u64 *ent)
>  }
>
>  /* High-level queue accessors */
> -static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
> +static int arm_smmu_cmdq_build_cmd(struct arm_smmu_cmd *cmd_out,
> +				   struct arm_smmu_cmdq_ent *ent)
>  {
> -	memset(cmd, 0, 1 << CMDQ_ENT_SZ_SHIFT);
> +	u64 *cmd = cmd_out->data;
> +
> +	memset(cmd_out, 0, sizeof(*cmd_out));
>  	cmd[0] |= FIELD_PREP(CMDQ_0_OP, ent->opcode);
>
>  	switch (ent->opcode) {
> @@ -390,7 +393,8 @@ static bool arm_smmu_cmdq_needs_busy_polling(struct arm_smmu_device *smmu,
>  	return smmu->options & ARM_SMMU_OPT_TEGRA241_CMDQV;
>  }
>
> -static void arm_smmu_cmdq_build_sync_cmd(u64 *cmd, struct arm_smmu_device *smmu,
> +static void arm_smmu_cmdq_build_sync_cmd(struct arm_smmu_cmd *cmd,
> +					 struct arm_smmu_device *smmu,
>  					 struct arm_smmu_cmdq *cmdq, u32 prod)
>  {
>  	struct arm_smmu_queue *q = &cmdq->q;
> @@ -409,7 +413,8 @@ static void arm_smmu_cmdq_build_sync_cmd(u64 *cmd, struct arm_smmu_device *smmu,
>
>  	arm_smmu_cmdq_build_cmd(cmd, &ent);
>  	if (arm_smmu_cmdq_needs_busy_polling(smmu, cmdq))
> -		u64p_replace_bits(cmd, CMDQ_SYNC_0_CS_NONE, CMDQ_SYNC_0_CS);
> +		u64p_replace_bits(&cmd->data[0], CMDQ_SYNC_0_CS_NONE,
> +				  CMDQ_SYNC_0_CS);
>  }
>
>  void __arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu,
> @@ -422,9 +427,8 @@ void __arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu,
>  		[CMDQ_ERR_CERROR_ATC_INV_IDX]	= "ATC invalidate timeout",
>  	};
>  	struct arm_smmu_queue *q = &cmdq->q;
> -
>  	int i;
> -	u64 cmd[CMDQ_ENT_DWORDS];
> +	struct arm_smmu_cmd cmd;
>  	u32 cons = readl_relaxed(q->cons_reg);
>  	u32 idx = FIELD_GET(CMDQ_CONS_ERR, cons);
>  	struct arm_smmu_cmdq_ent cmd_sync = {
> @@ -457,17 +461,18 @@ void __arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu,
>  	 * We may have concurrent producers, so we need to be careful
>  	 * not to touch any of the shadow cmdq state.
>  	 */
> -	queue_read(cmd, Q_ENT(q, cons), q->ent_dwords);
> +	queue_read(cmd.data, Q_ENT(q, cons), q->ent_dwords);
>  	dev_err(smmu->dev, "skipping command in error state:\n");
> -	for (i = 0; i < ARRAY_SIZE(cmd); ++i)
> -		dev_err(smmu->dev, "\t0x%016llx\n", (unsigned long long)cmd[i]);
> +	for (i = 0; i < ARRAY_SIZE(cmd.data); ++i)
> +		dev_err(smmu->dev, "\t0x%016llx\n", (unsigned long long)cmd.data[i]);
>
>  	/* Convert the erroneous command into a CMD_SYNC */
> -	arm_smmu_cmdq_build_cmd(cmd, &cmd_sync);
> +	arm_smmu_cmdq_build_cmd(&cmd, &cmd_sync);
>  	if (arm_smmu_cmdq_needs_busy_polling(smmu, cmdq))
> -		u64p_replace_bits(cmd, CMDQ_SYNC_0_CS_NONE, CMDQ_SYNC_0_CS);
> +		u64p_replace_bits(&cmd.data[0], CMDQ_SYNC_0_CS_NONE,
> +				  CMDQ_SYNC_0_CS);
>
> -	queue_write(Q_ENT(q, cons), cmd, q->ent_dwords);
> +	queue_write(Q_ENT(q, cons), cmd.data, q->ent_dwords);
>  }
>
>  static void arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu)
> @@ -767,7 +772,8 @@ static int arm_smmu_cmdq_poll_until_sync(struct arm_smmu_device *smmu,
>  	return __arm_smmu_cmdq_poll_until_consumed(smmu, cmdq, llq);
>  }
>
> -static void arm_smmu_cmdq_write_entries(struct arm_smmu_cmdq *cmdq, u64 *cmds,
> +static void arm_smmu_cmdq_write_entries(struct arm_smmu_cmdq *cmdq,
> +					struct arm_smmu_cmd *cmds,
>  					u32 prod, int n)
>  {
>  	int i;
> @@ -777,10 +783,9 @@ static void arm_smmu_cmdq_write_entries(struct arm_smmu_cmdq *cmdq, u64 *cmds,
>  	};
>
>  	for (i = 0; i < n; ++i) {
> -		u64 *cmd = &cmds[i * CMDQ_ENT_DWORDS];
> -
>  		prod = queue_inc_prod_n(&llq, i);
> -		queue_write(Q_ENT(&cmdq->q, prod), cmd, CMDQ_ENT_DWORDS);
> +		queue_write(Q_ENT(&cmdq->q, prod), cmds[i].data,
> +			    ARRAY_SIZE(cmds[i].data));
>  	}
>  }
>
> @@ -801,10 +806,11 @@ static void arm_smmu_cmdq_write_entries(struct arm_smmu_cmdq *cmdq, u64 *cmds,
>   * CPU will appear before any of the commands from the other CPU.
>   */
>  int arm_smmu_cmdq_issue_cmdlist(struct arm_smmu_device *smmu,
> -				struct arm_smmu_cmdq *cmdq, u64 *cmds, int n,
> +				struct arm_smmu_cmdq *cmdq,
> +				struct arm_smmu_cmd *cmds, int n,
>  				bool sync)
>  {
> -	u64 cmd_sync[CMDQ_ENT_DWORDS];
> +	struct arm_smmu_cmd cmd_sync;
>  	u32 prod;
>  	unsigned long flags;
>  	bool owner;
> @@ -847,8 +853,9 @@ int arm_smmu_cmdq_issue_cmdlist(struct arm_smmu_device *smmu,
>  	arm_smmu_cmdq_write_entries(cmdq, cmds, llq.prod, n);
>  	if (sync) {
>  		prod = queue_inc_prod_n(&llq, n);
> -		arm_smmu_cmdq_build_sync_cmd(cmd_sync, smmu, cmdq, prod);
> -		queue_write(Q_ENT(&cmdq->q, prod), cmd_sync, CMDQ_ENT_DWORDS);
> +		arm_smmu_cmdq_build_sync_cmd(&cmd_sync, smmu, cmdq, prod);
> +		queue_write(Q_ENT(&cmdq->q, prod), cmd_sync.data,
> +			    ARRAY_SIZE(cmd_sync.data));
>
>  		/*
>  		 * In order to determine completion of our CMD_SYNC, we must
> @@ -925,7 +932,7 @@ static int __arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
>  				     bool sync)
>  {
>  	return arm_smmu_cmdq_issue_cmdlist(
> -		smmu, arm_smmu_get_cmdq(smmu, cmd), cmd->data, 1, sync);
> +		smmu, arm_smmu_get_cmdq(smmu, cmd), cmd, 1, sync);
>  }
>
>  static int arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
> @@ -954,7 +961,7 @@ static void arm_smmu_cmdq_batch_init(struct arm_smmu_device *smmu,
>  {
>  	struct arm_smmu_cmd cmd;
>
> -	arm_smmu_cmdq_build_cmd(cmd.data, ent);
> +	arm_smmu_cmdq_build_cmd(&cmd, ent);
>  	arm_smmu_cmdq_batch_init_cmd(smmu, cmds, &cmd);
>  }
>
> @@ -966,9 +973,8 @@ static void arm_smmu_cmdq_batch_add(struct arm_smmu_device *smmu,
>  		(smmu->options & ARM_SMMU_OPT_CMDQ_FORCE_SYNC);
>  	struct arm_smmu_cmd cmd;
>  	bool unsupported_cmd;
> -	int index;
>
> -	if (unlikely(arm_smmu_cmdq_build_cmd(cmd.data, ent))) {
> +	if (unlikely(arm_smmu_cmdq_build_cmd(&cmd, ent))) {
>  		dev_warn(smmu->dev, "ignoring unknown CMDQ opcode 0x%x\n",
>  			 ent->opcode);
>  		return;
> @@ -987,9 +993,7 @@ static void arm_smmu_cmdq_batch_add(struct arm_smmu_device *smmu,
>  		arm_smmu_cmdq_batch_init_cmd(smmu, cmds, &cmd);
>  	}
>
> -	index = cmds->num * CMDQ_ENT_DWORDS;
> -	memcpy(&cmds->cmds[index], cmd.data, sizeof(cmd.data));
> -	cmds->num++;
> +	cmds->cmds[cmds->num++] = cmd;
>  }
>
>  static int arm_smmu_cmdq_batch_submit(struct arm_smmu_device *smmu,
> @@ -1025,7 +1029,7 @@ static void arm_smmu_page_response(struct device *dev, struct iopf_fault *unused
>  		break;
>  	}
>
> -	arm_smmu_cmdq_build_cmd(hw_cmd.data, &cmd);
> +	arm_smmu_cmdq_build_cmd(&hw_cmd, &cmd);
>  	arm_smmu_cmdq_issue_cmd(master->smmu, &hw_cmd);
>
>  	/*
> @@ -1865,7 +1869,7 @@ static void arm_smmu_ste_writer_sync_entry(struct arm_smmu_entry_writer *writer)
>  	};
>  	struct arm_smmu_cmd cmd;
>
> -	arm_smmu_cmdq_build_cmd(cmd.data, &ent);
> +	arm_smmu_cmdq_build_cmd(&cmd, &ent);
>  	arm_smmu_cmdq_issue_cmd_with_sync(writer->master->smmu, &cmd);
>  }
>
> @@ -1899,7 +1903,7 @@ static void arm_smmu_write_ste(struct arm_smmu_master *master, u32 sid,
>  		} };
>  		struct arm_smmu_cmd prefetch_cmd;
>
> -		arm_smmu_cmdq_build_cmd(prefetch_cmd.data, &prefetch_ent);
> +		arm_smmu_cmdq_build_cmd(&prefetch_cmd, &prefetch_ent);
>  		arm_smmu_cmdq_issue_cmd(smmu, &prefetch_cmd);
>  	}
>  }
> @@ -2339,7 +2343,7 @@ static void arm_smmu_handle_ppr(struct arm_smmu_device *smmu, u64 *evt)
>  		};
>  		struct arm_smmu_cmd cmd;
>
> -		arm_smmu_cmdq_build_cmd(cmd.data, &ent);
> +		arm_smmu_cmdq_build_cmd(&cmd, &ent);
>  		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
>  	}
>  }
> @@ -3462,7 +3466,7 @@ static void arm_smmu_inv_flush_iotlb_tag(struct arm_smmu_inv *inv)
>  	}
>
>  	cmd.opcode = inv->nsize_opcode;
> -	arm_smmu_cmdq_build_cmd(hw_cmd.data, &cmd);
> +	arm_smmu_cmdq_build_cmd(&hw_cmd, &cmd);
>  	arm_smmu_cmdq_issue_cmd_with_sync(inv->smmu, &hw_cmd);
>  }
>
> @@ -4875,18 +4879,18 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu)
>
>  	/* Invalidate any cached configuration */
>  	ent.opcode = CMDQ_OP_CFGI_ALL;
> -	arm_smmu_cmdq_build_cmd(cmd.data, &ent);
> +	arm_smmu_cmdq_build_cmd(&cmd, &ent);
>  	arm_smmu_cmdq_issue_cmd_with_sync(smmu, &cmd);
>
>  	/* Invalidate any stale TLB entries */
>  	if (smmu->features & ARM_SMMU_FEAT_HYP) {
>  		ent.opcode = CMDQ_OP_TLBI_EL2_ALL;
> -		arm_smmu_cmdq_build_cmd(cmd.data, &ent);
> +		arm_smmu_cmdq_build_cmd(&cmd, &ent);
>  		arm_smmu_cmdq_issue_cmd_with_sync(smmu, &cmd);
>  	}
>
>  	ent.opcode = CMDQ_OP_TLBI_NSNH_ALL;
> -	arm_smmu_cmdq_build_cmd(cmd.data, &ent);
> +	arm_smmu_cmdq_build_cmd(&cmd, &ent);
>  	arm_smmu_cmdq_issue_cmd_with_sync(smmu, &cmd);
>
>  	/* Event queue */
> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
> index 6d73f6b63e64a9..1fe6917448b774 100644
> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
> @@ -651,7 +651,7 @@ static inline bool arm_smmu_cmdq_supports_cmd(struct arm_smmu_cmdq *cmdq,
>  }
>
>  struct arm_smmu_cmdq_batch {
> -	u64 cmds[CMDQ_BATCH_ENTRIES * CMDQ_ENT_DWORDS];
> +	struct arm_smmu_cmd cmds[CMDQ_BATCH_ENTRIES];
>  	struct arm_smmu_cmdq *cmdq;
>  	int num;
>  };
> @@ -1148,7 +1148,8 @@ void arm_smmu_install_ste_for_dev(struct arm_smmu_master *master,
>  				  const struct arm_smmu_ste *target);
>
>  int arm_smmu_cmdq_issue_cmdlist(struct arm_smmu_device *smmu,
> -				struct arm_smmu_cmdq *cmdq, u64 *cmds, int n,
> +				struct arm_smmu_cmdq *cmdq,
> +				struct arm_smmu_cmd *cmds, int n,
>  				bool sync);
>
>  #ifdef CONFIG_ARM_SMMU_V3_SVA
> diff --git a/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c b/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
> index b4d8c1f2fd3878..67be62a6e7640a 100644
> --- a/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
> +++ b/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
> @@ -427,16 +427,16 @@ tegra241_cmdqv_get_cmdq(struct arm_smmu_device *smmu,
>  static void tegra241_vcmdq_hw_flush_timeout(struct tegra241_vcmdq *vcmdq)
>  {
>  	struct arm_smmu_device *smmu = &vcmdq->cmdqv->smmu;
> -	u64 cmd_sync[CMDQ_ENT_DWORDS] = {};
> +	struct arm_smmu_cmd cmd_sync = {};
>
> -	cmd_sync[0] = FIELD_PREP(CMDQ_0_OP, CMDQ_OP_CMD_SYNC) |
> -		      FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_NONE);
> +	cmd_sync.data[0] = FIELD_PREP(CMDQ_0_OP, CMDQ_OP_CMD_SYNC) |
> +			   FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_NONE);
>
>  	/*
>  	 * It does not hurt to insert another CMD_SYNC, taking advantage of the
>  	 * arm_smmu_cmdq_issue_cmdlist() that waits for the CMD_SYNC completion.
>  	 */
> -	arm_smmu_cmdq_issue_cmdlist(smmu, &smmu->cmdq, cmd_sync, 1, true);
> +	arm_smmu_cmdq_issue_cmdlist(smmu, &smmu->cmdq, &cmd_sync, 1, true);
>  }
>
>  /* This function is for LVCMDQ, so @vcmdq must not be unmapped yet */
> --
> 2.43.0
>