From: Jason Gunthorpe <jgg@nvidia.com>
To: Nicolin Chen <nicolinc@nvidia.com>
Cc: will@kernel.org, robin.murphy@arm.com, joro@8bytes.org,
thierry.reding@gmail.com, vdumpa@nvidia.com,
jonathanh@nvidia.com, linux-kernel@vger.kernel.org,
iommu@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
linux-tegra@vger.kernel.org
Subject: Re: [PATCH v6 6/6] iommu/tegra241-cmdqv: Limit CMDs for guest owned VINTF
Date: Tue, 30 Apr 2024 21:17:58 -0300
Message-ID: <20240501001758.GZ941030@nvidia.com>
In-Reply-To: <ZjE/ZKX7okSkztpR@Asurada-Nvidia>
On Tue, Apr 30, 2024 at 11:58:44AM -0700, Nicolin Chen wrote:
> > Has to push everything, across all the iterations of add/submit, onto
> > the same CMDQ, otherwise the SYNC won't be properly flushing?
>
> ECMDQ seems to have such a limitation, but VCMDQs can get away
> as HW can insert a SYNC to a queue that doesn't end with a SYNC.
That seems like a strange thing to do in HW, but I recall you
mentioned it once before. Still, I'm not sure there is any merit in
relying on it?
> > But each arm_smmu_cmdq_issue_cmdlist() calls its own get q
> > function. Yes, they probably return the same Q since we are probably
> > on the same CPU, but it seems logically wrong (and slower!) to
> > organize it like this.
> >
> > I would expect the Q to be selected when the struct
> > arm_smmu_cmdq_batch is allocated on the stack, and be the same for the
> > entire batch operation. Not only do we spend less time trying to
> > compute the Q to use, we have a built-in guarantee that every command
> > will be on the same Q as the fencing SYNC.
>
> This seems helpful for ECMDQ. The current version disables
> preemption, which feels costly to me.
Oh, yes, it definitely should work like this then!
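Roughly what I'm picturing, purely as a sketch (the cmdq member and
the batch init helper below are hypothetical, and arm_smmu_get_cmdq()
is assumed to take the required_cmds enum from the diff quoted below):

struct arm_smmu_cmdq_batch {
	u64 cmds[CMDQ_BATCH_ENTRIES * CMDQ_ENT_DWORDS];
	/* Picked once at init, reused for every add and the final SYNC */
	struct arm_smmu_cmdq *cmdq;
	int num;
};

static void arm_smmu_cmdq_batch_init(struct arm_smmu_device *smmu,
				     struct arm_smmu_cmdq_batch *cmds,
				     enum required_cmds need)
{
	cmds->num = 0;
	/* Select the queue a single time for the whole batch */
	cmds->cmdq = arm_smmu_get_cmdq(smmu, need);
}

Then arm_smmu_cmdq_batch_add()/arm_smmu_cmdq_batch_submit() would hand
cmds->cmdq straight to arm_smmu_cmdq_issue_cmdlist() instead of
looking the queue up again on every call.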
> > Something sort of like this as another patch?
> >
> > diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> > index 268da20baa4e9c..d8c9597878315a 100644
> > --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> > +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> > @@ -357,11 +357,22 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
> > return 0;
> > }
> >
> > -static struct arm_smmu_cmdq *arm_smmu_get_cmdq(struct arm_smmu_device *smmu,
> > - u64 *cmds, int n)
> > +enum required_cmds {
> > + CMDS_ALL,
> > + /*
> > + * Commands will be one of:
> > + * CMDQ_OP_ATC_INV, CMDQ_OP_TLBI_EL2_VA, CMDQ_OP_TLBI_NH_VA,
> > + * CMDQ_OP_TLBI_EL2_ASID, CMDQ_OP_TLBI_NH_ASID, CMDQ_OP_TLBI_S2_IPA,
> > + * CMDQ_OP_TLBI_S12_VMALL, CMDQ_OP_SYNC
> > + */
> > + CMDS_INVALIDATION,
> > +};
>
> Hmm, guest-owned VCMDQs don't support EL2 commands. So it feels
> somewhat complicated to decouple them further in the callers of
> arm_smmu_cmdq_batch_add(). And I am not sure there is a use case
> for a guest issuing CMDQ_OP_TLBI_S2_IPA/CMDQ_OP_TLBI_S12_VMALL
> either, though HW surprisingly supports these two.
These are the maximum set of commands that could be issued, but they
are all gated on the feature bits. The ones the VCMDQ doesn't support
will not be issued because of the feature bits. You could test and
enforce this when probing the ECMDQ parts.
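For example, something along these lines at probe time - purely a
sketch, the helper name is made up and the exact feature checks to use
are a guess:

/*
 * Sketch only: skip the guest-owned VCMDQ for invalidation when the
 * SMMU's active features could generate commands it cannot accept
 * (EL2 / stage-2 invalidations and so on).
 */
static bool tegra241_vcmdq_can_invalidate(struct arm_smmu_device *smmu)
{
	if (smmu->features & (ARM_SMMU_FEAT_E2H | ARM_SMMU_FEAT_TRANS_S2))
		return false;
	return true;
}

If that check fails the driver would just fall back to the normal CMDQ
for the invalidation path.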
> Perhaps we could just scan the first command in the batch, trusting
> that no one will covertly sneak different commands into it?
Yes with the current design that does seem to work, but it also feels
a bit obtuse.
> Otherwise, there has to be a get_supported_cmdq callback so the batch
> or its callers can avoid adding unsupported commands in the first
> place.
If you feel strongly about it, the invalidation could be split into
S1/S2/S1_VM groupings that align with the feature bits, and the
grouping could be passed down from one step above. But I don't think
the complexity is really needed; it is better to deal with it through
the feature mechanism.
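To be concrete, that split would extend the enum from the diff above
to something like this - only a sketch, and which commands land in
which group is just my guess:

enum required_cmds {
	CMDS_ALL,
	CMDS_INVALIDATION_S1,	 /* e.g. CMDQ_OP_TLBI_NH_*, CMDQ_OP_ATC_INV */
	CMDS_INVALIDATION_S1_VM, /* e.g. CMDQ_OP_TLBI_EL2_VA/ASID */
	CMDS_INVALIDATION_S2,	 /* e.g. CMDQ_OP_TLBI_S2_IPA, CMDQ_OP_TLBI_S12_VMALL */
};

with the queue selection consulting the matching feature bit for each
group. But as above, I think the single CMDS_INVALIDATION plus the
feature mechanism is enough.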
If high-speed invalidation is supported, then the invalidation queue
must support all the invalidation commands used by the SMMU's active
feature set; otherwise do not enable the invalidation queue. It does
make logical sense.
Jason