From: Zong Li <zong.li@sifive.com>
To: niliqiang <ni_liqiang@126.com>
Cc: aou@eecs.berkeley.edu, iommu@lists.linux.dev, jgg@ziepe.ca,
joro@8bytes.org, kevin.tian@intel.com,
linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org,
palmer@dabbelt.com, paul.walmsley@sifive.com,
robin.murphy@arm.com, tjeznach@rivosinc.com, will@kernel.org,
chenruisust@gmail.com
Subject: Re: [RFC PATCH v2 00/10] RISC-V IOMMU HPM and nested IOMMU support
Date: Tue, 2 Sep 2025 12:01:19 +0800 [thread overview]
Message-ID: <CANXhq0ra+yv-Wt_vKTN3+c4StsPQB1vR+=Kp3RVSh0g10Oogqw@mail.gmail.com> (raw)
In-Reply-To: <20250901133629.87310-1-ni_liqiang@126.com>
On Mon, Sep 1, 2025 at 9:37 PM niliqiang <ni_liqiang@126.com> wrote:
>
> Hi Zong
>
> Fri, 14 Jun 2024 22:21:48 +0800, Zong Li <zong.li@sifive.com> wrote:
>
> > This patch initializes the PMU and uninitializes it on driver
> > removal. Interrupt handling is also provided; the handler must be a
> > primary handler rather than a threaded function, because pt_regs is
> > empty when the IRQ is threaded, but pt_regs is required by
> > perf_event_overflow.
> >
> > Signed-off-by: Zong Li <zong.li@sifive.com>
> > ---
> > drivers/iommu/riscv/iommu.c | 65 +++++++++++++++++++++++++++++++++++++
> > 1 file changed, 65 insertions(+)
> >
> > diff --git a/drivers/iommu/riscv/iommu.c b/drivers/iommu/riscv/iommu.c
> > index 8b6a64c1ad8d..1716b2251f38 100644
> > --- a/drivers/iommu/riscv/iommu.c
> > +++ b/drivers/iommu/riscv/iommu.c
> > @@ -540,6 +540,62 @@ static irqreturn_t riscv_iommu_fltq_process(int irq, void *data)
> > return IRQ_HANDLED;
> > }
> >
> > +/*
> > + * IOMMU Hardware performance monitor
> > + */
> > +
> > +/* HPM interrupt primary handler */
> > +static irqreturn_t riscv_iommu_hpm_irq_handler(int irq, void *dev_id)
> > +{
> > + struct riscv_iommu_device *iommu = (struct riscv_iommu_device *)dev_id;
> > +
> > + /* Process pmu irq */
> > + riscv_iommu_pmu_handle_irq(&iommu->pmu);
> > +
> > + /* Clear performance monitoring interrupt pending */
> > + riscv_iommu_writel(iommu, RISCV_IOMMU_REG_IPSR, RISCV_IOMMU_IPSR_PMIP);
> > +
> > + return IRQ_HANDLED;
> > +}
> > +
> > +/* HPM initialization */
> > +static int riscv_iommu_hpm_enable(struct riscv_iommu_device *iommu)
> > +{
> > + int rc;
> > +
> > + if (!(iommu->caps & RISCV_IOMMU_CAPABILITIES_HPM))
> > + return 0;
> > +
> > + /*
> > + * pt_regs is empty when the IRQ is threaded, but pt_regs is
> > + * required by perf_event_overflow. Use a primary handler rather
> > + * than a threaded function for the PM IRQ.
> > + *
> > + * Set the IRQF_ONESHOT flag because this IRQ might be shared with
> > + * other threaded IRQs by other queues.
> > + */
> > + rc = devm_request_irq(iommu->dev,
> > + iommu->irqs[riscv_iommu_queue_vec(iommu, RISCV_IOMMU_IPSR_PMIP)],
> > + riscv_iommu_hpm_irq_handler, IRQF_ONESHOT | IRQF_SHARED, NULL, iommu);
> > + if (rc)
> > + return rc;
> > +
> > + return riscv_iommu_pmu_init(&iommu->pmu, iommu->reg, dev_name(iommu->dev));
> > +}
> > +
>
> What are the benefits of initializing the iommu-pmu driver in the iommu driver?
>
> It might be better for the RISC-V IOMMU PMU driver to be loaded as a separate module, as this would allow greater flexibility since different vendors may need to add custom events.
>
> Also, I'm not quite clear on how custom events should be added if the RISC-V iommu-pmu is placed within the iommu driver.
Hi Liqiang,
My original idea was that, since the IOMMU HPM is not always present
(it depends on the capability.HPM bit), separating the HPM into an
individual module would likely leave the PMU driver without access to
the IOMMU's complete MMIO region. I'm not sure how we would check the
capability register in the PMU driver and avoid the following
situation: capability.HPM is zero, but the IOMMU-PMU driver is still
loaded because the PMU node is present in the DTS. It would be helpful
if you have any suggestions on this.
Regarding custom events, since we don't have the driver data, my
current rough idea is to add a vendor event map table listing the
vendor events and use Kconfig to define them respectively. This is
just an initial thought and may not be the best solution, so feel free
to share any recommendations. Of course, if we eventually decide to
move it to drivers/perf as an individual module, then we could use the
driver data for custom events, similar to what Arm does.
Thanks
>
>
> Best regards,
> Liqiang
>
Thread overview: 38+ messages
2024-06-14 14:21 [RFC PATCH v2 00/10] RISC-V IOMMU HPM and nested IOMMU support Zong Li
2024-06-14 14:21 ` [RFC PATCH v2 01/10] iommu/riscv: add RISC-V IOMMU PMU support Zong Li
2024-06-17 14:55 ` Jason Gunthorpe
2024-06-18 1:14 ` Zong Li
2024-06-14 14:21 ` [RFC PATCH v2 02/10] iommu/riscv: support HPM and interrupt handling Zong Li
2024-12-10 7:54 ` [External] " yunhui cui
2024-12-10 8:48 ` Xu Lu
2024-12-27 8:37 ` Zong Li
2025-09-01 13:36 ` [RFC PATCH v2 00/10] RISC-V IOMMU HPM and nested IOMMU support niliqiang
2025-09-02 4:01 ` Zong Li [this message]
2025-09-05 3:27 ` Zong Li
2024-06-14 14:21 ` [RFC PATCH v2 03/10] iommu/riscv: use data structure instead of individual values Zong Li
2024-06-14 14:21 ` [RFC PATCH v2 04/10] iommu/riscv: add iotlb_sync_map operation support Zong Li
2024-06-15 3:14 ` Baolu Lu
2024-06-17 13:43 ` Zong Li
2024-06-17 14:39 ` Jason Gunthorpe
2024-06-18 3:01 ` Zong Li
2024-06-18 13:31 ` Jason Gunthorpe
2024-06-14 14:21 ` [RFC PATCH v2 05/10] iommu/riscv: support GSCID and GVMA invalidation command Zong Li
2024-06-14 14:21 ` [RFC PATCH v2 06/10] iommu/riscv: support nested iommu for getting iommu hardware information Zong Li
2024-06-19 15:49 ` Jason Gunthorpe
2024-06-21 7:32 ` Zong Li
2024-06-14 14:21 ` [RFC PATCH v2 07/10] iommu/riscv: support nested iommu for creating domains owned by userspace Zong Li
2024-06-19 16:02 ` Jason Gunthorpe
2024-06-28 9:03 ` Zong Li
2024-06-28 22:32 ` Jason Gunthorpe
2024-06-19 16:34 ` Joao Martins
2024-06-21 7:34 ` Zong Li
2024-06-14 14:21 ` [RFC PATCH v2 08/10] iommu/riscv: support nested iommu for flushing cache Zong Li
2024-06-15 3:22 ` Baolu Lu
2024-06-17 2:16 ` Zong Li
2024-06-19 16:17 ` Jason Gunthorpe
2024-06-28 8:19 ` Zong Li
2024-06-28 22:26 ` Jason Gunthorpe
2024-06-14 14:21 ` [RFC PATCH v2 09/10] iommu/dma: Support MSIs through nested domains Zong Li
2024-06-14 18:12 ` Nicolin Chen
2024-06-17 2:15 ` Zong Li
2024-06-14 14:21 ` [RFC PATCH v2 10/10] iommu:riscv: support nested iommu for get_msi_mapping_domain operation Zong Li