From: Zong Li <zong.li@sifive.com>
To: Xu Lu <luxu.kernel@bytedance.com>
Cc: yunhui cui <cuiyunhui@bytedance.com>,
joro@8bytes.org, will@kernel.org, robin.murphy@arm.com,
tjeznach@rivosinc.com, paul.walmsley@sifive.com,
palmer@dabbelt.com, aou@eecs.berkeley.edu, jgg@ziepe.ca,
kevin.tian@intel.com, linux-kernel@vger.kernel.org,
iommu@lists.linux.dev, linux-riscv@lists.infradead.org
Subject: Re: [External] [RFC PATCH v2 02/10] iommu/riscv: support HPM and interrupt handling
Date: Fri, 27 Dec 2024 16:37:42 +0800 [thread overview]
Message-ID: <CANXhq0qo2=ztcv8CT8Qu4hk_wZpCnmupSknaGLB+mhxN3vqN8Q@mail.gmail.com> (raw)
In-Reply-To: <CAPYmKFtHn+ggujCWeQoSaWPK-2G=-Om0DuCpFyf+ha+OXQfsnw@mail.gmail.com>
On Tue, Dec 10, 2024 at 4:48 PM Xu Lu <luxu.kernel@bytedance.com> wrote:
>
> Hi Zong Li,
>
> Thanks for your work. We have tested your iommu pmu driver and have
> some feedback.
>
> 1. Maybe it is better to clear ipsr.PMIP first and then handle the pmu
> ovf irq in riscv_iommu_hpm_irq_handler(). Otherwise, if a new overflow
> happens after riscv_iommu_pmu_handle_irq() and before the PMIP clear,
> it will be dropped.
Yes, you are right. Let me change the order in the next version.
>
> 2. The period_left can be corrupted in riscv_iommu_pmu_update(), as
> riscv_iommu_pmu_get_counter() always returns the whole register value,
> while bit 63 in hpmcycles actually indicates whether an overflow has
> happened rather than being part of the count. Maybe these two
> functions should be implemented as:
Thanks for catching that. I will fix them in the next version.
>
> static void riscv_iommu_pmu_set_counter(struct riscv_iommu_pmu *pmu, u32 idx,
>                                         u64 value)
> {
>         void __iomem *addr = pmu->reg + RISCV_IOMMU_REG_IOHPMCYCLES;
>
>         if (WARN_ON_ONCE(idx >= pmu->num_counters))
>                 return;
>
>         if (idx == 0)
>                 value = (value & ~RISCV_IOMMU_IOHPMCYCLES_OF) |
>                         (readq(addr) & RISCV_IOMMU_IOHPMCYCLES_OF);
>
>         writeq(FIELD_PREP(RISCV_IOMMU_IOHPMCTR_COUNTER, value), addr + idx * 8);
> }
>
> static u64 riscv_iommu_pmu_get_counter(struct riscv_iommu_pmu *pmu, u32 idx)
> {
>         void __iomem *addr = pmu->reg + RISCV_IOMMU_REG_IOHPMCYCLES;
>         u64 value;
>
>         if (WARN_ON_ONCE(idx >= pmu->num_counters))
>                 return -EINVAL;
>
>         value = readq(addr + idx * 8);
>
>         if (idx == 0)
>                 return FIELD_GET(RISCV_IOMMU_IOHPMCYCLES_COUNTER, value);
>
>         return FIELD_GET(RISCV_IOMMU_IOHPMCTR_COUNTER, value);
> }
>
> Please ignore me if these issues have already been discussed.
>
> Best regards,
>
> Xu Lu
>
> On Tue, Dec 10, 2024 at 3:55 PM yunhui cui <cuiyunhui@bytedance.com> wrote:
> >
> > Add Luxu in the loop.
> >
> > On Fri, Jun 14, 2024 at 10:22 PM Zong Li <zong.li@sifive.com> wrote:
> > >
> > > This patch initializes the PMU and uninitializes it on driver
> > > removal. Interrupt handling is also provided; the handler needs to
> > > be a primary handler rather than a threaded function, because
> > > pt_regs is empty when the IRQ is threaded, but pt_regs is required
> > > by perf_event_overflow.
> > >
> > > Signed-off-by: Zong Li <zong.li@sifive.com>
> > > ---
> > > drivers/iommu/riscv/iommu.c | 65 +++++++++++++++++++++++++++++++++++++
> > > 1 file changed, 65 insertions(+)
> > >
> > > diff --git a/drivers/iommu/riscv/iommu.c b/drivers/iommu/riscv/iommu.c
> > > index 8b6a64c1ad8d..1716b2251f38 100644
> > > --- a/drivers/iommu/riscv/iommu.c
> > > +++ b/drivers/iommu/riscv/iommu.c
> > > @@ -540,6 +540,62 @@ static irqreturn_t riscv_iommu_fltq_process(int irq, void *data)
> > > return IRQ_HANDLED;
> > > }
> > >
> > > +/*
> > > + * IOMMU Hardware performance monitor
> > > + */
> > > +
> > > +/* HPM interrupt primary handler */
> > > +static irqreturn_t riscv_iommu_hpm_irq_handler(int irq, void *dev_id)
> > > +{
> > > + struct riscv_iommu_device *iommu = (struct riscv_iommu_device *)dev_id;
> > > +
> > > + /* Process pmu irq */
> > > + riscv_iommu_pmu_handle_irq(&iommu->pmu);
> > > +
> > > + /* Clear performance monitoring interrupt pending */
> > > + riscv_iommu_writel(iommu, RISCV_IOMMU_REG_IPSR, RISCV_IOMMU_IPSR_PMIP);
> > > +
> > > + return IRQ_HANDLED;
> > > +}
> > > +
> > > +/* HPM initialization */
> > > +static int riscv_iommu_hpm_enable(struct riscv_iommu_device *iommu)
> > > +{
> > > + int rc;
> > > +
> > > + if (!(iommu->caps & RISCV_IOMMU_CAPABILITIES_HPM))
> > > + return 0;
> > > +
> > > + /*
> > > + * pt_regs is empty when threading the IRQ, but pt_regs is necessary
> > > + * by perf_event_overflow. Use primary handler instead of thread
> > > + * function for PM IRQ.
> > > + *
> > > + * Set the IRQF_ONESHOT flag because this IRQ might be shared with
> > > + * other threaded IRQs by other queues.
> > > + */
> > > + rc = devm_request_irq(iommu->dev,
> > > + iommu->irqs[riscv_iommu_queue_vec(iommu, RISCV_IOMMU_IPSR_PMIP)],
> > > + riscv_iommu_hpm_irq_handler, IRQF_ONESHOT | IRQF_SHARED, NULL, iommu);
> > > + if (rc)
> > > + return rc;
> > > +
> > > + return riscv_iommu_pmu_init(&iommu->pmu, iommu->reg, dev_name(iommu->dev));
> > > +}
> > > +
> > > +/* HPM uninitialization */
> > > +static void riscv_iommu_hpm_disable(struct riscv_iommu_device *iommu)
> > > +{
> > > + if (!(iommu->caps & RISCV_IOMMU_CAPABILITIES_HPM))
> > > + return;
> > > +
> > > + devm_free_irq(iommu->dev,
> > > + iommu->irqs[riscv_iommu_queue_vec(iommu, RISCV_IOMMU_IPSR_PMIP)],
> > > + iommu);
> > > +
> > > + riscv_iommu_pmu_uninit(&iommu->pmu);
> > > +}
> > > +
> > > /* Lookup and initialize device context info structure. */
> > > static struct riscv_iommu_dc *riscv_iommu_get_dc(struct riscv_iommu_device *iommu,
> > > unsigned int devid)
> > > @@ -1612,6 +1668,9 @@ void riscv_iommu_remove(struct riscv_iommu_device *iommu)
> > > riscv_iommu_iodir_set_mode(iommu, RISCV_IOMMU_DDTP_IOMMU_MODE_OFF);
> > > riscv_iommu_queue_disable(&iommu->cmdq);
> > > riscv_iommu_queue_disable(&iommu->fltq);
> > > +
> > > + if (iommu->caps & RISCV_IOMMU_CAPABILITIES_HPM)
> > > + riscv_iommu_pmu_uninit(&iommu->pmu);
> > > }
> > >
> > > int riscv_iommu_init(struct riscv_iommu_device *iommu)
> > > @@ -1651,6 +1710,10 @@ int riscv_iommu_init(struct riscv_iommu_device *iommu)
> > > if (rc)
> > > goto err_queue_disable;
> > >
> > > + rc = riscv_iommu_hpm_enable(iommu);
> > > + if (rc)
> > > + goto err_hpm_disable;
> > > +
> > > rc = iommu_device_sysfs_add(&iommu->iommu, NULL, NULL, "riscv-iommu@%s",
> > > dev_name(iommu->dev));
> > > if (rc) {
> > > @@ -1669,6 +1732,8 @@ int riscv_iommu_init(struct riscv_iommu_device *iommu)
> > > err_remove_sysfs:
> > > iommu_device_sysfs_remove(&iommu->iommu);
> > > err_iodir_off:
> > > + riscv_iommu_hpm_disable(iommu);
> > > +err_hpm_disable:
> > > riscv_iommu_iodir_set_mode(iommu, RISCV_IOMMU_DDTP_IOMMU_MODE_OFF);
> > > err_queue_disable:
> > > riscv_iommu_queue_disable(&iommu->fltq);
> > > --
> > > 2.17.1
> > >
> > >
> >
> > Thanks,
> > Yunhui
_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv