From: Clément Léger <cleger@rivosinc.com>
To: Paul Walmsley, Palmer Dabbelt, linux-riscv@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: Clément Léger, Himanshu Chauhan, Anup Patel, Xu Lu, Atish Patra,
	Björn Töpel, Yunhui Cui
Subject: [PATCH v7 4/5] perf: RISC-V: add support for SSE event
Date: Mon, 8 Sep 2025 18:17:06 +0000
Message-ID: <20250908181717.1997461-5-cleger@rivosinc.com>
In-Reply-To: <20250908181717.1997461-1-cleger@rivosinc.com>
References: <20250908181717.1997461-1-cleger@rivosinc.com>

In order to use SSE within PMU drivers, register an SSE handler for the
local PMU event. Reuse the existing overflow IRQ handler and pass the
appropriate pt_regs. Add a config option, RISCV_PMU_SBI_SSE, to select
event delivery via SSE events.
Signed-off-by: Clément Léger <cleger@rivosinc.com>
---
 drivers/perf/Kconfig           | 10 +++++
 drivers/perf/riscv_pmu.c       | 23 +++++++++++
 drivers/perf/riscv_pmu_sbi.c   | 71 +++++++++++++++++++++++++++++-----
 include/linux/perf/riscv_pmu.h |  5 +++
 4 files changed, 99 insertions(+), 10 deletions(-)

diff --git a/drivers/perf/Kconfig b/drivers/perf/Kconfig
index a9188dec36fe..bea08d4689b1 100644
--- a/drivers/perf/Kconfig
+++ b/drivers/perf/Kconfig
@@ -105,6 +105,16 @@ config RISCV_PMU_SBI
	  full perf feature support i.e. counter overflow, privilege mode
	  filtering, counter configuration.

+config RISCV_PMU_SBI_SSE
+	depends on RISCV_PMU && RISCV_SBI_SSE
+	bool "RISC-V PMU SSE events"
+	default n
+	help
+	  Say y if you want to use SSE events to deliver PMU interrupts. This
+	  provides a way to profile the kernel at any level by using NMI-like
+	  SSE events. As SSE events are quite intrusive, select this option
+	  only when needed.
+
 config STARFIVE_STARLINK_PMU
	depends on ARCH_STARFIVE || COMPILE_TEST
	depends on 64BIT
diff --git a/drivers/perf/riscv_pmu.c b/drivers/perf/riscv_pmu.c
index 7644147d50b4..dda2814801c0 100644
--- a/drivers/perf/riscv_pmu.c
+++ b/drivers/perf/riscv_pmu.c
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -254,6 +255,24 @@ void riscv_pmu_start(struct perf_event *event, int flags)
	perf_event_update_userpage(event);
 }

+#ifdef CONFIG_RISCV_PMU_SBI_SSE
+static void riscv_pmu_disable(struct pmu *pmu)
+{
+	struct riscv_pmu *rvpmu = to_riscv_pmu(pmu);
+
+	if (rvpmu->sse_evt)
+		sse_event_disable_local(rvpmu->sse_evt);
+}
+
+static void riscv_pmu_enable(struct pmu *pmu)
+{
+	struct riscv_pmu *rvpmu = to_riscv_pmu(pmu);
+
+	if (rvpmu->sse_evt)
+		sse_event_enable_local(rvpmu->sse_evt);
+}
+#endif
+
 static int riscv_pmu_add(struct perf_event *event, int flags)
 {
	struct riscv_pmu *rvpmu = to_riscv_pmu(event->pmu);
@@ -411,6 +430,10 @@ struct riscv_pmu *riscv_pmu_alloc(void)
		.event_mapped	= riscv_pmu_event_mapped,
		.event_unmapped	= riscv_pmu_event_unmapped,
		.event_idx	= riscv_pmu_event_idx,
+#ifdef CONFIG_RISCV_PMU_SBI_SSE
+		.pmu_enable	= riscv_pmu_enable,
+		.pmu_disable	= riscv_pmu_disable,
+#endif
		.add		= riscv_pmu_add,
		.del		= riscv_pmu_del,
		.start		= riscv_pmu_start,
diff --git a/drivers/perf/riscv_pmu_sbi.c b/drivers/perf/riscv_pmu_sbi.c
index 698de8ddf895..a864a543ccc8 100644
--- a/drivers/perf/riscv_pmu_sbi.c
+++ b/drivers/perf/riscv_pmu_sbi.c
@@ -17,6 +17,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -948,10 +949,10 @@ static void pmu_sbi_start_overflow_mask(struct riscv_pmu *pmu,
		pmu_sbi_start_ovf_ctrs_sbi(cpu_hw_evt, ctr_ovf_mask);
 }

-static irqreturn_t pmu_sbi_ovf_handler(int irq, void *dev)
+static irqreturn_t pmu_sbi_ovf_handler(struct cpu_hw_events *cpu_hw_evt,
+				       struct pt_regs *regs, bool from_sse)
 {
	struct perf_sample_data data;
-	struct pt_regs *regs;
	struct hw_perf_event *hw_evt;
	union sbi_pmu_ctr_info *info;
	int lidx, hidx, fidx;
@@ -959,7 +960,6 @@ static irqreturn_t pmu_sbi_ovf_handler(int irq, void *dev)
	struct perf_event *event;
	u64 overflow;
	u64 overflowed_ctrs = 0;
-	struct cpu_hw_events *cpu_hw_evt = dev;
	u64 start_clock = sched_clock();
	struct riscv_pmu_snapshot_data *sdata = cpu_hw_evt->snapshot_addr;
@@ -969,13 +969,15 @@
	/* Firmware counter don't support overflow yet */
	fidx = find_first_bit(cpu_hw_evt->used_hw_ctrs, RISCV_MAX_COUNTERS);
	if (fidx == RISCV_MAX_COUNTERS) {
-		csr_clear(CSR_SIP, BIT(riscv_pmu_irq_num));
+		if (!from_sse)
+			csr_clear(CSR_SIP, BIT(riscv_pmu_irq_num));
		return IRQ_NONE;
	}

	event = cpu_hw_evt->events[fidx];
	if (!event) {
-		ALT_SBI_PMU_OVF_CLEAR_PENDING(riscv_pmu_irq_mask);
+		if (!from_sse)
+			ALT_SBI_PMU_OVF_CLEAR_PENDING(riscv_pmu_irq_mask);
		return IRQ_NONE;
	}
@@ -990,16 +992,16 @@ static irqreturn_t pmu_sbi_ovf_handler(int irq, void *dev)
	/*
	 * Overflow interrupt pending bit should only be cleared after stopping
-	 * all the counters to avoid any race condition.
+	 * all the counters to avoid any race condition. When using SSE,
+	 * interrupt is cleared when stopping counters.
	 */
-	ALT_SBI_PMU_OVF_CLEAR_PENDING(riscv_pmu_irq_mask);
+	if (!from_sse)
+		ALT_SBI_PMU_OVF_CLEAR_PENDING(riscv_pmu_irq_mask);

	/* No overflow bit is set */
	if (!overflow)
		return IRQ_NONE;

-	regs = get_irq_regs();
-
	for_each_set_bit(lidx, cpu_hw_evt->used_hw_ctrs, RISCV_MAX_COUNTERS) {
		struct perf_event *event = cpu_hw_evt->events[lidx];
@@ -1055,6 +1057,51 @@ static irqreturn_t pmu_sbi_ovf_handler(int irq, void *dev)
	return IRQ_HANDLED;
 }

+static irqreturn_t pmu_sbi_ovf_irq_handler(int irq, void *dev)
+{
+	return pmu_sbi_ovf_handler(dev, get_irq_regs(), false);
+}
+
+#ifdef CONFIG_RISCV_PMU_SBI_SSE
+static int pmu_sbi_ovf_sse_handler(u32 evt, void *arg, struct pt_regs *regs)
+{
+	struct cpu_hw_events __percpu *hw_events = arg;
+	struct cpu_hw_events *hw_event = raw_cpu_ptr(hw_events);
+
+	pmu_sbi_ovf_handler(hw_event, regs, true);
+
+	return 0;
+}
+
+static int pmu_sbi_setup_sse(struct riscv_pmu *pmu)
+{
+	int ret;
+	struct sse_event *evt;
+	struct cpu_hw_events __percpu *hw_events = pmu->hw_events;
+
+	evt = sse_event_register(SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW, 0,
+				 pmu_sbi_ovf_sse_handler, hw_events);
+	if (IS_ERR(evt))
+		return PTR_ERR(evt);
+
+	ret = sse_event_enable(evt);
+	if (ret) {
+		sse_event_unregister(evt);
+		return ret;
+	}
+
+	pr_info("using SSE for PMU event delivery\n");
+	pmu->sse_evt = evt;
+
+	return ret;
+}
+#else
+static int pmu_sbi_setup_sse(struct riscv_pmu *pmu)
+{
+	return -EOPNOTSUPP;
+}
+#endif
+
 static int pmu_sbi_starting_cpu(unsigned int cpu, struct hlist_node *node)
 {
	struct riscv_pmu *pmu = hlist_entry_safe(node, struct riscv_pmu, node);
@@ -1105,6 +1152,10 @@ static int pmu_sbi_setup_irqs(struct riscv_pmu *pmu, struct platform_device *pde
	struct cpu_hw_events __percpu *hw_events = pmu->hw_events;
	struct irq_domain *domain = NULL;

+	ret = pmu_sbi_setup_sse(pmu);
+	if (!ret)
+		return 0;
+
	if (riscv_isa_extension_available(NULL, SSCOFPMF)) {
		riscv_pmu_irq_num = RV_IRQ_PMU;
		riscv_pmu_use_irq = true;
@@ -1139,7 +1190,7 @@ static int pmu_sbi_setup_irqs(struct riscv_pmu *pmu, struct platform_device *pde
		return -ENODEV;
	}

-	ret = request_percpu_irq(riscv_pmu_irq, pmu_sbi_ovf_handler, "riscv-pmu", hw_events);
+	ret = request_percpu_irq(riscv_pmu_irq, pmu_sbi_ovf_irq_handler, "riscv-pmu", hw_events);
	if (ret) {
		pr_err("registering percpu irq failed [%d]\n", ret);
		return ret;
diff --git a/include/linux/perf/riscv_pmu.h b/include/linux/perf/riscv_pmu.h
index 701974639ff2..cd493fcab9b3 100644
--- a/include/linux/perf/riscv_pmu.h
+++ b/include/linux/perf/riscv_pmu.h
@@ -28,6 +28,8 @@

 #define RISCV_PMU_CONFIG1_GUEST_EVENTS 0x1

+struct sse_event;
+
 struct cpu_hw_events {
	/* currently enabled events */
	int n_events;
@@ -54,6 +56,9 @@ struct riscv_pmu {
	char *name;
	irqreturn_t (*handle_irq)(int irq_num, void *dev);
+#ifdef CONFIG_RISCV_PMU_SBI_SSE
+	struct sse_event *sse_evt;
+#endif
	unsigned long cmask;
	u64 (*ctr_read)(struct perf_event *event);
--
2.43.0