public inbox for kvm@vger.kernel.org
* [kvm-unit-tests PATCH v8 0/6] riscv: add SBI SSE extension tests
@ 2025-03-07 16:15 Clément Léger
  2025-03-07 16:15 ` [kvm-unit-tests PATCH v8 1/6] kbuild: Allow multiple asm-offsets files to be generated Clément Léger
                   ` (5 more replies)
  0 siblings, 6 replies; 13+ messages in thread
From: Clément Léger @ 2025-03-07 16:15 UTC (permalink / raw)
  To: kvm, kvm-riscv
  Cc: Clément Léger, Andrew Jones, Anup Patel, Atish Patra

This series adds tests for the SBI SSE extension as well as the
infrastructure needed for SSE support. It also adds test-specific
asm-offsets generation to allow using custom OFFSET()/DEFINE() entries
from the test directory.

---

V8:
 - Short-circuit the current event's tests if a failure happens
 - Remove SSE from all report strings
 - Indent .prio field
 - Add cpu_relax()/smp_rmb() where needed
 - Add timeout for global event ENABLED state check
 - Add BIT(32) alias tests for attribute/event_id

V7:
 - Test ids/attributes/attribute counts > 32 bits
 - Rename all SSE functions to sbi_sse_*
 - Use event_id instead of event/evt
 - Factorize read/write test
 - Use virt_to_phys() for attribute read/write
 - Extensively use sbiret_report_error()
 - Change check function return values to bool
 - Add assert that stack size is below or equal to PAGE_SIZE
 - Use an env variable for the maximum hart ID
 - Check that individual attribute reads match the multiple-attribute
   read
 - Add multiple-attribute writes at once
 - Use READ_ONCE()/WRITE_ONCE()
 - Inject all local events at once rather than looping for each core
 - Split test_arg for the local_dispatch test so that all CPUs can run
   at once
 - Move SSE entry and generic code to lib/riscv for other tests
 - Fix unmask/mask state checking

V6:
 - Add missing $(generated-file) dependencies for "-deps" objects
 - Split the SSE entry and all SSE core functions out of sbi-asm.S into
   sse-asm.S, since they will be useful for other tests as well (dbltrp)

V5:
 - Update event ranges based on latest spec
 - Rename asm-offset-test.c to sbi-asm-offset.c

V4:
 - Fix typo sbi_ext_ss_fid -> sbi_ext_sse_fid
 - Add proper asm-offset generation for tests
 - Move SSE specific file from lib/riscv to riscv/

V3:
 - Add -deps variable for test-specific dependencies
 - Fix formatting errors/typos in sbi.h
 - Add missing double trap event
 - Alphabetize sbi-sse.c includes
 - Fix a6 content after unmasking event
 - Add SSE HART_MASK/UNMASK test
 - Use mv instead of move
 - Move sbi_check_sse() definition into sbi.c
 - Remove sbi_sse test from unittests.cfg

V2:
 - Rebase on origin/master and integrate into the sbi.c tests

Clément Léger (6):
  kbuild: Allow multiple asm-offsets files to be generated
  riscv: Set .aux.o files as .PRECIOUS
  riscv: Use asm-offsets to generate SBI_EXT_HSM values
  riscv: lib: Add SBI SSE extension definitions
  lib: riscv: Add SBI SSE support
  riscv: sbi: Add SSE extension tests

 scripts/asm-offsets.mak |   22 +-
 riscv/Makefile          |    5 +-
 lib/riscv/asm/csr.h     |    1 +
 lib/riscv/asm/sbi.h     |  144 ++++-
 lib/riscv/sbi-sse-asm.S |  103 ++++
 lib/riscv/asm-offsets.c |    9 +
 lib/riscv/sbi.c         |   76 +++
 riscv/sbi-tests.h       |    1 +
 riscv/sbi-asm.S         |    6 +-
 riscv/sbi-asm-offsets.c |   11 +
 riscv/sbi-sse.c         | 1215 +++++++++++++++++++++++++++++++++++++++
 riscv/sbi.c             |    2 +
 riscv/.gitignore        |    1 +
 13 files changed, 1584 insertions(+), 12 deletions(-)
 create mode 100644 lib/riscv/sbi-sse-asm.S
 create mode 100644 riscv/sbi-asm-offsets.c
 create mode 100644 riscv/sbi-sse.c
 create mode 100644 riscv/.gitignore

-- 
2.47.2


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [kvm-unit-tests PATCH v8 1/6] kbuild: Allow multiple asm-offsets files to be generated
  2025-03-07 16:15 [kvm-unit-tests PATCH v8 0/6] riscv: add SBI SSE extension tests Clément Léger
@ 2025-03-07 16:15 ` Clément Léger
  2025-03-07 16:15 ` [kvm-unit-tests PATCH v8 2/6] riscv: Set .aux.o files as .PRECIOUS Clément Léger
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 13+ messages in thread
From: Clément Léger @ 2025-03-07 16:15 UTC (permalink / raw)
  To: kvm, kvm-riscv
  Cc: Clément Léger, Andrew Jones, Anup Patel, Atish Patra,
	Andrew Jones

In order to allow multiple asm-offsets files to be generated, the
include guard needs to differ between these files. Add an
asm_offset_name makefile macro to obtain an uppercase name matching the
original asm-offsets file.

Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Andrew Jones <andrew.jones@linux.dev>
---
 scripts/asm-offsets.mak | 22 +++++++++++++++-------
 1 file changed, 15 insertions(+), 7 deletions(-)

diff --git a/scripts/asm-offsets.mak b/scripts/asm-offsets.mak
index 7b64162d..a5fdbf5d 100644
--- a/scripts/asm-offsets.mak
+++ b/scripts/asm-offsets.mak
@@ -15,10 +15,14 @@ define sed-y
 	s:->::; p;}'
 endef
 
+define asm_offset_name
+	$(shell echo $(notdir $(1)) | tr [:lower:]- [:upper:]_)
+endef
+
 define make_asm_offsets
 	(set -e; \
-	 echo "#ifndef __ASM_OFFSETS_H__"; \
-	 echo "#define __ASM_OFFSETS_H__"; \
+	 echo "#ifndef __$(strip $(asm_offset_name))_H__"; \
+	 echo "#define __$(strip $(asm_offset_name))_H__"; \
 	 echo "/*"; \
 	 echo " * Generated file. DO NOT MODIFY."; \
 	 echo " *"; \
@@ -29,12 +33,16 @@ define make_asm_offsets
 	 echo "#endif" ) > $@
 endef
 
-$(asm-offsets:.h=.s): $(asm-offsets:.h=.c)
-	$(CC) $(CFLAGS) -fverbose-asm -S -o $@ $<
+define gen_asm_offsets_rules
+$(1).s: $(1).c
+	$(CC) $(CFLAGS) -fverbose-asm -S -o $$@ $$<
+
+$(1).h: $(1).s
+	$$(call make_asm_offsets,$(1))
+	cp -f $$@ lib/generated/
+endef
 
-$(asm-offsets): $(asm-offsets:.h=.s)
-	$(call make_asm_offsets)
-	cp -f $(asm-offsets) lib/generated/
+$(foreach o,$(asm-offsets),$(eval $(call gen_asm_offsets_rules, $(o:.h=))))
 
 OBJDIRS += lib/generated
 
-- 
2.47.2



* [kvm-unit-tests PATCH v8 2/6] riscv: Set .aux.o files as .PRECIOUS
  2025-03-07 16:15 [kvm-unit-tests PATCH v8 0/6] riscv: add SBI SSE extension tests Clément Léger
  2025-03-07 16:15 ` [kvm-unit-tests PATCH v8 1/6] kbuild: Allow multiple asm-offsets files to be generated Clément Léger
@ 2025-03-07 16:15 ` Clément Léger
  2025-03-12 16:30   ` Andrew Jones
  2025-03-07 16:15 ` [kvm-unit-tests PATCH v8 3/6] riscv: Use asm-offsets to generate SBI_EXT_HSM values Clément Léger
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 13+ messages in thread
From: Clément Léger @ 2025-03-07 16:15 UTC (permalink / raw)
  To: kvm, kvm-riscv
  Cc: Clément Léger, Andrew Jones, Anup Patel, Atish Patra

When compiling, we need to keep the .aux.o files or they will be removed
after compilation, which causes dependent files to be recompiled. Mark
these files as .PRECIOUS to keep them.

Signed-off-by: Clément Léger <cleger@rivosinc.com>
---
 riscv/Makefile | 1 +
 1 file changed, 1 insertion(+)

diff --git a/riscv/Makefile b/riscv/Makefile
index 52718f3f..ae9cf02a 100644
--- a/riscv/Makefile
+++ b/riscv/Makefile
@@ -90,6 +90,7 @@ CFLAGS += -I $(SRCDIR)/lib -I $(SRCDIR)/lib/libfdt -I lib -I $(SRCDIR)/riscv
 asm-offsets = lib/riscv/asm-offsets.h
 include $(SRCDIR)/scripts/asm-offsets.mak
 
+.PRECIOUS: %.aux.o
 %.aux.o: $(SRCDIR)/lib/auxinfo.c
 	$(CC) $(CFLAGS) -c -o $@ $< \
 		-DPROGNAME=\"$(notdir $(@:.aux.o=.$(exe)))\" -DAUXFLAGS=$(AUXFLAGS)
-- 
2.47.2



* [kvm-unit-tests PATCH v8 3/6] riscv: Use asm-offsets to generate SBI_EXT_HSM values
  2025-03-07 16:15 [kvm-unit-tests PATCH v8 0/6] riscv: add SBI SSE extension tests Clément Léger
  2025-03-07 16:15 ` [kvm-unit-tests PATCH v8 1/6] kbuild: Allow multiple asm-offsets files to be generated Clément Léger
  2025-03-07 16:15 ` [kvm-unit-tests PATCH v8 2/6] riscv: Set .aux.o files as .PRECIOUS Clément Léger
@ 2025-03-07 16:15 ` Clément Léger
  2025-03-07 16:15 ` [kvm-unit-tests PATCH v8 4/6] riscv: lib: Add SBI SSE extension definitions Clément Léger
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 13+ messages in thread
From: Clément Léger @ 2025-03-07 16:15 UTC (permalink / raw)
  To: kvm, kvm-riscv
  Cc: Clément Léger, Andrew Jones, Anup Patel, Atish Patra,
	Andrew Jones

Replace hardcoded values with ones generated using sbi-asm-offsets. This
allows ASM_SBI_EXT_HSM and ASM_SBI_EXT_HSM_HART_STOP to be used directly
in assembly.

Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Andrew Jones <andrew.jones@linux.dev>
---
 riscv/Makefile          |  2 +-
 riscv/sbi-asm.S         |  6 ++++--
 riscv/sbi-asm-offsets.c | 11 +++++++++++
 riscv/.gitignore        |  1 +
 4 files changed, 17 insertions(+), 3 deletions(-)
 create mode 100644 riscv/sbi-asm-offsets.c
 create mode 100644 riscv/.gitignore

diff --git a/riscv/Makefile b/riscv/Makefile
index ae9cf02a..02d2ac39 100644
--- a/riscv/Makefile
+++ b/riscv/Makefile
@@ -87,7 +87,7 @@ CFLAGS += -ffreestanding
 CFLAGS += -O2
 CFLAGS += -I $(SRCDIR)/lib -I $(SRCDIR)/lib/libfdt -I lib -I $(SRCDIR)/riscv
 
-asm-offsets = lib/riscv/asm-offsets.h
+asm-offsets = lib/riscv/asm-offsets.h riscv/sbi-asm-offsets.h
 include $(SRCDIR)/scripts/asm-offsets.mak
 
 .PRECIOUS: %.aux.o
diff --git a/riscv/sbi-asm.S b/riscv/sbi-asm.S
index f4185496..51f46efd 100644
--- a/riscv/sbi-asm.S
+++ b/riscv/sbi-asm.S
@@ -6,6 +6,8 @@
  */
 #include <asm/asm.h>
 #include <asm/csr.h>
+#include <asm/asm-offsets.h>
+#include <generated/sbi-asm-offsets.h>
 
 #include "sbi-tests.h"
 
@@ -57,8 +59,8 @@ sbi_hsm_check:
 7:	lb	t0, 0(t1)
 	pause
 	beqz	t0, 7b
-	li	a7, 0x48534d	/* SBI_EXT_HSM */
-	li	a6, 1		/* SBI_EXT_HSM_HART_STOP */
+	li	a7, ASM_SBI_EXT_HSM
+	li	a6, ASM_SBI_EXT_HSM_HART_STOP
 	ecall
 8:	pause
 	j	8b
diff --git a/riscv/sbi-asm-offsets.c b/riscv/sbi-asm-offsets.c
new file mode 100644
index 00000000..bd37b6a2
--- /dev/null
+++ b/riscv/sbi-asm-offsets.c
@@ -0,0 +1,11 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#include <kbuild.h>
+#include <asm/sbi.h>
+
+int main(void)
+{
+	DEFINE(ASM_SBI_EXT_HSM, SBI_EXT_HSM);
+	DEFINE(ASM_SBI_EXT_HSM_HART_STOP, SBI_EXT_HSM_HART_STOP);
+
+	return 0;
+}
diff --git a/riscv/.gitignore b/riscv/.gitignore
new file mode 100644
index 00000000..0a8c5a36
--- /dev/null
+++ b/riscv/.gitignore
@@ -0,0 +1 @@
+/*-asm-offsets.[hs]
-- 
2.47.2



* [kvm-unit-tests PATCH v8 4/6] riscv: lib: Add SBI SSE extension definitions
  2025-03-07 16:15 [kvm-unit-tests PATCH v8 0/6] riscv: add SBI SSE extension tests Clément Léger
                   ` (2 preceding siblings ...)
  2025-03-07 16:15 ` [kvm-unit-tests PATCH v8 3/6] riscv: Use asm-offsets to generate SBI_EXT_HSM values Clément Léger
@ 2025-03-07 16:15 ` Clément Léger
  2025-03-07 16:15 ` [kvm-unit-tests PATCH v8 5/6] lib: riscv: Add SBI SSE support Clément Léger
  2025-03-07 16:15 ` [kvm-unit-tests PATCH v8 6/6] riscv: sbi: Add SSE extension tests Clément Léger
  5 siblings, 0 replies; 13+ messages in thread
From: Clément Léger @ 2025-03-07 16:15 UTC (permalink / raw)
  To: kvm, kvm-riscv
  Cc: Clément Léger, Andrew Jones, Anup Patel, Atish Patra,
	Andrew Jones

Add the SBI SSE extension definitions to sbi.h.

Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Andrew Jones <andrew.jones@linux.dev>
---
 lib/riscv/asm/sbi.h | 106 +++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 105 insertions(+), 1 deletion(-)

diff --git a/lib/riscv/asm/sbi.h b/lib/riscv/asm/sbi.h
index 2f4d91ef..780c9edd 100644
--- a/lib/riscv/asm/sbi.h
+++ b/lib/riscv/asm/sbi.h
@@ -30,6 +30,7 @@ enum sbi_ext_id {
 	SBI_EXT_DBCN = 0x4442434E,
 	SBI_EXT_SUSP = 0x53555350,
 	SBI_EXT_FWFT = 0x46574654,
+	SBI_EXT_SSE = 0x535345,
 };
 
 enum sbi_ext_base_fid {
@@ -78,7 +79,6 @@ enum sbi_ext_dbcn_fid {
 	SBI_EXT_DBCN_CONSOLE_WRITE_BYTE,
 };
 
-
 enum sbi_ext_fwft_fid {
 	SBI_EXT_FWFT_SET = 0,
 	SBI_EXT_FWFT_GET,
@@ -105,6 +105,110 @@ enum sbi_ext_fwft_fid {
 
 #define SBI_FWFT_SET_FLAG_LOCK			BIT(0)
 
+enum sbi_ext_sse_fid {
+	SBI_EXT_SSE_READ_ATTRS = 0,
+	SBI_EXT_SSE_WRITE_ATTRS,
+	SBI_EXT_SSE_REGISTER,
+	SBI_EXT_SSE_UNREGISTER,
+	SBI_EXT_SSE_ENABLE,
+	SBI_EXT_SSE_DISABLE,
+	SBI_EXT_SSE_COMPLETE,
+	SBI_EXT_SSE_INJECT,
+	SBI_EXT_SSE_HART_UNMASK,
+	SBI_EXT_SSE_HART_MASK,
+};
+
+/* SBI SSE Event Attributes. */
+enum sbi_sse_attr_id {
+	SBI_SSE_ATTR_STATUS		= 0x00000000,
+	SBI_SSE_ATTR_PRIORITY		= 0x00000001,
+	SBI_SSE_ATTR_CONFIG		= 0x00000002,
+	SBI_SSE_ATTR_PREFERRED_HART	= 0x00000003,
+	SBI_SSE_ATTR_ENTRY_PC		= 0x00000004,
+	SBI_SSE_ATTR_ENTRY_ARG		= 0x00000005,
+	SBI_SSE_ATTR_INTERRUPTED_SEPC	= 0x00000006,
+	SBI_SSE_ATTR_INTERRUPTED_FLAGS	= 0x00000007,
+	SBI_SSE_ATTR_INTERRUPTED_A6	= 0x00000008,
+	SBI_SSE_ATTR_INTERRUPTED_A7	= 0x00000009,
+};
+
+#define SBI_SSE_ATTR_STATUS_STATE_OFFSET	0
+#define SBI_SSE_ATTR_STATUS_STATE_MASK		0x3
+#define SBI_SSE_ATTR_STATUS_PENDING_OFFSET	2
+#define SBI_SSE_ATTR_STATUS_INJECT_OFFSET	3
+
+#define SBI_SSE_ATTR_CONFIG_ONESHOT		BIT(0)
+
+#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPP	BIT(0)
+#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPIE	BIT(1)
+#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_HSTATUS_SPV	BIT(2)
+#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_HSTATUS_SPVP	BIT(3)
+#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPELP	BIT(4)
+#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SDT	BIT(5)
+
+enum sbi_sse_state {
+	SBI_SSE_STATE_UNUSED		= 0,
+	SBI_SSE_STATE_REGISTERED	= 1,
+	SBI_SSE_STATE_ENABLED		= 2,
+	SBI_SSE_STATE_RUNNING		= 3,
+};
+
+/* SBI SSE Event IDs. */
+/* Range 0x00000000 - 0x0000ffff */
+#define SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS	0x00000000
+#define SBI_SSE_EVENT_LOCAL_DOUBLE_TRAP		0x00000001
+#define SBI_SSE_EVENT_LOCAL_RESERVED_0_START	0x00000002
+#define SBI_SSE_EVENT_LOCAL_RESERVED_0_END	0x00003fff
+#define SBI_SSE_EVENT_LOCAL_PLAT_0_START	0x00004000
+#define SBI_SSE_EVENT_LOCAL_PLAT_0_END		0x00007fff
+
+#define SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS	0x00008000
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_0_START	0x00008001
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_0_END	0x0000bfff
+#define SBI_SSE_EVENT_GLOBAL_PLAT_0_START	0x0000c000
+#define SBI_SSE_EVENT_GLOBAL_PLAT_0_END		0x0000ffff
+
+/* Range 0x00010000 - 0x0001ffff */
+#define SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW	0x00010000
+#define SBI_SSE_EVENT_LOCAL_RESERVED_1_START	0x00010001
+#define SBI_SSE_EVENT_LOCAL_RESERVED_1_END	0x00013fff
+#define SBI_SSE_EVENT_LOCAL_PLAT_1_START	0x00014000
+#define SBI_SSE_EVENT_LOCAL_PLAT_1_END		0x00017fff
+
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_1_START	0x00018000
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_1_END	0x0001bfff
+#define SBI_SSE_EVENT_GLOBAL_PLAT_1_START	0x0001c000
+#define SBI_SSE_EVENT_GLOBAL_PLAT_1_END		0x0001ffff
+
+/* Range 0x00100000 - 0x0010ffff */
+#define SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS	0x00100000
+#define SBI_SSE_EVENT_LOCAL_RESERVED_2_START	0x00100001
+#define SBI_SSE_EVENT_LOCAL_RESERVED_2_END	0x00103fff
+#define SBI_SSE_EVENT_LOCAL_PLAT_2_START	0x00104000
+#define SBI_SSE_EVENT_LOCAL_PLAT_2_END		0x00107fff
+
+#define SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS	0x00108000
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_2_START	0x00108001
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_2_END	0x0010bfff
+#define SBI_SSE_EVENT_GLOBAL_PLAT_2_START	0x0010c000
+#define SBI_SSE_EVENT_GLOBAL_PLAT_2_END		0x0010ffff
+
+/* Range 0xffff0000 - 0xffffffff */
+#define SBI_SSE_EVENT_LOCAL_SOFTWARE		0xffff0000
+#define SBI_SSE_EVENT_LOCAL_RESERVED_3_START	0xffff0001
+#define SBI_SSE_EVENT_LOCAL_RESERVED_3_END	0xffff3fff
+#define SBI_SSE_EVENT_LOCAL_PLAT_3_START	0xffff4000
+#define SBI_SSE_EVENT_LOCAL_PLAT_3_END		0xffff7fff
+
+#define SBI_SSE_EVENT_GLOBAL_SOFTWARE		0xffff8000
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_3_START	0xffff8001
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_3_END	0xffffbfff
+#define SBI_SSE_EVENT_GLOBAL_PLAT_3_START	0xffffc000
+#define SBI_SSE_EVENT_GLOBAL_PLAT_3_END		0xffffffff
+
+#define SBI_SSE_EVENT_PLATFORM_BIT		BIT(14)
+#define SBI_SSE_EVENT_GLOBAL_BIT		BIT(15)
+
 struct sbiret {
 	long error;
 	long value;
-- 
2.47.2



* [kvm-unit-tests PATCH v8 5/6] lib: riscv: Add SBI SSE support
  2025-03-07 16:15 [kvm-unit-tests PATCH v8 0/6] riscv: add SBI SSE extension tests Clément Léger
                   ` (3 preceding siblings ...)
  2025-03-07 16:15 ` [kvm-unit-tests PATCH v8 4/6] riscv: lib: Add SBI SSE extension definitions Clément Léger
@ 2025-03-07 16:15 ` Clément Léger
  2025-03-12 16:29   ` Andrew Jones
  2025-03-07 16:15 ` [kvm-unit-tests PATCH v8 6/6] riscv: sbi: Add SSE extension tests Clément Léger
  5 siblings, 1 reply; 13+ messages in thread
From: Clément Léger @ 2025-03-07 16:15 UTC (permalink / raw)
  To: kvm, kvm-riscv
  Cc: Clément Léger, Andrew Jones, Anup Patel, Atish Patra,
	Andrew Jones

Add support for registering and handling SSE events. This will be used
by the SBI tests as well as the upcoming double trap tests.

Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Andrew Jones <andrew.jones@linux.dev>
---
 riscv/Makefile          |   1 +
 lib/riscv/asm/csr.h     |   1 +
 lib/riscv/asm/sbi.h     |  38 ++++++++++++++-
 lib/riscv/sbi-sse-asm.S | 103 ++++++++++++++++++++++++++++++++++++++++
 lib/riscv/asm-offsets.c |   9 ++++
 lib/riscv/sbi.c         |  76 +++++++++++++++++++++++++++++
 6 files changed, 227 insertions(+), 1 deletion(-)
 create mode 100644 lib/riscv/sbi-sse-asm.S

diff --git a/riscv/Makefile b/riscv/Makefile
index 02d2ac39..16fc125b 100644
--- a/riscv/Makefile
+++ b/riscv/Makefile
@@ -43,6 +43,7 @@ cflatobjs += lib/riscv/setup.o
 cflatobjs += lib/riscv/smp.o
 cflatobjs += lib/riscv/stack.o
 cflatobjs += lib/riscv/timer.o
+cflatobjs += lib/riscv/sbi-sse-asm.o
 ifeq ($(ARCH),riscv32)
 cflatobjs += lib/ldiv32.o
 endif
diff --git a/lib/riscv/asm/csr.h b/lib/riscv/asm/csr.h
index c7fc87a9..3e4b5fca 100644
--- a/lib/riscv/asm/csr.h
+++ b/lib/riscv/asm/csr.h
@@ -17,6 +17,7 @@
 #define CSR_TIME		0xc01
 
 #define SR_SIE			_AC(0x00000002, UL)
+#define SR_SPP			_AC(0x00000100, UL)
 
 /* Exception cause high bit - is an interrupt if set */
 #define CAUSE_IRQ_FLAG		(_AC(1, UL) << (__riscv_xlen - 1))
diff --git a/lib/riscv/asm/sbi.h b/lib/riscv/asm/sbi.h
index 780c9edd..acef8a5e 100644
--- a/lib/riscv/asm/sbi.h
+++ b/lib/riscv/asm/sbi.h
@@ -230,5 +230,41 @@ struct sbiret sbi_send_ipi_broadcast(void);
 struct sbiret sbi_set_timer(unsigned long stime_value);
 long sbi_probe(int ext);
 
-#endif /* !__ASSEMBLER__ */
+typedef void (*sbi_sse_handler_fn)(void *data, struct pt_regs *regs, unsigned int hartid);
+
+struct sbi_sse_handler_arg {
+	unsigned long reg_tmp;
+	sbi_sse_handler_fn handler;
+	void *handler_data;
+	void *stack;
+};
+
+extern void sbi_sse_entry(void);
+
+static inline bool sbi_sse_event_is_global(uint32_t event_id)
+{
+	return !!(event_id & SBI_SSE_EVENT_GLOBAL_BIT);
+}
+
+struct sbiret sbi_sse_read_attrs_raw(unsigned long event_id, unsigned long base_attr_id,
+				     unsigned long attr_count, unsigned long phys_lo,
+				     unsigned long phys_hi);
+struct sbiret sbi_sse_read_attrs(unsigned long event_id, unsigned long base_attr_id,
+				 unsigned long attr_count, unsigned long *values);
+struct sbiret sbi_sse_write_attrs_raw(unsigned long event_id, unsigned long base_attr_id,
+				      unsigned long attr_count, unsigned long phys_lo,
+				      unsigned long phys_hi);
+struct sbiret sbi_sse_write_attrs(unsigned long event_id, unsigned long base_attr_id,
+				  unsigned long attr_count, unsigned long *values);
+struct sbiret sbi_sse_register_raw(unsigned long event_id, unsigned long entry_pc,
+				   unsigned long entry_arg);
+struct sbiret sbi_sse_register(unsigned long event_id, struct sbi_sse_handler_arg *arg);
+struct sbiret sbi_sse_unregister(unsigned long event_id);
+struct sbiret sbi_sse_enable(unsigned long event_id);
+struct sbiret sbi_sse_disable(unsigned long event_id);
+struct sbiret sbi_sse_hart_mask(void);
+struct sbiret sbi_sse_hart_unmask(void);
+struct sbiret sbi_sse_inject(unsigned long event_id, unsigned long hart_id);
+
+#endif /* !__ASSEMBLER__ */
 #endif /* _ASMRISCV_SBI_H_ */
diff --git a/lib/riscv/sbi-sse-asm.S b/lib/riscv/sbi-sse-asm.S
new file mode 100644
index 00000000..e4efd1ff
--- /dev/null
+++ b/lib/riscv/sbi-sse-asm.S
@@ -0,0 +1,103 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * RISC-V SSE events entry point.
+ *
+ * Copyright (C) 2025, Rivos Inc., Clément Léger <cleger@rivosinc.com>
+ */
+#define __ASSEMBLY__
+#include <asm/asm.h>
+#include <asm/asm-offsets.h>
+#include <asm/csr.h>
+#include <generated/sbi-asm-offsets.h>
+
+.section .text
+.global sbi_sse_entry
+sbi_sse_entry:
+	/* Save stack temporarily */
+	REG_S	sp, SBI_SSE_REG_TMP(a7)
+	/* Set entry stack */
+	REG_L	sp, SBI_SSE_HANDLER_STACK(a7)
+
+	addi	sp, sp, -(PT_SIZE)
+	REG_S	ra, PT_RA(sp)
+	REG_S	s0, PT_S0(sp)
+	REG_S	s1, PT_S1(sp)
+	REG_S	s2, PT_S2(sp)
+	REG_S	s3, PT_S3(sp)
+	REG_S	s4, PT_S4(sp)
+	REG_S	s5, PT_S5(sp)
+	REG_S	s6, PT_S6(sp)
+	REG_S	s7, PT_S7(sp)
+	REG_S	s8, PT_S8(sp)
+	REG_S	s9, PT_S9(sp)
+	REG_S	s10, PT_S10(sp)
+	REG_S	s11, PT_S11(sp)
+	REG_S	tp, PT_TP(sp)
+	REG_S	t0, PT_T0(sp)
+	REG_S	t1, PT_T1(sp)
+	REG_S	t2, PT_T2(sp)
+	REG_S	t3, PT_T3(sp)
+	REG_S	t4, PT_T4(sp)
+	REG_S	t5, PT_T5(sp)
+	REG_S	t6, PT_T6(sp)
+	REG_S	gp, PT_GP(sp)
+	REG_S	a0, PT_A0(sp)
+	REG_S	a1, PT_A1(sp)
+	REG_S	a2, PT_A2(sp)
+	REG_S	a3, PT_A3(sp)
+	REG_S	a4, PT_A4(sp)
+	REG_S	a5, PT_A5(sp)
+	csrr	a1, CSR_SEPC
+	REG_S	a1, PT_EPC(sp)
+	csrr	a2, CSR_SSTATUS
+	REG_S	a2, PT_STATUS(sp)
+
+	REG_L	a0, SBI_SSE_REG_TMP(a7)
+	REG_S	a0, PT_SP(sp)
+
+	REG_L	t0, SBI_SSE_HANDLER(a7)
+	REG_L	a0, SBI_SSE_HANDLER_DATA(a7)
+	mv	a1, sp
+	mv	a2, a6
+	jalr	t0
+
+	REG_L	a1, PT_EPC(sp)
+	REG_L	a2, PT_STATUS(sp)
+	csrw	CSR_SEPC, a1
+	csrw	CSR_SSTATUS, a2
+
+	REG_L	ra, PT_RA(sp)
+	REG_L	s0, PT_S0(sp)
+	REG_L	s1, PT_S1(sp)
+	REG_L	s2, PT_S2(sp)
+	REG_L	s3, PT_S3(sp)
+	REG_L	s4, PT_S4(sp)
+	REG_L	s5, PT_S5(sp)
+	REG_L	s6, PT_S6(sp)
+	REG_L	s7, PT_S7(sp)
+	REG_L	s8, PT_S8(sp)
+	REG_L	s9, PT_S9(sp)
+	REG_L	s10, PT_S10(sp)
+	REG_L	s11, PT_S11(sp)
+	REG_L	tp, PT_TP(sp)
+	REG_L	t0, PT_T0(sp)
+	REG_L	t1, PT_T1(sp)
+	REG_L	t2, PT_T2(sp)
+	REG_L	t3, PT_T3(sp)
+	REG_L	t4, PT_T4(sp)
+	REG_L	t5, PT_T5(sp)
+	REG_L	t6, PT_T6(sp)
+	REG_L	gp, PT_GP(sp)
+	REG_L	a0, PT_A0(sp)
+	REG_L	a1, PT_A1(sp)
+	REG_L	a2, PT_A2(sp)
+	REG_L	a3, PT_A3(sp)
+	REG_L	a4, PT_A4(sp)
+	REG_L	a5, PT_A5(sp)
+
+	REG_L	sp, PT_SP(sp)
+
+	li	a7, ASM_SBI_EXT_SSE
+	li	a6, ASM_SBI_EXT_SSE_COMPLETE
+	ecall
+
diff --git a/lib/riscv/asm-offsets.c b/lib/riscv/asm-offsets.c
index 6c511c14..a96c6e97 100644
--- a/lib/riscv/asm-offsets.c
+++ b/lib/riscv/asm-offsets.c
@@ -3,6 +3,7 @@
 #include <elf.h>
 #include <asm/processor.h>
 #include <asm/ptrace.h>
+#include <asm/sbi.h>
 #include <asm/smp.h>
 
 int main(void)
@@ -63,5 +64,13 @@ int main(void)
 	OFFSET(THREAD_INFO_HARTID, thread_info, hartid);
 	DEFINE(THREAD_INFO_SIZE, sizeof(struct thread_info));
 
+	DEFINE(ASM_SBI_EXT_SSE, SBI_EXT_SSE);
+	DEFINE(ASM_SBI_EXT_SSE_COMPLETE, SBI_EXT_SSE_COMPLETE);
+
+	OFFSET(SBI_SSE_REG_TMP, sbi_sse_handler_arg, reg_tmp);
+	OFFSET(SBI_SSE_HANDLER, sbi_sse_handler_arg, handler);
+	OFFSET(SBI_SSE_HANDLER_DATA, sbi_sse_handler_arg, handler_data);
+	OFFSET(SBI_SSE_HANDLER_STACK, sbi_sse_handler_arg, stack);
+
 	return 0;
 }
diff --git a/lib/riscv/sbi.c b/lib/riscv/sbi.c
index 02dd338c..1752c916 100644
--- a/lib/riscv/sbi.c
+++ b/lib/riscv/sbi.c
@@ -2,6 +2,7 @@
 #include <libcflat.h>
 #include <cpumask.h>
 #include <limits.h>
+#include <asm/io.h>
 #include <asm/sbi.h>
 #include <asm/setup.h>
 
@@ -31,6 +32,81 @@ struct sbiret sbi_ecall(int ext, int fid, unsigned long arg0,
 	return ret;
 }
 
+struct sbiret sbi_sse_read_attrs_raw(unsigned long event_id, unsigned long base_attr_id,
+				     unsigned long attr_count, unsigned long phys_lo,
+				      unsigned long phys_hi)
+{
+	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_READ_ATTRS, event_id, base_attr_id, attr_count,
+			 phys_lo, phys_hi, 0);
+}
+
+struct sbiret sbi_sse_read_attrs(unsigned long event_id, unsigned long base_attr_id,
+				 unsigned long attr_count, unsigned long *values)
+{
+	phys_addr_t p = virt_to_phys(values);
+
+	return sbi_sse_read_attrs_raw(event_id, base_attr_id, attr_count, lower_32_bits(p),
+				      upper_32_bits(p));
+}
+
+struct sbiret sbi_sse_write_attrs_raw(unsigned long event_id, unsigned long base_attr_id,
+				      unsigned long attr_count, unsigned long phys_lo,
+				      unsigned long phys_hi)
+{
+	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_WRITE_ATTRS, event_id, base_attr_id, attr_count,
+			 phys_lo, phys_hi, 0);
+}
+
+struct sbiret sbi_sse_write_attrs(unsigned long event_id, unsigned long base_attr_id,
+				  unsigned long attr_count, unsigned long *values)
+{
+	phys_addr_t p = virt_to_phys(values);
+
+	return sbi_sse_write_attrs_raw(event_id, base_attr_id, attr_count, lower_32_bits(p),
+				       upper_32_bits(p));
+}
+
+struct sbiret sbi_sse_register_raw(unsigned long event_id, unsigned long entry_pc,
+				   unsigned long entry_arg)
+{
+	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_REGISTER, event_id, entry_pc, entry_arg, 0, 0, 0);
+}
+
+struct sbiret sbi_sse_register(unsigned long event_id, struct sbi_sse_handler_arg *arg)
+{
+	return sbi_sse_register_raw(event_id, (unsigned long) sbi_sse_entry, (unsigned long) arg);
+}
+
+struct sbiret sbi_sse_unregister(unsigned long event_id)
+{
+	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_UNREGISTER, event_id, 0, 0, 0, 0, 0);
+}
+
+struct sbiret sbi_sse_enable(unsigned long event_id)
+{
+	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_ENABLE, event_id, 0, 0, 0, 0, 0);
+}
+
+struct sbiret sbi_sse_disable(unsigned long event_id)
+{
+	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_DISABLE, event_id, 0, 0, 0, 0, 0);
+}
+
+struct sbiret sbi_sse_hart_mask(void)
+{
+	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_HART_MASK, 0, 0, 0, 0, 0, 0);
+}
+
+struct sbiret sbi_sse_hart_unmask(void)
+{
+	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_HART_UNMASK, 0, 0, 0, 0, 0, 0);
+}
+
+struct sbiret sbi_sse_inject(unsigned long event_id, unsigned long hart_id)
+{
+	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_INJECT, event_id, hart_id, 0, 0, 0, 0);
+}
+
 void sbi_shutdown(void)
 {
 	sbi_ecall(SBI_EXT_SRST, 0, 0, 0, 0, 0, 0, 0);
-- 
2.47.2



* [kvm-unit-tests PATCH v8 6/6] riscv: sbi: Add SSE extension tests
  2025-03-07 16:15 [kvm-unit-tests PATCH v8 0/6] riscv: add SBI SSE extension tests Clément Léger
                   ` (4 preceding siblings ...)
  2025-03-07 16:15 ` [kvm-unit-tests PATCH v8 5/6] lib: riscv: Add SBI SSE support Clément Léger
@ 2025-03-07 16:15 ` Clément Léger
  2025-03-12 17:51   ` Andrew Jones
  5 siblings, 1 reply; 13+ messages in thread
From: Clément Léger @ 2025-03-07 16:15 UTC (permalink / raw)
  To: kvm, kvm-riscv
  Cc: Clément Léger, Andrew Jones, Anup Patel, Atish Patra

Add SBI SSE extension tests for the following features:
- Attribute errors (invalid values, read-only attributes, etc.)
- Registration errors
- Simple events (register, enable, inject)
- Events with different priorities
- Global event dispatch on different harts
- Local events on all harts
- Hart mask/unmask events

Signed-off-by: Clément Léger <cleger@rivosinc.com>
---
 riscv/Makefile    |    1 +
 riscv/sbi-tests.h |    1 +
 riscv/sbi-sse.c   | 1215 +++++++++++++++++++++++++++++++++++++++++++++
 riscv/sbi.c       |    2 +
 4 files changed, 1219 insertions(+)
 create mode 100644 riscv/sbi-sse.c

diff --git a/riscv/Makefile b/riscv/Makefile
index 16fc125b..4fe2f1bb 100644
--- a/riscv/Makefile
+++ b/riscv/Makefile
@@ -18,6 +18,7 @@ tests += $(TEST_DIR)/sieve.$(exe)
 all: $(tests)
 
 $(TEST_DIR)/sbi-deps = $(TEST_DIR)/sbi-asm.o $(TEST_DIR)/sbi-fwft.o
+$(TEST_DIR)/sbi-deps += $(TEST_DIR)/sbi-sse.o
 
 # When built for EFI sieve needs extra memory, run with e.g. '-m 256' on QEMU
 $(TEST_DIR)/sieve.$(exe): AUXFLAGS = 0x1
diff --git a/riscv/sbi-tests.h b/riscv/sbi-tests.h
index b081464d..a71da809 100644
--- a/riscv/sbi-tests.h
+++ b/riscv/sbi-tests.h
@@ -71,6 +71,7 @@
 	sbiret_report(ret, expected_error, expected_value, "check sbi.error and sbi.value")
 
 void sbi_bad_fid(int ext);
+void check_sse(void);
 
 #endif /* __ASSEMBLER__ */
 #endif /* _RISCV_SBI_TESTS_H_ */
diff --git a/riscv/sbi-sse.c b/riscv/sbi-sse.c
new file mode 100644
index 00000000..7bd58b8b
--- /dev/null
+++ b/riscv/sbi-sse.c
@@ -0,0 +1,1215 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * SBI SSE testsuite
+ *
+ * Copyright (C) 2025, Rivos Inc., Clément Léger <cleger@rivosinc.com>
+ */
+#include <alloc.h>
+#include <alloc_page.h>
+#include <bitops.h>
+#include <cpumask.h>
+#include <libcflat.h>
+#include <on-cpus.h>
+#include <stdlib.h>
+
+#include <asm/barrier.h>
+#include <asm/delay.h>
+#include <asm/io.h>
+#include <asm/page.h>
+#include <asm/processor.h>
+#include <asm/sbi.h>
+#include <asm/setup.h>
+#include <asm/timer.h>
+
+#include "sbi-tests.h"
+
+#define SSE_STACK_SIZE	PAGE_SIZE
+
+struct sse_event_info {
+	uint32_t event_id;
+	const char *name;
+	bool can_inject;
+};
+
+static struct sse_event_info sse_event_infos[] = {
+	{
+		.event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS,
+		.name = "local_high_prio_ras",
+	},
+	{
+		.event_id = SBI_SSE_EVENT_LOCAL_DOUBLE_TRAP,
+		.name = "double_trap",
+	},
+	{
+		.event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS,
+		.name = "global_high_prio_ras",
+	},
+	{
+		.event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW,
+		.name = "local_pmu_overflow",
+	},
+	{
+		.event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS,
+		.name = "local_low_prio_ras",
+	},
+	{
+		.event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS,
+		.name = "global_low_prio_ras",
+	},
+	{
+		.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE,
+		.name = "local_software",
+	},
+	{
+		.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE,
+		.name = "global_software",
+	},
+};
+
+static const char *const attr_names[] = {
+	[SBI_SSE_ATTR_STATUS] = "status",
+	[SBI_SSE_ATTR_PRIORITY] = "priority",
+	[SBI_SSE_ATTR_CONFIG] = "config",
+	[SBI_SSE_ATTR_PREFERRED_HART] = "preferred_hart",
+	[SBI_SSE_ATTR_ENTRY_PC] = "entry_pc",
+	[SBI_SSE_ATTR_ENTRY_ARG] = "entry_arg",
+	[SBI_SSE_ATTR_INTERRUPTED_SEPC] = "interrupted_sepc",
+	[SBI_SSE_ATTR_INTERRUPTED_FLAGS] = "interrupted_flags",
+	[SBI_SSE_ATTR_INTERRUPTED_A6] = "interrupted_a6",
+	[SBI_SSE_ATTR_INTERRUPTED_A7] = "interrupted_a7",
+};
+
+static const unsigned long ro_attrs[] = {
+	SBI_SSE_ATTR_STATUS,
+	SBI_SSE_ATTR_ENTRY_PC,
+	SBI_SSE_ATTR_ENTRY_ARG,
+};
+
+static const unsigned long interrupted_attrs[] = {
+	SBI_SSE_ATTR_INTERRUPTED_SEPC,
+	SBI_SSE_ATTR_INTERRUPTED_FLAGS,
+	SBI_SSE_ATTR_INTERRUPTED_A6,
+	SBI_SSE_ATTR_INTERRUPTED_A7,
+};
+
+static const unsigned long interrupted_flags[] = {
+	SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPP,
+	SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPIE,
+	SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPELP,
+	SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SDT,
+	SBI_SSE_ATTR_INTERRUPTED_FLAGS_HSTATUS_SPV,
+	SBI_SSE_ATTR_INTERRUPTED_FLAGS_HSTATUS_SPVP,
+};
+
+static struct sse_event_info *sse_event_get_info(uint32_t event_id)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(sse_event_infos); i++) {
+		if (sse_event_infos[i].event_id == event_id)
+			return &sse_event_infos[i];
+	}
+
+	assert_msg(false, "Invalid event id: %d", event_id);
+}
+
+static const char *sse_event_name(uint32_t event_id)
+{
+	return sse_event_get_info(event_id)->name;
+}
+
+static bool sse_event_can_inject(uint32_t event_id)
+{
+	return sse_event_get_info(event_id)->can_inject;
+}
+
+static struct sbiret sse_get_event_status_field(uint32_t event_id, unsigned long mask,
+						unsigned long shift, unsigned long *value)
+{
+	struct sbiret ret;
+	unsigned long status;
+
+	ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_STATUS, 1, &status);
+	if (ret.error) {
+		sbiret_report_error(&ret, SBI_SUCCESS, "Get event status");
+		return ret;
+	}
+
+	*value = (status & mask) >> shift;
+
+	return ret;
+}
+
+static struct sbiret sse_event_get_state(uint32_t event_id, enum sbi_sse_state *state)
+{
+	unsigned long status = 0;
+	struct sbiret ret;
+
+	ret = sse_get_event_status_field(event_id, SBI_SSE_ATTR_STATUS_STATE_MASK,
+					  SBI_SSE_ATTR_STATUS_STATE_OFFSET, &status);
+	*state = status;
+
+	return ret;
+}
+
+static unsigned long sse_global_event_set_current_hart(uint32_t event_id)
+{
+	struct sbiret ret;
+	unsigned long current_hart = current_thread_info()->hartid;
+
+	if (!sbi_sse_event_is_global(event_id))
+		return SBI_ERR_INVALID_PARAM;
+
+	ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &current_hart);
+	if (!sbiret_report_error(&ret, SBI_SUCCESS, "Set preferred hart"))
+		return ret.error;
+
+	return 0;
+}
+
+static bool sse_check_state(uint32_t event_id, unsigned long expected_state)
+{
+	struct sbiret ret;
+	enum sbi_sse_state state;
+
+	ret = sse_event_get_state(event_id, &state);
+	if (ret.error)
+		return false;
+
+	return report(state == expected_state, "event state == %ld (got %ld)",
+		      expected_state, (unsigned long)state);
+}
+
+static bool sse_event_pending(uint32_t event_id)
+{
+	unsigned long pending = 0;
+
+	sse_get_event_status_field(event_id, BIT(SBI_SSE_ATTR_STATUS_PENDING_OFFSET),
+				   SBI_SSE_ATTR_STATUS_PENDING_OFFSET, &pending);
+
+	return pending;
+}
+
+static void *sse_alloc_stack(void)
+{
+	/*
+	 * We assume that SSE_STACK_SIZE always fits in one page. The stack
+	 * pointer is always decremented before storing anything on the stack
+	 * in sse-entry.S.
+	 */
+	assert(SSE_STACK_SIZE <= PAGE_SIZE);
+
+	return (alloc_page() + SSE_STACK_SIZE);
+}
+
+static void sse_free_stack(void *stack)
+{
+	free_page(stack - SSE_STACK_SIZE);
+}
+
+static void sse_read_write_test(uint32_t event_id, unsigned long attr, unsigned long attr_count,
+				unsigned long *value, long expected_error, const char *str)
+{
+	struct sbiret ret;
+
+	ret = sbi_sse_read_attrs(event_id, attr, attr_count, value);
+	sbiret_report_error(&ret, expected_error, "Read %s error", str);
+
+	ret = sbi_sse_write_attrs(event_id, attr, attr_count, value);
+	sbiret_report_error(&ret, expected_error, "Write %s error", str);
+}
+
+#define ALL_ATTRS_COUNT	(SBI_SSE_ATTR_INTERRUPTED_A7 + 1)
+
+static void sse_test_attrs(uint32_t event_id)
+{
+	unsigned long value = 0;
+	struct sbiret ret;
+	void *ptr;
+	unsigned long values[ALL_ATTRS_COUNT];
+	unsigned int i;
+	const char *invalid_hart_str;
+	const char *attr_name;
+
+	report_prefix_push("attrs");
+
+	for (i = 0; i < ARRAY_SIZE(ro_attrs); i++) {
+		ret = sbi_sse_write_attrs(event_id, ro_attrs[i], 1, &value);
+		sbiret_report_error(&ret, SBI_ERR_DENIED, "RO attribute %s not writable",
+				    attr_names[ro_attrs[i]]);
+	}
+
+	ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_STATUS, ALL_ATTRS_COUNT, values);
+	sbiret_report_error(&ret, SBI_SUCCESS, "Read multiple attributes");
+
+	for (i = SBI_SSE_ATTR_STATUS; i <= SBI_SSE_ATTR_INTERRUPTED_A7; i++) {
+		ret = sbi_sse_read_attrs(event_id, i, 1, &value);
+		attr_name = attr_names[i];
+
+		sbiret_report_error(&ret, SBI_SUCCESS, "Read single attribute %s", attr_name);
+		if (values[i] != value)
+			report_fail("Attribute 0x%x single value read (0x%lx) differs from the one read with multiple attributes (0x%lx)",
+				    i, value, values[i]);
+		/*
+		 * Preferred hart reset value is defined by SBI vendor
+		 */
+		if (i != SBI_SSE_ATTR_PREFERRED_HART) {
+			/*
+			 * Specification states that injectable bit is implementation dependent
+			 * but other bits are zero-initialized.
+			 */
+			if (i == SBI_SSE_ATTR_STATUS)
+				value &= ~BIT(SBI_SSE_ATTR_STATUS_INJECT_OFFSET);
+			report(value == 0, "Attribute %s reset value is 0, found %lx", attr_name,
+			       value);
+		}
+	}
+
+#if __riscv_xlen > 32
+	value = BIT(32);
+	ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value);
+	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Write invalid prio > 0xFFFFFFFF error");
+#endif
+
+	value = ~SBI_SSE_ATTR_CONFIG_ONESHOT;
+	ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_CONFIG, 1, &value);
+	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Write invalid config value error");
+
+	if (sbi_sse_event_is_global(event_id)) {
+		invalid_hart_str = getenv("INVALID_HART_ID");
+		if (!invalid_hart_str)
+			value = 0xFFFFFFFFUL;
+		else
+			value = strtoul(invalid_hart_str, NULL, 0);
+
+		ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &value);
+		sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Set invalid hart id error");
+	} else {
+		/* Set Hart on local event -> RO */
+		value = current_thread_info()->hartid;
+		ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &value);
+		sbiret_report_error(&ret, SBI_ERR_DENIED,
+				    "Set hart id on local event error");
+	}
+
+	/* Set/get flags, sepc, a6, a7 */
+	for (i = 0; i < ARRAY_SIZE(interrupted_attrs); i++) {
+		attr_name = attr_names[interrupted_attrs[i]];
+		ret = sbi_sse_read_attrs(event_id, interrupted_attrs[i], 1, &value);
+		sbiret_report_error(&ret, SBI_SUCCESS, "Get interrupted %s", attr_name);
+
+		value = ARRAY_SIZE(interrupted_attrs) - i;
+		ret = sbi_sse_write_attrs(event_id, interrupted_attrs[i], 1, &value);
+		sbiret_report_error(&ret, SBI_ERR_INVALID_STATE,
+				    "Set attribute %s invalid state error", attr_name);
+	}
+
+	sse_read_write_test(event_id, SBI_SSE_ATTR_STATUS, 0, &value, SBI_ERR_INVALID_PARAM,
+			    "attribute attr_count == 0");
+	sse_read_write_test(event_id, SBI_SSE_ATTR_INTERRUPTED_A7 + 1, 1, &value, SBI_ERR_BAD_RANGE,
+			    "invalid attribute");
+
+	/* Misaligned pointer address */
+	ptr = (void *)&value;
+	ptr += 1;
+	sse_read_write_test(event_id, SBI_SSE_ATTR_STATUS, 1, ptr, SBI_ERR_INVALID_ADDRESS,
+			    "attribute with invalid address");
+
+	report_prefix_pop();
+}
+
+static void sse_test_register_error(uint32_t event_id)
+{
+	struct sbiret ret;
+
+	report_prefix_push("register");
+
+	ret = sbi_sse_unregister(event_id);
+	sbiret_report_error(&ret, SBI_ERR_INVALID_STATE, "unregister non-registered event");
+
+	ret = sbi_sse_register_raw(event_id, 0x1, 0);
+	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "register misaligned entry");
+
+	ret = sbi_sse_register_raw(event_id, (unsigned long)sbi_sse_entry, 0);
+	sbiret_report_error(&ret, SBI_SUCCESS, "register");
+	if (ret.error)
+		goto done;
+
+	ret = sbi_sse_register_raw(event_id, (unsigned long)sbi_sse_entry, 0);
+	sbiret_report_error(&ret, SBI_ERR_INVALID_STATE, "register used event failure");
+
+	ret = sbi_sse_unregister(event_id);
+	sbiret_report_error(&ret, SBI_SUCCESS, "unregister");
+
+done:
+	report_prefix_pop();
+}
+
+struct sse_simple_test_arg {
+	bool done;
+	unsigned long expected_a6;
+	uint32_t event_id;
+};
+
+#if __riscv_xlen > 32
+
+struct alias_test_params {
+	unsigned long event_id;
+	unsigned long attr_id;
+	unsigned long attr_count;
+	const char *str;
+};
+
+static void test_alias(uint32_t event_id)
+{
+	struct alias_test_params *write, *read;
+	unsigned long write_value, read_value;
+	struct sbiret ret;
+	bool err = false;
+	int r, w;
+	struct alias_test_params params[] = {
+		{event_id, SBI_SSE_ATTR_INTERRUPTED_A6, 1, "non aliased"},
+		{BIT(32) + event_id, SBI_SSE_ATTR_INTERRUPTED_A6, 1, "aliased event_id"},
+		{event_id, BIT(32) + SBI_SSE_ATTR_INTERRUPTED_A6, 1, "aliased attr_id"},
+		{event_id, SBI_SSE_ATTR_INTERRUPTED_A6, BIT(32) + 1, "aliased attr_count"},
+	};
+
+	report_prefix_push("alias");
+	for (w = 0; w < ARRAY_SIZE(params); w++) {
+		write = &params[w];
+
+		write_value = 0xDEADBEEF + w;
+		ret = sbi_sse_write_attrs(write->event_id, write->attr_id, write->attr_count,
+					  &write_value);
+		if (ret.error)
+			sbiret_report_error(&ret, SBI_SUCCESS,
+					    "Write %s, event 0x%lx attr 0x%lx, attr count 0x%lx",
+					    write->str, write->event_id, write->attr_id,
+					    write->attr_count);
+
+		for (r = 0; r < ARRAY_SIZE(params); r++) {
+			read = &params[r];
+			read_value = 0;
+			ret = sbi_sse_read_attrs(read->event_id, read->attr_id, read->attr_count, &read_value);
+			if (ret.error)
+				sbiret_report_error(&ret, SBI_SUCCESS,
+						    "Read %s, event 0x%lx attr 0x%lx, attr count 0x%lx",
+						    read->str, read->event_id, read->attr_id,
+						    read->attr_count);
+
+			/* Do not spam output with a lot of reports */
+			if (write_value != read_value) {
+				err = true;
+				report_fail("Write %s, event 0x%lx attr 0x%lx, attr count 0x%lx value %lx != "
+					    "Read %s, event 0x%lx attr 0x%lx, attr count 0x%lx value %lx",
+					    write->str, write->event_id, write->attr_id, write->attr_count,
+					    write_value,
+					    read->str, read->event_id, read->attr_id, read->attr_count,
+					    read_value);
+			}
+		}
+	}
+
+	report(!err, "BIT(32) aliasing tests");
+	report_prefix_pop();
+}
+#endif
+
+static void sse_simple_handler(void *data, struct pt_regs *regs, unsigned int hartid)
+{
+	struct sse_simple_test_arg *arg = data;
+	int i;
+	struct sbiret ret;
+	const char *attr_name;
+	uint32_t event_id = READ_ONCE(arg->event_id), attr;
+	unsigned long value, prev_value, flags;
+	unsigned long interrupted_state[ARRAY_SIZE(interrupted_attrs)];
+	unsigned long modified_state[ARRAY_SIZE(interrupted_attrs)] = {4, 3, 2, 1};
+	unsigned long tmp_state[ARRAY_SIZE(interrupted_attrs)];
+
+	report((regs->status & SR_SPP) == SR_SPP, "Interrupted S-mode");
+	report(hartid == current_thread_info()->hartid, "Hartid correctly passed");
+	sse_check_state(event_id, SBI_SSE_STATE_RUNNING);
+	report(!sse_event_pending(event_id), "Event not pending");
+
+	/* Read full interrupted state */
+	ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_INTERRUPTED_SEPC,
+				 ARRAY_SIZE(interrupted_attrs), interrupted_state);
+	sbiret_report_error(&ret, SBI_SUCCESS, "Save full interrupted state from handler");
+
+	/* Write full modified state and read it */
+	ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_INTERRUPTED_SEPC,
+				  ARRAY_SIZE(modified_state), modified_state);
+	sbiret_report_error(&ret, SBI_SUCCESS,
+			    "Write full interrupted state from handler");
+
+	ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_INTERRUPTED_SEPC,
+				 ARRAY_SIZE(tmp_state), tmp_state);
+	sbiret_report_error(&ret, SBI_SUCCESS, "Read full modified state from handler");
+
+	report(memcmp(tmp_state, modified_state, sizeof(modified_state)) == 0,
+	       "Full interrupted state successfully written");
+
+#if __riscv_xlen > 32
+	test_alias(event_id);
+#endif
+
+	/* Restore full saved state */
+	ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_INTERRUPTED_SEPC,
+				  ARRAY_SIZE(interrupted_attrs), interrupted_state);
+	sbiret_report_error(&ret, SBI_SUCCESS,
+			    "Full interrupted state restore from handler");
+
+	/* We test SBI_SSE_ATTR_INTERRUPTED_FLAGS below with specific flag values */
+	for (i = 0; i < ARRAY_SIZE(interrupted_attrs); i++) {
+		attr = interrupted_attrs[i];
+		if (attr == SBI_SSE_ATTR_INTERRUPTED_FLAGS)
+			continue;
+
+		attr_name = attr_names[attr];
+
+		ret = sbi_sse_read_attrs(event_id, attr, 1, &prev_value);
+		sbiret_report_error(&ret, SBI_SUCCESS, "Get attr %s", attr_name);
+
+		value = 0xDEADBEEF + i;
+		ret = sbi_sse_write_attrs(event_id, attr, 1, &value);
+		sbiret_report_error(&ret, SBI_SUCCESS, "Set attr %s", attr_name);
+
+		ret = sbi_sse_read_attrs(event_id, attr, 1, &value);
+		sbiret_report_error(&ret, SBI_SUCCESS, "Get attr %s", attr_name);
+		report(value == 0xDEADBEEF + i, "Get attr %s, value: 0x%lx", attr_name,
+		       value);
+
+		ret = sbi_sse_write_attrs(event_id, attr, 1, &prev_value);
+		sbiret_report_error(&ret, SBI_SUCCESS, "Restore attr %s value", attr_name);
+	}
+
+	/* Test all flags allowed for SBI_SSE_ATTR_INTERRUPTED_FLAGS */
+	attr = SBI_SSE_ATTR_INTERRUPTED_FLAGS;
+	ret = sbi_sse_read_attrs(event_id, attr, 1, &prev_value);
+	sbiret_report_error(&ret, SBI_SUCCESS, "Save interrupted flags");
+
+	for (i = 0; i < ARRAY_SIZE(interrupted_flags); i++) {
+		flags = interrupted_flags[i];
+		ret = sbi_sse_write_attrs(event_id, attr, 1, &flags);
+		sbiret_report_error(&ret, SBI_SUCCESS,
+				    "Set interrupted flags bit 0x%lx value", flags);
+		ret = sbi_sse_read_attrs(event_id, attr, 1, &value);
+		sbiret_report_error(&ret, SBI_SUCCESS, "Get interrupted flags after set");
+		report(value == flags, "interrupted flags modified value: 0x%lx", value);
+	}
+
+	/* Write invalid bit in flag register */
+	flags = SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SDT << 1;
+	ret = sbi_sse_write_attrs(event_id, attr, 1, &flags);
+	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Set invalid flags bit 0x%lx value error",
+			    flags);
+
+	flags = BIT(SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SDT + 1);
+	ret = sbi_sse_write_attrs(event_id, attr, 1, &flags);
+	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Set invalid flags bit 0x%lx value error",
+			    flags);
+
+	ret = sbi_sse_write_attrs(event_id, attr, 1, &prev_value);
+	sbiret_report_error(&ret, SBI_SUCCESS, "Restore interrupted flags");
+
+	/* Try to change HARTID/Priority while running */
+	if (sbi_sse_event_is_global(event_id)) {
+		value = current_thread_info()->hartid;
+		ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &value);
+		sbiret_report_error(&ret, SBI_ERR_INVALID_STATE, "Set hart id while running error");
+	}
+
+	value = 0;
+	ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value);
+	sbiret_report_error(&ret, SBI_ERR_INVALID_STATE, "Set priority while running error");
+
+	value = READ_ONCE(arg->expected_a6);
+	report(interrupted_state[2] == value, "Interrupted state a6, expected 0x%lx, got 0x%lx",
+	       value, interrupted_state[2]);
+
+	report(interrupted_state[3] == SBI_EXT_SSE,
+	       "Interrupted state a7, expected 0x%x, got 0x%lx", SBI_EXT_SSE,
+	       interrupted_state[3]);
+
+	WRITE_ONCE(arg->done, true);
+}
+
+static int sse_test_inject_simple(uint32_t event_id)
+{
+	unsigned long value, error;
+	struct sbiret ret;
+	int err_ret = 1;
+	struct sse_simple_test_arg test_arg = {.event_id = event_id};
+	struct sbi_sse_handler_arg args = {
+		.handler = sse_simple_handler,
+		.handler_data = (void *)&test_arg,
+		.stack = sse_alloc_stack(),
+	};
+
+	report_prefix_push("simple");
+
+	if (!sse_check_state(event_id, SBI_SSE_STATE_UNUSED))
+		goto err;
+
+	ret = sbi_sse_register(event_id, &args);
+	if (!sbiret_report_error(&ret, SBI_SUCCESS, "register"))
+		goto err;
+
+	if (!sse_check_state(event_id, SBI_SSE_STATE_REGISTERED))
+		goto err;
+
+	if (sbi_sse_event_is_global(event_id)) {
+		/* Be sure global events are targeting the current hart */
+		error = sse_global_event_set_current_hart(event_id);
+		if (error)
+			goto err;
+	}
+
+	ret = sbi_sse_enable(event_id);
+	if (!sbiret_report_error(&ret, SBI_SUCCESS, "enable"))
+		goto err;
+
+	if (!sse_check_state(event_id, SBI_SSE_STATE_ENABLED))
+		goto err;
+
+	ret = sbi_sse_hart_mask();
+	if (!sbiret_report_error(&ret, SBI_SUCCESS, "hart mask"))
+		goto err;
+
+	ret = sbi_sse_inject(event_id, current_thread_info()->hartid);
+	if (!sbiret_report_error(&ret, SBI_SUCCESS, "injection masked")) {
+		sbi_sse_hart_unmask();
+		goto err;
+	}
+
+	report(READ_ONCE(test_arg.done) == 0, "event masked not handled");
+
+	/*
+	 * When unmasking SSE events, we expect the pending event to be
+	 * injected immediately, so the interrupted a6 should be
+	 * SBI_EXT_SSE_HART_UNMASK.
+	 */
+	WRITE_ONCE(test_arg.expected_a6, SBI_EXT_SSE_HART_UNMASK);
+	ret = sbi_sse_hart_unmask();
+	if (!sbiret_report_error(&ret, SBI_SUCCESS, "hart unmask"))
+		goto err;
+
+	report(READ_ONCE(test_arg.done) == 1, "event unmasked handled");
+	WRITE_ONCE(test_arg.done, 0);
+	WRITE_ONCE(test_arg.expected_a6, SBI_EXT_SSE_INJECT);
+
+	/* Set as oneshot and verify it is disabled */
+	ret = sbi_sse_disable(event_id);
+	if (!sbiret_report_error(&ret, SBI_SUCCESS, "Disable event")) {
+		/* Nothing we can really do here, event can not be disabled */
+		goto err;
+	}
+
+	value = SBI_SSE_ATTR_CONFIG_ONESHOT;
+	ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_CONFIG, 1, &value);
+	if (!sbiret_report_error(&ret, SBI_SUCCESS, "Set event attribute as ONESHOT"))
+		goto err;
+
+	ret = sbi_sse_enable(event_id);
+	if (!sbiret_report_error(&ret, SBI_SUCCESS, "Enable event"))
+		goto err;
+
+	ret = sbi_sse_inject(event_id, current_thread_info()->hartid);
+	if (!sbiret_report_error(&ret, SBI_SUCCESS, "second injection"))
+		goto err;
+
+	report(READ_ONCE(test_arg.done) == 1, "event handled");
+	WRITE_ONCE(test_arg.done, 0);
+
+	if (!sse_check_state(event_id, SBI_SSE_STATE_REGISTERED))
+		goto err;
+
+	/* Clear ONESHOT FLAG */
+	value = 0;
+	ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_CONFIG, 1, &value);
+	if (!sbiret_report_error(&ret, SBI_SUCCESS, "Clear CONFIG.ONESHOT flag"))
+		goto err;
+
+	ret = sbi_sse_unregister(event_id);
+	if (!sbiret_report_error(&ret, SBI_SUCCESS, "unregister"))
+		goto err;
+
+	sse_check_state(event_id, SBI_SSE_STATE_UNUSED);
+
+	err_ret = 0;
+err:
+	sse_free_stack(args.stack);
+	report_prefix_pop();
+
+	return err_ret;
+}
+
+struct sse_foreign_cpu_test_arg {
+	bool done;
+	unsigned int expected_cpu;
+	uint32_t event_id;
+};
+
+static void sse_foreign_cpu_handler(void *data, struct pt_regs *regs, unsigned int hartid)
+{
+	struct sse_foreign_cpu_test_arg *arg = data;
+	unsigned int expected_cpu;
+
+	/* For arg content to be visible */
+	smp_rmb();
+	expected_cpu = READ_ONCE(arg->expected_cpu);
+	report(expected_cpu == current_thread_info()->cpu,
+	       "Received event on CPU (%d), expected CPU (%d)", current_thread_info()->cpu,
+	       expected_cpu);
+
+	WRITE_ONCE(arg->done, true);
+	/* For arg update to be visible for other CPUs */
+	smp_wmb();
+}
+
+struct sse_local_per_cpu {
+	struct sbi_sse_handler_arg args;
+	struct sbiret ret;
+	struct sse_foreign_cpu_test_arg handler_arg;
+};
+
+static void sse_register_enable_local(void *data)
+{
+	struct sbiret ret;
+	struct sse_local_per_cpu *cpu_args = data;
+	struct sse_local_per_cpu *cpu_arg = &cpu_args[current_thread_info()->cpu];
+	uint32_t event_id = cpu_arg->handler_arg.event_id;
+
+	ret = sbi_sse_register(event_id, &cpu_arg->args);
+	WRITE_ONCE(cpu_arg->ret, ret);
+	if (ret.error)
+		return;
+
+	ret = sbi_sse_enable(event_id);
+	WRITE_ONCE(cpu_arg->ret, ret);
+}
+
+static void sbi_sse_disable_unregister_local(void *data)
+{
+	struct sbiret ret;
+	struct sse_local_per_cpu *cpu_args = data;
+	struct sse_local_per_cpu *cpu_arg = &cpu_args[current_thread_info()->cpu];
+	uint32_t event_id = cpu_arg->handler_arg.event_id;
+
+	ret = sbi_sse_disable(event_id);
+	WRITE_ONCE(cpu_arg->ret, ret);
+	if (ret.error)
+		return;
+
+	ret = sbi_sse_unregister(event_id);
+	WRITE_ONCE(cpu_arg->ret, ret);
+}
+
+static int sse_test_inject_local(uint32_t event_id)
+{
+	int cpu;
+	int err_ret = 1;
+	struct sbiret ret;
+	struct sse_local_per_cpu *cpu_args, *cpu_arg;
+	struct sse_foreign_cpu_test_arg *handler_arg;
+
+	cpu_args = calloc(NR_CPUS, sizeof(struct sse_local_per_cpu));
+	assert(cpu_args);
+
+	report_prefix_push("local_dispatch");
+	for_each_online_cpu(cpu) {
+		cpu_arg = &cpu_args[cpu];
+		cpu_arg->handler_arg.event_id = event_id;
+		cpu_arg->args.stack = sse_alloc_stack();
+		cpu_arg->args.handler = sse_foreign_cpu_handler;
+		cpu_arg->args.handler_data = (void *)&cpu_arg->handler_arg;
+	}
+
+	on_cpus(sse_register_enable_local, cpu_args);
+	for_each_online_cpu(cpu) {
+		cpu_arg = &cpu_args[cpu];
+		ret = READ_ONCE(cpu_arg->ret);
+		if (ret.error) {
+			report_fail("CPU failed to register/enable event: %ld", ret.error);
+			goto err;
+		}
+
+		handler_arg = &cpu_arg->handler_arg;
+		WRITE_ONCE(handler_arg->expected_cpu, cpu);
+		/* For handler_arg content to be visible for other CPUs */
+		smp_wmb();
+		ret = sbi_sse_inject(event_id, cpus[cpu].hartid);
+		if (ret.error) {
+			report_fail("CPU failed to inject event: %ld", ret.error);
+			goto err;
+		}
+	}
+
+	for_each_online_cpu(cpu) {
+		handler_arg = &cpu_args[cpu].handler_arg;
+		smp_rmb();
+		while (!READ_ONCE(handler_arg->done)) {
+			/* For handler_arg update to be visible */
+			smp_rmb();
+			cpu_relax();
+		}
+		WRITE_ONCE(handler_arg->done, false);
+	}
+
+	on_cpus(sbi_sse_disable_unregister_local, cpu_args);
+	for_each_online_cpu(cpu) {
+		cpu_arg = &cpu_args[cpu];
+		ret = READ_ONCE(cpu_arg->ret);
+		if (ret.error) {
+			report_fail("CPU failed to disable/unregister event: %ld", ret.error);
+			goto err;
+		}
+	}
+
+	report_pass("local event dispatch on all CPUs");
+	err_ret = 0;
+err:
+	for_each_online_cpu(cpu) {
+		cpu_arg = &cpu_args[cpu];
+		sse_free_stack(cpu_arg->args.stack);
+	}
+	free(cpu_args);
+
+	report_prefix_pop();
+
+	return err_ret;
+}
+
+static int sse_test_inject_global(uint32_t event_id)
+{
+	unsigned long value;
+	struct sbiret ret;
+	unsigned int cpu;
+	uint64_t timeout;
+	int err_ret = 1;
+	struct sse_foreign_cpu_test_arg test_arg = {.event_id = event_id};
+	struct sbi_sse_handler_arg args = {
+		.handler = sse_foreign_cpu_handler,
+		.handler_data = (void *)&test_arg,
+		.stack = sse_alloc_stack(),
+	};
+	enum sbi_sse_state state;
+
+	report_prefix_push("global_dispatch");
+
+	ret = sbi_sse_register(event_id, &args);
+	if (!sbiret_report_error(&ret, SBI_SUCCESS, "Register event"))
+		goto err;
+
+	for_each_online_cpu(cpu) {
+		WRITE_ONCE(test_arg.expected_cpu, cpu);
+		/* For test_arg content to be visible for other CPUs */
+		smp_wmb();
+		value = cpus[cpu].hartid;
+		ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &value);
+		if (!sbiret_report_error(&ret, SBI_SUCCESS, "Set preferred hart"))
+			goto err;
+
+		ret = sbi_sse_enable(event_id);
+		if (!sbiret_report_error(&ret, SBI_SUCCESS, "Enable event"))
+			goto err;
+
+		ret = sbi_sse_inject(event_id, cpus[cpu].hartid);
+		if (!sbiret_report_error(&ret, SBI_SUCCESS, "Inject event"))
+			goto err;
+
+		smp_rmb();
+		while (!READ_ONCE(test_arg.done)) {
+			/* For shared test_arg structure */
+			smp_rmb();
+			cpu_relax();
+		}
+
+		WRITE_ONCE(test_arg.done, false);
+
+		timeout = timer_get_cycles() + usec_to_cycles((uint64_t)1000);
+		/* Wait for event to be back in ENABLED state */
+		do {
+			ret = sse_event_get_state(event_id, &state);
+			if (ret.error)
+				goto err;
+			cpu_relax();
+		} while (state != SBI_SSE_STATE_ENABLED && timer_get_cycles() < timeout);
+
+		if (!report(state == SBI_SSE_STATE_ENABLED,
+			    "wait for event to be in ENABLED state"))
+			goto err;
+
+		ret = sbi_sse_disable(event_id);
+		if (!sbiret_report_error(&ret, SBI_SUCCESS, "Disable event"))
+			goto err;
+
+		report_pass("Global event on CPU %d", cpu);
+	}
+
+	ret = sbi_sse_unregister(event_id);
+	sbiret_report_error(&ret, SBI_SUCCESS, "Unregister event");
+
+	err_ret = 0;
+
+err:
+	sse_free_stack(args.stack);
+	report_prefix_pop();
+
+	return err_ret;
+}
+
+struct priority_test_arg {
+	uint32_t event_id;
+	bool called;
+	u32 prio;
+	struct priority_test_arg *next_event_arg;
+	void (*check_func)(struct priority_test_arg *arg);
+};
+
+static void sse_hi_priority_test_handler(void *arg, struct pt_regs *regs,
+					 unsigned int hartid)
+{
+	struct priority_test_arg *targ = arg;
+	struct priority_test_arg *next = targ->next_event_arg;
+
+	targ->called = true;
+	if (next) {
+		sbi_sse_inject(next->event_id, current_thread_info()->hartid);
+
+		report(!sse_event_pending(next->event_id), "Higher priority event is not pending");
+		report(next->called, "Higher priority event was handled");
+	}
+}
+
+static void sse_low_priority_test_handler(void *arg, struct pt_regs *regs,
+					  unsigned int hartid)
+{
+	struct priority_test_arg *targ = arg;
+	struct priority_test_arg *next = targ->next_event_arg;
+
+	targ->called = true;
+
+	if (next) {
+		sbi_sse_inject(next->event_id, current_thread_info()->hartid);
+
+		report(sse_event_pending(next->event_id), "Lower priority event is pending");
+		report(!next->called, "Lower priority event %s was not handled before %s",
+		       sse_event_name(next->event_id), sse_event_name(targ->event_id));
+	}
+}
+
+static void sse_test_injection_priority_arg(struct priority_test_arg *in_args,
+					    unsigned int in_args_size,
+					    sbi_sse_handler_fn handler,
+					    const char *test_name)
+{
+	unsigned int i;
+	unsigned long value, uret;
+	struct sbiret ret;
+	uint32_t event_id;
+	struct priority_test_arg *arg;
+	unsigned int args_size = 0;
+	struct sbi_sse_handler_arg event_args[in_args_size];
+	struct priority_test_arg *args[in_args_size];
+	void *stack;
+	struct sbi_sse_handler_arg *event_arg;
+
+	report_prefix_push(test_name);
+
+	for (i = 0; i < in_args_size; i++) {
+		event_args[i].stack = NULL;
+		arg = &in_args[i];
+		event_id = arg->event_id;
+		if (!sse_event_can_inject(event_id))
+			continue;
+
+		args[args_size] = arg;
+		args_size++;
+	}
+
+	if (!args_size) {
+		report_skip("No injectable events");
+		goto skip;
+	}
+
+	for (i = 0; i < args_size; i++) {
+		arg = args[i];
+		event_id = arg->event_id;
+		stack = sse_alloc_stack();
+
+		event_arg = &event_args[i];
+		event_arg->handler = handler;
+		event_arg->handler_data = (void *)arg;
+		event_arg->stack = stack;
+
+		if (i < (args_size - 1))
+			arg->next_event_arg = args[i + 1];
+		else
+			arg->next_event_arg = NULL;
+
+		/* Be sure global events are targeting the current hart */
+		if (sbi_sse_event_is_global(event_id)) {
+			uret = sse_global_event_set_current_hart(event_id);
+			if (uret)
+				goto err;
+		}
+
+		ret = sbi_sse_register(event_id, event_arg);
+		if (ret.error) {
+			sbiret_report_error(&ret, SBI_SUCCESS, "register event 0x%x", event_id);
+			goto err;
+		}
+
+		value = arg->prio;
+		ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value);
+		if (ret.error) {
+			sbiret_report_error(&ret, SBI_SUCCESS, "set event 0x%x priority", event_id);
+			goto err;
+		}
+		ret = sbi_sse_enable(event_id);
+		if (ret.error) {
+			sbiret_report_error(&ret, SBI_SUCCESS, "enable event 0x%x", event_id);
+			goto err;
+		}
+	}
+
+	/* Inject first event */
+	ret = sbi_sse_inject(args[0]->event_id, current_thread_info()->hartid);
+	sbiret_report_error(&ret, SBI_SUCCESS, "injection");
+
+err:
+	for (i = 0; i < args_size; i++) {
+		arg = args[i];
+		event_id = arg->event_id;
+
+		report(arg->called, "Event %s handler called", sse_event_name(arg->event_id));
+
+		ret = sbi_sse_disable(event_id);
+		if (ret.error)
+			sbiret_report_error(&ret, SBI_SUCCESS, "disable event 0x%x", event_id);
+
+		ret = sbi_sse_unregister(event_id);
+		if (ret.error)
+			sbiret_report_error(&ret, SBI_SUCCESS, "unregister event 0x%x", event_id);
+
+		event_arg = &event_args[i];
+		if (event_arg->stack)
+			sse_free_stack(event_arg->stack);
+	}
+
+skip:
+	report_prefix_pop();
+}
+
+static struct priority_test_arg hi_prio_args[] = {
+	{.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE},
+	{.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE},
+	{.event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS},
+	{.event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS},
+	{.event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW},
+	{.event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS},
+	{.event_id = SBI_SSE_EVENT_LOCAL_DOUBLE_TRAP},
+	{.event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS},
+};
+
+static struct priority_test_arg low_prio_args[] = {
+	{.event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS},
+	{.event_id = SBI_SSE_EVENT_LOCAL_DOUBLE_TRAP},
+	{.event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS},
+	{.event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW},
+	{.event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS},
+	{.event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS},
+	{.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE},
+	{.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE},
+};
+
+static struct priority_test_arg prio_args[] = {
+	{.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE,		.prio = 5},
+	{.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE,		.prio = 10},
+	{.event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS,		.prio = 12},
+	{.event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW,		.prio = 15},
+	{.event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS,	.prio = 20},
+	{.event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS, 	.prio = 22},
+	{.event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS, 	.prio = 25},
+};
+
+static struct priority_test_arg same_prio_args[] = {
+	{.event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW,		.prio = 0},
+	{.event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS,		.prio = 0},
+	{.event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS,		.prio = 10},
+	{.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE,		.prio = 10},
+	{.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE,		.prio = 10},
+	{.event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS,	.prio = 20},
+	{.event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS,		.prio = 20},
+};
+
+static void sse_test_injection_priority(void)
+{
+	report_prefix_push("prio");
+
+	sse_test_injection_priority_arg(hi_prio_args, ARRAY_SIZE(hi_prio_args),
+					sse_hi_priority_test_handler, "high");
+
+	sse_test_injection_priority_arg(low_prio_args, ARRAY_SIZE(low_prio_args),
+					sse_low_priority_test_handler, "low");
+
+	sse_test_injection_priority_arg(prio_args, ARRAY_SIZE(prio_args),
+					sse_low_priority_test_handler, "changed");
+
+	sse_test_injection_priority_arg(same_prio_args, ARRAY_SIZE(same_prio_args),
+					sse_low_priority_test_handler, "same");
+
+	report_prefix_pop();
+}
+
+static void test_invalid_event_id(unsigned long event_id)
+{
+	struct sbiret ret;
+	unsigned long value = 0;
+
+	ret = sbi_sse_register_raw(event_id, (unsigned long)sbi_sse_entry, 0);
+	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
+			    "register event_id 0x%lx", event_id);
+
+	ret = sbi_sse_unregister(event_id);
+	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
+			    "unregister event_id 0x%lx", event_id);
+
+	ret = sbi_sse_enable(event_id);
+	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
+			    "enable event_id 0x%lx", event_id);
+
+	ret = sbi_sse_disable(event_id);
+	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
+			    "disable event_id 0x%lx", event_id);
+
+	ret = sbi_sse_inject(event_id, 0);
+	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
+			    "inject event_id 0x%lx", event_id);
+
+	ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value);
+	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
+			    "write attr event_id 0x%lx", event_id);
+
+	ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value);
+	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
+			    "read attr event_id 0x%lx", event_id);
+}
+
+static void sse_test_invalid_event_id(void)
+{
+	report_prefix_push("event_id");
+
+	test_invalid_event_id(SBI_SSE_EVENT_LOCAL_RESERVED_0_START);
+
+	report_prefix_pop();
+}
+
+static void sse_check_event_availability(uint32_t event_id, bool *can_inject, bool *supported)
+{
+	unsigned long status;
+	struct sbiret ret;
+
+	*can_inject = false;
+	*supported = false;
+
+	ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_STATUS, 1, &status);
+	if (ret.error != SBI_SUCCESS && ret.error != SBI_ERR_NOT_SUPPORTED) {
+		report_fail("Get event status error (expected SBI_SUCCESS or SBI_ERR_NOT_SUPPORTED): %ld",
+			    ret.error);
+		return;
+	}
+	if (ret.error == SBI_ERR_NOT_SUPPORTED)
+		return;
+
+	*supported = true;
+	*can_inject = (status >> SBI_SSE_ATTR_STATUS_INJECT_OFFSET) & 1;
+}
+
+static void sse_secondary_boot_and_unmask(void *data)
+{
+	sbi_sse_hart_unmask();
+}
+
+static void sse_check_mask(void)
+{
+	struct sbiret ret;
+
+	/* Upon boot, events are masked; check that */
+	ret = sbi_sse_hart_mask();
+	sbiret_report_error(&ret, SBI_ERR_ALREADY_STOPPED, "hart mask at boot time");
+
+	ret = sbi_sse_hart_unmask();
+	sbiret_report_error(&ret, SBI_SUCCESS, "hart unmask");
+	ret = sbi_sse_hart_unmask();
+	sbiret_report_error(&ret, SBI_ERR_ALREADY_STARTED, "hart unmask twice error");
+
+	ret = sbi_sse_hart_mask();
+	sbiret_report_error(&ret, SBI_SUCCESS, "hart mask");
+	ret = sbi_sse_hart_mask();
+	sbiret_report_error(&ret, SBI_ERR_ALREADY_STOPPED, "hart mask twice");
+}
+
+static int run_inject_test(struct sse_event_info *info)
+{
+	unsigned long event_id = info->event_id;
+
+	if (!info->can_inject) {
+		report_skip("Event does not support injection, skipping injection tests");
+		return 0;
+	}
+
+	if (sse_test_inject_simple(event_id))
+		return 1;
+
+	if (sbi_sse_event_is_global(event_id))
+		return sse_test_inject_global(event_id);
+	else
+		return sse_test_inject_local(event_id);
+}
+
+void check_sse(void)
+{
+	struct sse_event_info *info;
+	unsigned long i, event_id;
+	bool supported;
+
+	report_prefix_push("sse");
+
+	if (!sbi_probe(SBI_EXT_SSE)) {
+		report_skip("extension not available");
+		report_prefix_pop();
+		return;
+	}
+
+	sse_check_mask();
+
+	/*
+	 * Dummy wakeup of all processors, since some of them will be targeted
+	 * by global events without going through the wakeup call. This also
+	 * unmasks SSE events on all harts.
+	 */
+	on_cpus(sse_secondary_boot_and_unmask, NULL);
+
+	sse_test_invalid_event_id();
+
+	for (i = 0; i < ARRAY_SIZE(sse_event_infos); i++) {
+		info = &sse_event_infos[i];
+		event_id = info->event_id;
+		report_prefix_push(info->name);
+		sse_check_event_availability(event_id, &info->can_inject, &supported);
+		if (!supported) {
+			report_skip("Event is not supported, skipping tests");
+			report_prefix_pop();
+			continue;
+		}
+
+		sse_test_attrs(event_id);
+		sse_test_register_error(event_id);
+
+		if (run_inject_test(info)) {
+			report_skip("Event test failed, event state unreliable");
+			info->can_inject = false;
+		}
+
+		report_prefix_pop();
+	}
+
+	sse_test_injection_priority();
+
+	report_prefix_pop();
+}
diff --git a/riscv/sbi.c b/riscv/sbi.c
index 0404bb81..478cb35d 100644
--- a/riscv/sbi.c
+++ b/riscv/sbi.c
@@ -32,6 +32,7 @@
 
 #define	HIGH_ADDR_BOUNDARY	((phys_addr_t)1 << 32)
 
+void check_sse(void);
 void check_fwft(void);
 
 static long __labs(long a)
@@ -1567,6 +1568,7 @@ int main(int argc, char **argv)
 	check_hsm();
 	check_dbcn();
 	check_susp();
+	check_sse();
 	check_fwft();
 
 	return report_summary();
-- 
2.47.2


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [kvm-unit-tests PATCH v8 5/6] lib: riscv: Add SBI SSE support
  2025-03-07 16:15 ` [kvm-unit-tests PATCH v8 5/6] lib: riscv: Add SBI SSE support Clément Léger
@ 2025-03-12 16:29   ` Andrew Jones
  0 siblings, 0 replies; 13+ messages in thread
From: Andrew Jones @ 2025-03-12 16:29 UTC (permalink / raw)
  To: Clément Léger
  Cc: kvm, kvm-riscv, Anup Patel, Atish Patra, Andrew Jones

On Fri, Mar 07, 2025 at 05:15:47PM +0100, Clément Léger wrote:
> Add support for registering and handling SSE events. This will be used
> by the SBI tests as well as the upcoming double-trap tests.
> 
> Signed-off-by: Clément Léger <cleger@rivosinc.com>
> Reviewed-by: Andrew Jones <andrew.jones@linux.dev>
> ---
>  riscv/Makefile          |   1 +
>  lib/riscv/asm/csr.h     |   1 +
>  lib/riscv/asm/sbi.h     |  38 ++++++++++++++-
>  lib/riscv/sbi-sse-asm.S | 103 ++++++++++++++++++++++++++++++++++++++++
>  lib/riscv/asm-offsets.c |   9 ++++
>  lib/riscv/sbi.c         |  76 +++++++++++++++++++++++++++++
>  6 files changed, 227 insertions(+), 1 deletion(-)
>  create mode 100644 lib/riscv/sbi-sse-asm.S
> 
> diff --git a/riscv/Makefile b/riscv/Makefile
> index 02d2ac39..16fc125b 100644
> --- a/riscv/Makefile
> +++ b/riscv/Makefile
> @@ -43,6 +43,7 @@ cflatobjs += lib/riscv/setup.o
>  cflatobjs += lib/riscv/smp.o
>  cflatobjs += lib/riscv/stack.o
>  cflatobjs += lib/riscv/timer.o
> +cflatobjs += lib/riscv/sbi-sse-asm.o
>  ifeq ($(ARCH),riscv32)
>  cflatobjs += lib/ldiv32.o
>  endif
> diff --git a/lib/riscv/asm/csr.h b/lib/riscv/asm/csr.h
> index c7fc87a9..3e4b5fca 100644
> --- a/lib/riscv/asm/csr.h
> +++ b/lib/riscv/asm/csr.h
> @@ -17,6 +17,7 @@
>  #define CSR_TIME		0xc01
>  
>  #define SR_SIE			_AC(0x00000002, UL)
> +#define SR_SPP			_AC(0x00000100, UL)
>  
>  /* Exception cause high bit - is an interrupt if set */
>  #define CAUSE_IRQ_FLAG		(_AC(1, UL) << (__riscv_xlen - 1))
> diff --git a/lib/riscv/asm/sbi.h b/lib/riscv/asm/sbi.h
> index 780c9edd..acef8a5e 100644
> --- a/lib/riscv/asm/sbi.h
> +++ b/lib/riscv/asm/sbi.h
> @@ -230,5 +230,41 @@ struct sbiret sbi_send_ipi_broadcast(void);
>  struct sbiret sbi_set_timer(unsigned long stime_value);
>  long sbi_probe(int ext);
>  
> -#endif /* !__ASSEMBLER__ */
> +typedef void (*sbi_sse_handler_fn)(void *data, struct pt_regs *regs, unsigned int hartid);
> +
> +struct sbi_sse_handler_arg {
> +	unsigned long reg_tmp;
> +	sbi_sse_handler_fn handler;
> +	void *handler_data;
> +	void *stack;
> +};
> +
> +extern void sbi_sse_entry(void);
> +
> +static inline bool sbi_sse_event_is_global(uint32_t event_id)
> +{
> +	return !!(event_id & SBI_SSE_EVENT_GLOBAL_BIT);
> +}
> +
> +struct sbiret sbi_sse_read_attrs_raw(unsigned long event_id, unsigned long base_attr_id,
> +				     unsigned long attr_count, unsigned long phys_lo,
> +				     unsigned long phys_hi);
> +struct sbiret sbi_sse_read_attrs(unsigned long event_id, unsigned long base_attr_id,
> +				 unsigned long attr_count, unsigned long *values);
> +struct sbiret sbi_sse_write_attrs_raw(unsigned long event_id, unsigned long base_attr_id,
> +				      unsigned long attr_count, unsigned long phys_lo,
> +				      unsigned long phys_hi);
> +struct sbiret sbi_sse_write_attrs(unsigned long event_id, unsigned long base_attr_id,
> +				  unsigned long attr_count, unsigned long *values);
> +struct sbiret sbi_sse_register_raw(unsigned long event_id, unsigned long entry_pc,
> +				   unsigned long entry_arg);
> +struct sbiret sbi_sse_register(unsigned long event_id, struct sbi_sse_handler_arg *arg);
> +struct sbiret sbi_sse_unregister(unsigned long event_id);
> +struct sbiret sbi_sse_enable(unsigned long event_id);
> +struct sbiret sbi_sse_disable(unsigned long event_id);
> +struct sbiret sbi_sse_hart_mask(void);
> +struct sbiret sbi_sse_hart_unmask(void);
> +struct sbiret sbi_sse_inject(unsigned long event_id, unsigned long hart_id);
> +
> +#endif /* !__ASSEMBLY__ */

When rebasing on latest master this should have been left as
__ASSEMBLER__

>  #endif /* _ASMRISCV_SBI_H_ */
> diff --git a/lib/riscv/sbi-sse-asm.S b/lib/riscv/sbi-sse-asm.S
> new file mode 100644
> index 00000000..e4efd1ff
> --- /dev/null
> +++ b/lib/riscv/sbi-sse-asm.S
> @@ -0,0 +1,103 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * RISC-V SSE events entry point.
> + *
> + * Copyright (C) 2025, Rivos Inc., Clément Léger <cleger@rivosinc.com>
> + */
> +#define __ASSEMBLY__

__ASSEMBLER__

> +#include <asm/asm.h>
> +#include <asm/asm-offsets.h>
> +#include <asm/csr.h>
> +#include <generated/sbi-asm-offsets.h>
> +
> +.section .text
> +.global sbi_sse_entry
> +sbi_sse_entry:
> +	/* Save stack temporarily */
> +	REG_S	sp, SBI_SSE_REG_TMP(a7)
> +	/* Set entry stack */
> +	REG_L	sp, SBI_SSE_HANDLER_STACK(a7)
> +
> +	addi	sp, sp, -(PT_SIZE)
> +	REG_S	ra, PT_RA(sp)
> +	REG_S	s0, PT_S0(sp)
> +	REG_S	s1, PT_S1(sp)
> +	REG_S	s2, PT_S2(sp)
> +	REG_S	s3, PT_S3(sp)
> +	REG_S	s4, PT_S4(sp)
> +	REG_S	s5, PT_S5(sp)
> +	REG_S	s6, PT_S6(sp)
> +	REG_S	s7, PT_S7(sp)
> +	REG_S	s8, PT_S8(sp)
> +	REG_S	s9, PT_S9(sp)
> +	REG_S	s10, PT_S10(sp)
> +	REG_S	s11, PT_S11(sp)
> +	REG_S	tp, PT_TP(sp)
> +	REG_S	t0, PT_T0(sp)
> +	REG_S	t1, PT_T1(sp)
> +	REG_S	t2, PT_T2(sp)
> +	REG_S	t3, PT_T3(sp)
> +	REG_S	t4, PT_T4(sp)
> +	REG_S	t5, PT_T5(sp)
> +	REG_S	t6, PT_T6(sp)
> +	REG_S	gp, PT_GP(sp)
> +	REG_S	a0, PT_A0(sp)
> +	REG_S	a1, PT_A1(sp)
> +	REG_S	a2, PT_A2(sp)
> +	REG_S	a3, PT_A3(sp)
> +	REG_S	a4, PT_A4(sp)
> +	REG_S	a5, PT_A5(sp)
> +	csrr	a1, CSR_SEPC
> +	REG_S	a1, PT_EPC(sp)
> +	csrr	a2, CSR_SSTATUS
> +	REG_S	a2, PT_STATUS(sp)
> +
> +	REG_L	a0, SBI_SSE_REG_TMP(a7)
> +	REG_S	a0, PT_SP(sp)
> +
> +	REG_L	t0, SBI_SSE_HANDLER(a7)
> +	REG_L	a0, SBI_SSE_HANDLER_DATA(a7)
> +	mv	a1, sp
> +	mv	a2, a6
> +	jalr	t0
> +
> +	REG_L	a1, PT_EPC(sp)
> +	REG_L	a2, PT_STATUS(sp)
> +	csrw	CSR_SEPC, a1
> +	csrw	CSR_SSTATUS, a2
> +
> +	REG_L	ra, PT_RA(sp)
> +	REG_L	s0, PT_S0(sp)
> +	REG_L	s1, PT_S1(sp)
> +	REG_L	s2, PT_S2(sp)
> +	REG_L	s3, PT_S3(sp)
> +	REG_L	s4, PT_S4(sp)
> +	REG_L	s5, PT_S5(sp)
> +	REG_L	s6, PT_S6(sp)
> +	REG_L	s7, PT_S7(sp)
> +	REG_L	s8, PT_S8(sp)
> +	REG_L	s9, PT_S9(sp)
> +	REG_L	s10, PT_S10(sp)
> +	REG_L	s11, PT_S11(sp)
> +	REG_L	tp, PT_TP(sp)
> +	REG_L	t0, PT_T0(sp)
> +	REG_L	t1, PT_T1(sp)
> +	REG_L	t2, PT_T2(sp)
> +	REG_L	t3, PT_T3(sp)
> +	REG_L	t4, PT_T4(sp)
> +	REG_L	t5, PT_T5(sp)
> +	REG_L	t6, PT_T6(sp)
> +	REG_L	gp, PT_GP(sp)
> +	REG_L	a0, PT_A0(sp)
> +	REG_L	a1, PT_A1(sp)
> +	REG_L	a2, PT_A2(sp)
> +	REG_L	a3, PT_A3(sp)
> +	REG_L	a4, PT_A4(sp)
> +	REG_L	a5, PT_A5(sp)
> +
> +	REG_L	sp, PT_SP(sp)
> +
> +	li	a7, ASM_SBI_EXT_SSE
> +	li	a6, ASM_SBI_EXT_SSE_COMPLETE
> +	ecall
> +
> diff --git a/lib/riscv/asm-offsets.c b/lib/riscv/asm-offsets.c
> index 6c511c14..a96c6e97 100644
> --- a/lib/riscv/asm-offsets.c
> +++ b/lib/riscv/asm-offsets.c
> @@ -3,6 +3,7 @@
>  #include <elf.h>
>  #include <asm/processor.h>
>  #include <asm/ptrace.h>
> +#include <asm/sbi.h>
>  #include <asm/smp.h>
>  
>  int main(void)
> @@ -63,5 +64,13 @@ int main(void)
>  	OFFSET(THREAD_INFO_HARTID, thread_info, hartid);
>  	DEFINE(THREAD_INFO_SIZE, sizeof(struct thread_info));
>  
> +	DEFINE(ASM_SBI_EXT_SSE, SBI_EXT_SSE);
> +	DEFINE(ASM_SBI_EXT_SSE_COMPLETE, SBI_EXT_SSE_COMPLETE);
> +
> +	OFFSET(SBI_SSE_REG_TMP, sbi_sse_handler_arg, reg_tmp);
> +	OFFSET(SBI_SSE_HANDLER, sbi_sse_handler_arg, handler);
> +	OFFSET(SBI_SSE_HANDLER_DATA, sbi_sse_handler_arg, handler_data);
> +	OFFSET(SBI_SSE_HANDLER_STACK, sbi_sse_handler_arg, stack);
> +
>  	return 0;
>  }
> diff --git a/lib/riscv/sbi.c b/lib/riscv/sbi.c
> index 02dd338c..1752c916 100644
> --- a/lib/riscv/sbi.c
> +++ b/lib/riscv/sbi.c
> @@ -2,6 +2,7 @@
>  #include <libcflat.h>
>  #include <cpumask.h>
>  #include <limits.h>
> +#include <asm/io.h>
>  #include <asm/sbi.h>
>  #include <asm/setup.h>
>  
> @@ -31,6 +32,81 @@ struct sbiret sbi_ecall(int ext, int fid, unsigned long arg0,
>  	return ret;
>  }
>  
> +struct sbiret sbi_sse_read_attrs_raw(unsigned long event_id, unsigned long base_attr_id,
> +				     unsigned long attr_count, unsigned long phys_lo,
> +				      unsigned long phys_hi)
                   extra space here ^

> +{
> +	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_READ_ATTRS, event_id, base_attr_id, attr_count,
> +			 phys_lo, phys_hi, 0);
> +}
> +
> +struct sbiret sbi_sse_read_attrs(unsigned long event_id, unsigned long base_attr_id,
> +				 unsigned long attr_count, unsigned long *values)
> +{
> +	phys_addr_t p = virt_to_phys(values);
> +
> +	return sbi_sse_read_attrs_raw(event_id, base_attr_id, attr_count, lower_32_bits(p),
> +				      upper_32_bits(p));
> +}
> +
> +struct sbiret sbi_sse_write_attrs_raw(unsigned long event_id, unsigned long base_attr_id,
> +				      unsigned long attr_count, unsigned long phys_lo,
> +				      unsigned long phys_hi)
> +{
> +	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_WRITE_ATTRS, event_id, base_attr_id, attr_count,
> +			 phys_lo, phys_hi, 0);
> +}
> +
> +struct sbiret sbi_sse_write_attrs(unsigned long event_id, unsigned long base_attr_id,
> +				  unsigned long attr_count, unsigned long *values)
> +{
> +	phys_addr_t p = virt_to_phys(values);
> +
> +	return sbi_sse_write_attrs_raw(event_id, base_attr_id, attr_count, lower_32_bits(p),
> +				       upper_32_bits(p));
> +}
> +
> +struct sbiret sbi_sse_register_raw(unsigned long event_id, unsigned long entry_pc,
> +				   unsigned long entry_arg)
> +{
> +	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_REGISTER, event_id, entry_pc, entry_arg, 0, 0, 0);
> +}
> +
> +struct sbiret sbi_sse_register(unsigned long event_id, struct sbi_sse_handler_arg *arg)
> +{
> +	return sbi_sse_register_raw(event_id, (unsigned long) sbi_sse_entry, (unsigned long) arg);

nit: no spaces after casts

> +}
> +
> +struct sbiret sbi_sse_unregister(unsigned long event_id)
> +{
> +	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_UNREGISTER, event_id, 0, 0, 0, 0, 0);
> +}
> +
> +struct sbiret sbi_sse_enable(unsigned long event_id)
> +{
> +	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_ENABLE, event_id, 0, 0, 0, 0, 0);
> +}
> +
> +struct sbiret sbi_sse_disable(unsigned long event_id)
> +{
> +	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_DISABLE, event_id, 0, 0, 0, 0, 0);
> +}
> +
> +struct sbiret sbi_sse_hart_mask(void)
> +{
> +	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_HART_MASK, 0, 0, 0, 0, 0, 0);
> +}
> +
> +struct sbiret sbi_sse_hart_unmask(void)
> +{
> +	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_HART_UNMASK, 0, 0, 0, 0, 0, 0);
> +}
> +
> +struct sbiret sbi_sse_inject(unsigned long event_id, unsigned long hart_id)
> +{
> +	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_INJECT, event_id, hart_id, 0, 0, 0, 0);
> +}
> +
>  void sbi_shutdown(void)
>  {
>  	sbi_ecall(SBI_EXT_SRST, 0, 0, 0, 0, 0, 0, 0);
> -- 
> 2.47.2
>

Thanks,
drew

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [kvm-unit-tests PATCH v8 2/6] riscv: Set .aux.o files as .PRECIOUS
  2025-03-07 16:15 ` [kvm-unit-tests PATCH v8 2/6] riscv: Set .aux.o files as .PRECIOUS Clément Léger
@ 2025-03-12 16:30   ` Andrew Jones
  2025-03-12 17:53     ` Andrew Jones
  0 siblings, 1 reply; 13+ messages in thread
From: Andrew Jones @ 2025-03-12 16:30 UTC (permalink / raw)
  To: Clément Léger; +Cc: kvm, kvm-riscv, Anup Patel, Atish Patra

On Fri, Mar 07, 2025 at 05:15:44PM +0100, Clément Léger wrote:
> When compiling, we need to keep the .aux.o files or they will be removed
> after compilation, which causes the files that depend on them to be
> recompiled. Mark these files as .PRECIOUS to keep them.
> 
> Signed-off-by: Clément Léger <cleger@rivosinc.com>
> ---
>  riscv/Makefile | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/riscv/Makefile b/riscv/Makefile
> index 52718f3f..ae9cf02a 100644
> --- a/riscv/Makefile
> +++ b/riscv/Makefile
> @@ -90,6 +90,7 @@ CFLAGS += -I $(SRCDIR)/lib -I $(SRCDIR)/lib/libfdt -I lib -I $(SRCDIR)/riscv
>  asm-offsets = lib/riscv/asm-offsets.h
>  include $(SRCDIR)/scripts/asm-offsets.mak
>  
> +.PRECIOUS: %.aux.o
>  %.aux.o: $(SRCDIR)/lib/auxinfo.c
>  	$(CC) $(CFLAGS) -c -o $@ $< \
>  		-DPROGNAME=\"$(notdir $(@:.aux.o=.$(exe)))\" -DAUXFLAGS=$(AUXFLAGS)
> -- 
> 2.47.2
>

Reviewed-by: Andrew Jones <ajones@ventanamicro.com>

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [kvm-unit-tests PATCH v8 6/6] riscv: sbi: Add SSE extension tests
  2025-03-07 16:15 ` [kvm-unit-tests PATCH v8 6/6] riscv: sbi: Add SSE extension tests Clément Léger
@ 2025-03-12 17:51   ` Andrew Jones
  2025-03-14  9:29     ` Clément Léger
  0 siblings, 1 reply; 13+ messages in thread
From: Andrew Jones @ 2025-03-12 17:51 UTC (permalink / raw)
  To: Clément Léger; +Cc: kvm, kvm-riscv, Anup Patel, Atish Patra

On Fri, Mar 07, 2025 at 05:15:48PM +0100, Clément Léger wrote:
> Add SBI SSE extension tests for the following features:
> - Test attribute errors (invalid values, RO attributes, etc.)
> - Registration errors
> - Simple events (register, enable, inject)
> - Events with different priorities
> - Global events dispatch on different harts
> - Local events on all harts
> - Hart mask/unmask events
> 
> Signed-off-by: Clément Léger <cleger@rivosinc.com>
> ---
>  riscv/Makefile    |    1 +
>  riscv/sbi-tests.h |    1 +
>  riscv/sbi-sse.c   | 1215 +++++++++++++++++++++++++++++++++++++++++++++
>  riscv/sbi.c       |    2 +
>  4 files changed, 1219 insertions(+)
>  create mode 100644 riscv/sbi-sse.c
> 
> diff --git a/riscv/Makefile b/riscv/Makefile
> index 16fc125b..4fe2f1bb 100644
> --- a/riscv/Makefile
> +++ b/riscv/Makefile
> @@ -18,6 +18,7 @@ tests += $(TEST_DIR)/sieve.$(exe)
>  all: $(tests)
>  
>  $(TEST_DIR)/sbi-deps = $(TEST_DIR)/sbi-asm.o $(TEST_DIR)/sbi-fwft.o
> +$(TEST_DIR)/sbi-deps += $(TEST_DIR)/sbi-sse.o
>  
>  # When built for EFI sieve needs extra memory, run with e.g. '-m 256' on QEMU
>  $(TEST_DIR)/sieve.$(exe): AUXFLAGS = 0x1
> diff --git a/riscv/sbi-tests.h b/riscv/sbi-tests.h
> index b081464d..a71da809 100644
> --- a/riscv/sbi-tests.h
> +++ b/riscv/sbi-tests.h
> @@ -71,6 +71,7 @@
>  	sbiret_report(ret, expected_error, expected_value, "check sbi.error and sbi.value")
>  
>  void sbi_bad_fid(int ext);
> +void check_sse(void);
>  
>  #endif /* __ASSEMBLER__ */
>  #endif /* _RISCV_SBI_TESTS_H_ */
> diff --git a/riscv/sbi-sse.c b/riscv/sbi-sse.c
> new file mode 100644
> index 00000000..7bd58b8b
> --- /dev/null
> +++ b/riscv/sbi-sse.c
> @@ -0,0 +1,1215 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * SBI SSE testsuite
> + *
> + * Copyright (C) 2025, Rivos Inc., Clément Léger <cleger@rivosinc.com>
> + */
> +#include <alloc.h>
> +#include <alloc_page.h>
> +#include <bitops.h>
> +#include <cpumask.h>
> +#include <libcflat.h>
> +#include <on-cpus.h>
> +#include <stdlib.h>
> +
> +#include <asm/barrier.h>
> +#include <asm/delay.h>
> +#include <asm/io.h>
> +#include <asm/page.h>
> +#include <asm/processor.h>
> +#include <asm/sbi.h>
> +#include <asm/setup.h>
> +#include <asm/timer.h>
> +
> +#include "sbi-tests.h"
> +
> +#define SSE_STACK_SIZE	PAGE_SIZE
> +
> +struct sse_event_info {
> +	uint32_t event_id;
> +	const char *name;
> +	bool can_inject;
> +};
> +
> +static struct sse_event_info sse_event_infos[] = {
> +	{
> +		.event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS,
> +		.name = "local_high_prio_ras",
> +	},
> +	{
> +		.event_id = SBI_SSE_EVENT_LOCAL_DOUBLE_TRAP,
> +		.name = "double_trap",
> +	},
> +	{
> +		.event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS,
> +		.name = "global_high_prio_ras",
> +	},
> +	{
> +		.event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW,
> +		.name = "local_pmu_overflow",
> +	},
> +	{
> +		.event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS,
> +		.name = "local_low_prio_ras",
> +	},
> +	{
> +		.event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS,
> +		.name = "global_low_prio_ras",
> +	},
> +	{
> +		.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE,
> +		.name = "local_software",
> +	},
> +	{
> +		.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE,
> +		.name = "global_software",
> +	},
> +};
> +
> +static const char *const attr_names[] = {
> +	[SBI_SSE_ATTR_STATUS] = "status",
> +	[SBI_SSE_ATTR_PRIORITY] = "priority",
> +	[SBI_SSE_ATTR_CONFIG] = "config",
> +	[SBI_SSE_ATTR_PREFERRED_HART] = "preferred_hart",
> +	[SBI_SSE_ATTR_ENTRY_PC] = "entry_pc",
> +	[SBI_SSE_ATTR_ENTRY_ARG] = "entry_arg",
> +	[SBI_SSE_ATTR_INTERRUPTED_SEPC] = "interrupted_sepc",
> +	[SBI_SSE_ATTR_INTERRUPTED_FLAGS] = "interrupted_flags",
> +	[SBI_SSE_ATTR_INTERRUPTED_A6] = "interrupted_a6",
> +	[SBI_SSE_ATTR_INTERRUPTED_A7] = "interrupted_a7",

nit: tabulate

> +};
> +
> +static const unsigned long ro_attrs[] = {
> +	SBI_SSE_ATTR_STATUS,
> +	SBI_SSE_ATTR_ENTRY_PC,
> +	SBI_SSE_ATTR_ENTRY_ARG,
> +};
> +
> +static const unsigned long interrupted_attrs[] = {
> +	SBI_SSE_ATTR_INTERRUPTED_SEPC,
> +	SBI_SSE_ATTR_INTERRUPTED_FLAGS,
> +	SBI_SSE_ATTR_INTERRUPTED_A6,
> +	SBI_SSE_ATTR_INTERRUPTED_A7,
> +};
> +
> +static const unsigned long interrupted_flags[] = {
> +	SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPP,
> +	SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPIE,
> +	SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPELP,
> +	SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SDT,
> +	SBI_SSE_ATTR_INTERRUPTED_FLAGS_HSTATUS_SPV,
> +	SBI_SSE_ATTR_INTERRUPTED_FLAGS_HSTATUS_SPVP,
> +};
> +
> +static struct sse_event_info *sse_event_get_info(uint32_t event_id)
> +{
> +	int i;
> +
> +	for (i = 0; i < ARRAY_SIZE(sse_event_infos); i++) {
> +		if (sse_event_infos[i].event_id == event_id)
> +			return &sse_event_infos[i];
> +	}
> +
> +	assert_msg(false, "Invalid event id: %d", event_id);
> +}
> +
> +static const char *sse_event_name(uint32_t event_id)
> +{
> +	return sse_event_get_info(event_id)->name;
> +}
> +
> +static bool sse_event_can_inject(uint32_t event_id)
> +{
> +	return sse_event_get_info(event_id)->can_inject;
> +}
> +
> +static struct sbiret sse_get_event_status_field(uint32_t event_id, unsigned long mask,
> +						unsigned long shift, unsigned long *value)
> +{
> +	struct sbiret ret;
> +	unsigned long status;
> +
> +	ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_STATUS, 1, &status);
> +	if (ret.error) {
> +		sbiret_report_error(&ret, SBI_SUCCESS, "Get event status");
> +		return ret;
> +	}
> +
> +	*value = (status & mask) >> shift;
> +
> +	return ret;
> +}
> +
> +static struct sbiret sse_event_get_state(uint32_t event_id, enum sbi_sse_state *state)
> +{
> +	unsigned long status = 0;
> +	struct sbiret ret;
> +
> +	ret = sse_get_event_status_field(event_id, SBI_SSE_ATTR_STATUS_STATE_MASK,
> +					  SBI_SSE_ATTR_STATUS_STATE_OFFSET, &status);
> +	*state = status;
> +
> +	return ret;
> +}
> +
> +static unsigned long sse_global_event_set_current_hart(uint32_t event_id)
> +{
> +	struct sbiret ret;
> +	unsigned long current_hart = current_thread_info()->hartid;
> +
> +	if (!sbi_sse_event_is_global(event_id))
> +		return SBI_ERR_INVALID_PARAM;

The only two callers of sse_global_event_set_current_hart() check
sbi_sse_event_is_global() themselves, so this check can be an assert().
I think it should be anyway, since returning an SBI error, which didn't
originate in SBI, could lead to confusion.

> +
> +	ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &current_hart);
> +	if (sbiret_report_error(&ret, SBI_SUCCESS, "Set preferred hart"))
> +		return ret.error;
> +
> +	return 0;
> +}
> +
> +static bool sse_check_state(uint32_t event_id, unsigned long expected_state)
> +{
> +	struct sbiret ret;
> +	enum sbi_sse_state state;
> +
> +	ret = sse_event_get_state(event_id, &state);
> +	if (ret.error)
> +		return false;
> +
> +	return report(state == expected_state, "event status == %ld", expected_state);
> +}
> +
> +static bool sse_event_pending(uint32_t event_id)
> +{
> +	unsigned long pending = 0;
> +
> +	sse_get_event_status_field(event_id, BIT(SBI_SSE_ATTR_STATUS_PENDING_OFFSET),
> +				   SBI_SSE_ATTR_STATUS_PENDING_OFFSET, &pending);
> +
> +	return pending;
> +}
> +
> +static void *sse_alloc_stack(void)
> +{
> +	/*
> +	 * We assume that SSE_STACK_SIZE always fits in one page. The stack
> +	 * pointer is always decremented before storing anything on it in
> +	 * sbi-sse-asm.S.
> +	 */
> +	assert(SSE_STACK_SIZE <= PAGE_SIZE);
> +
> +	return (alloc_page() + SSE_STACK_SIZE);
> +}
> +
> +static void sse_free_stack(void *stack)
> +{
> +	free_page(stack - SSE_STACK_SIZE);
> +}
> +
> +static void sse_read_write_test(uint32_t event_id, unsigned long attr, unsigned long attr_count,
> +				unsigned long *value, long expected_error, const char *str)
> +{
> +	struct sbiret ret;
> +
> +	ret = sbi_sse_read_attrs(event_id, attr, attr_count, value);
> +	sbiret_report_error(&ret, expected_error, "Read %s error", str);
> +
> +	ret = sbi_sse_write_attrs(event_id, attr, attr_count, value);
> +	sbiret_report_error(&ret, expected_error, "Write %s error", str);
> +}
> +
> +#define ALL_ATTRS_COUNT	(SBI_SSE_ATTR_INTERRUPTED_A7 + 1)
> +
> +static void sse_test_attrs(uint32_t event_id)
> +{
> +	unsigned long value = 0;
> +	struct sbiret ret;
> +	void *ptr;
> +	unsigned long values[ALL_ATTRS_COUNT];
> +	unsigned int i;
> +	const char *invalid_hart_str;
> +	const char *attr_name;
> +
> +	report_prefix_push("attrs");
> +
> +	for (i = 0; i < ARRAY_SIZE(ro_attrs); i++) {
> +		ret = sbi_sse_write_attrs(event_id, ro_attrs[i], 1, &value);
> +		sbiret_report_error(&ret, SBI_ERR_DENIED, "RO attribute %s not writable",
> +				    attr_names[ro_attrs[i]]);
> +	}
> +
> +	ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_STATUS, ALL_ATTRS_COUNT, values);
> +	sbiret_report_error(&ret, SBI_SUCCESS, "Read multiple attributes");
> +
> +	for (i = SBI_SSE_ATTR_STATUS; i <= SBI_SSE_ATTR_INTERRUPTED_A7; i++) {
> +		ret = sbi_sse_read_attrs(event_id, i, 1, &value);
> +		attr_name = attr_names[i];
> +
> +		sbiret_report_error(&ret, SBI_SUCCESS, "Read single attribute %s", attr_name);
> +		if (values[i] != value)
> +			report_fail("Attribute 0x%x single value read (0x%lx) differs from the one read with multiple attributes (0x%lx)",
> +				    i, value, values[i]);
> +		/*
> +		 * Preferred hart reset value is defined by SBI vendor
> +		 */
> +		if(i != SBI_SSE_ATTR_PREFERRED_HART) {
                  ^ missing space

> +			/*
> +			 * Specification states that injectable bit is implementation dependent
> +			 * but other bits are zero-initialized.
> +			 */
> +			if (i == SBI_SSE_ATTR_STATUS)
> +				value &= ~BIT(SBI_SSE_ATTR_STATUS_INJECT_OFFSET);
> +			report(value == 0, "Attribute %s reset value is 0, found %lx", attr_name,
> +			       value);

nit: no need to wrap the line

> +		}
> +	}
> +
> +#if __riscv_xlen > 32
> +	value = BIT(32);
> +	ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value);
> +	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Write invalid prio > 0xFFFFFFFF error");
> +#endif
> +
> +	value = ~SBI_SSE_ATTR_CONFIG_ONESHOT;
> +	ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_CONFIG, 1, &value);
> +	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Write invalid config value error");
> +
> +	if (sbi_sse_event_is_global(event_id)) {
> +		invalid_hart_str = getenv("INVALID_HART_ID");
> +		if (!invalid_hart_str)
> +			value = 0xFFFFFFFFUL;
> +		else
> +			value = strtoul(invalid_hart_str, NULL, 0);
> +
> +		ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &value);
> +		sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Set invalid hart id error");
> +	} else {
> +		/* Set Hart on local event -> RO */
> +		value = current_thread_info()->hartid;
> +		ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &value);
> +		sbiret_report_error(&ret, SBI_ERR_DENIED,
> +				    "Set hart id on local event error");
> +	}
> +
> +	/* Set/get flags, sepc, a6, a7 */
> +	for (i = 0; i < ARRAY_SIZE(interrupted_attrs); i++) {
> +		attr_name = attr_names[interrupted_attrs[i]];
> +		ret = sbi_sse_read_attrs(event_id, interrupted_attrs[i], 1, &value);
> +		sbiret_report_error(&ret, SBI_SUCCESS, "Get interrupted %s", attr_name);
> +
> +		value = ARRAY_SIZE(interrupted_attrs) - i;
> +		ret = sbi_sse_write_attrs(event_id, interrupted_attrs[i], 1, &value);
> +		sbiret_report_error(&ret, SBI_ERR_INVALID_STATE,
> +				    "Set attribute %s invalid state error", attr_name);
> +	}
> +
> +	sse_read_write_test(event_id, SBI_SSE_ATTR_STATUS, 0, &value, SBI_ERR_INVALID_PARAM,
> +			    "attribute attr_count == 0");
> +	sse_read_write_test(event_id, SBI_SSE_ATTR_INTERRUPTED_A7 + 1, 1, &value, SBI_ERR_BAD_RANGE,
> +			    "invalid attribute");
> +
> +	/* Misaligned pointer address */
> +	ptr = (void *)&value;
> +	ptr += 1;
> +	sse_read_write_test(event_id, SBI_SSE_ATTR_STATUS, 1, ptr, SBI_ERR_INVALID_ADDRESS,
> +		"attribute with invalid address");
> +
> +	report_prefix_pop();
> +}
> +
> +static void sse_test_register_error(uint32_t event_id)
> +{
> +	struct sbiret ret;
> +
> +	report_prefix_push("register");
> +
> +	ret = sbi_sse_unregister(event_id);
> +	sbiret_report_error(&ret, SBI_ERR_INVALID_STATE, "unregister non-registered event");
> +
> +	ret = sbi_sse_register_raw(event_id, 0x1, 0);
> +	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "register misaligned entry");
> +
> +	ret = sbi_sse_register_raw(event_id, (unsigned long)sbi_sse_entry, 0);

sbi_sse_register(event_id, NULL) to avoid a cast.

> +	sbiret_report_error(&ret, SBI_SUCCESS, "register");
> +	if (ret.error)
> +		goto done;
> +
> +	ret = sbi_sse_register_raw(event_id, (unsigned long)sbi_sse_entry, 0);

same

> +	sbiret_report_error(&ret, SBI_ERR_INVALID_STATE, "register used event failure");
> +
> +	ret = sbi_sse_unregister(event_id);
> +	sbiret_report_error(&ret, SBI_SUCCESS, "unregister");
> +
> +done:
> +	report_prefix_pop();
> +}
> +
> +struct sse_simple_test_arg {
> +	bool done;
> +	unsigned long expected_a6;
> +	uint32_t event_id;
> +};
> +
> +#if __riscv_xlen > 32
> +
> +struct alias_test_params {
> +	unsigned long event_id;
> +	unsigned long attr_id;
> +	unsigned long attr_count;
> +	const char *str;
> +};
> +
> +static void test_alias(uint32_t event_id)
> +{
> +	struct alias_test_params *write, *read;
> +	unsigned long write_value, read_value;
> +	struct sbiret ret;
> +	bool err = false;
> +	int r, w;
> +	struct alias_test_params params[] = {
> +		{event_id, SBI_SSE_ATTR_INTERRUPTED_A6, 1, "non aliased"},
> +		{BIT(32) + event_id, SBI_SSE_ATTR_INTERRUPTED_A6, 1, "aliased event_id"},
> +		{event_id, BIT(32) + SBI_SSE_ATTR_INTERRUPTED_A6, 1, "aliased attr_id"},
> +		{event_id, SBI_SSE_ATTR_INTERRUPTED_A6, BIT(32) + 1, "aliased attr_count"},
> +	};
> +
> +	report_prefix_push("alias");
> +	for (w = 0; w < ARRAY_SIZE(params); w++) {
> +		write = &params[w];
> +
> +		write_value = 0xDEADBEEF + w;
> +		ret = sbi_sse_write_attrs(write->event_id, write->attr_id, write->attr_count, &write_value);
> +		if (ret.error)
> +			sbiret_report_error(&ret, SBI_SUCCESS, "Write %s, event 0x%lx attr 0x%lx, attr count 0x%lx", write->str, write->event_id, write->attr_id, write->attr_count);

I like long lines, but this one is too long even for me. I'd break it
after the format string.

> +
> +		for (r = 0; r < ARRAY_SIZE(params); r++) {
> +			read = &params[r];
> +			read_value = 0;
> +			ret = sbi_sse_read_attrs(read->event_id, read->attr_id, read->attr_count, &read_value);
> +			if (ret.error)
> +				sbiret_report_error(&ret, SBI_SUCCESS,
> +						    "Read %s, event 0x%lx attr 0x%lx, attr count 0x%lx", read->str, read->event_id, read->attr_id, read->attr_count);

same

> +
> +			/* Do not spam output with a lot of reports */
> +			if (write_value != read_value) {
> +				err = true;
> +				report_fail("Write %s, event 0x%lx attr 0x%lx, attr count 0x%lx value %lx =="
> +					    "Read %s, event 0x%lx attr 0x%lx, attr count 0x%lx value %lx",
> +				write->str, write->event_id, write->attr_id, write->attr_count,
> +				write_value,
> +				read->str, read->event_id, read->attr_id, read->attr_count,
> +				read_value);

indentation is off

> +			}
> +		}
> +	}
> +
> +	report(!err, "BIT(32) aliasing tests");
> +	report_prefix_pop();
> +}
> +#endif
> +
> +static void sse_simple_handler(void *data, struct pt_regs *regs, unsigned int hartid)
> +{
> +	struct sse_simple_test_arg *arg = data;
> +	int i;
> +	struct sbiret ret;
> +	const char *attr_name;
> +	uint32_t event_id = READ_ONCE(arg->event_id), attr;
> +	unsigned long value, prev_value, flags;
> +	unsigned long interrupted_state[ARRAY_SIZE(interrupted_attrs)];
> +	unsigned long modified_state[ARRAY_SIZE(interrupted_attrs)] = {4, 3, 2, 1};
> +	unsigned long tmp_state[ARRAY_SIZE(interrupted_attrs)];
> +
> +	report((regs->status & SR_SPP) == SR_SPP, "Interrupted S-mode");
> +	report(hartid == current_thread_info()->hartid, "Hartid correctly passed");
> +	sse_check_state(event_id, SBI_SSE_STATE_RUNNING);
> +	report(!sse_event_pending(event_id), "Event not pending");
> +
> +	/* Read full interrupted state */
> +	ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_INTERRUPTED_SEPC,
> +				 ARRAY_SIZE(interrupted_attrs), interrupted_state);
> +	sbiret_report_error(&ret, SBI_SUCCESS, "Save full interrupted state from handler");
> +
> +	/* Write full modified state and read it */
> +	ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_INTERRUPTED_SEPC,
> +				  ARRAY_SIZE(modified_state), modified_state);
> +	sbiret_report_error(&ret, SBI_SUCCESS,
> +			    "Write full interrupted state from handler");
> +
> +	ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_INTERRUPTED_SEPC,
> +				ARRAY_SIZE(tmp_state), tmp_state);
> +	sbiret_report_error(&ret, SBI_SUCCESS, "Read full modified state from handler");
> +
> +	report(memcmp(tmp_state, modified_state, sizeof(modified_state)) == 0,
> +	       "Full interrupted state successfully written");
> +
> +#if __riscv_xlen > 32
> +	test_alias(event_id);
> +#endif
> +
> +	/* Restore full saved state */
> +	ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_INTERRUPTED_SEPC,
> +		ARRAY_SIZE(interrupted_attrs), interrupted_state);

indentation is off

> +	sbiret_report_error(&ret, SBI_SUCCESS,
> +			    "Full interrupted state restore from handler");

no need to wrap

> +
> +	/* We test SBI_SSE_ATTR_INTERRUPTED_FLAGS below with specific flag values */
> +	for (i = 0; i < ARRAY_SIZE(interrupted_attrs); i++) {
> +		attr = interrupted_attrs[i];
> +		if (attr == SBI_SSE_ATTR_INTERRUPTED_FLAGS)
> +			continue;
> +
> +		attr_name = attr_names[attr];
> +
> +		ret = sbi_sse_read_attrs(event_id, attr, 1, &prev_value);
> +		sbiret_report_error(&ret, SBI_SUCCESS, "Get attr %s", attr_name);
> +
> +		value = 0xDEADBEEF + i;
> +		ret = sbi_sse_write_attrs(event_id, attr, 1, &value);
> +		sbiret_report_error(&ret, SBI_SUCCESS, "Set attr %s", attr_name);
> +
> +		ret = sbi_sse_read_attrs(event_id, attr, 1, &value);
> +		sbiret_report_error(&ret, SBI_SUCCESS, "Get attr %s", attr_name);
> +		report(value == 0xDEADBEEF + i, "Get attr %s, value: 0x%lx", attr_name,
> +		       value);

no need to wrap

> +
> +		ret = sbi_sse_write_attrs(event_id, attr, 1, &prev_value);
> +		sbiret_report_error(&ret, SBI_SUCCESS, "Restore attr %s value", attr_name);
> +	}
> +
> +	/* Test all flags allowed for SBI_SSE_ATTR_INTERRUPTED_FLAGS */
> +	attr = SBI_SSE_ATTR_INTERRUPTED_FLAGS;
> +	ret = sbi_sse_read_attrs(event_id, attr, 1, &prev_value);
> +	sbiret_report_error(&ret, SBI_SUCCESS, "Save interrupted flags");
> +
> +	for (i = 0; i < ARRAY_SIZE(interrupted_flags); i++) {
> +		flags = interrupted_flags[i];
> +		ret = sbi_sse_write_attrs(event_id, attr, 1, &flags);
> +		sbiret_report_error(&ret, SBI_SUCCESS,
> +				    "Set interrupted flags bit 0x%lx value", flags);
> +		ret = sbi_sse_read_attrs(event_id, attr, 1, &value);
> +		sbiret_report_error(&ret, SBI_SUCCESS, "Get interrupted flags after set");
> +		report(value == flags, "interrupted flags modified value: 0x%lx", value);
> +	}
> +
> +	/* Write invalid bit in flag register */
> +	flags = SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SDT << 1;
> +	ret = sbi_sse_write_attrs(event_id, attr, 1, &flags);
> +	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Set invalid flags bit 0x%lx value error",
> +			    flags);
> +

no need to wrap

> +	flags = BIT(SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SDT + 1);
> +	ret = sbi_sse_write_attrs(event_id, attr, 1, &flags);
> +	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Set invalid flags bit 0x%lx value error",
> +			    flags);
> +

no need to wrap

> +	ret = sbi_sse_write_attrs(event_id, attr, 1, &prev_value);
> +	sbiret_report_error(&ret, SBI_SUCCESS, "Restore interrupted flags");
> +
> +	/* Try to change HARTID/Priority while running */
> +	if (sbi_sse_event_is_global(event_id)) {
> +		value = current_thread_info()->hartid;
> +		ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &value);
> +		sbiret_report_error(&ret, SBI_ERR_INVALID_STATE, "Set hart id while running error");
> +	}
> +
> +	value = 0;
> +	ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value);
> +	sbiret_report_error(&ret, SBI_ERR_INVALID_STATE, "Set priority while running error");
> +
> +	value = READ_ONCE(arg->expected_a6);
> +	report(interrupted_state[2] == value, "Interrupted state a6, expected 0x%lx, got 0x%lx",
> +	       value, interrupted_state[2]);
> +
> +	report(interrupted_state[3] == SBI_EXT_SSE,
> +	       "Interrupted state a7, expected 0x%x, got 0x%lx", SBI_EXT_SSE,
> +	       interrupted_state[3]);
> +
> +	WRITE_ONCE(arg->done, true);
> +}
> +
> +static int sse_test_inject_simple(uint32_t event_id)
> +{
> +	unsigned long value, error;
> +	struct sbiret ret;
> +	int err_ret = 1;
> +	struct sse_simple_test_arg test_arg = {.event_id = event_id};
> +	struct sbi_sse_handler_arg args = {
> +		.handler = sse_simple_handler,
> +		.handler_data = (void *)&test_arg,
> +		.stack = sse_alloc_stack(),
> +	};
> +
> +	report_prefix_push("simple");
> +
> +	if (!sse_check_state(event_id, SBI_SSE_STATE_UNUSED))
> +		goto err;
> +
> +	ret = sbi_sse_register(event_id, &args);
> +	if (!sbiret_report_error(&ret, SBI_SUCCESS, "register"))
> +		goto err;
> +
> +	if (!sse_check_state(event_id, SBI_SSE_STATE_REGISTERED))
> +		goto err;
> +
> +	if (sbi_sse_event_is_global(event_id)) {
> +		/* Be sure global events are targeting the current hart */
> +		error = sse_global_event_set_current_hart(event_id);
> +		if (error)
> +			goto err;
> +	}
> +
> +	ret = sbi_sse_enable(event_id);
> +	if (!sbiret_report_error(&ret, SBI_SUCCESS, "enable"))
> +		goto err;
> +
> +	if (!sse_check_state(event_id, SBI_SSE_STATE_ENABLED))
> +		goto err;
> +
> +	ret = sbi_sse_hart_mask();
> +	if (!sbiret_report_error(&ret, SBI_SUCCESS, "hart mask"))
> +		goto err;
> +
> +	ret = sbi_sse_inject(event_id, current_thread_info()->hartid);
> +	if (!sbiret_report_error(&ret, SBI_SUCCESS, "injection masked")) {
> +		sbi_sse_hart_unmask();
> +		goto err;
> +	}
> +
> +	report(READ_ONCE(test_arg.done) == 0, "event masked not handled");
> +
> +	/*
> +	 * When unmasking the SSE events, we expect it to be injected
> +	 * immediately so a6 should be SBI_EXT_SBI_SSE_HART_UNMASK
> +	 */
> +	WRITE_ONCE(test_arg.expected_a6, SBI_EXT_SSE_HART_UNMASK);
> +	ret = sbi_sse_hart_unmask();
> +	if (!sbiret_report_error(&ret, SBI_SUCCESS, "hart unmask")) {
> +		goto err;
> +	}
> +
> +	report(READ_ONCE(test_arg.done) == 1, "event unmasked handled");
> +	WRITE_ONCE(test_arg.done, 0);
> +	WRITE_ONCE(test_arg.expected_a6, SBI_EXT_SSE_INJECT);
> +
> +	/* Set as oneshot and verify it is disabled */
> +	ret = sbi_sse_disable(event_id);
> +	if (!sbiret_report_error(&ret, SBI_SUCCESS, "Disable event")) {
> +		/* Nothing we can really do here, event can not be disabled */
> +		goto err;
> +	}
> +
> +	value = SBI_SSE_ATTR_CONFIG_ONESHOT;
> +	ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_CONFIG, 1, &value);
> +	if (!sbiret_report_error(&ret, SBI_SUCCESS, "Set event attribute as ONESHOT"))
> +		goto err;
> +
> +	ret = sbi_sse_enable(event_id);
> +	if (!sbiret_report_error(&ret, SBI_SUCCESS, "Enable event"))
> +		goto err;
> +
> +	ret = sbi_sse_inject(event_id, current_thread_info()->hartid);
> +	if (!sbiret_report_error(&ret, SBI_SUCCESS, "second injection"))
> +		goto err;
> +
> +	report(READ_ONCE(test_arg.done) == 1, "event handled");
> +	WRITE_ONCE(test_arg.done, 0);
> +
> +	if (!sse_check_state(event_id, SBI_SSE_STATE_REGISTERED))
> +		goto err;
> +
> +	/* Clear ONESHOT FLAG */
> +	value = 0;
> +	ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_CONFIG, 1, &value);
> +	if (!sbiret_report_error(&ret, SBI_SUCCESS, "Clear CONFIG.ONESHOT flag"))
> +		goto err;
> +
> +	ret = sbi_sse_unregister(event_id);
> +	if (!sbiret_report_error(&ret, SBI_SUCCESS, "unregister"))
> +		goto err;
> +
> +	sse_check_state(event_id, SBI_SSE_STATE_UNUSED);
> +
> +	err_ret = 0;
> +err:
> +	sse_free_stack(args.stack);
> +	report_prefix_pop();
> +
> +	return err_ret;
> +}
> +
> +struct sse_foreign_cpu_test_arg {
> +	bool done;
> +	unsigned int expected_cpu;
> +	uint32_t event_id;
> +};
> +
> +static void sse_foreign_cpu_handler(void *data, struct pt_regs *regs, unsigned int hartid)
> +{
> +	struct sse_foreign_cpu_test_arg *arg = data;
> +	unsigned int expected_cpu;
> +
> +	/* For arg content to be visible */
> +	smp_rmb();
> +	expected_cpu = READ_ONCE(arg->expected_cpu);
> +	report(expected_cpu == current_thread_info()->cpu,
> +	       "Received event on CPU (%d), expected CPU (%d)", current_thread_info()->cpu,
> +	       expected_cpu);
> +
> +	WRITE_ONCE(arg->done, true);
> +	/* For arg update to be visible for other CPUs */
> +	smp_wmb();
> +}
> +
> +struct sse_local_per_cpu {
> +	struct sbi_sse_handler_arg args;
> +	struct sbiret ret;
> +	struct sse_foreign_cpu_test_arg handler_arg;
> +};
> +
> +static void sse_register_enable_local(void *data)
> +{
> +	struct sbiret ret;
> +	struct sse_local_per_cpu *cpu_args = data;
> +	struct sse_local_per_cpu *cpu_arg = &cpu_args[current_thread_info()->cpu];
> +	uint32_t event_id = cpu_arg->handler_arg.event_id;
> +
> +	ret = sbi_sse_register(event_id, &cpu_arg->args);
> +	WRITE_ONCE(cpu_arg->ret, ret);
> +	if (ret.error)
> +		return;
> +
> +	ret = sbi_sse_enable(event_id);
> +	WRITE_ONCE(cpu_arg->ret, ret);
> +}
> +
> +static void sbi_sse_disable_unregister_local(void *data)
> +{
> +	struct sbiret ret;
> +	struct sse_local_per_cpu *cpu_args = data;
> +	struct sse_local_per_cpu *cpu_arg = &cpu_args[current_thread_info()->cpu];
> +	uint32_t event_id = cpu_arg->handler_arg.event_id;
> +
> +	ret = sbi_sse_disable(event_id);
> +	WRITE_ONCE(cpu_arg->ret, ret);
> +	if (ret.error)
> +		return;
> +
> +	ret = sbi_sse_unregister(event_id);
> +	WRITE_ONCE(cpu_arg->ret, ret);
> +}
> +
> +static int sse_test_inject_local(uint32_t event_id)
> +{
> +	int cpu;
> +	int err_ret = 1;
> +	struct sbiret ret;
> +	struct sse_local_per_cpu *cpu_args, *cpu_arg;
> +	struct sse_foreign_cpu_test_arg *handler_arg;
> +
> +	cpu_args = calloc(NR_CPUS, sizeof(struct sbi_sse_handler_arg));
> +
> +	report_prefix_push("local_dispatch");
> +	for_each_online_cpu(cpu) {
> +		cpu_arg = &cpu_args[cpu];
> +		cpu_arg->handler_arg.event_id = event_id;
> +		cpu_arg->args.stack = sse_alloc_stack();
> +		cpu_arg->args.handler = sse_foreign_cpu_handler;
> +		cpu_arg->args.handler_data = (void *)&cpu_arg->handler_arg;
> +	}
> +
> +	on_cpus(sse_register_enable_local, cpu_args);
> +	for_each_online_cpu(cpu) {
> +		cpu_arg = &cpu_args[cpu];
> +		ret = cpu_arg->ret;
> +		if (ret.error) {
> +			report_fail("CPU failed to register/enable event: %ld", ret.error);
> +			goto err;
> +		}
> +
> +		handler_arg = &cpu_arg->handler_arg;
> +		WRITE_ONCE(handler_arg->expected_cpu, cpu);
> +		/* For handler_arg content to be visible for other CPUs */
> +		smp_wmb();
> +		ret = sbi_sse_inject(event_id, cpus[cpu].hartid);
> +		if (ret.error) {
> +			report_fail("CPU failed to inject event: %ld", ret.error);
> +			goto err;
> +		}
> +	}
> +
> +	for_each_online_cpu(cpu) {
> +		handler_arg = &cpu_args[cpu].handler_arg;
> +		smp_rmb();
> +		while (!READ_ONCE(handler_arg->done)) {
> +			/* For handler_arg update to be visible */
> +			smp_rmb();
> +			cpu_relax();
> +		}
> +		WRITE_ONCE(handler_arg->done, false);
> +	}
> +
> +	on_cpus(sbi_sse_disable_unregister_local, cpu_args);
> +	for_each_online_cpu(cpu) {
> +		cpu_arg = &cpu_args[cpu];
> +		ret = READ_ONCE(cpu_arg->ret);
> +		if (ret.error) {
> +			report_fail("CPU failed to disable/unregister event: %ld", ret.error);
> +			goto err;
> +		}
> +	}
> +
> +	err_ret = 0;
> +err:
> +	for_each_online_cpu(cpu) {
> +		cpu_arg = &cpu_args[cpu];
> +		sse_free_stack(cpu_arg->args.stack);
> +	}
> +
> +	report_pass("local event dispatch on all CPUs");
> +	report_prefix_pop();
> +
> +	return err_ret;
> +}
> +
> +static int sse_test_inject_global(uint32_t event_id)
> +{
> +	unsigned long value;
> +	struct sbiret ret;
> +	unsigned int cpu;
> +	uint64_t timeout;
> +	int err_ret = 1;
> +	struct sse_foreign_cpu_test_arg test_arg = {.event_id = event_id};
> +	struct sbi_sse_handler_arg args = {
> +		.handler = sse_foreign_cpu_handler,
> +		.handler_data = (void *)&test_arg,
> +		.stack = sse_alloc_stack(),
> +	};
> +	enum sbi_sse_state state;
> +
> +	report_prefix_push("global_dispatch");
> +
> +	ret = sbi_sse_register(event_id, &args);
> +	if (!sbiret_report_error(&ret, SBI_SUCCESS, "Register event"))
> +		goto err;
> +
> +	for_each_online_cpu(cpu) {
> +		WRITE_ONCE(test_arg.expected_cpu, cpu);
> +		/* For test_arg content to be visible for other CPUs */
> +		smp_wmb();
> +		value = cpu;
> +		ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &value);
> +		if (!sbiret_report_error(&ret, SBI_SUCCESS, "Set preferred hart"))
> +			goto err;
> +
> +		ret = sbi_sse_enable(event_id);
> +		if (!sbiret_report_error(&ret, SBI_SUCCESS, "Enable event"))
> +			goto err;
> +
> +		ret = sbi_sse_inject(event_id, cpu);
> +		if (!sbiret_report_error(&ret, SBI_SUCCESS, "Inject event"))
> +			goto err;
> +
> +		smp_rmb();
> +		while (!READ_ONCE(test_arg.done)) {
> +			/* For shared test_arg structure */
> +			smp_rmb();
> +			cpu_relax();
> +		}
> +
> +		WRITE_ONCE(test_arg.done, false);
> +
> +		timeout = timer_get_cycles() + usec_to_cycles((uint64_t)1000);

Should probably add an environment variable for this timeout and then use
1000 as the default when the variable isn't set.

> +		/* Wait for event to be back in ENABLED state */
> +		do {
> +			ret = sse_event_get_state(event_id, &state);
> +			if (ret.error)
> +				goto err;
> +			cpu_relax();
> +		} while (state != SBI_SSE_STATE_ENABLED || timer_get_cycles() < timeout);

                                                        ^ &&
> +
> +		if (!report(state == SBI_SSE_STATE_ENABLED,
> +		    "wait for event to be in enable state"))

nit: no need to wrap this line, but if we do, then "wait" should be
aligned under "state".

> +			goto err;
> +
> +		ret = sbi_sse_disable(event_id);
> +		if (!sbiret_report_error(&ret, SBI_SUCCESS, "Disable event"))
> +			goto err;
> +
> +		report_pass("Global event on CPU %d", cpu);
> +	}
> +
> +	ret = sbi_sse_unregister(event_id);
> +	sbiret_report_error(&ret, SBI_SUCCESS, "Unregister event");
> +
> +	err_ret = 0;
> +
> +err:
> +	sse_free_stack(args.stack);
> +	report_prefix_pop();
> +
> +	return err_ret;
> +}
> +
> +struct priority_test_arg {
> +	uint32_t event_id;
> +	bool called;
> +	u32 prio;
> +	struct priority_test_arg *next_event_arg;
> +	void (*check_func)(struct priority_test_arg *arg);
> +};
> +
> +static void sse_hi_priority_test_handler(void *arg, struct pt_regs *regs,
> +					 unsigned int hartid)
> +{
> +	struct priority_test_arg *targ = arg;
> +	struct priority_test_arg *next = targ->next_event_arg;
> +
> +	targ->called = true;
> +	if (next) {
> +		sbi_sse_inject(next->event_id, current_thread_info()->hartid);
> +
> +		report(!sse_event_pending(next->event_id), "Higher priority event is not pending");
> +		report(next->called, "Higher priority event was handled");
> +	}
> +}
> +
> +static void sse_low_priority_test_handler(void *arg, struct pt_regs *regs,
> +					  unsigned int hartid)
> +{
> +	struct priority_test_arg *targ = arg;
> +	struct priority_test_arg *next = targ->next_event_arg;
> +
> +	targ->called = true;
> +
> +	if (next) {
> +		sbi_sse_inject(next->event_id, current_thread_info()->hartid);
> +
> +		report(sse_event_pending(next->event_id), "Lower priority event is pending");
> +		report(!next->called, "Lower priority event %s was not handle before %s",

handled

> +		       sse_event_name(next->event_id), sse_event_name(targ->event_id));
> +	}
> +}
> +
> +static void sse_test_injection_priority_arg(struct priority_test_arg *in_args,
> +					    unsigned int in_args_size,
> +					    sbi_sse_handler_fn handler,
> +					    const char *test_name)
> +{
> +	unsigned int i;
> +	unsigned long value, uret;
> +	struct sbiret ret;
> +	uint32_t event_id;
> +	struct priority_test_arg *arg;
> +	unsigned int args_size = 0;
> +	struct sbi_sse_handler_arg event_args[in_args_size];
> +	struct priority_test_arg *args[in_args_size];
> +	void *stack;
> +	struct sbi_sse_handler_arg *event_arg;
> +
> +	report_prefix_push(test_name);
> +
> +	for (i = 0; i < in_args_size; i++) {
> +		arg = &in_args[i];
> +		event_id = arg->event_id;
> +		if (!sse_event_can_inject(event_id))
> +			continue;
> +
> +		args[args_size] = arg;
> +		args_size++;
> +		event_args->stack = 0;
> +	}
> +
> +	if (!args_size) {
> +		report_skip("No injectable events");
> +		goto skip;
> +	}
> +
> +	for (i = 0; i < args_size; i++) {
> +		arg = args[i];
> +		event_id = arg->event_id;
> +		stack = sse_alloc_stack();
> +
> +		event_arg = &event_args[i];
> +		event_arg->handler = handler;
> +		event_arg->handler_data = (void *)arg;
> +		event_arg->stack = stack;
> +
> +		if (i < (args_size - 1))
> +			arg->next_event_arg = args[i + 1];
> +		else
> +			arg->next_event_arg = NULL;
> +
> +		/* Be sure global events are targeting the current hart */
> +		if (sbi_sse_event_is_global(event_id)) {
> +			uret = sse_global_event_set_current_hart(event_id);
> +			if (uret)
> +				goto err;

If we goto err from here or the next goto, then we'll also get disable and
unregister failures reported.

> +		}
> +
> +		ret = sbi_sse_register(event_id, event_arg);
> +		if (ret.error) {
> +			sbiret_report_error(&ret, SBI_SUCCESS, "register event 0x%x", event_id);
> +			goto err;
> +		}
> +
> +		value = arg->prio;
> +		ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value);
> +		if (ret.error) {
> +			sbiret_report_error(&ret, SBI_SUCCESS, "set event 0x%x priority", event_id);
> +			goto err;

From here we'll get disable failures reported.

> +		}
> +		ret = sbi_sse_enable(event_id);
> +		if (ret.error) {
> +			sbiret_report_error(&ret, SBI_SUCCESS, "enable event 0x%x", event_id);
> +			goto err;
> +		}
> +	}

It looks like we need a complete

 if (!a)
         goto a_out;
 if (!b)
         goto b_out;

 /* ret = success */

 b_out:
         /* undo b */
 a_out:
         /* undo a */

 return ret;

type of model. Or just manage it directly.

 if (!a)
         ...
 if (!b) {
         /* undo a */
         ...
 }

> +
> +	/* Inject first event */
> +	ret = sbi_sse_inject(args[0]->event_id, current_thread_info()->hartid);
> +	sbiret_report_error(&ret, SBI_SUCCESS, "injection");
> +
> +err:
> +	for (i = 0; i < args_size; i++) {
> +		arg = args[i];
> +		event_id = arg->event_id;
> +
> +		report(arg->called, "Event %s handler called", sse_event_name(arg->event_id));
> +
> +		ret = sbi_sse_disable(event_id);
> +		if (ret.error)
> +			sbiret_report_error(&ret, SBI_SUCCESS, "disable event 0x%x", event_id);
> +
> +		sbi_sse_unregister(event_id);
> +		if (ret.error)
> +			sbiret_report_error(&ret, SBI_SUCCESS, "unregister event 0x%x", event_id);
> +
> +		event_arg = &event_args[i];
> +		if (event_arg->stack)
> +			sse_free_stack(event_arg->stack);
> +	}
> +
> +skip:
> +	report_prefix_pop();
> +}
> +
> +static struct priority_test_arg hi_prio_args[] = {
> +	{.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE},
> +	{.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE},
> +	{.event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS},
> +	{.event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS},
> +	{.event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW},
> +	{.event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS},
> +	{.event_id = SBI_SSE_EVENT_LOCAL_DOUBLE_TRAP},
> +	{.event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS},
> +};
> +
> +static struct priority_test_arg low_prio_args[] = {
> +	{.event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS},
> +	{.event_id = SBI_SSE_EVENT_LOCAL_DOUBLE_TRAP},
> +	{.event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS},
> +	{.event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW},
> +	{.event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS},
> +	{.event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS},
> +	{.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE},
> +	{.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE},
> +};
> +
> +static struct priority_test_arg prio_args[] = {
> +	{.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE,		.prio = 5},
> +	{.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE,		.prio = 10},
> +	{.event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS,		.prio = 12},
> +	{.event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW,		.prio = 15},
> +	{.event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS,	.prio = 20},
> +	{.event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS, 	.prio = 22},
> +	{.event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS, 	.prio = 25},
> +};
> +
> +static struct priority_test_arg same_prio_args[] = {
> +	{.event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW,		.prio = 0},
> +	{.event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS,		.prio = 0},
> +	{.event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS,		.prio = 10},
> +	{.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE,		.prio = 10},
> +	{.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE,		.prio = 10},
> +	{.event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS,	.prio = 20},
> +	{.event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS,		.prio = 20},
> +};
> +
> +static void sse_test_injection_priority(void)
> +{
> +	report_prefix_push("prio");
> +
> +	sse_test_injection_priority_arg(hi_prio_args, ARRAY_SIZE(hi_prio_args),
> +					sse_hi_priority_test_handler, "high");
> +
> +	sse_test_injection_priority_arg(low_prio_args, ARRAY_SIZE(low_prio_args),
> +					sse_low_priority_test_handler, "low");
> +
> +	sse_test_injection_priority_arg(prio_args, ARRAY_SIZE(prio_args),
> +					sse_low_priority_test_handler, "changed");
> +
> +	sse_test_injection_priority_arg(same_prio_args, ARRAY_SIZE(same_prio_args),
> +					sse_low_priority_test_handler, "same_prio_args");
> +
> +	report_prefix_pop();
> +}
> +
> +static void test_invalid_event_id(unsigned long event_id)
> +{
> +	struct sbiret ret;
> +	unsigned long value = 0;
> +
> +	ret = sbi_sse_register_raw(event_id, (unsigned long) sbi_sse_entry, 0);
> +	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
> +			    "register event_id 0x%lx", event_id);
> +
> +	ret = sbi_sse_unregister(event_id);
> +	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
> +			"unregister event_id 0x%lx", event_id);
> +
> +	ret = sbi_sse_enable(event_id);
> +	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
> +			    "enable event_id 0x%lx", event_id);
> +
> +	ret = sbi_sse_disable(event_id);
> +	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
> +			    "disable event_id 0x%lx", event_id);
> +
> +	ret = sbi_sse_inject(event_id, 0);
> +	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
> +			    "inject event_id 0x%lx", event_id);
> +
> +	ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value);
> +	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
> +			    "write attr event_id 0x%lx", event_id);
> +
> +	ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value);
> +	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
> +			    "read attr event_id 0x%lx", event_id);
> +}
> +
> +static void sse_test_invalid_event_id(void)
> +{
> +
> +	report_prefix_push("event_id");
> +
> +	test_invalid_event_id(SBI_SSE_EVENT_LOCAL_RESERVED_0_START);
> +
> +	report_prefix_pop();
> +}
> +
> +static void sse_check_event_availability(uint32_t event_id, bool *can_inject, bool *supported)
> +{
> +	unsigned long status;
> +	struct sbiret ret;
> +
> +	*can_inject = false;
> +	*supported = false;
> +
> +	ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_STATUS, 1, &status);
> +	if (ret.error != SBI_SUCCESS && ret.error != SBI_ERR_NOT_SUPPORTED) {
> +		report_fail("Get event status != SBI_SUCCESS && != SBI_ERR_NOT_SUPPORTED: %ld", ret.error);
> +		return;
> +	}
> +	if (ret.error == SBI_ERR_NOT_SUPPORTED)
> +		return;
> +
> +	if (!ret.error)
> +		*supported = true;
> +
> +	*can_inject = (status >> SBI_SSE_ATTR_STATUS_INJECT_OFFSET) & 1;

this should also be under the 'if (!ret.error)'

> +}
> +
> +static void sse_secondary_boot_and_unmask(void *data)
> +{
> +	sbi_sse_hart_unmask();
> +}
> +
> +static void sse_check_mask(void)
> +{
> +	struct sbiret ret;
> +
> +	/* Upon boot, event are masked, check that */
> +	ret = sbi_sse_hart_mask();
> +	sbiret_report_error(&ret, SBI_ERR_ALREADY_STOPPED, "hart mask at boot time");
> +
> +	ret = sbi_sse_hart_unmask();
> +	sbiret_report_error(&ret, SBI_SUCCESS, "hart unmask");
> +	ret = sbi_sse_hart_unmask();
> +	sbiret_report_error(&ret, SBI_ERR_ALREADY_STARTED, "hart unmask twice error");
> +
> +	ret = sbi_sse_hart_mask();
> +	sbiret_report_error(&ret, SBI_SUCCESS, "hart mask");
> +	ret = sbi_sse_hart_mask();
> +	sbiret_report_error(&ret, SBI_ERR_ALREADY_STOPPED, "hart mask twice");
> +}
> +
> +static int run_inject_test(struct sse_event_info *info)
> +{
> +	unsigned long event_id = info->event_id;
> +
> +	if (!info->can_inject) {
> +		report_skip("Event does not support injection, skipping injection tests");
> +		return 0;
> +	}
> +
> +	if (sse_test_inject_simple(event_id))
> +		return 1;
> +
> +	if (sbi_sse_event_is_global(event_id))
> +		return sse_test_inject_global(event_id);
> +	else
> +		return sse_test_inject_local(event_id);
> +}
> +
> +void check_sse(void)
> +{
> +	struct sse_event_info *info;
> +	unsigned long i, event_id;
> +	bool supported;
> +
> +	report_prefix_push("sse");
> +
> +	if (!sbi_probe(SBI_EXT_SSE)) {
> +		report_skip("extension not available");
> +		report_prefix_pop();
> +		return;
> +	}
> +
> +	sse_check_mask();
> +
> +	/*
> +	 * Dummy wakeup of all processors since some of them will be targeted
> +	 * by global events without going through the wakeup call as well as
> +	 * unmasking SSE events on all harts
> +	 */
> +	on_cpus(sse_secondary_boot_and_unmask, NULL);
> +
> +	sse_test_invalid_event_id();
> +
> +	for (i = 0; i < ARRAY_SIZE(sse_event_infos); i++) {
> +		info = &sse_event_infos[i];
> +		event_id = info->event_id;
> +		report_prefix_push(info->name);
> +		sse_check_event_availability(event_id, &info->can_inject, &supported);
> +		if (!supported) {
> +			report_skip("Event is not supported, skipping tests");
> +			report_prefix_pop();
> +			continue;
> +		}
> +
> +		sse_test_attrs(event_id);
> +		sse_test_register_error(event_id);
> +
> +		if (run_inject_test(info)) {
> +			report_skip("Event test failed, event state unreliable");
> +			info->can_inject = false;
> +		}
> +
> +		report_prefix_pop();
> +	}
> +
> +	sse_test_injection_priority();
> +
> +	report_prefix_pop();
> +}
> diff --git a/riscv/sbi.c b/riscv/sbi.c
> index 0404bb81..478cb35d 100644
> --- a/riscv/sbi.c
> +++ b/riscv/sbi.c
> @@ -32,6 +32,7 @@
>  
>  #define	HIGH_ADDR_BOUNDARY	((phys_addr_t)1 << 32)
>  
> +void check_sse(void);
>  void check_fwft(void);
>  
>  static long __labs(long a)
> @@ -1567,6 +1568,7 @@ int main(int argc, char **argv)
>  	check_hsm();
>  	check_dbcn();
>  	check_susp();
> +	check_sse();
>  	check_fwft();
>  
>  	return report_summary();
> -- 
> 2.47.2
> 

Thanks,
drew

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [kvm-unit-tests PATCH v8 2/6] riscv: Set .aux.o files as .PRECIOUS
  2025-03-12 16:30   ` Andrew Jones
@ 2025-03-12 17:53     ` Andrew Jones
  0 siblings, 0 replies; 13+ messages in thread
From: Andrew Jones @ 2025-03-12 17:53 UTC (permalink / raw)
  To: Clément Léger; +Cc: kvm, kvm-riscv, Anup Patel, Atish Patra

On Wed, Mar 12, 2025 at 05:30:09PM +0100, Andrew Jones wrote:
> On Fri, Mar 07, 2025 at 05:15:44PM +0100, Clément Léger wrote:
> > When compiling, we need to keep .aux.o file or they will be removed
> > after the compilation which leads to dependent files to be recompiled.
> > Set these files as .PRECIOUS to keep them.
> > 
> > Signed-off-by: Clément Léger <cleger@rivosinc.com>
> > ---
> >  riscv/Makefile | 1 +
> >  1 file changed, 1 insertion(+)
> > 
> > diff --git a/riscv/Makefile b/riscv/Makefile
> > index 52718f3f..ae9cf02a 100644
> > --- a/riscv/Makefile
> > +++ b/riscv/Makefile
> > @@ -90,6 +90,7 @@ CFLAGS += -I $(SRCDIR)/lib -I $(SRCDIR)/lib/libfdt -I lib -I $(SRCDIR)/riscv
> >  asm-offsets = lib/riscv/asm-offsets.h
> >  include $(SRCDIR)/scripts/asm-offsets.mak
> >  
> > +.PRECIOUS: %.aux.o
> >  %.aux.o: $(SRCDIR)/lib/auxinfo.c
> >  	$(CC) $(CFLAGS) -c -o $@ $< \
> >  		-DPROGNAME=\"$(notdir $(@:.aux.o=.$(exe)))\" -DAUXFLAGS=$(AUXFLAGS)
> > -- 
> > 2.47.2
> >
> 
> Reviewed-by: Andrew Jones <ajones@ventanamicro.com>

I meant to give this a

Reviewed-by: Andrew Jones <andrew.jones@linux.dev>

but I'm using the wrong mutt config right now...

drew

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [kvm-unit-tests PATCH v8 6/6] riscv: sbi: Add SSE extension tests
  2025-03-12 17:51   ` Andrew Jones
@ 2025-03-14  9:29     ` Clément Léger
  2025-03-14 10:09       ` Andrew Jones
  0 siblings, 1 reply; 13+ messages in thread
From: Clément Léger @ 2025-03-14  9:29 UTC (permalink / raw)
  To: Andrew Jones; +Cc: kvm, kvm-riscv, Anup Patel, Atish Patra



On 12/03/2025 18:51, Andrew Jones wrote:
> On Fri, Mar 07, 2025 at 05:15:48PM +0100, Clément Léger wrote:
>> Add SBI SSE extension tests for the following features:
>> - Test attributes errors (invalid values, RO, etc)
>> - Registration errors
>> - Simple events (register, enable, inject)
>> - Events with different priorities
>> - Global events dispatch on different harts
>> - Local events on all harts
>> - Hart mask/unmask events
>>
>> Signed-off-by: Clément Léger <cleger@rivosinc.com>
>> ---
>>  riscv/Makefile    |    1 +
>>  riscv/sbi-tests.h |    1 +
>>  riscv/sbi-sse.c   | 1215 +++++++++++++++++++++++++++++++++++++++++++++
>>  riscv/sbi.c       |    2 +
>>  4 files changed, 1219 insertions(+)
>>  create mode 100644 riscv/sbi-sse.c
>>
>> diff --git a/riscv/Makefile b/riscv/Makefile
>> index 16fc125b..4fe2f1bb 100644
>> --- a/riscv/Makefile
>> +++ b/riscv/Makefile
>> @@ -18,6 +18,7 @@ tests += $(TEST_DIR)/sieve.$(exe)
>>  all: $(tests)
>>  
>>  $(TEST_DIR)/sbi-deps = $(TEST_DIR)/sbi-asm.o $(TEST_DIR)/sbi-fwft.o
>> +$(TEST_DIR)/sbi-deps += $(TEST_DIR)/sbi-sse.o
>>  
>>  # When built for EFI sieve needs extra memory, run with e.g. '-m 256' on QEMU
>>  $(TEST_DIR)/sieve.$(exe): AUXFLAGS = 0x1
>> diff --git a/riscv/sbi-tests.h b/riscv/sbi-tests.h
>> index b081464d..a71da809 100644
>> --- a/riscv/sbi-tests.h
>> +++ b/riscv/sbi-tests.h
>> @@ -71,6 +71,7 @@
>>  	sbiret_report(ret, expected_error, expected_value, "check sbi.error and sbi.value")
>>  
>>  void sbi_bad_fid(int ext);
>> +void check_sse(void);
>>  
>>  #endif /* __ASSEMBLER__ */
>>  #endif /* _RISCV_SBI_TESTS_H_ */
>> diff --git a/riscv/sbi-sse.c b/riscv/sbi-sse.c
>> new file mode 100644
>> index 00000000..7bd58b8b
>> --- /dev/null
>> +++ b/riscv/sbi-sse.c
>> @@ -0,0 +1,1215 @@
>> +// SPDX-License-Identifier: GPL-2.0-only
>> +/*
>> + * SBI SSE testsuite
>> + *
>> + * Copyright (C) 2025, Rivos Inc., Clément Léger <cleger@rivosinc.com>
>> + */
>> +#include <alloc.h>
>> +#include <alloc_page.h>
>> +#include <bitops.h>
>> +#include <cpumask.h>
>> +#include <libcflat.h>
>> +#include <on-cpus.h>
>> +#include <stdlib.h>
>> +
>> +#include <asm/barrier.h>
>> +#include <asm/delay.h>
>> +#include <asm/io.h>
>> +#include <asm/page.h>
>> +#include <asm/processor.h>
>> +#include <asm/sbi.h>
>> +#include <asm/setup.h>
>> +#include <asm/timer.h>
>> +
>> +#include "sbi-tests.h"
>> +
>> +#define SSE_STACK_SIZE	PAGE_SIZE
>> +
>> +struct sse_event_info {
>> +	uint32_t event_id;
>> +	const char *name;
>> +	bool can_inject;
>> +};
>> +
>> +static struct sse_event_info sse_event_infos[] = {
>> +	{
>> +		.event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS,
>> +		.name = "local_high_prio_ras",
>> +	},
>> +	{
>> +		.event_id = SBI_SSE_EVENT_LOCAL_DOUBLE_TRAP,
>> +		.name = "double_trap",
>> +	},
>> +	{
>> +		.event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS,
>> +		.name = "global_high_prio_ras",
>> +	},
>> +	{
>> +		.event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW,
>> +		.name = "local_pmu_overflow",
>> +	},
>> +	{
>> +		.event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS,
>> +		.name = "local_low_prio_ras",
>> +	},
>> +	{
>> +		.event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS,
>> +		.name = "global_low_prio_ras",
>> +	},
>> +	{
>> +		.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE,
>> +		.name = "local_software",
>> +	},
>> +	{
>> +		.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE,
>> +		.name = "global_software",
>> +	},
>> +};
>> +
>> +static const char *const attr_names[] = {
>> +	[SBI_SSE_ATTR_STATUS] = "status",
>> +	[SBI_SSE_ATTR_PRIORITY] = "priority",
>> +	[SBI_SSE_ATTR_CONFIG] = "config",
>> +	[SBI_SSE_ATTR_PREFERRED_HART] = "preferred_hart",
>> +	[SBI_SSE_ATTR_ENTRY_PC] = "entry_pc",
>> +	[SBI_SSE_ATTR_ENTRY_ARG] = "entry_arg",
>> +	[SBI_SSE_ATTR_INTERRUPTED_SEPC] = "interrupted_sepc",
>> +	[SBI_SSE_ATTR_INTERRUPTED_FLAGS] = "interrupted_flags",
>> +	[SBI_SSE_ATTR_INTERRUPTED_A6] = "interrupted_a6",
>> +	[SBI_SSE_ATTR_INTERRUPTED_A7] = "interrupted_a7",
> 
> nit: tabulate
> 
>> +};
>> +
>> +static const unsigned long ro_attrs[] = {
>> +	SBI_SSE_ATTR_STATUS,
>> +	SBI_SSE_ATTR_ENTRY_PC,
>> +	SBI_SSE_ATTR_ENTRY_ARG,
>> +};
>> +
>> +static const unsigned long interrupted_attrs[] = {
>> +	SBI_SSE_ATTR_INTERRUPTED_SEPC,
>> +	SBI_SSE_ATTR_INTERRUPTED_FLAGS,
>> +	SBI_SSE_ATTR_INTERRUPTED_A6,
>> +	SBI_SSE_ATTR_INTERRUPTED_A7,
>> +};
>> +
>> +static const unsigned long interrupted_flags[] = {
>> +	SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPP,
>> +	SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPIE,
>> +	SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPELP,
>> +	SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SDT,
>> +	SBI_SSE_ATTR_INTERRUPTED_FLAGS_HSTATUS_SPV,
>> +	SBI_SSE_ATTR_INTERRUPTED_FLAGS_HSTATUS_SPVP,
>> +};
>> +
>> +static struct sse_event_info *sse_event_get_info(uint32_t event_id)
>> +{
>> +	int i;
>> +
>> +	for (i = 0; i < ARRAY_SIZE(sse_event_infos); i++) {
>> +		if (sse_event_infos[i].event_id == event_id)
>> +			return &sse_event_infos[i];
>> +	}
>> +
>> +	assert_msg(false, "Invalid event id: %d", event_id);
>> +}
>> +
>> +static const char *sse_event_name(uint32_t event_id)
>> +{
>> +	return sse_event_get_info(event_id)->name;
>> +}
>> +
>> +static bool sse_event_can_inject(uint32_t event_id)
>> +{
>> +	return sse_event_get_info(event_id)->can_inject;
>> +}
>> +
>> +static struct sbiret sse_get_event_status_field(uint32_t event_id, unsigned long mask,
>> +						unsigned long shift, unsigned long *value)
>> +{
>> +	struct sbiret ret;
>> +	unsigned long status;
>> +
>> +	ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_STATUS, 1, &status);
>> +	if (ret.error) {
>> +		sbiret_report_error(&ret, SBI_SUCCESS, "Get event status");
>> +		return ret;
>> +	}
>> +
>> +	*value = (status & mask) >> shift;
>> +
>> +	return ret;
>> +}
>> +
>> +static struct sbiret sse_event_get_state(uint32_t event_id, enum sbi_sse_state *state)
>> +{
>> +	unsigned long status = 0;
>> +	struct sbiret ret;
>> +
>> +	ret = sse_get_event_status_field(event_id, SBI_SSE_ATTR_STATUS_STATE_MASK,
>> +					  SBI_SSE_ATTR_STATUS_STATE_OFFSET, &status);
>> +	*state = status;
>> +
>> +	return ret;
>> +}
>> +
>> +static unsigned long sse_global_event_set_current_hart(uint32_t event_id)
>> +{
>> +	struct sbiret ret;
>> +	unsigned long current_hart = current_thread_info()->hartid;
>> +
>> +	if (!sbi_sse_event_is_global(event_id))
>> +		return SBI_ERR_INVALID_PARAM;
> 
> The only two callers of sse_global_event_set_current_hart() check
> sbi_sse_event_is_global() themselves, so this check can be an assert().
> I think it should be anyway, since returning an SBI error, which didn't
> originate in SBI, could lead to confusion.

Indeed, that check is clearly meant to be an assert.

> 
>> +
>> +	ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &current_hart);
>> +	if (sbiret_report_error(&ret, SBI_SUCCESS, "Set preferred hart"))
>> +		return ret.error;
>> +
>> +	return 0;
>> +}
>> +
>> +static bool sse_check_state(uint32_t event_id, unsigned long expected_state)
>> +{
>> +	struct sbiret ret;
>> +	enum sbi_sse_state state;
>> +
>> +	ret = sse_event_get_state(event_id, &state);
>> +	if (ret.error)
>> +		return false;
>> +
>> +	return report(state == expected_state, "event status == %ld", expected_state);
>> +}
>> +
>> +static bool sse_event_pending(uint32_t event_id)
>> +{
>> +	bool pending = 0;
>> +
>> +	sse_get_event_status_field(event_id, BIT(SBI_SSE_ATTR_STATUS_PENDING_OFFSET),
>> +		SBI_SSE_ATTR_STATUS_PENDING_OFFSET, (unsigned long*)&pending);
>> +
>> +	return pending;
>> +}
>> +
>> +static void *sse_alloc_stack(void)
>> +{
>> +	/*
>> +	 * We assume that SSE_STACK_SIZE always fit in one page. This page will
>> +	 * always be decremented before storing anything on it in sse-entry.S.
>> +	 */
>> +	assert(SSE_STACK_SIZE <= PAGE_SIZE);
>> +
>> +	return (alloc_page() + SSE_STACK_SIZE);
>> +}
>> +
>> +static void sse_free_stack(void *stack)
>> +{
>> +	free_page(stack - SSE_STACK_SIZE);
>> +}
>> +
>> +static void sse_read_write_test(uint32_t event_id, unsigned long attr, unsigned long attr_count,
>> +				unsigned long *value, long expected_error, const char *str)
>> +{
>> +	struct sbiret ret;
>> +
>> +	ret = sbi_sse_read_attrs(event_id, attr, attr_count, value);
>> +	sbiret_report_error(&ret, expected_error, "Read %s error", str);
>> +
>> +	ret = sbi_sse_write_attrs(event_id, attr, attr_count, value);
>> +	sbiret_report_error(&ret, expected_error, "Write %s error", str);
>> +}
>> +
>> +#define ALL_ATTRS_COUNT	(SBI_SSE_ATTR_INTERRUPTED_A7 + 1)
>> +
>> +static void sse_test_attrs(uint32_t event_id)
>> +{
>> +	unsigned long value = 0;
>> +	struct sbiret ret;
>> +	void *ptr;
>> +	unsigned long values[ALL_ATTRS_COUNT];
>> +	unsigned int i;
>> +	const char *invalid_hart_str;
>> +	const char *attr_name;
>> +
>> +	report_prefix_push("attrs");
>> +
>> +	for (i = 0; i < ARRAY_SIZE(ro_attrs); i++) {
>> +		ret = sbi_sse_write_attrs(event_id, ro_attrs[i], 1, &value);
>> +		sbiret_report_error(&ret, SBI_ERR_DENIED, "RO attribute %s not writable",
>> +				    attr_names[ro_attrs[i]]);
>> +	}
>> +
>> +	ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_STATUS, ALL_ATTRS_COUNT, values);
>> +	sbiret_report_error(&ret, SBI_SUCCESS, "Read multiple attributes");
>> +
>> +	for (i = SBI_SSE_ATTR_STATUS; i <= SBI_SSE_ATTR_INTERRUPTED_A7; i++) {
>> +		ret = sbi_sse_read_attrs(event_id, i, 1, &value);
>> +		attr_name = attr_names[i];
>> +
>> +		sbiret_report_error(&ret, SBI_SUCCESS, "Read single attribute %s", attr_name);
>> +		if (values[i] != value)
>> +			report_fail("Attribute 0x%x single value read (0x%lx) differs from the one read with multiple attributes (0x%lx)",
>> +				    i, value, values[i]);
>> +		/*
>> +		 * Preferred hart reset value is defined by SBI vendor
>> +		 */
>> +		if(i != SBI_SSE_ATTR_PREFERRED_HART) {
>                   ^ missing space
> 
>> +			/*
>> +			 * Specification states that injectable bit is implementation dependent
>> +			 * but other bits are zero-initialized.
>> +			 */
>> +			if (i == SBI_SSE_ATTR_STATUS)
>> +				value &= ~BIT(SBI_SSE_ATTR_STATUS_INJECT_OFFSET);
>> +			report(value == 0, "Attribute %s reset value is 0, found %lx", attr_name,
>> +			       value);
> 
> nit: no need to wrap the line
> 
>> +		}
>> +	}
>> +
>> +#if __riscv_xlen > 32
>> +	value = BIT(32);
>> +	ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value);
>> +	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Write invalid prio > 0xFFFFFFFF error");
>> +#endif
>> +
>> +	value = ~SBI_SSE_ATTR_CONFIG_ONESHOT;
>> +	ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_CONFIG, 1, &value);
>> +	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Write invalid config value error");
>> +
>> +	if (sbi_sse_event_is_global(event_id)) {
>> +		invalid_hart_str = getenv("INVALID_HART_ID");
>> +		if (!invalid_hart_str)
>> +			value = 0xFFFFFFFFUL;
>> +		else
>> +			value = strtoul(invalid_hart_str, NULL, 0);
>> +
>> +		ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &value);
>> +		sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Set invalid hart id error");
>> +	} else {
>> +		/* Set Hart on local event -> RO */
>> +		value = current_thread_info()->hartid;
>> +		ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &value);
>> +		sbiret_report_error(&ret, SBI_ERR_DENIED,
>> +				    "Set hart id on local event error");
>> +	}
>> +
>> +	/* Set/get flags, sepc, a6, a7 */
>> +	for (i = 0; i < ARRAY_SIZE(interrupted_attrs); i++) {
>> +		attr_name = attr_names[interrupted_attrs[i]];
>> +		ret = sbi_sse_read_attrs(event_id, interrupted_attrs[i], 1, &value);
>> +		sbiret_report_error(&ret, SBI_SUCCESS, "Get interrupted %s", attr_name);
>> +
>> +		value = ARRAY_SIZE(interrupted_attrs) - i;
>> +		ret = sbi_sse_write_attrs(event_id, interrupted_attrs[i], 1, &value);
>> +		sbiret_report_error(&ret, SBI_ERR_INVALID_STATE,
>> +				    "Set attribute %s invalid state error", attr_name);
>> +	}
>> +
>> +	sse_read_write_test(event_id, SBI_SSE_ATTR_STATUS, 0, &value, SBI_ERR_INVALID_PARAM,
>> +			    "attribute attr_count == 0");
>> +	sse_read_write_test(event_id, SBI_SSE_ATTR_INTERRUPTED_A7 + 1, 1, &value, SBI_ERR_BAD_RANGE,
>> +			    "invalid attribute");
>> +
>> +	/* Misaligned pointer address */
>> +	ptr = (void *)&value;
>> +	ptr += 1;
>> +	sse_read_write_test(event_id, SBI_SSE_ATTR_STATUS, 1, ptr, SBI_ERR_INVALID_ADDRESS,
>> +		"attribute with invalid address");
>> +
>> +	report_prefix_pop();
>> +}
>> +
>> +static void sse_test_register_error(uint32_t event_id)
>> +{
>> +	struct sbiret ret;
>> +
>> +	report_prefix_push("register");
>> +
>> +	ret = sbi_sse_unregister(event_id);
>> +	sbiret_report_error(&ret, SBI_ERR_INVALID_STATE, "unregister non-registered event");
>> +
>> +	ret = sbi_sse_register_raw(event_id, 0x1, 0);
>> +	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "register misaligned entry");
>> +
>> +	ret = sbi_sse_register_raw(event_id, (unsigned long)sbi_sse_entry, 0);
> 
> sbi_sse_register(event_id, sbi_sse_entry, NULL) to avoid a cast.
> 
>> +	sbiret_report_error(&ret, SBI_SUCCESS, "register");
>> +	if (ret.error)
>> +		goto done;
>> +
>> +	ret = sbi_sse_register_raw(event_id, (unsigned long)sbi_sse_entry, 0);
> 
> same
> 
>> +	sbiret_report_error(&ret, SBI_ERR_INVALID_STATE, "register used event failure");
>> +
>> +	ret = sbi_sse_unregister(event_id);
>> +	sbiret_report_error(&ret, SBI_SUCCESS, "unregister");
>> +
>> +done:
>> +	report_prefix_pop();
>> +}
>> +
>> +struct sse_simple_test_arg {
>> +	bool done;
>> +	unsigned long expected_a6;
>> +	uint32_t event_id;
>> +};
>> +
>> +#if __riscv_xlen > 32
>> +
>> +struct alias_test_params {
>> +	unsigned long event_id;
>> +	unsigned long attr_id;
>> +	unsigned long attr_count;
>> +	const char *str;
>> +};
>> +
>> +static void test_alias(uint32_t event_id)
>> +{
>> +	struct alias_test_params *write, *read;
>> +	unsigned long write_value, read_value;
>> +	struct sbiret ret;
>> +	bool err = false;
>> +	int r, w;
>> +	struct alias_test_params params[] = {
>> +		{event_id, SBI_SSE_ATTR_INTERRUPTED_A6, 1, "non aliased"},
>> +		{BIT(32) + event_id, SBI_SSE_ATTR_INTERRUPTED_A6, 1, "aliased event_id"},
>> +		{event_id, BIT(32) + SBI_SSE_ATTR_INTERRUPTED_A6, 1, "aliased attr_id"},
>> +		{event_id, SBI_SSE_ATTR_INTERRUPTED_A6, BIT(32) + 1, "aliased attr_count"},
>> +	};
>> +
>> +	report_prefix_push("alias");
>> +	for (w = 0; w < ARRAY_SIZE(params); w++) {
>> +		write = &params[w];
>> +
>> +		write_value = 0xDEADBEEF + w;
>> +		ret = sbi_sse_write_attrs(write->event_id, write->attr_id, write->attr_count, &write_value);
>> +		if (ret.error)
>> +			sbiret_report_error(&ret, SBI_SUCCESS, "Write %s, event 0x%lx attr 0x%lx, attr count 0x%lx", write->str, write->event_id, write->attr_id, write->attr_count);
> 
> I like long lines, but this one is too long even for me. I'd break it
> after the format string.
> 
>> +
>> +		for (r = 0; r < ARRAY_SIZE(params); r++) {
>> +			read = &params[r];
>> +			read_value = 0;
>> +			ret = sbi_sse_read_attrs(read->event_id, read->attr_id, read->attr_count, &read_value);
>> +			if (ret.error)
>> +				sbiret_report_error(&ret, SBI_SUCCESS,
>> +						    "Read %s, event 0x%lx attr 0x%lx, attr count 0x%lx", read->str, read->event_id, read->attr_id, read->attr_count);
> 
> same
> 
>> +
>> +			/* Do not spam output with a lot of reports */
>> +			if (write_value != read_value) {
>> +				err = true;
>> +				report_fail("Write %s, event 0x%lx attr 0x%lx, attr count 0x%lx value %lx =="
>> +					    "Read %s, event 0x%lx attr 0x%lx, attr count 0x%lx value %lx",
>> +				write->str, write->event_id, write->attr_id, write->attr_count,
>> +				write_value,
>> +				read->str, read->event_id, read->attr_id, read->attr_count,
>> +				read_value);
> 
> indentation is off
> 
>> +			}
>> +		}
>> +	}
>> +
>> +	report(!err, "BIT(32) aliasing tests");
>> +	report_prefix_pop();
>> +}
>> +#endif
>> +
>> +static void sse_simple_handler(void *data, struct pt_regs *regs, unsigned int hartid)
>> +{
>> +	struct sse_simple_test_arg *arg = data;
>> +	int i;
>> +	struct sbiret ret;
>> +	const char *attr_name;
>> +	uint32_t event_id = READ_ONCE(arg->event_id), attr;
>> +	unsigned long value, prev_value, flags;
>> +	unsigned long interrupted_state[ARRAY_SIZE(interrupted_attrs)];
>> +	unsigned long modified_state[ARRAY_SIZE(interrupted_attrs)] = {4, 3, 2, 1};
>> +	unsigned long tmp_state[ARRAY_SIZE(interrupted_attrs)];
>> +
>> +	report((regs->status & SR_SPP) == SR_SPP, "Interrupted S-mode");
>> +	report(hartid == current_thread_info()->hartid, "Hartid correctly passed");
>> +	sse_check_state(event_id, SBI_SSE_STATE_RUNNING);
>> +	report(!sse_event_pending(event_id), "Event not pending");
>> +
>> +	/* Read full interrupted state */
>> +	ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_INTERRUPTED_SEPC,
>> +				 ARRAY_SIZE(interrupted_attrs), interrupted_state);
>> +	sbiret_report_error(&ret, SBI_SUCCESS, "Save full interrupted state from handler");
>> +
>> +	/* Write full modified state and read it */
>> +	ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_INTERRUPTED_SEPC,
>> +				  ARRAY_SIZE(modified_state), modified_state);
>> +	sbiret_report_error(&ret, SBI_SUCCESS,
>> +			    "Write full interrupted state from handler");
>> +
>> +	ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_INTERRUPTED_SEPC,
>> +				ARRAY_SIZE(tmp_state), tmp_state);
>> +	sbiret_report_error(&ret, SBI_SUCCESS, "Read full modified state from handler");
>> +
>> +	report(memcmp(tmp_state, modified_state, sizeof(modified_state)) == 0,
>> +	       "Full interrupted state successfully written");
>> +
>> +#if __riscv_xlen > 32
>> +	test_alias(event_id);
>> +#endif
>> +
>> +	/* Restore full saved state */
>> +	ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_INTERRUPTED_SEPC,
>> +		ARRAY_SIZE(interrupted_attrs), interrupted_state);
> 
> indentation is off
> 
>> +	sbiret_report_error(&ret, SBI_SUCCESS,
>> +			    "Full interrupted state restore from handler");
> 
> no need to wrap
> 
>> +
>> +	/* We test SBI_SSE_ATTR_INTERRUPTED_FLAGS below with specific flag values */
>> +	for (i = 0; i < ARRAY_SIZE(interrupted_attrs); i++) {
>> +		attr = interrupted_attrs[i];
>> +		if (attr == SBI_SSE_ATTR_INTERRUPTED_FLAGS)
>> +			continue;
>> +
>> +		attr_name = attr_names[attr];
>> +
>> +		ret = sbi_sse_read_attrs(event_id, attr, 1, &prev_value);
>> +		sbiret_report_error(&ret, SBI_SUCCESS, "Get attr %s", attr_name);
>> +
>> +		value = 0xDEADBEEF + i;
>> +		ret = sbi_sse_write_attrs(event_id, attr, 1, &value);
>> +		sbiret_report_error(&ret, SBI_SUCCESS, "Set attr %s", attr_name);
>> +
>> +		ret = sbi_sse_read_attrs(event_id, attr, 1, &value);
>> +		sbiret_report_error(&ret, SBI_SUCCESS, "Get attr %s", attr_name);
>> +		report(value == 0xDEADBEEF + i, "Get attr %s, value: 0x%lx", attr_name,
>> +		       value);
> 
> no need to wrap
> 
>> +
>> +		ret = sbi_sse_write_attrs(event_id, attr, 1, &prev_value);
>> +		sbiret_report_error(&ret, SBI_SUCCESS, "Restore attr %s value", attr_name);
>> +	}
>> +
>> +	/* Test all flags allowed for SBI_SSE_ATTR_INTERRUPTED_FLAGS */
>> +	attr = SBI_SSE_ATTR_INTERRUPTED_FLAGS;
>> +	ret = sbi_sse_read_attrs(event_id, attr, 1, &prev_value);
>> +	sbiret_report_error(&ret, SBI_SUCCESS, "Save interrupted flags");
>> +
>> +	for (i = 0; i < ARRAY_SIZE(interrupted_flags); i++) {
>> +		flags = interrupted_flags[i];
>> +		ret = sbi_sse_write_attrs(event_id, attr, 1, &flags);
>> +		sbiret_report_error(&ret, SBI_SUCCESS,
>> +				    "Set interrupted flags bit 0x%lx value", flags);
>> +		ret = sbi_sse_read_attrs(event_id, attr, 1, &value);
>> +		sbiret_report_error(&ret, SBI_SUCCESS, "Get interrupted flags after set");
>> +		report(value == flags, "interrupted flags modified value: 0x%lx", value);
>> +	}
>> +
>> +	/* Write invalid bit in flag register */
>> +	flags = SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SDT << 1;
>> +	ret = sbi_sse_write_attrs(event_id, attr, 1, &flags);
>> +	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Set invalid flags bit 0x%lx value error",
>> +			    flags);
>> +
> 
> no need to wrap

That's actually > 100 columns if unwrapped.
> 
>> +	flags = BIT(SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SDT + 1);
>> +	ret = sbi_sse_write_attrs(event_id, attr, 1, &flags);
>> +	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Set invalid flags bit 0x%lx value error",
>> +			    flags);
>> +
> 
> no need to wrap

Ditto

> 
>> +	ret = sbi_sse_write_attrs(event_id, attr, 1, &prev_value);
>> +	sbiret_report_error(&ret, SBI_SUCCESS, "Restore interrupted flags");
>> +
>> +	/* Try to change HARTID/Priority while running */
>> +	if (sbi_sse_event_is_global(event_id)) {
>> +		value = current_thread_info()->hartid;
>> +		ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &value);
>> +		sbiret_report_error(&ret, SBI_ERR_INVALID_STATE, "Set hart id while running error");
>> +	}
>> +
>> +	value = 0;
>> +	ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value);
>> +	sbiret_report_error(&ret, SBI_ERR_INVALID_STATE, "Set priority while running error");
>> +
>> +	value = READ_ONCE(arg->expected_a6);
>> +	report(interrupted_state[2] == value, "Interrupted state a6, expected 0x%lx, got 0x%lx",
>> +	       value, interrupted_state[2]);
>> +
>> +	report(interrupted_state[3] == SBI_EXT_SSE,
>> +	       "Interrupted state a7, expected 0x%x, got 0x%lx", SBI_EXT_SSE,
>> +	       interrupted_state[3]);
>> +
>> +	WRITE_ONCE(arg->done, true);
>> +}
>> +
>> +static int sse_test_inject_simple(uint32_t event_id)
>> +{
>> +	unsigned long value, error;
>> +	struct sbiret ret;
>> +	int err_ret = 1;
>> +	struct sse_simple_test_arg test_arg = {.event_id = event_id};
>> +	struct sbi_sse_handler_arg args = {
>> +		.handler = sse_simple_handler,
>> +		.handler_data = (void *)&test_arg,
>> +		.stack = sse_alloc_stack(),
>> +	};
>> +
>> +	report_prefix_push("simple");
>> +
>> +	if (!sse_check_state(event_id, SBI_SSE_STATE_UNUSED))
>> +		goto err;
>> +
>> +	ret = sbi_sse_register(event_id, &args);
>> +	if (!sbiret_report_error(&ret, SBI_SUCCESS, "register"))
>> +		goto err;
>> +
>> +	if (!sse_check_state(event_id, SBI_SSE_STATE_REGISTERED))
>> +		goto err;
>> +
>> +	if (sbi_sse_event_is_global(event_id)) {
>> +		/* Be sure global events are targeting the current hart */
>> +		error = sse_global_event_set_current_hart(event_id);
>> +		if (error)
>> +			goto err;
>> +	}
>> +
>> +	ret = sbi_sse_enable(event_id);
>> +	if (!sbiret_report_error(&ret, SBI_SUCCESS, "enable"))
>> +		goto err;
>> +
>> +	if (!sse_check_state(event_id, SBI_SSE_STATE_ENABLED))
>> +		goto err;
>> +
>> +	ret = sbi_sse_hart_mask();
>> +	if (!sbiret_report_error(&ret, SBI_SUCCESS, "hart mask"))
>> +		goto err;
>> +
>> +	ret = sbi_sse_inject(event_id, current_thread_info()->hartid);
>> +	if (!sbiret_report_error(&ret, SBI_SUCCESS, "injection masked")) {
>> +		sbi_sse_hart_unmask();
>> +		goto err;
>> +	}
>> +
>> +	report(READ_ONCE(test_arg.done) == 0, "event masked not handled");
>> +
>> +	/*
>> +	 * When unmasking the SSE events, we expect it to be injected
>> +	 * immediately so a6 should be SBI_EXT_SBI_SSE_HART_UNMASK
>> +	 */
>> +	WRITE_ONCE(test_arg.expected_a6, SBI_EXT_SSE_HART_UNMASK);
>> +	ret = sbi_sse_hart_unmask();
>> +	if (!sbiret_report_error(&ret, SBI_SUCCESS, "hart unmask")) {
>> +		goto err;
>> +	}
>> +
>> +	report(READ_ONCE(test_arg.done) == 1, "event unmasked handled");
>> +	WRITE_ONCE(test_arg.done, 0);
>> +	WRITE_ONCE(test_arg.expected_a6, SBI_EXT_SSE_INJECT);
>> +
>> +	/* Set as oneshot and verify it is disabled */
>> +	ret = sbi_sse_disable(event_id);
>> +	if (!sbiret_report_error(&ret, SBI_SUCCESS, "Disable event")) {
>> +		/* Nothing we can really do here, event can not be disabled */
>> +		goto err;
>> +	}
>> +
>> +	value = SBI_SSE_ATTR_CONFIG_ONESHOT;
>> +	ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_CONFIG, 1, &value);
>> +	if (!sbiret_report_error(&ret, SBI_SUCCESS, "Set event attribute as ONESHOT"))
>> +		goto err;
>> +
>> +	ret = sbi_sse_enable(event_id);
>> +	if (!sbiret_report_error(&ret, SBI_SUCCESS, "Enable event"))
>> +		goto err;
>> +
>> +	ret = sbi_sse_inject(event_id, current_thread_info()->hartid);
>> +	if (!sbiret_report_error(&ret, SBI_SUCCESS, "second injection"))
>> +		goto err;
>> +
>> +	report(READ_ONCE(test_arg.done) == 1, "event handled");
>> +	WRITE_ONCE(test_arg.done, 0);
>> +
>> +	if (!sse_check_state(event_id, SBI_SSE_STATE_REGISTERED))
>> +		goto err;
>> +
>> +	/* Clear ONESHOT FLAG */
>> +	value = 0;
>> +	ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_CONFIG, 1, &value);
>> +	if (!sbiret_report_error(&ret, SBI_SUCCESS, "Clear CONFIG.ONESHOT flag"))
>> +		goto err;
>> +
>> +	ret = sbi_sse_unregister(event_id);
>> +	if (!sbiret_report_error(&ret, SBI_SUCCESS, "unregister"))
>> +		goto err;
>> +
>> +	sse_check_state(event_id, SBI_SSE_STATE_UNUSED);
>> +
>> +	err_ret = 0;
>> +err:
>> +	sse_free_stack(args.stack);
>> +	report_prefix_pop();
>> +
>> +	return err_ret;
>> +}
>> +
>> +struct sse_foreign_cpu_test_arg {
>> +	bool done;
>> +	unsigned int expected_cpu;
>> +	uint32_t event_id;
>> +};
>> +
>> +static void sse_foreign_cpu_handler(void *data, struct pt_regs *regs, unsigned int hartid)
>> +{
>> +	struct sse_foreign_cpu_test_arg *arg = data;
>> +	unsigned int expected_cpu;
>> +
>> +	/* For arg content to be visible */
>> +	smp_rmb();
>> +	expected_cpu = READ_ONCE(arg->expected_cpu);
>> +	report(expected_cpu == current_thread_info()->cpu,
>> +	       "Received event on CPU (%d), expected CPU (%d)", current_thread_info()->cpu,
>> +	       expected_cpu);
>> +
>> +	WRITE_ONCE(arg->done, true);
>> +	/* For arg update to be visible for other CPUs */
>> +	smp_wmb();
>> +}
>> +
>> +struct sse_local_per_cpu {
>> +	struct sbi_sse_handler_arg args;
>> +	struct sbiret ret;
>> +	struct sse_foreign_cpu_test_arg handler_arg;
>> +};
>> +
>> +static void sse_register_enable_local(void *data)
>> +{
>> +	struct sbiret ret;
>> +	struct sse_local_per_cpu *cpu_args = data;
>> +	struct sse_local_per_cpu *cpu_arg = &cpu_args[current_thread_info()->cpu];
>> +	uint32_t event_id = cpu_arg->handler_arg.event_id;
>> +
>> +	ret = sbi_sse_register(event_id, &cpu_arg->args);
>> +	WRITE_ONCE(cpu_arg->ret, ret);
>> +	if (ret.error)
>> +		return;
>> +
>> +	ret = sbi_sse_enable(event_id);
>> +	WRITE_ONCE(cpu_arg->ret, ret);
>> +}
>> +
>> +static void sbi_sse_disable_unregister_local(void *data)
>> +{
>> +	struct sbiret ret;
>> +	struct sse_local_per_cpu *cpu_args = data;
>> +	struct sse_local_per_cpu *cpu_arg = &cpu_args[current_thread_info()->cpu];
>> +	uint32_t event_id = cpu_arg->handler_arg.event_id;
>> +
>> +	ret = sbi_sse_disable(event_id);
>> +	WRITE_ONCE(cpu_arg->ret, ret);
>> +	if (ret.error)
>> +		return;
>> +
>> +	ret = sbi_sse_unregister(event_id);
>> +	WRITE_ONCE(cpu_arg->ret, ret);
>> +}
>> +
>> +static int sse_test_inject_local(uint32_t event_id)
>> +{
>> +	int cpu;
>> +	int err_ret = 1;
>> +	struct sbiret ret;
>> +	struct sse_local_per_cpu *cpu_args, *cpu_arg;
>> +	struct sse_foreign_cpu_test_arg *handler_arg;
>> +
>> +	cpu_args = calloc(NR_CPUS, sizeof(struct sbi_sse_handler_arg));
>> +
>> +	report_prefix_push("local_dispatch");
>> +	for_each_online_cpu(cpu) {
>> +		cpu_arg = &cpu_args[cpu];
>> +		cpu_arg->handler_arg.event_id = event_id;
>> +		cpu_arg->args.stack = sse_alloc_stack();
>> +		cpu_arg->args.handler = sse_foreign_cpu_handler;
>> +		cpu_arg->args.handler_data = (void *)&cpu_arg->handler_arg;
>> +	}
>> +
>> +	on_cpus(sse_register_enable_local, cpu_args);
>> +	for_each_online_cpu(cpu) {
>> +		cpu_arg = &cpu_args[cpu];
>> +		ret = cpu_arg->ret;
>> +		if (ret.error) {
>> +			report_fail("CPU failed to register/enable event: %ld", ret.error);
>> +			goto err;
>> +		}
>> +
>> +		handler_arg = &cpu_arg->handler_arg;
>> +		WRITE_ONCE(handler_arg->expected_cpu, cpu);
>> +		/* For handler_arg content to be visible for other CPUs */
>> +		smp_wmb();
>> +		ret = sbi_sse_inject(event_id, cpus[cpu].hartid);
>> +		if (ret.error) {
>> +			report_fail("CPU failed to inject event: %ld", ret.error);
>> +			goto err;
>> +		}
>> +	}
>> +
>> +	for_each_online_cpu(cpu) {
>> +		handler_arg = &cpu_args[cpu].handler_arg;
>> +		smp_rmb();
>> +		while (!READ_ONCE(handler_arg->done)) {
>> +			/* For handler_arg update to be visible */
>> +			smp_rmb();
>> +			cpu_relax();
>> +		}
>> +		WRITE_ONCE(handler_arg->done, false);
>> +	}
>> +
>> +	on_cpus(sbi_sse_disable_unregister_local, cpu_args);
>> +	for_each_online_cpu(cpu) {
>> +		cpu_arg = &cpu_args[cpu];
>> +		ret = READ_ONCE(cpu_arg->ret);
>> +		if (ret.error) {
>> +			report_fail("CPU failed to disable/unregister event: %ld", ret.error);
>> +			goto err;
>> +		}
>> +	}
>> +
>> +	err_ret = 0;
>> +err:
>> +	for_each_online_cpu(cpu) {
>> +		cpu_arg = &cpu_args[cpu];
>> +		sse_free_stack(cpu_arg->args.stack);
>> +	}
>> +
>> +	report_pass("local event dispatch on all CPUs");
>> +	report_prefix_pop();
>> +
>> +	return err_ret;
>> +}
>> +
>> +static int sse_test_inject_global(uint32_t event_id)
>> +{
>> +	unsigned long value;
>> +	struct sbiret ret;
>> +	unsigned int cpu;
>> +	uint64_t timeout;
>> +	int err_ret = 1;
>> +	struct sse_foreign_cpu_test_arg test_arg = {.event_id = event_id};
>> +	struct sbi_sse_handler_arg args = {
>> +		.handler = sse_foreign_cpu_handler,
>> +		.handler_data = (void *)&test_arg,
>> +		.stack = sse_alloc_stack(),
>> +	};
>> +	enum sbi_sse_state state;
>> +
>> +	report_prefix_push("global_dispatch");
>> +
>> +	ret = sbi_sse_register(event_id, &args);
>> +	if (!sbiret_report_error(&ret, SBI_SUCCESS, "Register event"))
>> +		goto err;
>> +
>> +	for_each_online_cpu(cpu) {
>> +		WRITE_ONCE(test_arg.expected_cpu, cpu);
>> +		/* For test_arg content to be visible for other CPUs */
>> +		smp_wmb();
>> +		value = cpu;
>> +		ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &value);
>> +		if (!sbiret_report_error(&ret, SBI_SUCCESS, "Set preferred hart"))
>> +			goto err;
>> +
>> +		ret = sbi_sse_enable(event_id);
>> +		if (!sbiret_report_error(&ret, SBI_SUCCESS, "Enable event"))
>> +			goto err;
>> +
>> +		ret = sbi_sse_inject(event_id, cpu);
>> +		if (!sbiret_report_error(&ret, SBI_SUCCESS, "Inject event"))
>> +			goto err;
>> +
>> +		smp_rmb();
>> +		while (!READ_ONCE(test_arg.done)) {
>> +			/* For shared test_arg structure */
>> +			smp_rmb();
>> +			cpu_relax();
>> +		}
>> +
>> +		WRITE_ONCE(test_arg.done, false);
>> +
>> +		timeout = timer_get_cycles() + usec_to_cycles((uint64_t)1000);
> 
> Should probably add an environment variable for this timeout and then use
> 1000 as the default when the variable isn't set.

Ack.

> 
>> +		/* Wait for event to be back in ENABLED state */
>> +		do {
>> +			ret = sse_event_get_state(event_id, &state);
>> +			if (ret.error)
>> +				goto err;
>> +			cpu_relax();
>> +		} while (state != SBI_SSE_STATE_ENABLED || timer_get_cycles() < timeout);
> 
>                                                         ^ should be &&; with ||, the loop never exits when the event never reaches ENABLED
>> +
>> +		if (!report(state == SBI_SSE_STATE_ENABLED,
>> +		    "wait for event to be in enable state"))
> 
> nit: no need to wrap this line, but if we do, then "wait" should be aligned under "state".
> 
>> +			goto err;
>> +
>> +		ret = sbi_sse_disable(event_id);
>> +		if (!sbiret_report_error(&ret, SBI_SUCCESS, "Disable event"))
>> +			goto err;
>> +
>> +		report_pass("Global event on CPU %d", cpu);
>> +	}
>> +
>> +	ret = sbi_sse_unregister(event_id);
>> +	sbiret_report_error(&ret, SBI_SUCCESS, "Unregister event");
>> +
>> +	err_ret = 0;
>> +
>> +err:
>> +	sse_free_stack(args.stack);
>> +	report_prefix_pop();
>> +
>> +	return err_ret;
>> +}
>> +
>> +struct priority_test_arg {
>> +	uint32_t event_id;
>> +	bool called;
>> +	u32 prio;
>> +	struct priority_test_arg *next_event_arg;
>> +	void (*check_func)(struct priority_test_arg *arg);
>> +};
>> +
>> +static void sse_hi_priority_test_handler(void *arg, struct pt_regs *regs,
>> +					 unsigned int hartid)
>> +{
>> +	struct priority_test_arg *targ = arg;
>> +	struct priority_test_arg *next = targ->next_event_arg;
>> +
>> +	targ->called = true;
>> +	if (next) {
>> +		sbi_sse_inject(next->event_id, current_thread_info()->hartid);
>> +
>> +		report(!sse_event_pending(next->event_id), "Higher priority event is not pending");
>> +		report(next->called, "Higher priority event was handled");
>> +	}
>> +}
>> +
>> +static void sse_low_priority_test_handler(void *arg, struct pt_regs *regs,
>> +					  unsigned int hartid)
>> +{
>> +	struct priority_test_arg *targ = arg;
>> +	struct priority_test_arg *next = targ->next_event_arg;
>> +
>> +	targ->called = true;
>> +
>> +	if (next) {
>> +		sbi_sse_inject(next->event_id, current_thread_info()->hartid);
>> +
>> +		report(sse_event_pending(next->event_id), "Lower priority event is pending");
>> +		report(!next->called, "Lower priority event %s was not handle before %s",
> 
> handled
> 
>> +		       sse_event_name(next->event_id), sse_event_name(targ->event_id));
>> +	}
>> +}
>> +
>> +static void sse_test_injection_priority_arg(struct priority_test_arg *in_args,
>> +					    unsigned int in_args_size,
>> +					    sbi_sse_handler_fn handler,
>> +					    const char *test_name)
>> +{
>> +	unsigned int i;
>> +	unsigned long value, uret;
>> +	struct sbiret ret;
>> +	uint32_t event_id;
>> +	struct priority_test_arg *arg;
>> +	unsigned int args_size = 0;
>> +	struct sbi_sse_handler_arg event_args[in_args_size];
>> +	struct priority_test_arg *args[in_args_size];
>> +	void *stack;
>> +	struct sbi_sse_handler_arg *event_arg;
>> +
>> +	report_prefix_push(test_name);
>> +
>> +	for (i = 0; i < in_args_size; i++) {
>> +		arg = &in_args[i];
>> +		event_id = arg->event_id;
>> +		if (!sse_event_can_inject(event_id))
>> +			continue;
>> +
>> +		args[args_size] = arg;
>> +		args_size++;
>> +		event_args->stack = 0;
>> +	}
>> +
>> +	if (!args_size) {
>> +		report_skip("No injectable events");
>> +		goto skip;
>> +	}
>> +
>> +	for (i = 0; i < args_size; i++) {
>> +		arg = args[i];
>> +		event_id = arg->event_id;
>> +		stack = sse_alloc_stack();
>> +
>> +		event_arg = &event_args[i];
>> +		event_arg->handler = handler;
>> +		event_arg->handler_data = (void *)arg;
>> +		event_arg->stack = stack;
>> +
>> +		if (i < (args_size - 1))
>> +			arg->next_event_arg = args[i + 1];
>> +		else
>> +			arg->next_event_arg = NULL;
>> +
>> +		/* Be sure global events are targeting the current hart */
>> +		if (sbi_sse_event_is_global(event_id)) {
>> +			uret = sse_global_event_set_current_hart(event_id);
>> +			if (uret)
>> +				goto err;
> 
> If we goto err from here or the next goto, then we'll also get disable and
> unregister failures reported.
> 
>> +		}
>> +
>> +		ret = sbi_sse_register(event_id, event_arg);
>> +		if (ret.error) {
>> +			sbiret_report_error(&ret, SBI_SUCCESS, "register event 0x%x", event_id);
>> +			goto err;
>> +		}
>> +
>> +		value = arg->prio;
>> +		ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value);
>> +		if (ret.error) {
>> +			sbiret_report_error(&ret, SBI_SUCCESS, "set event 0x%x priority", event_id);
>> +			goto err;
> 
> from here we'll get disable failures reported.
> 
>> +		}
>> +		ret = sbi_sse_enable(event_id);
>> +		if (ret.error) {
>> +			sbiret_report_error(&ret, SBI_SUCCESS, "enable event 0x%x", event_id);
>> +			goto err;
>> +		}
>> +	}
> 
> It looks like we need a complete
> 
> if (!a)
>    goto a_out;
> if (!b)
>    goto b_out;
> 
> /* ret = success */
> 
> b_out:
>   /* undo b */
> a_out:
>   /* undo a */
> 
> return ret;
> 
> type of model. Or just manage it directly.
> 
>  if (!a)
>   ...
>  if (!b) {
>   /* undo a */
>   ...
>  }
> 
>> +
>> +	/* Inject first event */
>> +	ret = sbi_sse_inject(args[0]->event_id, current_thread_info()->hartid);
>> +	sbiret_report_error(&ret, SBI_SUCCESS, "injection");
>> +
>> +err:
>> +	for (i = 0; i < args_size; i++) {
>> +		arg = args[i];
>> +		event_id = arg->event_id;
>> +
>> +		report(arg->called, "Event %s handler called", sse_event_name(arg->event_id));
>> +
>> +		ret = sbi_sse_disable(event_id);
>> +		if (ret.error)
>> +			sbiret_report_error(&ret, SBI_SUCCESS, "disable event 0x%x", event_id);
>> +
>> +		sbi_sse_unregister(event_id);
>> +		if (ret.error)
>> +			sbiret_report_error(&ret, SBI_SUCCESS, "unregister event 0x%x", event_id);
>> +
>> +		event_arg = &event_args[i];
>> +		if (event_arg->stack)
>> +			sse_free_stack(event_arg->stack);
>> +	}
>> +
>> +skip:
>> +	report_prefix_pop();
>> +}
>> +
>> +static struct priority_test_arg hi_prio_args[] = {
>> +	{.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE},
>> +	{.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE},
>> +	{.event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS},
>> +	{.event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS},
>> +	{.event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW},
>> +	{.event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS},
>> +	{.event_id = SBI_SSE_EVENT_LOCAL_DOUBLE_TRAP},
>> +	{.event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS},
>> +};
>> +
>> +static struct priority_test_arg low_prio_args[] = {
>> +	{.event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS},
>> +	{.event_id = SBI_SSE_EVENT_LOCAL_DOUBLE_TRAP},
>> +	{.event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS},
>> +	{.event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW},
>> +	{.event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS},
>> +	{.event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS},
>> +	{.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE},
>> +	{.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE},
>> +};
>> +
>> +static struct priority_test_arg prio_args[] = {
>> +	{.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE,		.prio = 5},
>> +	{.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE,		.prio = 10},
>> +	{.event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS,		.prio = 12},
>> +	{.event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW,		.prio = 15},
>> +	{.event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS,	.prio = 20},
>> +	{.event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS,		.prio = 22},
>> +	{.event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS,		.prio = 25},
>> +};
>> +
>> +static struct priority_test_arg same_prio_args[] = {
>> +	{.event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW,		.prio = 0},
>> +	{.event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS,		.prio = 0},
>> +	{.event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS,		.prio = 10},
>> +	{.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE,		.prio = 10},
>> +	{.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE,		.prio = 10},
>> +	{.event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS,	.prio = 20},
>> +	{.event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS,		.prio = 20},
>> +};
>> +
>> +static void sse_test_injection_priority(void)
>> +{
>> +	report_prefix_push("prio");
>> +
>> +	sse_test_injection_priority_arg(hi_prio_args, ARRAY_SIZE(hi_prio_args),
>> +					sse_hi_priority_test_handler, "high");
>> +
>> +	sse_test_injection_priority_arg(low_prio_args, ARRAY_SIZE(low_prio_args),
>> +					sse_low_priority_test_handler, "low");
>> +
>> +	sse_test_injection_priority_arg(prio_args, ARRAY_SIZE(prio_args),
>> +					sse_low_priority_test_handler, "changed");
>> +
>> +	sse_test_injection_priority_arg(same_prio_args, ARRAY_SIZE(same_prio_args),
>> +					sse_low_priority_test_handler, "same");
>> +
>> +	report_prefix_pop();
>> +}
>> +
>> +static void test_invalid_event_id(unsigned long event_id)
>> +{
>> +	struct sbiret ret;
>> +	unsigned long value = 0;
>> +
>> +	ret = sbi_sse_register_raw(event_id, (unsigned long) sbi_sse_entry, 0);
>> +	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
>> +			    "register event_id 0x%lx", event_id);
>> +
>> +	ret = sbi_sse_unregister(event_id);
>> +	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
>> +			    "unregister event_id 0x%lx", event_id);
>> +
>> +	ret = sbi_sse_enable(event_id);
>> +	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
>> +			    "enable event_id 0x%lx", event_id);
>> +
>> +	ret = sbi_sse_disable(event_id);
>> +	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
>> +			    "disable event_id 0x%lx", event_id);
>> +
>> +	ret = sbi_sse_inject(event_id, 0);
>> +	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
>> +			    "inject event_id 0x%lx", event_id);
>> +
>> +	ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value);
>> +	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
>> +			    "write attr event_id 0x%lx", event_id);
>> +
>> +	ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value);
>> +	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
>> +			    "read attr event_id 0x%lx", event_id);
>> +}
>> +
>> +static void sse_test_invalid_event_id(void)
>> +{
>> +	report_prefix_push("event_id");
>> +
>> +	test_invalid_event_id(SBI_SSE_EVENT_LOCAL_RESERVED_0_START);
>> +
>> +	report_prefix_pop();
>> +}
>> +
>> +static void sse_check_event_availability(uint32_t event_id, bool *can_inject, bool *supported)
>> +{
>> +	unsigned long status;
>> +	struct sbiret ret;
>> +
>> +	*can_inject = false;
>> +	*supported = false;
>> +
>> +	ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_STATUS, 1, &status);
>> +	if (ret.error != SBI_SUCCESS && ret.error != SBI_ERR_NOT_SUPPORTED) {
>> +		report_fail("Get event status != SBI_SUCCESS && != SBI_ERR_NOT_SUPPORTED: %ld", ret.error);
>> +		return;
>> +	}
>> +	if (ret.error == SBI_ERR_NOT_SUPPORTED)
>> +		return;
>> +
>> +	if (!ret.error)
>> +		*supported = true;
>> +
>> +	*can_inject = (status >> SBI_SSE_ATTR_STATUS_INJECT_OFFSET) & 1;
> 
> this should also be under the 'if (!ret.error)'

the 'if (!ret.error)' is actually useless since at that point ret.error
is == 0 (i.e. guaranteed by the two 'if's above).
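For illustration, the resulting control flow boils down to this
self-contained model (the error codes and the status bit offset below
are stand-ins, not the real SBI SSE definitions -- untested sketch):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in values only; the real ones come from the SBI SSE headers. */
#define SBI_SUCCESS			0
#define SBI_ERR_NOT_SUPPORTED		-2
#define SSE_STATUS_INJECT_OFFSET	1

/*
 * Model of sse_check_event_availability() with the redundant
 * 'if (!ret.error)' dropped: every early return leaves both flags
 * false, so reaching the tail implies error == SBI_SUCCESS.
 */
static void check_availability(long error, unsigned long status,
			       bool *can_inject, bool *supported)
{
	*can_inject = false;
	*supported = false;

	if (error != SBI_SUCCESS && error != SBI_ERR_NOT_SUPPORTED)
		return;		/* report_fail() in the real test */
	if (error == SBI_ERR_NOT_SUPPORTED)
		return;

	*supported = true;
	*can_inject = (status >> SSE_STATUS_INJECT_OFFSET) & 1;
}
```

i.e. the unconditional assignment at the end is fine, since the
"unsupported" paths have already returned.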

> 
>> +}
>> +
>> +static void sse_secondary_boot_and_unmask(void *data)
>> +{
>> +	sbi_sse_hart_unmask();
>> +}
>> +
>> +static void sse_check_mask(void)
>> +{
>> +	struct sbiret ret;
>> +
>> +	/* Upon boot, events are masked; check that */
>> +	ret = sbi_sse_hart_mask();
>> +	sbiret_report_error(&ret, SBI_ERR_ALREADY_STOPPED, "hart mask at boot time");
>> +
>> +	ret = sbi_sse_hart_unmask();
>> +	sbiret_report_error(&ret, SBI_SUCCESS, "hart unmask");
>> +	ret = sbi_sse_hart_unmask();
>> +	sbiret_report_error(&ret, SBI_ERR_ALREADY_STARTED, "hart unmask twice error");
>> +
>> +	ret = sbi_sse_hart_mask();
>> +	sbiret_report_error(&ret, SBI_SUCCESS, "hart mask");
>> +	ret = sbi_sse_hart_mask();
>> +	sbiret_report_error(&ret, SBI_ERR_ALREADY_STOPPED, "hart mask twice");
>> +}
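
FWIW, the mask/unmask semantics checked above amount to a small
two-state machine per hart; roughly (error values are illustrative
stand-ins for the SBI definitions, sketch only):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in error values; the real ones come from the SBI headers. */
#define SBI_SUCCESS			0
#define SBI_ERR_ALREADY_STARTED		-7
#define SBI_ERR_ALREADY_STOPPED		-8

/*
 * Per-hart model: SSE events start masked at boot; unmask/mask
 * toggle the state and error out when the hart is already in the
 * requested state.
 */
static bool unmasked;	/* false at "boot" */

static long hart_unmask(void)
{
	if (unmasked)
		return SBI_ERR_ALREADY_STARTED;
	unmasked = true;
	return SBI_SUCCESS;
}

static long hart_mask(void)
{
	if (!unmasked)
		return SBI_ERR_ALREADY_STOPPED;
	unmasked = false;
	return SBI_SUCCESS;
}
```

sse_check_mask() above is exactly a walk through this machine:
mask-at-boot errors, unmask succeeds once then errors, mask succeeds
once then errors.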
>> +
>> +static int run_inject_test(struct sse_event_info *info)
>> +{
>> +	unsigned long event_id = info->event_id;
>> +
>> +	if (!info->can_inject) {
>> +		report_skip("Event does not support injection, skipping injection tests");
>> +		return 0;
>> +	}
>> +
>> +	if (sse_test_inject_simple(event_id))
>> +		return 1;
>> +
>> +	if (sbi_sse_event_is_global(event_id))
>> +		return sse_test_inject_global(event_id);
>> +	else
>> +		return sse_test_inject_local(event_id);
>> +}
>> +
>> +void check_sse(void)
>> +{
>> +	struct sse_event_info *info;
>> +	unsigned long i, event_id;
>> +	bool supported;
>> +
>> +	report_prefix_push("sse");
>> +
>> +	if (!sbi_probe(SBI_EXT_SSE)) {
>> +		report_skip("extension not available");
>> +		report_prefix_pop();
>> +		return;
>> +	}
>> +
>> +	sse_check_mask();
>> +
>> +	/*
>> +	 * Dummy wakeup of all processors: some of them will be targeted
>> +	 * by global events without going through the wakeup call, and
>> +	 * this also unmasks SSE events on all harts.
>> +	 */
>> +	on_cpus(sse_secondary_boot_and_unmask, NULL);
>> +
>> +	sse_test_invalid_event_id();
>> +
>> +	for (i = 0; i < ARRAY_SIZE(sse_event_infos); i++) {
>> +		info = &sse_event_infos[i];
>> +		event_id = info->event_id;
>> +		report_prefix_push(info->name);
>> +		sse_check_event_availability(event_id, &info->can_inject, &supported);
>> +		if (!supported) {
>> +			report_skip("Event is not supported, skipping tests");
>> +			report_prefix_pop();
>> +			continue;
>> +		}
>> +
>> +		sse_test_attrs(event_id);
>> +		sse_test_register_error(event_id);
>> +
>> +		if (run_inject_test(info)) {
>> +			report_skip("Event test failed, event state unreliable");
>> +			info->can_inject = false;
>> +		}
>> +
>> +		report_prefix_pop();
>> +	}
>> +
>> +	sse_test_injection_priority();
>> +
>> +	report_prefix_pop();
>> +}
>> diff --git a/riscv/sbi.c b/riscv/sbi.c
>> index 0404bb81..478cb35d 100644
>> --- a/riscv/sbi.c
>> +++ b/riscv/sbi.c
>> @@ -32,6 +32,7 @@
>>  
>>  #define	HIGH_ADDR_BOUNDARY	((phys_addr_t)1 << 32)
>>  
>> +void check_sse(void);
>>  void check_fwft(void);
>>  
>>  static long __labs(long a)
>> @@ -1567,6 +1568,7 @@ int main(int argc, char **argv)
>>  	check_hsm();
>>  	check_dbcn();
>>  	check_susp();
>> +	check_sse();
>>  	check_fwft();
>>  
>>  	return report_summary();
>> -- 
>> 2.47.2
>>
> 
> Thanks,
> drew


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [kvm-unit-tests PATCH v8 6/6] riscv: sbi: Add SSE extension tests
  2025-03-14  9:29     ` Clément Léger
@ 2025-03-14 10:09       ` Andrew Jones
  0 siblings, 0 replies; 13+ messages in thread
From: Andrew Jones @ 2025-03-14 10:09 UTC (permalink / raw)
  To: Clément Léger; +Cc: kvm, kvm-riscv, Anup Patel, Atish Patra

On Fri, Mar 14, 2025 at 10:29:04AM +0100, Clément Léger wrote:
> 
> 
> On 12/03/2025 18:51, Andrew Jones wrote:
...
> >> +	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Set invalid flags bit 0x%lx value error",
> >> +			    flags);
> >> +
> > 
> > no need to wrap
> 
That's actually > 100 columns if unwrapped.

True, even more than just 1 or 2 chars, so we can leave it wrapped. But
I'd still merge it with the line sticking out too :-)

Thanks,
drew

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2025-03-14 10:09 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-03-07 16:15 [kvm-unit-tests PATCH v8 0/6] riscv: add SBI SSE extension tests Clément Léger
2025-03-07 16:15 ` [kvm-unit-tests PATCH v8 1/6] kbuild: Allow multiple asm-offsets file to be generated Clément Léger
2025-03-07 16:15 ` [kvm-unit-tests PATCH v8 2/6] riscv: Set .aux.o files as .PRECIOUS Clément Léger
2025-03-12 16:30   ` Andrew Jones
2025-03-12 17:53     ` Andrew Jones
2025-03-07 16:15 ` [kvm-unit-tests PATCH v8 3/6] riscv: Use asm-offsets to generate SBI_EXT_HSM values Clément Léger
2025-03-07 16:15 ` [kvm-unit-tests PATCH v8 4/6] riscv: lib: Add SBI SSE extension definitions Clément Léger
2025-03-07 16:15 ` [kvm-unit-tests PATCH v8 5/6] lib: riscv: Add SBI SSE support Clément Léger
2025-03-12 16:29   ` Andrew Jones
2025-03-07 16:15 ` [kvm-unit-tests PATCH v8 6/6] riscv: sbi: Add SSE extension tests Clément Léger
2025-03-12 17:51   ` Andrew Jones
2025-03-14  9:29     ` Clément Léger
2025-03-14 10:09       ` Andrew Jones
