* [kvm-unit-tests PATCH v7 0/6] riscv: add SBI SSE extension tests
@ 2025-02-14 11:44 Clément Léger
2025-02-14 11:44 ` [kvm-unit-tests PATCH v7 1/6] kbuild: Allow multiple asm-offsets file to be generated Clément Léger
` (5 more replies)
0 siblings, 6 replies; 14+ messages in thread
From: Clément Léger @ 2025-02-14 11:44 UTC (permalink / raw)
To: kvm, kvm-riscv
Cc: Clément Léger, Andrew Jones, Anup Patel, Atish Patra
This series adds individual tests for the SBI SSE extension as well as
the infrastructure needed for SSE support. It also adds test-specific
asm-offsets generation so tests can use custom OFFSET() and DEFINE()
entries from the test directory.
---
V7:
- Test ids/attributes/attributes count > 32 bits
- Rename all SSE functions to sbi_sse_*
- Use event_id instead of event/evt
- Factorize read/write test
- Use virt_to_phys() for attributes read/write.
- Extensively use sbiret_report_error()
- Change check function return values to bool.
- Added an assert that the stack size is less than or equal to PAGE_SIZE
- Use an env variable for the maximum hart ID
- Check that individual read from attributes matches the multiple
attributes read.
- Added multiple attributes write at once
- Used READ_ONCE/WRITE_ONCE
- Inject all local events at once rather than looping for each core.
- Split test_arg for local_dispatch test so that all CPUs can run at
once.
- Move SSE entry and generic code to lib/riscv for other tests
- Fix unmask/mask state checking
V6:
- Add missing $(generated-file) dependencies for "-deps" objects
- Split the SSE entry and all SSE core functions out of sbi-asm.S into
sse-asm.S since they will be useful for other tests as well (dbltrp).
V5:
- Update event ranges based on latest spec
- Rename asm-offset-test.c to sbi-asm-offset.c
V4:
- Fix typo sbi_ext_ss_fid -> sbi_ext_sse_fid
- Add proper asm-offset generation for tests
- Move SSE specific file from lib/riscv to riscv/
V3:
- Add -deps variable for test specific dependencies
- Fix formatting errors/typo in sbi.h
- Add missing double trap event
- Alphabetize sbi-sse.c includes
- Fix a6 content after unmasking event
- Add SSE HART_MASK/UNMASK test
- Use mv instead of move
- Move sbi_check_sse() definition into sbi.c
- Remove sbi_sse test from unittests.cfg
V2:
- Rebased on origin/master and integrate it into sbi.c tests
Clément Léger (6):
kbuild: Allow multiple asm-offsets file to be generated
riscv: Set .aux.o files as .PRECIOUS
riscv: Use asm-offsets to generate SBI_EXT_HSM values
riscv: lib: Add SBI SSE extension definitions
lib: riscv: Add SBI SSE support
riscv: sbi: Add SSE extension tests
scripts/asm-offsets.mak | 22 +-
riscv/Makefile | 6 +-
lib/riscv/asm/csr.h | 1 +
lib/riscv/asm/sbi-sse.h | 48 ++
lib/riscv/asm/sbi.h | 106 +++-
lib/riscv/sbi-sse-asm.S | 103 ++++
lib/riscv/asm-offsets.c | 9 +
lib/riscv/sbi-sse.c | 84 ++++
riscv/sbi-asm.S | 6 +-
riscv/sbi-asm-offsets.c | 11 +
riscv/sbi-sse.c | 1054 +++++++++++++++++++++++++++++++++++++++
riscv/sbi.c | 2 +
riscv/.gitignore | 1 +
13 files changed, 1442 insertions(+), 11 deletions(-)
create mode 100644 lib/riscv/asm/sbi-sse.h
create mode 100644 lib/riscv/sbi-sse-asm.S
create mode 100644 lib/riscv/sbi-sse.c
create mode 100644 riscv/sbi-asm-offsets.c
create mode 100644 riscv/sbi-sse.c
create mode 100644 riscv/.gitignore
--
2.47.2
^ permalink raw reply [flat|nested] 14+ messages in thread
* [kvm-unit-tests PATCH v7 1/6] kbuild: Allow multiple asm-offsets file to be generated
2025-02-14 11:44 [kvm-unit-tests PATCH v7 0/6] riscv: add SBI SSE extension tests Clément Léger
@ 2025-02-14 11:44 ` Clément Léger
2025-02-14 11:44 ` [kvm-unit-tests PATCH v7 2/6] riscv: Set .aux.o files as .PRECIOUS Clément Léger
` (4 subsequent siblings)
5 siblings, 0 replies; 14+ messages in thread
From: Clément Léger @ 2025-02-14 11:44 UTC (permalink / raw)
To: kvm, kvm-riscv
Cc: Clément Léger, Andrew Jones, Anup Patel, Atish Patra,
Andrew Jones
In order to allow multiple asm-offsets files to be generated, the
include guards need to differ between these files. Add an
asm_offset_name makefile macro to obtain an uppercase name matching the
original asm-offsets file.
Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Andrew Jones <andrew.jones@linux.dev>
---
scripts/asm-offsets.mak | 22 +++++++++++++++-------
1 file changed, 15 insertions(+), 7 deletions(-)
diff --git a/scripts/asm-offsets.mak b/scripts/asm-offsets.mak
index 7b64162d..a5fdbf5d 100644
--- a/scripts/asm-offsets.mak
+++ b/scripts/asm-offsets.mak
@@ -15,10 +15,14 @@ define sed-y
s:->::; p;}'
endef
+define asm_offset_name
+ $(shell echo $(notdir $(1)) | tr [:lower:]- [:upper:]_)
+endef
+
define make_asm_offsets
(set -e; \
- echo "#ifndef __ASM_OFFSETS_H__"; \
- echo "#define __ASM_OFFSETS_H__"; \
+ echo "#ifndef __$(strip $(asm_offset_name))_H__"; \
+ echo "#define __$(strip $(asm_offset_name))_H__"; \
echo "/*"; \
echo " * Generated file. DO NOT MODIFY."; \
echo " *"; \
@@ -29,12 +33,16 @@ define make_asm_offsets
echo "#endif" ) > $@
endef
-$(asm-offsets:.h=.s): $(asm-offsets:.h=.c)
- $(CC) $(CFLAGS) -fverbose-asm -S -o $@ $<
+define gen_asm_offsets_rules
+$(1).s: $(1).c
+ $(CC) $(CFLAGS) -fverbose-asm -S -o $$@ $$<
+
+$(1).h: $(1).s
+ $$(call make_asm_offsets,$(1))
+ cp -f $$@ lib/generated/
+endef
-$(asm-offsets): $(asm-offsets:.h=.s)
- $(call make_asm_offsets)
- cp -f $(asm-offsets) lib/generated/
+$(foreach o,$(asm-offsets),$(eval $(call gen_asm_offsets_rules, $(o:.h=))))
OBJDIRS += lib/generated
--
2.47.2
* [kvm-unit-tests PATCH v7 2/6] riscv: Set .aux.o files as .PRECIOUS
2025-02-14 11:44 [kvm-unit-tests PATCH v7 0/6] riscv: add SBI SSE extension tests Clément Léger
2025-02-14 11:44 ` [kvm-unit-tests PATCH v7 1/6] kbuild: Allow multiple asm-offsets file to be generated Clément Léger
@ 2025-02-14 11:44 ` Clément Léger
2025-02-27 15:36 ` Andrew Jones
2025-02-14 11:44 ` [kvm-unit-tests PATCH v7 3/6] riscv: Use asm-offsets to generate SBI_EXT_HSM values Clément Léger
` (3 subsequent siblings)
5 siblings, 1 reply; 14+ messages in thread
From: Clément Léger @ 2025-02-14 11:44 UTC (permalink / raw)
To: kvm, kvm-riscv
Cc: Clément Léger, Andrew Jones, Anup Patel, Atish Patra
When compiling, we need to keep the .aux.o files or they will be removed
after compilation, which causes dependent files to be recompiled.
Mark these files as .PRECIOUS to keep them.
Signed-off-by: Clément Léger <cleger@rivosinc.com>
---
riscv/Makefile | 1 +
1 file changed, 1 insertion(+)
diff --git a/riscv/Makefile b/riscv/Makefile
index 52718f3f..ae9cf02a 100644
--- a/riscv/Makefile
+++ b/riscv/Makefile
@@ -90,6 +90,7 @@ CFLAGS += -I $(SRCDIR)/lib -I $(SRCDIR)/lib/libfdt -I lib -I $(SRCDIR)/riscv
asm-offsets = lib/riscv/asm-offsets.h
include $(SRCDIR)/scripts/asm-offsets.mak
+.PRECIOUS: %.aux.o
%.aux.o: $(SRCDIR)/lib/auxinfo.c
$(CC) $(CFLAGS) -c -o $@ $< \
-DPROGNAME=\"$(notdir $(@:.aux.o=.$(exe)))\" -DAUXFLAGS=$(AUXFLAGS)
--
2.47.2
* [kvm-unit-tests PATCH v7 3/6] riscv: Use asm-offsets to generate SBI_EXT_HSM values
2025-02-14 11:44 [kvm-unit-tests PATCH v7 0/6] riscv: add SBI SSE extension tests Clément Léger
2025-02-14 11:44 ` [kvm-unit-tests PATCH v7 1/6] kbuild: Allow multiple asm-offsets file to be generated Clément Léger
2025-02-14 11:44 ` [kvm-unit-tests PATCH v7 2/6] riscv: Set .aux.o files as .PRECIOUS Clément Léger
@ 2025-02-14 11:44 ` Clément Léger
2025-02-14 11:44 ` [kvm-unit-tests PATCH v7 4/6] riscv: lib: Add SBI SSE extension definitions Clément Léger
` (2 subsequent siblings)
5 siblings, 0 replies; 14+ messages in thread
From: Clément Léger @ 2025-02-14 11:44 UTC (permalink / raw)
To: kvm, kvm-riscv
Cc: Clément Léger, Andrew Jones, Anup Patel, Atish Patra,
Andrew Jones
Replace hardcoded values with generated ones using sbi-asm-offsets. This
allows ASM_SBI_EXT_HSM and ASM_SBI_EXT_HSM_HART_STOP to be used directly
in assembly.
Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Andrew Jones <andrew.jones@linux.dev>
---
riscv/Makefile | 2 +-
riscv/sbi-asm.S | 6 ++++--
riscv/sbi-asm-offsets.c | 11 +++++++++++
riscv/.gitignore | 1 +
4 files changed, 17 insertions(+), 3 deletions(-)
create mode 100644 riscv/sbi-asm-offsets.c
create mode 100644 riscv/.gitignore
diff --git a/riscv/Makefile b/riscv/Makefile
index ae9cf02a..02d2ac39 100644
--- a/riscv/Makefile
+++ b/riscv/Makefile
@@ -87,7 +87,7 @@ CFLAGS += -ffreestanding
CFLAGS += -O2
CFLAGS += -I $(SRCDIR)/lib -I $(SRCDIR)/lib/libfdt -I lib -I $(SRCDIR)/riscv
-asm-offsets = lib/riscv/asm-offsets.h
+asm-offsets = lib/riscv/asm-offsets.h riscv/sbi-asm-offsets.h
include $(SRCDIR)/scripts/asm-offsets.mak
.PRECIOUS: %.aux.o
diff --git a/riscv/sbi-asm.S b/riscv/sbi-asm.S
index 923c2cec..b9c2696f 100644
--- a/riscv/sbi-asm.S
+++ b/riscv/sbi-asm.S
@@ -7,6 +7,8 @@
#define __ASSEMBLY__
#include <asm/asm.h>
#include <asm/csr.h>
+#include <asm/asm-offsets.h>
+#include <generated/sbi-asm-offsets.h>
#include "sbi-tests.h"
@@ -58,8 +60,8 @@ sbi_hsm_check:
7: lb t0, 0(t1)
pause
beqz t0, 7b
- li a7, 0x48534d /* SBI_EXT_HSM */
- li a6, 1 /* SBI_EXT_HSM_HART_STOP */
+ li a7, ASM_SBI_EXT_HSM
+ li a6, ASM_SBI_EXT_HSM_HART_STOP
ecall
8: pause
j 8b
diff --git a/riscv/sbi-asm-offsets.c b/riscv/sbi-asm-offsets.c
new file mode 100644
index 00000000..bd37b6a2
--- /dev/null
+++ b/riscv/sbi-asm-offsets.c
@@ -0,0 +1,11 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#include <kbuild.h>
+#include <asm/sbi.h>
+
+int main(void)
+{
+ DEFINE(ASM_SBI_EXT_HSM, SBI_EXT_HSM);
+ DEFINE(ASM_SBI_EXT_HSM_HART_STOP, SBI_EXT_HSM_HART_STOP);
+
+ return 0;
+}
diff --git a/riscv/.gitignore b/riscv/.gitignore
new file mode 100644
index 00000000..0a8c5a36
--- /dev/null
+++ b/riscv/.gitignore
@@ -0,0 +1 @@
+/*-asm-offsets.[hs]
--
2.47.2
* [kvm-unit-tests PATCH v7 4/6] riscv: lib: Add SBI SSE extension definitions
2025-02-14 11:44 [kvm-unit-tests PATCH v7 0/6] riscv: add SBI SSE extension tests Clément Léger
` (2 preceding siblings ...)
2025-02-14 11:44 ` [kvm-unit-tests PATCH v7 3/6] riscv: Use asm-offsets to generate SBI_EXT_HSM values Clément Léger
@ 2025-02-14 11:44 ` Clément Léger
2025-02-14 11:44 ` [kvm-unit-tests PATCH v7 5/6] lib: riscv: Add SBI SSE support Clément Léger
2025-02-14 11:44 ` [kvm-unit-tests PATCH v7 6/6] riscv: sbi: Add SSE extension tests Clément Léger
5 siblings, 0 replies; 14+ messages in thread
From: Clément Léger @ 2025-02-14 11:44 UTC (permalink / raw)
To: kvm, kvm-riscv
Cc: Clément Léger, Andrew Jones, Anup Patel, Atish Patra,
Andrew Jones
Add SBI SSE extension definitions in sbi.h
Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Andrew Jones <andrew.jones@linux.dev>
---
lib/riscv/asm/sbi.h | 106 +++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 105 insertions(+), 1 deletion(-)
diff --git a/lib/riscv/asm/sbi.h b/lib/riscv/asm/sbi.h
index 397400f2..c70cad34 100644
--- a/lib/riscv/asm/sbi.h
+++ b/lib/riscv/asm/sbi.h
@@ -30,6 +30,7 @@ enum sbi_ext_id {
SBI_EXT_DBCN = 0x4442434E,
SBI_EXT_SUSP = 0x53555350,
SBI_EXT_FWFT = 0x46574654,
+ SBI_EXT_SSE = 0x535345,
};
enum sbi_ext_base_fid {
@@ -78,7 +79,6 @@ enum sbi_ext_dbcn_fid {
SBI_EXT_DBCN_CONSOLE_WRITE_BYTE,
};
-
enum sbi_ext_fwft_fid {
SBI_EXT_FWFT_SET = 0,
SBI_EXT_FWFT_GET,
@@ -105,6 +105,110 @@ enum sbi_ext_fwft_fid {
#define SBI_FWFT_SET_FLAG_LOCK BIT(0)
+enum sbi_ext_sse_fid {
+ SBI_EXT_SSE_READ_ATTRS = 0,
+ SBI_EXT_SSE_WRITE_ATTRS,
+ SBI_EXT_SSE_REGISTER,
+ SBI_EXT_SSE_UNREGISTER,
+ SBI_EXT_SSE_ENABLE,
+ SBI_EXT_SSE_DISABLE,
+ SBI_EXT_SSE_COMPLETE,
+ SBI_EXT_SSE_INJECT,
+ SBI_EXT_SSE_HART_UNMASK,
+ SBI_EXT_SSE_HART_MASK,
+};
+
+/* SBI SSE Event Attributes. */
+enum sbi_sse_attr_id {
+ SBI_SSE_ATTR_STATUS = 0x00000000,
+ SBI_SSE_ATTR_PRIORITY = 0x00000001,
+ SBI_SSE_ATTR_CONFIG = 0x00000002,
+ SBI_SSE_ATTR_PREFERRED_HART = 0x00000003,
+ SBI_SSE_ATTR_ENTRY_PC = 0x00000004,
+ SBI_SSE_ATTR_ENTRY_ARG = 0x00000005,
+ SBI_SSE_ATTR_INTERRUPTED_SEPC = 0x00000006,
+ SBI_SSE_ATTR_INTERRUPTED_FLAGS = 0x00000007,
+ SBI_SSE_ATTR_INTERRUPTED_A6 = 0x00000008,
+ SBI_SSE_ATTR_INTERRUPTED_A7 = 0x00000009,
+};
+
+#define SBI_SSE_ATTR_STATUS_STATE_OFFSET 0
+#define SBI_SSE_ATTR_STATUS_STATE_MASK 0x3
+#define SBI_SSE_ATTR_STATUS_PENDING_OFFSET 2
+#define SBI_SSE_ATTR_STATUS_INJECT_OFFSET 3
+
+#define SBI_SSE_ATTR_CONFIG_ONESHOT BIT(0)
+
+#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPP BIT(0)
+#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPIE BIT(1)
+#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_HSTATUS_SPV BIT(2)
+#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_HSTATUS_SPVP BIT(3)
+#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPELP BIT(4)
+#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SDT BIT(5)
+
+enum sbi_sse_state {
+ SBI_SSE_STATE_UNUSED = 0,
+ SBI_SSE_STATE_REGISTERED = 1,
+ SBI_SSE_STATE_ENABLED = 2,
+ SBI_SSE_STATE_RUNNING = 3,
+};
+
+/* SBI SSE Event IDs. */
+/* Range 0x00000000 - 0x0000ffff */
+#define SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS 0x00000000
+#define SBI_SSE_EVENT_LOCAL_DOUBLE_TRAP 0x00000001
+#define SBI_SSE_EVENT_LOCAL_RESERVED_0_START 0x00000002
+#define SBI_SSE_EVENT_LOCAL_RESERVED_0_END 0x00003fff
+#define SBI_SSE_EVENT_LOCAL_PLAT_0_START 0x00004000
+#define SBI_SSE_EVENT_LOCAL_PLAT_0_END 0x00007fff
+
+#define SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS 0x00008000
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_0_START 0x00008001
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_0_END 0x0000bfff
+#define SBI_SSE_EVENT_GLOBAL_PLAT_0_START 0x0000c000
+#define SBI_SSE_EVENT_GLOBAL_PLAT_0_END 0x0000ffff
+
+/* Range 0x00010000 - 0x0001ffff */
+#define SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW 0x00010000
+#define SBI_SSE_EVENT_LOCAL_RESERVED_1_START 0x00010001
+#define SBI_SSE_EVENT_LOCAL_RESERVED_1_END 0x00013fff
+#define SBI_SSE_EVENT_LOCAL_PLAT_1_START 0x00014000
+#define SBI_SSE_EVENT_LOCAL_PLAT_1_END 0x00017fff
+
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_1_START 0x00018000
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_1_END 0x0001bfff
+#define SBI_SSE_EVENT_GLOBAL_PLAT_1_START 0x0001c000
+#define SBI_SSE_EVENT_GLOBAL_PLAT_1_END 0x0001ffff
+
+/* Range 0x00100000 - 0x0010ffff */
+#define SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS 0x00100000
+#define SBI_SSE_EVENT_LOCAL_RESERVED_2_START 0x00100001
+#define SBI_SSE_EVENT_LOCAL_RESERVED_2_END 0x00103fff
+#define SBI_SSE_EVENT_LOCAL_PLAT_2_START 0x00104000
+#define SBI_SSE_EVENT_LOCAL_PLAT_2_END 0x00107fff
+
+#define SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS 0x00108000
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_2_START 0x00108001
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_2_END 0x0010bfff
+#define SBI_SSE_EVENT_GLOBAL_PLAT_2_START 0x0010c000
+#define SBI_SSE_EVENT_GLOBAL_PLAT_2_END 0x0010ffff
+
+/* Range 0xffff0000 - 0xffffffff */
+#define SBI_SSE_EVENT_LOCAL_SOFTWARE 0xffff0000
+#define SBI_SSE_EVENT_LOCAL_RESERVED_3_START 0xffff0001
+#define SBI_SSE_EVENT_LOCAL_RESERVED_3_END 0xffff3fff
+#define SBI_SSE_EVENT_LOCAL_PLAT_3_START 0xffff4000
+#define SBI_SSE_EVENT_LOCAL_PLAT_3_END 0xffff7fff
+
+#define SBI_SSE_EVENT_GLOBAL_SOFTWARE 0xffff8000
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_3_START 0xffff8001
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_3_END 0xffffbfff
+#define SBI_SSE_EVENT_GLOBAL_PLAT_3_START 0xffffc000
+#define SBI_SSE_EVENT_GLOBAL_PLAT_3_END 0xffffffff
+
+#define SBI_SSE_EVENT_PLATFORM_BIT BIT(14)
+#define SBI_SSE_EVENT_GLOBAL_BIT BIT(15)
+
struct sbiret {
long error;
long value;
--
2.47.2
* [kvm-unit-tests PATCH v7 5/6] lib: riscv: Add SBI SSE support
2025-02-14 11:44 [kvm-unit-tests PATCH v7 0/6] riscv: add SBI SSE extension tests Clément Léger
` (3 preceding siblings ...)
2025-02-14 11:44 ` [kvm-unit-tests PATCH v7 4/6] riscv: lib: Add SBI SSE extension definitions Clément Léger
@ 2025-02-14 11:44 ` Clément Léger
2025-02-27 16:03 ` Andrew Jones
2025-02-14 11:44 ` [kvm-unit-tests PATCH v7 6/6] riscv: sbi: Add SSE extension tests Clément Léger
5 siblings, 1 reply; 14+ messages in thread
From: Clément Léger @ 2025-02-14 11:44 UTC (permalink / raw)
To: kvm, kvm-riscv
Cc: Clément Léger, Andrew Jones, Anup Patel, Atish Patra
Add support for registering and handling SSE events. This will be used
by the sbi tests as well as the upcoming double trap tests.
Signed-off-by: Clément Léger <cleger@rivosinc.com>
---
riscv/Makefile | 2 +
lib/riscv/asm/csr.h | 1 +
lib/riscv/asm/sbi-sse.h | 48 +++++++++++++++++++
lib/riscv/sbi-sse-asm.S | 103 ++++++++++++++++++++++++++++++++++++++++
lib/riscv/asm-offsets.c | 9 ++++
lib/riscv/sbi-sse.c | 84 ++++++++++++++++++++++++++++++++
6 files changed, 247 insertions(+)
create mode 100644 lib/riscv/asm/sbi-sse.h
create mode 100644 lib/riscv/sbi-sse-asm.S
create mode 100644 lib/riscv/sbi-sse.c
diff --git a/riscv/Makefile b/riscv/Makefile
index 02d2ac39..ed590ede 100644
--- a/riscv/Makefile
+++ b/riscv/Makefile
@@ -43,6 +43,8 @@ cflatobjs += lib/riscv/setup.o
cflatobjs += lib/riscv/smp.o
cflatobjs += lib/riscv/stack.o
cflatobjs += lib/riscv/timer.o
+cflatobjs += lib/riscv/sbi-sse-asm.o
+cflatobjs += lib/riscv/sbi-sse.o
ifeq ($(ARCH),riscv32)
cflatobjs += lib/ldiv32.o
endif
diff --git a/lib/riscv/asm/csr.h b/lib/riscv/asm/csr.h
index 16f5ddd7..389d9850 100644
--- a/lib/riscv/asm/csr.h
+++ b/lib/riscv/asm/csr.h
@@ -17,6 +17,7 @@
#define CSR_TIME 0xc01
#define SR_SIE _AC(0x00000002, UL)
+#define SR_SPP _AC(0x00000100, UL)
/* Exception cause high bit - is an interrupt if set */
#define CAUSE_IRQ_FLAG (_AC(1, UL) << (__riscv_xlen - 1))
diff --git a/lib/riscv/asm/sbi-sse.h b/lib/riscv/asm/sbi-sse.h
new file mode 100644
index 00000000..ba18ce27
--- /dev/null
+++ b/lib/riscv/asm/sbi-sse.h
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * SBI SSE library interface
+ *
+ * Copyright (C) 2025, Rivos Inc., Clément Léger <cleger@rivosinc.com>
+ */
+#ifndef _RISCV_SSE_H_
+#define _RISCV_SSE_H_
+
+#include <asm/sbi.h>
+
+typedef void (*sbi_sse_handler_fn)(void *data, struct pt_regs *regs, unsigned int hartid);
+
+struct sbi_sse_handler_arg {
+ unsigned long reg_tmp;
+ sbi_sse_handler_fn handler;
+ void *handler_data;
+ void *stack;
+};
+
+extern void sbi_sse_entry(void);
+
+static inline bool sbi_sse_event_is_global(uint32_t event_id)
+{
+ return !!(event_id & SBI_SSE_EVENT_GLOBAL_BIT);
+}
+
+struct sbiret sbi_sse_read_attrs_raw(unsigned long event_id, unsigned long base_attr_id,
+ unsigned long attr_count, unsigned long phys_lo,
+ unsigned long phys_hi);
+struct sbiret sbi_sse_read_attrs(unsigned long event_id, unsigned long base_attr_id,
+ unsigned long attr_count, unsigned long *values);
+struct sbiret sbi_sse_write_attrs_raw(unsigned long event_id, unsigned long base_attr_id,
+ unsigned long attr_count, unsigned long phys_lo,
+ unsigned long phys_hi);
+struct sbiret sbi_sse_write_attrs(unsigned long event_id, unsigned long base_attr_id,
+ unsigned long attr_count, unsigned long *values);
+struct sbiret sbi_sse_register_raw(unsigned long event_id, unsigned long entry_pc,
+ unsigned long entry_arg);
+struct sbiret sbi_sse_register(unsigned long event_id, struct sbi_sse_handler_arg *arg);
+struct sbiret sbi_sse_unregister(unsigned long event_id);
+struct sbiret sbi_sse_enable(unsigned long event_id);
+struct sbiret sbi_sse_hart_mask(void);
+struct sbiret sbi_sse_hart_unmask(void);
+struct sbiret sbi_sse_inject(unsigned long event_id, unsigned long hart_id);
+struct sbiret sbi_sse_disable(unsigned long event_id);
+
+#endif /* !_RISCV_SSE_H_ */
diff --git a/lib/riscv/sbi-sse-asm.S b/lib/riscv/sbi-sse-asm.S
new file mode 100644
index 00000000..5f60b839
--- /dev/null
+++ b/lib/riscv/sbi-sse-asm.S
@@ -0,0 +1,103 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * RISC-V SSE events entry point.
+ *
+ * Copyright (C) 2025, Rivos Inc., Clément Léger <cleger@rivosinc.com>
+ */
+#define __ASSEMBLY__
+#include <asm/asm.h>
+#include <asm/csr.h>
+#include <asm/asm-offsets.h>
+#include <generated/sbi-asm-offsets.h>
+
+.section .text
+.global sbi_sse_entry
+sbi_sse_entry:
+ /* Save stack temporarily */
+ REG_S sp, SBI_SSE_REG_TMP(a7)
+ /* Set entry stack */
+ REG_L sp, SBI_SSE_HANDLER_STACK(a7)
+
+ addi sp, sp, -(PT_SIZE)
+ REG_S ra, PT_RA(sp)
+ REG_S s0, PT_S0(sp)
+ REG_S s1, PT_S1(sp)
+ REG_S s2, PT_S2(sp)
+ REG_S s3, PT_S3(sp)
+ REG_S s4, PT_S4(sp)
+ REG_S s5, PT_S5(sp)
+ REG_S s6, PT_S6(sp)
+ REG_S s7, PT_S7(sp)
+ REG_S s8, PT_S8(sp)
+ REG_S s9, PT_S9(sp)
+ REG_S s10, PT_S10(sp)
+ REG_S s11, PT_S11(sp)
+ REG_S tp, PT_TP(sp)
+ REG_S t0, PT_T0(sp)
+ REG_S t1, PT_T1(sp)
+ REG_S t2, PT_T2(sp)
+ REG_S t3, PT_T3(sp)
+ REG_S t4, PT_T4(sp)
+ REG_S t5, PT_T5(sp)
+ REG_S t6, PT_T6(sp)
+ REG_S gp, PT_GP(sp)
+ REG_S a0, PT_A0(sp)
+ REG_S a1, PT_A1(sp)
+ REG_S a2, PT_A2(sp)
+ REG_S a3, PT_A3(sp)
+ REG_S a4, PT_A4(sp)
+ REG_S a5, PT_A5(sp)
+ csrr a1, CSR_SEPC
+ REG_S a1, PT_EPC(sp)
+ csrr a2, CSR_SSTATUS
+ REG_S a2, PT_STATUS(sp)
+
+ REG_L a0, SBI_SSE_REG_TMP(a7)
+ REG_S a0, PT_SP(sp)
+
+ REG_L t0, SBI_SSE_HANDLER(a7)
+ REG_L a0, SBI_SSE_HANDLER_DATA(a7)
+ mv a1, sp
+ mv a2, a6
+ jalr t0
+
+ REG_L a1, PT_EPC(sp)
+ REG_L a2, PT_STATUS(sp)
+ csrw CSR_SEPC, a1
+ csrw CSR_SSTATUS, a2
+
+ REG_L ra, PT_RA(sp)
+ REG_L s0, PT_S0(sp)
+ REG_L s1, PT_S1(sp)
+ REG_L s2, PT_S2(sp)
+ REG_L s3, PT_S3(sp)
+ REG_L s4, PT_S4(sp)
+ REG_L s5, PT_S5(sp)
+ REG_L s6, PT_S6(sp)
+ REG_L s7, PT_S7(sp)
+ REG_L s8, PT_S8(sp)
+ REG_L s9, PT_S9(sp)
+ REG_L s10, PT_S10(sp)
+ REG_L s11, PT_S11(sp)
+ REG_L tp, PT_TP(sp)
+ REG_L t0, PT_T0(sp)
+ REG_L t1, PT_T1(sp)
+ REG_L t2, PT_T2(sp)
+ REG_L t3, PT_T3(sp)
+ REG_L t4, PT_T4(sp)
+ REG_L t5, PT_T5(sp)
+ REG_L t6, PT_T6(sp)
+ REG_L gp, PT_GP(sp)
+ REG_L a0, PT_A0(sp)
+ REG_L a1, PT_A1(sp)
+ REG_L a2, PT_A2(sp)
+ REG_L a3, PT_A3(sp)
+ REG_L a4, PT_A4(sp)
+ REG_L a5, PT_A5(sp)
+
+ REG_L sp, PT_SP(sp)
+
+ li a7, ASM_SBI_EXT_SSE
+ li a6, ASM_SBI_EXT_SSE_COMPLETE
+ ecall
+
diff --git a/lib/riscv/asm-offsets.c b/lib/riscv/asm-offsets.c
index 6c511c14..77e5d26d 100644
--- a/lib/riscv/asm-offsets.c
+++ b/lib/riscv/asm-offsets.c
@@ -4,6 +4,7 @@
#include <asm/processor.h>
#include <asm/ptrace.h>
#include <asm/smp.h>
+#include <asm/sbi-sse.h>
int main(void)
{
@@ -63,5 +64,13 @@ int main(void)
OFFSET(THREAD_INFO_HARTID, thread_info, hartid);
DEFINE(THREAD_INFO_SIZE, sizeof(struct thread_info));
+ DEFINE(ASM_SBI_EXT_SSE, SBI_EXT_SSE);
+ DEFINE(ASM_SBI_EXT_SSE_COMPLETE, SBI_EXT_SSE_COMPLETE);
+
+ OFFSET(SBI_SSE_REG_TMP, sbi_sse_handler_arg, reg_tmp);
+ OFFSET(SBI_SSE_HANDLER, sbi_sse_handler_arg, handler);
+ OFFSET(SBI_SSE_HANDLER_DATA, sbi_sse_handler_arg, handler_data);
+ OFFSET(SBI_SSE_HANDLER_STACK, sbi_sse_handler_arg, stack);
+
return 0;
}
diff --git a/lib/riscv/sbi-sse.c b/lib/riscv/sbi-sse.c
new file mode 100644
index 00000000..bc4dd10e
--- /dev/null
+++ b/lib/riscv/sbi-sse.c
@@ -0,0 +1,84 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * SBI SSE library
+ *
+ * Copyright (C) 2025, Rivos Inc., Clément Léger <cleger@rivosinc.com>
+ */
+#include <asm/sbi.h>
+#include <asm/sbi-sse.h>
+#include <asm/io.h>
+
+struct sbiret sbi_sse_read_attrs_raw(unsigned long event_id, unsigned long base_attr_id,
+ unsigned long attr_count, unsigned long phys_lo,
+ unsigned long phys_hi)
+{
+ return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_READ_ATTRS, event_id, base_attr_id, attr_count,
+ phys_lo, phys_hi, 0);
+}
+
+struct sbiret sbi_sse_read_attrs(unsigned long event_id, unsigned long base_attr_id,
+ unsigned long attr_count, unsigned long *values)
+{
+ phys_addr_t p = virt_to_phys(values);
+
+ return sbi_sse_read_attrs_raw(event_id, base_attr_id, attr_count, lower_32_bits(p),
+ upper_32_bits(p));
+}
+
+struct sbiret sbi_sse_write_attrs_raw(unsigned long event_id, unsigned long base_attr_id,
+ unsigned long attr_count, unsigned long phys_lo,
+ unsigned long phys_hi)
+{
+ return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_WRITE_ATTRS, event_id, base_attr_id, attr_count,
+ phys_lo, phys_hi, 0);
+}
+
+struct sbiret sbi_sse_write_attrs(unsigned long event_id, unsigned long base_attr_id,
+ unsigned long attr_count, unsigned long *values)
+{
+ phys_addr_t p = virt_to_phys(values);
+
+ return sbi_sse_write_attrs_raw(event_id, base_attr_id, attr_count, lower_32_bits(p),
+ upper_32_bits(p));
+}
+
+struct sbiret sbi_sse_register_raw(unsigned long event_id, unsigned long entry_pc,
+ unsigned long entry_arg)
+{
+ return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_REGISTER, event_id, entry_pc, entry_arg, 0, 0, 0);
+}
+
+struct sbiret sbi_sse_register(unsigned long event_id, struct sbi_sse_handler_arg *arg)
+{
+ return sbi_sse_register_raw(event_id, virt_to_phys(sbi_sse_entry), virt_to_phys(arg));
+}
+
+struct sbiret sbi_sse_unregister(unsigned long event_id)
+{
+ return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_UNREGISTER, event_id, 0, 0, 0, 0, 0);
+}
+
+struct sbiret sbi_sse_enable(unsigned long event_id)
+{
+ return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_ENABLE, event_id, 0, 0, 0, 0, 0);
+}
+
+struct sbiret sbi_sse_disable(unsigned long event_id)
+{
+ return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_DISABLE, event_id, 0, 0, 0, 0, 0);
+}
+
+struct sbiret sbi_sse_hart_mask(void)
+{
+ return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_HART_MASK, 0, 0, 0, 0, 0, 0);
+}
+
+struct sbiret sbi_sse_hart_unmask(void)
+{
+ return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_HART_UNMASK, 0, 0, 0, 0, 0, 0);
+}
+
+struct sbiret sbi_sse_inject(unsigned long event_id, unsigned long hart_id)
+{
+ return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_INJECT, event_id, hart_id, 0, 0, 0, 0);
+}
--
2.47.2
* [kvm-unit-tests PATCH v7 6/6] riscv: sbi: Add SSE extension tests
2025-02-14 11:44 [kvm-unit-tests PATCH v7 0/6] riscv: add SBI SSE extension tests Clément Léger
` (4 preceding siblings ...)
2025-02-14 11:44 ` [kvm-unit-tests PATCH v7 5/6] lib: riscv: Add SBI SSE support Clément Léger
@ 2025-02-14 11:44 ` Clément Léger
2025-02-28 17:51 ` Andrew Jones
5 siblings, 1 reply; 14+ messages in thread
From: Clément Léger @ 2025-02-14 11:44 UTC (permalink / raw)
To: kvm, kvm-riscv
Cc: Clément Léger, Andrew Jones, Anup Patel, Atish Patra
Add SBI SSE extension tests for the following features:
- Attribute errors (invalid values, RO attributes, etc.)
- Registration errors
- Simple events (register, enable, inject)
- Events with different priorities
- Global events dispatch on different harts
- Local events on all harts
- Hart mask/unmask events
Signed-off-by: Clément Léger <cleger@rivosinc.com>
---
riscv/Makefile | 1 +
riscv/sbi-sse.c | 1054 +++++++++++++++++++++++++++++++++++++++++++++++
riscv/sbi.c | 2 +
3 files changed, 1057 insertions(+)
create mode 100644 riscv/sbi-sse.c
diff --git a/riscv/Makefile b/riscv/Makefile
index ed590ede..ea62e05f 100644
--- a/riscv/Makefile
+++ b/riscv/Makefile
@@ -18,6 +18,7 @@ tests += $(TEST_DIR)/sieve.$(exe)
all: $(tests)
$(TEST_DIR)/sbi-deps = $(TEST_DIR)/sbi-asm.o $(TEST_DIR)/sbi-fwft.o
+$(TEST_DIR)/sbi-deps += $(TEST_DIR)/sbi-sse.o
# When built for EFI sieve needs extra memory, run with e.g. '-m 256' on QEMU
$(TEST_DIR)/sieve.$(exe): AUXFLAGS = 0x1
diff --git a/riscv/sbi-sse.c b/riscv/sbi-sse.c
new file mode 100644
index 00000000..27f47f73
--- /dev/null
+++ b/riscv/sbi-sse.c
@@ -0,0 +1,1054 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * SBI SSE testsuite
+ *
+ * Copyright (C) 2025, Rivos Inc., Clément Léger <cleger@rivosinc.com>
+ */
+#include <alloc.h>
+#include <alloc_page.h>
+#include <bitops.h>
+#include <cpumask.h>
+#include <libcflat.h>
+#include <on-cpus.h>
+#include <stdlib.h>
+
+#include <asm/barrier.h>
+#include <asm/io.h>
+#include <asm/page.h>
+#include <asm/processor.h>
+#include <asm/sbi.h>
+#include <asm/sbi-sse.h>
+#include <asm/setup.h>
+
+#include "sbi-tests.h"
+
+#define SSE_STACK_SIZE PAGE_SIZE
+
+void check_sse(void);
+
+struct sse_event_info {
+ uint32_t event_id;
+ const char *name;
+ bool can_inject;
+};
+
+static struct sse_event_info sse_event_infos[] = {
+ {
+ .event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS,
+ .name = "local_high_prio_ras",
+ },
+ {
+ .event_id = SBI_SSE_EVENT_LOCAL_DOUBLE_TRAP,
+ .name = "double_trap",
+ },
+ {
+ .event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS,
+ .name = "global_high_prio_ras",
+ },
+ {
+ .event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW,
+ .name = "local_pmu_overflow",
+ },
+ {
+ .event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS,
+ .name = "local_low_prio_ras",
+ },
+ {
+ .event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS,
+ .name = "global_low_prio_ras",
+ },
+ {
+ .event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE,
+ .name = "local_software",
+ },
+ {
+ .event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE,
+ .name = "global_software",
+ },
+};
+
+static const char *const attr_names[] = {
+ [SBI_SSE_ATTR_STATUS] = "status",
+ [SBI_SSE_ATTR_PRIORITY] = "priority",
+ [SBI_SSE_ATTR_CONFIG] = "config",
+ [SBI_SSE_ATTR_PREFERRED_HART] = "preferred_hart",
+ [SBI_SSE_ATTR_ENTRY_PC] = "entry_pc",
+ [SBI_SSE_ATTR_ENTRY_ARG] = "entry_arg",
+ [SBI_SSE_ATTR_INTERRUPTED_SEPC] = "interrupted_sepc",
+ [SBI_SSE_ATTR_INTERRUPTED_FLAGS] = "interrupted_flags",
+ [SBI_SSE_ATTR_INTERRUPTED_A6] = "interrupted_a6",
+ [SBI_SSE_ATTR_INTERRUPTED_A7] = "interrupted_a7",
+};
+
+static const unsigned long ro_attrs[] = {
+ SBI_SSE_ATTR_STATUS,
+ SBI_SSE_ATTR_ENTRY_PC,
+ SBI_SSE_ATTR_ENTRY_ARG,
+};
+
+static const unsigned long interrupted_attrs[] = {
+ SBI_SSE_ATTR_INTERRUPTED_SEPC,
+ SBI_SSE_ATTR_INTERRUPTED_FLAGS,
+ SBI_SSE_ATTR_INTERRUPTED_A6,
+ SBI_SSE_ATTR_INTERRUPTED_A7,
+};
+
+static const unsigned long interrupted_flags[] = {
+ SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPP,
+ SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPIE,
+ SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPELP,
+ SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SDT,
+ SBI_SSE_ATTR_INTERRUPTED_FLAGS_HSTATUS_SPV,
+ SBI_SSE_ATTR_INTERRUPTED_FLAGS_HSTATUS_SPVP,
+};
+
+static struct sse_event_info *sse_event_get_info(uint32_t event_id)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(sse_event_infos); i++) {
+ if (sse_event_infos[i].event_id == event_id)
+ return &sse_event_infos[i];
+ }
+
+ assert_msg(false, "Invalid event id: %d", event_id);
+}
+
+static const char *sse_event_name(uint32_t event_id)
+{
+ return sse_event_get_info(event_id)->name;
+}
+
+static bool sse_event_can_inject(uint32_t event_id)
+{
+ return sse_event_get_info(event_id)->can_inject;
+}
+
+static struct sbiret sse_event_get_state(uint32_t event_id, enum sbi_sse_state *state)
+{
+ struct sbiret ret;
+ unsigned long status;
+
+ ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_STATUS, 1, &status);
+ if (ret.error) {
+ sbiret_report_error(&ret, 0, "Get SSE event status no error");
+ return ret;
+ }
+
+ *state = status & SBI_SSE_ATTR_STATUS_STATE_MASK;
+
+ return ret;
+}
+
+static void sse_global_event_set_current_hart(uint32_t event_id)
+{
+ struct sbiret ret;
+ unsigned long current_hart = current_thread_info()->hartid;
+
+ if (!sbi_sse_event_is_global(event_id))
+ return;
+
+ ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, ¤t_hart);
+ if (ret.error)
+ report_abort("set preferred hart failure, error %ld", ret.error);
+}
+
+static bool sse_check_state(uint32_t event_id, unsigned long expected_state)
+{
+ struct sbiret ret;
+ enum sbi_sse_state state;
+
+ ret = sse_event_get_state(event_id, &state);
+ if (ret.error)
+ return false;
+ report(state == expected_state, "SSE event status == %ld", expected_state);
+
+ return state == expected_state;
+}
+
+static bool sse_event_pending(uint32_t event_id)
+{
+ struct sbiret ret;
+ unsigned long status;
+
+ ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_STATUS, 1, &status);
+ if (ret.error) {
+ report_fail("Failed to get SSE event status, error %ld", ret.error);
+ return false;
+ }
+
+ return !!(status & BIT(SBI_SSE_ATTR_STATUS_PENDING_OFFSET));
+}
+
+static void *sse_alloc_stack(void)
+{
+ /*
+ * We assume SSE_STACK_SIZE always fits in one page. The stack pointer
+ * is always decremented before storing anything on it in sbi-sse-asm.S.
+ */
+ assert(SSE_STACK_SIZE <= PAGE_SIZE);
+
+ return (alloc_page() + SSE_STACK_SIZE);
+}
+
+static void sse_free_stack(void *stack)
+{
+ free_page(stack - SSE_STACK_SIZE);
+}
+
+static void sse_read_write_test(uint32_t event_id, unsigned long attr, unsigned long attr_count,
+ unsigned long *value, long expected_error, const char *str)
+{
+ struct sbiret ret;
+
+ ret = sbi_sse_read_attrs(event_id, attr, attr_count, value);
+ sbiret_report_error(&ret, expected_error, "Read %s error", str);
+
+ ret = sbi_sse_write_attrs(event_id, attr, attr_count, value);
+ sbiret_report_error(&ret, expected_error, "Write %s error", str);
+}
+
+#define ALL_ATTRS_COUNT (SBI_SSE_ATTR_INTERRUPTED_A7 + 1)
+
+static void sse_test_attrs(uint32_t event_id)
+{
+ unsigned long value = 0;
+ struct sbiret ret;
+ void *ptr;
+ unsigned long values[ALL_ATTRS_COUNT];
+ unsigned int i;
+ const char *max_hart_str;
+ const char *attr_name;
+
+ report_prefix_push("attrs");
+
+ for (i = 0; i < ARRAY_SIZE(ro_attrs); i++) {
+ ret = sbi_sse_write_attrs(event_id, ro_attrs[i], 1, &value);
+ sbiret_report_error(&ret, SBI_ERR_BAD_RANGE, "RO attribute %s not writable",
+ attr_names[ro_attrs[i]]);
+ }
+
+ ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_STATUS, ALL_ATTRS_COUNT, values);
+ sbiret_report_error(&ret, SBI_SUCCESS, "Read multiple attributes no error");
+
+ for (i = SBI_SSE_ATTR_STATUS; i <= SBI_SSE_ATTR_INTERRUPTED_A7; i++) {
+ ret = sbi_sse_read_attrs(event_id, i, 1, &value);
+ attr_name = attr_names[i];
+
+ sbiret_report_error(&ret, SBI_SUCCESS, "Read single attribute %s", attr_name);
+ if (values[i] != value)
+ report_fail("Attribute 0x%x single value read (0x%lx) differs from the one read with multiple attributes (0x%lx)",
+ i, value, values[i]);
+ /*
+ * The preferred hart reset value is defined by the SBI vendor, and
+ * the status injectable bit also depends on the SBI implementation.
+ */
+ if (i != SBI_SSE_ATTR_STATUS && i != SBI_SSE_ATTR_PREFERRED_HART)
+ report(value == 0, "Attribute %s reset value is 0", attr_name);
+ }
+
+#if __riscv_xlen > 32
+ value = BIT(32);
+ ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value);
+ sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Write invalid prio > 0xFFFFFFFF error");
+#endif
+
+ value = ~SBI_SSE_ATTR_CONFIG_ONESHOT;
+ ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_CONFIG, 1, &value);
+ sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Write invalid config value error");
+
+ if (sbi_sse_event_is_global(event_id)) {
+ max_hart_str = getenv("MAX_HART_ID");
+ if (!max_hart_str)
+ value = 0xFFFFFFFFUL;
+ else
+ value = strtoul(max_hart_str, NULL, 0);
+
+ ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &value);
+ sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Set invalid hart id error");
+ } else {
+ /* Set Hart on local event -> RO */
+ value = current_thread_info()->hartid;
+ ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &value);
+ sbiret_report_error(&ret, SBI_ERR_BAD_RANGE, "Set hart id on local event error");
+ }
+
+ /* Set/get flags, sepc, a6, a7 */
+ for (i = 0; i < ARRAY_SIZE(interrupted_attrs); i++) {
+ attr_name = attr_names[interrupted_attrs[i]];
+ ret = sbi_sse_read_attrs(event_id, interrupted_attrs[i], 1, &value);
+ sbiret_report_error(&ret, SBI_SUCCESS, "Get interrupted %s no error", attr_name);
+
+ value = ARRAY_SIZE(interrupted_attrs) - i;
+ ret = sbi_sse_write_attrs(event_id, interrupted_attrs[i], 1, &value);
+ sbiret_report_error(&ret, SBI_ERR_INVALID_STATE,
+ "Set attribute %s invalid state error", attr_name);
+ }
+
+ sse_read_write_test(event_id, SBI_SSE_ATTR_STATUS, 0, &value, SBI_ERR_INVALID_PARAM,
+ "attribute attr_count == 0");
+ sse_read_write_test(event_id, SBI_SSE_ATTR_INTERRUPTED_A7 + 1, 1, &value, SBI_ERR_BAD_RANGE,
+ "invalid attribute");
+
+#if __riscv_xlen > 32
+ sse_read_write_test(event_id, BIT(32), 1, &value, SBI_ERR_INVALID_PARAM,
+ "attribute id > 32 bits");
+ sse_read_write_test(event_id, SBI_SSE_ATTR_STATUS, BIT(32), &value, SBI_ERR_INVALID_PARAM,
+ "attribute count > 32 bits");
+#endif
+
+ /* Misaligned pointer address */
+ ptr = (void *) &value;
+ ptr += 1;
+ sse_read_write_test(event_id, SBI_SSE_ATTR_STATUS, 1, ptr, SBI_ERR_INVALID_ADDRESS,
+ "attribute with invalid address");
+
+ report_prefix_pop();
+}
+
+static void sse_test_register_error(uint32_t event_id)
+{
+ struct sbiret ret;
+
+ report_prefix_push("register");
+
+ ret = sbi_sse_unregister(event_id);
+ sbiret_report_error(&ret, SBI_ERR_INVALID_STATE, "SSE unregister non registered event");
+
+ ret = sbi_sse_register_raw(event_id, 0x1, 0);
+ sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "SSE register misaligned entry");
+
+ ret = sbi_sse_register_raw(event_id, virt_to_phys(sbi_sse_entry), 0);
+ sbiret_report_error(&ret, SBI_SUCCESS, "SSE register ok");
+ if (ret.error)
+ goto done;
+
+ ret = sbi_sse_register_raw(event_id, virt_to_phys(sbi_sse_entry), 0);
+ sbiret_report_error(&ret, SBI_ERR_INVALID_STATE, "SSE register twice failure");
+
+ ret = sbi_sse_unregister(event_id);
+ sbiret_report_error(&ret, SBI_SUCCESS, "SSE unregister ok");
+
+done:
+ report_prefix_pop();
+}
+
+struct sse_simple_test_arg {
+ bool done;
+ unsigned long expected_a6;
+ uint32_t event_id;
+};
+
+static void sse_simple_handler(void *data, struct pt_regs *regs, unsigned int hartid)
+{
+ struct sse_simple_test_arg *arg = data;
+ int i;
+ struct sbiret ret;
+ const char *attr_name;
+ uint32_t event_id = READ_ONCE(arg->event_id), attr;
+ unsigned long value, prev_value, flags;
+ unsigned long interrupted_state[ARRAY_SIZE(interrupted_attrs)];
+ unsigned long modified_state[ARRAY_SIZE(interrupted_attrs)] = {4, 3, 2, 1};
+ unsigned long tmp_state[ARRAY_SIZE(interrupted_attrs)];
+
+ report((regs->status & SR_SPP) == SR_SPP, "Interrupted S-mode");
+ report(hartid == current_thread_info()->hartid, "Hartid correctly passed");
+ sse_check_state(event_id, SBI_SSE_STATE_RUNNING);
+ report(!sse_event_pending(event_id), "Event not pending");
+
+ /* Read full interrupted state */
+ ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_INTERRUPTED_SEPC,
+ ARRAY_SIZE(interrupted_attrs), interrupted_state);
+ sbiret_report_error(&ret, SBI_SUCCESS, "Save full interrupted state from SSE handler ok");
+
+ /* Write full modified state and read it */
+ ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_INTERRUPTED_SEPC,
+ ARRAY_SIZE(modified_state), modified_state);
+ sbiret_report_error(&ret, SBI_SUCCESS,
+ "Write full interrupted state from SSE handler ok");
+
+ ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_INTERRUPTED_SEPC,
+ ARRAY_SIZE(tmp_state), tmp_state);
+ sbiret_report_error(&ret, SBI_SUCCESS, "Read full modified state from SSE handler ok");
+
+ report(memcmp(tmp_state, modified_state, sizeof(modified_state)) == 0,
+ "Full interrupted state successfully written");
+
+ /* Restore full saved state */
+ ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_INTERRUPTED_SEPC,
+ ARRAY_SIZE(interrupted_attrs), interrupted_state);
+ sbiret_report_error(&ret, SBI_SUCCESS,
+ "Full interrupted state restore from SSE handler ok");
+
+ /* We test SBI_SSE_ATTR_INTERRUPTED_FLAGS below with specific flag values */
+ for (i = 0; i < ARRAY_SIZE(interrupted_attrs); i++) {
+ attr = interrupted_attrs[i];
+ if (attr == SBI_SSE_ATTR_INTERRUPTED_FLAGS)
+ continue;
+
+ attr_name = attr_names[attr];
+
+ ret = sbi_sse_read_attrs(event_id, attr, 1, &prev_value);
+ sbiret_report_error(&ret, SBI_SUCCESS, "Get attr %s no error", attr_name);
+
+ value = 0xDEADBEEF + i;
+ ret = sbi_sse_write_attrs(event_id, attr, 1, &value);
+ sbiret_report_error(&ret, SBI_SUCCESS, "Set attr %s no error", attr_name);
+
+ ret = sbi_sse_read_attrs(event_id, attr, 1, &value);
+ sbiret_report_error(&ret, SBI_SUCCESS, "Get attr %s no error", attr_name);
+ report(value == 0xDEADBEEF + i, "Get attr %s no error, value: 0x%lx", attr_name,
+ value);
+
+ ret = sbi_sse_write_attrs(event_id, attr, 1, &prev_value);
+ sbiret_report_error(&ret, SBI_SUCCESS, "Restore attr %s value no error", attr_name);
+ }
+
+ /* Test all flags allowed for SBI_SSE_ATTR_INTERRUPTED_FLAGS */
+ attr = SBI_SSE_ATTR_INTERRUPTED_FLAGS;
+ ret = sbi_sse_read_attrs(event_id, attr, 1, &prev_value);
+ sbiret_report_error(&ret, SBI_SUCCESS, "Save interrupted flags no error");
+
+ for (i = 0; i < ARRAY_SIZE(interrupted_flags); i++) {
+ flags = interrupted_flags[i];
+ ret = sbi_sse_write_attrs(event_id, attr, 1, &flags);
+ sbiret_report_error(&ret, SBI_SUCCESS,
+ "Set interrupted flags bit 0x%lx value no error", flags);
+ ret = sbi_sse_read_attrs(event_id, attr, 1, &value);
+ sbiret_report_error(&ret, SBI_SUCCESS, "Get interrupted flags after set no error");
+ report(value == flags, "interrupted flags modified value ok: 0x%lx", value);
+ }
+
+ /* Write invalid bit in flag register */
+ flags = SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SDT << 1;
+ ret = sbi_sse_write_attrs(event_id, attr, 1, &flags);
+ sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Set invalid flags bit 0x%lx value error",
+ flags);
+#if __riscv_xlen > 32
+ flags = BIT(32);
+ ret = sbi_sse_write_attrs(event_id, attr, 1, &flags);
+ sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Set invalid flags bit 0x%lx value error",
+ flags);
+#endif
+
+ ret = sbi_sse_write_attrs(event_id, attr, 1, &prev_value);
+ sbiret_report_error(&ret, SBI_SUCCESS, "Restore interrupted flags no error");
+
+ /* Try to change HARTID/Priority while running */
+ if (sbi_sse_event_is_global(event_id)) {
+ value = current_thread_info()->hartid;
+ ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &value);
+ sbiret_report_error(&ret, SBI_ERR_INVALID_STATE, "Set hart id while running error");
+ }
+
+ value = 0;
+ ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value);
+ sbiret_report_error(&ret, SBI_ERR_INVALID_STATE, "Set priority while running error");
+
+ report(interrupted_state[2] == READ_ONCE(arg->expected_a6),
+ "Interrupted state a6 ok, expected 0x%lx, got 0x%lx",
+ READ_ONCE(arg->expected_a6), interrupted_state[2]);
+
+ report(interrupted_state[3] == SBI_EXT_SSE,
+ "Interrupted state a7 ok, expected 0x%x, got 0x%lx", SBI_EXT_SSE,
+ interrupted_state[3]);
+
+ WRITE_ONCE(arg->done, true);
+}
+
+static void sse_test_inject_simple(uint32_t event_id)
+{
+ unsigned long value;
+ struct sbiret ret;
+ struct sse_simple_test_arg test_arg = {.event_id = event_id};
+ struct sbi_sse_handler_arg args = {
+ .handler = sse_simple_handler,
+ .handler_data = (void *) &test_arg,
+ .stack = sse_alloc_stack(),
+ };
+
+ report_prefix_push("simple");
+
+ if (!sse_check_state(event_id, SBI_SSE_STATE_UNUSED))
+ goto done;
+
+ ret = sbi_sse_register(event_id, &args);
+ if (!sbiret_report_error(&ret, SBI_SUCCESS, "SSE register no error"))
+ goto done;
+
+ if (!sse_check_state(event_id, SBI_SSE_STATE_REGISTERED))
+ goto done;
+
+ /* Be sure global events are targeting the current hart */
+ sse_global_event_set_current_hart(event_id);
+
+ ret = sbi_sse_enable(event_id);
+ if (!sbiret_report_error(&ret, SBI_SUCCESS, "SSE enable no error"))
+ goto done;
+
+ if (!sse_check_state(event_id, SBI_SSE_STATE_ENABLED))
+ goto done;
+
+ ret = sbi_sse_hart_mask();
+ if (!sbiret_report_error(&ret, SBI_SUCCESS, "SSE hart mask no error"))
+ goto done;
+
+ ret = sbi_sse_inject(event_id, current_thread_info()->hartid);
+ if (!sbiret_report_error(&ret, SBI_SUCCESS, "SSE injection masked no error"))
+ goto done;
+
+ barrier();
+ report(test_arg.done == 0, "SSE event masked not handled");
+
+ /*
+ * When unmasking the SSE events, we expect the event to be injected
+ * immediately, so a6 should be SBI_EXT_SSE_HART_UNMASK.
+ */
+ test_arg.expected_a6 = SBI_EXT_SSE_HART_UNMASK;
+ ret = sbi_sse_hart_unmask();
+ if (!sbiret_report_error(&ret, SBI_SUCCESS, "SSE hart unmask no error"))
+ goto done;
+
+ barrier();
+ report(test_arg.done == 1, "SSE event unmasked handled");
+ test_arg.done = 0;
+ test_arg.expected_a6 = SBI_EXT_SSE_INJECT;
+
+ /* Set as oneshot and verify it is disabled */
+ ret = sbi_sse_disable(event_id);
+ sbiret_report_error(&ret, SBI_SUCCESS, "Disable event ok");
+ value = SBI_SSE_ATTR_CONFIG_ONESHOT;
+ ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_CONFIG, 1, &value);
+ sbiret_report_error(&ret, SBI_SUCCESS, "Set event attribute as ONESHOT");
+ ret = sbi_sse_enable(event_id);
+ sbiret_report_error(&ret, SBI_SUCCESS, "Enable event ok");
+
+ ret = sbi_sse_inject(event_id, current_thread_info()->hartid);
+ if (!sbiret_report_error(&ret, SBI_SUCCESS, "SSE second injection no error"))
+ goto done;
+
+ barrier();
+ report(test_arg.done == 1, "SSE event handled ok");
+ test_arg.done = 0;
+
+ if (!sse_check_state(event_id, SBI_SSE_STATE_REGISTERED))
+ goto done;
+
+ /* Clear ONESHOT FLAG */
+ value = 0;
+ sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_CONFIG, 1, &value);
+
+ ret = sbi_sse_unregister(event_id);
+ if (!sbiret_report_error(&ret, SBI_SUCCESS, "SSE unregister no error"))
+ goto done;
+
+ sse_check_state(event_id, SBI_SSE_STATE_UNUSED);
+
+done:
+ sse_free_stack(args.stack);
+ report_prefix_pop();
+}
+
+struct sse_foreign_cpu_test_arg {
+ bool done;
+ unsigned int expected_cpu;
+ uint32_t event_id;
+};
+
+static void sse_foreign_cpu_handler(void *data, struct pt_regs *regs, unsigned int hartid)
+{
+ struct sse_foreign_cpu_test_arg *arg = data;
+ unsigned int expected_cpu;
+
+ /* For arg content to be visible */
+ smp_rmb();
+ expected_cpu = READ_ONCE(arg->expected_cpu);
+ report(expected_cpu == current_thread_info()->cpu,
+ "Received event on CPU (%d), expected CPU (%d)", current_thread_info()->cpu,
+ expected_cpu);
+
+ WRITE_ONCE(arg->done, true);
+ /* For arg update to be visible for other CPUs */
+ smp_wmb();
+}
+
+struct sse_local_per_cpu {
+ struct sbi_sse_handler_arg args;
+ struct sbiret ret;
+ struct sse_foreign_cpu_test_arg handler_arg;
+};
+
+static void sse_register_enable_local(void *data)
+{
+ struct sbiret ret;
+ struct sse_local_per_cpu *cpu_args = data;
+ struct sse_local_per_cpu *cpu_arg = &cpu_args[current_thread_info()->cpu];
+ uint32_t event_id = cpu_arg->handler_arg.event_id;
+
+ ret = sbi_sse_register(event_id, &cpu_arg->args);
+ WRITE_ONCE(cpu_arg->ret, ret);
+ if (ret.error)
+ return;
+
+ ret = sbi_sse_enable(event_id);
+ WRITE_ONCE(cpu_arg->ret, ret);
+}
+
+static void sbi_sse_disable_unregister_local(void *data)
+{
+ struct sbiret ret;
+ struct sse_local_per_cpu *cpu_args = data;
+ struct sse_local_per_cpu *cpu_arg = &cpu_args[current_thread_info()->cpu];
+ uint32_t event_id = cpu_arg->handler_arg.event_id;
+
+ ret = sbi_sse_disable(event_id);
+ WRITE_ONCE(cpu_arg->ret, ret);
+ if (ret.error)
+ return;
+
+ ret = sbi_sse_unregister(event_id);
+ WRITE_ONCE(cpu_arg->ret, ret);
+}
+
+static void sse_test_inject_local(uint32_t event_id)
+{
+ int cpu;
+ struct sbiret ret;
+ struct sse_local_per_cpu *cpu_args, *cpu_arg;
+ struct sse_foreign_cpu_test_arg *handler_arg;
+
+ cpu_args = calloc(NR_CPUS, sizeof(struct sse_local_per_cpu));
+ assert(cpu_args);
+
+ report_prefix_push("local_dispatch");
+ for_each_online_cpu(cpu) {
+ cpu_arg = &cpu_args[cpu];
+ cpu_arg->handler_arg.event_id = event_id;
+ cpu_arg->args.stack = sse_alloc_stack();
+ cpu_arg->args.handler = sse_foreign_cpu_handler;
+ cpu_arg->args.handler_data = (void *)&cpu_arg->handler_arg;
+ }
+
+ on_cpus(sse_register_enable_local, cpu_args);
+ for_each_online_cpu(cpu) {
+ cpu_arg = &cpu_args[cpu];
+ ret = READ_ONCE(cpu_arg->ret);
+ if (ret.error)
+ report_abort("CPU failed to register/enable SSE event: %ld",
+ ret.error);
+
+ handler_arg = &cpu_arg->handler_arg;
+ WRITE_ONCE(handler_arg->expected_cpu, cpu);
+ /* For handler_arg content to be visible for other CPUs */
+ smp_wmb();
+ ret = sbi_sse_inject(event_id, cpus[cpu].hartid);
+ if (ret.error)
+ report_abort("CPU failed to inject SSE event: %ld",
+ ret.error);
+ }
+
+ for_each_online_cpu(cpu) {
+ handler_arg = &cpu_args[cpu].handler_arg;
+ while (!READ_ONCE(handler_arg->done)) {
+ /* For handler_arg update to be visible */
+ smp_rmb();
+ }
+ WRITE_ONCE(handler_arg->done, false);
+ }
+
+ on_cpus(sbi_sse_disable_unregister_local, cpu_args);
+ for_each_online_cpu(cpu) {
+ cpu_arg = &cpu_args[cpu];
+ ret = READ_ONCE(cpu_arg->ret);
+ if (ret.error)
+ report_abort("CPU failed to disable/unregister SSE event: %ld",
+ ret.error);
+ }
+
+ for_each_online_cpu(cpu) {
+ cpu_arg = &cpu_args[cpu];
+
+ sse_free_stack(cpu_arg->args.stack);
+ }
+
+ free(cpu_args);
+
+ report_pass("local event dispatch on all CPUs");
+ report_prefix_pop();
+}
+
+static void sse_test_inject_global(uint32_t event_id)
+{
+ unsigned long value;
+ struct sbiret ret;
+ unsigned int cpu;
+ struct sse_foreign_cpu_test_arg test_arg = {.event_id = event_id};
+ struct sbi_sse_handler_arg args = {
+ .handler = sse_foreign_cpu_handler,
+ .handler_data = (void *) &test_arg,
+ .stack = sse_alloc_stack(),
+ };
+ enum sbi_sse_state state;
+
+ report_prefix_push("global_dispatch");
+
+ ret = sbi_sse_register(event_id, &args);
+ if (!sbiret_report_error(&ret, SBI_SUCCESS, "Register event no error"))
+ goto err_free_stack;
+
+ for_each_online_cpu(cpu) {
+ WRITE_ONCE(test_arg.expected_cpu, cpu);
+ /* For test_arg content to be visible for other CPUs */
+ smp_wmb();
+ value = cpus[cpu].hartid;
+ ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &value);
+ if (!sbiret_report_error(&ret, SBI_SUCCESS, "Set preferred hart no error"))
+ goto err_unregister;
+
+ ret = sbi_sse_enable(event_id);
+ if (!sbiret_report_error(&ret, SBI_SUCCESS, "Enable SSE event no error"))
+ goto err_unregister;
+
+ ret = sbi_sse_inject(event_id, cpus[cpu].hartid);
+ if (!sbiret_report_error(&ret, SBI_SUCCESS, "Inject SSE event no error"))
+ goto err_disable;
+
+ while (!READ_ONCE(test_arg.done)) {
+ /* For shared test_arg structure */
+ smp_rmb();
+ }
+
+ WRITE_ONCE(test_arg.done, false);
+
+ /* Wait for event to be in ENABLED state */
+ do {
+ ret = sse_event_get_state(event_id, &state);
+ if (!sbiret_report_error(&ret, SBI_SUCCESS, "Get SSE event state no error"))
+ goto err_disable;
+ } while (state != SBI_SSE_STATE_ENABLED);
+
+err_disable:
+ ret = sbi_sse_disable(event_id);
+ if (!sbiret_report_error(&ret, SBI_SUCCESS, "Disable SSE event no error"))
+ goto err_unregister;
+
+ report_pass("Global event on CPU %d", cpu);
+ }
+
+err_unregister:
+ ret = sbi_sse_unregister(event_id);
+ sbiret_report_error(&ret, SBI_SUCCESS, "Unregister SSE event no error");
+
+err_free_stack:
+ sse_free_stack(args.stack);
+ report_prefix_pop();
+}
+
+struct priority_test_arg {
+ uint32_t event_id;
+ bool called;
+ u32 prio;
+ struct priority_test_arg *next_event_arg;
+ void (*check_func)(struct priority_test_arg *arg);
+};
+
+static void sse_hi_priority_test_handler(void *arg, struct pt_regs *regs,
+ unsigned int hartid)
+{
+ struct priority_test_arg *targ = arg;
+ struct priority_test_arg *next = targ->next_event_arg;
+
+ targ->called = true;
+ if (next) {
+ sbi_sse_inject(next->event_id, current_thread_info()->hartid);
+
+ report(!sse_event_pending(next->event_id),
+ "Higher priority event not pending after injection");
+ report(next->called, "Higher priority event handled immediately");
+ }
+}
+
+static void sse_low_priority_test_handler(void *arg, struct pt_regs *regs,
+ unsigned int hartid)
+{
+ struct priority_test_arg *targ = arg;
+ struct priority_test_arg *next = targ->next_event_arg;
+
+ targ->called = true;
+
+ if (next) {
+ sbi_sse_inject(next->event_id, current_thread_info()->hartid);
+
+ report(sse_event_pending(next->event_id), "Lower priority event is pending");
+ report(!next->called, "Lower priority event %s not handled before %s",
+ sse_event_name(next->event_id), sse_event_name(targ->event_id));
+ }
+}
+
+static void sse_test_injection_priority_arg(struct priority_test_arg *in_args,
+ unsigned int in_args_size,
+ sbi_sse_handler_fn handler,
+ const char *test_name)
+{
+ unsigned int i;
+ unsigned long value;
+ struct sbiret ret;
+ uint32_t event_id;
+ struct priority_test_arg *arg;
+ unsigned int args_size = 0;
+ struct sbi_sse_handler_arg event_args[in_args_size];
+ struct priority_test_arg *args[in_args_size];
+ void *stack;
+ struct sbi_sse_handler_arg *event_arg;
+
+ report_prefix_push(test_name);
+
+ for (i = 0; i < in_args_size; i++) {
+ arg = &in_args[i];
+ event_id = arg->event_id;
+ if (!sse_event_can_inject(event_id))
+ continue;
+
+ args[args_size] = arg;
+ args_size++;
+ }
+
+ if (!args_size) {
+ report_skip("No injectable event");
+ goto skip;
+ }
+
+ for (i = 0; i < args_size; i++) {
+ arg = args[i];
+ event_id = arg->event_id;
+ stack = sse_alloc_stack();
+
+ event_arg = &event_args[i];
+ event_arg->handler = handler;
+ event_arg->handler_data = (void *)arg;
+ event_arg->stack = stack;
+
+ if (i < (args_size - 1))
+ arg->next_event_arg = args[i + 1];
+ else
+ arg->next_event_arg = NULL;
+
+ /* Be sure global events are targeting the current hart */
+ sse_global_event_set_current_hart(event_id);
+
+ sbi_sse_register(event_id, event_arg);
+ value = arg->prio;
+ sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value);
+ sbi_sse_enable(event_id);
+ }
+
+ /* Inject first event */
+ ret = sbi_sse_inject(args[0]->event_id, current_thread_info()->hartid);
+ sbiret_report_error(&ret, SBI_SUCCESS, "SSE injection no error");
+
+ for (i = 0; i < args_size; i++) {
+ arg = args[i];
+ event_id = arg->event_id;
+
+ report(arg->called, "Event %s handler called", sse_event_name(arg->event_id));
+
+ sbi_sse_disable(event_id);
+ sbi_sse_unregister(event_id);
+
+ event_arg = &event_args[i];
+ sse_free_stack(event_arg->stack);
+ }
+
+skip:
+ report_prefix_pop();
+}
+
+static struct priority_test_arg hi_prio_args[] = {
+ {.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE},
+ {.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE},
+ {.event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS},
+ {.event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS},
+ {.event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW},
+ {.event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS},
+ {.event_id = SBI_SSE_EVENT_LOCAL_DOUBLE_TRAP},
+ {.event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS},
+};
+
+static struct priority_test_arg low_prio_args[] = {
+ {.event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS},
+ {.event_id = SBI_SSE_EVENT_LOCAL_DOUBLE_TRAP},
+ {.event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS},
+ {.event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW},
+ {.event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS},
+ {.event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS},
+ {.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE},
+ {.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE},
+};
+
+static struct priority_test_arg prio_args[] = {
+ {.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE, .prio = 5},
+ {.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE, .prio = 10},
+ {.event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS, .prio = 12},
+ {.event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW, .prio = 15},
+ {.event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS, .prio = 20},
+ {.event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS, .prio = 22},
+ {.event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS, .prio = 25},
+};
+
+static struct priority_test_arg same_prio_args[] = {
+ {.event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW, .prio = 0},
+ {.event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS, .prio = 0},
+ {.event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS, .prio = 10},
+ {.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE, .prio = 10},
+ {.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE, .prio = 10},
+ {.event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS, .prio = 20},
+ {.event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS, .prio = 20},
+};
+
+static void sse_test_injection_priority(void)
+{
+ report_prefix_push("prio");
+
+ sse_test_injection_priority_arg(hi_prio_args, ARRAY_SIZE(hi_prio_args),
+ sse_hi_priority_test_handler, "high");
+
+ sse_test_injection_priority_arg(low_prio_args, ARRAY_SIZE(low_prio_args),
+ sse_low_priority_test_handler, "low");
+
+ sse_test_injection_priority_arg(prio_args, ARRAY_SIZE(prio_args),
+ sse_low_priority_test_handler, "changed");
+
+ sse_test_injection_priority_arg(same_prio_args, ARRAY_SIZE(same_prio_args),
+ sse_low_priority_test_handler, "same_prio");
+
+ report_prefix_pop();
+}
+
+static void test_invalid_event_id(unsigned long event_id)
+{
+ struct sbiret ret;
+ unsigned long value = 0;
+
+ ret = sbi_sse_register_raw(event_id, virt_to_phys(sbi_sse_entry), 0);
+ sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
+ "SSE register event_id 0x%lx invalid param", event_id);
+
+ ret = sbi_sse_unregister(event_id);
+ sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
+ "SSE unregister event_id 0x%lx invalid param", event_id);
+
+ ret = sbi_sse_enable(event_id);
+ sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
+ "SSE enable event_id 0x%lx invalid param", event_id);
+
+ ret = sbi_sse_disable(event_id);
+ sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
+ "SSE disable event_id 0x%lx invalid param", event_id);
+
+ ret = sbi_sse_inject(event_id, 0);
+ sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
+ "SSE inject event_id 0x%lx invalid param", event_id);
+
+ ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value);
+ sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
+ "SSE write attr event_id 0x%lx invalid param", event_id);
+
+ ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value);
+ sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
+ "SSE read attr event_id 0x%lx invalid param", event_id);
+}
+
+static void sse_test_invalid_event_id(void)
+{
+ report_prefix_push("event_id");
+
+#if __riscv_xlen > 32
+ test_invalid_event_id(BIT_ULL(32));
+#endif
+
+ report_prefix_pop();
+}
+
+static bool sse_can_inject(uint32_t event_id)
+{
+ struct sbiret ret;
+ unsigned long status;
+
+ ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_STATUS, 1, &status);
+ if (ret.error)
+ return false;
+
+ return !!(status & BIT(SBI_SSE_ATTR_STATUS_INJECT_OFFSET));
+}
+
+static void boot_secondary(void *data)
+{
+ sbi_sse_hart_unmask();
+}
+
+static void sse_check_mask(void)
+{
+ struct sbiret ret;
+
+ /* Upon boot, events are masked; check that */
+ ret = sbi_sse_hart_mask();
+ sbiret_report_error(&ret, SBI_ERR_ALREADY_STOPPED, "SSE hart mask at boot time ok");
+
+ ret = sbi_sse_hart_unmask();
+ sbiret_report_error(&ret, SBI_SUCCESS, "SSE hart unmask no error");
+ ret = sbi_sse_hart_unmask();
+ sbiret_report_error(&ret, SBI_ERR_ALREADY_STARTED, "SSE hart unmask twice error ok");
+
+ ret = sbi_sse_hart_mask();
+ sbiret_report_error(&ret, SBI_SUCCESS, "SSE hart mask no error");
+ ret = sbi_sse_hart_mask();
+ sbiret_report_error(&ret, SBI_ERR_ALREADY_STOPPED, "SSE hart mask twice error ok");
+}
+
+void check_sse(void)
+{
+ unsigned long i, event_id;
+
+ report_prefix_push("sse");
+
+ if (!sbi_probe(SBI_EXT_SSE)) {
+ report_skip("SSE extension not available");
+ report_prefix_pop();
+ return;
+ }
+
+ sse_check_mask();
+
+ /*
+ * Dummy wakeup of all processors: some of them will be targeted by
+ * global events without going through the wakeup call, and this also
+ * unmasks SSE events on all harts.
+ */
+ on_cpus(boot_secondary, NULL);
+
+ sse_test_invalid_event_id();
+
+ for (i = 0; i < ARRAY_SIZE(sse_event_infos); i++) {
+ event_id = sse_event_infos[i].event_id;
+ report_prefix_push(sse_event_infos[i].name);
+ if (!sse_can_inject(event_id)) {
+ report_skip("Event 0x%lx does not support injection", event_id);
+ report_prefix_pop();
+ continue;
+ }
+ sse_event_infos[i].can_inject = true;
+ sse_test_attrs(event_id);
+ sse_test_register_error(event_id);
+ sse_test_inject_simple(event_id);
+ if (sbi_sse_event_is_global(event_id))
+ sse_test_inject_global(event_id);
+ else
+ sse_test_inject_local(event_id);
+
+ report_prefix_pop();
+ }
+
+ sse_test_injection_priority();
+
+ report_prefix_pop();
+}
diff --git a/riscv/sbi.c b/riscv/sbi.c
index 7c7a2d2d..49e81bda 100644
--- a/riscv/sbi.c
+++ b/riscv/sbi.c
@@ -32,6 +32,7 @@
#define HIGH_ADDR_BOUNDARY ((phys_addr_t)1 << 32)
+void check_sse(void);
void check_fwft(void);
static long __labs(long a)
@@ -1439,6 +1440,7 @@ int main(int argc, char **argv)
check_hsm();
check_dbcn();
check_susp();
+ check_sse();
check_fwft();
return report_summary();
--
2.47.2
^ permalink raw reply related [flat|nested] 14+ messages in thread
* Re: [kvm-unit-tests PATCH v7 2/6] riscv: Set .aux.o files as .PRECIOUS
2025-02-14 11:44 ` [kvm-unit-tests PATCH v7 2/6] riscv: Set .aux.o files as .PRECIOUS Clément Léger
@ 2025-02-27 15:36 ` Andrew Jones
0 siblings, 0 replies; 14+ messages in thread
From: Andrew Jones @ 2025-02-27 15:36 UTC (permalink / raw)
To: Clément Léger
Cc: kvm, kvm-riscv, Andrew Jones, Anup Patel, Atish Patra, npiggin
On Fri, Feb 14, 2025 at 12:44:15PM +0100, Clément Léger wrote:
> When compiling, we need to keep the .aux.o files, or they will be
> removed after compilation, causing their dependents to be recompiled.
> Mark these files as .PRECIOUS to keep them.
There was a thread[1] about this 7 months ago or so. I've CC'ed Nicholas
to see if there were any more thoughts on how we should proceed.
[1] https://lore.kernel.org/all/20240612044234.212156-1-npiggin@gmail.com/
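For readers less familiar with GNU make's intermediate-file handling, the behaviour the commit message works around can be reproduced with a toy rule chain (the file and target names below are made up for illustration):

```shell
workdir=/tmp/precious_demo
rm -rf "$workdir" && mkdir -p "$workdir" && cd "$workdir"

# Two chained implicit rules: %.aux.o is only ever created as an
# intermediate step on the way to %.elf.
printf '%%.aux.o:\n\ttouch $@\n%%.elf: %%.aux.o\n\tcp $< $@\n' > Makefile

make demo.elf >/dev/null
# GNU make removes implicit-rule intermediates once the final target is
# built, so demo.aux.o has already been deleted here.
test -f demo.aux.o || echo "intermediate removed"

# With .PRECIOUS the intermediate survives, so dependents are not
# needlessly rebuilt on the next invocation.
printf '.PRECIOUS: %%.aux.o\n' >> Makefile
rm -f demo.elf
make demo.elf >/dev/null
test -f demo.aux.o && echo "intermediate kept"
```

This is the same effect the patch targets: without the `.PRECIOUS` marker, the deleted `.aux.o` forces its dependents to be rebuilt on every make run.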
Thanks,
drew
>
> Signed-off-by: Clément Léger <cleger@rivosinc.com>
> ---
> riscv/Makefile | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/riscv/Makefile b/riscv/Makefile
> index 52718f3f..ae9cf02a 100644
> --- a/riscv/Makefile
> +++ b/riscv/Makefile
> @@ -90,6 +90,7 @@ CFLAGS += -I $(SRCDIR)/lib -I $(SRCDIR)/lib/libfdt -I lib -I $(SRCDIR)/riscv
> asm-offsets = lib/riscv/asm-offsets.h
> include $(SRCDIR)/scripts/asm-offsets.mak
>
> +.PRECIOUS: %.aux.o
> %.aux.o: $(SRCDIR)/lib/auxinfo.c
> $(CC) $(CFLAGS) -c -o $@ $< \
> -DPROGNAME=\"$(notdir $(@:.aux.o=.$(exe)))\" -DAUXFLAGS=$(AUXFLAGS)
> --
> 2.47.2
>
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [kvm-unit-tests PATCH v7 5/6] lib: riscv: Add SBI SSE support
2025-02-14 11:44 ` [kvm-unit-tests PATCH v7 5/6] lib: riscv: Add SBI SSE support Clément Léger
@ 2025-02-27 16:03 ` Andrew Jones
2025-03-06 10:04 ` Clément Léger
0 siblings, 1 reply; 14+ messages in thread
From: Andrew Jones @ 2025-02-27 16:03 UTC (permalink / raw)
To: Clément Léger
Cc: kvm, kvm-riscv, Andrew Jones, Anup Patel, Atish Patra
On Fri, Feb 14, 2025 at 12:44:18PM +0100, Clément Léger wrote:
> Add support for registering and handling SSE events. This will be used
> by the SBI test as well as the upcoming double trap tests.
>
> Signed-off-by: Clément Léger <cleger@rivosinc.com>
> ---
> riscv/Makefile | 2 +
> lib/riscv/asm/csr.h | 1 +
> lib/riscv/asm/sbi-sse.h | 48 +++++++++++++++++++
> lib/riscv/sbi-sse-asm.S | 103 ++++++++++++++++++++++++++++++++++++++++
> lib/riscv/asm-offsets.c | 9 ++++
> lib/riscv/sbi-sse.c | 84 ++++++++++++++++++++++++++++++++
> 6 files changed, 247 insertions(+)
> create mode 100644 lib/riscv/asm/sbi-sse.h
> create mode 100644 lib/riscv/sbi-sse-asm.S
> create mode 100644 lib/riscv/sbi-sse.c
>
> diff --git a/riscv/Makefile b/riscv/Makefile
> index 02d2ac39..ed590ede 100644
> --- a/riscv/Makefile
> +++ b/riscv/Makefile
> @@ -43,6 +43,8 @@ cflatobjs += lib/riscv/setup.o
> cflatobjs += lib/riscv/smp.o
> cflatobjs += lib/riscv/stack.o
> cflatobjs += lib/riscv/timer.o
> +cflatobjs += lib/riscv/sbi-sse-asm.o
> +cflatobjs += lib/riscv/sbi-sse.o
> ifeq ($(ARCH),riscv32)
> cflatobjs += lib/ldiv32.o
> endif
> diff --git a/lib/riscv/asm/csr.h b/lib/riscv/asm/csr.h
> index 16f5ddd7..389d9850 100644
> --- a/lib/riscv/asm/csr.h
> +++ b/lib/riscv/asm/csr.h
> @@ -17,6 +17,7 @@
> #define CSR_TIME 0xc01
>
> #define SR_SIE _AC(0x00000002, UL)
> +#define SR_SPP _AC(0x00000100, UL)
>
> /* Exception cause high bit - is an interrupt if set */
> #define CAUSE_IRQ_FLAG (_AC(1, UL) << (__riscv_xlen - 1))
> diff --git a/lib/riscv/asm/sbi-sse.h b/lib/riscv/asm/sbi-sse.h
> new file mode 100644
> index 00000000..ba18ce27
> --- /dev/null
> +++ b/lib/riscv/asm/sbi-sse.h
> @@ -0,0 +1,48 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/*
> + * SBI SSE library interface
> + *
> + * Copyright (C) 2025, Rivos Inc., Clément Léger <cleger@rivosinc.com>
> + */
> +#ifndef _RISCV_SSE_H_
> +#define _RISCV_SSE_H_
> +
> +#include <asm/sbi.h>
> +
> +typedef void (*sbi_sse_handler_fn)(void *data, struct pt_regs *regs, unsigned int hartid);
> +
> +struct sbi_sse_handler_arg {
> + unsigned long reg_tmp;
> + sbi_sse_handler_fn handler;
> + void *handler_data;
> + void *stack;
> +};
> +
> +extern void sbi_sse_entry(void);
> +
> +static inline bool sbi_sse_event_is_global(uint32_t event_id)
> +{
> + return !!(event_id & SBI_SSE_EVENT_GLOBAL_BIT);
> +}
> +
> +struct sbiret sbi_sse_read_attrs_raw(unsigned long event_id, unsigned long base_attr_id,
> + unsigned long attr_count, unsigned long phys_lo,
> + unsigned long phys_hi);
> +struct sbiret sbi_sse_read_attrs(unsigned long event_id, unsigned long base_attr_id,
> + unsigned long attr_count, unsigned long *values);
> +struct sbiret sbi_sse_write_attrs_raw(unsigned long event_id, unsigned long base_attr_id,
> + unsigned long attr_count, unsigned long phys_lo,
> + unsigned long phys_hi);
> +struct sbiret sbi_sse_write_attrs(unsigned long event_id, unsigned long base_attr_id,
> + unsigned long attr_count, unsigned long *values);
> +struct sbiret sbi_sse_register_raw(unsigned long event_id, unsigned long entry_pc,
> + unsigned long entry_arg);
> +struct sbiret sbi_sse_register(unsigned long event_id, struct sbi_sse_handler_arg *arg);
> +struct sbiret sbi_sse_unregister(unsigned long event_id);
> +struct sbiret sbi_sse_enable(unsigned long event_id);
> +struct sbiret sbi_sse_hart_mask(void);
> +struct sbiret sbi_sse_hart_unmask(void);
> +struct sbiret sbi_sse_inject(unsigned long event_id, unsigned long hart_id);
> +struct sbiret sbi_sse_disable(unsigned long event_id);
nit: I'd put disable up by enable to keep pairs together.
> +
> +#endif /* !_RISCV_SSE_H_ */
> diff --git a/lib/riscv/sbi-sse-asm.S b/lib/riscv/sbi-sse-asm.S
> new file mode 100644
> index 00000000..5f60b839
> --- /dev/null
> +++ b/lib/riscv/sbi-sse-asm.S
> @@ -0,0 +1,103 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * RISC-V SSE events entry point.
> + *
> + * Copyright (C) 2025, Rivos Inc., Clément Léger <cleger@rivosinc.com>
> + */
> +#define __ASSEMBLY__
> +#include <asm/asm.h>
> +#include <asm/csr.h>
> +#include <asm/asm-offsets.h>
nit: asm-offsets above csr if we want to practice alphabetical ordering.
> +#include <generated/sbi-asm-offsets.h>
> +
> +.section .text
> +.global sbi_sse_entry
> +sbi_sse_entry:
> + /* Save stack temporarily */
> + REG_S sp, SBI_SSE_REG_TMP(a7)
> + /* Set entry stack */
> + REG_L sp, SBI_SSE_HANDLER_STACK(a7)
> +
> + addi sp, sp, -(PT_SIZE)
> + REG_S ra, PT_RA(sp)
> + REG_S s0, PT_S0(sp)
> + REG_S s1, PT_S1(sp)
> + REG_S s2, PT_S2(sp)
> + REG_S s3, PT_S3(sp)
> + REG_S s4, PT_S4(sp)
> + REG_S s5, PT_S5(sp)
> + REG_S s6, PT_S6(sp)
> + REG_S s7, PT_S7(sp)
> + REG_S s8, PT_S8(sp)
> + REG_S s9, PT_S9(sp)
> + REG_S s10, PT_S10(sp)
> + REG_S s11, PT_S11(sp)
> + REG_S tp, PT_TP(sp)
> + REG_S t0, PT_T0(sp)
> + REG_S t1, PT_T1(sp)
> + REG_S t2, PT_T2(sp)
> + REG_S t3, PT_T3(sp)
> + REG_S t4, PT_T4(sp)
> + REG_S t5, PT_T5(sp)
> + REG_S t6, PT_T6(sp)
> + REG_S gp, PT_GP(sp)
> + REG_S a0, PT_A0(sp)
> + REG_S a1, PT_A1(sp)
> + REG_S a2, PT_A2(sp)
> + REG_S a3, PT_A3(sp)
> + REG_S a4, PT_A4(sp)
> + REG_S a5, PT_A5(sp)
> + csrr a1, CSR_SEPC
> + REG_S a1, PT_EPC(sp)
> + csrr a2, CSR_SSTATUS
> + REG_S a2, PT_STATUS(sp)
> +
> + REG_L a0, SBI_SSE_REG_TMP(a7)
> + REG_S a0, PT_SP(sp)
> +
> + REG_L t0, SBI_SSE_HANDLER(a7)
> + REG_L a0, SBI_SSE_HANDLER_DATA(a7)
> + mv a1, sp
> + mv a2, a6
> + jalr t0
> +
> + REG_L a1, PT_EPC(sp)
> + REG_L a2, PT_STATUS(sp)
> + csrw CSR_SEPC, a1
> + csrw CSR_SSTATUS, a2
> +
> + REG_L ra, PT_RA(sp)
> + REG_L s0, PT_S0(sp)
> + REG_L s1, PT_S1(sp)
> + REG_L s2, PT_S2(sp)
> + REG_L s3, PT_S3(sp)
> + REG_L s4, PT_S4(sp)
> + REG_L s5, PT_S5(sp)
> + REG_L s6, PT_S6(sp)
> + REG_L s7, PT_S7(sp)
> + REG_L s8, PT_S8(sp)
> + REG_L s9, PT_S9(sp)
> + REG_L s10, PT_S10(sp)
> + REG_L s11, PT_S11(sp)
> + REG_L tp, PT_TP(sp)
> + REG_L t0, PT_T0(sp)
> + REG_L t1, PT_T1(sp)
> + REG_L t2, PT_T2(sp)
> + REG_L t3, PT_T3(sp)
> + REG_L t4, PT_T4(sp)
> + REG_L t5, PT_T5(sp)
> + REG_L t6, PT_T6(sp)
> + REG_L gp, PT_GP(sp)
> + REG_L a0, PT_A0(sp)
> + REG_L a1, PT_A1(sp)
> + REG_L a2, PT_A2(sp)
> + REG_L a3, PT_A3(sp)
> + REG_L a4, PT_A4(sp)
> + REG_L a5, PT_A5(sp)
> +
> + REG_L sp, PT_SP(sp)
> +
> + li a7, ASM_SBI_EXT_SSE
> + li a6, ASM_SBI_EXT_SSE_COMPLETE
> + ecall
nit: Format the asm with a tab after the mnemonic, like save_context/restore_context do.
> +
> diff --git a/lib/riscv/asm-offsets.c b/lib/riscv/asm-offsets.c
> index 6c511c14..77e5d26d 100644
> --- a/lib/riscv/asm-offsets.c
> +++ b/lib/riscv/asm-offsets.c
> @@ -4,6 +4,7 @@
> #include <asm/processor.h>
> #include <asm/ptrace.h>
> #include <asm/smp.h>
> +#include <asm/sbi-sse.h>
nit: alphabetize
>
> int main(void)
> {
> @@ -63,5 +64,13 @@ int main(void)
> OFFSET(THREAD_INFO_HARTID, thread_info, hartid);
> DEFINE(THREAD_INFO_SIZE, sizeof(struct thread_info));
>
> + DEFINE(ASM_SBI_EXT_SSE, SBI_EXT_SSE);
> + DEFINE(ASM_SBI_EXT_SSE_COMPLETE, SBI_EXT_SSE_COMPLETE);
> +
> + OFFSET(SBI_SSE_REG_TMP, sbi_sse_handler_arg, reg_tmp);
> + OFFSET(SBI_SSE_HANDLER, sbi_sse_handler_arg, handler);
> + OFFSET(SBI_SSE_HANDLER_DATA, sbi_sse_handler_arg, handler_data);
> + OFFSET(SBI_SSE_HANDLER_STACK, sbi_sse_handler_arg, stack);
> +
> return 0;
> }
> diff --git a/lib/riscv/sbi-sse.c b/lib/riscv/sbi-sse.c
> new file mode 100644
> index 00000000..bc4dd10e
> --- /dev/null
> +++ b/lib/riscv/sbi-sse.c
I think we could just put these wrappers in lib/riscv/sbi.c, but OK.
> @@ -0,0 +1,84 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * SBI SSE library
> + *
> + * Copyright (C) 2025, Rivos Inc., Clément Léger <cleger@rivosinc.com>
> + */
> +#include <asm/sbi.h>
> +#include <asm/sbi-sse.h>
> +#include <asm/io.h>
nit: alphabetize
> +
> +struct sbiret sbi_sse_read_attrs_raw(unsigned long event_id, unsigned long base_attr_id,
> + unsigned long attr_count, unsigned long phys_lo,
> + unsigned long phys_hi)
> +{
> + return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_READ_ATTRS, event_id, base_attr_id, attr_count,
> + phys_lo, phys_hi, 0);
> +}
> +
> +struct sbiret sbi_sse_read_attrs(unsigned long event_id, unsigned long base_attr_id,
> + unsigned long attr_count, unsigned long *values)
> +{
> + phys_addr_t p = virt_to_phys(values);
> +
> + return sbi_sse_read_attrs_raw(event_id, base_attr_id, attr_count, lower_32_bits(p),
> + upper_32_bits(p));
> +}
> +
> +struct sbiret sbi_sse_write_attrs_raw(unsigned long event_id, unsigned long base_attr_id,
> + unsigned long attr_count, unsigned long phys_lo,
> + unsigned long phys_hi)
> +{
> + return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_WRITE_ATTRS, event_id, base_attr_id, attr_count,
> + phys_lo, phys_hi, 0);
> +}
> +
> +struct sbiret sbi_sse_write_attrs(unsigned long event_id, unsigned long base_attr_id,
> + unsigned long attr_count, unsigned long *values)
> +{
> + phys_addr_t p = virt_to_phys(values);
> +
> + return sbi_sse_write_attrs_raw(event_id, base_attr_id, attr_count, lower_32_bits(p),
> + upper_32_bits(p));
> +}
> +
> +struct sbiret sbi_sse_register_raw(unsigned long event_id, unsigned long entry_pc,
> + unsigned long entry_arg)
> +{
> + return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_REGISTER, event_id, entry_pc, entry_arg, 0, 0, 0);
> +}
> +
> +struct sbiret sbi_sse_register(unsigned long event_id, struct sbi_sse_handler_arg *arg)
> +{
We should sanity check that the physical addresses fit in an unsigned long
on rv32, e.g.

	phys_addr_t entry = virt_to_phys(sbi_sse_entry);
	phys_addr_t phys_arg = virt_to_phys(arg);

	assert(__riscv_xlen > 32 || upper_32_bits(entry) == 0);
	assert(__riscv_xlen > 32 || upper_32_bits(phys_arg) == 0);

	return sbi_sse_register_raw(event_id, entry, phys_arg);
> + return sbi_sse_register_raw(event_id, virt_to_phys(sbi_sse_entry), virt_to_phys(arg));
> +}
> +
> +struct sbiret sbi_sse_unregister(unsigned long event_id)
> +{
> + return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_UNREGISTER, event_id, 0, 0, 0, 0, 0);
> +}
> +
> +struct sbiret sbi_sse_enable(unsigned long event_id)
> +{
> + return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_ENABLE, event_id, 0, 0, 0, 0, 0);
> +}
> +
> +struct sbiret sbi_sse_disable(unsigned long event_id)
> +{
> + return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_DISABLE, event_id, 0, 0, 0, 0, 0);
> +}
> +
> +struct sbiret sbi_sse_hart_mask(void)
> +{
> + return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_HART_MASK, 0, 0, 0, 0, 0, 0);
> +}
> +
> +struct sbiret sbi_sse_hart_unmask(void)
> +{
> + return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_HART_UNMASK, 0, 0, 0, 0, 0, 0);
> +}
> +
> +struct sbiret sbi_sse_inject(unsigned long event_id, unsigned long hart_id)
> +{
> + return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_INJECT, event_id, hart_id, 0, 0, 0, 0);
> +}
> --
> 2.47.2
>
Besides the nits and the sanity checking for rv32,
Reviewed-by: Andrew Jones <andrew.jones@linux.dev>
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [kvm-unit-tests PATCH v7 6/6] riscv: sbi: Add SSE extension tests
2025-02-14 11:44 ` [kvm-unit-tests PATCH v7 6/6] riscv: sbi: Add SSE extension tests Clément Léger
@ 2025-02-28 17:51 ` Andrew Jones
2025-03-06 14:32 ` Clément Léger
0 siblings, 1 reply; 14+ messages in thread
From: Andrew Jones @ 2025-02-28 17:51 UTC (permalink / raw)
To: Clément Léger
Cc: kvm, kvm-riscv, Andrew Jones, Anup Patel, Atish Patra
Global style note: For casts we prefer no space between the (type) and
the variable, i.e. (type)var
On Fri, Feb 14, 2025 at 12:44:19PM +0100, Clément Léger wrote:
> Add SBI SSE extension tests for the following features:
> - Test attributes errors (invalid values, RO, etc)
> - Registration errors
> - Simple events (register, enable, inject)
> - Events with different priorities
> - Global events dispatch on different harts
> - Local events on all harts
> - Hart mask/unmask events
>
> Signed-off-by: Clément Léger <cleger@rivosinc.com>
> ---
> riscv/Makefile | 1 +
> riscv/sbi-sse.c | 1054 +++++++++++++++++++++++++++++++++++++++++++++++
> riscv/sbi.c | 2 +
> 3 files changed, 1057 insertions(+)
> create mode 100644 riscv/sbi-sse.c
>
> diff --git a/riscv/Makefile b/riscv/Makefile
> index ed590ede..ea62e05f 100644
> --- a/riscv/Makefile
> +++ b/riscv/Makefile
> @@ -18,6 +18,7 @@ tests += $(TEST_DIR)/sieve.$(exe)
> all: $(tests)
>
> $(TEST_DIR)/sbi-deps = $(TEST_DIR)/sbi-asm.o $(TEST_DIR)/sbi-fwft.o
> +$(TEST_DIR)/sbi-deps += $(TEST_DIR)/sbi-sse.o
>
> # When built for EFI sieve needs extra memory, run with e.g. '-m 256' on QEMU
> $(TEST_DIR)/sieve.$(exe): AUXFLAGS = 0x1
> diff --git a/riscv/sbi-sse.c b/riscv/sbi-sse.c
> new file mode 100644
> index 00000000..27f47f73
> --- /dev/null
> +++ b/riscv/sbi-sse.c
> @@ -0,0 +1,1054 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * SBI SSE testsuite
> + *
> + * Copyright (C) 2025, Rivos Inc., Clément Léger <cleger@rivosinc.com>
> + */
> +#include <alloc.h>
> +#include <alloc_page.h>
> +#include <bitops.h>
> +#include <cpumask.h>
> +#include <libcflat.h>
> +#include <on-cpus.h>
> +#include <stdlib.h>
> +
> +#include <asm/barrier.h>
> +#include <asm/io.h>
> +#include <asm/page.h>
> +#include <asm/processor.h>
> +#include <asm/sbi.h>
> +#include <asm/sbi-sse.h>
> +#include <asm/setup.h>
> +
> +#include "sbi-tests.h"
> +
> +#define SSE_STACK_SIZE PAGE_SIZE
> +
> +void check_sse(void);
Since we now have an #ifndef __ASSEMBLY__ section in riscv/sbi-tests.h we
can just put this prototype there.
> +
> +struct sse_event_info {
> + uint32_t event_id;
> + const char *name;
> + bool can_inject;
> +};
> +
> +static struct sse_event_info sse_event_infos[] = {
> + {
> + .event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS,
> + .name = "local_high_prio_ras",
> + },
> + {
> + .event_id = SBI_SSE_EVENT_LOCAL_DOUBLE_TRAP,
> + .name = "double_trap",
> + },
> + {
> + .event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS,
> + .name = "global_high_prio_ras",
> + },
> + {
> + .event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW,
> + .name = "local_pmu_overflow",
> + },
> + {
> + .event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS,
> + .name = "local_low_prio_ras",
> + },
> + {
> + .event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS,
> + .name = "global_low_prio_ras",
> + },
> + {
> + .event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE,
> + .name = "local_software",
> + },
> + {
> + .event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE,
> + .name = "global_software",
> + },
> +};
> +
> +static const char *const attr_names[] = {
> + [SBI_SSE_ATTR_STATUS] = "status",
> + [SBI_SSE_ATTR_PRIORITY] = "priority",
> + [SBI_SSE_ATTR_CONFIG] = "config",
> + [SBI_SSE_ATTR_PREFERRED_HART] = "preferred_hart",
> + [SBI_SSE_ATTR_ENTRY_PC] = "entry_pc",
> + [SBI_SSE_ATTR_ENTRY_ARG] = "entry_arg",
> + [SBI_SSE_ATTR_INTERRUPTED_SEPC] = "interrupted_sepc",
> + [SBI_SSE_ATTR_INTERRUPTED_FLAGS] = "interrupted_flags",
> + [SBI_SSE_ATTR_INTERRUPTED_A6] = "interrupted_a6",
> + [SBI_SSE_ATTR_INTERRUPTED_A7] = "interrupted_a7",
> +};
> +
> +static const unsigned long ro_attrs[] = {
> + SBI_SSE_ATTR_STATUS,
> + SBI_SSE_ATTR_ENTRY_PC,
> + SBI_SSE_ATTR_ENTRY_ARG,
> +};
> +
> +static const unsigned long interrupted_attrs[] = {
> + SBI_SSE_ATTR_INTERRUPTED_SEPC,
> + SBI_SSE_ATTR_INTERRUPTED_FLAGS,
> + SBI_SSE_ATTR_INTERRUPTED_A6,
> + SBI_SSE_ATTR_INTERRUPTED_A7,
> +};
> +
> +static const unsigned long interrupted_flags[] = {
> + SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPP,
> + SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPIE,
> + SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPELP,
> + SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SDT,
> + SBI_SSE_ATTR_INTERRUPTED_FLAGS_HSTATUS_SPV,
> + SBI_SSE_ATTR_INTERRUPTED_FLAGS_HSTATUS_SPVP,
> +};
> +
> +static struct sse_event_info *sse_event_get_info(uint32_t event_id)
> +{
> + int i;
> +
> + for (i = 0; i < ARRAY_SIZE(sse_event_infos); i++) {
> + if (sse_event_infos[i].event_id == event_id)
> + return &sse_event_infos[i];
> + }
> +
> + assert_msg(false, "Invalid event id: %d", event_id);
> +}
> +
> +static const char *sse_event_name(uint32_t event_id)
> +{
> + return sse_event_get_info(event_id)->name;
> +}
> +
> +static bool sse_event_can_inject(uint32_t event_id)
> +{
> + return sse_event_get_info(event_id)->can_inject;
> +}
> +
> +static struct sbiret sse_event_get_state(uint32_t event_id, enum sbi_sse_state *state)
> +{
> + struct sbiret ret;
> + unsigned long status;
> +
> + ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_STATUS, 1, &status);
> + if (ret.error) {
> + sbiret_report_error(&ret, 0, "Get SSE event status no error");
> + return ret;
> + }
> +
> + *state = status & SBI_SSE_ATTR_STATUS_STATE_MASK;
> +
> + return ret;
> +}
> +
> +static void sse_global_event_set_current_hart(uint32_t event_id)
> +{
> + struct sbiret ret;
> + unsigned long current_hart = current_thread_info()->hartid;
> +
> + if (!sbi_sse_event_is_global(event_id))
> + return;
> +
> + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &current_hart);
> + if (ret.error)
> + report_abort("set preferred hart failure, error %ld", ret.error);
Are we sure we want to abort? Or should we try to propagate an error from
here and just bail out of SSE tests?
> +}
> +
> +static bool sse_check_state(uint32_t event_id, unsigned long expected_state)
> +{
> + struct sbiret ret;
> + enum sbi_sse_state state;
> +
> + ret = sse_event_get_state(event_id, &state);
> + if (ret.error)
> + return false;
> + report(state == expected_state, "SSE event status == %ld", expected_state);
> +
> + return state == expected_state;
Can just write
return report(state == expected_state, "SSE event status == %ld", expected_state);
> +}
> +
> +static bool sse_event_pending(uint32_t event_id)
> +{
> + struct sbiret ret;
> + unsigned long status;
> +
> + ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_STATUS, 1, &status);
> + if (ret.error) {
> + report_fail("Failed to get SSE event status, error %ld", ret.error);
sbiret_report_error()
> + return false;
> + }
> +
> + return !!(status & BIT(SBI_SSE_ATTR_STATUS_PENDING_OFFSET));
> +}
> +
> +static void *sse_alloc_stack(void)
> +{
> + /*
> + * We assume that SSE_STACK_SIZE always fit in one page. This page will
> + * always be decrement before storing anything on it in sse-entry.S.
decremented
> + */
> + assert(SSE_STACK_SIZE <= PAGE_SIZE);
> +
> + return (alloc_page() + SSE_STACK_SIZE);
> +}
> +
> +static void sse_free_stack(void *stack)
> +{
> + free_page(stack - SSE_STACK_SIZE);
> +}
> +
> +static void sse_read_write_test(uint32_t event_id, unsigned long attr, unsigned long attr_count,
> + unsigned long *value, long expected_error, const char *str)
> +{
> + struct sbiret ret;
> +
> + ret = sbi_sse_read_attrs(event_id, attr, attr_count, value);
> + sbiret_report_error(&ret, expected_error, "Read %s error", str);
> +
> + ret = sbi_sse_write_attrs(event_id, attr, attr_count, value);
> + sbiret_report_error(&ret, expected_error, "Write %s error", str);
> +}
> +
> +#define ALL_ATTRS_COUNT (SBI_SSE_ATTR_INTERRUPTED_A7 + 1)
> +
> +static void sse_test_attrs(uint32_t event_id)
> +{
> + unsigned long value = 0;
> + struct sbiret ret;
> + void *ptr;
> + unsigned long values[ALL_ATTRS_COUNT];
> + unsigned int i;
> + const char *max_hart_str;
> + const char *attr_name;
> +
> + report_prefix_push("attrs");
> +
> + for (i = 0; i < ARRAY_SIZE(ro_attrs); i++) {
> + ret = sbi_sse_write_attrs(event_id, ro_attrs[i], 1, &value);
> + sbiret_report_error(&ret, SBI_ERR_BAD_RANGE, "RO attribute %s not writable",
> + attr_names[ro_attrs[i]]);
> + }
> +
> + ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_STATUS, ALL_ATTRS_COUNT, values);
> + sbiret_report_error(&ret, SBI_SUCCESS, "Read multiple attributes no error");
> +
> + for (i = SBI_SSE_ATTR_STATUS; i <= SBI_SSE_ATTR_INTERRUPTED_A7; i++) {
> + ret = sbi_sse_read_attrs(event_id, i, 1, &value);
> + attr_name = attr_names[i];
> +
> + sbiret_report_error(&ret, SBI_SUCCESS, "Read single attribute %s", attr_name);
> + if (values[i] != value)
> + report_fail("Attribute 0x%x single value read (0x%lx) differs from the one read with multiple attributes (0x%lx)",
> + i, value, values[i]);
> + /*
> + * Preferred hart reset value is defined by SBI vendor and
> + * status injectable bit also depends on the SBI implementation
The spec says the STATUS attribute reset value is zero. In any case, if
certain bits can be tested then we should test them, so we can mask off
the injectable bit and still check for zero.
> + */
> + if (i != SBI_SSE_ATTR_STATUS && i != SBI_SSE_ATTR_PREFERRED_HART)
> + report(value == 0, "Attribute %s reset value is 0", attr_name);
> + }
> +
> +#if __riscv_xlen > 32
> + value = BIT(32);
> + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value);
> + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Write invalid prio > 0xFFFFFFFF error");
> +#endif
> +
> + value = ~SBI_SSE_ATTR_CONFIG_ONESHOT;
> + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_CONFIG, 1, &value);
> + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Write invalid config value error");
> +
> + if (sbi_sse_event_is_global(event_id)) {
> + max_hart_str = getenv("MAX_HART_ID");
This should be named "INVALID_HARTID"
> + if (!max_hart_str)
> + value = 0xFFFFFFFFUL;
> + else
> + value = strtoul(max_hart_str, NULL, 0);
> +
> + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &value);
> + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Set invalid hart id error");
The spec doesn't say you can get invalid-param for a bad hartid.
> + } else {
> + /* Set Hart on local event -> RO */
> + value = current_thread_info()->hartid;
> + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &value);
> + sbiret_report_error(&ret, SBI_ERR_BAD_RANGE, "Set hart id on local event error");
The spec doesn't say you can get bad-range for a valid hartid when set
locally.
> + }
> +
> + /* Set/get flags, sepc, a6, a7 */
> + for (i = 0; i < ARRAY_SIZE(interrupted_attrs); i++) {
> + attr_name = attr_names[interrupted_attrs[i]];
> + ret = sbi_sse_read_attrs(event_id, interrupted_attrs[i], 1, &value);
> + sbiret_report_error(&ret, SBI_SUCCESS, "Get interrupted %s no error", attr_name);
> +
> + value = ARRAY_SIZE(interrupted_attrs) - i;
I don't understand how this creates an invalid value for all interrupted attrs?
> + ret = sbi_sse_write_attrs(event_id, interrupted_attrs[i], 1, &value);
> + sbiret_report_error(&ret, SBI_ERR_INVALID_STATE,
The spec doesn't state SBI_ERR_INVALID_STATE will ever be returned for
sbi_sse_write_attrs()
> + "Set attribute %s invalid state error", attr_name);
> + }
> +
> + sse_read_write_test(event_id, SBI_SSE_ATTR_STATUS, 0, &value, SBI_ERR_INVALID_PARAM,
> + "attribute attr_count == 0");
> + sse_read_write_test(event_id, SBI_SSE_ATTR_INTERRUPTED_A7 + 1, 1, &value, SBI_ERR_BAD_RANGE,
> + "invalid attribute");
> +
> +#if __riscv_xlen > 32
> + sse_read_write_test(event_id, BIT(32), 1, &value, SBI_ERR_INVALID_PARAM,
> + "attribute id > 32 bits");
> + sse_read_write_test(event_id, SBI_SSE_ATTR_STATUS, BIT(32), &value, SBI_ERR_INVALID_PARAM,
> + "attribute count > 32 bits");
I think you plan to change these to expect them to behave like
base_attr_id=1 and attr_count=1.
> +#endif
> +
> + /* Misaligned pointer address */
> + ptr = (void *) &value;
> + ptr += 1;
> + sse_read_write_test(event_id, SBI_SSE_ATTR_STATUS, 1, ptr, SBI_ERR_INVALID_ADDRESS,
> + "attribute with invalid address");
> +
> + report_prefix_pop();
> +}
> +
> +static void sse_test_register_error(uint32_t event_id)
> +{
> + struct sbiret ret;
> +
> + report_prefix_push("register");
> +
> + ret = sbi_sse_unregister(event_id);
> + sbiret_report_error(&ret, SBI_ERR_INVALID_STATE, "SSE unregister non registered event");
non-registered
> +
> + ret = sbi_sse_register_raw(event_id, 0x1, 0);
> + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "SSE register misaligned entry");
> +
> + ret = sbi_sse_register_raw(event_id, virt_to_phys(sbi_sse_entry), 0);
> + sbiret_report_error(&ret, SBI_SUCCESS, "SSE register ok");
> + if (ret.error)
> + goto done;
> +
> + ret = sbi_sse_register_raw(event_id, virt_to_phys(sbi_sse_entry), 0);
> + sbiret_report_error(&ret, SBI_ERR_INVALID_STATE, "SSE register twice failure");
"SSE register used event"
> +
> + ret = sbi_sse_unregister(event_id);
> + sbiret_report_error(&ret, SBI_SUCCESS, "SSE unregister ok");
> +
> +done:
> + report_prefix_pop();
> +}
> +
> +struct sse_simple_test_arg {
> + bool done;
> + unsigned long expected_a6;
> + uint32_t event_id;
> +};
> +
> +static void sse_simple_handler(void *data, struct pt_regs *regs, unsigned int hartid)
> +{
> + struct sse_simple_test_arg *arg = data;
> + int i;
> + struct sbiret ret;
> + const char *attr_name;
> + uint32_t event_id = READ_ONCE(arg->event_id), attr;
> + unsigned long value, prev_value, flags;
> + unsigned long interrupted_state[ARRAY_SIZE(interrupted_attrs)];
> + unsigned long modified_state[ARRAY_SIZE(interrupted_attrs)] = {4, 3, 2, 1};
> + unsigned long tmp_state[ARRAY_SIZE(interrupted_attrs)];
> +
> + report((regs->status & SR_SPP) == SR_SPP, "Interrupted S-mode");
> + report(hartid == current_thread_info()->hartid, "Hartid correctly passed");
> + sse_check_state(event_id, SBI_SSE_STATE_RUNNING);
> + report(!sse_event_pending(event_id), "Event not pending");
> +
> + /* Read full interrupted state */
> + ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_INTERRUPTED_SEPC,
> + ARRAY_SIZE(interrupted_attrs), interrupted_state);
> + sbiret_report_error(&ret, SBI_SUCCESS, "Save full interrupted state from SSE handler ok");
> +
> + /* Write full modified state and read it */
> + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_INTERRUPTED_SEPC,
> + ARRAY_SIZE(modified_state), modified_state);
> + sbiret_report_error(&ret, SBI_SUCCESS,
> + "Write full interrupted state from SSE handler ok");
> +
> + ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_INTERRUPTED_SEPC,
> + ARRAY_SIZE(tmp_state), tmp_state);
> + sbiret_report_error(&ret, SBI_SUCCESS, "Read full modified state from SSE handler ok");
> +
> + report(memcmp(tmp_state, modified_state, sizeof(modified_state)) == 0,
> + "Full interrupted state successfully written");
The "Full..." string should line up under report(, not under memcmp(.
> +
> + /* Restore full saved state */
> + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_INTERRUPTED_SEPC,
> + ARRAY_SIZE(interrupted_attrs), interrupted_state);
> + sbiret_report_error(&ret, SBI_SUCCESS,
> + "Full interrupted state restore from SSE handler ok");
> +
> + /* We test SBI_SSE_ATTR_INTERRUPTED_FLAGS below with specific flag values */
> + for (i = 0; i < ARRAY_SIZE(interrupted_attrs); i++) {
> + attr = interrupted_attrs[i];
> + if (attr == SBI_SSE_ATTR_INTERRUPTED_FLAGS)
> + continue;
> +
> + attr_name = attr_names[attr];
> +
> + ret = sbi_sse_read_attrs(event_id, attr, 1, &prev_value);
> + sbiret_report_error(&ret, SBI_SUCCESS, "Get attr %s no error", attr_name);
> +
> + value = 0xDEADBEEF + i;
> + ret = sbi_sse_write_attrs(event_id, attr, 1, &value);
> + sbiret_report_error(&ret, SBI_SUCCESS, "Set attr %s no error", attr_name);
> +
> + ret = sbi_sse_read_attrs(event_id, attr, 1, &value);
> + sbiret_report_error(&ret, SBI_SUCCESS, "Get attr %s no error", attr_name);
> + report(value == 0xDEADBEEF + i, "Get attr %s no error, value: 0x%lx", attr_name,
> + value);
> +
> + ret = sbi_sse_write_attrs(event_id, attr, 1, &prev_value);
> + sbiret_report_error(&ret, SBI_SUCCESS, "Restore attr %s value no error", attr_name);
> + }
> +
> + /* Test all flags allowed for SBI_SSE_ATTR_INTERRUPTED_FLAGS*/
nit: missing space before "*/".
> + attr = SBI_SSE_ATTR_INTERRUPTED_FLAGS;
> + ret = sbi_sse_read_attrs(event_id, attr, 1, &prev_value);
> + sbiret_report_error(&ret, SBI_SUCCESS, "Save interrupted flags no error");
> +
> + for (i = 0; i < ARRAY_SIZE(interrupted_flags); i++) {
> + flags = interrupted_flags[i];
> + ret = sbi_sse_write_attrs(event_id, attr, 1, &flags);
> + sbiret_report_error(&ret, SBI_SUCCESS,
> + "Set interrupted flags bit 0x%lx value no error", flags);
> + ret = sbi_sse_read_attrs(event_id, attr, 1, &value);
> + sbiret_report_error(&ret, SBI_SUCCESS, "Get interrupted flags after set no error");
> + report(value == flags, "interrupted flags modified value ok: 0x%lx", value);
Do we also need to test with more than one flag set at a time?
> + }
> +
> + /* Write invalid bit in flag register */
> + flags = SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SDT << 1;
> + ret = sbi_sse_write_attrs(event_id, attr, 1, &flags);
> + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Set invalid flags bit 0x%lx value error",
> + flags);
> +#if __riscv_xlen > 32
> + flags = BIT(32);
> + ret = sbi_sse_write_attrs(event_id, attr, 1, &flags);
> + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Set invalid flags bit 0x%lx value error",
This should have a different report string than the test above.
> + flags);
> +#endif
> +
> + ret = sbi_sse_write_attrs(event_id, attr, 1, &prev_value);
> + sbiret_report_error(&ret, SBI_SUCCESS, "Restore interrupted flags no error");
> +
> + /* Try to change HARTID/Priority while running */
> + if (sbi_sse_event_is_global(event_id)) {
> + value = current_thread_info()->hartid;
> + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &value);
> + sbiret_report_error(&ret, SBI_ERR_INVALID_STATE, "Set hart id while running error");
> + }
SBI_ERR_INVALID_STATE is not listed as a possible error for write_attrs.
> +
> + value = 0;
> + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value);
> + sbiret_report_error(&ret, SBI_ERR_INVALID_STATE, "Set priority while running error");
> +
> + report(interrupted_state[2] == READ_ONCE(arg->expected_a6),
> + "Interrupted state a6 ok, expected 0x%lx, got 0x%lx",
> + READ_ONCE(arg->expected_a6), interrupted_state[2]);
Could just READ_ONCE expected_a6 once and reuse the value.
> +
> + report(interrupted_state[3] == SBI_EXT_SSE,
> + "Interrupted state a7 ok, expected 0x%x, got 0x%lx", SBI_EXT_SSE,
> + interrupted_state[3]);
> +
> + WRITE_ONCE(arg->done, true);
> +}
> +
> +static void sse_test_inject_simple(uint32_t event_id)
> +{
> + unsigned long value;
> + struct sbiret ret;
> + struct sse_simple_test_arg test_arg = {.event_id = event_id};
> + struct sbi_sse_handler_arg args = {
> + .handler = sse_simple_handler,
> + .handler_data = (void *) &test_arg,
> + .stack = sse_alloc_stack(),
> + };
> +
> + report_prefix_push("simple");
> +
> + if (!sse_check_state(event_id, SBI_SSE_STATE_UNUSED))
> + goto done;
> +
> + ret = sbi_sse_register(event_id, &args);
> + if (!sbiret_report_error(&ret, SBI_SUCCESS, "SSE register no error"))
> + goto done;
> +
> + if (!sse_check_state(event_id, SBI_SSE_STATE_REGISTERED))
> + goto done;
> +
> + /* Be sure global events are targeting the current hart */
> + sse_global_event_set_current_hart(event_id);
> +
> + ret = sbi_sse_enable(event_id);
> + if (!sbiret_report_error(&ret, SBI_SUCCESS, "SSE enable no error"))
> + goto done;
> +
> + if (!sse_check_state(event_id, SBI_SSE_STATE_ENABLED))
> + goto done;
> +
> + ret = sbi_sse_hart_mask();
> + if (!sbiret_report_error(&ret, SBI_SUCCESS, "SSE hart mask no error"))
> + goto done;
> +
> + ret = sbi_sse_inject(event_id, current_thread_info()->hartid);
> + if (!sbiret_report_error(&ret, SBI_SUCCESS, "SSE injection masked no error"))
> + goto done;
> +
> + barrier();
> + report(test_arg.done == 0, "SSE event masked not handled");
Could instead drop the barrier() calls and use READ/WRITE_ONCE with all
test_arg member accesses.
> +
> + /*
> + * When unmasking the SSE events, we expect it to be injected
> + * immediately so a6 should be SBI_EXT_SBI_SSE_HART_UNMASK
> + */
> + test_arg.expected_a6 = SBI_EXT_SSE_HART_UNMASK;
> + ret = sbi_sse_hart_unmask();
> + if (!sbiret_report_error(&ret, SBI_SUCCESS, "SSE hart unmask no error"))
> + goto done;
> +
> + barrier();
> + report(test_arg.done == 1, "SSE event unmasked handled");
> + test_arg.done = 0;
> + test_arg.expected_a6 = SBI_EXT_SSE_INJECT;
> +
> + /* Set as oneshot and verify it is disabled */
> + ret = sbi_sse_disable(event_id);
> + sbiret_report_error(&ret, SBI_SUCCESS, "Disable event ok");
> + value = SBI_SSE_ATTR_CONFIG_ONESHOT;
> + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_CONFIG, 1, &value);
> + sbiret_report_error(&ret, SBI_SUCCESS, "Set event attribute as ONESHOT");
> + ret = sbi_sse_enable(event_id);
> + sbiret_report_error(&ret, SBI_SUCCESS, "Enable event ok");
> +
> + ret = sbi_sse_inject(event_id, current_thread_info()->hartid);
> + if (!sbiret_report_error(&ret, SBI_SUCCESS, "SSE second injection no error"))
> + goto done;
> +
> + barrier();
> + report(test_arg.done == 1, "SSE event handled ok");
> + test_arg.done = 0;
> +
> + if (!sse_check_state(event_id, SBI_SSE_STATE_REGISTERED))
> + goto done;
> +
> + /* Clear ONESHOT FLAG */
> + value = 0;
> + sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_CONFIG, 1, &value);
Why clear the oneshot flag before unregistering (and not check if the attr
write was successful)?
> +
> + ret = sbi_sse_unregister(event_id);
> + if (!sbiret_report_error(&ret, SBI_SUCCESS, "SSE unregister no error"))
> + goto done;
> +
> + sse_check_state(event_id, SBI_SSE_STATE_UNUSED);
> +
> +done:
Is it ok to leave this function with an event registered/enabled? If not,
then some of the gotos above should jump to labels which disable and
unregister first.
> + sse_free_stack(args.stack);
> + report_prefix_pop();
> +}
> +
> +struct sse_foreign_cpu_test_arg {
> + bool done;
> + unsigned int expected_cpu;
> + uint32_t event_id;
> +};
> +
> +static void sse_foreign_cpu_handler(void *data, struct pt_regs *regs, unsigned int hartid)
> +{
> + struct sse_foreign_cpu_test_arg *arg = data;
> + unsigned int expected_cpu;
> +
> + /* For arg content to be visible */
> + smp_rmb();
> + expected_cpu = READ_ONCE(arg->expected_cpu);
> + report(expected_cpu == current_thread_info()->cpu,
> + "Received event on CPU (%d), expected CPU (%d)", current_thread_info()->cpu,
> + expected_cpu);
> +
> + WRITE_ONCE(arg->done, true);
> + /* For arg update to be visible for other CPUs */
> + smp_wmb();
> +}
> +
> +struct sse_local_per_cpu {
> + struct sbi_sse_handler_arg args;
> + struct sbiret ret;
> + struct sse_foreign_cpu_test_arg handler_arg;
> +};
> +
> +static void sse_register_enable_local(void *data)
> +{
> + struct sbiret ret;
> + struct sse_local_per_cpu *cpu_args = data;
> + struct sse_local_per_cpu *cpu_arg = &cpu_args[current_thread_info()->cpu];
> + uint32_t event_id = cpu_arg->handler_arg.event_id;
> +
> + ret = sbi_sse_register(event_id, &cpu_arg->args);
> + WRITE_ONCE(cpu_arg->ret, ret);
> + if (ret.error)
> + return;
> +
> + ret = sbi_sse_enable(event_id);
> + WRITE_ONCE(cpu_arg->ret, ret);
> +}
> +
> +static void sbi_sse_disable_unregister_local(void *data)
> +{
> + struct sbiret ret;
> + struct sse_local_per_cpu *cpu_args = data;
> + struct sse_local_per_cpu *cpu_arg = &cpu_args[current_thread_info()->cpu];
> + uint32_t event_id = cpu_arg->handler_arg.event_id;
> +
> + ret = sbi_sse_disable(event_id);
> + WRITE_ONCE(cpu_arg->ret, ret);
> + if (ret.error)
> + return;
> +
> + ret = sbi_sse_unregister(event_id);
> + WRITE_ONCE(cpu_arg->ret, ret);
> +}
> +
> +static void sse_test_inject_local(uint32_t event_id)
> +{
> + int cpu;
> + struct sbiret ret;
> + struct sse_local_per_cpu *cpu_args, *cpu_arg;
> + struct sse_foreign_cpu_test_arg *handler_arg;
> +
> + cpu_args = calloc(NR_CPUS, sizeof(struct sbi_sse_handler_arg));
> +
> + report_prefix_push("local_dispatch");
> + for_each_online_cpu(cpu) {
> + cpu_arg = &cpu_args[cpu];
> + cpu_arg->handler_arg.event_id = event_id;
> + cpu_arg->args.stack = sse_alloc_stack();
> + cpu_arg->args.handler = sse_foreign_cpu_handler;
> + cpu_arg->args.handler_data = (void *)&cpu_arg->handler_arg;
> + }
> +
> + on_cpus(sse_register_enable_local, cpu_args);
> + for_each_online_cpu(cpu) {
> + cpu_arg = &cpu_args[cpu];
> + ret = cpu_arg->ret;
> + if (ret.error)
> + report_abort("CPU failed to register/enable SSE event: %ld",
> + ret.error);
Do we need this to be an abort?
> +
> + handler_arg = &cpu_arg->handler_arg;
> + WRITE_ONCE(handler_arg->expected_cpu, cpu);
> + /* For handler_arg content to be visible for other CPUs */
> + smp_wmb();
> + ret = sbi_sse_inject(event_id, cpus[cpu].hartid);
> + if (ret.error)
> + report_abort("CPU failed to register/enable SSE event: %ld",
> + ret.error);
abort?
> + }
> +
> + for_each_online_cpu(cpu) {
> + handler_arg = &cpu_args[cpu].handler_arg;
We also need an smp_rmb() here, in case we read done=1 on the first
try, to be sure that all other data ordered before the done write
can be observed.
> + while (!READ_ONCE(handler_arg->done)) {
Maybe a cpu_relax() here.
> + /* For handler_arg update to be visible */
> + smp_rmb();
> + }
> + WRITE_ONCE(handler_arg->done, false);
> + }
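Putting the two suggestions above together, the wait loop could look like the sketch below. The `READ_ONCE`/`smp_rmb`/`cpu_relax` definitions here are stand-ins so the snippet compiles outside kvm-unit-tests (the real ones come from the lib headers); the point is the barrier placement: `cpu_relax()` inside the poll loop, and a single `smp_rmb()` after the loop exits, which covers the first-try-success case too.

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in definitions so this sketch compiles on its own; the real
 * READ_ONCE/WRITE_ONCE/smp_rmb/cpu_relax come from the lib headers. */
#define READ_ONCE(x)		(*(volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v)	(*(volatile __typeof__(x) *)&(x) = (v))
#define smp_rmb()		__atomic_thread_fence(__ATOMIC_ACQUIRE)
#define cpu_relax()		do { } while (0)

struct handler_arg {
	bool done;
	unsigned int expected_cpu;
};

/* Wait for the handler to flag completion; the smp_rmb() after the loop
 * pairs with the handler's smp_wmb() so every store ordered before the
 * 'done' write is visible before we touch the rest of the data. */
static unsigned int wait_for_handler(struct handler_arg *arg)
{
	while (!READ_ONCE(arg->done))
		cpu_relax();
	smp_rmb();

	return READ_ONCE(arg->expected_cpu);
}
```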
> +
> + on_cpus(sbi_sse_disable_unregister_local, cpu_args);
> + for_each_online_cpu(cpu) {
> + cpu_arg = &cpu_args[cpu];
> + ret = READ_ONCE(cpu_arg->ret);
> + if (ret.error)
> + report_abort("CPU failed to disable/unregister SSE event: %ld",
> + ret.error);
abort?
> + }
> +
> + for_each_online_cpu(cpu) {
> + cpu_arg = &cpu_args[cpu];
> +
delete unnecessary blank line
> + sse_free_stack(cpu_arg->args.stack);
> + }
> +
> + report_pass("local event dispatch on all CPUs");
> + report_prefix_pop();
> +
> +}
> +
> +static void sse_test_inject_global(uint32_t event_id)
> +{
> + unsigned long value;
> + struct sbiret ret;
> + unsigned int cpu;
> + struct sse_foreign_cpu_test_arg test_arg = {.event_id = event_id};
> + struct sbi_sse_handler_arg args = {
> + .handler = sse_foreign_cpu_handler,
> + .handler_data = (void *) &test_arg,
> + .stack = sse_alloc_stack(),
> + };
> + enum sbi_sse_state state;
> +
> + report_prefix_push("global_dispatch");
> +
> + ret = sbi_sse_register(event_id, &args);
> + if (!sbiret_report_error(&ret, SBI_SUCCESS, "Register event no error"))
> + goto err_free_stack;
> +
> + for_each_online_cpu(cpu) {
> + WRITE_ONCE(test_arg.expected_cpu, cpu);
> + /* For test_arg content to be visible for other CPUs */
> + smp_wmb();
> + value = cpu;
> + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &value);
> + if (!sbiret_report_error(&ret, SBI_SUCCESS, "Set preferred hart no error"))
> + goto err_unregister;
> +
> + ret = sbi_sse_enable(event_id);
> + if (!sbiret_report_error(&ret, SBI_SUCCESS, "Enable SSE event no error"))
> + goto err_unregister;
> +
> + ret = sbi_sse_inject(event_id, cpu);
> + if (!sbiret_report_error(&ret, SBI_SUCCESS, "Inject SSE event no error"))
> + goto err_disable;
> +
smp_rmb()
> + while (!READ_ONCE(test_arg.done)) {
cpu_relax()
> + /* For shared test_arg structure */
> + smp_rmb();
> + }
> +
> + WRITE_ONCE(test_arg.done, false);
> +
> + /* Wait for event to be in ENABLED state */
> + do {
> + ret = sse_event_get_state(event_id, &state);
> + if (sbiret_report_error(&ret, SBI_SUCCESS, "Get SSE event state no error"))
> + goto err_disable;
cpu_relax() or even use udelay()?
> + } while (state != SBI_SSE_STATE_ENABLED);
> +
> +err_disable:
> + ret = sbi_sse_disable(event_id);
> + if (!sbiret_report_error(&ret, SBI_SUCCESS, "Disable SSE event no error"))
> + goto err_unregister;
> +
> + report_pass("Global event on CPU %d", cpu);
> + }
> +
> +err_unregister:
> + ret = sbi_sse_unregister(event_id);
> + sbiret_report_error(&ret, SBI_SUCCESS, "Unregister SSE event no error");
> +
> +err_free_stack:
> + sse_free_stack(args.stack);
> + report_prefix_pop();
> +}
> +
> +struct priority_test_arg {
> + uint32_t event_id;
> + bool called;
> + u32 prio;
> + struct priority_test_arg *next_event_arg;
> + void (*check_func)(struct priority_test_arg *arg);
> +};
> +
> +static void sse_hi_priority_test_handler(void *arg, struct pt_regs *regs,
> + unsigned int hartid)
> +{
> + struct priority_test_arg *targ = arg;
> + struct priority_test_arg *next = targ->next_event_arg;
> +
> + targ->called = true;
> + if (next) {
> + sbi_sse_inject(next->event_id, current_thread_info()->hartid);
> +
> + report(!sse_event_pending(next->event_id), "Higher priority event is pending");
"is not pending"
> + report(next->called, "Higher priority event was not handled");
"was handled"
> + }
> +}
> +
> +static void sse_low_priority_test_handler(void *arg, struct pt_regs *regs,
> + unsigned int hartid)
> +{
> + struct priority_test_arg *targ = arg;
> + struct priority_test_arg *next = targ->next_event_arg;
> +
> + targ->called = true;
> +
> + if (next) {
> + sbi_sse_inject(next->event_id, current_thread_info()->hartid);
> +
> + report(sse_event_pending(next->event_id), "Lower priority event is pending");
> + report(!next->called, "Lower priority event %s was handle before %s",
"was not handled"
> + sse_event_name(next->event_id), sse_event_name(targ->event_id));
> + }
> +}
> +
> +static void sse_test_injection_priority_arg(struct priority_test_arg *in_args,
> + unsigned int in_args_size,
> + sbi_sse_handler_fn handler,
> + const char *test_name)
> +{
> + unsigned int i;
> + unsigned long value;
> + struct sbiret ret;
> + uint32_t event_id;
> + struct priority_test_arg *arg;
> + unsigned int args_size = 0;
> + struct sbi_sse_handler_arg event_args[in_args_size];
> + struct priority_test_arg *args[in_args_size];
> + void *stack;
> + struct sbi_sse_handler_arg *event_arg;
> +
> + report_prefix_push(test_name);
> +
> + for (i = 0; i < in_args_size; i++) {
> + arg = &in_args[i];
> + event_id = arg->event_id;
> + if (!sse_event_can_inject(event_id))
> + continue;
> +
> + args[args_size] = arg;
> + args_size++;
> + }
> +
> + if (!args_size) {
> + report_skip("No event injectable");
> + report_prefix_pop();
> + goto skip;
> + }
> +
> + for (i = 0; i < args_size; i++) {
> + arg = args[i];
> + event_id = arg->event_id;
> + stack = sse_alloc_stack();
> +
> + event_arg = &event_args[i];
> + event_arg->handler = handler;
> + event_arg->handler_data = (void *)arg;
> + event_arg->stack = stack;
> +
> + if (i < (args_size - 1))
> + arg->next_event_arg = args[i + 1];
> + else
> + arg->next_event_arg = NULL;
> +
> + /* Be sure global events are targeting the current hart */
> + sse_global_event_set_current_hart(event_id);
> +
> + sbi_sse_register(event_id, event_arg);
> + value = arg->prio;
> + sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value);
> + sbi_sse_enable(event_id);
No return code checks for these SSE calls? If we're 99% sure they should
succeed, then I'd still check them with asserts.
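To illustrate the assert suggestion: something like the helper below, which keeps the happy path short while still catching an unexpected failure. The `struct sbiret` and the stub call are stand-ins for the real lib/riscv definitions, just to make the sketch self-contained.

```c
#include <assert.h>

/* Minimal stand-in for the real struct sbiret from lib/riscv. */
struct sbiret {
	long error;
	long value;
};

/* Hypothetical stub standing in for sbi_sse_register()/sbi_sse_enable();
 * the real calls go through the SBI ecall wrappers. */
static struct sbiret sbi_sse_enable_stub(unsigned long event_id)
{
	(void)event_id;
	return (struct sbiret){ .error = 0, .value = 0 };
}

/* Setup calls that are expected to succeed can still be checked cheaply
 * with an assert instead of having their return value ignored. */
static long checked_enable(unsigned long event_id)
{
	struct sbiret ret = sbi_sse_enable_stub(event_id);

	assert(!ret.error);
	return ret.error;
}
```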
> + }
> +
> + /* Inject first event */
> + ret = sbi_sse_inject(args[0]->event_id, current_thread_info()->hartid);
> + sbiret_report_error(&ret, SBI_SUCCESS, "SSE injection no error");
> +
> + for (i = 0; i < args_size; i++) {
> + arg = args[i];
> + event_id = arg->event_id;
> +
> + report(arg->called, "Event %s handler called", sse_event_name(arg->event_id));
> +
> + sbi_sse_disable(event_id);
> + sbi_sse_unregister(event_id);
No return code checks?
> +
> + event_arg = &event_args[i];
> + sse_free_stack(event_arg->stack);
> + }
> +
> +skip:
> + report_prefix_pop();
Isn't this an extra pop()? We should be able to return directly from the
report_skip() if-block.
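That is, something along these lines — returning directly from the if-block removes the need for the trailing `skip:` label and the second `report_prefix_pop()`. The report helpers here are hypothetical stand-ins for the kvm-unit-tests ones, just so the control-flow sketch compiles on its own.

```c
#include <string.h>

/* Hypothetical stand-ins for the kvm-unit-tests report helpers. */
static char last_report[64];

static void report_skip(const char *msg)
{
	strncpy(last_report, msg, sizeof(last_report) - 1);
}

static void report_prefix_pop(void)
{
}

/* Returning directly from the skip case means no skip: label and no
 * duplicated report_prefix_pop() at the end of the function. */
static int run_priority_test(unsigned int args_size)
{
	if (!args_size) {
		report_skip("No event injectable");
		report_prefix_pop();
		return 0;
	}

	/* ... register, inject and check events here ... */
	report_prefix_pop();
	return 1;
}
```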
> +}
> +
> +static struct priority_test_arg hi_prio_args[] = {
> + {.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE},
> + {.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE},
> + {.event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS},
> + {.event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS},
> + {.event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW},
> + {.event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS},
> + {.event_id = SBI_SSE_EVENT_LOCAL_DOUBLE_TRAP},
> + {.event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS},
> +};
> +
> +static struct priority_test_arg low_prio_args[] = {
> + {.event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS},
> + {.event_id = SBI_SSE_EVENT_LOCAL_DOUBLE_TRAP},
> + {.event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS},
> + {.event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW},
> + {.event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS},
> + {.event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS},
> + {.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE},
> + {.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE},
> +};
> +
> +static struct priority_test_arg prio_args[] = {
> + {.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE, .prio = 5},
> + {.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE, .prio = 10},
> + {.event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS, .prio = 12},
> + {.event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW, .prio = 15},
> + {.event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS, .prio = 20},
> + {.event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS, .prio = 22},
> + {.event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS, .prio = 25},
> +};
> +
> +static struct priority_test_arg same_prio_args[] = {
> + {.event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW, .prio = 0},
> + {.event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS, .prio = 0},
> + {.event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS, .prio = 10},
> + {.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE, .prio = 10},
> + {.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE, .prio = 10},
> + {.event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS, .prio = 20},
> + {.event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS, .prio = 20},
> +};
nit: could tab out the .prio initializers in order to better align the
columns of the two structs above.
> +
> +static void sse_test_injection_priority(void)
> +{
> + report_prefix_push("prio");
> +
> + sse_test_injection_priority_arg(hi_prio_args, ARRAY_SIZE(hi_prio_args),
> + sse_hi_priority_test_handler, "high");
> +
> + sse_test_injection_priority_arg(low_prio_args, ARRAY_SIZE(low_prio_args),
> + sse_low_priority_test_handler, "low");
> +
> + sse_test_injection_priority_arg(prio_args, ARRAY_SIZE(prio_args),
> + sse_low_priority_test_handler, "changed");
> +
> + sse_test_injection_priority_arg(same_prio_args, ARRAY_SIZE(same_prio_args),
> + sse_low_priority_test_handler, "same_prio_args");
> +
> + report_prefix_pop();
> +}
> +
> +static void test_invalid_event_id(unsigned long event_id)
> +{
> + struct sbiret ret;
> + unsigned long value = 0;
> +
> + ret = sbi_sse_register_raw(event_id, (unsigned long) sbi_sse_entry, 0);
> + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
> + "SSE register event_id 0x%lx invalid param", event_id);
> +
> + ret = sbi_sse_unregister(event_id);
> + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
> + "SSE unregister event_id 0x%lx invalid param", event_id);
> +
> + ret = sbi_sse_enable(event_id);
> + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
> + "SSE enable event_id 0x%lx invalid param", event_id);
> +
> + ret = sbi_sse_disable(event_id);
> + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
> + "SSE disable event_id 0x%lx invalid param", event_id);
> +
> + ret = sbi_sse_inject(event_id, 0);
> + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
> + "SSE inject event_id 0x%lx invalid param", event_id);
> +
> + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value);
> + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
> + "SSE write attr event_id 0x%lx invalid param", event_id);
> +
> + ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value);
> + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
> + "SSE read attr event_id 0x%lx invalid param", event_id);
We can s/invalid param// in all the strings above since the latest
sbiret_report_error() already outputs SBI_ERR_INVALID_PARAM.
> +}
> +
> +static void sse_test_invalid_event_id(void)
> +{
> +
> + report_prefix_push("event_id");
> +
> +#if __riscv_xlen > 32
> + test_invalid_event_id(BIT_ULL(32));
> +#endif
With your latest OpenSBI changes the event_id will be truncated, so we need
another invalid event_id for the tests. That's good, since we can then test
on rv32 too.
> +
> + report_prefix_pop();
> +}
> +
> +static bool sse_can_inject(uint32_t event_id)
> +{
> + struct sbiret ret;
> + unsigned long status;
> +
> + ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_STATUS, 1, &status);
> + if (ret.error)
Don't we want to assert or complain loudly about an unexpected status read
failure?
> + return false;
> +
> + return !!(status & BIT(SBI_SSE_ATTR_STATUS_INJECT_OFFSET));
> +}
> +
> +static void boot_secondary(void *data)
Maybe name this sse_secondary_boot_and_unmask?
> +{
> + sbi_sse_hart_unmask();
> +}
> +
> +static void sse_check_mask(void)
> +{
> + struct sbiret ret;
> +
> + /* Upon boot, event are masked, check that */
> + ret = sbi_sse_hart_mask();
> + sbiret_report_error(&ret, SBI_ERR_ALREADY_STOPPED, "SSE hart mask at boot time ok");
> +
> + ret = sbi_sse_hart_unmask();
> + sbiret_report_error(&ret, SBI_SUCCESS, "SSE hart no error ok");
^ unmask
> + ret = sbi_sse_hart_unmask();
> + sbiret_report_error(&ret, SBI_ERR_ALREADY_STARTED, "SSE hart unmask twice error ok");
> +
> + ret = sbi_sse_hart_mask();
> + sbiret_report_error(&ret, SBI_SUCCESS, "SSE hart mask no error");
> + ret = sbi_sse_hart_mask();
> + sbiret_report_error(&ret, SBI_ERR_ALREADY_STOPPED, "SSE hart mask twice ok");
> +}
> +
> +void check_sse(void)
> +{
> + unsigned long i, event_id;
> +
> + report_prefix_push("sse");
> +
> + if (!sbi_probe(SBI_EXT_SSE)) {
> + report_skip("SSE extension not available");
> + report_prefix_pop();
> + return;
> + }
> +
> + sse_check_mask();
> +
> + /*
> + * Dummy wakeup of all processors since some of them will be targeted
> + * by global events without going through the wakeup call as well as
> + * unmasking SSE events on all harts
> + */
> + on_cpus(boot_secondary, NULL);
> +
> + sse_test_invalid_event_id();
> +
> + for (i = 0; i < ARRAY_SIZE(sse_event_infos); i++) {
> + event_id = sse_event_infos[i].event_id;
> + report_prefix_push(sse_event_infos[i].name);
> + if (!sse_can_inject(event_id)) {
> + report_skip("Event 0x%lx does not support injection", event_id);
> + report_prefix_pop();
> + continue;
> + } else {
> + sse_event_infos[i].can_inject = true;
Why do we need to cache can_inject=true if we filter out all events
which have can_inject=false? Should we run at least something with
events which don't support injection?
> + }
> + sse_test_attrs(event_id);
> + sse_test_register_error(event_id);
> + sse_test_inject_simple(event_id);
> + if (sbi_sse_event_is_global(event_id))
> + sse_test_inject_global(event_id);
> + else
> + sse_test_inject_local(event_id);
> +
> + report_prefix_pop();
> + }
> +
> + sse_test_injection_priority();
> +
> + report_prefix_pop();
> +}
> diff --git a/riscv/sbi.c b/riscv/sbi.c
> index 7c7a2d2d..49e81bda 100644
> --- a/riscv/sbi.c
> +++ b/riscv/sbi.c
> @@ -32,6 +32,7 @@
>
> #define HIGH_ADDR_BOUNDARY ((phys_addr_t)1 << 32)
>
> +void check_sse(void);
> void check_fwft(void);
>
> static long __labs(long a)
> @@ -1439,6 +1440,7 @@ int main(int argc, char **argv)
> check_hsm();
> check_dbcn();
> check_susp();
> + check_sse();
> check_fwft();
>
> return report_summary();
> --
> 2.47.2
>
Thanks,
drew
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [kvm-unit-tests PATCH v7 5/6] lib: riscv: Add SBI SSE support
2025-02-27 16:03 ` Andrew Jones
@ 2025-03-06 10:04 ` Clément Léger
0 siblings, 0 replies; 14+ messages in thread
From: Clément Léger @ 2025-03-06 10:04 UTC (permalink / raw)
To: Andrew Jones; +Cc: kvm, kvm-riscv, Andrew Jones, Anup Patel, Atish Patra
On 27/02/2025 17:03, Andrew Jones wrote:
> On Fri, Feb 14, 2025 at 12:44:18PM +0100, Clément Léger wrote:
>> Add support for registering and handling SSE events. This will be used
>> by sbi test as well as upcoming double trap tests.
>>
>> Signed-off-by: Clément Léger <cleger@rivosinc.com>
>> ---
>> riscv/Makefile | 2 +
>> lib/riscv/asm/csr.h | 1 +
>> lib/riscv/asm/sbi-sse.h | 48 +++++++++++++++++++
>> lib/riscv/sbi-sse-asm.S | 103 ++++++++++++++++++++++++++++++++++++++++
>> lib/riscv/asm-offsets.c | 9 ++++
>> lib/riscv/sbi-sse.c | 84 ++++++++++++++++++++++++++++++++
>> 6 files changed, 247 insertions(+)
>> create mode 100644 lib/riscv/asm/sbi-sse.h
>> create mode 100644 lib/riscv/sbi-sse-asm.S
>> create mode 100644 lib/riscv/sbi-sse.c
>>
>> diff --git a/riscv/Makefile b/riscv/Makefile
>> index 02d2ac39..ed590ede 100644
>> --- a/riscv/Makefile
>> +++ b/riscv/Makefile
>> @@ -43,6 +43,8 @@ cflatobjs += lib/riscv/setup.o
>> cflatobjs += lib/riscv/smp.o
>> cflatobjs += lib/riscv/stack.o
>> cflatobjs += lib/riscv/timer.o
>> +cflatobjs += lib/riscv/sbi-sse-asm.o
>> +cflatobjs += lib/riscv/sbi-sse.o
>> ifeq ($(ARCH),riscv32)
>> cflatobjs += lib/ldiv32.o
>> endif
>> diff --git a/lib/riscv/asm/csr.h b/lib/riscv/asm/csr.h
>> index 16f5ddd7..389d9850 100644
>> --- a/lib/riscv/asm/csr.h
>> +++ b/lib/riscv/asm/csr.h
>> @@ -17,6 +17,7 @@
>> #define CSR_TIME 0xc01
>>
>> #define SR_SIE _AC(0x00000002, UL)
>> +#define SR_SPP _AC(0x00000100, UL)
>>
>> /* Exception cause high bit - is an interrupt if set */
>> #define CAUSE_IRQ_FLAG (_AC(1, UL) << (__riscv_xlen - 1))
>> diff --git a/lib/riscv/asm/sbi-sse.h b/lib/riscv/asm/sbi-sse.h
>> new file mode 100644
>> index 00000000..ba18ce27
>> --- /dev/null
>> +++ b/lib/riscv/asm/sbi-sse.h
>> @@ -0,0 +1,48 @@
>> +/* SPDX-License-Identifier: GPL-2.0-only */
>> +/*
>> + * SBI SSE library interface
>> + *
>> + * Copyright (C) 2025, Rivos Inc., Clément Léger <cleger@rivosinc.com>
>> + */
>> +#ifndef _RISCV_SSE_H_
>> +#define _RISCV_SSE_H_
>> +
>> +#include <asm/sbi.h>
>> +
>> +typedef void (*sbi_sse_handler_fn)(void *data, struct pt_regs *regs, unsigned int hartid);
>> +
>> +struct sbi_sse_handler_arg {
>> + unsigned long reg_tmp;
>> + sbi_sse_handler_fn handler;
>> + void *handler_data;
>> + void *stack;
>> +};
>> +
>> +extern void sbi_sse_entry(void);
>> +
>> +static inline bool sbi_sse_event_is_global(uint32_t event_id)
>> +{
>> + return !!(event_id & SBI_SSE_EVENT_GLOBAL_BIT);
>> +}
>> +
>> +struct sbiret sbi_sse_read_attrs_raw(unsigned long event_id, unsigned long base_attr_id,
>> + unsigned long attr_count, unsigned long phys_lo,
>> + unsigned long phys_hi);
>> +struct sbiret sbi_sse_read_attrs(unsigned long event_id, unsigned long base_attr_id,
>> + unsigned long attr_count, unsigned long *values);
>> +struct sbiret sbi_sse_write_attrs_raw(unsigned long event_id, unsigned long base_attr_id,
>> + unsigned long attr_count, unsigned long phys_lo,
>> + unsigned long phys_hi);
>> +struct sbiret sbi_sse_write_attrs(unsigned long event_id, unsigned long base_attr_id,
>> + unsigned long attr_count, unsigned long *values);
>> +struct sbiret sbi_sse_register_raw(unsigned long event_id, unsigned long entry_pc,
>> + unsigned long entry_arg);
>> +struct sbiret sbi_sse_register(unsigned long event_id, struct sbi_sse_handler_arg *arg);
>> +struct sbiret sbi_sse_unregister(unsigned long event_id);
>> +struct sbiret sbi_sse_enable(unsigned long event_id);
>> +struct sbiret sbi_sse_hart_mask(void);
>> +struct sbiret sbi_sse_hart_unmask(void);
>> +struct sbiret sbi_sse_inject(unsigned long event_id, unsigned long hart_id);
>> +struct sbiret sbi_sse_disable(unsigned long event_id);
>
> nit: I'd put disable up by enable to keep pairs together.
>
>> +
>> +#endif /* !_RISCV_SSE_H_ */
>> diff --git a/lib/riscv/sbi-sse-asm.S b/lib/riscv/sbi-sse-asm.S
>> new file mode 100644
>> index 00000000..5f60b839
>> --- /dev/null
>> +++ b/lib/riscv/sbi-sse-asm.S
>> @@ -0,0 +1,103 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * RISC-V SSE events entry point.
>> + *
>> + * Copyright (C) 2025, Rivos Inc., Clément Léger <cleger@rivosinc.com>
>> + */
>> +#define __ASSEMBLY__
>> +#include <asm/asm.h>
>> +#include <asm/csr.h>
>> +#include <asm/asm-offsets.h>
>
> nit: asm-offsets above csr if we want to practice alphabetical ordering.
>
>> +#include <generated/sbi-asm-offsets.h>
>> +
>> +.section .text
>> +.global sbi_sse_entry
>> +sbi_sse_entry:
>> + /* Save stack temporarily */
>> + REG_S sp, SBI_SSE_REG_TMP(a7)
>> + /* Set entry stack */
>> + REG_L sp, SBI_SSE_HANDLER_STACK(a7)
>> +
>> + addi sp, sp, -(PT_SIZE)
>> + REG_S ra, PT_RA(sp)
>> + REG_S s0, PT_S0(sp)
>> + REG_S s1, PT_S1(sp)
>> + REG_S s2, PT_S2(sp)
>> + REG_S s3, PT_S3(sp)
>> + REG_S s4, PT_S4(sp)
>> + REG_S s5, PT_S5(sp)
>> + REG_S s6, PT_S6(sp)
>> + REG_S s7, PT_S7(sp)
>> + REG_S s8, PT_S8(sp)
>> + REG_S s9, PT_S9(sp)
>> + REG_S s10, PT_S10(sp)
>> + REG_S s11, PT_S11(sp)
>> + REG_S tp, PT_TP(sp)
>> + REG_S t0, PT_T0(sp)
>> + REG_S t1, PT_T1(sp)
>> + REG_S t2, PT_T2(sp)
>> + REG_S t3, PT_T3(sp)
>> + REG_S t4, PT_T4(sp)
>> + REG_S t5, PT_T5(sp)
>> + REG_S t6, PT_T6(sp)
>> + REG_S gp, PT_GP(sp)
>> + REG_S a0, PT_A0(sp)
>> + REG_S a1, PT_A1(sp)
>> + REG_S a2, PT_A2(sp)
>> + REG_S a3, PT_A3(sp)
>> + REG_S a4, PT_A4(sp)
>> + REG_S a5, PT_A5(sp)
>> + csrr a1, CSR_SEPC
>> + REG_S a1, PT_EPC(sp)
>> + csrr a2, CSR_SSTATUS
>> + REG_S a2, PT_STATUS(sp)
>> +
>> + REG_L a0, SBI_SSE_REG_TMP(a7)
>> + REG_S a0, PT_SP(sp)
>> +
>> + REG_L t0, SBI_SSE_HANDLER(a7)
>> + REG_L a0, SBI_SSE_HANDLER_DATA(a7)
>> + mv a1, sp
>> + mv a2, a6
>> + jalr t0
>> +
>> + REG_L a1, PT_EPC(sp)
>> + REG_L a2, PT_STATUS(sp)
>> + csrw CSR_SEPC, a1
>> + csrw CSR_SSTATUS, a2
>> +
>> + REG_L ra, PT_RA(sp)
>> + REG_L s0, PT_S0(sp)
>> + REG_L s1, PT_S1(sp)
>> + REG_L s2, PT_S2(sp)
>> + REG_L s3, PT_S3(sp)
>> + REG_L s4, PT_S4(sp)
>> + REG_L s5, PT_S5(sp)
>> + REG_L s6, PT_S6(sp)
>> + REG_L s7, PT_S7(sp)
>> + REG_L s8, PT_S8(sp)
>> + REG_L s9, PT_S9(sp)
>> + REG_L s10, PT_S10(sp)
>> + REG_L s11, PT_S11(sp)
>> + REG_L tp, PT_TP(sp)
>> + REG_L t0, PT_T0(sp)
>> + REG_L t1, PT_T1(sp)
>> + REG_L t2, PT_T2(sp)
>> + REG_L t3, PT_T3(sp)
>> + REG_L t4, PT_T4(sp)
>> + REG_L t5, PT_T5(sp)
>> + REG_L t6, PT_T6(sp)
>> + REG_L gp, PT_GP(sp)
>> + REG_L a0, PT_A0(sp)
>> + REG_L a1, PT_A1(sp)
>> + REG_L a2, PT_A2(sp)
>> + REG_L a3, PT_A3(sp)
>> + REG_L a4, PT_A4(sp)
>> + REG_L a5, PT_A5(sp)
>> +
>> + REG_L sp, PT_SP(sp)
>> +
>> + li a7, ASM_SBI_EXT_SSE
>> + li a6, ASM_SBI_EXT_SSE_COMPLETE
>> + ecall
>
> nit: Format asm with a tab after the operator like save/restore_context do.
>
>> +
>> diff --git a/lib/riscv/asm-offsets.c b/lib/riscv/asm-offsets.c
>> index 6c511c14..77e5d26d 100644
>> --- a/lib/riscv/asm-offsets.c
>> +++ b/lib/riscv/asm-offsets.c
>> @@ -4,6 +4,7 @@
>> #include <asm/processor.h>
>> #include <asm/ptrace.h>
>> #include <asm/smp.h>
>> +#include <asm/sbi-sse.h>
>
> nit: alphabetize
>
>>
>> int main(void)
>> {
>> @@ -63,5 +64,13 @@ int main(void)
>> OFFSET(THREAD_INFO_HARTID, thread_info, hartid);
>> DEFINE(THREAD_INFO_SIZE, sizeof(struct thread_info));
>>
>> + DEFINE(ASM_SBI_EXT_SSE, SBI_EXT_SSE);
>> + DEFINE(ASM_SBI_EXT_SSE_COMPLETE, SBI_EXT_SSE_COMPLETE);
>> +
>> + OFFSET(SBI_SSE_REG_TMP, sbi_sse_handler_arg, reg_tmp);
>> + OFFSET(SBI_SSE_HANDLER, sbi_sse_handler_arg, handler);
>> + OFFSET(SBI_SSE_HANDLER_DATA, sbi_sse_handler_arg, handler_data);
>> + OFFSET(SBI_SSE_HANDLER_STACK, sbi_sse_handler_arg, stack);
>> +
>> return 0;
>> }
>> diff --git a/lib/riscv/sbi-sse.c b/lib/riscv/sbi-sse.c
>> new file mode 100644
>> index 00000000..bc4dd10e
>> --- /dev/null
>> +++ b/lib/riscv/sbi-sse.c
>
> I think we could just put these wrappers in lib/riscv/sbi.c, but OK.
Hey Andrew,
I'll move everything into sbi.c ;)
>
>> @@ -0,0 +1,84 @@
>> +// SPDX-License-Identifier: GPL-2.0-only
>> +/*
>> + * SBI SSE library
>> + *
>> + * Copyright (C) 2025, Rivos Inc., Clément Léger <cleger@rivosinc.com>
>> + */
>> +#include <asm/sbi.h>
>> +#include <asm/sbi-sse.h>
>> +#include <asm/io.h>
>
> nit: alphabetize
>
>> +
>> +struct sbiret sbi_sse_read_attrs_raw(unsigned long event_id, unsigned long base_attr_id,
>> + unsigned long attr_count, unsigned long phys_lo,
>> + unsigned long phys_hi)
>> +{
>> + return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_READ_ATTRS, event_id, base_attr_id, attr_count,
>> + phys_lo, phys_hi, 0);
>> +}
>> +
>> +struct sbiret sbi_sse_read_attrs(unsigned long event_id, unsigned long base_attr_id,
>> + unsigned long attr_count, unsigned long *values)
>> +{
>> + phys_addr_t p = virt_to_phys(values);
>> +
>> + return sbi_sse_read_attrs_raw(event_id, base_attr_id, attr_count, lower_32_bits(p),
>> + upper_32_bits(p));
>> +}
>> +
>> +struct sbiret sbi_sse_write_attrs_raw(unsigned long event_id, unsigned long base_attr_id,
>> + unsigned long attr_count, unsigned long phys_lo,
>> + unsigned long phys_hi)
>> +{
>> + return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_WRITE_ATTRS, event_id, base_attr_id, attr_count,
>> + phys_lo, phys_hi, 0);
>> +}
>> +
>> +struct sbiret sbi_sse_write_attrs(unsigned long event_id, unsigned long base_attr_id,
>> + unsigned long attr_count, unsigned long *values)
>> +{
>> + phys_addr_t p = virt_to_phys(values);
>> +
>> + return sbi_sse_write_attrs_raw(event_id, base_attr_id, attr_count, lower_32_bits(p),
>> + upper_32_bits(p));
>> +}
>> +
>> +struct sbiret sbi_sse_register_raw(unsigned long event_id, unsigned long entry_pc,
>> + unsigned long entry_arg)
>> +{
>> + return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_REGISTER, event_id, entry_pc, entry_arg, 0, 0, 0);
>> +}
>> +
>> +struct sbiret sbi_sse_register(unsigned long event_id, struct sbi_sse_handler_arg *arg)
>> +{
>
> phys_addr_t entry = virt_to_phys(sbi_sse_entry);
> phys_addr_t arg = virt_to_phys(arg);
>
> assert(__riscv_xlen > 32 || upper_32_bits(entry) == 0);
> assert(__riscv_xlen > 32 || upper_32_bits(arg) == 0);
Ahem, while looking at it, there is actually no reason to pass a phys
address there :|. That should be a virtual pointer; I'll remove the
conversion.
Thanks,
Clément
>
> return sbi_sse_register_raw(event_id, entry, arg);
>
>> + return sbi_sse_register_raw(event_id, virt_to_phys(sbi_sse_entry), virt_to_phys(arg));
>> +}
>> +
>> +struct sbiret sbi_sse_unregister(unsigned long event_id)
>> +{
>> + return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_UNREGISTER, event_id, 0, 0, 0, 0, 0);
>> +}
>> +
>> +struct sbiret sbi_sse_enable(unsigned long event_id)
>> +{
>> + return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_ENABLE, event_id, 0, 0, 0, 0, 0);
>> +}
>> +
>> +struct sbiret sbi_sse_disable(unsigned long event_id)
>> +{
>> + return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_DISABLE, event_id, 0, 0, 0, 0, 0);
>> +}
>> +
>> +struct sbiret sbi_sse_hart_mask(void)
>> +{
>> + return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_HART_MASK, 0, 0, 0, 0, 0, 0);
>> +}
>> +
>> +struct sbiret sbi_sse_hart_unmask(void)
>> +{
>> + return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_HART_UNMASK, 0, 0, 0, 0, 0, 0);
>> +}
>> +
>> +struct sbiret sbi_sse_inject(unsigned long event_id, unsigned long hart_id)
>> +{
>> + return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_INJECT, event_id, hart_id, 0, 0, 0, 0);
>> +}
>> --
>> 2.47.2
>>
>
> Besides the nits and the sanity checking for rv32,
>
> Reviewed-by: Andrew Jones <andrew.jones@linux.dev>
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [kvm-unit-tests PATCH v7 6/6] riscv: sbi: Add SSE extension tests
2025-02-28 17:51 ` Andrew Jones
@ 2025-03-06 14:32 ` Clément Léger
2025-03-06 15:15 ` Andrew Jones
0 siblings, 1 reply; 14+ messages in thread
From: Clément Léger @ 2025-03-06 14:32 UTC (permalink / raw)
To: Andrew Jones; +Cc: kvm, kvm-riscv, Andrew Jones, Anup Patel, Atish Patra
On 28/02/2025 18:51, Andrew Jones wrote:
>
> Global style note: For casts we prefer no space between the (type) and
> the variable, i.e. (type)var
>
> On Fri, Feb 14, 2025 at 12:44:19PM +0100, Clément Léger wrote:
>> Add SBI SSE extension tests for the following features:
>> - Test attributes errors (invalid values, RO, etc)
>> - Registration errors
>> - Simple events (register, enable, inject)
>> - Events with different priorities
>> - Global events dispatch on different harts
>> - Local events on all harts
>> - Hart mask/unmask events
>>
>> Signed-off-by: Clément Léger <cleger@rivosinc.com>
>> ---
>> riscv/Makefile | 1 +
>> riscv/sbi-sse.c | 1054 +++++++++++++++++++++++++++++++++++++++++++++++
>> riscv/sbi.c | 2 +
>> 3 files changed, 1057 insertions(+)
>> create mode 100644 riscv/sbi-sse.c
>>
>> diff --git a/riscv/Makefile b/riscv/Makefile
>> index ed590ede..ea62e05f 100644
>> --- a/riscv/Makefile
>> +++ b/riscv/Makefile
>> @@ -18,6 +18,7 @@ tests += $(TEST_DIR)/sieve.$(exe)
>> all: $(tests)
>>
>> $(TEST_DIR)/sbi-deps = $(TEST_DIR)/sbi-asm.o $(TEST_DIR)/sbi-fwft.o
>> +$(TEST_DIR)/sbi-deps += $(TEST_DIR)/sbi-sse.o
>>
>> # When built for EFI sieve needs extra memory, run with e.g. '-m 256' on QEMU
>> $(TEST_DIR)/sieve.$(exe): AUXFLAGS = 0x1
>> diff --git a/riscv/sbi-sse.c b/riscv/sbi-sse.c
>> new file mode 100644
>> index 00000000..27f47f73
>> --- /dev/null
>> +++ b/riscv/sbi-sse.c
>> @@ -0,0 +1,1054 @@
>> +// SPDX-License-Identifier: GPL-2.0-only
>> +/*
>> + * SBI SSE testsuite
>> + *
>> + * Copyright (C) 2025, Rivos Inc., Clément Léger <cleger@rivosinc.com>
>> + */
>> +#include <alloc.h>
>> +#include <alloc_page.h>
>> +#include <bitops.h>
>> +#include <cpumask.h>
>> +#include <libcflat.h>
>> +#include <on-cpus.h>
>> +#include <stdlib.h>
>> +
>> +#include <asm/barrier.h>
>> +#include <asm/io.h>
>> +#include <asm/page.h>
>> +#include <asm/processor.h>
>> +#include <asm/sbi.h>
>> +#include <asm/sbi-sse.h>
>> +#include <asm/setup.h>
>> +
>> +#include "sbi-tests.h"
>> +
>> +#define SSE_STACK_SIZE PAGE_SIZE
>> +
>> +void check_sse(void);
>
> Since we now have an #ifndef __ASSEMBLY__ section in riscv/sbi-tests.h we
> can just put this prototype there.
>
>> +
>> +struct sse_event_info {
>> + uint32_t event_id;
>> + const char *name;
>> + bool can_inject;
>> +};
>> +
>> +static struct sse_event_info sse_event_infos[] = {
>> + {
>> + .event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS,
>> + .name = "local_high_prio_ras",
>> + },
>> + {
>> + .event_id = SBI_SSE_EVENT_LOCAL_DOUBLE_TRAP,
>> + .name = "double_trap",
>> + },
>> + {
>> + .event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS,
>> + .name = "global_high_prio_ras",
>> + },
>> + {
>> + .event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW,
>> + .name = "local_pmu_overflow",
>> + },
>> + {
>> + .event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS,
>> + .name = "local_low_prio_ras",
>> + },
>> + {
>> + .event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS,
>> + .name = "global_low_prio_ras",
>> + },
>> + {
>> + .event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE,
>> + .name = "local_software",
>> + },
>> + {
>> + .event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE,
>> + .name = "global_software",
>> + },
>> +};
>> +
>> +static const char *const attr_names[] = {
>> + [SBI_SSE_ATTR_STATUS] = "status",
>> + [SBI_SSE_ATTR_PRIORITY] = "priority",
>> + [SBI_SSE_ATTR_CONFIG] = "config",
>> + [SBI_SSE_ATTR_PREFERRED_HART] = "preferred_hart",
>> + [SBI_SSE_ATTR_ENTRY_PC] = "entry_pc",
>> + [SBI_SSE_ATTR_ENTRY_ARG] = "entry_arg",
>> + [SBI_SSE_ATTR_INTERRUPTED_SEPC] = "interrupted_sepc",
>> + [SBI_SSE_ATTR_INTERRUPTED_FLAGS] = "interrupted_flags",
>> + [SBI_SSE_ATTR_INTERRUPTED_A6] = "interrupted_a6",
>> + [SBI_SSE_ATTR_INTERRUPTED_A7] = "interrupted_a7",
>> +};
>> +
>> +static const unsigned long ro_attrs[] = {
>> + SBI_SSE_ATTR_STATUS,
>> + SBI_SSE_ATTR_ENTRY_PC,
>> + SBI_SSE_ATTR_ENTRY_ARG,
>> +};
>> +
>> +static const unsigned long interrupted_attrs[] = {
>> + SBI_SSE_ATTR_INTERRUPTED_SEPC,
>> + SBI_SSE_ATTR_INTERRUPTED_FLAGS,
>> + SBI_SSE_ATTR_INTERRUPTED_A6,
>> + SBI_SSE_ATTR_INTERRUPTED_A7,
>> +};
>> +
>> +static const unsigned long interrupted_flags[] = {
>> + SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPP,
>> + SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPIE,
>> + SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPELP,
>> + SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SDT,
>> + SBI_SSE_ATTR_INTERRUPTED_FLAGS_HSTATUS_SPV,
>> + SBI_SSE_ATTR_INTERRUPTED_FLAGS_HSTATUS_SPVP,
>> +};
>> +
>> +static struct sse_event_info *sse_event_get_info(uint32_t event_id)
>> +{
>> + int i;
>> +
>> + for (i = 0; i < ARRAY_SIZE(sse_event_infos); i++) {
>> + if (sse_event_infos[i].event_id == event_id)
>> + return &sse_event_infos[i];
>> + }
>> +
>> + assert_msg(false, "Invalid event id: %d", event_id);
>> +}
>> +
>> +static const char *sse_event_name(uint32_t event_id)
>> +{
>> + return sse_event_get_info(event_id)->name;
>> +}
>> +
>> +static bool sse_event_can_inject(uint32_t event_id)
>> +{
>> + return sse_event_get_info(event_id)->can_inject;
>> +}
>> +
>> +static struct sbiret sse_event_get_state(uint32_t event_id, enum sbi_sse_state *state)
>> +{
>> + struct sbiret ret;
>> + unsigned long status;
>> +
>> + ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_STATUS, 1, &status);
>> + if (ret.error) {
>> + sbiret_report_error(&ret, 0, "Get SSE event status no error");
>> + return ret;
>> + }
>> +
>> + *state = status & SBI_SSE_ATTR_STATUS_STATE_MASK;
>> +
>> + return ret;
>> +}
>> +
>> +static void sse_global_event_set_current_hart(uint32_t event_id)
>> +{
>> + struct sbiret ret;
>> + unsigned long current_hart = current_thread_info()->hartid;
>> +
>> + if (!sbi_sse_event_is_global(event_id))
>> + return;
>> +
>> + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &current_hart);
>> + if (ret.error)
>> + report_abort("set preferred hart failure, error %ld", ret.error);
>
> Are we sure we want to abort? Or should we try to propagate an error from
> here and just bail out of SSE tests?
>
>> +}
>> +
>> +static bool sse_check_state(uint32_t event_id, unsigned long expected_state)
>> +{
>> + struct sbiret ret;
>> + enum sbi_sse_state state;
>> +
>> + ret = sse_event_get_state(event_id, &state);
>> + if (ret.error)
>> + return false;
>> + report(state == expected_state, "SSE event status == %ld", expected_state);
>> +
>> + return state == expected_state;
>
> Can just write
>
> return report(state == expected_state, "SSE event status == %ld", expected_state);
>
>> +}
>> +
>> +static bool sse_event_pending(uint32_t event_id)
>> +{
>> + struct sbiret ret;
>> + unsigned long status;
>> +
>> + ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_STATUS, 1, &status);
>> + if (ret.error) {
>> + report_fail("Failed to get SSE event status, error %ld", ret.error);
>
> sbiret_report_error()
>
>> + return false;
>> + }
>> +
>> + return !!(status & BIT(SBI_SSE_ATTR_STATUS_PENDING_OFFSET));
>> +}
>> +
>> +static void *sse_alloc_stack(void)
>> +{
>> + /*
>> + * We assume that SSE_STACK_SIZE always fit in one page. This page will
>> + * always be decrement before storing anything on it in sse-entry.S.
>
> decremented
>
>> + */
>> + assert(SSE_STACK_SIZE <= PAGE_SIZE);
>> +
>> + return (alloc_page() + SSE_STACK_SIZE);
>> +}
>> +
>> +static void sse_free_stack(void *stack)
>> +{
>> + free_page(stack - SSE_STACK_SIZE);
>> +}
>> +
>> +static void sse_read_write_test(uint32_t event_id, unsigned long attr, unsigned long attr_count,
>> + unsigned long *value, long expected_error, const char *str)
>> +{
>> + struct sbiret ret;
>> +
>> + ret = sbi_sse_read_attrs(event_id, attr, attr_count, value);
>> + sbiret_report_error(&ret, expected_error, "Read %s error", str);
>> +
>> + ret = sbi_sse_write_attrs(event_id, attr, attr_count, value);
>> + sbiret_report_error(&ret, expected_error, "Write %s error", str);
>> +}
>> +
>> +#define ALL_ATTRS_COUNT (SBI_SSE_ATTR_INTERRUPTED_A7 + 1)
>> +
>> +static void sse_test_attrs(uint32_t event_id)
>> +{
>> + unsigned long value = 0;
>> + struct sbiret ret;
>> + void *ptr;
>> + unsigned long values[ALL_ATTRS_COUNT];
>> + unsigned int i;
>> + const char *max_hart_str;
>> + const char *attr_name;
>> +
>> + report_prefix_push("attrs");
>> +
>> + for (i = 0; i < ARRAY_SIZE(ro_attrs); i++) {
>> + ret = sbi_sse_write_attrs(event_id, ro_attrs[i], 1, &value);
>> + sbiret_report_error(&ret, SBI_ERR_BAD_RANGE, "RO attribute %s not writable",
>> + attr_names[ro_attrs[i]]);
>> + }
>> +
>> + ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_STATUS, ALL_ATTRS_COUNT, values);
>> + sbiret_report_error(&ret, SBI_SUCCESS, "Read multiple attributes no error");
>> +
>> + for (i = SBI_SSE_ATTR_STATUS; i <= SBI_SSE_ATTR_INTERRUPTED_A7; i++) {
>> + ret = sbi_sse_read_attrs(event_id, i, 1, &value);
>> + attr_name = attr_names[i];
>> +
>> + sbiret_report_error(&ret, SBI_SUCCESS, "Read single attribute %s", attr_name);
>> + if (values[i] != value)
>> + report_fail("Attribute 0x%x single value read (0x%lx) differs from the one read with multiple attributes (0x%lx)",
>> + i, value, values[i]);
>> + /*
>> + * Preferred hart reset value is defined by SBI vendor and
>> + * status injectable bit also depends on the SBI implementation
>
> The spec says the STATUS attribute reset value is zero. In any case, if
> certain bits can be tested then we should test them, so we can mask off
> the injectable bit and still check for zero.
Ack.
>
>> + */
>> + if (i != SBI_SSE_ATTR_STATUS && i != SBI_SSE_ATTR_PREFERRED_HART)
>> + report(value == 0, "Attribute %s reset value is 0", attr_name);
>> + }
>> +
>> +#if __riscv_xlen > 32
>> + value = BIT(32);
>> + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value);
>> + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Write invalid prio > 0xFFFFFFFF error");
>> +#endif
>> +
>> + value = ~SBI_SSE_ATTR_CONFIG_ONESHOT;
>> + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_CONFIG, 1, &value);
>> + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Write invalid config value error");
>> +
>> + if (sbi_sse_event_is_global(event_id)) {
>> + max_hart_str = getenv("MAX_HART_ID");
>
> This should be named "INVALID_HARTID"
>
>> + if (!max_hart_str)
>> + value = 0xFFFFFFFFUL;
>> + else
>> + value = strtoul(max_hart_str, NULL, 0);
>> +
>> + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &value);
>> + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Set invalid hart id error");
>
> The spec doesn't say you can get invalid-param for a bad hartid.
Indeed.
>
>> + } else {
>> + /* Set Hart on local event -> RO */
>> + value = current_thread_info()->hartid;
>> + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &value);
>> + sbiret_report_error(&ret, SBI_ERR_BAD_RANGE, "Set hart id on local event error");
>
> The spec doesn't say you can get bad-range for a valid hartid when set
> locally.
Ditto.
>
>> + }
>> +
>> + /* Set/get flags, sepc, a6, a7 */
>> + for (i = 0; i < ARRAY_SIZE(interrupted_attrs); i++) {
>> + attr_name = attr_names[interrupted_attrs[i]];
>> + ret = sbi_sse_read_attrs(event_id, interrupted_attrs[i], 1, &value);
>> + sbiret_report_error(&ret, SBI_SUCCESS, "Get interrupted %s no error", attr_name);
>> +
>> + value = ARRAY_SIZE(interrupted_attrs) - i;
>
> I don't understand how this creates an invalid value for all interrupted attrs?
>
>> + ret = sbi_sse_write_attrs(event_id, interrupted_attrs[i], 1, &value);
>> + sbiret_report_error(&ret, SBI_ERR_INVALID_STATE,
>
> The spec doesn't state SBI_ERR_INVALID_STATE will ever be returned for
> sbi_sse_write_attrs()
Yeah, indeed. The attributes are specified as being writable only in
the RUNNING state, so I inferred that error code. I'll probably send a
clarification to the spec for that.
>
>> + "Set attribute %s invalid state error", attr_name);
>> + }
>> +
>> + sse_read_write_test(event_id, SBI_SSE_ATTR_STATUS, 0, &value, SBI_ERR_INVALID_PARAM,
>> + "attribute attr_count == 0");
>> + sse_read_write_test(event_id, SBI_SSE_ATTR_INTERRUPTED_A7 + 1, 1, &value, SBI_ERR_BAD_RANGE,
>> + "invalid attribute");
>> +
>> +#if __riscv_xlen > 32
>> + sse_read_write_test(event_id, BIT(32), 1, &value, SBI_ERR_INVALID_PARAM,
>> + "attribute id > 32 bits");
>> + sse_read_write_test(event_id, SBI_SSE_ATTR_STATUS, BIT(32), &value, SBI_ERR_INVALID_PARAM,
>> + "attribute count > 32 bits");
>
> I think you plan to change these to expect them to behave like
> base_attr_id=1 and attr_count=1.
>
>> +#endif
>> +
>> + /* Misaligned pointer address */
>> + ptr = (void *) &value;
>> + ptr += 1;
>> + sse_read_write_test(event_id, SBI_SSE_ATTR_STATUS, 1, ptr, SBI_ERR_INVALID_ADDRESS,
>> + "attribute with invalid address");
>> +
>> + report_prefix_pop();
>> +}
>> +
>> +static void sse_test_register_error(uint32_t event_id)
>> +{
>> + struct sbiret ret;
>> +
>> + report_prefix_push("register");
>> +
>> + ret = sbi_sse_unregister(event_id);
>> + sbiret_report_error(&ret, SBI_ERR_INVALID_STATE, "SSE unregister non registered event");
>
> non-registered
>
>> +
>> + ret = sbi_sse_register_raw(event_id, 0x1, 0);
>> + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "SSE register misaligned entry");
>> +
>> + ret = sbi_sse_register_raw(event_id, virt_to_phys(sbi_sse_entry), 0);
>> + sbiret_report_error(&ret, SBI_SUCCESS, "SSE register ok");
>> + if (ret.error)
>> + goto done;
>> +
>> + ret = sbi_sse_register_raw(event_id, virt_to_phys(sbi_sse_entry), 0);
>> + sbiret_report_error(&ret, SBI_ERR_INVALID_STATE, "SSE register twice failure");
>
> "SSE register used event"
>
>> +
>> + ret = sbi_sse_unregister(event_id);
>> + sbiret_report_error(&ret, SBI_SUCCESS, "SSE unregister ok");
>> +
>> +done:
>> + report_prefix_pop();
>> +}
>> +
>> +struct sse_simple_test_arg {
>> + bool done;
>> + unsigned long expected_a6;
>> + uint32_t event_id;
>> +};
>> +
>> +static void sse_simple_handler(void *data, struct pt_regs *regs, unsigned int hartid)
>> +{
>> + struct sse_simple_test_arg *arg = data;
>> + int i;
>> + struct sbiret ret;
>> + const char *attr_name;
>> + uint32_t event_id = READ_ONCE(arg->event_id), attr;
>> + unsigned long value, prev_value, flags;
>> + unsigned long interrupted_state[ARRAY_SIZE(interrupted_attrs)];
>> + unsigned long modified_state[ARRAY_SIZE(interrupted_attrs)] = {4, 3, 2, 1};
>> + unsigned long tmp_state[ARRAY_SIZE(interrupted_attrs)];
>> +
>> + report((regs->status & SR_SPP) == SR_SPP, "Interrupted S-mode");
>> + report(hartid == current_thread_info()->hartid, "Hartid correctly passed");
>> + sse_check_state(event_id, SBI_SSE_STATE_RUNNING);
>> + report(!sse_event_pending(event_id), "Event not pending");
>> +
>> + /* Read full interrupted state */
>> + ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_INTERRUPTED_SEPC,
>> + ARRAY_SIZE(interrupted_attrs), interrupted_state);
>> + sbiret_report_error(&ret, SBI_SUCCESS, "Save full interrupted state from SSE handler ok");
>> +
>> + /* Write full modified state and read it */
>> + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_INTERRUPTED_SEPC,
>> + ARRAY_SIZE(modified_state), modified_state);
>> + sbiret_report_error(&ret, SBI_SUCCESS,
>> + "Write full interrupted state from SSE handler ok");
>> +
>> + ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_INTERRUPTED_SEPC,
>> + ARRAY_SIZE(tmp_state), tmp_state);
>> + sbiret_report_error(&ret, SBI_SUCCESS, "Read full modified state from SSE handler ok");
>> +
>> + report(memcmp(tmp_state, modified_state, sizeof(modified_state)) == 0,
>> + "Full interrupted state successfully written");
>
> "Full... should line up under the report(, not memcmp(
>
>> +
>> + /* Restore full saved state */
>> + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_INTERRUPTED_SEPC,
>> + ARRAY_SIZE(interrupted_attrs), interrupted_state);
>> + sbiret_report_error(&ret, SBI_SUCCESS,
>> + "Full interrupted state restore from SSE handler ok");
>> +
>> + /* We test SBI_SSE_ATTR_INTERRUPTED_FLAGS below with specific flag values */
>> + for (i = 0; i < ARRAY_SIZE(interrupted_attrs); i++) {
>> + attr = interrupted_attrs[i];
>> + if (attr == SBI_SSE_ATTR_INTERRUPTED_FLAGS)
>> + continue;
>> +
>> + attr_name = attr_names[attr];
>> +
>> + ret = sbi_sse_read_attrs(event_id, attr, 1, &prev_value);
>> + sbiret_report_error(&ret, SBI_SUCCESS, "Get attr %s no error", attr_name);
>> +
>> + value = 0xDEADBEEF + i;
>> + ret = sbi_sse_write_attrs(event_id, attr, 1, &value);
>> + sbiret_report_error(&ret, SBI_SUCCESS, "Set attr %s no error", attr_name);
>> +
>> + ret = sbi_sse_read_attrs(event_id, attr, 1, &value);
>> + sbiret_report_error(&ret, SBI_SUCCESS, "Get attr %s no error", attr_name);
>> + report(value == 0xDEADBEEF + i, "Get attr %s no error, value: 0x%lx", attr_name,
>> + value);
>> +
>> + ret = sbi_sse_write_attrs(event_id, attr, 1, &prev_value);
>> + sbiret_report_error(&ret, SBI_SUCCESS, "Restore attr %s value no error", attr_name);
>> + }
>> +
>> + /* Test all flags allowed for SBI_SSE_ATTR_INTERRUPTED_FLAGS*/
> ^
> missing
> space
>
>> + attr = SBI_SSE_ATTR_INTERRUPTED_FLAGS;
>> + ret = sbi_sse_read_attrs(event_id, attr, 1, &prev_value);
>> + sbiret_report_error(&ret, SBI_SUCCESS, "Save interrupted flags no error");
>> +
>> + for (i = 0; i < ARRAY_SIZE(interrupted_flags); i++) {
>> + flags = interrupted_flags[i];
>> + ret = sbi_sse_write_attrs(event_id, attr, 1, &flags);
>> + sbiret_report_error(&ret, SBI_SUCCESS,
>> + "Set interrupted flags bit 0x%lx value no error", flags);
>> + ret = sbi_sse_read_attrs(event_id, attr, 1, &value);
>> + sbiret_report_error(&ret, SBI_SUCCESS, "Get interrupted flags after set no error");
>> + report(value == flags, "interrupted flags modified value ok: 0x%lx", value);
>
> Do we also need to test with more than one flag set at a time?
That is already done a few lines above (see /* Restore full saved state */).
>
>> + }
>> +
>> + /* Write invalid bit in flag register */
>> + flags = SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SDT << 1;
>> + ret = sbi_sse_write_attrs(event_id, attr, 1, &flags);
>> + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Set invalid flags bit 0x%lx value error",
>> + flags);
>> +#if __riscv_xlen > 32
>> + flags = BIT(32);
>> + ret = sbi_sse_write_attrs(event_id, attr, 1, &flags);
>> + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Set invalid flags bit 0x%lx value error",
>
> This should have a different report string than the test above.
The flag value printed in the format string does differentiate the two reports, though.
>
>> + flags);
>> +#endif
>> +
>> + ret = sbi_sse_write_attrs(event_id, attr, 1, &prev_value);
>> + sbiret_report_error(&ret, SBI_SUCCESS, "Restore interrupted flags no error");
>> +
>> + /* Try to change HARTID/Priority while running */
>> + if (sbi_sse_event_is_global(event_id)) {
>> + value = current_thread_info()->hartid;
>> + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &value);
>> + sbiret_report_error(&ret, SBI_ERR_INVALID_STATE, "Set hart id while running error");
>> + }
>
> SBI_ERR_INVALID_STATE is not listed as a possible error for write_attrs.
Will update the spec.
>
>> +
>> + value = 0;
>> + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value);
>> + sbiret_report_error(&ret, SBI_ERR_INVALID_STATE, "Set priority while running error");
>> +
>> + report(interrupted_state[2] == READ_ONCE(arg->expected_a6),
>> + "Interrupted state a6 ok, expected 0x%lx, got 0x%lx",
>> + READ_ONCE(arg->expected_a6), interrupted_state[2]);
>
> Could just READ_ONCE expected_a6 once.
Ack.
>
>> +
>> + report(interrupted_state[3] == SBI_EXT_SSE,
>> + "Interrupted state a7 ok, expected 0x%x, got 0x%lx", SBI_EXT_SSE,
>> + interrupted_state[3]);
>> +
>> + WRITE_ONCE(arg->done, true);
>> +}
>> +
>> +static void sse_test_inject_simple(uint32_t event_id)
>> +{
>> + unsigned long value;
>> + struct sbiret ret;
>> + struct sse_simple_test_arg test_arg = {.event_id = event_id};
>> + struct sbi_sse_handler_arg args = {
>> + .handler = sse_simple_handler,
>> + .handler_data = (void *) &test_arg,
>> + .stack = sse_alloc_stack(),
>> + };
>> +
>> + report_prefix_push("simple");
>> +
>> + if (!sse_check_state(event_id, SBI_SSE_STATE_UNUSED))
>> + goto done;
>> +
>> + ret = sbi_sse_register(event_id, &args);
>> + if (!sbiret_report_error(&ret, SBI_SUCCESS, "SSE register no error"))
>> + goto done;
>> +
>> + if (!sse_check_state(event_id, SBI_SSE_STATE_REGISTERED))
>> + goto done;
>> +
>> + /* Be sure global events are targeting the current hart */
>> + sse_global_event_set_current_hart(event_id);
>> +
>> + ret = sbi_sse_enable(event_id);
>> + if (!sbiret_report_error(&ret, SBI_SUCCESS, "SSE enable no error"))
>> + goto done;
>> +
>> + if (!sse_check_state(event_id, SBI_SSE_STATE_ENABLED))
>> + goto done;
>> +
>> + ret = sbi_sse_hart_mask();
>> + if (!sbiret_report_error(&ret, SBI_SUCCESS, "SSE hart mask no error"))
>> + goto done;
>> +
>> + ret = sbi_sse_inject(event_id, current_thread_info()->hartid);
>> + if (!sbiret_report_error(&ret, SBI_SUCCESS, "SSE injection masked no error"))
>> + goto done;
>> +
>> + barrier();
>> + report(test_arg.done == 0, "SSE event masked not handled");
>
> Could instead drop the barrier() calls and use READ/WRITE_ONCE with all
> test_arg member accesses.
>
>> +
>> + /*
>> + * When unmasking the SSE events, we expect the pending event to be
>> + * injected immediately, so a6 should be SBI_EXT_SSE_HART_UNMASK
>> + */
>> + test_arg.expected_a6 = SBI_EXT_SSE_HART_UNMASK;
>> + ret = sbi_sse_hart_unmask();
>> + if (!sbiret_report_error(&ret, SBI_SUCCESS, "SSE hart unmask no error"))
>> + goto done;
>> +
>> + barrier();
>> + report(test_arg.done == 1, "SSE event unmasked handled");
>> + test_arg.done = 0;
>> + test_arg.expected_a6 = SBI_EXT_SSE_INJECT;
>> +
>> + /* Set as oneshot and verify it is disabled */
>> + ret = sbi_sse_disable(event_id);
>> + sbiret_report_error(&ret, SBI_SUCCESS, "Disable event ok");
>> + value = SBI_SSE_ATTR_CONFIG_ONESHOT;
>> + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_CONFIG, 1, &value);
>> + sbiret_report_error(&ret, SBI_SUCCESS, "Set event attribute as ONESHOT");
>> + ret = sbi_sse_enable(event_id);
>> + sbiret_report_error(&ret, SBI_SUCCESS, "Enable event ok");
>> +
>> + ret = sbi_sse_inject(event_id, current_thread_info()->hartid);
>> + if (!sbiret_report_error(&ret, SBI_SUCCESS, "SSE second injection no error"))
>> + goto done;
>> +
>> + barrier();
>> + report(test_arg.done == 1, "SSE event handled ok");
>> + test_arg.done = 0;
>> +
>> + if (!sse_check_state(event_id, SBI_SSE_STATE_REGISTERED))
>> + goto done;
>> +
>> + /* Clear ONESHOT FLAG */
>> + value = 0;
>> + sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_CONFIG, 1, &value);
>
> Why clear the oneshot flag before unregistering (and not check if the attr
> write was successful)?
I previously added a loop over the test, but the oneshot flag messed up
the next test. Anyway, I think it's better to restore the event to its
previous state. I'll check the return value, though.
>
>> +
>> + ret = sbi_sse_unregister(event_id);
>> + if (!sbiret_report_error(&ret, SBI_SUCCESS, "SSE unregister no error"))
>> + goto done;
>> +
>> + sse_check_state(event_id, SBI_SSE_STATE_UNUSED);
>> +
>> +done:
>
> Is it ok to leave this function with an event registered/enabled? If not,
> then some of the goto's above should goto other labels which disable and
> unregister.
No, it's not, but it's a massive pain to keep everything coherent when
it fails ;)
>
>> + sse_free_stack(args.stack);
>> + report_prefix_pop();
>> +}
>> +
>> +struct sse_foreign_cpu_test_arg {
>> + bool done;
>> + unsigned int expected_cpu;
>> + uint32_t event_id;
>> +};
>> +
>> +static void sse_foreign_cpu_handler(void *data, struct pt_regs *regs, unsigned int hartid)
>> +{
>> + struct sse_foreign_cpu_test_arg *arg = data;
>> + unsigned int expected_cpu;
>> +
>> + /* For arg content to be visible */
>> + smp_rmb();
>> + expected_cpu = READ_ONCE(arg->expected_cpu);
>> + report(expected_cpu == current_thread_info()->cpu,
>> + "Received event on CPU (%d), expected CPU (%d)", current_thread_info()->cpu,
>> + expected_cpu);
>> +
>> + WRITE_ONCE(arg->done, true);
>> + /* For arg update to be visible for other CPUs */
>> + smp_wmb();
>> +}
>> +
>> +struct sse_local_per_cpu {
>> + struct sbi_sse_handler_arg args;
>> + struct sbiret ret;
>> + struct sse_foreign_cpu_test_arg handler_arg;
>> +};
>> +
>> +static void sse_register_enable_local(void *data)
>> +{
>> + struct sbiret ret;
>> + struct sse_local_per_cpu *cpu_args = data;
>> + struct sse_local_per_cpu *cpu_arg = &cpu_args[current_thread_info()->cpu];
>> + uint32_t event_id = cpu_arg->handler_arg.event_id;
>> +
>> + ret = sbi_sse_register(event_id, &cpu_arg->args);
>> + WRITE_ONCE(cpu_arg->ret, ret);
>> + if (ret.error)
>> + return;
>> +
>> + ret = sbi_sse_enable(event_id);
>> + WRITE_ONCE(cpu_arg->ret, ret);
>> +}
>> +
>> +static void sbi_sse_disable_unregister_local(void *data)
>> +{
>> + struct sbiret ret;
>> + struct sse_local_per_cpu *cpu_args = data;
>> + struct sse_local_per_cpu *cpu_arg = &cpu_args[current_thread_info()->cpu];
>> + uint32_t event_id = cpu_arg->handler_arg.event_id;
>> +
>> + ret = sbi_sse_disable(event_id);
>> + WRITE_ONCE(cpu_arg->ret, ret);
>> + if (ret.error)
>> + return;
>> +
>> + ret = sbi_sse_unregister(event_id);
>> + WRITE_ONCE(cpu_arg->ret, ret);
>> +}
>> +
>> +static void sse_test_inject_local(uint32_t event_id)
>> +{
>> + int cpu;
>> + struct sbiret ret;
>> + struct sse_local_per_cpu *cpu_args, *cpu_arg;
>> + struct sse_foreign_cpu_test_arg *handler_arg;
>> +
>> + cpu_args = calloc(NR_CPUS, sizeof(struct sse_local_per_cpu));
>> +
>> + report_prefix_push("local_dispatch");
>> + for_each_online_cpu(cpu) {
>> + cpu_arg = &cpu_args[cpu];
>> + cpu_arg->handler_arg.event_id = event_id;
>> + cpu_arg->args.stack = sse_alloc_stack();
>> + cpu_arg->args.handler = sse_foreign_cpu_handler;
>> + cpu_arg->args.handler_data = (void *)&cpu_arg->handler_arg;
>> + }
>> +
>> + on_cpus(sse_register_enable_local, cpu_args);
>> + for_each_online_cpu(cpu) {
>> + cpu_arg = &cpu_args[cpu];
>> + ret = cpu_arg->ret;
>> + if (ret.error)
>> + report_abort("CPU failed to register/enable SSE event: %ld",
>> + ret.error);
>
> Do we need this to be an abort?
Probably not, but since events use a state machine, we have probably
messed up the state at that point. The same comment as before applies:
it's a pain to restore events to their previous state properly. Since
this is a test, I figured it might not be *that* important to keep
everything totally sane when something goes wrong. I'll try some
best-effort handling, though.
>
>> +
>> + handler_arg = &cpu_arg->handler_arg;
>> + WRITE_ONCE(handler_arg->expected_cpu, cpu);
>> + /* For handler_arg content to be visible for other CPUs */
>> + smp_wmb();
>> + ret = sbi_sse_inject(event_id, cpus[cpu].hartid);
>> + if (ret.error)
>> + report_abort("CPU failed to register/enable SSE event: %ld",
>> + ret.error);
>
> abort?
>
>> + }
>> +
>> + for_each_online_cpu(cpu) {
>> + handler_arg = &cpu_args[cpu].handler_arg;
>
> Need smp_rmb() here in case we read done=1 on first try in order
> to be sure that all other data ordered wrt the done write can
> be observed.
>
>> + while (!READ_ONCE(handler_arg->done)) {
>
> Maybe a cpu_relax() here.
>
>> + /* For handler_arg update to be visible */
>> + smp_rmb();
>> + }
>> + WRITE_ONCE(handler_arg->done, false);
>> + }
>> +
>> + on_cpus(sbi_sse_disable_unregister_local, cpu_args);
>> + for_each_online_cpu(cpu) {
>> + cpu_arg = &cpu_args[cpu];
>> + ret = READ_ONCE(cpu_arg->ret);
>> + if (ret.error)
>> + report_abort("CPU failed to disable/unregister SSE event: %ld",
>> + ret.error);
>
> abort?
>
>> + }
>> +
>> + for_each_online_cpu(cpu) {
>> + cpu_arg = &cpu_args[cpu];
>> +
>
> delete unnecessary blank line
>
>> + sse_free_stack(cpu_arg->args.stack);
>> + }
>> +
>> + report_pass("local event dispatch on all CPUs");
>> + report_prefix_pop();
>> +}
>> +
>> +static void sse_test_inject_global(uint32_t event_id)
>> +{
>> + unsigned long value;
>> + struct sbiret ret;
>> + unsigned int cpu;
>> + struct sse_foreign_cpu_test_arg test_arg = {.event_id = event_id};
>> + struct sbi_sse_handler_arg args = {
>> + .handler = sse_foreign_cpu_handler,
>> + .handler_data = (void *) &test_arg,
>> + .stack = sse_alloc_stack(),
>> + };
>> + enum sbi_sse_state state;
>> +
>> + report_prefix_push("global_dispatch");
>> +
>> + ret = sbi_sse_register(event_id, &args);
>> + if (!sbiret_report_error(&ret, SBI_SUCCESS, "Register event no error"))
>> + goto err_free_stack;
>> +
>> + for_each_online_cpu(cpu) {
>> + WRITE_ONCE(test_arg.expected_cpu, cpu);
>> + /* For test_arg content to be visible for other CPUs */
>> + smp_wmb();
>> + value = cpu;
>> + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &value);
>> + if (!sbiret_report_error(&ret, SBI_SUCCESS, "Set preferred hart no error"))
>> + goto err_unregister;
>> +
>> + ret = sbi_sse_enable(event_id);
>> + if (!sbiret_report_error(&ret, SBI_SUCCESS, "Enable SSE event no error"))
>> + goto err_unregister;
>> +
>> + ret = sbi_sse_inject(event_id, cpu);
>> + if (!sbiret_report_error(&ret, SBI_SUCCESS, "Inject SSE event no error"))
>> + goto err_disable;
>> +
>
> smp_rmb()
>
>> + while (!READ_ONCE(test_arg.done)) {
>
> cpu_relax()
>
>> + /* For shared test_arg structure */
>> + smp_rmb();
>> + }
>> +
>> + WRITE_ONCE(test_arg.done, false);
>> +
>> + /* Wait for event to be in ENABLED state */
>> + do {
>> + ret = sse_event_get_state(event_id, &state);
>> + if (!sbiret_report_error(&ret, SBI_SUCCESS, "Get SSE event state no error"))
>> + goto err_disable;
>
> cpu_relax() or even use udelay()?
>
>> + } while (state != SBI_SSE_STATE_ENABLED);
>> +
>> +err_disable:
>> + ret = sbi_sse_disable(event_id);
>> + if (!sbiret_report_error(&ret, SBI_SUCCESS, "Disable SSE event no error"))
>> + goto err_unregister;
>> +
>> + report_pass("Global event on CPU %d", cpu);
>> + }
>> +
>> +err_unregister:
>> + ret = sbi_sse_unregister(event_id);
>> + sbiret_report_error(&ret, SBI_SUCCESS, "Unregister SSE event no error");
>> +
>> +err_free_stack:
>> + sse_free_stack(args.stack);
>> + report_prefix_pop();
>> +}
>> +
>> +struct priority_test_arg {
>> + uint32_t event_id;
>> + bool called;
>> + u32 prio;
>> + struct priority_test_arg *next_event_arg;
>> + void (*check_func)(struct priority_test_arg *arg);
>> +};
>> +
>> +static void sse_hi_priority_test_handler(void *arg, struct pt_regs *regs,
>> + unsigned int hartid)
>> +{
>> + struct priority_test_arg *targ = arg;
>> + struct priority_test_arg *next = targ->next_event_arg;
>> +
>> + targ->called = true;
>> + if (next) {
>> + sbi_sse_inject(next->event_id, current_thread_info()->hartid);
>> +
>> + report(!sse_event_pending(next->event_id), "Higher priority event is pending");
>
> "is not pending"
>
>> + report(next->called, "Higher priority event was not handled");
>
> "was handled"
>
>> + }
>> +}
>> +
>> +static void sse_low_priority_test_handler(void *arg, struct pt_regs *regs,
>> + unsigned int hartid)
>> +{
>> + struct priority_test_arg *targ = arg;
>> + struct priority_test_arg *next = targ->next_event_arg;
>> +
>> + targ->called = true;
>> +
>> + if (next) {
>> + sbi_sse_inject(next->event_id, current_thread_info()->hartid);
>> +
>> + report(sse_event_pending(next->event_id), "Lower priority event is pending");
>> + report(!next->called, "Lower priority event %s was handle before %s",
>
> "was not handled"
>
>> + sse_event_name(next->event_id), sse_event_name(targ->event_id));
>> + }
>> +}
>> +
>> +static void sse_test_injection_priority_arg(struct priority_test_arg *in_args,
>> + unsigned int in_args_size,
>> + sbi_sse_handler_fn handler,
>> + const char *test_name)
>> +{
>> + unsigned int i;
>> + unsigned long value;
>> + struct sbiret ret;
>> + uint32_t event_id;
>> + struct priority_test_arg *arg;
>> + unsigned int args_size = 0;
>> + struct sbi_sse_handler_arg event_args[in_args_size];
>> + struct priority_test_arg *args[in_args_size];
>> + void *stack;
>> + struct sbi_sse_handler_arg *event_arg;
>> +
>> + report_prefix_push(test_name);
>> +
>> + for (i = 0; i < in_args_size; i++) {
>> + arg = &in_args[i];
>> + event_id = arg->event_id;
>> + if (!sse_event_can_inject(event_id))
>> + continue;
>> +
>> + args[args_size] = arg;
>> + args_size++;
>> + }
>> +
>> + if (!args_size) {
>> + report_skip("No event injectable");
>> + report_prefix_pop();
>> + goto skip;
>> + }
>> +
>> + for (i = 0; i < args_size; i++) {
>> + arg = args[i];
>> + event_id = arg->event_id;
>> + stack = sse_alloc_stack();
>> +
>> + event_arg = &event_args[i];
>> + event_arg->handler = handler;
>> + event_arg->handler_data = (void *)arg;
>> + event_arg->stack = stack;
>> +
>> + if (i < (args_size - 1))
>> + arg->next_event_arg = args[i + 1];
>> + else
>> + arg->next_event_arg = NULL;
>> +
>> + /* Be sure global events are targeting the current hart */
>> + sse_global_event_set_current_hart(event_id);
>> +
>> + sbi_sse_register(event_id, event_arg);
>> + value = arg->prio;
>> + sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value);
>> + sbi_sse_enable(event_id);
>
> No return code checks for these SSE calls? If we're 99% sure they should
> succeed, then I'd still check them with asserts.
I was a bit lazy here. Since the goal is *not* to check the event
states themselves but rather the ordering, I didn't bother checking
them. As said before, handling errors and event state properly on
failure seemed like a chore to me *just* for testing. I'll try
something better here as well, though.
>
>> + }
>> +
>> + /* Inject first event */
>> + ret = sbi_sse_inject(args[0]->event_id, current_thread_info()->hartid);
>> + sbiret_report_error(&ret, SBI_SUCCESS, "SSE injection no error");
>> +
>> + for (i = 0; i < args_size; i++) {
>> + arg = args[i];
>> + event_id = arg->event_id;
>> +
>> + report(arg->called, "Event %s handler called", sse_event_name(arg->event_id));
>> +
>> + sbi_sse_disable(event_id);
>> + sbi_sse_unregister(event_id);
>
> No return code checks?
>
>> +
>> + event_arg = &event_args[i];
>> + sse_free_stack(event_arg->stack);
>> + }
>> +
>> +skip:
>> + report_prefix_pop();
>
> Isn't this an extra pop()? We should be able to return directly from the
> report_skip() if-block.
Indeed.
>
>> +}
>> +
>> +static struct priority_test_arg hi_prio_args[] = {
>> + {.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE},
>> + {.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE},
>> + {.event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS},
>> + {.event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS},
>> + {.event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW},
>> + {.event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS},
>> + {.event_id = SBI_SSE_EVENT_LOCAL_DOUBLE_TRAP},
>> + {.event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS},
>> +};
>> +
>> +static struct priority_test_arg low_prio_args[] = {
>> + {.event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS},
>> + {.event_id = SBI_SSE_EVENT_LOCAL_DOUBLE_TRAP},
>> + {.event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS},
>> + {.event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW},
>> + {.event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS},
>> + {.event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS},
>> + {.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE},
>> + {.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE},
>> +};
>> +
>> +static struct priority_test_arg prio_args[] = {
>> + {.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE, .prio = 5},
>> + {.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE, .prio = 10},
>> + {.event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS, .prio = 12},
>> + {.event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW, .prio = 15},
>> + {.event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS, .prio = 20},
>> + {.event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS, .prio = 22},
>> + {.event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS, .prio = 25},
>> +};
>> +
>> +static struct priority_test_arg same_prio_args[] = {
>> + {.event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW, .prio = 0},
>> + {.event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS, .prio = 0},
>> + {.event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS, .prio = 10},
>> + {.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE, .prio = 10},
>> + {.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE, .prio = 10},
>> + {.event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS, .prio = 20},
>> + {.event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS, .prio = 20},
>> +};
>
> nit: could tab out the .prio's in order to better tabulate the above two
> structs.
>
>> +
>> +static void sse_test_injection_priority(void)
>> +{
>> + report_prefix_push("prio");
>> +
>> + sse_test_injection_priority_arg(hi_prio_args, ARRAY_SIZE(hi_prio_args),
>> + sse_hi_priority_test_handler, "high");
>> +
>> + sse_test_injection_priority_arg(low_prio_args, ARRAY_SIZE(low_prio_args),
>> + sse_low_priority_test_handler, "low");
>> +
>> + sse_test_injection_priority_arg(prio_args, ARRAY_SIZE(prio_args),
>> + sse_low_priority_test_handler, "changed");
>> +
>> + sse_test_injection_priority_arg(same_prio_args, ARRAY_SIZE(same_prio_args),
>> + sse_low_priority_test_handler, "same_prio_args");
>> +
>> + report_prefix_pop();
>> +}
>> +
>> +static void test_invalid_event_id(unsigned long event_id)
>> +{
>> + struct sbiret ret;
>> + unsigned long value = 0;
>> +
>> + ret = sbi_sse_register_raw(event_id, (unsigned long) sbi_sse_entry, 0);
>> + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
>> + "SSE register event_id 0x%lx invalid param", event_id);
>> +
>> + ret = sbi_sse_unregister(event_id);
>> + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
>> + "SSE unregister event_id 0x%lx invalid param", event_id);
>> +
>> + ret = sbi_sse_enable(event_id);
>> + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
>> + "SSE enable event_id 0x%lx invalid param", event_id);
>> +
>> + ret = sbi_sse_disable(event_id);
>> + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
>> + "SSE disable event_id 0x%lx invalid param", event_id);
>> +
>> + ret = sbi_sse_inject(event_id, 0);
>> + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
>> + "SSE inject event_id 0x%lx invalid param", event_id);
>> +
>> + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value);
>> + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
>> + "SSE write attr event_id 0x%lx invalid param", event_id);
>> +
>> + ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value);
>> + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
>> + "SSE read attr event_id 0x%lx invalid param", event_id);
>
> We can s/invalid param/ from all the strings above since we output
> SBI_ERR_INVALID_PARAM with the latest sbiret_report_error()
Yeah, I'll clean up some of the other strings as well.
>
>> +}
>> +
>> +static void sse_test_invalid_event_id(void)
>> +{
>> +
>> + report_prefix_push("event_id");
>> +
>> +#if __riscv_xlen > 32
>> + test_invalid_event_id(BIT_ULL(32));
>> +#endif
>
> With your latest opensbi changes we'll truncate eventid, so we need
> another invalid eventid for the tests. That's good since we can then test
> for rv32 too.
Indeed.
>
>> +
>> + report_prefix_pop();
>> +}
>> +
>> +static bool sse_can_inject(uint32_t event_id)
>> +{
>> + struct sbiret ret;
>> + unsigned long status;
>> +
>> + ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_STATUS, 1, &status);
>> + if (ret.error)
>
> Don't we want to assert or complain loudly about an unexpected status read
> failure?
Yeah, I can add a warning.
>
>> + return false;
>> +
>> + return !!(status & BIT(SBI_SSE_ATTR_STATUS_INJECT_OFFSET));
>> +}
>> +
>> +static void boot_secondary(void *data)
>
> Maybe name this sse_secondary_boot_and_unmask?
>
>> +{
>> + sbi_sse_hart_unmask();
>> +}
>> +
>> +static void sse_check_mask(void)
>> +{
>> + struct sbiret ret;
>> +
>> + /* Upon boot, events are masked; check that */
>> + ret = sbi_sse_hart_mask();
>> + sbiret_report_error(&ret, SBI_ERR_ALREADY_STOPPED, "SSE hart mask at boot time ok");
>> +
>> + ret = sbi_sse_hart_unmask();
>> + sbiret_report_error(&ret, SBI_SUCCESS, "SSE hart no error ok");
> ^ unmask
>
>> + ret = sbi_sse_hart_unmask();
>> + sbiret_report_error(&ret, SBI_ERR_ALREADY_STARTED, "SSE hart unmask twice error ok");
>> +
>> + ret = sbi_sse_hart_mask();
>> + sbiret_report_error(&ret, SBI_SUCCESS, "SSE hart mask no error");
>> + ret = sbi_sse_hart_mask();
>> + sbiret_report_error(&ret, SBI_ERR_ALREADY_STOPPED, "SSE hart mask twice ok");
>> +}
>> +
>> +void check_sse(void)
>> +{
>> + unsigned long i, event_id;
>> +
>> + report_prefix_push("sse");
>> +
>> + if (!sbi_probe(SBI_EXT_SSE)) {
>> + report_skip("SSE extension not available");
>> + report_prefix_pop();
>> + return;
>> + }
>> +
>> + sse_check_mask();
>> +
>> + /*
>> + * Dummy wakeup of all processors, since some of them will be targeted
>> + * by global events without going through the wakeup call, as well as
>> + * unmasking SSE events on all harts.
>> + */
>> + on_cpus(boot_secondary, NULL);
>> +
>> + sse_test_invalid_event_id();
>> +
>> + for (i = 0; i < ARRAY_SIZE(sse_event_infos); i++) {
>> + event_id = sse_event_infos[i].event_id;
>> + report_prefix_push(sse_event_infos[i].name);
>> + if (!sse_can_inject(event_id)) {
>> + report_skip("Event 0x%lx does not support injection", event_id);
>> + report_prefix_pop();
>> + continue;
>> + } else {
>> + sse_event_infos[i].can_inject = true;
>
> What do we need to cache can_inject=true for if we filter out all events
> which have can_inject=false?
The cached value is used in the priority tests.
> Should we run at least something with
> events which don't support injection?
Yeah, a few tests can be run; I'll modify that.
>
>> + }
>> + sse_test_attrs(event_id);
>> + sse_test_register_error(event_id);
>> + sse_test_inject_simple(event_id);
>> + if (sbi_sse_event_is_global(event_id))
>> + sse_test_inject_global(event_id);
>> + else
>> + sse_test_inject_local(event_id);
>> +
>> + report_prefix_pop();
>> + }
>> +
>> + sse_test_injection_priority();
>> +
>> + report_prefix_pop();
>> +}
>> diff --git a/riscv/sbi.c b/riscv/sbi.c
>> index 7c7a2d2d..49e81bda 100644
>> --- a/riscv/sbi.c
>> +++ b/riscv/sbi.c
>> @@ -32,6 +32,7 @@
>>
>> #define HIGH_ADDR_BOUNDARY ((phys_addr_t)1 << 32)
>>
>> +void check_sse(void);
>> void check_fwft(void);
>>
>> static long __labs(long a)
>> @@ -1439,6 +1440,7 @@ int main(int argc, char **argv)
>> check_hsm();
>> check_dbcn();
>> check_susp();
>> + check_sse();
>> check_fwft();
>>
>> return report_summary();
>> --
>> 2.47.2
>>
>
> Thanks,
> drew
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [kvm-unit-tests PATCH v7 6/6] riscv: sbi: Add SSE extension tests
2025-03-06 14:32 ` Clément Léger
@ 2025-03-06 15:15 ` Andrew Jones
2025-03-06 15:18 ` Clément Léger
0 siblings, 1 reply; 14+ messages in thread
From: Andrew Jones @ 2025-03-06 15:15 UTC (permalink / raw)
To: Clément Léger
Cc: kvm, kvm-riscv, Andrew Jones, Anup Patel, Atish Patra
On Thu, Mar 06, 2025 at 03:32:39PM +0100, Clément Léger wrote:
>
>
> On 28/02/2025 18:51, Andrew Jones wrote:
...
> >> + attr = SBI_SSE_ATTR_INTERRUPTED_FLAGS;
> >> + ret = sbi_sse_read_attrs(event_id, attr, 1, &prev_value);
> >> + sbiret_report_error(&ret, SBI_SUCCESS, "Save interrupted flags no error");
> >> +
> >> + for (i = 0; i < ARRAY_SIZE(interrupted_flags); i++) {
> >> + flags = interrupted_flags[i];
> >> + ret = sbi_sse_write_attrs(event_id, attr, 1, &flags);
> >> + sbiret_report_error(&ret, SBI_SUCCESS,
> >> + "Set interrupted flags bit 0x%lx value no error", flags);
> >> + ret = sbi_sse_read_attrs(event_id, attr, 1, &value);
> >> + sbiret_report_error(&ret, SBI_SUCCESS, "Get interrupted flags after set no error");
> >> + report(value == flags, "interrupted flags modified value ok: 0x%lx", value);
> >
> > Do we also need to test with more than one flag set at a time?
>
> That is already done a few lines above (see /* Restore full saved state */).
OK
>
> >
> >> + }
> >> +
> >> + /* Write invalid bit in flag register */
> >> + flags = SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SDT << 1;
> >> + ret = sbi_sse_write_attrs(event_id, attr, 1, &flags);
> >> + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Set invalid flags bit 0x%lx value error",
> >> + flags);
> >> +#if __riscv_xlen > 32
> >> + flags = BIT(32);
> >> + ret = sbi_sse_write_attrs(event_id, attr, 1, &flags);
> >> + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Set invalid flags bit 0x%lx value error",
> >
> > This should have a different report string than the test above.
>
> The bit value in the format string does differentiate the output, though.
OK
...
> >> + ret = sbi_sse_unregister(event_id);
> >> + if (!sbiret_report_error(&ret, SBI_SUCCESS, "SSE unregister no error"))
> >> + goto done;
> >> +
> >> + sse_check_state(event_id, SBI_SSE_STATE_UNUSED);
> >> +
> >> +done:
> >
> > Is it ok to leave this function with an event registered/enabled? If not,
> > then some of the goto's above should goto other labels which disable and
> > unregister.
>
No, it's not, but it's a massive pain to keep everything coherent when
it fails ;)
>
asserts/aborts are fine if we can't recover easily, but then we should
move the SSE tests out of the main SBI test into its own test so we
don't short-circuit all other tests that may follow it.
...
> >> + /* Be sure global events are targeting the current hart */
> >> + sse_global_event_set_current_hart(event_id);
> >> +
> >> + sbi_sse_register(event_id, event_arg);
> >> + value = arg->prio;
> >> + sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value);
> >> + sbi_sse_enable(event_id);
> >
> > No return code checks for these SSE calls? If we're 99% sure they should
> > succeed, then I'd still check them with asserts.
>
I was a bit lazy here. Since the goal is *not* to check the event state
itself but rather the ordering, I didn't bother checking them. As said
before, handling errors and event state properly in case of failure
seemed like a churn to me *just* for testing. I'll try something better
as well, though.
>
We always want at least asserts() in order to catch the train when it
first goes off the rails, rather than after it has smashed through a
village or two.
Thanks,
drew
* Re: [kvm-unit-tests PATCH v7 6/6] riscv: sbi: Add SSE extension tests
2025-03-06 15:15 ` Andrew Jones
@ 2025-03-06 15:18 ` Clément Léger
0 siblings, 0 replies; 14+ messages in thread
From: Clément Léger @ 2025-03-06 15:18 UTC (permalink / raw)
To: Andrew Jones; +Cc: kvm, kvm-riscv, Andrew Jones, Anup Patel, Atish Patra
On 06/03/2025 16:15, Andrew Jones wrote:
> On Thu, Mar 06, 2025 at 03:32:39PM +0100, Clément Léger wrote:
>>
>>
>> On 28/02/2025 18:51, Andrew Jones wrote:
> ...
>>>> + attr = SBI_SSE_ATTR_INTERRUPTED_FLAGS;
>>>> + ret = sbi_sse_read_attrs(event_id, attr, 1, &prev_value);
>>>> + sbiret_report_error(&ret, SBI_SUCCESS, "Save interrupted flags no error");
>>>> +
>>>> + for (i = 0; i < ARRAY_SIZE(interrupted_flags); i++) {
>>>> + flags = interrupted_flags[i];
>>>> + ret = sbi_sse_write_attrs(event_id, attr, 1, &flags);
>>>> + sbiret_report_error(&ret, SBI_SUCCESS,
>>>> + "Set interrupted flags bit 0x%lx value no error", flags);
>>>> + ret = sbi_sse_read_attrs(event_id, attr, 1, &value);
>>>> + sbiret_report_error(&ret, SBI_SUCCESS, "Get interrupted flags after set no error");
>>>> + report(value == flags, "interrupted flags modified value ok: 0x%lx", value);
>>>
>>> Do we also need to test with more than one flag set at a time?
>>
>> That is already done a few lines above (see /* Restore full saved state */).
>
> OK
>
>>
>>>
>>>> + }
>>>> +
>>>> + /* Write invalid bit in flag register */
>>>> + flags = SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SDT << 1;
>>>> + ret = sbi_sse_write_attrs(event_id, attr, 1, &flags);
>>>> + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Set invalid flags bit 0x%lx value error",
>>>> + flags);
>>>> +#if __riscv_xlen > 32
>>>> + flags = BIT(32);
>>>> + ret = sbi_sse_write_attrs(event_id, attr, 1, &flags);
>>>> + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Set invalid flags bit 0x%lx value error",
>>>
>>> This should have a different report string than the test above.
>>
>> The bit value in the format string does differentiate the output, though.
>
> OK
>
> ...
>>>> + ret = sbi_sse_unregister(event_id);
>>>> + if (!sbiret_report_error(&ret, SBI_SUCCESS, "SSE unregister no error"))
>>>> + goto done;
>>>> +
>>>> + sse_check_state(event_id, SBI_SSE_STATE_UNUSED);
>>>> +
>>>> +done:
>>>
>>> Is it ok to leave this function with an event registered/enabled? If not,
>>> then some of the goto's above should goto other labels which disable and
>>> unregister.
>>
>> No it's not but it's massive pain to keep everything coherent when it
>> fails ;)
>>
>
> asserts/aborts are fine if we can't recover easily, but then we should
> move the SSE tests out of the main SBI test into its own test so we
> don't short-circuit all other tests that may follow it.
Oh yes, I did not think of that. I could also short-circuit the SSE
event tests themselves. Currently the test functions return void, but
that could be handled more gracefully so that we don't break all the
other tests as well.
>
> ...
>>>> + /* Be sure global events are targeting the current hart */
>>>> + sse_global_event_set_current_hart(event_id);
>>>> +
>>>> + sbi_sse_register(event_id, event_arg);
>>>> + value = arg->prio;
>>>> + sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value);
>>>> + sbi_sse_enable(event_id);
>>>
>>> No return code checks for these SSE calls? If we're 99% sure they should
>>> succeed, then I'd still check them with asserts.
>>
>> I was a bit lazy here. Since the goal is *not* to check the event state
>> itself but rather the ordering, I didn't bother checking them. As said
>> before, handling errors and event state properly in case of failure
>> seemed like a churn to me *just* for testing. I'll try something better
>> as well, though.
>>
>
> We always want at least asserts() in order to catch the train when it
> first goes off the rails, rather than after it has smashed through a
> village or two.
Yeah, sure ;) I'll try to do what I said before: short-circuit the
remaining SSE tests for the event itself rather than aborting or
splitting the tests.
Thanks,
Clément
>
> Thanks,
> drew
end of thread, other threads:[~2025-03-06 15:18 UTC | newest]
Thread overview: 14+ messages
2025-02-14 11:44 [kvm-unit-tests PATCH v7 0/6] riscv: add SBI SSE extension tests Clément Léger
2025-02-14 11:44 ` [kvm-unit-tests PATCH v7 1/6] kbuild: Allow multiple asm-offsets file to be generated Clément Léger
2025-02-14 11:44 ` [kvm-unit-tests PATCH v7 2/6] riscv: Set .aux.o files as .PRECIOUS Clément Léger
2025-02-27 15:36 ` Andrew Jones
2025-02-14 11:44 ` [kvm-unit-tests PATCH v7 3/6] riscv: Use asm-offsets to generate SBI_EXT_HSM values Clément Léger
2025-02-14 11:44 ` [kvm-unit-tests PATCH v7 4/6] riscv: lib: Add SBI SSE extension definitions Clément Léger
2025-02-14 11:44 ` [kvm-unit-tests PATCH v7 5/6] lib: riscv: Add SBI SSE support Clément Léger
2025-02-27 16:03 ` Andrew Jones
2025-03-06 10:04 ` Clément Léger
2025-02-14 11:44 ` [kvm-unit-tests PATCH v7 6/6] riscv: sbi: Add SSE extension tests Clément Léger
2025-02-28 17:51 ` Andrew Jones
2025-03-06 14:32 ` Clément Léger
2025-03-06 15:15 ` Andrew Jones
2025-03-06 15:18 ` Clément Léger