linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH v4 00/28] KVM: arm64: NV trap forwarding infrastructure
@ 2023-08-15 18:38 Marc Zyngier
  2023-08-15 18:38 ` [PATCH v4 01/28] arm64: Add missing VA CMO encodings Marc Zyngier
                   ` (28 more replies)
  0 siblings, 29 replies; 48+ messages in thread
From: Marc Zyngier @ 2023-08-15 18:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: Catalin Marinas, Eric Auger, Mark Brown, Mark Rutland,
	Will Deacon, Alexandru Elisei, Andre Przywara, Chase Conklin,
	Ganapatrao Kulkarni, Darren Hart, Miguel Luis, Jing Zhang,
	James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu

Another week, another version. Change log below.

I'll drop this into -next now, and see what happens.

* From v3 [3]:

  - Renamed trap_group to cgt_group_id (Eric)

  - Plenty of comment rework (Eric)

  - Fix HCR_EL2.FIEN handling (Miguel)

  - Fix missing validation of last MCB entry (Miguel)

  - Fix instance of recursive MCB (Jing)

  - Handle error return from xa_store()/xa_store_range() (Jing)

  - Propagate error generated from populate_nv_trap_config() (Jing)

  - Added more consistency checks for sysregs and trap bits

  - Added a line number entry, which is useful for debugging
    overlapping entries.

  - Fixed duplicate entries for SP_EL1, DBGDTRRX_EL0 and SCXTNUM_EL0

  - Correctly handle fine grained trapping for ERET

  - Collected RBs, with thanks

* From v2 [2]:

  - Another set of fixups, thanks to Oliver, Eric and Miguel: TRCID
    bits, duplicate encodings, sanity checking, error handling at boot
    time, spelling mistakes...

  - Split the HFGxTR_EL2 patch in two patches: one that provides the
    FGT infrastructure, and one that provides the HFGxTR_EL2 traps. It
    makes it easier to review and matches the rest of the series.

  - Collected RBs, with thanks

* From v1 [1]:

  - Lots of fixups all over the map (too many to mention) after Eric's
    fantastic reviewing effort. Hopefully the result is easier to
    understand and less wrong

  - Amended Mark's patch to use the ARM64_CPUID_FIELDS() macro

  - Collected RBs, with thanks.

[1] https://lore.kernel.org/all/20230712145810.3864793-1-maz@kernel.org
[2] https://lore.kernel.org/all/20230728082952.959212-1-maz@kernel.org
[3] https://lore.kernel.org/all/20230808114711.2013842-1-maz@kernel.org

Marc Zyngier (27):
  arm64: Add missing VA CMO encodings
  arm64: Add missing ERX*_EL1 encodings
  arm64: Add missing DC ZVA/GVA/GZVA encodings
  arm64: Add TLBI operation encodings
  arm64: Add AT operation encodings
  arm64: Add debug registers affected by HDFGxTR_EL2
  arm64: Add missing BRB/CFP/DVP/CPP instructions
  arm64: Add HDFGRTR_EL2 and HDFGWTR_EL2 layouts
  KVM: arm64: Correctly handle ACCDATA_EL1 traps
  KVM: arm64: Add missing HCR_EL2 trap bits
  KVM: arm64: nv: Add FGT registers
  KVM: arm64: Restructure FGT register switching
  KVM: arm64: nv: Add trap forwarding infrastructure
  KVM: arm64: nv: Add trap forwarding for HCR_EL2
  KVM: arm64: nv: Expose FEAT_EVT to nested guests
  KVM: arm64: nv: Add trap forwarding for MDCR_EL2
  KVM: arm64: nv: Add trap forwarding for CNTHCTL_EL2
  KVM: arm64: nv: Add fine grained trap forwarding infrastructure
  KVM: arm64: nv: Add trap forwarding for HFGxTR_EL2
  KVM: arm64: nv: Add trap forwarding for HFGITR_EL2
  KVM: arm64: nv: Add trap forwarding for HDFGxTR_EL2
  KVM: arm64: nv: Add SVC trap forwarding
  KVM: arm64: nv: Expand ERET trap forwarding to handle FGT
  KVM: arm64: nv: Add switching support for HFGxTR/HDFGxTR
  KVM: arm64: nv: Expose FGT to nested guests
  KVM: arm64: Move HCRX_EL2 switch to load/put on VHE systems
  KVM: arm64: nv: Add support for HCRX_EL2

Mark Brown (1):
  arm64: Add feature detection for fine grained traps

 arch/arm64/include/asm/kvm_arm.h        |   50 +
 arch/arm64/include/asm/kvm_host.h       |    7 +
 arch/arm64/include/asm/kvm_nested.h     |    2 +
 arch/arm64/include/asm/sysreg.h         |  268 +++-
 arch/arm64/kernel/cpufeature.c          |    7 +
 arch/arm64/kvm/arm.c                    |    4 +
 arch/arm64/kvm/emulate-nested.c         | 1850 +++++++++++++++++++++++
 arch/arm64/kvm/handle_exit.c            |   29 +-
 arch/arm64/kvm/hyp/include/hyp/switch.h |  127 +-
 arch/arm64/kvm/nested.c                 |   11 +-
 arch/arm64/kvm/sys_regs.c               |   15 +
 arch/arm64/kvm/trace_arm.h              |   26 +
 arch/arm64/tools/cpucaps                |    1 +
 arch/arm64/tools/sysreg                 |  129 ++
 14 files changed, 2485 insertions(+), 41 deletions(-)

-- 
2.34.1



* [PATCH v4 01/28] arm64: Add missing VA CMO encodings
  2023-08-15 18:38 [PATCH v4 00/28] KVM: arm64: NV trap forwarding infrastructure Marc Zyngier
@ 2023-08-15 18:38 ` Marc Zyngier
  2023-08-15 18:38 ` [PATCH v4 02/28] arm64: Add missing ERX*_EL1 encodings Marc Zyngier
                   ` (27 subsequent siblings)
  28 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2023-08-15 18:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: Catalin Marinas, Eric Auger, Mark Brown, Mark Rutland,
	Will Deacon, Alexandru Elisei, Andre Przywara, Chase Conklin,
	Ganapatrao Kulkarni, Darren Hart, Miguel Luis, Jing Zhang,
	James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu

Add the missing VA-based CMO encodings.
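
For context (a simplified sketch, not part of the patch): sys_insn()
is an alias for sys_reg(), which packs the Op0/Op1/CRn/CRm/Op2 fields
of the instruction into a single value using the shifts defined in
sysreg.h, roughly:

	#define sys_insn(op0, op1, crn, crm, op2)		\
		(((op0) << Op0_shift) | ((op1) << Op1_shift) |	\
		 ((crn) << CRn_shift) | ((crm) << CRm_shift) |	\
		 ((op2) << Op2_shift))

	/* e.g. SYS_DC_CIVAC below is the packed form of "DC CIVAC" */

This layout mirrors the ISS encoding of an EC=0x18 trap, which is
what lets trapped accesses be matched directly against these
constants later in the series.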

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Miguel Luis <miguel.luis@oracle.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Reviewed-by: Jing Zhang <jingzhangos@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/sysreg.h | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index b481935e9314..85447e68951a 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -124,6 +124,32 @@
 #define SYS_DC_CIGSW			sys_insn(1, 0, 7, 14, 4)
 #define SYS_DC_CIGDSW			sys_insn(1, 0, 7, 14, 6)
 
+#define SYS_IC_IALLUIS			sys_insn(1, 0, 7, 1, 0)
+#define SYS_IC_IALLU			sys_insn(1, 0, 7, 5, 0)
+#define SYS_IC_IVAU			sys_insn(1, 3, 7, 5, 1)
+
+#define SYS_DC_IVAC			sys_insn(1, 0, 7, 6, 1)
+#define SYS_DC_IGVAC			sys_insn(1, 0, 7, 6, 3)
+#define SYS_DC_IGDVAC			sys_insn(1, 0, 7, 6, 5)
+
+#define SYS_DC_CVAC			sys_insn(1, 3, 7, 10, 1)
+#define SYS_DC_CGVAC			sys_insn(1, 3, 7, 10, 3)
+#define SYS_DC_CGDVAC			sys_insn(1, 3, 7, 10, 5)
+
+#define SYS_DC_CVAU			sys_insn(1, 3, 7, 11, 1)
+
+#define SYS_DC_CVAP			sys_insn(1, 3, 7, 12, 1)
+#define SYS_DC_CGVAP			sys_insn(1, 3, 7, 12, 3)
+#define SYS_DC_CGDVAP			sys_insn(1, 3, 7, 12, 5)
+
+#define SYS_DC_CVADP			sys_insn(1, 3, 7, 13, 1)
+#define SYS_DC_CGVADP			sys_insn(1, 3, 7, 13, 3)
+#define SYS_DC_CGDVADP			sys_insn(1, 3, 7, 13, 5)
+
+#define SYS_DC_CIVAC			sys_insn(1, 3, 7, 14, 1)
+#define SYS_DC_CIGVAC			sys_insn(1, 3, 7, 14, 3)
+#define SYS_DC_CIGDVAC			sys_insn(1, 3, 7, 14, 5)
+
 /*
  * Automatically generated definitions for system registers, the
  * manual encodings below are in the process of being converted to
-- 
2.34.1



* [PATCH v4 02/28] arm64: Add missing ERX*_EL1 encodings
  2023-08-15 18:38 [PATCH v4 00/28] KVM: arm64: NV trap forwarding infrastructure Marc Zyngier
  2023-08-15 18:38 ` [PATCH v4 01/28] arm64: Add missing VA CMO encodings Marc Zyngier
@ 2023-08-15 18:38 ` Marc Zyngier
  2023-08-15 18:38 ` [PATCH v4 03/28] arm64: Add missing DC ZVA/GVA/GZVA encodings Marc Zyngier
                   ` (26 subsequent siblings)
  28 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2023-08-15 18:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: Catalin Marinas, Eric Auger, Mark Brown, Mark Rutland,
	Will Deacon, Alexandru Elisei, Andre Przywara, Chase Conklin,
	Ganapatrao Kulkarni, Darren Hart, Miguel Luis, Jing Zhang,
	James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu

We only describe a few of the ERX*_EL1 registers. Add the missing
ones (ERXPFGF_EL1, ERXPFGCTL_EL1, ERXPFGCDN_EL1, ERXMISC2_EL1 and
ERXMISC3_EL1).

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Miguel Luis <miguel.luis@oracle.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Reviewed-by: Jing Zhang <jingzhangos@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/sysreg.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 85447e68951a..ed2739897859 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -229,8 +229,13 @@
 #define SYS_ERXCTLR_EL1			sys_reg(3, 0, 5, 4, 1)
 #define SYS_ERXSTATUS_EL1		sys_reg(3, 0, 5, 4, 2)
 #define SYS_ERXADDR_EL1			sys_reg(3, 0, 5, 4, 3)
+#define SYS_ERXPFGF_EL1			sys_reg(3, 0, 5, 4, 4)
+#define SYS_ERXPFGCTL_EL1		sys_reg(3, 0, 5, 4, 5)
+#define SYS_ERXPFGCDN_EL1		sys_reg(3, 0, 5, 4, 6)
 #define SYS_ERXMISC0_EL1		sys_reg(3, 0, 5, 5, 0)
 #define SYS_ERXMISC1_EL1		sys_reg(3, 0, 5, 5, 1)
+#define SYS_ERXMISC2_EL1		sys_reg(3, 0, 5, 5, 2)
+#define SYS_ERXMISC3_EL1		sys_reg(3, 0, 5, 5, 3)
 #define SYS_TFSR_EL1			sys_reg(3, 0, 5, 6, 0)
 #define SYS_TFSRE0_EL1			sys_reg(3, 0, 5, 6, 1)
 
-- 
2.34.1



* [PATCH v4 03/28] arm64: Add missing DC ZVA/GVA/GZVA encodings
  2023-08-15 18:38 [PATCH v4 00/28] KVM: arm64: NV trap forwarding infrastructure Marc Zyngier
  2023-08-15 18:38 ` [PATCH v4 01/28] arm64: Add missing VA CMO encodings Marc Zyngier
  2023-08-15 18:38 ` [PATCH v4 02/28] arm64: Add missing ERX*_EL1 encodings Marc Zyngier
@ 2023-08-15 18:38 ` Marc Zyngier
  2023-08-15 18:38 ` [PATCH v4 04/28] arm64: Add TLBI operation encodings Marc Zyngier
                   ` (25 subsequent siblings)
  28 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2023-08-15 18:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: Catalin Marinas, Eric Auger, Mark Brown, Mark Rutland,
	Will Deacon, Alexandru Elisei, Andre Przywara, Chase Conklin,
	Ganapatrao Kulkarni, Darren Hart, Miguel Luis, Jing Zhang,
	James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu

Add the missing DC *VA encodings.

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Miguel Luis <miguel.luis@oracle.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Reviewed-by: Jing Zhang <jingzhangos@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/sysreg.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index ed2739897859..5084add86897 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -150,6 +150,11 @@
 #define SYS_DC_CIGVAC			sys_insn(1, 3, 7, 14, 3)
 #define SYS_DC_CIGDVAC			sys_insn(1, 3, 7, 14, 5)
 
+/* Data cache zero operations */
+#define SYS_DC_ZVA			sys_insn(1, 3, 7, 4, 1)
+#define SYS_DC_GVA			sys_insn(1, 3, 7, 4, 3)
+#define SYS_DC_GZVA			sys_insn(1, 3, 7, 4, 4)
+
 /*
  * Automatically generated definitions for system registers, the
  * manual encodings below are in the process of being converted to
-- 
2.34.1



* [PATCH v4 04/28] arm64: Add TLBI operation encodings
  2023-08-15 18:38 [PATCH v4 00/28] KVM: arm64: NV trap forwarding infrastructure Marc Zyngier
                   ` (2 preceding siblings ...)
  2023-08-15 18:38 ` [PATCH v4 03/28] arm64: Add missing DC ZVA/GVA/GZVA encodings Marc Zyngier
@ 2023-08-15 18:38 ` Marc Zyngier
  2023-08-15 18:38 ` [PATCH v4 05/28] arm64: Add AT " Marc Zyngier
                   ` (24 subsequent siblings)
  28 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2023-08-15 18:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: Catalin Marinas, Eric Auger, Mark Brown, Mark Rutland,
	Will Deacon, Alexandru Elisei, Andre Przywara, Chase Conklin,
	Ganapatrao Kulkarni, Darren Hart, Miguel Luis, Jing Zhang,
	James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu

Add all the TLBI encodings that are usable from Non-Secure.
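
As a sketch of the intended use (the actual lookup only lands with
the trap forwarding patches later in the series), a trapped TLBI
arrives as an EC=0x18 exception whose ISS carries the very same
encoding:

	u32 sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_esr(vcpu));

	if (sysreg == OP_TLBI_VMALLE1IS) {
		/* forward to the L1 hypervisor, or handle locally */
	}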

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Miguel Luis <miguel.luis@oracle.com>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Reviewed-by: Jing Zhang <jingzhangos@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/sysreg.h | 128 ++++++++++++++++++++++++++++++++
 1 file changed, 128 insertions(+)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 5084add86897..72e18480ce62 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -514,6 +514,134 @@
 
 #define SYS_SP_EL2			sys_reg(3, 6,  4, 1, 0)
 
+/* TLBI instructions */
+#define OP_TLBI_VMALLE1OS		sys_insn(1, 0, 8, 1, 0)
+#define OP_TLBI_VAE1OS			sys_insn(1, 0, 8, 1, 1)
+#define OP_TLBI_ASIDE1OS		sys_insn(1, 0, 8, 1, 2)
+#define OP_TLBI_VAAE1OS			sys_insn(1, 0, 8, 1, 3)
+#define OP_TLBI_VALE1OS			sys_insn(1, 0, 8, 1, 5)
+#define OP_TLBI_VAALE1OS		sys_insn(1, 0, 8, 1, 7)
+#define OP_TLBI_RVAE1IS			sys_insn(1, 0, 8, 2, 1)
+#define OP_TLBI_RVAAE1IS		sys_insn(1, 0, 8, 2, 3)
+#define OP_TLBI_RVALE1IS		sys_insn(1, 0, 8, 2, 5)
+#define OP_TLBI_RVAALE1IS		sys_insn(1, 0, 8, 2, 7)
+#define OP_TLBI_VMALLE1IS		sys_insn(1, 0, 8, 3, 0)
+#define OP_TLBI_VAE1IS			sys_insn(1, 0, 8, 3, 1)
+#define OP_TLBI_ASIDE1IS		sys_insn(1, 0, 8, 3, 2)
+#define OP_TLBI_VAAE1IS			sys_insn(1, 0, 8, 3, 3)
+#define OP_TLBI_VALE1IS			sys_insn(1, 0, 8, 3, 5)
+#define OP_TLBI_VAALE1IS		sys_insn(1, 0, 8, 3, 7)
+#define OP_TLBI_RVAE1OS			sys_insn(1, 0, 8, 5, 1)
+#define OP_TLBI_RVAAE1OS		sys_insn(1, 0, 8, 5, 3)
+#define OP_TLBI_RVALE1OS		sys_insn(1, 0, 8, 5, 5)
+#define OP_TLBI_RVAALE1OS		sys_insn(1, 0, 8, 5, 7)
+#define OP_TLBI_RVAE1			sys_insn(1, 0, 8, 6, 1)
+#define OP_TLBI_RVAAE1			sys_insn(1, 0, 8, 6, 3)
+#define OP_TLBI_RVALE1			sys_insn(1, 0, 8, 6, 5)
+#define OP_TLBI_RVAALE1			sys_insn(1, 0, 8, 6, 7)
+#define OP_TLBI_VMALLE1			sys_insn(1, 0, 8, 7, 0)
+#define OP_TLBI_VAE1			sys_insn(1, 0, 8, 7, 1)
+#define OP_TLBI_ASIDE1			sys_insn(1, 0, 8, 7, 2)
+#define OP_TLBI_VAAE1			sys_insn(1, 0, 8, 7, 3)
+#define OP_TLBI_VALE1			sys_insn(1, 0, 8, 7, 5)
+#define OP_TLBI_VAALE1			sys_insn(1, 0, 8, 7, 7)
+#define OP_TLBI_VMALLE1OSNXS		sys_insn(1, 0, 9, 1, 0)
+#define OP_TLBI_VAE1OSNXS		sys_insn(1, 0, 9, 1, 1)
+#define OP_TLBI_ASIDE1OSNXS		sys_insn(1, 0, 9, 1, 2)
+#define OP_TLBI_VAAE1OSNXS		sys_insn(1, 0, 9, 1, 3)
+#define OP_TLBI_VALE1OSNXS		sys_insn(1, 0, 9, 1, 5)
+#define OP_TLBI_VAALE1OSNXS		sys_insn(1, 0, 9, 1, 7)
+#define OP_TLBI_RVAE1ISNXS		sys_insn(1, 0, 9, 2, 1)
+#define OP_TLBI_RVAAE1ISNXS		sys_insn(1, 0, 9, 2, 3)
+#define OP_TLBI_RVALE1ISNXS		sys_insn(1, 0, 9, 2, 5)
+#define OP_TLBI_RVAALE1ISNXS		sys_insn(1, 0, 9, 2, 7)
+#define OP_TLBI_VMALLE1ISNXS		sys_insn(1, 0, 9, 3, 0)
+#define OP_TLBI_VAE1ISNXS		sys_insn(1, 0, 9, 3, 1)
+#define OP_TLBI_ASIDE1ISNXS		sys_insn(1, 0, 9, 3, 2)
+#define OP_TLBI_VAAE1ISNXS		sys_insn(1, 0, 9, 3, 3)
+#define OP_TLBI_VALE1ISNXS		sys_insn(1, 0, 9, 3, 5)
+#define OP_TLBI_VAALE1ISNXS		sys_insn(1, 0, 9, 3, 7)
+#define OP_TLBI_RVAE1OSNXS		sys_insn(1, 0, 9, 5, 1)
+#define OP_TLBI_RVAAE1OSNXS		sys_insn(1, 0, 9, 5, 3)
+#define OP_TLBI_RVALE1OSNXS		sys_insn(1, 0, 9, 5, 5)
+#define OP_TLBI_RVAALE1OSNXS		sys_insn(1, 0, 9, 5, 7)
+#define OP_TLBI_RVAE1NXS		sys_insn(1, 0, 9, 6, 1)
+#define OP_TLBI_RVAAE1NXS		sys_insn(1, 0, 9, 6, 3)
+#define OP_TLBI_RVALE1NXS		sys_insn(1, 0, 9, 6, 5)
+#define OP_TLBI_RVAALE1NXS		sys_insn(1, 0, 9, 6, 7)
+#define OP_TLBI_VMALLE1NXS		sys_insn(1, 0, 9, 7, 0)
+#define OP_TLBI_VAE1NXS			sys_insn(1, 0, 9, 7, 1)
+#define OP_TLBI_ASIDE1NXS		sys_insn(1, 0, 9, 7, 2)
+#define OP_TLBI_VAAE1NXS		sys_insn(1, 0, 9, 7, 3)
+#define OP_TLBI_VALE1NXS		sys_insn(1, 0, 9, 7, 5)
+#define OP_TLBI_VAALE1NXS		sys_insn(1, 0, 9, 7, 7)
+#define OP_TLBI_IPAS2E1IS		sys_insn(1, 4, 8, 0, 1)
+#define OP_TLBI_RIPAS2E1IS		sys_insn(1, 4, 8, 0, 2)
+#define OP_TLBI_IPAS2LE1IS		sys_insn(1, 4, 8, 0, 5)
+#define OP_TLBI_RIPAS2LE1IS		sys_insn(1, 4, 8, 0, 6)
+#define OP_TLBI_ALLE2OS			sys_insn(1, 4, 8, 1, 0)
+#define OP_TLBI_VAE2OS			sys_insn(1, 4, 8, 1, 1)
+#define OP_TLBI_ALLE1OS			sys_insn(1, 4, 8, 1, 4)
+#define OP_TLBI_VALE2OS			sys_insn(1, 4, 8, 1, 5)
+#define OP_TLBI_VMALLS12E1OS		sys_insn(1, 4, 8, 1, 6)
+#define OP_TLBI_RVAE2IS			sys_insn(1, 4, 8, 2, 1)
+#define OP_TLBI_RVALE2IS		sys_insn(1, 4, 8, 2, 5)
+#define OP_TLBI_ALLE2IS			sys_insn(1, 4, 8, 3, 0)
+#define OP_TLBI_VAE2IS			sys_insn(1, 4, 8, 3, 1)
+#define OP_TLBI_ALLE1IS			sys_insn(1, 4, 8, 3, 4)
+#define OP_TLBI_VALE2IS			sys_insn(1, 4, 8, 3, 5)
+#define OP_TLBI_VMALLS12E1IS		sys_insn(1, 4, 8, 3, 6)
+#define OP_TLBI_IPAS2E1OS		sys_insn(1, 4, 8, 4, 0)
+#define OP_TLBI_IPAS2E1			sys_insn(1, 4, 8, 4, 1)
+#define OP_TLBI_RIPAS2E1		sys_insn(1, 4, 8, 4, 2)
+#define OP_TLBI_RIPAS2E1OS		sys_insn(1, 4, 8, 4, 3)
+#define OP_TLBI_IPAS2LE1OS		sys_insn(1, 4, 8, 4, 4)
+#define OP_TLBI_IPAS2LE1		sys_insn(1, 4, 8, 4, 5)
+#define OP_TLBI_RIPAS2LE1		sys_insn(1, 4, 8, 4, 6)
+#define OP_TLBI_RIPAS2LE1OS		sys_insn(1, 4, 8, 4, 7)
+#define OP_TLBI_RVAE2OS			sys_insn(1, 4, 8, 5, 1)
+#define OP_TLBI_RVALE2OS		sys_insn(1, 4, 8, 5, 5)
+#define OP_TLBI_RVAE2			sys_insn(1, 4, 8, 6, 1)
+#define OP_TLBI_RVALE2			sys_insn(1, 4, 8, 6, 5)
+#define OP_TLBI_ALLE2			sys_insn(1, 4, 8, 7, 0)
+#define OP_TLBI_VAE2			sys_insn(1, 4, 8, 7, 1)
+#define OP_TLBI_ALLE1			sys_insn(1, 4, 8, 7, 4)
+#define OP_TLBI_VALE2			sys_insn(1, 4, 8, 7, 5)
+#define OP_TLBI_VMALLS12E1		sys_insn(1, 4, 8, 7, 6)
+#define OP_TLBI_IPAS2E1ISNXS		sys_insn(1, 4, 9, 0, 1)
+#define OP_TLBI_RIPAS2E1ISNXS		sys_insn(1, 4, 9, 0, 2)
+#define OP_TLBI_IPAS2LE1ISNXS		sys_insn(1, 4, 9, 0, 5)
+#define OP_TLBI_RIPAS2LE1ISNXS		sys_insn(1, 4, 9, 0, 6)
+#define OP_TLBI_ALLE2OSNXS		sys_insn(1, 4, 9, 1, 0)
+#define OP_TLBI_VAE2OSNXS		sys_insn(1, 4, 9, 1, 1)
+#define OP_TLBI_ALLE1OSNXS		sys_insn(1, 4, 9, 1, 4)
+#define OP_TLBI_VALE2OSNXS		sys_insn(1, 4, 9, 1, 5)
+#define OP_TLBI_VMALLS12E1OSNXS		sys_insn(1, 4, 9, 1, 6)
+#define OP_TLBI_RVAE2ISNXS		sys_insn(1, 4, 9, 2, 1)
+#define OP_TLBI_RVALE2ISNXS		sys_insn(1, 4, 9, 2, 5)
+#define OP_TLBI_ALLE2ISNXS		sys_insn(1, 4, 9, 3, 0)
+#define OP_TLBI_VAE2ISNXS		sys_insn(1, 4, 9, 3, 1)
+#define OP_TLBI_ALLE1ISNXS		sys_insn(1, 4, 9, 3, 4)
+#define OP_TLBI_VALE2ISNXS		sys_insn(1, 4, 9, 3, 5)
+#define OP_TLBI_VMALLS12E1ISNXS		sys_insn(1, 4, 9, 3, 6)
+#define OP_TLBI_IPAS2E1OSNXS		sys_insn(1, 4, 9, 4, 0)
+#define OP_TLBI_IPAS2E1NXS		sys_insn(1, 4, 9, 4, 1)
+#define OP_TLBI_RIPAS2E1NXS		sys_insn(1, 4, 9, 4, 2)
+#define OP_TLBI_RIPAS2E1OSNXS		sys_insn(1, 4, 9, 4, 3)
+#define OP_TLBI_IPAS2LE1OSNXS		sys_insn(1, 4, 9, 4, 4)
+#define OP_TLBI_IPAS2LE1NXS		sys_insn(1, 4, 9, 4, 5)
+#define OP_TLBI_RIPAS2LE1NXS		sys_insn(1, 4, 9, 4, 6)
+#define OP_TLBI_RIPAS2LE1OSNXS		sys_insn(1, 4, 9, 4, 7)
+#define OP_TLBI_RVAE2OSNXS		sys_insn(1, 4, 9, 5, 1)
+#define OP_TLBI_RVALE2OSNXS		sys_insn(1, 4, 9, 5, 5)
+#define OP_TLBI_RVAE2NXS		sys_insn(1, 4, 9, 6, 1)
+#define OP_TLBI_RVALE2NXS		sys_insn(1, 4, 9, 6, 5)
+#define OP_TLBI_ALLE2NXS		sys_insn(1, 4, 9, 7, 0)
+#define OP_TLBI_VAE2NXS			sys_insn(1, 4, 9, 7, 1)
+#define OP_TLBI_ALLE1NXS		sys_insn(1, 4, 9, 7, 4)
+#define OP_TLBI_VALE2NXS		sys_insn(1, 4, 9, 7, 5)
+#define OP_TLBI_VMALLS12E1NXS		sys_insn(1, 4, 9, 7, 6)
+
 /* Common SCTLR_ELx flags. */
 #define SCTLR_ELx_ENTP2	(BIT(60))
 #define SCTLR_ELx_DSSBS	(BIT(44))
-- 
2.34.1



* [PATCH v4 05/28] arm64: Add AT operation encodings
  2023-08-15 18:38 [PATCH v4 00/28] KVM: arm64: NV trap forwarding infrastructure Marc Zyngier
                   ` (3 preceding siblings ...)
  2023-08-15 18:38 ` [PATCH v4 04/28] arm64: Add TLBI operation encodings Marc Zyngier
@ 2023-08-15 18:38 ` Marc Zyngier
  2023-08-15 18:38 ` [PATCH v4 06/28] arm64: Add debug registers affected by HDFGxTR_EL2 Marc Zyngier
                   ` (23 subsequent siblings)
  28 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2023-08-15 18:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: Catalin Marinas, Eric Auger, Mark Brown, Mark Rutland,
	Will Deacon, Alexandru Elisei, Andre Przywara, Chase Conklin,
	Ganapatrao Kulkarni, Darren Hart, Miguel Luis, Jing Zhang,
	James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu

Add the encodings for the AT operations that are usable from Non-Secure.

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Miguel Luis <miguel.luis@oracle.com>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Reviewed-by: Jing Zhang <jingzhangos@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/sysreg.h | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 72e18480ce62..76289339b43b 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -514,6 +514,23 @@
 
 #define SYS_SP_EL2			sys_reg(3, 6,  4, 1, 0)
 
+/* AT instructions */
+#define AT_Op0 1
+#define AT_CRn 7
+
+#define OP_AT_S1E1R	sys_insn(AT_Op0, 0, AT_CRn, 8, 0)
+#define OP_AT_S1E1W	sys_insn(AT_Op0, 0, AT_CRn, 8, 1)
+#define OP_AT_S1E0R	sys_insn(AT_Op0, 0, AT_CRn, 8, 2)
+#define OP_AT_S1E0W	sys_insn(AT_Op0, 0, AT_CRn, 8, 3)
+#define OP_AT_S1E1RP	sys_insn(AT_Op0, 0, AT_CRn, 9, 0)
+#define OP_AT_S1E1WP	sys_insn(AT_Op0, 0, AT_CRn, 9, 1)
+#define OP_AT_S1E2R	sys_insn(AT_Op0, 4, AT_CRn, 8, 0)
+#define OP_AT_S1E2W	sys_insn(AT_Op0, 4, AT_CRn, 8, 1)
+#define OP_AT_S12E1R	sys_insn(AT_Op0, 4, AT_CRn, 8, 4)
+#define OP_AT_S12E1W	sys_insn(AT_Op0, 4, AT_CRn, 8, 5)
+#define OP_AT_S12E0R	sys_insn(AT_Op0, 4, AT_CRn, 8, 6)
+#define OP_AT_S12E0W	sys_insn(AT_Op0, 4, AT_CRn, 8, 7)
+
 /* TLBI instructions */
 #define OP_TLBI_VMALLE1OS		sys_insn(1, 0, 8, 1, 0)
 #define OP_TLBI_VAE1OS			sys_insn(1, 0, 8, 1, 1)
-- 
2.34.1



* [PATCH v4 06/28] arm64: Add debug registers affected by HDFGxTR_EL2
  2023-08-15 18:38 [PATCH v4 00/28] KVM: arm64: NV trap forwarding infrastructure Marc Zyngier
                   ` (4 preceding siblings ...)
  2023-08-15 18:38 ` [PATCH v4 05/28] arm64: Add AT " Marc Zyngier
@ 2023-08-15 18:38 ` Marc Zyngier
  2023-08-15 18:38 ` [PATCH v4 07/28] arm64: Add missing BRB/CFP/DVP/CPP instructions Marc Zyngier
                   ` (22 subsequent siblings)
  28 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2023-08-15 18:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: Catalin Marinas, Eric Auger, Mark Brown, Mark Rutland,
	Will Deacon, Alexandru Elisei, Andre Przywara, Chase Conklin,
	Ganapatrao Kulkarni, Darren Hart, Miguel Luis, Jing Zhang,
	James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu

The HDFGxTR_EL2 registers trap a (huge) set of debug and trace
related registers. Add their encodings (and only that, because
we really don't care about what these registers actually do at
this stage).
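
The banked encodings are the only subtle part: for BRBINF<n>_EL1 and
friends, the index n (0-31) is split so that CRm holds n[3:0] while
op2 bit 2 holds n[4]; a compile-time check makes this concrete:

	/* sketch: n=17 -> CRm=1, op2=4 (n[4] set, selector 0) */
	BUILD_BUG_ON(SYS_BRBINF_EL1(17) != sys_reg(2, 1, 8, 1, 4));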

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Jing Zhang <jingzhangos@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/sysreg.h | 76 +++++++++++++++++++++++++++++++++
 1 file changed, 76 insertions(+)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 76289339b43b..bb5a0877a210 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -194,6 +194,82 @@
 #define SYS_DBGDTRTX_EL0		sys_reg(2, 3, 0, 5, 0)
 #define SYS_DBGVCR32_EL2		sys_reg(2, 4, 0, 7, 0)
 
+#define SYS_BRBINF_EL1(n)		sys_reg(2, 1, 8, (n & 15), (((n & 16) >> 2) | 0))
+#define SYS_BRBINFINJ_EL1		sys_reg(2, 1, 9, 1, 0)
+#define SYS_BRBSRC_EL1(n)		sys_reg(2, 1, 8, (n & 15), (((n & 16) >> 2) | 1))
+#define SYS_BRBSRCINJ_EL1		sys_reg(2, 1, 9, 1, 1)
+#define SYS_BRBTGT_EL1(n)		sys_reg(2, 1, 8, (n & 15), (((n & 16) >> 2) | 2))
+#define SYS_BRBTGTINJ_EL1		sys_reg(2, 1, 9, 1, 2)
+#define SYS_BRBTS_EL1			sys_reg(2, 1, 9, 0, 2)
+
+#define SYS_BRBCR_EL1			sys_reg(2, 1, 9, 0, 0)
+#define SYS_BRBFCR_EL1			sys_reg(2, 1, 9, 0, 1)
+#define SYS_BRBIDR0_EL1			sys_reg(2, 1, 9, 2, 0)
+
+#define SYS_TRCITECR_EL1		sys_reg(3, 0, 1, 2, 3)
+#define SYS_TRCACATR(m)			sys_reg(2, 1, 2, ((m & 7) << 1), (2 | (m >> 3)))
+#define SYS_TRCACVR(m)			sys_reg(2, 1, 2, ((m & 7) << 1), (0 | (m >> 3)))
+#define SYS_TRCAUTHSTATUS		sys_reg(2, 1, 7, 14, 6)
+#define SYS_TRCAUXCTLR			sys_reg(2, 1, 0, 6, 0)
+#define SYS_TRCBBCTLR			sys_reg(2, 1, 0, 15, 0)
+#define SYS_TRCCCCTLR			sys_reg(2, 1, 0, 14, 0)
+#define SYS_TRCCIDCCTLR0		sys_reg(2, 1, 3, 0, 2)
+#define SYS_TRCCIDCCTLR1		sys_reg(2, 1, 3, 1, 2)
+#define SYS_TRCCIDCVR(m)		sys_reg(2, 1, 3, ((m & 7) << 1), 0)
+#define SYS_TRCCLAIMCLR			sys_reg(2, 1, 7, 9, 6)
+#define SYS_TRCCLAIMSET			sys_reg(2, 1, 7, 8, 6)
+#define SYS_TRCCNTCTLR(m)		sys_reg(2, 1, 0, (4 | (m & 3)), 5)
+#define SYS_TRCCNTRLDVR(m)		sys_reg(2, 1, 0, (0 | (m & 3)), 5)
+#define SYS_TRCCNTVR(m)			sys_reg(2, 1, 0, (8 | (m & 3)), 5)
+#define SYS_TRCCONFIGR			sys_reg(2, 1, 0, 4, 0)
+#define SYS_TRCDEVARCH			sys_reg(2, 1, 7, 15, 6)
+#define SYS_TRCDEVID			sys_reg(2, 1, 7, 2, 7)
+#define SYS_TRCEVENTCTL0R		sys_reg(2, 1, 0, 8, 0)
+#define SYS_TRCEVENTCTL1R		sys_reg(2, 1, 0, 9, 0)
+#define SYS_TRCEXTINSELR(m)		sys_reg(2, 1, 0, (8 | (m & 3)), 4)
+#define SYS_TRCIDR0			sys_reg(2, 1, 0, 8, 7)
+#define SYS_TRCIDR10			sys_reg(2, 1, 0, 2, 6)
+#define SYS_TRCIDR11			sys_reg(2, 1, 0, 3, 6)
+#define SYS_TRCIDR12			sys_reg(2, 1, 0, 4, 6)
+#define SYS_TRCIDR13			sys_reg(2, 1, 0, 5, 6)
+#define SYS_TRCIDR1			sys_reg(2, 1, 0, 9, 7)
+#define SYS_TRCIDR2			sys_reg(2, 1, 0, 10, 7)
+#define SYS_TRCIDR3			sys_reg(2, 1, 0, 11, 7)
+#define SYS_TRCIDR4			sys_reg(2, 1, 0, 12, 7)
+#define SYS_TRCIDR5			sys_reg(2, 1, 0, 13, 7)
+#define SYS_TRCIDR6			sys_reg(2, 1, 0, 14, 7)
+#define SYS_TRCIDR7			sys_reg(2, 1, 0, 15, 7)
+#define SYS_TRCIDR8			sys_reg(2, 1, 0, 0, 6)
+#define SYS_TRCIDR9			sys_reg(2, 1, 0, 1, 6)
+#define SYS_TRCIMSPEC(m)		sys_reg(2, 1, 0, (m & 7), 7)
+#define SYS_TRCITEEDCR			sys_reg(2, 1, 0, 2, 1)
+#define SYS_TRCOSLSR			sys_reg(2, 1, 1, 1, 4)
+#define SYS_TRCPRGCTLR			sys_reg(2, 1, 0, 1, 0)
+#define SYS_TRCQCTLR			sys_reg(2, 1, 0, 1, 1)
+#define SYS_TRCRSCTLR(m)		sys_reg(2, 1, 1, (m & 15), (0 | (m >> 4)))
+#define SYS_TRCRSR			sys_reg(2, 1, 0, 10, 0)
+#define SYS_TRCSEQEVR(m)		sys_reg(2, 1, 0, (m & 3), 4)
+#define SYS_TRCSEQRSTEVR		sys_reg(2, 1, 0, 6, 4)
+#define SYS_TRCSEQSTR			sys_reg(2, 1, 0, 7, 4)
+#define SYS_TRCSSCCR(m)			sys_reg(2, 1, 1, (m & 7), 2)
+#define SYS_TRCSSCSR(m)			sys_reg(2, 1, 1, (8 | (m & 7)), 2)
+#define SYS_TRCSSPCICR(m)		sys_reg(2, 1, 1, (m & 7), 3)
+#define SYS_TRCSTALLCTLR		sys_reg(2, 1, 0, 11, 0)
+#define SYS_TRCSTATR			sys_reg(2, 1, 0, 3, 0)
+#define SYS_TRCSYNCPR			sys_reg(2, 1, 0, 13, 0)
+#define SYS_TRCTRACEIDR			sys_reg(2, 1, 0, 0, 1)
+#define SYS_TRCTSCTLR			sys_reg(2, 1, 0, 12, 0)
+#define SYS_TRCVICTLR			sys_reg(2, 1, 0, 0, 2)
+#define SYS_TRCVIIECTLR			sys_reg(2, 1, 0, 1, 2)
+#define SYS_TRCVIPCSSCTLR		sys_reg(2, 1, 0, 3, 2)
+#define SYS_TRCVISSCTLR			sys_reg(2, 1, 0, 2, 2)
+#define SYS_TRCVMIDCCTLR0		sys_reg(2, 1, 3, 2, 2)
+#define SYS_TRCVMIDCCTLR1		sys_reg(2, 1, 3, 3, 2)
+#define SYS_TRCVMIDCVR(m)		sys_reg(2, 1, 3, ((m & 7) << 1), 1)
+
+/* ETM */
+#define SYS_TRCOSLAR			sys_reg(2, 1, 1, 0, 4)
+
 #define SYS_MIDR_EL1			sys_reg(3, 0, 0, 0, 0)
 #define SYS_MPIDR_EL1			sys_reg(3, 0, 0, 0, 5)
 #define SYS_REVIDR_EL1			sys_reg(3, 0, 0, 0, 6)
-- 
2.34.1



* [PATCH v4 07/28] arm64: Add missing BRB/CFP/DVP/CPP instructions
  2023-08-15 18:38 [PATCH v4 00/28] KVM: arm64: NV trap forwarding infrastructure Marc Zyngier
                   ` (5 preceding siblings ...)
  2023-08-15 18:38 ` [PATCH v4 06/28] arm64: Add debug registers affected by HDFGxTR_EL2 Marc Zyngier
@ 2023-08-15 18:38 ` Marc Zyngier
  2023-08-15 18:38 ` [PATCH v4 08/28] arm64: Add HDFGRTR_EL2 and HDFGWTR_EL2 layouts Marc Zyngier
                   ` (21 subsequent siblings)
  28 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2023-08-15 18:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: Catalin Marinas, Eric Auger, Mark Brown, Mark Rutland,
	Will Deacon, Alexandru Elisei, Andre Przywara, Chase Conklin,
	Ganapatrao Kulkarni, Darren Hart, Miguel Luis, Jing Zhang,
	James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu

HFGITR_EL2 traps a bunch of instructions for which we don't have
encodings yet. Add them.

Reviewed-by: Miguel Luis <miguel.luis@oracle.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Jing Zhang <jingzhangos@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/sysreg.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index bb5a0877a210..6d9d7ac4b31c 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -735,6 +735,13 @@
 #define OP_TLBI_VALE2NXS		sys_insn(1, 4, 9, 7, 5)
 #define OP_TLBI_VMALLS12E1NXS		sys_insn(1, 4, 9, 7, 6)
 
+/* Misc instructions */
+#define OP_BRB_IALL			sys_insn(1, 1, 7, 2, 4)
+#define OP_BRB_INJ			sys_insn(1, 1, 7, 2, 5)
+#define OP_CFP_RCTX			sys_insn(1, 3, 7, 3, 4)
+#define OP_DVP_RCTX			sys_insn(1, 3, 7, 3, 5)
+#define OP_CPP_RCTX			sys_insn(1, 3, 7, 3, 7)
+
 /* Common SCTLR_ELx flags. */
 #define SCTLR_ELx_ENTP2	(BIT(60))
 #define SCTLR_ELx_DSSBS	(BIT(44))
-- 
2.34.1



* [PATCH v4 08/28] arm64: Add HDFGRTR_EL2 and HDFGWTR_EL2 layouts
  2023-08-15 18:38 [PATCH v4 00/28] KVM: arm64: NV trap forwarding infrastructure Marc Zyngier
                   ` (6 preceding siblings ...)
  2023-08-15 18:38 ` [PATCH v4 07/28] arm64: Add missing BRB/CFP/DVP/CPP instructions Marc Zyngier
@ 2023-08-15 18:38 ` Marc Zyngier
  2023-08-15 18:38 ` [PATCH v4 09/28] arm64: Add feature detection for fine grained traps Marc Zyngier
                   ` (20 subsequent siblings)
  28 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2023-08-15 18:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: Catalin Marinas, Eric Auger, Mark Brown, Mark Rutland,
	Will Deacon, Alexandru Elisei, Andre Przywara, Chase Conklin,
	Ganapatrao Kulkarni, Darren Hart, Miguel Luis, Jing Zhang,
	James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu

As we're about to implement full support for FEAT_FGT, add the
full HDFGRTR_EL2 and HDFGWTR_EL2 layouts.
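
For reference, each single-bit Field/Res0 line below feeds the sysreg
tooling, which emits mask/shift definitions along these lines (names
sketched from the gen-sysreg.awk conventions):

	#define HDFGRTR_EL2_nPMSNEVFR_EL1_SHIFT	62
	#define HDFGRTR_EL2_nPMSNEVFR_EL1_MASK	GENMASK(62, 62)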

Reviewed-by: Mark Brown <broonie@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Miguel Luis <miguel.luis@oracle.com>
Reviewed-by: Jing Zhang <jingzhangos@google.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/sysreg.h |   2 -
 arch/arm64/tools/sysreg         | 129 ++++++++++++++++++++++++++++++++
 2 files changed, 129 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 6d9d7ac4b31c..043c677e9f04 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -495,8 +495,6 @@
 #define SYS_VTCR_EL2			sys_reg(3, 4, 2, 1, 2)
 
 #define SYS_TRFCR_EL2			sys_reg(3, 4, 1, 2, 1)
-#define SYS_HDFGRTR_EL2			sys_reg(3, 4, 3, 1, 4)
-#define SYS_HDFGWTR_EL2			sys_reg(3, 4, 3, 1, 5)
 #define SYS_HAFGRTR_EL2			sys_reg(3, 4, 3, 1, 6)
 #define SYS_SPSR_EL2			sys_reg(3, 4, 4, 0, 0)
 #define SYS_ELR_EL2			sys_reg(3, 4, 4, 0, 1)
diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index 65866bf819c3..2517ef7c21cf 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -2156,6 +2156,135 @@ Field	1	ICIALLU
 Field	0	ICIALLUIS
 EndSysreg
 
+Sysreg HDFGRTR_EL2	3	4	3	1	4
+Field	63	PMBIDR_EL1
+Field	62	nPMSNEVFR_EL1
+Field	61	nBRBDATA
+Field	60	nBRBCTL
+Field	59	nBRBIDR
+Field	58	PMCEIDn_EL0
+Field	57	PMUSERENR_EL0
+Field	56	TRBTRG_EL1
+Field	55	TRBSR_EL1
+Field	54	TRBPTR_EL1
+Field	53	TRBMAR_EL1
+Field	52	TRBLIMITR_EL1
+Field	51	TRBIDR_EL1
+Field	50	TRBBASER_EL1
+Res0	49
+Field	48	TRCVICTLR
+Field	47	TRCSTATR
+Field	46	TRCSSCSRn
+Field	45	TRCSEQSTR
+Field	44	TRCPRGCTLR
+Field	43	TRCOSLSR
+Res0	42
+Field	41	TRCIMSPECn
+Field	40	TRCID
+Res0	39:38
+Field	37	TRCCNTVRn
+Field	36	TRCCLAIM
+Field	35	TRCAUXCTLR
+Field	34	TRCAUTHSTATUS
+Field	33	TRC
+Field	32	PMSLATFR_EL1
+Field	31	PMSIRR_EL1
+Field	30	PMSIDR_EL1
+Field	29	PMSICR_EL1
+Field	28	PMSFCR_EL1
+Field	27	PMSEVFR_EL1
+Field	26	PMSCR_EL1
+Field	25	PMBSR_EL1
+Field	24	PMBPTR_EL1
+Field	23	PMBLIMITR_EL1
+Field	22	PMMIR_EL1
+Res0	21:20
+Field	19	PMSELR_EL0
+Field	18	PMOVS
+Field	17	PMINTEN
+Field	16	PMCNTEN
+Field	15	PMCCNTR_EL0
+Field	14	PMCCFILTR_EL0
+Field	13	PMEVTYPERn_EL0
+Field	12	PMEVCNTRn_EL0
+Field	11	OSDLR_EL1
+Field	10	OSECCR_EL1
+Field	9	OSLSR_EL1
+Res0	8
+Field	7	DBGPRCR_EL1
+Field	6	DBGAUTHSTATUS_EL1
+Field	5	DBGCLAIM
+Field	4	MDSCR_EL1
+Field	3	DBGWVRn_EL1
+Field	2	DBGWCRn_EL1
+Field	1	DBGBVRn_EL1
+Field	0	DBGBCRn_EL1
+EndSysreg
+
+Sysreg HDFGWTR_EL2	3	4	3	1	5
+Res0	63
+Field	62	nPMSNEVFR_EL1
+Field	61	nBRBDATA
+Field	60	nBRBCTL
+Res0	59:58
+Field	57	PMUSERENR_EL0
+Field	56	TRBTRG_EL1
+Field	55	TRBSR_EL1
+Field	54	TRBPTR_EL1
+Field	53	TRBMAR_EL1
+Field	52	TRBLIMITR_EL1
+Res0	51
+Field	50	TRBBASER_EL1
+Field	49	TRFCR_EL1
+Field	48	TRCVICTLR
+Res0	47
+Field	46	TRCSSCSRn
+Field	45	TRCSEQSTR
+Field	44	TRCPRGCTLR
+Res0	43
+Field	42	TRCOSLAR
+Field	41	TRCIMSPECn
+Res0	40:38
+Field	37	TRCCNTVRn
+Field	36	TRCCLAIM
+Field	35	TRCAUXCTLR
+Res0	34
+Field	33	TRC
+Field	32	PMSLATFR_EL1
+Field	31	PMSIRR_EL1
+Res0	30
+Field	29	PMSICR_EL1
+Field	28	PMSFCR_EL1
+Field	27	PMSEVFR_EL1
+Field	26	PMSCR_EL1
+Field	25	PMBSR_EL1
+Field	24	PMBPTR_EL1
+Field	23	PMBLIMITR_EL1
+Res0	22
+Field	21	PMCR_EL0
+Field	20	PMSWINC_EL0
+Field	19	PMSELR_EL0
+Field	18	PMOVS
+Field	17	PMINTEN
+Field	16	PMCNTEN
+Field	15	PMCCNTR_EL0
+Field	14	PMCCFILTR_EL0
+Field	13	PMEVTYPERn_EL0
+Field	12	PMEVCNTRn_EL0
+Field	11	OSDLR_EL1
+Field	10	OSECCR_EL1
+Res0	9
+Field	8	OSLAR_EL1
+Field	7	DBGPRCR_EL1
+Res0	6
+Field	5	DBGCLAIM
+Field	4	MDSCR_EL1
+Field	3	DBGWVRn_EL1
+Field	2	DBGWCRn_EL1
+Field	1	DBGBVRn_EL1
+Field	0	DBGBCRn_EL1
+EndSysreg
+
 Sysreg	ZCR_EL2	3	4	1	2	0
 Fields	ZCR_ELx
 EndSysreg
-- 
2.34.1



* [PATCH v4 09/28] arm64: Add feature detection for fine grained traps
  2023-08-15 18:38 [PATCH v4 00/28] KVM: arm64: NV trap forwarding infrastructure Marc Zyngier
                   ` (7 preceding siblings ...)
  2023-08-15 18:38 ` [PATCH v4 08/28] arm64: Add HDFGRTR_EL2 and HDFGWTR_EL2 layouts Marc Zyngier
@ 2023-08-15 18:38 ` Marc Zyngier
  2023-08-15 18:38 ` [PATCH v4 10/28] KVM: arm64: Correctly handle ACCDATA_EL1 traps Marc Zyngier
                   ` (19 subsequent siblings)
  28 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2023-08-15 18:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: Catalin Marinas, Eric Auger, Mark Brown, Mark Rutland,
	Will Deacon, Alexandru Elisei, Andre Przywara, Chase Conklin,
	Ganapatrao Kulkarni, Darren Hart, Miguel Luis, Jing Zhang,
	James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu

From: Mark Brown <broonie@kernel.org>

In order to allow us to have shared code for managing fine grained traps
for KVM guests, add it as a detected feature rather than relying on it
being a dependency of other features.
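
The capability is then consumed like any other final cap; the FGT
switching code later in this series gates on it exactly as expected:

	if (!cpus_have_final_cap(ARM64_HAS_FGT))
		return;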

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
[maz: converted to ARM64_CPUID_FIELDS()]
Link: https://lore.kernel.org/r/20230301-kvm-arm64-fgt-v4-1-1bf8d235ac1f@kernel.org
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Reviewed-by: Miguel Luis <miguel.luis@oracle.com>
Reviewed-by: Jing Zhang <jingzhangos@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kernel/cpufeature.c | 7 +++++++
 arch/arm64/tools/cpucaps       | 1 +
 2 files changed, 8 insertions(+)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index f9d456fe132d..668e2872a086 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2627,6 +2627,13 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.matches = has_cpuid_feature,
 		ARM64_CPUID_FIELDS(ID_AA64ISAR1_EL1, LRCPC, IMP)
 	},
+	{
+		.desc = "Fine Grained Traps",
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.capability = ARM64_HAS_FGT,
+		.matches = has_cpuid_feature,
+		ARM64_CPUID_FIELDS(ID_AA64MMFR0_EL1, FGT, IMP)
+	},
 #ifdef CONFIG_ARM64_SME
 	{
 		.desc = "Scalable Matrix Extension",
diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
index c80ed4f3cbce..c3f06fdef609 100644
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -26,6 +26,7 @@ HAS_ECV
 HAS_ECV_CNTPOFF
 HAS_EPAN
 HAS_EVT
+HAS_FGT
 HAS_GENERIC_AUTH
 HAS_GENERIC_AUTH_ARCH_QARMA3
 HAS_GENERIC_AUTH_ARCH_QARMA5
-- 
2.34.1



* [PATCH v4 10/28] KVM: arm64: Correctly handle ACCDATA_EL1 traps
  2023-08-15 18:38 [PATCH v4 00/28] KVM: arm64: NV trap forwarding infrastructure Marc Zyngier
                   ` (8 preceding siblings ...)
  2023-08-15 18:38 ` [PATCH v4 09/28] arm64: Add feature detection for fine grained traps Marc Zyngier
@ 2023-08-15 18:38 ` Marc Zyngier
  2023-08-15 18:38 ` [PATCH v4 11/28] KVM: arm64: Add missing HCR_EL2 trap bits Marc Zyngier
                   ` (18 subsequent siblings)
  28 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2023-08-15 18:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: Catalin Marinas, Eric Auger, Mark Brown, Mark Rutland,
	Will Deacon, Alexandru Elisei, Andre Przywara, Chase Conklin,
	Ganapatrao Kulkarni, Darren Hart, Miguel Luis, Jing Zhang,
	James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu

As we blindly reset some HFGxTR_EL2 bits to 0, we also randomly trap
unsuspecting sysregs whose trap bits have a negative polarity.

ACCDATA_EL1 is one such register that can be accessed by the guest,
causing a splat on the host as we don't have a proper handler for
it.

Adding such a handler addresses the issue, though there are a number
of other registers missing as the current architecture documentation
doesn't describe them yet.
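
To make the polarity point concrete (a sketch, using the generated
HFGxTR_EL2 mask): an "n" bit such as nACCDATA_EL1 traps when it is
*clear*, which is why blindly resetting it to 0 starts trapping guest
accesses:

	/* negative polarity: 0 means "trap ACCDATA_EL1" */
	static bool hfgxtr_traps_accdata(u64 hfgrtr_el2)
	{
		return !(hfgrtr_el2 & HFGxTR_EL2_nACCDATA_EL1);
	}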

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Miguel Luis <miguel.luis@oracle.com>
Reviewed-by: Jing Zhang <jingzhangos@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/sysreg.h | 2 ++
 arch/arm64/kvm/sys_regs.c       | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 043c677e9f04..818c111009ca 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -387,6 +387,8 @@
 #define SYS_ICC_IGRPEN0_EL1		sys_reg(3, 0, 12, 12, 6)
 #define SYS_ICC_IGRPEN1_EL1		sys_reg(3, 0, 12, 12, 7)
 
+#define SYS_ACCDATA_EL1			sys_reg(3, 0, 13, 0, 5)
+
 #define SYS_CNTKCTL_EL1			sys_reg(3, 0, 14, 1, 0)
 
 #define SYS_AIDR_EL1			sys_reg(3, 1, 0, 0, 7)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 2ca2973abe66..38f221f9fc98 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -2151,6 +2151,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	{ SYS_DESC(SYS_CONTEXTIDR_EL1), access_vm_reg, reset_val, CONTEXTIDR_EL1, 0 },
 	{ SYS_DESC(SYS_TPIDR_EL1), NULL, reset_unknown, TPIDR_EL1 },
 
+	{ SYS_DESC(SYS_ACCDATA_EL1), undef_access },
+
 	{ SYS_DESC(SYS_SCXTNUM_EL1), undef_access },
 
 	{ SYS_DESC(SYS_CNTKCTL_EL1), NULL, reset_val, CNTKCTL_EL1, 0},
-- 
2.34.1



* [PATCH v4 11/28] KVM: arm64: Add missing HCR_EL2 trap bits
  2023-08-15 18:38 [PATCH v4 00/28] KVM: arm64: NV trap forwarding infrastructure Marc Zyngier
                   ` (9 preceding siblings ...)
  2023-08-15 18:38 ` [PATCH v4 10/28] KVM: arm64: Correctly handle ACCDATA_EL1 traps Marc Zyngier
@ 2023-08-15 18:38 ` Marc Zyngier
  2023-08-15 18:38 ` [PATCH v4 12/28] KVM: arm64: nv: Add FGT registers Marc Zyngier
                   ` (17 subsequent siblings)
  28 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2023-08-15 18:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: Catalin Marinas, Eric Auger, Mark Brown, Mark Rutland,
	Will Deacon, Alexandru Elisei, Andre Przywara, Chase Conklin,
	Ganapatrao Kulkarni, Darren Hart, Miguel Luis, Jing Zhang,
	James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu

We're still missing a handful of HCR_EL2 trap bits. Add them.

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Miguel Luis <miguel.luis@oracle.com>
Reviewed-by: Jing Zhang <jingzhangos@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_arm.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 58e5eb27da68..028049b147df 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -18,10 +18,19 @@
 #define HCR_DCT		(UL(1) << 57)
 #define HCR_ATA_SHIFT	56
 #define HCR_ATA		(UL(1) << HCR_ATA_SHIFT)
+#define HCR_TTLBOS	(UL(1) << 55)
+#define HCR_TTLBIS	(UL(1) << 54)
+#define HCR_ENSCXT	(UL(1) << 53)
+#define HCR_TOCU	(UL(1) << 52)
 #define HCR_AMVOFFEN	(UL(1) << 51)
+#define HCR_TICAB	(UL(1) << 50)
 #define HCR_TID4	(UL(1) << 49)
 #define HCR_FIEN	(UL(1) << 47)
 #define HCR_FWB		(UL(1) << 46)
+#define HCR_NV2		(UL(1) << 45)
+#define HCR_AT		(UL(1) << 44)
+#define HCR_NV1		(UL(1) << 43)
+#define HCR_NV		(UL(1) << 42)
 #define HCR_API		(UL(1) << 41)
 #define HCR_APK		(UL(1) << 40)
 #define HCR_TEA		(UL(1) << 37)
-- 
2.34.1



* [PATCH v4 12/28] KVM: arm64: nv: Add FGT registers
  2023-08-15 18:38 [PATCH v4 00/28] KVM: arm64: NV trap forwarding infrastructure Marc Zyngier
                   ` (10 preceding siblings ...)
  2023-08-15 18:38 ` [PATCH v4 11/28] KVM: arm64: Add missing HCR_EL2 trap bits Marc Zyngier
@ 2023-08-15 18:38 ` Marc Zyngier
  2023-08-15 18:38 ` [PATCH v4 13/28] KVM: arm64: Restructure FGT register switching Marc Zyngier
                   ` (16 subsequent siblings)
  28 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2023-08-15 18:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: Catalin Marinas, Eric Auger, Mark Brown, Mark Rutland,
	Will Deacon, Alexandru Elisei, Andre Przywara, Chase Conklin,
	Ganapatrao Kulkarni, Darren Hart, Miguel Luis, Jing Zhang,
	James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu

Add the 5 registers covering FEAT_FGT. The AMU-related registers
are currently left out as we don't have a plan for them. Yet.

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Miguel Luis <miguel.luis@oracle.com>
Reviewed-by: Jing Zhang <jingzhangos@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h | 5 +++++
 arch/arm64/kvm/sys_regs.c         | 5 +++++
 2 files changed, 10 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index d3dd05bbfe23..721680da1011 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -400,6 +400,11 @@ enum vcpu_sysreg {
 	TPIDR_EL2,	/* EL2 Software Thread ID Register */
 	CNTHCTL_EL2,	/* Counter-timer Hypervisor Control register */
 	SP_EL2,		/* EL2 Stack Pointer */
+	HFGRTR_EL2,
+	HFGWTR_EL2,
+	HFGITR_EL2,
+	HDFGRTR_EL2,
+	HDFGWTR_EL2,
 	CNTHP_CTL_EL2,
 	CNTHP_CVAL_EL2,
 	CNTHV_CTL_EL2,
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 38f221f9fc98..f5baaa508926 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -2367,6 +2367,9 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	EL2_REG(MDCR_EL2, access_rw, reset_val, 0),
 	EL2_REG(CPTR_EL2, access_rw, reset_val, CPTR_NVHE_EL2_RES1),
 	EL2_REG(HSTR_EL2, access_rw, reset_val, 0),
+	EL2_REG(HFGRTR_EL2, access_rw, reset_val, 0),
+	EL2_REG(HFGWTR_EL2, access_rw, reset_val, 0),
+	EL2_REG(HFGITR_EL2, access_rw, reset_val, 0),
 	EL2_REG(HACR_EL2, access_rw, reset_val, 0),
 
 	EL2_REG(TTBR0_EL2, access_rw, reset_val, 0),
@@ -2376,6 +2379,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	EL2_REG(VTCR_EL2, access_rw, reset_val, 0),
 
 	{ SYS_DESC(SYS_DACR32_EL2), NULL, reset_unknown, DACR32_EL2 },
+	EL2_REG(HDFGRTR_EL2, access_rw, reset_val, 0),
+	EL2_REG(HDFGWTR_EL2, access_rw, reset_val, 0),
 	EL2_REG(SPSR_EL2, access_rw, reset_val, 0),
 	EL2_REG(ELR_EL2, access_rw, reset_val, 0),
 	{ SYS_DESC(SYS_SP_EL1), access_sp_el1},
-- 
2.34.1



* [PATCH v4 13/28] KVM: arm64: Restructure FGT register switching
  2023-08-15 18:38 [PATCH v4 00/28] KVM: arm64: NV trap forwarding infrastructure Marc Zyngier
                   ` (11 preceding siblings ...)
  2023-08-15 18:38 ` [PATCH v4 12/28] KVM: arm64: nv: Add FGT registers Marc Zyngier
@ 2023-08-15 18:38 ` Marc Zyngier
  2023-08-15 18:38 ` [PATCH v4 14/28] KVM: arm64: nv: Add trap forwarding infrastructure Marc Zyngier
                   ` (15 subsequent siblings)
  28 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2023-08-15 18:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: Catalin Marinas, Eric Auger, Mark Brown, Mark Rutland,
	Will Deacon, Alexandru Elisei, Andre Przywara, Chase Conklin,
	Ganapatrao Kulkarni, Darren Hart, Miguel Luis, Jing Zhang,
	James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu

As we're about to majorly extend the handling of FGT registers,
restructure the code to actually save/restore the registers
as required. This is made easy thanks to the previous addition
of the EL2 registers, allowing us to use the host context for
this purpose.

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Reviewed-by: Miguel Luis <miguel.luis@oracle.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_arm.h        | 21 ++++++++++
 arch/arm64/kvm/hyp/include/hyp/switch.h | 56 +++++++++++++------------
 2 files changed, 50 insertions(+), 27 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 028049b147df..85908aa18908 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -333,6 +333,27 @@
 				 BIT(18) |		\
 				 GENMASK(16, 15))
 
+/*
+ * FGT register definitions
+ *
+ * RES0 and polarity masks as of DDI0487J.a, to be updated as needed.
+ * We're not using the generated masks as they are usually ahead of
+ * the published ARM ARM, which we use as a reference.
+ *
+ * Once we get to a point where the two describe the same thing, we'll
+ * merge the definitions. One day.
+ */
+#define __HFGRTR_EL2_RES0	(GENMASK(63, 56) | GENMASK(53, 51))
+#define __HFGRTR_EL2_MASK	GENMASK(49, 0)
+#define __HFGRTR_EL2_nMASK	(GENMASK(55, 54) | BIT(50))
+
+#define __HFGWTR_EL2_RES0	(GENMASK(63, 56) | GENMASK(53, 51) |	\
+				 BIT(46) | BIT(42) | BIT(40) | BIT(28) | \
+				 GENMASK(26, 25) | BIT(21) | BIT(18) |	\
+				 GENMASK(15, 14) | GENMASK(10, 9) | BIT(2))
+#define __HFGWTR_EL2_MASK	GENMASK(49, 0)
+#define __HFGWTR_EL2_nMASK	(GENMASK(55, 54) | BIT(50))
+
 /* Hyp Prefetch Fault Address Register (HPFAR/HDFAR) */
 #define HPFAR_MASK	(~UL(0xf))
 /*
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 4bddb8541bec..e096b16e85fd 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -70,20 +70,19 @@ static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu)
 	}
 }
 
-static inline bool __hfgxtr_traps_required(void)
-{
-	if (cpus_have_final_cap(ARM64_SME))
-		return true;
-
-	if (cpus_have_final_cap(ARM64_WORKAROUND_AMPERE_AC03_CPU_38))
-		return true;
 
-	return false;
-}
 
-static inline void __activate_traps_hfgxtr(void)
+static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *hctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
 	u64 r_clr = 0, w_clr = 0, r_set = 0, w_set = 0, tmp;
+	u64 r_val, w_val;
+
+	if (!cpus_have_final_cap(ARM64_HAS_FGT))
+		return;
+
+	ctxt_sys_reg(hctxt, HFGRTR_EL2) = read_sysreg_s(SYS_HFGRTR_EL2);
+	ctxt_sys_reg(hctxt, HFGWTR_EL2) = read_sysreg_s(SYS_HFGWTR_EL2);
 
 	if (cpus_have_final_cap(ARM64_SME)) {
 		tmp = HFGxTR_EL2_nSMPRI_EL1_MASK | HFGxTR_EL2_nTPIDR2_EL0_MASK;
@@ -98,26 +97,31 @@ static inline void __activate_traps_hfgxtr(void)
 	if (cpus_have_final_cap(ARM64_WORKAROUND_AMPERE_AC03_CPU_38))
 		w_set |= HFGxTR_EL2_TCR_EL1_MASK;
 
-	sysreg_clear_set_s(SYS_HFGRTR_EL2, r_clr, r_set);
-	sysreg_clear_set_s(SYS_HFGWTR_EL2, w_clr, w_set);
+
+	/* The default is not to trap anything but ACCDATA_EL1 */
+	r_val = __HFGRTR_EL2_nMASK & ~HFGxTR_EL2_nACCDATA_EL1;
+	r_val |= r_set;
+	r_val &= ~r_clr;
+
+	w_val = __HFGWTR_EL2_nMASK & ~HFGxTR_EL2_nACCDATA_EL1;
+	w_val |= w_set;
+	w_val &= ~w_clr;
+
+	write_sysreg_s(r_val, SYS_HFGRTR_EL2);
+	write_sysreg_s(w_val, SYS_HFGWTR_EL2);
 }
 
-static inline void __deactivate_traps_hfgxtr(void)
+static inline void __deactivate_traps_hfgxtr(struct kvm_vcpu *vcpu)
 {
-	u64 r_clr = 0, w_clr = 0, r_set = 0, w_set = 0, tmp;
+	struct kvm_cpu_context *hctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
 
-	if (cpus_have_final_cap(ARM64_SME)) {
-		tmp = HFGxTR_EL2_nSMPRI_EL1_MASK | HFGxTR_EL2_nTPIDR2_EL0_MASK;
+	if (!cpus_have_final_cap(ARM64_HAS_FGT))
+		return;
 
-		r_set |= tmp;
-		w_set |= tmp;
-	}
+	write_sysreg_s(ctxt_sys_reg(hctxt, HFGRTR_EL2), SYS_HFGRTR_EL2);
+	write_sysreg_s(ctxt_sys_reg(hctxt, HFGWTR_EL2), SYS_HFGWTR_EL2);
 
-	if (cpus_have_final_cap(ARM64_WORKAROUND_AMPERE_AC03_CPU_38))
-		w_clr |= HFGxTR_EL2_TCR_EL1_MASK;
 
-	sysreg_clear_set_s(SYS_HFGRTR_EL2, r_clr, r_set);
-	sysreg_clear_set_s(SYS_HFGWTR_EL2, w_clr, w_set);
 }
 
 static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
@@ -145,8 +149,7 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
 	vcpu->arch.mdcr_el2_host = read_sysreg(mdcr_el2);
 	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
 
-	if (__hfgxtr_traps_required())
-		__activate_traps_hfgxtr();
+	__activate_traps_hfgxtr(vcpu);
 }
 
 static inline void __deactivate_traps_common(struct kvm_vcpu *vcpu)
@@ -162,8 +165,7 @@ static inline void __deactivate_traps_common(struct kvm_vcpu *vcpu)
 		vcpu_clear_flag(vcpu, PMUSERENR_ON_CPU);
 	}
 
-	if (__hfgxtr_traps_required())
-		__deactivate_traps_hfgxtr();
+	__deactivate_traps_hfgxtr(vcpu);
 }
 
 static inline void ___activate_traps(struct kvm_vcpu *vcpu)
-- 
2.34.1



* [PATCH v4 14/28] KVM: arm64: nv: Add trap forwarding infrastructure
  2023-08-15 18:38 [PATCH v4 00/28] KVM: arm64: NV trap forwarding infrastructure Marc Zyngier
                   ` (12 preceding siblings ...)
  2023-08-15 18:38 ` [PATCH v4 13/28] KVM: arm64: Restructure FGT register switching Marc Zyngier
@ 2023-08-15 18:38 ` Marc Zyngier
  2023-08-15 21:34   ` Jing Zhang
  2023-08-16  9:34   ` Miguel Luis
  2023-08-15 18:38 ` [PATCH v4 15/28] KVM: arm64: nv: Add trap forwarding for HCR_EL2 Marc Zyngier
                   ` (14 subsequent siblings)
  28 siblings, 2 replies; 48+ messages in thread
From: Marc Zyngier @ 2023-08-15 18:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: Catalin Marinas, Eric Auger, Mark Brown, Mark Rutland,
	Will Deacon, Alexandru Elisei, Andre Przywara, Chase Conklin,
	Ganapatrao Kulkarni, Darren Hart, Miguel Luis, Jing Zhang,
	James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu

A significant part of what an NV hypervisor needs to do is to decide
whether a trap from an L2+ guest has to be forwarded to an L1 guest
or handled locally. This is done by checking for the trap bits that
the guest hypervisor has set and acting accordingly, as described by
the architecture.

A previous approach was to sprinkle a bunch of checks in all the
system register accessors, but this is pretty error-prone and doesn't
help in getting an overview of what is happening.

Instead, implement a set of global tables that describe a trap bit,
combinations of trap bits, behaviours on trap, and what bits must
be evaluated on a system register trap.

Although this is painful to describe, this allows us to specify each
and every control bit in a static manner. To make it efficient,
the table is inserted in an xarray that is global to the system,
and checked each time we trap a system register while running
an L2 guest.

Add the basic infrastructure for now; additional patches will
implement the configuration registers.
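
A sketch of where this is heading (the CGT_* identifiers and table
entries arrive with the follow-up HCR_EL2 and FGT patches):

	static const struct encoding_to_trap_config encoding_to_cgt[] __initconst = {
		SR_TRAP(SYS_TTBR0_EL1, CGT_HCR_TVM_TRVM),
		SR_TRAP(SYS_TTBR1_EL1, CGT_HCR_TVM_TRVM),
	};

	/* on an EC=0x18 trap while running an L2 guest: */
	union trap_config tc = get_trap_config(sysreg);

with tc.cgt then indexed into the coarse_trap_bits/combo tables to
compute the resulting trap_behaviour.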

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h   |   1 +
 arch/arm64/include/asm/kvm_nested.h |   2 +
 arch/arm64/kvm/emulate-nested.c     | 282 ++++++++++++++++++++++++++++
 arch/arm64/kvm/sys_regs.c           |   6 +
 arch/arm64/kvm/trace_arm.h          |  26 +++
 5 files changed, 317 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 721680da1011..cb1c5c54cedd 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -988,6 +988,7 @@ int kvm_handle_cp10_id(struct kvm_vcpu *vcpu);
 void kvm_reset_sys_regs(struct kvm_vcpu *vcpu);
 
 int __init kvm_sys_reg_table_init(void);
+int __init populate_nv_trap_config(void);
 
 bool lock_all_vcpus(struct kvm *kvm);
 void unlock_all_vcpus(struct kvm *kvm);
diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index 8fb67f032fd1..fa23cc9c2adc 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -11,6 +11,8 @@ static inline bool vcpu_has_nv(const struct kvm_vcpu *vcpu)
 		test_bit(KVM_ARM_VCPU_HAS_EL2, vcpu->arch.features));
 }
 
+extern bool __check_nv_sr_forward(struct kvm_vcpu *vcpu);
+
 struct sys_reg_params;
 struct sys_reg_desc;
 
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index b96662029fb1..d5837ed0077c 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -14,6 +14,288 @@
 
 #include "trace.h"
 
+enum trap_behaviour {
+	BEHAVE_HANDLE_LOCALLY	= 0,
+	BEHAVE_FORWARD_READ	= BIT(0),
+	BEHAVE_FORWARD_WRITE	= BIT(1),
+	BEHAVE_FORWARD_ANY	= BEHAVE_FORWARD_READ | BEHAVE_FORWARD_WRITE,
+};
+
+struct trap_bits {
+	const enum vcpu_sysreg		index;
+	const enum trap_behaviour	behaviour;
+	const u64			value;
+	const u64			mask;
+};
+
+/* Coarse Grained Trap definitions */
+enum cgt_group_id {
+	/* Indicates no coarse trap control */
+	__RESERVED__,
+
+	/*
+	 * The first batch of IDs denote coarse trapping that are used
+	 * on their own instead of being part of a combination of
+	 * trap controls.
+	 */
+
+	/*
+	 * Anything after this point is a combination of coarse trap
+	 * controls, which must all be evaluated to decide what to do.
+	 */
+	__MULTIPLE_CONTROL_BITS__,
+
+	/*
+	 * Anything after this point requires a callback evaluating a
+	 * complex trap condition. Hopefully we'll never need this...
+	 */
+	__COMPLEX_CONDITIONS__,
+
+	/* Must be last */
+	__NR_CGT_GROUP_IDS__
+};
+
+static const struct trap_bits coarse_trap_bits[] = {
+};
+
+#define MCB(id, ...)						\
+	[id - __MULTIPLE_CONTROL_BITS__]	=		\
+		(const enum cgt_group_id[]){			\
+		__VA_ARGS__, __RESERVED__			\
+		}
+
+static const enum cgt_group_id *coarse_control_combo[] = {
+};
+
+typedef enum trap_behaviour (*complex_condition_check)(struct kvm_vcpu *);
+
+#define CCC(id, fn)				\
+	[id - __COMPLEX_CONDITIONS__] = fn
+
+static const complex_condition_check ccc[] = {
+};
+
+/*
+ * Bit assignment for the trap controls. We use a 64bit word with the
+ * following layout for each trapped sysreg:
+ *
+ * [9:0]	enum cgt_group_id (10 bits)
+ * [62:10]	Unused (53 bits)
+ * [63]		RES0 - Must be zero, as lost on insertion in the xarray
+ */
+#define TC_CGT_BITS	10
+
+union trap_config {
+	u64	val;
+	struct {
+		unsigned long	cgt:TC_CGT_BITS; /* Coarse Grained Trap id */
+		unsigned long	unused:53;	 /* Unused, should be zero */
+		unsigned long	mbz:1;		 /* Must Be Zero */
+	};
+};
+
+struct encoding_to_trap_config {
+	const u32			encoding;
+	const u32			end;
+	const union trap_config		tc;
+	const unsigned int		line;
+};
+
+#define SR_RANGE_TRAP(sr_start, sr_end, trap_id)			\
+	{								\
+		.encoding	= sr_start,				\
+		.end		= sr_end,				\
+		.tc		= {					\
+			.cgt		= trap_id,			\
+		},							\
+		.line = __LINE__,					\
+	}
+
+#define SR_TRAP(sr, trap_id)		SR_RANGE_TRAP(sr, sr, trap_id)
+
+/*
+ * Map encoding to trap bits for exception reported with EC=0x18.
+ * These must only be evaluated when running a nested hypervisor
+ * while the current context is not a hypervisor context. When the
+ * trapped access matches one of the trap controls, the exception is
+ * re-injected in the nested hypervisor.
+ */
+static const struct encoding_to_trap_config encoding_to_cgt[] __initconst = {
+};
+
+static DEFINE_XARRAY(sr_forward_xa);
+
+static union trap_config get_trap_config(u32 sysreg)
+{
+	return (union trap_config) {
+		.val = xa_to_value(xa_load(&sr_forward_xa, sysreg)),
+	};
+}
+
+static __init void print_nv_trap_error(const struct encoding_to_trap_config *tc,
+				       const char *type, int err)
+{
+	kvm_err("%s line %d encoding range "
+		"(%d, %d, %d, %d, %d) - (%d, %d, %d, %d, %d) (err=%d)\n",
+		type, tc->line,
+		sys_reg_Op0(tc->encoding), sys_reg_Op1(tc->encoding),
+		sys_reg_CRn(tc->encoding), sys_reg_CRm(tc->encoding),
+		sys_reg_Op2(tc->encoding),
+		sys_reg_Op0(tc->end), sys_reg_Op1(tc->end),
+		sys_reg_CRn(tc->end), sys_reg_CRm(tc->end),
+		sys_reg_Op2(tc->end),
+		err);
+}
+
+int __init populate_nv_trap_config(void)
+{
+	int ret = 0;
+
+	BUILD_BUG_ON(sizeof(union trap_config) != sizeof(void *));
+	BUILD_BUG_ON(__NR_CGT_GROUP_IDS__ > BIT(TC_CGT_BITS));
+
+	for (int i = 0; i < ARRAY_SIZE(encoding_to_cgt); i++) {
+		const struct encoding_to_trap_config *cgt = &encoding_to_cgt[i];
+		void *prev;
+
+		if (cgt->tc.val & BIT(63)) {
+			kvm_err("CGT[%d] has MBZ bit set\n", i);
+			ret = -EINVAL;
+		}
+
+		if (cgt->encoding != cgt->end) {
+			prev = xa_store_range(&sr_forward_xa,
+					      cgt->encoding, cgt->end,
+					      xa_mk_value(cgt->tc.val),
+					      GFP_KERNEL);
+		} else {
+			prev = xa_store(&sr_forward_xa, cgt->encoding,
+					xa_mk_value(cgt->tc.val), GFP_KERNEL);
+			if (prev && !xa_is_err(prev)) {
+				ret = -EINVAL;
+				print_nv_trap_error(cgt, "Duplicate CGT", ret);
+			}
+		}
+
+		if (xa_is_err(prev)) {
+			ret = xa_err(prev);
+			print_nv_trap_error(cgt, "Failed CGT insertion", ret);
+		}
+	}
+
+	kvm_info("nv: %ld coarse grained trap handlers\n",
+		 ARRAY_SIZE(encoding_to_cgt));
+
+	for (int id = __MULTIPLE_CONTROL_BITS__; id < __COMPLEX_CONDITIONS__; id++) {
+		const enum cgt_group_id *cgids;
+
+		cgids = coarse_control_combo[id - __MULTIPLE_CONTROL_BITS__];
+
+		for (int i = 0; cgids[i] != __RESERVED__; i++) {
+			if (cgids[i] >= __MULTIPLE_CONTROL_BITS__) {
+				kvm_err("Recursive MCB %d/%d\n", id, cgids[i]);
+				ret = -EINVAL;
+			}
+		}
+	}
+
+	if (ret)
+		xa_destroy(&sr_forward_xa);
+
+	return ret;
+}
+
+static enum trap_behaviour get_behaviour(struct kvm_vcpu *vcpu,
+					 const struct trap_bits *tb)
+{
+	enum trap_behaviour b = BEHAVE_HANDLE_LOCALLY;
+	u64 val;
+
+	val = __vcpu_sys_reg(vcpu, tb->index);
+	if ((val & tb->mask) == tb->value)
+		b |= tb->behaviour;
+
+	return b;
+}
+
+static enum trap_behaviour __compute_trap_behaviour(struct kvm_vcpu *vcpu,
+						    const enum cgt_group_id id,
+						    enum trap_behaviour b)
+{
+	switch (id) {
+		const enum cgt_group_id *cgids;
+
+	case __RESERVED__ ... __MULTIPLE_CONTROL_BITS__ - 1:
+		if (likely(id != __RESERVED__))
+			b |= get_behaviour(vcpu, &coarse_trap_bits[id]);
+		break;
+	case __MULTIPLE_CONTROL_BITS__ ... __COMPLEX_CONDITIONS__ - 1:
+		/* Yes, this is recursive. Don't do anything stupid. */
+		cgids = coarse_control_combo[id - __MULTIPLE_CONTROL_BITS__];
+		for (int i = 0; cgids[i] != __RESERVED__; i++)
+			b |= __compute_trap_behaviour(vcpu, cgids[i], b);
+		break;
+	default:
+		if (ARRAY_SIZE(ccc))
+			b |= ccc[id - __COMPLEX_CONDITIONS__](vcpu);
+		break;
+	}
+
+	return b;
+}
+
+static enum trap_behaviour compute_trap_behaviour(struct kvm_vcpu *vcpu,
+						  const union trap_config tc)
+{
+	enum trap_behaviour b = BEHAVE_HANDLE_LOCALLY;
+
+	return __compute_trap_behaviour(vcpu, tc.cgt, b);
+}
+
+bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
+{
+	union trap_config tc;
+	enum trap_behaviour b;
+	bool is_read;
+	u32 sysreg;
+	u64 esr;
+
+	if (!vcpu_has_nv(vcpu) || is_hyp_ctxt(vcpu))
+		return false;
+
+	esr = kvm_vcpu_get_esr(vcpu);
+	sysreg = esr_sys64_to_sysreg(esr);
+	is_read = (esr & ESR_ELx_SYS64_ISS_DIR_MASK) == ESR_ELx_SYS64_ISS_DIR_READ;
+
+	tc = get_trap_config(sysreg);
+
+	/*
+	 * A value of 0 for the whole entry means that we know nothing
+	 * for this sysreg, and that it cannot be re-injected into the
+	 * nested hypervisor. In this situation, let's cut it short.
+	 *
+	 * Note that ultimately, we could also make use of the xarray
+	 * to store the index of the sysreg in the local descriptor
+	 * array, avoiding another search... Hint, hint...
+	 */
+	if (!tc.val)
+		return false;
+
+	b = compute_trap_behaviour(vcpu, tc);
+
+	if (((b & BEHAVE_FORWARD_READ) && is_read) ||
+	    ((b & BEHAVE_FORWARD_WRITE) && !is_read))
+		goto inject;
+
+	return false;
+
+inject:
+	trace_kvm_forward_sysreg_trap(vcpu, sysreg, is_read);
+
+	kvm_inject_nested_sync(vcpu, kvm_vcpu_get_esr(vcpu));
+	return true;
+}
+
 static u64 kvm_check_illegal_exception_return(struct kvm_vcpu *vcpu, u64 spsr)
 {
 	u64 mode = spsr & PSR_MODE_MASK;
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index f5baaa508926..9556896311db 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -3177,6 +3177,9 @@ int kvm_handle_sys_reg(struct kvm_vcpu *vcpu)
 
 	trace_kvm_handle_sys_reg(esr);
 
+	if (__check_nv_sr_forward(vcpu))
+		return 1;
+
 	params = esr_sys64_to_params(esr);
 	params.regval = vcpu_get_reg(vcpu, Rt);
 
@@ -3594,5 +3597,8 @@ int __init kvm_sys_reg_table_init(void)
 	if (!first_idreg)
 		return -EINVAL;
 
+	if (kvm_get_mode() == KVM_MODE_NV)
+		return populate_nv_trap_config();
+
 	return 0;
 }
diff --git a/arch/arm64/kvm/trace_arm.h b/arch/arm64/kvm/trace_arm.h
index 6ce5c025218d..8ad53104934d 100644
--- a/arch/arm64/kvm/trace_arm.h
+++ b/arch/arm64/kvm/trace_arm.h
@@ -364,6 +364,32 @@ TRACE_EVENT(kvm_inject_nested_exception,
 		  __entry->hcr_el2)
 );
 
+TRACE_EVENT(kvm_forward_sysreg_trap,
+	    TP_PROTO(struct kvm_vcpu *vcpu, u32 sysreg, bool is_read),
+	    TP_ARGS(vcpu, sysreg, is_read),
+
+	    TP_STRUCT__entry(
+		__field(u64,	pc)
+		__field(u32,	sysreg)
+		__field(bool,	is_read)
+	    ),
+
+	    TP_fast_assign(
+		__entry->pc = *vcpu_pc(vcpu);
+		__entry->sysreg = sysreg;
+		__entry->is_read = is_read;
+	    ),
+
+	    TP_printk("%llx %c (%d,%d,%d,%d,%d)",
+		      __entry->pc,
+		      __entry->is_read ? 'R' : 'W',
+		      sys_reg_Op0(__entry->sysreg),
+		      sys_reg_Op1(__entry->sysreg),
+		      sys_reg_CRn(__entry->sysreg),
+		      sys_reg_CRm(__entry->sysreg),
+		      sys_reg_Op2(__entry->sysreg))
+);
+
 #endif /* _TRACE_ARM_ARM64_KVM_H */
 
 #undef TRACE_INCLUDE_PATH
-- 
2.34.1


* [PATCH v4 15/28] KVM: arm64: nv: Add trap forwarding for HCR_EL2
  2023-08-15 18:38 [PATCH v4 00/28] KVM: arm64: NV trap forwarding infrastructure Marc Zyngier
                   ` (13 preceding siblings ...)
  2023-08-15 18:38 ` [PATCH v4 14/28] KVM: arm64: nv: Add trap forwarding infrastructure Marc Zyngier
@ 2023-08-15 18:38 ` Marc Zyngier
  2023-08-15 21:37   ` Jing Zhang
  2023-08-17 11:05   ` Miguel Luis
  2023-08-15 18:38 ` [PATCH v4 16/28] KVM: arm64: nv: Expose FEAT_EVT to nested guests Marc Zyngier
                   ` (13 subsequent siblings)
  28 siblings, 2 replies; 48+ messages in thread
From: Marc Zyngier @ 2023-08-15 18:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: Catalin Marinas, Eric Auger, Mark Brown, Mark Rutland,
	Will Deacon, Alexandru Elisei, Andre Przywara, Chase Conklin,
	Ganapatrao Kulkarni, Darren Hart, Miguel Luis, Jing Zhang,
	James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu

Describe the HCR_EL2 register, and associate it with all the sysregs
whose trapping it controls.
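
As a reading aid (this sketch is not part of the patch): the
value/mask pairs below encode both trap polarities with a single
match rule. Positive controls such as TSW use value == mask, while
inverted controls such as APK or nFIEN (which trap when the bit is
clear) use value == 0:

/* Sketch of the match rule applied by get_behaviour() */
static bool example_trap_active(u64 hcr, u64 value, u64 mask)
{
	return (hcr & mask) == value;
}

/*
 * HCR_EL2.TSW == 1 traps: example_trap_active(hcr, HCR_TSW, HCR_TSW)
 * HCR_EL2.APK == 0 traps: example_trap_active(hcr, 0, HCR_APK)
 */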

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/emulate-nested.c | 488 ++++++++++++++++++++++++++++++++
 1 file changed, 488 insertions(+)

diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index d5837ed0077c..975a30ef874a 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -38,12 +38,48 @@ enum cgt_group_id {
 	 * on their own instead of being part of a combination of
 	 * trap controls.
 	 */
+	CGT_HCR_TID1,
+	CGT_HCR_TID2,
+	CGT_HCR_TID3,
+	CGT_HCR_IMO,
+	CGT_HCR_FMO,
+	CGT_HCR_TIDCP,
+	CGT_HCR_TACR,
+	CGT_HCR_TSW,
+	CGT_HCR_TPC,
+	CGT_HCR_TPU,
+	CGT_HCR_TTLB,
+	CGT_HCR_TVM,
+	CGT_HCR_TDZ,
+	CGT_HCR_TRVM,
+	CGT_HCR_TLOR,
+	CGT_HCR_TERR,
+	CGT_HCR_APK,
+	CGT_HCR_NV,
+	CGT_HCR_NV_nNV2,
+	CGT_HCR_NV1_nNV2,
+	CGT_HCR_AT,
+	CGT_HCR_nFIEN,
+	CGT_HCR_TID4,
+	CGT_HCR_TICAB,
+	CGT_HCR_TOCU,
+	CGT_HCR_ENSCXT,
+	CGT_HCR_TTLBIS,
+	CGT_HCR_TTLBOS,
 
 	/*
 	 * Anything after this point is a combination of coarse trap
 	 * controls, which must all be evaluated to decide what to do.
 	 */
 	__MULTIPLE_CONTROL_BITS__,
+	CGT_HCR_IMO_FMO = __MULTIPLE_CONTROL_BITS__,
+	CGT_HCR_TID2_TID4,
+	CGT_HCR_TTLB_TTLBIS,
+	CGT_HCR_TTLB_TTLBOS,
+	CGT_HCR_TVM_TRVM,
+	CGT_HCR_TPU_TICAB,
+	CGT_HCR_TPU_TOCU,
+	CGT_HCR_NV1_nNV2_ENSCXT,
 
 	/*
 	 * Anything after this point requires a callback evaluating a
@@ -56,6 +92,174 @@ enum cgt_group_id {
 };
 
 static const struct trap_bits coarse_trap_bits[] = {
+	[CGT_HCR_TID1] = {
+		.index		= HCR_EL2,
+		.value 		= HCR_TID1,
+		.mask		= HCR_TID1,
+		.behaviour	= BEHAVE_FORWARD_READ,
+	},
+	[CGT_HCR_TID2] = {
+		.index		= HCR_EL2,
+		.value 		= HCR_TID2,
+		.mask		= HCR_TID2,
+		.behaviour	= BEHAVE_FORWARD_ANY,
+	},
+	[CGT_HCR_TID3] = {
+		.index		= HCR_EL2,
+		.value 		= HCR_TID3,
+		.mask		= HCR_TID3,
+		.behaviour	= BEHAVE_FORWARD_READ,
+	},
+	[CGT_HCR_IMO] = {
+		.index		= HCR_EL2,
+		.value 		= HCR_IMO,
+		.mask		= HCR_IMO,
+		.behaviour	= BEHAVE_FORWARD_WRITE,
+	},
+	[CGT_HCR_FMO] = {
+		.index		= HCR_EL2,
+		.value 		= HCR_FMO,
+		.mask		= HCR_FMO,
+		.behaviour	= BEHAVE_FORWARD_WRITE,
+	},
+	[CGT_HCR_TIDCP] = {
+		.index		= HCR_EL2,
+		.value		= HCR_TIDCP,
+		.mask		= HCR_TIDCP,
+		.behaviour	= BEHAVE_FORWARD_ANY,
+	},
+	[CGT_HCR_TACR] = {
+		.index		= HCR_EL2,
+		.value		= HCR_TACR,
+		.mask		= HCR_TACR,
+		.behaviour	= BEHAVE_FORWARD_ANY,
+	},
+	[CGT_HCR_TSW] = {
+		.index		= HCR_EL2,
+		.value		= HCR_TSW,
+		.mask		= HCR_TSW,
+		.behaviour	= BEHAVE_FORWARD_ANY,
+	},
+	[CGT_HCR_TPC] = { /* Also called TCPC when FEAT_DPB is implemented */
+		.index		= HCR_EL2,
+		.value		= HCR_TPC,
+		.mask		= HCR_TPC,
+		.behaviour	= BEHAVE_FORWARD_ANY,
+	},
+	[CGT_HCR_TPU] = {
+		.index		= HCR_EL2,
+		.value		= HCR_TPU,
+		.mask		= HCR_TPU,
+		.behaviour	= BEHAVE_FORWARD_ANY,
+	},
+	[CGT_HCR_TTLB] = {
+		.index		= HCR_EL2,
+		.value		= HCR_TTLB,
+		.mask		= HCR_TTLB,
+		.behaviour	= BEHAVE_FORWARD_ANY,
+	},
+	[CGT_HCR_TVM] = {
+		.index		= HCR_EL2,
+		.value		= HCR_TVM,
+		.mask		= HCR_TVM,
+		.behaviour	= BEHAVE_FORWARD_WRITE,
+	},
+	[CGT_HCR_TDZ] = {
+		.index		= HCR_EL2,
+		.value		= HCR_TDZ,
+		.mask		= HCR_TDZ,
+		.behaviour	= BEHAVE_FORWARD_ANY,
+	},
+	[CGT_HCR_TRVM] = {
+		.index		= HCR_EL2,
+		.value		= HCR_TRVM,
+		.mask		= HCR_TRVM,
+		.behaviour	= BEHAVE_FORWARD_READ,
+	},
+	[CGT_HCR_TLOR] = {
+		.index		= HCR_EL2,
+		.value		= HCR_TLOR,
+		.mask		= HCR_TLOR,
+		.behaviour	= BEHAVE_FORWARD_ANY,
+	},
+	[CGT_HCR_TERR] = {
+		.index		= HCR_EL2,
+		.value		= HCR_TERR,
+		.mask		= HCR_TERR,
+		.behaviour	= BEHAVE_FORWARD_ANY,
+	},
+	[CGT_HCR_APK] = {
+		.index		= HCR_EL2,
+		.value		= 0,
+		.mask		= HCR_APK,
+		.behaviour	= BEHAVE_FORWARD_ANY,
+	},
+	[CGT_HCR_NV] = {
+		.index		= HCR_EL2,
+		.value		= HCR_NV,
+		.mask		= HCR_NV,
+		.behaviour	= BEHAVE_FORWARD_ANY,
+	},
+	[CGT_HCR_NV_nNV2] = {
+		.index		= HCR_EL2,
+		.value		= HCR_NV,
+		.mask		= HCR_NV | HCR_NV2,
+		.behaviour	= BEHAVE_FORWARD_ANY,
+	},
+	[CGT_HCR_NV1_nNV2] = {
+		.index		= HCR_EL2,
+		.value		= HCR_NV | HCR_NV1,
+		.mask		= HCR_NV | HCR_NV1 | HCR_NV2,
+		.behaviour	= BEHAVE_FORWARD_ANY,
+	},
+	[CGT_HCR_AT] = {
+		.index		= HCR_EL2,
+		.value		= HCR_AT,
+		.mask		= HCR_AT,
+		.behaviour	= BEHAVE_FORWARD_ANY,
+	},
+	[CGT_HCR_nFIEN] = {
+		.index		= HCR_EL2,
+		.value		= 0,
+		.mask		= HCR_FIEN,
+		.behaviour	= BEHAVE_FORWARD_ANY,
+	},
+	[CGT_HCR_TID4] = {
+		.index		= HCR_EL2,
+		.value 		= HCR_TID4,
+		.mask		= HCR_TID4,
+		.behaviour	= BEHAVE_FORWARD_ANY,
+	},
+	[CGT_HCR_TICAB] = {
+		.index		= HCR_EL2,
+		.value 		= HCR_TICAB,
+		.mask		= HCR_TICAB,
+		.behaviour	= BEHAVE_FORWARD_ANY,
+	},
+	[CGT_HCR_TOCU] = {
+		.index		= HCR_EL2,
+		.value 		= HCR_TOCU,
+		.mask		= HCR_TOCU,
+		.behaviour	= BEHAVE_FORWARD_ANY,
+	},
+	[CGT_HCR_ENSCXT] = {
+		.index		= HCR_EL2,
+		.value 		= 0,
+		.mask		= HCR_ENSCXT,
+		.behaviour	= BEHAVE_FORWARD_ANY,
+	},
+	[CGT_HCR_TTLBIS] = {
+		.index		= HCR_EL2,
+		.value		= HCR_TTLBIS,
+		.mask		= HCR_TTLBIS,
+		.behaviour	= BEHAVE_FORWARD_ANY,
+	},
+	[CGT_HCR_TTLBOS] = {
+		.index		= HCR_EL2,
+		.value		= HCR_TTLBOS,
+		.mask		= HCR_TTLBOS,
+		.behaviour	= BEHAVE_FORWARD_ANY,
+	},
 };
 
 #define MCB(id, ...)						\
@@ -65,6 +269,14 @@ static const struct trap_bits coarse_trap_bits[] = {
 		}
 
 static const enum cgt_group_id *coarse_control_combo[] = {
+	MCB(CGT_HCR_IMO_FMO,		CGT_HCR_IMO, CGT_HCR_FMO),
+	MCB(CGT_HCR_TID2_TID4,		CGT_HCR_TID2, CGT_HCR_TID4),
+	MCB(CGT_HCR_TTLB_TTLBIS,	CGT_HCR_TTLB, CGT_HCR_TTLBIS),
+	MCB(CGT_HCR_TTLB_TTLBOS,	CGT_HCR_TTLB, CGT_HCR_TTLBOS),
+	MCB(CGT_HCR_TVM_TRVM,		CGT_HCR_TVM, CGT_HCR_TRVM),
+	MCB(CGT_HCR_TPU_TICAB,		CGT_HCR_TPU, CGT_HCR_TICAB),
+	MCB(CGT_HCR_TPU_TOCU,		CGT_HCR_TPU, CGT_HCR_TOCU),
+	MCB(CGT_HCR_NV1_nNV2_ENSCXT,	CGT_HCR_NV1_nNV2, CGT_HCR_ENSCXT),
 };
 
 typedef enum trap_behaviour (*complex_condition_check)(struct kvm_vcpu *);
@@ -121,6 +333,282 @@ struct encoding_to_trap_config {
  * re-injected in the nested hypervisor.
  */
 static const struct encoding_to_trap_config encoding_to_cgt[] __initconst = {
+	SR_TRAP(SYS_REVIDR_EL1,		CGT_HCR_TID1),
+	SR_TRAP(SYS_AIDR_EL1,		CGT_HCR_TID1),
+	SR_TRAP(SYS_SMIDR_EL1,		CGT_HCR_TID1),
+	SR_TRAP(SYS_CTR_EL0,		CGT_HCR_TID2),
+	SR_TRAP(SYS_CCSIDR_EL1,		CGT_HCR_TID2_TID4),
+	SR_TRAP(SYS_CCSIDR2_EL1,	CGT_HCR_TID2_TID4),
+	SR_TRAP(SYS_CLIDR_EL1,		CGT_HCR_TID2_TID4),
+	SR_TRAP(SYS_CSSELR_EL1,		CGT_HCR_TID2_TID4),
+	SR_RANGE_TRAP(SYS_ID_PFR0_EL1,
+		      sys_reg(3, 0, 0, 7, 7), CGT_HCR_TID3),
+	SR_TRAP(SYS_ICC_SGI0R_EL1,	CGT_HCR_IMO_FMO),
+	SR_TRAP(SYS_ICC_ASGI1R_EL1,	CGT_HCR_IMO_FMO),
+	SR_TRAP(SYS_ICC_SGI1R_EL1,	CGT_HCR_IMO_FMO),
+	SR_RANGE_TRAP(sys_reg(3, 0, 11, 0, 0),
+		      sys_reg(3, 0, 11, 15, 7), CGT_HCR_TIDCP),
+	SR_RANGE_TRAP(sys_reg(3, 1, 11, 0, 0),
+		      sys_reg(3, 1, 11, 15, 7), CGT_HCR_TIDCP),
+	SR_RANGE_TRAP(sys_reg(3, 2, 11, 0, 0),
+		      sys_reg(3, 2, 11, 15, 7), CGT_HCR_TIDCP),
+	SR_RANGE_TRAP(sys_reg(3, 3, 11, 0, 0),
+		      sys_reg(3, 3, 11, 15, 7), CGT_HCR_TIDCP),
+	SR_RANGE_TRAP(sys_reg(3, 4, 11, 0, 0),
+		      sys_reg(3, 4, 11, 15, 7), CGT_HCR_TIDCP),
+	SR_RANGE_TRAP(sys_reg(3, 5, 11, 0, 0),
+		      sys_reg(3, 5, 11, 15, 7), CGT_HCR_TIDCP),
+	SR_RANGE_TRAP(sys_reg(3, 6, 11, 0, 0),
+		      sys_reg(3, 6, 11, 15, 7), CGT_HCR_TIDCP),
+	SR_RANGE_TRAP(sys_reg(3, 7, 11, 0, 0),
+		      sys_reg(3, 7, 11, 15, 7), CGT_HCR_TIDCP),
+	SR_RANGE_TRAP(sys_reg(3, 0, 15, 0, 0),
+		      sys_reg(3, 0, 15, 15, 7), CGT_HCR_TIDCP),
+	SR_RANGE_TRAP(sys_reg(3, 1, 15, 0, 0),
+		      sys_reg(3, 1, 15, 15, 7), CGT_HCR_TIDCP),
+	SR_RANGE_TRAP(sys_reg(3, 2, 15, 0, 0),
+		      sys_reg(3, 2, 15, 15, 7), CGT_HCR_TIDCP),
+	SR_RANGE_TRAP(sys_reg(3, 3, 15, 0, 0),
+		      sys_reg(3, 3, 15, 15, 7), CGT_HCR_TIDCP),
+	SR_RANGE_TRAP(sys_reg(3, 4, 15, 0, 0),
+		      sys_reg(3, 4, 15, 15, 7), CGT_HCR_TIDCP),
+	SR_RANGE_TRAP(sys_reg(3, 5, 15, 0, 0),
+		      sys_reg(3, 5, 15, 15, 7), CGT_HCR_TIDCP),
+	SR_RANGE_TRAP(sys_reg(3, 6, 15, 0, 0),
+		      sys_reg(3, 6, 15, 15, 7), CGT_HCR_TIDCP),
+	SR_RANGE_TRAP(sys_reg(3, 7, 15, 0, 0),
+		      sys_reg(3, 7, 15, 15, 7), CGT_HCR_TIDCP),
+	SR_TRAP(SYS_ACTLR_EL1,		CGT_HCR_TACR),
+	SR_TRAP(SYS_DC_ISW,		CGT_HCR_TSW),
+	SR_TRAP(SYS_DC_CSW,		CGT_HCR_TSW),
+	SR_TRAP(SYS_DC_CISW,		CGT_HCR_TSW),
+	SR_TRAP(SYS_DC_IGSW,		CGT_HCR_TSW),
+	SR_TRAP(SYS_DC_IGDSW,		CGT_HCR_TSW),
+	SR_TRAP(SYS_DC_CGSW,		CGT_HCR_TSW),
+	SR_TRAP(SYS_DC_CGDSW,		CGT_HCR_TSW),
+	SR_TRAP(SYS_DC_CIGSW,		CGT_HCR_TSW),
+	SR_TRAP(SYS_DC_CIGDSW,		CGT_HCR_TSW),
+	SR_TRAP(SYS_DC_CIVAC,		CGT_HCR_TPC),
+	SR_TRAP(SYS_DC_CVAC,		CGT_HCR_TPC),
+	SR_TRAP(SYS_DC_CVAP,		CGT_HCR_TPC),
+	SR_TRAP(SYS_DC_CVADP,		CGT_HCR_TPC),
+	SR_TRAP(SYS_DC_IVAC,		CGT_HCR_TPC),
+	SR_TRAP(SYS_DC_CIGVAC,		CGT_HCR_TPC),
+	SR_TRAP(SYS_DC_CIGDVAC,		CGT_HCR_TPC),
+	SR_TRAP(SYS_DC_IGVAC,		CGT_HCR_TPC),
+	SR_TRAP(SYS_DC_IGDVAC,		CGT_HCR_TPC),
+	SR_TRAP(SYS_DC_CGVAC,		CGT_HCR_TPC),
+	SR_TRAP(SYS_DC_CGDVAC,		CGT_HCR_TPC),
+	SR_TRAP(SYS_DC_CGVAP,		CGT_HCR_TPC),
+	SR_TRAP(SYS_DC_CGDVAP,		CGT_HCR_TPC),
+	SR_TRAP(SYS_DC_CGVADP,		CGT_HCR_TPC),
+	SR_TRAP(SYS_DC_CGDVADP,		CGT_HCR_TPC),
+	SR_TRAP(SYS_IC_IVAU,		CGT_HCR_TPU_TOCU),
+	SR_TRAP(SYS_IC_IALLU,		CGT_HCR_TPU_TOCU),
+	SR_TRAP(SYS_IC_IALLUIS,		CGT_HCR_TPU_TICAB),
+	SR_TRAP(SYS_DC_CVAU,		CGT_HCR_TPU_TOCU),
+	SR_TRAP(OP_TLBI_RVAE1,		CGT_HCR_TTLB),
+	SR_TRAP(OP_TLBI_RVAAE1,		CGT_HCR_TTLB),
+	SR_TRAP(OP_TLBI_RVALE1,		CGT_HCR_TTLB),
+	SR_TRAP(OP_TLBI_RVAALE1,	CGT_HCR_TTLB),
+	SR_TRAP(OP_TLBI_VMALLE1,	CGT_HCR_TTLB),
+	SR_TRAP(OP_TLBI_VAE1,		CGT_HCR_TTLB),
+	SR_TRAP(OP_TLBI_ASIDE1,		CGT_HCR_TTLB),
+	SR_TRAP(OP_TLBI_VAAE1,		CGT_HCR_TTLB),
+	SR_TRAP(OP_TLBI_VALE1,		CGT_HCR_TTLB),
+	SR_TRAP(OP_TLBI_VAALE1,		CGT_HCR_TTLB),
+	SR_TRAP(OP_TLBI_RVAE1NXS,	CGT_HCR_TTLB),
+	SR_TRAP(OP_TLBI_RVAAE1NXS,	CGT_HCR_TTLB),
+	SR_TRAP(OP_TLBI_RVALE1NXS,	CGT_HCR_TTLB),
+	SR_TRAP(OP_TLBI_RVAALE1NXS,	CGT_HCR_TTLB),
+	SR_TRAP(OP_TLBI_VMALLE1NXS,	CGT_HCR_TTLB),
+	SR_TRAP(OP_TLBI_VAE1NXS,	CGT_HCR_TTLB),
+	SR_TRAP(OP_TLBI_ASIDE1NXS,	CGT_HCR_TTLB),
+	SR_TRAP(OP_TLBI_VAAE1NXS,	CGT_HCR_TTLB),
+	SR_TRAP(OP_TLBI_VALE1NXS,	CGT_HCR_TTLB),
+	SR_TRAP(OP_TLBI_VAALE1NXS,	CGT_HCR_TTLB),
+	SR_TRAP(OP_TLBI_RVAE1IS,	CGT_HCR_TTLB_TTLBIS),
+	SR_TRAP(OP_TLBI_RVAAE1IS,	CGT_HCR_TTLB_TTLBIS),
+	SR_TRAP(OP_TLBI_RVALE1IS,	CGT_HCR_TTLB_TTLBIS),
+	SR_TRAP(OP_TLBI_RVAALE1IS,	CGT_HCR_TTLB_TTLBIS),
+	SR_TRAP(OP_TLBI_VMALLE1IS,	CGT_HCR_TTLB_TTLBIS),
+	SR_TRAP(OP_TLBI_VAE1IS,		CGT_HCR_TTLB_TTLBIS),
+	SR_TRAP(OP_TLBI_ASIDE1IS,	CGT_HCR_TTLB_TTLBIS),
+	SR_TRAP(OP_TLBI_VAAE1IS,	CGT_HCR_TTLB_TTLBIS),
+	SR_TRAP(OP_TLBI_VALE1IS,	CGT_HCR_TTLB_TTLBIS),
+	SR_TRAP(OP_TLBI_VAALE1IS,	CGT_HCR_TTLB_TTLBIS),
+	SR_TRAP(OP_TLBI_RVAE1ISNXS,	CGT_HCR_TTLB_TTLBIS),
+	SR_TRAP(OP_TLBI_RVAAE1ISNXS,	CGT_HCR_TTLB_TTLBIS),
+	SR_TRAP(OP_TLBI_RVALE1ISNXS,	CGT_HCR_TTLB_TTLBIS),
+	SR_TRAP(OP_TLBI_RVAALE1ISNXS,	CGT_HCR_TTLB_TTLBIS),
+	SR_TRAP(OP_TLBI_VMALLE1ISNXS,	CGT_HCR_TTLB_TTLBIS),
+	SR_TRAP(OP_TLBI_VAE1ISNXS,	CGT_HCR_TTLB_TTLBIS),
+	SR_TRAP(OP_TLBI_ASIDE1ISNXS,	CGT_HCR_TTLB_TTLBIS),
+	SR_TRAP(OP_TLBI_VAAE1ISNXS,	CGT_HCR_TTLB_TTLBIS),
+	SR_TRAP(OP_TLBI_VALE1ISNXS,	CGT_HCR_TTLB_TTLBIS),
+	SR_TRAP(OP_TLBI_VAALE1ISNXS,	CGT_HCR_TTLB_TTLBIS),
+	SR_TRAP(OP_TLBI_VMALLE1OS,	CGT_HCR_TTLB_TTLBOS),
+	SR_TRAP(OP_TLBI_VAE1OS,		CGT_HCR_TTLB_TTLBOS),
+	SR_TRAP(OP_TLBI_ASIDE1OS,	CGT_HCR_TTLB_TTLBOS),
+	SR_TRAP(OP_TLBI_VAAE1OS,	CGT_HCR_TTLB_TTLBOS),
+	SR_TRAP(OP_TLBI_VALE1OS,	CGT_HCR_TTLB_TTLBOS),
+	SR_TRAP(OP_TLBI_VAALE1OS,	CGT_HCR_TTLB_TTLBOS),
+	SR_TRAP(OP_TLBI_RVAE1OS,	CGT_HCR_TTLB_TTLBOS),
+	SR_TRAP(OP_TLBI_RVAAE1OS,	CGT_HCR_TTLB_TTLBOS),
+	SR_TRAP(OP_TLBI_RVALE1OS,	CGT_HCR_TTLB_TTLBOS),
+	SR_TRAP(OP_TLBI_RVAALE1OS,	CGT_HCR_TTLB_TTLBOS),
+	SR_TRAP(OP_TLBI_VMALLE1OSNXS,	CGT_HCR_TTLB_TTLBOS),
+	SR_TRAP(OP_TLBI_VAE1OSNXS,	CGT_HCR_TTLB_TTLBOS),
+	SR_TRAP(OP_TLBI_ASIDE1OSNXS,	CGT_HCR_TTLB_TTLBOS),
+	SR_TRAP(OP_TLBI_VAAE1OSNXS,	CGT_HCR_TTLB_TTLBOS),
+	SR_TRAP(OP_TLBI_VALE1OSNXS,	CGT_HCR_TTLB_TTLBOS),
+	SR_TRAP(OP_TLBI_VAALE1OSNXS,	CGT_HCR_TTLB_TTLBOS),
+	SR_TRAP(OP_TLBI_RVAE1OSNXS,	CGT_HCR_TTLB_TTLBOS),
+	SR_TRAP(OP_TLBI_RVAAE1OSNXS,	CGT_HCR_TTLB_TTLBOS),
+	SR_TRAP(OP_TLBI_RVALE1OSNXS,	CGT_HCR_TTLB_TTLBOS),
+	SR_TRAP(OP_TLBI_RVAALE1OSNXS,	CGT_HCR_TTLB_TTLBOS),
+	SR_TRAP(SYS_SCTLR_EL1,		CGT_HCR_TVM_TRVM),
+	SR_TRAP(SYS_TTBR0_EL1,		CGT_HCR_TVM_TRVM),
+	SR_TRAP(SYS_TTBR1_EL1,		CGT_HCR_TVM_TRVM),
+	SR_TRAP(SYS_TCR_EL1,		CGT_HCR_TVM_TRVM),
+	SR_TRAP(SYS_ESR_EL1,		CGT_HCR_TVM_TRVM),
+	SR_TRAP(SYS_FAR_EL1,		CGT_HCR_TVM_TRVM),
+	SR_TRAP(SYS_AFSR0_EL1,		CGT_HCR_TVM_TRVM),
+	SR_TRAP(SYS_AFSR1_EL1,		CGT_HCR_TVM_TRVM),
+	SR_TRAP(SYS_MAIR_EL1,		CGT_HCR_TVM_TRVM),
+	SR_TRAP(SYS_AMAIR_EL1,		CGT_HCR_TVM_TRVM),
+	SR_TRAP(SYS_CONTEXTIDR_EL1,	CGT_HCR_TVM_TRVM),
+	SR_TRAP(SYS_DC_ZVA,		CGT_HCR_TDZ),
+	SR_TRAP(SYS_DC_GVA,		CGT_HCR_TDZ),
+	SR_TRAP(SYS_DC_GZVA,		CGT_HCR_TDZ),
+	SR_TRAP(SYS_LORSA_EL1,		CGT_HCR_TLOR),
+	SR_TRAP(SYS_LOREA_EL1, 		CGT_HCR_TLOR),
+	SR_TRAP(SYS_LORN_EL1, 		CGT_HCR_TLOR),
+	SR_TRAP(SYS_LORC_EL1, 		CGT_HCR_TLOR),
+	SR_TRAP(SYS_LORID_EL1,		CGT_HCR_TLOR),
+	SR_TRAP(SYS_ERRIDR_EL1,		CGT_HCR_TERR),
+	SR_TRAP(SYS_ERRSELR_EL1,	CGT_HCR_TERR),
+	SR_TRAP(SYS_ERXADDR_EL1,	CGT_HCR_TERR),
+	SR_TRAP(SYS_ERXCTLR_EL1,	CGT_HCR_TERR),
+	SR_TRAP(SYS_ERXFR_EL1,		CGT_HCR_TERR),
+	SR_TRAP(SYS_ERXMISC0_EL1,	CGT_HCR_TERR),
+	SR_TRAP(SYS_ERXMISC1_EL1,	CGT_HCR_TERR),
+	SR_TRAP(SYS_ERXMISC2_EL1,	CGT_HCR_TERR),
+	SR_TRAP(SYS_ERXMISC3_EL1,	CGT_HCR_TERR),
+	SR_TRAP(SYS_ERXSTATUS_EL1,	CGT_HCR_TERR),
+	SR_TRAP(SYS_APIAKEYLO_EL1,	CGT_HCR_APK),
+	SR_TRAP(SYS_APIAKEYHI_EL1,	CGT_HCR_APK),
+	SR_TRAP(SYS_APIBKEYLO_EL1,	CGT_HCR_APK),
+	SR_TRAP(SYS_APIBKEYHI_EL1,	CGT_HCR_APK),
+	SR_TRAP(SYS_APDAKEYLO_EL1,	CGT_HCR_APK),
+	SR_TRAP(SYS_APDAKEYHI_EL1,	CGT_HCR_APK),
+	SR_TRAP(SYS_APDBKEYLO_EL1,	CGT_HCR_APK),
+	SR_TRAP(SYS_APDBKEYHI_EL1,	CGT_HCR_APK),
+	SR_TRAP(SYS_APGAKEYLO_EL1,	CGT_HCR_APK),
+	SR_TRAP(SYS_APGAKEYHI_EL1,	CGT_HCR_APK),
+	/* All _EL2 registers */
+	SR_RANGE_TRAP(sys_reg(3, 4, 0, 0, 0),
+		      sys_reg(3, 4, 3, 15, 7), CGT_HCR_NV),
+	/* Skip the SP_EL1 encoding... */
+	SR_RANGE_TRAP(sys_reg(3, 4, 4, 1, 1),
+		      sys_reg(3, 4, 10, 15, 7), CGT_HCR_NV),
+	SR_RANGE_TRAP(sys_reg(3, 4, 12, 0, 0),
+		      sys_reg(3, 4, 14, 15, 7), CGT_HCR_NV),
+	/* All _EL02, _EL12 registers */
+	SR_RANGE_TRAP(sys_reg(3, 5, 0, 0, 0),
+		      sys_reg(3, 5, 10, 15, 7), CGT_HCR_NV),
+	SR_RANGE_TRAP(sys_reg(3, 5, 12, 0, 0),
+		      sys_reg(3, 5, 14, 15, 7), CGT_HCR_NV),
+	SR_TRAP(OP_AT_S1E2R,		CGT_HCR_NV),
+	SR_TRAP(OP_AT_S1E2W,		CGT_HCR_NV),
+	SR_TRAP(OP_AT_S12E1R,		CGT_HCR_NV),
+	SR_TRAP(OP_AT_S12E1W,		CGT_HCR_NV),
+	SR_TRAP(OP_AT_S12E0R,		CGT_HCR_NV),
+	SR_TRAP(OP_AT_S12E0W,		CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_IPAS2E1,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_RIPAS2E1,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_IPAS2LE1,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_RIPAS2LE1,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_RVAE2,		CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_RVALE2,		CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_ALLE2,		CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_VAE2,		CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_ALLE1,		CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_VALE2,		CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_VMALLS12E1,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_IPAS2E1NXS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_RIPAS2E1NXS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_IPAS2LE1NXS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_RIPAS2LE1NXS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_RVAE2NXS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_RVALE2NXS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_ALLE2NXS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_VAE2NXS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_ALLE1NXS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_VALE2NXS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_VMALLS12E1NXS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_IPAS2E1IS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_RIPAS2E1IS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_IPAS2LE1IS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_RIPAS2LE1IS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_RVAE2IS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_RVALE2IS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_ALLE2IS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_VAE2IS,		CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_ALLE1IS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_VALE2IS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_VMALLS12E1IS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_IPAS2E1ISNXS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_RIPAS2E1ISNXS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_IPAS2LE1ISNXS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_RIPAS2LE1ISNXS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_RVAE2ISNXS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_RVALE2ISNXS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_ALLE2ISNXS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_VAE2ISNXS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_ALLE1ISNXS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_VALE2ISNXS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_VMALLS12E1ISNXS,CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_ALLE2OS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_VAE2OS,		CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_ALLE1OS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_VALE2OS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_VMALLS12E1OS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_IPAS2E1OS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_RIPAS2E1OS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_IPAS2LE1OS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_RIPAS2LE1OS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_RVAE2OS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_RVALE2OS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_ALLE2OSNXS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_VAE2OSNXS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_ALLE1OSNXS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_VALE2OSNXS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_VMALLS12E1OSNXS,CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_IPAS2E1OSNXS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_RIPAS2E1OSNXS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_IPAS2LE1OSNXS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_RIPAS2LE1OSNXS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_RVAE2OSNXS,	CGT_HCR_NV),
+	SR_TRAP(OP_TLBI_RVALE2OSNXS,	CGT_HCR_NV),
+	SR_TRAP(OP_CPP_RCTX, 		CGT_HCR_NV),
+	SR_TRAP(OP_DVP_RCTX, 		CGT_HCR_NV),
+	SR_TRAP(OP_CFP_RCTX, 		CGT_HCR_NV),
+	SR_TRAP(SYS_SP_EL1,		CGT_HCR_NV_nNV2),
+	SR_TRAP(SYS_VBAR_EL1,		CGT_HCR_NV1_nNV2),
+	SR_TRAP(SYS_ELR_EL1,		CGT_HCR_NV1_nNV2),
+	SR_TRAP(SYS_SPSR_EL1,		CGT_HCR_NV1_nNV2),
+	SR_TRAP(SYS_SCXTNUM_EL1,	CGT_HCR_NV1_nNV2_ENSCXT),
+	SR_TRAP(SYS_SCXTNUM_EL0,	CGT_HCR_ENSCXT),
+	SR_TRAP(OP_AT_S1E1R, 		CGT_HCR_AT),
+	SR_TRAP(OP_AT_S1E1W, 		CGT_HCR_AT),
+	SR_TRAP(OP_AT_S1E0R, 		CGT_HCR_AT),
+	SR_TRAP(OP_AT_S1E0W, 		CGT_HCR_AT),
+	SR_TRAP(OP_AT_S1E1RP, 		CGT_HCR_AT),
+	SR_TRAP(OP_AT_S1E1WP, 		CGT_HCR_AT),
+	SR_TRAP(SYS_ERXPFGF_EL1,	CGT_HCR_nFIEN),
+	SR_TRAP(SYS_ERXPFGCTL_EL1,	CGT_HCR_nFIEN),
+	SR_TRAP(SYS_ERXPFGCDN_EL1,	CGT_HCR_nFIEN),
 };
 
 static DEFINE_XARRAY(sr_forward_xa);
-- 
2.34.1


* [PATCH v4 16/28] KVM: arm64: nv: Expose FEAT_EVT to nested guests
  2023-08-15 18:38 [PATCH v4 00/28] KVM: arm64: NV trap forwarding infrastructure Marc Zyngier
                   ` (14 preceding siblings ...)
  2023-08-15 18:38 ` [PATCH v4 15/28] KVM: arm64: nv: Add trap forwarding for HCR_EL2 Marc Zyngier
@ 2023-08-15 18:38 ` Marc Zyngier
  2023-08-15 18:38 ` [PATCH v4 17/28] KVM: arm64: nv: Add trap forwarding for MDCR_EL2 Marc Zyngier
                   ` (12 subsequent siblings)
  28 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2023-08-15 18:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: Catalin Marinas, Eric Auger, Mark Brown, Mark Rutland,
	Will Deacon, Alexandru Elisei, Andre Przywara, Chase Conklin,
	Ganapatrao Kulkarni, Darren Hart, Miguel Luis, Jing Zhang,
	James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu

Now that we properly implement FEAT_EVT (i.e. we correctly forward
the traps it controls), expose it to nested guests.

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Jing Zhang <jingzhangos@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/nested.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 315354d27978..7f80f385d9e8 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -124,8 +124,7 @@ void access_nested_id_reg(struct kvm_vcpu *v, struct sys_reg_params *p,
 		break;
 
 	case SYS_ID_AA64MMFR2_EL1:
-		val &= ~(NV_FTR(MMFR2, EVT)	|
-			 NV_FTR(MMFR2, BBM)	|
+		val &= ~(NV_FTR(MMFR2, BBM)	|
 			 NV_FTR(MMFR2, TTL)	|
 			 GENMASK_ULL(47, 44)	|
 			 NV_FTR(MMFR2, ST)	|
-- 
2.34.1


* [PATCH v4 17/28] KVM: arm64: nv: Add trap forwarding for MDCR_EL2
  2023-08-15 18:38 [PATCH v4 00/28] KVM: arm64: NV trap forwarding infrastructure Marc Zyngier
                   ` (15 preceding siblings ...)
  2023-08-15 18:38 ` [PATCH v4 16/28] KVM: arm64: nv: Expose FEAT_EVT to nested guests Marc Zyngier
@ 2023-08-15 18:38 ` Marc Zyngier
  2023-08-15 22:33   ` Jing Zhang
  2023-08-15 18:38 ` [PATCH v4 18/28] KVM: arm64: nv: Add trap forwarding for CNTHCTL_EL2 Marc Zyngier
                   ` (11 subsequent siblings)
  28 siblings, 1 reply; 48+ messages in thread
From: Marc Zyngier @ 2023-08-15 18:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: Catalin Marinas, Eric Auger, Mark Brown, Mark Rutland,
	Will Deacon, Alexandru Elisei, Andre Przywara, Chase Conklin,
	Ganapatrao Kulkarni, Darren Hart, Miguel Luis, Jing Zhang,
	James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu

Describe the MDCR_EL2 register, and associate it with all the sysregs
whose trapping it controls.
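
One detail worth spelling out (the sketch below is not part of the
patch): most debug sysregs are reachable through both the umbrella
MDCR_EL2.TDE control and a more specific bit such as TDA, hence the
new MCB() combinations. Evaluating a combination simply ORs the
behaviour contributed by each leaf control:

/* Sketch: CGT_MDCR_TDE_TDA forwards if either TDE or TDA is set */
static enum trap_behaviour example_tde_tda(u64 mdcr)
{
	enum trap_behaviour b = BEHAVE_HANDLE_LOCALLY;

	if (mdcr & MDCR_EL2_TDE)
		b |= BEHAVE_FORWARD_ANY;
	if (mdcr & MDCR_EL2_TDA)
		b |= BEHAVE_FORWARD_ANY;

	return b;
}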

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/emulate-nested.c | 268 ++++++++++++++++++++++++++++++++
 1 file changed, 268 insertions(+)

diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 975a30ef874a..241e44eeed6d 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -67,6 +67,18 @@ enum cgt_group_id {
 	CGT_HCR_TTLBIS,
 	CGT_HCR_TTLBOS,
 
+	CGT_MDCR_TPMCR,
+	CGT_MDCR_TPM,
+	CGT_MDCR_TDE,
+	CGT_MDCR_TDA,
+	CGT_MDCR_TDOSA,
+	CGT_MDCR_TDRA,
+	CGT_MDCR_E2PB,
+	CGT_MDCR_TPMS,
+	CGT_MDCR_TTRF,
+	CGT_MDCR_E2TB,
+	CGT_MDCR_TDCC,
+
 	/*
 	 * Anything after this point is a combination of coarse trap
 	 * controls, which must all be evaluated to decide what to do.
@@ -80,6 +92,11 @@ enum cgt_group_id {
 	CGT_HCR_TPU_TICAB,
 	CGT_HCR_TPU_TOCU,
 	CGT_HCR_NV1_nNV2_ENSCXT,
+	CGT_MDCR_TPM_TPMCR,
+	CGT_MDCR_TDE_TDA,
+	CGT_MDCR_TDE_TDOSA,
+	CGT_MDCR_TDE_TDRA,
+	CGT_MDCR_TDCC_TDE_TDA,
 
 	/*
 	 * Anything after this point requires a callback evaluating a
@@ -260,6 +277,72 @@ static const struct trap_bits coarse_trap_bits[] = {
 		.mask		= HCR_TTLBOS,
 		.behaviour	= BEHAVE_FORWARD_ANY,
 	},
+	[CGT_MDCR_TPMCR] = {
+		.index		= MDCR_EL2,
+		.value		= MDCR_EL2_TPMCR,
+		.mask		= MDCR_EL2_TPMCR,
+		.behaviour	= BEHAVE_FORWARD_ANY,
+	},
+	[CGT_MDCR_TPM] = {
+		.index		= MDCR_EL2,
+		.value		= MDCR_EL2_TPM,
+		.mask		= MDCR_EL2_TPM,
+		.behaviour	= BEHAVE_FORWARD_ANY,
+	},
+	[CGT_MDCR_TDE] = {
+		.index		= MDCR_EL2,
+		.value		= MDCR_EL2_TDE,
+		.mask		= MDCR_EL2_TDE,
+		.behaviour	= BEHAVE_FORWARD_ANY,
+	},
+	[CGT_MDCR_TDA] = {
+		.index		= MDCR_EL2,
+		.value		= MDCR_EL2_TDA,
+		.mask		= MDCR_EL2_TDA,
+		.behaviour	= BEHAVE_FORWARD_ANY,
+	},
+	[CGT_MDCR_TDOSA] = {
+		.index		= MDCR_EL2,
+		.value		= MDCR_EL2_TDOSA,
+		.mask		= MDCR_EL2_TDOSA,
+		.behaviour	= BEHAVE_FORWARD_ANY,
+	},
+	[CGT_MDCR_TDRA] = {
+		.index		= MDCR_EL2,
+		.value		= MDCR_EL2_TDRA,
+		.mask		= MDCR_EL2_TDRA,
+		.behaviour	= BEHAVE_FORWARD_ANY,
+	},
+	[CGT_MDCR_E2PB] = {
+		.index		= MDCR_EL2,
+		.value		= 0,
+		.mask		= BIT(MDCR_EL2_E2PB_SHIFT),
+		.behaviour	= BEHAVE_FORWARD_ANY,
+	},
+	[CGT_MDCR_TPMS] = {
+		.index		= MDCR_EL2,
+		.value		= MDCR_EL2_TPMS,
+		.mask		= MDCR_EL2_TPMS,
+		.behaviour	= BEHAVE_FORWARD_ANY,
+	},
+	[CGT_MDCR_TTRF] = {
+		.index		= MDCR_EL2,
+		.value		= MDCR_EL2_TTRF,
+		.mask		= MDCR_EL2_TTRF,
+		.behaviour	= BEHAVE_FORWARD_ANY,
+	},
+	[CGT_MDCR_E2TB] = {
+		.index		= MDCR_EL2,
+		.value		= 0,
+		.mask		= BIT(MDCR_EL2_E2TB_SHIFT),
+		.behaviour	= BEHAVE_FORWARD_ANY,
+	},
+	[CGT_MDCR_TDCC] = {
+		.index		= MDCR_EL2,
+		.value		= MDCR_EL2_TDCC,
+		.mask		= MDCR_EL2_TDCC,
+		.behaviour	= BEHAVE_FORWARD_ANY,
+	},
 };
 
 #define MCB(id, ...)						\
@@ -277,6 +360,11 @@ static const enum cgt_group_id *coarse_control_combo[] = {
 	MCB(CGT_HCR_TPU_TICAB,		CGT_HCR_TPU, CGT_HCR_TICAB),
 	MCB(CGT_HCR_TPU_TOCU,		CGT_HCR_TPU, CGT_HCR_TOCU),
 	MCB(CGT_HCR_NV1_nNV2_ENSCXT,	CGT_HCR_NV1_nNV2, CGT_HCR_ENSCXT),
+	MCB(CGT_MDCR_TPM_TPMCR,		CGT_MDCR_TPM, CGT_MDCR_TPMCR),
+	MCB(CGT_MDCR_TDE_TDA,		CGT_MDCR_TDE, CGT_MDCR_TDA),
+	MCB(CGT_MDCR_TDE_TDOSA,		CGT_MDCR_TDE, CGT_MDCR_TDOSA),
+	MCB(CGT_MDCR_TDE_TDRA,		CGT_MDCR_TDE, CGT_MDCR_TDRA),
+	MCB(CGT_MDCR_TDCC_TDE_TDA,	CGT_MDCR_TDCC, CGT_MDCR_TDE, CGT_MDCR_TDA),
 };
 
 typedef enum trap_behaviour (*complex_condition_check)(struct kvm_vcpu *);
@@ -609,6 +697,186 @@ static const struct encoding_to_trap_config encoding_to_cgt[] __initconst = {
 	SR_TRAP(SYS_ERXPFGF_EL1,	CGT_HCR_nFIEN),
 	SR_TRAP(SYS_ERXPFGCTL_EL1,	CGT_HCR_nFIEN),
 	SR_TRAP(SYS_ERXPFGCDN_EL1,	CGT_HCR_nFIEN),
+	SR_TRAP(SYS_PMCR_EL0,		CGT_MDCR_TPM_TPMCR),
+	SR_TRAP(SYS_PMCNTENSET_EL0,	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMCNTENCLR_EL0,	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMOVSSET_EL0,	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMOVSCLR_EL0,	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMCEID0_EL0,	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMCEID1_EL0,	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMXEVTYPER_EL0,	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMSWINC_EL0,	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMSELR_EL0,		CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMXEVCNTR_EL0,	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMCCNTR_EL0,	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMUSERENR_EL0,	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMINTENSET_EL1,	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMINTENCLR_EL1,	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMMIR_EL1,		CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVCNTRn_EL0(0),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVCNTRn_EL0(1),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVCNTRn_EL0(2),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVCNTRn_EL0(3),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVCNTRn_EL0(4),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVCNTRn_EL0(5),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVCNTRn_EL0(6),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVCNTRn_EL0(7),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVCNTRn_EL0(8),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVCNTRn_EL0(9),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVCNTRn_EL0(10),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVCNTRn_EL0(11),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVCNTRn_EL0(12),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVCNTRn_EL0(13),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVCNTRn_EL0(14),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVCNTRn_EL0(15),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVCNTRn_EL0(16),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVCNTRn_EL0(17),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVCNTRn_EL0(18),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVCNTRn_EL0(19),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVCNTRn_EL0(20),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVCNTRn_EL0(21),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVCNTRn_EL0(22),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVCNTRn_EL0(23),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVCNTRn_EL0(24),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVCNTRn_EL0(25),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVCNTRn_EL0(26),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVCNTRn_EL0(27),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVCNTRn_EL0(28),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVCNTRn_EL0(29),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVCNTRn_EL0(30),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVTYPERn_EL0(0),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVTYPERn_EL0(1),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVTYPERn_EL0(2),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVTYPERn_EL0(3),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVTYPERn_EL0(4),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVTYPERn_EL0(5),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVTYPERn_EL0(6),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVTYPERn_EL0(7),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVTYPERn_EL0(8),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVTYPERn_EL0(9),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVTYPERn_EL0(10),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVTYPERn_EL0(11),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVTYPERn_EL0(12),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVTYPERn_EL0(13),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVTYPERn_EL0(14),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVTYPERn_EL0(15),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVTYPERn_EL0(16),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVTYPERn_EL0(17),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVTYPERn_EL0(18),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVTYPERn_EL0(19),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVTYPERn_EL0(20),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVTYPERn_EL0(21),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVTYPERn_EL0(22),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVTYPERn_EL0(23),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVTYPERn_EL0(24),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVTYPERn_EL0(25),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVTYPERn_EL0(26),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVTYPERn_EL0(27),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVTYPERn_EL0(28),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVTYPERn_EL0(29),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMEVTYPERn_EL0(30),	CGT_MDCR_TPM),
+	SR_TRAP(SYS_PMCCFILTR_EL0,	CGT_MDCR_TPM),
+	SR_TRAP(SYS_MDCCSR_EL0,		CGT_MDCR_TDCC_TDE_TDA),
+	SR_TRAP(SYS_MDCCINT_EL1,	CGT_MDCR_TDCC_TDE_TDA),
+	SR_TRAP(SYS_OSDTRRX_EL1,	CGT_MDCR_TDCC_TDE_TDA),
+	SR_TRAP(SYS_OSDTRTX_EL1,	CGT_MDCR_TDCC_TDE_TDA),
+	SR_TRAP(SYS_DBGDTR_EL0,		CGT_MDCR_TDCC_TDE_TDA),
+	/*
+	 * Also covers DBGDTRRX_EL0, which has the same encoding as
+	 * SYS_DBGDTRTX_EL0...
+	 */
+	SR_TRAP(SYS_DBGDTRTX_EL0,	CGT_MDCR_TDCC_TDE_TDA),
+	SR_TRAP(SYS_MDSCR_EL1,		CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_OSECCR_EL1,		CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGBVRn_EL1(0),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGBVRn_EL1(1),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGBVRn_EL1(2),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGBVRn_EL1(3),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGBVRn_EL1(4),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGBVRn_EL1(5),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGBVRn_EL1(6),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGBVRn_EL1(7),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGBVRn_EL1(8),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGBVRn_EL1(9),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGBVRn_EL1(10),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGBVRn_EL1(11),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGBVRn_EL1(12),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGBVRn_EL1(13),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGBVRn_EL1(14),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGBVRn_EL1(15),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGBCRn_EL1(0),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGBCRn_EL1(1),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGBCRn_EL1(2),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGBCRn_EL1(3),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGBCRn_EL1(4),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGBCRn_EL1(5),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGBCRn_EL1(6),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGBCRn_EL1(7),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGBCRn_EL1(8),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGBCRn_EL1(9),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGBCRn_EL1(10),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGBCRn_EL1(11),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGBCRn_EL1(12),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGBCRn_EL1(13),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGBCRn_EL1(14),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGBCRn_EL1(15),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGWVRn_EL1(0),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGWVRn_EL1(1),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGWVRn_EL1(2),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGWVRn_EL1(3),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGWVRn_EL1(4),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGWVRn_EL1(5),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGWVRn_EL1(6),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGWVRn_EL1(7),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGWVRn_EL1(8),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGWVRn_EL1(9),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGWVRn_EL1(10),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGWVRn_EL1(11),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGWVRn_EL1(12),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGWVRn_EL1(13),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGWVRn_EL1(14),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGWVRn_EL1(15),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGWCRn_EL1(0),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGWCRn_EL1(1),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGWCRn_EL1(2),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGWCRn_EL1(3),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGWCRn_EL1(4),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGWCRn_EL1(5),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGWCRn_EL1(6),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGWCRn_EL1(7),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGWCRn_EL1(8),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGWCRn_EL1(9),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGWCRn_EL1(10),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGWCRn_EL1(11),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGWCRn_EL1(12),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGWCRn_EL1(13),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGWCRn_EL1(14),	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGCLAIMSET_EL1,	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGCLAIMCLR_EL1,	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_DBGAUTHSTATUS_EL1,	CGT_MDCR_TDE_TDA),
+	SR_TRAP(SYS_OSLAR_EL1,		CGT_MDCR_TDE_TDOSA),
+	SR_TRAP(SYS_OSLSR_EL1,		CGT_MDCR_TDE_TDOSA),
+	SR_TRAP(SYS_OSDLR_EL1,		CGT_MDCR_TDE_TDOSA),
+	SR_TRAP(SYS_DBGPRCR_EL1,	CGT_MDCR_TDE_TDOSA),
+	SR_TRAP(SYS_MDRAR_EL1,		CGT_MDCR_TDE_TDRA),
+	SR_TRAP(SYS_PMBLIMITR_EL1,	CGT_MDCR_E2PB),
+	SR_TRAP(SYS_PMBPTR_EL1,		CGT_MDCR_E2PB),
+	SR_TRAP(SYS_PMBSR_EL1,		CGT_MDCR_E2PB),
+	SR_TRAP(SYS_PMSCR_EL1,		CGT_MDCR_TPMS),
+	SR_TRAP(SYS_PMSEVFR_EL1,	CGT_MDCR_TPMS),
+	SR_TRAP(SYS_PMSFCR_EL1,		CGT_MDCR_TPMS),
+	SR_TRAP(SYS_PMSICR_EL1,		CGT_MDCR_TPMS),
+	SR_TRAP(SYS_PMSIDR_EL1,		CGT_MDCR_TPMS),
+	SR_TRAP(SYS_PMSIRR_EL1,		CGT_MDCR_TPMS),
+	SR_TRAP(SYS_PMSLATFR_EL1,	CGT_MDCR_TPMS),
+	SR_TRAP(SYS_PMSNEVFR_EL1,	CGT_MDCR_TPMS),
+	SR_TRAP(SYS_TRFCR_EL1,		CGT_MDCR_TTRF),
+	SR_TRAP(SYS_TRBBASER_EL1,	CGT_MDCR_E2TB),
+	SR_TRAP(SYS_TRBLIMITR_EL1,	CGT_MDCR_E2TB),
+	SR_TRAP(SYS_TRBMAR_EL1, 	CGT_MDCR_E2TB),
+	SR_TRAP(SYS_TRBPTR_EL1, 	CGT_MDCR_E2TB),
+	SR_TRAP(SYS_TRBSR_EL1, 		CGT_MDCR_E2TB),
+	SR_TRAP(SYS_TRBTRG_EL1,		CGT_MDCR_E2TB),
 };
 
 static DEFINE_XARRAY(sr_forward_xa);
-- 
2.34.1


* [PATCH v4 18/28] KVM: arm64: nv: Add trap forwarding for CNTHCTL_EL2
  2023-08-15 18:38 [PATCH v4 00/28] KVM: arm64: NV trap forwarding infrastructure Marc Zyngier
                   ` (16 preceding siblings ...)
  2023-08-15 18:38 ` [PATCH v4 17/28] KVM: arm64: nv: Add trap forwarding for MDCR_EL2 Marc Zyngier
@ 2023-08-15 18:38 ` Marc Zyngier
  2023-08-15 22:42   ` Jing Zhang
  2023-08-15 18:38 ` [PATCH v4 19/28] KVM: arm64: nv: Add fine grained trap forwarding infrastructure Marc Zyngier
                   ` (10 subsequent siblings)
  28 siblings, 1 reply; 48+ messages in thread
From: Marc Zyngier @ 2023-08-15 18:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: Catalin Marinas, Eric Auger, Mark Brown, Mark Rutland,
	Will Deacon, Alexandru Elisei, Andre Przywara, Chase Conklin,
	Ganapatrao Kulkarni, Darren Hart, Miguel Luis, Jing Zhang,
	James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu

Describe the CNTHCTL_EL2 register, and associate it with all the sysregs
whose trapping it controls.
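
The ugly part (see the "maximum confusion" comment in the diff) is
that the same two controls live at CNTHCTL_EL2[1:0] when E2H=0 and
at CNTHCTL_EL2[11:10] when E2H=1. A sketch of the normalisation the
patch performs, treating the E2H=1 layout as canonical:

/* Sketch: normalise CNTHCTL_EL2 to the E2H=1 bit positions */
static u64 example_sanitize_cnthctl(u64 cnthctl, bool e2h)
{
	u64 bits = CNTHCTL_EL1PCEN | CNTHCTL_EL1PCTEN; /* bits [1:0] */

	if (!e2h)
		cnthctl = (cnthctl & bits) << 10;

	return cnthctl & (bits << 10); /* only keep bits [11:10] */
}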

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/emulate-nested.c | 50 ++++++++++++++++++++++++++++++++-
 1 file changed, 49 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 241e44eeed6d..860910386b5b 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -100,9 +100,11 @@ enum cgt_group_id {
 
 	/*
 	 * Anything after this point requires a callback evaluating a
-	 * complex trap condition. Hopefully we'll never need this...
+	 * complex trap condition. Ugly stuff.
 	 */
 	__COMPLEX_CONDITIONS__,
+	CGT_CNTHCTL_EL1PCTEN = __COMPLEX_CONDITIONS__,
+	CGT_CNTHCTL_EL1PTEN,
 
 	/* Must be last */
 	__NR_CGT_GROUP_IDS__
@@ -369,10 +371,51 @@ static const enum cgt_group_id *coarse_control_combo[] = {
 
 typedef enum trap_behaviour (*complex_condition_check)(struct kvm_vcpu *);
 
+/*
+ * Warning, maximum confusion ahead.
+ *
+ * When E2H=0, CNTHCTL_EL2[1:0] are defined as EL1PCEN:EL1PCTEN
+ * When E2H=1, CNTHCTL_EL2[11:10] are defined as EL1PTEN:EL1PCTEN
+ *
+ * Note the single letter difference? Yet, the bits have the same
+ * function despite a different layout and a different name.
+ *
+ * We don't try to reconcile this mess. We just use the E2H=0 bits
+ * to generate something that is in the E2H=1 format, and live with
+ * it. You're welcome.
+ */
+static u64 get_sanitized_cnthctl(struct kvm_vcpu *vcpu)
+{
+	u64 val = __vcpu_sys_reg(vcpu, CNTHCTL_EL2);
+
+	if (!vcpu_el2_e2h_is_set(vcpu))
+		val = (val & (CNTHCTL_EL1PCEN | CNTHCTL_EL1PCTEN)) << 10;
+
+	return val & ((CNTHCTL_EL1PCEN | CNTHCTL_EL1PCTEN) << 10);
+}
+
+static enum trap_behaviour check_cnthctl_el1pcten(struct kvm_vcpu *vcpu)
+{
+	if (get_sanitized_cnthctl(vcpu) & (CNTHCTL_EL1PCTEN << 10))
+		return BEHAVE_HANDLE_LOCALLY;
+
+	return BEHAVE_FORWARD_ANY;
+}
+
+static enum trap_behaviour check_cnthctl_el1pten(struct kvm_vcpu *vcpu)
+{
+	if (get_sanitized_cnthctl(vcpu) & (CNTHCTL_EL1PCEN << 10))
+		return BEHAVE_HANDLE_LOCALLY;
+
+	return BEHAVE_FORWARD_ANY;
+}
+
 #define CCC(id, fn)				\
 	[id - __COMPLEX_CONDITIONS__] = fn
 
 static const complex_condition_check ccc[] = {
+	CCC(CGT_CNTHCTL_EL1PCTEN, check_cnthctl_el1pcten),
+	CCC(CGT_CNTHCTL_EL1PTEN, check_cnthctl_el1pten),
 };
 
 /*
@@ -877,6 +920,11 @@ static const struct encoding_to_trap_config encoding_to_cgt[] __initconst = {
 	SR_TRAP(SYS_TRBPTR_EL1, 	CGT_MDCR_E2TB),
 	SR_TRAP(SYS_TRBSR_EL1, 		CGT_MDCR_E2TB),
 	SR_TRAP(SYS_TRBTRG_EL1,		CGT_MDCR_E2TB),
+	SR_TRAP(SYS_CNTP_TVAL_EL0,	CGT_CNTHCTL_EL1PTEN),
+	SR_TRAP(SYS_CNTP_CVAL_EL0,	CGT_CNTHCTL_EL1PTEN),
+	SR_TRAP(SYS_CNTP_CTL_EL0,	CGT_CNTHCTL_EL1PTEN),
+	SR_TRAP(SYS_CNTPCT_EL0,		CGT_CNTHCTL_EL1PCTEN),
+	SR_TRAP(SYS_CNTPCTSS_EL0,	CGT_CNTHCTL_EL1PCTEN),
 };
 
 static DEFINE_XARRAY(sr_forward_xa);
-- 
2.34.1


* [PATCH v4 19/28] KVM: arm64: nv: Add fine grained trap forwarding infrastructure
  2023-08-15 18:38 [PATCH v4 00/28] KVM: arm64: NV trap forwarding infrastructure Marc Zyngier
                   ` (17 preceding siblings ...)
  2023-08-15 18:38 ` [PATCH v4 18/28] KVM: arm64: nv: Add trap forwarding for CNTHCTL_EL2 Marc Zyngier
@ 2023-08-15 18:38 ` Marc Zyngier
  2023-08-15 22:44   ` Jing Zhang
  2023-08-15 18:38 ` [PATCH v4 20/28] KVM: arm64: nv: Add trap forwarding for HFGxTR_EL2 Marc Zyngier
                   ` (9 subsequent siblings)
  28 siblings, 1 reply; 48+ messages in thread
From: Marc Zyngier @ 2023-08-15 18:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: Catalin Marinas, Eric Auger, Mark Brown, Mark Rutland,
	Will Deacon, Alexandru Elisei, Andre Przywara, Chase Conklin,
	Ganapatrao Kulkarni, Darren Hart, Miguel Luis, Jing Zhang,
	James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu

Fine Grained Traps are fun. Not.

Implement fine grained trap forwarding, reusing the Coarse Grained
Traps infrastructure introduced previously.

Each sysreg/instruction inserted in the xarray gets an FGT group
(vaguely equivalent to a register number), a bit number in that
register, and a polarity.

It is then pretty easy to check the FGT state at handling time, just
like we do for the coarse version (it is just faster).
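
The check itself boils down to comparing one bit of the relevant FGT
register against the stored polarity (sketch only, not part of the
patch): positive-polarity bits trap when set, the nXXX bits trap
when clear:

/* Sketch of check_fgt_bit(): trap if the bit matches the polarity */
static bool example_fgt_traps(u64 fgt_reg, unsigned int bit, bool pol)
{
	return ((fgt_reg >> bit) & 1) == pol;
}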

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/emulate-nested.c | 90 +++++++++++++++++++++++++++++++--
 1 file changed, 87 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 860910386b5b..0da9d92ed921 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -423,16 +423,23 @@ static const complex_condition_check ccc[] = {
  * following layout for each trapped sysreg:
  *
  * [9:0]	enum cgt_group_id (10 bits)
- * [62:10]	Unused (53 bits)
+ * [13:10]	enum fgt_group_id (4 bits)
+ * [19:14]	bit number in the FGT register (6 bits)
+ * [20]		trap polarity (1 bit)
+ * [62:21]	Unused (42 bits)
  * [63]		RES0 - Must be zero, as lost on insertion in the xarray
  */
 #define TC_CGT_BITS	10
+#define TC_FGT_BITS	4
 
 union trap_config {
 	u64	val;
 	struct {
 		unsigned long	cgt:TC_CGT_BITS; /* Coarse Grained Trap id */
-		unsigned long	unused:53;	 /* Unused, should be zero */
+		unsigned long	fgt:TC_FGT_BITS; /* Fine Grained Trap id */
+		unsigned long	bit:6;		 /* Bit number */
+		unsigned long	pol:1;		 /* Polarity */
+		unsigned long	unused:42;	 /* Unused, should be zero */
 		unsigned long	mbz:1;		 /* Must Be Zero */
 	};
 };
@@ -929,6 +936,28 @@ static const struct encoding_to_trap_config encoding_to_cgt[] __initconst = {
 
 static DEFINE_XARRAY(sr_forward_xa);
 
+enum fgt_group_id {
+	__NO_FGT_GROUP__,
+
+	/* Must be last */
+	__NR_FGT_GROUP_IDS__
+};
+
+#define SR_FGT(sr, g, b, p)					\
+	{							\
+		.encoding	= sr,				\
+		.end		= sr,				\
+		.tc		= {				\
+			.fgt = g ## _GROUP,			\
+			.bit = g ## _EL2_ ## b ## _SHIFT,	\
+			.pol = p,				\
+		},						\
+		.line = __LINE__,				\
+	}
+
+static const struct encoding_to_trap_config encoding_to_fgt[] __initconst = {
+};
+
 static union trap_config get_trap_config(u32 sysreg)
 {
 	return (union trap_config) {
@@ -957,6 +986,7 @@ int __init populate_nv_trap_config(void)
 
 	BUILD_BUG_ON(sizeof(union trap_config) != sizeof(void *));
 	BUILD_BUG_ON(__NR_CGT_GROUP_IDS__ > BIT(TC_CGT_BITS));
+	BUILD_BUG_ON(__NR_FGT_GROUP_IDS__ > BIT(TC_FGT_BITS));
 
 	for (int i = 0; i < ARRAY_SIZE(encoding_to_cgt); i++) {
 		const struct encoding_to_trap_config *cgt = &encoding_to_cgt[i];
@@ -990,6 +1020,34 @@ int __init populate_nv_trap_config(void)
 	kvm_info("nv: %ld coarse grained trap handlers\n",
 		 ARRAY_SIZE(encoding_to_cgt));
 
+	if (!cpus_have_final_cap(ARM64_HAS_FGT))
+		goto check_mcb;
+
+	for (int i = 0; i < ARRAY_SIZE(encoding_to_fgt); i++) {
+		const struct encoding_to_trap_config *fgt = &encoding_to_fgt[i];
+		union trap_config tc;
+
+		if (fgt->tc.fgt >= __NR_FGT_GROUP_IDS__) {
+			ret = -EINVAL;
+			print_nv_trap_error(fgt, "Invalid FGT", ret);
+		}
+
+		tc = get_trap_config(fgt->encoding);
+
+		if (tc.fgt) {
+			ret = -EINVAL;
+			print_nv_trap_error(fgt, "Duplicate FGT", ret);
+		}
+
+		tc.val |= fgt->tc.val;
+		xa_store(&sr_forward_xa, fgt->encoding,
+			 xa_mk_value(tc.val), GFP_KERNEL);
+	}
+
+	kvm_info("nv: %ld fine grained trap handlers\n",
+		 ARRAY_SIZE(encoding_to_fgt));
+
+check_mcb:
 	for (int id = __MULTIPLE_CONTROL_BITS__; id < __COMPLEX_CONDITIONS__; id++) {
 		const enum cgt_group_id *cgids;
 
@@ -1056,13 +1114,26 @@ static enum trap_behaviour compute_trap_behaviour(struct kvm_vcpu *vcpu,
 	return __compute_trap_behaviour(vcpu, tc.cgt, b);
 }
 
+static bool check_fgt_bit(u64 val, const union trap_config tc)
+{
+	return ((val >> tc.bit) & 1) == tc.pol;
+}
+
+#define sanitised_sys_reg(vcpu, reg)			\
+	({						\
+		u64 __val;				\
+		__val = __vcpu_sys_reg(vcpu, reg);	\
+		__val &= ~__ ## reg ## _RES0;		\
+		(__val);				\
+	})
+
 bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
 {
 	union trap_config tc;
 	enum trap_behaviour b;
 	bool is_read;
 	u32 sysreg;
-	u64 esr;
+	u64 esr, val;
 
 	if (!vcpu_has_nv(vcpu) || is_hyp_ctxt(vcpu))
 		return false;
@@ -1085,6 +1156,19 @@ bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
 	if (!tc.val)
 		return false;
 
+	switch ((enum fgt_group_id)tc.fgt) {
+	case __NO_FGT_GROUP__:
+		break;
+
+	case __NR_FGT_GROUP_IDS__:
+		/* Something is really wrong, bail out */
+		WARN_ONCE(1, "__NR_FGT_GROUP_IDS__");
+		return false;
+	}
+
+	if (tc.fgt != __NO_FGT_GROUP__ && check_fgt_bit(val, tc))
+		goto inject;
+
 	b = compute_trap_behaviour(vcpu, tc);
 
 	if (((b & BEHAVE_FORWARD_READ) && is_read) ||
-- 
2.34.1


* [PATCH v4 20/28] KVM: arm64: nv: Add trap forwarding for HFGxTR_EL2
  2023-08-15 18:38 [PATCH v4 00/28] KVM: arm64: NV trap forwarding infrastructure Marc Zyngier
                   ` (18 preceding siblings ...)
  2023-08-15 18:38 ` [PATCH v4 19/28] KVM: arm64: nv: Add fine grained trap forwarding infrastructure Marc Zyngier
@ 2023-08-15 18:38 ` Marc Zyngier
  2023-08-15 22:51   ` Jing Zhang
  2023-08-15 18:38 ` [PATCH v4 21/28] KVM: arm64: nv: Add trap forwarding for HFGITR_EL2 Marc Zyngier
                   ` (8 subsequent siblings)
  28 siblings, 1 reply; 48+ messages in thread
From: Marc Zyngier @ 2023-08-15 18:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: Catalin Marinas, Eric Auger, Mark Brown, Mark Rutland,
	Will Deacon, Alexandru Elisei, Andre Przywara, Chase Conklin,
	Ganapatrao Kulkarni, Darren Hart, Miguel Luis, Jing Zhang,
	James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu

Implement the trap forwarding for traps described by HFGxTR_EL2,
reusing the Fine Grained Traps infrastructure previously implemented.
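
The one subtlety (sketch below, not part of the patch) is that
HFGxTR_EL2 is really a pair of registers: reads are controlled by
HFGRTR_EL2 and writes by HFGWTR_EL2, so the handler has to pick the
right one before testing the trap bit (the patch additionally masks
out the RES0 bits, omitted here for brevity):

/* Sketch: select the FGT register matching the access direction */
static u64 example_hfgxtr(struct kvm_vcpu *vcpu, bool is_read)
{
	return __vcpu_sys_reg(vcpu, is_read ? HFGRTR_EL2 : HFGWTR_EL2);
}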

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/emulate-nested.c | 71 +++++++++++++++++++++++++++++++++
 1 file changed, 71 insertions(+)

diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 0da9d92ed921..0e34797515b6 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -938,6 +938,7 @@ static DEFINE_XARRAY(sr_forward_xa);
 
 enum fgt_group_id {
 	__NO_FGT_GROUP__,
+	HFGxTR_GROUP,
 
 	/* Must be last */
 	__NR_FGT_GROUP_IDS__
@@ -956,6 +957,69 @@ enum fgt_group_id {
 	}
 
 static const struct encoding_to_trap_config encoding_to_fgt[] __initconst = {
+	/* HFGRTR_EL2, HFGWTR_EL2 */
+	SR_FGT(SYS_TPIDR2_EL0,		HFGxTR, nTPIDR2_EL0, 0),
+	SR_FGT(SYS_SMPRI_EL1,		HFGxTR, nSMPRI_EL1, 0),
+	SR_FGT(SYS_ACCDATA_EL1,		HFGxTR, nACCDATA_EL1, 0),
+	SR_FGT(SYS_ERXADDR_EL1,		HFGxTR, ERXADDR_EL1, 1),
+	SR_FGT(SYS_ERXPFGCDN_EL1,	HFGxTR, ERXPFGCDN_EL1, 1),
+	SR_FGT(SYS_ERXPFGCTL_EL1,	HFGxTR, ERXPFGCTL_EL1, 1),
+	SR_FGT(SYS_ERXPFGF_EL1,		HFGxTR, ERXPFGF_EL1, 1),
+	SR_FGT(SYS_ERXMISC0_EL1,	HFGxTR, ERXMISCn_EL1, 1),
+	SR_FGT(SYS_ERXMISC1_EL1,	HFGxTR, ERXMISCn_EL1, 1),
+	SR_FGT(SYS_ERXMISC2_EL1,	HFGxTR, ERXMISCn_EL1, 1),
+	SR_FGT(SYS_ERXMISC3_EL1,	HFGxTR, ERXMISCn_EL1, 1),
+	SR_FGT(SYS_ERXSTATUS_EL1,	HFGxTR, ERXSTATUS_EL1, 1),
+	SR_FGT(SYS_ERXCTLR_EL1,		HFGxTR, ERXCTLR_EL1, 1),
+	SR_FGT(SYS_ERXFR_EL1,		HFGxTR, ERXFR_EL1, 1),
+	SR_FGT(SYS_ERRSELR_EL1,		HFGxTR, ERRSELR_EL1, 1),
+	SR_FGT(SYS_ERRIDR_EL1,		HFGxTR, ERRIDR_EL1, 1),
+	SR_FGT(SYS_ICC_IGRPEN0_EL1,	HFGxTR, ICC_IGRPENn_EL1, 1),
+	SR_FGT(SYS_ICC_IGRPEN1_EL1,	HFGxTR, ICC_IGRPENn_EL1, 1),
+	SR_FGT(SYS_VBAR_EL1,		HFGxTR, VBAR_EL1, 1),
+	SR_FGT(SYS_TTBR1_EL1,		HFGxTR, TTBR1_EL1, 1),
+	SR_FGT(SYS_TTBR0_EL1,		HFGxTR, TTBR0_EL1, 1),
+	SR_FGT(SYS_TPIDR_EL0,		HFGxTR, TPIDR_EL0, 1),
+	SR_FGT(SYS_TPIDRRO_EL0,		HFGxTR, TPIDRRO_EL0, 1),
+	SR_FGT(SYS_TPIDR_EL1,		HFGxTR, TPIDR_EL1, 1),
+	SR_FGT(SYS_TCR_EL1,		HFGxTR, TCR_EL1, 1),
+	SR_FGT(SYS_SCXTNUM_EL0,		HFGxTR, SCXTNUM_EL0, 1),
+	SR_FGT(SYS_SCXTNUM_EL1, 	HFGxTR, SCXTNUM_EL1, 1),
+	SR_FGT(SYS_SCTLR_EL1, 		HFGxTR, SCTLR_EL1, 1),
+	SR_FGT(SYS_REVIDR_EL1, 		HFGxTR, REVIDR_EL1, 1),
+	SR_FGT(SYS_PAR_EL1, 		HFGxTR, PAR_EL1, 1),
+	SR_FGT(SYS_MPIDR_EL1, 		HFGxTR, MPIDR_EL1, 1),
+	SR_FGT(SYS_MIDR_EL1, 		HFGxTR, MIDR_EL1, 1),
+	SR_FGT(SYS_MAIR_EL1, 		HFGxTR, MAIR_EL1, 1),
+	SR_FGT(SYS_LORSA_EL1, 		HFGxTR, LORSA_EL1, 1),
+	SR_FGT(SYS_LORN_EL1, 		HFGxTR, LORN_EL1, 1),
+	SR_FGT(SYS_LORID_EL1, 		HFGxTR, LORID_EL1, 1),
+	SR_FGT(SYS_LOREA_EL1, 		HFGxTR, LOREA_EL1, 1),
+	SR_FGT(SYS_LORC_EL1, 		HFGxTR, LORC_EL1, 1),
+	SR_FGT(SYS_ISR_EL1, 		HFGxTR, ISR_EL1, 1),
+	SR_FGT(SYS_FAR_EL1, 		HFGxTR, FAR_EL1, 1),
+	SR_FGT(SYS_ESR_EL1, 		HFGxTR, ESR_EL1, 1),
+	SR_FGT(SYS_DCZID_EL0, 		HFGxTR, DCZID_EL0, 1),
+	SR_FGT(SYS_CTR_EL0, 		HFGxTR, CTR_EL0, 1),
+	SR_FGT(SYS_CSSELR_EL1, 		HFGxTR, CSSELR_EL1, 1),
+	SR_FGT(SYS_CPACR_EL1, 		HFGxTR, CPACR_EL1, 1),
+	SR_FGT(SYS_CONTEXTIDR_EL1, 	HFGxTR, CONTEXTIDR_EL1, 1),
+	SR_FGT(SYS_CLIDR_EL1, 		HFGxTR, CLIDR_EL1, 1),
+	SR_FGT(SYS_CCSIDR_EL1, 		HFGxTR, CCSIDR_EL1, 1),
+	SR_FGT(SYS_APIBKEYLO_EL1, 	HFGxTR, APIBKey, 1),
+	SR_FGT(SYS_APIBKEYHI_EL1, 	HFGxTR, APIBKey, 1),
+	SR_FGT(SYS_APIAKEYLO_EL1, 	HFGxTR, APIAKey, 1),
+	SR_FGT(SYS_APIAKEYHI_EL1, 	HFGxTR, APIAKey, 1),
+	SR_FGT(SYS_APGAKEYLO_EL1, 	HFGxTR, APGAKey, 1),
+	SR_FGT(SYS_APGAKEYHI_EL1, 	HFGxTR, APGAKey, 1),
+	SR_FGT(SYS_APDBKEYLO_EL1, 	HFGxTR, APDBKey, 1),
+	SR_FGT(SYS_APDBKEYHI_EL1, 	HFGxTR, APDBKey, 1),
+	SR_FGT(SYS_APDAKEYLO_EL1, 	HFGxTR, APDAKey, 1),
+	SR_FGT(SYS_APDAKEYHI_EL1, 	HFGxTR, APDAKey, 1),
+	SR_FGT(SYS_AMAIR_EL1, 		HFGxTR, AMAIR_EL1, 1),
+	SR_FGT(SYS_AIDR_EL1, 		HFGxTR, AIDR_EL1, 1),
+	SR_FGT(SYS_AFSR1_EL1, 		HFGxTR, AFSR1_EL1, 1),
+	SR_FGT(SYS_AFSR0_EL1, 		HFGxTR, AFSR0_EL1, 1),
 };
 
 static union trap_config get_trap_config(u32 sysreg)
@@ -1160,6 +1224,13 @@ bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
 	case __NO_FGT_GROUP__:
 		break;
 
+	case HFGxTR_GROUP:
+		if (is_read)
+			val = sanitised_sys_reg(vcpu, HFGRTR_EL2);
+		else
+			val = sanitised_sys_reg(vcpu, HFGWTR_EL2);
+		break;
+
 	case __NR_FGT_GROUP_IDS__:
 		/* Something is really wrong, bail out */
 		WARN_ONCE(1, "__NR_FGT_GROUP_IDS__");
-- 
2.34.1



* [PATCH v4 21/28] KVM: arm64: nv: Add trap forwarding for HFGITR_EL2
From: Marc Zyngier @ 2023-08-15 18:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: Catalin Marinas, Eric Auger, Mark Brown, Mark Rutland,
	Will Deacon, Alexandru Elisei, Andre Przywara, Chase Conklin,
	Ganapatrao Kulkarni, Darren Hart, Miguel Luis, Jing Zhang,
	James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu

Similarly, implement the trap forwarding for instructions affected
by HFGITR_EL2.

Note that the TLBI*nXS instructions should be affected by HCRX_EL2,
which will be dealt with down the line. Also, ERET* and SVC traps
are handled separately.
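
Note that many instructions share a single trap bit: the table below
maps, for instance, all of DC CVAC/CGVAC/CGDVAC to DCCVAC. A small
standalone sketch of that many-to-one lookup, with invented encodings
and a plain array standing in for the kernel's xarray:

#include <stdint.h>
#include <stdio.h>

/* Invented instruction encodings, not the real sysreg values */
enum { OP_DC_CVAC = 0x10, OP_DC_CGVAC = 0x11, OP_DC_CGDVAC = 0x12 };
enum { HFGITR_DCCVAC_BIT = 3 };	/* assumed bit position */

struct insn_to_bit {
	uint32_t encoding;
	unsigned int bit;
};

static const struct insn_to_bit table[] = {
	{ OP_DC_CVAC,	HFGITR_DCCVAC_BIT },
	{ OP_DC_CGVAC,	HFGITR_DCCVAC_BIT },
	{ OP_DC_CGDVAC,	HFGITR_DCCVAC_BIT },
};

static int bit_for(uint32_t enc)
{
	for (unsigned int i = 0; i < sizeof(table) / sizeof(table[0]); i++)
		if (table[i].encoding == enc)
			return table[i].bit;
	return -1;	/* no FGT control for this instruction */
}

int main(void)
{
	printf("DC CGDVAC -> HFGITR bit %d\n", bit_for(OP_DC_CGDVAC));
	return 0;
}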

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_arm.h |   4 ++
 arch/arm64/kvm/emulate-nested.c  | 109 +++++++++++++++++++++++++++++++
 2 files changed, 113 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 85908aa18908..809bc86acefd 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -354,6 +354,10 @@
 #define __HFGWTR_EL2_MASK	GENMASK(49, 0)
 #define __HFGWTR_EL2_nMASK	(GENMASK(55, 54) | BIT(50))
 
+#define __HFGITR_EL2_RES0	GENMASK(63, 57)
+#define __HFGITR_EL2_MASK	GENMASK(54, 0)
+#define __HFGITR_EL2_nMASK	GENMASK(56, 55)
+
 /* Hyp Prefetch Fault Address Register (HPFAR/HDFAR) */
 #define HPFAR_MASK	(~UL(0xf))
 /*
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 0e34797515b6..a1a7792db412 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -939,6 +939,7 @@ static DEFINE_XARRAY(sr_forward_xa);
 enum fgt_group_id {
 	__NO_FGT_GROUP__,
 	HFGxTR_GROUP,
+	HFGITR_GROUP,
 
 	/* Must be last */
 	__NR_FGT_GROUP_IDS__
@@ -1020,6 +1021,110 @@ static const struct encoding_to_trap_config encoding_to_fgt[] __initconst = {
 	SR_FGT(SYS_AIDR_EL1, 		HFGxTR, AIDR_EL1, 1),
 	SR_FGT(SYS_AFSR1_EL1, 		HFGxTR, AFSR1_EL1, 1),
 	SR_FGT(SYS_AFSR0_EL1, 		HFGxTR, AFSR0_EL1, 1),
+	/* HFGITR_EL2 */
+	SR_FGT(OP_BRB_IALL, 		HFGITR, nBRBIALL, 0),
+	SR_FGT(OP_BRB_INJ, 		HFGITR, nBRBINJ, 0),
+	SR_FGT(SYS_DC_CVAC, 		HFGITR, DCCVAC, 1),
+	SR_FGT(SYS_DC_CGVAC, 		HFGITR, DCCVAC, 1),
+	SR_FGT(SYS_DC_CGDVAC, 		HFGITR, DCCVAC, 1),
+	SR_FGT(OP_CPP_RCTX, 		HFGITR, CPPRCTX, 1),
+	SR_FGT(OP_DVP_RCTX, 		HFGITR, DVPRCTX, 1),
+	SR_FGT(OP_CFP_RCTX, 		HFGITR, CFPRCTX, 1),
+	SR_FGT(OP_TLBI_VAALE1, 		HFGITR, TLBIVAALE1, 1),
+	SR_FGT(OP_TLBI_VALE1, 		HFGITR, TLBIVALE1, 1),
+	SR_FGT(OP_TLBI_VAAE1, 		HFGITR, TLBIVAAE1, 1),
+	SR_FGT(OP_TLBI_ASIDE1, 		HFGITR, TLBIASIDE1, 1),
+	SR_FGT(OP_TLBI_VAE1, 		HFGITR, TLBIVAE1, 1),
+	SR_FGT(OP_TLBI_VMALLE1, 	HFGITR, TLBIVMALLE1, 1),
+	SR_FGT(OP_TLBI_RVAALE1, 	HFGITR, TLBIRVAALE1, 1),
+	SR_FGT(OP_TLBI_RVALE1, 		HFGITR, TLBIRVALE1, 1),
+	SR_FGT(OP_TLBI_RVAAE1, 		HFGITR, TLBIRVAAE1, 1),
+	SR_FGT(OP_TLBI_RVAE1, 		HFGITR, TLBIRVAE1, 1),
+	SR_FGT(OP_TLBI_RVAALE1IS, 	HFGITR, TLBIRVAALE1IS, 1),
+	SR_FGT(OP_TLBI_RVALE1IS, 	HFGITR, TLBIRVALE1IS, 1),
+	SR_FGT(OP_TLBI_RVAAE1IS, 	HFGITR, TLBIRVAAE1IS, 1),
+	SR_FGT(OP_TLBI_RVAE1IS, 	HFGITR, TLBIRVAE1IS, 1),
+	SR_FGT(OP_TLBI_VAALE1IS, 	HFGITR, TLBIVAALE1IS, 1),
+	SR_FGT(OP_TLBI_VALE1IS, 	HFGITR, TLBIVALE1IS, 1),
+	SR_FGT(OP_TLBI_VAAE1IS, 	HFGITR, TLBIVAAE1IS, 1),
+	SR_FGT(OP_TLBI_ASIDE1IS, 	HFGITR, TLBIASIDE1IS, 1),
+	SR_FGT(OP_TLBI_VAE1IS, 		HFGITR, TLBIVAE1IS, 1),
+	SR_FGT(OP_TLBI_VMALLE1IS, 	HFGITR, TLBIVMALLE1IS, 1),
+	SR_FGT(OP_TLBI_RVAALE1OS, 	HFGITR, TLBIRVAALE1OS, 1),
+	SR_FGT(OP_TLBI_RVALE1OS, 	HFGITR, TLBIRVALE1OS, 1),
+	SR_FGT(OP_TLBI_RVAAE1OS, 	HFGITR, TLBIRVAAE1OS, 1),
+	SR_FGT(OP_TLBI_RVAE1OS, 	HFGITR, TLBIRVAE1OS, 1),
+	SR_FGT(OP_TLBI_VAALE1OS, 	HFGITR, TLBIVAALE1OS, 1),
+	SR_FGT(OP_TLBI_VALE1OS, 	HFGITR, TLBIVALE1OS, 1),
+	SR_FGT(OP_TLBI_VAAE1OS, 	HFGITR, TLBIVAAE1OS, 1),
+	SR_FGT(OP_TLBI_ASIDE1OS, 	HFGITR, TLBIASIDE1OS, 1),
+	SR_FGT(OP_TLBI_VAE1OS, 		HFGITR, TLBIVAE1OS, 1),
+	SR_FGT(OP_TLBI_VMALLE1OS, 	HFGITR, TLBIVMALLE1OS, 1),
+	/* FIXME: nXS variants must be checked against HCRX_EL2.FGTnXS */
+	SR_FGT(OP_TLBI_VAALE1NXS, 	HFGITR, TLBIVAALE1, 1),
+	SR_FGT(OP_TLBI_VALE1NXS, 	HFGITR, TLBIVALE1, 1),
+	SR_FGT(OP_TLBI_VAAE1NXS, 	HFGITR, TLBIVAAE1, 1),
+	SR_FGT(OP_TLBI_ASIDE1NXS, 	HFGITR, TLBIASIDE1, 1),
+	SR_FGT(OP_TLBI_VAE1NXS, 	HFGITR, TLBIVAE1, 1),
+	SR_FGT(OP_TLBI_VMALLE1NXS, 	HFGITR, TLBIVMALLE1, 1),
+	SR_FGT(OP_TLBI_RVAALE1NXS, 	HFGITR, TLBIRVAALE1, 1),
+	SR_FGT(OP_TLBI_RVALE1NXS, 	HFGITR, TLBIRVALE1, 1),
+	SR_FGT(OP_TLBI_RVAAE1NXS, 	HFGITR, TLBIRVAAE1, 1),
+	SR_FGT(OP_TLBI_RVAE1NXS, 	HFGITR, TLBIRVAE1, 1),
+	SR_FGT(OP_TLBI_RVAALE1ISNXS, 	HFGITR, TLBIRVAALE1IS, 1),
+	SR_FGT(OP_TLBI_RVALE1ISNXS, 	HFGITR, TLBIRVALE1IS, 1),
+	SR_FGT(OP_TLBI_RVAAE1ISNXS, 	HFGITR, TLBIRVAAE1IS, 1),
+	SR_FGT(OP_TLBI_RVAE1ISNXS, 	HFGITR, TLBIRVAE1IS, 1),
+	SR_FGT(OP_TLBI_VAALE1ISNXS, 	HFGITR, TLBIVAALE1IS, 1),
+	SR_FGT(OP_TLBI_VALE1ISNXS, 	HFGITR, TLBIVALE1IS, 1),
+	SR_FGT(OP_TLBI_VAAE1ISNXS, 	HFGITR, TLBIVAAE1IS, 1),
+	SR_FGT(OP_TLBI_ASIDE1ISNXS, 	HFGITR, TLBIASIDE1IS, 1),
+	SR_FGT(OP_TLBI_VAE1ISNXS, 	HFGITR, TLBIVAE1IS, 1),
+	SR_FGT(OP_TLBI_VMALLE1ISNXS, 	HFGITR, TLBIVMALLE1IS, 1),
+	SR_FGT(OP_TLBI_RVAALE1OSNXS, 	HFGITR, TLBIRVAALE1OS, 1),
+	SR_FGT(OP_TLBI_RVALE1OSNXS, 	HFGITR, TLBIRVALE1OS, 1),
+	SR_FGT(OP_TLBI_RVAAE1OSNXS, 	HFGITR, TLBIRVAAE1OS, 1),
+	SR_FGT(OP_TLBI_RVAE1OSNXS, 	HFGITR, TLBIRVAE1OS, 1),
+	SR_FGT(OP_TLBI_VAALE1OSNXS, 	HFGITR, TLBIVAALE1OS, 1),
+	SR_FGT(OP_TLBI_VALE1OSNXS, 	HFGITR, TLBIVALE1OS, 1),
+	SR_FGT(OP_TLBI_VAAE1OSNXS, 	HFGITR, TLBIVAAE1OS, 1),
+	SR_FGT(OP_TLBI_ASIDE1OSNXS, 	HFGITR, TLBIASIDE1OS, 1),
+	SR_FGT(OP_TLBI_VAE1OSNXS, 	HFGITR, TLBIVAE1OS, 1),
+	SR_FGT(OP_TLBI_VMALLE1OSNXS, 	HFGITR, TLBIVMALLE1OS, 1),
+	SR_FGT(OP_AT_S1E1WP, 		HFGITR, ATS1E1WP, 1),
+	SR_FGT(OP_AT_S1E1RP, 		HFGITR, ATS1E1RP, 1),
+	SR_FGT(OP_AT_S1E0W, 		HFGITR, ATS1E0W, 1),
+	SR_FGT(OP_AT_S1E0R, 		HFGITR, ATS1E0R, 1),
+	SR_FGT(OP_AT_S1E1W, 		HFGITR, ATS1E1W, 1),
+	SR_FGT(OP_AT_S1E1R, 		HFGITR, ATS1E1R, 1),
+	SR_FGT(SYS_DC_ZVA, 		HFGITR, DCZVA, 1),
+	SR_FGT(SYS_DC_GVA, 		HFGITR, DCZVA, 1),
+	SR_FGT(SYS_DC_GZVA, 		HFGITR, DCZVA, 1),
+	SR_FGT(SYS_DC_CIVAC, 		HFGITR, DCCIVAC, 1),
+	SR_FGT(SYS_DC_CIGVAC, 		HFGITR, DCCIVAC, 1),
+	SR_FGT(SYS_DC_CIGDVAC, 		HFGITR, DCCIVAC, 1),
+	SR_FGT(SYS_DC_CVADP, 		HFGITR, DCCVADP, 1),
+	SR_FGT(SYS_DC_CGVADP, 		HFGITR, DCCVADP, 1),
+	SR_FGT(SYS_DC_CGDVADP, 		HFGITR, DCCVADP, 1),
+	SR_FGT(SYS_DC_CVAP, 		HFGITR, DCCVAP, 1),
+	SR_FGT(SYS_DC_CGVAP, 		HFGITR, DCCVAP, 1),
+	SR_FGT(SYS_DC_CGDVAP, 		HFGITR, DCCVAP, 1),
+	SR_FGT(SYS_DC_CVAU, 		HFGITR, DCCVAU, 1),
+	SR_FGT(SYS_DC_CISW, 		HFGITR, DCCISW, 1),
+	SR_FGT(SYS_DC_CIGSW, 		HFGITR, DCCISW, 1),
+	SR_FGT(SYS_DC_CIGDSW, 		HFGITR, DCCISW, 1),
+	SR_FGT(SYS_DC_CSW, 		HFGITR, DCCSW, 1),
+	SR_FGT(SYS_DC_CGSW, 		HFGITR, DCCSW, 1),
+	SR_FGT(SYS_DC_CGDSW, 		HFGITR, DCCSW, 1),
+	SR_FGT(SYS_DC_ISW, 		HFGITR, DCISW, 1),
+	SR_FGT(SYS_DC_IGSW, 		HFGITR, DCISW, 1),
+	SR_FGT(SYS_DC_IGDSW, 		HFGITR, DCISW, 1),
+	SR_FGT(SYS_DC_IVAC, 		HFGITR, DCIVAC, 1),
+	SR_FGT(SYS_DC_IGVAC, 		HFGITR, DCIVAC, 1),
+	SR_FGT(SYS_DC_IGDVAC, 		HFGITR, DCIVAC, 1),
+	SR_FGT(SYS_IC_IVAU, 		HFGITR, ICIVAU, 1),
+	SR_FGT(SYS_IC_IALLU, 		HFGITR, ICIALLU, 1),
+	SR_FGT(SYS_IC_IALLUIS, 		HFGITR, ICIALLUIS, 1),
 };
 
 static union trap_config get_trap_config(u32 sysreg)
@@ -1231,6 +1336,10 @@ bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
 			val = sanitised_sys_reg(vcpu, HFGWTR_EL2);
 		break;
 
+	case HFGITR_GROUP:
+		val = sanitised_sys_reg(vcpu, HFGITR_EL2);
+		break;
+
 	case __NR_FGT_GROUP_IDS__:
 		/* Something is really wrong, bail out */
 		WARN_ONCE(1, "__NR_FGT_GROUP_IDS__");
-- 
2.34.1



* [PATCH v4 22/28] KVM: arm64: nv: Add trap forwarding for HDFGxTR_EL2
From: Marc Zyngier @ 2023-08-15 18:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: Catalin Marinas, Eric Auger, Mark Brown, Mark Rutland,
	Will Deacon, Alexandru Elisei, Andre Przywara, Chase Conklin,
	Ganapatrao Kulkarni, Darren Hart, Miguel Luis, Jing Zhang,
	James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu

... and finally, the Debug version of FGT, with its *enormous*
list of trapped registers.
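
The interesting wrinkle is the read/write split: the same trap entry
is checked against HDFGRTR_EL2 on a read and HDFGWTR_EL2 on a write,
after the RES0 bits have been masked out. A standalone model of that
selection, using made-up masks and bit positions:

#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

/* Made-up RES0 masks standing in for __HDFG{R,W}TR_EL2_RES0 */
#define RTR_RES0	((1ULL << 49) | (1ULL << 42))
#define WTR_RES0	((1ULL << 63) | (1ULL << 51))

static uint64_t sanitise(uint64_t reg, uint64_t res0)
{
	return reg & ~res0;	/* RES0 bits can never cause a trap */
}

static bool should_forward(bool is_read, uint64_t rtr, uint64_t wtr,
			   unsigned int bit, unsigned int pol)
{
	uint64_t val;

	/* same entry, different shadow register per access type */
	if (is_read)
		val = sanitise(rtr, RTR_RES0);
	else
		val = sanitise(wtr, WTR_RES0);

	return ((val >> bit) & 1) == pol;
}

int main(void)
{
	/* write-only trap: bit 5 set in the write register only */
	uint64_t rtr = 0, wtr = 1ULL << 5;

	printf("read:  %d\n", should_forward(true, rtr, wtr, 5, 1));
	printf("write: %d\n", should_forward(false, rtr, wtr, 5, 1));
	return 0;
}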

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_arm.h |  11 +
 arch/arm64/kvm/emulate-nested.c  | 474 +++++++++++++++++++++++++++++++
 2 files changed, 485 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 809bc86acefd..d229f238c3b6 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -358,6 +358,17 @@
 #define __HFGITR_EL2_MASK	GENMASK(54, 0)
 #define __HFGITR_EL2_nMASK	GENMASK(56, 55)
 
+#define __HDFGRTR_EL2_RES0	(BIT(49) | BIT(42) | GENMASK(39, 38) |	\
+				 GENMASK(21, 20) | BIT(8))
+#define __HDFGRTR_EL2_MASK	~__HDFGRTR_EL2_nMASK
+#define __HDFGRTR_EL2_nMASK	GENMASK(62, 59)
+
+#define __HDFGWTR_EL2_RES0	(BIT(63) | GENMASK(59, 58) | BIT(51) | BIT(47) | \
+				 BIT(43) | GENMASK(40, 38) | BIT(34) | BIT(30) | \
+				 BIT(22) | BIT(9) | BIT(6))
+#define __HDFGWTR_EL2_MASK	~__HDFGWTR_EL2_nMASK
+#define __HDFGWTR_EL2_nMASK	GENMASK(62, 60)
+
 /* Hyp Prefetch Fault Address Register (HPFAR/HDFAR) */
 #define HPFAR_MASK	(~UL(0xf))
 /*
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index a1a7792db412..c9662f9a345e 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -939,6 +939,8 @@ static DEFINE_XARRAY(sr_forward_xa);
 enum fgt_group_id {
 	__NO_FGT_GROUP__,
 	HFGxTR_GROUP,
+	HDFGRTR_GROUP,
+	HDFGWTR_GROUP,
 	HFGITR_GROUP,
 
 	/* Must be last */
@@ -1125,6 +1127,470 @@ static const struct encoding_to_trap_config encoding_to_fgt[] __initconst = {
 	SR_FGT(SYS_IC_IVAU, 		HFGITR, ICIVAU, 1),
 	SR_FGT(SYS_IC_IALLU, 		HFGITR, ICIALLU, 1),
 	SR_FGT(SYS_IC_IALLUIS, 		HFGITR, ICIALLUIS, 1),
+	/* HDFGRTR_EL2 */
+	SR_FGT(SYS_PMBIDR_EL1, 		HDFGRTR, PMBIDR_EL1, 1),
+	SR_FGT(SYS_PMSNEVFR_EL1, 	HDFGRTR, nPMSNEVFR_EL1, 0),
+	SR_FGT(SYS_BRBINF_EL1(0), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBINF_EL1(1), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBINF_EL1(2), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBINF_EL1(3), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBINF_EL1(4), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBINF_EL1(5), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBINF_EL1(6), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBINF_EL1(7), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBINF_EL1(8), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBINF_EL1(9), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBINF_EL1(10), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBINF_EL1(11), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBINF_EL1(12), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBINF_EL1(13), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBINF_EL1(14), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBINF_EL1(15), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBINF_EL1(16), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBINF_EL1(17), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBINF_EL1(18), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBINF_EL1(19), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBINF_EL1(20), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBINF_EL1(21), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBINF_EL1(22), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBINF_EL1(23), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBINF_EL1(24), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBINF_EL1(25), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBINF_EL1(26), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBINF_EL1(27), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBINF_EL1(28), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBINF_EL1(29), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBINF_EL1(30), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBINF_EL1(31), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBINFINJ_EL1, 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBSRC_EL1(0), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBSRC_EL1(1), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBSRC_EL1(2), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBSRC_EL1(3), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBSRC_EL1(4), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBSRC_EL1(5), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBSRC_EL1(6), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBSRC_EL1(7), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBSRC_EL1(8), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBSRC_EL1(9), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBSRC_EL1(10), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBSRC_EL1(11), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBSRC_EL1(12), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBSRC_EL1(13), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBSRC_EL1(14), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBSRC_EL1(15), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBSRC_EL1(16), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBSRC_EL1(17), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBSRC_EL1(18), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBSRC_EL1(19), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBSRC_EL1(20), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBSRC_EL1(21), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBSRC_EL1(22), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBSRC_EL1(23), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBSRC_EL1(24), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBSRC_EL1(25), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBSRC_EL1(26), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBSRC_EL1(27), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBSRC_EL1(28), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBSRC_EL1(29), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBSRC_EL1(30), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBSRC_EL1(31), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBSRCINJ_EL1, 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBTGT_EL1(0), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBTGT_EL1(1), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBTGT_EL1(2), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBTGT_EL1(3), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBTGT_EL1(4), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBTGT_EL1(5), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBTGT_EL1(6), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBTGT_EL1(7), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBTGT_EL1(8), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBTGT_EL1(9), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBTGT_EL1(10), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBTGT_EL1(11), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBTGT_EL1(12), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBTGT_EL1(13), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBTGT_EL1(14), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBTGT_EL1(15), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBTGT_EL1(16), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBTGT_EL1(17), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBTGT_EL1(18), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBTGT_EL1(19), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBTGT_EL1(20), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBTGT_EL1(21), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBTGT_EL1(22), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBTGT_EL1(23), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBTGT_EL1(24), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBTGT_EL1(25), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBTGT_EL1(26), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBTGT_EL1(27), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBTGT_EL1(28), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBTGT_EL1(29), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBTGT_EL1(30), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBTGT_EL1(31), 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBTGTINJ_EL1, 	HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBTS_EL1, 		HDFGRTR, nBRBDATA, 0),
+	SR_FGT(SYS_BRBCR_EL1, 		HDFGRTR, nBRBCTL, 0),
+	SR_FGT(SYS_BRBFCR_EL1, 		HDFGRTR, nBRBCTL, 0),
+	SR_FGT(SYS_BRBIDR0_EL1, 	HDFGRTR, nBRBIDR, 0),
+	SR_FGT(SYS_PMCEID0_EL0, 	HDFGRTR, PMCEIDn_EL0, 1),
+	SR_FGT(SYS_PMCEID1_EL0, 	HDFGRTR, PMCEIDn_EL0, 1),
+	SR_FGT(SYS_PMUSERENR_EL0, 	HDFGRTR, PMUSERENR_EL0, 1),
+	SR_FGT(SYS_TRBTRG_EL1, 		HDFGRTR, TRBTRG_EL1, 1),
+	SR_FGT(SYS_TRBSR_EL1, 		HDFGRTR, TRBSR_EL1, 1),
+	SR_FGT(SYS_TRBPTR_EL1, 		HDFGRTR, TRBPTR_EL1, 1),
+	SR_FGT(SYS_TRBMAR_EL1, 		HDFGRTR, TRBMAR_EL1, 1),
+	SR_FGT(SYS_TRBLIMITR_EL1, 	HDFGRTR, TRBLIMITR_EL1, 1),
+	SR_FGT(SYS_TRBIDR_EL1, 		HDFGRTR, TRBIDR_EL1, 1),
+	SR_FGT(SYS_TRBBASER_EL1, 	HDFGRTR, TRBBASER_EL1, 1),
+	SR_FGT(SYS_TRCVICTLR, 		HDFGRTR, TRCVICTLR, 1),
+	SR_FGT(SYS_TRCSTATR, 		HDFGRTR, TRCSTATR, 1),
+	SR_FGT(SYS_TRCSSCSR(0), 	HDFGRTR, TRCSSCSRn, 1),
+	SR_FGT(SYS_TRCSSCSR(1), 	HDFGRTR, TRCSSCSRn, 1),
+	SR_FGT(SYS_TRCSSCSR(2), 	HDFGRTR, TRCSSCSRn, 1),
+	SR_FGT(SYS_TRCSSCSR(3), 	HDFGRTR, TRCSSCSRn, 1),
+	SR_FGT(SYS_TRCSSCSR(4), 	HDFGRTR, TRCSSCSRn, 1),
+	SR_FGT(SYS_TRCSSCSR(5), 	HDFGRTR, TRCSSCSRn, 1),
+	SR_FGT(SYS_TRCSSCSR(6), 	HDFGRTR, TRCSSCSRn, 1),
+	SR_FGT(SYS_TRCSSCSR(7), 	HDFGRTR, TRCSSCSRn, 1),
+	SR_FGT(SYS_TRCSEQSTR, 		HDFGRTR, TRCSEQSTR, 1),
+	SR_FGT(SYS_TRCPRGCTLR, 		HDFGRTR, TRCPRGCTLR, 1),
+	SR_FGT(SYS_TRCOSLSR, 		HDFGRTR, TRCOSLSR, 1),
+	SR_FGT(SYS_TRCIMSPEC(0), 	HDFGRTR, TRCIMSPECn, 1),
+	SR_FGT(SYS_TRCIMSPEC(1), 	HDFGRTR, TRCIMSPECn, 1),
+	SR_FGT(SYS_TRCIMSPEC(2), 	HDFGRTR, TRCIMSPECn, 1),
+	SR_FGT(SYS_TRCIMSPEC(3), 	HDFGRTR, TRCIMSPECn, 1),
+	SR_FGT(SYS_TRCIMSPEC(4), 	HDFGRTR, TRCIMSPECn, 1),
+	SR_FGT(SYS_TRCIMSPEC(5), 	HDFGRTR, TRCIMSPECn, 1),
+	SR_FGT(SYS_TRCIMSPEC(6), 	HDFGRTR, TRCIMSPECn, 1),
+	SR_FGT(SYS_TRCIMSPEC(7), 	HDFGRTR, TRCIMSPECn, 1),
+	SR_FGT(SYS_TRCDEVARCH, 		HDFGRTR, TRCID, 1),
+	SR_FGT(SYS_TRCDEVID, 		HDFGRTR, TRCID, 1),
+	SR_FGT(SYS_TRCIDR0, 		HDFGRTR, TRCID, 1),
+	SR_FGT(SYS_TRCIDR1, 		HDFGRTR, TRCID, 1),
+	SR_FGT(SYS_TRCIDR2, 		HDFGRTR, TRCID, 1),
+	SR_FGT(SYS_TRCIDR3, 		HDFGRTR, TRCID, 1),
+	SR_FGT(SYS_TRCIDR4, 		HDFGRTR, TRCID, 1),
+	SR_FGT(SYS_TRCIDR5, 		HDFGRTR, TRCID, 1),
+	SR_FGT(SYS_TRCIDR6, 		HDFGRTR, TRCID, 1),
+	SR_FGT(SYS_TRCIDR7, 		HDFGRTR, TRCID, 1),
+	SR_FGT(SYS_TRCIDR8, 		HDFGRTR, TRCID, 1),
+	SR_FGT(SYS_TRCIDR9, 		HDFGRTR, TRCID, 1),
+	SR_FGT(SYS_TRCIDR10, 		HDFGRTR, TRCID, 1),
+	SR_FGT(SYS_TRCIDR11, 		HDFGRTR, TRCID, 1),
+	SR_FGT(SYS_TRCIDR12, 		HDFGRTR, TRCID, 1),
+	SR_FGT(SYS_TRCIDR13, 		HDFGRTR, TRCID, 1),
+	SR_FGT(SYS_TRCCNTVR(0), 	HDFGRTR, TRCCNTVRn, 1),
+	SR_FGT(SYS_TRCCNTVR(1), 	HDFGRTR, TRCCNTVRn, 1),
+	SR_FGT(SYS_TRCCNTVR(2), 	HDFGRTR, TRCCNTVRn, 1),
+	SR_FGT(SYS_TRCCNTVR(3), 	HDFGRTR, TRCCNTVRn, 1),
+	SR_FGT(SYS_TRCCLAIMCLR, 	HDFGRTR, TRCCLAIM, 1),
+	SR_FGT(SYS_TRCCLAIMSET, 	HDFGRTR, TRCCLAIM, 1),
+	SR_FGT(SYS_TRCAUXCTLR, 		HDFGRTR, TRCAUXCTLR, 1),
+	SR_FGT(SYS_TRCAUTHSTATUS, 	HDFGRTR, TRCAUTHSTATUS, 1),
+	SR_FGT(SYS_TRCACATR(0), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCACATR(1), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCACATR(2), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCACATR(3), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCACATR(4), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCACATR(5), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCACATR(6), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCACATR(7), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCACATR(8), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCACATR(9), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCACATR(10), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCACATR(11), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCACATR(12), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCACATR(13), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCACATR(14), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCACATR(15), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCACVR(0), 		HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCACVR(1), 		HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCACVR(2), 		HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCACVR(3), 		HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCACVR(4), 		HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCACVR(5), 		HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCACVR(6), 		HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCACVR(7), 		HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCACVR(8), 		HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCACVR(9), 		HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCACVR(10), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCACVR(11), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCACVR(12), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCACVR(13), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCACVR(14), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCACVR(15), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCBBCTLR, 		HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCCCCTLR, 		HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCCIDCCTLR0, 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCCIDCCTLR1, 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCCIDCVR(0), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCCIDCVR(1), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCCIDCVR(2), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCCIDCVR(3), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCCIDCVR(4), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCCIDCVR(5), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCCIDCVR(6), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCCIDCVR(7), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCCNTCTLR(0), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCCNTCTLR(1), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCCNTCTLR(2), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCCNTCTLR(3), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCCNTRLDVR(0), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCCNTRLDVR(1), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCCNTRLDVR(2), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCCNTRLDVR(3), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCCONFIGR, 		HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCEVENTCTL0R, 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCEVENTCTL1R, 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCEXTINSELR(0), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCEXTINSELR(1), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCEXTINSELR(2), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCEXTINSELR(3), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCQCTLR, 		HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCRSCTLR(2), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCRSCTLR(3), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCRSCTLR(4), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCRSCTLR(5), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCRSCTLR(6), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCRSCTLR(7), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCRSCTLR(8), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCRSCTLR(9), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCRSCTLR(10), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCRSCTLR(11), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCRSCTLR(12), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCRSCTLR(13), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCRSCTLR(14), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCRSCTLR(15), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCRSCTLR(16), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCRSCTLR(17), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCRSCTLR(18), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCRSCTLR(19), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCRSCTLR(20), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCRSCTLR(21), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCRSCTLR(22), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCRSCTLR(23), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCRSCTLR(24), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCRSCTLR(25), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCRSCTLR(26), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCRSCTLR(27), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCRSCTLR(28), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCRSCTLR(29), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCRSCTLR(30), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCRSCTLR(31), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCRSR, 		HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCSEQEVR(0), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCSEQEVR(1), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCSEQEVR(2), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCSEQRSTEVR, 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCSSCCR(0), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCSSCCR(1), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCSSCCR(2), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCSSCCR(3), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCSSCCR(4), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCSSCCR(5), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCSSCCR(6), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCSSCCR(7), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCSSPCICR(0), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCSSPCICR(1), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCSSPCICR(2), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCSSPCICR(3), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCSSPCICR(4), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCSSPCICR(5), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCSSPCICR(6), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCSSPCICR(7), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCSTALLCTLR, 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCSYNCPR, 		HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCTRACEIDR, 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCTSCTLR, 		HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCVIIECTLR, 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCVIPCSSCTLR, 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCVISSCTLR, 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCVMIDCCTLR0, 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCVMIDCCTLR1, 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCVMIDCVR(0), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCVMIDCVR(1), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCVMIDCVR(2), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCVMIDCVR(3), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCVMIDCVR(4), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCVMIDCVR(5), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCVMIDCVR(6), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_TRCVMIDCVR(7), 	HDFGRTR, TRC, 1),
+	SR_FGT(SYS_PMSLATFR_EL1, 	HDFGRTR, PMSLATFR_EL1, 1),
+	SR_FGT(SYS_PMSIRR_EL1, 		HDFGRTR, PMSIRR_EL1, 1),
+	SR_FGT(SYS_PMSIDR_EL1, 		HDFGRTR, PMSIDR_EL1, 1),
+	SR_FGT(SYS_PMSICR_EL1, 		HDFGRTR, PMSICR_EL1, 1),
+	SR_FGT(SYS_PMSFCR_EL1, 		HDFGRTR, PMSFCR_EL1, 1),
+	SR_FGT(SYS_PMSEVFR_EL1, 	HDFGRTR, PMSEVFR_EL1, 1),
+	SR_FGT(SYS_PMSCR_EL1, 		HDFGRTR, PMSCR_EL1, 1),
+	SR_FGT(SYS_PMBSR_EL1, 		HDFGRTR, PMBSR_EL1, 1),
+	SR_FGT(SYS_PMBPTR_EL1, 		HDFGRTR, PMBPTR_EL1, 1),
+	SR_FGT(SYS_PMBLIMITR_EL1, 	HDFGRTR, PMBLIMITR_EL1, 1),
+	SR_FGT(SYS_PMMIR_EL1, 		HDFGRTR, PMMIR_EL1, 1),
+	SR_FGT(SYS_PMSELR_EL0, 		HDFGRTR, PMSELR_EL0, 1),
+	SR_FGT(SYS_PMOVSCLR_EL0, 	HDFGRTR, PMOVS, 1),
+	SR_FGT(SYS_PMOVSSET_EL0, 	HDFGRTR, PMOVS, 1),
+	SR_FGT(SYS_PMINTENCLR_EL1, 	HDFGRTR, PMINTEN, 1),
+	SR_FGT(SYS_PMINTENSET_EL1, 	HDFGRTR, PMINTEN, 1),
+	SR_FGT(SYS_PMCNTENCLR_EL0, 	HDFGRTR, PMCNTEN, 1),
+	SR_FGT(SYS_PMCNTENSET_EL0, 	HDFGRTR, PMCNTEN, 1),
+	SR_FGT(SYS_PMCCNTR_EL0, 	HDFGRTR, PMCCNTR_EL0, 1),
+	SR_FGT(SYS_PMCCFILTR_EL0, 	HDFGRTR, PMCCFILTR_EL0, 1),
+	SR_FGT(SYS_PMEVTYPERn_EL0(0), 	HDFGRTR, PMEVTYPERn_EL0, 1),
+	SR_FGT(SYS_PMEVTYPERn_EL0(1), 	HDFGRTR, PMEVTYPERn_EL0, 1),
+	SR_FGT(SYS_PMEVTYPERn_EL0(2), 	HDFGRTR, PMEVTYPERn_EL0, 1),
+	SR_FGT(SYS_PMEVTYPERn_EL0(3), 	HDFGRTR, PMEVTYPERn_EL0, 1),
+	SR_FGT(SYS_PMEVTYPERn_EL0(4), 	HDFGRTR, PMEVTYPERn_EL0, 1),
+	SR_FGT(SYS_PMEVTYPERn_EL0(5), 	HDFGRTR, PMEVTYPERn_EL0, 1),
+	SR_FGT(SYS_PMEVTYPERn_EL0(6), 	HDFGRTR, PMEVTYPERn_EL0, 1),
+	SR_FGT(SYS_PMEVTYPERn_EL0(7), 	HDFGRTR, PMEVTYPERn_EL0, 1),
+	SR_FGT(SYS_PMEVTYPERn_EL0(8), 	HDFGRTR, PMEVTYPERn_EL0, 1),
+	SR_FGT(SYS_PMEVTYPERn_EL0(9), 	HDFGRTR, PMEVTYPERn_EL0, 1),
+	SR_FGT(SYS_PMEVTYPERn_EL0(10), 	HDFGRTR, PMEVTYPERn_EL0, 1),
+	SR_FGT(SYS_PMEVTYPERn_EL0(11), 	HDFGRTR, PMEVTYPERn_EL0, 1),
+	SR_FGT(SYS_PMEVTYPERn_EL0(12), 	HDFGRTR, PMEVTYPERn_EL0, 1),
+	SR_FGT(SYS_PMEVTYPERn_EL0(13), 	HDFGRTR, PMEVTYPERn_EL0, 1),
+	SR_FGT(SYS_PMEVTYPERn_EL0(14), 	HDFGRTR, PMEVTYPERn_EL0, 1),
+	SR_FGT(SYS_PMEVTYPERn_EL0(15), 	HDFGRTR, PMEVTYPERn_EL0, 1),
+	SR_FGT(SYS_PMEVTYPERn_EL0(16), 	HDFGRTR, PMEVTYPERn_EL0, 1),
+	SR_FGT(SYS_PMEVTYPERn_EL0(17), 	HDFGRTR, PMEVTYPERn_EL0, 1),
+	SR_FGT(SYS_PMEVTYPERn_EL0(18), 	HDFGRTR, PMEVTYPERn_EL0, 1),
+	SR_FGT(SYS_PMEVTYPERn_EL0(19), 	HDFGRTR, PMEVTYPERn_EL0, 1),
+	SR_FGT(SYS_PMEVTYPERn_EL0(20), 	HDFGRTR, PMEVTYPERn_EL0, 1),
+	SR_FGT(SYS_PMEVTYPERn_EL0(21), 	HDFGRTR, PMEVTYPERn_EL0, 1),
+	SR_FGT(SYS_PMEVTYPERn_EL0(22), 	HDFGRTR, PMEVTYPERn_EL0, 1),
+	SR_FGT(SYS_PMEVTYPERn_EL0(23), 	HDFGRTR, PMEVTYPERn_EL0, 1),
+	SR_FGT(SYS_PMEVTYPERn_EL0(24), 	HDFGRTR, PMEVTYPERn_EL0, 1),
+	SR_FGT(SYS_PMEVTYPERn_EL0(25), 	HDFGRTR, PMEVTYPERn_EL0, 1),
+	SR_FGT(SYS_PMEVTYPERn_EL0(26), 	HDFGRTR, PMEVTYPERn_EL0, 1),
+	SR_FGT(SYS_PMEVTYPERn_EL0(27), 	HDFGRTR, PMEVTYPERn_EL0, 1),
+	SR_FGT(SYS_PMEVTYPERn_EL0(28), 	HDFGRTR, PMEVTYPERn_EL0, 1),
+	SR_FGT(SYS_PMEVTYPERn_EL0(29), 	HDFGRTR, PMEVTYPERn_EL0, 1),
+	SR_FGT(SYS_PMEVTYPERn_EL0(30), 	HDFGRTR, PMEVTYPERn_EL0, 1),
+	SR_FGT(SYS_PMEVCNTRn_EL0(0), 	HDFGRTR, PMEVCNTRn_EL0, 1),
+	SR_FGT(SYS_PMEVCNTRn_EL0(1), 	HDFGRTR, PMEVCNTRn_EL0, 1),
+	SR_FGT(SYS_PMEVCNTRn_EL0(2), 	HDFGRTR, PMEVCNTRn_EL0, 1),
+	SR_FGT(SYS_PMEVCNTRn_EL0(3), 	HDFGRTR, PMEVCNTRn_EL0, 1),
+	SR_FGT(SYS_PMEVCNTRn_EL0(4), 	HDFGRTR, PMEVCNTRn_EL0, 1),
+	SR_FGT(SYS_PMEVCNTRn_EL0(5), 	HDFGRTR, PMEVCNTRn_EL0, 1),
+	SR_FGT(SYS_PMEVCNTRn_EL0(6), 	HDFGRTR, PMEVCNTRn_EL0, 1),
+	SR_FGT(SYS_PMEVCNTRn_EL0(7), 	HDFGRTR, PMEVCNTRn_EL0, 1),
+	SR_FGT(SYS_PMEVCNTRn_EL0(8), 	HDFGRTR, PMEVCNTRn_EL0, 1),
+	SR_FGT(SYS_PMEVCNTRn_EL0(9), 	HDFGRTR, PMEVCNTRn_EL0, 1),
+	SR_FGT(SYS_PMEVCNTRn_EL0(10), 	HDFGRTR, PMEVCNTRn_EL0, 1),
+	SR_FGT(SYS_PMEVCNTRn_EL0(11), 	HDFGRTR, PMEVCNTRn_EL0, 1),
+	SR_FGT(SYS_PMEVCNTRn_EL0(12), 	HDFGRTR, PMEVCNTRn_EL0, 1),
+	SR_FGT(SYS_PMEVCNTRn_EL0(13), 	HDFGRTR, PMEVCNTRn_EL0, 1),
+	SR_FGT(SYS_PMEVCNTRn_EL0(14), 	HDFGRTR, PMEVCNTRn_EL0, 1),
+	SR_FGT(SYS_PMEVCNTRn_EL0(15), 	HDFGRTR, PMEVCNTRn_EL0, 1),
+	SR_FGT(SYS_PMEVCNTRn_EL0(16), 	HDFGRTR, PMEVCNTRn_EL0, 1),
+	SR_FGT(SYS_PMEVCNTRn_EL0(17), 	HDFGRTR, PMEVCNTRn_EL0, 1),
+	SR_FGT(SYS_PMEVCNTRn_EL0(18), 	HDFGRTR, PMEVCNTRn_EL0, 1),
+	SR_FGT(SYS_PMEVCNTRn_EL0(19), 	HDFGRTR, PMEVCNTRn_EL0, 1),
+	SR_FGT(SYS_PMEVCNTRn_EL0(20), 	HDFGRTR, PMEVCNTRn_EL0, 1),
+	SR_FGT(SYS_PMEVCNTRn_EL0(21), 	HDFGRTR, PMEVCNTRn_EL0, 1),
+	SR_FGT(SYS_PMEVCNTRn_EL0(22), 	HDFGRTR, PMEVCNTRn_EL0, 1),
+	SR_FGT(SYS_PMEVCNTRn_EL0(23), 	HDFGRTR, PMEVCNTRn_EL0, 1),
+	SR_FGT(SYS_PMEVCNTRn_EL0(24), 	HDFGRTR, PMEVCNTRn_EL0, 1),
+	SR_FGT(SYS_PMEVCNTRn_EL0(25), 	HDFGRTR, PMEVCNTRn_EL0, 1),
+	SR_FGT(SYS_PMEVCNTRn_EL0(26), 	HDFGRTR, PMEVCNTRn_EL0, 1),
+	SR_FGT(SYS_PMEVCNTRn_EL0(27), 	HDFGRTR, PMEVCNTRn_EL0, 1),
+	SR_FGT(SYS_PMEVCNTRn_EL0(28), 	HDFGRTR, PMEVCNTRn_EL0, 1),
+	SR_FGT(SYS_PMEVCNTRn_EL0(29), 	HDFGRTR, PMEVCNTRn_EL0, 1),
+	SR_FGT(SYS_PMEVCNTRn_EL0(30), 	HDFGRTR, PMEVCNTRn_EL0, 1),
+	SR_FGT(SYS_OSDLR_EL1, 		HDFGRTR, OSDLR_EL1, 1),
+	SR_FGT(SYS_OSECCR_EL1, 		HDFGRTR, OSECCR_EL1, 1),
+	SR_FGT(SYS_OSLSR_EL1, 		HDFGRTR, OSLSR_EL1, 1),
+	SR_FGT(SYS_DBGPRCR_EL1, 	HDFGRTR, DBGPRCR_EL1, 1),
+	SR_FGT(SYS_DBGAUTHSTATUS_EL1, 	HDFGRTR, DBGAUTHSTATUS_EL1, 1),
+	SR_FGT(SYS_DBGCLAIMSET_EL1, 	HDFGRTR, DBGCLAIM, 1),
+	SR_FGT(SYS_DBGCLAIMCLR_EL1, 	HDFGRTR, DBGCLAIM, 1),
+	SR_FGT(SYS_MDSCR_EL1, 		HDFGRTR, MDSCR_EL1, 1),
+	/*
+	 * The trap bits capture *64* debug registers per bit, but the
+	 * ARM ARM only describes the encoding for the first 16, and
+	 * we don't really support more than that anyway.
+	 */
+	SR_FGT(SYS_DBGWVRn_EL1(0), 	HDFGRTR, DBGWVRn_EL1, 1),
+	SR_FGT(SYS_DBGWVRn_EL1(1), 	HDFGRTR, DBGWVRn_EL1, 1),
+	SR_FGT(SYS_DBGWVRn_EL1(2), 	HDFGRTR, DBGWVRn_EL1, 1),
+	SR_FGT(SYS_DBGWVRn_EL1(3), 	HDFGRTR, DBGWVRn_EL1, 1),
+	SR_FGT(SYS_DBGWVRn_EL1(4), 	HDFGRTR, DBGWVRn_EL1, 1),
+	SR_FGT(SYS_DBGWVRn_EL1(5), 	HDFGRTR, DBGWVRn_EL1, 1),
+	SR_FGT(SYS_DBGWVRn_EL1(6), 	HDFGRTR, DBGWVRn_EL1, 1),
+	SR_FGT(SYS_DBGWVRn_EL1(7), 	HDFGRTR, DBGWVRn_EL1, 1),
+	SR_FGT(SYS_DBGWVRn_EL1(8), 	HDFGRTR, DBGWVRn_EL1, 1),
+	SR_FGT(SYS_DBGWVRn_EL1(9), 	HDFGRTR, DBGWVRn_EL1, 1),
+	SR_FGT(SYS_DBGWVRn_EL1(10), 	HDFGRTR, DBGWVRn_EL1, 1),
+	SR_FGT(SYS_DBGWVRn_EL1(11), 	HDFGRTR, DBGWVRn_EL1, 1),
+	SR_FGT(SYS_DBGWVRn_EL1(12), 	HDFGRTR, DBGWVRn_EL1, 1),
+	SR_FGT(SYS_DBGWVRn_EL1(13), 	HDFGRTR, DBGWVRn_EL1, 1),
+	SR_FGT(SYS_DBGWVRn_EL1(14), 	HDFGRTR, DBGWVRn_EL1, 1),
+	SR_FGT(SYS_DBGWVRn_EL1(15), 	HDFGRTR, DBGWVRn_EL1, 1),
+	SR_FGT(SYS_DBGWCRn_EL1(0), 	HDFGRTR, DBGWCRn_EL1, 1),
+	SR_FGT(SYS_DBGWCRn_EL1(1), 	HDFGRTR, DBGWCRn_EL1, 1),
+	SR_FGT(SYS_DBGWCRn_EL1(2), 	HDFGRTR, DBGWCRn_EL1, 1),
+	SR_FGT(SYS_DBGWCRn_EL1(3), 	HDFGRTR, DBGWCRn_EL1, 1),
+	SR_FGT(SYS_DBGWCRn_EL1(4), 	HDFGRTR, DBGWCRn_EL1, 1),
+	SR_FGT(SYS_DBGWCRn_EL1(5), 	HDFGRTR, DBGWCRn_EL1, 1),
+	SR_FGT(SYS_DBGWCRn_EL1(6), 	HDFGRTR, DBGWCRn_EL1, 1),
+	SR_FGT(SYS_DBGWCRn_EL1(7), 	HDFGRTR, DBGWCRn_EL1, 1),
+	SR_FGT(SYS_DBGWCRn_EL1(8), 	HDFGRTR, DBGWCRn_EL1, 1),
+	SR_FGT(SYS_DBGWCRn_EL1(9), 	HDFGRTR, DBGWCRn_EL1, 1),
+	SR_FGT(SYS_DBGWCRn_EL1(10), 	HDFGRTR, DBGWCRn_EL1, 1),
+	SR_FGT(SYS_DBGWCRn_EL1(11), 	HDFGRTR, DBGWCRn_EL1, 1),
+	SR_FGT(SYS_DBGWCRn_EL1(12), 	HDFGRTR, DBGWCRn_EL1, 1),
+	SR_FGT(SYS_DBGWCRn_EL1(13), 	HDFGRTR, DBGWCRn_EL1, 1),
+	SR_FGT(SYS_DBGWCRn_EL1(14), 	HDFGRTR, DBGWCRn_EL1, 1),
+	SR_FGT(SYS_DBGWCRn_EL1(15), 	HDFGRTR, DBGWCRn_EL1, 1),
+	SR_FGT(SYS_DBGBVRn_EL1(0), 	HDFGRTR, DBGBVRn_EL1, 1),
+	SR_FGT(SYS_DBGBVRn_EL1(1), 	HDFGRTR, DBGBVRn_EL1, 1),
+	SR_FGT(SYS_DBGBVRn_EL1(2), 	HDFGRTR, DBGBVRn_EL1, 1),
+	SR_FGT(SYS_DBGBVRn_EL1(3), 	HDFGRTR, DBGBVRn_EL1, 1),
+	SR_FGT(SYS_DBGBVRn_EL1(4), 	HDFGRTR, DBGBVRn_EL1, 1),
+	SR_FGT(SYS_DBGBVRn_EL1(5), 	HDFGRTR, DBGBVRn_EL1, 1),
+	SR_FGT(SYS_DBGBVRn_EL1(6), 	HDFGRTR, DBGBVRn_EL1, 1),
+	SR_FGT(SYS_DBGBVRn_EL1(7), 	HDFGRTR, DBGBVRn_EL1, 1),
+	SR_FGT(SYS_DBGBVRn_EL1(8), 	HDFGRTR, DBGBVRn_EL1, 1),
+	SR_FGT(SYS_DBGBVRn_EL1(9), 	HDFGRTR, DBGBVRn_EL1, 1),
+	SR_FGT(SYS_DBGBVRn_EL1(10), 	HDFGRTR, DBGBVRn_EL1, 1),
+	SR_FGT(SYS_DBGBVRn_EL1(11), 	HDFGRTR, DBGBVRn_EL1, 1),
+	SR_FGT(SYS_DBGBVRn_EL1(12), 	HDFGRTR, DBGBVRn_EL1, 1),
+	SR_FGT(SYS_DBGBVRn_EL1(13), 	HDFGRTR, DBGBVRn_EL1, 1),
+	SR_FGT(SYS_DBGBVRn_EL1(14), 	HDFGRTR, DBGBVRn_EL1, 1),
+	SR_FGT(SYS_DBGBVRn_EL1(15), 	HDFGRTR, DBGBVRn_EL1, 1),
+	SR_FGT(SYS_DBGBCRn_EL1(0), 	HDFGRTR, DBGBCRn_EL1, 1),
+	SR_FGT(SYS_DBGBCRn_EL1(1), 	HDFGRTR, DBGBCRn_EL1, 1),
+	SR_FGT(SYS_DBGBCRn_EL1(2), 	HDFGRTR, DBGBCRn_EL1, 1),
+	SR_FGT(SYS_DBGBCRn_EL1(3), 	HDFGRTR, DBGBCRn_EL1, 1),
+	SR_FGT(SYS_DBGBCRn_EL1(4), 	HDFGRTR, DBGBCRn_EL1, 1),
+	SR_FGT(SYS_DBGBCRn_EL1(5), 	HDFGRTR, DBGBCRn_EL1, 1),
+	SR_FGT(SYS_DBGBCRn_EL1(6), 	HDFGRTR, DBGBCRn_EL1, 1),
+	SR_FGT(SYS_DBGBCRn_EL1(7), 	HDFGRTR, DBGBCRn_EL1, 1),
+	SR_FGT(SYS_DBGBCRn_EL1(8), 	HDFGRTR, DBGBCRn_EL1, 1),
+	SR_FGT(SYS_DBGBCRn_EL1(9), 	HDFGRTR, DBGBCRn_EL1, 1),
+	SR_FGT(SYS_DBGBCRn_EL1(10), 	HDFGRTR, DBGBCRn_EL1, 1),
+	SR_FGT(SYS_DBGBCRn_EL1(11), 	HDFGRTR, DBGBCRn_EL1, 1),
+	SR_FGT(SYS_DBGBCRn_EL1(12), 	HDFGRTR, DBGBCRn_EL1, 1),
+	SR_FGT(SYS_DBGBCRn_EL1(13), 	HDFGRTR, DBGBCRn_EL1, 1),
+	SR_FGT(SYS_DBGBCRn_EL1(14), 	HDFGRTR, DBGBCRn_EL1, 1),
+	SR_FGT(SYS_DBGBCRn_EL1(15), 	HDFGRTR, DBGBCRn_EL1, 1),
+	/*
+	 * HDFGWTR_EL2
+	 *
+	 * Although the HDFGRTR_EL2 and HDFGWTR_EL2 registers largely
+	 * overlap in their bit assignment, there are a number of bits
+	 * that are RES0 on one side, and an actual trap bit on the
+	 * other.  The policy chosen here is to describe all the
+	 * read-side mappings, and only the write-side mappings that
+	 * differ from the read side, and the trap handler will pick
+	 * the correct shadow register based on the access type.
+	 */
+	SR_FGT(SYS_TRFCR_EL1,		HDFGWTR, TRFCR_EL1, 1),
+	SR_FGT(SYS_TRCOSLAR,		HDFGWTR, TRCOSLAR, 1),
+	SR_FGT(SYS_PMCR_EL0,		HDFGWTR, PMCR_EL0, 1),
+	SR_FGT(SYS_PMSWINC_EL0,		HDFGWTR, PMSWINC_EL0, 1),
+	SR_FGT(SYS_OSLAR_EL1,		HDFGWTR, OSLAR_EL1, 1),
 };
 
 static union trap_config get_trap_config(u32 sysreg)
@@ -1336,6 +1802,14 @@ bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
 			val = sanitised_sys_reg(vcpu, HFGWTR_EL2);
 		break;
 
+	case HDFGRTR_GROUP:
+	case HDFGWTR_GROUP:
+		if (is_read)
+			val = sanitised_sys_reg(vcpu, HDFGRTR_EL2);
+		else
+			val = sanitised_sys_reg(vcpu, HDFGWTR_EL2);
+		break;
+
 	case HFGITR_GROUP:
 		val = sanitised_sys_reg(vcpu, HFGITR_EL2);
 		break;
-- 
2.34.1



* [PATCH v4 23/28] KVM: arm64: nv: Add SVC trap forwarding
From: Marc Zyngier @ 2023-08-15 18:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: Catalin Marinas, Eric Auger, Mark Brown, Mark Rutland,
	Will Deacon, Alexandru Elisei, Andre Przywara, Chase Conklin,
	Ganapatrao Kulkarni, Darren Hart, Miguel Luis, Jing Zhang,
	James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu

HFGITR_EL2 allows SVC instructions to be trapped to EL2. Allow these
traps to be forwarded. Take this opportunity to deny any 32bit
activity when NV is enabled.
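
The plumbing itself is the usual exit-handler pattern: a function
table indexed by the ESR exception class, as in arm_exit_handlers[]
below. A compact userspace model of that dispatch, with invented EC
values:

#include <stdio.h>

/* Invented exception-class values, for illustration only */
enum { EC_UNKNOWN = 0, EC_SVC64 = 1, EC_MAX = 2 };

typedef int (*exit_fn)(void);

static int handle_unknown(void)
{
	printf("unknown EC\n");
	return 0;
}

static int handle_svc(void)
{
	/* forward to the L1 hypervisor by injecting a nested exception */
	printf("SVC trapped, reinjecting into L1\n");
	return 1;
}

static exit_fn handlers[] = {
	[0 ... EC_MAX] = handle_unknown,	/* GNU range initialiser */
	[EC_SVC64]     = handle_svc,
};

int main(void)
{
	return handlers[EC_SVC64]();
}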

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/arm.c         |  4 ++++
 arch/arm64/kvm/handle_exit.c | 12 ++++++++++++
 2 files changed, 16 insertions(+)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 72dc53a75d1c..8b51570a76f8 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -36,6 +36,7 @@
 #include <asm/kvm_arm.h>
 #include <asm/kvm_asm.h>
 #include <asm/kvm_mmu.h>
+#include <asm/kvm_nested.h>
 #include <asm/kvm_pkvm.h>
 #include <asm/kvm_emulate.h>
 #include <asm/sections.h>
@@ -818,6 +819,9 @@ static bool vcpu_mode_is_bad_32bit(struct kvm_vcpu *vcpu)
 	if (likely(!vcpu_mode_is_32bit(vcpu)))
 		return false;
 
+	if (vcpu_has_nv(vcpu))
+		return true;
+
 	return !kvm_supports_32bit_el0();
 }
 
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 6dcd6604b6bc..3b86d534b995 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -226,6 +226,17 @@ static int kvm_handle_eret(struct kvm_vcpu *vcpu)
 	return 1;
 }
 
+static int handle_svc(struct kvm_vcpu *vcpu)
+{
+	/*
+	 * So far, SVC traps only for NV via HFGITR_EL2. An SVC from a
+	 * 32bit guest would be caught by vcpu_mode_is_bad_32bit(), so
+	 * we should only have to deal with a 64bit exception.
+	 */
+	kvm_inject_nested_sync(vcpu, kvm_vcpu_get_esr(vcpu));
+	return 1;
+}
+
 static exit_handle_fn arm_exit_handlers[] = {
 	[0 ... ESR_ELx_EC_MAX]	= kvm_handle_unknown_ec,
 	[ESR_ELx_EC_WFx]	= kvm_handle_wfx,
@@ -239,6 +250,7 @@ static exit_handle_fn arm_exit_handlers[] = {
 	[ESR_ELx_EC_SMC32]	= handle_smc,
 	[ESR_ELx_EC_HVC64]	= handle_hvc,
 	[ESR_ELx_EC_SMC64]	= handle_smc,
+	[ESR_ELx_EC_SVC64]	= handle_svc,
 	[ESR_ELx_EC_SYS64]	= kvm_handle_sys_reg,
 	[ESR_ELx_EC_SVE]	= handle_sve,
 	[ESR_ELx_EC_ERET]	= kvm_handle_eret,
-- 
2.34.1



* [PATCH v4 24/28] KVM: arm64: nv: Expand ERET trap forwarding to handle FGT
From: Marc Zyngier @ 2023-08-15 18:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: Catalin Marinas, Eric Auger, Mark Brown, Mark Rutland,
	Will Deacon, Alexandru Elisei, Andre Przywara, Chase Conklin,
	Ganapatrao Kulkarni, Darren Hart, Miguel Luis, Jing Zhang,
	James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu

We already handle ERET being trapped from an L1 guest in hyp context.
However, with FGT, we can also have ERET being trapped from L2, and
this needs to be reinjected into L1.

Add the required exception routing.
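
The routing boils down to a single predicate: if the vcpu was in
(virtual) EL2, emulate the ERET; otherwise the trap came from L2 and
must be reinjected into L1. A toy model of that decision, with
invented helper names:

#include <stdio.h>
#include <stdbool.h>

struct vcpu {
	bool in_vel2;	/* is the guest currently its own (virtual) EL2? */
};

static void emulate_eret(void)
{
	printf("L1 executed ERET: emulate the exception return\n");
}

static void reinject_to_l1(void)
{
	printf("L2 hit an FGT: reinject the trap into L1\n");
}

static void handle_eret(const struct vcpu *v)
{
	if (v->in_vel2)
		emulate_eret();
	else
		reinject_to_l1();
}

int main(void)
{
	struct vcpu l1 = { .in_vel2 = true };
	struct vcpu l2 = { .in_vel2 = false };

	handle_eret(&l1);
	handle_eret(&l2);
	return 0;
}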

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/handle_exit.c | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 3b86d534b995..617ae6dea5d5 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -222,7 +222,22 @@ static int kvm_handle_eret(struct kvm_vcpu *vcpu)
 	if (kvm_vcpu_get_esr(vcpu) & ESR_ELx_ERET_ISS_ERET)
 		return kvm_handle_ptrauth(vcpu);
 
-	kvm_emulate_nested_eret(vcpu);
+	/*
+	 * If we got here, two possibilities:
+	 *
+	 * - the guest is in EL2, and we need to fully emulate ERET
+	 *
+	 * - the guest is in EL1, and we need to reinject the
+	 *   exception into the L1 hypervisor.
+	 *
+	 * If KVM ever traps ERET for its own use, we'll have to
+	 * revisit this.
+	 */
+	if (is_hyp_ctxt(vcpu))
+		kvm_emulate_nested_eret(vcpu);
+	else
+		kvm_inject_nested_sync(vcpu, kvm_vcpu_get_esr(vcpu));
+
 	return 1;
 }
 
-- 
2.34.1



* [PATCH v4 25/28] KVM: arm64: nv: Add switching support for HFGxTR/HDFGxTR
From: Marc Zyngier @ 2023-08-15 18:38 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: Catalin Marinas, Eric Auger, Mark Brown, Mark Rutland,
	Will Deacon, Alexandru Elisei, Andre Przywara, Chase Conklin,
	Ganapatrao Kulkarni, Darren Hart, Miguel Luis, Jing Zhang,
	James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu

Now that we can evaluate the FGT registers, allow them to be merged
with the hypervisor's own configuration (in the case of HFG{RW}TR_EL2)
or simply set for HFGITR_EL2, HDFGRTR_EL2 and HDFGWTR_EL2.
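
The merge has to respect both polarities: positive trap bits
requested by the guest hypervisor are accumulated into a 'set' mask,
while cleared negative ('n') bits go into a 'clear' mask. A
standalone model of that merge, using an invented register layout:

#include <stdint.h>
#include <stdio.h>

/* Invented layout: bits 0-3 trap when set, bits 4-5 trap when clear */
#define FGT_RES0	~0x3FULL
#define FGT_MASK	0x0FULL
#define FGT_nMASK	0x30ULL

static void compute_clr_set(uint64_t guest, uint64_t *clr, uint64_t *set)
{
	uint64_t hfg = guest & ~FGT_RES0;

	*set |= hfg & FGT_MASK;		/* guest asked for these traps */
	*clr |= ~hfg & FGT_nMASK;	/* guest cleared these 'n' bits */
}

int main(void)
{
	uint64_t clr = 0, set = 0;
	uint64_t guest = 0x01;	/* bit 0 set, both 'n' bits clear */

	compute_clr_set(guest, &clr, &set);
	printf("set=%#llx clr=%#llx\n",
	       (unsigned long long)set, (unsigned long long)clr);
	return 0;
}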

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/hyp/include/hyp/switch.h | 48 +++++++++++++++++++++++++
 1 file changed, 48 insertions(+)

diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index e096b16e85fd..a4750070563f 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -70,6 +70,13 @@ static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu)
 	}
 }
 
+#define compute_clr_set(vcpu, reg, clr, set)				\
+	do {								\
+		u64 hfg;						\
+		hfg = __vcpu_sys_reg(vcpu, reg) & ~__ ## reg ## _RES0;	\
+		set |= hfg & __ ## reg ## _MASK; 			\
+		clr |= ~hfg & __ ## reg ## _nMASK; 			\
+	} while (0)
 
 
 static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
@@ -97,6 +104,10 @@ static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
 	if (cpus_have_final_cap(ARM64_WORKAROUND_AMPERE_AC03_CPU_38))
 		w_set |= HFGxTR_EL2_TCR_EL1_MASK;
 
+	if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) {
+		compute_clr_set(vcpu, HFGRTR_EL2, r_clr, r_set);
+		compute_clr_set(vcpu, HFGWTR_EL2, w_clr, w_set);
+	}
 
 	/* The default is not to trap anything but ACCDATA_EL1 */
 	r_val = __HFGRTR_EL2_nMASK & ~HFGxTR_EL2_nACCDATA_EL1;
@@ -109,6 +120,38 @@ static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
 
 	write_sysreg_s(r_val, SYS_HFGRTR_EL2);
 	write_sysreg_s(w_val, SYS_HFGWTR_EL2);
+
+	if (!vcpu_has_nv(vcpu) || is_hyp_ctxt(vcpu))
+		return;
+
+	ctxt_sys_reg(hctxt, HFGITR_EL2) = read_sysreg_s(SYS_HFGITR_EL2);
+
+	r_set = r_clr = 0;
+	compute_clr_set(vcpu, HFGITR_EL2, r_clr, r_set);
+	r_val = __HFGITR_EL2_nMASK;
+	r_val |= r_set;
+	r_val &= ~r_clr;
+
+	write_sysreg_s(r_val, SYS_HFGITR_EL2);
+
+	ctxt_sys_reg(hctxt, HDFGRTR_EL2) = read_sysreg_s(SYS_HDFGRTR_EL2);
+	ctxt_sys_reg(hctxt, HDFGWTR_EL2) = read_sysreg_s(SYS_HDFGWTR_EL2);
+
+	r_clr = r_set = w_clr = w_set = 0;
+
+	compute_clr_set(vcpu, HDFGRTR_EL2, r_clr, r_set);
+	compute_clr_set(vcpu, HDFGWTR_EL2, w_clr, w_set);
+
+	r_val = __HDFGRTR_EL2_nMASK;
+	r_val |= r_set;
+	r_val &= ~r_clr;
+
+	w_val = __HDFGWTR_EL2_nMASK;
+	w_val |= w_set;
+	w_val &= ~w_clr;
+
+	write_sysreg_s(r_val, SYS_HDFGRTR_EL2);
+	write_sysreg_s(w_val, SYS_HDFGWTR_EL2);
 }
 
 static inline void __deactivate_traps_hfgxtr(struct kvm_vcpu *vcpu)
@@ -121,7 +164,12 @@ static inline void __deactivate_traps_hfgxtr(struct kvm_vcpu *vcpu)
 	write_sysreg_s(ctxt_sys_reg(hctxt, HFGRTR_EL2), SYS_HFGRTR_EL2);
 	write_sysreg_s(ctxt_sys_reg(hctxt, HFGWTR_EL2), SYS_HFGWTR_EL2);
 
+	if (!vcpu_has_nv(vcpu) || is_hyp_ctxt(vcpu))
+		return;
 
+	write_sysreg_s(ctxt_sys_reg(hctxt, HFGITR_EL2), SYS_HFGITR_EL2);
+	write_sysreg_s(ctxt_sys_reg(hctxt, HDFGRTR_EL2), SYS_HDFGRTR_EL2);
+	write_sysreg_s(ctxt_sys_reg(hctxt, HDFGWTR_EL2), SYS_HDFGWTR_EL2);
 }
 
 static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
-- 
2.34.1



* [PATCH v4 26/28] KVM: arm64: nv: Expose FGT to nested guests
From: Marc Zyngier @ 2023-08-15 18:39 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: Catalin Marinas, Eric Auger, Mark Brown, Mark Rutland,
	Will Deacon, Alexandru Elisei, Andre Przywara, Chase Conklin,
	Ganapatrao Kulkarni, Darren Hart, Miguel Luis, Jing Zhang,
	James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu

Now that we have FGT support, expose the feature to NV guests.
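
Exposing the feature amounts to no longer clearing its ID register
field. A small sketch of the field-masking idea, with illustrative
shift values rather than the architectural definitions:

#include <stdint.h>
#include <stdio.h>

/* Illustrative 4-bit feature fields in a fake ID register */
#define FTR_MASK(shift)	(0xFULL << (shift))
#define ECV_SHIFT	60
#define FGT_SHIFT	56

static uint64_t filter_id_reg(uint64_t val)
{
	/* Hide ECV from the nested guest, leave FGT visible */
	return val & ~FTR_MASK(ECV_SHIFT);
}

int main(void)
{
	uint64_t hw = FTR_MASK(ECV_SHIFT) | (1ULL << FGT_SHIFT);

	printf("guest sees: %#llx\n",
	       (unsigned long long)filter_id_reg(hw));
	return 0;
}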

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/nested.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 7f80f385d9e8..3facd8918ae3 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -71,8 +71,9 @@ void access_nested_id_reg(struct kvm_vcpu *v, struct sys_reg_params *p,
 		break;
 
 	case SYS_ID_AA64MMFR0_EL1:
-		/* Hide ECV, FGT, ExS, Secure Memory */
-		val &= ~(GENMASK_ULL(63, 43)		|
+		/* Hide ECV, ExS, Secure Memory */
+		val &= ~(NV_FTR(MMFR0, ECV)		|
+			 NV_FTR(MMFR0, EXS)		|
 			 NV_FTR(MMFR0, TGRAN4_2)	|
 			 NV_FTR(MMFR0, TGRAN16_2)	|
 			 NV_FTR(MMFR0, TGRAN64_2)	|
-- 
2.34.1



* [PATCH v4 27/28] KVM: arm64: Move HCRX_EL2 switch to load/put on VHE systems
From: Marc Zyngier @ 2023-08-15 18:39 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: Catalin Marinas, Eric Auger, Mark Brown, Mark Rutland,
	Will Deacon, Alexandru Elisei, Andre Przywara, Chase Conklin,
	Ganapatrao Kulkarni, Darren Hart, Miguel Luis, Jing Zhang,
	James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu

Although the nVHE behaviour requires HCRX_EL2 to be swapped
on every transition between host and guest, there is nothing in
this register that would affect a VHE host.

It is thus possible to save/restore this register on load/put
on VHE systems, avoiding unnecessary sysreg access on the hot
path. Additionally, it avoids unnecessary traps when running
with NV.

To achieve this, simply move the read/writes to the *_common()
helpers, which are called on load/put on VHE, and more eagerly
on nVHE.
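
A toy model of the difference between the two schemes, with invented
function names, only meant to show where the sysreg accesses land:

#include <stdio.h>

/*
 * On VHE the HCRX_EL2 write happens once per vcpu load/put, while
 * nVHE runs the same helpers on every world switch.
 */
static void activate_traps_common(void)
{
	printf("write HCRX_EL2 (guest flags)\n");
}

static void deactivate_traps_common(void)
{
	printf("write HCRX_EL2 (host flags)\n");
}

static void vhe_vcpu_load(void) { activate_traps_common(); }
static void vhe_vcpu_put(void)  { deactivate_traps_common(); }

static void nvhe_world_switch(void)
{
	activate_traps_common();
	/* ... enter and run the guest ... */
	deactivate_traps_common();
}

int main(void)
{
	vhe_vcpu_load();
	/* many entries/exits here, no HCRX_EL2 traffic on VHE */
	vhe_vcpu_put();

	nvhe_world_switch();
	return 0;
}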

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/hyp/include/hyp/switch.h | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index a4750070563f..060c5a0409e5 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -197,6 +197,9 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
 	vcpu->arch.mdcr_el2_host = read_sysreg(mdcr_el2);
 	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
 
+	if (cpus_have_final_cap(ARM64_HAS_HCX))
+		write_sysreg_s(HCRX_GUEST_FLAGS, SYS_HCRX_EL2);
+
 	__activate_traps_hfgxtr(vcpu);
 }
 
@@ -213,6 +216,9 @@ static inline void __deactivate_traps_common(struct kvm_vcpu *vcpu)
 		vcpu_clear_flag(vcpu, PMUSERENR_ON_CPU);
 	}
 
+	if (cpus_have_final_cap(ARM64_HAS_HCX))
+		write_sysreg_s(HCRX_HOST_FLAGS, SYS_HCRX_EL2);
+
 	__deactivate_traps_hfgxtr(vcpu);
 }
 
@@ -227,9 +233,6 @@ static inline void ___activate_traps(struct kvm_vcpu *vcpu)
 
 	if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN) && (hcr & HCR_VSE))
 		write_sysreg_s(vcpu->arch.vsesr_el2, SYS_VSESR_EL2);
-
-	if (cpus_have_final_cap(ARM64_HAS_HCX))
-		write_sysreg_s(HCRX_GUEST_FLAGS, SYS_HCRX_EL2);
 }
 
 static inline void ___deactivate_traps(struct kvm_vcpu *vcpu)
@@ -244,9 +247,6 @@ static inline void ___deactivate_traps(struct kvm_vcpu *vcpu)
 		vcpu->arch.hcr_el2 &= ~HCR_VSE;
 		vcpu->arch.hcr_el2 |= read_sysreg(hcr_el2) & HCR_VSE;
 	}
-
-	if (cpus_have_final_cap(ARM64_HAS_HCX))
-		write_sysreg_s(HCRX_HOST_FLAGS, SYS_HCRX_EL2);
 }
 
 static inline bool __populate_fault_info(struct kvm_vcpu *vcpu)
-- 
2.34.1



* [PATCH v4 28/28] KVM: arm64: nv: Add support for HCRX_EL2
From: Marc Zyngier @ 2023-08-15 18:39 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: Catalin Marinas, Eric Auger, Mark Brown, Mark Rutland,
	Will Deacon, Alexandru Elisei, Andre Przywara, Chase Conklin,
	Ganapatrao Kulkarni, Darren Hart, Miguel Luis, Jing Zhang,
	James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu

HCRX_EL2 has an interesting effect on HFGITR_EL2, as it conditions
the traps of TLBI*nXS.

Expand the FGT support to add a new Fine Grained Filter that will
get checked when the instruction gets trapped, allowing the shadow
register to override the trap as needed.
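
The filter acts as a veto on an otherwise matching entry: for the
TLBI*nXS instructions, a set FGTnXS bit in the guest hypervisor's
shadow HCRX_EL2 suppresses the HFGITR_EL2 trap. A standalone model,
assuming FGTnXS sits at bit 4 for the sake of the example:

#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

enum fg_filter { NO_FGF, FILTER_FGTnXS };

#define HCRX_FGTnXS_BIT	4	/* assumed position, for illustration */

static bool fgt_applies(enum fg_filter fgf, uint64_t hcrx, bool bit_match)
{
	/* FGTnXS set: the nXS variants are exempt from the FGT */
	if (fgf == FILTER_FGTnXS && (hcrx & (1ULL << HCRX_FGTnXS_BIT)))
		return false;

	return bit_match;
}

int main(void)
{
	printf("nXS with FGTnXS=1: %d\n",
	       fgt_applies(FILTER_FGTnXS, 1ULL << HCRX_FGTnXS_BIT, true));
	printf("nXS with FGTnXS=0: %d\n",
	       fgt_applies(FILTER_FGTnXS, 0, true));
	return 0;
}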

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_arm.h        |  5 ++
 arch/arm64/include/asm/kvm_host.h       |  1 +
 arch/arm64/kvm/emulate-nested.c         | 94 ++++++++++++++++---------
 arch/arm64/kvm/hyp/include/hyp/switch.h | 15 +++-
 arch/arm64/kvm/nested.c                 |  3 +-
 arch/arm64/kvm/sys_regs.c               |  2 +
 6 files changed, 83 insertions(+), 37 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index d229f238c3b6..137f732789c9 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -369,6 +369,11 @@
 #define __HDFGWTR_EL2_MASK	~__HDFGWTR_EL2_nMASK
 #define __HDFGWTR_EL2_nMASK	GENMASK(62, 60)
 
+/* Similar definitions for HCRX_EL2 */
+#define __HCRX_EL2_RES0		(GENMASK(63, 16) | GENMASK(13, 12))
+#define __HCRX_EL2_MASK		(0)
+#define __HCRX_EL2_nMASK	(GENMASK(15, 14) | GENMASK(4, 0))
+
 /* Hyp Prefetch Fault Address Register (HPFAR/HDFAR) */
 #define HPFAR_MASK	(~UL(0xf))
 /*
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index cb1c5c54cedd..93c541111dea 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -380,6 +380,7 @@ enum vcpu_sysreg {
 	CPTR_EL2,	/* Architectural Feature Trap Register (EL2) */
 	HSTR_EL2,	/* Hypervisor System Trap Register */
 	HACR_EL2,	/* Hypervisor Auxiliary Control Register */
+	HCRX_EL2,	/* Extended Hypervisor Configuration Register */
 	TTBR0_EL2,	/* Translation Table Base Register 0 (EL2) */
 	TTBR1_EL2,	/* Translation Table Base Register 1 (EL2) */
 	TCR_EL2,	/* Translation Control Register (EL2) */
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index c9662f9a345e..1cc606c16416 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -426,11 +426,13 @@ static const complex_condition_check ccc[] = {
  * [13:10]	enum fgt_group_id (4 bits)
  * [19:14]	bit number in the FGT register (6 bits)
  * [20]		trap polarity (1 bit)
- * [62:21]	Unused (42 bits)
+ * [25:21]	FG filter (5 bits)
+ * [62:26]	Unused (37 bits)
  * [63]		RES0 - Must be zero, as lost on insertion in the xarray
  */
 #define TC_CGT_BITS	10
 #define TC_FGT_BITS	4
+#define TC_FGF_BITS	5
 
 union trap_config {
 	u64	val;
@@ -439,7 +441,8 @@ union trap_config {
 		unsigned long	fgt:TC_FGT_BITS; /* Fine Grained Trap id */
 		unsigned long	bit:6;		 /* Bit number */
 		unsigned long	pol:1;		 /* Polarity */
-		unsigned long	unused:42;	 /* Unused, should be zero */
+		unsigned long	fgf:TC_FGF_BITS; /* Fine Grained Filter */
+		unsigned long	unused:37;	 /* Unused, should be zero */
 		unsigned long	mbz:1;		 /* Must Be Zero */
 	};
 };
@@ -947,7 +950,15 @@ enum fgt_group_id {
 	__NR_FGT_GROUP_IDS__
 };
 
-#define SR_FGT(sr, g, b, p)					\
+enum fg_filter_id {
+	__NO_FGF__,
+	HCRX_FGTnXS,
+
+	/* Must be last */
+	__NR_FG_FILTER_IDS__
+};
+
+#define SR_FGF(sr, g, b, p, f)					\
 	{							\
 		.encoding	= sr,				\
 		.end		= sr,				\
@@ -955,10 +966,13 @@ enum fgt_group_id {
 			.fgt = g ## _GROUP,			\
 			.bit = g ## _EL2_ ## b ## _SHIFT,	\
 			.pol = p,				\
+			.fgf = f,				\
 		},						\
 		.line = __LINE__,				\
 	}
 
+#define SR_FGT(sr, g, b, p)	SR_FGF(sr, g, b, p, __NO_FGF__)
+
 static const struct encoding_to_trap_config encoding_to_fgt[] __initconst = {
 	/* HFGRTR_EL2, HFGWTR_EL2 */
 	SR_FGT(SYS_TPIDR2_EL0,		HFGxTR, nTPIDR2_EL0, 0),
@@ -1062,37 +1076,37 @@ static const struct encoding_to_trap_config encoding_to_fgt[] __initconst = {
 	SR_FGT(OP_TLBI_ASIDE1OS, 	HFGITR, TLBIASIDE1OS, 1),
 	SR_FGT(OP_TLBI_VAE1OS, 		HFGITR, TLBIVAE1OS, 1),
 	SR_FGT(OP_TLBI_VMALLE1OS, 	HFGITR, TLBIVMALLE1OS, 1),
-	/* FIXME: nXS variants must be checked against HCRX_EL2.FGTnXS */
-	SR_FGT(OP_TLBI_VAALE1NXS, 	HFGITR, TLBIVAALE1, 1),
-	SR_FGT(OP_TLBI_VALE1NXS, 	HFGITR, TLBIVALE1, 1),
-	SR_FGT(OP_TLBI_VAAE1NXS, 	HFGITR, TLBIVAAE1, 1),
-	SR_FGT(OP_TLBI_ASIDE1NXS, 	HFGITR, TLBIASIDE1, 1),
-	SR_FGT(OP_TLBI_VAE1NXS, 	HFGITR, TLBIVAE1, 1),
-	SR_FGT(OP_TLBI_VMALLE1NXS, 	HFGITR, TLBIVMALLE1, 1),
-	SR_FGT(OP_TLBI_RVAALE1NXS, 	HFGITR, TLBIRVAALE1, 1),
-	SR_FGT(OP_TLBI_RVALE1NXS, 	HFGITR, TLBIRVALE1, 1),
-	SR_FGT(OP_TLBI_RVAAE1NXS, 	HFGITR, TLBIRVAAE1, 1),
-	SR_FGT(OP_TLBI_RVAE1NXS, 	HFGITR, TLBIRVAE1, 1),
-	SR_FGT(OP_TLBI_RVAALE1ISNXS, 	HFGITR, TLBIRVAALE1IS, 1),
-	SR_FGT(OP_TLBI_RVALE1ISNXS, 	HFGITR, TLBIRVALE1IS, 1),
-	SR_FGT(OP_TLBI_RVAAE1ISNXS, 	HFGITR, TLBIRVAAE1IS, 1),
-	SR_FGT(OP_TLBI_RVAE1ISNXS, 	HFGITR, TLBIRVAE1IS, 1),
-	SR_FGT(OP_TLBI_VAALE1ISNXS, 	HFGITR, TLBIVAALE1IS, 1),
-	SR_FGT(OP_TLBI_VALE1ISNXS, 	HFGITR, TLBIVALE1IS, 1),
-	SR_FGT(OP_TLBI_VAAE1ISNXS, 	HFGITR, TLBIVAAE1IS, 1),
-	SR_FGT(OP_TLBI_ASIDE1ISNXS, 	HFGITR, TLBIASIDE1IS, 1),
-	SR_FGT(OP_TLBI_VAE1ISNXS, 	HFGITR, TLBIVAE1IS, 1),
-	SR_FGT(OP_TLBI_VMALLE1ISNXS, 	HFGITR, TLBIVMALLE1IS, 1),
-	SR_FGT(OP_TLBI_RVAALE1OSNXS, 	HFGITR, TLBIRVAALE1OS, 1),
-	SR_FGT(OP_TLBI_RVALE1OSNXS, 	HFGITR, TLBIRVALE1OS, 1),
-	SR_FGT(OP_TLBI_RVAAE1OSNXS, 	HFGITR, TLBIRVAAE1OS, 1),
-	SR_FGT(OP_TLBI_RVAE1OSNXS, 	HFGITR, TLBIRVAE1OS, 1),
-	SR_FGT(OP_TLBI_VAALE1OSNXS, 	HFGITR, TLBIVAALE1OS, 1),
-	SR_FGT(OP_TLBI_VALE1OSNXS, 	HFGITR, TLBIVALE1OS, 1),
-	SR_FGT(OP_TLBI_VAAE1OSNXS, 	HFGITR, TLBIVAAE1OS, 1),
-	SR_FGT(OP_TLBI_ASIDE1OSNXS, 	HFGITR, TLBIASIDE1OS, 1),
-	SR_FGT(OP_TLBI_VAE1OSNXS, 	HFGITR, TLBIVAE1OS, 1),
-	SR_FGT(OP_TLBI_VMALLE1OSNXS, 	HFGITR, TLBIVMALLE1OS, 1),
+	/* nXS variants must be checked against HCRX_EL2.FGTnXS */
+	SR_FGF(OP_TLBI_VAALE1NXS, 	HFGITR, TLBIVAALE1, 1, HCRX_FGTnXS),
+	SR_FGF(OP_TLBI_VALE1NXS, 	HFGITR, TLBIVALE1, 1, HCRX_FGTnXS),
+	SR_FGF(OP_TLBI_VAAE1NXS, 	HFGITR, TLBIVAAE1, 1, HCRX_FGTnXS),
+	SR_FGF(OP_TLBI_ASIDE1NXS, 	HFGITR, TLBIASIDE1, 1, HCRX_FGTnXS),
+	SR_FGF(OP_TLBI_VAE1NXS, 	HFGITR, TLBIVAE1, 1, HCRX_FGTnXS),
+	SR_FGF(OP_TLBI_VMALLE1NXS, 	HFGITR, TLBIVMALLE1, 1, HCRX_FGTnXS),
+	SR_FGF(OP_TLBI_RVAALE1NXS, 	HFGITR, TLBIRVAALE1, 1, HCRX_FGTnXS),
+	SR_FGF(OP_TLBI_RVALE1NXS, 	HFGITR, TLBIRVALE1, 1, HCRX_FGTnXS),
+	SR_FGF(OP_TLBI_RVAAE1NXS, 	HFGITR, TLBIRVAAE1, 1, HCRX_FGTnXS),
+	SR_FGF(OP_TLBI_RVAE1NXS, 	HFGITR, TLBIRVAE1, 1, HCRX_FGTnXS),
+	SR_FGF(OP_TLBI_RVAALE1ISNXS, 	HFGITR, TLBIRVAALE1IS, 1, HCRX_FGTnXS),
+	SR_FGF(OP_TLBI_RVALE1ISNXS, 	HFGITR, TLBIRVALE1IS, 1, HCRX_FGTnXS),
+	SR_FGF(OP_TLBI_RVAAE1ISNXS, 	HFGITR, TLBIRVAAE1IS, 1, HCRX_FGTnXS),
+	SR_FGF(OP_TLBI_RVAE1ISNXS, 	HFGITR, TLBIRVAE1IS, 1, HCRX_FGTnXS),
+	SR_FGF(OP_TLBI_VAALE1ISNXS, 	HFGITR, TLBIVAALE1IS, 1, HCRX_FGTnXS),
+	SR_FGF(OP_TLBI_VALE1ISNXS, 	HFGITR, TLBIVALE1IS, 1, HCRX_FGTnXS),
+	SR_FGF(OP_TLBI_VAAE1ISNXS, 	HFGITR, TLBIVAAE1IS, 1, HCRX_FGTnXS),
+	SR_FGF(OP_TLBI_ASIDE1ISNXS, 	HFGITR, TLBIASIDE1IS, 1, HCRX_FGTnXS),
+	SR_FGF(OP_TLBI_VAE1ISNXS, 	HFGITR, TLBIVAE1IS, 1, HCRX_FGTnXS),
+	SR_FGF(OP_TLBI_VMALLE1ISNXS, 	HFGITR, TLBIVMALLE1IS, 1, HCRX_FGTnXS),
+	SR_FGF(OP_TLBI_RVAALE1OSNXS, 	HFGITR, TLBIRVAALE1OS, 1, HCRX_FGTnXS),
+	SR_FGF(OP_TLBI_RVALE1OSNXS, 	HFGITR, TLBIRVALE1OS, 1, HCRX_FGTnXS),
+	SR_FGF(OP_TLBI_RVAAE1OSNXS, 	HFGITR, TLBIRVAAE1OS, 1, HCRX_FGTnXS),
+	SR_FGF(OP_TLBI_RVAE1OSNXS, 	HFGITR, TLBIRVAE1OS, 1, HCRX_FGTnXS),
+	SR_FGF(OP_TLBI_VAALE1OSNXS, 	HFGITR, TLBIVAALE1OS, 1, HCRX_FGTnXS),
+	SR_FGF(OP_TLBI_VALE1OSNXS, 	HFGITR, TLBIVALE1OS, 1, HCRX_FGTnXS),
+	SR_FGF(OP_TLBI_VAAE1OSNXS, 	HFGITR, TLBIVAAE1OS, 1, HCRX_FGTnXS),
+	SR_FGF(OP_TLBI_ASIDE1OSNXS, 	HFGITR, TLBIASIDE1OS, 1, HCRX_FGTnXS),
+	SR_FGF(OP_TLBI_VAE1OSNXS, 	HFGITR, TLBIVAE1OS, 1, HCRX_FGTnXS),
+	SR_FGF(OP_TLBI_VMALLE1OSNXS, 	HFGITR, TLBIVMALLE1OS, 1, HCRX_FGTnXS),
 	SR_FGT(OP_AT_S1E1WP, 		HFGITR, ATS1E1WP, 1),
 	SR_FGT(OP_AT_S1E1RP, 		HFGITR, ATS1E1RP, 1),
 	SR_FGT(OP_AT_S1E0W, 		HFGITR, ATS1E0W, 1),
@@ -1622,6 +1636,7 @@ int __init populate_nv_trap_config(void)
 	BUILD_BUG_ON(sizeof(union trap_config) != sizeof(void *));
 	BUILD_BUG_ON(__NR_CGT_GROUP_IDS__ > BIT(TC_CGT_BITS));
 	BUILD_BUG_ON(__NR_FGT_GROUP_IDS__ > BIT(TC_FGT_BITS));
+	BUILD_BUG_ON(__NR_FG_FILTER_IDS__ > BIT(TC_FGF_BITS));
 
 	for (int i = 0; i < ARRAY_SIZE(encoding_to_cgt); i++) {
 		const struct encoding_to_trap_config *cgt = &encoding_to_cgt[i];
@@ -1812,6 +1827,17 @@ bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
 
 	case HFGITR_GROUP:
 		val = sanitised_sys_reg(vcpu, HFGITR_EL2);
+		switch (tc.fgf) {
+			u64 tmp;
+
+		case __NO_FGF__:
+			break;
+
+		case HCRX_FGTnXS:
+			tmp = sanitised_sys_reg(vcpu, HCRX_EL2);
+			if (tmp & HCRX_EL2_FGTnXS)
+				tc.fgt = __NO_FGT_GROUP__;
+		}
 		break;
 
 	case __NR_FGT_GROUP_IDS__:
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 060c5a0409e5..3acf6d77e324 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -197,8 +197,19 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
 	vcpu->arch.mdcr_el2_host = read_sysreg(mdcr_el2);
 	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
 
-	if (cpus_have_final_cap(ARM64_HAS_HCX))
-		write_sysreg_s(HCRX_GUEST_FLAGS, SYS_HCRX_EL2);
+	if (cpus_have_final_cap(ARM64_HAS_HCX)) {
+		u64 hcrx = HCRX_GUEST_FLAGS;
+		if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) {
+			u64 clr = 0, set = 0;
+
+			compute_clr_set(vcpu, HCRX_EL2, clr, set);
+
+			hcrx |= set;
+			hcrx &= ~clr;
+		}
+
+		write_sysreg_s(hcrx, SYS_HCRX_EL2);
+	}
 
 	__activate_traps_hfgxtr(vcpu);
 }
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 3facd8918ae3..042695a210ce 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -117,7 +117,8 @@ void access_nested_id_reg(struct kvm_vcpu *v, struct sys_reg_params *p,
 		break;
 
 	case SYS_ID_AA64MMFR1_EL1:
-		val &= (NV_FTR(MMFR1, PAN)	|
+		val &= (NV_FTR(MMFR1, HCX)	|
+			NV_FTR(MMFR1, PAN)	|
 			NV_FTR(MMFR1, LO)	|
 			NV_FTR(MMFR1, HPDS)	|
 			NV_FTR(MMFR1, VH)	|
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 9556896311db..e92ec810d449 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -2372,6 +2372,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	EL2_REG(HFGITR_EL2, access_rw, reset_val, 0),
 	EL2_REG(HACR_EL2, access_rw, reset_val, 0),
 
+	EL2_REG(HCRX_EL2, access_rw, reset_val, 0),
+
 	EL2_REG(TTBR0_EL2, access_rw, reset_val, 0),
 	EL2_REG(TTBR1_EL2, access_rw, reset_val, 0),
 	EL2_REG(TCR_EL2, access_rw, reset_val, TCR_EL2_RES1),
-- 
2.34.1



* Re: [PATCH v4 14/28] KVM: arm64: nv: Add trap forwarding infrastructure
  2023-08-15 18:38 ` [PATCH v4 14/28] KVM: arm64: nv: Add trap forwarding infrastructure Marc Zyngier
@ 2023-08-15 21:34   ` Jing Zhang
  2023-08-16  9:34   ` Miguel Luis
  1 sibling, 0 replies; 48+ messages in thread
From: Jing Zhang @ 2023-08-15 21:34 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, kvm, linux-arm-kernel, Catalin Marinas, Eric Auger,
	Mark Brown, Mark Rutland, Will Deacon, Alexandru Elisei,
	Andre Przywara, Chase Conklin, Ganapatrao Kulkarni, Darren Hart,
	Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
	Zenghui Yu

Hi Marc,

On Tue, Aug 15, 2023 at 11:47 AM Marc Zyngier <maz@kernel.org> wrote:
>
> A significant part of what an NV hypervisor needs to do is to decide
> whether a trap from an L2+ guest has to be forwarded to an L1 guest
> or handled locally. This is done by checking for the trap bits that
> the guest hypervisor has set and acting accordingly, as described by
> the architecture.
>
> A previous approach was to sprinkle a bunch of checks in all the
> system register accessors, but this is pretty error prone and doesn't
> help in getting an overview of what is happening.
>
> Instead, implement a set of global tables that describe a trap bit,
> combinations of trap bits, behaviours on trap, and what bits must
> be evaluated on a system register trap.
>
> Although this is painful to describe, it allows each and every
> control bit to be specified in a static manner. To make it efficient,
> the table is inserted in an xarray that is global to the system,
> and checked each time we trap a system register while running
> an L2 guest.
>
> Add the basic infrastructure for now, while additional patches will
> implement configuration registers.
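
The lookup path itself is short: the sysreg encoding taken from
ESR_EL2 indexes the xarray, and the stored value decodes back into a
trap_config. A minimal sketch using only the helpers from this patch
(see __check_nv_sr_forward() below):

        u64 esr = kvm_vcpu_get_esr(vcpu);
        u32 sysreg = esr_sys64_to_sysreg(esr);
        union trap_config tc = get_trap_config(sysreg);
        enum trap_behaviour b;

        if (!tc.val)            /* nothing known about this sysreg */
                return false;   /* handle the trap locally */

        /* walk the coarse/combined/complex condition tables */
        b = compute_trap_behaviour(vcpu, tc);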
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_host.h   |   1 +
>  arch/arm64/include/asm/kvm_nested.h |   2 +
>  arch/arm64/kvm/emulate-nested.c     | 282 ++++++++++++++++++++++++++++
>  arch/arm64/kvm/sys_regs.c           |   6 +
>  arch/arm64/kvm/trace_arm.h          |  26 +++
>  5 files changed, 317 insertions(+)
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 721680da1011..cb1c5c54cedd 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -988,6 +988,7 @@ int kvm_handle_cp10_id(struct kvm_vcpu *vcpu);
>  void kvm_reset_sys_regs(struct kvm_vcpu *vcpu);
>
>  int __init kvm_sys_reg_table_init(void);
> +int __init populate_nv_trap_config(void);
>
>  bool lock_all_vcpus(struct kvm *kvm);
>  void unlock_all_vcpus(struct kvm *kvm);
> diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
> index 8fb67f032fd1..fa23cc9c2adc 100644
> --- a/arch/arm64/include/asm/kvm_nested.h
> +++ b/arch/arm64/include/asm/kvm_nested.h
> @@ -11,6 +11,8 @@ static inline bool vcpu_has_nv(const struct kvm_vcpu *vcpu)
>                 test_bit(KVM_ARM_VCPU_HAS_EL2, vcpu->arch.features));
>  }
>
> +extern bool __check_nv_sr_forward(struct kvm_vcpu *vcpu);
> +
>  struct sys_reg_params;
>  struct sys_reg_desc;
>
> diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> index b96662029fb1..d5837ed0077c 100644
> --- a/arch/arm64/kvm/emulate-nested.c
> +++ b/arch/arm64/kvm/emulate-nested.c
> @@ -14,6 +14,288 @@
>
>  #include "trace.h"
>
> +enum trap_behaviour {
> +       BEHAVE_HANDLE_LOCALLY   = 0,
> +       BEHAVE_FORWARD_READ     = BIT(0),
> +       BEHAVE_FORWARD_WRITE    = BIT(1),
> +       BEHAVE_FORWARD_ANY      = BEHAVE_FORWARD_READ | BEHAVE_FORWARD_WRITE,
> +};
> +
> +struct trap_bits {
> +       const enum vcpu_sysreg          index;
> +       const enum trap_behaviour       behaviour;
> +       const u64                       value;
> +       const u64                       mask;
> +};
> +
> +/* Coarse Grained Trap definitions */
> +enum cgt_group_id {
> +       /* Indicates no coarse trap control */
> +       __RESERVED__,
> +
> +       /*
> +        * The first batch of IDs denotes coarse trap controls that
> +        * are used on their own instead of being part of a
> +        * combination of trap controls.
> +        */
> +
> +       /*
> +        * Anything after this point is a combination of coarse trap
> +        * controls, which must all be evaluated to decide what to do.
> +        */
> +       __MULTIPLE_CONTROL_BITS__,
> +
> +       /*
> +        * Anything after this point requires a callback evaluating a
> +        * complex trap condition. Hopefully we'll never need this...
> +        */
> +       __COMPLEX_CONDITIONS__,
> +
> +       /* Must be last */
> +       __NR_CGT_GROUP_IDS__
> +};
> +
> +static const struct trap_bits coarse_trap_bits[] = {
> +};
> +
> +#define MCB(id, ...)                                           \
> +       [id - __MULTIPLE_CONTROL_BITS__]        =               \
> +               (const enum cgt_group_id[]){                    \
> +               __VA_ARGS__, __RESERVED__                       \
> +               }
> +
> +static const enum cgt_group_id *coarse_control_combo[] = {
> +};
> +
> +typedef enum trap_behaviour (*complex_condition_check)(struct kvm_vcpu *);
> +
> +#define CCC(id, fn)                            \
> +       [id - __COMPLEX_CONDITIONS__] = fn
> +
> +static const complex_condition_check ccc[] = {
> +};
> +
> +/*
> + * Bit assignment for the trap controls. We use a 64bit word with the
> + * following layout for each trapped sysreg:
> + *
> + * [9:0]       enum cgt_group_id (10 bits)
> + * [62:10]     Unused (53 bits)
> + * [63]                RES0 - Must be zero, as lost on insertion in the xarray
> + */
> +#define TC_CGT_BITS    10
> +
> +union trap_config {
> +       u64     val;
> +       struct {
> +               unsigned long   cgt:TC_CGT_BITS; /* Coarse Grained Trap id */
> +               unsigned long   unused:53;       /* Unused, should be zero */
> +               unsigned long   mbz:1;           /* Must Be Zero */
> +       };
> +};
> +
> +struct encoding_to_trap_config {
> +       const u32                       encoding;
> +       const u32                       end;
> +       const union trap_config         tc;
> +       const unsigned int              line;
> +};
> +
> +#define SR_RANGE_TRAP(sr_start, sr_end, trap_id)                       \
> +       {                                                               \
> +               .encoding       = sr_start,                             \
> +               .end            = sr_end,                               \
> +               .tc             = {                                     \
> +                       .cgt            = trap_id,                      \
> +               },                                                      \
> +               .line = __LINE__,                                       \
> +       }
> +
> +#define SR_TRAP(sr, trap_id)           SR_RANGE_TRAP(sr, sr, trap_id)
> +
> +/*
> + * Map encoding to trap bits for exception reported with EC=0x18.
> + * These must only be evaluated when running a nested hypervisor, but
> + * that the current context is not a hypervisor context. When the
> + * trapped access matches one of the trap controls, the exception is
> + * re-injected in the nested hypervisor.
> + */
> +static const struct encoding_to_trap_config encoding_to_cgt[] __initconst = {
> +};
> +
> +static DEFINE_XARRAY(sr_forward_xa);
> +
> +static union trap_config get_trap_config(u32 sysreg)
> +{
> +       return (union trap_config) {
> +               .val = xa_to_value(xa_load(&sr_forward_xa, sysreg)),
> +       };
> +}
> +
> +static __init void print_nv_trap_error(const struct encoding_to_trap_config *tc,
> +                                      const char *type, int err)
> +{
> +       kvm_err("%s line %d encoding range "
> +               "(%d, %d, %d, %d, %d) - (%d, %d, %d, %d, %d) (err=%d)\n",
> +               type, tc->line,
> +               sys_reg_Op0(tc->encoding), sys_reg_Op1(tc->encoding),
> +               sys_reg_CRn(tc->encoding), sys_reg_CRm(tc->encoding),
> +               sys_reg_Op2(tc->encoding),
> +               sys_reg_Op0(tc->end), sys_reg_Op1(tc->end),
> +               sys_reg_CRn(tc->end), sys_reg_CRm(tc->end),
> +               sys_reg_Op2(tc->end),
> +               err);
> +}
> +
> +int __init populate_nv_trap_config(void)
> +{
> +       int ret = 0;
> +
> +       BUILD_BUG_ON(sizeof(union trap_config) != sizeof(void *));
> +       BUILD_BUG_ON(__NR_CGT_GROUP_IDS__ > BIT(TC_CGT_BITS));
> +
> +       for (int i = 0; i < ARRAY_SIZE(encoding_to_cgt); i++) {
> +               const struct encoding_to_trap_config *cgt = &encoding_to_cgt[i];
> +               void *prev;
> +
> +               if (cgt->tc.val & BIT(63)) {
> +                       kvm_err("CGT[%d] has MBZ bit set\n", i);
> +                       ret = -EINVAL;
> +               }
> +
> +               if (cgt->encoding != cgt->end) {
> +                       prev = xa_store_range(&sr_forward_xa,
> +                                             cgt->encoding, cgt->end,
> +                                             xa_mk_value(cgt->tc.val),
> +                                             GFP_KERNEL);
> +               } else {
> +                       prev = xa_store(&sr_forward_xa, cgt->encoding,
> +                                       xa_mk_value(cgt->tc.val), GFP_KERNEL);
> +                       if (prev && !xa_is_err(prev)) {
> +                               ret = -EINVAL;
> +                               print_nv_trap_error(cgt, "Duplicate CGT", ret);
> +                       }
> +               }
> +
> +               if (xa_is_err(prev)) {
> +                       ret = xa_err(prev);
> +                       print_nv_trap_error(cgt, "Failed CGT insertion", ret);
> +               }
> +       }
> +
> +       kvm_info("nv: %ld coarse grained trap handlers\n",
> +                ARRAY_SIZE(encoding_to_cgt));
> +
> +       for (int id = __MULTIPLE_CONTROL_BITS__; id < __COMPLEX_CONDITIONS__; id++) {
> +               const enum cgt_group_id *cgids;
> +
> +               cgids = coarse_control_combo[id - __MULTIPLE_CONTROL_BITS__];
> +
> +               for (int i = 0; cgids[i] != __RESERVED__; i++) {
> +                       if (cgids[i] >= __MULTIPLE_CONTROL_BITS__) {
> +                               kvm_err("Recursive MCB %d/%d\n", id, cgids[i]);
> +                               ret = -EINVAL;
> +                       }
> +               }
> +       }
> +
> +       if (ret)
> +               xa_destroy(&sr_forward_xa);
> +
> +       return ret;
> +}
> +
> +static enum trap_behaviour get_behaviour(struct kvm_vcpu *vcpu,
> +                                        const struct trap_bits *tb)
> +{
> +       enum trap_behaviour b = BEHAVE_HANDLE_LOCALLY;
> +       u64 val;
> +
> +       val = __vcpu_sys_reg(vcpu, tb->index);
> +       if ((val & tb->mask) == tb->value)
> +               b |= tb->behaviour;
> +
> +       return b;
> +}
> +
> +static enum trap_behaviour __compute_trap_behaviour(struct kvm_vcpu *vcpu,
> +                                                   const enum cgt_group_id id,
> +                                                   enum trap_behaviour b)
> +{
> +       switch (id) {
> +               const enum cgt_group_id *cgids;
> +
> +       case __RESERVED__ ... __MULTIPLE_CONTROL_BITS__ - 1:
> +               if (likely(id != __RESERVED__))
> +                       b |= get_behaviour(vcpu, &coarse_trap_bits[id]);
> +               break;
> +       case __MULTIPLE_CONTROL_BITS__ ... __COMPLEX_CONDITIONS__ - 1:
> +               /* Yes, this is recursive. Don't do anything stupid. */
> +               cgids = coarse_control_combo[id - __MULTIPLE_CONTROL_BITS__];
> +               for (int i = 0; cgids[i] != __RESERVED__; i++)
> +                       b |= __compute_trap_behaviour(vcpu, cgids[i], b);
> +               break;
> +       default:
> +               if (ARRAY_SIZE(ccc))
> +                       b |= ccc[id -  __COMPLEX_CONDITIONS__](vcpu);
> +               break;
> +       }
> +
> +       return b;
> +}
> +
> +static enum trap_behaviour compute_trap_behaviour(struct kvm_vcpu *vcpu,
> +                                                 const union trap_config tc)
> +{
> +       enum trap_behaviour b = BEHAVE_HANDLE_LOCALLY;
> +
> +       return __compute_trap_behaviour(vcpu, tc.cgt, b);
> +}
> +
> +bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
> +{
> +       union trap_config tc;
> +       enum trap_behaviour b;
> +       bool is_read;
> +       u32 sysreg;
> +       u64 esr;
> +
> +       if (!vcpu_has_nv(vcpu) || is_hyp_ctxt(vcpu))
> +               return false;
> +
> +       esr = kvm_vcpu_get_esr(vcpu);
> +       sysreg = esr_sys64_to_sysreg(esr);
> +       is_read = (esr & ESR_ELx_SYS64_ISS_DIR_MASK) == ESR_ELx_SYS64_ISS_DIR_READ;
> +
> +       tc = get_trap_config(sysreg);
> +
> +       /*
> +        * A value of 0 for the whole entry means that we know nothing
> +        * for this sysreg, and that it cannot be re-injected into the
> +        * nested hypervisor. In this situation, let's cut it short.
> +        *
> +        * Note that ultimately, we could also make use of the xarray
> +        * to store the index of the sysreg in the local descriptor
> +        * array, avoiding another search... Hint, hint...
> +        */
> +       if (!tc.val)
> +               return false;
> +
> +       b = compute_trap_behaviour(vcpu, tc);
> +
> +       if (((b & BEHAVE_FORWARD_READ) && is_read) ||
> +           ((b & BEHAVE_FORWARD_WRITE) && !is_read))
> +               goto inject;
> +
> +       return false;
> +
> +inject:
> +       trace_kvm_forward_sysreg_trap(vcpu, sysreg, is_read);
> +
> +       kvm_inject_nested_sync(vcpu, kvm_vcpu_get_esr(vcpu));
> +       return true;
> +}
> +
>  static u64 kvm_check_illegal_exception_return(struct kvm_vcpu *vcpu, u64 spsr)
>  {
>         u64 mode = spsr & PSR_MODE_MASK;
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index f5baaa508926..9556896311db 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -3177,6 +3177,9 @@ int kvm_handle_sys_reg(struct kvm_vcpu *vcpu)
>
>         trace_kvm_handle_sys_reg(esr);
>
> +       if (__check_nv_sr_forward(vcpu))
> +               return 1;
> +
>         params = esr_sys64_to_params(esr);
>         params.regval = vcpu_get_reg(vcpu, Rt);
>
> @@ -3594,5 +3597,8 @@ int __init kvm_sys_reg_table_init(void)
>         if (!first_idreg)
>                 return -EINVAL;
>
> +       if (kvm_get_mode() == KVM_MODE_NV)
> +               return populate_nv_trap_config();
> +
>         return 0;
>  }
> diff --git a/arch/arm64/kvm/trace_arm.h b/arch/arm64/kvm/trace_arm.h
> index 6ce5c025218d..8ad53104934d 100644
> --- a/arch/arm64/kvm/trace_arm.h
> +++ b/arch/arm64/kvm/trace_arm.h
> @@ -364,6 +364,32 @@ TRACE_EVENT(kvm_inject_nested_exception,
>                   __entry->hcr_el2)
>  );
>
> +TRACE_EVENT(kvm_forward_sysreg_trap,
> +           TP_PROTO(struct kvm_vcpu *vcpu, u32 sysreg, bool is_read),
> +           TP_ARGS(vcpu, sysreg, is_read),
> +
> +           TP_STRUCT__entry(
> +               __field(u64,    pc)
> +               __field(u32,    sysreg)
> +               __field(bool,   is_read)
> +           ),
> +
> +           TP_fast_assign(
> +               __entry->pc = *vcpu_pc(vcpu);
> +               __entry->sysreg = sysreg;
> +               __entry->is_read = is_read;
> +           ),
> +
> +           TP_printk("%llx %c (%d,%d,%d,%d,%d)",
> +                     __entry->pc,
> +                     __entry->is_read ? 'R' : 'W',
> +                     sys_reg_Op0(__entry->sysreg),
> +                     sys_reg_Op1(__entry->sysreg),
> +                     sys_reg_CRn(__entry->sysreg),
> +                     sys_reg_CRm(__entry->sysreg),
> +                     sys_reg_Op2(__entry->sysreg))
> +);
> +
>  #endif /* _TRACE_ARM_ARM64_KVM_H */
>
>  #undef TRACE_INCLUDE_PATH
> --
> 2.34.1
>

Reviewed-by: Jing Zhang <jingzhangos@google.com>


* Re: [PATCH v4 15/28] KVM: arm64: nv: Add trap forwarding for HCR_EL2
  2023-08-15 18:38 ` [PATCH v4 15/28] KVM: arm64: nv: Add trap forwarding for HCR_EL2 Marc Zyngier
@ 2023-08-15 21:37   ` Jing Zhang
  2023-08-17 11:05   ` Miguel Luis
  1 sibling, 0 replies; 48+ messages in thread
From: Jing Zhang @ 2023-08-15 21:37 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, kvm, linux-arm-kernel, Catalin Marinas, Eric Auger,
	Mark Brown, Mark Rutland, Will Deacon, Alexandru Elisei,
	Andre Przywara, Chase Conklin, Ganapatrao Kulkarni, Darren Hart,
	Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
	Zenghui Yu

Hi Marc,

On Tue, Aug 15, 2023 at 11:47 AM Marc Zyngier <maz@kernel.org> wrote:
>
> Describe the HCR_EL2 register, and associate it with all the sysregs
> it can trap.
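
The association is two-level: a coarse_trap_bits[] entry names the
HCR_EL2 bit to check and the behaviour it implies, and SR_TRAP()
binds a sysreg encoding to that entry. Taking the DC ISW trap from
this patch as an example:

        [CGT_HCR_TSW] = {
                .index          = HCR_EL2,      /* shadow register */
                .value          = HCR_TSW,      /* trap if TSW == 1 */
                .mask           = HCR_TSW,
                .behaviour      = BEHAVE_FORWARD_ANY,
        },
        ...
        SR_TRAP(SYS_DC_ISW,             CGT_HCR_TSW),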
>
> Reviewed-by: Eric Auger <eric.auger@redhat.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/emulate-nested.c | 488 ++++++++++++++++++++++++++++++++
>  1 file changed, 488 insertions(+)
>
> diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> index d5837ed0077c..975a30ef874a 100644
> --- a/arch/arm64/kvm/emulate-nested.c
> +++ b/arch/arm64/kvm/emulate-nested.c
> @@ -38,12 +38,48 @@ enum cgt_group_id {
>          * on their own instead of being part of a combination of
>          * trap controls.
>          */
> +       CGT_HCR_TID1,
> +       CGT_HCR_TID2,
> +       CGT_HCR_TID3,
> +       CGT_HCR_IMO,
> +       CGT_HCR_FMO,
> +       CGT_HCR_TIDCP,
> +       CGT_HCR_TACR,
> +       CGT_HCR_TSW,
> +       CGT_HCR_TPC,
> +       CGT_HCR_TPU,
> +       CGT_HCR_TTLB,
> +       CGT_HCR_TVM,
> +       CGT_HCR_TDZ,
> +       CGT_HCR_TRVM,
> +       CGT_HCR_TLOR,
> +       CGT_HCR_TERR,
> +       CGT_HCR_APK,
> +       CGT_HCR_NV,
> +       CGT_HCR_NV_nNV2,
> +       CGT_HCR_NV1_nNV2,
> +       CGT_HCR_AT,
> +       CGT_HCR_nFIEN,
> +       CGT_HCR_TID4,
> +       CGT_HCR_TICAB,
> +       CGT_HCR_TOCU,
> +       CGT_HCR_ENSCXT,
> +       CGT_HCR_TTLBIS,
> +       CGT_HCR_TTLBOS,
>
>         /*
>          * Anything after this point is a combination of coarse trap
>          * controls, which must all be evaluated to decide what to do.
>          */
>         __MULTIPLE_CONTROL_BITS__,
> +       CGT_HCR_IMO_FMO = __MULTIPLE_CONTROL_BITS__,
> +       CGT_HCR_TID2_TID4,
> +       CGT_HCR_TTLB_TTLBIS,
> +       CGT_HCR_TTLB_TTLBOS,
> +       CGT_HCR_TVM_TRVM,
> +       CGT_HCR_TPU_TICAB,
> +       CGT_HCR_TPU_TOCU,
> +       CGT_HCR_NV1_nNV2_ENSCXT,
>
>         /*
>          * Anything after this point requires a callback evaluating a
> @@ -56,6 +92,174 @@ enum cgt_group_id {
>  };
>
>  static const struct trap_bits coarse_trap_bits[] = {
> +       [CGT_HCR_TID1] = {
> +               .index          = HCR_EL2,
> +               .value          = HCR_TID1,
> +               .mask           = HCR_TID1,
> +               .behaviour      = BEHAVE_FORWARD_READ,
> +       },
> +       [CGT_HCR_TID2] = {
> +               .index          = HCR_EL2,
> +               .value          = HCR_TID2,
> +               .mask           = HCR_TID2,
> +               .behaviour      = BEHAVE_FORWARD_ANY,
> +       },
> +       [CGT_HCR_TID3] = {
> +               .index          = HCR_EL2,
> +               .value          = HCR_TID3,
> +               .mask           = HCR_TID3,
> +               .behaviour      = BEHAVE_FORWARD_READ,
> +       },
> +       [CGT_HCR_IMO] = {
> +               .index          = HCR_EL2,
> +               .value          = HCR_IMO,
> +               .mask           = HCR_IMO,
> +               .behaviour      = BEHAVE_FORWARD_WRITE,
> +       },
> +       [CGT_HCR_FMO] = {
> +               .index          = HCR_EL2,
> +               .value          = HCR_FMO,
> +               .mask           = HCR_FMO,
> +               .behaviour      = BEHAVE_FORWARD_WRITE,
> +       },
> +       [CGT_HCR_TIDCP] = {
> +               .index          = HCR_EL2,
> +               .value          = HCR_TIDCP,
> +               .mask           = HCR_TIDCP,
> +               .behaviour      = BEHAVE_FORWARD_ANY,
> +       },
> +       [CGT_HCR_TACR] = {
> +               .index          = HCR_EL2,
> +               .value          = HCR_TACR,
> +               .mask           = HCR_TACR,
> +               .behaviour      = BEHAVE_FORWARD_ANY,
> +       },
> +       [CGT_HCR_TSW] = {
> +               .index          = HCR_EL2,
> +               .value          = HCR_TSW,
> +               .mask           = HCR_TSW,
> +               .behaviour      = BEHAVE_FORWARD_ANY,
> +       },
> +       [CGT_HCR_TPC] = { /* Also called TCPC when FEAT_DPB is implemented */
> +               .index          = HCR_EL2,
> +               .value          = HCR_TPC,
> +               .mask           = HCR_TPC,
> +               .behaviour      = BEHAVE_FORWARD_ANY,
> +       },
> +       [CGT_HCR_TPU] = {
> +               .index          = HCR_EL2,
> +               .value          = HCR_TPU,
> +               .mask           = HCR_TPU,
> +               .behaviour      = BEHAVE_FORWARD_ANY,
> +       },
> +       [CGT_HCR_TTLB] = {
> +               .index          = HCR_EL2,
> +               .value          = HCR_TTLB,
> +               .mask           = HCR_TTLB,
> +               .behaviour      = BEHAVE_FORWARD_ANY,
> +       },
> +       [CGT_HCR_TVM] = {
> +               .index          = HCR_EL2,
> +               .value          = HCR_TVM,
> +               .mask           = HCR_TVM,
> +               .behaviour      = BEHAVE_FORWARD_WRITE,
> +       },
> +       [CGT_HCR_TDZ] = {
> +               .index          = HCR_EL2,
> +               .value          = HCR_TDZ,
> +               .mask           = HCR_TDZ,
> +               .behaviour      = BEHAVE_FORWARD_ANY,
> +       },
> +       [CGT_HCR_TRVM] = {
> +               .index          = HCR_EL2,
> +               .value          = HCR_TRVM,
> +               .mask           = HCR_TRVM,
> +               .behaviour      = BEHAVE_FORWARD_READ,
> +       },
> +       [CGT_HCR_TLOR] = {
> +               .index          = HCR_EL2,
> +               .value          = HCR_TLOR,
> +               .mask           = HCR_TLOR,
> +               .behaviour      = BEHAVE_FORWARD_ANY,
> +       },
> +       [CGT_HCR_TERR] = {
> +               .index          = HCR_EL2,
> +               .value          = HCR_TERR,
> +               .mask           = HCR_TERR,
> +               .behaviour      = BEHAVE_FORWARD_ANY,
> +       },
> +       [CGT_HCR_APK] = {
> +               .index          = HCR_EL2,
> +               .value          = 0,
> +               .mask           = HCR_APK,
> +               .behaviour      = BEHAVE_FORWARD_ANY,
> +       },
> +       [CGT_HCR_NV] = {
> +               .index          = HCR_EL2,
> +               .value          = HCR_NV,
> +               .mask           = HCR_NV,
> +               .behaviour      = BEHAVE_FORWARD_ANY,
> +       },
> +       [CGT_HCR_NV_nNV2] = {
> +               .index          = HCR_EL2,
> +               .value          = HCR_NV,
> +               .mask           = HCR_NV | HCR_NV2,
> +               .behaviour      = BEHAVE_FORWARD_ANY,
> +       },
> +       [CGT_HCR_NV1_nNV2] = {
> +               .index          = HCR_EL2,
> +               .value          = HCR_NV | HCR_NV1,
> +               .mask           = HCR_NV | HCR_NV1 | HCR_NV2,
> +               .behaviour      = BEHAVE_FORWARD_ANY,
> +       },
> +       [CGT_HCR_AT] = {
> +               .index          = HCR_EL2,
> +               .value          = HCR_AT,
> +               .mask           = HCR_AT,
> +               .behaviour      = BEHAVE_FORWARD_ANY,
> +       },
> +       [CGT_HCR_nFIEN] = {
> +               .index          = HCR_EL2,
> +               .value          = 0,
> +               .mask           = HCR_FIEN,
> +               .behaviour      = BEHAVE_FORWARD_ANY,
> +       },
> +       [CGT_HCR_TID4] = {
> +               .index          = HCR_EL2,
> +               .value          = HCR_TID4,
> +               .mask           = HCR_TID4,
> +               .behaviour      = BEHAVE_FORWARD_ANY,
> +       },
> +       [CGT_HCR_TICAB] = {
> +               .index          = HCR_EL2,
> +               .value          = HCR_TICAB,
> +               .mask           = HCR_TICAB,
> +               .behaviour      = BEHAVE_FORWARD_ANY,
> +       },
> +       [CGT_HCR_TOCU] = {
> +               .index          = HCR_EL2,
> +               .value          = HCR_TOCU,
> +               .mask           = HCR_TOCU,
> +               .behaviour      = BEHAVE_FORWARD_ANY,
> +       },
> +       [CGT_HCR_ENSCXT] = {
> +               .index          = HCR_EL2,
> +               .value          = 0,
> +               .mask           = HCR_ENSCXT,
> +               .behaviour      = BEHAVE_FORWARD_ANY,
> +       },
> +       [CGT_HCR_TTLBIS] = {
> +               .index          = HCR_EL2,
> +               .value          = HCR_TTLBIS,
> +               .mask           = HCR_TTLBIS,
> +               .behaviour      = BEHAVE_FORWARD_ANY,
> +       },
> +       [CGT_HCR_TTLBOS] = {
> +               .index          = HCR_EL2,
> +               .value          = HCR_TTLBOS,
> +               .mask           = HCR_TTLBOS,
> +               .behaviour      = BEHAVE_FORWARD_ANY,
> +       },
>  };
>
>  #define MCB(id, ...)                                           \
> @@ -65,6 +269,14 @@ static const struct trap_bits coarse_trap_bits[] = {
>                 }
>
>  static const enum cgt_group_id *coarse_control_combo[] = {
> +       MCB(CGT_HCR_IMO_FMO,            CGT_HCR_IMO, CGT_HCR_FMO),
> +       MCB(CGT_HCR_TID2_TID4,          CGT_HCR_TID2, CGT_HCR_TID4),
> +       MCB(CGT_HCR_TTLB_TTLBIS,        CGT_HCR_TTLB, CGT_HCR_TTLBIS),
> +       MCB(CGT_HCR_TTLB_TTLBOS,        CGT_HCR_TTLB, CGT_HCR_TTLBOS),
> +       MCB(CGT_HCR_TVM_TRVM,           CGT_HCR_TVM, CGT_HCR_TRVM),
> +       MCB(CGT_HCR_TPU_TICAB,          CGT_HCR_TPU, CGT_HCR_TICAB),
> +       MCB(CGT_HCR_TPU_TOCU,           CGT_HCR_TPU, CGT_HCR_TOCU),
> +       MCB(CGT_HCR_NV1_nNV2_ENSCXT,    CGT_HCR_NV1_nNV2, CGT_HCR_ENSCXT),
>  };
>
>  typedef enum trap_behaviour (*complex_condition_check)(struct kvm_vcpu *);
> @@ -121,6 +333,282 @@ struct encoding_to_trap_config {
>   * re-injected in the nested hypervisor.
>   */
>  static const struct encoding_to_trap_config encoding_to_cgt[] __initconst = {
> +       SR_TRAP(SYS_REVIDR_EL1,         CGT_HCR_TID1),
> +       SR_TRAP(SYS_AIDR_EL1,           CGT_HCR_TID1),
> +       SR_TRAP(SYS_SMIDR_EL1,          CGT_HCR_TID1),
> +       SR_TRAP(SYS_CTR_EL0,            CGT_HCR_TID2),
> +       SR_TRAP(SYS_CCSIDR_EL1,         CGT_HCR_TID2_TID4),
> +       SR_TRAP(SYS_CCSIDR2_EL1,        CGT_HCR_TID2_TID4),
> +       SR_TRAP(SYS_CLIDR_EL1,          CGT_HCR_TID2_TID4),
> +       SR_TRAP(SYS_CSSELR_EL1,         CGT_HCR_TID2_TID4),
> +       SR_RANGE_TRAP(SYS_ID_PFR0_EL1,
> +                     sys_reg(3, 0, 0, 7, 7), CGT_HCR_TID3),
> +       SR_TRAP(SYS_ICC_SGI0R_EL1,      CGT_HCR_IMO_FMO),
> +       SR_TRAP(SYS_ICC_ASGI1R_EL1,     CGT_HCR_IMO_FMO),
> +       SR_TRAP(SYS_ICC_SGI1R_EL1,      CGT_HCR_IMO_FMO),
> +       SR_RANGE_TRAP(sys_reg(3, 0, 11, 0, 0),
> +                     sys_reg(3, 0, 11, 15, 7), CGT_HCR_TIDCP),
> +       SR_RANGE_TRAP(sys_reg(3, 1, 11, 0, 0),
> +                     sys_reg(3, 1, 11, 15, 7), CGT_HCR_TIDCP),
> +       SR_RANGE_TRAP(sys_reg(3, 2, 11, 0, 0),
> +                     sys_reg(3, 2, 11, 15, 7), CGT_HCR_TIDCP),
> +       SR_RANGE_TRAP(sys_reg(3, 3, 11, 0, 0),
> +                     sys_reg(3, 3, 11, 15, 7), CGT_HCR_TIDCP),
> +       SR_RANGE_TRAP(sys_reg(3, 4, 11, 0, 0),
> +                     sys_reg(3, 4, 11, 15, 7), CGT_HCR_TIDCP),
> +       SR_RANGE_TRAP(sys_reg(3, 5, 11, 0, 0),
> +                     sys_reg(3, 5, 11, 15, 7), CGT_HCR_TIDCP),
> +       SR_RANGE_TRAP(sys_reg(3, 6, 11, 0, 0),
> +                     sys_reg(3, 6, 11, 15, 7), CGT_HCR_TIDCP),
> +       SR_RANGE_TRAP(sys_reg(3, 7, 11, 0, 0),
> +                     sys_reg(3, 7, 11, 15, 7), CGT_HCR_TIDCP),
> +       SR_RANGE_TRAP(sys_reg(3, 0, 15, 0, 0),
> +                     sys_reg(3, 0, 15, 15, 7), CGT_HCR_TIDCP),
> +       SR_RANGE_TRAP(sys_reg(3, 1, 15, 0, 0),
> +                     sys_reg(3, 1, 15, 15, 7), CGT_HCR_TIDCP),
> +       SR_RANGE_TRAP(sys_reg(3, 2, 15, 0, 0),
> +                     sys_reg(3, 2, 15, 15, 7), CGT_HCR_TIDCP),
> +       SR_RANGE_TRAP(sys_reg(3, 3, 15, 0, 0),
> +                     sys_reg(3, 3, 15, 15, 7), CGT_HCR_TIDCP),
> +       SR_RANGE_TRAP(sys_reg(3, 4, 15, 0, 0),
> +                     sys_reg(3, 4, 15, 15, 7), CGT_HCR_TIDCP),
> +       SR_RANGE_TRAP(sys_reg(3, 5, 15, 0, 0),
> +                     sys_reg(3, 5, 15, 15, 7), CGT_HCR_TIDCP),
> +       SR_RANGE_TRAP(sys_reg(3, 6, 15, 0, 0),
> +                     sys_reg(3, 6, 15, 15, 7), CGT_HCR_TIDCP),
> +       SR_RANGE_TRAP(sys_reg(3, 7, 15, 0, 0),
> +                     sys_reg(3, 7, 15, 15, 7), CGT_HCR_TIDCP),
> +       SR_TRAP(SYS_ACTLR_EL1,          CGT_HCR_TACR),
> +       SR_TRAP(SYS_DC_ISW,             CGT_HCR_TSW),
> +       SR_TRAP(SYS_DC_CSW,             CGT_HCR_TSW),
> +       SR_TRAP(SYS_DC_CISW,            CGT_HCR_TSW),
> +       SR_TRAP(SYS_DC_IGSW,            CGT_HCR_TSW),
> +       SR_TRAP(SYS_DC_IGDSW,           CGT_HCR_TSW),
> +       SR_TRAP(SYS_DC_CGSW,            CGT_HCR_TSW),
> +       SR_TRAP(SYS_DC_CGDSW,           CGT_HCR_TSW),
> +       SR_TRAP(SYS_DC_CIGSW,           CGT_HCR_TSW),
> +       SR_TRAP(SYS_DC_CIGDSW,          CGT_HCR_TSW),
> +       SR_TRAP(SYS_DC_CIVAC,           CGT_HCR_TPC),
> +       SR_TRAP(SYS_DC_CVAC,            CGT_HCR_TPC),
> +       SR_TRAP(SYS_DC_CVAP,            CGT_HCR_TPC),
> +       SR_TRAP(SYS_DC_CVADP,           CGT_HCR_TPC),
> +       SR_TRAP(SYS_DC_IVAC,            CGT_HCR_TPC),
> +       SR_TRAP(SYS_DC_CIGVAC,          CGT_HCR_TPC),
> +       SR_TRAP(SYS_DC_CIGDVAC,         CGT_HCR_TPC),
> +       SR_TRAP(SYS_DC_IGVAC,           CGT_HCR_TPC),
> +       SR_TRAP(SYS_DC_IGDVAC,          CGT_HCR_TPC),
> +       SR_TRAP(SYS_DC_CGVAC,           CGT_HCR_TPC),
> +       SR_TRAP(SYS_DC_CGDVAC,          CGT_HCR_TPC),
> +       SR_TRAP(SYS_DC_CGVAP,           CGT_HCR_TPC),
> +       SR_TRAP(SYS_DC_CGDVAP,          CGT_HCR_TPC),
> +       SR_TRAP(SYS_DC_CGVADP,          CGT_HCR_TPC),
> +       SR_TRAP(SYS_DC_CGDVADP,         CGT_HCR_TPC),
> +       SR_TRAP(SYS_IC_IVAU,            CGT_HCR_TPU_TOCU),
> +       SR_TRAP(SYS_IC_IALLU,           CGT_HCR_TPU_TOCU),
> +       SR_TRAP(SYS_IC_IALLUIS,         CGT_HCR_TPU_TICAB),
> +       SR_TRAP(SYS_DC_CVAU,            CGT_HCR_TPU_TOCU),
> +       SR_TRAP(OP_TLBI_RVAE1,          CGT_HCR_TTLB),
> +       SR_TRAP(OP_TLBI_RVAAE1,         CGT_HCR_TTLB),
> +       SR_TRAP(OP_TLBI_RVALE1,         CGT_HCR_TTLB),
> +       SR_TRAP(OP_TLBI_RVAALE1,        CGT_HCR_TTLB),
> +       SR_TRAP(OP_TLBI_VMALLE1,        CGT_HCR_TTLB),
> +       SR_TRAP(OP_TLBI_VAE1,           CGT_HCR_TTLB),
> +       SR_TRAP(OP_TLBI_ASIDE1,         CGT_HCR_TTLB),
> +       SR_TRAP(OP_TLBI_VAAE1,          CGT_HCR_TTLB),
> +       SR_TRAP(OP_TLBI_VALE1,          CGT_HCR_TTLB),
> +       SR_TRAP(OP_TLBI_VAALE1,         CGT_HCR_TTLB),
> +       SR_TRAP(OP_TLBI_RVAE1NXS,       CGT_HCR_TTLB),
> +       SR_TRAP(OP_TLBI_RVAAE1NXS,      CGT_HCR_TTLB),
> +       SR_TRAP(OP_TLBI_RVALE1NXS,      CGT_HCR_TTLB),
> +       SR_TRAP(OP_TLBI_RVAALE1NXS,     CGT_HCR_TTLB),
> +       SR_TRAP(OP_TLBI_VMALLE1NXS,     CGT_HCR_TTLB),
> +       SR_TRAP(OP_TLBI_VAE1NXS,        CGT_HCR_TTLB),
> +       SR_TRAP(OP_TLBI_ASIDE1NXS,      CGT_HCR_TTLB),
> +       SR_TRAP(OP_TLBI_VAAE1NXS,       CGT_HCR_TTLB),
> +       SR_TRAP(OP_TLBI_VALE1NXS,       CGT_HCR_TTLB),
> +       SR_TRAP(OP_TLBI_VAALE1NXS,      CGT_HCR_TTLB),
> +       SR_TRAP(OP_TLBI_RVAE1IS,        CGT_HCR_TTLB_TTLBIS),
> +       SR_TRAP(OP_TLBI_RVAAE1IS,       CGT_HCR_TTLB_TTLBIS),
> +       SR_TRAP(OP_TLBI_RVALE1IS,       CGT_HCR_TTLB_TTLBIS),
> +       SR_TRAP(OP_TLBI_RVAALE1IS,      CGT_HCR_TTLB_TTLBIS),
> +       SR_TRAP(OP_TLBI_VMALLE1IS,      CGT_HCR_TTLB_TTLBIS),
> +       SR_TRAP(OP_TLBI_VAE1IS,         CGT_HCR_TTLB_TTLBIS),
> +       SR_TRAP(OP_TLBI_ASIDE1IS,       CGT_HCR_TTLB_TTLBIS),
> +       SR_TRAP(OP_TLBI_VAAE1IS,        CGT_HCR_TTLB_TTLBIS),
> +       SR_TRAP(OP_TLBI_VALE1IS,        CGT_HCR_TTLB_TTLBIS),
> +       SR_TRAP(OP_TLBI_VAALE1IS,       CGT_HCR_TTLB_TTLBIS),
> +       SR_TRAP(OP_TLBI_RVAE1ISNXS,     CGT_HCR_TTLB_TTLBIS),
> +       SR_TRAP(OP_TLBI_RVAAE1ISNXS,    CGT_HCR_TTLB_TTLBIS),
> +       SR_TRAP(OP_TLBI_RVALE1ISNXS,    CGT_HCR_TTLB_TTLBIS),
> +       SR_TRAP(OP_TLBI_RVAALE1ISNXS,   CGT_HCR_TTLB_TTLBIS),
> +       SR_TRAP(OP_TLBI_VMALLE1ISNXS,   CGT_HCR_TTLB_TTLBIS),
> +       SR_TRAP(OP_TLBI_VAE1ISNXS,      CGT_HCR_TTLB_TTLBIS),
> +       SR_TRAP(OP_TLBI_ASIDE1ISNXS,    CGT_HCR_TTLB_TTLBIS),
> +       SR_TRAP(OP_TLBI_VAAE1ISNXS,     CGT_HCR_TTLB_TTLBIS),
> +       SR_TRAP(OP_TLBI_VALE1ISNXS,     CGT_HCR_TTLB_TTLBIS),
> +       SR_TRAP(OP_TLBI_VAALE1ISNXS,    CGT_HCR_TTLB_TTLBIS),
> +       SR_TRAP(OP_TLBI_VMALLE1OS,      CGT_HCR_TTLB_TTLBOS),
> +       SR_TRAP(OP_TLBI_VAE1OS,         CGT_HCR_TTLB_TTLBOS),
> +       SR_TRAP(OP_TLBI_ASIDE1OS,       CGT_HCR_TTLB_TTLBOS),
> +       SR_TRAP(OP_TLBI_VAAE1OS,        CGT_HCR_TTLB_TTLBOS),
> +       SR_TRAP(OP_TLBI_VALE1OS,        CGT_HCR_TTLB_TTLBOS),
> +       SR_TRAP(OP_TLBI_VAALE1OS,       CGT_HCR_TTLB_TTLBOS),
> +       SR_TRAP(OP_TLBI_RVAE1OS,        CGT_HCR_TTLB_TTLBOS),
> +       SR_TRAP(OP_TLBI_RVAAE1OS,       CGT_HCR_TTLB_TTLBOS),
> +       SR_TRAP(OP_TLBI_RVALE1OS,       CGT_HCR_TTLB_TTLBOS),
> +       SR_TRAP(OP_TLBI_RVAALE1OS,      CGT_HCR_TTLB_TTLBOS),
> +       SR_TRAP(OP_TLBI_VMALLE1OSNXS,   CGT_HCR_TTLB_TTLBOS),
> +       SR_TRAP(OP_TLBI_VAE1OSNXS,      CGT_HCR_TTLB_TTLBOS),
> +       SR_TRAP(OP_TLBI_ASIDE1OSNXS,    CGT_HCR_TTLB_TTLBOS),
> +       SR_TRAP(OP_TLBI_VAAE1OSNXS,     CGT_HCR_TTLB_TTLBOS),
> +       SR_TRAP(OP_TLBI_VALE1OSNXS,     CGT_HCR_TTLB_TTLBOS),
> +       SR_TRAP(OP_TLBI_VAALE1OSNXS,    CGT_HCR_TTLB_TTLBOS),
> +       SR_TRAP(OP_TLBI_RVAE1OSNXS,     CGT_HCR_TTLB_TTLBOS),
> +       SR_TRAP(OP_TLBI_RVAAE1OSNXS,    CGT_HCR_TTLB_TTLBOS),
> +       SR_TRAP(OP_TLBI_RVALE1OSNXS,    CGT_HCR_TTLB_TTLBOS),
> +       SR_TRAP(OP_TLBI_RVAALE1OSNXS,   CGT_HCR_TTLB_TTLBOS),
> +       SR_TRAP(SYS_SCTLR_EL1,          CGT_HCR_TVM_TRVM),
> +       SR_TRAP(SYS_TTBR0_EL1,          CGT_HCR_TVM_TRVM),
> +       SR_TRAP(SYS_TTBR1_EL1,          CGT_HCR_TVM_TRVM),
> +       SR_TRAP(SYS_TCR_EL1,            CGT_HCR_TVM_TRVM),
> +       SR_TRAP(SYS_ESR_EL1,            CGT_HCR_TVM_TRVM),
> +       SR_TRAP(SYS_FAR_EL1,            CGT_HCR_TVM_TRVM),
> +       SR_TRAP(SYS_AFSR0_EL1,          CGT_HCR_TVM_TRVM),
> +       SR_TRAP(SYS_AFSR1_EL1,          CGT_HCR_TVM_TRVM),
> +       SR_TRAP(SYS_MAIR_EL1,           CGT_HCR_TVM_TRVM),
> +       SR_TRAP(SYS_AMAIR_EL1,          CGT_HCR_TVM_TRVM),
> +       SR_TRAP(SYS_CONTEXTIDR_EL1,     CGT_HCR_TVM_TRVM),
> +       SR_TRAP(SYS_DC_ZVA,             CGT_HCR_TDZ),
> +       SR_TRAP(SYS_DC_GVA,             CGT_HCR_TDZ),
> +       SR_TRAP(SYS_DC_GZVA,            CGT_HCR_TDZ),
> +       SR_TRAP(SYS_LORSA_EL1,          CGT_HCR_TLOR),
> +       SR_TRAP(SYS_LOREA_EL1,          CGT_HCR_TLOR),
> +       SR_TRAP(SYS_LORN_EL1,           CGT_HCR_TLOR),
> +       SR_TRAP(SYS_LORC_EL1,           CGT_HCR_TLOR),
> +       SR_TRAP(SYS_LORID_EL1,          CGT_HCR_TLOR),
> +       SR_TRAP(SYS_ERRIDR_EL1,         CGT_HCR_TERR),
> +       SR_TRAP(SYS_ERRSELR_EL1,        CGT_HCR_TERR),
> +       SR_TRAP(SYS_ERXADDR_EL1,        CGT_HCR_TERR),
> +       SR_TRAP(SYS_ERXCTLR_EL1,        CGT_HCR_TERR),
> +       SR_TRAP(SYS_ERXFR_EL1,          CGT_HCR_TERR),
> +       SR_TRAP(SYS_ERXMISC0_EL1,       CGT_HCR_TERR),
> +       SR_TRAP(SYS_ERXMISC1_EL1,       CGT_HCR_TERR),
> +       SR_TRAP(SYS_ERXMISC2_EL1,       CGT_HCR_TERR),
> +       SR_TRAP(SYS_ERXMISC3_EL1,       CGT_HCR_TERR),
> +       SR_TRAP(SYS_ERXSTATUS_EL1,      CGT_HCR_TERR),
> +       SR_TRAP(SYS_APIAKEYLO_EL1,      CGT_HCR_APK),
> +       SR_TRAP(SYS_APIAKEYHI_EL1,      CGT_HCR_APK),
> +       SR_TRAP(SYS_APIBKEYLO_EL1,      CGT_HCR_APK),
> +       SR_TRAP(SYS_APIBKEYHI_EL1,      CGT_HCR_APK),
> +       SR_TRAP(SYS_APDAKEYLO_EL1,      CGT_HCR_APK),
> +       SR_TRAP(SYS_APDAKEYHI_EL1,      CGT_HCR_APK),
> +       SR_TRAP(SYS_APDBKEYLO_EL1,      CGT_HCR_APK),
> +       SR_TRAP(SYS_APDBKEYHI_EL1,      CGT_HCR_APK),
> +       SR_TRAP(SYS_APGAKEYLO_EL1,      CGT_HCR_APK),
> +       SR_TRAP(SYS_APGAKEYHI_EL1,      CGT_HCR_APK),
> +       /* All _EL2 registers */
> +       SR_RANGE_TRAP(sys_reg(3, 4, 0, 0, 0),
> +                     sys_reg(3, 4, 3, 15, 7), CGT_HCR_NV),
> +       /* Skip the SP_EL1 encoding... */
> +       SR_RANGE_TRAP(sys_reg(3, 4, 4, 1, 1),
> +                     sys_reg(3, 4, 10, 15, 7), CGT_HCR_NV),
> +       SR_RANGE_TRAP(sys_reg(3, 4, 12, 0, 0),
> +                     sys_reg(3, 4, 14, 15, 7), CGT_HCR_NV),
> +       /* All _EL02, _EL12 registers */
> +       SR_RANGE_TRAP(sys_reg(3, 5, 0, 0, 0),
> +                     sys_reg(3, 5, 10, 15, 7), CGT_HCR_NV),
> +       SR_RANGE_TRAP(sys_reg(3, 5, 12, 0, 0),
> +                     sys_reg(3, 5, 14, 15, 7), CGT_HCR_NV),
> +       SR_TRAP(OP_AT_S1E2R,            CGT_HCR_NV),
> +       SR_TRAP(OP_AT_S1E2W,            CGT_HCR_NV),
> +       SR_TRAP(OP_AT_S12E1R,           CGT_HCR_NV),
> +       SR_TRAP(OP_AT_S12E1W,           CGT_HCR_NV),
> +       SR_TRAP(OP_AT_S12E0R,           CGT_HCR_NV),
> +       SR_TRAP(OP_AT_S12E0W,           CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_IPAS2E1,        CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_RIPAS2E1,       CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_IPAS2LE1,       CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_RIPAS2LE1,      CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_RVAE2,          CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_RVALE2,         CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_ALLE2,          CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_VAE2,           CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_ALLE1,          CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_VALE2,          CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_VMALLS12E1,     CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_IPAS2E1NXS,     CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_RIPAS2E1NXS,    CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_IPAS2LE1NXS,    CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_RIPAS2LE1NXS,   CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_RVAE2NXS,       CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_RVALE2NXS,      CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_ALLE2NXS,       CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_VAE2NXS,        CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_ALLE1NXS,       CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_VALE2NXS,       CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_VMALLS12E1NXS,  CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_IPAS2E1IS,      CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_RIPAS2E1IS,     CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_IPAS2LE1IS,     CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_RIPAS2LE1IS,    CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_RVAE2IS,        CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_RVALE2IS,       CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_ALLE2IS,        CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_VAE2IS,         CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_ALLE1IS,        CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_VALE2IS,        CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_VMALLS12E1IS,   CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_IPAS2E1ISNXS,   CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_RIPAS2E1ISNXS,  CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_IPAS2LE1ISNXS,  CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_RIPAS2LE1ISNXS, CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_RVAE2ISNXS,     CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_RVALE2ISNXS,    CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_ALLE2ISNXS,     CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_VAE2ISNXS,      CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_ALLE1ISNXS,     CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_VALE2ISNXS,     CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_VMALLS12E1ISNXS,CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_ALLE2OS,        CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_VAE2OS,         CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_ALLE1OS,        CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_VALE2OS,        CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_VMALLS12E1OS,   CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_IPAS2E1OS,      CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_RIPAS2E1OS,     CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_IPAS2LE1OS,     CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_RIPAS2LE1OS,    CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_RVAE2OS,        CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_RVALE2OS,       CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_ALLE2OSNXS,     CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_VAE2OSNXS,      CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_ALLE1OSNXS,     CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_VALE2OSNXS,     CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_VMALLS12E1OSNXS,CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_IPAS2E1OSNXS,   CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_RIPAS2E1OSNXS,  CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_IPAS2LE1OSNXS,  CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_RIPAS2LE1OSNXS, CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_RVAE2OSNXS,     CGT_HCR_NV),
> +       SR_TRAP(OP_TLBI_RVALE2OSNXS,    CGT_HCR_NV),
> +       SR_TRAP(OP_CPP_RCTX,            CGT_HCR_NV),
> +       SR_TRAP(OP_DVP_RCTX,            CGT_HCR_NV),
> +       SR_TRAP(OP_CFP_RCTX,            CGT_HCR_NV),
> +       SR_TRAP(SYS_SP_EL1,             CGT_HCR_NV_nNV2),
> +       SR_TRAP(SYS_VBAR_EL1,           CGT_HCR_NV1_nNV2),
> +       SR_TRAP(SYS_ELR_EL1,            CGT_HCR_NV1_nNV2),
> +       SR_TRAP(SYS_SPSR_EL1,           CGT_HCR_NV1_nNV2),
> +       SR_TRAP(SYS_SCXTNUM_EL1,        CGT_HCR_NV1_nNV2_ENSCXT),
> +       SR_TRAP(SYS_SCXTNUM_EL0,        CGT_HCR_ENSCXT),
> +       SR_TRAP(OP_AT_S1E1R,            CGT_HCR_AT),
> +       SR_TRAP(OP_AT_S1E1W,            CGT_HCR_AT),
> +       SR_TRAP(OP_AT_S1E0R,            CGT_HCR_AT),
> +       SR_TRAP(OP_AT_S1E0W,            CGT_HCR_AT),
> +       SR_TRAP(OP_AT_S1E1RP,           CGT_HCR_AT),
> +       SR_TRAP(OP_AT_S1E1WP,           CGT_HCR_AT),
> +       SR_TRAP(SYS_ERXPFGF_EL1,        CGT_HCR_nFIEN),
> +       SR_TRAP(SYS_ERXPFGCTL_EL1,      CGT_HCR_nFIEN),
> +       SR_TRAP(SYS_ERXPFGCDN_EL1,      CGT_HCR_nFIEN),
>  };
>
>  static DEFINE_XARRAY(sr_forward_xa);
> --
> 2.34.1
>

Reviewed-by: Jing Zhang <jingzhangos@google.com>


* Re: [PATCH v4 17/28] KVM: arm64: nv: Add trap forwarding for MDCR_EL2
  2023-08-15 18:38 ` [PATCH v4 17/28] KVM: arm64: nv: Add trap forwarding for MDCR_EL2 Marc Zyngier
@ 2023-08-15 22:33   ` Jing Zhang
  0 siblings, 0 replies; 48+ messages in thread
From: Jing Zhang @ 2023-08-15 22:33 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, kvm, linux-arm-kernel, Catalin Marinas, Eric Auger,
	Mark Brown, Mark Rutland, Will Deacon, Alexandru Elisei,
	Andre Przywara, Chase Conklin, Ganapatrao Kulkarni, Darren Hart,
	Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
	Zenghui Yu

Hi Marc,

On Tue, Aug 15, 2023 at 11:46 AM Marc Zyngier <maz@kernel.org> wrote:
>
> Describe the MDCR_EL2 register, and associate it with all the sysregs
> it can trap.
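
This patch also introduces the first three-way combination
(CGT_MDCR_TDCC_TDE_TDA). A combo is just a __RESERVED__-terminated
list of simple CGT ids, and evaluation ORs the behaviour of each
constituent, as in the infrastructure patch:

        /* declaration, via the MCB() helper */
        MCB(CGT_MDCR_TDCC_TDE_TDA, CGT_MDCR_TDCC, CGT_MDCR_TDE, CGT_MDCR_TDA),

        /* evaluation, from __compute_trap_behaviour() */
        cgids = coarse_control_combo[id - __MULTIPLE_CONTROL_BITS__];
        for (int i = 0; cgids[i] != __RESERVED__; i++)
                b |= __compute_trap_behaviour(vcpu, cgids[i], b);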
>
> Reviewed-by: Eric Auger <eric.auger@redhat.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/emulate-nested.c | 268 ++++++++++++++++++++++++++++++++
>  1 file changed, 268 insertions(+)
>
> diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> index 975a30ef874a..241e44eeed6d 100644
> --- a/arch/arm64/kvm/emulate-nested.c
> +++ b/arch/arm64/kvm/emulate-nested.c
> @@ -67,6 +67,18 @@ enum cgt_group_id {
>         CGT_HCR_TTLBIS,
>         CGT_HCR_TTLBOS,
>
> +       CGT_MDCR_TPMCR,
> +       CGT_MDCR_TPM,
> +       CGT_MDCR_TDE,
> +       CGT_MDCR_TDA,
> +       CGT_MDCR_TDOSA,
> +       CGT_MDCR_TDRA,
> +       CGT_MDCR_E2PB,
> +       CGT_MDCR_TPMS,
> +       CGT_MDCR_TTRF,
> +       CGT_MDCR_E2TB,
> +       CGT_MDCR_TDCC,
> +
>         /*
>          * Anything after this point is a combination of coarse trap
>          * controls, which must all be evaluated to decide what to do.
> @@ -80,6 +92,11 @@ enum cgt_group_id {
>         CGT_HCR_TPU_TICAB,
>         CGT_HCR_TPU_TOCU,
>         CGT_HCR_NV1_nNV2_ENSCXT,
> +       CGT_MDCR_TPM_TPMCR,
> +       CGT_MDCR_TDE_TDA,
> +       CGT_MDCR_TDE_TDOSA,
> +       CGT_MDCR_TDE_TDRA,
> +       CGT_MDCR_TDCC_TDE_TDA,
>
>         /*
>          * Anything after this point requires a callback evaluating a
> @@ -260,6 +277,72 @@ static const struct trap_bits coarse_trap_bits[] = {
>                 .mask           = HCR_TTLBOS,
>                 .behaviour      = BEHAVE_FORWARD_ANY,
>         },
> +       [CGT_MDCR_TPMCR] = {
> +               .index          = MDCR_EL2,
> +               .value          = MDCR_EL2_TPMCR,
> +               .mask           = MDCR_EL2_TPMCR,
> +               .behaviour      = BEHAVE_FORWARD_ANY,
> +       },
> +       [CGT_MDCR_TPM] = {
> +               .index          = MDCR_EL2,
> +               .value          = MDCR_EL2_TPM,
> +               .mask           = MDCR_EL2_TPM,
> +               .behaviour      = BEHAVE_FORWARD_ANY,
> +       },
> +       [CGT_MDCR_TDE] = {
> +               .index          = MDCR_EL2,
> +               .value          = MDCR_EL2_TDE,
> +               .mask           = MDCR_EL2_TDE,
> +               .behaviour      = BEHAVE_FORWARD_ANY,
> +       },
> +       [CGT_MDCR_TDA] = {
> +               .index          = MDCR_EL2,
> +               .value          = MDCR_EL2_TDA,
> +               .mask           = MDCR_EL2_TDA,
> +               .behaviour      = BEHAVE_FORWARD_ANY,
> +       },
> +       [CGT_MDCR_TDOSA] = {
> +               .index          = MDCR_EL2,
> +               .value          = MDCR_EL2_TDOSA,
> +               .mask           = MDCR_EL2_TDOSA,
> +               .behaviour      = BEHAVE_FORWARD_ANY,
> +       },
> +       [CGT_MDCR_TDRA] = {
> +               .index          = MDCR_EL2,
> +               .value          = MDCR_EL2_TDRA,
> +               .mask           = MDCR_EL2_TDRA,
> +               .behaviour      = BEHAVE_FORWARD_ANY,
> +       },
> +       [CGT_MDCR_E2PB] = {
> +               .index          = MDCR_EL2,
> +               .value          = 0,
> +               .mask           = BIT(MDCR_EL2_E2PB_SHIFT),
> +               .behaviour      = BEHAVE_FORWARD_ANY,
> +       },
> +       [CGT_MDCR_TPMS] = {
> +               .index          = MDCR_EL2,
> +               .value          = MDCR_EL2_TPMS,
> +               .mask           = MDCR_EL2_TPMS,
> +               .behaviour      = BEHAVE_FORWARD_ANY,
> +       },
> +       [CGT_MDCR_TTRF] = {
> +               .index          = MDCR_EL2,
> +               .value          = MDCR_EL2_TTRF,
> +               .mask           = MDCR_EL2_TTRF,
> +               .behaviour      = BEHAVE_FORWARD_ANY,
> +       },
> +       [CGT_MDCR_E2TB] = {
> +               .index          = MDCR_EL2,
> +               .value          = 0,
> +               .mask           = BIT(MDCR_EL2_E2TB_SHIFT),
> +               .behaviour      = BEHAVE_FORWARD_ANY,
> +       },
> +       [CGT_MDCR_TDCC] = {
> +               .index          = MDCR_EL2,
> +               .value          = MDCR_EL2_TDCC,
> +               .mask           = MDCR_EL2_TDCC,
> +               .behaviour      = BEHAVE_FORWARD_ANY,
> +       },
>  };
>
>  #define MCB(id, ...)                                           \
> @@ -277,6 +360,11 @@ static const enum cgt_group_id *coarse_control_combo[] = {
>         MCB(CGT_HCR_TPU_TICAB,          CGT_HCR_TPU, CGT_HCR_TICAB),
>         MCB(CGT_HCR_TPU_TOCU,           CGT_HCR_TPU, CGT_HCR_TOCU),
>         MCB(CGT_HCR_NV1_nNV2_ENSCXT,    CGT_HCR_NV1_nNV2, CGT_HCR_ENSCXT),
> +       MCB(CGT_MDCR_TPM_TPMCR,         CGT_MDCR_TPM, CGT_MDCR_TPMCR),
> +       MCB(CGT_MDCR_TDE_TDA,           CGT_MDCR_TDE, CGT_MDCR_TDA),
> +       MCB(CGT_MDCR_TDE_TDOSA,         CGT_MDCR_TDE, CGT_MDCR_TDOSA),
> +       MCB(CGT_MDCR_TDE_TDRA,          CGT_MDCR_TDE, CGT_MDCR_TDRA),
> +       MCB(CGT_MDCR_TDCC_TDE_TDA,      CGT_MDCR_TDCC, CGT_MDCR_TDE, CGT_MDCR_TDA),
>  };
>
>  typedef enum trap_behaviour (*complex_condition_check)(struct kvm_vcpu *);
> @@ -609,6 +697,186 @@ static const struct encoding_to_trap_config encoding_to_cgt[] __initconst = {
>         SR_TRAP(SYS_ERXPFGF_EL1,        CGT_HCR_nFIEN),
>         SR_TRAP(SYS_ERXPFGCTL_EL1,      CGT_HCR_nFIEN),
>         SR_TRAP(SYS_ERXPFGCDN_EL1,      CGT_HCR_nFIEN),
> +       SR_TRAP(SYS_PMCR_EL0,           CGT_MDCR_TPM_TPMCR),
> +       SR_TRAP(SYS_PMCNTENSET_EL0,     CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMCNTENCLR_EL0,     CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMOVSSET_EL0,       CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMOVSCLR_EL0,       CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMCEID0_EL0,        CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMCEID1_EL0,        CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMXEVTYPER_EL0,     CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMSWINC_EL0,        CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMSELR_EL0,         CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMXEVCNTR_EL0,      CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMCCNTR_EL0,        CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMUSERENR_EL0,      CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMINTENSET_EL1,     CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMINTENCLR_EL1,     CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMMIR_EL1,          CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVCNTRn_EL0(0),   CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVCNTRn_EL0(1),   CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVCNTRn_EL0(2),   CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVCNTRn_EL0(3),   CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVCNTRn_EL0(4),   CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVCNTRn_EL0(5),   CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVCNTRn_EL0(6),   CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVCNTRn_EL0(7),   CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVCNTRn_EL0(8),   CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVCNTRn_EL0(9),   CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVCNTRn_EL0(10),  CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVCNTRn_EL0(11),  CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVCNTRn_EL0(12),  CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVCNTRn_EL0(13),  CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVCNTRn_EL0(14),  CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVCNTRn_EL0(15),  CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVCNTRn_EL0(16),  CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVCNTRn_EL0(17),  CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVCNTRn_EL0(18),  CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVCNTRn_EL0(19),  CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVCNTRn_EL0(20),  CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVCNTRn_EL0(21),  CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVCNTRn_EL0(22),  CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVCNTRn_EL0(23),  CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVCNTRn_EL0(24),  CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVCNTRn_EL0(25),  CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVCNTRn_EL0(26),  CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVCNTRn_EL0(27),  CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVCNTRn_EL0(28),  CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVCNTRn_EL0(29),  CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVCNTRn_EL0(30),  CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVTYPERn_EL0(0),  CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVTYPERn_EL0(1),  CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVTYPERn_EL0(2),  CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVTYPERn_EL0(3),  CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVTYPERn_EL0(4),  CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVTYPERn_EL0(5),  CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVTYPERn_EL0(6),  CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVTYPERn_EL0(7),  CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVTYPERn_EL0(8),  CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVTYPERn_EL0(9),  CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVTYPERn_EL0(10), CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVTYPERn_EL0(11), CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVTYPERn_EL0(12), CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVTYPERn_EL0(13), CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVTYPERn_EL0(14), CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVTYPERn_EL0(15), CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVTYPERn_EL0(16), CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVTYPERn_EL0(17), CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVTYPERn_EL0(18), CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVTYPERn_EL0(19), CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVTYPERn_EL0(20), CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVTYPERn_EL0(21), CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVTYPERn_EL0(22), CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVTYPERn_EL0(23), CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVTYPERn_EL0(24), CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVTYPERn_EL0(25), CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVTYPERn_EL0(26), CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVTYPERn_EL0(27), CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVTYPERn_EL0(28), CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVTYPERn_EL0(29), CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMEVTYPERn_EL0(30), CGT_MDCR_TPM),
> +       SR_TRAP(SYS_PMCCFILTR_EL0,      CGT_MDCR_TPM),
> +       SR_TRAP(SYS_MDCCSR_EL0,         CGT_MDCR_TDCC_TDE_TDA),
> +       SR_TRAP(SYS_MDCCINT_EL1,        CGT_MDCR_TDCC_TDE_TDA),
> +       SR_TRAP(SYS_OSDTRRX_EL1,        CGT_MDCR_TDCC_TDE_TDA),
> +       SR_TRAP(SYS_OSDTRTX_EL1,        CGT_MDCR_TDCC_TDE_TDA),
> +       SR_TRAP(SYS_DBGDTR_EL0,         CGT_MDCR_TDCC_TDE_TDA),
> +       /*
> +        * Also covers DBGDTRRX_EL0, which has the same encoding as
> +        * SYS_DBGDTRTX_EL0...
> +        */
> +       SR_TRAP(SYS_DBGDTRTX_EL0,       CGT_MDCR_TDCC_TDE_TDA),
> +       SR_TRAP(SYS_MDSCR_EL1,          CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_OSECCR_EL1,         CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGBVRn_EL1(0),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGBVRn_EL1(1),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGBVRn_EL1(2),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGBVRn_EL1(3),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGBVRn_EL1(4),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGBVRn_EL1(5),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGBVRn_EL1(6),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGBVRn_EL1(7),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGBVRn_EL1(8),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGBVRn_EL1(9),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGBVRn_EL1(10),    CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGBVRn_EL1(11),    CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGBVRn_EL1(12),    CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGBVRn_EL1(13),    CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGBVRn_EL1(14),    CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGBVRn_EL1(15),    CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGBCRn_EL1(0),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGBCRn_EL1(1),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGBCRn_EL1(2),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGBCRn_EL1(3),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGBCRn_EL1(4),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGBCRn_EL1(5),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGBCRn_EL1(6),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGBCRn_EL1(7),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGBCRn_EL1(8),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGBCRn_EL1(9),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGBCRn_EL1(10),    CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGBCRn_EL1(11),    CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGBCRn_EL1(12),    CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGBCRn_EL1(13),    CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGBCRn_EL1(14),    CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGBCRn_EL1(15),    CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGWVRn_EL1(0),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGWVRn_EL1(1),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGWVRn_EL1(2),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGWVRn_EL1(3),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGWVRn_EL1(4),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGWVRn_EL1(5),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGWVRn_EL1(6),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGWVRn_EL1(7),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGWVRn_EL1(8),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGWVRn_EL1(9),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGWVRn_EL1(10),    CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGWVRn_EL1(11),    CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGWVRn_EL1(12),    CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGWVRn_EL1(13),    CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGWVRn_EL1(14),    CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGWVRn_EL1(15),    CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGWCRn_EL1(0),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGWCRn_EL1(1),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGWCRn_EL1(2),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGWCRn_EL1(3),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGWCRn_EL1(4),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGWCRn_EL1(5),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGWCRn_EL1(6),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGWCRn_EL1(7),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGWCRn_EL1(8),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGWCRn_EL1(9),     CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGWCRn_EL1(10),    CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGWCRn_EL1(11),    CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGWCRn_EL1(12),    CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGWCRn_EL1(13),    CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGWCRn_EL1(14),    CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGCLAIMSET_EL1,    CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGCLAIMCLR_EL1,    CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_DBGAUTHSTATUS_EL1,  CGT_MDCR_TDE_TDA),
> +       SR_TRAP(SYS_OSLAR_EL1,          CGT_MDCR_TDE_TDOSA),
> +       SR_TRAP(SYS_OSLSR_EL1,          CGT_MDCR_TDE_TDOSA),
> +       SR_TRAP(SYS_OSDLR_EL1,          CGT_MDCR_TDE_TDOSA),
> +       SR_TRAP(SYS_DBGPRCR_EL1,        CGT_MDCR_TDE_TDOSA),
> +       SR_TRAP(SYS_MDRAR_EL1,          CGT_MDCR_TDE_TDRA),
> +       SR_TRAP(SYS_PMBLIMITR_EL1,      CGT_MDCR_E2PB),
> +       SR_TRAP(SYS_PMBPTR_EL1,         CGT_MDCR_E2PB),
> +       SR_TRAP(SYS_PMBSR_EL1,          CGT_MDCR_E2PB),
> +       SR_TRAP(SYS_PMSCR_EL1,          CGT_MDCR_TPMS),
> +       SR_TRAP(SYS_PMSEVFR_EL1,        CGT_MDCR_TPMS),
> +       SR_TRAP(SYS_PMSFCR_EL1,         CGT_MDCR_TPMS),
> +       SR_TRAP(SYS_PMSICR_EL1,         CGT_MDCR_TPMS),
> +       SR_TRAP(SYS_PMSIDR_EL1,         CGT_MDCR_TPMS),
> +       SR_TRAP(SYS_PMSIRR_EL1,         CGT_MDCR_TPMS),
> +       SR_TRAP(SYS_PMSLATFR_EL1,       CGT_MDCR_TPMS),
> +       SR_TRAP(SYS_PMSNEVFR_EL1,       CGT_MDCR_TPMS),
> +       SR_TRAP(SYS_TRFCR_EL1,          CGT_MDCR_TTRF),
> +       SR_TRAP(SYS_TRBBASER_EL1,       CGT_MDCR_E2TB),
> +       SR_TRAP(SYS_TRBLIMITR_EL1,      CGT_MDCR_E2TB),
> +       SR_TRAP(SYS_TRBMAR_EL1,         CGT_MDCR_E2TB),
> +       SR_TRAP(SYS_TRBPTR_EL1,         CGT_MDCR_E2TB),
> +       SR_TRAP(SYS_TRBSR_EL1,          CGT_MDCR_E2TB),
> +       SR_TRAP(SYS_TRBTRG_EL1,         CGT_MDCR_E2TB),
>  };
>
>  static DEFINE_XARRAY(sr_forward_xa);
> --
> 2.34.1
>

Reviewed-by: Jing Zhang <jingzhangos@google.com>

Jing
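
To make the value/mask scheme above concrete, here is a minimal
userspace sketch of the check that a coarse trap_bits entry encodes.
The constants and helper name are illustrative assumptions, not the
kernel's definitions; note how an E2PB-style entry inverts the logic
by using .value = 0, so the trap fires when the field is zero.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative bit positions; assumptions, not kernel headers. */
#define MDCR_EL2_TPM		(UINT64_C(1) << 6)
#define MDCR_EL2_E2PB_SHIFT	12

struct trap_bits {
	uint64_t value;		/* expected state of the masked bits */
	uint64_t mask;		/* bits that control the trap */
};

/* The trap must be forwarded when the masked register equals 'value'. */
static bool must_forward(uint64_t mdcr_el2, const struct trap_bits *tb)
{
	return (mdcr_el2 & tb->mask) == tb->value;
}

int main(void)
{
	/* TPM traps when the bit is set... */
	const struct trap_bits tpm = {
		.value	= MDCR_EL2_TPM,
		.mask	= MDCR_EL2_TPM,
	};
	/* ...while E2PB traps when the field is zero. */
	const struct trap_bits e2pb = {
		.value	= 0,
		.mask	= UINT64_C(1) << MDCR_EL2_E2PB_SHIFT,
	};

	printf("TPM=1  -> forward=%d\n", must_forward(MDCR_EL2_TPM, &tpm));
	printf("E2PB=0 -> forward=%d\n", must_forward(0, &e2pb));
	return 0;
}

Both checks print 1: a positive-polarity bit set, or a
negative-polarity field left at zero, forwards the trap.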


* Re: [PATCH v4 18/28] KVM: arm64: nv: Add trap forwarding for CNTHCTL_EL2
  2023-08-15 18:38 ` [PATCH v4 18/28] KVM: arm64: nv: Add trap forwarding for CNTHCTL_EL2 Marc Zyngier
@ 2023-08-15 22:42   ` Jing Zhang
  0 siblings, 0 replies; 48+ messages in thread
From: Jing Zhang @ 2023-08-15 22:42 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, kvm, linux-arm-kernel, Catalin Marinas, Eric Auger,
	Mark Brown, Mark Rutland, Will Deacon, Alexandru Elisei,
	Andre Przywara, Chase Conklin, Ganapatrao Kulkarni, Darren Hart,
	Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
	Zenghui Yu

Hi Marc,

On Tue, Aug 15, 2023 at 11:47 AM Marc Zyngier <maz@kernel.org> wrote:
>
> Describe the CNTHCTL_EL2 register, and associate it with all the sysregs
> whose trapping it controls.
>
> Reviewed-by: Eric Auger <eric.auger@redhat.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/emulate-nested.c | 50 ++++++++++++++++++++++++++++++++-
>  1 file changed, 49 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> index 241e44eeed6d..860910386b5b 100644
> --- a/arch/arm64/kvm/emulate-nested.c
> +++ b/arch/arm64/kvm/emulate-nested.c
> @@ -100,9 +100,11 @@ enum cgt_group_id {
>
>         /*
>          * Anything after this point requires a callback evaluating a
> -        * complex trap condition. Hopefully we'll never need this...
> +        * complex trap condition. Ugly stuff.
>          */
>         __COMPLEX_CONDITIONS__,
> +       CGT_CNTHCTL_EL1PCTEN = __COMPLEX_CONDITIONS__,
> +       CGT_CNTHCTL_EL1PTEN,
>
>         /* Must be last */
>         __NR_CGT_GROUP_IDS__
> @@ -369,10 +371,51 @@ static const enum cgt_group_id *coarse_control_combo[] = {
>
>  typedef enum trap_behaviour (*complex_condition_check)(struct kvm_vcpu *);
>
> +/*
> + * Warning, maximum confusion ahead.
> + *
> + * When E2H=0, CNTHCTL_EL2[1:0] are defined as EL1PCEN:EL1PCTEN
> + * When E2H=1, CNTHCTL_EL2[11:10] are defined as EL1PTEN:EL1PCTEN
> + *
> + * Note the single letter difference? Yet, the bits have the same
> + * function despite a different layout and a different name.
> + *
> + * We don't try to reconcile this mess. We just use the E2H=0 bits
> + * to generate something that is in the E2H=1 format, and live with
> + * it. You're welcome.
> + */
> +static u64 get_sanitized_cnthctl(struct kvm_vcpu *vcpu)
> +{
> +       u64 val = __vcpu_sys_reg(vcpu, CNTHCTL_EL2);
> +
> +       if (!vcpu_el2_e2h_is_set(vcpu))
> +               val = (val & (CNTHCTL_EL1PCEN | CNTHCTL_EL1PCTEN)) << 10;
> +
> +       return val & ((CNTHCTL_EL1PCEN | CNTHCTL_EL1PCTEN) << 10);
> +}
> +
> +static enum trap_behaviour check_cnthctl_el1pcten(struct kvm_vcpu *vcpu)
> +{
> +       if (get_sanitized_cnthctl(vcpu) & (CNTHCTL_EL1PCTEN << 10))
> +               return BEHAVE_HANDLE_LOCALLY;
> +
> +       return BEHAVE_FORWARD_ANY;
> +}
> +
> +static enum trap_behaviour check_cnthctl_el1pten(struct kvm_vcpu *vcpu)
> +{
> +       if (get_sanitized_cnthctl(vcpu) & (CNTHCTL_EL1PCEN << 10))
> +               return BEHAVE_HANDLE_LOCALLY;
> +
> +       return BEHAVE_FORWARD_ANY;
> +}
> +
>  #define CCC(id, fn)                            \
>         [id - __COMPLEX_CONDITIONS__] = fn
>
>  static const complex_condition_check ccc[] = {
> +       CCC(CGT_CNTHCTL_EL1PCTEN, check_cnthctl_el1pcten),
> +       CCC(CGT_CNTHCTL_EL1PTEN, check_cnthctl_el1pten),
>  };
>
>  /*
> @@ -877,6 +920,11 @@ static const struct encoding_to_trap_config encoding_to_cgt[] __initconst = {
>         SR_TRAP(SYS_TRBPTR_EL1,         CGT_MDCR_E2TB),
>         SR_TRAP(SYS_TRBSR_EL1,          CGT_MDCR_E2TB),
>         SR_TRAP(SYS_TRBTRG_EL1,         CGT_MDCR_E2TB),
> +       SR_TRAP(SYS_CNTP_TVAL_EL0,      CGT_CNTHCTL_EL1PTEN),
> +       SR_TRAP(SYS_CNTP_CVAL_EL0,      CGT_CNTHCTL_EL1PTEN),
> +       SR_TRAP(SYS_CNTP_CTL_EL0,       CGT_CNTHCTL_EL1PTEN),
> +       SR_TRAP(SYS_CNTPCT_EL0,         CGT_CNTHCTL_EL1PCTEN),
> +       SR_TRAP(SYS_CNTPCTSS_EL0,       CGT_CNTHCTL_EL1PCTEN),
>  };
>
>  static DEFINE_XARRAY(sr_forward_xa);
> --
> 2.34.1
>

Reviewed-by: Jing Zhang <jingzhangos@google.com>

Jing
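
The E2H-dependent bit shuffling above is subtle enough to deserve a
standalone sketch. This is a minimal model of the normalisation,
assuming the E2H=0 bit positions given in the comment in the patch;
the helper name and freestanding constants are illustrative rather
than the kernel's.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* E2H=0 layout: CNTHCTL_EL2[1:0] = EL1PCEN:EL1PCTEN (per the patch). */
#define CNTHCTL_EL1PCTEN	(UINT64_C(1) << 0)
#define CNTHCTL_EL1PCEN		(UINT64_C(1) << 1)

/*
 * Normalise to the E2H=1 layout, where the same controls live at
 * CNTHCTL_EL2[11:10], and drop everything else.
 */
static uint64_t sanitize_cnthctl(uint64_t cnthctl, bool e2h)
{
	if (!e2h)
		cnthctl = (cnthctl & (CNTHCTL_EL1PCEN | CNTHCTL_EL1PCTEN)) << 10;

	return cnthctl & ((CNTHCTL_EL1PCEN | CNTHCTL_EL1PCTEN) << 10);
}

int main(void)
{
	/* An E2H=0 guest with EL1PCTEN set ends up with bit 10 set. */
	printf("%#llx\n",
	       (unsigned long long)sanitize_cnthctl(CNTHCTL_EL1PCTEN, false));
	/* An E2H=1 guest already has the bits in place. */
	printf("%#llx\n",
	       (unsigned long long)sanitize_cnthctl(UINT64_C(3) << 10, true));
	return 0;
}

Whatever the guest's E2H setting, the complex-condition callbacks only
ever look at bits [11:10], which is the whole point of the helper.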


* Re: [PATCH v4 19/28] KVM: arm64: nv: Add fine grained trap forwarding infrastructure
  2023-08-15 18:38 ` [PATCH v4 19/28] KVM: arm64: nv: Add fine grained trap forwarding infrastructure Marc Zyngier
@ 2023-08-15 22:44   ` Jing Zhang
  0 siblings, 0 replies; 48+ messages in thread
From: Jing Zhang @ 2023-08-15 22:44 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, kvm, linux-arm-kernel, Catalin Marinas, Eric Auger,
	Mark Brown, Mark Rutland, Will Deacon, Alexandru Elisei,
	Andre Przywara, Chase Conklin, Ganapatrao Kulkarni, Darren Hart,
	Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
	Zenghui Yu

Hi Marc,

On Tue, Aug 15, 2023 at 11:47 AM Marc Zyngier <maz@kernel.org> wrote:
>
> Fine Grained Traps are fun. Not.
>
> Implement the fine grained trap forwarding, reusing the Coarse Grained
> Traps infrastructure previously implemented.
>
> Each sysreg/instruction inserted in the xarray gets an FGT group
> (vaguely equivalent to a register number), a bit number in that register,
> and a polarity.
>
> It is then pretty easy to check the FGT state at handling time, just
> like we do for the coarse version (it is just faster).
>
> Reviewed-by: Eric Auger <eric.auger@redhat.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/emulate-nested.c | 90 +++++++++++++++++++++++++++++++--
>  1 file changed, 87 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> index 860910386b5b..0da9d92ed921 100644
> --- a/arch/arm64/kvm/emulate-nested.c
> +++ b/arch/arm64/kvm/emulate-nested.c
> @@ -423,16 +423,23 @@ static const complex_condition_check ccc[] = {
>   * following layout for each trapped sysreg:
>   *
>   * [9:0]       enum cgt_group_id (10 bits)
> - * [62:10]     Unused (53 bits)
> + * [13:10]     enum fgt_group_id (4 bits)
> + * [19:14]     bit number in the FGT register (6 bits)
> + * [20]                trap polarity (1 bit)
> + * [62:21]     Unused (42 bits)
>   * [63]                RES0 - Must be zero, as lost on insertion in the xarray
>   */
>  #define TC_CGT_BITS    10
> +#define TC_FGT_BITS    4
>
>  union trap_config {
>         u64     val;
>         struct {
>                 unsigned long   cgt:TC_CGT_BITS; /* Coarse Grained Trap id */
> -               unsigned long   unused:53;       /* Unused, should be zero */
> +               unsigned long   fgt:TC_FGT_BITS; /* Fine Grained Trap id */
> +               unsigned long   bit:6;           /* Bit number */
> +               unsigned long   pol:1;           /* Polarity */
> +               unsigned long   unused:42;       /* Unused, should be zero */
>                 unsigned long   mbz:1;           /* Must Be Zero */
>         };
>  };
> @@ -929,6 +936,28 @@ static const struct encoding_to_trap_config encoding_to_cgt[] __initconst = {
>
>  static DEFINE_XARRAY(sr_forward_xa);
>
> +enum fgt_group_id {
> +       __NO_FGT_GROUP__,
> +
> +       /* Must be last */
> +       __NR_FGT_GROUP_IDS__
> +};
> +
> +#define SR_FGT(sr, g, b, p)                                    \
> +       {                                                       \
> +               .encoding       = sr,                           \
> +               .end            = sr,                           \
> +               .tc             = {                             \
> +                       .fgt = g ## _GROUP,                     \
> +                       .bit = g ## _EL2_ ## b ## _SHIFT,       \
> +                       .pol = p,                               \
> +               },                                              \
> +               .line = __LINE__,                               \
> +       }
> +
> +static const struct encoding_to_trap_config encoding_to_fgt[] __initconst = {
> +};
> +
>  static union trap_config get_trap_config(u32 sysreg)
>  {
>         return (union trap_config) {
> @@ -957,6 +986,7 @@ int __init populate_nv_trap_config(void)
>
>         BUILD_BUG_ON(sizeof(union trap_config) != sizeof(void *));
>         BUILD_BUG_ON(__NR_CGT_GROUP_IDS__ > BIT(TC_CGT_BITS));
> +       BUILD_BUG_ON(__NR_FGT_GROUP_IDS__ > BIT(TC_FGT_BITS));
>
>         for (int i = 0; i < ARRAY_SIZE(encoding_to_cgt); i++) {
>                 const struct encoding_to_trap_config *cgt = &encoding_to_cgt[i];
> @@ -990,6 +1020,34 @@ int __init populate_nv_trap_config(void)
>         kvm_info("nv: %ld coarse grained trap handlers\n",
>                  ARRAY_SIZE(encoding_to_cgt));
>
> +       if (!cpus_have_final_cap(ARM64_HAS_FGT))
> +               goto check_mcb;
> +
> +       for (int i = 0; i < ARRAY_SIZE(encoding_to_fgt); i++) {
> +               const struct encoding_to_trap_config *fgt = &encoding_to_fgt[i];
> +               union trap_config tc;
> +
> +               if (fgt->tc.fgt >= __NR_FGT_GROUP_IDS__) {
> +                       ret = -EINVAL;
> +                       print_nv_trap_error(fgt, "Invalid FGT", ret);
> +               }
> +
> +               tc = get_trap_config(fgt->encoding);
> +
> +               if (tc.fgt) {
> +                       ret = -EINVAL;
> +                       print_nv_trap_error(fgt, "Duplicate FGT", ret);
> +               }
> +
> +               tc.val |= fgt->tc.val;
> +               xa_store(&sr_forward_xa, fgt->encoding,
> +                        xa_mk_value(tc.val), GFP_KERNEL);
> +       }
> +
> +       kvm_info("nv: %ld fine grained trap handlers\n",
> +                ARRAY_SIZE(encoding_to_fgt));
> +
> +check_mcb:
>         for (int id = __MULTIPLE_CONTROL_BITS__; id < __COMPLEX_CONDITIONS__; id++) {
>                 const enum cgt_group_id *cgids;
>
> @@ -1056,13 +1114,26 @@ static enum trap_behaviour compute_trap_behaviour(struct kvm_vcpu *vcpu,
>         return __compute_trap_behaviour(vcpu, tc.cgt, b);
>  }
>
> +static bool check_fgt_bit(u64 val, const union trap_config tc)
> +{
> +       return ((val >> tc.bit) & 1) == tc.pol;
> +}
> +
> +#define sanitised_sys_reg(vcpu, reg)                   \
> +       ({                                              \
> +               u64 __val;                              \
> +               __val = __vcpu_sys_reg(vcpu, reg);      \
> +               __val &= ~__ ## reg ## _RES0;           \
> +               (__val);                                \
> +       })
> +
>  bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
>  {
>         union trap_config tc;
>         enum trap_behaviour b;
>         bool is_read;
>         u32 sysreg;
> -       u64 esr;
> +       u64 esr, val;
>
>         if (!vcpu_has_nv(vcpu) || is_hyp_ctxt(vcpu))
>                 return false;
> @@ -1085,6 +1156,19 @@ bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
>         if (!tc.val)
>                 return false;
>
> +       switch ((enum fgt_group_id)tc.fgt) {
> +       case __NO_FGT_GROUP__:
> +               break;
> +
> +       case __NR_FGT_GROUP_IDS__:
> +               /* Something is really wrong, bail out */
> +               WARN_ONCE(1, "__NR_FGT_GROUP_IDS__");
> +               return false;
> +       }
> +
> +       if (tc.fgt != __NO_FGT_GROUP__ && check_fgt_bit(val, tc))
> +               goto inject;
> +
>         b = compute_trap_behaviour(vcpu, tc);
>
>         if (((b & BEHAVE_FORWARD_READ) && is_read) ||
> --
> 2.34.1
>

Reviewed-by: Jing Zhang <jingzhangos@google.com>

Jing
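
The packed descriptor and the polarity check are the heart of this
patch. The sketch below reproduces the documented bit layout in
standalone C (GCC-style bit-fields on uint64_t) so the mechanics can
be exercised in isolation; the example encoding values are made up.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Layout from the patch: cgt[9:0], fgt[13:10], bit[19:14], pol[20]. */
union trap_config {
	uint64_t val;
	struct {
		uint64_t cgt:10;	/* coarse grained trap id */
		uint64_t fgt:4;		/* fine grained trap group */
		uint64_t bit:6;		/* bit number in the FGT register */
		uint64_t pol:1;		/* trap polarity */
		uint64_t unused:42;
		uint64_t mbz:1;		/* must be zero (xarray storage) */
	};
};

/* Trap when the selected bit matches the configured polarity. */
static bool check_fgt_bit(uint64_t fgt_reg, union trap_config tc)
{
	return ((fgt_reg >> tc.bit) & 1) == tc.pol;
}

int main(void)
{
	/* Hypothetical entry: group 1, bit 3, positive polarity. */
	union trap_config tc = { .fgt = 1, .bit = 3, .pol = 1 };

	printf("bit set:   trap=%d\n", check_fgt_bit(UINT64_C(1) << 3, tc));
	printf("bit clear: trap=%d\n", check_fgt_bit(0, tc));
	return 0;
}

Storing tc.val as an xarray value is what forces the top bit to stay
clear, which is exactly what the mbz field in the layout documents.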


* Re: [PATCH v4 20/28] KVM: arm64: nv: Add trap forwarding for HFGxTR_EL2
  2023-08-15 18:38 ` [PATCH v4 20/28] KVM: arm64: nv: Add trap forwarding for HFGxTR_EL2 Marc Zyngier
@ 2023-08-15 22:51   ` Jing Zhang
  0 siblings, 0 replies; 48+ messages in thread
From: Jing Zhang @ 2023-08-15 22:51 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, kvm, linux-arm-kernel, Catalin Marinas, Eric Auger,
	Mark Brown, Mark Rutland, Will Deacon, Alexandru Elisei,
	Andre Przywara, Chase Conklin, Ganapatrao Kulkarni, Darren Hart,
	Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
	Zenghui Yu

Hi Marc,

On Tue, Aug 15, 2023 at 11:47 AM Marc Zyngier <maz@kernel.org> wrote:
>
> Implement forwarding for the traps described by HFGxTR_EL2, reusing
> the Fine Grained Trap infrastructure introduced previously.
>
> Reviewed-by: Eric Auger <eric.auger@redhat.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/emulate-nested.c | 71 +++++++++++++++++++++++++++++++++
>  1 file changed, 71 insertions(+)
>
> diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> index 0da9d92ed921..0e34797515b6 100644
> --- a/arch/arm64/kvm/emulate-nested.c
> +++ b/arch/arm64/kvm/emulate-nested.c
> @@ -938,6 +938,7 @@ static DEFINE_XARRAY(sr_forward_xa);
>
>  enum fgt_group_id {
>         __NO_FGT_GROUP__,
> +       HFGxTR_GROUP,
>
>         /* Must be last */
>         __NR_FGT_GROUP_IDS__
> @@ -956,6 +957,69 @@ enum fgt_group_id {
>         }
>
>  static const struct encoding_to_trap_config encoding_to_fgt[] __initconst = {
> +       /* HFGRTR_EL2, HFGWTR_EL2 */
> +       SR_FGT(SYS_TPIDR2_EL0,          HFGxTR, nTPIDR2_EL0, 0),
> +       SR_FGT(SYS_SMPRI_EL1,           HFGxTR, nSMPRI_EL1, 0),
> +       SR_FGT(SYS_ACCDATA_EL1,         HFGxTR, nACCDATA_EL1, 0),
> +       SR_FGT(SYS_ERXADDR_EL1,         HFGxTR, ERXADDR_EL1, 1),
> +       SR_FGT(SYS_ERXPFGCDN_EL1,       HFGxTR, ERXPFGCDN_EL1, 1),
> +       SR_FGT(SYS_ERXPFGCTL_EL1,       HFGxTR, ERXPFGCTL_EL1, 1),
> +       SR_FGT(SYS_ERXPFGF_EL1,         HFGxTR, ERXPFGF_EL1, 1),
> +       SR_FGT(SYS_ERXMISC0_EL1,        HFGxTR, ERXMISCn_EL1, 1),
> +       SR_FGT(SYS_ERXMISC1_EL1,        HFGxTR, ERXMISCn_EL1, 1),
> +       SR_FGT(SYS_ERXMISC2_EL1,        HFGxTR, ERXMISCn_EL1, 1),
> +       SR_FGT(SYS_ERXMISC3_EL1,        HFGxTR, ERXMISCn_EL1, 1),
> +       SR_FGT(SYS_ERXSTATUS_EL1,       HFGxTR, ERXSTATUS_EL1, 1),
> +       SR_FGT(SYS_ERXCTLR_EL1,         HFGxTR, ERXCTLR_EL1, 1),
> +       SR_FGT(SYS_ERXFR_EL1,           HFGxTR, ERXFR_EL1, 1),
> +       SR_FGT(SYS_ERRSELR_EL1,         HFGxTR, ERRSELR_EL1, 1),
> +       SR_FGT(SYS_ERRIDR_EL1,          HFGxTR, ERRIDR_EL1, 1),
> +       SR_FGT(SYS_ICC_IGRPEN0_EL1,     HFGxTR, ICC_IGRPENn_EL1, 1),
> +       SR_FGT(SYS_ICC_IGRPEN1_EL1,     HFGxTR, ICC_IGRPENn_EL1, 1),
> +       SR_FGT(SYS_VBAR_EL1,            HFGxTR, VBAR_EL1, 1),
> +       SR_FGT(SYS_TTBR1_EL1,           HFGxTR, TTBR1_EL1, 1),
> +       SR_FGT(SYS_TTBR0_EL1,           HFGxTR, TTBR0_EL1, 1),
> +       SR_FGT(SYS_TPIDR_EL0,           HFGxTR, TPIDR_EL0, 1),
> +       SR_FGT(SYS_TPIDRRO_EL0,         HFGxTR, TPIDRRO_EL0, 1),
> +       SR_FGT(SYS_TPIDR_EL1,           HFGxTR, TPIDR_EL1, 1),
> +       SR_FGT(SYS_TCR_EL1,             HFGxTR, TCR_EL1, 1),
> +       SR_FGT(SYS_SCXTNUM_EL0,         HFGxTR, SCXTNUM_EL0, 1),
> +       SR_FGT(SYS_SCXTNUM_EL1,         HFGxTR, SCXTNUM_EL1, 1),
> +       SR_FGT(SYS_SCTLR_EL1,           HFGxTR, SCTLR_EL1, 1),
> +       SR_FGT(SYS_REVIDR_EL1,          HFGxTR, REVIDR_EL1, 1),
> +       SR_FGT(SYS_PAR_EL1,             HFGxTR, PAR_EL1, 1),
> +       SR_FGT(SYS_MPIDR_EL1,           HFGxTR, MPIDR_EL1, 1),
> +       SR_FGT(SYS_MIDR_EL1,            HFGxTR, MIDR_EL1, 1),
> +       SR_FGT(SYS_MAIR_EL1,            HFGxTR, MAIR_EL1, 1),
> +       SR_FGT(SYS_LORSA_EL1,           HFGxTR, LORSA_EL1, 1),
> +       SR_FGT(SYS_LORN_EL1,            HFGxTR, LORN_EL1, 1),
> +       SR_FGT(SYS_LORID_EL1,           HFGxTR, LORID_EL1, 1),
> +       SR_FGT(SYS_LOREA_EL1,           HFGxTR, LOREA_EL1, 1),
> +       SR_FGT(SYS_LORC_EL1,            HFGxTR, LORC_EL1, 1),
> +       SR_FGT(SYS_ISR_EL1,             HFGxTR, ISR_EL1, 1),
> +       SR_FGT(SYS_FAR_EL1,             HFGxTR, FAR_EL1, 1),
> +       SR_FGT(SYS_ESR_EL1,             HFGxTR, ESR_EL1, 1),
> +       SR_FGT(SYS_DCZID_EL0,           HFGxTR, DCZID_EL0, 1),
> +       SR_FGT(SYS_CTR_EL0,             HFGxTR, CTR_EL0, 1),
> +       SR_FGT(SYS_CSSELR_EL1,          HFGxTR, CSSELR_EL1, 1),
> +       SR_FGT(SYS_CPACR_EL1,           HFGxTR, CPACR_EL1, 1),
> +       SR_FGT(SYS_CONTEXTIDR_EL1,      HFGxTR, CONTEXTIDR_EL1, 1),
> +       SR_FGT(SYS_CLIDR_EL1,           HFGxTR, CLIDR_EL1, 1),
> +       SR_FGT(SYS_CCSIDR_EL1,          HFGxTR, CCSIDR_EL1, 1),
> +       SR_FGT(SYS_APIBKEYLO_EL1,       HFGxTR, APIBKey, 1),
> +       SR_FGT(SYS_APIBKEYHI_EL1,       HFGxTR, APIBKey, 1),
> +       SR_FGT(SYS_APIAKEYLO_EL1,       HFGxTR, APIAKey, 1),
> +       SR_FGT(SYS_APIAKEYHI_EL1,       HFGxTR, APIAKey, 1),
> +       SR_FGT(SYS_APGAKEYLO_EL1,       HFGxTR, APGAKey, 1),
> +       SR_FGT(SYS_APGAKEYHI_EL1,       HFGxTR, APGAKey, 1),
> +       SR_FGT(SYS_APDBKEYLO_EL1,       HFGxTR, APDBKey, 1),
> +       SR_FGT(SYS_APDBKEYHI_EL1,       HFGxTR, APDBKey, 1),
> +       SR_FGT(SYS_APDAKEYLO_EL1,       HFGxTR, APDAKey, 1),
> +       SR_FGT(SYS_APDAKEYHI_EL1,       HFGxTR, APDAKey, 1),
> +       SR_FGT(SYS_AMAIR_EL1,           HFGxTR, AMAIR_EL1, 1),
> +       SR_FGT(SYS_AIDR_EL1,            HFGxTR, AIDR_EL1, 1),
> +       SR_FGT(SYS_AFSR1_EL1,           HFGxTR, AFSR1_EL1, 1),
> +       SR_FGT(SYS_AFSR0_EL1,           HFGxTR, AFSR0_EL1, 1),
>  };
>
>  static union trap_config get_trap_config(u32 sysreg)
> @@ -1160,6 +1224,13 @@ bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
>         case __NO_FGT_GROUP__:
>                 break;
>
> +       case HFGxTR_GROUP:
> +               if (is_read)
> +                       val = sanitised_sys_reg(vcpu, HFGRTR_EL2);
> +               else
> +                       val = sanitised_sys_reg(vcpu, HFGWTR_EL2);
> +               break;
> +
>         case __NR_FGT_GROUP_IDS__:
>                 /* Something is really wrong, bail out */
>                 WARN_ONCE(1, "__NR_FGT_GROUP_IDS__");
> --
> 2.34.1
>

Reviewed-by: Jing Zhang <jingzhangos@google.com>

Jing
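
Two details of the HFGxTR handling are worth a standalone
illustration: the trap register is chosen by the access direction, and
RES0 bits are masked off before any polarity check. The sketch below
models both; the register values and the RES0 mask are invented for
the example.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Invented RES0 mask; the kernel derives these from the architecture. */
#define EXAMPLE_RES0	(UINT64_C(0xff) << 56)

struct fgt_regs {
	uint64_t hfgrtr;	/* read traps */
	uint64_t hfgwtr;	/* write traps */
};

/* Drop RES0 bits so stale state can never satisfy a polarity check. */
static uint64_t sanitised(uint64_t val, uint64_t res0)
{
	return val & ~res0;
}

/* Reads consult HFGRTR_EL2, writes consult HFGWTR_EL2. */
static uint64_t pick_fgt_reg(const struct fgt_regs *r, bool is_read)
{
	return sanitised(is_read ? r->hfgrtr : r->hfgwtr, EXAMPLE_RES0);
}

int main(void)
{
	struct fgt_regs r = {
		.hfgrtr	= UINT64_C(1) << 0,			/* trap reads */
		.hfgwtr	= (UINT64_C(1) << 1) | EXAMPLE_RES0,	/* RES0 junk */
	};

	printf("read:  %#llx\n", (unsigned long long)pick_fgt_reg(&r, true));
	printf("write: %#llx\n", (unsigned long long)pick_fgt_reg(&r, false));
	return 0;
}

The RES0 junk in the write register vanishes before the check, which
is what the sanitised_sys_reg() macro in the previous patch ensures.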


* Re: [PATCH v4 21/28] KVM: arm64: nv: Add trap forwarding for HFGITR_EL2
  2023-08-15 18:38 ` [PATCH v4 21/28] KVM: arm64: nv: Add trap forwarding for HFGITR_EL2 Marc Zyngier
@ 2023-08-15 22:55   ` Jing Zhang
  0 siblings, 0 replies; 48+ messages in thread
From: Jing Zhang @ 2023-08-15 22:55 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, kvm, linux-arm-kernel, Catalin Marinas, Eric Auger,
	Mark Brown, Mark Rutland, Will Deacon, Alexandru Elisei,
	Andre Przywara, Chase Conklin, Ganapatrao Kulkarni, Darren Hart,
	Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
	Zenghui Yu

Hi Marc,

On Tue, Aug 15, 2023 at 11:47 AM Marc Zyngier <maz@kernel.org> wrote:
>
> Similarly, implement the trap forwarding for instructions affected
> by HFGITR_EL2.
>
> Note that the TLBI*nXS instructions should be affected by HCRX_EL2,
> which will be dealt with down the line. Also, ERET* and SVC traps
> are handled separately.
>
> Reviewed-by: Eric Auger <eric.auger@redhat.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_arm.h |   4 ++
>  arch/arm64/kvm/emulate-nested.c  | 109 +++++++++++++++++++++++++++++++
>  2 files changed, 113 insertions(+)
>
> diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> index 85908aa18908..809bc86acefd 100644
> --- a/arch/arm64/include/asm/kvm_arm.h
> +++ b/arch/arm64/include/asm/kvm_arm.h
> @@ -354,6 +354,10 @@
>  #define __HFGWTR_EL2_MASK      GENMASK(49, 0)
>  #define __HFGWTR_EL2_nMASK     (GENMASK(55, 54) | BIT(50))
>
> +#define __HFGITR_EL2_RES0      GENMASK(63, 57)
> +#define __HFGITR_EL2_MASK      GENMASK(54, 0)
> +#define __HFGITR_EL2_nMASK     GENMASK(56, 55)
> +
>  /* Hyp Prefetch Fault Address Register (HPFAR/HDFAR) */
>  #define HPFAR_MASK     (~UL(0xf))
>  /*
> diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> index 0e34797515b6..a1a7792db412 100644
> --- a/arch/arm64/kvm/emulate-nested.c
> +++ b/arch/arm64/kvm/emulate-nested.c
> @@ -939,6 +939,7 @@ static DEFINE_XARRAY(sr_forward_xa);
>  enum fgt_group_id {
>         __NO_FGT_GROUP__,
>         HFGxTR_GROUP,
> +       HFGITR_GROUP,
>
>         /* Must be last */
>         __NR_FGT_GROUP_IDS__
> @@ -1020,6 +1021,110 @@ static const struct encoding_to_trap_config encoding_to_fgt[] __initconst = {
>         SR_FGT(SYS_AIDR_EL1,            HFGxTR, AIDR_EL1, 1),
>         SR_FGT(SYS_AFSR1_EL1,           HFGxTR, AFSR1_EL1, 1),
>         SR_FGT(SYS_AFSR0_EL1,           HFGxTR, AFSR0_EL1, 1),
> +       /* HFGITR_EL2 */
> +       SR_FGT(OP_BRB_IALL,             HFGITR, nBRBIALL, 0),
> +       SR_FGT(OP_BRB_INJ,              HFGITR, nBRBINJ, 0),
> +       SR_FGT(SYS_DC_CVAC,             HFGITR, DCCVAC, 1),
> +       SR_FGT(SYS_DC_CGVAC,            HFGITR, DCCVAC, 1),
> +       SR_FGT(SYS_DC_CGDVAC,           HFGITR, DCCVAC, 1),
> +       SR_FGT(OP_CPP_RCTX,             HFGITR, CPPRCTX, 1),
> +       SR_FGT(OP_DVP_RCTX,             HFGITR, DVPRCTX, 1),
> +       SR_FGT(OP_CFP_RCTX,             HFGITR, CFPRCTX, 1),
> +       SR_FGT(OP_TLBI_VAALE1,          HFGITR, TLBIVAALE1, 1),
> +       SR_FGT(OP_TLBI_VALE1,           HFGITR, TLBIVALE1, 1),
> +       SR_FGT(OP_TLBI_VAAE1,           HFGITR, TLBIVAAE1, 1),
> +       SR_FGT(OP_TLBI_ASIDE1,          HFGITR, TLBIASIDE1, 1),
> +       SR_FGT(OP_TLBI_VAE1,            HFGITR, TLBIVAE1, 1),
> +       SR_FGT(OP_TLBI_VMALLE1,         HFGITR, TLBIVMALLE1, 1),
> +       SR_FGT(OP_TLBI_RVAALE1,         HFGITR, TLBIRVAALE1, 1),
> +       SR_FGT(OP_TLBI_RVALE1,          HFGITR, TLBIRVALE1, 1),
> +       SR_FGT(OP_TLBI_RVAAE1,          HFGITR, TLBIRVAAE1, 1),
> +       SR_FGT(OP_TLBI_RVAE1,           HFGITR, TLBIRVAE1, 1),
> +       SR_FGT(OP_TLBI_RVAALE1IS,       HFGITR, TLBIRVAALE1IS, 1),
> +       SR_FGT(OP_TLBI_RVALE1IS,        HFGITR, TLBIRVALE1IS, 1),
> +       SR_FGT(OP_TLBI_RVAAE1IS,        HFGITR, TLBIRVAAE1IS, 1),
> +       SR_FGT(OP_TLBI_RVAE1IS,         HFGITR, TLBIRVAE1IS, 1),
> +       SR_FGT(OP_TLBI_VAALE1IS,        HFGITR, TLBIVAALE1IS, 1),
> +       SR_FGT(OP_TLBI_VALE1IS,         HFGITR, TLBIVALE1IS, 1),
> +       SR_FGT(OP_TLBI_VAAE1IS,         HFGITR, TLBIVAAE1IS, 1),
> +       SR_FGT(OP_TLBI_ASIDE1IS,        HFGITR, TLBIASIDE1IS, 1),
> +       SR_FGT(OP_TLBI_VAE1IS,          HFGITR, TLBIVAE1IS, 1),
> +       SR_FGT(OP_TLBI_VMALLE1IS,       HFGITR, TLBIVMALLE1IS, 1),
> +       SR_FGT(OP_TLBI_RVAALE1OS,       HFGITR, TLBIRVAALE1OS, 1),
> +       SR_FGT(OP_TLBI_RVALE1OS,        HFGITR, TLBIRVALE1OS, 1),
> +       SR_FGT(OP_TLBI_RVAAE1OS,        HFGITR, TLBIRVAAE1OS, 1),
> +       SR_FGT(OP_TLBI_RVAE1OS,         HFGITR, TLBIRVAE1OS, 1),
> +       SR_FGT(OP_TLBI_VAALE1OS,        HFGITR, TLBIVAALE1OS, 1),
> +       SR_FGT(OP_TLBI_VALE1OS,         HFGITR, TLBIVALE1OS, 1),
> +       SR_FGT(OP_TLBI_VAAE1OS,         HFGITR, TLBIVAAE1OS, 1),
> +       SR_FGT(OP_TLBI_ASIDE1OS,        HFGITR, TLBIASIDE1OS, 1),
> +       SR_FGT(OP_TLBI_VAE1OS,          HFGITR, TLBIVAE1OS, 1),
> +       SR_FGT(OP_TLBI_VMALLE1OS,       HFGITR, TLBIVMALLE1OS, 1),
> +       /* FIXME: nXS variants must be checked against HCRX_EL2.FGTnXS */
> +       SR_FGT(OP_TLBI_VAALE1NXS,       HFGITR, TLBIVAALE1, 1),
> +       SR_FGT(OP_TLBI_VALE1NXS,        HFGITR, TLBIVALE1, 1),
> +       SR_FGT(OP_TLBI_VAAE1NXS,        HFGITR, TLBIVAAE1, 1),
> +       SR_FGT(OP_TLBI_ASIDE1NXS,       HFGITR, TLBIASIDE1, 1),
> +       SR_FGT(OP_TLBI_VAE1NXS,         HFGITR, TLBIVAE1, 1),
> +       SR_FGT(OP_TLBI_VMALLE1NXS,      HFGITR, TLBIVMALLE1, 1),
> +       SR_FGT(OP_TLBI_RVAALE1NXS,      HFGITR, TLBIRVAALE1, 1),
> +       SR_FGT(OP_TLBI_RVALE1NXS,       HFGITR, TLBIRVALE1, 1),
> +       SR_FGT(OP_TLBI_RVAAE1NXS,       HFGITR, TLBIRVAAE1, 1),
> +       SR_FGT(OP_TLBI_RVAE1NXS,        HFGITR, TLBIRVAE1, 1),
> +       SR_FGT(OP_TLBI_RVAALE1ISNXS,    HFGITR, TLBIRVAALE1IS, 1),
> +       SR_FGT(OP_TLBI_RVALE1ISNXS,     HFGITR, TLBIRVALE1IS, 1),
> +       SR_FGT(OP_TLBI_RVAAE1ISNXS,     HFGITR, TLBIRVAAE1IS, 1),
> +       SR_FGT(OP_TLBI_RVAE1ISNXS,      HFGITR, TLBIRVAE1IS, 1),
> +       SR_FGT(OP_TLBI_VAALE1ISNXS,     HFGITR, TLBIVAALE1IS, 1),
> +       SR_FGT(OP_TLBI_VALE1ISNXS,      HFGITR, TLBIVALE1IS, 1),
> +       SR_FGT(OP_TLBI_VAAE1ISNXS,      HFGITR, TLBIVAAE1IS, 1),
> +       SR_FGT(OP_TLBI_ASIDE1ISNXS,     HFGITR, TLBIASIDE1IS, 1),
> +       SR_FGT(OP_TLBI_VAE1ISNXS,       HFGITR, TLBIVAE1IS, 1),
> +       SR_FGT(OP_TLBI_VMALLE1ISNXS,    HFGITR, TLBIVMALLE1IS, 1),
> +       SR_FGT(OP_TLBI_RVAALE1OSNXS,    HFGITR, TLBIRVAALE1OS, 1),
> +       SR_FGT(OP_TLBI_RVALE1OSNXS,     HFGITR, TLBIRVALE1OS, 1),
> +       SR_FGT(OP_TLBI_RVAAE1OSNXS,     HFGITR, TLBIRVAAE1OS, 1),
> +       SR_FGT(OP_TLBI_RVAE1OSNXS,      HFGITR, TLBIRVAE1OS, 1),
> +       SR_FGT(OP_TLBI_VAALE1OSNXS,     HFGITR, TLBIVAALE1OS, 1),
> +       SR_FGT(OP_TLBI_VALE1OSNXS,      HFGITR, TLBIVALE1OS, 1),
> +       SR_FGT(OP_TLBI_VAAE1OSNXS,      HFGITR, TLBIVAAE1OS, 1),
> +       SR_FGT(OP_TLBI_ASIDE1OSNXS,     HFGITR, TLBIASIDE1OS, 1),
> +       SR_FGT(OP_TLBI_VAE1OSNXS,       HFGITR, TLBIVAE1OS, 1),
> +       SR_FGT(OP_TLBI_VMALLE1OSNXS,    HFGITR, TLBIVMALLE1OS, 1),
> +       SR_FGT(OP_AT_S1E1WP,            HFGITR, ATS1E1WP, 1),
> +       SR_FGT(OP_AT_S1E1RP,            HFGITR, ATS1E1RP, 1),
> +       SR_FGT(OP_AT_S1E0W,             HFGITR, ATS1E0W, 1),
> +       SR_FGT(OP_AT_S1E0R,             HFGITR, ATS1E0R, 1),
> +       SR_FGT(OP_AT_S1E1W,             HFGITR, ATS1E1W, 1),
> +       SR_FGT(OP_AT_S1E1R,             HFGITR, ATS1E1R, 1),
> +       SR_FGT(SYS_DC_ZVA,              HFGITR, DCZVA, 1),
> +       SR_FGT(SYS_DC_GVA,              HFGITR, DCZVA, 1),
> +       SR_FGT(SYS_DC_GZVA,             HFGITR, DCZVA, 1),
> +       SR_FGT(SYS_DC_CIVAC,            HFGITR, DCCIVAC, 1),
> +       SR_FGT(SYS_DC_CIGVAC,           HFGITR, DCCIVAC, 1),
> +       SR_FGT(SYS_DC_CIGDVAC,          HFGITR, DCCIVAC, 1),
> +       SR_FGT(SYS_DC_CVADP,            HFGITR, DCCVADP, 1),
> +       SR_FGT(SYS_DC_CGVADP,           HFGITR, DCCVADP, 1),
> +       SR_FGT(SYS_DC_CGDVADP,          HFGITR, DCCVADP, 1),
> +       SR_FGT(SYS_DC_CVAP,             HFGITR, DCCVAP, 1),
> +       SR_FGT(SYS_DC_CGVAP,            HFGITR, DCCVAP, 1),
> +       SR_FGT(SYS_DC_CGDVAP,           HFGITR, DCCVAP, 1),
> +       SR_FGT(SYS_DC_CVAU,             HFGITR, DCCVAU, 1),
> +       SR_FGT(SYS_DC_CISW,             HFGITR, DCCISW, 1),
> +       SR_FGT(SYS_DC_CIGSW,            HFGITR, DCCISW, 1),
> +       SR_FGT(SYS_DC_CIGDSW,           HFGITR, DCCISW, 1),
> +       SR_FGT(SYS_DC_CSW,              HFGITR, DCCSW, 1),
> +       SR_FGT(SYS_DC_CGSW,             HFGITR, DCCSW, 1),
> +       SR_FGT(SYS_DC_CGDSW,            HFGITR, DCCSW, 1),
> +       SR_FGT(SYS_DC_ISW,              HFGITR, DCISW, 1),
> +       SR_FGT(SYS_DC_IGSW,             HFGITR, DCISW, 1),
> +       SR_FGT(SYS_DC_IGDSW,            HFGITR, DCISW, 1),
> +       SR_FGT(SYS_DC_IVAC,             HFGITR, DCIVAC, 1),
> +       SR_FGT(SYS_DC_IGVAC,            HFGITR, DCIVAC, 1),
> +       SR_FGT(SYS_DC_IGDVAC,           HFGITR, DCIVAC, 1),
> +       SR_FGT(SYS_IC_IVAU,             HFGITR, ICIVAU, 1),
> +       SR_FGT(SYS_IC_IALLU,            HFGITR, ICIALLU, 1),
> +       SR_FGT(SYS_IC_IALLUIS,          HFGITR, ICIALLUIS, 1),
>  };
>
>  static union trap_config get_trap_config(u32 sysreg)
> @@ -1231,6 +1336,10 @@ bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
>                         val = sanitised_sys_reg(vcpu, HFGWTR_EL2);
>                 break;
>
> +       case HFGITR_GROUP:
> +               val = sanitised_sys_reg(vcpu, HFGITR_EL2);
> +               break;
> +
>         case __NR_FGT_GROUP_IDS__:
>                 /* Something is really wrong, bail out */
>                 WARN_ONCE(1, "__NR_FGT_GROUP_IDS__");
> --
> 2.34.1
>

Reviewed-by: Jing Zhang <jingzhangos@google.com>

Jing
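
One subtlety in the table above is the mix of polarities: most
HFGITR_EL2 bits trap when set, while the nBRB* bits trap when clear
(an SR_FGT polarity argument of 0). A tiny sketch with made-up bit
positions:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical shifts; real positions come from the sysreg definitions. */
enum { DCCVAC_SHIFT = 4, nBRBIALL_SHIFT = 56 };

/* pol=1 bits trap when set; pol=0 ("nXXX") bits trap when clear. */
static bool traps(uint64_t hfgitr, unsigned int shift, unsigned int pol)
{
	return ((hfgitr >> shift) & 1) == pol;
}

int main(void)
{
	uint64_t hfgitr = 0;	/* everything clear */

	/* DC CVAC is not trapped (DCCVAC=0)... */
	printf("DC CVAC:  %d\n", traps(hfgitr, DCCVAC_SHIFT, 1));
	/* ...but BRB IALL is (nBRBIALL=0 means "trap"). */
	printf("BRB IALL: %d\n", traps(hfgitr, nBRBIALL_SHIFT, 0));
	return 0;
}

An all-zeroes HFGITR_EL2 thus traps the negative-polarity features by
default, which appears to be the architectural intent: a hypervisor
unaware of a new feature keeps it trapped until it opts out.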


* Re: [PATCH v4 22/28] KVM: arm64: nv: Add trap forwarding for HDFGxTR_EL2
  2023-08-15 18:38 ` [PATCH v4 22/28] KVM: arm64: nv: Add trap forwarding for HDFGxTR_EL2 Marc Zyngier
@ 2023-08-15 23:10   ` Jing Zhang
  0 siblings, 0 replies; 48+ messages in thread
From: Jing Zhang @ 2023-08-15 23:10 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, kvm, linux-arm-kernel, Catalin Marinas, Eric Auger,
	Mark Brown, Mark Rutland, Will Deacon, Alexandru Elisei,
	Andre Przywara, Chase Conklin, Ganapatrao Kulkarni, Darren Hart,
	Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
	Zenghui Yu

Hi Marc,

On Tue, Aug 15, 2023 at 11:47 AM Marc Zyngier <maz@kernel.org> wrote:
>
> ... and finally, the Debug version of FGT, with its *enormous*
> list of trapped registers.
>
> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> Reviewed-by: Eric Auger <eric.auger@redhat.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_arm.h |  11 +
>  arch/arm64/kvm/emulate-nested.c  | 474 +++++++++++++++++++++++++++++++
>  2 files changed, 485 insertions(+)
>
> diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> index 809bc86acefd..d229f238c3b6 100644
> --- a/arch/arm64/include/asm/kvm_arm.h
> +++ b/arch/arm64/include/asm/kvm_arm.h
> @@ -358,6 +358,17 @@
>  #define __HFGITR_EL2_MASK      GENMASK(54, 0)
>  #define __HFGITR_EL2_nMASK     GENMASK(56, 55)
>
> +#define __HDFGRTR_EL2_RES0     (BIT(49) | BIT(42) | GENMASK(39, 38) |  \
> +                                GENMASK(21, 20) | BIT(8))
> +#define __HDFGRTR_EL2_MASK     ~__HDFGRTR_EL2_nMASK
> +#define __HDFGRTR_EL2_nMASK    GENMASK(62, 59)
> +
> +#define __HDFGWTR_EL2_RES0     (BIT(63) | GENMASK(59, 58) | BIT(51) | BIT(47) | \
> +                                BIT(43) | GENMASK(40, 38) | BIT(34) | BIT(30) | \
> +                                BIT(22) | BIT(9) | BIT(6))
> +#define __HDFGWTR_EL2_MASK     ~__HDFGWTR_EL2_nMASK
> +#define __HDFGWTR_EL2_nMASK    GENMASK(62, 60)
> +
>  /* Hyp Prefetch Fault Address Register (HPFAR/HDFAR) */
>  #define HPFAR_MASK     (~UL(0xf))
>  /*
> diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> index a1a7792db412..c9662f9a345e 100644
> --- a/arch/arm64/kvm/emulate-nested.c
> +++ b/arch/arm64/kvm/emulate-nested.c
> @@ -939,6 +939,8 @@ static DEFINE_XARRAY(sr_forward_xa);
>  enum fgt_group_id {
>         __NO_FGT_GROUP__,
>         HFGxTR_GROUP,
> +       HDFGRTR_GROUP,
> +       HDFGWTR_GROUP,
>         HFGITR_GROUP,
>
>         /* Must be last */
> @@ -1125,6 +1127,470 @@ static const struct encoding_to_trap_config encoding_to_fgt[] __initconst = {
>         SR_FGT(SYS_IC_IVAU,             HFGITR, ICIVAU, 1),
>         SR_FGT(SYS_IC_IALLU,            HFGITR, ICIALLU, 1),
>         SR_FGT(SYS_IC_IALLUIS,          HFGITR, ICIALLUIS, 1),
> +       /* HDFGRTR_EL2 */
> +       SR_FGT(SYS_PMBIDR_EL1,          HDFGRTR, PMBIDR_EL1, 1),
> +       SR_FGT(SYS_PMSNEVFR_EL1,        HDFGRTR, nPMSNEVFR_EL1, 0),
> +       SR_FGT(SYS_BRBINF_EL1(0),       HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBINF_EL1(1),       HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBINF_EL1(2),       HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBINF_EL1(3),       HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBINF_EL1(4),       HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBINF_EL1(5),       HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBINF_EL1(6),       HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBINF_EL1(7),       HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBINF_EL1(8),       HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBINF_EL1(9),       HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBINF_EL1(10),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBINF_EL1(11),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBINF_EL1(12),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBINF_EL1(13),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBINF_EL1(14),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBINF_EL1(15),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBINF_EL1(16),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBINF_EL1(17),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBINF_EL1(18),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBINF_EL1(19),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBINF_EL1(20),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBINF_EL1(21),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBINF_EL1(22),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBINF_EL1(23),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBINF_EL1(24),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBINF_EL1(25),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBINF_EL1(26),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBINF_EL1(27),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBINF_EL1(28),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBINF_EL1(29),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBINF_EL1(30),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBINF_EL1(31),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBINFINJ_EL1,       HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBSRC_EL1(0),       HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBSRC_EL1(1),       HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBSRC_EL1(2),       HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBSRC_EL1(3),       HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBSRC_EL1(4),       HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBSRC_EL1(5),       HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBSRC_EL1(6),       HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBSRC_EL1(7),       HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBSRC_EL1(8),       HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBSRC_EL1(9),       HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBSRC_EL1(10),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBSRC_EL1(11),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBSRC_EL1(12),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBSRC_EL1(13),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBSRC_EL1(14),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBSRC_EL1(15),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBSRC_EL1(16),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBSRC_EL1(17),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBSRC_EL1(18),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBSRC_EL1(19),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBSRC_EL1(20),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBSRC_EL1(21),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBSRC_EL1(22),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBSRC_EL1(23),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBSRC_EL1(24),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBSRC_EL1(25),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBSRC_EL1(26),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBSRC_EL1(27),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBSRC_EL1(28),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBSRC_EL1(29),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBSRC_EL1(30),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBSRC_EL1(31),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBSRCINJ_EL1,       HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBTGT_EL1(0),       HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBTGT_EL1(1),       HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBTGT_EL1(2),       HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBTGT_EL1(3),       HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBTGT_EL1(4),       HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBTGT_EL1(5),       HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBTGT_EL1(6),       HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBTGT_EL1(7),       HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBTGT_EL1(8),       HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBTGT_EL1(9),       HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBTGT_EL1(10),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBTGT_EL1(11),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBTGT_EL1(12),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBTGT_EL1(13),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBTGT_EL1(14),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBTGT_EL1(15),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBTGT_EL1(16),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBTGT_EL1(17),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBTGT_EL1(18),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBTGT_EL1(19),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBTGT_EL1(20),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBTGT_EL1(21),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBTGT_EL1(22),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBTGT_EL1(23),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBTGT_EL1(24),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBTGT_EL1(25),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBTGT_EL1(26),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBTGT_EL1(27),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBTGT_EL1(28),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBTGT_EL1(29),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBTGT_EL1(30),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBTGT_EL1(31),      HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBTGTINJ_EL1,       HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBTS_EL1,           HDFGRTR, nBRBDATA, 0),
> +       SR_FGT(SYS_BRBCR_EL1,           HDFGRTR, nBRBCTL, 0),
> +       SR_FGT(SYS_BRBFCR_EL1,          HDFGRTR, nBRBCTL, 0),
> +       SR_FGT(SYS_BRBIDR0_EL1,         HDFGRTR, nBRBIDR, 0),
> +       SR_FGT(SYS_PMCEID0_EL0,         HDFGRTR, PMCEIDn_EL0, 1),
> +       SR_FGT(SYS_PMCEID1_EL0,         HDFGRTR, PMCEIDn_EL0, 1),
> +       SR_FGT(SYS_PMUSERENR_EL0,       HDFGRTR, PMUSERENR_EL0, 1),
> +       SR_FGT(SYS_TRBTRG_EL1,          HDFGRTR, TRBTRG_EL1, 1),
> +       SR_FGT(SYS_TRBSR_EL1,           HDFGRTR, TRBSR_EL1, 1),
> +       SR_FGT(SYS_TRBPTR_EL1,          HDFGRTR, TRBPTR_EL1, 1),
> +       SR_FGT(SYS_TRBMAR_EL1,          HDFGRTR, TRBMAR_EL1, 1),
> +       SR_FGT(SYS_TRBLIMITR_EL1,       HDFGRTR, TRBLIMITR_EL1, 1),
> +       SR_FGT(SYS_TRBIDR_EL1,          HDFGRTR, TRBIDR_EL1, 1),
> +       SR_FGT(SYS_TRBBASER_EL1,        HDFGRTR, TRBBASER_EL1, 1),
> +       SR_FGT(SYS_TRCVICTLR,           HDFGRTR, TRCVICTLR, 1),
> +       SR_FGT(SYS_TRCSTATR,            HDFGRTR, TRCSTATR, 1),
> +       SR_FGT(SYS_TRCSSCSR(0),         HDFGRTR, TRCSSCSRn, 1),
> +       SR_FGT(SYS_TRCSSCSR(1),         HDFGRTR, TRCSSCSRn, 1),
> +       SR_FGT(SYS_TRCSSCSR(2),         HDFGRTR, TRCSSCSRn, 1),
> +       SR_FGT(SYS_TRCSSCSR(3),         HDFGRTR, TRCSSCSRn, 1),
> +       SR_FGT(SYS_TRCSSCSR(4),         HDFGRTR, TRCSSCSRn, 1),
> +       SR_FGT(SYS_TRCSSCSR(5),         HDFGRTR, TRCSSCSRn, 1),
> +       SR_FGT(SYS_TRCSSCSR(6),         HDFGRTR, TRCSSCSRn, 1),
> +       SR_FGT(SYS_TRCSSCSR(7),         HDFGRTR, TRCSSCSRn, 1),
> +       SR_FGT(SYS_TRCSEQSTR,           HDFGRTR, TRCSEQSTR, 1),
> +       SR_FGT(SYS_TRCPRGCTLR,          HDFGRTR, TRCPRGCTLR, 1),
> +       SR_FGT(SYS_TRCOSLSR,            HDFGRTR, TRCOSLSR, 1),
> +       SR_FGT(SYS_TRCIMSPEC(0),        HDFGRTR, TRCIMSPECn, 1),
> +       SR_FGT(SYS_TRCIMSPEC(1),        HDFGRTR, TRCIMSPECn, 1),
> +       SR_FGT(SYS_TRCIMSPEC(2),        HDFGRTR, TRCIMSPECn, 1),
> +       SR_FGT(SYS_TRCIMSPEC(3),        HDFGRTR, TRCIMSPECn, 1),
> +       SR_FGT(SYS_TRCIMSPEC(4),        HDFGRTR, TRCIMSPECn, 1),
> +       SR_FGT(SYS_TRCIMSPEC(5),        HDFGRTR, TRCIMSPECn, 1),
> +       SR_FGT(SYS_TRCIMSPEC(6),        HDFGRTR, TRCIMSPECn, 1),
> +       SR_FGT(SYS_TRCIMSPEC(7),        HDFGRTR, TRCIMSPECn, 1),
> +       SR_FGT(SYS_TRCDEVARCH,          HDFGRTR, TRCID, 1),
> +       SR_FGT(SYS_TRCDEVID,            HDFGRTR, TRCID, 1),
> +       SR_FGT(SYS_TRCIDR0,             HDFGRTR, TRCID, 1),
> +       SR_FGT(SYS_TRCIDR1,             HDFGRTR, TRCID, 1),
> +       SR_FGT(SYS_TRCIDR2,             HDFGRTR, TRCID, 1),
> +       SR_FGT(SYS_TRCIDR3,             HDFGRTR, TRCID, 1),
> +       SR_FGT(SYS_TRCIDR4,             HDFGRTR, TRCID, 1),
> +       SR_FGT(SYS_TRCIDR5,             HDFGRTR, TRCID, 1),
> +       SR_FGT(SYS_TRCIDR6,             HDFGRTR, TRCID, 1),
> +       SR_FGT(SYS_TRCIDR7,             HDFGRTR, TRCID, 1),
> +       SR_FGT(SYS_TRCIDR8,             HDFGRTR, TRCID, 1),
> +       SR_FGT(SYS_TRCIDR9,             HDFGRTR, TRCID, 1),
> +       SR_FGT(SYS_TRCIDR10,            HDFGRTR, TRCID, 1),
> +       SR_FGT(SYS_TRCIDR11,            HDFGRTR, TRCID, 1),
> +       SR_FGT(SYS_TRCIDR12,            HDFGRTR, TRCID, 1),
> +       SR_FGT(SYS_TRCIDR13,            HDFGRTR, TRCID, 1),
> +       SR_FGT(SYS_TRCCNTVR(0),         HDFGRTR, TRCCNTVRn, 1),
> +       SR_FGT(SYS_TRCCNTVR(1),         HDFGRTR, TRCCNTVRn, 1),
> +       SR_FGT(SYS_TRCCNTVR(2),         HDFGRTR, TRCCNTVRn, 1),
> +       SR_FGT(SYS_TRCCNTVR(3),         HDFGRTR, TRCCNTVRn, 1),
> +       SR_FGT(SYS_TRCCLAIMCLR,         HDFGRTR, TRCCLAIM, 1),
> +       SR_FGT(SYS_TRCCLAIMSET,         HDFGRTR, TRCCLAIM, 1),
> +       SR_FGT(SYS_TRCAUXCTLR,          HDFGRTR, TRCAUXCTLR, 1),
> +       SR_FGT(SYS_TRCAUTHSTATUS,       HDFGRTR, TRCAUTHSTATUS, 1),
> +       SR_FGT(SYS_TRCACATR(0),         HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCACATR(1),         HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCACATR(2),         HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCACATR(3),         HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCACATR(4),         HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCACATR(5),         HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCACATR(6),         HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCACATR(7),         HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCACATR(8),         HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCACATR(9),         HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCACATR(10),        HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCACATR(11),        HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCACATR(12),        HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCACATR(13),        HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCACATR(14),        HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCACATR(15),        HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCACVR(0),          HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCACVR(1),          HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCACVR(2),          HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCACVR(3),          HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCACVR(4),          HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCACVR(5),          HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCACVR(6),          HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCACVR(7),          HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCACVR(8),          HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCACVR(9),          HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCACVR(10),         HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCACVR(11),         HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCACVR(12),         HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCACVR(13),         HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCACVR(14),         HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCACVR(15),         HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCBBCTLR,           HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCCCCTLR,           HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCCIDCCTLR0,        HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCCIDCCTLR1,        HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCCIDCVR(0),        HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCCIDCVR(1),        HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCCIDCVR(2),        HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCCIDCVR(3),        HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCCIDCVR(4),        HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCCIDCVR(5),        HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCCIDCVR(6),        HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCCIDCVR(7),        HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCCNTCTLR(0),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCCNTCTLR(1),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCCNTCTLR(2),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCCNTCTLR(3),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCCNTRLDVR(0),      HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCCNTRLDVR(1),      HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCCNTRLDVR(2),      HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCCNTRLDVR(3),      HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCCONFIGR,          HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCEVENTCTL0R,       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCEVENTCTL1R,       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCEXTINSELR(0),     HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCEXTINSELR(1),     HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCEXTINSELR(2),     HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCEXTINSELR(3),     HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCQCTLR,            HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCRSCTLR(2),        HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCRSCTLR(3),        HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCRSCTLR(4),        HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCRSCTLR(5),        HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCRSCTLR(6),        HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCRSCTLR(7),        HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCRSCTLR(8),        HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCRSCTLR(9),        HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCRSCTLR(10),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCRSCTLR(11),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCRSCTLR(12),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCRSCTLR(13),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCRSCTLR(14),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCRSCTLR(15),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCRSCTLR(16),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCRSCTLR(17),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCRSCTLR(18),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCRSCTLR(19),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCRSCTLR(20),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCRSCTLR(21),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCRSCTLR(22),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCRSCTLR(23),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCRSCTLR(24),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCRSCTLR(25),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCRSCTLR(26),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCRSCTLR(27),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCRSCTLR(28),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCRSCTLR(29),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCRSCTLR(30),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCRSCTLR(31),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCRSR,              HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCSEQEVR(0),        HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCSEQEVR(1),        HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCSEQEVR(2),        HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCSEQRSTEVR,        HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCSSCCR(0),         HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCSSCCR(1),         HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCSSCCR(2),         HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCSSCCR(3),         HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCSSCCR(4),         HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCSSCCR(5),         HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCSSCCR(6),         HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCSSCCR(7),         HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCSSPCICR(0),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCSSPCICR(1),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCSSPCICR(2),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCSSPCICR(3),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCSSPCICR(4),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCSSPCICR(5),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCSSPCICR(6),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCSSPCICR(7),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCSTALLCTLR,        HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCSYNCPR,           HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCTRACEIDR,         HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCTSCTLR,           HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCVIIECTLR,         HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCVIPCSSCTLR,       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCVISSCTLR,         HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCVMIDCCTLR0,       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCVMIDCCTLR1,       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCVMIDCVR(0),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCVMIDCVR(1),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCVMIDCVR(2),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCVMIDCVR(3),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCVMIDCVR(4),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCVMIDCVR(5),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCVMIDCVR(6),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_TRCVMIDCVR(7),       HDFGRTR, TRC, 1),
> +       SR_FGT(SYS_PMSLATFR_EL1,        HDFGRTR, PMSLATFR_EL1, 1),
> +       SR_FGT(SYS_PMSIRR_EL1,          HDFGRTR, PMSIRR_EL1, 1),
> +       SR_FGT(SYS_PMSIDR_EL1,          HDFGRTR, PMSIDR_EL1, 1),
> +       SR_FGT(SYS_PMSICR_EL1,          HDFGRTR, PMSICR_EL1, 1),
> +       SR_FGT(SYS_PMSFCR_EL1,          HDFGRTR, PMSFCR_EL1, 1),
> +       SR_FGT(SYS_PMSEVFR_EL1,         HDFGRTR, PMSEVFR_EL1, 1),
> +       SR_FGT(SYS_PMSCR_EL1,           HDFGRTR, PMSCR_EL1, 1),
> +       SR_FGT(SYS_PMBSR_EL1,           HDFGRTR, PMBSR_EL1, 1),
> +       SR_FGT(SYS_PMBPTR_EL1,          HDFGRTR, PMBPTR_EL1, 1),
> +       SR_FGT(SYS_PMBLIMITR_EL1,       HDFGRTR, PMBLIMITR_EL1, 1),
> +       SR_FGT(SYS_PMMIR_EL1,           HDFGRTR, PMMIR_EL1, 1),
> +       SR_FGT(SYS_PMSELR_EL0,          HDFGRTR, PMSELR_EL0, 1),
> +       SR_FGT(SYS_PMOVSCLR_EL0,        HDFGRTR, PMOVS, 1),
> +       SR_FGT(SYS_PMOVSSET_EL0,        HDFGRTR, PMOVS, 1),
> +       SR_FGT(SYS_PMINTENCLR_EL1,      HDFGRTR, PMINTEN, 1),
> +       SR_FGT(SYS_PMINTENSET_EL1,      HDFGRTR, PMINTEN, 1),
> +       SR_FGT(SYS_PMCNTENCLR_EL0,      HDFGRTR, PMCNTEN, 1),
> +       SR_FGT(SYS_PMCNTENSET_EL0,      HDFGRTR, PMCNTEN, 1),
> +       SR_FGT(SYS_PMCCNTR_EL0,         HDFGRTR, PMCCNTR_EL0, 1),
> +       SR_FGT(SYS_PMCCFILTR_EL0,       HDFGRTR, PMCCFILTR_EL0, 1),
> +       SR_FGT(SYS_PMEVTYPERn_EL0(0),   HDFGRTR, PMEVTYPERn_EL0, 1),
> +       SR_FGT(SYS_PMEVTYPERn_EL0(1),   HDFGRTR, PMEVTYPERn_EL0, 1),
> +       SR_FGT(SYS_PMEVTYPERn_EL0(2),   HDFGRTR, PMEVTYPERn_EL0, 1),
> +       SR_FGT(SYS_PMEVTYPERn_EL0(3),   HDFGRTR, PMEVTYPERn_EL0, 1),
> +       SR_FGT(SYS_PMEVTYPERn_EL0(4),   HDFGRTR, PMEVTYPERn_EL0, 1),
> +       SR_FGT(SYS_PMEVTYPERn_EL0(5),   HDFGRTR, PMEVTYPERn_EL0, 1),
> +       SR_FGT(SYS_PMEVTYPERn_EL0(6),   HDFGRTR, PMEVTYPERn_EL0, 1),
> +       SR_FGT(SYS_PMEVTYPERn_EL0(7),   HDFGRTR, PMEVTYPERn_EL0, 1),
> +       SR_FGT(SYS_PMEVTYPERn_EL0(8),   HDFGRTR, PMEVTYPERn_EL0, 1),
> +       SR_FGT(SYS_PMEVTYPERn_EL0(9),   HDFGRTR, PMEVTYPERn_EL0, 1),
> +       SR_FGT(SYS_PMEVTYPERn_EL0(10),  HDFGRTR, PMEVTYPERn_EL0, 1),
> +       SR_FGT(SYS_PMEVTYPERn_EL0(11),  HDFGRTR, PMEVTYPERn_EL0, 1),
> +       SR_FGT(SYS_PMEVTYPERn_EL0(12),  HDFGRTR, PMEVTYPERn_EL0, 1),
> +       SR_FGT(SYS_PMEVTYPERn_EL0(13),  HDFGRTR, PMEVTYPERn_EL0, 1),
> +       SR_FGT(SYS_PMEVTYPERn_EL0(14),  HDFGRTR, PMEVTYPERn_EL0, 1),
> +       SR_FGT(SYS_PMEVTYPERn_EL0(15),  HDFGRTR, PMEVTYPERn_EL0, 1),
> +       SR_FGT(SYS_PMEVTYPERn_EL0(16),  HDFGRTR, PMEVTYPERn_EL0, 1),
> +       SR_FGT(SYS_PMEVTYPERn_EL0(17),  HDFGRTR, PMEVTYPERn_EL0, 1),
> +       SR_FGT(SYS_PMEVTYPERn_EL0(18),  HDFGRTR, PMEVTYPERn_EL0, 1),
> +       SR_FGT(SYS_PMEVTYPERn_EL0(19),  HDFGRTR, PMEVTYPERn_EL0, 1),
> +       SR_FGT(SYS_PMEVTYPERn_EL0(20),  HDFGRTR, PMEVTYPERn_EL0, 1),
> +       SR_FGT(SYS_PMEVTYPERn_EL0(21),  HDFGRTR, PMEVTYPERn_EL0, 1),
> +       SR_FGT(SYS_PMEVTYPERn_EL0(22),  HDFGRTR, PMEVTYPERn_EL0, 1),
> +       SR_FGT(SYS_PMEVTYPERn_EL0(23),  HDFGRTR, PMEVTYPERn_EL0, 1),
> +       SR_FGT(SYS_PMEVTYPERn_EL0(24),  HDFGRTR, PMEVTYPERn_EL0, 1),
> +       SR_FGT(SYS_PMEVTYPERn_EL0(25),  HDFGRTR, PMEVTYPERn_EL0, 1),
> +       SR_FGT(SYS_PMEVTYPERn_EL0(26),  HDFGRTR, PMEVTYPERn_EL0, 1),
> +       SR_FGT(SYS_PMEVTYPERn_EL0(27),  HDFGRTR, PMEVTYPERn_EL0, 1),
> +       SR_FGT(SYS_PMEVTYPERn_EL0(28),  HDFGRTR, PMEVTYPERn_EL0, 1),
> +       SR_FGT(SYS_PMEVTYPERn_EL0(29),  HDFGRTR, PMEVTYPERn_EL0, 1),
> +       SR_FGT(SYS_PMEVTYPERn_EL0(30),  HDFGRTR, PMEVTYPERn_EL0, 1),
> +       SR_FGT(SYS_PMEVCNTRn_EL0(0),    HDFGRTR, PMEVCNTRn_EL0, 1),
> +       SR_FGT(SYS_PMEVCNTRn_EL0(1),    HDFGRTR, PMEVCNTRn_EL0, 1),
> +       SR_FGT(SYS_PMEVCNTRn_EL0(2),    HDFGRTR, PMEVCNTRn_EL0, 1),
> +       SR_FGT(SYS_PMEVCNTRn_EL0(3),    HDFGRTR, PMEVCNTRn_EL0, 1),
> +       SR_FGT(SYS_PMEVCNTRn_EL0(4),    HDFGRTR, PMEVCNTRn_EL0, 1),
> +       SR_FGT(SYS_PMEVCNTRn_EL0(5),    HDFGRTR, PMEVCNTRn_EL0, 1),
> +       SR_FGT(SYS_PMEVCNTRn_EL0(6),    HDFGRTR, PMEVCNTRn_EL0, 1),
> +       SR_FGT(SYS_PMEVCNTRn_EL0(7),    HDFGRTR, PMEVCNTRn_EL0, 1),
> +       SR_FGT(SYS_PMEVCNTRn_EL0(8),    HDFGRTR, PMEVCNTRn_EL0, 1),
> +       SR_FGT(SYS_PMEVCNTRn_EL0(9),    HDFGRTR, PMEVCNTRn_EL0, 1),
> +       SR_FGT(SYS_PMEVCNTRn_EL0(10),   HDFGRTR, PMEVCNTRn_EL0, 1),
> +       SR_FGT(SYS_PMEVCNTRn_EL0(11),   HDFGRTR, PMEVCNTRn_EL0, 1),
> +       SR_FGT(SYS_PMEVCNTRn_EL0(12),   HDFGRTR, PMEVCNTRn_EL0, 1),
> +       SR_FGT(SYS_PMEVCNTRn_EL0(13),   HDFGRTR, PMEVCNTRn_EL0, 1),
> +       SR_FGT(SYS_PMEVCNTRn_EL0(14),   HDFGRTR, PMEVCNTRn_EL0, 1),
> +       SR_FGT(SYS_PMEVCNTRn_EL0(15),   HDFGRTR, PMEVCNTRn_EL0, 1),
> +       SR_FGT(SYS_PMEVCNTRn_EL0(16),   HDFGRTR, PMEVCNTRn_EL0, 1),
> +       SR_FGT(SYS_PMEVCNTRn_EL0(17),   HDFGRTR, PMEVCNTRn_EL0, 1),
> +       SR_FGT(SYS_PMEVCNTRn_EL0(18),   HDFGRTR, PMEVCNTRn_EL0, 1),
> +       SR_FGT(SYS_PMEVCNTRn_EL0(19),   HDFGRTR, PMEVCNTRn_EL0, 1),
> +       SR_FGT(SYS_PMEVCNTRn_EL0(20),   HDFGRTR, PMEVCNTRn_EL0, 1),
> +       SR_FGT(SYS_PMEVCNTRn_EL0(21),   HDFGRTR, PMEVCNTRn_EL0, 1),
> +       SR_FGT(SYS_PMEVCNTRn_EL0(22),   HDFGRTR, PMEVCNTRn_EL0, 1),
> +       SR_FGT(SYS_PMEVCNTRn_EL0(23),   HDFGRTR, PMEVCNTRn_EL0, 1),
> +       SR_FGT(SYS_PMEVCNTRn_EL0(24),   HDFGRTR, PMEVCNTRn_EL0, 1),
> +       SR_FGT(SYS_PMEVCNTRn_EL0(25),   HDFGRTR, PMEVCNTRn_EL0, 1),
> +       SR_FGT(SYS_PMEVCNTRn_EL0(26),   HDFGRTR, PMEVCNTRn_EL0, 1),
> +       SR_FGT(SYS_PMEVCNTRn_EL0(27),   HDFGRTR, PMEVCNTRn_EL0, 1),
> +       SR_FGT(SYS_PMEVCNTRn_EL0(28),   HDFGRTR, PMEVCNTRn_EL0, 1),
> +       SR_FGT(SYS_PMEVCNTRn_EL0(29),   HDFGRTR, PMEVCNTRn_EL0, 1),
> +       SR_FGT(SYS_PMEVCNTRn_EL0(30),   HDFGRTR, PMEVCNTRn_EL0, 1),
> +       SR_FGT(SYS_OSDLR_EL1,           HDFGRTR, OSDLR_EL1, 1),
> +       SR_FGT(SYS_OSECCR_EL1,          HDFGRTR, OSECCR_EL1, 1),
> +       SR_FGT(SYS_OSLSR_EL1,           HDFGRTR, OSLSR_EL1, 1),
> +       SR_FGT(SYS_DBGPRCR_EL1,         HDFGRTR, DBGPRCR_EL1, 1),
> +       SR_FGT(SYS_DBGAUTHSTATUS_EL1,   HDFGRTR, DBGAUTHSTATUS_EL1, 1),
> +       SR_FGT(SYS_DBGCLAIMSET_EL1,     HDFGRTR, DBGCLAIM, 1),
> +       SR_FGT(SYS_DBGCLAIMCLR_EL1,     HDFGRTR, DBGCLAIM, 1),
> +       SR_FGT(SYS_MDSCR_EL1,           HDFGRTR, MDSCR_EL1, 1),
> +       /*
> +        * The trap bits capture *64* debug registers per bit, but the
> +        * ARM ARM only describes the encoding for the first 16, and
> +        * we don't really support more than that anyway.
> +        */
> +       SR_FGT(SYS_DBGWVRn_EL1(0),      HDFGRTR, DBGWVRn_EL1, 1),
> +       SR_FGT(SYS_DBGWVRn_EL1(1),      HDFGRTR, DBGWVRn_EL1, 1),
> +       SR_FGT(SYS_DBGWVRn_EL1(2),      HDFGRTR, DBGWVRn_EL1, 1),
> +       SR_FGT(SYS_DBGWVRn_EL1(3),      HDFGRTR, DBGWVRn_EL1, 1),
> +       SR_FGT(SYS_DBGWVRn_EL1(4),      HDFGRTR, DBGWVRn_EL1, 1),
> +       SR_FGT(SYS_DBGWVRn_EL1(5),      HDFGRTR, DBGWVRn_EL1, 1),
> +       SR_FGT(SYS_DBGWVRn_EL1(6),      HDFGRTR, DBGWVRn_EL1, 1),
> +       SR_FGT(SYS_DBGWVRn_EL1(7),      HDFGRTR, DBGWVRn_EL1, 1),
> +       SR_FGT(SYS_DBGWVRn_EL1(8),      HDFGRTR, DBGWVRn_EL1, 1),
> +       SR_FGT(SYS_DBGWVRn_EL1(9),      HDFGRTR, DBGWVRn_EL1, 1),
> +       SR_FGT(SYS_DBGWVRn_EL1(10),     HDFGRTR, DBGWVRn_EL1, 1),
> +       SR_FGT(SYS_DBGWVRn_EL1(11),     HDFGRTR, DBGWVRn_EL1, 1),
> +       SR_FGT(SYS_DBGWVRn_EL1(12),     HDFGRTR, DBGWVRn_EL1, 1),
> +       SR_FGT(SYS_DBGWVRn_EL1(13),     HDFGRTR, DBGWVRn_EL1, 1),
> +       SR_FGT(SYS_DBGWVRn_EL1(14),     HDFGRTR, DBGWVRn_EL1, 1),
> +       SR_FGT(SYS_DBGWVRn_EL1(15),     HDFGRTR, DBGWVRn_EL1, 1),
> +       SR_FGT(SYS_DBGWCRn_EL1(0),      HDFGRTR, DBGWCRn_EL1, 1),
> +       SR_FGT(SYS_DBGWCRn_EL1(1),      HDFGRTR, DBGWCRn_EL1, 1),
> +       SR_FGT(SYS_DBGWCRn_EL1(2),      HDFGRTR, DBGWCRn_EL1, 1),
> +       SR_FGT(SYS_DBGWCRn_EL1(3),      HDFGRTR, DBGWCRn_EL1, 1),
> +       SR_FGT(SYS_DBGWCRn_EL1(4),      HDFGRTR, DBGWCRn_EL1, 1),
> +       SR_FGT(SYS_DBGWCRn_EL1(5),      HDFGRTR, DBGWCRn_EL1, 1),
> +       SR_FGT(SYS_DBGWCRn_EL1(6),      HDFGRTR, DBGWCRn_EL1, 1),
> +       SR_FGT(SYS_DBGWCRn_EL1(7),      HDFGRTR, DBGWCRn_EL1, 1),
> +       SR_FGT(SYS_DBGWCRn_EL1(8),      HDFGRTR, DBGWCRn_EL1, 1),
> +       SR_FGT(SYS_DBGWCRn_EL1(9),      HDFGRTR, DBGWCRn_EL1, 1),
> +       SR_FGT(SYS_DBGWCRn_EL1(10),     HDFGRTR, DBGWCRn_EL1, 1),
> +       SR_FGT(SYS_DBGWCRn_EL1(11),     HDFGRTR, DBGWCRn_EL1, 1),
> +       SR_FGT(SYS_DBGWCRn_EL1(12),     HDFGRTR, DBGWCRn_EL1, 1),
> +       SR_FGT(SYS_DBGWCRn_EL1(13),     HDFGRTR, DBGWCRn_EL1, 1),
> +       SR_FGT(SYS_DBGWCRn_EL1(14),     HDFGRTR, DBGWCRn_EL1, 1),
> +       SR_FGT(SYS_DBGWCRn_EL1(15),     HDFGRTR, DBGWCRn_EL1, 1),
> +       SR_FGT(SYS_DBGBVRn_EL1(0),      HDFGRTR, DBGBVRn_EL1, 1),
> +       SR_FGT(SYS_DBGBVRn_EL1(1),      HDFGRTR, DBGBVRn_EL1, 1),
> +       SR_FGT(SYS_DBGBVRn_EL1(2),      HDFGRTR, DBGBVRn_EL1, 1),
> +       SR_FGT(SYS_DBGBVRn_EL1(3),      HDFGRTR, DBGBVRn_EL1, 1),
> +       SR_FGT(SYS_DBGBVRn_EL1(4),      HDFGRTR, DBGBVRn_EL1, 1),
> +       SR_FGT(SYS_DBGBVRn_EL1(5),      HDFGRTR, DBGBVRn_EL1, 1),
> +       SR_FGT(SYS_DBGBVRn_EL1(6),      HDFGRTR, DBGBVRn_EL1, 1),
> +       SR_FGT(SYS_DBGBVRn_EL1(7),      HDFGRTR, DBGBVRn_EL1, 1),
> +       SR_FGT(SYS_DBGBVRn_EL1(8),      HDFGRTR, DBGBVRn_EL1, 1),
> +       SR_FGT(SYS_DBGBVRn_EL1(9),      HDFGRTR, DBGBVRn_EL1, 1),
> +       SR_FGT(SYS_DBGBVRn_EL1(10),     HDFGRTR, DBGBVRn_EL1, 1),
> +       SR_FGT(SYS_DBGBVRn_EL1(11),     HDFGRTR, DBGBVRn_EL1, 1),
> +       SR_FGT(SYS_DBGBVRn_EL1(12),     HDFGRTR, DBGBVRn_EL1, 1),
> +       SR_FGT(SYS_DBGBVRn_EL1(13),     HDFGRTR, DBGBVRn_EL1, 1),
> +       SR_FGT(SYS_DBGBVRn_EL1(14),     HDFGRTR, DBGBVRn_EL1, 1),
> +       SR_FGT(SYS_DBGBVRn_EL1(15),     HDFGRTR, DBGBVRn_EL1, 1),
> +       SR_FGT(SYS_DBGBCRn_EL1(0),      HDFGRTR, DBGBCRn_EL1, 1),
> +       SR_FGT(SYS_DBGBCRn_EL1(1),      HDFGRTR, DBGBCRn_EL1, 1),
> +       SR_FGT(SYS_DBGBCRn_EL1(2),      HDFGRTR, DBGBCRn_EL1, 1),
> +       SR_FGT(SYS_DBGBCRn_EL1(3),      HDFGRTR, DBGBCRn_EL1, 1),
> +       SR_FGT(SYS_DBGBCRn_EL1(4),      HDFGRTR, DBGBCRn_EL1, 1),
> +       SR_FGT(SYS_DBGBCRn_EL1(5),      HDFGRTR, DBGBCRn_EL1, 1),
> +       SR_FGT(SYS_DBGBCRn_EL1(6),      HDFGRTR, DBGBCRn_EL1, 1),
> +       SR_FGT(SYS_DBGBCRn_EL1(7),      HDFGRTR, DBGBCRn_EL1, 1),
> +       SR_FGT(SYS_DBGBCRn_EL1(8),      HDFGRTR, DBGBCRn_EL1, 1),
> +       SR_FGT(SYS_DBGBCRn_EL1(9),      HDFGRTR, DBGBCRn_EL1, 1),
> +       SR_FGT(SYS_DBGBCRn_EL1(10),     HDFGRTR, DBGBCRn_EL1, 1),
> +       SR_FGT(SYS_DBGBCRn_EL1(11),     HDFGRTR, DBGBCRn_EL1, 1),
> +       SR_FGT(SYS_DBGBCRn_EL1(12),     HDFGRTR, DBGBCRn_EL1, 1),
> +       SR_FGT(SYS_DBGBCRn_EL1(13),     HDFGRTR, DBGBCRn_EL1, 1),
> +       SR_FGT(SYS_DBGBCRn_EL1(14),     HDFGRTR, DBGBCRn_EL1, 1),
> +       SR_FGT(SYS_DBGBCRn_EL1(15),     HDFGRTR, DBGBCRn_EL1, 1),
> +       /*
> +        * HDFGWTR_EL2
> +        *
> +        * Although HDFGRTR_EL2 and HDFGWTR_EL2 registers largely
> +        * overlap in their bit assignment, there are a number of bits
> +        * that are RES0 on one side, and an actual trap bit on the
> +        * other.  The policy chosen here is to describe all the
> +        * read-side mappings, and only the write-side mappings that
> +        * differ from the read side, and the trap handler will pick
> +        * the correct shadow register based on the access type.
> +        */
> +       SR_FGT(SYS_TRFCR_EL1,           HDFGWTR, TRFCR_EL1, 1),
> +       SR_FGT(SYS_TRCOSLAR,            HDFGWTR, TRCOSLAR, 1),
> +       SR_FGT(SYS_PMCR_EL0,            HDFGWTR, PMCR_EL0, 1),
> +       SR_FGT(SYS_PMSWINC_EL0,         HDFGWTR, PMSWINC_EL0, 1),
> +       SR_FGT(SYS_OSLAR_EL1,           HDFGWTR, OSLAR_EL1, 1),
>  };
>
>  static union trap_config get_trap_config(u32 sysreg)
> @@ -1336,6 +1802,14 @@ bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
>                         val = sanitised_sys_reg(vcpu, HFGWTR_EL2);
>                 break;
>
> +       case HDFGRTR_GROUP:
> +       case HDFGWTR_GROUP:
> +               if (is_read)
> +                       val = sanitised_sys_reg(vcpu, HDFGRTR_EL2);
> +               else
> +                       val = sanitised_sys_reg(vcpu, HDFGWTR_EL2);
> +               break;
> +
>         case HFGITR_GROUP:
>                 val = sanitised_sys_reg(vcpu, HFGITR_EL2);
>                 break;
> --
> 2.34.1
>
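
As an aside, the policy above means a single SR_FGT() entry serves both
access directions, with __check_nv_sr_forward() picking the shadow
register from the access type. A minimal sketch of that selection
(hdfg_trap_is_set() is a hypothetical helper, mirroring the
HDFGRTR_GROUP/HDFGWTR_GROUP case in the hunk above):

        static bool hdfg_trap_is_set(struct kvm_vcpu *vcpu, bool is_read, u64 bit)
        {
                u64 val;

                /* one table entry, two possible shadow registers */
                if (is_read)
                        val = sanitised_sys_reg(vcpu, HDFGRTR_EL2);
                else
                        val = sanitised_sys_reg(vcpu, HDFGWTR_EL2);

                /* e.g. bit == BIT(HDFGWTR_EL2_OSLAR_EL1_SHIFT) for an
                 * MSR OSLAR_EL1 issued by L2
                 */
                return val & bit;
        }

This is also why write-only registers such as OSLAR_EL1 only need a
write-side entry, while everything shared stays on the read side.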

Reviewed-by: Jing Zhang <jingzhangos@google.com>

Jing

* Re: [PATCH v4 23/28] KVM: arm64: nv: Add SVC trap forwarding
  2023-08-15 18:38 ` [PATCH v4 23/28] KVM: arm64: nv: Add SVC trap forwarding Marc Zyngier
@ 2023-08-15 23:24   ` Jing Zhang
  0 siblings, 0 replies; 48+ messages in thread
From: Jing Zhang @ 2023-08-15 23:24 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, kvm, linux-arm-kernel, Catalin Marinas, Eric Auger,
	Mark Brown, Mark Rutland, Will Deacon, Alexandru Elisei,
	Andre Przywara, Chase Conklin, Ganapatrao Kulkarni, Darren Hart,
	Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
	Zenghui Yu

Hi Marc,

On Tue, Aug 15, 2023 at 11:47 AM Marc Zyngier <maz@kernel.org> wrote:
>
> HFGITR_EL2 allows the trap of SVC instructions to EL2. Allow these
> traps to be forwarded. Take this opportunity to deny any 32bit activity
> when NV is enabled.
>
> Reviewed-by: Eric Auger <eric.auger@redhat.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/arm.c         |  4 ++++
>  arch/arm64/kvm/handle_exit.c | 12 ++++++++++++
>  2 files changed, 16 insertions(+)
>
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 72dc53a75d1c..8b51570a76f8 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -36,6 +36,7 @@
>  #include <asm/kvm_arm.h>
>  #include <asm/kvm_asm.h>
>  #include <asm/kvm_mmu.h>
> +#include <asm/kvm_nested.h>
>  #include <asm/kvm_pkvm.h>
>  #include <asm/kvm_emulate.h>
>  #include <asm/sections.h>
> @@ -818,6 +819,9 @@ static bool vcpu_mode_is_bad_32bit(struct kvm_vcpu *vcpu)
>         if (likely(!vcpu_mode_is_32bit(vcpu)))
>                 return false;
>
> +       if (vcpu_has_nv(vcpu))
> +               return true;
> +
>         return !kvm_supports_32bit_el0();
>  }
>
> diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> index 6dcd6604b6bc..3b86d534b995 100644
> --- a/arch/arm64/kvm/handle_exit.c
> +++ b/arch/arm64/kvm/handle_exit.c
> @@ -226,6 +226,17 @@ static int kvm_handle_eret(struct kvm_vcpu *vcpu)
>         return 1;
>  }
>
> +static int handle_svc(struct kvm_vcpu *vcpu)
> +{
> +       /*
> +        * So far, SVC traps only for NV via HFGITR_EL2. An SVC from a
> +        * 32bit guest would be caught by vcpu_mode_is_bad_32bit(), so
> +        * we should only have to deal with a 64 bit exception.
> +        */
> +       kvm_inject_nested_sync(vcpu, kvm_vcpu_get_esr(vcpu));
> +       return 1;
> +}
> +
>  static exit_handle_fn arm_exit_handlers[] = {
>         [0 ... ESR_ELx_EC_MAX]  = kvm_handle_unknown_ec,
>         [ESR_ELx_EC_WFx]        = kvm_handle_wfx,
> @@ -239,6 +250,7 @@ static exit_handle_fn arm_exit_handlers[] = {
>         [ESR_ELx_EC_SMC32]      = handle_smc,
>         [ESR_ELx_EC_HVC64]      = handle_hvc,
>         [ESR_ELx_EC_SMC64]      = handle_smc,
> +       [ESR_ELx_EC_SVC64]      = handle_svc,
>         [ESR_ELx_EC_SYS64]      = kvm_handle_sys_reg,
>         [ESR_ELx_EC_SVE]        = handle_sve,
>         [ESR_ELx_EC_ERET]       = kvm_handle_eret,
> --
> 2.34.1
>
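
For completeness, the L1 side of this is a single bit. A hedged
guest-side sketch (HFGITR_EL2_SVC_EL1_MASK is assumed to follow the
generated sysreg field naming; none of this is in the patch):

        /* L1 hypervisor (vEL2): trap the SVCs L2 executes at EL1 */
        u64 hfgitr = read_sysreg_s(SYS_HFGITR_EL2);

        write_sysreg_s(hfgitr | HFGITR_EL2_SVC_EL1_MASK, SYS_HFGITR_EL2);
        isb();

The resulting ESR_ELx_EC_SVC64 exit then lands in handle_svc() above
and is reinjected into L1 as-is.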

Reviewed-by: Jing Zhang <jingzhangos@google.com>

Jing

* Re: [PATCH v4 24/28] KVM: arm64: nv: Expand ERET trap forwarding to handle FGT
  2023-08-15 18:38 ` [PATCH v4 24/28] KVM: arm64: nv: Expand ERET trap forwarding to handle FGT Marc Zyngier
@ 2023-08-15 23:28   ` Jing Zhang
  0 siblings, 0 replies; 48+ messages in thread
From: Jing Zhang @ 2023-08-15 23:28 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, kvm, linux-arm-kernel, Catalin Marinas, Eric Auger,
	Mark Brown, Mark Rutland, Will Deacon, Alexandru Elisei,
	Andre Przywara, Chase Conklin, Ganapatrao Kulkarni, Darren Hart,
	Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
	Zenghui Yu

Hi Marc,

On Tue, Aug 15, 2023 at 11:47 AM Marc Zyngier <maz@kernel.org> wrote:
>
> We already handle ERET being trapped from an L1 guest in hyp context.
> However, with FGT, we can also have ERET being trapped from L2, and
> this needs to be reinjected into L1.
>
> Add the required exception routing.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/handle_exit.c | 17 ++++++++++++++++-
>  1 file changed, 16 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> index 3b86d534b995..617ae6dea5d5 100644
> --- a/arch/arm64/kvm/handle_exit.c
> +++ b/arch/arm64/kvm/handle_exit.c
> @@ -222,7 +222,22 @@ static int kvm_handle_eret(struct kvm_vcpu *vcpu)
>         if (kvm_vcpu_get_esr(vcpu) & ESR_ELx_ERET_ISS_ERET)
>                 return kvm_handle_ptrauth(vcpu);
>
> -       kvm_emulate_nested_eret(vcpu);
> +       /*
> +        * If we got here, two possibilities:
> +        *
> +        * - the guest is in EL2, and we need to fully emulate ERET
> +        *
> +        * - the guest is in EL1, and we need to reinject the
> +        *   exception into the L1 hypervisor.
> +        *
> +        * If KVM ever traps ERET for its own use, we'll have to
> +        * revisit this.
> +        */
> +       if (is_hyp_ctxt(vcpu))
> +               kvm_emulate_nested_eret(vcpu);
> +       else
> +               kvm_inject_nested_sync(vcpu, kvm_vcpu_get_esr(vcpu));
> +
>         return 1;
>  }
>
> --
> 2.34.1
>
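
To make the two legs concrete, here is a hedged sketch (the enabling
side is illustrative, and HFGITR_EL2_ERET_MASK is assumed to follow
the generated sysreg field naming):

        /* L1 (vEL2) side: trap L2's ERETs via the fine grained trap */
        write_sysreg_s(read_sysreg_s(SYS_HFGITR_EL2) | HFGITR_EL2_ERET_MASK,
                       SYS_HFGITR_EL2);

        /*
         * Resulting EC=0x1a exits seen by KVM:
         *
         * - is_hyp_ctxt() == true:  ERET executed by L1 itself at vEL2,
         *   fully emulated by kvm_emulate_nested_eret();
         * - is_hyp_ctxt() == false: ERET executed by L2 at vEL1,
         *   trapped because of the FGT bit above, reinjected into L1.
         */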

Reviewed-by: Jing Zhang <jingzhangos@google.com>

Jing

* Re: [PATCH v4 25/28] KVM: arm64: nv: Add switching support for HFGxTR/HDFGxTR
  2023-08-15 18:38 ` [PATCH v4 25/28] KVM: arm64: nv: Add switching support for HFGxTR/HDFGxTR Marc Zyngier
@ 2023-08-15 23:37   ` Jing Zhang
  0 siblings, 0 replies; 48+ messages in thread
From: Jing Zhang @ 2023-08-15 23:37 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, kvm, linux-arm-kernel, Catalin Marinas, Eric Auger,
	Mark Brown, Mark Rutland, Will Deacon, Alexandru Elisei,
	Andre Przywara, Chase Conklin, Ganapatrao Kulkarni, Darren Hart,
	Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
	Zenghui Yu

Hi Marc,

On Tue, Aug 15, 2023 at 11:47 AM Marc Zyngier <maz@kernel.org> wrote:
>
> Now that we can evaluate the FGT registers, allow them to be merged
> with the hypervisor's own configuration (in the case of HFG{RW}TR_EL2)
> or simply set for HFGITR_EL2, HDFGRTR_EL2 and HDFGWTR_EL2.
>
> Reviewed-by: Eric Auger <eric.auger@redhat.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/hyp/include/hyp/switch.h | 48 +++++++++++++++++++++++++
>  1 file changed, 48 insertions(+)
>
> diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> index e096b16e85fd..a4750070563f 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> @@ -70,6 +70,13 @@ static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu)
>         }
>  }
>
> +#define compute_clr_set(vcpu, reg, clr, set)                           \
> +       do {                                                            \
> +               u64 hfg;                                                \
> +               hfg = __vcpu_sys_reg(vcpu, reg) & ~__ ## reg ## _RES0;  \
> +               set |= hfg & __ ## reg ## _MASK;                        \
> +               clr |= ~hfg & __ ## reg ## _nMASK;                      \
> +       } while(0)
>
>
>  static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
> @@ -97,6 +104,10 @@ static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
>         if (cpus_have_final_cap(ARM64_WORKAROUND_AMPERE_AC03_CPU_38))
>                 w_set |= HFGxTR_EL2_TCR_EL1_MASK;
>
> +       if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) {
> +               compute_clr_set(vcpu, HFGRTR_EL2, r_clr, r_set);
> +               compute_clr_set(vcpu, HFGWTR_EL2, w_clr, w_set);
> +       }
>
>         /* The default is not to trap anything but ACCDATA_EL1 */
>         r_val = __HFGRTR_EL2_nMASK & ~HFGxTR_EL2_nACCDATA_EL1;
> @@ -109,6 +120,38 @@ static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
>
>         write_sysreg_s(r_val, SYS_HFGRTR_EL2);
>         write_sysreg_s(w_val, SYS_HFGWTR_EL2);
> +
> +       if (!vcpu_has_nv(vcpu) || is_hyp_ctxt(vcpu))
> +               return;
> +
> +       ctxt_sys_reg(hctxt, HFGITR_EL2) = read_sysreg_s(SYS_HFGITR_EL2);
> +
> +       r_set = r_clr = 0;
> +       compute_clr_set(vcpu, HFGITR_EL2, r_clr, r_set);
> +       r_val = __HFGITR_EL2_nMASK;
> +       r_val |= r_set;
> +       r_val &= ~r_clr;
> +
> +       write_sysreg_s(r_val, SYS_HFGITR_EL2);
> +
> +       ctxt_sys_reg(hctxt, HDFGRTR_EL2) = read_sysreg_s(SYS_HDFGRTR_EL2);
> +       ctxt_sys_reg(hctxt, HDFGWTR_EL2) = read_sysreg_s(SYS_HDFGWTR_EL2);
> +
> +       r_clr = r_set = w_clr = w_set = 0;
> +
> +       compute_clr_set(vcpu, HDFGRTR_EL2, r_clr, r_set);
> +       compute_clr_set(vcpu, HDFGWTR_EL2, w_clr, w_set);
> +
> +       r_val = __HDFGRTR_EL2_nMASK;
> +       r_val |= r_set;
> +       r_val &= ~r_clr;
> +
> +       w_val = __HDFGWTR_EL2_nMASK;
> +       w_val |= w_set;
> +       w_val &= ~w_clr;
> +
> +       write_sysreg_s(r_val, SYS_HDFGRTR_EL2);
> +       write_sysreg_s(w_val, SYS_HDFGWTR_EL2);
>  }
>
>  static inline void __deactivate_traps_hfgxtr(struct kvm_vcpu *vcpu)
> @@ -121,7 +164,12 @@ static inline void __deactivate_traps_hfgxtr(struct kvm_vcpu *vcpu)
>         write_sysreg_s(ctxt_sys_reg(hctxt, HFGRTR_EL2), SYS_HFGRTR_EL2);
>         write_sysreg_s(ctxt_sys_reg(hctxt, HFGWTR_EL2), SYS_HFGWTR_EL2);
>
> +       if (!vcpu_has_nv(vcpu) || is_hyp_ctxt(vcpu))
> +               return;
>
> +       write_sysreg_s(ctxt_sys_reg(hctxt, HFGITR_EL2), SYS_HFGITR_EL2);
> +       write_sysreg_s(ctxt_sys_reg(hctxt, HDFGRTR_EL2), SYS_HDFGRTR_EL2);
> +       write_sysreg_s(ctxt_sys_reg(hctxt, HDFGWTR_EL2), SYS_HDFGWTR_EL2);
>  }
>
>  static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
> --
> 2.34.1
>
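
The polarity handling in compute_clr_set() deserves a worked example.
For HFGRTR_EL2, __HFGRTR_EL2_MASK covers the positive trap bits
(1 means trap, e.g. AFSR0_EL1) and __HFGRTR_EL2_nMASK the negative
ones (0 means trap, e.g. nTPIDR2_EL0). If L1 programs its shadow
register with AFSR0_EL1 = 1 and nTPIDR2_EL0 = 0 (illustrative values),
the macro expands to:

        hfg = __vcpu_sys_reg(vcpu, HFGRTR_EL2) & ~__HFGRTR_EL2_RES0;

        set |= hfg & __HFGRTR_EL2_MASK;    /* picks up AFSR0_EL1   */
        clr |= ~hfg & __HFGRTR_EL2_nMASK;  /* picks up nTPIDR2_EL0 */

Both guest-requested traps thus end up in the value written to the
hardware register, on top of (never instead of) KVM's own trap
configuration.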

Reviewed-by: Jing Zhang <jingzhangos@google.com>

Jing

* Re: [PATCH v4 26/28] KVM: arm64: nv: Expose FGT to nested guests
  2023-08-15 18:39 ` [PATCH v4 26/28] KVM: arm64: nv: Expose FGT to nested guests Marc Zyngier
@ 2023-08-16  0:02   ` Jing Zhang
  0 siblings, 0 replies; 48+ messages in thread
From: Jing Zhang @ 2023-08-16  0:02 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, kvm, linux-arm-kernel, Catalin Marinas, Eric Auger,
	Mark Brown, Mark Rutland, Will Deacon, Alexandru Elisei,
	Andre Przywara, Chase Conklin, Ganapatrao Kulkarni, Darren Hart,
	Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
	Zenghui Yu

Hi Marc,

On Tue, Aug 15, 2023 at 11:47 AM Marc Zyngier <maz@kernel.org> wrote:
>
> Now that we have FGT support, expose the feature to NV guests.
>
> Reviewed-by: Eric Auger <eric.auger@redhat.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/nested.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
> index 7f80f385d9e8..3facd8918ae3 100644
> --- a/arch/arm64/kvm/nested.c
> +++ b/arch/arm64/kvm/nested.c
> @@ -71,8 +71,9 @@ void access_nested_id_reg(struct kvm_vcpu *v, struct sys_reg_params *p,
>                 break;
>
>         case SYS_ID_AA64MMFR0_EL1:
> -               /* Hide ECV, FGT, ExS, Secure Memory */
> -               val &= ~(GENMASK_ULL(63, 43)            |
> +               /* Hide ECV, ExS, Secure Memory */
> +               val &= ~(NV_FTR(MMFR0, ECV)             |
> +                        NV_FTR(MMFR0, EXS)             |
>                          NV_FTR(MMFR0, TGRAN4_2)        |
>                          NV_FTR(MMFR0, TGRAN16_2)       |
>                          NV_FTR(MMFR0, TGRAN64_2)       |
> --
> 2.34.1
>
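
Note that NV_FTR() is expected to expand to the generated ID register
field mask, i.e. NV_FTR(MMFR0, ECV) is ID_AA64MMFR0_EL1_ECV_MASK
(hedged, going by the series' nested.c):

        #define NV_FTR(r, f)    ID_AA64##r##_EL1_##f##_MASK

so the hunk above replaces the opaque GENMASK_ULL(63, 43) with
self-describing field names while dropping FGT from the set of hidden
features.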

Reviewed-by: Jing Zhang <jingzhangos@google.com>

Jing

* Re: [PATCH v4 27/28] KVM: arm64: Move HCRX_EL2 switch to load/put on VHE systems
  2023-08-15 18:39 ` [PATCH v4 27/28] KVM: arm64: Move HCRX_EL2 switch to load/put on VHE systems Marc Zyngier
@ 2023-08-16  0:17   ` Jing Zhang
  0 siblings, 0 replies; 48+ messages in thread
From: Jing Zhang @ 2023-08-16  0:17 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, kvm, linux-arm-kernel, Catalin Marinas, Eric Auger,
	Mark Brown, Mark Rutland, Will Deacon, Alexandru Elisei,
	Andre Przywara, Chase Conklin, Ganapatrao Kulkarni, Darren Hart,
	Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
	Zenghui Yu

Hi Marc,

On Tue, Aug 15, 2023 at 11:47 AM Marc Zyngier <maz@kernel.org> wrote:
>
> Although the nVHE behaviour requires HCRX_EL2 to be switched
> on each transition between host and guest, there is nothing in
> this register that would affect a VHE host.
>
> It is thus possible to save/restore this register on load/put
> on VHE systems, avoiding unnecessary sysreg access on the hot
> path. Additionally, it avoids unnecessary traps when running
> with NV.
>
> To achieve this, simply move the read/writes to the *_common()
> helpers, which are called on load/put on VHE, and more eagerly
> on nVHE.
>
> Reviewed-by: Eric Auger <eric.auger@redhat.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/hyp/include/hyp/switch.h | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> index a4750070563f..060c5a0409e5 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> @@ -197,6 +197,9 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
>         vcpu->arch.mdcr_el2_host = read_sysreg(mdcr_el2);
>         write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
>
> +       if (cpus_have_final_cap(ARM64_HAS_HCX))
> +               write_sysreg_s(HCRX_GUEST_FLAGS, SYS_HCRX_EL2);
> +
>         __activate_traps_hfgxtr(vcpu);
>  }
>
> @@ -213,6 +216,9 @@ static inline void __deactivate_traps_common(struct kvm_vcpu *vcpu)
>                 vcpu_clear_flag(vcpu, PMUSERENR_ON_CPU);
>         }
>
> +       if (cpus_have_final_cap(ARM64_HAS_HCX))
> +               write_sysreg_s(HCRX_HOST_FLAGS, SYS_HCRX_EL2);
> +
>         __deactivate_traps_hfgxtr(vcpu);
>  }
>
> @@ -227,9 +233,6 @@ static inline void ___activate_traps(struct kvm_vcpu *vcpu)
>
>         if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN) && (hcr & HCR_VSE))
>                 write_sysreg_s(vcpu->arch.vsesr_el2, SYS_VSESR_EL2);
> -
> -       if (cpus_have_final_cap(ARM64_HAS_HCX))
> -               write_sysreg_s(HCRX_GUEST_FLAGS, SYS_HCRX_EL2);
>  }
>
>  static inline void ___deactivate_traps(struct kvm_vcpu *vcpu)
> @@ -244,9 +247,6 @@ static inline void ___deactivate_traps(struct kvm_vcpu *vcpu)
>                 vcpu->arch.hcr_el2 &= ~HCR_VSE;
>                 vcpu->arch.hcr_el2 |= read_sysreg(hcr_el2) & HCR_VSE;
>         }
> -
> -       if (cpus_have_final_cap(ARM64_HAS_HCX))
> -               write_sysreg_s(HCRX_HOST_FLAGS, SYS_HCRX_EL2);
>  }
>
>  static inline bool __populate_fault_info(struct kvm_vcpu *vcpu)
> --
> 2.34.1
>
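
The net effect, per the commit message (sketched here, not taken from
the diff):

        /*
         * Where the HCRX_EL2 writes now happen:
         *
         *   VHE:  __activate_traps_common()/__deactivate_traps_common()
         *         run on vcpu load/put -> one write per load/put cycle
         *   nVHE: the same helpers run on every world switch
         *         -> behaviour unchanged
         */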

Reviewed-by: Jing Zhang <jingzhangos@google.com>

Jing

* Re: [PATCH v4 28/28] KVM: arm64: nv: Add support for HCRX_EL2
  2023-08-15 18:39 ` [PATCH v4 28/28] KVM: arm64: nv: Add support for HCRX_EL2 Marc Zyngier
@ 2023-08-16  0:18   ` Jing Zhang
  0 siblings, 0 replies; 48+ messages in thread
From: Jing Zhang @ 2023-08-16  0:18 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, kvm, linux-arm-kernel, Catalin Marinas, Eric Auger,
	Mark Brown, Mark Rutland, Will Deacon, Alexandru Elisei,
	Andre Przywara, Chase Conklin, Ganapatrao Kulkarni, Darren Hart,
	Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
	Zenghui Yu

Hi Marc,

On Tue, Aug 15, 2023 at 11:47 AM Marc Zyngier <maz@kernel.org> wrote:
>
> HCRX_EL2 has an interesting effect on HFGITR_EL2, as it conditions
> the traps of TLBI*nXS.
>
> Expand the FGT support to add a new Fine Grained Filter that will
> get checked when the instruction gets trapped, allowing the shadow
> register to override the trap as needed.
>
> Reviewed-by: Eric Auger <eric.auger@redhat.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_arm.h        |  5 ++
>  arch/arm64/include/asm/kvm_host.h       |  1 +
>  arch/arm64/kvm/emulate-nested.c         | 94 ++++++++++++++++---------
>  arch/arm64/kvm/hyp/include/hyp/switch.h | 15 +++-
>  arch/arm64/kvm/nested.c                 |  3 +-
>  arch/arm64/kvm/sys_regs.c               |  2 +
>  6 files changed, 83 insertions(+), 37 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> index d229f238c3b6..137f732789c9 100644
> --- a/arch/arm64/include/asm/kvm_arm.h
> +++ b/arch/arm64/include/asm/kvm_arm.h
> @@ -369,6 +369,11 @@
>  #define __HDFGWTR_EL2_MASK     ~__HDFGWTR_EL2_nMASK
>  #define __HDFGWTR_EL2_nMASK    GENMASK(62, 60)
>
> +/* Similar definitions for HCRX_EL2 */
> +#define __HCRX_EL2_RES0                (GENMASK(63, 16) | GENMASK(13, 12))
> +#define __HCRX_EL2_MASK                (0)
> +#define __HCRX_EL2_nMASK       (GENMASK(15, 14) | GENMASK(4, 0))
> +
>  /* Hyp Prefetch Fault Address Register (HPFAR/HDFAR) */
>  #define HPFAR_MASK     (~UL(0xf))
>  /*
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index cb1c5c54cedd..93c541111dea 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -380,6 +380,7 @@ enum vcpu_sysreg {
>         CPTR_EL2,       /* Architectural Feature Trap Register (EL2) */
>         HSTR_EL2,       /* Hypervisor System Trap Register */
>         HACR_EL2,       /* Hypervisor Auxiliary Control Register */
> +       HCRX_EL2,       /* Extended Hypervisor Configuration Register */
>         TTBR0_EL2,      /* Translation Table Base Register 0 (EL2) */
>         TTBR1_EL2,      /* Translation Table Base Register 1 (EL2) */
>         TCR_EL2,        /* Translation Control Register (EL2) */
> diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> index c9662f9a345e..1cc606c16416 100644
> --- a/arch/arm64/kvm/emulate-nested.c
> +++ b/arch/arm64/kvm/emulate-nested.c
> @@ -426,11 +426,13 @@ static const complex_condition_check ccc[] = {
>   * [13:10]     enum fgt_group_id (4 bits)
>   * [19:14]     bit number in the FGT register (6 bits)
>   * [20]                trap polarity (1 bit)
> - * [62:21]     Unused (42 bits)
> + * [25:21]     FG filter (5 bits)
> + * [62:26]     Unused (37 bits)
>   * [63]                RES0 - Must be zero, as lost on insertion in the xarray
>   */
>  #define TC_CGT_BITS    10
>  #define TC_FGT_BITS    4
> +#define TC_FGF_BITS    5
>
>  union trap_config {
>         u64     val;
> @@ -439,7 +441,8 @@ union trap_config {
>                 unsigned long   fgt:TC_FGT_BITS; /* Fine Grained Trap id */
>                 unsigned long   bit:6;           /* Bit number */
>                 unsigned long   pol:1;           /* Polarity */
> -               unsigned long   unused:42;       /* Unused, should be zero */
> +               unsigned long   fgf:TC_FGF_BITS; /* Fine Grained Filter */
> +               unsigned long   unused:37;       /* Unused, should be zero */
>                 unsigned long   mbz:1;           /* Must Be Zero */
>         };
>  };
> @@ -947,7 +950,15 @@ enum fgt_group_id {
>         __NR_FGT_GROUP_IDS__
>  };
>
> -#define SR_FGT(sr, g, b, p)                                    \
> +enum fg_filter_id {
> +       __NO_FGF__,
> +       HCRX_FGTnXS,
> +
> +       /* Must be last */
> +       __NR_FG_FILTER_IDS__
> +};
> +
> +#define SR_FGF(sr, g, b, p, f)                                 \
>         {                                                       \
>                 .encoding       = sr,                           \
>                 .end            = sr,                           \
> @@ -955,10 +966,13 @@ enum fgt_group_id {
>                         .fgt = g ## _GROUP,                     \
>                         .bit = g ## _EL2_ ## b ## _SHIFT,       \
>                         .pol = p,                               \
> +                       .fgf = f,                               \
>                 },                                              \
>                 .line = __LINE__,                               \
>         }
>
> +#define SR_FGT(sr, g, b, p)    SR_FGF(sr, g, b, p, __NO_FGF__)
> +
>  static const struct encoding_to_trap_config encoding_to_fgt[] __initconst = {
>         /* HFGRTR_EL2, HFGWTR_EL2 */
>         SR_FGT(SYS_TPIDR2_EL0,          HFGxTR, nTPIDR2_EL0, 0),
> @@ -1062,37 +1076,37 @@ static const struct encoding_to_trap_config encoding_to_fgt[] __initconst = {
>         SR_FGT(OP_TLBI_ASIDE1OS,        HFGITR, TLBIASIDE1OS, 1),
>         SR_FGT(OP_TLBI_VAE1OS,          HFGITR, TLBIVAE1OS, 1),
>         SR_FGT(OP_TLBI_VMALLE1OS,       HFGITR, TLBIVMALLE1OS, 1),
> -       /* FIXME: nXS variants must be checked against HCRX_EL2.FGTnXS */
> -       SR_FGT(OP_TLBI_VAALE1NXS,       HFGITR, TLBIVAALE1, 1),
> -       SR_FGT(OP_TLBI_VALE1NXS,        HFGITR, TLBIVALE1, 1),
> -       SR_FGT(OP_TLBI_VAAE1NXS,        HFGITR, TLBIVAAE1, 1),
> -       SR_FGT(OP_TLBI_ASIDE1NXS,       HFGITR, TLBIASIDE1, 1),
> -       SR_FGT(OP_TLBI_VAE1NXS,         HFGITR, TLBIVAE1, 1),
> -       SR_FGT(OP_TLBI_VMALLE1NXS,      HFGITR, TLBIVMALLE1, 1),
> -       SR_FGT(OP_TLBI_RVAALE1NXS,      HFGITR, TLBIRVAALE1, 1),
> -       SR_FGT(OP_TLBI_RVALE1NXS,       HFGITR, TLBIRVALE1, 1),
> -       SR_FGT(OP_TLBI_RVAAE1NXS,       HFGITR, TLBIRVAAE1, 1),
> -       SR_FGT(OP_TLBI_RVAE1NXS,        HFGITR, TLBIRVAE1, 1),
> -       SR_FGT(OP_TLBI_RVAALE1ISNXS,    HFGITR, TLBIRVAALE1IS, 1),
> -       SR_FGT(OP_TLBI_RVALE1ISNXS,     HFGITR, TLBIRVALE1IS, 1),
> -       SR_FGT(OP_TLBI_RVAAE1ISNXS,     HFGITR, TLBIRVAAE1IS, 1),
> -       SR_FGT(OP_TLBI_RVAE1ISNXS,      HFGITR, TLBIRVAE1IS, 1),
> -       SR_FGT(OP_TLBI_VAALE1ISNXS,     HFGITR, TLBIVAALE1IS, 1),
> -       SR_FGT(OP_TLBI_VALE1ISNXS,      HFGITR, TLBIVALE1IS, 1),
> -       SR_FGT(OP_TLBI_VAAE1ISNXS,      HFGITR, TLBIVAAE1IS, 1),
> -       SR_FGT(OP_TLBI_ASIDE1ISNXS,     HFGITR, TLBIASIDE1IS, 1),
> -       SR_FGT(OP_TLBI_VAE1ISNXS,       HFGITR, TLBIVAE1IS, 1),
> -       SR_FGT(OP_TLBI_VMALLE1ISNXS,    HFGITR, TLBIVMALLE1IS, 1),
> -       SR_FGT(OP_TLBI_RVAALE1OSNXS,    HFGITR, TLBIRVAALE1OS, 1),
> -       SR_FGT(OP_TLBI_RVALE1OSNXS,     HFGITR, TLBIRVALE1OS, 1),
> -       SR_FGT(OP_TLBI_RVAAE1OSNXS,     HFGITR, TLBIRVAAE1OS, 1),
> -       SR_FGT(OP_TLBI_RVAE1OSNXS,      HFGITR, TLBIRVAE1OS, 1),
> -       SR_FGT(OP_TLBI_VAALE1OSNXS,     HFGITR, TLBIVAALE1OS, 1),
> -       SR_FGT(OP_TLBI_VALE1OSNXS,      HFGITR, TLBIVALE1OS, 1),
> -       SR_FGT(OP_TLBI_VAAE1OSNXS,      HFGITR, TLBIVAAE1OS, 1),
> -       SR_FGT(OP_TLBI_ASIDE1OSNXS,     HFGITR, TLBIASIDE1OS, 1),
> -       SR_FGT(OP_TLBI_VAE1OSNXS,       HFGITR, TLBIVAE1OS, 1),
> -       SR_FGT(OP_TLBI_VMALLE1OSNXS,    HFGITR, TLBIVMALLE1OS, 1),
> +       /* nXS variants must be checked against HCRX_EL2.FGTnXS */
> +       SR_FGF(OP_TLBI_VAALE1NXS,       HFGITR, TLBIVAALE1, 1, HCRX_FGTnXS),
> +       SR_FGF(OP_TLBI_VALE1NXS,        HFGITR, TLBIVALE1, 1, HCRX_FGTnXS),
> +       SR_FGF(OP_TLBI_VAAE1NXS,        HFGITR, TLBIVAAE1, 1, HCRX_FGTnXS),
> +       SR_FGF(OP_TLBI_ASIDE1NXS,       HFGITR, TLBIASIDE1, 1, HCRX_FGTnXS),
> +       SR_FGF(OP_TLBI_VAE1NXS,         HFGITR, TLBIVAE1, 1, HCRX_FGTnXS),
> +       SR_FGF(OP_TLBI_VMALLE1NXS,      HFGITR, TLBIVMALLE1, 1, HCRX_FGTnXS),
> +       SR_FGF(OP_TLBI_RVAALE1NXS,      HFGITR, TLBIRVAALE1, 1, HCRX_FGTnXS),
> +       SR_FGF(OP_TLBI_RVALE1NXS,       HFGITR, TLBIRVALE1, 1, HCRX_FGTnXS),
> +       SR_FGF(OP_TLBI_RVAAE1NXS,       HFGITR, TLBIRVAAE1, 1, HCRX_FGTnXS),
> +       SR_FGF(OP_TLBI_RVAE1NXS,        HFGITR, TLBIRVAE1, 1, HCRX_FGTnXS),
> +       SR_FGF(OP_TLBI_RVAALE1ISNXS,    HFGITR, TLBIRVAALE1IS, 1, HCRX_FGTnXS),
> +       SR_FGF(OP_TLBI_RVALE1ISNXS,     HFGITR, TLBIRVALE1IS, 1, HCRX_FGTnXS),
> +       SR_FGF(OP_TLBI_RVAAE1ISNXS,     HFGITR, TLBIRVAAE1IS, 1, HCRX_FGTnXS),
> +       SR_FGF(OP_TLBI_RVAE1ISNXS,      HFGITR, TLBIRVAE1IS, 1, HCRX_FGTnXS),
> +       SR_FGF(OP_TLBI_VAALE1ISNXS,     HFGITR, TLBIVAALE1IS, 1, HCRX_FGTnXS),
> +       SR_FGF(OP_TLBI_VALE1ISNXS,      HFGITR, TLBIVALE1IS, 1, HCRX_FGTnXS),
> +       SR_FGF(OP_TLBI_VAAE1ISNXS,      HFGITR, TLBIVAAE1IS, 1, HCRX_FGTnXS),
> +       SR_FGF(OP_TLBI_ASIDE1ISNXS,     HFGITR, TLBIASIDE1IS, 1, HCRX_FGTnXS),
> +       SR_FGF(OP_TLBI_VAE1ISNXS,       HFGITR, TLBIVAE1IS, 1, HCRX_FGTnXS),
> +       SR_FGF(OP_TLBI_VMALLE1ISNXS,    HFGITR, TLBIVMALLE1IS, 1, HCRX_FGTnXS),
> +       SR_FGF(OP_TLBI_RVAALE1OSNXS,    HFGITR, TLBIRVAALE1OS, 1, HCRX_FGTnXS),
> +       SR_FGF(OP_TLBI_RVALE1OSNXS,     HFGITR, TLBIRVALE1OS, 1, HCRX_FGTnXS),
> +       SR_FGF(OP_TLBI_RVAAE1OSNXS,     HFGITR, TLBIRVAAE1OS, 1, HCRX_FGTnXS),
> +       SR_FGF(OP_TLBI_RVAE1OSNXS,      HFGITR, TLBIRVAE1OS, 1, HCRX_FGTnXS),
> +       SR_FGF(OP_TLBI_VAALE1OSNXS,     HFGITR, TLBIVAALE1OS, 1, HCRX_FGTnXS),
> +       SR_FGF(OP_TLBI_VALE1OSNXS,      HFGITR, TLBIVALE1OS, 1, HCRX_FGTnXS),
> +       SR_FGF(OP_TLBI_VAAE1OSNXS,      HFGITR, TLBIVAAE1OS, 1, HCRX_FGTnXS),
> +       SR_FGF(OP_TLBI_ASIDE1OSNXS,     HFGITR, TLBIASIDE1OS, 1, HCRX_FGTnXS),
> +       SR_FGF(OP_TLBI_VAE1OSNXS,       HFGITR, TLBIVAE1OS, 1, HCRX_FGTnXS),
> +       SR_FGF(OP_TLBI_VMALLE1OSNXS,    HFGITR, TLBIVMALLE1OS, 1, HCRX_FGTnXS),
>         SR_FGT(OP_AT_S1E1WP,            HFGITR, ATS1E1WP, 1),
>         SR_FGT(OP_AT_S1E1RP,            HFGITR, ATS1E1RP, 1),
>         SR_FGT(OP_AT_S1E0W,             HFGITR, ATS1E0W, 1),
> @@ -1622,6 +1636,7 @@ int __init populate_nv_trap_config(void)
>         BUILD_BUG_ON(sizeof(union trap_config) != sizeof(void *));
>         BUILD_BUG_ON(__NR_CGT_GROUP_IDS__ > BIT(TC_CGT_BITS));
>         BUILD_BUG_ON(__NR_FGT_GROUP_IDS__ > BIT(TC_FGT_BITS));
> +       BUILD_BUG_ON(__NR_FG_FILTER_IDS__ > BIT(TC_FGF_BITS));
>
>         for (int i = 0; i < ARRAY_SIZE(encoding_to_cgt); i++) {
>                 const struct encoding_to_trap_config *cgt = &encoding_to_cgt[i];
> @@ -1812,6 +1827,17 @@ bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
>
>         case HFGITR_GROUP:
>                 val = sanitised_sys_reg(vcpu, HFGITR_EL2);
> +               switch (tc.fgf) {
> +                       u64 tmp;
> +
> +               case __NO_FGF__:
> +                       break;
> +
> +               case HCRX_FGTnXS:
> +                       tmp = sanitised_sys_reg(vcpu, HCRX_EL2);
> +                       if (tmp & HCRX_EL2_FGTnXS)
> +                               tc.fgt = __NO_FGT_GROUP__;
> +               }
>                 break;
>
>         case __NR_FGT_GROUP_IDS__:
> diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> index 060c5a0409e5..3acf6d77e324 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> @@ -197,8 +197,19 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
>         vcpu->arch.mdcr_el2_host = read_sysreg(mdcr_el2);
>         write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
>
> -       if (cpus_have_final_cap(ARM64_HAS_HCX))
> -               write_sysreg_s(HCRX_GUEST_FLAGS, SYS_HCRX_EL2);
> +       if (cpus_have_final_cap(ARM64_HAS_HCX)) {
> +               u64 hcrx = HCRX_GUEST_FLAGS;
> +               if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) {
> +                       u64 clr = 0, set = 0;
> +
> +                       compute_clr_set(vcpu, HCRX_EL2, clr, set);
> +
> +                       hcrx |= set;
> +                       hcrx &= ~clr;
> +               }
> +
> +               write_sysreg_s(hcrx, SYS_HCRX_EL2);
> +       }
>
>         __activate_traps_hfgxtr(vcpu);
>  }
> diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
> index 3facd8918ae3..042695a210ce 100644
> --- a/arch/arm64/kvm/nested.c
> +++ b/arch/arm64/kvm/nested.c
> @@ -117,7 +117,8 @@ void access_nested_id_reg(struct kvm_vcpu *v, struct sys_reg_params *p,
>                 break;
>
>         case SYS_ID_AA64MMFR1_EL1:
> -               val &= (NV_FTR(MMFR1, PAN)      |
> +               val &= (NV_FTR(MMFR1, HCX)      |
> +                       NV_FTR(MMFR1, PAN)      |
>                         NV_FTR(MMFR1, LO)       |
>                         NV_FTR(MMFR1, HPDS)     |
>                         NV_FTR(MMFR1, VH)       |
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 9556896311db..e92ec810d449 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -2372,6 +2372,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>         EL2_REG(HFGITR_EL2, access_rw, reset_val, 0),
>         EL2_REG(HACR_EL2, access_rw, reset_val, 0),
>
> +       EL2_REG(HCRX_EL2, access_rw, reset_val, 0),
> +
>         EL2_REG(TTBR0_EL2, access_rw, reset_val, 0),
>         EL2_REG(TTBR1_EL2, access_rw, reset_val, 0),
>         EL2_REG(TCR_EL2, access_rw, reset_val, TCR_EL2_RES1),
> --
> 2.34.1
>
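
The filter semantics in the HFGITR_GROUP hunk can be illustrated as
follows (not part of the patch; assume L1 has set HFGITR_EL2.TLBIVAE1):

        /*
         * TLBI VAE1, x0     -> FGT bit set -> forwarded to L1
         *
         * TLBI VAE1NXS, x0  -> same FGT bit, but tagged HCRX_FGTnXS,
         *                      so L1's (sanitised) HCRX_EL2.FGTnXS is
         *                      consulted:
         *                        0 -> the trap still applies
         *                             -> forwarded to L1
         *                        1 -> the FGT does not apply to nXS
         *                             variants (tc.fgt is reset to
         *                             __NO_FGT_GROUP__) and KVM handles
         *                             the TLBI itself
         */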

Reviewed-by: Jing Zhang <jingzhangos@google.com>

Jing

* Re: [PATCH v4 14/28] KVM: arm64: nv: Add trap forwarding infrastructure
  2023-08-15 18:38 ` [PATCH v4 14/28] KVM: arm64: nv: Add trap forwarding infrastructure Marc Zyngier
  2023-08-15 21:34   ` Jing Zhang
@ 2023-08-16  9:34   ` Miguel Luis
  1 sibling, 0 replies; 48+ messages in thread
From: Miguel Luis @ 2023-08-16  9:34 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm@lists.linux.dev, kvm@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, Catalin Marinas, Eric Auger,
	Mark Brown, Mark Rutland, Will Deacon, Alexandru Elisei,
	Andre Przywara, Chase Conklin, Ganapatrao Kulkarni, Darren Hart,
	Jing Zhang, James Morse, Suzuki K Poulose, Oliver Upton,
	Zenghui Yu

Hi Marc,

> On 15 Aug 2023, at 18:38, Marc Zyngier <maz@kernel.org> wrote:
> 
> A significant part of what an NV hypervisor needs to do is to decide
> whether a trap from an L2+ guest has to be forwarded to an L1 guest
> or handled locally. This is done by checking for the trap bits that
> the guest hypervisor has set and acting accordingly, as described by
> the architecture.
> 
> A previous approach was to sprinkle a bunch of checks in all the
> system register accessors, but this is pretty error-prone and doesn't
> give a good overview of what is happening.
> 
> Instead, implement a set of global tables that describe a trap bit,
> combinations of trap bits, behaviours on trap, and what bits must
> be evaluated on a system register trap.
> 
> Although this is painful to describe, it allows each and every
> control bit to be specified in a static manner. To make it efficient,
> the table is inserted in an xarray that is global to the system,
> and checked each time we trap a system register while running
> an L2 guest.
> 
> Add the basic infrastructure for now, while additional patches will
> implement configuration registers.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
> arch/arm64/include/asm/kvm_host.h   |   1 +
> arch/arm64/include/asm/kvm_nested.h |   2 +
> arch/arm64/kvm/emulate-nested.c     | 282 ++++++++++++++++++++++++++++
> arch/arm64/kvm/sys_regs.c           |   6 +
> arch/arm64/kvm/trace_arm.h          |  26 +++
> 5 files changed, 317 insertions(+)
> 
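
One note on the "RES0 - Must be zero, as lost on insertion in the
xarray" comment further down: this follows from xarray values being
encoded as tagged pointers. Quoting include/linux/xarray.h for
reference (current mainline at the time of this series):

        static inline void *xa_mk_value(unsigned long v)
        {
                WARN_ON((long)v < 0);
                return (void *)((v << 1) | 1);
        }

Only 63 bits of payload survive a store/load round trip, hence the
mbz field in union trap_config and the BUILD_BUG_ON()s checking the
field widths.
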
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 721680da1011..cb1c5c54cedd 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -988,6 +988,7 @@ int kvm_handle_cp10_id(struct kvm_vcpu *vcpu);
> void kvm_reset_sys_regs(struct kvm_vcpu *vcpu);
> 
> int __init kvm_sys_reg_table_init(void);
> +int __init populate_nv_trap_config(void);
> 
> bool lock_all_vcpus(struct kvm *kvm);
> void unlock_all_vcpus(struct kvm *kvm);
> diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
> index 8fb67f032fd1..fa23cc9c2adc 100644
> --- a/arch/arm64/include/asm/kvm_nested.h
> +++ b/arch/arm64/include/asm/kvm_nested.h
> @@ -11,6 +11,8 @@ static inline bool vcpu_has_nv(const struct kvm_vcpu *vcpu)
> test_bit(KVM_ARM_VCPU_HAS_EL2, vcpu->arch.features));
> }
> 
> +extern bool __check_nv_sr_forward(struct kvm_vcpu *vcpu);
> +
> struct sys_reg_params;
> struct sys_reg_desc;
> 
> diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> index b96662029fb1..d5837ed0077c 100644
> --- a/arch/arm64/kvm/emulate-nested.c
> +++ b/arch/arm64/kvm/emulate-nested.c
> @@ -14,6 +14,288 @@
> 
> #include "trace.h"
> 
> +enum trap_behaviour {
> + BEHAVE_HANDLE_LOCALLY = 0,
> + BEHAVE_FORWARD_READ = BIT(0),
> + BEHAVE_FORWARD_WRITE = BIT(1),
> + BEHAVE_FORWARD_ANY = BEHAVE_FORWARD_READ | BEHAVE_FORWARD_WRITE,
> +};
> +
> +struct trap_bits {
> + const enum vcpu_sysreg index;
> + const enum trap_behaviour behaviour;
> + const u64 value;
> + const u64 mask;
> +};
> +
> +/* Coarse Grained Trap definitions */
> +enum cgt_group_id {
> + /* Indicates no coarse trap control */
> + __RESERVED__,
> +
> + /*
> + * The first batch of IDs denote coarse trapping that are used
> + * on their own instead of being part of a combination of
> + * trap controls.
> + */
> +
> + /*
> + * Anything after this point is a combination of coarse trap
> + * controls, which must all be evaluated to decide what to do.
> + */
> + __MULTIPLE_CONTROL_BITS__,
> +
> + /*
> + * Anything after this point requires a callback evaluating a
> + * complex trap condition. Hopefully we'll never need this...
> + */
> + __COMPLEX_CONDITIONS__,
> +
> + /* Must be last */
> + __NR_CGT_GROUP_IDS__
> +};
> +
> +static const struct trap_bits coarse_trap_bits[] = {
> +};
> +
> +#define MCB(id, ...) \
> + [id - __MULTIPLE_CONTROL_BITS__] = \
> + (const enum cgt_group_id[]){ \
> + __VA_ARGS__, __RESERVED__ \
> + }
> +
> +static const enum cgt_group_id *coarse_control_combo[] = {
> +};
> +
> +typedef enum trap_behaviour (*complex_condition_check)(struct kvm_vcpu *);
> +
> +#define CCC(id, fn) \
> + [id - __COMPLEX_CONDITIONS__] = fn
> +
> +static const complex_condition_check ccc[] = {
> +};
> +
> +/*
> + * Bit assignment for the trap controls. We use a 64bit word with the
> + * following layout for each trapped sysreg:
> + *
> + * [9:0] enum cgt_group_id (10 bits)
> + * [62:10] Unused (53 bits)
> + * [63] RES0 - Must be zero, as lost on insertion in the xarray
> + */
> +#define TC_CGT_BITS 10
> +
> +union trap_config {
> + u64 val;
> + struct {
> + unsigned long cgt:TC_CGT_BITS; /* Coarse Grained Trap id */
> + unsigned long unused:53; /* Unused, should be zero */
> + unsigned long mbz:1; /* Must Be Zero */
> + };
> +};
> +
> +struct encoding_to_trap_config {
> + const u32 encoding;
> + const u32 end;
> + const union trap_config tc;
> + const unsigned int line;
> +};
> +
> +#define SR_RANGE_TRAP(sr_start, sr_end, trap_id) \
> + { \
> + .encoding = sr_start, \
> + .end = sr_end, \
> + .tc = { \
> + .cgt = trap_id, \
> + }, \
> + .line = __LINE__, \
> + }
> +
> +#define SR_TRAP(sr, trap_id) SR_RANGE_TRAP(sr, sr, trap_id)
> +
> +/*
> + * Map encoding to trap bits for exception reported with EC=0x18.
> + * These must only be evaluated when running a nested hypervisor, but
> + * that the current context is not a hypervisor context. When the
> + * trapped access matches one of the trap controls, the exception is
> + * re-injected in the nested hypervisor.
> + */
> +static const struct encoding_to_trap_config encoding_to_cgt[] __initconst = {
> +};
> +
> +static DEFINE_XARRAY(sr_forward_xa);
> +
> +static union trap_config get_trap_config(u32 sysreg)
> +{
> + return (union trap_config) {
> + .val = xa_to_value(xa_load(&sr_forward_xa, sysreg)),
> + };
> +}
> +
> +static __init void print_nv_trap_error(const struct encoding_to_trap_config *tc,
> +       const char *type, int err)
> +{
> + kvm_err("%s line %d encoding range "
> + "(%d, %d, %d, %d, %d) - (%d, %d, %d, %d, %d) (err=%d)\n",
> + type, tc->line,
> + sys_reg_Op0(tc->encoding), sys_reg_Op1(tc->encoding),
> + sys_reg_CRn(tc->encoding), sys_reg_CRm(tc->encoding),
> + sys_reg_Op2(tc->encoding),
> + sys_reg_Op0(tc->end), sys_reg_Op1(tc->end),
> + sys_reg_CRn(tc->end), sys_reg_CRm(tc->end),
> + sys_reg_Op2(tc->end),
> + err);
> +}
> +
> +int __init populate_nv_trap_config(void)
> +{
> + int ret = 0;
> +
> + BUILD_BUG_ON(sizeof(union trap_config) != sizeof(void *));
> + BUILD_BUG_ON(__NR_CGT_GROUP_IDS__ > BIT(TC_CGT_BITS));
> +
> + for (int i = 0; i < ARRAY_SIZE(encoding_to_cgt); i++) {
> + const struct encoding_to_trap_config *cgt = &encoding_to_cgt[i];
> + void *prev;
> +
> + if (cgt->tc.val & BIT(63)) {
> + kvm_err("CGT[%d] has MBZ bit set\n", i);
> + ret = -EINVAL;
> + }
> +
> + if (cgt->encoding != cgt->end) {
> + prev = xa_store_range(&sr_forward_xa,
> +      cgt->encoding, cgt->end,
> +      xa_mk_value(cgt->tc.val),
> +      GFP_KERNEL);
> + } else {
> + prev = xa_store(&sr_forward_xa, cgt->encoding,
> + xa_mk_value(cgt->tc.val), GFP_KERNEL);
> + if (prev && !xa_is_err(prev)) {
> + ret = -EINVAL;
> + print_nv_trap_error(cgt, "Duplicate CGT", ret);
> + }
> + }
> +
> + if (xa_is_err(prev)) {
> + ret = xa_err(prev);
> + print_nv_trap_error(cgt, "Failed CGT insertion", ret);
> + }
> + }
> +
> + kvm_info("nv: %ld coarse grained trap handlers\n",
> + ARRAY_SIZE(encoding_to_cgt));
> +
> + for (int id = __MULTIPLE_CONTROL_BITS__; id < __COMPLEX_CONDITIONS__; id++) {
> + const enum cgt_group_id *cgids;
> +
> + cgids = coarse_control_combo[id - __MULTIPLE_CONTROL_BITS__];
> +
> + for (int i = 0; cgids[i] != __RESERVED__; i++) {
> + if (cgids[i] >= __MULTIPLE_CONTROL_BITS__) {
> + kvm_err("Recursive MCB %d/%d\n", id, cgids[i]);
> + ret = -EINVAL;
> + }
> + }
> + }
> +
> + if (ret)
> + xa_destroy(&sr_forward_xa);
> +
> + return ret;
> +}
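
One thing worth spelling out here: xa_store_range() installs a single
multi-index entry spanning the whole encoding range, so a later
xa_load() on any encoding inside the range returns the same
trap_config. A hypothetical lookup, using the TIDCP range entry that
appears later in the series:

	/*
	 * Sketch only, not from the patch: once
	 * SR_RANGE_TRAP(sys_reg(3, 0, 11, 0, 0),
	 *               sys_reg(3, 0, 11, 15, 7), CGT_HCR_TIDCP)
	 * has been inserted, any encoding within the range hits it.
	 */
	union trap_config tc = get_trap_config(sys_reg(3, 0, 11, 4, 2));
	/* tc.cgt == CGT_HCR_TIDCP, tc.val != 0 */
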
> +
> +static enum trap_behaviour get_behaviour(struct kvm_vcpu *vcpu,
> + const struct trap_bits *tb)
> +{
> + enum trap_behaviour b = BEHAVE_HANDLE_LOCALLY;
> + u64 val;
> +
> + val = __vcpu_sys_reg(vcpu, tb->index);
> + if ((val & tb->mask) == tb->value)
> + b |= tb->behaviour;
> +
> + return b;
> +}
> +
> +static enum trap_behaviour __compute_trap_behaviour(struct kvm_vcpu *vcpu,
> +    const enum cgt_group_id id,
> +    enum trap_behaviour b)
> +{
> + switch (id) {
> + const enum cgt_group_id *cgids;
> +
> + case __RESERVED__ ... __MULTIPLE_CONTROL_BITS__ - 1:
> + if (likely(id != __RESERVED__))
> + b |= get_behaviour(vcpu, &coarse_trap_bits[id]);
> + break;
> + case __MULTIPLE_CONTROL_BITS__ ... __COMPLEX_CONDITIONS__ - 1:
> + /* Yes, this is recursive. Don't do anything stupid. */
> + cgids = coarse_control_combo[id - __MULTIPLE_CONTROL_BITS__];
> + for (int i = 0; cgids[i] != __RESERVED__; i++)
> + b |= __compute_trap_behaviour(vcpu, cgids[i], b);
> + break;
> + default:
> + if (ARRAY_SIZE(ccc))
> + b |= ccc[id - __COMPLEX_CONDITIONS__](vcpu);
> + break;
> + }
> +
> + return b;
> +}
> +
> +static enum trap_behaviour compute_trap_behaviour(struct kvm_vcpu *vcpu,
> +  const union trap_config tc)
> +{
> + enum trap_behaviour b = BEHAVE_HANDLE_LOCALLY;
> +
> + return __compute_trap_behaviour(vcpu, tc.cgt, b);
> +}
> +
> +bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
> +{
> + union trap_config tc;
> + enum trap_behaviour b;
> + bool is_read;
> + u32 sysreg;
> + u64 esr;
> +
> + if (!vcpu_has_nv(vcpu) || is_hyp_ctxt(vcpu))
> + return false;
> +
> + esr = kvm_vcpu_get_esr(vcpu);
> + sysreg = esr_sys64_to_sysreg(esr);
> + is_read = (esr & ESR_ELx_SYS64_ISS_DIR_MASK) == ESR_ELx_SYS64_ISS_DIR_READ;
> +
> + tc = get_trap_config(sysreg);
> +
> + /*
> + * A value of 0 for the whole entry means that we know nothing
> + * for this sysreg, and that it cannot be re-injected into the
> + * nested hypervisor. In this situation, let's cut it short.
> + *
> + * Note that ultimately, we could also make use of the xarray
> + * to store the index of the sysreg in the local descriptor
> + * array, avoiding another search... Hint, hint...
> + */
> + if (!tc.val)
> + return false;
> +
> + b = compute_trap_behaviour(vcpu, tc);
> +
> + if (((b & BEHAVE_FORWARD_READ) && is_read) ||
> +    ((b & BEHAVE_FORWARD_WRITE) && !is_read))
> + goto inject;
> +
> + return false;
> +
> +inject:
> + trace_kvm_forward_sysreg_trap(vcpu, sysreg, is_read);
> +
> + kvm_inject_nested_sync(vcpu, kvm_vcpu_get_esr(vcpu));
> + return true;
> +}
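
For reference, the sysreg encoding here is recovered straight from the
ESR_ELx ISS of a trapped MSR/MRS (EC=0x18). A standalone decode
sketch, mirroring what esr_sys64_to_sysreg() does (field layout per
the Arm ARM; helper names are mine, and sys_reg() is the usual
arch/arm64/include/asm/sysreg.h packing macro):

	/*
	 * EC=0x18 ISS layout:
	 * Op0[21:20] Op2[19:17] Op1[16:14] CRn[13:10] Rt[9:5] CRm[4:1] Dir[0]
	 * Rt names the GPR and is not part of the sysreg encoding.
	 */
	static bool iss_is_read(u64 esr)
	{
		return esr & 0x1;	/* Dir == 1: MRS (read), 0: MSR (write) */
	}

	static u32 iss_to_sysreg(u64 esr)
	{
		return sys_reg((esr >> 20) & 0x3,	/* Op0 */
			       (esr >> 14) & 0x7,	/* Op1 */
			       (esr >> 10) & 0xf,	/* CRn */
			       (esr >> 1)  & 0xf,	/* CRm */
			       (esr >> 17) & 0x7);	/* Op2 */
	}
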
> +
> static u64 kvm_check_illegal_exception_return(struct kvm_vcpu *vcpu, u64 spsr)
> {
> u64 mode = spsr & PSR_MODE_MASK;
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index f5baaa508926..9556896311db 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -3177,6 +3177,9 @@ int kvm_handle_sys_reg(struct kvm_vcpu *vcpu)
> 
> trace_kvm_handle_sys_reg(esr);
> 
> + if (__check_nv_sr_forward(vcpu))
> + return 1;
> +
> params = esr_sys64_to_params(esr);
> params.regval = vcpu_get_reg(vcpu, Rt);
> 
> @@ -3594,5 +3597,8 @@ int __init kvm_sys_reg_table_init(void)
> if (!first_idreg)
> return -EINVAL;
> 
> + if (kvm_get_mode() == KVM_MODE_NV)
> + return populate_nv_trap_config();
> +
> return 0;
> }
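
A quick usage note: this init path only runs when KVM was booted in NV
mode, i.e. (assuming the documented module parameter spelling) with:

	# kernel command line
	kvm-arm.mode=nested
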
> diff --git a/arch/arm64/kvm/trace_arm.h b/arch/arm64/kvm/trace_arm.h
> index 6ce5c025218d..8ad53104934d 100644
> --- a/arch/arm64/kvm/trace_arm.h
> +++ b/arch/arm64/kvm/trace_arm.h
> @@ -364,6 +364,32 @@ TRACE_EVENT(kvm_inject_nested_exception,
>  __entry->hcr_el2)
> );
> 
> +TRACE_EVENT(kvm_forward_sysreg_trap,
> +    TP_PROTO(struct kvm_vcpu *vcpu, u32 sysreg, bool is_read),
> +    TP_ARGS(vcpu, sysreg, is_read),
> +
> +    TP_STRUCT__entry(
> + __field(u64, pc)
> + __field(u32, sysreg)
> + __field(bool, is_read)
> +    ),
> +
> +    TP_fast_assign(
> + __entry->pc = *vcpu_pc(vcpu);
> + __entry->sysreg = sysreg;
> + __entry->is_read = is_read;
> +    ),
> +
> +    TP_printk("%llx %c (%d,%d,%d,%d,%d)",
> +      __entry->pc,
> +      __entry->is_read ? 'R' : 'W',
> +      sys_reg_Op0(__entry->sysreg),
> +      sys_reg_Op1(__entry->sysreg),
> +      sys_reg_CRn(__entry->sysreg),
> +      sys_reg_CRm(__entry->sysreg),
> +      sys_reg_Op2(__entry->sysreg))
> +);
> +

Reviewed-by: Miguel Luis <miguel.luis@oracle.com>

Thanks
Miguel

> #endif /* _TRACE_ARM_ARM64_KVM_H */
> 
> #undef TRACE_INCLUDE_PATH
> -- 
> 2.34.1
> 



* Re: [PATCH v4 00/28] KVM: arm64: NV trap forwarding infrastructure
  2023-08-15 18:38 [PATCH v4 00/28] KVM: arm64: NV trap forwarding infrastructure Marc Zyngier
                   ` (27 preceding siblings ...)
  2023-08-15 18:39 ` [PATCH v4 28/28] KVM: arm64: nv: Add support for HCRX_EL2 Marc Zyngier
@ 2023-08-17  9:29 ` Marc Zyngier
  28 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2023-08-17  9:29 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvm, Marc Zyngier
  Cc: Mark Brown, Chase Conklin, Jing Zhang, Alexandru Elisei,
	Oliver Upton, Andre Przywara, Suzuki K Poulose, Catalin Marinas,
	Darren Hart, Eric Auger, Zenghui Yu, Will Deacon, Miguel Luis,
	Ganapatrao Kulkarni, James Morse, Mark Rutland

On Tue, 15 Aug 2023 19:38:34 +0100, Marc Zyngier wrote:
> Another week, another version. Change log below.
> 
> I'll drop this into -next now, and see what happens.
> 
> * From v3 [3]:
> 
>   - Renamed trap_group to cgt_group_id (Eric)
> 
> [...]

Applied to next, thanks!

[01/28] arm64: Add missing VA CMO encodings
        commit: 21f74a51373791732baa0d672a604afa76d5718d
[02/28] arm64: Add missing ERX*_EL1 encodings
        commit: 464f2164da7e4cb50faec9d56226b22c9b36cdda
[03/28] arm64: Add missing DC ZVA/GVA/GZVA encodings
        commit: 6ddea24dfd59f0fc78a87df54d428e3a6cf3e11f
[04/28] arm64: Add TLBI operation encodings
        commit: fb1926cccd70a5032448968dfd639187cd894cb7
[05/28] arm64: Add AT operation encodings
        commit: 2b97411fef8ff9dafc862971f08382f780dc5357
[06/28] arm64: Add debug registers affected by HDFGxTR_EL2
        commit: 57596c8f991c9aace47d75b31249b8ec36b3b899
[07/28] arm64: Add missing BRB/CFP/DVP/CPP instructions
        commit: 2b062ed483ebd625b6c6054b9d29d600bd755a86
[08/28] arm64: Add HDFGRTR_EL2 and HDFGWTR_EL2 layouts
        commit: cc24f656f7cf834f384a43fc6fe68ec62730743d
[09/28] arm64: Add feature detection for fine grained traps
        commit: b206a708cbfb352f2191089678ab595d24563011
[10/28] KVM: arm64: Correctly handle ACCDATA_EL1 traps
        commit: 484f86824a3d94c6d9412618dd70b1d5923fff6f
[11/28] KVM: arm64: Add missing HCR_EL2 trap bits
        commit: 3ea84b4fe446319625be64945793b8540ca15f84
[12/28] KVM: arm64: nv: Add FGT registers
        commit: 50d2fe4648c50e7d33fa576f6b078f22ad973670
[13/28] KVM: arm64: Restructure FGT register switching
        commit: e930694e6145eb210c9931914a7801cc61016a82
[14/28] KVM: arm64: nv: Add trap forwarding infrastructure
        commit: e58ec47bf68d2bcaaa97d80cc13aca4bc4abe07b
[15/28] KVM: arm64: nv: Add trap forwarding for HCR_EL2
        commit: d0fc0a2519a6dd906aac448e742958d30b5787ac
[16/28] KVM: arm64: nv: Expose FEAT_EVT to nested guests
        commit: a0b70fb00db83e678f92b8aed0a9a9e4ffcffb82
[17/28] KVM: arm64: nv: Add trap forwarding for MDCR_EL2
        commit: cb31632c44529048c052a2961b3adf62a2c89b17
[18/28] KVM: arm64: nv: Add trap forwarding for CNTHCTL_EL2
        commit: e880bd3363237ed8abbe623d1b49d59d5f6fe0d1
[19/28] KVM: arm64: nv: Add fine grained trap forwarding infrastructure
        commit: 15b4d82d69d7b0e5833b7a023dff3d7bbae5ccfc
[20/28] KVM: arm64: nv: Add trap forwarding for HFGxTR_EL2
        commit: 5a24ea7869857251a83da1512209f76003bc09db
[21/28] KVM: arm64: nv: Add trap forwarding for HFGITR_EL2
        commit: 039f9f12de5fc761d2b32fa072071533aa8cbb3b
[22/28] KVM: arm64: nv: Add trap forwarding for HDFGxTR_EL2
        commit: d0be0b2ede13247c53745d50e2a5993f2b27c802
[23/28] KVM: arm64: nv: Add SVC trap forwarding
        commit: a77b31dce4375be15014b10e8f94a149592ea6b6
[24/28] KVM: arm64: nv: Expand ERET trap forwarding to handle FGT
        commit: ea3b27d8dea081f1693b310322ae71fa75d1875b
[25/28] KVM: arm64: nv: Add switching support for HFGxTR/HDFGxTR
        commit: d4d2dacc7cddc37aaa7c6eed8665d533d1037e1e
[26/28] KVM: arm64: nv: Expose FGT to nested guests
        commit: 0a5d28433ad94cc38ecb3dbb5138b8ae30ffb98a
[27/28] KVM: arm64: Move HCRX_EL2 switch to load/put on VHE systems
        commit: a63cf31139b7f41d468dc8ef63dbf6bae213d960
[28/28] KVM: arm64: nv: Add support for HCRX_EL2
        commit: 03fb54d0aa73cc14e51f6611eb3289e4fec15184

Cheers,

	M.
-- 
Without deviation from the norm, progress is not possible.




* Re: [PATCH v4 15/28] KVM: arm64: nv: Add trap forwarding for HCR_EL2
  2023-08-15 18:38 ` [PATCH v4 15/28] KVM: arm64: nv: Add trap forwarding for HCR_EL2 Marc Zyngier
  2023-08-15 21:37   ` Jing Zhang
@ 2023-08-17 11:05   ` Miguel Luis
  2023-08-21 17:47     ` Marc Zyngier
  1 sibling, 1 reply; 48+ messages in thread
From: Miguel Luis @ 2023-08-17 11:05 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm@lists.linux.dev, kvm@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, Catalin Marinas, Eric Auger,
	Mark Brown, Mark Rutland, Will Deacon, Alexandru Elisei,
	Andre Przywara, Chase Conklin, Ganapatrao Kulkarni, Darren Hart,
	Jing Zhang, James Morse, Suzuki K Poulose, Oliver Upton,
	Zenghui Yu

Hi Marc,

> On 15 Aug 2023, at 18:38, Marc Zyngier <maz@kernel.org> wrote:
> 
> Describe the HCR_EL2 register, and associate it with all the sysregs
> it allows to trap.
> 
> Reviewed-by: Eric Auger <eric.auger@redhat.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
> arch/arm64/kvm/emulate-nested.c | 488 ++++++++++++++++++++++++++++++++
> 1 file changed, 488 insertions(+)
> 
> diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> index d5837ed0077c..975a30ef874a 100644
> --- a/arch/arm64/kvm/emulate-nested.c
> +++ b/arch/arm64/kvm/emulate-nested.c
> @@ -38,12 +38,48 @@ enum cgt_group_id {
> * on their own instead of being part of a combination of
> * trap controls.
> */
> + CGT_HCR_TID1,
> + CGT_HCR_TID2,
> + CGT_HCR_TID3,
> + CGT_HCR_IMO,
> + CGT_HCR_FMO,
> + CGT_HCR_TIDCP,
> + CGT_HCR_TACR,
> + CGT_HCR_TSW,
> + CGT_HCR_TPC,
> + CGT_HCR_TPU,
> + CGT_HCR_TTLB,
> + CGT_HCR_TVM,
> + CGT_HCR_TDZ,
> + CGT_HCR_TRVM,
> + CGT_HCR_TLOR,
> + CGT_HCR_TERR,
> + CGT_HCR_APK,
> + CGT_HCR_NV,
> + CGT_HCR_NV_nNV2,
> + CGT_HCR_NV1_nNV2,
> + CGT_HCR_AT,
> + CGT_HCR_nFIEN,
> + CGT_HCR_TID4,
> + CGT_HCR_TICAB,
> + CGT_HCR_TOCU,
> + CGT_HCR_ENSCXT,
> + CGT_HCR_TTLBIS,
> + CGT_HCR_TTLBOS,
> 
> /*
> * Anything after this point is a combination of coarse trap
> * controls, which must all be evaluated to decide what to do.
> */
> __MULTIPLE_CONTROL_BITS__,
> + CGT_HCR_IMO_FMO = __MULTIPLE_CONTROL_BITS__,
> + CGT_HCR_TID2_TID4,
> + CGT_HCR_TTLB_TTLBIS,
> + CGT_HCR_TTLB_TTLBOS,
> + CGT_HCR_TVM_TRVM,
> + CGT_HCR_TPU_TICAB,
> + CGT_HCR_TPU_TOCU,
> + CGT_HCR_NV1_nNV2_ENSCXT,
> 
> /*
> * Anything after this point requires a callback evaluating a
> @@ -56,6 +92,174 @@ enum cgt_group_id {
> };
> 
> static const struct trap_bits coarse_trap_bits[] = {
> + [CGT_HCR_TID1] = {
> + .index = HCR_EL2,
> + .value = HCR_TID1,
> + .mask = HCR_TID1,
> + .behaviour = BEHAVE_FORWARD_READ,
> + },
> + [CGT_HCR_TID2] = {
> + .index = HCR_EL2,
> + .value = HCR_TID2,
> + .mask = HCR_TID2,
> + .behaviour = BEHAVE_FORWARD_ANY,
> + },
> + [CGT_HCR_TID3] = {
> + .index = HCR_EL2,
> + .value = HCR_TID3,
> + .mask = HCR_TID3,
> + .behaviour = BEHAVE_FORWARD_READ,
> + },
> + [CGT_HCR_IMO] = {
> + .index = HCR_EL2,
> + .value = HCR_IMO,
> + .mask = HCR_IMO,
> + .behaviour = BEHAVE_FORWARD_WRITE,
> + },
> + [CGT_HCR_FMO] = {
> + .index = HCR_EL2,
> + .value = HCR_FMO,
> + .mask = HCR_FMO,
> + .behaviour = BEHAVE_FORWARD_WRITE,
> + },
> + [CGT_HCR_TIDCP] = {
> + .index = HCR_EL2,
> + .value = HCR_TIDCP,
> + .mask = HCR_TIDCP,
> + .behaviour = BEHAVE_FORWARD_ANY,
> + },
> + [CGT_HCR_TACR] = {
> + .index = HCR_EL2,
> + .value = HCR_TACR,
> + .mask = HCR_TACR,
> + .behaviour = BEHAVE_FORWARD_ANY,
> + },
> + [CGT_HCR_TSW] = {
> + .index = HCR_EL2,
> + .value = HCR_TSW,
> + .mask = HCR_TSW,
> + .behaviour = BEHAVE_FORWARD_ANY,
> + },
> + [CGT_HCR_TPC] = { /* Also called TCPC when FEAT_DPB is implemented */
> + .index = HCR_EL2,
> + .value = HCR_TPC,
> + .mask = HCR_TPC,
> + .behaviour = BEHAVE_FORWARD_ANY,
> + },
> + [CGT_HCR_TPU] = {
> + .index = HCR_EL2,
> + .value = HCR_TPU,
> + .mask = HCR_TPU,
> + .behaviour = BEHAVE_FORWARD_ANY,
> + },
> + [CGT_HCR_TTLB] = {
> + .index = HCR_EL2,
> + .value = HCR_TTLB,
> + .mask = HCR_TTLB,
> + .behaviour = BEHAVE_FORWARD_ANY,
> + },
> + [CGT_HCR_TVM] = {
> + .index = HCR_EL2,
> + .value = HCR_TVM,
> + .mask = HCR_TVM,
> + .behaviour = BEHAVE_FORWARD_WRITE,
> + },
> + [CGT_HCR_TDZ] = {
> + .index = HCR_EL2,
> + .value = HCR_TDZ,
> + .mask = HCR_TDZ,
> + .behaviour = BEHAVE_FORWARD_ANY,
> + },
> + [CGT_HCR_TRVM] = {
> + .index = HCR_EL2,
> + .value = HCR_TRVM,
> + .mask = HCR_TRVM,
> + .behaviour = BEHAVE_FORWARD_READ,
> + },
> + [CGT_HCR_TLOR] = {
> + .index = HCR_EL2,
> + .value = HCR_TLOR,
> + .mask = HCR_TLOR,
> + .behaviour = BEHAVE_FORWARD_ANY,
> + },
> + [CGT_HCR_TERR] = {
> + .index = HCR_EL2,
> + .value = HCR_TERR,
> + .mask = HCR_TERR,
> + .behaviour = BEHAVE_FORWARD_ANY,
> + },
> + [CGT_HCR_APK] = {
> + .index = HCR_EL2,
> + .value = 0,
> + .mask = HCR_APK,
> + .behaviour = BEHAVE_FORWARD_ANY,
> + },
> + [CGT_HCR_NV] = {
> + .index = HCR_EL2,
> + .value = HCR_NV,
> + .mask = HCR_NV,
> + .behaviour = BEHAVE_FORWARD_ANY,
> + },
> + [CGT_HCR_NV_nNV2] = {
> + .index = HCR_EL2,
> + .value = HCR_NV,
> + .mask = HCR_NV | HCR_NV2,
> + .behaviour = BEHAVE_FORWARD_ANY,
> + },
> + [CGT_HCR_NV1_nNV2] = {
> + .index = HCR_EL2,
> + .value = HCR_NV | HCR_NV1,
> + .mask = HCR_NV | HCR_NV1 | HCR_NV2,
> + .behaviour = BEHAVE_FORWARD_ANY,
> + },
> + [CGT_HCR_AT] = {
> + .index = HCR_EL2,
> + .value = HCR_AT,
> + .mask = HCR_AT,
> + .behaviour = BEHAVE_FORWARD_ANY,
> + },
> + [CGT_HCR_nFIEN] = {
> + .index = HCR_EL2,
> + .value = 0,
> + .mask = HCR_FIEN,
> + .behaviour = BEHAVE_FORWARD_ANY,
> + },
> + [CGT_HCR_TID4] = {
> + .index = HCR_EL2,
> + .value = HCR_TID4,
> + .mask = HCR_TID4,
> + .behaviour = BEHAVE_FORWARD_ANY,
> + },
> + [CGT_HCR_TICAB] = {
> + .index = HCR_EL2,
> + .value = HCR_TICAB,
> + .mask = HCR_TICAB,
> + .behaviour = BEHAVE_FORWARD_ANY,
> + },
> + [CGT_HCR_TOCU] = {
> + .index = HCR_EL2,
> + .value = HCR_TOCU,
> + .mask = HCR_TOCU,
> + .behaviour = BEHAVE_FORWARD_ANY,
> + },
> + [CGT_HCR_ENSCXT] = {
> + .index = HCR_EL2,
> + .value = 0,
> + .mask = HCR_ENSCXT,
> + .behaviour = BEHAVE_FORWARD_ANY,
> + },
> + [CGT_HCR_TTLBIS] = {
> + .index = HCR_EL2,
> + .value = HCR_TTLBIS,
> + .mask = HCR_TTLBIS,
> + .behaviour = BEHAVE_FORWARD_ANY,
> + },
> + [CGT_HCR_TTLBOS] = {
> + .index = HCR_EL2,
> + .value = HCR_TTLBOS,
> + .mask = HCR_TTLBOS,
> + .behaviour = BEHAVE_FORWARD_ANY,
> + },
> };
> 
> #define MCB(id, ...) \
> @@ -65,6 +269,14 @@ static const struct trap_bits coarse_trap_bits[] = {
> }
> 
> static const enum cgt_group_id *coarse_control_combo[] = {
> + MCB(CGT_HCR_IMO_FMO, CGT_HCR_IMO, CGT_HCR_FMO),
> + MCB(CGT_HCR_TID2_TID4, CGT_HCR_TID2, CGT_HCR_TID4),
> + MCB(CGT_HCR_TTLB_TTLBIS, CGT_HCR_TTLB, CGT_HCR_TTLBIS),
> + MCB(CGT_HCR_TTLB_TTLBOS, CGT_HCR_TTLB, CGT_HCR_TTLBOS),
> + MCB(CGT_HCR_TVM_TRVM, CGT_HCR_TVM, CGT_HCR_TRVM),
> + MCB(CGT_HCR_TPU_TICAB, CGT_HCR_TPU, CGT_HCR_TICAB),
> + MCB(CGT_HCR_TPU_TOCU, CGT_HCR_TPU, CGT_HCR_TOCU),
> + MCB(CGT_HCR_NV1_nNV2_ENSCXT, CGT_HCR_NV1_nNV2, CGT_HCR_ENSCXT),
> };
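
A worked example of how these combinations evaluate (my reading of the
tables, not part of the patch): SYS_SCTLR_EL1 maps to
CGT_HCR_TVM_TRVM, which expands to { CGT_HCR_TVM, CGT_HCR_TRVM }.

	/*
	 * Suppose the guest hypervisor has set HCR_EL2.TRVM only:
	 *
	 *   CGT_HCR_TVM:  (hcr & HCR_TVM)  != HCR_TVM  -> nothing added
	 *   CGT_HCR_TRVM: (hcr & HCR_TRVM) == HCR_TRVM -> BEHAVE_FORWARD_READ
	 *
	 * An MRS from SCTLR_EL1 at vEL1 is then forwarded to the guest
	 * hypervisor, while an MSR keeps being handled by KVM.
	 */
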
> 
> typedef enum trap_behaviour (*complex_condition_check)(struct kvm_vcpu *);
> @@ -121,6 +333,282 @@ struct encoding_to_trap_config {
>  * re-injected in the nested hypervisor.
>  */
> static const struct encoding_to_trap_config encoding_to_cgt[] __initconst = {
> + SR_TRAP(SYS_REVIDR_EL1, CGT_HCR_TID1),
> + SR_TRAP(SYS_AIDR_EL1, CGT_HCR_TID1),
> + SR_TRAP(SYS_SMIDR_EL1, CGT_HCR_TID1),
> + SR_TRAP(SYS_CTR_EL0, CGT_HCR_TID2),
> + SR_TRAP(SYS_CCSIDR_EL1, CGT_HCR_TID2_TID4),
> + SR_TRAP(SYS_CCSIDR2_EL1, CGT_HCR_TID2_TID4),
> + SR_TRAP(SYS_CLIDR_EL1, CGT_HCR_TID2_TID4),
> + SR_TRAP(SYS_CSSELR_EL1, CGT_HCR_TID2_TID4),
> + SR_RANGE_TRAP(SYS_ID_PFR0_EL1,
> +      sys_reg(3, 0, 0, 7, 7), CGT_HCR_TID3),
> + SR_TRAP(SYS_ICC_SGI0R_EL1, CGT_HCR_IMO_FMO),
> + SR_TRAP(SYS_ICC_ASGI1R_EL1, CGT_HCR_IMO_FMO),
> + SR_TRAP(SYS_ICC_SGI1R_EL1, CGT_HCR_IMO_FMO),
> + SR_RANGE_TRAP(sys_reg(3, 0, 11, 0, 0),
> +      sys_reg(3, 0, 11, 15, 7), CGT_HCR_TIDCP),
> + SR_RANGE_TRAP(sys_reg(3, 1, 11, 0, 0),
> +      sys_reg(3, 1, 11, 15, 7), CGT_HCR_TIDCP),
> + SR_RANGE_TRAP(sys_reg(3, 2, 11, 0, 0),
> +      sys_reg(3, 2, 11, 15, 7), CGT_HCR_TIDCP),
> + SR_RANGE_TRAP(sys_reg(3, 3, 11, 0, 0),
> +      sys_reg(3, 3, 11, 15, 7), CGT_HCR_TIDCP),
> + SR_RANGE_TRAP(sys_reg(3, 4, 11, 0, 0),
> +      sys_reg(3, 4, 11, 15, 7), CGT_HCR_TIDCP),
> + SR_RANGE_TRAP(sys_reg(3, 5, 11, 0, 0),
> +      sys_reg(3, 5, 11, 15, 7), CGT_HCR_TIDCP),
> + SR_RANGE_TRAP(sys_reg(3, 6, 11, 0, 0),
> +      sys_reg(3, 6, 11, 15, 7), CGT_HCR_TIDCP),
> + SR_RANGE_TRAP(sys_reg(3, 7, 11, 0, 0),
> +      sys_reg(3, 7, 11, 15, 7), CGT_HCR_TIDCP),
> + SR_RANGE_TRAP(sys_reg(3, 0, 15, 0, 0),
> +      sys_reg(3, 0, 15, 15, 7), CGT_HCR_TIDCP),
> + SR_RANGE_TRAP(sys_reg(3, 1, 15, 0, 0),
> +      sys_reg(3, 1, 15, 15, 7), CGT_HCR_TIDCP),
> + SR_RANGE_TRAP(sys_reg(3, 2, 15, 0, 0),
> +      sys_reg(3, 2, 15, 15, 7), CGT_HCR_TIDCP),
> + SR_RANGE_TRAP(sys_reg(3, 3, 15, 0, 0),
> +      sys_reg(3, 3, 15, 15, 7), CGT_HCR_TIDCP),
> + SR_RANGE_TRAP(sys_reg(3, 4, 15, 0, 0),
> +      sys_reg(3, 4, 15, 15, 7), CGT_HCR_TIDCP),
> + SR_RANGE_TRAP(sys_reg(3, 5, 15, 0, 0),
> +      sys_reg(3, 5, 15, 15, 7), CGT_HCR_TIDCP),
> + SR_RANGE_TRAP(sys_reg(3, 6, 15, 0, 0),
> +      sys_reg(3, 6, 15, 15, 7), CGT_HCR_TIDCP),
> + SR_RANGE_TRAP(sys_reg(3, 7, 15, 0, 0),
> +      sys_reg(3, 7, 15, 15, 7), CGT_HCR_TIDCP),
> + SR_TRAP(SYS_ACTLR_EL1, CGT_HCR_TACR),
> + SR_TRAP(SYS_DC_ISW, CGT_HCR_TSW),
> + SR_TRAP(SYS_DC_CSW, CGT_HCR_TSW),
> + SR_TRAP(SYS_DC_CISW, CGT_HCR_TSW),
> + SR_TRAP(SYS_DC_IGSW, CGT_HCR_TSW),
> + SR_TRAP(SYS_DC_IGDSW, CGT_HCR_TSW),
> + SR_TRAP(SYS_DC_CGSW, CGT_HCR_TSW),
> + SR_TRAP(SYS_DC_CGDSW, CGT_HCR_TSW),
> + SR_TRAP(SYS_DC_CIGSW, CGT_HCR_TSW),
> + SR_TRAP(SYS_DC_CIGDSW, CGT_HCR_TSW),
> + SR_TRAP(SYS_DC_CIVAC, CGT_HCR_TPC),
> + SR_TRAP(SYS_DC_CVAC, CGT_HCR_TPC),
> + SR_TRAP(SYS_DC_CVAP, CGT_HCR_TPC),
> + SR_TRAP(SYS_DC_CVADP, CGT_HCR_TPC),
> + SR_TRAP(SYS_DC_IVAC, CGT_HCR_TPC),
> + SR_TRAP(SYS_DC_CIGVAC, CGT_HCR_TPC),
> + SR_TRAP(SYS_DC_CIGDVAC, CGT_HCR_TPC),
> + SR_TRAP(SYS_DC_IGVAC, CGT_HCR_TPC),
> + SR_TRAP(SYS_DC_IGDVAC, CGT_HCR_TPC),
> + SR_TRAP(SYS_DC_CGVAC, CGT_HCR_TPC),
> + SR_TRAP(SYS_DC_CGDVAC, CGT_HCR_TPC),
> + SR_TRAP(SYS_DC_CGVAP, CGT_HCR_TPC),
> + SR_TRAP(SYS_DC_CGDVAP, CGT_HCR_TPC),
> + SR_TRAP(SYS_DC_CGVADP, CGT_HCR_TPC),
> + SR_TRAP(SYS_DC_CGDVADP, CGT_HCR_TPC),
> + SR_TRAP(SYS_IC_IVAU, CGT_HCR_TPU_TOCU),
> + SR_TRAP(SYS_IC_IALLU, CGT_HCR_TPU_TOCU),
> + SR_TRAP(SYS_IC_IALLUIS, CGT_HCR_TPU_TICAB),
> + SR_TRAP(SYS_DC_CVAU, CGT_HCR_TPU_TOCU),
> + SR_TRAP(OP_TLBI_RVAE1, CGT_HCR_TTLB),
> + SR_TRAP(OP_TLBI_RVAAE1, CGT_HCR_TTLB),
> + SR_TRAP(OP_TLBI_RVALE1, CGT_HCR_TTLB),
> + SR_TRAP(OP_TLBI_RVAALE1, CGT_HCR_TTLB),
> + SR_TRAP(OP_TLBI_VMALLE1, CGT_HCR_TTLB),
> + SR_TRAP(OP_TLBI_VAE1, CGT_HCR_TTLB),
> + SR_TRAP(OP_TLBI_ASIDE1, CGT_HCR_TTLB),
> + SR_TRAP(OP_TLBI_VAAE1, CGT_HCR_TTLB),
> + SR_TRAP(OP_TLBI_VALE1, CGT_HCR_TTLB),
> + SR_TRAP(OP_TLBI_VAALE1, CGT_HCR_TTLB),
> + SR_TRAP(OP_TLBI_RVAE1NXS, CGT_HCR_TTLB),
> + SR_TRAP(OP_TLBI_RVAAE1NXS, CGT_HCR_TTLB),
> + SR_TRAP(OP_TLBI_RVALE1NXS, CGT_HCR_TTLB),
> + SR_TRAP(OP_TLBI_RVAALE1NXS, CGT_HCR_TTLB),
> + SR_TRAP(OP_TLBI_VMALLE1NXS, CGT_HCR_TTLB),
> + SR_TRAP(OP_TLBI_VAE1NXS, CGT_HCR_TTLB),
> + SR_TRAP(OP_TLBI_ASIDE1NXS, CGT_HCR_TTLB),
> + SR_TRAP(OP_TLBI_VAAE1NXS, CGT_HCR_TTLB),
> + SR_TRAP(OP_TLBI_VALE1NXS, CGT_HCR_TTLB),
> + SR_TRAP(OP_TLBI_VAALE1NXS, CGT_HCR_TTLB),
> + SR_TRAP(OP_TLBI_RVAE1IS, CGT_HCR_TTLB_TTLBIS),
> + SR_TRAP(OP_TLBI_RVAAE1IS, CGT_HCR_TTLB_TTLBIS),
> + SR_TRAP(OP_TLBI_RVALE1IS, CGT_HCR_TTLB_TTLBIS),
> + SR_TRAP(OP_TLBI_RVAALE1IS, CGT_HCR_TTLB_TTLBIS),
> + SR_TRAP(OP_TLBI_VMALLE1IS, CGT_HCR_TTLB_TTLBIS),
> + SR_TRAP(OP_TLBI_VAE1IS, CGT_HCR_TTLB_TTLBIS),
> + SR_TRAP(OP_TLBI_ASIDE1IS, CGT_HCR_TTLB_TTLBIS),
> + SR_TRAP(OP_TLBI_VAAE1IS, CGT_HCR_TTLB_TTLBIS),
> + SR_TRAP(OP_TLBI_VALE1IS, CGT_HCR_TTLB_TTLBIS),
> + SR_TRAP(OP_TLBI_VAALE1IS, CGT_HCR_TTLB_TTLBIS),
> + SR_TRAP(OP_TLBI_RVAE1ISNXS, CGT_HCR_TTLB_TTLBIS),
> + SR_TRAP(OP_TLBI_RVAAE1ISNXS, CGT_HCR_TTLB_TTLBIS),
> + SR_TRAP(OP_TLBI_RVALE1ISNXS, CGT_HCR_TTLB_TTLBIS),
> + SR_TRAP(OP_TLBI_RVAALE1ISNXS, CGT_HCR_TTLB_TTLBIS),
> + SR_TRAP(OP_TLBI_VMALLE1ISNXS, CGT_HCR_TTLB_TTLBIS),
> + SR_TRAP(OP_TLBI_VAE1ISNXS, CGT_HCR_TTLB_TTLBIS),
> + SR_TRAP(OP_TLBI_ASIDE1ISNXS, CGT_HCR_TTLB_TTLBIS),
> + SR_TRAP(OP_TLBI_VAAE1ISNXS, CGT_HCR_TTLB_TTLBIS),
> + SR_TRAP(OP_TLBI_VALE1ISNXS, CGT_HCR_TTLB_TTLBIS),
> + SR_TRAP(OP_TLBI_VAALE1ISNXS, CGT_HCR_TTLB_TTLBIS),
> + SR_TRAP(OP_TLBI_VMALLE1OS, CGT_HCR_TTLB_TTLBOS),
> + SR_TRAP(OP_TLBI_VAE1OS, CGT_HCR_TTLB_TTLBOS),
> + SR_TRAP(OP_TLBI_ASIDE1OS, CGT_HCR_TTLB_TTLBOS),
> + SR_TRAP(OP_TLBI_VAAE1OS, CGT_HCR_TTLB_TTLBOS),
> + SR_TRAP(OP_TLBI_VALE1OS, CGT_HCR_TTLB_TTLBOS),
> + SR_TRAP(OP_TLBI_VAALE1OS, CGT_HCR_TTLB_TTLBOS),
> + SR_TRAP(OP_TLBI_RVAE1OS, CGT_HCR_TTLB_TTLBOS),
> + SR_TRAP(OP_TLBI_RVAAE1OS, CGT_HCR_TTLB_TTLBOS),
> + SR_TRAP(OP_TLBI_RVALE1OS, CGT_HCR_TTLB_TTLBOS),
> + SR_TRAP(OP_TLBI_RVAALE1OS, CGT_HCR_TTLB_TTLBOS),
> + SR_TRAP(OP_TLBI_VMALLE1OSNXS, CGT_HCR_TTLB_TTLBOS),
> + SR_TRAP(OP_TLBI_VAE1OSNXS, CGT_HCR_TTLB_TTLBOS),
> + SR_TRAP(OP_TLBI_ASIDE1OSNXS, CGT_HCR_TTLB_TTLBOS),
> + SR_TRAP(OP_TLBI_VAAE1OSNXS, CGT_HCR_TTLB_TTLBOS),
> + SR_TRAP(OP_TLBI_VALE1OSNXS, CGT_HCR_TTLB_TTLBOS),
> + SR_TRAP(OP_TLBI_VAALE1OSNXS, CGT_HCR_TTLB_TTLBOS),
> + SR_TRAP(OP_TLBI_RVAE1OSNXS, CGT_HCR_TTLB_TTLBOS),
> + SR_TRAP(OP_TLBI_RVAAE1OSNXS, CGT_HCR_TTLB_TTLBOS),
> + SR_TRAP(OP_TLBI_RVALE1OSNXS, CGT_HCR_TTLB_TTLBOS),
> + SR_TRAP(OP_TLBI_RVAALE1OSNXS, CGT_HCR_TTLB_TTLBOS),
> + SR_TRAP(SYS_SCTLR_EL1, CGT_HCR_TVM_TRVM),
> + SR_TRAP(SYS_TTBR0_EL1, CGT_HCR_TVM_TRVM),
> + SR_TRAP(SYS_TTBR1_EL1, CGT_HCR_TVM_TRVM),
> + SR_TRAP(SYS_TCR_EL1, CGT_HCR_TVM_TRVM),
> + SR_TRAP(SYS_ESR_EL1, CGT_HCR_TVM_TRVM),
> + SR_TRAP(SYS_FAR_EL1, CGT_HCR_TVM_TRVM),
> + SR_TRAP(SYS_AFSR0_EL1, CGT_HCR_TVM_TRVM),
> + SR_TRAP(SYS_AFSR1_EL1, CGT_HCR_TVM_TRVM),
> + SR_TRAP(SYS_MAIR_EL1, CGT_HCR_TVM_TRVM),
> + SR_TRAP(SYS_AMAIR_EL1, CGT_HCR_TVM_TRVM),
> + SR_TRAP(SYS_CONTEXTIDR_EL1, CGT_HCR_TVM_TRVM),
> + SR_TRAP(SYS_DC_ZVA, CGT_HCR_TDZ),
> + SR_TRAP(SYS_DC_GVA, CGT_HCR_TDZ),
> + SR_TRAP(SYS_DC_GZVA, CGT_HCR_TDZ),
> + SR_TRAP(SYS_LORSA_EL1, CGT_HCR_TLOR),
> + SR_TRAP(SYS_LOREA_EL1, CGT_HCR_TLOR),
> + SR_TRAP(SYS_LORN_EL1, CGT_HCR_TLOR),
> + SR_TRAP(SYS_LORC_EL1, CGT_HCR_TLOR),
> + SR_TRAP(SYS_LORID_EL1, CGT_HCR_TLOR),
> + SR_TRAP(SYS_ERRIDR_EL1, CGT_HCR_TERR),
> + SR_TRAP(SYS_ERRSELR_EL1, CGT_HCR_TERR),
> + SR_TRAP(SYS_ERXADDR_EL1, CGT_HCR_TERR),
> + SR_TRAP(SYS_ERXCTLR_EL1, CGT_HCR_TERR),
> + SR_TRAP(SYS_ERXFR_EL1, CGT_HCR_TERR),
> + SR_TRAP(SYS_ERXMISC0_EL1, CGT_HCR_TERR),
> + SR_TRAP(SYS_ERXMISC1_EL1, CGT_HCR_TERR),
> + SR_TRAP(SYS_ERXMISC2_EL1, CGT_HCR_TERR),
> + SR_TRAP(SYS_ERXMISC3_EL1, CGT_HCR_TERR),
> + SR_TRAP(SYS_ERXSTATUS_EL1, CGT_HCR_TERR),
> + SR_TRAP(SYS_APIAKEYLO_EL1, CGT_HCR_APK),
> + SR_TRAP(SYS_APIAKEYHI_EL1, CGT_HCR_APK),
> + SR_TRAP(SYS_APIBKEYLO_EL1, CGT_HCR_APK),
> + SR_TRAP(SYS_APIBKEYHI_EL1, CGT_HCR_APK),
> + SR_TRAP(SYS_APDAKEYLO_EL1, CGT_HCR_APK),
> + SR_TRAP(SYS_APDAKEYHI_EL1, CGT_HCR_APK),
> + SR_TRAP(SYS_APDBKEYLO_EL1, CGT_HCR_APK),
> + SR_TRAP(SYS_APDBKEYHI_EL1, CGT_HCR_APK),
> + SR_TRAP(SYS_APGAKEYLO_EL1, CGT_HCR_APK),
> + SR_TRAP(SYS_APGAKEYHI_EL1, CGT_HCR_APK),
> + /* All _EL2 registers */
> + SR_RANGE_TRAP(sys_reg(3, 4, 0, 0, 0),
> +      sys_reg(3, 4, 3, 15, 7), CGT_HCR_NV),
> + /* Skip the SP_EL1 encoding... */
> + SR_RANGE_TRAP(sys_reg(3, 4, 4, 1, 1),
> +      sys_reg(3, 4, 10, 15, 7), CGT_HCR_NV),
> + SR_RANGE_TRAP(sys_reg(3, 4, 12, 0, 0),
> +      sys_reg(3, 4, 14, 15, 7), CGT_HCR_NV),

Should SPSR_EL2 and ELR_EL2 be considered also?

Thanks,
Miguel

> + /* All _EL02, _EL12 registers */
> + SR_RANGE_TRAP(sys_reg(3, 5, 0, 0, 0),
> +      sys_reg(3, 5, 10, 15, 7), CGT_HCR_NV),
> + SR_RANGE_TRAP(sys_reg(3, 5, 12, 0, 0),
> +      sys_reg(3, 5, 14, 15, 7), CGT_HCR_NV),
> + SR_TRAP(OP_AT_S1E2R, CGT_HCR_NV),
> + SR_TRAP(OP_AT_S1E2W, CGT_HCR_NV),
> + SR_TRAP(OP_AT_S12E1R, CGT_HCR_NV),
> + SR_TRAP(OP_AT_S12E1W, CGT_HCR_NV),
> + SR_TRAP(OP_AT_S12E0R, CGT_HCR_NV),
> + SR_TRAP(OP_AT_S12E0W, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_IPAS2E1, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_RIPAS2E1, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_IPAS2LE1, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_RIPAS2LE1, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_RVAE2, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_RVALE2, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_ALLE2, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_VAE2, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_ALLE1, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_VALE2, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_VMALLS12E1, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_IPAS2E1NXS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_RIPAS2E1NXS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_IPAS2LE1NXS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_RIPAS2LE1NXS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_RVAE2NXS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_RVALE2NXS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_ALLE2NXS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_VAE2NXS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_ALLE1NXS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_VALE2NXS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_VMALLS12E1NXS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_IPAS2E1IS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_RIPAS2E1IS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_IPAS2LE1IS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_RIPAS2LE1IS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_RVAE2IS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_RVALE2IS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_ALLE2IS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_VAE2IS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_ALLE1IS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_VALE2IS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_VMALLS12E1IS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_IPAS2E1ISNXS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_RIPAS2E1ISNXS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_IPAS2LE1ISNXS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_RIPAS2LE1ISNXS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_RVAE2ISNXS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_RVALE2ISNXS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_ALLE2ISNXS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_VAE2ISNXS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_ALLE1ISNXS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_VALE2ISNXS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_VMALLS12E1ISNXS,CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_ALLE2OS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_VAE2OS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_ALLE1OS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_VALE2OS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_VMALLS12E1OS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_IPAS2E1OS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_RIPAS2E1OS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_IPAS2LE1OS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_RIPAS2LE1OS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_RVAE2OS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_RVALE2OS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_ALLE2OSNXS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_VAE2OSNXS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_ALLE1OSNXS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_VALE2OSNXS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_VMALLS12E1OSNXS,CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_IPAS2E1OSNXS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_RIPAS2E1OSNXS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_IPAS2LE1OSNXS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_RIPAS2LE1OSNXS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_RVAE2OSNXS, CGT_HCR_NV),
> + SR_TRAP(OP_TLBI_RVALE2OSNXS, CGT_HCR_NV),
> + SR_TRAP(OP_CPP_RCTX, CGT_HCR_NV),
> + SR_TRAP(OP_DVP_RCTX, CGT_HCR_NV),
> + SR_TRAP(OP_CFP_RCTX, CGT_HCR_NV),
> + SR_TRAP(SYS_SP_EL1, CGT_HCR_NV_nNV2),
> + SR_TRAP(SYS_VBAR_EL1, CGT_HCR_NV1_nNV2),
> + SR_TRAP(SYS_ELR_EL1, CGT_HCR_NV1_nNV2),
> + SR_TRAP(SYS_SPSR_EL1, CGT_HCR_NV1_nNV2),
> + SR_TRAP(SYS_SCXTNUM_EL1, CGT_HCR_NV1_nNV2_ENSCXT),
> + SR_TRAP(SYS_SCXTNUM_EL0, CGT_HCR_ENSCXT),
> + SR_TRAP(OP_AT_S1E1R, CGT_HCR_AT),
> + SR_TRAP(OP_AT_S1E1W, CGT_HCR_AT),
> + SR_TRAP(OP_AT_S1E0R, CGT_HCR_AT),
> + SR_TRAP(OP_AT_S1E0W, CGT_HCR_AT),
> + SR_TRAP(OP_AT_S1E1RP, CGT_HCR_AT),
> + SR_TRAP(OP_AT_S1E1WP, CGT_HCR_AT),
> + SR_TRAP(SYS_ERXPFGF_EL1, CGT_HCR_nFIEN),
> + SR_TRAP(SYS_ERXPFGCTL_EL1, CGT_HCR_nFIEN),
> + SR_TRAP(SYS_ERXPFGCDN_EL1, CGT_HCR_nFIEN),
> };
> 
> static DEFINE_XARRAY(sr_forward_xa);
> -- 
> 2.34.1
> 
> 



* Re: [PATCH v4 15/28] KVM: arm64: nv: Add trap forwarding for HCR_EL2
  2023-08-17 11:05   ` Miguel Luis
@ 2023-08-21 17:47     ` Marc Zyngier
  2023-08-22 11:12       ` Miguel Luis
  0 siblings, 1 reply; 48+ messages in thread
From: Marc Zyngier @ 2023-08-21 17:47 UTC (permalink / raw)
  To: Miguel Luis
  Cc: kvmarm@lists.linux.dev, kvm@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, Catalin Marinas, Eric Auger,
	Mark Brown, Mark Rutland, Will Deacon, Alexandru Elisei,
	Andre Przywara, Chase Conklin, Ganapatrao Kulkarni, Darren Hart,
	Jing Zhang, James Morse, Suzuki K Poulose, Oliver Upton,
	Zenghui Yu

On Thu, 17 Aug 2023 12:05:49 +0100,
Miguel Luis <miguel.luis@oracle.com> wrote:
> 
> Hi Marc,
> 
> > On 15 Aug 2023, at 18:38, Marc Zyngier <maz@kernel.org> wrote:
> > 
> > Describe the HCR_EL2 register, and associate it with all the sysregs
> > it allows to trap.
> > 
> > Reviewed-by: Eric Auger <eric.auger@redhat.com>
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > [...]
> > + /* All _EL2 registers */
> > + SR_RANGE_TRAP(sys_reg(3, 4, 0, 0, 0),
> > +      sys_reg(3, 4, 3, 15, 7), CGT_HCR_NV),
> > + /* Skip the SP_EL1 encoding... */
> > + SR_RANGE_TRAP(sys_reg(3, 4, 4, 1, 1),
> > +      sys_reg(3, 4, 10, 15, 7), CGT_HCR_NV),
> > + SR_RANGE_TRAP(sys_reg(3, 4, 12, 0, 0),
> > +      sys_reg(3, 4, 14, 15, 7), CGT_HCR_NV),
> 
> Should SPSR_EL2 and ELR_EL2 be considered also?

Ah crap, these are outside of the expected range. It doesn't really
matter yet as we are still a long way away from recursive
virtualisation, but we might as well address that now.

I may also eventually have a more fine grained approach to these
registers, as the ranges tend to bleed over a number of EL1 registers
that aren't affected by NV.

In the meantime, I'll add the patch below to the patch stack.

Thanks,

	M.

From 9b650e785e3e59ef23a5dcb8f58be45cdd97b1f2 Mon Sep 17 00:00:00 2001
From: Marc Zyngier <maz@kernel.org>
Date: Mon, 21 Aug 2023 18:44:15 +0100
Subject: [PATCH] KVM: arm64: nv: Add trap description for SPSR_EL2 and ELR_EL2

Having carved a hole for SP_EL1, we are now missing the entries
for SPSR_EL2 and ELR_EL2. Add them back.

Reported-by: Miguel Luis <miguel.luis@oracle.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/emulate-nested.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 44d9300e95f5..b5637ae4149f 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -651,6 +651,8 @@ static const struct encoding_to_trap_config encoding_to_cgt[] __initconst = {
 	SR_RANGE_TRAP(sys_reg(3, 4, 0, 0, 0),
 		      sys_reg(3, 4, 3, 15, 7), CGT_HCR_NV),
 	/* Skip the SP_EL1 encoding... */
+	SR_TRAP(SYS_SPSR_EL2,		CGT_HCR_NV),
+	SR_TRAP(SYS_ELR_EL2,		CGT_HCR_NV),
 	SR_RANGE_TRAP(sys_reg(3, 4, 4, 1, 1),
 		      sys_reg(3, 4, 10, 15, 7), CGT_HCR_NV),
 	SR_RANGE_TRAP(sys_reg(3, 4, 12, 0, 0),
-- 
2.34.1
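
For reference, the hole between sys_reg(3, 4, 3, 15, 7) and
sys_reg(3, 4, 4, 1, 1) contains three allocated encodings (my
annotation, from the Arm sysreg encodings):

	/*
	 * SPSR_EL2: Op0=3, Op1=4, CRn=4, CRm=0, Op2=0
	 * ELR_EL2:  Op0=3, Op1=4, CRn=4, CRm=0, Op2=1
	 * SP_EL1:   Op0=3, Op1=4, CRn=4, CRm=1, Op2=0
	 */

so skipping SP_EL1 with a range split also dropped the two registers
the patch above re-adds.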


-- 
Without deviation from the norm, progress is not possible.


* Re: [PATCH v4 15/28] KVM: arm64: nv: Add trap forwarding for HCR_EL2
  2023-08-21 17:47     ` Marc Zyngier
@ 2023-08-22 11:12       ` Miguel Luis
  0 siblings, 0 replies; 48+ messages in thread
From: Miguel Luis @ 2023-08-22 11:12 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm@lists.linux.dev, kvm@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, Catalin Marinas, Eric Auger,
	Mark Brown, Mark Rutland, Will Deacon, Alexandru Elisei,
	Andre Przywara, Chase Conklin, Ganapatrao Kulkarni, Darren Hart,
	Jing Zhang, James Morse, Suzuki K Poulose, Oliver Upton,
	Zenghui Yu

Hi Marc,

> On 21 Aug 2023, at 17:47, Marc Zyngier <maz@kernel.org> wrote:
> 
> On Thu, 17 Aug 2023 12:05:49 +0100,
> Miguel Luis <miguel.luis@oracle.com> wrote:
>> 
>> Hi Marc,
>> 
>>> On 15 Aug 2023, at 18:38, Marc Zyngier <maz@kernel.org> wrote:
>>> 
>>> Describe the HCR_EL2 register, and associate it with all the sysregs
>>> it allows to trap.
>>> 
>>> Reviewed-by: Eric Auger <eric.auger@redhat.com>
>>> Signed-off-by: Marc Zyngier <maz@kernel.org>
>>> [...]
>>> + SR_TRAP(OP_TLBI_RVAAE1ISNXS, CGT_HCR_TTLB_TTLBIS),
>>> + SR_TRAP(OP_TLBI_RVALE1ISNXS, CGT_HCR_TTLB_TTLBIS),
>>> + SR_TRAP(OP_TLBI_RVAALE1ISNXS, CGT_HCR_TTLB_TTLBIS),
>>> + SR_TRAP(OP_TLBI_VMALLE1ISNXS, CGT_HCR_TTLB_TTLBIS),
>>> + SR_TRAP(OP_TLBI_VAE1ISNXS, CGT_HCR_TTLB_TTLBIS),
>>> + SR_TRAP(OP_TLBI_ASIDE1ISNXS, CGT_HCR_TTLB_TTLBIS),
>>> + SR_TRAP(OP_TLBI_VAAE1ISNXS, CGT_HCR_TTLB_TTLBIS),
>>> + SR_TRAP(OP_TLBI_VALE1ISNXS, CGT_HCR_TTLB_TTLBIS),
>>> + SR_TRAP(OP_TLBI_VAALE1ISNXS, CGT_HCR_TTLB_TTLBIS),
>>> + SR_TRAP(OP_TLBI_VMALLE1OS, CGT_HCR_TTLB_TTLBOS),
>>> + SR_TRAP(OP_TLBI_VAE1OS, CGT_HCR_TTLB_TTLBOS),
>>> + SR_TRAP(OP_TLBI_ASIDE1OS, CGT_HCR_TTLB_TTLBOS),
>>> + SR_TRAP(OP_TLBI_VAAE1OS, CGT_HCR_TTLB_TTLBOS),
>>> + SR_TRAP(OP_TLBI_VALE1OS, CGT_HCR_TTLB_TTLBOS),
>>> + SR_TRAP(OP_TLBI_VAALE1OS, CGT_HCR_TTLB_TTLBOS),
>>> + SR_TRAP(OP_TLBI_RVAE1OS, CGT_HCR_TTLB_TTLBOS),
>>> + SR_TRAP(OP_TLBI_RVAAE1OS, CGT_HCR_TTLB_TTLBOS),
>>> + SR_TRAP(OP_TLBI_RVALE1OS, CGT_HCR_TTLB_TTLBOS),
>>> + SR_TRAP(OP_TLBI_RVAALE1OS, CGT_HCR_TTLB_TTLBOS),
>>> + SR_TRAP(OP_TLBI_VMALLE1OSNXS, CGT_HCR_TTLB_TTLBOS),
>>> + SR_TRAP(OP_TLBI_VAE1OSNXS, CGT_HCR_TTLB_TTLBOS),
>>> + SR_TRAP(OP_TLBI_ASIDE1OSNXS, CGT_HCR_TTLB_TTLBOS),
>>> + SR_TRAP(OP_TLBI_VAAE1OSNXS, CGT_HCR_TTLB_TTLBOS),
>>> + SR_TRAP(OP_TLBI_VALE1OSNXS, CGT_HCR_TTLB_TTLBOS),
>>> + SR_TRAP(OP_TLBI_VAALE1OSNXS, CGT_HCR_TTLB_TTLBOS),
>>> + SR_TRAP(OP_TLBI_RVAE1OSNXS, CGT_HCR_TTLB_TTLBOS),
>>> + SR_TRAP(OP_TLBI_RVAAE1OSNXS, CGT_HCR_TTLB_TTLBOS),
>>> + SR_TRAP(OP_TLBI_RVALE1OSNXS, CGT_HCR_TTLB_TTLBOS),
>>> + SR_TRAP(OP_TLBI_RVAALE1OSNXS, CGT_HCR_TTLB_TTLBOS),
>>> + SR_TRAP(SYS_SCTLR_EL1, CGT_HCR_TVM_TRVM),
>>> + SR_TRAP(SYS_TTBR0_EL1, CGT_HCR_TVM_TRVM),
>>> + SR_TRAP(SYS_TTBR1_EL1, CGT_HCR_TVM_TRVM),
>>> + SR_TRAP(SYS_TCR_EL1, CGT_HCR_TVM_TRVM),
>>> + SR_TRAP(SYS_ESR_EL1, CGT_HCR_TVM_TRVM),
>>> + SR_TRAP(SYS_FAR_EL1, CGT_HCR_TVM_TRVM),
>>> + SR_TRAP(SYS_AFSR0_EL1, CGT_HCR_TVM_TRVM),
>>> + SR_TRAP(SYS_AFSR1_EL1, CGT_HCR_TVM_TRVM),
>>> + SR_TRAP(SYS_MAIR_EL1, CGT_HCR_TVM_TRVM),
>>> + SR_TRAP(SYS_AMAIR_EL1, CGT_HCR_TVM_TRVM),
>>> + SR_TRAP(SYS_CONTEXTIDR_EL1, CGT_HCR_TVM_TRVM),
>>> + SR_TRAP(SYS_DC_ZVA, CGT_HCR_TDZ),
>>> + SR_TRAP(SYS_DC_GVA, CGT_HCR_TDZ),
>>> + SR_TRAP(SYS_DC_GZVA, CGT_HCR_TDZ),
>>> + SR_TRAP(SYS_LORSA_EL1, CGT_HCR_TLOR),
>>> + SR_TRAP(SYS_LOREA_EL1, CGT_HCR_TLOR),
>>> + SR_TRAP(SYS_LORN_EL1, CGT_HCR_TLOR),
>>> + SR_TRAP(SYS_LORC_EL1, CGT_HCR_TLOR),
>>> + SR_TRAP(SYS_LORID_EL1, CGT_HCR_TLOR),
>>> + SR_TRAP(SYS_ERRIDR_EL1, CGT_HCR_TERR),
>>> + SR_TRAP(SYS_ERRSELR_EL1, CGT_HCR_TERR),
>>> + SR_TRAP(SYS_ERXADDR_EL1, CGT_HCR_TERR),
>>> + SR_TRAP(SYS_ERXCTLR_EL1, CGT_HCR_TERR),
>>> + SR_TRAP(SYS_ERXFR_EL1, CGT_HCR_TERR),
>>> + SR_TRAP(SYS_ERXMISC0_EL1, CGT_HCR_TERR),
>>> + SR_TRAP(SYS_ERXMISC1_EL1, CGT_HCR_TERR),
>>> + SR_TRAP(SYS_ERXMISC2_EL1, CGT_HCR_TERR),
>>> + SR_TRAP(SYS_ERXMISC3_EL1, CGT_HCR_TERR),
>>> + SR_TRAP(SYS_ERXSTATUS_EL1, CGT_HCR_TERR),
>>> + SR_TRAP(SYS_APIAKEYLO_EL1, CGT_HCR_APK),
>>> + SR_TRAP(SYS_APIAKEYHI_EL1, CGT_HCR_APK),
>>> + SR_TRAP(SYS_APIBKEYLO_EL1, CGT_HCR_APK),
>>> + SR_TRAP(SYS_APIBKEYHI_EL1, CGT_HCR_APK),
>>> + SR_TRAP(SYS_APDAKEYLO_EL1, CGT_HCR_APK),
>>> + SR_TRAP(SYS_APDAKEYHI_EL1, CGT_HCR_APK),
>>> + SR_TRAP(SYS_APDBKEYLO_EL1, CGT_HCR_APK),
>>> + SR_TRAP(SYS_APDBKEYHI_EL1, CGT_HCR_APK),
>>> + SR_TRAP(SYS_APGAKEYLO_EL1, CGT_HCR_APK),
>>> + SR_TRAP(SYS_APGAKEYHI_EL1, CGT_HCR_APK),
>>> + /* All _EL2 registers */
>>> + SR_RANGE_TRAP(sys_reg(3, 4, 0, 0, 0),
>>> +      sys_reg(3, 4, 3, 15, 7), CGT_HCR_NV),
>>> + /* Skip the SP_EL1 encoding... */
>>> + SR_RANGE_TRAP(sys_reg(3, 4, 4, 1, 1),
>>> +      sys_reg(3, 4, 10, 15, 7), CGT_HCR_NV),
>>> + SR_RANGE_TRAP(sys_reg(3, 4, 12, 0, 0),
>>> +      sys_reg(3, 4, 14, 15, 7), CGT_HCR_NV),
>> 
>> Should SPSR_EL2 and ELR_EL2 also be considered?
> 
> Ah crap, these fall outside the expected range. It doesn't really
> matter yet, as we are still a long way from recursive virtualisation,
> but we might as well address it now.
> 
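> For reference, a rough sketch of the encodings involved, in the usual
> sys_reg(op0, op1, CRn, CRm, op2) notation:
> 
> 	SYS_SPSR_EL2	sys_reg(3, 4, 4, 0, 0)
> 	SYS_ELR_EL2	sys_reg(3, 4, 4, 0, 1)
> 	SYS_SP_EL1	sys_reg(3, 4, 4, 1, 0)
> 
> so the hole between sys_reg(3, 4, 3, 15, 7) and sys_reg(3, 4, 4, 1, 1)
> swallows not just the SP_EL1 encoding, but the whole CRm==0 block
> (SPSR_EL2 and ELR_EL2) with it.
> 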
> I may also eventually have a more fine grained approach to these
> registers, as the ranges tend to bleed over a number of EL1 registers
> that aren't affected by NV.

I suspected as much; thanks for confirming it.

> 
> In the meantime, I'll add the patch below to the patch stack.
> 
> Thanks,
> 
> M.
> 
> From 9b650e785e3e59ef23a5dcb8f58be45cdd97b1f2 Mon Sep 17 00:00:00 2001
> From: Marc Zyngier <maz@kernel.org>
> Date: Mon, 21 Aug 2023 18:44:15 +0100
> Subject: [PATCH] KVM: arm64: nv: Add trap description for SPSR_EL2 and ELR_EL2
> 
> Having carved a hole for SP_EL1, we are now missing the entries
> for SPSR_EL2 and ELR_EL2. Add them back.
> 
> Reported-by: Miguel Luis <miguel.luis@oracle.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
> arch/arm64/kvm/emulate-nested.c | 2 ++
> 1 file changed, 2 insertions(+)
> 
> diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> index 44d9300e95f5..b5637ae4149f 100644
> --- a/arch/arm64/kvm/emulate-nested.c
> +++ b/arch/arm64/kvm/emulate-nested.c
> @@ -651,6 +651,8 @@ static const struct encoding_to_trap_config encoding_to_cgt[] __initconst = {
> SR_RANGE_TRAP(sys_reg(3, 4, 0, 0, 0),
>       sys_reg(3, 4, 3, 15, 7), CGT_HCR_NV),
> /* Skip the SP_EL1 encoding... */
> + SR_TRAP(SYS_SPSR_EL2, CGT_HCR_NV),
> + SR_TRAP(SYS_ELR_EL2, CGT_HCR_NV),

Thanks

Miguel

> SR_RANGE_TRAP(sys_reg(3, 4, 4, 1, 1),
>       sys_reg(3, 4, 10, 15, 7), CGT_HCR_NV),
> SR_RANGE_TRAP(sys_reg(3, 4, 12, 0, 0),
> -- 
> 2.34.1
> 
> 
> -- 
> Without deviation from the norm, progress is not possible.


