public inbox for linux-s390@vger.kernel.org
 help / color / mirror / Atom feed
* [RFC 00/10] KVM: s390: spring cleanup
@ 2026-03-16 16:23 Janosch Frank
  2026-03-16 16:23 ` [RFC 01/10] KVM: s390: diag258: Pass absolute address to kvm_is_gpa_in_memslot() Janosch Frank
                   ` (9 more replies)
  0 siblings, 10 replies; 39+ messages in thread
From: Janosch Frank @ 2026-03-16 16:23 UTC (permalink / raw)
  To: kvm; +Cc: linux-s390, imbrenda, borntraeger, akrowiak

While looking into the new gmap code I also had a look at our
handling of addresses as a whole. I found that we have a lot of
unsigned long variables which could be gpa_t or gva_t. There's code
where we could use the provided helper functions for calculations
like gfn shifting, and in general we have a lot of magic constants
left which could be named constants. Some of these constants already
exist in our code base anyway...

This series tries to clean up some of these problems.
I have more commits that also introduce new types for real and logical
addresses, but these have to wait until I see a real benefit.

Janosch Frank (10):
  KVM: s390: diag258: Pass absolute address to kvm_is_gpa_in_memslot()
  KVM: s390: Consolidate lpswe variants
  KVM: s390: Convert shifts to gpa_to_gfn()
  KVM: s390: kvm_s390_real_to_abs() should return gpa_t
  KVM: s390: vsie: Cleanup and fixup of crycb handling
  KVM: s390: Rework lowcore access functions
  KVM: s390: Use gpa_t and gva_t in gaccess files
  KVM: s390: Use gpa_t in priv.c
  KVM: s390: Use gpa_t in pv.c
  KVM: s390: Cleanup kvm_s390_store_status_unloaded

 arch/s390/include/asm/kvm_host.h |  6 +++
 arch/s390/kvm/diag.c             |  2 +-
 arch/s390/kvm/gaccess.c          | 20 ++++-----
 arch/s390/kvm/gaccess.h          | 49 ++++++++++++++-------
 arch/s390/kvm/interrupt.c        |  4 +-
 arch/s390/kvm/kvm-s390.c         | 24 +++++++----
 arch/s390/kvm/kvm-s390.h         | 12 +++---
 arch/s390/kvm/priv.c             | 73 +++++++++++++++-----------------
 arch/s390/kvm/pv.c               | 12 +++---
 arch/s390/kvm/vsie.c             | 50 +++++++++++-----------
 10 files changed, 138 insertions(+), 114 deletions(-)

-- 
2.51.0


^ permalink raw reply	[flat|nested] 39+ messages in thread

* [RFC 01/10] KVM: s390: diag258: Pass absolute address to kvm_is_gpa_in_memslot()
  2026-03-16 16:23 [RFC 00/10] KVM: s390: spring cleanup Janosch Frank
@ 2026-03-16 16:23 ` Janosch Frank
  2026-03-16 18:34   ` Christian Borntraeger
                     ` (2 more replies)
  2026-03-16 16:23 ` [RFC 02/10] KVM: s390: Consolidate lpswe variants Janosch Frank
                   ` (8 subsequent siblings)
  9 siblings, 3 replies; 39+ messages in thread
From: Janosch Frank @ 2026-03-16 16:23 UTC (permalink / raw)
  To: kvm; +Cc: linux-s390, imbrenda, borntraeger, akrowiak

The token address is a real address, so it needs to be translated
into an absolute address before it can be used as a true gpa.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Fixes: 3c038e6be0e29 ("KVM: async_pf: Async page fault support on s390")
---
 arch/s390/kvm/diag.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/s390/kvm/diag.c b/arch/s390/kvm/diag.c
index d89d1c381522..51ba4dcc3905 100644
--- a/arch/s390/kvm/diag.c
+++ b/arch/s390/kvm/diag.c
@@ -122,7 +122,7 @@ static int __diag_page_ref_service(struct kvm_vcpu *vcpu)
 		    parm.token_addr & 7 || parm.zarch != 0x8000000000000000ULL)
 			return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
 
-		if (!kvm_is_gpa_in_memslot(vcpu->kvm, parm.token_addr))
+		if (!kvm_is_gpa_in_memslot(vcpu->kvm, kvm_s390_real_to_abs(vcpu, parm.token_addr)))
 			return kvm_s390_inject_program_int(vcpu, PGM_ADDRESSING);
 
 		vcpu->arch.pfault_token = parm.token_addr;
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [RFC 02/10] KVM: s390: Consolidate lpswe variants
  2026-03-16 16:23 [RFC 00/10] KVM: s390: spring cleanup Janosch Frank
  2026-03-16 16:23 ` [RFC 01/10] KVM: s390: diag258: Pass absolute address to kvm_is_gpa_in_memslot() Janosch Frank
@ 2026-03-16 16:23 ` Janosch Frank
  2026-03-16 18:47   ` Christian Borntraeger
  2026-03-16 16:23 ` [RFC 03/10] KVM: s390: Convert shifts to gpa_to_gfn() Janosch Frank
                   ` (7 subsequent siblings)
  9 siblings, 1 reply; 39+ messages in thread
From: Janosch Frank @ 2026-03-16 16:23 UTC (permalink / raw)
  To: kvm; +Cc: linux-s390, imbrenda, borntraeger, akrowiak

LPSWE and LPSWEY differ only in instruction format, not in
functionality, so their handlers can share a common helper.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
---
 arch/s390/kvm/priv.c | 46 ++++++++++++++++++++------------------------
 1 file changed, 21 insertions(+), 25 deletions(-)

diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c
index a3250ad83a8e..a90fc0b4fd96 100644
--- a/arch/s390/kvm/priv.c
+++ b/arch/s390/kvm/priv.c
@@ -740,11 +740,28 @@ int kvm_s390_handle_lpsw(struct kvm_vcpu *vcpu)
 	return 0;
 }
 
-static int handle_lpswe(struct kvm_vcpu *vcpu)
+static int _handle_lpswe_y(struct kvm_vcpu *vcpu, u64 addr, u8 ar)
 {
 	psw_t new_psw;
-	u64 addr;
 	int rc;
+
+	if (addr & 7)
+		return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
+
+	rc = read_guest(vcpu, addr, ar, &new_psw, sizeof(new_psw));
+	if (rc)
+		return kvm_s390_inject_prog_cond(vcpu, rc);
+
+	vcpu->arch.sie_block->gpsw = new_psw;
+	if (!is_valid_psw(&vcpu->arch.sie_block->gpsw))
+		return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
+
+	return 0;
+}
+
+static int handle_lpswe(struct kvm_vcpu *vcpu)
+{
+	u64 addr;
 	u8 ar;
 
 	vcpu->stat.instruction_lpswe++;
@@ -753,22 +770,12 @@ static int handle_lpswe(struct kvm_vcpu *vcpu)
 		return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP);
 
 	addr = kvm_s390_get_base_disp_s(vcpu, &ar);
-	if (addr & 7)
-		return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
-	rc = read_guest(vcpu, addr, ar, &new_psw, sizeof(new_psw));
-	if (rc)
-		return kvm_s390_inject_prog_cond(vcpu, rc);
-	vcpu->arch.sie_block->gpsw = new_psw;
-	if (!is_valid_psw(&vcpu->arch.sie_block->gpsw))
-		return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
-	return 0;
+	return _handle_lpswe_y(vcpu, addr, ar);
 }
 
 static int handle_lpswey(struct kvm_vcpu *vcpu)
 {
-	psw_t new_psw;
 	u64 addr;
-	int rc;
 	u8 ar;
 
 	vcpu->stat.instruction_lpswey++;
@@ -780,18 +787,7 @@ static int handle_lpswey(struct kvm_vcpu *vcpu)
 		return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP);
 
 	addr = kvm_s390_get_base_disp_siy(vcpu, &ar);
-	if (addr & 7)
-		return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
-
-	rc = read_guest(vcpu, addr, ar, &new_psw, sizeof(new_psw));
-	if (rc)
-		return kvm_s390_inject_prog_cond(vcpu, rc);
-
-	vcpu->arch.sie_block->gpsw = new_psw;
-	if (!is_valid_psw(&vcpu->arch.sie_block->gpsw))
-		return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
-
-	return 0;
+	return _handle_lpswe_y(vcpu, addr, ar);
 }
 
 static int handle_stidp(struct kvm_vcpu *vcpu)
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [RFC 03/10] KVM: s390: Convert shifts to gpa_to_gfn()
  2026-03-16 16:23 [RFC 00/10] KVM: s390: spring cleanup Janosch Frank
  2026-03-16 16:23 ` [RFC 01/10] KVM: s390: diag258: Pass absolute address to kvm_is_gpa_in_memslot() Janosch Frank
  2026-03-16 16:23 ` [RFC 02/10] KVM: s390: Consolidate lpswe variants Janosch Frank
@ 2026-03-16 16:23 ` Janosch Frank
  2026-03-16 18:49   ` Christian Borntraeger
                     ` (2 more replies)
  2026-03-16 16:23 ` [RFC 04/10] KVM: s390: kvm_s390_real_to_abs() should return gpa_t Janosch Frank
                   ` (6 subsequent siblings)
  9 siblings, 3 replies; 39+ messages in thread
From: Janosch Frank @ 2026-03-16 16:23 UTC (permalink / raw)
  To: kvm; +Cc: linux-s390, imbrenda, borntraeger, akrowiak

Not only do we get rid of the ugly shifts, but we also gain more
static type checking since gpa_to_gfn() returns gfn_t.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
---
 arch/s390/kvm/interrupt.c | 4 ++--
 arch/s390/kvm/priv.c      | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
index 03fb477c7527..1b771276415c 100644
--- a/arch/s390/kvm/interrupt.c
+++ b/arch/s390/kvm/interrupt.c
@@ -2771,13 +2771,13 @@ static int adapter_indicators_set(struct kvm *kvm,
 	bit = get_ind_bit(adapter_int->ind_addr,
 			  adapter_int->ind_offset, adapter->swap);
 	set_bit(bit, map);
-	mark_page_dirty(kvm, adapter_int->ind_gaddr >> PAGE_SHIFT);
+	mark_page_dirty(kvm, gpa_to_gfn(adapter_int->ind_gaddr));
 	set_page_dirty_lock(ind_page);
 	map = page_address(summary_page);
 	bit = get_ind_bit(adapter_int->summary_addr,
 			  adapter_int->summary_offset, adapter->swap);
 	summary_set = test_and_set_bit(bit, map);
-	mark_page_dirty(kvm, adapter_int->summary_gaddr >> PAGE_SHIFT);
+	mark_page_dirty(kvm, gpa_to_gfn(adapter_int->summary_gaddr));
 	set_page_dirty_lock(summary_page);
 	srcu_read_unlock(&kvm->srcu, idx);
 
diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c
index a90fc0b4fd96..780186eb6037 100644
--- a/arch/s390/kvm/priv.c
+++ b/arch/s390/kvm/priv.c
@@ -1151,7 +1151,7 @@ static inline int __do_essa(struct kvm_vcpu *vcpu, const int orc)
 	 */
 
 	kvm_s390_get_regs_rre(vcpu, &r1, &r2);
-	gfn = vcpu->run->s.regs.gprs[r2] >> PAGE_SHIFT;
+	gfn = gpa_to_gfn(vcpu->run->s.regs.gprs[r2]);
 	entries = (vcpu->arch.sie_block->cbrlo & ~PAGE_MASK) >> 3;
 
 	nappended = dat_perform_essa(vcpu->arch.gmap->asce, gfn, orc, &state, &dirtied);
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [RFC 04/10] KVM: s390: kvm_s390_real_to_abs() should return gpa_t
  2026-03-16 16:23 [RFC 00/10] KVM: s390: spring cleanup Janosch Frank
                   ` (2 preceding siblings ...)
  2026-03-16 16:23 ` [RFC 03/10] KVM: s390: Convert shifts to gpa_to_gfn() Janosch Frank
@ 2026-03-16 16:23 ` Janosch Frank
  2026-03-16 18:53   ` Christian Borntraeger
                     ` (2 more replies)
  2026-03-16 16:23 ` [RFC 05/10] KVM: s390: vsie: Cleanup and fixup of crycb handling Janosch Frank
                   ` (5 subsequent siblings)
  9 siblings, 3 replies; 39+ messages in thread
From: Janosch Frank @ 2026-03-16 16:23 UTC (permalink / raw)
  To: kvm; +Cc: linux-s390, imbrenda, borntraeger, akrowiak

An absolute address is by definition a guest physical address, so the
function should return gpa_t.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
---
 arch/s390/kvm/gaccess.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/s390/kvm/gaccess.h b/arch/s390/kvm/gaccess.h
index b5385cec60f4..ee346b607a07 100644
--- a/arch/s390/kvm/gaccess.h
+++ b/arch/s390/kvm/gaccess.h
@@ -24,7 +24,7 @@
  * Returns the guest absolute address that corresponds to the passed guest real
  * address @gra of by applying the given prefix.
  */
-static inline unsigned long _kvm_s390_real_to_abs(u32 prefix, unsigned long gra)
+static inline gpa_t _kvm_s390_real_to_abs(u32 prefix, unsigned long gra)
 {
 	if (gra < 2 * PAGE_SIZE)
 		gra += prefix;
@@ -41,8 +41,8 @@ static inline unsigned long _kvm_s390_real_to_abs(u32 prefix, unsigned long gra)
  * Returns the guest absolute address that corresponds to the passed guest real
  * address @gra of a virtual guest cpu by applying its prefix.
  */
-static inline unsigned long kvm_s390_real_to_abs(struct kvm_vcpu *vcpu,
-						 unsigned long gra)
+static inline gpa_t kvm_s390_real_to_abs(struct kvm_vcpu *vcpu,
+					 unsigned long gra)
 {
 	return _kvm_s390_real_to_abs(kvm_s390_get_prefix(vcpu), gra);
 }
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [RFC 05/10] KVM: s390: vsie: Cleanup and fixup of crycb handling
  2026-03-16 16:23 [RFC 00/10] KVM: s390: spring cleanup Janosch Frank
                   ` (3 preceding siblings ...)
  2026-03-16 16:23 ` [RFC 04/10] KVM: s390: kvm_s390_real_to_abs() should return gpa_t Janosch Frank
@ 2026-03-16 16:23 ` Janosch Frank
  2026-03-18 14:13   ` Christoph Schlameuss
                     ` (2 more replies)
  2026-03-16 16:23 ` [RFC 06/10] KVM: s390: Rework lowcore access functions Janosch Frank
                   ` (4 subsequent siblings)
  9 siblings, 3 replies; 39+ messages in thread
From: Janosch Frank @ 2026-03-16 16:23 UTC (permalink / raw)
  To: kvm; +Cc: linux-s390, imbrenda, borntraeger, akrowiak

The crycbd contains an absolute address, so we need to use gpa_t and
read_guest_abs() instead of read_guest_real().

We don't want to copy the reserved fields into the host, so let's
define size constants that only include the masks and ignore the
reserved fields.

While we're at it, replace magic constants with compiler-backed
constants.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
---
 arch/s390/include/asm/kvm_host.h |  6 ++++
 arch/s390/kvm/vsie.c             | 50 +++++++++++++++-----------------
 2 files changed, 30 insertions(+), 26 deletions(-)

diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
index 64a50f0862aa..52827db2fa97 100644
--- a/arch/s390/include/asm/kvm_host.h
+++ b/arch/s390/include/asm/kvm_host.h
@@ -516,6 +516,8 @@ struct kvm_s390_crypto {
 	__u8 apie;
 };
 
+#define APCB_NUM_MASKS 3
+
 #define APCB0_MASK_SIZE 1
 struct kvm_s390_apcb0 {
 	__u64 apm[APCB0_MASK_SIZE];		/* 0x0000 */
@@ -540,6 +542,10 @@ struct kvm_s390_crypto_cb {
 	struct kvm_s390_apcb1 apcb1;		/* 0x0080 */
 };
 
+#define APCB_KEY_MASK_SIZE \
+	(sizeof_field(struct kvm_s390_crypto_cb, dea_wrapping_key_mask) + \
+	 sizeof_field(struct kvm_s390_crypto_cb, aes_wrapping_key_mask))
+
 struct kvm_s390_gisa {
 	union {
 		struct { /* common to all formats */
diff --git a/arch/s390/kvm/vsie.c b/arch/s390/kvm/vsie.c
index c0d36afd4023..13480d65c59d 100644
--- a/arch/s390/kvm/vsie.c
+++ b/arch/s390/kvm/vsie.c
@@ -155,17 +155,17 @@ static int prepare_cpuflags(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
 	atomic_set(&scb_s->cpuflags, newflags);
 	return 0;
 }
+
 /* Copy to APCB FORMAT1 from APCB FORMAT0 */
 static int setup_apcb10(struct kvm_vcpu *vcpu, struct kvm_s390_apcb1 *apcb_s,
-			unsigned long crycb_gpa, struct kvm_s390_apcb1 *apcb_h)
+			gpa_t crycb_gpa, struct kvm_s390_apcb1 *apcb_h)
 {
 	struct kvm_s390_apcb0 tmp;
-	unsigned long apcb_gpa;
+	gpa_t apcb_gpa;
 
 	apcb_gpa = crycb_gpa + offsetof(struct kvm_s390_crypto_cb, apcb0);
 
-	if (read_guest_real(vcpu, apcb_gpa, &tmp,
-			    sizeof(struct kvm_s390_apcb0)))
+	if (read_guest_abs(vcpu, apcb_gpa, &tmp, sizeof(tmp)))
 		return -EFAULT;
 
 	apcb_s->apm[0] = apcb_h->apm[0] & tmp.apm[0];
@@ -173,7 +173,6 @@ static int setup_apcb10(struct kvm_vcpu *vcpu, struct kvm_s390_apcb1 *apcb_s,
 	apcb_s->adm[0] = apcb_h->adm[0] & tmp.adm[0] & 0xffff000000000000UL;
 
 	return 0;
-
 }
 
 /**
@@ -186,18 +185,18 @@ static int setup_apcb10(struct kvm_vcpu *vcpu, struct kvm_s390_apcb1 *apcb_s,
  * Returns 0 and -EFAULT on error reading guest apcb
  */
 static int setup_apcb00(struct kvm_vcpu *vcpu, unsigned long *apcb_s,
-			unsigned long crycb_gpa, unsigned long *apcb_h)
+			gpa_t crycb_gpa, unsigned long *apcb_h)
 {
-	unsigned long apcb_gpa;
+	/* sizeof() would include reserved fields which we do not need/want */
+	unsigned long len = APCB_NUM_MASKS * APCB0_MASK_SIZE * sizeof(u64);
+	gpa_t apcb_gpa;
 
 	apcb_gpa = crycb_gpa + offsetof(struct kvm_s390_crypto_cb, apcb0);
 
-	if (read_guest_real(vcpu, apcb_gpa, apcb_s,
-			    sizeof(struct kvm_s390_apcb0)))
+	if (read_guest_abs(vcpu, apcb_gpa, apcb_s, len))
 		return -EFAULT;
 
-	bitmap_and(apcb_s, apcb_s, apcb_h,
-		   BITS_PER_BYTE * sizeof(struct kvm_s390_apcb0));
+	bitmap_and(apcb_s, apcb_s, apcb_h, BITS_PER_BYTE * len);
 
 	return 0;
 }
@@ -212,19 +211,18 @@ static int setup_apcb00(struct kvm_vcpu *vcpu, unsigned long *apcb_s,
  * Returns 0 and -EFAULT on error reading guest apcb
  */
 static int setup_apcb11(struct kvm_vcpu *vcpu, unsigned long *apcb_s,
-			unsigned long crycb_gpa,
-			unsigned long *apcb_h)
+			gpa_t crycb_gpa, unsigned long *apcb_h)
 {
-	unsigned long apcb_gpa;
+	/* sizeof() would include reserved fields which we do not need/want */
+	unsigned long len = APCB_NUM_MASKS * APCB1_MASK_SIZE * sizeof(u64);
+	gpa_t apcb_gpa;
 
 	apcb_gpa = crycb_gpa + offsetof(struct kvm_s390_crypto_cb, apcb1);
 
-	if (read_guest_real(vcpu, apcb_gpa, apcb_s,
-			    sizeof(struct kvm_s390_apcb1)))
+	if (read_guest_abs(vcpu, apcb_gpa, apcb_s, len))
 		return -EFAULT;
 
-	bitmap_and(apcb_s, apcb_s, apcb_h,
-		   BITS_PER_BYTE * sizeof(struct kvm_s390_apcb1));
+	bitmap_and(apcb_s, apcb_s, apcb_h, BITS_PER_BYTE * len);
 
 	return 0;
 }
@@ -244,8 +242,7 @@ static int setup_apcb11(struct kvm_vcpu *vcpu, unsigned long *apcb_s,
  * Return 0 or an error number if the guest and host crycb are incompatible.
  */
 static int setup_apcb(struct kvm_vcpu *vcpu, struct kvm_s390_crypto_cb *crycb_s,
-	       const u32 crycb_gpa,
-	       struct kvm_s390_crypto_cb *crycb_h,
+	       const gpa_t crycb_gpa, struct kvm_s390_crypto_cb *crycb_h,
 	       int fmt_o, int fmt_h)
 {
 	switch (fmt_o) {
@@ -315,7 +312,8 @@ static int shadow_crycb(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
 	struct kvm_s390_sie_block *scb_s = &vsie_page->scb_s;
 	struct kvm_s390_sie_block *scb_o = vsie_page->scb_o;
 	const uint32_t crycbd_o = READ_ONCE(scb_o->crycbd);
-	const u32 crycb_addr = crycbd_o & 0x7ffffff8U;
+	/* CRYCB origin is a 31 bit absolute address with a bit of masking */
+	const gpa_t crycb_addr = crycbd_o & 0x7ffffff8U;
 	unsigned long *b1, *b2;
 	u8 ecb3_flags;
 	u32 ecd_flags;
@@ -359,8 +357,9 @@ static int shadow_crycb(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
 		goto end;
 
 	/* copy only the wrapping keys */
-	if (read_guest_real(vcpu, crycb_addr + 72,
-			    vsie_page->crycb.dea_wrapping_key_mask, 56))
+	if (read_guest_abs(vcpu,
+			   crycb_addr + offsetof(struct kvm_s390_crypto_cb, dea_wrapping_key_mask),
+			   vsie_page->crycb.dea_wrapping_key_mask, APCB_KEY_MASK_SIZE))
 		return set_validity_icpt(scb_s, 0x0035U);
 
 	scb_s->ecb3 |= ecb3_flags;
@@ -368,10 +367,9 @@ static int shadow_crycb(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
 
 	/* xor both blocks in one run */
 	b1 = (unsigned long *) vsie_page->crycb.dea_wrapping_key_mask;
-	b2 = (unsigned long *)
-			    vcpu->kvm->arch.crypto.crycb->dea_wrapping_key_mask;
+	b2 = (unsigned long *) vcpu->kvm->arch.crypto.crycb->dea_wrapping_key_mask;
 	/* as 56%8 == 0, bitmap_xor won't overwrite any data */
-	bitmap_xor(b1, b1, b2, BITS_PER_BYTE * 56);
+	bitmap_xor(b1, b1, b2, BITS_PER_BYTE * APCB_KEY_MASK_SIZE);
 end:
 	switch (ret) {
 	case -EINVAL:
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [RFC 06/10] KVM: s390: Rework lowcore access functions
  2026-03-16 16:23 [RFC 00/10] KVM: s390: spring cleanup Janosch Frank
                   ` (4 preceding siblings ...)
  2026-03-16 16:23 ` [RFC 05/10] KVM: s390: vsie: Cleanup and fixup of crycb handling Janosch Frank
@ 2026-03-16 16:23 ` Janosch Frank
  2026-03-18 14:25   ` Claudio Imbrenda
  2026-03-23  9:11   ` Christoph Schlameuss
  2026-03-16 16:23 ` [RFC 07/10] KVM: s390: Use gpa_t and gva_t in gaccess files Janosch Frank
                   ` (3 subsequent siblings)
  9 siblings, 2 replies; 39+ messages in thread
From: Janosch Frank @ 2026-03-16 16:23 UTC (permalink / raw)
  To: kvm; +Cc: linux-s390, imbrenda, borntraeger, akrowiak

These functions effectively always take constant offsets, never full
addresses. Therefore make it clear that we're accessing offsets and
sprinkle in compile-time checks for more safety.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
---
 arch/s390/kvm/gaccess.h | 32 ++++++++++++++++++++++----------
 1 file changed, 22 insertions(+), 10 deletions(-)

diff --git a/arch/s390/kvm/gaccess.h b/arch/s390/kvm/gaccess.h
index ee346b607a07..086da7b040b5 100644
--- a/arch/s390/kvm/gaccess.h
+++ b/arch/s390/kvm/gaccess.h
@@ -89,6 +89,13 @@ static inline unsigned long kvm_s390_logical_to_effective(struct kvm_vcpu *vcpu,
 	return _kvm_s390_logical_to_effective(&vcpu->arch.sie_block->gpsw, ga);
 }
 
+static inline gpa_t lc_addr_from_offset(struct kvm_vcpu *vcpu, unsigned int off)
+{
+	gpa_t addr = kvm_s390_get_prefix(vcpu);
+
+	return addr + off;
+}
+
 /*
  * put_guest_lc, read_guest_lc and write_guest_lc are guest access functions
  * which shall only be used to access the lowcore of a vcpu.
@@ -117,13 +124,14 @@ static inline unsigned long kvm_s390_logical_to_effective(struct kvm_vcpu *vcpu,
  *	 would be to terminate the guest.
  *	 It is wrong to inject a guest exception.
  */
-#define put_guest_lc(vcpu, x, gra)				\
+#define put_guest_lc(vcpu, x, off)				\
 ({								\
 	struct kvm_vcpu *__vcpu = (vcpu);			\
-	__typeof__(*(gra)) __x = (x);				\
-	unsigned long __gpa;					\
+	__typeof__(*(off)) __x = (x);				\
+	gpa_t __gpa;						\
 								\
-	__gpa = (unsigned long)(gra);				\
+	BUILD_BUG_ON(!__builtin_constant_p(off));		\
+	__gpa = (unsigned long)(off);				\
 	__gpa += kvm_s390_get_prefix(__vcpu);			\
 	kvm_write_guest(__vcpu->kvm, __gpa, &__x, sizeof(__x));	\
 })
@@ -131,7 +139,7 @@ static inline unsigned long kvm_s390_logical_to_effective(struct kvm_vcpu *vcpu,
 /**
  * write_guest_lc - copy data from kernel space to guest vcpu's lowcore
  * @vcpu: virtual cpu
- * @gra: vcpu's source guest real address
+ * @off: offset into the lowcore
  * @data: source address in kernel space
  * @len: number of bytes to copy
  *
@@ -146,18 +154,20 @@ static inline unsigned long kvm_s390_logical_to_effective(struct kvm_vcpu *vcpu,
  *	 It is wrong to inject a guest exception.
  */
 static inline __must_check
-int write_guest_lc(struct kvm_vcpu *vcpu, unsigned long gra, void *data,
+int write_guest_lc(struct kvm_vcpu *vcpu, unsigned int off, void *data,
 		   unsigned long len)
 {
-	unsigned long gpa = gra + kvm_s390_get_prefix(vcpu);
+	gpa_t gpa = lc_addr_from_offset(vcpu, off);
 
+	BUILD_BUG_ON(!__builtin_constant_p(off) || !__builtin_constant_p(len));
+	BUILD_BUG_ON(off + len >= 2 * PAGE_SIZE);
 	return kvm_write_guest(vcpu->kvm, gpa, data, len);
 }
 
 /**
  * read_guest_lc - copy data from guest vcpu's lowcore to kernel space
  * @vcpu: virtual cpu
- * @gra: vcpu's source guest real address
+ * @off: offset into the lowcore
  * @data: destination address in kernel space
  * @len: number of bytes to copy
  *
@@ -172,11 +182,13 @@ int write_guest_lc(struct kvm_vcpu *vcpu, unsigned long gra, void *data,
  *	 It is wrong to inject a guest exception.
  */
 static inline __must_check
-int read_guest_lc(struct kvm_vcpu *vcpu, unsigned long gra, void *data,
+int read_guest_lc(struct kvm_vcpu *vcpu, unsigned int off, void *data,
 		  unsigned long len)
 {
-	unsigned long gpa = gra + kvm_s390_get_prefix(vcpu);
+	gpa_t gpa = lc_addr_from_offset(vcpu, off);
 
+	BUILD_BUG_ON(!__builtin_constant_p(off) || !__builtin_constant_p(len));
+	BUILD_BUG_ON(off + len >= 2 * PAGE_SIZE);
 	return kvm_read_guest(vcpu->kvm, gpa, data, len);
 }
 
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [RFC 07/10] KVM: s390: Use gpa_t and gva_t in gaccess files
  2026-03-16 16:23 [RFC 00/10] KVM: s390: spring cleanup Janosch Frank
                   ` (5 preceding siblings ...)
  2026-03-16 16:23 ` [RFC 06/10] KVM: s390: Rework lowcore access functions Janosch Frank
@ 2026-03-16 16:23 ` Janosch Frank
  2026-03-18 15:36   ` Claudio Imbrenda
  2026-03-23  9:10   ` Christoph Schlameuss
  2026-03-16 16:23 ` [RFC 08/10] KVM: s390: Use gpa_t in priv.c Janosch Frank
                   ` (2 subsequent siblings)
  9 siblings, 2 replies; 39+ messages in thread
From: Janosch Frank @ 2026-03-16 16:23 UTC (permalink / raw)
  To: kvm; +Cc: linux-s390, imbrenda, borntraeger, akrowiak

A lot of addresses are being passed around as u64 or unsigned long
instead of gpa_t and gva_t. Some of the variables are already called
gva or gpa anyway.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
---
 arch/s390/kvm/gaccess.c | 20 ++++++++++----------
 arch/s390/kvm/gaccess.h |  3 +--
 2 files changed, 11 insertions(+), 12 deletions(-)

diff --git a/arch/s390/kvm/gaccess.c b/arch/s390/kvm/gaccess.c
index 7275854ee68e..b2e83f6e16ab 100644
--- a/arch/s390/kvm/gaccess.c
+++ b/arch/s390/kvm/gaccess.c
@@ -441,7 +441,7 @@ static int get_vcpu_asce(struct kvm_vcpu *vcpu, union asce *asce,
 	return 0;
 }
 
-static int deref_table(struct kvm *kvm, unsigned long gpa, unsigned long *val)
+static int deref_table(struct kvm *kvm, gpa_t gpa, unsigned long *val)
 {
 	return kvm_read_guest(kvm, gpa, val, sizeof(*val));
 }
@@ -467,8 +467,8 @@ static int deref_table(struct kvm *kvm, unsigned long gpa, unsigned long *val)
  *	      the returned value is the program interruption code as defined
  *	      by the architecture
  */
-static unsigned long guest_translate_gva(struct kvm_vcpu *vcpu, unsigned long gva,
-					 unsigned long *gpa, const union asce asce,
+static unsigned long guest_translate_gva(struct kvm_vcpu *vcpu, gva_t gva,
+					 gpa_t *gpa, const union asce asce,
 					 enum gacc_mode mode, enum prot_type *prot)
 {
 	union vaddress vaddr = {.addr = gva};
@@ -477,8 +477,8 @@ static unsigned long guest_translate_gva(struct kvm_vcpu *vcpu, unsigned long gv
 	int dat_protection = 0;
 	int iep_protection = 0;
 	union ctlreg0 ctlreg0;
-	unsigned long ptr;
 	int edat1, edat2, iep;
+	gpa_t ptr;
 
 	ctlreg0.val = vcpu->arch.sie_block->gcr[0];
 	edat1 = ctlreg0.edat && test_kvm_facility(vcpu->kvm, 8);
@@ -772,7 +772,7 @@ static int vcpu_check_access_key_gpa(struct kvm_vcpu *vcpu, u8 access_key,
  *		  be used to inject an exception into the guest.
  */
 static int guest_range_to_gpas(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar,
-			       unsigned long *gpas, unsigned long len,
+			       gpa_t *gpas, unsigned long len,
 			       const union asce asce, enum gacc_mode mode,
 			       u8 access_key)
 {
@@ -781,7 +781,7 @@ static int guest_range_to_gpas(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar,
 	unsigned int fragment_len;
 	int lap_enabled, rc = 0;
 	enum prot_type prot;
-	unsigned long gpa;
+	gpa_t gpa;
 
 	lap_enabled = low_address_protection_enabled(vcpu, asce);
 	while (min(PAGE_SIZE - offset, len) > 0) {
@@ -932,11 +932,11 @@ int access_guest_with_key(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar,
 {
 	psw_t *psw = &vcpu->arch.sie_block->gpsw;
 	unsigned long nr_pages, idx;
-	unsigned long gpa_array[2];
 	unsigned int fragment_len;
-	unsigned long *gpas;
 	enum prot_type prot;
+	gpa_t gpa_array[2];
 	int need_ipte_lock;
+	gpa_t *gpas;
 	union asce asce;
 	bool try_storage_prot_override;
 	bool try_fetch_prot_override;
@@ -1182,7 +1182,7 @@ int cmpxchg_guest_abs_with_key(struct kvm *kvm, gpa_t gpa, int len, union kvm_s3
  * has to take care of this.
  */
 int guest_translate_address_with_key(struct kvm_vcpu *vcpu, unsigned long gva, u8 ar,
-				     unsigned long *gpa, enum gacc_mode mode,
+				     gpa_t *gpa, enum gacc_mode mode,
 				     u8 access_key)
 {
 	union asce asce;
@@ -1282,9 +1282,9 @@ static int walk_guest_tables(struct gmap *sg, unsigned long saddr, struct pgtwal
 	struct guest_fault *entries;
 	union dat_table_entry table;
 	union vaddress vaddr;
-	unsigned long ptr;
 	struct kvm *kvm;
 	union asce asce;
+	gpa_t ptr;
 	int rc;
 
 	if (!parent)
diff --git a/arch/s390/kvm/gaccess.h b/arch/s390/kvm/gaccess.h
index 086da7b040b5..f23dc0729649 100644
--- a/arch/s390/kvm/gaccess.h
+++ b/arch/s390/kvm/gaccess.h
@@ -199,8 +199,7 @@ enum gacc_mode {
 };
 
 int guest_translate_address_with_key(struct kvm_vcpu *vcpu, unsigned long gva, u8 ar,
-				     unsigned long *gpa, enum gacc_mode mode,
-				     u8 access_key);
+				     gpa_t *gpa, enum gacc_mode mode, u8 access_key);
 
 int check_gva_range(struct kvm_vcpu *vcpu, unsigned long gva, u8 ar,
 		    unsigned long length, enum gacc_mode mode, u8 access_key);
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [RFC 08/10] KVM: s390: Use gpa_t in priv.c
  2026-03-16 16:23 [RFC 00/10] KVM: s390: spring cleanup Janosch Frank
                   ` (6 preceding siblings ...)
  2026-03-16 16:23 ` [RFC 07/10] KVM: s390: Use gpa_t and gva_t in gaccess files Janosch Frank
@ 2026-03-16 16:23 ` Janosch Frank
  2026-03-18 16:02   ` Claudio Imbrenda
  2026-03-23  9:28   ` Christoph Schlameuss
  2026-03-16 16:23 ` [RFC 09/10] KVM: s390: Use gpa_t in pv.c Janosch Frank
  2026-03-16 16:23 ` [RFC 10/10] KVM: s390: Cleanup kvm_s390_store_status_unloaded Janosch Frank
  9 siblings, 2 replies; 39+ messages in thread
From: Janosch Frank @ 2026-03-16 16:23 UTC (permalink / raw)
  To: kvm; +Cc: linux-s390, imbrenda, borntraeger, akrowiak

More unsigned long to gpa_t conversions.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
---
 arch/s390/kvm/gaccess.h |  8 ++++++++
 arch/s390/kvm/priv.c    | 27 ++++++++++++---------------
 2 files changed, 20 insertions(+), 15 deletions(-)

diff --git a/arch/s390/kvm/gaccess.h b/arch/s390/kvm/gaccess.h
index f23dc0729649..970d9020dc14 100644
--- a/arch/s390/kvm/gaccess.h
+++ b/arch/s390/kvm/gaccess.h
@@ -89,6 +89,14 @@ static inline unsigned long kvm_s390_logical_to_effective(struct kvm_vcpu *vcpu,
 	return _kvm_s390_logical_to_effective(&vcpu->arch.sie_block->gpsw, ga);
 }
 
+static inline gpa_t kvm_s390_real_to_abs_effective(struct kvm_vcpu *vcpu,
+						   unsigned long gra)
+{
+	gra = kvm_s390_logical_to_effective(vcpu, gra);
+
+	return kvm_s390_real_to_abs(vcpu, gra);
+}
+
 static inline gpa_t lc_addr_from_offset(struct kvm_vcpu *vcpu, unsigned int off)
 {
 	gpa_t addr = kvm_s390_get_prefix(vcpu);
diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c
index 780186eb6037..78d3338afdb2 100644
--- a/arch/s390/kvm/priv.c
+++ b/arch/s390/kvm/priv.c
@@ -256,9 +256,9 @@ static int try_handle_skey(struct kvm_vcpu *vcpu)
 
 static int handle_iske(struct kvm_vcpu *vcpu)
 {
-	unsigned long gaddr;
 	int reg1, reg2;
 	union skey key;
+	gpa_t gpa;
 	int rc;
 
 	vcpu->stat.instruction_iske++;
@@ -271,12 +271,10 @@ static int handle_iske(struct kvm_vcpu *vcpu)
 		return rc != -EAGAIN ? rc : 0;
 
 	kvm_s390_get_regs_rre(vcpu, &reg1, &reg2);
+	gpa = kvm_s390_real_to_abs_effective(vcpu, vcpu->run->s.regs.gprs[reg2] & PAGE_MASK);
 
-	gaddr = vcpu->run->s.regs.gprs[reg2] & PAGE_MASK;
-	gaddr = kvm_s390_logical_to_effective(vcpu, gaddr);
-	gaddr = kvm_s390_real_to_abs(vcpu, gaddr);
 	scoped_guard(read_lock, &vcpu->kvm->mmu_lock)
-		rc = dat_get_storage_key(vcpu->arch.gmap->asce, gpa_to_gfn(gaddr), &key);
+		rc = dat_get_storage_key(vcpu->arch.gmap->asce, gpa_to_gfn(gpa), &key);
 	if (rc > 0)
 		return kvm_s390_inject_program_int(vcpu, rc);
 	if (rc < 0)
@@ -288,8 +286,8 @@ static int handle_iske(struct kvm_vcpu *vcpu)
 
 static int handle_rrbe(struct kvm_vcpu *vcpu)
 {
-	unsigned long gaddr;
 	int reg1, reg2;
+	gpa_t gpa;
 	int rc;
 
 	vcpu->stat.instruction_rrbe++;
@@ -302,12 +300,10 @@ static int handle_rrbe(struct kvm_vcpu *vcpu)
 		return rc != -EAGAIN ? rc : 0;
 
 	kvm_s390_get_regs_rre(vcpu, &reg1, &reg2);
+	gpa = kvm_s390_real_to_abs_effective(vcpu, vcpu->run->s.regs.gprs[reg2] & PAGE_MASK);
 
-	gaddr = vcpu->run->s.regs.gprs[reg2] & PAGE_MASK;
-	gaddr = kvm_s390_logical_to_effective(vcpu, gaddr);
-	gaddr = kvm_s390_real_to_abs(vcpu, gaddr);
 	scoped_guard(read_lock, &vcpu->kvm->mmu_lock)
-		rc = dat_reset_reference_bit(vcpu->arch.gmap->asce, gpa_to_gfn(gaddr));
+		rc = dat_reset_reference_bit(vcpu->arch.gmap->asce, gpa_to_gfn(gpa));
 	if (rc > 0)
 		return kvm_s390_inject_program_int(vcpu, rc);
 	if (rc < 0)
@@ -1142,8 +1138,8 @@ static inline int __do_essa(struct kvm_vcpu *vcpu, const int orc)
 	int r1, r2, nappended, entries;
 	union essa_state state;
 	unsigned long *cbrlo;
-	unsigned long gfn;
 	bool dirtied;
+	gpa_t gpa;
 
 	/*
 	 * We don't need to set SD.FPF.SK to 1 here, because if we have a
@@ -1151,10 +1147,11 @@ static inline int __do_essa(struct kvm_vcpu *vcpu, const int orc)
 	 */
 
 	kvm_s390_get_regs_rre(vcpu, &r1, &r2);
-	gfn = gpa_to_gfn(vcpu->run->s.regs.gprs[r2]);
+	gpa = vcpu->run->s.regs.gprs[r2];
 	entries = (vcpu->arch.sie_block->cbrlo & ~PAGE_MASK) >> 3;
 
-	nappended = dat_perform_essa(vcpu->arch.gmap->asce, gfn, orc, &state, &dirtied);
+	nappended = dat_perform_essa(vcpu->arch.gmap->asce, gpa_to_gfn(gpa),
+				     orc, &state, &dirtied);
 	vcpu->run->s.regs.gprs[r1] = state.val;
 	if (nappended < 0)
 		return 0;
@@ -1166,7 +1163,7 @@ static inline int __do_essa(struct kvm_vcpu *vcpu, const int orc)
 	 */
 	if (nappended > 0) {
 		cbrlo = phys_to_virt(vcpu->arch.sie_block->cbrlo & PAGE_MASK);
-		cbrlo[entries] = gfn << PAGE_SHIFT;
+		cbrlo[entries] = gpa;
 	}
 
 	if (dirtied)
@@ -1447,10 +1444,10 @@ int kvm_s390_handle_eb(struct kvm_vcpu *vcpu)
 static int handle_tprot(struct kvm_vcpu *vcpu)
 {
 	u64 address, operand2;
-	unsigned long gpa;
 	u8 access_key;
 	bool writable;
 	int ret, cc;
+	gpa_t gpa;
 	u8 ar;
 
 	vcpu->stat.instruction_tprot++;
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [RFC 09/10] KVM: s390: Use gpa_t in pv.c
  2026-03-16 16:23 [RFC 00/10] KVM: s390: spring cleanup Janosch Frank
                   ` (7 preceding siblings ...)
  2026-03-16 16:23 ` [RFC 08/10] KVM: s390: Use gpa_t in priv.c Janosch Frank
@ 2026-03-16 16:23 ` Janosch Frank
  2026-03-18 15:46   ` Claudio Imbrenda
  2026-03-23  9:41   ` Christoph Schlameuss
  2026-03-16 16:23 ` [RFC 10/10] KVM: s390: Cleanup kvm_s390_store_status_unloaded Janosch Frank
  9 siblings, 2 replies; 39+ messages in thread
From: Janosch Frank @ 2026-03-16 16:23 UTC (permalink / raw)
  To: kvm; +Cc: linux-s390, imbrenda, borntraeger, akrowiak

Lots of locations used u64/unsigned long where gpa_t would have been
the appropriate type.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
---
 arch/s390/kvm/kvm-s390.h |  8 ++++----
 arch/s390/kvm/pv.c       | 12 ++++++------
 2 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
index bf1d7798c1af..1ffaec723a30 100644
--- a/arch/s390/kvm/kvm-s390.h
+++ b/arch/s390/kvm/kvm-s390.h
@@ -308,17 +308,17 @@ int kvm_s390_pv_deinit_vm(struct kvm *kvm, u16 *rc, u16 *rrc);
 int kvm_s390_pv_init_vm(struct kvm *kvm, u16 *rc, u16 *rrc);
 int kvm_s390_pv_set_sec_parms(struct kvm *kvm, void *hdr, u64 length, u16 *rc,
 			      u16 *rrc);
-int kvm_s390_pv_unpack(struct kvm *kvm, unsigned long addr, unsigned long size,
+int kvm_s390_pv_unpack(struct kvm *kvm, gpa_t addr, unsigned long size,
 		       unsigned long tweak, u16 *rc, u16 *rrc);
 int kvm_s390_pv_set_cpu_state(struct kvm_vcpu *vcpu, u8 state);
 int kvm_s390_pv_dump_cpu(struct kvm_vcpu *vcpu, void *buff, u16 *rc, u16 *rrc);
 int kvm_s390_pv_dump_stor_state(struct kvm *kvm, void __user *buff_user,
-				u64 *gaddr, u64 buff_user_len, u16 *rc, u16 *rrc);
+				gpa_t *gaddr, u64 buff_user_len, u16 *rc, u16 *rrc);
 int kvm_s390_pv_dump_complete(struct kvm *kvm, void __user *buff_user,
 			      u16 *rc, u16 *rrc);
 int kvm_s390_pv_destroy_page(struct kvm *kvm, unsigned long gaddr);
-int kvm_s390_pv_convert_to_secure(struct kvm *kvm, unsigned long gaddr);
-int kvm_s390_pv_make_secure(struct kvm *kvm, unsigned long gaddr, void *uvcb);
+int kvm_s390_pv_convert_to_secure(struct kvm *kvm, gpa_t gaddr);
+int kvm_s390_pv_make_secure(struct kvm *kvm, gpa_t gaddr, void *uvcb);
 
 static inline u64 kvm_s390_pv_get_handle(struct kvm *kvm)
 {
diff --git a/arch/s390/kvm/pv.c b/arch/s390/kvm/pv.c
index c2dafd812a3b..a86469507309 100644
--- a/arch/s390/kvm/pv.c
+++ b/arch/s390/kvm/pv.c
@@ -125,7 +125,7 @@ static void _kvm_s390_pv_make_secure(struct guest_fault *f)
  * Context: needs to be called with kvm->srcu held.
  * Return: 0 on success, < 0 in case of error.
  */
-int kvm_s390_pv_make_secure(struct kvm *kvm, unsigned long gaddr, void *uvcb)
+int kvm_s390_pv_make_secure(struct kvm *kvm, gpa_t gaddr, void *uvcb)
 {
 	struct pv_make_secure priv = { .uvcb = uvcb };
 	struct guest_fault f = {
@@ -157,7 +157,7 @@ int kvm_s390_pv_make_secure(struct kvm *kvm, unsigned long gaddr, void *uvcb)
 	return rc;
 }
 
-int kvm_s390_pv_convert_to_secure(struct kvm *kvm, unsigned long gaddr)
+int kvm_s390_pv_convert_to_secure(struct kvm *kvm, gpa_t gaddr)
 {
 	struct uv_cb_cts uvcb = {
 		.header.cmd = UVC_CMD_CONV_TO_SEC_STOR,
@@ -765,7 +765,7 @@ int kvm_s390_pv_set_sec_parms(struct kvm *kvm, void *hdr, u64 length, u16 *rc,
 	return cc ? -EINVAL : 0;
 }
 
-static int unpack_one(struct kvm *kvm, unsigned long addr, u64 tweak,
+static int unpack_one(struct kvm *kvm, gpa_t addr, u64 tweak,
 		      u64 offset, u16 *rc, u16 *rrc)
 {
 	struct uv_cb_unp uvcb = {
@@ -793,7 +793,7 @@ static int unpack_one(struct kvm *kvm, unsigned long addr, u64 tweak,
 	return ret;
 }
 
-int kvm_s390_pv_unpack(struct kvm *kvm, unsigned long addr, unsigned long size,
+int kvm_s390_pv_unpack(struct kvm *kvm, gpa_t addr, unsigned long size,
 		       unsigned long tweak, u16 *rc, u16 *rrc)
 {
 	u64 offset = 0;
@@ -802,7 +802,7 @@ int kvm_s390_pv_unpack(struct kvm *kvm, unsigned long addr, unsigned long size,
 	if (addr & ~PAGE_MASK || !size || size & ~PAGE_MASK)
 		return -EINVAL;
 
-	KVM_UV_EVENT(kvm, 3, "PROTVIRT VM UNPACK: start addr %lx size %lx",
+	KVM_UV_EVENT(kvm, 3, "PROTVIRT VM UNPACK: start addr %llx size %lx",
 		     addr, size);
 
 	guard(srcu)(&kvm->srcu);
@@ -891,7 +891,7 @@ int kvm_s390_pv_dump_cpu(struct kvm_vcpu *vcpu, void *buff, u16 *rc, u16 *rrc)
  *  -EFAULT if copying the result to buff_user failed
  */
 int kvm_s390_pv_dump_stor_state(struct kvm *kvm, void __user *buff_user,
-				u64 *gaddr, u64 buff_user_len, u16 *rc, u16 *rrc)
+				gpa_t *gaddr, u64 buff_user_len, u16 *rc, u16 *rrc)
 {
 	struct uv_cb_dump_stor_state uvcb = {
 		.header.cmd = UVC_CMD_DUMP_CONF_STOR_STATE,
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [RFC 10/10] KVM: s390: Cleanup kvm_s390_store_status_unloaded
  2026-03-16 16:23 [RFC 00/10] KVM: s390: spring cleanup Janosch Frank
                   ` (8 preceding siblings ...)
  2026-03-16 16:23 ` [RFC 09/10] KVM: s390: Use gpa_t in pv.c Janosch Frank
@ 2026-03-16 16:23 ` Janosch Frank
  2026-03-18 15:51   ` Claudio Imbrenda
  2026-03-23  9:47   ` Christoph Schlameuss
  9 siblings, 2 replies; 39+ messages in thread
From: Janosch Frank @ 2026-03-16 16:23 UTC (permalink / raw)
  To: kvm; +Cc: linux-s390, imbrenda, borntraeger, akrowiak

Fix up comments, use gpa_t and replace magic constants.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
---
 arch/s390/kvm/kvm-s390.c | 24 ++++++++++++++++--------
 arch/s390/kvm/kvm-s390.h |  4 ++--
 2 files changed, 18 insertions(+), 10 deletions(-)

diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 1668580008c6..c76f83b38d27 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -4993,11 +4993,12 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 
 /*
  * store status at address
- * we use have two special cases:
- * KVM_S390_STORE_STATUS_NOADDR: -> 0x1200 on 64 bit
- * KVM_S390_STORE_STATUS_PREFIXED: -> prefix
+ *
+ * We have two special cases:
+ * - KVM_S390_STORE_STATUS_NOADDR: -> 0x1200 on 64 bit
+ * - KVM_S390_STORE_STATUS_PREFIXED: -> prefix
  */
-int kvm_s390_store_status_unloaded(struct kvm_vcpu *vcpu, unsigned long gpa)
+int kvm_s390_store_status_unloaded(struct kvm_vcpu *vcpu, gpa_t gpa)
 {
 	unsigned char archmode = 1;
 	freg_t fprs[NUM_FPRS];
@@ -5007,15 +5008,22 @@ int kvm_s390_store_status_unloaded(struct kvm_vcpu *vcpu, unsigned long gpa)
 
 	px = kvm_s390_get_prefix(vcpu);
 	if (gpa == KVM_S390_STORE_STATUS_NOADDR) {
-		if (write_guest_abs(vcpu, 163, &archmode, 1))
+		if (write_guest_abs(vcpu, __LC_AR_MODE_ID, &archmode, 1))
 			return -EFAULT;
 		gpa = 0;
 	} else if (gpa == KVM_S390_STORE_STATUS_PREFIXED) {
-		if (write_guest_real(vcpu, 163, &archmode, 1))
+		if (write_guest_real(vcpu, __LC_AR_MODE_ID, &archmode, 1))
 			return -EFAULT;
 		gpa = px;
-	} else
+	} else {
+		/*
+		 * Store status at address does NOT store vrs and arch
+		 * indication. Since we add __LC_FPREGS_SAVE_AREA to
+		 * the address when writing, we need to subtract it
+		 * here.
+		 */
 		gpa -= __LC_FPREGS_SAVE_AREA;
+	}
 
 	/* manually convert vector registers if necessary */
 	if (cpu_has_vx()) {
@@ -5049,7 +5057,7 @@ int kvm_s390_store_status_unloaded(struct kvm_vcpu *vcpu, unsigned long gpa)
 	return rc ? -EFAULT : 0;
 }
 
-int kvm_s390_vcpu_store_status(struct kvm_vcpu *vcpu, unsigned long addr)
+int kvm_s390_vcpu_store_status(struct kvm_vcpu *vcpu, gpa_t addr)
 {
 	/*
 	 * The guest FPRS and ACRS are in the host FPRS/ACRS due to the lazy
diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
index 1ffaec723a30..9cfc3caee334 100644
--- a/arch/s390/kvm/kvm-s390.h
+++ b/arch/s390/kvm/kvm-s390.h
@@ -450,8 +450,8 @@ int kvm_s390_handle_sigp_pei(struct kvm_vcpu *vcpu);
 
 /* implemented in kvm-s390.c */
 int kvm_s390_try_set_tod_clock(struct kvm *kvm, const struct kvm_s390_vm_tod_clock *gtod);
-int kvm_s390_store_status_unloaded(struct kvm_vcpu *vcpu, unsigned long addr);
-int kvm_s390_vcpu_store_status(struct kvm_vcpu *vcpu, unsigned long addr);
+int kvm_s390_store_status_unloaded(struct kvm_vcpu *vcpu, gpa_t addr);
+int kvm_s390_vcpu_store_status(struct kvm_vcpu *vcpu, gpa_t addr);
 int kvm_s390_vcpu_start(struct kvm_vcpu *vcpu);
 int kvm_s390_vcpu_stop(struct kvm_vcpu *vcpu);
 void kvm_s390_vcpu_block(struct kvm_vcpu *vcpu);
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* Re: [RFC 01/10] KVM: s390: diag258: Pass absolute address to kvm_is_gpa_in_memslot()
  2026-03-16 16:23 ` [RFC 01/10] KVM: s390: diag258: Pass absolute address to kvm_is_gpa_in_memslot() Janosch Frank
@ 2026-03-16 18:34   ` Christian Borntraeger
  2026-03-17 10:01   ` Christoph Schlameuss
  2026-03-18 16:04   ` Claudio Imbrenda
  2 siblings, 0 replies; 39+ messages in thread
From: Christian Borntraeger @ 2026-03-16 18:34 UTC (permalink / raw)
  To: Janosch Frank, kvm; +Cc: linux-s390, imbrenda, akrowiak

Am 16.03.26 um 17:23 schrieb Janosch Frank:
> The token address is a real address and as such we need to translate
> it before it's a true gpa.
> 
> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
> Fixes: 3c038e6be0e29 ("KVM: async_pf: Async page fault support on s390")

Makes sense, but I guess we would have other issues if the prefix register
points outside a memslot.
Anyway

Reviewed-by: Christian Borntraeger <borntraeger@linux.ibm.com>

> ---
>   arch/s390/kvm/diag.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/s390/kvm/diag.c b/arch/s390/kvm/diag.c
> index d89d1c381522..51ba4dcc3905 100644
> --- a/arch/s390/kvm/diag.c
> +++ b/arch/s390/kvm/diag.c
> @@ -122,7 +122,7 @@ static int __diag_page_ref_service(struct kvm_vcpu *vcpu)
>   		    parm.token_addr & 7 || parm.zarch != 0x8000000000000000ULL)
>   			return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
>   
> -		if (!kvm_is_gpa_in_memslot(vcpu->kvm, parm.token_addr))
> +		if (!kvm_is_gpa_in_memslot(vcpu->kvm, kvm_s390_real_to_abs(vcpu, parm.token_addr)))
>   			return kvm_s390_inject_program_int(vcpu, PGM_ADDRESSING);
>   
>   		vcpu->arch.pfault_token = parm.token_addr;


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC 02/10] KVM: s390: Consolidate lpswe variants
  2026-03-16 16:23 ` [RFC 02/10] KVM: s390: Consolidate lpswe variants Janosch Frank
@ 2026-03-16 18:47   ` Christian Borntraeger
  2026-03-17  8:13     ` Janosch Frank
  2026-03-17 13:03     ` [PATCH] KVM: s390: Fix lpsw/e breaking event handling Janosch Frank
  0 siblings, 2 replies; 39+ messages in thread
From: Christian Borntraeger @ 2026-03-16 18:47 UTC (permalink / raw)
  To: Janosch Frank, kvm; +Cc: linux-s390, imbrenda, akrowiak


Am 16.03.26 um 17:23 schrieb Janosch Frank:
> LPSWE and LPSWEY currently only differ in instruction format but not
> in functionality.

I think this is actually a bug in KVM. LPSWEY does not set the breaking
event address register, LPSWE does. Maybe that would be the better fix
to actually set gbea when emulating LPSWE. This must happen after all
checks, so I guess combining both would make this harder, so better
not do this.


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC 03/10] KVM: s390: Convert shifts to gpa_to_gfn()
  2026-03-16 16:23 ` [RFC 03/10] KVM: s390: Convert shifts to gpa_to_gfn() Janosch Frank
@ 2026-03-16 18:49   ` Christian Borntraeger
  2026-03-17 10:38   ` Christoph Schlameuss
  2026-03-18 14:26   ` Claudio Imbrenda
  2 siblings, 0 replies; 39+ messages in thread
From: Christian Borntraeger @ 2026-03-16 18:49 UTC (permalink / raw)
  To: Janosch Frank, kvm; +Cc: linux-s390, imbrenda, akrowiak



Am 16.03.26 um 17:23 schrieb Janosch Frank:
> Not only do we get rid of the ugly shift but we have more chances to
> do static analysis type checking since gpa_to_gfn() returns gfn_t.
> 
> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>

Acked-by: Christian Borntraeger <borntraeger@linux.ibm.com>


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC 04/10] KVM: s390: kvm_s390_real_to_abs() should return gpa_t
  2026-03-16 16:23 ` [RFC 04/10] KVM: s390: kvm_s390_real_to_abs() should return gpa_t Janosch Frank
@ 2026-03-16 18:53   ` Christian Borntraeger
  2026-03-18  7:10   ` Christoph Schlameuss
  2026-03-18 14:29   ` Claudio Imbrenda
  2 siblings, 0 replies; 39+ messages in thread
From: Christian Borntraeger @ 2026-03-16 18:53 UTC (permalink / raw)
  To: Janosch Frank, kvm; +Cc: linux-s390, imbrenda, akrowiak



Am 16.03.26 um 17:23 schrieb Janosch Frank:
> An absolute address is definitely guest physical.
> 
> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>

Acked-by: Christian Borntraeger <borntraeger@linux.ibm.com>


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC 02/10] KVM: s390: Consolidate lpswe variants
  2026-03-16 18:47   ` Christian Borntraeger
@ 2026-03-17  8:13     ` Janosch Frank
  2026-03-17 13:03     ` [PATCH] KVM: s390: Fix lpsw/e breaking event handling Janosch Frank
  1 sibling, 0 replies; 39+ messages in thread
From: Janosch Frank @ 2026-03-17  8:13 UTC (permalink / raw)
  To: Christian Borntraeger, kvm; +Cc: linux-s390, imbrenda, akrowiak

On 3/16/26 19:47, Christian Borntraeger wrote:
> 
> Am 16.03.26 um 17:23 schrieb Janosch Frank:
>> LPSWE and LPSWEY currently only differ in instruction format but not
>> in functionality.
> 
> I think this is actually a bug in KVM. LPSWEY does not set the breaking
> event address register, LPSWE does. Maybe that would be the better fix
> to actually set gbea when emulating LPSWE. This must happen after all
> checks, so I guess combining both would make this harder, so better
> not do this.
> 

Interesting, seems like I glossed over that when checking the
architecture. I'll have a look.

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC 01/10] KVM: s390: diag258: Pass absolute address to kvm_is_gpa_in_memslot()
  2026-03-16 16:23 ` [RFC 01/10] KVM: s390: diag258: Pass absolute address to kvm_is_gpa_in_memslot() Janosch Frank
  2026-03-16 18:34   ` Christian Borntraeger
@ 2026-03-17 10:01   ` Christoph Schlameuss
  2026-03-18 16:04   ` Claudio Imbrenda
  2 siblings, 0 replies; 39+ messages in thread
From: Christoph Schlameuss @ 2026-03-17 10:01 UTC (permalink / raw)
  To: Janosch Frank, kvm; +Cc: linux-s390, imbrenda, borntraeger, akrowiak

On Mon Mar 16, 2026 at 5:23 PM CET, Janosch Frank wrote:
> The token address is a real address and as such we need to translate
> it before it's a true gpa.
>
> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
> Fixes: 3c038e6be0e29 ("KVM: async_pf: Async page fault support on s390")

Reviewed-by: Christoph Schlameuss <schlameuss@linux.ibm.com>

> ---
>  arch/s390/kvm/diag.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/s390/kvm/diag.c b/arch/s390/kvm/diag.c
> index d89d1c381522..51ba4dcc3905 100644
> --- a/arch/s390/kvm/diag.c
> +++ b/arch/s390/kvm/diag.c
> @@ -122,7 +122,7 @@ static int __diag_page_ref_service(struct kvm_vcpu *vcpu)
>  		    parm.token_addr & 7 || parm.zarch != 0x8000000000000000ULL)
>  			return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
>  
> -		if (!kvm_is_gpa_in_memslot(vcpu->kvm, parm.token_addr))
> +		if (!kvm_is_gpa_in_memslot(vcpu->kvm, kvm_s390_real_to_abs(vcpu, parm.token_addr)))
>  			return kvm_s390_inject_program_int(vcpu, PGM_ADDRESSING);
>  
>  		vcpu->arch.pfault_token = parm.token_addr;


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC 03/10] KVM: s390: Convert shifts to gpa_to_gfn()
  2026-03-16 16:23 ` [RFC 03/10] KVM: s390: Convert shifts to gpa_to_gfn() Janosch Frank
  2026-03-16 18:49   ` Christian Borntraeger
@ 2026-03-17 10:38   ` Christoph Schlameuss
  2026-03-18 14:26   ` Claudio Imbrenda
  2 siblings, 0 replies; 39+ messages in thread
From: Christoph Schlameuss @ 2026-03-17 10:38 UTC (permalink / raw)
  To: Janosch Frank, kvm; +Cc: linux-s390, imbrenda, borntraeger, akrowiak

On Mon Mar 16, 2026 at 5:23 PM CET, Janosch Frank wrote:
> Not only do we get rid of the ugly shift but we have more chances to
> do static analysis type checking since gpa_to_gfn() returns gfn_t.
>
> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>

Reviewed-by: Christoph Schlameuss <schlameuss@linux.ibm.com>

> ---
>  arch/s390/kvm/interrupt.c | 4 ++--
>  arch/s390/kvm/priv.c      | 2 +-
>  2 files changed, 3 insertions(+), 3 deletions(-)

^ permalink raw reply	[flat|nested] 39+ messages in thread

* [PATCH] KVM: s390: Fix lpsw/e breaking event handling
  2026-03-16 18:47   ` Christian Borntraeger
  2026-03-17  8:13     ` Janosch Frank
@ 2026-03-17 13:03     ` Janosch Frank
  2026-03-17 13:30       ` Christian Borntraeger
  2026-03-23 15:08       ` Hendrik Brueckner
  1 sibling, 2 replies; 39+ messages in thread
From: Janosch Frank @ 2026-03-17 13:03 UTC (permalink / raw)
  To: kvm; +Cc: linux-s390, imbrenda, borntraeger, akrowiak

LPSW and LPSWE need to set the gbea on completion but currently don't.
Time to fix this up.

LPSWEY was designed to not set the BEAR.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Fixes: 48a3e950f4cee ("KVM: s390: Add support for machine checks.")
Reported-by: Christian Borntraeger <borntraeger@linux.ibm.com>
---
 arch/s390/kvm/priv.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c
index 3e764e6440d8..9d28b3fdba5b 100644
--- a/arch/s390/kvm/priv.c
+++ b/arch/s390/kvm/priv.c
@@ -710,12 +710,13 @@ int kvm_s390_handle_lpsw(struct kvm_vcpu *vcpu)
 {
 	psw_t *gpsw = &vcpu->arch.sie_block->gpsw;
 	psw32_t new_psw;
-	u64 addr;
+	u64 addr, iaddr;
 	int rc;
 	u8 ar;
 
 	vcpu->stat.instruction_lpsw++;
 
+	iaddr = gpsw->addr;
 	if (gpsw->mask & PSW_MASK_PSTATE)
 		return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP);
 
@@ -733,18 +734,20 @@ int kvm_s390_handle_lpsw(struct kvm_vcpu *vcpu)
 	gpsw->addr = new_psw.addr & ~PSW32_ADDR_AMODE;
 	if (!is_valid_psw(gpsw))
 		return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
+	vcpu->arch.sie_block->gbea = iaddr;
 	return 0;
 }
 
 static int handle_lpswe(struct kvm_vcpu *vcpu)
 {
 	psw_t new_psw;
-	u64 addr;
+	u64 addr, iaddr;
 	int rc;
 	u8 ar;
 
 	vcpu->stat.instruction_lpswe++;
 
+	iaddr = vcpu->arch.sie_block->gpsw.addr;
 	if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE)
 		return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP);
 
@@ -757,6 +760,7 @@ static int handle_lpswe(struct kvm_vcpu *vcpu)
 	vcpu->arch.sie_block->gpsw = new_psw;
 	if (!is_valid_psw(&vcpu->arch.sie_block->gpsw))
 		return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
+	vcpu->arch.sie_block->gbea = iaddr;
 	return 0;
 }
 
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* Re: [PATCH] KVM: s390: Fix lpsw/e breaking event handling
  2026-03-17 13:03     ` [PATCH] KVM: s390: Fix lpsw/e breaking event handling Janosch Frank
@ 2026-03-17 13:30       ` Christian Borntraeger
  2026-03-23 15:08       ` Hendrik Brueckner
  1 sibling, 0 replies; 39+ messages in thread
From: Christian Borntraeger @ 2026-03-17 13:30 UTC (permalink / raw)
  To: Janosch Frank, kvm; +Cc: linux-s390, imbrenda, akrowiak



Am 17.03.26 um 14:03 schrieb Janosch Frank:
> LPSW and LPSWE need to set the gbea on completion but currently don't.
> Time to fix this up.
> 
> LPSWEY was designed to not set the bear.
> 
> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
> Fixes: 48a3e950f4cee ("KVM: s390: Add support for machine checks.")
> Reported-by: Christian Borntraeger <borntraeger@linux.ibm.com>


Looks good,

Reviewed-by: Christian Borntraeger <borntraeger@linux.ibm.com>

> ---
>   arch/s390/kvm/priv.c | 8 ++++++--
>   1 file changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c
> index 3e764e6440d8..9d28b3fdba5b 100644
> --- a/arch/s390/kvm/priv.c
> +++ b/arch/s390/kvm/priv.c
> @@ -710,12 +710,13 @@ int kvm_s390_handle_lpsw(struct kvm_vcpu *vcpu)
>   {
>   	psw_t *gpsw = &vcpu->arch.sie_block->gpsw;
>   	psw32_t new_psw;
> -	u64 addr;
> +	u64 addr, iaddr;
>   	int rc;
>   	u8 ar;
>   
>   	vcpu->stat.instruction_lpsw++;
>   
> +	iaddr = gpsw->addr;
>   	if (gpsw->mask & PSW_MASK_PSTATE)
>   		return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP);
>   
> @@ -733,18 +734,20 @@ int kvm_s390_handle_lpsw(struct kvm_vcpu *vcpu)
>   	gpsw->addr = new_psw.addr & ~PSW32_ADDR_AMODE;
>   	if (!is_valid_psw(gpsw))
>   		return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
> +	vcpu->arch.sie_block->gbea = iaddr;
>   	return 0;
>   }
>   
>   static int handle_lpswe(struct kvm_vcpu *vcpu)
>   {
>   	psw_t new_psw;
> -	u64 addr;
> +	u64 addr, iaddr;
>   	int rc;
>   	u8 ar;
>   
>   	vcpu->stat.instruction_lpswe++;
>   
> +	iaddr = vcpu->arch.sie_block->gpsw.addr;
>   	if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE)
>   		return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP);
>   
> @@ -757,6 +760,7 @@ static int handle_lpswe(struct kvm_vcpu *vcpu)
>   	vcpu->arch.sie_block->gpsw = new_psw;
>   	if (!is_valid_psw(&vcpu->arch.sie_block->gpsw))
>   		return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
> +	vcpu->arch.sie_block->gbea = iaddr;
>   	return 0;
>   }
>   


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC 04/10] KVM: s390: kvm_s390_real_to_abs() should return gpa_t
  2026-03-16 16:23 ` [RFC 04/10] KVM: s390: kvm_s390_real_to_abs() should return gpa_t Janosch Frank
  2026-03-16 18:53   ` Christian Borntraeger
@ 2026-03-18  7:10   ` Christoph Schlameuss
  2026-03-18 14:29   ` Claudio Imbrenda
  2 siblings, 0 replies; 39+ messages in thread
From: Christoph Schlameuss @ 2026-03-18  7:10 UTC (permalink / raw)
  To: Janosch Frank, kvm; +Cc: linux-s390, imbrenda, borntraeger, akrowiak

On Mon Mar 16, 2026 at 5:23 PM CET, Janosch Frank wrote:
> An absolute address is definitely guest physical.
>
> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>

Reviewed-by: Christoph Schlameuss <schlameuss@linux.ibm.com>

> ---
>  arch/s390/kvm/gaccess.h | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/arch/s390/kvm/gaccess.h b/arch/s390/kvm/gaccess.h
> index b5385cec60f4..ee346b607a07 100644
> --- a/arch/s390/kvm/gaccess.h
> +++ b/arch/s390/kvm/gaccess.h
> @@ -24,7 +24,7 @@
>   * Returns the guest absolute address that corresponds to the passed guest real
>   * address @gra of by applying the given prefix.
>   */
> -static inline unsigned long _kvm_s390_real_to_abs(u32 prefix, unsigned long gra)
> +static inline gpa_t _kvm_s390_real_to_abs(u32 prefix, unsigned long gra)
>  {
>  	if (gra < 2 * PAGE_SIZE)
>  		gra += prefix;
> @@ -41,8 +41,8 @@ static inline unsigned long _kvm_s390_real_to_abs(u32 prefix, unsigned long gra)
>   * Returns the guest absolute address that corresponds to the passed guest real
>   * address @gra of a virtual guest cpu by applying its prefix.
>   */
> -static inline unsigned long kvm_s390_real_to_abs(struct kvm_vcpu *vcpu,
> -						 unsigned long gra)
> +static inline gpa_t kvm_s390_real_to_abs(struct kvm_vcpu *vcpu,
> +					 unsigned long gra)

nit: This would also nicely fit into a single line (<100 c) when you are already
touching it.

>  {
>  	return _kvm_s390_real_to_abs(kvm_s390_get_prefix(vcpu), gra);
>  }


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC 05/10] KVM: s390: vsie: Cleanup and fixup of crycb handling
  2026-03-16 16:23 ` [RFC 05/10] KVM: s390: vsie: Cleanup and fixup of crycb handling Janosch Frank
@ 2026-03-18 14:13   ` Christoph Schlameuss
  2026-03-18 16:48   ` Claudio Imbrenda
  2026-03-20 12:01   ` Anthony Krowiak
  2 siblings, 0 replies; 39+ messages in thread
From: Christoph Schlameuss @ 2026-03-18 14:13 UTC (permalink / raw)
  To: Janosch Frank, kvm; +Cc: linux-s390, imbrenda, borntraeger, akrowiak

On Mon Mar 16, 2026 at 5:23 PM CET, Janosch Frank wrote:
> The crycbd denotes an absolute address and as such we need to use
> gpa_t and read_guest_abs() instead of read_guest_real().
>
> We don't want to copy the reserved fields into the host, so let's
> define size constants that only include the masks and ignore the
> reserved fields.
>
> While we're at it, remove magic constants with compiler backed
> constants.
>
> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>

Reviewed-by: Christoph Schlameuss <schlameuss@linux.ibm.com>

> ---
>  arch/s390/include/asm/kvm_host.h |  6 ++++
>  arch/s390/kvm/vsie.c             | 50 +++++++++++++++-----------------
>  2 files changed, 30 insertions(+), 26 deletions(-)
>

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC 06/10] KVM: s390: Rework lowcore access functions
  2026-03-16 16:23 ` [RFC 06/10] KVM: s390: Rework lowcore access functions Janosch Frank
@ 2026-03-18 14:25   ` Claudio Imbrenda
  2026-03-23  9:11   ` Christoph Schlameuss
  1 sibling, 0 replies; 39+ messages in thread
From: Claudio Imbrenda @ 2026-03-18 14:25 UTC (permalink / raw)
  To: Janosch Frank; +Cc: kvm, linux-s390, borntraeger, akrowiak

On Mon, 16 Mar 2026 16:23:53 +0000
Janosch Frank <frankja@linux.ibm.com> wrote:

> These functions effectively always use offset constants and no
> addresses at all. Therefore make it clear that we're accessing offsets
> and sprinkle in compile and runtime warnings for more safety.
> 
> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
> ---
>  arch/s390/kvm/gaccess.h | 32 ++++++++++++++++++++++----------
>  1 file changed, 22 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/s390/kvm/gaccess.h b/arch/s390/kvm/gaccess.h
> index ee346b607a07..086da7b040b5 100644
> --- a/arch/s390/kvm/gaccess.h
> +++ b/arch/s390/kvm/gaccess.h
> @@ -89,6 +89,13 @@ static inline unsigned long kvm_s390_logical_to_effective(struct kvm_vcpu *vcpu,
>  	return _kvm_s390_logical_to_effective(&vcpu->arch.sie_block->gpsw, ga);
>  }
>  
> +static inline gpa_t lc_addr_from_offset(struct kvm_vcpu *vcpu, unsigned int off)

int is ok, but technically it could even be a short

> +{
> +	gpa_t addr = kvm_s390_get_prefix(vcpu);
> +
> +	return addr + off;
> +}
> +
>  /*
>   * put_guest_lc, read_guest_lc and write_guest_lc are guest access functions
>   * which shall only be used to access the lowcore of a vcpu.
> @@ -117,13 +124,14 @@ static inline unsigned long kvm_s390_logical_to_effective(struct kvm_vcpu *vcpu,
>   *	 would be to terminate the guest.
>   *	 It is wrong to inject a guest exception.
>   */
> -#define put_guest_lc(vcpu, x, gra)				\
> +#define put_guest_lc(vcpu, x, off)				\
>  ({								\
>  	struct kvm_vcpu *__vcpu = (vcpu);			\
> -	__typeof__(*(gra)) __x = (x);				\
> -	unsigned long __gpa;					\
> +	__typeof__(*(off)) __x = (x);				\
> +	gpa_t __gpa;						\
>  								\
> -	__gpa = (unsigned long)(gra);				\
> +	BUILD_BUG_ON(!__builtin_constant_p(off));		\
> +	__gpa = (unsigned long)(off);				\

why not use the new function lc_addr_from_offset() that you have just
introduced?

>  	__gpa += kvm_s390_get_prefix(__vcpu);			\
>  	kvm_write_guest(__vcpu->kvm, __gpa, &__x, sizeof(__x));	\
>  })
> @@ -131,7 +139,7 @@ static inline unsigned long kvm_s390_logical_to_effective(struct kvm_vcpu *vcpu,
>  /**
>   * write_guest_lc - copy data from kernel space to guest vcpu's lowcore
>   * @vcpu: virtual cpu
> - * @gra: vcpu's source guest real address
> + * @off: offset into the lowcore
>   * @data: source address in kernel space
>   * @len: number of bytes to copy
>   *
> @@ -146,18 +154,20 @@ static inline unsigned long kvm_s390_logical_to_effective(struct kvm_vcpu *vcpu,
>   *	 It is wrong to inject a guest exception.
>   */
>  static inline __must_check
> -int write_guest_lc(struct kvm_vcpu *vcpu, unsigned long gra, void *data,
> +int write_guest_lc(struct kvm_vcpu *vcpu, unsigned int off, void *data,
>  		   unsigned long len)
>  {
> -	unsigned long gpa = gra + kvm_s390_get_prefix(vcpu);
> +	gpa_t gpa = lc_addr_from_offset(vcpu, off);
>  
> +	BUILD_BUG_ON(!__builtin_constant_p(off) || !__builtin_constant_p(len));
> +	BUILD_BUG_ON(off + len >= 2 * PAGE_SIZE);
>  	return kvm_write_guest(vcpu->kvm, gpa, data, len);
>  }
>  
>  /**
>   * read_guest_lc - copy data from guest vcpu's lowcore to kernel space
>   * @vcpu: virtual cpu
> - * @gra: vcpu's source guest real address
> + * @off: offset into the lowcore
>   * @data: destination address in kernel space
>   * @len: number of bytes to copy
>   *
> @@ -172,11 +182,13 @@ int write_guest_lc(struct kvm_vcpu *vcpu, unsigned long gra, void *data,
>   *	 It is wrong to inject a guest exception.
>   */
>  static inline __must_check
> -int read_guest_lc(struct kvm_vcpu *vcpu, unsigned long gra, void *data,
> +int read_guest_lc(struct kvm_vcpu *vcpu, unsigned int off, void *data,
>  		  unsigned long len)
>  {
> -	unsigned long gpa = gra + kvm_s390_get_prefix(vcpu);
> +	gpa_t gpa = lc_addr_from_offset(vcpu, off);
>  
> +	BUILD_BUG_ON(!__builtin_constant_p(off) || !__builtin_constant_p(len));
> +	BUILD_BUG_ON(off + len >= 2 * PAGE_SIZE);
>  	return kvm_read_guest(vcpu->kvm, gpa, data, len);
>  }
>  


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC 03/10] KVM: s390: Convert shifts to gpa_to_gfn()
  2026-03-16 16:23 ` [RFC 03/10] KVM: s390: Convert shifts to gpa_to_gfn() Janosch Frank
  2026-03-16 18:49   ` Christian Borntraeger
  2026-03-17 10:38   ` Christoph Schlameuss
@ 2026-03-18 14:26   ` Claudio Imbrenda
  2 siblings, 0 replies; 39+ messages in thread
From: Claudio Imbrenda @ 2026-03-18 14:26 UTC (permalink / raw)
  To: Janosch Frank; +Cc: kvm, linux-s390, borntraeger, akrowiak

On Mon, 16 Mar 2026 16:23:50 +0000
Janosch Frank <frankja@linux.ibm.com> wrote:

> Not only do we get rid of the ugly shift but we have more chances to
> do static analysis type checking since gpa_to_gfn() returns gfn_t.
> 
> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>

Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>

> ---
>  arch/s390/kvm/interrupt.c | 4 ++--
>  arch/s390/kvm/priv.c      | 2 +-
>  2 files changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
> index 03fb477c7527..1b771276415c 100644
> --- a/arch/s390/kvm/interrupt.c
> +++ b/arch/s390/kvm/interrupt.c
> @@ -2771,13 +2771,13 @@ static int adapter_indicators_set(struct kvm *kvm,
>  	bit = get_ind_bit(adapter_int->ind_addr,
>  			  adapter_int->ind_offset, adapter->swap);
>  	set_bit(bit, map);
> -	mark_page_dirty(kvm, adapter_int->ind_gaddr >> PAGE_SHIFT);
> +	mark_page_dirty(kvm, gpa_to_gfn(adapter_int->ind_gaddr));
>  	set_page_dirty_lock(ind_page);
>  	map = page_address(summary_page);
>  	bit = get_ind_bit(adapter_int->summary_addr,
>  			  adapter_int->summary_offset, adapter->swap);
>  	summary_set = test_and_set_bit(bit, map);
> -	mark_page_dirty(kvm, adapter_int->summary_gaddr >> PAGE_SHIFT);
> +	mark_page_dirty(kvm, gpa_to_gfn(adapter_int->summary_gaddr));
>  	set_page_dirty_lock(summary_page);
>  	srcu_read_unlock(&kvm->srcu, idx);
>  
> diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c
> index a90fc0b4fd96..780186eb6037 100644
> --- a/arch/s390/kvm/priv.c
> +++ b/arch/s390/kvm/priv.c
> @@ -1151,7 +1151,7 @@ static inline int __do_essa(struct kvm_vcpu *vcpu, const int orc)
>  	 */
>  
>  	kvm_s390_get_regs_rre(vcpu, &r1, &r2);
> -	gfn = vcpu->run->s.regs.gprs[r2] >> PAGE_SHIFT;
> +	gfn = gpa_to_gfn(vcpu->run->s.regs.gprs[r2]);
>  	entries = (vcpu->arch.sie_block->cbrlo & ~PAGE_MASK) >> 3;
>  
>  	nappended = dat_perform_essa(vcpu->arch.gmap->asce, gfn, orc, &state, &dirtied);


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC 04/10] KVM: s390: kvm_s390_real_to_abs() should return gpa_t
  2026-03-16 16:23 ` [RFC 04/10] KVM: s390: kvm_s390_real_to_abs() should return gpa_t Janosch Frank
  2026-03-16 18:53   ` Christian Borntraeger
  2026-03-18  7:10   ` Christoph Schlameuss
@ 2026-03-18 14:29   ` Claudio Imbrenda
  2 siblings, 0 replies; 39+ messages in thread
From: Claudio Imbrenda @ 2026-03-18 14:29 UTC (permalink / raw)
  To: Janosch Frank; +Cc: kvm, linux-s390, borntraeger, akrowiak

On Mon, 16 Mar 2026 16:23:51 +0000
Janosch Frank <frankja@linux.ibm.com> wrote:

> An absolute address is definitely guest physical.
> 
> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
> ---
>  arch/s390/kvm/gaccess.h | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/s390/kvm/gaccess.h b/arch/s390/kvm/gaccess.h
> index b5385cec60f4..ee346b607a07 100644
> --- a/arch/s390/kvm/gaccess.h
> +++ b/arch/s390/kvm/gaccess.h
> @@ -24,7 +24,7 @@
>   * Returns the guest absolute address that corresponds to the passed guest real
>   * address @gra of by applying the given prefix.
>   */
> -static inline unsigned long _kvm_s390_real_to_abs(u32 prefix, unsigned long gra)
> +static inline gpa_t _kvm_s390_real_to_abs(u32 prefix, unsigned long gra)
>  {
>  	if (gra < 2 * PAGE_SIZE)
>  		gra += prefix;
> @@ -41,8 +41,8 @@ static inline unsigned long _kvm_s390_real_to_abs(u32 prefix, unsigned long gra)
>   * Returns the guest absolute address that corresponds to the passed guest real
>   * address @gra of a virtual guest cpu by applying its prefix.
>   */
> -static inline unsigned long kvm_s390_real_to_abs(struct kvm_vcpu *vcpu,
> -						 unsigned long gra)
> +static inline gpa_t kvm_s390_real_to_abs(struct kvm_vcpu *vcpu,
> +					 unsigned long gra)

as Christoph suggested, please put it on one line; with that fixed:

Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>

>  {
>  	return _kvm_s390_real_to_abs(kvm_s390_get_prefix(vcpu), gra);
>  }


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC 07/10] KVM: s390: Use gpa_t and gva_t in gaccess files
  2026-03-16 16:23 ` [RFC 07/10] KVM: s390: Use gpa_t and gva_t in gaccess files Janosch Frank
@ 2026-03-18 15:36   ` Claudio Imbrenda
  2026-03-23  9:10   ` Christoph Schlameuss
  1 sibling, 0 replies; 39+ messages in thread
From: Claudio Imbrenda @ 2026-03-18 15:36 UTC (permalink / raw)
  To: Janosch Frank; +Cc: kvm, linux-s390, borntraeger, akrowiak

On Mon, 16 Mar 2026 16:23:54 +0000
Janosch Frank <frankja@linux.ibm.com> wrote:

> A lot of addresses are being passed around as u64 or unsigned long
> instead of gpa_t and gva_t. Some of the variables are already called
> gva or gpa anyway.
> 
> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>

Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>

> ---
>  arch/s390/kvm/gaccess.c | 20 ++++++++++----------
>  arch/s390/kvm/gaccess.h |  3 +--
>  2 files changed, 11 insertions(+), 12 deletions(-)
> 
> diff --git a/arch/s390/kvm/gaccess.c b/arch/s390/kvm/gaccess.c
> index 7275854ee68e..b2e83f6e16ab 100644
> --- a/arch/s390/kvm/gaccess.c
> +++ b/arch/s390/kvm/gaccess.c
> @@ -441,7 +441,7 @@ static int get_vcpu_asce(struct kvm_vcpu *vcpu, union asce *asce,
>  	return 0;
>  }
>  
> -static int deref_table(struct kvm *kvm, unsigned long gpa, unsigned long *val)
> +static int deref_table(struct kvm *kvm, gpa_t gpa, unsigned long *val)
>  {
>  	return kvm_read_guest(kvm, gpa, val, sizeof(*val));
>  }
> @@ -467,8 +467,8 @@ static int deref_table(struct kvm *kvm, unsigned long gpa, unsigned long *val)
>   *	      the returned value is the program interruption code as defined
>   *	      by the architecture
>   */
> -static unsigned long guest_translate_gva(struct kvm_vcpu *vcpu, unsigned long gva,
> -					 unsigned long *gpa, const union asce asce,
> +static unsigned long guest_translate_gva(struct kvm_vcpu *vcpu, gva_t gva,
> +					 gpa_t *gpa, const union asce asce,
>  					 enum gacc_mode mode, enum prot_type *prot)
>  {
>  	union vaddress vaddr = {.addr = gva};
> @@ -477,8 +477,8 @@ static unsigned long guest_translate_gva(struct kvm_vcpu *vcpu, unsigned long gv
>  	int dat_protection = 0;
>  	int iep_protection = 0;
>  	union ctlreg0 ctlreg0;
> -	unsigned long ptr;
>  	int edat1, edat2, iep;
> +	gpa_t ptr;
>  
>  	ctlreg0.val = vcpu->arch.sie_block->gcr[0];
>  	edat1 = ctlreg0.edat && test_kvm_facility(vcpu->kvm, 8);
> @@ -772,7 +772,7 @@ static int vcpu_check_access_key_gpa(struct kvm_vcpu *vcpu, u8 access_key,
>   *		  be used to inject an exception into the guest.
>   */
>  static int guest_range_to_gpas(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar,
> -			       unsigned long *gpas, unsigned long len,
> +			       gpa_t *gpas, unsigned long len,
>  			       const union asce asce, enum gacc_mode mode,
>  			       u8 access_key)
>  {
> @@ -781,7 +781,7 @@ static int guest_range_to_gpas(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar,
>  	unsigned int fragment_len;
>  	int lap_enabled, rc = 0;
>  	enum prot_type prot;
> -	unsigned long gpa;
> +	gpa_t gpa;
>  
>  	lap_enabled = low_address_protection_enabled(vcpu, asce);
>  	while (min(PAGE_SIZE - offset, len) > 0) {
> @@ -932,11 +932,11 @@ int access_guest_with_key(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar,
>  {
>  	psw_t *psw = &vcpu->arch.sie_block->gpsw;
>  	unsigned long nr_pages, idx;
> -	unsigned long gpa_array[2];
>  	unsigned int fragment_len;
> -	unsigned long *gpas;
>  	enum prot_type prot;
> +	gpa_t gpa_array[2];
>  	int need_ipte_lock;
> +	gpa_t *gpas;
>  	union asce asce;
>  	bool try_storage_prot_override;
>  	bool try_fetch_prot_override;
> @@ -1182,7 +1182,7 @@ int cmpxchg_guest_abs_with_key(struct kvm *kvm, gpa_t gpa, int len, union kvm_s3
>   * has to take care of this.
>   */
>  int guest_translate_address_with_key(struct kvm_vcpu *vcpu, unsigned long gva, u8 ar,
> -				     unsigned long *gpa, enum gacc_mode mode,
> +				     gpa_t *gpa, enum gacc_mode mode,
>  				     u8 access_key)
>  {
>  	union asce asce;
> @@ -1282,9 +1282,9 @@ static int walk_guest_tables(struct gmap *sg, unsigned long saddr, struct pgtwal
>  	struct guest_fault *entries;
>  	union dat_table_entry table;
>  	union vaddress vaddr;
> -	unsigned long ptr;
>  	struct kvm *kvm;
>  	union asce asce;
> +	gpa_t ptr;
>  	int rc;
>  
>  	if (!parent)
> diff --git a/arch/s390/kvm/gaccess.h b/arch/s390/kvm/gaccess.h
> index 086da7b040b5..f23dc0729649 100644
> --- a/arch/s390/kvm/gaccess.h
> +++ b/arch/s390/kvm/gaccess.h
> @@ -199,8 +199,7 @@ enum gacc_mode {
>  };
>  
>  int guest_translate_address_with_key(struct kvm_vcpu *vcpu, unsigned long gva, u8 ar,
> -				     unsigned long *gpa, enum gacc_mode mode,
> -				     u8 access_key);
> +				     gpa_t *gpa, enum gacc_mode mode, u8 access_key);
>  
>  int check_gva_range(struct kvm_vcpu *vcpu, unsigned long gva, u8 ar,
>  		    unsigned long length, enum gacc_mode mode, u8 access_key);


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC 09/10] KVM: s390: Use gpa_t in pv.c
  2026-03-16 16:23 ` [RFC 09/10] KVM: s390: Use gpa_t in pv.c Janosch Frank
@ 2026-03-18 15:46   ` Claudio Imbrenda
  2026-03-23  9:41   ` Christoph Schlameuss
  1 sibling, 0 replies; 39+ messages in thread
From: Claudio Imbrenda @ 2026-03-18 15:46 UTC (permalink / raw)
  To: Janosch Frank; +Cc: kvm, linux-s390, borntraeger, akrowiak

On Mon, 16 Mar 2026 16:23:56 +0000
Janosch Frank <frankja@linux.ibm.com> wrote:

> Lots of locations where we could've used gpa_t but used u64/unsigned
> long.
> 
> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>

Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>

> ---
>  arch/s390/kvm/kvm-s390.h |  8 ++++----
>  arch/s390/kvm/pv.c       | 12 ++++++------
>  2 files changed, 10 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
> index bf1d7798c1af..1ffaec723a30 100644
> --- a/arch/s390/kvm/kvm-s390.h
> +++ b/arch/s390/kvm/kvm-s390.h
> @@ -308,17 +308,17 @@ int kvm_s390_pv_deinit_vm(struct kvm *kvm, u16 *rc, u16 *rrc);
>  int kvm_s390_pv_init_vm(struct kvm *kvm, u16 *rc, u16 *rrc);
>  int kvm_s390_pv_set_sec_parms(struct kvm *kvm, void *hdr, u64 length, u16 *rc,
>  			      u16 *rrc);
> -int kvm_s390_pv_unpack(struct kvm *kvm, unsigned long addr, unsigned long size,
> +int kvm_s390_pv_unpack(struct kvm *kvm, gpa_t addr, unsigned long size,
>  		       unsigned long tweak, u16 *rc, u16 *rrc);
>  int kvm_s390_pv_set_cpu_state(struct kvm_vcpu *vcpu, u8 state);
>  int kvm_s390_pv_dump_cpu(struct kvm_vcpu *vcpu, void *buff, u16 *rc, u16 *rrc);
>  int kvm_s390_pv_dump_stor_state(struct kvm *kvm, void __user *buff_user,
> -				u64 *gaddr, u64 buff_user_len, u16 *rc, u16 *rrc);
> +				gpa_t *gaddr, u64 buff_user_len, u16 *rc, u16 *rrc);
>  int kvm_s390_pv_dump_complete(struct kvm *kvm, void __user *buff_user,
>  			      u16 *rc, u16 *rrc);
>  int kvm_s390_pv_destroy_page(struct kvm *kvm, unsigned long gaddr);
> -int kvm_s390_pv_convert_to_secure(struct kvm *kvm, unsigned long gaddr);
> -int kvm_s390_pv_make_secure(struct kvm *kvm, unsigned long gaddr, void *uvcb);
> +int kvm_s390_pv_convert_to_secure(struct kvm *kvm, gpa_t gaddr);
> +int kvm_s390_pv_make_secure(struct kvm *kvm, gpa_t gaddr, void *uvcb);
>  
>  static inline u64 kvm_s390_pv_get_handle(struct kvm *kvm)
>  {
> diff --git a/arch/s390/kvm/pv.c b/arch/s390/kvm/pv.c
> index c2dafd812a3b..a86469507309 100644
> --- a/arch/s390/kvm/pv.c
> +++ b/arch/s390/kvm/pv.c
> @@ -125,7 +125,7 @@ static void _kvm_s390_pv_make_secure(struct guest_fault *f)
>   * Context: needs to be called with kvm->srcu held.
>   * Return: 0 on success, < 0 in case of error.
>   */
> -int kvm_s390_pv_make_secure(struct kvm *kvm, unsigned long gaddr, void *uvcb)
> +int kvm_s390_pv_make_secure(struct kvm *kvm, gpa_t gaddr, void *uvcb)
>  {
>  	struct pv_make_secure priv = { .uvcb = uvcb };
>  	struct guest_fault f = {
> @@ -157,7 +157,7 @@ int kvm_s390_pv_make_secure(struct kvm *kvm, unsigned long gaddr, void *uvcb)
>  	return rc;
>  }
>  
> -int kvm_s390_pv_convert_to_secure(struct kvm *kvm, unsigned long gaddr)
> +int kvm_s390_pv_convert_to_secure(struct kvm *kvm, gpa_t gaddr)
>  {
>  	struct uv_cb_cts uvcb = {
>  		.header.cmd = UVC_CMD_CONV_TO_SEC_STOR,
> @@ -765,7 +765,7 @@ int kvm_s390_pv_set_sec_parms(struct kvm *kvm, void *hdr, u64 length, u16 *rc,
>  	return cc ? -EINVAL : 0;
>  }
>  
> -static int unpack_one(struct kvm *kvm, unsigned long addr, u64 tweak,
> +static int unpack_one(struct kvm *kvm, gpa_t addr, u64 tweak,
>  		      u64 offset, u16 *rc, u16 *rrc)
>  {
>  	struct uv_cb_unp uvcb = {
> @@ -793,7 +793,7 @@ static int unpack_one(struct kvm *kvm, unsigned long addr, u64 tweak,
>  	return ret;
>  }
>  
> -int kvm_s390_pv_unpack(struct kvm *kvm, unsigned long addr, unsigned long size,
> +int kvm_s390_pv_unpack(struct kvm *kvm, gpa_t addr, unsigned long size,
>  		       unsigned long tweak, u16 *rc, u16 *rrc)
>  {
>  	u64 offset = 0;
> @@ -802,7 +802,7 @@ int kvm_s390_pv_unpack(struct kvm *kvm, unsigned long addr, unsigned long size,
>  	if (addr & ~PAGE_MASK || !size || size & ~PAGE_MASK)
>  		return -EINVAL;
>  
> -	KVM_UV_EVENT(kvm, 3, "PROTVIRT VM UNPACK: start addr %lx size %lx",
> +	KVM_UV_EVENT(kvm, 3, "PROTVIRT VM UNPACK: start addr %llx size %lx",
>  		     addr, size);
>  
>  	guard(srcu)(&kvm->srcu);
> @@ -891,7 +891,7 @@ int kvm_s390_pv_dump_cpu(struct kvm_vcpu *vcpu, void *buff, u16 *rc, u16 *rrc)
>   *  -EFAULT if copying the result to buff_user failed
>   */
>  int kvm_s390_pv_dump_stor_state(struct kvm *kvm, void __user *buff_user,
> -				u64 *gaddr, u64 buff_user_len, u16 *rc, u16 *rrc)
> +				gpa_t *gaddr, u64 buff_user_len, u16 *rc, u16 *rrc)
>  {
>  	struct uv_cb_dump_stor_state uvcb = {
>  		.header.cmd = UVC_CMD_DUMP_CONF_STOR_STATE,


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC 10/10] KVM: s390: Cleanup kvm_s390_store_status_unloaded
  2026-03-16 16:23 ` [RFC 10/10] KVM: s390: Cleanup kvm_s390_store_status_unloaded Janosch Frank
@ 2026-03-18 15:51   ` Claudio Imbrenda
  2026-03-23  9:47   ` Christoph Schlameuss
  1 sibling, 0 replies; 39+ messages in thread
From: Claudio Imbrenda @ 2026-03-18 15:51 UTC (permalink / raw)
  To: Janosch Frank; +Cc: kvm, linux-s390, borntraeger, akrowiak

On Mon, 16 Mar 2026 16:23:57 +0000
Janosch Frank <frankja@linux.ibm.com> wrote:

> Fixup comments, use gpa_t and replace magic constants.
> 
> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
> ---
>  arch/s390/kvm/kvm-s390.c | 24 ++++++++++++++++--------
>  arch/s390/kvm/kvm-s390.h |  4 ++--
>  2 files changed, 18 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> index 1668580008c6..c76f83b38d27 100644
> --- a/arch/s390/kvm/kvm-s390.c
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -4993,11 +4993,12 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
>  
>  /*
>   * store status at address
> - * we use have two special cases:
> - * KVM_S390_STORE_STATUS_NOADDR: -> 0x1200 on 64 bit
> - * KVM_S390_STORE_STATUS_PREFIXED: -> prefix
> + *
> + * We have two special cases:
> + * - KVM_S390_STORE_STATUS_NOADDR: -> 0x1200 on 64 bit
> + * - KVM_S390_STORE_STATUS_PREFIXED: -> prefix
>   */

since you're touching this comment block, make it proper kerneldoc,
describing parameters, return value, etc

the rest of the patch is good

> -int kvm_s390_store_status_unloaded(struct kvm_vcpu *vcpu, unsigned long gpa)
> +int kvm_s390_store_status_unloaded(struct kvm_vcpu *vcpu, gpa_t gpa)
>  {
>  	unsigned char archmode = 1;
>  	freg_t fprs[NUM_FPRS];
> @@ -5007,15 +5008,22 @@ int kvm_s390_store_status_unloaded(struct kvm_vcpu *vcpu, unsigned long gpa)
>  
>  	px = kvm_s390_get_prefix(vcpu);
>  	if (gpa == KVM_S390_STORE_STATUS_NOADDR) {
> -		if (write_guest_abs(vcpu, 163, &archmode, 1))
> +		if (write_guest_abs(vcpu, __LC_AR_MODE_ID, &archmode, 1))
>  			return -EFAULT;
>  		gpa = 0;
>  	} else if (gpa == KVM_S390_STORE_STATUS_PREFIXED) {
> -		if (write_guest_real(vcpu, 163, &archmode, 1))
> +		if (write_guest_real(vcpu, __LC_AR_MODE_ID, &archmode, 1))
>  			return -EFAULT;
>  		gpa = px;
> -	} else
> +	} else {
> +		/*
> +		 * Store status at address does NOT store vrs and arch
> +		 * indication. Since we add __LC_FPREGS_SAVE_AREA to
> +		 * the address when writing, we need to subtract it
> +		 * here.
> +		 */
>  		gpa -= __LC_FPREGS_SAVE_AREA;
> +	}
>  
>  	/* manually convert vector registers if necessary */
>  	if (cpu_has_vx()) {
> @@ -5049,7 +5057,7 @@ int kvm_s390_store_status_unloaded(struct kvm_vcpu *vcpu, unsigned long gpa)
>  	return rc ? -EFAULT : 0;
>  }
>  
> -int kvm_s390_vcpu_store_status(struct kvm_vcpu *vcpu, unsigned long addr)
> +int kvm_s390_vcpu_store_status(struct kvm_vcpu *vcpu, gpa_t addr)
>  {
>  	/*
>  	 * The guest FPRS and ACRS are in the host FPRS/ACRS due to the lazy
> diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
> index 1ffaec723a30..9cfc3caee334 100644
> --- a/arch/s390/kvm/kvm-s390.h
> +++ b/arch/s390/kvm/kvm-s390.h
> @@ -450,8 +450,8 @@ int kvm_s390_handle_sigp_pei(struct kvm_vcpu *vcpu);
>  
>  /* implemented in kvm-s390.c */
>  int kvm_s390_try_set_tod_clock(struct kvm *kvm, const struct kvm_s390_vm_tod_clock *gtod);
> -int kvm_s390_store_status_unloaded(struct kvm_vcpu *vcpu, unsigned long addr);
> -int kvm_s390_vcpu_store_status(struct kvm_vcpu *vcpu, unsigned long addr);
> +int kvm_s390_store_status_unloaded(struct kvm_vcpu *vcpu, gpa_t addr);
> +int kvm_s390_vcpu_store_status(struct kvm_vcpu *vcpu, gpa_t addr);
>  int kvm_s390_vcpu_start(struct kvm_vcpu *vcpu);
>  int kvm_s390_vcpu_stop(struct kvm_vcpu *vcpu);
>  void kvm_s390_vcpu_block(struct kvm_vcpu *vcpu);


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC 08/10] KVM: s390: Use gpa_t in priv.c
  2026-03-16 16:23 ` [RFC 08/10] KVM: s390: Use gpa_t in priv.c Janosch Frank
@ 2026-03-18 16:02   ` Claudio Imbrenda
  2026-03-23  9:28   ` Christoph Schlameuss
  1 sibling, 0 replies; 39+ messages in thread
From: Claudio Imbrenda @ 2026-03-18 16:02 UTC (permalink / raw)
  To: Janosch Frank; +Cc: kvm, linux-s390, borntraeger, akrowiak

On Mon, 16 Mar 2026 16:23:55 +0000
Janosch Frank <frankja@linux.ibm.com> wrote:

> More unsigned long to gpa_t conversions.
> 
> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
> ---
>  arch/s390/kvm/gaccess.h |  8 ++++++++
>  arch/s390/kvm/priv.c    | 27 ++++++++++++---------------
>  2 files changed, 20 insertions(+), 15 deletions(-)
> 
> diff --git a/arch/s390/kvm/gaccess.h b/arch/s390/kvm/gaccess.h
> index f23dc0729649..970d9020dc14 100644
> --- a/arch/s390/kvm/gaccess.h
> +++ b/arch/s390/kvm/gaccess.h
> @@ -89,6 +89,14 @@ static inline unsigned long kvm_s390_logical_to_effective(struct kvm_vcpu *vcpu,
>  	return _kvm_s390_logical_to_effective(&vcpu->arch.sie_block->gpsw, ga);
>  }
>  
> +static inline gpa_t kvm_s390_real_to_abs_effective(struct kvm_vcpu *vcpu,
> +						   unsigned long gra)

the name is confusing; it should be something more like
kvm_logical_to_abs or kvm_logical_effective_to_abs

the rest of the patch is good

> +{
> +	gra = kvm_s390_logical_to_effective(vcpu, gra);
> +
> +	return kvm_s390_real_to_abs(vcpu, gra);
> +}
> +
>  static inline gpa_t lc_addr_from_offset(struct kvm_vcpu *vcpu, unsigned int off)
>  {
>  	gpa_t addr = kvm_s390_get_prefix(vcpu);
> diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c
> index 780186eb6037..78d3338afdb2 100644
> --- a/arch/s390/kvm/priv.c
> +++ b/arch/s390/kvm/priv.c
> @@ -256,9 +256,9 @@ static int try_handle_skey(struct kvm_vcpu *vcpu)
>  
>  static int handle_iske(struct kvm_vcpu *vcpu)
>  {
> -	unsigned long gaddr;
>  	int reg1, reg2;
>  	union skey key;
> +	gpa_t gpa;
>  	int rc;
>  
>  	vcpu->stat.instruction_iske++;
> @@ -271,12 +271,10 @@ static int handle_iske(struct kvm_vcpu *vcpu)
>  		return rc != -EAGAIN ? rc : 0;
>  
>  	kvm_s390_get_regs_rre(vcpu, &reg1, &reg2);
> +	gpa = kvm_s390_real_to_abs_effective(vcpu, vcpu->run->s.regs.gprs[reg2] & PAGE_MASK);
>  
> -	gaddr = vcpu->run->s.regs.gprs[reg2] & PAGE_MASK;
> -	gaddr = kvm_s390_logical_to_effective(vcpu, gaddr);
> -	gaddr = kvm_s390_real_to_abs(vcpu, gaddr);
>  	scoped_guard(read_lock, &vcpu->kvm->mmu_lock)
> -		rc = dat_get_storage_key(vcpu->arch.gmap->asce, gpa_to_gfn(gaddr), &key);
> +		rc = dat_get_storage_key(vcpu->arch.gmap->asce, gpa_to_gfn(gpa), &key);
>  	if (rc > 0)
>  		return kvm_s390_inject_program_int(vcpu, rc);
>  	if (rc < 0)
> @@ -288,8 +286,8 @@ static int handle_iske(struct kvm_vcpu *vcpu)
>  
>  static int handle_rrbe(struct kvm_vcpu *vcpu)
>  {
> -	unsigned long gaddr;
>  	int reg1, reg2;
> +	gpa_t gpa;
>  	int rc;
>  
>  	vcpu->stat.instruction_rrbe++;
> @@ -302,12 +300,10 @@ static int handle_rrbe(struct kvm_vcpu *vcpu)
>  		return rc != -EAGAIN ? rc : 0;
>  
>  	kvm_s390_get_regs_rre(vcpu, &reg1, &reg2);
> +	gpa = kvm_s390_real_to_abs_effective(vcpu, vcpu->run->s.regs.gprs[reg2] & PAGE_MASK);
>  
> -	gaddr = vcpu->run->s.regs.gprs[reg2] & PAGE_MASK;
> -	gaddr = kvm_s390_logical_to_effective(vcpu, gaddr);
> -	gaddr = kvm_s390_real_to_abs(vcpu, gaddr);
>  	scoped_guard(read_lock, &vcpu->kvm->mmu_lock)
> -		rc = dat_reset_reference_bit(vcpu->arch.gmap->asce, gpa_to_gfn(gaddr));
> +		rc = dat_reset_reference_bit(vcpu->arch.gmap->asce, gpa_to_gfn(gpa));
>  	if (rc > 0)
>  		return kvm_s390_inject_program_int(vcpu, rc);
>  	if (rc < 0)
> @@ -1142,8 +1138,8 @@ static inline int __do_essa(struct kvm_vcpu *vcpu, const int orc)
>  	int r1, r2, nappended, entries;
>  	union essa_state state;
>  	unsigned long *cbrlo;
> -	unsigned long gfn;
>  	bool dirtied;
> +	gpa_t gpa;
>  
>  	/*
>  	 * We don't need to set SD.FPF.SK to 1 here, because if we have a
> @@ -1151,10 +1147,11 @@ static inline int __do_essa(struct kvm_vcpu *vcpu, const int orc)
>  	 */
>  
>  	kvm_s390_get_regs_rre(vcpu, &r1, &r2);
> -	gfn = gpa_to_gfn(vcpu->run->s.regs.gprs[r2]);
> +	gpa = vcpu->run->s.regs.gprs[r2];
>  	entries = (vcpu->arch.sie_block->cbrlo & ~PAGE_MASK) >> 3;
>  
> -	nappended = dat_perform_essa(vcpu->arch.gmap->asce, gfn, orc, &state, &dirtied);
> +	nappended = dat_perform_essa(vcpu->arch.gmap->asce, gpa_to_gfn(gpa),
> +				     orc, &state, &dirtied);
>  	vcpu->run->s.regs.gprs[r1] = state.val;
>  	if (nappended < 0)
>  		return 0;
> @@ -1166,7 +1163,7 @@ static inline int __do_essa(struct kvm_vcpu *vcpu, const int orc)
>  	 */
>  	if (nappended > 0) {
>  		cbrlo = phys_to_virt(vcpu->arch.sie_block->cbrlo & PAGE_MASK);
> -		cbrlo[entries] = gfn << PAGE_SHIFT;
> +		cbrlo[entries] = gpa;
>  	}
>  
>  	if (dirtied)
> @@ -1447,10 +1444,10 @@ int kvm_s390_handle_eb(struct kvm_vcpu *vcpu)
>  static int handle_tprot(struct kvm_vcpu *vcpu)
>  {
>  	u64 address, operand2;
> -	unsigned long gpa;
>  	u8 access_key;
>  	bool writable;
>  	int ret, cc;
> +	gpa_t gpa;
>  	u8 ar;
>  
>  	vcpu->stat.instruction_tprot++;


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC 01/10] KVM: s390: diag258: Pass absolute address to kvm_is_gpa_in_memslot()
  2026-03-16 16:23 ` [RFC 01/10] KVM: s390: diag258: Pass absolute address to kvm_is_gpa_in_memslot() Janosch Frank
  2026-03-16 18:34   ` Christian Borntraeger
  2026-03-17 10:01   ` Christoph Schlameuss
@ 2026-03-18 16:04   ` Claudio Imbrenda
  2 siblings, 0 replies; 39+ messages in thread
From: Claudio Imbrenda @ 2026-03-18 16:04 UTC (permalink / raw)
  To: Janosch Frank; +Cc: kvm, linux-s390, borntraeger, akrowiak

On Mon, 16 Mar 2026 16:23:48 +0000
Janosch Frank <frankja@linux.ibm.com> wrote:

> The token address is a real address, so we need to translate it to an
> absolute address before it is a true gpa.
> 
> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
> Fixes: 3c038e6be0e29 ("KVM: async_pf: Async page fault support on s390")

Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>

> ---
>  arch/s390/kvm/diag.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/s390/kvm/diag.c b/arch/s390/kvm/diag.c
> index d89d1c381522..51ba4dcc3905 100644
> --- a/arch/s390/kvm/diag.c
> +++ b/arch/s390/kvm/diag.c
> @@ -122,7 +122,7 @@ static int __diag_page_ref_service(struct kvm_vcpu *vcpu)
>  		    parm.token_addr & 7 || parm.zarch != 0x8000000000000000ULL)
>  			return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
>  
> -		if (!kvm_is_gpa_in_memslot(vcpu->kvm, parm.token_addr))
> +		if (!kvm_is_gpa_in_memslot(vcpu->kvm, kvm_s390_real_to_abs(vcpu, parm.token_addr)))
>  			return kvm_s390_inject_program_int(vcpu, PGM_ADDRESSING);
>  
>  		vcpu->arch.pfault_token = parm.token_addr;


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC 05/10] KVM: s390: vsie: Cleanup and fixup of crycb handling
  2026-03-16 16:23 ` [RFC 05/10] KVM: s390: vsie: Cleanup and fixup of crycb handling Janosch Frank
  2026-03-18 14:13   ` Christoph Schlameuss
@ 2026-03-18 16:48   ` Claudio Imbrenda
  2026-03-20 12:01   ` Anthony Krowiak
  2 siblings, 0 replies; 39+ messages in thread
From: Claudio Imbrenda @ 2026-03-18 16:48 UTC (permalink / raw)
  To: Janosch Frank; +Cc: kvm, linux-s390, borntraeger, akrowiak

On Mon, 16 Mar 2026 16:23:52 +0000
Janosch Frank <frankja@linux.ibm.com> wrote:

> The crycbd denotes an absolute address and as such we need to use
> gpa_t and read_guest_abs() instead of read_guest_real().
> 
> We don't want to copy the reserved fields into the host, so let's
> define size constants that only include the masks and ignore the
> reserved fields.
> 
> While we're at it, replace magic constants with compiler-backed
> constants.
> 
> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>

Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>

> ---
>  arch/s390/include/asm/kvm_host.h |  6 ++++
>  arch/s390/kvm/vsie.c             | 50 +++++++++++++++-----------------
>  2 files changed, 30 insertions(+), 26 deletions(-)
> 
> diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
> index 64a50f0862aa..52827db2fa97 100644
> --- a/arch/s390/include/asm/kvm_host.h
> +++ b/arch/s390/include/asm/kvm_host.h
> @@ -516,6 +516,8 @@ struct kvm_s390_crypto {
>  	__u8 apie;
>  };
>  
> +#define APCB_NUM_MASKS 3
> +
>  #define APCB0_MASK_SIZE 1
>  struct kvm_s390_apcb0 {
>  	__u64 apm[APCB0_MASK_SIZE];		/* 0x0000 */
> @@ -540,6 +542,10 @@ struct kvm_s390_crypto_cb {
>  	struct kvm_s390_apcb1 apcb1;		/* 0x0080 */
>  };
>  
> +#define APCB_KEY_MASK_SIZE \
> +	(sizeof_field(struct kvm_s390_crypto_cb, dea_wrapping_key_mask) + \
> +	 sizeof_field(struct kvm_s390_crypto_cb, aes_wrapping_key_mask))
> +
>  struct kvm_s390_gisa {
>  	union {
>  		struct { /* common to all formats */
> diff --git a/arch/s390/kvm/vsie.c b/arch/s390/kvm/vsie.c
> index c0d36afd4023..13480d65c59d 100644
> --- a/arch/s390/kvm/vsie.c
> +++ b/arch/s390/kvm/vsie.c
> @@ -155,17 +155,17 @@ static int prepare_cpuflags(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
>  	atomic_set(&scb_s->cpuflags, newflags);
>  	return 0;
>  }
> +
>  /* Copy to APCB FORMAT1 from APCB FORMAT0 */
>  static int setup_apcb10(struct kvm_vcpu *vcpu, struct kvm_s390_apcb1 *apcb_s,
> -			unsigned long crycb_gpa, struct kvm_s390_apcb1 *apcb_h)
> +			gpa_t crycb_gpa, struct kvm_s390_apcb1 *apcb_h)
>  {
>  	struct kvm_s390_apcb0 tmp;
> -	unsigned long apcb_gpa;
> +	gpa_t apcb_gpa;
>  
>  	apcb_gpa = crycb_gpa + offsetof(struct kvm_s390_crypto_cb, apcb0);
>  
> -	if (read_guest_real(vcpu, apcb_gpa, &tmp,
> -			    sizeof(struct kvm_s390_apcb0)))
> +	if (read_guest_abs(vcpu, apcb_gpa, &tmp, sizeof(tmp)))
>  		return -EFAULT;
>  
>  	apcb_s->apm[0] = apcb_h->apm[0] & tmp.apm[0];
> @@ -173,7 +173,6 @@ static int setup_apcb10(struct kvm_vcpu *vcpu, struct kvm_s390_apcb1 *apcb_s,
>  	apcb_s->adm[0] = apcb_h->adm[0] & tmp.adm[0] & 0xffff000000000000UL;
>  
>  	return 0;
> -
>  }
>  
>  /**
> @@ -186,18 +185,18 @@ static int setup_apcb10(struct kvm_vcpu *vcpu, struct kvm_s390_apcb1 *apcb_s,
>   * Returns 0 and -EFAULT on error reading guest apcb
>   */
>  static int setup_apcb00(struct kvm_vcpu *vcpu, unsigned long *apcb_s,
> -			unsigned long crycb_gpa, unsigned long *apcb_h)
> +			gpa_t crycb_gpa, unsigned long *apcb_h)
>  {
> -	unsigned long apcb_gpa;
> +	/* sizeof() would include reserved fields which we do not need/want */
> +	unsigned long len = APCB_NUM_MASKS * APCB0_MASK_SIZE * sizeof(u64);
> +	gpa_t apcb_gpa;
>  
>  	apcb_gpa = crycb_gpa + offsetof(struct kvm_s390_crypto_cb, apcb0);
>  
> -	if (read_guest_real(vcpu, apcb_gpa, apcb_s,
> -			    sizeof(struct kvm_s390_apcb0)))
> +	if (read_guest_abs(vcpu, apcb_gpa, apcb_s, len))
>  		return -EFAULT;
>  
> -	bitmap_and(apcb_s, apcb_s, apcb_h,
> -		   BITS_PER_BYTE * sizeof(struct kvm_s390_apcb0));
> +	bitmap_and(apcb_s, apcb_s, apcb_h, BITS_PER_BYTE * len);
>  
>  	return 0;
>  }
> @@ -212,19 +211,18 @@ static int setup_apcb00(struct kvm_vcpu *vcpu, unsigned long *apcb_s,
>   * Returns 0 and -EFAULT on error reading guest apcb
>   */
>  static int setup_apcb11(struct kvm_vcpu *vcpu, unsigned long *apcb_s,
> -			unsigned long crycb_gpa,
> -			unsigned long *apcb_h)
> +			gpa_t crycb_gpa, unsigned long *apcb_h)
>  {
> -	unsigned long apcb_gpa;
> +	/* sizeof() would include reserved fields which we do not need/want */
> +	unsigned long len = APCB_NUM_MASKS * APCB1_MASK_SIZE * sizeof(u64);
> +	gpa_t apcb_gpa;
>  
>  	apcb_gpa = crycb_gpa + offsetof(struct kvm_s390_crypto_cb, apcb1);
>  
> -	if (read_guest_real(vcpu, apcb_gpa, apcb_s,
> -			    sizeof(struct kvm_s390_apcb1)))
> +	if (read_guest_abs(vcpu, apcb_gpa, apcb_s, len))
>  		return -EFAULT;
>  
> -	bitmap_and(apcb_s, apcb_s, apcb_h,
> -		   BITS_PER_BYTE * sizeof(struct kvm_s390_apcb1));
> +	bitmap_and(apcb_s, apcb_s, apcb_h, BITS_PER_BYTE * len);
>  
>  	return 0;
>  }
> @@ -244,8 +242,7 @@ static int setup_apcb11(struct kvm_vcpu *vcpu, unsigned long *apcb_s,
>   * Return 0 or an error number if the guest and host crycb are incompatible.
>   */
>  static int setup_apcb(struct kvm_vcpu *vcpu, struct kvm_s390_crypto_cb *crycb_s,
> -	       const u32 crycb_gpa,
> -	       struct kvm_s390_crypto_cb *crycb_h,
> +	       const gpa_t crycb_gpa, struct kvm_s390_crypto_cb *crycb_h,
>  	       int fmt_o, int fmt_h)
>  {
>  	switch (fmt_o) {
> @@ -315,7 +312,8 @@ static int shadow_crycb(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
>  	struct kvm_s390_sie_block *scb_s = &vsie_page->scb_s;
>  	struct kvm_s390_sie_block *scb_o = vsie_page->scb_o;
>  	const uint32_t crycbd_o = READ_ONCE(scb_o->crycbd);
> -	const u32 crycb_addr = crycbd_o & 0x7ffffff8U;
> +	/* CRYCB origin is a 31 bit absolute address with a bit of masking */
> +	const gpa_t crycb_addr = crycbd_o & 0x7ffffff8U;
>  	unsigned long *b1, *b2;
>  	u8 ecb3_flags;
>  	u32 ecd_flags;
> @@ -359,8 +357,9 @@ static int shadow_crycb(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
>  		goto end;
>  
>  	/* copy only the wrapping keys */
> -	if (read_guest_real(vcpu, crycb_addr + 72,
> -			    vsie_page->crycb.dea_wrapping_key_mask, 56))
> +	if (read_guest_abs(vcpu,
> +			   crycb_addr + offsetof(struct kvm_s390_crypto_cb, dea_wrapping_key_mask),
> +			   vsie_page->crycb.dea_wrapping_key_mask, APCB_KEY_MASK_SIZE))
>  		return set_validity_icpt(scb_s, 0x0035U);
>  
>  	scb_s->ecb3 |= ecb3_flags;
> @@ -368,10 +367,9 @@ static int shadow_crycb(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
>  
>  	/* xor both blocks in one run */
>  	b1 = (unsigned long *) vsie_page->crycb.dea_wrapping_key_mask;
> -	b2 = (unsigned long *)
> -			    vcpu->kvm->arch.crypto.crycb->dea_wrapping_key_mask;
> +	b2 = (unsigned long *) vcpu->kvm->arch.crypto.crycb->dea_wrapping_key_mask;
>  	/* as 56%8 == 0, bitmap_xor won't overwrite any data */
> -	bitmap_xor(b1, b1, b2, BITS_PER_BYTE * 56);
> +	bitmap_xor(b1, b1, b2, BITS_PER_BYTE * APCB_KEY_MASK_SIZE);
>  end:
>  	switch (ret) {
>  	case -EINVAL:


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC 05/10] KVM: s390: vsie: Cleanup and fixup of crycb handling
  2026-03-16 16:23 ` [RFC 05/10] KVM: s390: vsie: Cleanup and fixup of crycb handling Janosch Frank
  2026-03-18 14:13   ` Christoph Schlameuss
  2026-03-18 16:48   ` Claudio Imbrenda
@ 2026-03-20 12:01   ` Anthony Krowiak
  2026-03-23 15:54     ` Janosch Frank
  2 siblings, 1 reply; 39+ messages in thread
From: Anthony Krowiak @ 2026-03-20 12:01 UTC (permalink / raw)
  To: Janosch Frank, kvm; +Cc: linux-s390, imbrenda, borntraeger



On 3/16/26 12:23 PM, Janosch Frank wrote:
> The crycbd denotes an absolute address and as such we need to use
> gpa_t and read_guest_abs() instead of read_guest_real().
>
> We don't want to copy the reserved fields into the host, so let's
> define size constants that only include the masks and ignore the
> reserved fields.
>
> While we're at it, replace magic constants with compiler-backed
> constants.
>
> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
> ---
>   arch/s390/include/asm/kvm_host.h |  6 ++++
>   arch/s390/kvm/vsie.c             | 50 +++++++++++++++-----------------
>   2 files changed, 30 insertions(+), 26 deletions(-)
>
> diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
> index 64a50f0862aa..52827db2fa97 100644
> --- a/arch/s390/include/asm/kvm_host.h
> +++ b/arch/s390/include/asm/kvm_host.h
> @@ -516,6 +516,8 @@ struct kvm_s390_crypto {
>   	__u8 apie;
>   };
>   
> +#define APCB_NUM_MASKS 3
> +
>   #define APCB0_MASK_SIZE 1
>   struct kvm_s390_apcb0 {
>   	__u64 apm[APCB0_MASK_SIZE];		/* 0x0000 */
> @@ -540,6 +542,10 @@ struct kvm_s390_crypto_cb {
>   	struct kvm_s390_apcb1 apcb1;		/* 0x0080 */
>   };
>   
> +#define APCB_KEY_MASK_SIZE \

Not critical, but should this maybe be APCB_KEY_MASKS_SIZE
or APCB_WRAPPING_KEY_MASKS_SIZE since
it encompasses both key masks? That notwithstanding:

Reviewed-by: Anthony Krowiak <akrowiak@linux.ibm.com>

> +	(sizeof_field(struct kvm_s390_crypto_cb, dea_wrapping_key_mask) + \
> +	 sizeof_field(struct kvm_s390_crypto_cb, aes_wrapping_key_mask))
> +
>   struct kvm_s390_gisa {
>   	union {
>   		struct { /* common to all formats */
> diff --git a/arch/s390/kvm/vsie.c b/arch/s390/kvm/vsie.c
> index c0d36afd4023..13480d65c59d 100644
> --- a/arch/s390/kvm/vsie.c
> +++ b/arch/s390/kvm/vsie.c
> @@ -155,17 +155,17 @@ static int prepare_cpuflags(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
>   	atomic_set(&scb_s->cpuflags, newflags);
>   	return 0;
>   }
> +
>   /* Copy to APCB FORMAT1 from APCB FORMAT0 */
>   static int setup_apcb10(struct kvm_vcpu *vcpu, struct kvm_s390_apcb1 *apcb_s,
> -			unsigned long crycb_gpa, struct kvm_s390_apcb1 *apcb_h)
> +			gpa_t crycb_gpa, struct kvm_s390_apcb1 *apcb_h)
>   {
>   	struct kvm_s390_apcb0 tmp;
> -	unsigned long apcb_gpa;
> +	gpa_t apcb_gpa;
>   
>   	apcb_gpa = crycb_gpa + offsetof(struct kvm_s390_crypto_cb, apcb0);
>   
> -	if (read_guest_real(vcpu, apcb_gpa, &tmp,
> -			    sizeof(struct kvm_s390_apcb0)))
> +	if (read_guest_abs(vcpu, apcb_gpa, &tmp, sizeof(tmp)))
>   		return -EFAULT;
>   
>   	apcb_s->apm[0] = apcb_h->apm[0] & tmp.apm[0];
> @@ -173,7 +173,6 @@ static int setup_apcb10(struct kvm_vcpu *vcpu, struct kvm_s390_apcb1 *apcb_s,
>   	apcb_s->adm[0] = apcb_h->adm[0] & tmp.adm[0] & 0xffff000000000000UL;
>   
>   	return 0;
> -
>   }
>   
>   /**
> @@ -186,18 +185,18 @@ static int setup_apcb10(struct kvm_vcpu *vcpu, struct kvm_s390_apcb1 *apcb_s,
>    * Returns 0 and -EFAULT on error reading guest apcb
>    */
>   static int setup_apcb00(struct kvm_vcpu *vcpu, unsigned long *apcb_s,
> -			unsigned long crycb_gpa, unsigned long *apcb_h)
> +			gpa_t crycb_gpa, unsigned long *apcb_h)
>   {
> -	unsigned long apcb_gpa;
> +	/* sizeof() would include reserved fields which we do not need/want */
> +	unsigned long len = APCB_NUM_MASKS * APCB0_MASK_SIZE * sizeof(u64);
> +	gpa_t apcb_gpa;
>   
>   	apcb_gpa = crycb_gpa + offsetof(struct kvm_s390_crypto_cb, apcb0);
>   
> -	if (read_guest_real(vcpu, apcb_gpa, apcb_s,
> -			    sizeof(struct kvm_s390_apcb0)))
> +	if (read_guest_abs(vcpu, apcb_gpa, apcb_s, len))
>   		return -EFAULT;
>   
> -	bitmap_and(apcb_s, apcb_s, apcb_h,
> -		   BITS_PER_BYTE * sizeof(struct kvm_s390_apcb0));
> +	bitmap_and(apcb_s, apcb_s, apcb_h, BITS_PER_BYTE * len);
>   
>   	return 0;
>   }
> @@ -212,19 +211,18 @@ static int setup_apcb00(struct kvm_vcpu *vcpu, unsigned long *apcb_s,
>    * Returns 0 and -EFAULT on error reading guest apcb
>    */
>   static int setup_apcb11(struct kvm_vcpu *vcpu, unsigned long *apcb_s,
> -			unsigned long crycb_gpa,
> -			unsigned long *apcb_h)
> +			gpa_t crycb_gpa, unsigned long *apcb_h)
>   {
> -	unsigned long apcb_gpa;
> +	/* sizeof() would include reserved fields which we do not need/want */
> +	unsigned long len = APCB_NUM_MASKS * APCB1_MASK_SIZE * sizeof(u64);
> +	gpa_t apcb_gpa;
>   
>   	apcb_gpa = crycb_gpa + offsetof(struct kvm_s390_crypto_cb, apcb1);
>   
> -	if (read_guest_real(vcpu, apcb_gpa, apcb_s,
> -			    sizeof(struct kvm_s390_apcb1)))
> +	if (read_guest_abs(vcpu, apcb_gpa, apcb_s, len))
>   		return -EFAULT;
>   
> -	bitmap_and(apcb_s, apcb_s, apcb_h,
> -		   BITS_PER_BYTE * sizeof(struct kvm_s390_apcb1));
> +	bitmap_and(apcb_s, apcb_s, apcb_h, BITS_PER_BYTE * len);
>   
>   	return 0;
>   }
> @@ -244,8 +242,7 @@ static int setup_apcb11(struct kvm_vcpu *vcpu, unsigned long *apcb_s,
>    * Return 0 or an error number if the guest and host crycb are incompatible.
>    */
>   static int setup_apcb(struct kvm_vcpu *vcpu, struct kvm_s390_crypto_cb *crycb_s,
> -	       const u32 crycb_gpa,
> -	       struct kvm_s390_crypto_cb *crycb_h,
> +	       const gpa_t crycb_gpa, struct kvm_s390_crypto_cb *crycb_h,
>   	       int fmt_o, int fmt_h)
>   {
>   	switch (fmt_o) {
> @@ -315,7 +312,8 @@ static int shadow_crycb(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
>   	struct kvm_s390_sie_block *scb_s = &vsie_page->scb_s;
>   	struct kvm_s390_sie_block *scb_o = vsie_page->scb_o;
>   	const uint32_t crycbd_o = READ_ONCE(scb_o->crycbd);
> -	const u32 crycb_addr = crycbd_o & 0x7ffffff8U;
> +	/* CRYCB origin is a 31 bit absolute address with a bit of masking */
> +	const gpa_t crycb_addr = crycbd_o & 0x7ffffff8U;
>   	unsigned long *b1, *b2;
>   	u8 ecb3_flags;
>   	u32 ecd_flags;
> @@ -359,8 +357,9 @@ static int shadow_crycb(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
>   		goto end;
>   
>   	/* copy only the wrapping keys */
> -	if (read_guest_real(vcpu, crycb_addr + 72,
> -			    vsie_page->crycb.dea_wrapping_key_mask, 56))
> +	if (read_guest_abs(vcpu,
> +			   crycb_addr + offsetof(struct kvm_s390_crypto_cb, dea_wrapping_key_mask),
> +			   vsie_page->crycb.dea_wrapping_key_mask, APCB_KEY_MASK_SIZE))
>   		return set_validity_icpt(scb_s, 0x0035U);
>   
>   	scb_s->ecb3 |= ecb3_flags;
> @@ -368,10 +367,9 @@ static int shadow_crycb(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
>   
>   	/* xor both blocks in one run */
>   	b1 = (unsigned long *) vsie_page->crycb.dea_wrapping_key_mask;
> -	b2 = (unsigned long *)
> -			    vcpu->kvm->arch.crypto.crycb->dea_wrapping_key_mask;
> +	b2 = (unsigned long *) vcpu->kvm->arch.crypto.crycb->dea_wrapping_key_mask;
>   	/* as 56%8 == 0, bitmap_xor won't overwrite any data */
> -	bitmap_xor(b1, b1, b2, BITS_PER_BYTE * 56);
> +	bitmap_xor(b1, b1, b2, BITS_PER_BYTE * APCB_KEY_MASK_SIZE);
>   end:
>   	switch (ret) {
>   	case -EINVAL:


^ permalink raw reply	[flat|nested] 39+ messages in thread
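[Editorial note: the reserved-field handling discussed above boils down to computing a byte length that covers only the contiguous mask words, then running the bitmap operation over exactly that many bits. A minimal userspace sketch of the idea follows; struct apcb0_like, mask_len() and and_masks() are hypothetical stand-ins for illustration, not the kernel's actual types or helpers.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical layout mirroring the apcb0 shape: mask words followed
 * by reserved padding that must not be copied or touched. */
struct apcb0_like {
	uint64_t apm[1];      /* adapter mask */
	uint64_t aqm[1];      /* queue mask */
	uint64_t adm[1];      /* domain mask */
	uint64_t reserved[1]; /* must be ignored */
};

#define NUM_MASKS 3

/* Byte length covering only the three masks, not the reserved tail --
 * the same idea as APCB_NUM_MASKS * APCB0_MASK_SIZE * sizeof(u64)
 * in the patch. */
static size_t mask_len(void)
{
	return NUM_MASKS * sizeof(uint64_t);
}

/* AND the shadow masks against the host masks word by word: an
 * open-coded equivalent of bitmap_and() over mask_len() bytes. */
static void and_masks(uint64_t *shadow, const uint64_t *host)
{
	for (size_t i = 0; i < mask_len() / sizeof(uint64_t); i++)
		shadow[i] &= host[i];
}

/* Small driver so the masking can be exercised. */
static uint64_t shadow_first_word(void)
{
	uint64_t shadow[NUM_MASKS] = { 0xff, 0xf0, 0x0f };
	const uint64_t host[NUM_MASKS] = { 0x0f, 0xff, 0xf0 };

	and_masks(shadow, host);
	return shadow[0];
}
```

Because the reserved field sits directly after the masks, offsetof() of the reserved member equals the mask length, which is the invariant the sizeof_field-based constants in the patch rely on.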

* Re: [RFC 07/10] KVM: s390: Use gpa_t and gva_t in gaccess files
  2026-03-16 16:23 ` [RFC 07/10] KVM: s390: Use gpa_t and gva_t in gaccess files Janosch Frank
  2026-03-18 15:36   ` Claudio Imbrenda
@ 2026-03-23  9:10   ` Christoph Schlameuss
  1 sibling, 0 replies; 39+ messages in thread
From: Christoph Schlameuss @ 2026-03-23  9:10 UTC (permalink / raw)
  To: Janosch Frank, kvm; +Cc: linux-s390, imbrenda, borntraeger, akrowiak

On Mon Mar 16, 2026 at 5:23 PM CET, Janosch Frank wrote:
> A lot of addresses are being passed around as u64 or unsigned long
> instead of gpa_t and gva_t. Some of the variables are already called
> gva or gpa anyway.
>
> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>

Reviewed-by: Christoph Schlameuss <schlameuss@linux.ibm.com>

> ---
>  arch/s390/kvm/gaccess.c | 20 ++++++++++----------
>  arch/s390/kvm/gaccess.h |  3 +--
>  2 files changed, 11 insertions(+), 12 deletions(-)
>

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC 06/10] KVM: s390: Rework lowcore access functions
  2026-03-16 16:23 ` [RFC 06/10] KVM: s390: Rework lowcore access functions Janosch Frank
  2026-03-18 14:25   ` Claudio Imbrenda
@ 2026-03-23  9:11   ` Christoph Schlameuss
  1 sibling, 0 replies; 39+ messages in thread
From: Christoph Schlameuss @ 2026-03-23  9:11 UTC (permalink / raw)
  To: Janosch Frank, kvm; +Cc: linux-s390, imbrenda, borntraeger, akrowiak

On Mon Mar 16, 2026 at 5:23 PM CET, Janosch Frank wrote:
> These functions effectively always use offset constants and no
> addresses at all. Therefore make it clear that we're accessing offsets
> and sprinkle in compile and runtime warnings for more safety.
>
> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>

Acked-by: Christoph Schlameuss <schlameuss@linux.ibm.com>

> ---
>  arch/s390/kvm/gaccess.h | 32 ++++++++++++++++++++++----------
>  1 file changed, 22 insertions(+), 10 deletions(-)
>

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC 08/10] KVM: s390: Use gpa_t in priv.c
  2026-03-16 16:23 ` [RFC 08/10] KVM: s390: Use gpa_t in priv.c Janosch Frank
  2026-03-18 16:02   ` Claudio Imbrenda
@ 2026-03-23  9:28   ` Christoph Schlameuss
  1 sibling, 0 replies; 39+ messages in thread
From: Christoph Schlameuss @ 2026-03-23  9:28 UTC (permalink / raw)
  To: Janosch Frank, kvm; +Cc: linux-s390, imbrenda, borntraeger, akrowiak

On Mon Mar 16, 2026 at 5:23 PM CET, Janosch Frank wrote:
> More unsigned long to gpa_t conversions.
>
> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>

Reviewed-by: Christoph Schlameuss <schlameuss@linux.ibm.com>

> ---
>  arch/s390/kvm/gaccess.h |  8 ++++++++
>  arch/s390/kvm/priv.c    | 27 ++++++++++++---------------
>  2 files changed, 20 insertions(+), 15 deletions(-)
>
> diff --git a/arch/s390/kvm/gaccess.h b/arch/s390/kvm/gaccess.h
> index f23dc0729649..970d9020dc14 100644
> --- a/arch/s390/kvm/gaccess.h
> +++ b/arch/s390/kvm/gaccess.h
> @@ -89,6 +89,14 @@ static inline unsigned long kvm_s390_logical_to_effective(struct kvm_vcpu *vcpu,
>  	return _kvm_s390_logical_to_effective(&vcpu->arch.sie_block->gpsw, ga);
>  }
>  
> +static inline gpa_t kvm_s390_real_to_abs_effective(struct kvm_vcpu *vcpu,
> +						   unsigned long gra)

nit: This would also nicely fit into a single line.

> +{
> +	gra = kvm_s390_logical_to_effective(vcpu, gra);
> +
> +	return kvm_s390_real_to_abs(vcpu, gra);
> +}
> +
>  static inline gpa_t lc_addr_from_offset(struct kvm_vcpu *vcpu, unsigned int off)
>  {
>  	gpa_t addr = kvm_s390_get_prefix(vcpu);

...

^ permalink raw reply	[flat|nested] 39+ messages in thread
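[Editorial note: the kvm_s390_real_to_abs() conversion that the helper above builds on swaps the 8 KiB lowcore at real address 0 with the frame at the per-CPU prefix. A hedged sketch of that translation rule follows; real_to_abs() and PREFIX_AREA_SIZE are illustrative names, and the kernel helper takes a vcpu rather than a raw prefix value.]

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t gpa_t;

#define PREFIX_AREA_SIZE 0x2000ULL /* two 4 KiB pages of lowcore */

/* Sketch of s390 real -> absolute translation: real addresses in the
 * first 8 KiB are relocated to the per-CPU prefix area, and addresses
 * inside the prefix area map back to absolute 0. Everything else is
 * unchanged. */
static gpa_t real_to_abs(gpa_t gra, gpa_t prefix)
{
	if (gra < PREFIX_AREA_SIZE)
		gra += prefix;
	else if (gra >= prefix && gra < prefix + PREFIX_AREA_SIZE)
		gra -= prefix;
	return gra;
}
```

Since the result is always a guest physical address, returning gpa_t here (rather than unsigned long) documents the contract, which is the point of patch 04 in this series.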

* Re: [RFC 09/10] KVM: s390: Use gpa_t in pv.c
  2026-03-16 16:23 ` [RFC 09/10] KVM: s390: Use gpa_t in pv.c Janosch Frank
  2026-03-18 15:46   ` Claudio Imbrenda
@ 2026-03-23  9:41   ` Christoph Schlameuss
  1 sibling, 0 replies; 39+ messages in thread
From: Christoph Schlameuss @ 2026-03-23  9:41 UTC (permalink / raw)
  To: Janosch Frank, kvm; +Cc: linux-s390, imbrenda, borntraeger, akrowiak

It would be nicer to also rename the variables here from the mix of "gaddr" and
"addr" to "gpa". Or at least consistently to "gaddr", as they are in most cases
then assigned to the ".gaddr" fields in the uv structs.

On Mon Mar 16, 2026 at 5:23 PM CET, Janosch Frank wrote:
> Lots of locations where we could've used gpa_t but used u64/unsigned
> long.
>
> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
> ---
>  arch/s390/kvm/kvm-s390.h |  8 ++++----
>  arch/s390/kvm/pv.c       | 12 ++++++------
>  2 files changed, 10 insertions(+), 10 deletions(-)
>
> diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
> index bf1d7798c1af..1ffaec723a30 100644
> --- a/arch/s390/kvm/kvm-s390.h
> +++ b/arch/s390/kvm/kvm-s390.h
> @@ -308,17 +308,17 @@ int kvm_s390_pv_deinit_vm(struct kvm *kvm, u16 *rc, u16 *rrc);
>  int kvm_s390_pv_init_vm(struct kvm *kvm, u16 *rc, u16 *rrc);
>  int kvm_s390_pv_set_sec_parms(struct kvm *kvm, void *hdr, u64 length, u16 *rc,
>  			      u16 *rrc);
> -int kvm_s390_pv_unpack(struct kvm *kvm, unsigned long addr, unsigned long size,
> +int kvm_s390_pv_unpack(struct kvm *kvm, gpa_t addr, unsigned long size,
>  		       unsigned long tweak, u16 *rc, u16 *rrc);
>  int kvm_s390_pv_set_cpu_state(struct kvm_vcpu *vcpu, u8 state);
>  int kvm_s390_pv_dump_cpu(struct kvm_vcpu *vcpu, void *buff, u16 *rc, u16 *rrc);
>  int kvm_s390_pv_dump_stor_state(struct kvm *kvm, void __user *buff_user,
> -				u64 *gaddr, u64 buff_user_len, u16 *rc, u16 *rrc);
> +				gpa_t *gaddr, u64 buff_user_len, u16 *rc, u16 *rrc);
>  int kvm_s390_pv_dump_complete(struct kvm *kvm, void __user *buff_user,
>  			      u16 *rc, u16 *rrc);
>  int kvm_s390_pv_destroy_page(struct kvm *kvm, unsigned long gaddr);
> -int kvm_s390_pv_convert_to_secure(struct kvm *kvm, unsigned long gaddr);
> -int kvm_s390_pv_make_secure(struct kvm *kvm, unsigned long gaddr, void *uvcb);
> +int kvm_s390_pv_convert_to_secure(struct kvm *kvm, gpa_t gaddr);
> +int kvm_s390_pv_make_secure(struct kvm *kvm, gpa_t gaddr, void *uvcb);
>  
>  static inline u64 kvm_s390_pv_get_handle(struct kvm *kvm)
>  {
> diff --git a/arch/s390/kvm/pv.c b/arch/s390/kvm/pv.c
> index c2dafd812a3b..a86469507309 100644
> --- a/arch/s390/kvm/pv.c
> +++ b/arch/s390/kvm/pv.c
> @@ -125,7 +125,7 @@ static void _kvm_s390_pv_make_secure(struct guest_fault *f)
>   * Context: needs to be called with kvm->srcu held.
>   * Return: 0 on success, < 0 in case of error.
>   */
> -int kvm_s390_pv_make_secure(struct kvm *kvm, unsigned long gaddr, void *uvcb)
> +int kvm_s390_pv_make_secure(struct kvm *kvm, gpa_t gaddr, void *uvcb)
>  {
>  	struct pv_make_secure priv = { .uvcb = uvcb };
>  	struct guest_fault f = {
> @@ -157,7 +157,7 @@ int kvm_s390_pv_make_secure(struct kvm *kvm, unsigned long gaddr, void *uvcb)
>  	return rc;
>  }
>  
> -int kvm_s390_pv_convert_to_secure(struct kvm *kvm, unsigned long gaddr)
> +int kvm_s390_pv_convert_to_secure(struct kvm *kvm, gpa_t gaddr)
>  {
>  	struct uv_cb_cts uvcb = {
>  		.header.cmd = UVC_CMD_CONV_TO_SEC_STOR,
> @@ -765,7 +765,7 @@ int kvm_s390_pv_set_sec_parms(struct kvm *kvm, void *hdr, u64 length, u16 *rc,
>  	return cc ? -EINVAL : 0;
>  }
>  
> -static int unpack_one(struct kvm *kvm, unsigned long addr, u64 tweak,
> +static int unpack_one(struct kvm *kvm, gpa_t addr, u64 tweak,
>  		      u64 offset, u16 *rc, u16 *rrc)
>  {
>  	struct uv_cb_unp uvcb = {
> @@ -793,7 +793,7 @@ static int unpack_one(struct kvm *kvm, unsigned long addr, u64 tweak,
>  	return ret;
>  }
>  
> -int kvm_s390_pv_unpack(struct kvm *kvm, unsigned long addr, unsigned long size,
> +int kvm_s390_pv_unpack(struct kvm *kvm, gpa_t addr, unsigned long size,
>  		       unsigned long tweak, u16 *rc, u16 *rrc)
>  {
>  	u64 offset = 0;
> @@ -802,7 +802,7 @@ int kvm_s390_pv_unpack(struct kvm *kvm, unsigned long addr, unsigned long size,
>  	if (addr & ~PAGE_MASK || !size || size & ~PAGE_MASK)
>  		return -EINVAL;
>  
> -	KVM_UV_EVENT(kvm, 3, "PROTVIRT VM UNPACK: start addr %lx size %lx",
> +	KVM_UV_EVENT(kvm, 3, "PROTVIRT VM UNPACK: start addr %llx size %lx",
>  		     addr, size);
>  
>  	guard(srcu)(&kvm->srcu);
> @@ -891,7 +891,7 @@ int kvm_s390_pv_dump_cpu(struct kvm_vcpu *vcpu, void *buff, u16 *rc, u16 *rrc)
>   *  -EFAULT if copying the result to buff_user failed
>   */
>  int kvm_s390_pv_dump_stor_state(struct kvm *kvm, void __user *buff_user,
> -				u64 *gaddr, u64 buff_user_len, u16 *rc, u16 *rrc)
> +				gpa_t *gaddr, u64 buff_user_len, u16 *rc, u16 *rrc)
>  {
>  	struct uv_cb_dump_stor_state uvcb = {
>  		.header.cmd = UVC_CMD_DUMP_CONF_STOR_STATE,


^ permalink raw reply	[flat|nested] 39+ messages in thread
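[Editorial note: for context on the kvm_s390_pv_unpack() signature being changed above — it validates page alignment, then walks the range one page at a time, advancing a tweak offset in lockstep with the guest address. A rough userspace model follows; unpack_range() and unpack_one_stub() are hypothetical, and the real code issues an Ultravisor call per page.]

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t gpa_t;

#define PAGE_SIZE 0x1000ULL
#define PAGE_MASK (~(PAGE_SIZE - 1))

/* Stand-in for unpack_one(): the kernel version fills a uv_cb_unp
 * control block and calls the Ultravisor for a single page. */
static int unpack_one_stub(gpa_t addr, uint64_t tweak, uint64_t offset)
{
	(void)addr; (void)tweak; (void)offset;
	return 0;
}

/* Sketch of the kvm_s390_pv_unpack() walk: reject unaligned or empty
 * input, then unpack page by page, keeping the tweak offset in step
 * with the guest address. */
static int unpack_range(gpa_t addr, uint64_t size, uint64_t tweak)
{
	uint64_t offset = 0;

	if ((addr & ~PAGE_MASK) || !size || (size & ~PAGE_MASK))
		return -1; /* -EINVAL in the kernel */

	while (offset < size) {
		if (unpack_one_stub(addr + offset, tweak, offset))
			return -2;
		offset += PAGE_SIZE;
	}
	return 0;
}
```

The gpa_t parameter in the patch makes the alignment check self-documenting: it is a guest physical range being validated, not an arbitrary unsigned long.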

* Re: [RFC 10/10] KVM: s390: Cleanup kvm_s390_store_status_unloaded
  2026-03-16 16:23 ` [RFC 10/10] KVM: s390: Cleanup kvm_s390_store_status_unloaded Janosch Frank
  2026-03-18 15:51   ` Claudio Imbrenda
@ 2026-03-23  9:47   ` Christoph Schlameuss
  1 sibling, 0 replies; 39+ messages in thread
From: Christoph Schlameuss @ 2026-03-23  9:47 UTC (permalink / raw)
  To: Janosch Frank, kvm; +Cc: linux-s390, imbrenda, borntraeger, akrowiak

On Mon Mar 16, 2026 at 5:23 PM CET, Janosch Frank wrote:
> Fixup comments, use gpa_t and replace magic constants.
>
> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>

Reviewed-by: Christoph Schlameuss <schlameuss@linux.ibm.com>

> ---
>  arch/s390/kvm/kvm-s390.c | 24 ++++++++++++++++--------
>  arch/s390/kvm/kvm-s390.h |  4 ++--
>  2 files changed, 18 insertions(+), 10 deletions(-)

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH] KVM: s390: Fix lpsw/e breaking event handling
  2026-03-17 13:03     ` [PATCH] KVM: s390: Fix lpsw/e breaking event handling Janosch Frank
  2026-03-17 13:30       ` Christian Borntraeger
@ 2026-03-23 15:08       ` Hendrik Brueckner
  1 sibling, 0 replies; 39+ messages in thread
From: Hendrik Brueckner @ 2026-03-23 15:08 UTC (permalink / raw)
  To: Janosch Frank; +Cc: kvm, linux-s390, imbrenda, borntraeger, akrowiak

On Tue, Mar 17, 2026 at 01:03:54PM +0000, Janosch Frank wrote:
> LPSW and LPSWE need to set the gbea on completion but currently don't.
> Time to fix this up.
> 
> LPSWEY was designed to not set the BEAR.
> 
> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
> Fixes: 48a3e950f4cee ("KVM: s390: Add support for machine checks.")
> Reported-by: Christian Borntraeger <borntraeger@linux.ibm.com>

Acked-by: Hendrik Brueckner <brueckner@linux.ibm.com> 

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC 05/10] KVM: s390: vsie: Cleanup and fixup of crycb handling
  2026-03-20 12:01   ` Anthony Krowiak
@ 2026-03-23 15:54     ` Janosch Frank
  0 siblings, 0 replies; 39+ messages in thread
From: Janosch Frank @ 2026-03-23 15:54 UTC (permalink / raw)
  To: Anthony Krowiak, kvm; +Cc: linux-s390, imbrenda, borntraeger

On 3/20/26 13:01, Anthony Krowiak wrote:
> 
> 
> On 3/16/26 12:23 PM, Janosch Frank wrote:
>> The crycbd denotes an absolute address and as such we need to use
>> gpa_t and read_guest_abs() instead of read_guest_real().
>>
>> We don't want to copy the reserved fields into the host, so let's
>> define size constants that only include the masks and ignore the
>> reserved fields.
>>
>> While we're at it, replace magic constants with compiler-backed
>> constants.
>>
>> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
>> ---
>>    arch/s390/include/asm/kvm_host.h |  6 ++++
>>    arch/s390/kvm/vsie.c             | 50 +++++++++++++++-----------------
>>    2 files changed, 30 insertions(+), 26 deletions(-)
>>
>> diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
>> index 64a50f0862aa..52827db2fa97 100644
>> --- a/arch/s390/include/asm/kvm_host.h
>> +++ b/arch/s390/include/asm/kvm_host.h
>> @@ -516,6 +516,8 @@ struct kvm_s390_crypto {
>>    	__u8 apie;
>>    };
>>    
>> +#define APCB_NUM_MASKS 3
>> +
>>    #define APCB0_MASK_SIZE 1
>>    struct kvm_s390_apcb0 {
>>    	__u64 apm[APCB0_MASK_SIZE];		/* 0x0000 */
>> @@ -540,6 +542,10 @@ struct kvm_s390_crypto_cb {
>>    	struct kvm_s390_apcb1 apcb1;		/* 0x0080 */
>>    };
>>    
>> +#define APCB_KEY_MASK_SIZE \
> 
> Not critical, but should this maybe be APCB_KEY_MASKS_SIZE
> or APCB_WRAPPING_KEY_MASKS_SIZE since

Sure, both would be fine with me.


> it encompasses both key masks? That notwithstanding:
> 
> Reviewed-by: Anthony Krowiak <akrowiak@linux.ibm.com>
> 

^ permalink raw reply	[flat|nested] 39+ messages in thread

end of thread, other threads:[~2026-03-23 15:54 UTC | newest]

Thread overview: 39+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-03-16 16:23 [RFC 00/10] KVM: s390: spring cleanup Janosch Frank
2026-03-16 16:23 ` [RFC 01/10] KVM: s390: diag258: Pass absolute address to kvm_is_gpa_in_memslot() Janosch Frank
2026-03-16 18:34   ` Christian Borntraeger
2026-03-17 10:01   ` Christoph Schlameuss
2026-03-18 16:04   ` Claudio Imbrenda
2026-03-16 16:23 ` [RFC 02/10] KVM: s390: Consolidate lpswe variants Janosch Frank
2026-03-16 18:47   ` Christian Borntraeger
2026-03-17  8:13     ` Janosch Frank
2026-03-17 13:03     ` [PATCH] KVM: s390: Fix lpsw/e breaking event handling Janosch Frank
2026-03-17 13:30       ` Christian Borntraeger
2026-03-23 15:08       ` Hendrik Brueckner
2026-03-16 16:23 ` [RFC 03/10] KVM: s390: Convert shifts to gpa_to_gfn() Janosch Frank
2026-03-16 18:49   ` Christian Borntraeger
2026-03-17 10:38   ` Christoph Schlameuss
2026-03-18 14:26   ` Claudio Imbrenda
2026-03-16 16:23 ` [RFC 04/10] KVM: s390: kvm_s390_real_to_abs() should return gpa_t Janosch Frank
2026-03-16 18:53   ` Christian Borntraeger
2026-03-18  7:10   ` Christoph Schlameuss
2026-03-18 14:29   ` Claudio Imbrenda
2026-03-16 16:23 ` [RFC 05/10] KVM: s390: vsie: Cleanup and fixup of crycb handling Janosch Frank
2026-03-18 14:13   ` Christoph Schlameuss
2026-03-18 16:48   ` Claudio Imbrenda
2026-03-20 12:01   ` Anthony Krowiak
2026-03-23 15:54     ` Janosch Frank
2026-03-16 16:23 ` [RFC 06/10] KVM: s390: Rework lowcore access functions Janosch Frank
2026-03-18 14:25   ` Claudio Imbrenda
2026-03-23  9:11   ` Christoph Schlameuss
2026-03-16 16:23 ` [RFC 07/10] KVM: s390: Use gpa_t and gva_t in gaccess files Janosch Frank
2026-03-18 15:36   ` Claudio Imbrenda
2026-03-23  9:10   ` Christoph Schlameuss
2026-03-16 16:23 ` [RFC 08/10] KVM: s390: Use gpa_t in priv.c Janosch Frank
2026-03-18 16:02   ` Claudio Imbrenda
2026-03-23  9:28   ` Christoph Schlameuss
2026-03-16 16:23 ` [RFC 09/10] KVM: s390: Use gpa_t in pv.c Janosch Frank
2026-03-18 15:46   ` Claudio Imbrenda
2026-03-23  9:41   ` Christoph Schlameuss
2026-03-16 16:23 ` [RFC 10/10] KVM: s390: Cleanup kvm_s390_store_status_unloaded Janosch Frank
2026-03-18 15:51   ` Claudio Imbrenda
2026-03-23  9:47   ` Christoph Schlameuss

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox