public inbox for kvm@vger.kernel.org
* [PATCH v1 00/13] KVM: s390: Stop using page->index and other things
@ 2025-01-08 18:14 Claudio Imbrenda
  2025-01-08 18:14 ` [PATCH v1 01/13] KVM: s390: wrapper for KVM_BUG Claudio Imbrenda
                   ` (12 more replies)
  0 siblings, 13 replies; 39+ messages in thread
From: Claudio Imbrenda @ 2025-01-08 18:14 UTC (permalink / raw)
  To: kvm
  Cc: linux-s390, frankja, borntraeger, schlameuss, david, willy, hca,
	svens, agordeev, gor, nrb, nsg

This patch series starts moving some of the gmap logic into KVM itself,
working towards the final goal of completely removing gmap from the
non-KVM memory management code. Aside from moving some code from
mm/gmap.c into KVM, this series also starts using __kvm_faultin_pfn()
to fault in pages as needed.

But more importantly, this series removes almost all uses of
page->index (and all uses of page->lru) from the s390 KVM code.
The only remaining use is for the vsie pages, but that has already been
taken care of by David in another series.

Unfortunately the mix of hastiness and holidays means that this series
is a little bit all over the place, and not as complete as I would have
liked.

I'm posting it now to try to speed up the removal of page->index;
hopefully I will be able to post another short series before the
upcoming merge window closes.

Claudio Imbrenda (13):
  KVM: s390: wrapper for KVM_BUG
  KVM: s390: fake memslots for ucontrol VMs
  KVM: s390: use __kvm_faultin_pfn()
  KVM: s390: move pv gmap functions into kvm
  KVM: s390: get rid of gmap_fault()
  KVM: s390: get rid of gmap_translate()
  KVM: s390: move some gmap shadowing functions away from mm/gmap.c
  KVM: s390: stop using page->index for non-shadow gmaps
  KVM: s390: stop using lists to keep track of used dat tables
  KVM: s390: move gmap_shadow_pgt_lookup() into kvm
  KVM: s390: remove useless page->index usage
  KVM: s390: move PGSTE softbits
  KVM: s390: remove the last user of page->index

 arch/s390/include/asm/gmap.h    |  16 +-
 arch/s390/include/asm/pgtable.h |  21 +-
 arch/s390/include/asm/uv.h      |   7 +-
 arch/s390/kernel/uv.c           | 293 +++------------
 arch/s390/kvm/Makefile          |   2 +-
 arch/s390/kvm/gaccess.c         |  42 +++
 arch/s390/kvm/gmap.c            | 183 ++++++++++
 arch/s390/kvm/gmap.h            |  19 +
 arch/s390/kvm/intercept.c       |   5 +-
 arch/s390/kvm/interrupt.c       |  19 +-
 arch/s390/kvm/kvm-s390.c        | 229 ++++++++++--
 arch/s390/kvm/kvm-s390.h        |  18 +
 arch/s390/kvm/pv.c              |   1 +
 arch/s390/kvm/vsie.c            | 137 +++++++
 arch/s390/mm/gmap.c             | 630 ++++++--------------------------
 15 files changed, 786 insertions(+), 836 deletions(-)
 create mode 100644 arch/s390/kvm/gmap.c
 create mode 100644 arch/s390/kvm/gmap.h

-- 
2.47.1


^ permalink raw reply	[flat|nested] 39+ messages in thread

* [PATCH v1 01/13] KVM: s390: wrapper for KVM_BUG
  2025-01-08 18:14 [PATCH v1 00/13] KVM: s390: Stop using page->index and other things Claudio Imbrenda
@ 2025-01-08 18:14 ` Claudio Imbrenda
  2025-01-10  9:09   ` Christian Borntraeger
  2025-01-10 13:13   ` Christoph Schlameuss
  2025-01-08 18:14 ` [PATCH v1 02/13] KVM: s390: fake memslots for ucontrol VMs Claudio Imbrenda
                   ` (11 subsequent siblings)
  12 siblings, 2 replies; 39+ messages in thread
From: Claudio Imbrenda @ 2025-01-08 18:14 UTC (permalink / raw)
  To: kvm
  Cc: linux-s390, frankja, borntraeger, schlameuss, david, willy, hca,
	svens, agordeev, gor, nrb, nsg

Wrap the call to KVM_BUG; this reduces code duplication and improves
readability.

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
---
 arch/s390/kvm/kvm-s390.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index d8080c27d45b..ecbdd7d41230 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -4766,6 +4766,13 @@ static int vcpu_post_run_addressing_exception(struct kvm_vcpu *vcpu)
 	return kvm_s390_inject_prog_irq(vcpu, &pgm_info);
 }
 
+static void kvm_s390_assert_primary_as(struct kvm_vcpu *vcpu)
+{
+	KVM_BUG(current->thread.gmap_teid.as != PSW_BITS_AS_PRIMARY, vcpu->kvm,
+		"Unexpected program interrupt 0x%x, TEID 0x%016lx",
+		current->thread.gmap_int_code, current->thread.gmap_teid.val);
+}
+
 static int vcpu_post_run_handle_fault(struct kvm_vcpu *vcpu)
 {
 	unsigned int flags = 0;
@@ -4781,9 +4788,7 @@ static int vcpu_post_run_handle_fault(struct kvm_vcpu *vcpu)
 		vcpu->stat.exit_null++;
 		break;
 	case PGM_NON_SECURE_STORAGE_ACCESS:
-		KVM_BUG(current->thread.gmap_teid.as != PSW_BITS_AS_PRIMARY, vcpu->kvm,
-			"Unexpected program interrupt 0x%x, TEID 0x%016lx",
-			current->thread.gmap_int_code, current->thread.gmap_teid.val);
+		kvm_s390_assert_primary_as(vcpu);
 		/*
 		 * This is normal operation; a page belonging to a protected
 		 * guest has not been imported yet. Try to import the page into
@@ -4794,9 +4799,7 @@ static int vcpu_post_run_handle_fault(struct kvm_vcpu *vcpu)
 		break;
 	case PGM_SECURE_STORAGE_ACCESS:
 	case PGM_SECURE_STORAGE_VIOLATION:
-		KVM_BUG(current->thread.gmap_teid.as != PSW_BITS_AS_PRIMARY, vcpu->kvm,
-			"Unexpected program interrupt 0x%x, TEID 0x%016lx",
-			current->thread.gmap_int_code, current->thread.gmap_teid.val);
+		kvm_s390_assert_primary_as(vcpu);
 		/*
 		 * This can happen after a reboot with asynchronous teardown;
 		 * the new guest (normal or protected) will run on top of the
@@ -4825,9 +4828,7 @@ static int vcpu_post_run_handle_fault(struct kvm_vcpu *vcpu)
 	case PGM_REGION_FIRST_TRANS:
 	case PGM_REGION_SECOND_TRANS:
 	case PGM_REGION_THIRD_TRANS:
-		KVM_BUG(current->thread.gmap_teid.as != PSW_BITS_AS_PRIMARY, vcpu->kvm,
-			"Unexpected program interrupt 0x%x, TEID 0x%016lx",
-			current->thread.gmap_int_code, current->thread.gmap_teid.val);
+		kvm_s390_assert_primary_as(vcpu);
 		if (vcpu->arch.gmap->pfault_enabled) {
 			rc = gmap_fault(vcpu->arch.gmap, gaddr, flags | FAULT_FLAG_RETRY_NOWAIT);
 			if (rc == -EFAULT)
-- 
2.47.1



* [PATCH v1 02/13] KVM: s390: fake memslots for ucontrol VMs
  2025-01-08 18:14 [PATCH v1 00/13] KVM: s390: Stop using page->index and other things Claudio Imbrenda
  2025-01-08 18:14 ` [PATCH v1 01/13] KVM: s390: wrapper for KVM_BUG Claudio Imbrenda
@ 2025-01-08 18:14 ` Claudio Imbrenda
  2025-01-10  9:31   ` Christian Borntraeger
  2025-01-10 15:40   ` Christoph Schlameuss
  2025-01-08 18:14 ` [PATCH v1 03/13] KVM: s390: use __kvm_faultin_pfn() Claudio Imbrenda
                   ` (10 subsequent siblings)
  12 siblings, 2 replies; 39+ messages in thread
From: Claudio Imbrenda @ 2025-01-08 18:14 UTC (permalink / raw)
  To: kvm
  Cc: linux-s390, frankja, borntraeger, schlameuss, david, willy, hca,
	svens, agordeev, gor, nrb, nsg

Create fake memslots for ucontrol VMs. The fake memslots identity-map
userspace addresses.

With this, memslots are always present, and ucontrol is no longer a
special case.

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
---
 arch/s390/kvm/kvm-s390.c | 42 ++++++++++++++++++++++++++++++++++++----
 1 file changed, 38 insertions(+), 4 deletions(-)

diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index ecbdd7d41230..797b8503c162 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -59,6 +59,7 @@
 #define LOCAL_IRQS 32
 #define VCPU_IRQS_MAX_BUF (sizeof(struct kvm_s390_irq) * \
 			   (KVM_MAX_VCPUS + LOCAL_IRQS))
+#define UCONTROL_SLOT_SIZE SZ_4T
 
 const struct _kvm_stats_desc kvm_vm_stats_desc[] = {
 	KVM_GENERIC_VM_STATS(),
@@ -3326,6 +3327,23 @@ void kvm_arch_free_vm(struct kvm *kvm)
 	__kvm_arch_free_vm(kvm);
 }
 
+static void kvm_s390_ucontrol_ensure_memslot(struct kvm *kvm, unsigned long addr)
+{
+	struct kvm_userspace_memory_region2 region = {
+		.slot = addr / UCONTROL_SLOT_SIZE,
+		.memory_size = UCONTROL_SLOT_SIZE,
+		.guest_phys_addr = ALIGN_DOWN(addr, UCONTROL_SLOT_SIZE),
+		.userspace_addr = ALIGN_DOWN(addr, UCONTROL_SLOT_SIZE),
+	};
+	struct kvm_memory_slot *slot;
+
+	mutex_lock(&kvm->slots_lock);
+	slot = gfn_to_memslot(kvm, addr);
+	if (!slot)
+		__kvm_set_memory_region(kvm, &region);
+	mutex_unlock(&kvm->slots_lock);
+}
+
 int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 {
 	gfp_t alloc_flags = GFP_KERNEL_ACCOUNT;
@@ -3430,6 +3448,9 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	if (type & KVM_VM_S390_UCONTROL) {
 		kvm->arch.gmap = NULL;
 		kvm->arch.mem_limit = KVM_S390_NO_MEM_LIMIT;
+		/* pre-initialize a bunch of memslots; the amount is arbitrary */
+		for (i = 0; i < 32; i++)
+			kvm_s390_ucontrol_ensure_memslot(kvm, i * UCONTROL_SLOT_SIZE);
 	} else {
 		if (sclp.hamax == U64_MAX)
 			kvm->arch.mem_limit = TASK_SIZE_MAX;
@@ -5704,6 +5725,7 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 #ifdef CONFIG_KVM_S390_UCONTROL
 	case KVM_S390_UCAS_MAP: {
 		struct kvm_s390_ucas_mapping ucasmap;
+		unsigned long a;
 
 		if (copy_from_user(&ucasmap, argp, sizeof(ucasmap))) {
 			r = -EFAULT;
@@ -5715,6 +5737,11 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 			break;
 		}
 
+		a = ALIGN_DOWN(ucasmap.user_addr, UCONTROL_SLOT_SIZE);
+		while (a < ucasmap.user_addr + ucasmap.length) {
+			kvm_s390_ucontrol_ensure_memslot(vcpu->kvm, a);
+			a += UCONTROL_SLOT_SIZE;
+		}
 		r = gmap_map_segment(vcpu->arch.gmap, ucasmap.user_addr,
 				     ucasmap.vcpu_addr, ucasmap.length);
 		break;
@@ -5852,10 +5879,18 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 				   struct kvm_memory_slot *new,
 				   enum kvm_mr_change change)
 {
-	gpa_t size;
+	gpa_t size = new->npages * PAGE_SIZE;
 
-	if (kvm_is_ucontrol(kvm))
-		return -EINVAL;
+	if (kvm_is_ucontrol(kvm)) {
+		if (change != KVM_MR_CREATE || new->flags)
+			return -EINVAL;
+		if (new->userspace_addr != new->base_gfn * PAGE_SIZE)
+			return -EINVAL;
+		if (!IS_ALIGNED(new->userspace_addr | size, UCONTROL_SLOT_SIZE))
+			return -EINVAL;
+		if (new->id != new->userspace_addr / UCONTROL_SLOT_SIZE)
+			return -EINVAL;
+	}
 
 	/* When we are protected, we should not change the memory slots */
 	if (kvm_s390_pv_get_handle(kvm))
@@ -5872,7 +5907,6 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 		if (new->userspace_addr & 0xffffful)
 			return -EINVAL;
 
-		size = new->npages * PAGE_SIZE;
 		if (size & 0xffffful)
 			return -EINVAL;
 
-- 
2.47.1



* [PATCH v1 03/13] KVM: s390: use __kvm_faultin_pfn()
  2025-01-08 18:14 [PATCH v1 00/13] KVM: s390: Stop using page->index and other things Claudio Imbrenda
  2025-01-08 18:14 ` [PATCH v1 01/13] KVM: s390: wrapper for KVM_BUG Claudio Imbrenda
  2025-01-08 18:14 ` [PATCH v1 02/13] KVM: s390: fake memslots for ucontrol VMs Claudio Imbrenda
@ 2025-01-08 18:14 ` Claudio Imbrenda
  2025-01-14 17:34   ` Christoph Schlameuss
  2025-01-08 18:14 ` [PATCH v1 04/13] KVM: s390: move pv gmap functions into kvm Claudio Imbrenda
                   ` (9 subsequent siblings)
  12 siblings, 1 reply; 39+ messages in thread
From: Claudio Imbrenda @ 2025-01-08 18:14 UTC (permalink / raw)
  To: kvm
  Cc: linux-s390, frankja, borntraeger, schlameuss, david, willy, hca,
	svens, agordeev, gor, nrb, nsg

Refactor the existing page fault handling code to use __kvm_faultin_pfn().

This is possible now that memslots are always present.

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
---
 arch/s390/kvm/kvm-s390.c | 92 +++++++++++++++++++++++++++++++---------
 arch/s390/mm/gmap.c      |  1 +
 2 files changed, 73 insertions(+), 20 deletions(-)

diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 797b8503c162..8e4e7e45238b 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -4794,11 +4794,66 @@ static void kvm_s390_assert_primary_as(struct kvm_vcpu *vcpu)
 		current->thread.gmap_int_code, current->thread.gmap_teid.val);
 }
 
+static int kvm_s390_handle_dat_fault(struct kvm_vcpu *vcpu, gfn_t gfn, gpa_t gaddr,
+				     unsigned int flags)
+{
+	struct kvm_memory_slot *slot;
+	unsigned int fault_flags;
+	bool writable, unlocked;
+	unsigned long vmaddr;
+	struct page *page;
+	kvm_pfn_t pfn;
+	int rc;
+
+	slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
+	if (!slot || slot->flags & KVM_MEMSLOT_INVALID)
+		return vcpu_post_run_addressing_exception(vcpu);
+
+	fault_flags = flags & FOLL_WRITE ? FAULT_FLAG_WRITE : 0;
+	if (vcpu->arch.gmap->pfault_enabled)
+		flags |= FOLL_NOWAIT;
+	vmaddr = __gfn_to_hva_memslot(slot, gfn);
+
+try_again:
+	pfn = __kvm_faultin_pfn(slot, gfn, flags, &writable, &page);
+
+	/* Access outside memory, inject addressing exception */
+	if (is_noslot_pfn(pfn))
+		return vcpu_post_run_addressing_exception(vcpu);
+	/* Signal pending: try again */
+	if (pfn == KVM_PFN_ERR_SIGPENDING)
+		return -EAGAIN;
+
+	/* Needs I/O, try to setup async pfault (only possible with FOLL_NOWAIT) */
+	if (pfn == KVM_PFN_ERR_NEEDS_IO) {
+		trace_kvm_s390_major_guest_pfault(vcpu);
+		if (kvm_arch_setup_async_pf(vcpu))
+			return 0;
+		vcpu->stat.pfault_sync++;
+		/* Could not setup async pfault, try again synchronously */
+		flags &= ~FOLL_NOWAIT;
+		goto try_again;
+	}
+	/* Any other error */
+	if (is_error_pfn(pfn))
+		return -EFAULT;
+
+	/* Success */
+	mmap_read_lock(vcpu->arch.gmap->mm);
+	/* Mark the userspace PTEs as young and/or dirty, to avoid page fault loops */
+	rc = fixup_user_fault(vcpu->arch.gmap->mm, vmaddr, fault_flags, &unlocked);
+	if (!rc)
+		rc = __gmap_link(vcpu->arch.gmap, gaddr, vmaddr);
+	kvm_release_faultin_page(vcpu->kvm, page, false, writable);
+	mmap_read_unlock(vcpu->arch.gmap->mm);
+	return rc;
+}
+
 static int vcpu_post_run_handle_fault(struct kvm_vcpu *vcpu)
 {
+	unsigned long gaddr, gaddr_tmp;
 	unsigned int flags = 0;
-	unsigned long gaddr;
-	int rc = 0;
+	gfn_t gfn;
 
 	gaddr = current->thread.gmap_teid.addr * PAGE_SIZE;
 	if (kvm_s390_cur_gmap_fault_is_write())
@@ -4850,29 +4905,26 @@ static int vcpu_post_run_handle_fault(struct kvm_vcpu *vcpu)
 	case PGM_REGION_SECOND_TRANS:
 	case PGM_REGION_THIRD_TRANS:
 		kvm_s390_assert_primary_as(vcpu);
-		if (vcpu->arch.gmap->pfault_enabled) {
-			rc = gmap_fault(vcpu->arch.gmap, gaddr, flags | FAULT_FLAG_RETRY_NOWAIT);
-			if (rc == -EFAULT)
-				return vcpu_post_run_addressing_exception(vcpu);
-			if (rc == -EAGAIN) {
-				trace_kvm_s390_major_guest_pfault(vcpu);
-				if (kvm_arch_setup_async_pf(vcpu))
-					return 0;
-				vcpu->stat.pfault_sync++;
-			} else {
-				return rc;
-			}
-		}
-		rc = gmap_fault(vcpu->arch.gmap, gaddr, flags);
-		if (rc == -EFAULT) {
-			if (kvm_is_ucontrol(vcpu->kvm)) {
+
+		gfn = gpa_to_gfn(gaddr);
+		if (kvm_is_ucontrol(vcpu->kvm)) {
+			/*
+			 * This translates the per-vCPU guest address into a
+			 * fake guest address, which can then be used with the
+			 * fake memslots that are identity mapping userspace.
+			 * This allows ucontrol VMs to use the normal fault
+			 * resolution path, like normal VMs.
+			 */
+			gaddr_tmp = gmap_translate(vcpu->arch.gmap, gaddr);
+			if (gaddr_tmp == -EFAULT) {
 				vcpu->run->exit_reason = KVM_EXIT_S390_UCONTROL;
 				vcpu->run->s390_ucontrol.trans_exc_code = gaddr;
 				vcpu->run->s390_ucontrol.pgm_code = 0x10;
 				return -EREMOTE;
 			}
-			return vcpu_post_run_addressing_exception(vcpu);
+			gfn = gpa_to_gfn(gaddr_tmp);
 		}
+		return kvm_s390_handle_dat_fault(vcpu, gfn, gaddr, flags);
 		break;
 	default:
 		KVM_BUG(1, vcpu->kvm, "Unexpected program interrupt 0x%x, TEID 0x%016lx",
@@ -4880,7 +4932,7 @@ static int vcpu_post_run_handle_fault(struct kvm_vcpu *vcpu)
 		send_sig(SIGSEGV, current, 0);
 		break;
 	}
-	return rc;
+	return 0;
 }
 
 static int vcpu_post_run(struct kvm_vcpu *vcpu, int exit_reason)
diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index 16b8a36c56de..3aacef77c174 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -605,6 +605,7 @@ int __gmap_link(struct gmap *gmap, unsigned long gaddr, unsigned long vmaddr)
 	radix_tree_preload_end();
 	return rc;
 }
+EXPORT_SYMBOL(__gmap_link);
 
 /**
  * fixup_user_fault_nowait - manually resolve a user page fault without waiting
-- 
2.47.1



* [PATCH v1 04/13] KVM: s390: move pv gmap functions into kvm
  2025-01-08 18:14 [PATCH v1 00/13] KVM: s390: Stop using page->index and other things Claudio Imbrenda
                   ` (2 preceding siblings ...)
  2025-01-08 18:14 ` [PATCH v1 03/13] KVM: s390: use __kvm_faultin_pfn() Claudio Imbrenda
@ 2025-01-08 18:14 ` Claudio Imbrenda
  2025-01-09 17:42   ` kernel test robot
  2025-01-15 12:48   ` Janosch Frank
  2025-01-08 18:14 ` [PATCH v1 05/13] KVM: s390: get rid of gmap_fault() Claudio Imbrenda
                   ` (8 subsequent siblings)
  12 siblings, 2 replies; 39+ messages in thread
From: Claudio Imbrenda @ 2025-01-08 18:14 UTC (permalink / raw)
  To: kvm
  Cc: linux-s390, frankja, borntraeger, schlameuss, david, willy, hca,
	svens, agordeev, gor, nrb, nsg

Move gmap-related functions from kernel/uv.c into KVM.

Create a new file to collect gmap-related functions.

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
---
 arch/s390/include/asm/uv.h |   7 +-
 arch/s390/kernel/uv.c      | 293 ++++++-------------------------------
 arch/s390/kvm/Makefile     |   2 +-
 arch/s390/kvm/gmap.c       | 183 +++++++++++++++++++++++
 arch/s390/kvm/gmap.h       |  17 +++
 arch/s390/kvm/intercept.c  |   1 +
 arch/s390/kvm/kvm-s390.c   |   1 +
 arch/s390/kvm/pv.c         |   1 +
 8 files changed, 251 insertions(+), 254 deletions(-)
 create mode 100644 arch/s390/kvm/gmap.c
 create mode 100644 arch/s390/kvm/gmap.h

diff --git a/arch/s390/include/asm/uv.h b/arch/s390/include/asm/uv.h
index dc332609f2c3..22ec1a24c291 100644
--- a/arch/s390/include/asm/uv.h
+++ b/arch/s390/include/asm/uv.h
@@ -628,12 +628,13 @@ static inline int is_prot_virt_host(void)
 }
 
 int uv_pin_shared(unsigned long paddr);
-int gmap_make_secure(struct gmap *gmap, unsigned long gaddr, void *uvcb);
-int gmap_destroy_page(struct gmap *gmap, unsigned long gaddr);
 int uv_destroy_folio(struct folio *folio);
 int uv_destroy_pte(pte_t pte);
 int uv_convert_from_secure_pte(pte_t pte);
-int gmap_convert_to_secure(struct gmap *gmap, unsigned long gaddr);
+int uv_wiggle_folio(struct folio *folio, bool split);
+int make_folio_secure(struct folio *folio, struct uv_cb_header *uvcb);
+int uv_convert_from_secure(unsigned long paddr);
+int uv_convert_from_secure_folio(struct folio *folio);
 
 void setup_uv(void);
 
diff --git a/arch/s390/kernel/uv.c b/arch/s390/kernel/uv.c
index 6f9654a191ad..832c39c9ccfa 100644
--- a/arch/s390/kernel/uv.c
+++ b/arch/s390/kernel/uv.c
@@ -19,19 +19,6 @@
 #include <asm/sections.h>
 #include <asm/uv.h>
 
-#if !IS_ENABLED(CONFIG_KVM)
-unsigned long __gmap_translate(struct gmap *gmap, unsigned long gaddr)
-{
-	return 0;
-}
-
-int gmap_fault(struct gmap *gmap, unsigned long gaddr,
-	       unsigned int fault_flags)
-{
-	return 0;
-}
-#endif
-
 /* the bootdata_preserved fields come from ones in arch/s390/boot/uv.c */
 int __bootdata_preserved(prot_virt_guest);
 EXPORT_SYMBOL(prot_virt_guest);
@@ -159,6 +146,7 @@ int uv_destroy_folio(struct folio *folio)
 	folio_put(folio);
 	return rc;
 }
+EXPORT_SYMBOL(uv_destroy_folio);
 
 /*
  * The present PTE still indirectly holds a folio reference through the mapping.
@@ -175,7 +163,7 @@ int uv_destroy_pte(pte_t pte)
  *
  * @paddr: Absolute host address of page to be exported
  */
-static int uv_convert_from_secure(unsigned long paddr)
+int uv_convert_from_secure(unsigned long paddr)
 {
 	struct uv_cb_cfs uvcb = {
 		.header.cmd = UVC_CMD_CONV_FROM_SEC_STOR,
@@ -187,11 +175,12 @@ static int uv_convert_from_secure(unsigned long paddr)
 		return -EINVAL;
 	return 0;
 }
+EXPORT_SYMBOL_GPL(uv_convert_from_secure);
 
 /*
  * The caller must already hold a reference to the folio.
  */
-static int uv_convert_from_secure_folio(struct folio *folio)
+int uv_convert_from_secure_folio(struct folio *folio)
 {
 	int rc;
 
@@ -206,6 +195,7 @@ static int uv_convert_from_secure_folio(struct folio *folio)
 	folio_put(folio);
 	return rc;
 }
+EXPORT_SYMBOL_GPL(uv_convert_from_secure_folio);
 
 /*
  * The present PTE still indirectly holds a folio reference through the mapping.
@@ -237,13 +227,32 @@ static int expected_folio_refs(struct folio *folio)
 	return res;
 }
 
-static int make_folio_secure(struct folio *folio, struct uv_cb_header *uvcb)
+/**
+ * make_folio_secure() - make a folio secure
+ * @folio: the folio to make secure
+ * @uvcb: the uvcb that describes the UVC to be used
+ *
+ * The folio @folio will be made secure if possible, @uvcb will be passed
+ * as-is to the UVC.
+ *
+ * Return: 0 on success;
+ *         -EBUSY if the folio is in writeback, has too many references, or is large;
+ *         -EAGAIN if the UVC needs to be attempted again;
+ *         -ENXIO if the address is not mapped;
+ *         -EINVAL if the UVC failed for other reasons.
+ *
+ * Context: The caller must hold exactly one extra reference on the folio
+ *          (it's the same logic as split_folio())
+ */
+int make_folio_secure(struct folio *folio, struct uv_cb_header *uvcb)
 {
 	int expected, cc = 0;
 
+	if (folio_test_large(folio))
+		return -EBUSY;
 	if (folio_test_writeback(folio))
-		return -EAGAIN;
-	expected = expected_folio_refs(folio);
+		return -EBUSY;
+	expected = expected_folio_refs(folio) + 1;
 	if (!folio_ref_freeze(folio, expected))
 		return -EBUSY;
 	set_bit(PG_arch_1, &folio->flags);
@@ -267,251 +276,35 @@ static int make_folio_secure(struct folio *folio, struct uv_cb_header *uvcb)
 		return -EAGAIN;
 	return uvcb->rc == 0x10a ? -ENXIO : -EINVAL;
 }
+EXPORT_SYMBOL_GPL(make_folio_secure);
 
 /**
- * should_export_before_import - Determine whether an export is needed
- * before an import-like operation
- * @uvcb: the Ultravisor control block of the UVC to be performed
- * @mm: the mm of the process
- *
- * Returns whether an export is needed before every import-like operation.
- * This is needed for shared pages, which don't trigger a secure storage
- * exception when accessed from a different guest.
- *
- * Although considered as one, the Unpin Page UVC is not an actual import,
- * so it is not affected.
+ * uv_wiggle_folio() - try to drain extra references to a folio
+ * @folio: the folio
+ * @split: whether to split a large folio
  *
- * No export is needed also when there is only one protected VM, because the
- * page cannot belong to the wrong VM in that case (there is no "other VM"
- * it can belong to).
- *
- * Return: true if an export is needed before every import, otherwise false.
+ * Context: Must be called while holding an extra reference to the folio;
+ *          the mm lock should not be held.
  */
-static bool should_export_before_import(struct uv_cb_header *uvcb, struct mm_struct *mm)
+int uv_wiggle_folio(struct folio *folio, bool split)
 {
-	/*
-	 * The misc feature indicates, among other things, that importing a
-	 * shared page from a different protected VM will automatically also
-	 * transfer its ownership.
-	 */
-	if (uv_has_feature(BIT_UV_FEAT_MISC))
-		return false;
-	if (uvcb->cmd == UVC_CMD_UNPIN_PAGE_SHARED)
-		return false;
-	return atomic_read(&mm->context.protected_count) > 1;
-}
-
-/*
- * Drain LRU caches: the local one on first invocation and the ones of all
- * CPUs on successive invocations. Returns "true" on the first invocation.
- */
-static bool drain_lru(bool *drain_lru_called)
-{
-	/*
-	 * If we have tried a local drain and the folio refcount
-	 * still does not match our expected safe value, try with a
-	 * system wide drain. This is needed if the pagevecs holding
-	 * the page are on a different CPU.
-	 */
-	if (*drain_lru_called) {
-		lru_add_drain_all();
-		/* We give up here, don't retry immediately. */
-		return false;
-	}
-	/*
-	 * We are here if the folio refcount does not match the
-	 * expected safe value. The main culprits are usually
-	 * pagevecs. With lru_add_drain() we drain the pagevecs
-	 * on the local CPU so that hopefully the refcount will
-	 * reach the expected safe value.
-	 */
-	lru_add_drain();
-	*drain_lru_called = true;
-	/* The caller should try again immediately */
-	return true;
-}
-
-/*
- * Requests the Ultravisor to make a page accessible to a guest.
- * If it's brought in the first time, it will be cleared. If
- * it has been exported before, it will be decrypted and integrity
- * checked.
- */
-int gmap_make_secure(struct gmap *gmap, unsigned long gaddr, void *uvcb)
-{
-	struct vm_area_struct *vma;
-	bool drain_lru_called = false;
-	spinlock_t *ptelock;
-	unsigned long uaddr;
-	struct folio *folio;
-	pte_t *ptep;
 	int rc;
 
-again:
-	rc = -EFAULT;
-	mmap_read_lock(gmap->mm);
-
-	uaddr = __gmap_translate(gmap, gaddr);
-	if (IS_ERR_VALUE(uaddr))
-		goto out;
-	vma = vma_lookup(gmap->mm, uaddr);
-	if (!vma)
-		goto out;
-	/*
-	 * Secure pages cannot be huge and userspace should not combine both.
-	 * In case userspace does it anyway this will result in an -EFAULT for
-	 * the unpack. The guest is thus never reaching secure mode. If
-	 * userspace is playing dirty tricky with mapping huge pages later
-	 * on this will result in a segmentation fault.
-	 */
-	if (is_vm_hugetlb_page(vma))
-		goto out;
-
-	rc = -ENXIO;
-	ptep = get_locked_pte(gmap->mm, uaddr, &ptelock);
-	if (!ptep)
-		goto out;
-	if (pte_present(*ptep) && !(pte_val(*ptep) & _PAGE_INVALID) && pte_write(*ptep)) {
-		folio = page_folio(pte_page(*ptep));
-		rc = -EAGAIN;
-		if (folio_test_large(folio)) {
-			rc = -E2BIG;
-		} else if (folio_trylock(folio)) {
-			if (should_export_before_import(uvcb, gmap->mm))
-				uv_convert_from_secure(PFN_PHYS(folio_pfn(folio)));
-			rc = make_folio_secure(folio, uvcb);
-			folio_unlock(folio);
-		}
-
-		/*
-		 * Once we drop the PTL, the folio may get unmapped and
-		 * freed immediately. We need a temporary reference.
-		 */
-		if (rc == -EAGAIN || rc == -E2BIG)
-			folio_get(folio);
-	}
-	pte_unmap_unlock(ptep, ptelock);
-out:
-	mmap_read_unlock(gmap->mm);
-
-	switch (rc) {
-	case -E2BIG:
+	folio_wait_writeback(folio);
+	if (split) {
 		folio_lock(folio);
 		rc = split_folio(folio);
 		folio_unlock(folio);
-		folio_put(folio);
-
-		switch (rc) {
-		case 0:
-			/* Splitting succeeded, try again immediately. */
-			goto again;
-		case -EAGAIN:
-			/* Additional folio references. */
-			if (drain_lru(&drain_lru_called))
-				goto again;
-			return -EAGAIN;
-		case -EBUSY:
-			/* Unexpected race. */
+
+		if (rc == -EBUSY)
 			return -EAGAIN;
-		}
-		WARN_ON_ONCE(1);
-		return -ENXIO;
-	case -EAGAIN:
-		/*
-		 * If we are here because the UVC returned busy or partial
-		 * completion, this is just a useless check, but it is safe.
-		 */
-		folio_wait_writeback(folio);
-		folio_put(folio);
-		return -EAGAIN;
-	case -EBUSY:
-		/* Additional folio references. */
-		if (drain_lru(&drain_lru_called))
-			goto again;
-		return -EAGAIN;
-	case -ENXIO:
-		if (gmap_fault(gmap, gaddr, FAULT_FLAG_WRITE))
-			return -EFAULT;
-		return -EAGAIN;
+		if (rc != -EAGAIN)
+			return rc;
 	}
-	return rc;
-}
-EXPORT_SYMBOL_GPL(gmap_make_secure);
-
-int gmap_convert_to_secure(struct gmap *gmap, unsigned long gaddr)
-{
-	struct uv_cb_cts uvcb = {
-		.header.cmd = UVC_CMD_CONV_TO_SEC_STOR,
-		.header.len = sizeof(uvcb),
-		.guest_handle = gmap->guest_handle,
-		.gaddr = gaddr,
-	};
-
-	return gmap_make_secure(gmap, gaddr, &uvcb);
-}
-EXPORT_SYMBOL_GPL(gmap_convert_to_secure);
-
-/**
- * gmap_destroy_page - Destroy a guest page.
- * @gmap: the gmap of the guest
- * @gaddr: the guest address to destroy
- *
- * An attempt will be made to destroy the given guest page. If the attempt
- * fails, an attempt is made to export the page. If both attempts fail, an
- * appropriate error is returned.
- */
-int gmap_destroy_page(struct gmap *gmap, unsigned long gaddr)
-{
-	struct vm_area_struct *vma;
-	struct folio_walk fw;
-	unsigned long uaddr;
-	struct folio *folio;
-	int rc;
-
-	rc = -EFAULT;
-	mmap_read_lock(gmap->mm);
-
-	uaddr = __gmap_translate(gmap, gaddr);
-	if (IS_ERR_VALUE(uaddr))
-		goto out;
-	vma = vma_lookup(gmap->mm, uaddr);
-	if (!vma)
-		goto out;
-	/*
-	 * Huge pages should not be able to become secure
-	 */
-	if (is_vm_hugetlb_page(vma))
-		goto out;
-
-	rc = 0;
-	folio = folio_walk_start(&fw, vma, uaddr, 0);
-	if (!folio)
-		goto out;
-	/*
-	 * See gmap_make_secure(): large folios cannot be secure. Small
-	 * folio implies FW_LEVEL_PTE.
-	 */
-	if (folio_test_large(folio) || !pte_write(fw.pte))
-		goto out_walk_end;
-	rc = uv_destroy_folio(folio);
-	/*
-	 * Fault handlers can race; it is possible that two CPUs will fault
-	 * on the same secure page. One CPU can destroy the page, reboot,
-	 * re-enter secure mode and import it, while the second CPU was
-	 * stuck at the beginning of the handler. At some point the second
-	 * CPU will be able to progress, and it will not be able to destroy
-	 * the page. In that case we do not want to terminate the process,
-	 * we instead try to export the page.
-	 */
-	if (rc)
-		rc = uv_convert_from_secure_folio(folio);
-out_walk_end:
-	folio_walk_end(&fw, vma);
-out:
-	mmap_read_unlock(gmap->mm);
-	return rc;
+	lru_add_drain_all();
+	return -EAGAIN;
 }
-EXPORT_SYMBOL_GPL(gmap_destroy_page);
+EXPORT_SYMBOL_GPL(uv_wiggle_folio);
 
 /*
  * To be called with the folio locked or with an extra reference! This will
diff --git a/arch/s390/kvm/Makefile b/arch/s390/kvm/Makefile
index 02217fb4ae10..d972dea657fd 100644
--- a/arch/s390/kvm/Makefile
+++ b/arch/s390/kvm/Makefile
@@ -8,7 +8,7 @@ include $(srctree)/virt/kvm/Makefile.kvm
 ccflags-y := -Ivirt/kvm -Iarch/s390/kvm
 
 kvm-y += kvm-s390.o intercept.o interrupt.o priv.o sigp.o
-kvm-y += diag.o gaccess.o guestdbg.o vsie.o pv.o
+kvm-y += diag.o gaccess.o guestdbg.o vsie.o pv.o gmap.o
 
 kvm-$(CONFIG_VFIO_PCI_ZDEV_KVM) += pci.o
 obj-$(CONFIG_KVM) += kvm.o
diff --git a/arch/s390/kvm/gmap.c b/arch/s390/kvm/gmap.c
new file mode 100644
index 000000000000..a142bbbddc25
--- /dev/null
+++ b/arch/s390/kvm/gmap.c
@@ -0,0 +1,183 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Guest memory management for KVM/s390
+ *
+ * Copyright IBM Corp. 2008, 2020, 2024
+ *
+ *    Author(s): Claudio Imbrenda <imbrenda@linux.ibm.com>
+ *               Martin Schwidefsky <schwidefsky@de.ibm.com>
+ *               David Hildenbrand <david@redhat.com>
+ *               Janosch Frank <frankja@linux.vnet.ibm.com>
+ */
+
+#include <linux/compiler.h>
+#include <linux/kvm.h>
+#include <linux/kvm_host.h>
+#include <linux/pgtable.h>
+#include <linux/pagemap.h>
+
+#include <asm/lowcore.h>
+#include <asm/gmap.h>
+#include <asm/uv.h>
+
+#include "gmap.h"
+
+/**
+ * should_export_before_import - Determine whether an export is needed
+ * before an import-like operation
+ * @uvcb: the Ultravisor control block of the UVC to be performed
+ * @mm: the mm of the process
+ *
+ * Returns whether an export is needed before every import-like operation.
+ * This is needed for shared pages, which don't trigger a secure storage
+ * exception when accessed from a different guest.
+ *
+ * Although considered as one, the Unpin Page UVC is not an actual import,
+ * so it is not affected.
+ *
+ * No export is needed also when there is only one protected VM, because the
+ * page cannot belong to the wrong VM in that case (there is no "other VM"
+ * it can belong to).
+ *
+ * Return: true if an export is needed before every import, otherwise false.
+ */
+static bool should_export_before_import(struct uv_cb_header *uvcb, struct mm_struct *mm)
+{
+	/*
+	 * The misc feature indicates, among other things, that importing a
+	 * shared page from a different protected VM will automatically also
+	 * transfer its ownership.
+	 */
+	if (uv_has_feature(BIT_UV_FEAT_MISC))
+		return false;
+	if (uvcb->cmd == UVC_CMD_UNPIN_PAGE_SHARED)
+		return false;
+	return atomic_read(&mm->context.protected_count) > 1;
+}
+
+static int __gmap_make_secure(struct gmap *gmap, struct page *page, void *uvcb)
+{
+	struct folio *folio = page_folio(page);
+	int rc;
+
+	/*
+	 * Secure pages cannot be huge, and userspace should not combine both.
+	 * In case userspace does it anyway, this will result in an -EFAULT
+	 * for the unpack; the guest thus never reaches secure mode. If
+	 * userspace plays dirty tricks by mapping huge pages later on, this
+	 * will result in a segmentation fault or in a -EFAULT return code
+	 * from the KVM_RUN ioctl.
+	 */
+	if (folio_test_hugetlb(folio))
+		return -EFAULT;
+	if (folio_test_large(folio)) {
+		mmap_read_unlock(gmap->mm);
+		rc = uv_wiggle_folio(folio, true);
+		mmap_read_lock(gmap->mm);
+		if (rc)
+			return rc;
+		folio = page_folio(page);
+	}
+
+	rc = -EAGAIN;
+	if (folio_trylock(folio)) {
+		if (should_export_before_import(uvcb, gmap->mm))
+			uv_convert_from_secure(folio_to_phys(folio));
+		rc = make_folio_secure(folio, uvcb);
+		folio_unlock(folio);
+	}
+
+	/*
+	 * Unlikely case: the page is not mapped anymore. Return success
+	 * and let the proper fault handler fault in the page again.
+	 */
+	if (rc == -ENXIO)
+		return 0;
+	/* The folio has too many references, try to shake some off */
+	if (rc == -EBUSY) {
+		mmap_read_unlock(gmap->mm);
+		uv_wiggle_folio(folio, false);
+		mmap_read_lock(gmap->mm);
+		return -EAGAIN;
+	}
+
+	return rc;
+}
+
+int gmap_make_secure(struct gmap *gmap, unsigned long gaddr, void *uvcb)
+{
+	struct page *page;
+	int rc = 0;
+
+	mmap_read_lock(gmap->mm);
+	page = gfn_to_page(gmap->private, gpa_to_gfn(gaddr));
+	if (page)
+		rc = __gmap_make_secure(gmap, page, uvcb);
+	kvm_release_page_clean(page);
+	mmap_read_unlock(gmap->mm);
+
+	return rc;
+}
+
+int gmap_convert_to_secure(struct gmap *gmap, unsigned long gaddr)
+{
+	struct uv_cb_cts uvcb = {
+		.header.cmd = UVC_CMD_CONV_TO_SEC_STOR,
+		.header.len = sizeof(uvcb),
+		.guest_handle = gmap->guest_handle,
+		.gaddr = gaddr,
+	};
+
+	return gmap_make_secure(gmap, gaddr, &uvcb);
+}
+
+/**
+ * __gmap_destroy_page() - Destroy a guest page.
+ * @gmap: the gmap of the guest
+ * @page: the page to destroy
+ *
+ * An attempt will be made to destroy the given guest page. If the attempt
+ * fails, an attempt is made to export the page. If both attempts fail, an
+ * appropriate error is returned.
+ */
+static int __gmap_destroy_page(struct gmap *gmap, struct page *page)
+{
+	struct folio *folio = page_folio(page);
+	int rc;
+
+	/*
+	 * See gmap_make_secure(): large folios cannot be secure. Small
+	 * folio implies FW_LEVEL_PTE.
+	 */
+	if (folio_test_large(folio))
+		return -EFAULT;
+
+	rc = uv_destroy_folio(folio);
+	/*
+	 * Fault handlers can race; it is possible that two CPUs will fault
+	 * on the same secure page. One CPU can destroy the page, reboot,
+	 * re-enter secure mode and import it, while the second CPU was
+	 * stuck at the beginning of the handler. At some point the second
+	 * CPU will be able to progress, and it will not be able to destroy
+	 * the page. In that case we do not want to terminate the process,
+	 * we instead try to export the page.
+	 */
+	if (rc)
+		rc = uv_convert_from_secure_folio(folio);
+
+	return rc;
+}
+
+int gmap_destroy_page(struct gmap *gmap, unsigned long gaddr)
+{
+	struct page *page;
+	int rc = 0;
+
+	mmap_read_lock(gmap->mm);
+	page = gfn_to_page(gmap->private, gpa_to_gfn(gaddr));
+	if (page)
+		rc = __gmap_destroy_page(gmap, page);
+	kvm_release_page_clean(page);
+	mmap_read_unlock(gmap->mm);
+	return rc;
+}
diff --git a/arch/s390/kvm/gmap.h b/arch/s390/kvm/gmap.h
new file mode 100644
index 000000000000..f2b52ce29be3
--- /dev/null
+++ b/arch/s390/kvm/gmap.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ *  KVM guest address space mapping code
+ *
+ *    Copyright IBM Corp. 2007, 2016, 2025
+ *    Author(s): Martin Schwidefsky <schwidefsky@de.ibm.com>
+ *               Claudio Imbrenda <imbrenda@linux.ibm.com>
+ */
+
+#ifndef ARCH_KVM_S390_GMAP_H
+#define ARCH_KVM_S390_GMAP_H
+
+int gmap_make_secure(struct gmap *gmap, unsigned long gaddr, void *uvcb);
+int gmap_convert_to_secure(struct gmap *gmap, unsigned long gaddr);
+int gmap_destroy_page(struct gmap *gmap, unsigned long gaddr);
+
+#endif
diff --git a/arch/s390/kvm/intercept.c b/arch/s390/kvm/intercept.c
index 5bbaadf75dc6..92ae003cd215 100644
--- a/arch/s390/kvm/intercept.c
+++ b/arch/s390/kvm/intercept.c
@@ -21,6 +21,7 @@
 #include "gaccess.h"
 #include "trace.h"
 #include "trace-s390.h"
+#include "gmap.h"
 
 u8 kvm_s390_get_ilen(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 8e4e7e45238b..bdbb143a75c9 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -50,6 +50,7 @@
 #include "kvm-s390.h"
 #include "gaccess.h"
 #include "pci.h"
+#include "gmap.h"
 
 #define CREATE_TRACE_POINTS
 #include "trace.h"
diff --git a/arch/s390/kvm/pv.c b/arch/s390/kvm/pv.c
index 75e81ba26d04..f0301e673810 100644
--- a/arch/s390/kvm/pv.c
+++ b/arch/s390/kvm/pv.c
@@ -17,6 +17,7 @@
 #include <linux/sched/mm.h>
 #include <linux/mmu_notifier.h>
 #include "kvm-s390.h"
+#include "gmap.h"
 
 bool kvm_s390_pv_is_protected(struct kvm *kvm)
 {
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH v1 05/13] KVM: s390: get rid of gmap_fault()
  2025-01-08 18:14 [PATCH v1 00/13] KVM: s390: Stop using page->index and other things Claudio Imbrenda
                   ` (3 preceding siblings ...)
  2025-01-08 18:14 ` [PATCH v1 04/13] KVM: s390: move pv gmap functions into kvm Claudio Imbrenda
@ 2025-01-08 18:14 ` Claudio Imbrenda
  2025-01-08 18:14 ` [PATCH v1 06/13] KVM: s390: get rid of gmap_translate() Claudio Imbrenda
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 39+ messages in thread
From: Claudio Imbrenda @ 2025-01-08 18:14 UTC (permalink / raw)
  To: kvm
  Cc: linux-s390, frankja, borntraeger, schlameuss, david, willy, hca,
	svens, agordeev, gor, nrb, nsg

All gmap page faults are already handled in KVM by the function
kvm_s390_handle_dat_fault(); only a few users of gmap_fault() remain,
all within KVM.

Convert those calls to use kvm_s390_handle_dat_fault() instead.

Remove gmap_fault() entirely since it has no more users.

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
---
 arch/s390/include/asm/gmap.h |   1 -
 arch/s390/kvm/intercept.c    |   4 +-
 arch/s390/kvm/kvm-s390.c     |  17 +++--
 arch/s390/kvm/kvm-s390.h     |   7 ++
 arch/s390/mm/gmap.c          | 124 -----------------------------------
 5 files changed, 22 insertions(+), 131 deletions(-)

diff --git a/arch/s390/include/asm/gmap.h b/arch/s390/include/asm/gmap.h
index 13f51a6a5bb1..3f4184be297f 100644
--- a/arch/s390/include/asm/gmap.h
+++ b/arch/s390/include/asm/gmap.h
@@ -113,7 +113,6 @@ int gmap_unmap_segment(struct gmap *gmap, unsigned long to, unsigned long len);
 unsigned long __gmap_translate(struct gmap *, unsigned long gaddr);
 unsigned long gmap_translate(struct gmap *, unsigned long gaddr);
 int __gmap_link(struct gmap *gmap, unsigned long gaddr, unsigned long vmaddr);
-int gmap_fault(struct gmap *, unsigned long gaddr, unsigned int fault_flags);
 void gmap_discard(struct gmap *, unsigned long from, unsigned long to);
 void __gmap_zap(struct gmap *, unsigned long gaddr);
 void gmap_unlink(struct mm_struct *, unsigned long *table, unsigned long vmaddr);
diff --git a/arch/s390/kvm/intercept.c b/arch/s390/kvm/intercept.c
index 92ae003cd215..83a4b0edf239 100644
--- a/arch/s390/kvm/intercept.c
+++ b/arch/s390/kvm/intercept.c
@@ -368,7 +368,7 @@ static int handle_mvpg_pei(struct kvm_vcpu *vcpu)
 					      reg2, &srcaddr, GACC_FETCH, 0);
 	if (rc)
 		return kvm_s390_inject_prog_cond(vcpu, rc);
-	rc = gmap_fault(vcpu->arch.gmap, srcaddr, 0);
+	rc = kvm_s390_handle_dat_fault(vcpu, srcaddr, 0);
 	if (rc != 0)
 		return rc;
 
@@ -377,7 +377,7 @@ static int handle_mvpg_pei(struct kvm_vcpu *vcpu)
 					      reg1, &dstaddr, GACC_STORE, 0);
 	if (rc)
 		return kvm_s390_inject_prog_cond(vcpu, rc);
-	rc = gmap_fault(vcpu->arch.gmap, dstaddr, FAULT_FLAG_WRITE);
+	rc = kvm_s390_handle_dat_fault(vcpu, dstaddr, FOLL_WRITE);
 	if (rc != 0)
 		return rc;
 
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index bdbb143a75c9..d8c8e91356ca 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -4795,8 +4795,17 @@ static void kvm_s390_assert_primary_as(struct kvm_vcpu *vcpu)
 		current->thread.gmap_int_code, current->thread.gmap_teid.val);
 }
 
-static int kvm_s390_handle_dat_fault(struct kvm_vcpu *vcpu, gfn_t gfn, gpa_t gaddr,
-				     unsigned int flags)
+/**
+ * __kvm_s390_handle_dat_fault() - handle a DAT fault for the gmap of a vcpu
+ * @vcpu: the vCPU whose gmap is to be fixed up
+ * @gfn: the guest frame number used for memslots (including fake memslots)
+ * @gaddr: the gmap address, does not have to match @gfn for ucontrol gmaps
+ * @flags: FOLL_* flags
+ *
+ * Return: 0 on success, < 0 in case of error.
+ * Context: The mm lock must not be held before calling.
+ */
+int __kvm_s390_handle_dat_fault(struct kvm_vcpu *vcpu, gfn_t gfn, gpa_t gaddr, unsigned int flags)
 {
 	struct kvm_memory_slot *slot;
 	unsigned int fault_flags;
@@ -4925,7 +4934,7 @@ static int vcpu_post_run_handle_fault(struct kvm_vcpu *vcpu)
 			}
 			gfn = gpa_to_gfn(gaddr_tmp);
 		}
-		return kvm_s390_handle_dat_fault(vcpu, gfn, gaddr, flags);
+		return __kvm_s390_handle_dat_fault(vcpu, gfn, gaddr, flags);
 		break;
 	default:
 		KVM_BUG(1, vcpu->kvm, "Unexpected program interrupt 0x%x, TEID 0x%016lx",
@@ -5818,7 +5827,7 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 	}
 #endif
 	case KVM_S390_VCPU_FAULT: {
-		r = gmap_fault(vcpu->arch.gmap, arg, 0);
+		r = kvm_s390_handle_dat_fault(vcpu, arg, 0);
 		break;
 	}
 	case KVM_ENABLE_CAP:
diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
index 597d7a71deeb..7f4b5ef80a6f 100644
--- a/arch/s390/kvm/kvm-s390.h
+++ b/arch/s390/kvm/kvm-s390.h
@@ -408,6 +408,13 @@ void kvm_s390_vcpu_unsetup_cmma(struct kvm_vcpu *vcpu);
 void kvm_s390_set_cpu_timer(struct kvm_vcpu *vcpu, __u64 cputm);
 __u64 kvm_s390_get_cpu_timer(struct kvm_vcpu *vcpu);
 int kvm_s390_cpus_from_pv(struct kvm *kvm, u16 *rc, u16 *rrc);
+int __kvm_s390_handle_dat_fault(struct kvm_vcpu *vcpu, gfn_t gfn, gpa_t gaddr, unsigned int flags);
+
+static inline int kvm_s390_handle_dat_fault(struct kvm_vcpu *vcpu, gpa_t gaddr, unsigned int flags)
+{
+	return __kvm_s390_handle_dat_fault(vcpu, gpa_to_gfn(gaddr), gaddr, flags);
+}
+
 
 /* implemented in diag.c */
 int kvm_s390_handle_diag(struct kvm_vcpu *vcpu);
diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index 3aacef77c174..8da4f7438511 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -607,130 +607,6 @@ int __gmap_link(struct gmap *gmap, unsigned long gaddr, unsigned long vmaddr)
 }
 EXPORT_SYMBOL(__gmap_link);
 
-/**
- * fixup_user_fault_nowait - manually resolve a user page fault without waiting
- * @mm:		mm_struct of target mm
- * @address:	user address
- * @fault_flags:flags to pass down to handle_mm_fault()
- * @unlocked:	did we unlock the mmap_lock while retrying
- *
- * This function behaves similarly to fixup_user_fault(), but it guarantees
- * that the fault will be resolved without waiting. The function might drop
- * and re-acquire the mm lock, in which case @unlocked will be set to true.
- *
- * The guarantee is that the fault is handled without waiting, but the
- * function itself might sleep, due to the lock.
- *
- * Context: Needs to be called with mm->mmap_lock held in read mode, and will
- * return with the lock held in read mode; @unlocked will indicate whether
- * the lock has been dropped and re-acquired. This is the same behaviour as
- * fixup_user_fault().
- *
- * Return: 0 on success, -EAGAIN if the fault cannot be resolved without
- * waiting, -EFAULT if the fault cannot be resolved, -ENOMEM if out of
- * memory.
- */
-static int fixup_user_fault_nowait(struct mm_struct *mm, unsigned long address,
-				   unsigned int fault_flags, bool *unlocked)
-{
-	struct vm_area_struct *vma;
-	unsigned int test_flags;
-	vm_fault_t fault;
-	int rc;
-
-	fault_flags |= FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_RETRY_NOWAIT;
-	test_flags = fault_flags & FAULT_FLAG_WRITE ? VM_WRITE : VM_READ;
-
-	vma = find_vma(mm, address);
-	if (unlikely(!vma || address < vma->vm_start))
-		return -EFAULT;
-	if (unlikely(!(vma->vm_flags & test_flags)))
-		return -EFAULT;
-
-	fault = handle_mm_fault(vma, address, fault_flags, NULL);
-	/* the mm lock has been dropped, take it again */
-	if (fault & VM_FAULT_COMPLETED) {
-		*unlocked = true;
-		mmap_read_lock(mm);
-		return 0;
-	}
-	/* the mm lock has not been dropped */
-	if (fault & VM_FAULT_ERROR) {
-		rc = vm_fault_to_errno(fault, 0);
-		BUG_ON(!rc);
-		return rc;
-	}
-	/* the mm lock has not been dropped because of FAULT_FLAG_RETRY_NOWAIT */
-	if (fault & VM_FAULT_RETRY)
-		return -EAGAIN;
-	/* nothing needed to be done and the mm lock has not been dropped */
-	return 0;
-}
-
-/**
- * __gmap_fault - resolve a fault on a guest address
- * @gmap: pointer to guest mapping meta data structure
- * @gaddr: guest address
- * @fault_flags: flags to pass down to handle_mm_fault()
- *
- * Context: Needs to be called with mm->mmap_lock held in read mode. Might
- * drop and re-acquire the lock. Will always return with the lock held.
- */
-static int __gmap_fault(struct gmap *gmap, unsigned long gaddr, unsigned int fault_flags)
-{
-	unsigned long vmaddr;
-	bool unlocked;
-	int rc = 0;
-
-retry:
-	unlocked = false;
-
-	vmaddr = __gmap_translate(gmap, gaddr);
-	if (IS_ERR_VALUE(vmaddr))
-		return vmaddr;
-
-	if (fault_flags & FAULT_FLAG_RETRY_NOWAIT)
-		rc = fixup_user_fault_nowait(gmap->mm, vmaddr, fault_flags, &unlocked);
-	else
-		rc = fixup_user_fault(gmap->mm, vmaddr, fault_flags, &unlocked);
-	if (rc)
-		return rc;
-	/*
-	 * In the case that fixup_user_fault unlocked the mmap_lock during
-	 * fault-in, redo __gmap_translate() to avoid racing with a
-	 * map/unmap_segment.
-	 * In particular, __gmap_translate(), fixup_user_fault{,_nowait}(),
-	 * and __gmap_link() must all be called atomically in one go; if the
-	 * lock had been dropped in between, a retry is needed.
-	 */
-	if (unlocked)
-		goto retry;
-
-	return __gmap_link(gmap, gaddr, vmaddr);
-}
-
-/**
- * gmap_fault - resolve a fault on a guest address
- * @gmap: pointer to guest mapping meta data structure
- * @gaddr: guest address
- * @fault_flags: flags to pass down to handle_mm_fault()
- *
- * Returns 0 on success, -ENOMEM for out of memory conditions, -EFAULT if the
- * vm address is already mapped to a different guest segment, and -EAGAIN if
- * FAULT_FLAG_RETRY_NOWAIT was specified and the fault could not be processed
- * immediately.
- */
-int gmap_fault(struct gmap *gmap, unsigned long gaddr, unsigned int fault_flags)
-{
-	int rc;
-
-	mmap_read_lock(gmap->mm);
-	rc = __gmap_fault(gmap, gaddr, fault_flags);
-	mmap_read_unlock(gmap->mm);
-	return rc;
-}
-EXPORT_SYMBOL_GPL(gmap_fault);
-
 /*
  * this function is assumed to be called with mmap_lock held
  */
-- 
2.47.1



* [PATCH v1 06/13] KVM: s390: get rid of gmap_translate()
  2025-01-08 18:14 [PATCH v1 00/13] KVM: s390: Stop using page->index and other things Claudio Imbrenda
                   ` (4 preceding siblings ...)
  2025-01-08 18:14 ` [PATCH v1 05/13] KVM: s390: get rid of gmap_fault() Claudio Imbrenda
@ 2025-01-08 18:14 ` Claudio Imbrenda
  2025-01-08 18:14 ` [PATCH v1 07/13] KVM: s390: move some gmap shadowing functions away from mm/gmap.c Claudio Imbrenda
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 39+ messages in thread
From: Claudio Imbrenda @ 2025-01-08 18:14 UTC (permalink / raw)
  To: kvm
  Cc: linux-s390, frankja, borntraeger, schlameuss, david, willy, hca,
	svens, agordeev, gor, nrb, nsg

Add gpa_to_hva(), which uses memslots, and use it to replace all uses
of gmap_translate() except one.

Open-code __gmap_translate(), with the mmap lock held, for the one
remaining user, and remove gmap_translate() entirely now that it has no
more users.

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
---
 arch/s390/include/asm/gmap.h |  1 -
 arch/s390/kvm/interrupt.c    | 19 +++++++++++--------
 arch/s390/kvm/kvm-s390.c     |  4 +++-
 arch/s390/kvm/kvm-s390.h     |  9 +++++++++
 arch/s390/mm/gmap.c          | 20 --------------------
 5 files changed, 23 insertions(+), 30 deletions(-)

diff --git a/arch/s390/include/asm/gmap.h b/arch/s390/include/asm/gmap.h
index 3f4184be297f..3652d8523e1f 100644
--- a/arch/s390/include/asm/gmap.h
+++ b/arch/s390/include/asm/gmap.h
@@ -111,7 +111,6 @@ int gmap_map_segment(struct gmap *gmap, unsigned long from,
 		     unsigned long to, unsigned long len);
 int gmap_unmap_segment(struct gmap *gmap, unsigned long to, unsigned long len);
 unsigned long __gmap_translate(struct gmap *, unsigned long gaddr);
-unsigned long gmap_translate(struct gmap *, unsigned long gaddr);
 int __gmap_link(struct gmap *gmap, unsigned long gaddr, unsigned long vmaddr);
 void gmap_discard(struct gmap *, unsigned long from, unsigned long to);
 void __gmap_zap(struct gmap *, unsigned long gaddr);
diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
index ea8dce299954..c1526149700e 100644
--- a/arch/s390/kvm/interrupt.c
+++ b/arch/s390/kvm/interrupt.c
@@ -2889,20 +2889,23 @@ int kvm_set_routing_entry(struct kvm *kvm,
 			  struct kvm_kernel_irq_routing_entry *e,
 			  const struct kvm_irq_routing_entry *ue)
 {
-	u64 uaddr;
+	u64 uaddr_s, uaddr_i;
+	int idx;
 
 	switch (ue->type) {
 	/* we store the userspace addresses instead of the guest addresses */
 	case KVM_IRQ_ROUTING_S390_ADAPTER:
 		e->set = set_adapter_int;
-		uaddr =  gmap_translate(kvm->arch.gmap, ue->u.adapter.summary_addr);
-		if (uaddr == -EFAULT)
-			return -EFAULT;
-		e->adapter.summary_addr = uaddr;
-		uaddr =  gmap_translate(kvm->arch.gmap, ue->u.adapter.ind_addr);
-		if (uaddr == -EFAULT)
+
+		idx = srcu_read_lock(&kvm->srcu);
+		uaddr_s = gpa_to_hva(kvm, ue->u.adapter.summary_addr);
+		uaddr_i = gpa_to_hva(kvm, ue->u.adapter.ind_addr);
+		srcu_read_unlock(&kvm->srcu, idx);
+
+		if (kvm_is_error_hva(uaddr_s) || kvm_is_error_hva(uaddr_i))
 			return -EFAULT;
-		e->adapter.ind_addr = uaddr;
+		e->adapter.summary_addr = uaddr_s;
+		e->adapter.ind_addr = uaddr_i;
 		e->adapter.summary_offset = ue->u.adapter.summary_offset;
 		e->adapter.ind_offset = ue->u.adapter.ind_offset;
 		e->adapter.adapter_id = ue->u.adapter.adapter_id;
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index d8c8e91356ca..574a9e43f97a 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -4925,7 +4925,9 @@ static int vcpu_post_run_handle_fault(struct kvm_vcpu *vcpu)
 			 * This allows ucontrol VMs to use the normal fault
 			 * resolution path, like normal VMs.
 			 */
-			gaddr_tmp = gmap_translate(vcpu->arch.gmap, gaddr);
+			mmap_read_lock(vcpu->arch.gmap->mm);
+			gaddr_tmp = __gmap_translate(vcpu->arch.gmap, gaddr);
+			mmap_read_unlock(vcpu->arch.gmap->mm);
 			if (gaddr_tmp == -EFAULT) {
 				vcpu->run->exit_reason = KVM_EXIT_S390_UCONTROL;
 				vcpu->run->s390_ucontrol.trans_exc_code = gaddr;
diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
index 7f4b5ef80a6f..37c9ab52ecaa 100644
--- a/arch/s390/kvm/kvm-s390.h
+++ b/arch/s390/kvm/kvm-s390.h
@@ -279,6 +279,15 @@ static inline u32 kvm_s390_get_gisa_desc(struct kvm *kvm)
 	return gd;
 }
 
+static inline hva_t gpa_to_hva(struct kvm *kvm, gpa_t gpa)
+{
+	hva_t hva = gfn_to_hva(kvm, gpa_to_gfn(gpa));
+
+	if (!kvm_is_error_hva(hva))
+		hva |= offset_in_page(gpa);
+	return hva;
+}
+
 /* implemented in pv.c */
 int kvm_s390_pv_destroy_cpu(struct kvm_vcpu *vcpu, u16 *rc, u16 *rrc);
 int kvm_s390_pv_create_cpu(struct kvm_vcpu *vcpu, u16 *rc, u16 *rrc);
diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index 8da4f7438511..ae6ccf034378 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -463,26 +463,6 @@ unsigned long __gmap_translate(struct gmap *gmap, unsigned long gaddr)
 }
 EXPORT_SYMBOL_GPL(__gmap_translate);
 
-/**
- * gmap_translate - translate a guest address to a user space address
- * @gmap: pointer to guest mapping meta data structure
- * @gaddr: guest address
- *
- * Returns user space address which corresponds to the guest address or
- * -EFAULT if no such mapping exists.
- * This function does not establish potentially missing page table entries.
- */
-unsigned long gmap_translate(struct gmap *gmap, unsigned long gaddr)
-{
-	unsigned long rc;
-
-	mmap_read_lock(gmap->mm);
-	rc = __gmap_translate(gmap, gaddr);
-	mmap_read_unlock(gmap->mm);
-	return rc;
-}
-EXPORT_SYMBOL_GPL(gmap_translate);
-
 /**
  * gmap_unlink - disconnect a page table from the gmap shadow tables
  * @mm: pointer to the parent mm_struct
-- 
2.47.1



* [PATCH v1 07/13] KVM: s390: move some gmap shadowing functions away from mm/gmap.c
  2025-01-08 18:14 [PATCH v1 00/13] KVM: s390: Stop using page->index and other things Claudio Imbrenda
                   ` (5 preceding siblings ...)
  2025-01-08 18:14 ` [PATCH v1 06/13] KVM: s390: get rid of gmap_translate() Claudio Imbrenda
@ 2025-01-08 18:14 ` Claudio Imbrenda
  2025-01-15  8:56   ` Janosch Frank
  2025-01-08 18:14 ` [PATCH v1 08/13] KVM: s390: stop using page->index for non-shadow gmaps Claudio Imbrenda
                   ` (5 subsequent siblings)
  12 siblings, 1 reply; 39+ messages in thread
From: Claudio Imbrenda @ 2025-01-08 18:14 UTC (permalink / raw)
  To: kvm
  Cc: linux-s390, frankja, borntraeger, schlameuss, david, willy, hca,
	svens, agordeev, gor, nrb, nsg

Move some gmap shadowing functions from mm/gmap.c to kvm/vsie.c and
kvm/kvm-s390.c.

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
---
 arch/s390/include/asm/gmap.h |   7 +-
 arch/s390/kvm/kvm-s390.c     |  62 +++++++++-
 arch/s390/kvm/kvm-s390.h     |   2 +
 arch/s390/kvm/vsie.c         | 137 +++++++++++++++++++++
 arch/s390/mm/gmap.c          | 232 ++++-------------------------------
 5 files changed, 226 insertions(+), 214 deletions(-)

diff --git a/arch/s390/include/asm/gmap.h b/arch/s390/include/asm/gmap.h
index 3652d8523e1f..8637ebbf618a 100644
--- a/arch/s390/include/asm/gmap.h
+++ b/arch/s390/include/asm/gmap.h
@@ -106,6 +106,8 @@ struct gmap *gmap_create(struct mm_struct *mm, unsigned long limit);
 void gmap_remove(struct gmap *gmap);
 struct gmap *gmap_get(struct gmap *gmap);
 void gmap_put(struct gmap *gmap);
+void gmap_free(struct gmap *gmap);
+struct gmap *gmap_alloc(unsigned long limit);
 
 int gmap_map_segment(struct gmap *gmap, unsigned long from,
 		     unsigned long to, unsigned long len);
@@ -120,7 +122,7 @@ int gmap_read_table(struct gmap *gmap, unsigned long gaddr, unsigned long *val);
 
 struct gmap *gmap_shadow(struct gmap *parent, unsigned long asce,
 			 int edat_level);
-int gmap_shadow_valid(struct gmap *sg, unsigned long asce, int edat_level);
+void gmap_unshadow(struct gmap *sg);
 int gmap_shadow_r2t(struct gmap *sg, unsigned long saddr, unsigned long r2t,
 		    int fake);
 int gmap_shadow_r3t(struct gmap *sg, unsigned long saddr, unsigned long r3t,
@@ -136,8 +138,7 @@ int gmap_shadow_page(struct gmap *sg, unsigned long saddr, pte_t pte);
 void gmap_register_pte_notifier(struct gmap_notifier *);
 void gmap_unregister_pte_notifier(struct gmap_notifier *);
 
-int gmap_mprotect_notify(struct gmap *, unsigned long start,
-			 unsigned long len, int prot);
+int gmap_protect_one(struct gmap *gmap, unsigned long gaddr, int prot, unsigned long bits);
 
 void gmap_sync_dirty_log_pmd(struct gmap *gmap, unsigned long dirty_bitmap[4],
 			     unsigned long gaddr, unsigned long vmaddr);
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 574a9e43f97a..b9e02fb9ae4a 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -4520,6 +4520,63 @@ static bool ibs_enabled(struct kvm_vcpu *vcpu)
 	return kvm_s390_test_cpuflags(vcpu, CPUSTAT_IBS);
 }
 
+static int __kvm_s390_fixup_fault_sync(struct gmap *gmap, gpa_t gaddr, unsigned int flags)
+{
+	struct kvm *kvm = gmap->private;
+	gfn_t gfn = gpa_to_gfn(gaddr);
+	bool unlocked;
+	hva_t vmaddr;
+	gpa_t tmp;
+	int rc;
+
+	if (kvm_is_ucontrol(kvm)) {
+		tmp = __gmap_translate(gmap, gaddr);
+		gfn = gpa_to_gfn(tmp);
+	}
+
+	vmaddr = gfn_to_hva(kvm, gfn);
+	rc = fixup_user_fault(gmap->mm, vmaddr, FAULT_FLAG_WRITE, &unlocked);
+	if (!rc)
+		rc = __gmap_link(gmap, gaddr, vmaddr);
+	return rc;
+}
+
+int __kvm_s390_mprotect_many(struct gmap *gmap, gpa_t gpa, u8 npages, unsigned int prot,
+			     unsigned long bits)
+{
+	unsigned int fault_flag = (prot & PROT_WRITE) ? FAULT_FLAG_WRITE : 0;
+	gpa_t end = gpa + npages * PAGE_SIZE;
+	int rc;
+
+	for (; gpa < end; gpa = ALIGN(gpa + 1, rc ? HPAGE_SIZE : PAGE_SIZE)) {
+		rc = gmap_protect_one(gmap, gpa, prot, bits);
+		if (rc == -EAGAIN) {
+			__kvm_s390_fixup_fault_sync(gmap, gpa, fault_flag);
+			rc = gmap_protect_one(gmap, gpa, prot, bits);
+		}
+		if (rc < 0)
+			return rc;
+	}
+
+	return 0;
+}
+
+static int kvm_s390_mprotect_notify_prefix(struct kvm_vcpu *vcpu)
+{
+	gpa_t gaddr = kvm_s390_get_prefix(vcpu);
+	int idx, rc;
+
+	idx = srcu_read_lock(&vcpu->kvm->srcu);
+	mmap_read_lock(vcpu->arch.gmap->mm);
+
+	rc = __kvm_s390_mprotect_many(vcpu->arch.gmap, gaddr, 2, PROT_WRITE, GMAP_NOTIFY_MPROT);
+
+	mmap_read_unlock(vcpu->arch.gmap->mm);
+	srcu_read_unlock(&vcpu->kvm->srcu, idx);
+
+	return rc;
+}
+
 static int kvm_s390_handle_requests(struct kvm_vcpu *vcpu)
 {
 retry:
@@ -4535,9 +4592,8 @@ static int kvm_s390_handle_requests(struct kvm_vcpu *vcpu)
 	 */
 	if (kvm_check_request(KVM_REQ_REFRESH_GUEST_PREFIX, vcpu)) {
 		int rc;
-		rc = gmap_mprotect_notify(vcpu->arch.gmap,
-					  kvm_s390_get_prefix(vcpu),
-					  PAGE_SIZE * 2, PROT_WRITE);
+
+		rc = kvm_s390_mprotect_notify_prefix(vcpu);
 		if (rc) {
 			kvm_make_request(KVM_REQ_REFRESH_GUEST_PREFIX, vcpu);
 			return rc;
diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
index 37c9ab52ecaa..f77a3829d182 100644
--- a/arch/s390/kvm/kvm-s390.h
+++ b/arch/s390/kvm/kvm-s390.h
@@ -418,6 +418,8 @@ void kvm_s390_set_cpu_timer(struct kvm_vcpu *vcpu, __u64 cputm);
 __u64 kvm_s390_get_cpu_timer(struct kvm_vcpu *vcpu);
 int kvm_s390_cpus_from_pv(struct kvm *kvm, u16 *rc, u16 *rrc);
 int __kvm_s390_handle_dat_fault(struct kvm_vcpu *vcpu, gfn_t gfn, gpa_t gaddr, unsigned int flags);
+int __kvm_s390_mprotect_many(struct gmap *gmap, gpa_t gpa, u8 npages, unsigned int prot,
+			     unsigned long bits);
 
 static inline int kvm_s390_handle_dat_fault(struct kvm_vcpu *vcpu, gpa_t gaddr, unsigned int flags)
 {
diff --git a/arch/s390/kvm/vsie.c b/arch/s390/kvm/vsie.c
index 150b9387860a..314e9baf2120 100644
--- a/arch/s390/kvm/vsie.c
+++ b/arch/s390/kvm/vsie.c
@@ -13,6 +13,7 @@
 #include <linux/bitmap.h>
 #include <linux/sched/signal.h>
 #include <linux/io.h>
+#include <linux/mman.h>
 
 #include <asm/gmap.h>
 #include <asm/mmu_context.h>
@@ -1207,6 +1208,142 @@ static void release_gmap_shadow(struct vsie_page *vsie_page)
 	prefix_unmapped(vsie_page);
 }
 
+/**
+ * gmap_find_shadow - find a specific asce in the list of shadow tables
+ * @parent: pointer to the parent gmap
+ * @asce: ASCE for which the shadow table is created
+ * @edat_level: edat level to be used for the shadow translation
+ *
+ * Returns the pointer to a gmap if a shadow table with the given asce is
+ * already available, ERR_PTR(-EAGAIN) if another one is just being created,
+ * otherwise NULL
+ */
+static struct gmap *gmap_find_shadow(struct gmap *parent, unsigned long asce,
+				     int edat_level)
+{
+	struct gmap *sg;
+
+	list_for_each_entry(sg, &parent->children, list) {
+		if (sg->orig_asce != asce || sg->edat_level != edat_level ||
+		    sg->removed)
+			continue;
+		if (!sg->initialized)
+			return ERR_PTR(-EAGAIN);
+		refcount_inc(&sg->ref_count);
+		return sg;
+	}
+	return NULL;
+}
+
+/**
+ * gmap_shadow_valid - check if a shadow guest address space matches the
+ *                     given properties and is still valid
+ * @sg: pointer to the shadow guest address space structure
+ * @asce: ASCE for which the shadow table is requested
+ * @edat_level: edat level to be used for the shadow translation
+ *
+ * Returns 1 if the gmap shadow is still valid and matches the given
+ * properties; the caller can continue using it. Returns 0 otherwise; in
+ * that case the caller has to request a new shadow gmap.
+ *
+ */
+static int gmap_shadow_valid(struct gmap *sg, unsigned long asce, int edat_level)
+{
+	if (sg->removed)
+		return 0;
+	return sg->orig_asce == asce && sg->edat_level == edat_level;
+}
+
+/**
+ * gmap_shadow - create/find a shadow guest address space
+ * @parent: pointer to the parent gmap
+ * @asce: ASCE for which the shadow table is created
+ * @edat_level: edat level to be used for the shadow translation
+ *
+ * The pages of the top level page table referred to by the asce parameter
+ * will be set to read-only and marked in the PGSTEs of the kvm process.
+ * The shadow table will be removed automatically on any change to the
+ * PTE mapping for the source table.
+ *
+ * Returns a guest address space structure, ERR_PTR(-ENOMEM) if out of memory,
+ * ERR_PTR(-EAGAIN) if the caller has to retry and ERR_PTR(-EFAULT) if the
+ * parent gmap table could not be protected.
+ */
+struct gmap *gmap_shadow(struct gmap *parent, unsigned long asce,
+			 int edat_level)
+{
+	struct gmap *sg, *new;
+	unsigned long limit;
+	int rc;
+
+	if (KVM_BUG_ON(parent->mm->context.allow_gmap_hpage_1m, (struct kvm *)parent->private) ||
+	    KVM_BUG_ON(gmap_is_shadow(parent), (struct kvm *)parent->private))
+		return ERR_PTR(-EFAULT);
+	spin_lock(&parent->shadow_lock);
+	sg = gmap_find_shadow(parent, asce, edat_level);
+	spin_unlock(&parent->shadow_lock);
+	if (sg)
+		return sg;
+	/* Create a new shadow gmap */
+	limit = -1UL >> (33 - (((asce & _ASCE_TYPE_MASK) >> 2) * 11));
+	if (asce & _ASCE_REAL_SPACE)
+		limit = -1UL;
+	new = gmap_alloc(limit);
+	if (!new)
+		return ERR_PTR(-ENOMEM);
+	new->mm = parent->mm;
+	new->parent = gmap_get(parent);
+	new->private = parent->private;
+	new->orig_asce = asce;
+	new->edat_level = edat_level;
+	new->initialized = false;
+	spin_lock(&parent->shadow_lock);
+	/* Recheck if another CPU created the same shadow */
+	sg = gmap_find_shadow(parent, asce, edat_level);
+	if (sg) {
+		spin_unlock(&parent->shadow_lock);
+		gmap_free(new);
+		return sg;
+	}
+	if (asce & _ASCE_REAL_SPACE) {
+		/* only allow one real-space gmap shadow */
+		list_for_each_entry(sg, &parent->children, list) {
+			if (sg->orig_asce & _ASCE_REAL_SPACE) {
+				spin_lock(&sg->guest_table_lock);
+				gmap_unshadow(sg);
+				spin_unlock(&sg->guest_table_lock);
+				list_del(&sg->list);
+				gmap_put(sg);
+				break;
+			}
+		}
+	}
+	refcount_set(&new->ref_count, 2);
+	list_add(&new->list, &parent->children);
+	if (asce & _ASCE_REAL_SPACE) {
+		/* nothing to protect, return right away */
+		new->initialized = true;
+		spin_unlock(&parent->shadow_lock);
+		return new;
+	}
+	spin_unlock(&parent->shadow_lock);
+	/* protect after insertion, so it will get properly invalidated */
+	mmap_read_lock(parent->mm);
+	rc = __kvm_s390_mprotect_many(parent, asce & _ASCE_ORIGIN,
+				      ((asce & _ASCE_TABLE_LENGTH) + 1),
+				      PROT_READ, GMAP_NOTIFY_SHADOW);
+	mmap_read_unlock(parent->mm);
+	spin_lock(&parent->shadow_lock);
+	new->initialized = true;
+	if (rc) {
+		list_del(&new->list);
+		gmap_free(new);
+		new = ERR_PTR(rc);
+	}
+	spin_unlock(&parent->shadow_lock);
+	return new;
+}
+
 static int acquire_gmap_shadow(struct kvm_vcpu *vcpu,
 			       struct vsie_page *vsie_page)
 {
diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index ae6ccf034378..12d681b657b4 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -43,7 +43,7 @@ static struct page *gmap_alloc_crst(void)
  *
  * Returns a guest address space structure.
  */
-static struct gmap *gmap_alloc(unsigned long limit)
+struct gmap *gmap_alloc(unsigned long limit)
 {
 	struct gmap *gmap;
 	struct page *page;
@@ -97,6 +97,7 @@ static struct gmap *gmap_alloc(unsigned long limit)
 out:
 	return NULL;
 }
+EXPORT_SYMBOL_GPL(gmap_alloc);
 
 /**
  * gmap_create - create a guest address space
@@ -191,7 +192,7 @@ static void gmap_rmap_radix_tree_free(struct radix_tree_root *root)
  *
  * No locks required. There are no references to this gmap anymore.
  */
-static void gmap_free(struct gmap *gmap)
+void gmap_free(struct gmap *gmap)
 {
 	struct page *page, *next;
 
@@ -218,6 +219,7 @@ static void gmap_free(struct gmap *gmap)
 
 	kfree(gmap);
 }
+EXPORT_SYMBOL_GPL(gmap_free);
 
 /**
  * gmap_get - increase reference counter for guest address space
@@ -958,86 +960,36 @@ static int gmap_protect_pte(struct gmap *gmap, unsigned long gaddr,
  * @prot: indicates access rights: PROT_NONE, PROT_READ or PROT_WRITE
  * @bits: pgste notification bits to set
  *
- * Returns 0 if successfully protected, -ENOMEM if out of memory and
- * -EFAULT if gaddr is invalid (or mapping for shadows is missing).
+ * Returns 0 if a single page was successfully protected, 1 if an entire
+ * segment (pmd) was protected, -ENOMEM if out of memory, -EFAULT if
+ * gaddr is invalid (or the mapping for shadows is missing), and -EAGAIN
+ * if the guest mapping is missing and should be fixed up by the caller.
  *
  * Called with sg->mm->mmap_lock in read.
  */
-static int gmap_protect_range(struct gmap *gmap, unsigned long gaddr,
-			      unsigned long len, int prot, unsigned long bits)
+int gmap_protect_one(struct gmap *gmap, unsigned long gaddr, int prot, unsigned long bits)
 {
-	unsigned long vmaddr, dist;
 	pmd_t *pmdp;
-	int rc;
+	int rc = 0;
 
 	BUG_ON(gmap_is_shadow(gmap));
-	while (len) {
-		rc = -EAGAIN;
-		pmdp = gmap_pmd_op_walk(gmap, gaddr);
-		if (pmdp) {
-			if (!pmd_leaf(*pmdp)) {
-				rc = gmap_protect_pte(gmap, gaddr, pmdp, prot,
-						      bits);
-				if (!rc) {
-					len -= PAGE_SIZE;
-					gaddr += PAGE_SIZE;
-				}
-			} else {
-				rc = gmap_protect_pmd(gmap, gaddr, pmdp, prot,
-						      bits);
-				if (!rc) {
-					dist = HPAGE_SIZE - (gaddr & ~HPAGE_MASK);
-					len = len < dist ? 0 : len - dist;
-					gaddr = (gaddr & HPAGE_MASK) + HPAGE_SIZE;
-				}
-			}
-			gmap_pmd_op_end(gmap, pmdp);
-		}
-		if (rc) {
-			if (rc == -EINVAL)
-				return rc;
 
-			/* -EAGAIN, fixup of userspace mm and gmap */
-			vmaddr = __gmap_translate(gmap, gaddr);
-			if (IS_ERR_VALUE(vmaddr))
-				return vmaddr;
-			rc = gmap_pte_op_fixup(gmap, gaddr, vmaddr, prot);
-			if (rc)
-				return rc;
-		}
-	}
-	return 0;
-}
+	pmdp = gmap_pmd_op_walk(gmap, gaddr);
+	if (!pmdp)
+		return -EAGAIN;
 
-/**
- * gmap_mprotect_notify - change access rights for a range of ptes and
- *                        call the notifier if any pte changes again
- * @gmap: pointer to guest mapping meta data structure
- * @gaddr: virtual address in the guest address space
- * @len: size of area
- * @prot: indicates access rights: PROT_NONE, PROT_READ or PROT_WRITE
- *
- * Returns 0 if for each page in the given range a gmap mapping exists,
- * the new access rights could be set and the notifier could be armed.
- * If the gmap mapping is missing for one or more pages -EFAULT is
- * returned. If no memory could be allocated -ENOMEM is returned.
- * This function establishes missing page table entries.
- */
-int gmap_mprotect_notify(struct gmap *gmap, unsigned long gaddr,
-			 unsigned long len, int prot)
-{
-	int rc;
+	if (!pmd_leaf(*pmdp)) {
+		rc = gmap_protect_pte(gmap, gaddr, pmdp, prot, bits);
+	} else {
+		rc = gmap_protect_pmd(gmap, gaddr, pmdp, prot, bits);
+		if (!rc)
+			rc = 1;
+	}
+	gmap_pmd_op_end(gmap, pmdp);
 
-	if ((gaddr & ~PAGE_MASK) || (len & ~PAGE_MASK) || gmap_is_shadow(gmap))
-		return -EINVAL;
-	if (!MACHINE_HAS_ESOP && prot == PROT_READ)
-		return -EINVAL;
-	mmap_read_lock(gmap->mm);
-	rc = gmap_protect_range(gmap, gaddr, len, prot, GMAP_NOTIFY_MPROT);
-	mmap_read_unlock(gmap->mm);
 	return rc;
 }
-EXPORT_SYMBOL_GPL(gmap_mprotect_notify);
+EXPORT_SYMBOL_GPL(gmap_protect_one);
 
 /**
  * gmap_read_table - get an unsigned long value from a guest page table using
@@ -1488,7 +1440,7 @@ static void __gmap_unshadow_r1t(struct gmap *sg, unsigned long raddr,
  *
  * Called with sg->guest_table_lock
  */
-static void gmap_unshadow(struct gmap *sg)
+void gmap_unshadow(struct gmap *sg)
 {
 	unsigned long *table;
 
@@ -1514,143 +1466,7 @@ static void gmap_unshadow(struct gmap *sg)
 		break;
 	}
 }
-
-/**
- * gmap_find_shadow - find a specific asce in the list of shadow tables
- * @parent: pointer to the parent gmap
- * @asce: ASCE for which the shadow table is created
- * @edat_level: edat level to be used for the shadow translation
- *
- * Returns the pointer to a gmap if a shadow table with the given asce is
- * already available, ERR_PTR(-EAGAIN) if another one is just being created,
- * otherwise NULL
- */
-static struct gmap *gmap_find_shadow(struct gmap *parent, unsigned long asce,
-				     int edat_level)
-{
-	struct gmap *sg;
-
-	list_for_each_entry(sg, &parent->children, list) {
-		if (sg->orig_asce != asce || sg->edat_level != edat_level ||
-		    sg->removed)
-			continue;
-		if (!sg->initialized)
-			return ERR_PTR(-EAGAIN);
-		refcount_inc(&sg->ref_count);
-		return sg;
-	}
-	return NULL;
-}
-
-/**
- * gmap_shadow_valid - check if a shadow guest address space matches the
- *                     given properties and is still valid
- * @sg: pointer to the shadow guest address space structure
- * @asce: ASCE for which the shadow table is requested
- * @edat_level: edat level to be used for the shadow translation
- *
- * Returns 1 if the gmap shadow is still valid and matches the given
- * properties, the caller can continue using it. Returns 0 otherwise, the
- * caller has to request a new shadow gmap in this case.
- *
- */
-int gmap_shadow_valid(struct gmap *sg, unsigned long asce, int edat_level)
-{
-	if (sg->removed)
-		return 0;
-	return sg->orig_asce == asce && sg->edat_level == edat_level;
-}
-EXPORT_SYMBOL_GPL(gmap_shadow_valid);
-
-/**
- * gmap_shadow - create/find a shadow guest address space
- * @parent: pointer to the parent gmap
- * @asce: ASCE for which the shadow table is created
- * @edat_level: edat level to be used for the shadow translation
- *
- * The pages of the top level page table referred by the asce parameter
- * will be set to read-only and marked in the PGSTEs of the kvm process.
- * The shadow table will be removed automatically on any change to the
- * PTE mapping for the source table.
- *
- * Returns a guest address space structure, ERR_PTR(-ENOMEM) if out of memory,
- * ERR_PTR(-EAGAIN) if the caller has to retry and ERR_PTR(-EFAULT) if the
- * parent gmap table could not be protected.
- */
-struct gmap *gmap_shadow(struct gmap *parent, unsigned long asce,
-			 int edat_level)
-{
-	struct gmap *sg, *new;
-	unsigned long limit;
-	int rc;
-
-	BUG_ON(parent->mm->context.allow_gmap_hpage_1m);
-	BUG_ON(gmap_is_shadow(parent));
-	spin_lock(&parent->shadow_lock);
-	sg = gmap_find_shadow(parent, asce, edat_level);
-	spin_unlock(&parent->shadow_lock);
-	if (sg)
-		return sg;
-	/* Create a new shadow gmap */
-	limit = -1UL >> (33 - (((asce & _ASCE_TYPE_MASK) >> 2) * 11));
-	if (asce & _ASCE_REAL_SPACE)
-		limit = -1UL;
-	new = gmap_alloc(limit);
-	if (!new)
-		return ERR_PTR(-ENOMEM);
-	new->mm = parent->mm;
-	new->parent = gmap_get(parent);
-	new->private = parent->private;
-	new->orig_asce = asce;
-	new->edat_level = edat_level;
-	new->initialized = false;
-	spin_lock(&parent->shadow_lock);
-	/* Recheck if another CPU created the same shadow */
-	sg = gmap_find_shadow(parent, asce, edat_level);
-	if (sg) {
-		spin_unlock(&parent->shadow_lock);
-		gmap_free(new);
-		return sg;
-	}
-	if (asce & _ASCE_REAL_SPACE) {
-		/* only allow one real-space gmap shadow */
-		list_for_each_entry(sg, &parent->children, list) {
-			if (sg->orig_asce & _ASCE_REAL_SPACE) {
-				spin_lock(&sg->guest_table_lock);
-				gmap_unshadow(sg);
-				spin_unlock(&sg->guest_table_lock);
-				list_del(&sg->list);
-				gmap_put(sg);
-				break;
-			}
-		}
-	}
-	refcount_set(&new->ref_count, 2);
-	list_add(&new->list, &parent->children);
-	if (asce & _ASCE_REAL_SPACE) {
-		/* nothing to protect, return right away */
-		new->initialized = true;
-		spin_unlock(&parent->shadow_lock);
-		return new;
-	}
-	spin_unlock(&parent->shadow_lock);
-	/* protect after insertion, so it will get properly invalidated */
-	mmap_read_lock(parent->mm);
-	rc = gmap_protect_range(parent, asce & _ASCE_ORIGIN,
-				((asce & _ASCE_TABLE_LENGTH) + 1) * PAGE_SIZE,
-				PROT_READ, GMAP_NOTIFY_SHADOW);
-	mmap_read_unlock(parent->mm);
-	spin_lock(&parent->shadow_lock);
-	new->initialized = true;
-	if (rc) {
-		list_del(&new->list);
-		gmap_free(new);
-		new = ERR_PTR(rc);
-	}
-	spin_unlock(&parent->shadow_lock);
-	return new;
-}
-EXPORT_SYMBOL_GPL(gmap_shadow);
+EXPORT_SYMBOL_GPL(gmap_unshadow);
 
 /**
  * gmap_shadow_r2t - create an empty shadow region 2 table
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH v1 08/13] KVM: s390: stop using page->index for non-shadow gmaps
  2025-01-08 18:14 [PATCH v1 00/13] KVM: s390: Stop using page->index and other things Claudio Imbrenda
                   ` (6 preceding siblings ...)
  2025-01-08 18:14 ` [PATCH v1 07/13] KVM: s390: move some gmap shadowing functions away from mm/gmap.c Claudio Imbrenda
@ 2025-01-08 18:14 ` Claudio Imbrenda
  2025-01-08 18:14 ` [PATCH v1 09/13] KVM: s390: stop using lists to keep track of used dat tables Claudio Imbrenda
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 39+ messages in thread
From: Claudio Imbrenda @ 2025-01-08 18:14 UTC (permalink / raw)
  To: kvm
  Cc: linux-s390, frankja, borntraeger, schlameuss, david, willy, hca,
	svens, agordeev, gor, nrb, nsg

The host_to_guest radix tree will now map userspace addresses to guest
addresses, instead of userspace addresses to segment tables.

When segment tables and page tables are needed, they are found using an
additional gmap_table_walk().

This gets rid of all usage of page->index for non-shadow gmaps.

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
---
 arch/s390/mm/gmap.c | 96 +++++++++++++++++++++------------------------
 1 file changed, 45 insertions(+), 51 deletions(-)

diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index 12d681b657b4..c0f79c14277e 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -24,8 +24,11 @@
 #include <asm/page.h>
 #include <asm/tlb.h>
 
+#define GADDR_VALID(gaddr) ((gaddr) & 1)
 #define GMAP_SHADOW_FAKE_TABLE 1ULL
 
+static inline unsigned long *gmap_table_walk(struct gmap *gmap, unsigned long gaddr, int level);
+
 static struct page *gmap_alloc_crst(void)
 {
 	struct page *page;
@@ -82,7 +85,6 @@ struct gmap *gmap_alloc(unsigned long limit)
 	page = gmap_alloc_crst();
 	if (!page)
 		goto out_free;
-	page->index = 0;
 	list_add(&page->lru, &gmap->crst_list);
 	table = page_to_virt(page);
 	crst_table_init(table, etype);
@@ -303,7 +305,6 @@ static int gmap_alloc_table(struct gmap *gmap, unsigned long *table,
 		list_add(&page->lru, &gmap->crst_list);
 		*table = __pa(new) | _REGION_ENTRY_LENGTH |
 			(*table & _REGION_ENTRY_TYPE_MASK);
-		page->index = gaddr;
 		page = NULL;
 	}
 	spin_unlock(&gmap->guest_table_lock);
@@ -312,21 +313,23 @@ static int gmap_alloc_table(struct gmap *gmap, unsigned long *table,
 	return 0;
 }
 
-/**
- * __gmap_segment_gaddr - find virtual address from segment pointer
- * @entry: pointer to a segment table entry in the guest address space
- *
- * Returns the virtual address in the guest address space for the segment
- */
-static unsigned long __gmap_segment_gaddr(unsigned long *entry)
+static unsigned long host_to_guest_lookup(struct gmap *gmap, unsigned long vmaddr)
 {
-	struct page *page;
-	unsigned long offset;
+	return (unsigned long)radix_tree_lookup(&gmap->host_to_guest, vmaddr >> PMD_SHIFT);
+}
 
-	offset = (unsigned long) entry / sizeof(unsigned long);
-	offset = (offset & (PTRS_PER_PMD - 1)) * PMD_SIZE;
-	page = pmd_pgtable_page((pmd_t *) entry);
-	return page->index + offset;
+static unsigned long host_to_guest_delete(struct gmap *gmap, unsigned long vmaddr)
+{
+	return (unsigned long)radix_tree_delete(&gmap->host_to_guest, vmaddr >> PMD_SHIFT);
+}
+
+static pmd_t *host_to_guest_pmd_delete(struct gmap *gmap, unsigned long vmaddr,
+				       unsigned long *gaddr)
+{
+	*gaddr = host_to_guest_delete(gmap, vmaddr);
+	if (GADDR_VALID(*gaddr))
+		return (pmd_t *)gmap_table_walk(gmap, *gaddr, 1);
+	return NULL;
 }
 
 /**
@@ -338,16 +341,19 @@ static unsigned long __gmap_segment_gaddr(unsigned long *entry)
  */
 static int __gmap_unlink_by_vmaddr(struct gmap *gmap, unsigned long vmaddr)
 {
-	unsigned long *entry;
+	unsigned long gaddr;
 	int flush = 0;
+	pmd_t *pmdp;
 
 	BUG_ON(gmap_is_shadow(gmap));
 	spin_lock(&gmap->guest_table_lock);
-	entry = radix_tree_delete(&gmap->host_to_guest, vmaddr >> PMD_SHIFT);
-	if (entry) {
-		flush = (*entry != _SEGMENT_ENTRY_EMPTY);
-		*entry = _SEGMENT_ENTRY_EMPTY;
+
+	pmdp = host_to_guest_pmd_delete(gmap, vmaddr, &gaddr);
+	if (pmdp) {
+		flush = (pmd_val(*pmdp) != _SEGMENT_ENTRY_EMPTY);
+		*pmdp = __pmd(_SEGMENT_ENTRY_EMPTY);
 	}
+
 	spin_unlock(&gmap->guest_table_lock);
 	return flush;
 }
@@ -564,7 +570,8 @@ int __gmap_link(struct gmap *gmap, unsigned long gaddr, unsigned long vmaddr)
 	spin_lock(&gmap->guest_table_lock);
 	if (*table == _SEGMENT_ENTRY_EMPTY) {
 		rc = radix_tree_insert(&gmap->host_to_guest,
-				       vmaddr >> PMD_SHIFT, table);
+				       vmaddr >> PMD_SHIFT,
+				       (void *)((gaddr & HPAGE_MASK) | 1));
 		if (!rc) {
 			if (pmd_leaf(*pmd)) {
 				*table = (pmd_val(*pmd) &
@@ -1991,7 +1998,6 @@ void ptep_notify(struct mm_struct *mm, unsigned long vmaddr,
 		 pte_t *pte, unsigned long bits)
 {
 	unsigned long offset, gaddr = 0;
-	unsigned long *table;
 	struct gmap *gmap, *sg, *next;
 
 	offset = ((unsigned long) pte) & (255 * sizeof(pte_t));
@@ -1999,12 +2005,9 @@ void ptep_notify(struct mm_struct *mm, unsigned long vmaddr,
 	rcu_read_lock();
 	list_for_each_entry_rcu(gmap, &mm->context.gmap_list, list) {
 		spin_lock(&gmap->guest_table_lock);
-		table = radix_tree_lookup(&gmap->host_to_guest,
-					  vmaddr >> PMD_SHIFT);
-		if (table)
-			gaddr = __gmap_segment_gaddr(table) + offset;
+		gaddr = host_to_guest_lookup(gmap, vmaddr) + offset;
 		spin_unlock(&gmap->guest_table_lock);
-		if (!table)
+		if (!GADDR_VALID(gaddr))
 			continue;
 
 		if (!list_empty(&gmap->children) && (bits & PGSTE_VSIE_BIT)) {
@@ -2064,10 +2067,8 @@ static void gmap_pmdp_clear(struct mm_struct *mm, unsigned long vmaddr,
 	rcu_read_lock();
 	list_for_each_entry_rcu(gmap, &mm->context.gmap_list, list) {
 		spin_lock(&gmap->guest_table_lock);
-		pmdp = (pmd_t *)radix_tree_delete(&gmap->host_to_guest,
-						  vmaddr >> PMD_SHIFT);
+		pmdp = host_to_guest_pmd_delete(gmap, vmaddr, &gaddr);
 		if (pmdp) {
-			gaddr = __gmap_segment_gaddr((unsigned long *)pmdp);
 			pmdp_notify_gmap(gmap, pmdp, gaddr);
 			WARN_ON(pmd_val(*pmdp) & ~(_SEGMENT_ENTRY_HARDWARE_BITS_LARGE |
 						   _SEGMENT_ENTRY_GMAP_UC |
@@ -2111,28 +2112,25 @@ EXPORT_SYMBOL_GPL(gmap_pmdp_csp);
  */
 void gmap_pmdp_idte_local(struct mm_struct *mm, unsigned long vmaddr)
 {
-	unsigned long *entry, gaddr;
+	unsigned long gaddr;
 	struct gmap *gmap;
 	pmd_t *pmdp;
 
 	rcu_read_lock();
 	list_for_each_entry_rcu(gmap, &mm->context.gmap_list, list) {
 		spin_lock(&gmap->guest_table_lock);
-		entry = radix_tree_delete(&gmap->host_to_guest,
-					  vmaddr >> PMD_SHIFT);
-		if (entry) {
-			pmdp = (pmd_t *)entry;
-			gaddr = __gmap_segment_gaddr(entry);
+		pmdp = host_to_guest_pmd_delete(gmap, vmaddr, &gaddr);
+		if (pmdp) {
 			pmdp_notify_gmap(gmap, pmdp, gaddr);
-			WARN_ON(*entry & ~(_SEGMENT_ENTRY_HARDWARE_BITS_LARGE |
-					   _SEGMENT_ENTRY_GMAP_UC |
-					   _SEGMENT_ENTRY));
+			WARN_ON(pmd_val(*pmdp) & ~(_SEGMENT_ENTRY_HARDWARE_BITS_LARGE |
+						   _SEGMENT_ENTRY_GMAP_UC |
+						   _SEGMENT_ENTRY));
 			if (MACHINE_HAS_TLB_GUEST)
 				__pmdp_idte(gaddr, pmdp, IDTE_GUEST_ASCE,
 					    gmap->asce, IDTE_LOCAL);
 			else if (MACHINE_HAS_IDTE)
 				__pmdp_idte(gaddr, pmdp, 0, 0, IDTE_LOCAL);
-			*entry = _SEGMENT_ENTRY_EMPTY;
+			*pmdp = __pmd(_SEGMENT_ENTRY_EMPTY);
 		}
 		spin_unlock(&gmap->guest_table_lock);
 	}
@@ -2147,22 +2145,19 @@ EXPORT_SYMBOL_GPL(gmap_pmdp_idte_local);
  */
 void gmap_pmdp_idte_global(struct mm_struct *mm, unsigned long vmaddr)
 {
-	unsigned long *entry, gaddr;
+	unsigned long gaddr;
 	struct gmap *gmap;
 	pmd_t *pmdp;
 
 	rcu_read_lock();
 	list_for_each_entry_rcu(gmap, &mm->context.gmap_list, list) {
 		spin_lock(&gmap->guest_table_lock);
-		entry = radix_tree_delete(&gmap->host_to_guest,
-					  vmaddr >> PMD_SHIFT);
-		if (entry) {
-			pmdp = (pmd_t *)entry;
-			gaddr = __gmap_segment_gaddr(entry);
+		pmdp = host_to_guest_pmd_delete(gmap, vmaddr, &gaddr);
+		if (pmdp) {
 			pmdp_notify_gmap(gmap, pmdp, gaddr);
-			WARN_ON(*entry & ~(_SEGMENT_ENTRY_HARDWARE_BITS_LARGE |
-					   _SEGMENT_ENTRY_GMAP_UC |
-					   _SEGMENT_ENTRY));
+			WARN_ON(pmd_val(*pmdp) & ~(_SEGMENT_ENTRY_HARDWARE_BITS_LARGE |
+						   _SEGMENT_ENTRY_GMAP_UC |
+						   _SEGMENT_ENTRY));
 			if (MACHINE_HAS_TLB_GUEST)
 				__pmdp_idte(gaddr, pmdp, IDTE_GUEST_ASCE,
 					    gmap->asce, IDTE_GLOBAL);
@@ -2170,7 +2165,7 @@ void gmap_pmdp_idte_global(struct mm_struct *mm, unsigned long vmaddr)
 				__pmdp_idte(gaddr, pmdp, 0, 0, IDTE_GLOBAL);
 			else
 				__pmdp_csp(pmdp);
-			*entry = _SEGMENT_ENTRY_EMPTY;
+			*pmdp = __pmd(_SEGMENT_ENTRY_EMPTY);
 		}
 		spin_unlock(&gmap->guest_table_lock);
 	}
@@ -2686,7 +2681,6 @@ int s390_replace_asce(struct gmap *gmap)
 	page = gmap_alloc_crst();
 	if (!page)
 		return -ENOMEM;
-	page->index = 0;
 	table = page_to_virt(page);
 	memcpy(table, gmap->table, 1UL << (CRST_ALLOC_ORDER + PAGE_SHIFT));
 
-- 
2.47.1



* [PATCH v1 09/13] KVM: s390: stop using lists to keep track of used dat tables
  2025-01-08 18:14 [PATCH v1 00/13] KVM: s390: Stop using page->index and other things Claudio Imbrenda
                   ` (7 preceding siblings ...)
  2025-01-08 18:14 ` [PATCH v1 08/13] KVM: s390: stop using page->index for non-shadow gmaps Claudio Imbrenda
@ 2025-01-08 18:14 ` Claudio Imbrenda
  2025-01-15  9:01   ` Janosch Frank
  2025-01-08 18:14 ` [PATCH v1 10/13] KVM: s390: move gmap_shadow_pgt_lookup() into kvm Claudio Imbrenda
                   ` (3 subsequent siblings)
  12 siblings, 1 reply; 39+ messages in thread
From: Claudio Imbrenda @ 2025-01-08 18:14 UTC (permalink / raw)
  To: kvm
  Cc: linux-s390, frankja, borntraeger, schlameuss, david, willy, hca,
	svens, agordeev, gor, nrb, nsg

Until now, every dat table allocated to map a guest was put in a
linked list. The page->lru field of struct page was used to keep track
of which pages were being used, and when the gmap was torn down, the
list was walked and all pages were freed.

This patch gets rid of the usage of page->lru. Page tables are now
freed by recursively walking the dat table tree.

Since s390_unlist_old_asce() is now unused, remove it.

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
---
 arch/s390/include/asm/gmap.h |  3 --
 arch/s390/mm/gmap.c          | 94 +++++++++---------------------------
 2 files changed, 23 insertions(+), 74 deletions(-)

diff --git a/arch/s390/include/asm/gmap.h b/arch/s390/include/asm/gmap.h
index 8637ebbf618a..65d3565b1e86 100644
--- a/arch/s390/include/asm/gmap.h
+++ b/arch/s390/include/asm/gmap.h
@@ -45,7 +45,6 @@
  */
 struct gmap {
 	struct list_head list;
-	struct list_head crst_list;
 	struct mm_struct *mm;
 	struct radix_tree_root guest_to_host;
 	struct radix_tree_root host_to_guest;
@@ -61,7 +60,6 @@ struct gmap {
 	/* Additional data for shadow guest address spaces */
 	struct radix_tree_root host_to_rmap;
 	struct list_head children;
-	struct list_head pt_list;
 	spinlock_t shadow_lock;
 	struct gmap *parent;
 	unsigned long orig_asce;
@@ -143,7 +141,6 @@ int gmap_protect_one(struct gmap *gmap, unsigned long gaddr, int prot, unsigned
 void gmap_sync_dirty_log_pmd(struct gmap *gmap, unsigned long dirty_bitmap[4],
 			     unsigned long gaddr, unsigned long vmaddr);
 int s390_disable_cow_sharing(void);
-void s390_unlist_old_asce(struct gmap *gmap);
 int s390_replace_asce(struct gmap *gmap);
 void s390_uv_destroy_pfns(unsigned long count, unsigned long *pfns);
 int __s390_uv_destroy_range(struct mm_struct *mm, unsigned long start,
diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index c0f79c14277e..57a1ee47af73 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -73,9 +73,7 @@ struct gmap *gmap_alloc(unsigned long limit)
 	gmap = kzalloc(sizeof(struct gmap), GFP_KERNEL_ACCOUNT);
 	if (!gmap)
 		goto out;
-	INIT_LIST_HEAD(&gmap->crst_list);
 	INIT_LIST_HEAD(&gmap->children);
-	INIT_LIST_HEAD(&gmap->pt_list);
 	INIT_RADIX_TREE(&gmap->guest_to_host, GFP_KERNEL_ACCOUNT);
 	INIT_RADIX_TREE(&gmap->host_to_guest, GFP_ATOMIC | __GFP_ACCOUNT);
 	INIT_RADIX_TREE(&gmap->host_to_rmap, GFP_ATOMIC | __GFP_ACCOUNT);
@@ -85,7 +83,6 @@ struct gmap *gmap_alloc(unsigned long limit)
 	page = gmap_alloc_crst();
 	if (!page)
 		goto out_free;
-	list_add(&page->lru, &gmap->crst_list);
 	table = page_to_virt(page);
 	crst_table_init(table, etype);
 	gmap->table = table;
@@ -188,6 +185,27 @@ static void gmap_rmap_radix_tree_free(struct radix_tree_root *root)
 	} while (nr > 0);
 }
 
+static void gmap_free_crst(unsigned long *table, bool free_ptes)
+{
+	bool is_segment = (table[0] & _SEGMENT_ENTRY_TYPE_MASK) == 0;
+	int i;
+
+	if (is_segment) {
+		if (!free_ptes)
+			goto out;
+		for (i = 0; i < _CRST_ENTRIES; i++)
+			if (!(table[i] & _SEGMENT_ENTRY_INVALID))
+				page_table_free_pgste(page_ptdesc(phys_to_page(table[i])));
+	} else {
+		for (i = 0; i < _CRST_ENTRIES; i++)
+			if (!(table[i] & _REGION_ENTRY_INVALID))
+				gmap_free_crst(__va(table[i] & PAGE_MASK), free_ptes);
+	}
+
+out:
+	free_pages((unsigned long)table, CRST_ALLOC_ORDER);
+}
+
 /**
  * gmap_free - free a guest address space
  * @gmap: pointer to the guest address space structure
@@ -196,24 +214,17 @@ static void gmap_rmap_radix_tree_free(struct radix_tree_root *root)
  */
 void gmap_free(struct gmap *gmap)
 {
-	struct page *page, *next;
-
 	/* Flush tlb of all gmaps (if not already done for shadows) */
 	if (!(gmap_is_shadow(gmap) && gmap->removed))
 		gmap_flush_tlb(gmap);
 	/* Free all segment & region tables. */
-	list_for_each_entry_safe(page, next, &gmap->crst_list, lru)
-		__free_pages(page, CRST_ALLOC_ORDER);
+	gmap_free_crst(gmap->table, gmap_is_shadow(gmap));
+
 	gmap_radix_tree_free(&gmap->guest_to_host);
 	gmap_radix_tree_free(&gmap->host_to_guest);
 
 	/* Free additional data for a shadow gmap */
 	if (gmap_is_shadow(gmap)) {
-		struct ptdesc *ptdesc, *n;
-
-		/* Free all page tables. */
-		list_for_each_entry_safe(ptdesc, n, &gmap->pt_list, pt_list)
-			page_table_free_pgste(ptdesc);
 		gmap_rmap_radix_tree_free(&gmap->host_to_rmap);
 		/* Release reference to the parent */
 		gmap_put(gmap->parent);
@@ -302,7 +313,6 @@ static int gmap_alloc_table(struct gmap *gmap, unsigned long *table,
 	crst_table_init(new, init);
 	spin_lock(&gmap->guest_table_lock);
 	if (*table & _REGION_ENTRY_INVALID) {
-		list_add(&page->lru, &gmap->crst_list);
 		*table = __pa(new) | _REGION_ENTRY_LENGTH |
 			(*table & _REGION_ENTRY_TYPE_MASK);
 		page = NULL;
@@ -1230,7 +1240,6 @@ static void gmap_unshadow_pgt(struct gmap *sg, unsigned long raddr)
 	__gmap_unshadow_pgt(sg, raddr, __va(pgt));
 	/* Free page table */
 	ptdesc = page_ptdesc(phys_to_page(pgt));
-	list_del(&ptdesc->pt_list);
 	page_table_free_pgste(ptdesc);
 }
 
@@ -1258,7 +1267,6 @@ static void __gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr,
 		__gmap_unshadow_pgt(sg, raddr, __va(pgt));
 		/* Free page table */
 		ptdesc = page_ptdesc(phys_to_page(pgt));
-		list_del(&ptdesc->pt_list);
 		page_table_free_pgste(ptdesc);
 	}
 }
@@ -1288,7 +1296,6 @@ static void gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr)
 	__gmap_unshadow_sgt(sg, raddr, __va(sgt));
 	/* Free segment table */
 	page = phys_to_page(sgt);
-	list_del(&page->lru);
 	__free_pages(page, CRST_ALLOC_ORDER);
 }
 
@@ -1316,7 +1323,6 @@ static void __gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr,
 		__gmap_unshadow_sgt(sg, raddr, __va(sgt));
 		/* Free segment table */
 		page = phys_to_page(sgt);
-		list_del(&page->lru);
 		__free_pages(page, CRST_ALLOC_ORDER);
 	}
 }
@@ -1346,7 +1352,6 @@ static void gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr)
 	__gmap_unshadow_r3t(sg, raddr, __va(r3t));
 	/* Free region 3 table */
 	page = phys_to_page(r3t);
-	list_del(&page->lru);
 	__free_pages(page, CRST_ALLOC_ORDER);
 }
 
@@ -1374,7 +1379,6 @@ static void __gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr,
 		__gmap_unshadow_r3t(sg, raddr, __va(r3t));
 		/* Free region 3 table */
 		page = phys_to_page(r3t);
-		list_del(&page->lru);
 		__free_pages(page, CRST_ALLOC_ORDER);
 	}
 }
@@ -1404,7 +1408,6 @@ static void gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr)
 	__gmap_unshadow_r2t(sg, raddr, __va(r2t));
 	/* Free region 2 table */
 	page = phys_to_page(r2t);
-	list_del(&page->lru);
 	__free_pages(page, CRST_ALLOC_ORDER);
 }
 
@@ -1436,7 +1439,6 @@ static void __gmap_unshadow_r1t(struct gmap *sg, unsigned long raddr,
 		r1t[i] = _REGION1_ENTRY_EMPTY;
 		/* Free region 2 table */
 		page = phys_to_page(r2t);
-		list_del(&page->lru);
 		__free_pages(page, CRST_ALLOC_ORDER);
 	}
 }
@@ -1531,7 +1533,6 @@ int gmap_shadow_r2t(struct gmap *sg, unsigned long saddr, unsigned long r2t,
 		 _REGION_ENTRY_TYPE_R1 | _REGION_ENTRY_INVALID;
 	if (sg->edat_level >= 1)
 		*table |= (r2t & _REGION_ENTRY_PROTECT);
-	list_add(&page->lru, &sg->crst_list);
 	if (fake) {
 		/* nothing to protect for fake tables */
 		*table &= ~_REGION_ENTRY_INVALID;
@@ -1615,7 +1616,6 @@ int gmap_shadow_r3t(struct gmap *sg, unsigned long saddr, unsigned long r3t,
 		 _REGION_ENTRY_TYPE_R2 | _REGION_ENTRY_INVALID;
 	if (sg->edat_level >= 1)
 		*table |= (r3t & _REGION_ENTRY_PROTECT);
-	list_add(&page->lru, &sg->crst_list);
 	if (fake) {
 		/* nothing to protect for fake tables */
 		*table &= ~_REGION_ENTRY_INVALID;
@@ -1699,7 +1699,6 @@ int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt,
 		 _REGION_ENTRY_TYPE_R3 | _REGION_ENTRY_INVALID;
 	if (sg->edat_level >= 1)
 		*table |= sgt & _REGION_ENTRY_PROTECT;
-	list_add(&page->lru, &sg->crst_list);
 	if (fake) {
 		/* nothing to protect for fake tables */
 		*table &= ~_REGION_ENTRY_INVALID;
@@ -1820,7 +1819,6 @@ int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt,
 	/* mark as invalid as long as the parent table is not protected */
 	*table = (unsigned long) s_pgt | _SEGMENT_ENTRY |
 		 (pgt & _SEGMENT_ENTRY_PROTECT) | _SEGMENT_ENTRY_INVALID;
-	list_add(&ptdesc->pt_list, &sg->pt_list);
 	if (fake) {
 		/* nothing to protect for fake tables */
 		*table &= ~_SEGMENT_ENTRY_INVALID;
@@ -2610,49 +2608,6 @@ int __s390_uv_destroy_range(struct mm_struct *mm, unsigned long start,
 }
 EXPORT_SYMBOL_GPL(__s390_uv_destroy_range);
 
-/**
- * s390_unlist_old_asce - Remove the topmost level of page tables from the
- * list of page tables of the gmap.
- * @gmap: the gmap whose table is to be removed
- *
- * On s390x, KVM keeps a list of all pages containing the page tables of the
- * gmap (the CRST list). This list is used at tear down time to free all
- * pages that are now not needed anymore.
- *
- * This function removes the topmost page of the tree (the one pointed to by
- * the ASCE) from the CRST list.
- *
- * This means that it will not be freed when the VM is torn down, and needs
- * to be handled separately by the caller, unless a leak is actually
- * intended. Notice that this function will only remove the page from the
- * list, the page will still be used as a top level page table (and ASCE).
- */
-void s390_unlist_old_asce(struct gmap *gmap)
-{
-	struct page *old;
-
-	old = virt_to_page(gmap->table);
-	spin_lock(&gmap->guest_table_lock);
-	list_del(&old->lru);
-	/*
-	 * Sometimes the topmost page might need to be "removed" multiple
-	 * times, for example if the VM is rebooted into secure mode several
-	 * times concurrently, or if s390_replace_asce fails after calling
-	 * s390_remove_old_asce and is attempted again later. In that case
-	 * the old asce has been removed from the list, and therefore it
-	 * will not be freed when the VM terminates, but the ASCE is still
-	 * in use and still pointed to.
-	 * A subsequent call to replace_asce will follow the pointer and try
-	 * to remove the same page from the list again.
-	 * Therefore it's necessary that the page of the ASCE has valid
-	 * pointers, so list_del can work (and do nothing) without
-	 * dereferencing stale or invalid pointers.
-	 */
-	INIT_LIST_HEAD(&old->lru);
-	spin_unlock(&gmap->guest_table_lock);
-}
-EXPORT_SYMBOL_GPL(s390_unlist_old_asce);
-
 /**
  * s390_replace_asce - Try to replace the current ASCE of a gmap with a copy
  * @gmap: the gmap whose ASCE needs to be replaced
@@ -2672,8 +2627,6 @@ int s390_replace_asce(struct gmap *gmap)
 	struct page *page;
 	void *table;
 
-	s390_unlist_old_asce(gmap);
-
 	/* Replacing segment type ASCEs would cause serious issues */
 	if ((gmap->asce & _ASCE_TYPE_MASK) == _ASCE_TYPE_SEGMENT)
 		return -EINVAL;
@@ -2690,7 +2643,6 @@ int s390_replace_asce(struct gmap *gmap)
 	 * it will be freed when the VM is torn down.
 	 */
 	spin_lock(&gmap->guest_table_lock);
-	list_add(&page->lru, &gmap->crst_list);
 	spin_unlock(&gmap->guest_table_lock);
 
 	/* Set new table origin while preserving existing ASCE control bits */
-- 
2.47.1



* [PATCH v1 10/13] KVM: s390: move gmap_shadow_pgt_lookup() into kvm
  2025-01-08 18:14 [PATCH v1 00/13] KVM: s390: Stop using page->index and other things Claudio Imbrenda
                   ` (8 preceding siblings ...)
  2025-01-08 18:14 ` [PATCH v1 09/13] KVM: s390: stop using lists to keep track of used dat tables Claudio Imbrenda
@ 2025-01-08 18:14 ` Claudio Imbrenda
  2025-01-08 18:14 ` [PATCH v1 11/13] KVM: s390: remove useless page->index usage Claudio Imbrenda
                   ` (2 subsequent siblings)
  12 siblings, 0 replies; 39+ messages in thread
From: Claudio Imbrenda @ 2025-01-08 18:14 UTC (permalink / raw)
  To: kvm
  Cc: linux-s390, frankja, borntraeger, schlameuss, david, willy, hca,
	svens, agordeev, gor, nrb, nsg

Move gmap_shadow_pgt_lookup() from mm/gmap.c into kvm/gaccess.c.

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
---
 arch/s390/include/asm/gmap.h |  3 +--
 arch/s390/kvm/gaccess.c      | 40 +++++++++++++++++++++++++++++++
 arch/s390/kvm/gmap.h         |  2 ++
 arch/s390/mm/gmap.c          | 46 ++----------------------------------
 4 files changed, 45 insertions(+), 46 deletions(-)

diff --git a/arch/s390/include/asm/gmap.h b/arch/s390/include/asm/gmap.h
index 65d3565b1e86..5ebc65ac78cc 100644
--- a/arch/s390/include/asm/gmap.h
+++ b/arch/s390/include/asm/gmap.h
@@ -129,8 +129,6 @@ int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt,
 		    int fake);
 int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt,
 		    int fake);
-int gmap_shadow_pgt_lookup(struct gmap *sg, unsigned long saddr,
-			   unsigned long *pgt, int *dat_protection, int *fake);
 int gmap_shadow_page(struct gmap *sg, unsigned long saddr, pte_t pte);
 
 void gmap_register_pte_notifier(struct gmap_notifier *);
@@ -145,6 +143,7 @@ int s390_replace_asce(struct gmap *gmap);
 void s390_uv_destroy_pfns(unsigned long count, unsigned long *pfns);
 int __s390_uv_destroy_range(struct mm_struct *mm, unsigned long start,
 			    unsigned long end, bool interruptible);
+unsigned long *gmap_table_walk(struct gmap *gmap, unsigned long gaddr, int level);
 
 /**
  * s390_uv_destroy_range - Destroy a range of pages in the given mm.
diff --git a/arch/s390/kvm/gaccess.c b/arch/s390/kvm/gaccess.c
index 9816b0060fbe..560b5677929b 100644
--- a/arch/s390/kvm/gaccess.c
+++ b/arch/s390/kvm/gaccess.c
@@ -16,6 +16,7 @@
 #include <asm/gmap.h>
 #include <asm/dat-bits.h>
 #include "kvm-s390.h"
+#include "gmap.h"
 #include "gaccess.h"
 
 /*
@@ -1392,6 +1393,42 @@ static int kvm_s390_shadow_tables(struct gmap *sg, unsigned long saddr,
 	return 0;
 }
 
+/**
+ * gmap_shadow_pgt_lookup - find a shadow page table
+ * @sg: pointer to the shadow guest address space structure
+ * @saddr: the address in the shadow guest address space
+ * @pgt: parent gmap address of the page table to get shadowed
+ * @dat_protection: if the pgtable is marked as protected by dat
+ * @fake: pgt references contiguous guest memory block, not a pgtable
+ *
+ * Returns 0 if the shadow page table was found and -EAGAIN if the page
+ * table was not found.
+ *
+ * Called with sg->mm->mmap_lock in read.
+ */
+static int gmap_shadow_pgt_lookup(struct gmap *sg, unsigned long saddr, unsigned long *pgt,
+				  int *dat_protection, int *fake)
+{
+	unsigned long *table;
+	struct page *page;
+	int rc;
+
+	spin_lock(&sg->guest_table_lock);
+	table = gmap_table_walk(sg, saddr, 1); /* get segment pointer */
+	if (table && !(*table & _SEGMENT_ENTRY_INVALID)) {
+		/* Shadow page tables are full pages (pte+pgste) */
+		page = pfn_to_page(*table >> PAGE_SHIFT);
+		*pgt = page->index & ~GMAP_SHADOW_FAKE_TABLE;
+		*dat_protection = !!(*table & _SEGMENT_ENTRY_PROTECT);
+		*fake = !!(page->index & GMAP_SHADOW_FAKE_TABLE);
+		rc = 0;
+	} else  {
+		rc = -EAGAIN;
+	}
+	spin_unlock(&sg->guest_table_lock);
+	return rc;
+}
+
 /**
  * kvm_s390_shadow_fault - handle fault on a shadow page table
  * @vcpu: virtual cpu
@@ -1415,6 +1452,9 @@ int kvm_s390_shadow_fault(struct kvm_vcpu *vcpu, struct gmap *sg,
 	int dat_protection, fake;
 	int rc;
 
+	if (KVM_BUG_ON(!gmap_is_shadow(sg), vcpu->kvm))
+		return -EFAULT;
+
 	mmap_read_lock(sg->mm);
 	/*
 	 * We don't want any guest-2 tables to change - so the parent
diff --git a/arch/s390/kvm/gmap.h b/arch/s390/kvm/gmap.h
index f2b52ce29be3..14431ce978da 100644
--- a/arch/s390/kvm/gmap.h
+++ b/arch/s390/kvm/gmap.h
@@ -10,6 +10,8 @@
 #ifndef ARCH_KVM_S390_GMAP_H
 #define ARCH_KVM_S390_GMAP_H
 
+#define GMAP_SHADOW_FAKE_TABLE 1ULL
+
 int gmap_make_secure(struct gmap *gmap, unsigned long gaddr, void *uvcb);
 int gmap_convert_to_secure(struct gmap *gmap, unsigned long gaddr);
 int gmap_destroy_page(struct gmap *gmap, unsigned long gaddr);
diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index 57a1ee47af73..815bcb996ec4 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -27,8 +27,6 @@
 #define GADDR_VALID(gaddr) ((gaddr) & 1)
 #define GMAP_SHADOW_FAKE_TABLE 1ULL
 
-static inline unsigned long *gmap_table_walk(struct gmap *gmap, unsigned long gaddr, int level);
-
 static struct page *gmap_alloc_crst(void)
 {
 	struct page *page;
@@ -729,8 +727,7 @@ static void gmap_call_notifier(struct gmap *gmap, unsigned long start,
  *
  * Note: Can also be called for shadow gmaps.
  */
-static inline unsigned long *gmap_table_walk(struct gmap *gmap,
-					     unsigned long gaddr, int level)
+unsigned long *gmap_table_walk(struct gmap *gmap, unsigned long gaddr, int level)
 {
 	const int asce_type = gmap->asce & _ASCE_TYPE_MASK;
 	unsigned long *table = gmap->table;
@@ -781,6 +778,7 @@ static inline unsigned long *gmap_table_walk(struct gmap *gmap,
 	}
 	return table;
 }
+EXPORT_SYMBOL(gmap_table_walk);
 
 /**
  * gmap_pte_op_walk - walk the gmap page table, get the page table lock
@@ -1731,46 +1729,6 @@ int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt,
 }
 EXPORT_SYMBOL_GPL(gmap_shadow_sgt);
 
-/**
- * gmap_shadow_pgt_lookup - find a shadow page table
- * @sg: pointer to the shadow guest address space structure
- * @saddr: the address in the shadow aguest address space
- * @pgt: parent gmap address of the page table to get shadowed
- * @dat_protection: if the pgtable is marked as protected by dat
- * @fake: pgt references contiguous guest memory block, not a pgtable
- *
- * Returns 0 if the shadow page table was found and -EAGAIN if the page
- * table was not found.
- *
- * Called with sg->mm->mmap_lock in read.
- */
-int gmap_shadow_pgt_lookup(struct gmap *sg, unsigned long saddr,
-			   unsigned long *pgt, int *dat_protection,
-			   int *fake)
-{
-	unsigned long *table;
-	struct page *page;
-	int rc;
-
-	BUG_ON(!gmap_is_shadow(sg));
-	spin_lock(&sg->guest_table_lock);
-	table = gmap_table_walk(sg, saddr, 1); /* get segment pointer */
-	if (table && !(*table & _SEGMENT_ENTRY_INVALID)) {
-		/* Shadow page tables are full pages (pte+pgste) */
-		page = pfn_to_page(*table >> PAGE_SHIFT);
-		*pgt = page->index & ~GMAP_SHADOW_FAKE_TABLE;
-		*dat_protection = !!(*table & _SEGMENT_ENTRY_PROTECT);
-		*fake = !!(page->index & GMAP_SHADOW_FAKE_TABLE);
-		rc = 0;
-	} else  {
-		rc = -EAGAIN;
-	}
-	spin_unlock(&sg->guest_table_lock);
-	return rc;
-
-}
-EXPORT_SYMBOL_GPL(gmap_shadow_pgt_lookup);
-
 /**
  * gmap_shadow_pgt - instantiate a shadow page table
  * @sg: pointer to the shadow guest address space structure
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH v1 11/13] KVM: s390: remove useless page->index usage
  2025-01-08 18:14 [PATCH v1 00/13] KVM: s390: Stop using page->index and other things Claudio Imbrenda
                   ` (9 preceding siblings ...)
  2025-01-08 18:14 ` [PATCH v1 10/13] KVM: s390: move gmap_shadow_pgt_lookup() into kvm Claudio Imbrenda
@ 2025-01-08 18:14 ` Claudio Imbrenda
  2025-01-08 18:14 ` [PATCH v1 12/13] KVM: s390: move PGSTE softbits Claudio Imbrenda
  2025-01-08 18:14 ` [PATCH v1 13/13] KVM: s390: remove the last user of page->index Claudio Imbrenda
  12 siblings, 0 replies; 39+ messages in thread
From: Claudio Imbrenda @ 2025-01-08 18:14 UTC (permalink / raw)
  To: kvm
  Cc: linux-s390, frankja, borntraeger, schlameuss, david, willy, hca,
	svens, agordeev, gor, nrb, nsg

The page->index field of VSIE DAT tables is only used for segment
tables.

Stop setting the field for region tables, where it is unused.

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
---
 arch/s390/mm/gmap.c | 9 ---------
 1 file changed, 9 deletions(-)

diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index 815bcb996ec4..3276437725b8 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -1507,9 +1507,6 @@ int gmap_shadow_r2t(struct gmap *sg, unsigned long saddr, unsigned long r2t,
 	page = gmap_alloc_crst();
 	if (!page)
 		return -ENOMEM;
-	page->index = r2t & _REGION_ENTRY_ORIGIN;
-	if (fake)
-		page->index |= GMAP_SHADOW_FAKE_TABLE;
 	s_r2t = page_to_phys(page);
 	/* Install shadow region second table */
 	spin_lock(&sg->guest_table_lock);
@@ -1590,9 +1587,6 @@ int gmap_shadow_r3t(struct gmap *sg, unsigned long saddr, unsigned long r3t,
 	page = gmap_alloc_crst();
 	if (!page)
 		return -ENOMEM;
-	page->index = r3t & _REGION_ENTRY_ORIGIN;
-	if (fake)
-		page->index |= GMAP_SHADOW_FAKE_TABLE;
 	s_r3t = page_to_phys(page);
 	/* Install shadow region second table */
 	spin_lock(&sg->guest_table_lock);
@@ -1673,9 +1667,6 @@ int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt,
 	page = gmap_alloc_crst();
 	if (!page)
 		return -ENOMEM;
-	page->index = sgt & _REGION_ENTRY_ORIGIN;
-	if (fake)
-		page->index |= GMAP_SHADOW_FAKE_TABLE;
 	s_sgt = page_to_phys(page);
 	/* Install shadow region second table */
 	spin_lock(&sg->guest_table_lock);
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH v1 12/13] KVM: s390: move PGSTE softbits
  2025-01-08 18:14 [PATCH v1 00/13] KVM: s390: Stop using page->index and other things Claudio Imbrenda
                   ` (10 preceding siblings ...)
  2025-01-08 18:14 ` [PATCH v1 11/13] KVM: s390: remove useless page->index usage Claudio Imbrenda
@ 2025-01-08 18:14 ` Claudio Imbrenda
  2025-01-08 18:14 ` [PATCH v1 13/13] KVM: s390: remove the last user of page->index Claudio Imbrenda
  12 siblings, 0 replies; 39+ messages in thread
From: Claudio Imbrenda @ 2025-01-08 18:14 UTC (permalink / raw)
  To: kvm
  Cc: linux-s390, frankja, borntraeger, schlameuss, david, willy, hca,
	svens, agordeev, gor, nrb, nsg

Move the softbits in the PGSTEs to the other usable area.

This leaves the 16-bit block of usable bits free; the next patch will
use it to store the address of the shadowed guest page table.

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
---
 arch/s390/include/asm/pgtable.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index 48268095b0a3..151488bb9ed7 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -419,9 +419,9 @@ static inline int is_module_addr(void *addr)
 #define PGSTE_HC_BIT	0x0020000000000000UL
 #define PGSTE_GR_BIT	0x0004000000000000UL
 #define PGSTE_GC_BIT	0x0002000000000000UL
-#define PGSTE_UC_BIT	0x0000800000000000UL	/* user dirty (migration) */
-#define PGSTE_IN_BIT	0x0000400000000000UL	/* IPTE notify bit */
-#define PGSTE_VSIE_BIT	0x0000200000000000UL	/* ref'd in a shadow table */
+#define PGSTE_UC_BIT	0x0000000000008000UL	/* user dirty (migration) */
+#define PGSTE_IN_BIT	0x0000000000004000UL	/* IPTE notify bit */
+#define PGSTE_VSIE_BIT	0x0000000000002000UL	/* ref'd in a shadow table */
 
 /* Guest Page State used for virtualization */
 #define _PGSTE_GPS_ZERO			0x0000000080000000UL
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH v1 13/13] KVM: s390: remove the last user of page->index
  2025-01-08 18:14 [PATCH v1 00/13] KVM: s390: Stop using page->index and other things Claudio Imbrenda
                   ` (11 preceding siblings ...)
  2025-01-08 18:14 ` [PATCH v1 12/13] KVM: s390: move PGSTE softbits Claudio Imbrenda
@ 2025-01-08 18:14 ` Claudio Imbrenda
  2025-01-15 12:17   ` Janosch Frank
  12 siblings, 1 reply; 39+ messages in thread
From: Claudio Imbrenda @ 2025-01-08 18:14 UTC (permalink / raw)
  To: kvm
  Cc: linux-s390, frankja, borntraeger, schlameuss, david, willy, hca,
	svens, agordeev, gor, nrb, nsg

Shadow page tables use page->index to keep the g2 address of the guest
page table being shadowed.

Instead of keeping the information in page->index, split the address
and smear it over the 16-bit softbits areas of 4 PGSTEs.

This removes the last s390 user of page->index.

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
---
 arch/s390/include/asm/gmap.h    |  1 +
 arch/s390/include/asm/pgtable.h | 15 +++++++++++++++
 arch/s390/kvm/gaccess.c         |  6 ++++--
 arch/s390/mm/gmap.c             | 22 ++++++++++++++++++++--
 4 files changed, 40 insertions(+), 4 deletions(-)

diff --git a/arch/s390/include/asm/gmap.h b/arch/s390/include/asm/gmap.h
index 5ebc65ac78cc..28c5bf097268 100644
--- a/arch/s390/include/asm/gmap.h
+++ b/arch/s390/include/asm/gmap.h
@@ -177,4 +177,5 @@ static inline int s390_uv_destroy_range_interruptible(struct mm_struct *mm, unsi
 {
 	return __s390_uv_destroy_range(mm, start, end, true);
 }
+
 #endif /* _ASM_S390_GMAP_H */
diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index 151488bb9ed7..948100a8fa7e 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -419,6 +419,7 @@ static inline int is_module_addr(void *addr)
 #define PGSTE_HC_BIT	0x0020000000000000UL
 #define PGSTE_GR_BIT	0x0004000000000000UL
 #define PGSTE_GC_BIT	0x0002000000000000UL
+#define PGSTE_ST2_MASK	0x0000ffff00000000UL
 #define PGSTE_UC_BIT	0x0000000000008000UL	/* user dirty (migration) */
 #define PGSTE_IN_BIT	0x0000000000004000UL	/* IPTE notify bit */
 #define PGSTE_VSIE_BIT	0x0000000000002000UL	/* ref'd in a shadow table */
@@ -2001,4 +2002,18 @@ extern void s390_reset_cmma(struct mm_struct *mm);
 #define pmd_pgtable(pmd) \
 	((pgtable_t)__va(pmd_val(pmd) & -sizeof(pte_t)*PTRS_PER_PTE))
 
+static inline unsigned long gmap_pgste_get_index(unsigned long *pgt)
+{
+	unsigned long *pgstes, res;
+
+	pgstes = pgt + _PAGE_ENTRIES;
+
+	res = (pgstes[0] & PGSTE_ST2_MASK) << 16;
+	res |= pgstes[1] & PGSTE_ST2_MASK;
+	res |= (pgstes[2] & PGSTE_ST2_MASK) >> 16;
+	res |= (pgstes[3] & PGSTE_ST2_MASK) >> 32;
+
+	return res;
+}
+
 #endif /* _S390_PAGE_H */
diff --git a/arch/s390/kvm/gaccess.c b/arch/s390/kvm/gaccess.c
index 560b5677929b..3bf3a80942de 100644
--- a/arch/s390/kvm/gaccess.c
+++ b/arch/s390/kvm/gaccess.c
@@ -1409,6 +1409,7 @@ static int kvm_s390_shadow_tables(struct gmap *sg, unsigned long saddr,
 static int gmap_shadow_pgt_lookup(struct gmap *sg, unsigned long saddr, unsigned long *pgt,
 				  int *dat_protection, int *fake)
 {
+	unsigned long pt_index;
 	unsigned long *table;
 	struct page *page;
 	int rc;
@@ -1418,9 +1419,10 @@ static int gmap_shadow_pgt_lookup(struct gmap *sg, unsigned long saddr, unsigned
 	if (table && !(*table & _SEGMENT_ENTRY_INVALID)) {
 		/* Shadow page tables are full pages (pte+pgste) */
 		page = pfn_to_page(*table >> PAGE_SHIFT);
-		*pgt = page->index & ~GMAP_SHADOW_FAKE_TABLE;
+		pt_index = gmap_pgste_get_index(page_to_virt(page));
+		*pgt = pt_index & ~GMAP_SHADOW_FAKE_TABLE;
 		*dat_protection = !!(*table & _SEGMENT_ENTRY_PROTECT);
-		*fake = !!(page->index & GMAP_SHADOW_FAKE_TABLE);
+		*fake = !!(pt_index & GMAP_SHADOW_FAKE_TABLE);
 		rc = 0;
 	} else  {
 		rc = -EAGAIN;
diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index 3276437725b8..d4f87ee67c7d 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -1720,6 +1720,23 @@ int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt,
 }
 EXPORT_SYMBOL_GPL(gmap_shadow_sgt);
 
+static void gmap_pgste_set_index(struct ptdesc *ptdesc, unsigned long pgt_addr)
+{
+	unsigned long *pgstes = page_to_virt(ptdesc_page(ptdesc));
+
+	pgstes += _PAGE_ENTRIES;
+
+	pgstes[0] &= ~PGSTE_ST2_MASK;
+	pgstes[1] &= ~PGSTE_ST2_MASK;
+	pgstes[2] &= ~PGSTE_ST2_MASK;
+	pgstes[3] &= ~PGSTE_ST2_MASK;
+
+	pgstes[0] |= (pgt_addr >> 16) & PGSTE_ST2_MASK;
+	pgstes[1] |= pgt_addr & PGSTE_ST2_MASK;
+	pgstes[2] |= (pgt_addr << 16) & PGSTE_ST2_MASK;
+	pgstes[3] |= (pgt_addr << 32) & PGSTE_ST2_MASK;
+}
+
 /**
  * gmap_shadow_pgt - instantiate a shadow page table
  * @sg: pointer to the shadow guest address space structure
@@ -1747,9 +1764,10 @@ int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt,
 	ptdesc = page_table_alloc_pgste(sg->mm);
 	if (!ptdesc)
 		return -ENOMEM;
-	ptdesc->pt_index = pgt & _SEGMENT_ENTRY_ORIGIN;
+	origin = pgt & _SEGMENT_ENTRY_ORIGIN;
 	if (fake)
-		ptdesc->pt_index |= GMAP_SHADOW_FAKE_TABLE;
+		origin |= GMAP_SHADOW_FAKE_TABLE;
+	gmap_pgste_set_index(ptdesc, origin);
 	s_pgt = page_to_phys(ptdesc_page(ptdesc));
 	/* Install shadow page table */
 	spin_lock(&sg->guest_table_lock);
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* Re: [PATCH v1 04/13] KVM: s390: move pv gmap functions into kvm
  2025-01-08 18:14 ` [PATCH v1 04/13] KVM: s390: move pv gmap functions into kvm Claudio Imbrenda
@ 2025-01-09 17:42   ` kernel test robot
  2025-01-15 12:48   ` Janosch Frank
  1 sibling, 0 replies; 39+ messages in thread
From: kernel test robot @ 2025-01-09 17:42 UTC (permalink / raw)
  To: Claudio Imbrenda, kvm
  Cc: oe-kbuild-all, linux-s390, frankja, borntraeger, schlameuss,
	david, willy, hca, svens, agordeev, gor, nrb, nsg

Hi Claudio,

kernel test robot noticed the following build warnings:

[auto build test WARNING on s390/features]
[also build test WARNING on kvm/queue kvm/next mst-vhost/linux-next linus/master v6.13-rc6 next-20250109]
[cannot apply to kvms390/next kvm/linux-next]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Claudio-Imbrenda/KVM-s390-wrapper-for-KVM_BUG/20250109-021808
base:   https://git.kernel.org/pub/scm/linux/kernel/git/s390/linux.git features
patch link:    https://lore.kernel.org/r/20250108181451.74383-5-imbrenda%40linux.ibm.com
patch subject: [PATCH v1 04/13] KVM: s390: move pv gmap functions into kvm
config: s390-randconfig-001-20250109 (https://download.01.org/0day-ci/archive/20250110/202501100045.U1NGK9qJ-lkp@intel.com/config)
compiler: s390-linux-gcc (GCC) 14.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250110/202501100045.U1NGK9qJ-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202501100045.U1NGK9qJ-lkp@intel.com/

All warnings (new ones prefixed by >>):

>> arch/s390/kvm/gmap.c:144: warning: Function parameter or struct member 'page' not described in '__gmap_destroy_page'
>> arch/s390/kvm/gmap.c:144: warning: expecting prototype for gmap_destroy_page(). Prototype was for __gmap_destroy_page() instead


vim +144 arch/s390/kvm/gmap.c

   133	
   134	/**
   135	 * gmap_destroy_page - Destroy a guest page.
   136	 * @gmap: the gmap of the guest
   137	 * @gaddr: the guest address to destroy
   138	 *
   139	 * An attempt will be made to destroy the given guest page. If the attempt
   140	 * fails, an attempt is made to export the page. If both attempts fail, an
   141	 * appropriate error is returned.
   142	 */
   143	static int __gmap_destroy_page(struct gmap *gmap, struct page *page)
 > 144	{
   145		struct folio *folio = page_folio(page);
   146		int rc;
   147	
   148		/*
   149		 * See gmap_make_secure(): large folios cannot be secure. Small
   150		 * folio implies FW_LEVEL_PTE.
   151		 */
   152		if (folio_test_large(folio))
   153			return -EFAULT;
   154	
   155		rc = uv_destroy_folio(folio);
   156		/*
   157		 * Fault handlers can race; it is possible that two CPUs will fault
   158		 * on the same secure page. One CPU can destroy the page, reboot,
   159		 * re-enter secure mode and import it, while the second CPU was
   160		 * stuck at the beginning of the handler. At some point the second
   161		 * CPU will be able to progress, and it will not be able to destroy
   162		 * the page. In that case we do not want to terminate the process,
   163		 * we instead try to export the page.
   164		 */
   165		if (rc)
   166			rc = uv_convert_from_secure_folio(folio);
   167	
   168		return rc;
   169	}
   170	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v1 01/13] KVM: s390: wrapper for KVM_BUG
  2025-01-08 18:14 ` [PATCH v1 01/13] KVM: s390: wrapper for KVM_BUG Claudio Imbrenda
@ 2025-01-10  9:09   ` Christian Borntraeger
  2025-01-10 13:13   ` Christoph Schlameuss
  1 sibling, 0 replies; 39+ messages in thread
From: Christian Borntraeger @ 2025-01-10  9:09 UTC (permalink / raw)
  To: Claudio Imbrenda, kvm
  Cc: linux-s390, frankja, schlameuss, david, willy, hca, svens,
	agordeev, gor, nrb, nsg

Am 08.01.25 um 19:14 schrieb Claudio Imbrenda:
> Wrap the call to KVM_BUG; this reduces code duplication and improves
> readability.
> 
> Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>

Reviewed-by: Christian Borntraeger <borntraeger@linux.ibm.com>


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v1 02/13] KVM: s390: fake memslots for ucontrol VMs
  2025-01-08 18:14 ` [PATCH v1 02/13] KVM: s390: fake memslots for ucontrol VMs Claudio Imbrenda
@ 2025-01-10  9:31   ` Christian Borntraeger
  2025-01-10 11:47     ` Claudio Imbrenda
  2025-01-10 15:40   ` Christoph Schlameuss
  1 sibling, 1 reply; 39+ messages in thread
From: Christian Borntraeger @ 2025-01-10  9:31 UTC (permalink / raw)
  To: Claudio Imbrenda, kvm
  Cc: linux-s390, frankja, schlameuss, david, willy, hca, svens,
	agordeev, gor, nrb, nsg



Am 08.01.25 um 19:14 schrieb Claudio Imbrenda:
> Create fake memslots for ucontrol VMs. The fake memslots identity-map
> userspace.
> 
> Now memslots will always be present, and ucontrol is not a special case
> anymore.
> 
> Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
> ---
>   arch/s390/kvm/kvm-s390.c | 42 ++++++++++++++++++++++++++++++++++++----
>   1 file changed, 38 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> index ecbdd7d41230..797b8503c162 100644
> --- a/arch/s390/kvm/kvm-s390.c
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -59,6 +59,7 @@
>   #define LOCAL_IRQS 32
>   #define VCPU_IRQS_MAX_BUF (sizeof(struct kvm_s390_irq) * \
>   			   (KVM_MAX_VCPUS + LOCAL_IRQS))
> +#define UCONTROL_SLOT_SIZE SZ_4T
>   
>   const struct _kvm_stats_desc kvm_vm_stats_desc[] = {
>   	KVM_GENERIC_VM_STATS(),
> @@ -3326,6 +3327,23 @@ void kvm_arch_free_vm(struct kvm *kvm)
>   	__kvm_arch_free_vm(kvm);
>   }
>   
> +static void kvm_s390_ucontrol_ensure_memslot(struct kvm *kvm, unsigned long addr)
> +{
> +	struct kvm_userspace_memory_region2 region = {
> +		.slot = addr / UCONTROL_SLOT_SIZE,
> +		.memory_size = UCONTROL_SLOT_SIZE,
> +		.guest_phys_addr = ALIGN_DOWN(addr, UCONTROL_SLOT_SIZE),
> +		.userspace_addr = ALIGN_DOWN(addr, UCONTROL_SLOT_SIZE),
> +	};
> +	struct kvm_memory_slot *slot;
> +
> +	mutex_lock(&kvm->slots_lock);
> +	slot = gfn_to_memslot(kvm, addr);
> +	if (!slot)
> +		__kvm_set_memory_region(kvm, &region);
> +	mutex_unlock(&kvm->slots_lock);
> +}
> +

Would simply having one slot from 0 to TASK_SIZE also work? This could avoid the
construction of the fake slots during runtime.

>   int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
>   {
>   	gfp_t alloc_flags = GFP_KERNEL_ACCOUNT;
> @@ -3430,6 +3448,9 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
>   	if (type & KVM_VM_S390_UCONTROL) {
>   		kvm->arch.gmap = NULL;
>   		kvm->arch.mem_limit = KVM_S390_NO_MEM_LIMIT;
> +		/* pre-initialize a bunch of memslots; the amount is arbitrary */
> +		for (i = 0; i < 32; i++)
> +			kvm_s390_ucontrol_ensure_memslot(kvm, i * UCONTROL_SLOT_SIZE);
>   	} else {
>   		if (sclp.hamax == U64_MAX)
>   			kvm->arch.mem_limit = TASK_SIZE_MAX;
> @@ -5704,6 +5725,7 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
>   #ifdef CONFIG_KVM_S390_UCONTROL
>   	case KVM_S390_UCAS_MAP: {
>   		struct kvm_s390_ucas_mapping ucasmap;
> +		unsigned long a;

maybe addr?

[...]

> @@ -5852,10 +5879,18 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
>   				   struct kvm_memory_slot *new,
>   				   enum kvm_mr_change change)
>   {
> -	gpa_t size;
> +	gpa_t size = new->npages * PAGE_SIZE;
>   
> -	if (kvm_is_ucontrol(kvm))
> -		return -EINVAL;

Maybe add some comment here what and why we are checking those?

> +	if (kvm_is_ucontrol(kvm)) {
> +		if (change != KVM_MR_CREATE || new->flags)
> +			return -EINVAL;
> +		if (new->userspace_addr != new->base_gfn * PAGE_SIZE)
> +			return -EINVAL;
> +		if (!IS_ALIGNED(new->userspace_addr | size, UCONTROL_SLOT_SIZE))
> +			return -EINVAL;
> +		if (new->id != new->userspace_addr / UCONTROL_SLOT_SIZE)
> +			return -EINVAL;
> +	}
>   

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v1 02/13] KVM: s390: fake memslots for ucontrol VMs
  2025-01-10  9:31   ` Christian Borntraeger
@ 2025-01-10 11:47     ` Claudio Imbrenda
  2025-01-10 16:22       ` Sean Christopherson
  0 siblings, 1 reply; 39+ messages in thread
From: Claudio Imbrenda @ 2025-01-10 11:47 UTC (permalink / raw)
  To: Christian Borntraeger
  Cc: kvm, linux-s390, frankja, schlameuss, david, willy, hca, svens,
	agordeev, gor, nrb, nsg

On Fri, 10 Jan 2025 10:31:38 +0100
Christian Borntraeger <borntraeger@de.ibm.com> wrote:

> Am 08.01.25 um 19:14 schrieb Claudio Imbrenda:
> > Create fake memslots for ucontrol VMs. The fake memslots identity-map
> > userspace.
> > 
> > Now memslots will always be present, and ucontrol is not a special case
> > anymore.
> > 
> > Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
> > ---
> >   arch/s390/kvm/kvm-s390.c | 42 ++++++++++++++++++++++++++++++++++++----
> >   1 file changed, 38 insertions(+), 4 deletions(-)
> > 
> > diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> > index ecbdd7d41230..797b8503c162 100644
> > --- a/arch/s390/kvm/kvm-s390.c
> > +++ b/arch/s390/kvm/kvm-s390.c
> > @@ -59,6 +59,7 @@
> >   #define LOCAL_IRQS 32
> >   #define VCPU_IRQS_MAX_BUF (sizeof(struct kvm_s390_irq) * \
> >   			   (KVM_MAX_VCPUS + LOCAL_IRQS))
> > +#define UCONTROL_SLOT_SIZE SZ_4T
> >   
> >   const struct _kvm_stats_desc kvm_vm_stats_desc[] = {
> >   	KVM_GENERIC_VM_STATS(),
> > @@ -3326,6 +3327,23 @@ void kvm_arch_free_vm(struct kvm *kvm)
> >   	__kvm_arch_free_vm(kvm);
> >   }
> >   
> > +static void kvm_s390_ucontrol_ensure_memslot(struct kvm *kvm, unsigned long addr)
> > +{
> > +	struct kvm_userspace_memory_region2 region = {
> > +		.slot = addr / UCONTROL_SLOT_SIZE,
> > +		.memory_size = UCONTROL_SLOT_SIZE,
> > +		.guest_phys_addr = ALIGN_DOWN(addr, UCONTROL_SLOT_SIZE),
> > +		.userspace_addr = ALIGN_DOWN(addr, UCONTROL_SLOT_SIZE),
> > +	};
> > +	struct kvm_memory_slot *slot;
> > +
> > +	mutex_lock(&kvm->slots_lock);
> > +	slot = gfn_to_memslot(kvm, addr);
> > +	if (!slot)
> > +		__kvm_set_memory_region(kvm, &region);
> > +	mutex_unlock(&kvm->slots_lock);
> > +}
> > +  
> 
> Would simply having one slot from 0 to TASK_SIZE also work? This could avoid the
> construction of the fake slots during runtime.

Unfortunately memslots are limited to 4 TiB.
Having bigger ones would require even more changes all across KVM (and
maybe qemu too)

> 
> >   int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
> >   {
> >   	gfp_t alloc_flags = GFP_KERNEL_ACCOUNT;
> > @@ -3430,6 +3448,9 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
> >   	if (type & KVM_VM_S390_UCONTROL) {
> >   		kvm->arch.gmap = NULL;
> >   		kvm->arch.mem_limit = KVM_S390_NO_MEM_LIMIT;
> > +		/* pre-initialize a bunch of memslots; the amount is arbitrary */
> > +		for (i = 0; i < 32; i++)
> > +			kvm_s390_ucontrol_ensure_memslot(kvm, i * UCONTROL_SLOT_SIZE);
> >   	} else {
> >   		if (sclp.hamax == U64_MAX)
> >   			kvm->arch.mem_limit = TASK_SIZE_MAX;
> > @@ -5704,6 +5725,7 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
> >   #ifdef CONFIG_KVM_S390_UCONTROL
> >   	case KVM_S390_UCAS_MAP: {
> >   		struct kvm_s390_ucas_mapping ucasmap;
> > +		unsigned long a;  
> 
> maybe addr?

yes

> 
> [...]
> 
> > @@ -5852,10 +5879,18 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
> >   				   struct kvm_memory_slot *new,
> >   				   enum kvm_mr_change change)
> >   {
> > -	gpa_t size;
> > +	gpa_t size = new->npages * PAGE_SIZE;
> >   
> > -	if (kvm_is_ucontrol(kvm))
> > -		return -EINVAL;  
> 
> Maybe add some comment here what and why we are checking those?

will do

> 
> > +	if (kvm_is_ucontrol(kvm)) {
> > +		if (change != KVM_MR_CREATE || new->flags)
> > +			return -EINVAL;
> > +		if (new->userspace_addr != new->base_gfn * PAGE_SIZE)
> > +			return -EINVAL;
> > +		if (!IS_ALIGNED(new->userspace_addr | size, UCONTROL_SLOT_SIZE))
> > +			return -EINVAL;
> > +		if (new->id != new->userspace_addr / UCONTROL_SLOT_SIZE)
> > +			return -EINVAL;
> > +	}
> >     


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v1 01/13] KVM: s390: wrapper for KVM_BUG
  2025-01-08 18:14 ` [PATCH v1 01/13] KVM: s390: wrapper for KVM_BUG Claudio Imbrenda
  2025-01-10  9:09   ` Christian Borntraeger
@ 2025-01-10 13:13   ` Christoph Schlameuss
  1 sibling, 0 replies; 39+ messages in thread
From: Christoph Schlameuss @ 2025-01-10 13:13 UTC (permalink / raw)
  To: Claudio Imbrenda, kvm
  Cc: linux-s390, frankja, borntraeger, david, willy, hca, svens,
	agordeev, gor, nrb, nsg

On Wed Jan 8, 2025 at 7:14 PM CET, Claudio Imbrenda wrote:
> Wrap the call to KVM_BUG; this reduces code duplication and improves
> readability.
>
> Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
> ---
>  arch/s390/kvm/kvm-s390.c | 19 ++++++++++---------
>  1 file changed, 10 insertions(+), 9 deletions(-)

LGTM

Reviewed-by: Christoph Schlameuss <schlameuss@linux.ibm.com>

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v1 02/13] KVM: s390: fake memslots for ucontrol VMs
  2025-01-08 18:14 ` [PATCH v1 02/13] KVM: s390: fake memslots for ucontrol VMs Claudio Imbrenda
  2025-01-10  9:31   ` Christian Borntraeger
@ 2025-01-10 15:40   ` Christoph Schlameuss
  2025-01-10 16:18     ` Claudio Imbrenda
  1 sibling, 1 reply; 39+ messages in thread
From: Christoph Schlameuss @ 2025-01-10 15:40 UTC (permalink / raw)
  To: Claudio Imbrenda, kvm
  Cc: linux-s390, frankja, borntraeger, david, willy, hca, svens,
	agordeev, gor, nrb, nsg

On Wed Jan 8, 2025 at 7:14 PM CET, Claudio Imbrenda wrote:
> Create fake memslots for ucontrol VMs. The fake memslots identity-map
> userspace.
>
> Now memslots will always be present, and ucontrol is not a special case
> anymore.
>
> Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
> ---
>  arch/s390/kvm/kvm-s390.c | 42 ++++++++++++++++++++++++++++++++++++----
>  1 file changed, 38 insertions(+), 4 deletions(-)
>
> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> index ecbdd7d41230..797b8503c162 100644
> --- a/arch/s390/kvm/kvm-s390.c
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -59,6 +59,7 @@
>  #define LOCAL_IRQS 32
>  #define VCPU_IRQS_MAX_BUF (sizeof(struct kvm_s390_irq) * \
>  			   (KVM_MAX_VCPUS + LOCAL_IRQS))
> +#define UCONTROL_SLOT_SIZE SZ_4T
>  
>  const struct _kvm_stats_desc kvm_vm_stats_desc[] = {
>  	KVM_GENERIC_VM_STATS(),
> @@ -3326,6 +3327,23 @@ void kvm_arch_free_vm(struct kvm *kvm)
>  	__kvm_arch_free_vm(kvm);
>  }
>  
> +static void kvm_s390_ucontrol_ensure_memslot(struct kvm *kvm, unsigned long addr)
> +{
> +	struct kvm_userspace_memory_region2 region = {
> +		.slot = addr / UCONTROL_SLOT_SIZE,
> +		.memory_size = UCONTROL_SLOT_SIZE,
> +		.guest_phys_addr = ALIGN_DOWN(addr, UCONTROL_SLOT_SIZE),
> +		.userspace_addr = ALIGN_DOWN(addr, UCONTROL_SLOT_SIZE),
> +	};
> +	struct kvm_memory_slot *slot;
> +
> +	mutex_lock(&kvm->slots_lock);
> +	slot = gfn_to_memslot(kvm, addr);
> +	if (!slot)
> +		__kvm_set_memory_region(kvm, &region);

This will call into kvm_arch_commit_memory_region() where
kvm->arch.gmap will still be NULL!

> +	mutex_unlock(&kvm->slots_lock);
> +}
> +
>  int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
>  {
>  	gfp_t alloc_flags = GFP_KERNEL_ACCOUNT;
> @@ -3430,6 +3448,9 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
>  	if (type & KVM_VM_S390_UCONTROL) {
>  		kvm->arch.gmap = NULL;
>  		kvm->arch.mem_limit = KVM_S390_NO_MEM_LIMIT;
> +		/* pre-initialize a bunch of memslots; the amount is arbitrary */
> +		for (i = 0; i < 32; i++)
> +			kvm_s390_ucontrol_ensure_memslot(kvm, i * UCONTROL_SLOT_SIZE);
>  	} else {
>  		if (sclp.hamax == U64_MAX)
>  			kvm->arch.mem_limit = TASK_SIZE_MAX;
> @@ -5704,6 +5725,7 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
>  #ifdef CONFIG_KVM_S390_UCONTROL
>  	case KVM_S390_UCAS_MAP: {
>  		struct kvm_s390_ucas_mapping ucasmap;
> +		unsigned long a;
>  
>  		if (copy_from_user(&ucasmap, argp, sizeof(ucasmap))) {
>  			r = -EFAULT;
> @@ -5715,6 +5737,11 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
>  			break;
>  		}
>  
> +		a = ALIGN_DOWN(ucasmap.user_addr, UCONTROL_SLOT_SIZE);
> +		while (a < ucasmap.user_addr + ucasmap.length) {
> +			kvm_s390_ucontrol_ensure_memslot(vcpu->kvm, a);
> +			a += UCONTROL_SLOT_SIZE;
> +		}
>  		r = gmap_map_segment(vcpu->arch.gmap, ucasmap.user_addr,
>  				     ucasmap.vcpu_addr, ucasmap.length);
>  		break;
> @@ -5852,10 +5879,18 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
>  				   struct kvm_memory_slot *new,
>  				   enum kvm_mr_change change)
>  {
> -	gpa_t size;
> +	gpa_t size = new->npages * PAGE_SIZE;
>  
> -	if (kvm_is_ucontrol(kvm))
> -		return -EINVAL;
> +	if (kvm_is_ucontrol(kvm)) {
> +		if (change != KVM_MR_CREATE || new->flags)
> +			return -EINVAL;
> +		if (new->userspace_addr != new->base_gfn * PAGE_SIZE)
> +			return -EINVAL;
> +		if (!IS_ALIGNED(new->userspace_addr | size, UCONTROL_SLOT_SIZE))
> +			return -EINVAL;
> +		if (new->id != new->userspace_addr / UCONTROL_SLOT_SIZE)
> +			return -EINVAL;
> +	}
>  
>  	/* When we are protected, we should not change the memory slots */
>  	if (kvm_s390_pv_get_handle(kvm))
> @@ -5872,7 +5907,6 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
>  		if (new->userspace_addr & 0xffffful)
>  			return -EINVAL;
>  
> -		size = new->npages * PAGE_SIZE;
>  		if (size & 0xffffful)
>  			return -EINVAL;
>  


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v1 02/13] KVM: s390: fake memslots for ucontrol VMs
  2025-01-10 15:40   ` Christoph Schlameuss
@ 2025-01-10 16:18     ` Claudio Imbrenda
  0 siblings, 0 replies; 39+ messages in thread
From: Claudio Imbrenda @ 2025-01-10 16:18 UTC (permalink / raw)
  To: Christoph Schlameuss
  Cc: kvm, linux-s390, frankja, borntraeger, david, willy, hca, svens,
	agordeev, gor, nrb, nsg

On Fri, 10 Jan 2025 16:40:04 +0100
"Christoph Schlameuss" <schlameuss@linux.ibm.com> wrote:

> On Wed Jan 8, 2025 at 7:14 PM CET, Claudio Imbrenda wrote:
> > Create fake memslots for ucontrol VMs. The fake memslots identity-map
> > userspace.
> >
> > Now memslots will always be present, and ucontrol is not a special case
> > anymore.
> >
> > Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
> > ---
> >  arch/s390/kvm/kvm-s390.c | 42 ++++++++++++++++++++++++++++++++++++----
> >  1 file changed, 38 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> > index ecbdd7d41230..797b8503c162 100644
> > --- a/arch/s390/kvm/kvm-s390.c
> > +++ b/arch/s390/kvm/kvm-s390.c
> > @@ -59,6 +59,7 @@
> >  #define LOCAL_IRQS 32
> >  #define VCPU_IRQS_MAX_BUF (sizeof(struct kvm_s390_irq) * \
> >  			   (KVM_MAX_VCPUS + LOCAL_IRQS))
> > +#define UCONTROL_SLOT_SIZE SZ_4T
> >  
> >  const struct _kvm_stats_desc kvm_vm_stats_desc[] = {
> >  	KVM_GENERIC_VM_STATS(),
> > @@ -3326,6 +3327,23 @@ void kvm_arch_free_vm(struct kvm *kvm)
> >  	__kvm_arch_free_vm(kvm);
> >  }
> >  
> > +static void kvm_s390_ucontrol_ensure_memslot(struct kvm *kvm, unsigned long addr)
> > +{
> > +	struct kvm_userspace_memory_region2 region = {
> > +		.slot = addr / UCONTROL_SLOT_SIZE,
> > +		.memory_size = UCONTROL_SLOT_SIZE,
> > +		.guest_phys_addr = ALIGN_DOWN(addr, UCONTROL_SLOT_SIZE),
> > +		.userspace_addr = ALIGN_DOWN(addr, UCONTROL_SLOT_SIZE),
> > +	};
> > +	struct kvm_memory_slot *slot;
> > +
> > +	mutex_lock(&kvm->slots_lock);
> > +	slot = gfn_to_memslot(kvm, addr);
> > +	if (!slot)
> > +		__kvm_set_memory_region(kvm, &region);  
> 
> This will call into kvm_arch_commit_memory_region() where
> kvm->arch.gmap will still be NULL!

Oops!

will fix




* Re: [PATCH v1 02/13] KVM: s390: fake memslots for ucontrol VMs
  2025-01-10 11:47     ` Claudio Imbrenda
@ 2025-01-10 16:22       ` Sean Christopherson
  2025-01-10 17:02         ` Claudio Imbrenda
  0 siblings, 1 reply; 39+ messages in thread
From: Sean Christopherson @ 2025-01-10 16:22 UTC (permalink / raw)
  To: Claudio Imbrenda
  Cc: Christian Borntraeger, kvm, linux-s390, frankja, schlameuss,
	david, willy, hca, svens, agordeev, gor, nrb, nsg

On Fri, Jan 10, 2025, Claudio Imbrenda wrote:
> On Fri, 10 Jan 2025 10:31:38 +0100
> Christian Borntraeger <borntraeger@de.ibm.com> wrote:
> 
> > Am 08.01.25 um 19:14 schrieb Claudio Imbrenda:
> > > +static void kvm_s390_ucontrol_ensure_memslot(struct kvm *kvm, unsigned long addr)
> > > +{
> > > +	struct kvm_userspace_memory_region2 region = {
> > > +		.slot = addr / UCONTROL_SLOT_SIZE,
> > > +		.memory_size = UCONTROL_SLOT_SIZE,
> > > +		.guest_phys_addr = ALIGN_DOWN(addr, UCONTROL_SLOT_SIZE),
> > > +		.userspace_addr = ALIGN_DOWN(addr, UCONTROL_SLOT_SIZE),
> > > +	};
> > > +	struct kvm_memory_slot *slot;
> > > +
> > > +	mutex_lock(&kvm->slots_lock);
> > > +	slot = gfn_to_memslot(kvm, addr);
> > > +	if (!slot)
> > > +		__kvm_set_memory_region(kvm, &region);

The return value definitely should be checked, especially if the memory regions
are not KVM-internal, i.e. if userspace is allowed to create memslots.

> > > +	mutex_unlock(&kvm->slots_lock);
> > > +}
> > > +  
> > 
> > Would simply having one slot from 0 to TASK_SIZE also work? This could avoid the
> > construction of the fake slots during runtime.
> 
> unfortunately memslots are limited to 4TiB.
> having bigger ones would require even more changes all across KVM (and
> maybe qemu too)

AFAIK, that limitation exists purely because of dirty bitmaps.  IIUC, these "fake"
memslots are not intended to be visible to userspace, or at the very least don't
*need* to be visible to userspace.

Assuming that's true, they/it can/should be KVM-internal memslots, and those
should never be dirty-logged.  x86 allocates metadata based on slot size, so in
practice creating a mega-slot will never succeed on x86, but the only size
limitation I see in s390 is on arch.mem_limit, but for ucontrol that's set to -1ull,
i.e. is a non-issue.

I have a series (that I need to refresh) to provide a dedicated API for creating
internal memslots, and to also enforce that flags == 0 for internal memslots,
i.e. to enforce that dirty logging is never enabled (see Link below).  With that
I mind, I can't think of any reason to disallow a 0 => TASK_SIZE memslot so long
as it's KVM-defined.

Using a single memslot would hopefully allow s390 to unconditionally carve out a
KVM-internal memslot, i.e. not have to condition the logic on the type of VM.  E.g.

  #define KVM_INTERNAL_MEM_SLOTS 1

  #define KVM_S390_UCONTROL_MEMSLOT (KVM_USER_MEM_SLOTS + 0)

And then I think just this?

---
From: Sean Christopherson <seanjc@google.com>
Date: Fri, 10 Jan 2025 08:05:09 -0800
Subject: [PATCH] KVM: Do not restrict the size of KVM-internal memory regions

Exempt KVM-internal memslots from the KVM_MEM_MAX_NR_PAGES restriction, as
the limit on the number of pages exists purely to play nice with dirty
bitmap operations, which use 32-bit values to index the bitmaps, and dirty
logging isn't supported for KVM-internal memslots.

Link: https://lore.kernel.org/all/20240802205003.353672-6-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 virt/kvm/kvm_main.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 8a0d0d37fb17..3cea406c34db 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1972,7 +1972,15 @@ int __kvm_set_memory_region(struct kvm *kvm,
 		return -EINVAL;
 	if (mem->guest_phys_addr + mem->memory_size < mem->guest_phys_addr)
 		return -EINVAL;
-	if ((mem->memory_size >> PAGE_SHIFT) > KVM_MEM_MAX_NR_PAGES)
+
+	/*
+	 * The size of userspace-defined memory regions is restricted in order
+	 * to play nice with dirty bitmap operations, which are indexed with an
+	 * "unsigned int".  KVM's internal memory regions don't support dirty
+	 * logging, and so are exempt.
+	 */
+	if (id < KVM_USER_MEM_SLOTS &&
+	    (mem->memory_size >> PAGE_SHIFT) > KVM_MEM_MAX_NR_PAGES)
 		return -EINVAL;
 
 	slots = __kvm_memslots(kvm, as_id);

base-commit: 1aadfba8419606d447d1961f25e2d312011ad45a
-- 


* Re: [PATCH v1 02/13] KVM: s390: fake memslots for ucontrol VMs
  2025-01-10 16:22       ` Sean Christopherson
@ 2025-01-10 17:02         ` Claudio Imbrenda
  2025-01-10 17:34           ` Sean Christopherson
  0 siblings, 1 reply; 39+ messages in thread
From: Claudio Imbrenda @ 2025-01-10 17:02 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Christian Borntraeger, kvm, linux-s390, frankja, schlameuss,
	david, willy, hca, svens, agordeev, gor, nrb, nsg

On Fri, 10 Jan 2025 08:22:12 -0800
Sean Christopherson <seanjc@google.com> wrote:

> On Fri, Jan 10, 2025, Claudio Imbrenda wrote:
> > On Fri, 10 Jan 2025 10:31:38 +0100
> > Christian Borntraeger <borntraeger@de.ibm.com> wrote:
> >   
> > > Am 08.01.25 um 19:14 schrieb Claudio Imbrenda:  
> > > > +static void kvm_s390_ucontrol_ensure_memslot(struct kvm *kvm, unsigned long addr)
> > > > +{
> > > > +	struct kvm_userspace_memory_region2 region = {
> > > > +		.slot = addr / UCONTROL_SLOT_SIZE,
> > > > +		.memory_size = UCONTROL_SLOT_SIZE,
> > > > +		.guest_phys_addr = ALIGN_DOWN(addr, UCONTROL_SLOT_SIZE),
> > > > +		.userspace_addr = ALIGN_DOWN(addr, UCONTROL_SLOT_SIZE),
> > > > +	};
> > > > +	struct kvm_memory_slot *slot;
> > > > +
> > > > +	mutex_lock(&kvm->slots_lock);
> > > > +	slot = gfn_to_memslot(kvm, addr);
> > > > +	if (!slot)
> > > > +		__kvm_set_memory_region(kvm, &region);  
> 
> The return value definitely should be checked, especially if the memory regions
> are not KVM-internal, i.e. if userspace is allowed to create memslots.
> 

will fix, unless we do what you propose below

> > > > +	mutex_unlock(&kvm->slots_lock);
> > > > +}
> > > > +    
> > > 
> > > Would simply having one slot from 0 to TASK_SIZE also work? This could avoid the
> > > construction of the fake slots during runtime.  
> > 
> > unfortunately memslots are limited to 4TiB.
> > having bigger ones would require even more changes all across KVM (and
> > maybe qemu too)  
> 
> AFAIK, that limitation exists purely because of dirty bitmaps.  IIUC, these "fake"
> memslots are not intended to be visible to userspace, or at the very least don't
> *need* to be visible to userspace.
> 
> Assuming that's true, they/it can/should be KVM-internal memslots, and those
> should never be dirty-logged.  x86 allocates metadata based on slot size, so in
> practice creating a mega-slot will never succeed on x86, but the only size
> limitation I see in s390 is on arch.mem_limit, but for ucontrol that's set to -1ull,
> i.e. is a non-issue.
> 
> I have a series (that I need to refresh) to provide a dedicated API for creating
> internal memslots, and to also enforce that flags == 0 for internal memslots,
i.e. to enforce that dirty logging is never enabled (see Link below).  With that
in mind, I can't think of any reason to disallow a 0 => TASK_SIZE memslot so long
> as it's KVM-defined.
> 
> Using a single memslot would hopefully allow s390 to unconditionally carve out a
> KVM-internal memslot, i.e. not have to condition the logic on the type of VM.  E.g.

yes, I would love that

the reason why I did not use internal memslots is that I would have
potentially needed *all* the memslots for ucontrol, and instead of
reserving, say, half of all memslots, I decided to have them
user-visible, which is a hack I honestly don't like.

do you think you can refresh the series before the upcoming merge
window?

otherwise I should split this series in two, since page->index needs to
be removed asap.

> 
>   #define KVM_INTERNAL_MEM_SLOTS 1
> 
>   #define KVM_S390_UCONTROL_MEMSLOT (KVM_USER_MEM_SLOTS + 0)
> 
> And then I think just this?
> 
> ---
> From: Sean Christopherson <seanjc@google.com>
> Date: Fri, 10 Jan 2025 08:05:09 -0800
> Subject: [PATCH] KVM: Do not restrict the size of KVM-internal memory regions
> 
> Exempt KVM-internal memslots from the KVM_MEM_MAX_NR_PAGES restriction, as
> the limit on the number of pages exists purely to play nice with dirty
> bitmap operations, which use 32-bit values to index the bitmaps, and dirty
> logging isn't supported for KVM-internal memslots.
> 
> Link: https://lore.kernel.org/all/20240802205003.353672-6-seanjc@google.com
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  virt/kvm/kvm_main.c | 10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)
> 
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 8a0d0d37fb17..3cea406c34db 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -1972,7 +1972,15 @@ int __kvm_set_memory_region(struct kvm *kvm,
>  		return -EINVAL;
>  	if (mem->guest_phys_addr + mem->memory_size < mem->guest_phys_addr)
>  		return -EINVAL;
> -	if ((mem->memory_size >> PAGE_SHIFT) > KVM_MEM_MAX_NR_PAGES)
> +
> +	/*
> +	 * The size of userspace-defined memory regions is restricted in order
> +	 * to play nice with dirty bitmap operations, which are indexed with an
> +	 * "unsigned int".  KVM's internal memory regions don't support dirty
> +	 * logging, and so are exempt.
> +	 */
> +	if (id < KVM_USER_MEM_SLOTS &&
> +	    (mem->memory_size >> PAGE_SHIFT) > KVM_MEM_MAX_NR_PAGES)
>  		return -EINVAL;
>  
>  	slots = __kvm_memslots(kvm, as_id);
> 
> base-commit: 1aadfba8419606d447d1961f25e2d312011ad45a



* Re: [PATCH v1 02/13] KVM: s390: fake memslots for ucontrol VMs
  2025-01-10 17:02         ` Claudio Imbrenda
@ 2025-01-10 17:34           ` Sean Christopherson
  2025-01-10 17:43             ` Claudio Imbrenda
  0 siblings, 1 reply; 39+ messages in thread
From: Sean Christopherson @ 2025-01-10 17:34 UTC (permalink / raw)
  To: Claudio Imbrenda
  Cc: Christian Borntraeger, kvm, linux-s390, frankja, schlameuss,
	david, willy, hca, svens, agordeev, gor, nrb, nsg

On Fri, Jan 10, 2025, Claudio Imbrenda wrote:
> On Fri, 10 Jan 2025 08:22:12 -0800
> Sean Christopherson <seanjc@google.com> wrote:
> > AFAIK, that limitation exists purely because of dirty bitmaps.  IIUC, these "fake"
> > memslots are not intended to be visible to userspace, or at the very least don't
> > *need* to be visible to userspace.
> > 
> > Assuming that's true, they/it can/should be KVM-internal memslots, and those
> > should never be dirty-logged.  x86 allocates metadata based on slot size, so in
> > practice creating a mega-slot will never succeed on x86, but the only size
> > limitation I see in s390 is on arch.mem_limit, but for ucontrol that's set to -1ull,
> > i.e. is a non-issue.
> > 
> > I have a series (that I need to refresh) to provide a dedicated API for creating
> > internal memslots, and to also enforce that flags == 0 for internal memslots,
> > i.e. to enforce that dirty logging is never enabled (see Link below).  With that
> > in mind, I can't think of any reason to disallow a 0 => TASK_SIZE memslot so long
> > as it's KVM-defined.
> > 
> > Using a single memslot would hopefully allow s390 to unconditionally carve out a
> > KVM-internal memslot, i.e. not have to condition the logic on the type of VM.  E.g.
> 
> yes, I would love that
> 
> the reason why I did not use internal memslots is that I would have
> potentially needed *all* the memslots for ucontrol, and instead of
> reserving, say, half of all memslots, I decided to have them
> user-visible, which is a hack I honestly don't like.
> 
> do you think you can refresh the series before the upcoming merge
> window?

Ya, I'll refresh it today, and then I can apply it early next week and provide
an immutable topic branch/tag.

My thought is to have you carry the below in the s390 series though, as I don't
have a way to properly test it, and I'd prefer to avoid having to do a revert on
the off chance removing the limit doesn't work for ucontrol.


* Re: [PATCH v1 02/13] KVM: s390: fake memslots for ucontrol VMs
  2025-01-10 17:34           ` Sean Christopherson
@ 2025-01-10 17:43             ` Claudio Imbrenda
  0 siblings, 0 replies; 39+ messages in thread
From: Claudio Imbrenda @ 2025-01-10 17:43 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Christian Borntraeger, kvm, linux-s390, frankja, schlameuss,
	david, willy, hca, svens, agordeev, gor, nrb, nsg

On Fri, 10 Jan 2025 09:34:49 -0800
Sean Christopherson <seanjc@google.com> wrote:

> On Fri, Jan 10, 2025, Claudio Imbrenda wrote:
> > On Fri, 10 Jan 2025 08:22:12 -0800
> > Sean Christopherson <seanjc@google.com> wrote:  
> > > AFAIK, that limitation exists purely because of dirty bitmaps.  IIUC, these "fake"
> > > memslots are not intended to be visible to userspace, or at the very least don't
> > > *need* to be visible to userspace.
> > > 
> > > Assuming that's true, they/it can/should be KVM-internal memslots, and those
> > > should never be dirty-logged.  x86 allocates metadata based on slot size, so in
> > > practice creating a mega-slot will never succeed on x86, but the only size
> > > limitation I see in s390 is on arch.mem_limit, but for ucontrol that's set to -1ull,
> > > i.e. is a non-issue.
> > > 
> > > I have a series (that I need to refresh) to provide a dedicated API for creating
> > > internal memslots, and to also enforce that flags == 0 for internal memslots,
> > > i.e. to enforce that dirty logging is never enabled (see Link below).  With that
> > > in mind, I can't think of any reason to disallow a 0 => TASK_SIZE memslot so long
> > > as it's KVM-defined.
> > > 
> > > Using a single memslot would hopefully allow s390 to unconditionally carve out a
> > > KVM-internal memslot, i.e. not have to condition the logic on the type of VM.  E.g.  
> > 
> > yes, I would love that
> > 
> > the reason why I did not use internal memslots is that I would have
> > potentially needed *all* the memslots for ucontrol, and instead of
> > reserving, say, half of all memslots, I decided to have them
> > user-visible, which is a hack I honestly don't like.
> > 
> > do you think you can refresh the series before the upcoming merge
> > window?  
> 
> Ya, I'll refresh it today, and then I can apply it early next week and provide

excellent, thanks!

> an immutable topic branch/tag.
> 
> My thought is to have you carry the below in the s390 series though, as I don't

sure

> have a way to properly test it, and I'd prefer to avoid having to do a revert on
> the off chance removing the limit doesn't work for ucontrol.

makes sense, yes


* Re: [PATCH v1 03/13] KVM: s390: use __kvm_faultin_pfn()
  2025-01-08 18:14 ` [PATCH v1 03/13] KVM: s390: use __kvm_faultin_pfn() Claudio Imbrenda
@ 2025-01-14 17:34   ` Christoph Schlameuss
  2025-01-14 17:56     ` Claudio Imbrenda
  0 siblings, 1 reply; 39+ messages in thread
From: Christoph Schlameuss @ 2025-01-14 17:34 UTC (permalink / raw)
  To: Claudio Imbrenda, kvm
  Cc: linux-s390, frankja, borntraeger, david, willy, hca, svens,
	agordeev, gor, nrb, nsg

On Wed Jan 8, 2025 at 7:14 PM CET, Claudio Imbrenda wrote:
> Refactor the existing page fault handling code to use __kvm_faultin_pfn().
>
> This is possible now that memslots are always present.
>
> Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
> ---
>  arch/s390/kvm/kvm-s390.c | 92 +++++++++++++++++++++++++++++++---------
>  arch/s390/mm/gmap.c      |  1 +
>  2 files changed, 73 insertions(+), 20 deletions(-)

With nits resolved:

Reviewed-by: Christoph Schlameuss <schlameuss@linux.ibm.com>

>
> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> index 797b8503c162..8e4e7e45238b 100644
> --- a/arch/s390/kvm/kvm-s390.c
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -4794,11 +4794,66 @@ static void kvm_s390_assert_primary_as(struct kvm_vcpu *vcpu)
>  		current->thread.gmap_int_code, current->thread.gmap_teid.val);
>  }
>  
> +static int kvm_s390_handle_dat_fault(struct kvm_vcpu *vcpu, gfn_t gfn, gpa_t gaddr,
> +				     unsigned int flags)
> +{
> +	struct kvm_memory_slot *slot;
> +	unsigned int fault_flags;
> +	bool writable, unlocked;
> +	unsigned long vmaddr;
> +	struct page *page;
> +	kvm_pfn_t pfn;
> +	int rc;
> +
> +	slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
> +	if (!slot || slot->flags & KVM_MEMSLOT_INVALID)
> +		return vcpu_post_run_addressing_exception(vcpu);
> +
> +	fault_flags = flags & FOLL_WRITE ? FAULT_FLAG_WRITE : 0;
> +	if (vcpu->arch.gmap->pfault_enabled)
> +		flags |= FOLL_NOWAIT;
> +	vmaddr = __gfn_to_hva_memslot(slot, gfn);
> +
> +try_again:
> +	pfn = __kvm_faultin_pfn(slot, gfn, flags, &writable, &page);
> +
> +	/* Access outside memory, inject addressing exception */
> +	if (is_noslot_pfn(pfn))
> +		return vcpu_post_run_addressing_exception(vcpu);
> +	/* Signal pending: try again */
> +	if (pfn == KVM_PFN_ERR_SIGPENDING)
> +		return -EAGAIN;
> +
> +	/* Needs I/O, try to setup async pfault (only possible with FOLL_NOWAIT) */
> +	if (pfn == KVM_PFN_ERR_NEEDS_IO) {
> +		trace_kvm_s390_major_guest_pfault(vcpu);
> +		if (kvm_arch_setup_async_pf(vcpu))
> +			return 0;
> +		vcpu->stat.pfault_sync++;
> +		/* Could not setup async pfault, try again synchronously */
> +		flags &= ~FOLL_NOWAIT;
> +		goto try_again;
> +	}
> +	/* Any other error */
> +	if (is_error_pfn(pfn))
> +		return -EFAULT;
> +
> +	/* Success */
> +	mmap_read_lock(vcpu->arch.gmap->mm);
> +	/* Mark the userspace PTEs as young and/or dirty, to avoid page fault loops */
> +	rc = fixup_user_fault(vcpu->arch.gmap->mm, vmaddr, fault_flags, &unlocked);
> +	if (!rc)
> +		rc = __gmap_link(vcpu->arch.gmap, gaddr, vmaddr);
> +	kvm_release_faultin_page(vcpu->kvm, page, false, writable);
> +	mmap_read_unlock(vcpu->arch.gmap->mm);
> +	return rc;
> +}
> +
>  static int vcpu_post_run_handle_fault(struct kvm_vcpu *vcpu)
>  {
> +	unsigned long gaddr, gaddr_tmp;
>  	unsigned int flags = 0;
> -	unsigned long gaddr;
> -	int rc = 0;
> +	gfn_t gfn;
>  
>  	gaddr = current->thread.gmap_teid.addr * PAGE_SIZE;
>  	if (kvm_s390_cur_gmap_fault_is_write())
> @@ -4850,29 +4905,26 @@ static int vcpu_post_run_handle_fault(struct kvm_vcpu *vcpu)
>  	case PGM_REGION_SECOND_TRANS:
>  	case PGM_REGION_THIRD_TRANS:
>  		kvm_s390_assert_primary_as(vcpu);
> -		if (vcpu->arch.gmap->pfault_enabled) {
> -			rc = gmap_fault(vcpu->arch.gmap, gaddr, flags | FAULT_FLAG_RETRY_NOWAIT);
> -			if (rc == -EFAULT)
> -				return vcpu_post_run_addressing_exception(vcpu);
> -			if (rc == -EAGAIN) {
> -				trace_kvm_s390_major_guest_pfault(vcpu);
> -				if (kvm_arch_setup_async_pf(vcpu))
> -					return 0;
> -				vcpu->stat.pfault_sync++;
> -			} else {
> -				return rc;
> -			}
> -		}
> -		rc = gmap_fault(vcpu->arch.gmap, gaddr, flags);
> -		if (rc == -EFAULT) {
> -			if (kvm_is_ucontrol(vcpu->kvm)) {
> +
> +		gfn = gpa_to_gfn(gaddr);
> +		if (kvm_is_ucontrol(vcpu->kvm)) {
> +			/*
> +			 * This translates the per-vCPU guest address into a
> +			 * fake guest address, which can then be used with the
> +			 * fake memslots that are identity mapping userspace.
> +			 * This allows ucontrol VMs to use the normal fault
> +			 * resolution path, like normal VMs.
> +			 */
> +			gaddr_tmp = gmap_translate(vcpu->arch.gmap, gaddr);
> +			if (gaddr_tmp == -EFAULT) {
>  				vcpu->run->exit_reason = KVM_EXIT_S390_UCONTROL;
>  				vcpu->run->s390_ucontrol.trans_exc_code = gaddr;
>  				vcpu->run->s390_ucontrol.pgm_code = 0x10;

nit: s/0x10/PGM_SEGMENT_TRANSLATION/

>  				return -EREMOTE;
>  			}
> -			return vcpu_post_run_addressing_exception(vcpu);
> +			gfn = gpa_to_gfn(gaddr_tmp);
>  		}
> +		return kvm_s390_handle_dat_fault(vcpu, gfn, gaddr, flags);
>  		break;

nit: Remove the break after the return here?

>  	default:
>  		KVM_BUG(1, vcpu->kvm, "Unexpected program interrupt 0x%x, TEID 0x%016lx",
> @@ -4880,7 +4932,7 @@ static int vcpu_post_run_handle_fault(struct kvm_vcpu *vcpu)
>  		send_sig(SIGSEGV, current, 0);
>  		break;
>  	}
> -	return rc;
> +	return 0;
>  }
>  
>  static int vcpu_post_run(struct kvm_vcpu *vcpu, int exit_reason)
> diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
> index 16b8a36c56de..3aacef77c174 100644
> --- a/arch/s390/mm/gmap.c
> +++ b/arch/s390/mm/gmap.c
> @@ -605,6 +605,7 @@ int __gmap_link(struct gmap *gmap, unsigned long gaddr, unsigned long vmaddr)
>  	radix_tree_preload_end();
>  	return rc;
>  }
> +EXPORT_SYMBOL(__gmap_link);
>  
>  /**
>   * fixup_user_fault_nowait - manually resolve a user page fault without waiting



* Re: [PATCH v1 03/13] KVM: s390: use __kvm_faultin_pfn()
  2025-01-14 17:34   ` Christoph Schlameuss
@ 2025-01-14 17:56     ` Claudio Imbrenda
  0 siblings, 0 replies; 39+ messages in thread
From: Claudio Imbrenda @ 2025-01-14 17:56 UTC (permalink / raw)
  To: Christoph Schlameuss
  Cc: kvm, linux-s390, frankja, borntraeger, david, willy, hca, svens,
	agordeev, gor, nrb, nsg

On Tue, 14 Jan 2025 18:34:13 +0100
"Christoph Schlameuss" <schlameuss@linux.ibm.com> wrote:

> On Wed Jan 8, 2025 at 7:14 PM CET, Claudio Imbrenda wrote:
> > Refactor the existing page fault handling code to use __kvm_faultin_pfn().
> >
> > This is possible now that memslots are always present.
> >
> > Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
> > ---
> >  arch/s390/kvm/kvm-s390.c | 92 +++++++++++++++++++++++++++++++---------
> >  arch/s390/mm/gmap.c      |  1 +
> >  2 files changed, 73 insertions(+), 20 deletions(-)  
> 
> With nits resolved:
> 
> Reviewed-by: Christoph Schlameuss <schlameuss@linux.ibm.com>

I'll send a v2 with some more substantial differences, so maybe you
will want to look at that first :) 

[...]

> > +		gfn = gpa_to_gfn(gaddr);
> > +		if (kvm_is_ucontrol(vcpu->kvm)) {
> > +			/*
> > +			 * This translates the per-vCPU guest address into a
> > +			 * fake guest address, which can then be used with the
> > +			 * fake memslots that are identity mapping userspace.
> > +			 * This allows ucontrol VMs to use the normal fault
> > +			 * resolution path, like normal VMs.
> > +			 */
> > +			gaddr_tmp = gmap_translate(vcpu->arch.gmap, gaddr);
> > +			if (gaddr_tmp == -EFAULT) {
> >  				vcpu->run->exit_reason = KVM_EXIT_S390_UCONTROL;
> >  				vcpu->run->s390_ucontrol.trans_exc_code = gaddr;
> >  				vcpu->run->s390_ucontrol.pgm_code = 0x10;  
> 
> nit: s/0x10/PGM_SEGMENT_TRANSLATION/

the original code has 0x10, I wanted to keep it as it is, but I'll
change it, since I'm refactoring more code in v2

> 
> >  				return -EREMOTE;
> >  			}
> > -			return vcpu_post_run_addressing_exception(vcpu);
> > +			gfn = gpa_to_gfn(gaddr_tmp);
> >  		}
> > +		return kvm_s390_handle_dat_fault(vcpu, gfn, gaddr, flags);
> >  		break;  
> 
> nit: Remove the break after the return here?

yes

> 
> >  	default:
> >  		KVM_BUG(1, vcpu->kvm, "Unexpected program interrupt 0x%x, TEID 0x%016lx",
> > @@ -4880,7 +4932,7 @@ static int vcpu_post_run_handle_fault(struct kvm_vcpu *vcpu)
> >  		send_sig(SIGSEGV, current, 0);
> >  		break;
> >  	}
> > -	return rc;
> > +	return 0;
> >  }
> >  
> >  static int vcpu_post_run(struct kvm_vcpu *vcpu, int exit_reason)
> > diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
> > index 16b8a36c56de..3aacef77c174 100644
> > --- a/arch/s390/mm/gmap.c
> > +++ b/arch/s390/mm/gmap.c
> > @@ -605,6 +605,7 @@ int __gmap_link(struct gmap *gmap, unsigned long gaddr, unsigned long vmaddr)
> >  	radix_tree_preload_end();
> >  	return rc;
> >  }
> > +EXPORT_SYMBOL(__gmap_link);
> >  
> >  /**
> >   * fixup_user_fault_nowait - manually resolve a user page fault without waiting  
> 



* Re: [PATCH v1 07/13] KVM: s390: move some gmap shadowing functions away from mm/gmap.c
  2025-01-08 18:14 ` [PATCH v1 07/13] KVM: s390: move some gmap shadowing functions away from mm/gmap.c Claudio Imbrenda
@ 2025-01-15  8:56   ` Janosch Frank
  2025-01-15 10:20     ` Claudio Imbrenda
  0 siblings, 1 reply; 39+ messages in thread
From: Janosch Frank @ 2025-01-15  8:56 UTC (permalink / raw)
  To: Claudio Imbrenda, kvm
  Cc: linux-s390, borntraeger, schlameuss, david, willy, hca, svens,
	agordeev, gor, nrb, nsg

On 1/8/25 7:14 PM, Claudio Imbrenda wrote:
> Move some gmap shadowing functions from mm/gmap.c to kvm/vsie.c and
> kvm/kvm-s390.c .
> 

Why though?

I don't really want to have gmap code in vsie.c
If you want to add a new mm/gmap-vsie.c then do so but I don't see a 
need to move gmap code from mm to kvm and you give no explanation 
whatsoever.

Maybe add a vsie-gmap.c in kvm but I'm not so thrilled about that for 
the reasons mentioned above.


* Re: [PATCH v1 09/13] KVM: s390: stop using lists to keep track of used dat tables
  2025-01-08 18:14 ` [PATCH v1 09/13] KVM: s390: stop using lists to keep track of used dat tables Claudio Imbrenda
@ 2025-01-15  9:01   ` Janosch Frank
  0 siblings, 0 replies; 39+ messages in thread
From: Janosch Frank @ 2025-01-15  9:01 UTC (permalink / raw)
  To: Claudio Imbrenda, kvm
  Cc: linux-s390, borntraeger, schlameuss, david, willy, hca, svens,
	agordeev, gor, nrb, nsg

On 1/8/25 7:14 PM, Claudio Imbrenda wrote:
> Until now, every dat table allocated to map a guest was put in a
> linked list. The page->lru field of struct page was used to keep track
> of which pages were being used, and when the gmap is torn down, the
> list was walked and all pages freed.
> 
> This patch gets rid of the usage of page->lru. Page table are now freed
> by recursively walking the dat table tree.
> 
> Since s390_unlist_old_asce() becomes useless now, remove it.
> 
> Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>

The lists and page indexes are likely a huge part of the vsie problems 
we're currently experiencing. I'm hopeful that this series resolves 
those issues and makes new additions less painful.

Nice work and thanks for tackling this!


* Re: [PATCH v1 07/13] KVM: s390: move some gmap shadowing functions away from mm/gmap.c
  2025-01-15  8:56   ` Janosch Frank
@ 2025-01-15 10:20     ` Claudio Imbrenda
  2025-01-15 11:48       ` Janosch Frank
  0 siblings, 1 reply; 39+ messages in thread
From: Claudio Imbrenda @ 2025-01-15 10:20 UTC (permalink / raw)
  To: Janosch Frank
  Cc: kvm, linux-s390, borntraeger, schlameuss, david, willy, hca,
	svens, agordeev, gor, nrb, nsg

On Wed, 15 Jan 2025 09:56:11 +0100
Janosch Frank <frankja@linux.ibm.com> wrote:

> On 1/8/25 7:14 PM, Claudio Imbrenda wrote:
> > Move some gmap shadowing functions from mm/gmap.c to kvm/vsie.c and
> > kvm/kvm-s390.c .
> >   
> 
> Why though?

to start removing stuff from mm

> 
> I don't really want to have gmap code in vsie.c
> If you want to add a new mm/gmap-vsie.c then do so but I don't see a 

will do

> need to move gmap code from mm to kvm and you give no explanation 
> whatsoever.

the goal is to remove gmap from mm, as mentioned in the cover letter

> 
> Maybe add a vsie-gmap.c in kvm but I'm not so thrilled about that for 
> the reasons mentioned above.



* Re: [PATCH v1 07/13] KVM: s390: move some gmap shadowing functions away from mm/gmap.c
  2025-01-15 10:20     ` Claudio Imbrenda
@ 2025-01-15 11:48       ` Janosch Frank
  0 siblings, 0 replies; 39+ messages in thread
From: Janosch Frank @ 2025-01-15 11:48 UTC (permalink / raw)
  To: Claudio Imbrenda
  Cc: kvm, linux-s390, borntraeger, schlameuss, david, willy, hca,
	svens, agordeev, gor, nrb, nsg

On 1/15/25 11:20 AM, Claudio Imbrenda wrote:
> On Wed, 15 Jan 2025 09:56:11 +0100
> Janosch Frank <frankja@linux.ibm.com> wrote:
> 
>> On 1/8/25 7:14 PM, Claudio Imbrenda wrote:
>>> Move some gmap shadowing functions from mm/gmap.c to kvm/vsie.c and
>>> kvm/kvm-s390.c .
>>>    
>>
>> Why though?
> 
> to start removing stuff from mm

Alright, let me be more specific below.

> 
>>
>> I don't really want to have gmap code in vsie.c
>> If you want to add a new mm/gmap-vsie.c then do so but I don't see a
> 
> will do

kvm-s390.c is quite long already and the fault code in vsie.c has been a 
thorn in my side for a couple of years as well since I have to jump 
between gmap.c and vsie.c and gaccess.c when reading.

I don't mind putting the gmap.c code in the kvm/ dir but I don't want it 
spread out over all of the kvm/*.c files and especially not over the two 
main files.

We have gaccess.c which is memory centric.
Add a new file or two (since gmap.c is quite long and shadow code can be 
split out) to kvm/.

> 
>> need to move gmap code from mm to kvm and you give no explanation
>> whatsoever.
> 
> the goal is to remove gmap from mm, as mentioned in the cover letter

Right, but that cover letter will never reach git.
Also, why is that the goal? The cover letter does not tell me the reason 
why you want to move that code.



* Re: [PATCH v1 13/13] KVM: s390: remove the last user of page->index
  2025-01-08 18:14 ` [PATCH v1 13/13] KVM: s390: remove the last user of page->index Claudio Imbrenda
@ 2025-01-15 12:17   ` Janosch Frank
  2025-01-15 12:23     ` Claudio Imbrenda
  2025-01-20  9:43     ` David Hildenbrand
  0 siblings, 2 replies; 39+ messages in thread
From: Janosch Frank @ 2025-01-15 12:17 UTC (permalink / raw)
  To: Claudio Imbrenda, kvm
  Cc: linux-s390, borntraeger, schlameuss, david, willy, hca, svens,
	agordeev, gor, nrb, nsg

On 1/8/25 7:14 PM, Claudio Imbrenda wrote:
> Shadow page tables use page->index to keep the g2 address of the guest
> page table being shadowed.
> 
> Instead of keeping the information in page->index, split the address
> and smear it over the 16-bit softbits areas of 4 PGSTEs.
> 
> This removes the last s390 user of page->index.
> 
> Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
> ---
>   arch/s390/include/asm/gmap.h    |  1 +
>   arch/s390/include/asm/pgtable.h | 15 +++++++++++++++
>   arch/s390/kvm/gaccess.c         |  6 ++++--
>   arch/s390/mm/gmap.c             | 22 ++++++++++++++++++++--
>   4 files changed, 40 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/s390/include/asm/gmap.h b/arch/s390/include/asm/gmap.h
> index 5ebc65ac78cc..28c5bf097268 100644
> --- a/arch/s390/include/asm/gmap.h
> +++ b/arch/s390/include/asm/gmap.h
> @@ -177,4 +177,5 @@ static inline int s390_uv_destroy_range_interruptible(struct mm_struct *mm, unsi
>   {
>   	return __s390_uv_destroy_range(mm, start, end, true);
>   }
> +

Stray \n

>   #endif /* _ASM_S390_GMAP_H */
> diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
> index 151488bb9ed7..948100a8fa7e 100644
> --- a/arch/s390/include/asm/pgtable.h
> +++ b/arch/s390/include/asm/pgtable.h
> @@ -419,6 +419,7 @@ static inline int is_module_addr(void *addr)
>   #define PGSTE_HC_BIT	0x0020000000000000UL
>   #define PGSTE_GR_BIT	0x0004000000000000UL
>   #define PGSTE_GC_BIT	0x0002000000000000UL
> +#define PGSTE_ST2_MASK	0x0000ffff00000000UL
>   #define PGSTE_UC_BIT	0x0000000000008000UL	/* user dirty (migration) */
>   #define PGSTE_IN_BIT	0x0000000000004000UL	/* IPTE notify bit */
>   #define PGSTE_VSIE_BIT	0x0000000000002000UL	/* ref'd in a shadow table */
> @@ -2001,4 +2002,18 @@ extern void s390_reset_cmma(struct mm_struct *mm);
>   #define pmd_pgtable(pmd) \
>   	((pgtable_t)__va(pmd_val(pmd) & -sizeof(pte_t)*PTRS_PER_PTE))
>   
> +static inline unsigned long gmap_pgste_get_index(unsigned long *pgt)
> +{
> +	unsigned long *pgstes, res;
> +
> +	pgstes = pgt + _PAGE_ENTRIES;
> +
> +	res = (pgstes[0] & PGSTE_ST2_MASK) << 16;
> +	res |= pgstes[1] & PGSTE_ST2_MASK;
> +	res |= (pgstes[2] & PGSTE_ST2_MASK) >> 16;
> +	res |= (pgstes[3] & PGSTE_ST2_MASK) >> 32;
> +
> +	return res;
> +}

I have to think about that change for a bit before I post an opinion.


* Re: [PATCH v1 13/13] KVM: s390: remove the last user of page->index
  2025-01-15 12:17   ` Janosch Frank
@ 2025-01-15 12:23     ` Claudio Imbrenda
  2025-01-20  9:43     ` David Hildenbrand
  1 sibling, 0 replies; 39+ messages in thread
From: Claudio Imbrenda @ 2025-01-15 12:23 UTC (permalink / raw)
  To: Janosch Frank
  Cc: kvm, linux-s390, borntraeger, schlameuss, david, willy, hca,
	svens, agordeev, gor, nrb, nsg

On Wed, 15 Jan 2025 13:17:01 +0100
Janosch Frank <frankja@linux.ibm.com> wrote:

> On 1/8/25 7:14 PM, Claudio Imbrenda wrote:
> > Shadow page tables use page->index to keep the g2 address of the guest
> > page table being shadowed.
> > 
> > Instead of keeping the information in page->index, split the address
> > and smear it over the 16-bit softbits areas of 4 PGSTEs.
> > 
> > This removes the last s390 user of page->index.
> > 
> > Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
> > ---
> >   arch/s390/include/asm/gmap.h    |  1 +
> >   arch/s390/include/asm/pgtable.h | 15 +++++++++++++++
> >   arch/s390/kvm/gaccess.c         |  6 ++++--
> >   arch/s390/mm/gmap.c             | 22 ++++++++++++++++++++--
> >   4 files changed, 40 insertions(+), 4 deletions(-)
> > 
> > diff --git a/arch/s390/include/asm/gmap.h b/arch/s390/include/asm/gmap.h
> > index 5ebc65ac78cc..28c5bf097268 100644
> > --- a/arch/s390/include/asm/gmap.h
> > +++ b/arch/s390/include/asm/gmap.h
> > @@ -177,4 +177,5 @@ static inline int s390_uv_destroy_range_interruptible(struct mm_struct *mm, unsi
> >   {
> >   	return __s390_uv_destroy_range(mm, start, end, true);
> >   }
> > +  
> 
> Stray \n

yep, I had already noticed it myself (of course _after_ sending the
series)

> 
> >   #endif /* _ASM_S390_GMAP_H */
> > diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
> > index 151488bb9ed7..948100a8fa7e 100644
> > --- a/arch/s390/include/asm/pgtable.h
> > +++ b/arch/s390/include/asm/pgtable.h
> > @@ -419,6 +419,7 @@ static inline int is_module_addr(void *addr)
> >   #define PGSTE_HC_BIT	0x0020000000000000UL
> >   #define PGSTE_GR_BIT	0x0004000000000000UL
> >   #define PGSTE_GC_BIT	0x0002000000000000UL
> > +#define PGSTE_ST2_MASK	0x0000ffff00000000UL
> >   #define PGSTE_UC_BIT	0x0000000000008000UL	/* user dirty (migration) */
> >   #define PGSTE_IN_BIT	0x0000000000004000UL	/* IPTE notify bit */
> >   #define PGSTE_VSIE_BIT	0x0000000000002000UL	/* ref'd in a shadow table */
> > @@ -2001,4 +2002,18 @@ extern void s390_reset_cmma(struct mm_struct *mm);
> >   #define pmd_pgtable(pmd) \
> >   	((pgtable_t)__va(pmd_val(pmd) & -sizeof(pte_t)*PTRS_PER_PTE))
> >   
> > +static inline unsigned long gmap_pgste_get_index(unsigned long *pgt)
> > +{
> > +	unsigned long *pgstes, res;
> > +
> > +	pgstes = pgt + _PAGE_ENTRIES;
> > +
> > +	res = (pgstes[0] & PGSTE_ST2_MASK) << 16;
> > +	res |= pgstes[1] & PGSTE_ST2_MASK;
> > +	res |= (pgstes[2] & PGSTE_ST2_MASK) >> 16;
> > +	res |= (pgstes[3] & PGSTE_ST2_MASK) >> 32;
> > +
> > +	return res;
> > +}  
> 
> I have to think about that change for a bit before I post an opinion.

it's not pretty, but it can (and will, in upcoming patches) be
generalized to hold arbitrary data in the PGSTEs (up to 512 bytes per
page table)
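For reference, the encoding in the quoted gmap_pgste_get_index() can be exercised in a small user-space sketch. The mask and entry-count values are copied from the patch; pgste_set_index() is a hypothetical inverse written here for illustration only, not a function from the series:

```c
#include <stdint.h>

#define _PAGE_ENTRIES   256                   /* PTEs in one s390 page table */
#define PGSTE_ST2_MASK  0x0000ffff00000000UL  /* the 16 "ST2" softbits */

/* Hypothetical inverse: smear a 64-bit value over the ST2 fields of the
 * first four PGSTEs, which sit right after the 256 PTEs. */
static void pgste_set_index(uint64_t *pgt, uint64_t val)
{
	uint64_t *pgstes = pgt + _PAGE_ENTRIES;

	pgstes[0] |= (val >> 16) & PGSTE_ST2_MASK;  /* bits 63..48 */
	pgstes[1] |= val & PGSTE_ST2_MASK;          /* bits 47..32 */
	pgstes[2] |= (val << 16) & PGSTE_ST2_MASK;  /* bits 31..16 */
	pgstes[3] |= (val << 32) & PGSTE_ST2_MASK;  /* bits 15..0  */
}

/* Same logic as gmap_pgste_get_index() in the quoted patch. */
static uint64_t pgste_get_index(uint64_t *pgt)
{
	uint64_t *pgstes = pgt + _PAGE_ENTRIES, res;

	res  = (pgstes[0] & PGSTE_ST2_MASK) << 16;
	res |=  pgstes[1] & PGSTE_ST2_MASK;
	res |= (pgstes[2] & PGSTE_ST2_MASK) >> 16;
	res |= (pgstes[3] & PGSTE_ST2_MASK) >> 32;
	return res;
}
```

The four 16-bit fields recombine into the full 64-bit g2 address; since all 256 PGSTEs carry a 16-bit ST2 field, that works out to the 512 bytes per page table mentioned above.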


* Re: [PATCH v1 04/13] KVM: s390: move pv gmap functions into kvm
  2025-01-08 18:14 ` [PATCH v1 04/13] KVM: s390: move pv gmap functions into kvm Claudio Imbrenda
  2025-01-09 17:42   ` kernel test robot
@ 2025-01-15 12:48   ` Janosch Frank
  2025-01-15 12:59     ` Claudio Imbrenda
  1 sibling, 1 reply; 39+ messages in thread
From: Janosch Frank @ 2025-01-15 12:48 UTC (permalink / raw)
  To: Claudio Imbrenda, kvm
  Cc: linux-s390, borntraeger, schlameuss, david, willy, hca, svens,
	agordeev, gor, nrb, nsg

On 1/8/25 7:14 PM, Claudio Imbrenda wrote:
> Move gmap related functions from kernel/uv into kvm.
> 
> Create a new file to collect gmap-related functions.
> 
> Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
> ---
>   arch/s390/include/asm/uv.h |   7 +-
>   arch/s390/kernel/uv.c      | 293 ++++++-------------------------------
>   arch/s390/kvm/Makefile     |   2 +-
>   arch/s390/kvm/gmap.c       | 183 +++++++++++++++++++++++
>   arch/s390/kvm/gmap.h       |  17 +++
>   arch/s390/kvm/intercept.c  |   1 +
>   arch/s390/kvm/kvm-s390.c   |   1 +
>   arch/s390/kvm/pv.c         |   1 +
>   8 files changed, 251 insertions(+), 254 deletions(-)
>   create mode 100644 arch/s390/kvm/gmap.c
>   create mode 100644 arch/s390/kvm/gmap.h
> 
> diff --git a/arch/s390/include/asm/uv.h b/arch/s390/include/asm/uv.h
> index dc332609f2c3..22ec1a24c291 100644
> --- a/arch/s390/include/asm/uv.h
> +++ b/arch/s390/include/asm/uv.h
> @@ -628,12 +628,13 @@ static inline int is_prot_virt_host(void)
>   }
>   
>   int uv_pin_shared(unsigned long paddr);
> -int gmap_make_secure(struct gmap *gmap, unsigned long gaddr, void *uvcb);
> -int gmap_destroy_page(struct gmap *gmap, unsigned long gaddr);
>   int uv_destroy_folio(struct folio *folio);
>   int uv_destroy_pte(pte_t pte);
>   int uv_convert_from_secure_pte(pte_t pte);
> -int gmap_convert_to_secure(struct gmap *gmap, unsigned long gaddr);
> +int uv_wiggle_folio(struct folio *folio, bool split);
> +int make_folio_secure(struct folio *folio, struct uv_cb_header *uvcb);
> +int uv_convert_from_secure(unsigned long paddr);
> +int uv_convert_from_secure_folio(struct folio *folio);
>   
>   void setup_uv(void);
>   
> diff --git a/arch/s390/kernel/uv.c b/arch/s390/kernel/uv.c
> index 6f9654a191ad..832c39c9ccfa 100644
> --- a/arch/s390/kernel/uv.c
> +++ b/arch/s390/kernel/uv.c
> @@ -19,19 +19,6 @@
>   #include <asm/sections.h>
>   #include <asm/uv.h>
>   
> -#if !IS_ENABLED(CONFIG_KVM)
> -unsigned long __gmap_translate(struct gmap *gmap, unsigned long gaddr)
> -{
> -	return 0;
> -}
> -
> -int gmap_fault(struct gmap *gmap, unsigned long gaddr,
> -	       unsigned int fault_flags)
> -{
> -	return 0;
> -}
> -#endif
> -
>   /* the bootdata_preserved fields come from ones in arch/s390/boot/uv.c */
>   int __bootdata_preserved(prot_virt_guest);
>   EXPORT_SYMBOL(prot_virt_guest);
> @@ -159,6 +146,7 @@ int uv_destroy_folio(struct folio *folio)
>   	folio_put(folio);
>   	return rc;
>   }
> +EXPORT_SYMBOL(uv_destroy_folio);
>   
>   /*
>    * The present PTE still indirectly holds a folio reference through the mapping.
> @@ -175,7 +163,7 @@ int uv_destroy_pte(pte_t pte)
>    *
>    * @paddr: Absolute host address of page to be exported
>    */
> -static int uv_convert_from_secure(unsigned long paddr)
> +int uv_convert_from_secure(unsigned long paddr)
>   {
>   	struct uv_cb_cfs uvcb = {
>   		.header.cmd = UVC_CMD_CONV_FROM_SEC_STOR,
> @@ -187,11 +175,12 @@ static int uv_convert_from_secure(unsigned long paddr)
>   		return -EINVAL;
>   	return 0;
>   }
> +EXPORT_SYMBOL_GPL(uv_convert_from_secure);
>   
>   /*
>    * The caller must already hold a reference to the folio.
>    */
> -static int uv_convert_from_secure_folio(struct folio *folio)
> +int uv_convert_from_secure_folio(struct folio *folio)
>   {
>   	int rc;
>   
> @@ -206,6 +195,7 @@ static int uv_convert_from_secure_folio(struct folio *folio)
>   	folio_put(folio);
>   	return rc;
>   }
> +EXPORT_SYMBOL_GPL(uv_convert_from_secure_folio);
>   
>   /*
>    * The present PTE still indirectly holds a folio reference through the mapping.
> @@ -237,13 +227,32 @@ static int expected_folio_refs(struct folio *folio)
>   	return res;
>   }
>   
> -static int make_folio_secure(struct folio *folio, struct uv_cb_header *uvcb)
> +/**
> + * make_folio_secure() - make a folio secure
> + * @folio: the folio to make secure
> + * @uvcb: the uvcb that describes the UVC to be used
> + *
> + * The folio @folio will be made secure if possible, @uvcb will be passed
> + * as-is to the UVC.
> + *
> + * Return: 0 on success;
> + *         -EBUSY if the folio is in writeback, has too many references, or is large;
> + *         -EAGAIN if the UVC needs to be attempted again;
> + *         -ENXIO if the address is not mapped;
> + *         -EINVAL if the UVC failed for other reasons.
> + *
> + * Context: The caller must hold exactly one extra reference on the folio
> + *          (it's the same logic as split_folio())
> + */
> +int make_folio_secure(struct folio *folio, struct uv_cb_header *uvcb)
>   {
>   	int expected, cc = 0;
>   
> +	if (folio_test_large(folio))
> +		return -EBUSY;
>   	if (folio_test_writeback(folio))
> -		return -EAGAIN;
> -	expected = expected_folio_refs(folio);
> +		return -EBUSY;
> +	expected = expected_folio_refs(folio) + 1;
>   	if (!folio_ref_freeze(folio, expected))
>   		return -EBUSY;
>   	set_bit(PG_arch_1, &folio->flags);
> @@ -267,251 +276,35 @@ static int make_folio_secure(struct folio *folio, struct uv_cb_header *uvcb)
>   		return -EAGAIN;
>   	return uvcb->rc == 0x10a ? -ENXIO : -EINVAL;
>   }
> +EXPORT_SYMBOL_GPL(make_folio_secure);
>   
>   /**
> - * should_export_before_import - Determine whether an export is needed
> - * before an import-like operation
> - * @uvcb: the Ultravisor control block of the UVC to be performed
> - * @mm: the mm of the process
> - *
> - * Returns whether an export is needed before every import-like operation.
> - * This is needed for shared pages, which don't trigger a secure storage
> - * exception when accessed from a different guest.
> - *
> - * Although considered as one, the Unpin Page UVC is not an actual import,
> - * so it is not affected.
> + * uv_wiggle_folio() - try to drain extra references to a folio
> + * @folio: the folio
> + * @split: whether to split a large folio
>    *
> - * No export is needed also when there is only one protected VM, because the
> - * page cannot belong to the wrong VM in that case (there is no "other VM"
> - * it can belong to).
> - *
> - * Return: true if an export is needed before every import, otherwise false.
> + * Context: Must be called while holding an extra reference to the folio;
> + *          the mm lock should not be held.
>    */
> -static bool should_export_before_import(struct uv_cb_header *uvcb, struct mm_struct *mm)
> +int uv_wiggle_folio(struct folio *folio, bool split)
>   {
> -	/*
> -	 * The misc feature indicates, among other things, that importing a
> -	 * shared page from a different protected VM will automatically also
> -	 * transfer its ownership.
> -	 */
> -	if (uv_has_feature(BIT_UV_FEAT_MISC))
> -		return false;
> -	if (uvcb->cmd == UVC_CMD_UNPIN_PAGE_SHARED)
> -		return false;
> -	return atomic_read(&mm->context.protected_count) > 1;
> -}
> -
> -/*
> - * Drain LRU caches: the local one on first invocation and the ones of all
> - * CPUs on successive invocations. Returns "true" on the first invocation.
> - */
> -static bool drain_lru(bool *drain_lru_called)
> -{
> -	/*
> -	 * If we have tried a local drain and the folio refcount
> -	 * still does not match our expected safe value, try with a
> -	 * system wide drain. This is needed if the pagevecs holding
> -	 * the page are on a different CPU.
> -	 */
> -	if (*drain_lru_called) {
> -		lru_add_drain_all();
> -		/* We give up here, don't retry immediately. */
> -		return false;
> -	}
> -	/*
> -	 * We are here if the folio refcount does not match the
> -	 * expected safe value. The main culprits are usually
> -	 * pagevecs. With lru_add_drain() we drain the pagevecs
> -	 * on the local CPU so that hopefully the refcount will
> -	 * reach the expected safe value.
> -	 */
> -	lru_add_drain();
> -	*drain_lru_called = true;
> -	/* The caller should try again immediately */
> -	return true;
> -}
> -
> -/*
> - * Requests the Ultravisor to make a page accessible to a guest.
> - * If it's brought in the first time, it will be cleared. If
> - * it has been exported before, it will be decrypted and integrity
> - * checked.
> - */
> -int gmap_make_secure(struct gmap *gmap, unsigned long gaddr, void *uvcb)
> -{
> -	struct vm_area_struct *vma;
> -	bool drain_lru_called = false;
> -	spinlock_t *ptelock;
> -	unsigned long uaddr;
> -	struct folio *folio;
> -	pte_t *ptep;
>   	int rc;
>   
> -again:
> -	rc = -EFAULT;
> -	mmap_read_lock(gmap->mm);
> -
> -	uaddr = __gmap_translate(gmap, gaddr);
> -	if (IS_ERR_VALUE(uaddr))
> -		goto out;
> -	vma = vma_lookup(gmap->mm, uaddr);
> -	if (!vma)
> -		goto out;
> -	/*
> -	 * Secure pages cannot be huge and userspace should not combine both.
> -	 * In case userspace does it anyway this will result in an -EFAULT for
> -	 * the unpack. The guest is thus never reaching secure mode. If
> -	 * userspace is playing dirty tricky with mapping huge pages later
> -	 * on this will result in a segmentation fault.
> -	 */
> -	if (is_vm_hugetlb_page(vma))
> -		goto out;
> -
> -	rc = -ENXIO;
> -	ptep = get_locked_pte(gmap->mm, uaddr, &ptelock);
> -	if (!ptep)
> -		goto out;
> -	if (pte_present(*ptep) && !(pte_val(*ptep) & _PAGE_INVALID) && pte_write(*ptep)) {
> -		folio = page_folio(pte_page(*ptep));
> -		rc = -EAGAIN;
> -		if (folio_test_large(folio)) {
> -			rc = -E2BIG;
> -		} else if (folio_trylock(folio)) {
> -			if (should_export_before_import(uvcb, gmap->mm))
> -				uv_convert_from_secure(PFN_PHYS(folio_pfn(folio)));
> -			rc = make_folio_secure(folio, uvcb);
> -			folio_unlock(folio);
> -		}
> -
> -		/*
> -		 * Once we drop the PTL, the folio may get unmapped and
> -		 * freed immediately. We need a temporary reference.
> -		 */
> -		if (rc == -EAGAIN || rc == -E2BIG)
> -			folio_get(folio);
> -	}
> -	pte_unmap_unlock(ptep, ptelock);
> -out:
> -	mmap_read_unlock(gmap->mm);
> -
> -	switch (rc) {
> -	case -E2BIG:
> +	folio_wait_writeback(folio);
> +	if (split) {
>   		folio_lock(folio);
>   		rc = split_folio(folio);
>   		folio_unlock(folio);
> -		folio_put(folio);
> -
> -		switch (rc) {
> -		case 0:
> -			/* Splitting succeeded, try again immediately. */
> -			goto again;
> -		case -EAGAIN:
> -			/* Additional folio references. */
> -			if (drain_lru(&drain_lru_called))
> -				goto again;
> -			return -EAGAIN;
> -		case -EBUSY:
> -			/* Unexpected race. */
> +
> +		if (rc == -EBUSY)
>   			return -EAGAIN;
> -		}
> -		WARN_ON_ONCE(1);
> -		return -ENXIO;
> -	case -EAGAIN:
> -		/*
> -		 * If we are here because the UVC returned busy or partial
> -		 * completion, this is just a useless check, but it is safe.
> -		 */
> -		folio_wait_writeback(folio);
> -		folio_put(folio);
> -		return -EAGAIN;
> -	case -EBUSY:
> -		/* Additional folio references. */
> -		if (drain_lru(&drain_lru_called))
> -			goto again;
> -		return -EAGAIN;
> -	case -ENXIO:
> -		if (gmap_fault(gmap, gaddr, FAULT_FLAG_WRITE))
> -			return -EFAULT;
> -		return -EAGAIN;
> +		if (rc != -EAGAIN)
> +			return rc;
>   	}
> -	return rc;
> -}
> -EXPORT_SYMBOL_GPL(gmap_make_secure);
> -
> -int gmap_convert_to_secure(struct gmap *gmap, unsigned long gaddr)
> -{
> -	struct uv_cb_cts uvcb = {
> -		.header.cmd = UVC_CMD_CONV_TO_SEC_STOR,
> -		.header.len = sizeof(uvcb),
> -		.guest_handle = gmap->guest_handle,
> -		.gaddr = gaddr,
> -	};
> -
> -	return gmap_make_secure(gmap, gaddr, &uvcb);
> -}
> -EXPORT_SYMBOL_GPL(gmap_convert_to_secure);
> -
> -/**
> - * gmap_destroy_page - Destroy a guest page.
> - * @gmap: the gmap of the guest
> - * @gaddr: the guest address to destroy
> - *
> - * An attempt will be made to destroy the given guest page. If the attempt
> - * fails, an attempt is made to export the page. If both attempts fail, an
> - * appropriate error is returned.
> - */
> -int gmap_destroy_page(struct gmap *gmap, unsigned long gaddr)
> -{
> -	struct vm_area_struct *vma;
> -	struct folio_walk fw;
> -	unsigned long uaddr;
> -	struct folio *folio;
> -	int rc;
> -
> -	rc = -EFAULT;
> -	mmap_read_lock(gmap->mm);
> -
> -	uaddr = __gmap_translate(gmap, gaddr);
> -	if (IS_ERR_VALUE(uaddr))
> -		goto out;
> -	vma = vma_lookup(gmap->mm, uaddr);
> -	if (!vma)
> -		goto out;
> -	/*
> -	 * Huge pages should not be able to become secure
> -	 */
> -	if (is_vm_hugetlb_page(vma))
> -		goto out;
> -
> -	rc = 0;
> -	folio = folio_walk_start(&fw, vma, uaddr, 0);
> -	if (!folio)
> -		goto out;
> -	/*
> -	 * See gmap_make_secure(): large folios cannot be secure. Small
> -	 * folio implies FW_LEVEL_PTE.
> -	 */
> -	if (folio_test_large(folio) || !pte_write(fw.pte))
> -		goto out_walk_end;
> -	rc = uv_destroy_folio(folio);
> -	/*
> -	 * Fault handlers can race; it is possible that two CPUs will fault
> -	 * on the same secure page. One CPU can destroy the page, reboot,
> -	 * re-enter secure mode and import it, while the second CPU was
> -	 * stuck at the beginning of the handler. At some point the second
> -	 * CPU will be able to progress, and it will not be able to destroy
> -	 * the page. In that case we do not want to terminate the process,
> -	 * we instead try to export the page.
> -	 */
> -	if (rc)
> -		rc = uv_convert_from_secure_folio(folio);
> -out_walk_end:
> -	folio_walk_end(&fw, vma);
> -out:
> -	mmap_read_unlock(gmap->mm);
> -	return rc;
> +	lru_add_drain_all();
> +	return -EAGAIN;
>   }
> -EXPORT_SYMBOL_GPL(gmap_destroy_page);
> +EXPORT_SYMBOL_GPL(uv_wiggle_folio);
>   
>   /*
>    * To be called with the folio locked or with an extra reference! This will
> diff --git a/arch/s390/kvm/Makefile b/arch/s390/kvm/Makefile
> index 02217fb4ae10..d972dea657fd 100644
> --- a/arch/s390/kvm/Makefile
> +++ b/arch/s390/kvm/Makefile
> @@ -8,7 +8,7 @@ include $(srctree)/virt/kvm/Makefile.kvm
>   ccflags-y := -Ivirt/kvm -Iarch/s390/kvm
>   
>   kvm-y += kvm-s390.o intercept.o interrupt.o priv.o sigp.o
> -kvm-y += diag.o gaccess.o guestdbg.o vsie.o pv.o
> +kvm-y += diag.o gaccess.o guestdbg.o vsie.o pv.o gmap.o
>   
>   kvm-$(CONFIG_VFIO_PCI_ZDEV_KVM) += pci.o
>   obj-$(CONFIG_KVM) += kvm.o
> diff --git a/arch/s390/kvm/gmap.c b/arch/s390/kvm/gmap.c
> new file mode 100644
> index 000000000000..a142bbbddc25
> --- /dev/null
> +++ b/arch/s390/kvm/gmap.c
> @@ -0,0 +1,183 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Guest memory management for KVM/s390
> + *
> + * Copyright IBM Corp. 2008, 2020, 2024
> + *
> + *    Author(s): Claudio Imbrenda <imbrenda@linux.ibm.com>
> + *               Martin Schwidefsky <schwidefsky@de.ibm.com>
> + *               David Hildenbrand <david@redhat.com>
> + *               Janosch Frank <frankja@linux.vnet.ibm.com>
> + */
> +
> +#include <linux/compiler.h>
> +#include <linux/kvm.h>
> +#include <linux/kvm_host.h>
> +#include <linux/pgtable.h>
> +#include <linux/pagemap.h>
> +
> +#include <asm/lowcore.h>
> +#include <asm/gmap.h>
> +#include <asm/uv.h>
> +
> +#include "gmap.h"
> +
> +/**
> + * should_export_before_import - Determine whether an export is needed
> + * before an import-like operation
> + * @uvcb: the Ultravisor control block of the UVC to be performed
> + * @mm: the mm of the process
> + *
> + * Returns whether an export is needed before every import-like operation.
> + * This is needed for shared pages, which don't trigger a secure storage
> + * exception when accessed from a different guest.
> + *
> + * Although considered as one, the Unpin Page UVC is not an actual import,
> + * so it is not affected.
> + *
> + * No export is needed also when there is only one protected VM, because the
> + * page cannot belong to the wrong VM in that case (there is no "other VM"
> + * it can belong to).
> + *
> + * Return: true if an export is needed before every import, otherwise false.
> + */
> +static bool should_export_before_import(struct uv_cb_header *uvcb, struct mm_struct *mm)
> +{
> +	/*
> +	 * The misc feature indicates, among other things, that importing a
> +	 * shared page from a different protected VM will automatically also
> +	 * transfer its ownership.
> +	 */
> +	if (uv_has_feature(BIT_UV_FEAT_MISC))
> +		return false;
> +	if (uvcb->cmd == UVC_CMD_UNPIN_PAGE_SHARED)
> +		return false;
> +	return atomic_read(&mm->context.protected_count) > 1;
> +}
> +
> +static int __gmap_make_secure(struct gmap *gmap, struct page *page, void *uvcb)
> +{
> +	struct folio *folio = page_folio(page);
> +	int rc;
> +
> +	/*
> +	 * Secure pages cannot be huge and userspace should not combine both.
> +	 * In case userspace does it anyway this will result in an -EFAULT for
> +	 * the unpack. The guest is thus never reaching secure mode. If
> +	 * userspace is playing dirty tricky with mapping huge pages later

s/tricky/tricks/

But the whole last sentence is a bit iffy.

> +	 * on this will result in a segmentation fault or in a -EFAULT return
> +	 * code from the KVM_RUN ioctl.
> +	 */
> +	if (folio_test_hugetlb(folio))
> +		return -EFAULT;
> +	if (folio_test_large(folio)) {
> +		mmap_read_unlock(gmap->mm);
> +		rc = uv_wiggle_folio(folio, true);
> +		mmap_read_lock(gmap->mm);

You could move the unlock to uv_wiggle_folio() and add a 
mmap_assert_locked() in front.

At least if you have no other users in upcoming series which don't need 
the unlock.
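As background for the retry semantics under discussion: the two-stage drain that the quoted patch folds into uv_wiggle_folio() can be sketched in user space. lru_drain_local() and lru_drain_all() are stand-ins for the kernel's lru_add_drain() and lru_add_drain_all(); the control flow mirrors the removed drain_lru() helper:

```c
#include <stdbool.h>

static void lru_drain_local(void) { }  /* stand-in: drain this CPU's pagevecs */
static void lru_drain_all(void)   { }  /* stand-in: drain all CPUs' pagevecs  */

/* First invocation drains the local CPU and asks the caller to retry
 * immediately; the second drains system-wide and gives up, so the
 * caller returns -EAGAIN instead of spinning. */
static bool drain_lru(bool *drain_lru_called)
{
	if (*drain_lru_called) {
		lru_drain_all();        /* pagevecs may live on other CPUs */
		return false;           /* give up, don't retry immediately */
	}
	lru_drain_local();              /* cheap local drain first */
	*drain_lru_called = true;
	return true;                    /* caller should try again now */
}
```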


* Re: [PATCH v1 04/13] KVM: s390: move pv gmap functions into kvm
  2025-01-15 12:48   ` Janosch Frank
@ 2025-01-15 12:59     ` Claudio Imbrenda
  2025-01-15 13:23       ` Janosch Frank
  0 siblings, 1 reply; 39+ messages in thread
From: Claudio Imbrenda @ 2025-01-15 12:59 UTC (permalink / raw)
  To: Janosch Frank
  Cc: kvm, linux-s390, borntraeger, schlameuss, david, willy, hca,
	svens, agordeev, gor, nrb, nsg

On Wed, 15 Jan 2025 13:48:47 +0100
Janosch Frank <frankja@linux.ibm.com> wrote:

[...]

> > +static int __gmap_make_secure(struct gmap *gmap, struct page *page, void *uvcb)
> > +{
> > +	struct folio *folio = page_folio(page);
> > +	int rc;
> > +
> > +	/*
> > +	 * Secure pages cannot be huge and userspace should not combine both.
> > +	 * In case userspace does it anyway this will result in an -EFAULT for
> > +	 * the unpack. The guest is thus never reaching secure mode. If
> > +	 * userspace is playing dirty tricky with mapping huge pages later  
> 
> s/tricky/tricks/
> 
> But the whole last sentence is a bit iffy.

hmm yes I'll reword it

> 
> > +	 * on this will result in a segmentation fault or in a -EFAULT return
> > +	 * code from the KVM_RUN ioctl.
> > +	 */
> > +	if (folio_test_hugetlb(folio))
> > +		return -EFAULT;
> > +	if (folio_test_large(folio)) {
> > +		mmap_read_unlock(gmap->mm);
> > +		rc = uv_wiggle_folio(folio, true);
> > +		mmap_read_lock(gmap->mm);  
> 
> You could move the unlock to uv_wiggle_folio() and add a 
> mmap_assert_locked() in front.

oh no, I don't want a function that drops a lock that has been acquired
outside of it.

by explicitly dropping and acquiring it, it's obvious what's going on,
and you can easily see that the lock is being dropped and re-acquired.

__gmap_destroy_page() does it, but it's called exactly in one spot,
namely gmap_destroy_page(), which is literally below it.

> 
> At least if you have no other users in upcoming series which don't need 
> the unlock.



* Re: [PATCH v1 04/13] KVM: s390: move pv gmap functions into kvm
  2025-01-15 12:59     ` Claudio Imbrenda
@ 2025-01-15 13:23       ` Janosch Frank
  0 siblings, 0 replies; 39+ messages in thread
From: Janosch Frank @ 2025-01-15 13:23 UTC (permalink / raw)
  To: Claudio Imbrenda
  Cc: kvm, linux-s390, borntraeger, schlameuss, david, willy, hca,
	svens, agordeev, gor, nrb, nsg

On 1/15/25 1:59 PM, Claudio Imbrenda wrote:
> On Wed, 15 Jan 2025 13:48:47 +0100
> Janosch Frank <frankja@linux.ibm.com> wrote:
> 
> [...]
> 
>>> +static int __gmap_make_secure(struct gmap *gmap, struct page *page, void *uvcb)
>>> +{
>>> +	struct folio *folio = page_folio(page);
>>> +	int rc;
>>> +
>>> +	/*
>>> +	 * Secure pages cannot be huge and userspace should not combine both.
>>> +	 * In case userspace does it anyway this will result in an -EFAULT for
>>> +	 * the unpack. The guest is thus never reaching secure mode. If
>>> +	 * userspace is playing dirty tricky with mapping huge pages later
>>
>> s/tricky/tricks/
>>
>> But the whole last sentence is a bit iffy.
> 
> hmm yes I'll reword it
> 
>>
>>> +	 * on this will result in a segmentation fault or in a -EFAULT return
>>> +	 * code from the KVM_RUN ioctl.
>>> +	 */
>>> +	if (folio_test_hugetlb(folio))
>>> +		return -EFAULT;
>>> +	if (folio_test_large(folio)) {
>>> +		mmap_read_unlock(gmap->mm);
>>> +		rc = uv_wiggle_folio(folio, true);
>>> +		mmap_read_lock(gmap->mm);
>>
>> You could move the unlock to uv_wiggle_folio() and add a
>> mmap_assert_locked() in front.
> 
> oh no, I don't want a function that drops a lock that has been acquired
> outside of it.
> 
> by explicitly dropping and acquiring it, it's obvious what's going on,
> and you can easily see that the lock is being dropped and re-acquired.
> 
> __gmap_destroy_page() does it, but it's called exactly in one spot,
> namely gmap_destroy_page(), which is literally below it.
> 

It's just very weird to me to have the (un)lock dance the wrong way 
around when reading it. At first I assumed you had made a mistake when 
remembering the context description in wiggle, because I misread the 
unlock() as lock().

But maybe that's just me and I don't mind it too much :)


* Re: [PATCH v1 13/13] KVM: s390: remove the last user of page->index
  2025-01-15 12:17   ` Janosch Frank
  2025-01-15 12:23     ` Claudio Imbrenda
@ 2025-01-20  9:43     ` David Hildenbrand
  2025-01-20 10:28       ` Claudio Imbrenda
  1 sibling, 1 reply; 39+ messages in thread
From: David Hildenbrand @ 2025-01-20  9:43 UTC (permalink / raw)
  To: Janosch Frank, Claudio Imbrenda, kvm
  Cc: linux-s390, borntraeger, schlameuss, willy, hca, svens, agordeev,
	gor, nrb, nsg


>> +static inline unsigned long gmap_pgste_get_index(unsigned long *pgt)
>> +{
>> +	unsigned long *pgstes, res;
>> +
>> +	pgstes = pgt + _PAGE_ENTRIES;
>> +
>> +	res = (pgstes[0] & PGSTE_ST2_MASK) << 16;
>> +	res |= pgstes[1] & PGSTE_ST2_MASK;
>> +	res |= (pgstes[2] & PGSTE_ST2_MASK) >> 16;
>> +	res |= (pgstes[3] & PGSTE_ST2_MASK) >> 32;
>> +
>> +	return res;
>> +}
> 
> I have to think about that change for a bit before I post an opinion.

I'm wondering if we should just do what Willy suggested and use ptdesc 
-> pt_index instead?

It's not like we must "force" this removal here. If we simply allocate 
a ptdesc memdesc for these page tables in the future (just like for any 
other page table), we'll have that extra space easily available.

The important part is getting rid of page->index now, but not 
necessarily ptdesc->pt_index.

-- 
Cheers,

David / dhildenb



* Re: [PATCH v1 13/13] KVM: s390: remove the last user of page->index
  2025-01-20  9:43     ` David Hildenbrand
@ 2025-01-20 10:28       ` Claudio Imbrenda
  2025-01-20 10:34         ` David Hildenbrand
  0 siblings, 1 reply; 39+ messages in thread
From: Claudio Imbrenda @ 2025-01-20 10:28 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Janosch Frank, kvm, linux-s390, borntraeger, schlameuss, willy,
	hca, svens, agordeev, gor, nrb, nsg

On Mon, 20 Jan 2025 10:43:15 +0100
David Hildenbrand <david@redhat.com> wrote:

> >> +static inline unsigned long gmap_pgste_get_index(unsigned long *pgt)
> >> +{
> >> +	unsigned long *pgstes, res;
> >> +
> >> +	pgstes = pgt + _PAGE_ENTRIES;
> >> +
> >> +	res = (pgstes[0] & PGSTE_ST2_MASK) << 16;
> >> +	res |= pgstes[1] & PGSTE_ST2_MASK;
> >> +	res |= (pgstes[2] & PGSTE_ST2_MASK) >> 16;
> >> +	res |= (pgstes[3] & PGSTE_ST2_MASK) >> 32;
> >> +
> >> +	return res;
> >> +}  
> > 
> > I have to think about that change for a bit before I post an opinion.  
> 
> I'm wondering if we should just do what Willy suggested and use ptdesc 
> -> pt_index instead?  

we will need to store more stuff in the future; putting things in the
PGSTEs gives us 512 bytes per table (although I admit it looks... weird)

> 
> It's not like we must "force" this removal here. If we'll simply 
> allocate a ptdesc memdesc in the future for these page tables (just like 
> for any other page table), we have that extra space easily available.
> 
> The important part is getting rid of page->index now, but not 
> necessarily ptdesc->pt_index.
> 



* Re: [PATCH v1 13/13] KVM: s390: remove the last user of page->index
  2025-01-20 10:28       ` Claudio Imbrenda
@ 2025-01-20 10:34         ` David Hildenbrand
  0 siblings, 0 replies; 39+ messages in thread
From: David Hildenbrand @ 2025-01-20 10:34 UTC (permalink / raw)
  To: Claudio Imbrenda
  Cc: Janosch Frank, kvm, linux-s390, borntraeger, schlameuss, willy,
	hca, svens, agordeev, gor, nrb, nsg

On 20.01.25 11:28, Claudio Imbrenda wrote:
> On Mon, 20 Jan 2025 10:43:15 +0100
> David Hildenbrand <david@redhat.com> wrote:
> 
>>>> +static inline unsigned long gmap_pgste_get_index(unsigned long *pgt)
>>>> +{
>>>> +	unsigned long *pgstes, res;
>>>> +
>>>> +	pgstes = pgt + _PAGE_ENTRIES;
>>>> +
>>>> +	res = (pgstes[0] & PGSTE_ST2_MASK) << 16;
>>>> +	res |= pgstes[1] & PGSTE_ST2_MASK;
>>>> +	res |= (pgstes[2] & PGSTE_ST2_MASK) >> 16;
>>>> +	res |= (pgstes[3] & PGSTE_ST2_MASK) >> 32;
>>>> +
>>>> +	return res;
>>>> +}
>>>
>>> I have to think about that change for a bit before I post an opinion.
>>
>> I'm wondering if we should just do what Willy suggested and use ptdesc
>> -> pt_index instead?
> 
> We will need to store more stuff in the future; putting things in the
> PGSTEs gives us 512 bytes per table (although I admit it looks... weird)

With memdesc/ptdesc you'll be able to allocate more without playing many 
tricks.

Storing more information could be done today by allocating a separate 
structure for these page tables and linking it via pt_index. Not that I 
would suggest that just now. :)

But this is not something for me to decide; I'm just pointing out that 
it can likely be done in a simpler and cleaner way, and that there is no 
need to rush the pt_index removal.

-- 
Cheers,

David / dhildenb



end of thread, other threads:[~2025-01-20 10:34 UTC | newest]

Thread overview: 39+ messages
2025-01-08 18:14 [PATCH v1 00/13] KVM: s390: Stop using page->index and other things Claudio Imbrenda
2025-01-08 18:14 ` [PATCH v1 01/13] KVM: s390: wrapper for KVM_BUG Claudio Imbrenda
2025-01-10  9:09   ` Christian Borntraeger
2025-01-10 13:13   ` Christoph Schlameuss
2025-01-08 18:14 ` [PATCH v1 02/13] KVM: s390: fake memslots for ucontrol VMs Claudio Imbrenda
2025-01-10  9:31   ` Christian Borntraeger
2025-01-10 11:47     ` Claudio Imbrenda
2025-01-10 16:22       ` Sean Christopherson
2025-01-10 17:02         ` Claudio Imbrenda
2025-01-10 17:34           ` Sean Christopherson
2025-01-10 17:43             ` Claudio Imbrenda
2025-01-10 15:40   ` Christoph Schlameuss
2025-01-10 16:18     ` Claudio Imbrenda
2025-01-08 18:14 ` [PATCH v1 03/13] KVM: s390: use __kvm_faultin_pfn() Claudio Imbrenda
2025-01-14 17:34   ` Christoph Schlameuss
2025-01-14 17:56     ` Claudio Imbrenda
2025-01-08 18:14 ` [PATCH v1 04/13] KVM: s390: move pv gmap functions into kvm Claudio Imbrenda
2025-01-09 17:42   ` kernel test robot
2025-01-15 12:48   ` Janosch Frank
2025-01-15 12:59     ` Claudio Imbrenda
2025-01-15 13:23       ` Janosch Frank
2025-01-08 18:14 ` [PATCH v1 05/13] KVM: s390: get rid of gmap_fault() Claudio Imbrenda
2025-01-08 18:14 ` [PATCH v1 06/13] KVM: s390: get rid of gmap_translate() Claudio Imbrenda
2025-01-08 18:14 ` [PATCH v1 07/13] KVM: s390: move some gmap shadowing functions away from mm/gmap.c Claudio Imbrenda
2025-01-15  8:56   ` Janosch Frank
2025-01-15 10:20     ` Claudio Imbrenda
2025-01-15 11:48       ` Janosch Frank
2025-01-08 18:14 ` [PATCH v1 08/13] KVM: s390: stop using page->index for non-shadow gmaps Claudio Imbrenda
2025-01-08 18:14 ` [PATCH v1 09/13] KVM: s390: stop using lists to keep track of used dat tables Claudio Imbrenda
2025-01-15  9:01   ` Janosch Frank
2025-01-08 18:14 ` [PATCH v1 10/13] KVM: s390: move gmap_shadow_pgt_lookup() into kvm Claudio Imbrenda
2025-01-08 18:14 ` [PATCH v1 11/13] KVM: s390: remove useless page->index usage Claudio Imbrenda
2025-01-08 18:14 ` [PATCH v1 12/13] KVM: s390: move PGSTE softbits Claudio Imbrenda
2025-01-08 18:14 ` [PATCH v1 13/13] KVM: s390: remove the last user of page->index Claudio Imbrenda
2025-01-15 12:17   ` Janosch Frank
2025-01-15 12:23     ` Claudio Imbrenda
2025-01-20  9:43     ` David Hildenbrand
2025-01-20 10:28       ` Claudio Imbrenda
2025-01-20 10:34         ` David Hildenbrand
