public inbox for kvm@vger.kernel.org
From: Janosch Frank <frankja@linux.ibm.com>
To: Christoph Schlameuss <schlameuss@linux.ibm.com>, kvm@vger.kernel.org
Cc: linux-s390@vger.kernel.org, Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Alexander Gordeev <agordeev@linux.ibm.com>,
	Christian Borntraeger <borntraeger@linux.ibm.com>,
	Claudio Imbrenda <imbrenda@linux.ibm.com>,
	Nico Boehr <nrb@linux.ibm.com>,
	David Hildenbrand <david@redhat.com>,
	Sven Schnelle <svens@linux.ibm.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Shuah Khan <shuah@kernel.org>
Subject: Re: [PATCH RFC v2 07/11] KVM: s390: Shadow VSIE SCA in guest-1
Date: Thu, 20 Nov 2025 12:15:34 +0100	[thread overview]
Message-ID: <058cd342-223c-4a3f-b647-cd119ca3d48a@linux.ibm.com> (raw)
In-Reply-To: <20251110-vsieie-v2-7-9e53a3618c8c@linux.ibm.com>

On 11/10/25 18:16, Christoph Schlameuss wrote:
> Restructure kvm_s390_handle_vsie() to create a guest-1 shadow of the SCA
> if guest-2 attempts to enter SIE with an SCA. If the SCA is used the
> vsie_pages are stored in a new vsie_sca struct instead of the arch vsie
> struct.

I think there should be more focus on this.
Having scbs tracked in two places is a huge change compared to how it 
worked before.

> 
> When the VSIE-Interpretation-Extension Facility is active (minimum z17)
> the shadow SCA (ssca_block) will be created and shadows of all CPUs
> defined in the configuration are created.
> SCAOL/H in the VSIE control block are overwritten with references to the
> shadow SCA.
> 
> The shadow SCA contains the addresses of the original guest-3 SCA as
> well as the original VSIE control blocks. With these addresses the
> machine can directly monitor the intervention bits within the original
> SCA entries, enabling it to handle SENSE_RUNNING and EXTERNAL_CALL sigp
> instructions without exiting VSIE.
> 
> The original SCA will be pinned in guest-2 memory and only be unpinned
> before reuse. This means some pages might still be pinned even after the
> guest-3 VM no longer exists.
> 
> The ssca_blocks are also kept within a radix tree so that already
> existing ssca_blocks can be reused efficiently, while the radix tree and
> the array with references to the ssca_blocks are held in the vsie_sca
> struct. The use of vsie_scas is tracked using a ref_count.
> 
> Signed-off-by: Christoph Schlameuss <schlameuss@linux.ibm.com>

I'd like to see more function header comments for the big functions.
Also think about adding lockdep checks and descriptions about what the 
lock protects.

I've needed more time to understand this patch than I'd like to admit.
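To make that concrete: something along these lines would already help (a userspace sketch with invented names, not code from this patch; in the kernel the assert would of course be lockdep_assert_held()):

```c
#include <assert.h>
#include <pthread.h>

struct vsie_state {
	pthread_mutex_t mutex;	/* protects pages[], page_count and next */
	int mutex_held;		/* stand-in for lockdep's held-lock tracking */
	int page_count;
};

/**
 * vsie_take_page_slot() - claim the next free slot in the page array.
 * @vs: vsie bookkeeping of this VM.
 *
 * Context: caller must hold vs->mutex.
 * Return: index of the claimed slot.
 */
static int vsie_take_page_slot(struct vsie_state *vs)
{
	/* kernel equivalent: lockdep_assert_held(&vs->mutex); */
	assert(vs->mutex_held);
	return vs->page_count++;
}
```

A kernel-doc header that states "Context: caller must hold ..." plus the assertion documents and enforces the locking rule at the same time.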

[...]

>   /*
>    * Get or create a vsie page for a scb address.
>    *
> + * Original control blocks are pinned when the vsie_page pointing to them is
> + * returned.
> + * Newly created vsie_pages only have vsie_page->scb_gpa and vsie_page->sca_gpa
> + * set.
> + *
>    * Returns: - address of a vsie page (cached or new one)
>    *          - NULL if the same scb address is already used by another VCPU
>    *          - ERR_PTR(-ENOMEM) if out of memory
>    */
> -static struct vsie_page *get_vsie_page(struct kvm *kvm, unsigned long addr)
> +static struct vsie_page *get_vsie_page(struct kvm_vcpu *vcpu, unsigned long addr)
>   {
> -	struct vsie_page *vsie_page;
> -	int nr_vcpus;
> +	struct vsie_page *vsie_page, *vsie_page_new;
> +	struct kvm *kvm = vcpu->kvm;
> +	unsigned int max_vsie_page;
> +	int rc, pages_idx;
> +	gpa_t sca_addr;
>   
> -	rcu_read_lock();
>   	vsie_page = radix_tree_lookup(&kvm->arch.vsie.addr_to_page, addr >> 9);
> -	rcu_read_unlock();
> -	if (vsie_page) {
> -		if (try_get_vsie_page(vsie_page)) {
> -			if (vsie_page->scb_gpa == addr)
> -				return vsie_page;
> -			/*
> -			 * We raced with someone reusing + putting this vsie
> -			 * page before we grabbed it.
> -			 */
> -			put_vsie_page(vsie_page);
> -		}
> +	if (vsie_page && try_get_vsie_page(vsie_page)) {
> +		if (vsie_page->scb_gpa == addr)
> +			return vsie_page;
> +		/*
> +		 * We raced with someone reusing + putting this vsie
> +		 * page before we grabbed it.
> +		 */
> +		put_vsie_page(vsie_page);
>   	}
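The get-then-recheck dance you kept here is subtle enough that I'd spell it out in a comment: take the reference first, re-check the key, drop the reference on mismatch. Stripped down to its essence (a simplified userspace model, not the actual kernel helpers):

```c
#include <stdatomic.h>
#include <stddef.h>

struct page_ref {
	atomic_int refs;	/* 0 means free for reuse */
	unsigned long scb_gpa;
};

/* take a reference only while the page is alive (refs > 0) */
static int try_get(struct page_ref *p)
{
	int old = atomic_load(&p->refs);

	while (old > 0)
		if (atomic_compare_exchange_weak(&p->refs, &old, old + 1))
			return 1;
	return 0;
}

static void put(struct page_ref *p)
{
	atomic_fetch_sub(&p->refs, 1);
}

/*
 * The tree lookup is unlocked, so the key must be re-checked after the
 * get: another vcpu may have recycled the page for a different scb
 * address in between.
 */
static struct page_ref *lookup(struct page_ref *p, unsigned long addr)
{
	if (try_get(p)) {
		if (p->scb_gpa == addr)
			return p;
		put(p);		/* raced with reuse + put */
	}
	return NULL;
}
```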
>   
> -	/*
> -	 * We want at least #online_vcpus shadows, so every VCPU can execute
> -	 * the VSIE in parallel.
> -	 */
> -	nr_vcpus = atomic_read(&kvm->online_vcpus);
> +	max_vsie_page = MIN(atomic_read(&kvm->online_vcpus), KVM_S390_MAX_VSIE_VCPUS);
> +
> +	/* allocate new vsie_page - we will likely need it */
> +	if (addr || kvm->arch.vsie.page_count < max_vsie_page) {

Is addr ever NULL?
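Independent of that question, the allocate-outside-the-lock pattern deserves a comment as well. Its shape, reduced to a userspace model with invented names, is roughly:

```c
#include <pthread.h>
#include <stdlib.h>

#define MAX_SLOTS 4

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static void *slots[MAX_SLOTS];
static int count;

/*
 * Allocate optimistically before taking the lock, then re-check the
 * capacity under the lock: either the fresh buffer is installed and
 * ownership moves to slots[], or it is freed after unlocking.
 */
static void *get_slot(void)
{
	void *ret = NULL;
	void *fresh = malloc(64);

	if (!fresh)
		return NULL;

	pthread_mutex_lock(&lock);
	if (count < MAX_SLOTS) {
		slots[count++] = fresh;
		ret = fresh;
		fresh = NULL;	/* consumed, ownership moved */
	}
	pthread_mutex_unlock(&lock);

	free(fresh);	/* no-op unless the capacity check failed */
	return ret;
}
```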

> +		vsie_page_new = malloc_vsie_page(kvm);
> +		if (IS_ERR(vsie_page_new))
> +			return vsie_page_new;
> +	}
>   
>   	mutex_lock(&kvm->arch.vsie.mutex);
> -	if (kvm->arch.vsie.page_count < nr_vcpus) {
> -		vsie_page = (void *)__get_free_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO | GFP_DMA);
> -		if (!vsie_page) {
> -			mutex_unlock(&kvm->arch.vsie.mutex);
> -			return ERR_PTR(-ENOMEM);
> -		}
> -		__set_bit(VSIE_PAGE_IN_USE, &vsie_page->flags);
> -		kvm->arch.vsie.pages[kvm->arch.vsie.page_count] = vsie_page;
> +	if (addr || kvm->arch.vsie.page_count < max_vsie_page) {
> +		pages_idx = kvm->arch.vsie.page_count;
> +		vsie_page = vsie_page_new;
> +		vsie_page_new = NULL;
> +		kvm->arch.vsie.pages[kvm->arch.vsie.page_count] = vsie_page;
>   		kvm->arch.vsie.page_count++;
>   	} else {
>   		/* reuse an existing entry that belongs to nobody */
> +		if (vsie_page_new)
> +			free_vsie_page(vsie_page_new);
>   		while (true) {
>   			vsie_page = kvm->arch.vsie.pages[kvm->arch.vsie.next];
> -			if (try_get_vsie_page(vsie_page))
> +			if (try_get_vsie_page(vsie_page)) {
> +				pages_idx = kvm->arch.vsie.next;
>   				break;
> +			}
>   			kvm->arch.vsie.next++;
> -			kvm->arch.vsie.next %= nr_vcpus;
> +			kvm->arch.vsie.next %= max_vsie_page;
>   		}
> +
> +		unpin_scb(kvm, vsie_page);
>   		if (vsie_page->scb_gpa != ULONG_MAX)
>   			radix_tree_delete(&kvm->arch.vsie.addr_to_page,
>   					  vsie_page->scb_gpa >> 9);
>   	}
> -	/* Mark it as invalid until it resides in the tree. */
> -	vsie_page->scb_gpa = ULONG_MAX;
> +
> +	vsie_page->scb_gpa = addr;
> +	rc = pin_scb(vcpu, vsie_page);
> +	if (rc) {
> +		vsie_page->scb_gpa = ULONG_MAX;
> +		free_vsie_page(vsie_page);

free_vsie_page() is a wrapper around free_page(); writing to vsie_page 
right before freeing it makes no sense.

> +		mutex_unlock(&kvm->arch.vsie.mutex);
> +		return ERR_PTR(-ENOMEM);
> +	}
> +	sca_addr = read_scao(kvm, vsie_page->scb_o);
> +	vsie_page->sca_gpa = sca_addr;
> +	__set_bit(VSIE_PAGE_IN_USE, &vsie_page->flags);
>   
>   	/* Double use of the same address or allocation failure. */
>   	if (radix_tree_insert(&kvm->arch.vsie.addr_to_page, addr >> 9,
>   			      vsie_page)) {
> +		unpin_scb(kvm, vsie_page);
>   		put_vsie_page(vsie_page);
>   		mutex_unlock(&kvm->arch.vsie.mutex);
>   		return NULL;
>   	}
> -	vsie_page->scb_gpa = addr;
>   	mutex_unlock(&kvm->arch.vsie.mutex);
>   
> +	/*
> +	 * If the vsie cb does use a sca we store the vsie_page within the
> +	 * vsie_sca later. But we need to allocate an empty page to leave no
> +	 * hole in the arch.vsie.pages.
> +	 */
> +	if (sca_addr) {
> +		vsie_page_new = malloc_vsie_page(kvm);
> +		if (IS_ERR(vsie_page_new)) {
> +			unpin_scb(kvm, vsie_page);
> +			put_vsie_page(vsie_page);
> +			return vsie_page_new;
> +		}
> +		kvm->arch.vsie.pages[pages_idx] = vsie_page_new;
> +		vsie_page_new = NULL;
> +	}
> +
>   	memset(&vsie_page->scb_s, 0, sizeof(struct kvm_s390_sie_block));
>   	release_gmap_shadow(vsie_page);
>   	vsie_page->fault_addr = 0;
> @@ -1529,11 +1855,124 @@ static struct vsie_page *get_vsie_page(struct kvm *kvm, unsigned long addr)
>   	return vsie_page;
>   }
>   
> +static struct vsie_page *get_vsie_page_cpu_nr(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page,
> +					      gpa_t scb_o_gpa, u16 cpu_nr)
> +{
> +	struct vsie_page *vsie_page_n;
> +
> +	vsie_page_n = get_vsie_page(vcpu, scb_o_gpa);
> +	if (IS_ERR(vsie_page_n))
> +		return vsie_page_n;
> +	shadow_scb(vcpu, vsie_page_n);
> +	vsie_page_n->scb_s.eca |= vsie_page->scb_o->eca & ECA_SIGPI;
> +	vsie_page_n->scb_s.ecb |= vsie_page->scb_o->ecb & ECB_SRSI;
> +	put_vsie_page(vsie_page_n);
> +	WARN_ON_ONCE(!((u64)vsie_page_n->scb_gpa & PAGE_MASK));
> +	WARN_ON_ONCE(!((u64)vsie_page_n & PAGE_MASK));
> +
> +	return vsie_page_n;
> +}
> +
> +/*
> + * Fill the shadow system control area used for vsie sigpif.
> + */
> +static int init_ssca(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page, struct vsie_sca *sca)
> +{
> +	hpa_t sca_o_entry_hpa, osca = sca->sca_o_pages[0].hpa;
> +	bool is_esca = sie_uses_esca(vsie_page->scb_o);
> +	unsigned int cpu_nr, cpu_slots;
> +	struct vsie_page *vsie_page_n;
> +	gpa_t scb_o_gpa;
> +	int i;
> +
> +	/* copy mcn to detect updates */
> +	if (is_esca)
> +		for (i = 0; i < 4; i++)
> +			sca->mcn[i] = ((struct esca_block *)phys_to_virt(osca))->mcn[i];
> +	else
> +		sca->mcn[0] = ((struct bsca_block *)phys_to_virt(osca))->mcn;
> +
> +	/* pin and make minimal shadow for ALL scb in the sca */
> +	cpu_slots = is_esca ? KVM_S390_MAX_VSIE_VCPUS : KVM_S390_BSCA_CPU_SLOTS;
> +	for_each_set_bit_inv(cpu_nr, (unsigned long *)&vsie_page->sca->mcn, cpu_slots) {
> +		get_sca_entry_addr(vcpu->kvm, vsie_page, sca, cpu_nr, NULL, &sca_o_entry_hpa);
> +		if (is_esca)
> +			scb_o_gpa = ((struct esca_entry *)sca_o_entry_hpa)->sda;
> +		else
> +			scb_o_gpa = ((struct bsca_entry *)sca_o_entry_hpa)->sda;
> +
> +		if (vsie_page->scb_s.icpua == cpu_nr)
> +			vsie_page_n = vsie_page;
> +		else
> +			vsie_page_n = get_vsie_page_cpu_nr(vcpu, vsie_page, scb_o_gpa, cpu_nr);
> +		if (IS_ERR(vsie_page_n))
> +			goto err;
> +
> +		if (!sca->pages[vsie_page_n->scb_o->icpua])
> +			sca->pages[vsie_page_n->scb_o->icpua] = vsie_page_n;
> +		WARN_ON_ONCE(sca->pages[vsie_page_n->scb_o->icpua] != vsie_page_n);
> +		sca->ssca->cpu[cpu_nr].ssda = virt_to_phys(&vsie_page_n->scb_s);
> +		sca->ssca->cpu[cpu_nr].ossea = sca_o_entry_hpa;
> +	}
> +
> +	sca->ssca->osca = osca;
> +	return 0;
> +
> +err:
> +	for_each_set_bit_inv(cpu_nr, (unsigned long *)&vsie_page->sca->mcn, cpu_slots) {
> +		sca->ssca->cpu[cpu_nr].ssda = 0;
> +		sca->ssca->cpu[cpu_nr].ossea = 0;
> +	}
> +	return PTR_ERR(vsie_page_n);
> +}
> +
> +/*
> + * Shadow the sca on vsie enter.
> + */
> +static int shadow_sca(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page, struct vsie_sca *sca)
> +{
> +	struct kvm_s390_sie_block *scb_s = &vsie_page->scb_s;
> +	int rc;
> +
> +	vsie_page->sca = sca;
> +	if (!sca)
> +		return false;
> +
> +	if (!sca->pages[vsie_page->scb_o->icpua])
> +		sca->pages[vsie_page->scb_o->icpua] = vsie_page;
> +	WARN_ON_ONCE(sca->pages[vsie_page->scb_o->icpua] != vsie_page);
> +
> +	if (!sca->ssca)
> +		return false;
> +	if (!use_vsie_sigpif_for(vcpu->kvm, vsie_page))
> +		return false;
> +
> +	/* skip if the guest does not have an usable sca */
> +	if (!sca->ssca->osca) {
> +		rc = init_ssca(vcpu, vsie_page, sca);
> +		if (rc)
> +			return rc;
> +	}
> +
> +	/*
> +	 * only shadow sigpif if we actually have a sca that we can properly
> +	 * shadow with vsie_sigpif
> +	 */
> +	scb_s->eca |= vsie_page->scb_o->eca & ECA_SIGPI;
> +	scb_s->ecb |= vsie_page->scb_o->ecb & ECB_SRSI;
> +
> +	WRITE_ONCE(scb_s->osda, virt_to_phys(vsie_page->scb_o));
> +	write_scao(scb_s, virt_to_phys(sca->ssca));
> +
> +	return false;
> +}
> +
>   int kvm_s390_handle_vsie(struct kvm_vcpu *vcpu)
>   {
>   	struct vsie_page *vsie_page;
> -	unsigned long scb_addr;
> -	int rc;
> +	struct vsie_sca *sca = NULL;
> +	gpa_t scb_addr;
> +	int rc = 0;
>   
>   	vcpu->stat.instruction_sie++;
>   	if (!test_kvm_cpu_feat(vcpu->kvm, KVM_S390_VM_CPU_FEAT_SIEF2))
> @@ -1554,31 +1993,45 @@ int kvm_s390_handle_vsie(struct kvm_vcpu *vcpu)
>   		return 0;
>   	}
>   
> -	vsie_page = get_vsie_page(vcpu->kvm, scb_addr);
> +	/* get the vsie_page including the vsie control block */
> +	vsie_page = get_vsie_page(vcpu, scb_addr);
>   	if (IS_ERR(vsie_page))
>   		return PTR_ERR(vsie_page);
> -	else if (!vsie_page)
> +	if (!vsie_page)
>   		/* double use of sie control block - simply do nothing */
>   		return 0;
>   
> -	rc = pin_scb(vcpu, vsie_page, scb_addr);
> -	if (rc)
> -		goto out_put;
> +	/* get the vsie_sca including references to the original sca and all cbs */
> +	if (vsie_page->sca_gpa) {
> +		sca = get_vsie_sca(vcpu, vsie_page, vsie_page->sca_gpa);
> +		if (IS_ERR(sca)) {
> +			rc = PTR_ERR(sca);
> +			goto out_put_vsie_page;
> +		}
> +	}
> +
> +	/* shadow scb and sca for vsie_run */
>   	rc = shadow_scb(vcpu, vsie_page);
>   	if (rc)
> -		goto out_unpin_scb;
> +		goto out_put_vsie_sca;
> +	rc = shadow_sca(vcpu, vsie_page, sca);
> +	if (rc)
> +		goto out_unshadow_scb;
> +
>   	rc = pin_blocks(vcpu, vsie_page);
>   	if (rc)
> -		goto out_unshadow;
> +		goto out_unshadow_scb;
>   	register_shadow_scb(vcpu, vsie_page);
> +
>   	rc = vsie_run(vcpu, vsie_page);
> +
>   	unregister_shadow_scb(vcpu);
>   	unpin_blocks(vcpu, vsie_page);
> -out_unshadow:
> +out_unshadow_scb:
>   	unshadow_scb(vcpu, vsie_page);
> -out_unpin_scb:
> -	unpin_scb(vcpu, vsie_page, scb_addr);
> -out_put:
> +out_put_vsie_sca:
> +	put_vsie_sca(sca);
> +out_put_vsie_page:
>   	put_vsie_page(vsie_page);
>   
>   	return rc < 0 ? rc : 0;
> @@ -1589,27 +2042,58 @@ void kvm_s390_vsie_init(struct kvm *kvm)
>   {
>   	mutex_init(&kvm->arch.vsie.mutex);
>   	INIT_RADIX_TREE(&kvm->arch.vsie.addr_to_page, GFP_KERNEL_ACCOUNT);
> +	init_rwsem(&kvm->arch.vsie.ssca_lock);
> +	INIT_RADIX_TREE(&kvm->arch.vsie.osca_to_sca, GFP_KERNEL_ACCOUNT);
> +}
> +
> +static void kvm_s390_vsie_destroy_page(struct kvm *kvm, struct vsie_page *vsie_page)
> +{
> +	if (!vsie_page)
> +		return;
> +	unpin_scb(kvm, vsie_page);
> +	release_gmap_shadow(vsie_page);
> +	/* free the radix tree entry */
> +	if (vsie_page->scb_gpa != ULONG_MAX)
> +		radix_tree_delete(&kvm->arch.vsie.addr_to_page,
> +				  vsie_page->scb_gpa >> 9);
> +	free_vsie_page(vsie_page);
>   }
>   
>   /* Destroy the vsie data structures. To be called when a vm is destroyed. */
>   void kvm_s390_vsie_destroy(struct kvm *kvm)

When we arrive at this function all vcpus have been destroyed already.
All shadow gmaps have received a put, as did the parent gmap.

>   {
>   	struct vsie_page *vsie_page;
> -	int i;
> +	struct vsie_sca *sca;
> +	int i, j;
>   
>   	mutex_lock(&kvm->arch.vsie.mutex);

struct kvm's refcount is 0 at this point, what are we protecting against?
What am I missing?

>   	for (i = 0; i < kvm->arch.vsie.page_count; i++) {
>   		vsie_page = kvm->arch.vsie.pages[i];
>   		kvm->arch.vsie.pages[i] = NULL;
> -		release_gmap_shadow(vsie_page);
> -		/* free the radix tree entry */
> -		if (vsie_page->scb_gpa != ULONG_MAX)
> -			radix_tree_delete(&kvm->arch.vsie.addr_to_page,
> -					  vsie_page->scb_gpa >> 9);
> -		free_page((unsigned long)vsie_page);
> +		kvm_s390_vsie_destroy_page(kvm, vsie_page);
>   	}
> -	kvm->arch.vsie.page_count = 0;
>   	mutex_unlock(&kvm->arch.vsie.mutex);
> +	down_write(&kvm->arch.vsie.ssca_lock);
> +	for (i = 0; i < kvm->arch.vsie.sca_count; i++) {
> +		sca = kvm->arch.vsie.scas[i];
> +		kvm->arch.vsie.scas[i] = NULL;
> +
> +		mutex_lock(&kvm->arch.vsie.mutex);
> +		for (j = 0; j < KVM_S390_MAX_VSIE_VCPUS; j++) {
> +			vsie_page = sca->pages[j];
> +			sca->pages[j] = NULL;
> +			kvm_s390_vsie_destroy_page(kvm, vsie_page);
> +		}
> +		sca->page_count = 0;
> +		mutex_unlock(&kvm->arch.vsie.mutex);
> +
> +		unpin_sca(kvm, sca);
> +		atomic_set(&sca->ref_count, 0);
> +		radix_tree_delete(&kvm->arch.vsie.osca_to_sca, sca->sca_gpa);
> +		free_pages_exact(sca, sizeof(*sca));
> +	}
> +	kvm->arch.vsie.sca_count = 0;
> +	up_write(&kvm->arch.vsie.ssca_lock);

Why do we need to set anything to 0 here?

struct kvm and all struct vsie_page are either freed here or a couple 
meters down the road.

>   }
>   
>   void kvm_s390_vsie_kick(struct kvm_vcpu *vcpu)
> 





Thread overview: 35+ messages
2025-11-10 17:16 [PATCH RFC v2 00/11] KVM: s390: Add VSIE SIGP Interpretation (vsie_sigpif) Christoph Schlameuss
2025-11-10 17:16 ` [PATCH RFC v2 01/11] KVM: s390: Add SCAO read and write helpers Christoph Schlameuss
2025-11-11 13:45   ` Claudio Imbrenda
2025-11-11 14:37     ` Christoph Schlameuss
2025-11-11 14:55       ` Claudio Imbrenda
2025-11-10 17:16 ` [PATCH RFC v2 02/11] KVM: s390: Remove double 64bscao feature check Christoph Schlameuss
2025-11-10 21:32   ` Eric Farman
2025-11-11  8:13   ` Hendrik Brueckner
2025-11-11 13:20   ` Janosch Frank
2025-11-10 17:16 ` [PATCH RFC v2 03/11] KVM: s390: Move scao validation into a function Christoph Schlameuss
2025-11-10 21:30   ` Eric Farman
2025-11-11  8:48     ` Christoph Schlameuss
2025-11-10 17:16 ` [PATCH RFC v2 04/11] KVM: s390: Add vsie_sigpif detection Christoph Schlameuss
2025-11-10 17:16 ` [PATCH RFC v2 05/11] KVM: s390: Add ssca_block and ssca_entry structs for vsie_ie Christoph Schlameuss
2025-11-10 17:16 ` [PATCH RFC v2 06/11] KVM: s390: Add helper to pin multiple guest pages Christoph Schlameuss
2025-11-13 15:24   ` Janosch Frank
2025-11-10 17:16 ` [PATCH RFC v2 07/11] KVM: s390: Shadow VSIE SCA in guest-1 Christoph Schlameuss
2025-11-14 14:09   ` Janosch Frank
2025-11-17 15:39     ` Christoph Schlameuss
2025-11-17 15:22   ` Janosch Frank
2025-11-18  9:27     ` Christoph Schlameuss
2025-11-18 16:04   ` Janosch Frank
2025-11-21 15:10     ` Christoph Schlameuss
2025-11-20 11:15   ` Janosch Frank [this message]
2025-11-10 17:16 ` [PATCH RFC v2 08/11] KVM: s390: Allow guest-3 cpu add and remove with vsie sigpif Christoph Schlameuss
2025-11-11 15:47   ` Janosch Frank
2025-11-11 16:34     ` Christoph Schlameuss
2025-11-10 17:16 ` [PATCH RFC v2 09/11] KVM: s390: Allow guest-3 switch to extended sca " Christoph Schlameuss
2025-11-11 14:18   ` Janosch Frank
2025-11-10 17:16 ` [PATCH RFC v2 10/11] KVM: s390: Add VSIE shadow configuration Christoph Schlameuss
2025-11-20 11:02   ` Janosch Frank
2025-11-24 10:57     ` Christoph Schlameuss
2025-11-10 17:16 ` [PATCH RFC v2 11/11] KVM: s390: Add VSIE shadow stat counters Christoph Schlameuss
2025-11-20 11:07   ` Janosch Frank
2025-11-24 10:59     ` Christoph Schlameuss
