public inbox for kvm@vger.kernel.org
From: "Christoph Schlameuss" <schlameuss@linux.ibm.com>
To: "Janosch Frank" <frankja@linux.ibm.com>,
	"Christoph Schlameuss" <schlameuss@linux.ibm.com>,
	<kvm@vger.kernel.org>
Cc: <linux-s390@vger.kernel.org>,
	"Heiko Carstens" <hca@linux.ibm.com>,
	"Vasily Gorbik" <gor@linux.ibm.com>,
	"Alexander Gordeev" <agordeev@linux.ibm.com>,
	"Christian Borntraeger" <borntraeger@linux.ibm.com>,
	"Claudio Imbrenda" <imbrenda@linux.ibm.com>,
	"Nico Boehr" <nrb@linux.ibm.com>,
	"David Hildenbrand" <david@redhat.com>,
	"Sven Schnelle" <svens@linux.ibm.com>,
	"Paolo Bonzini" <pbonzini@redhat.com>,
	"Shuah Khan" <shuah@kernel.org>
Subject: Re: [PATCH RFC v2 10/11] KVM: s390: Add VSIE shadow configuration
Date: Mon, 24 Nov 2025 11:57:05 +0100	[thread overview]
Message-ID: <DEGVDBEDNAV3.3FB0VGQ9RWPWZ@linux.ibm.com> (raw)
In-Reply-To: <2087b6b4-34b4-4509-9cae-bfe719d99992@linux.ibm.com>

On Thu Nov 20, 2025 at 12:02 PM CET, Janosch Frank wrote:
> On 11/10/25 18:16, Christoph Schlameuss wrote:
>> Introduce two new module parameters allowing to keep more shadow
>> structures
>> 
>> * vsie_shadow_scb_max
>>    Override the maximum number of VSIE control blocks / vsie_pages to
>>    shadow in guest-1. KVM will use the maximum of the current number of
>>    vCPUs and a maximum of 256 or this value if it is lower.
>>    This is the number of guest-3 control blocks / CPUs to keep shadowed
>>    to minimize the repeated shadowing effort.
>
> KVM will either use this value or the number of current VCPUs. Either 
> way the number will be capped to 256.
>

>> 
>> * vsie_shadow_sca_max
>>    Override the maximum number of VSIE system control areas to
>>    shadow in guest-1. KVM will use a minimum of the current number of
>>    vCPUs and a maximum of 256 or this value if it is lower.
>>    This is the number of guest-3 system control areas / VMs to keep
>>    shadowed to minimize repeated shadowing effort.
>> 
>> Signed-off-by: Christoph Schlameuss <schlameuss@linux.ibm.com>
> Except for the current implementation with arrays, nothing is limiting 
> us from going over 256 in the future by changing the code. I'm not sure 
> if I ever want to see such an environment in practice though.
>

Even if we implemented that, I would not expect any improvement without
SIGPI. It would be interesting to see whether a heavily overcommitted system
would benefit from it at all. It would mainly reduce the re-init effort for
SCB and SCA shadows that are not currently running.

>>   arch/s390/kvm/vsie.c | 18 +++++++++++++++---
>>   1 file changed, 15 insertions(+), 3 deletions(-)
>> 
>> diff --git a/arch/s390/kvm/vsie.c b/arch/s390/kvm/vsie.c
>> index b69ef763b55296875522f2e63169446b5e2d5053..cd114df5e119bd289d14037d1f1c5bfe148cf5c7 100644
>> --- a/arch/s390/kvm/vsie.c
>> +++ b/arch/s390/kvm/vsie.c
>> @@ -98,9 +98,19 @@ struct vsie_sca {
>>   	struct vsie_page	*pages[KVM_S390_MAX_VSIE_VCPUS];
>>   };
>>   
>> +/* maximum vsie shadow scb */
>> +unsigned int vsie_shadow_scb_max;
>
> Don't we need to initialize the variables or mark them static so they are 0?
>

Yes, I will initialize them to 1.

Interestingly, I at least did not notice this acting up.

>> +module_param(vsie_shadow_scb_max, uint, 0644);
>> +MODULE_PARM_DESC(vsie_shadow_scb_max, "Maximum number of VSIE shadow control blocks to keep. Values smaller number vcpus uses number of vcpus; maximum 256");
>> +
>> +/* maximum vsie shadow sca */
>> +unsigned int vsie_shadow_sca_max;
>> +module_param(vsie_shadow_sca_max, uint, 0644);
>> +MODULE_PARM_DESC(vsie_shadow_sca_max, "Maximum number of VSIE shadow system control areas to keep. Values smaller number of vcpus uses number of vcpus; 0 to disable sca shadowing; maximum 256");
>> +
>>   static inline bool use_vsie_sigpif(struct kvm *kvm)
>>   {
>> -	return kvm->arch.use_vsie_sigpif;
>> +	return kvm->arch.use_vsie_sigpif && vsie_shadow_sca_max;
>
> This functions as the enablement of vsie_sigpif?
> Is there a reason why we don't want this enabled per default?
>

Yes, defaulting both values to 1 seems most logical to me.

I set this parameter in my testing, so I did not actually notice that the
default here disables it. Thanks.

>>   }
>>   
>>   static inline bool use_vsie_sigpif_for(struct kvm *kvm, struct vsie_page *vsie_page)
>> @@ -907,7 +917,8 @@ static struct vsie_sca *get_vsie_sca(struct kvm_vcpu *vcpu, struct vsie_page *vs
>>   	 * We want at least #online_vcpus shadows, so every VCPU can execute the
>>   	 * VSIE in parallel. (Worst case all single core VMs.)
>>   	 */
>> -	max_sca = MIN(atomic_read(&kvm->online_vcpus), KVM_S390_MAX_VSIE_VCPUS);
>> +	max_sca = MIN(MAX(atomic_read(&kvm->online_vcpus), vsie_shadow_sca_max),
>> +		      KVM_S390_MAX_VSIE_VCPUS);
>>   	if (kvm->arch.vsie.sca_count < max_sca) {
>>   		BUILD_BUG_ON(sizeof(struct vsie_sca) > PAGE_SIZE);
>>   		sca_new = (void *)__get_free_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
>> @@ -1782,7 +1793,8 @@ static struct vsie_page *get_vsie_page(struct kvm_vcpu *vcpu, unsigned long addr
>>   		put_vsie_page(vsie_page);
>>   	}
>>   
>> -	max_vsie_page = MIN(atomic_read(&kvm->online_vcpus), KVM_S390_MAX_VSIE_VCPUS);
>> +	max_vsie_page = MIN(MAX(atomic_read(&kvm->online_vcpus), vsie_shadow_scb_max),
>> +			    KVM_S390_MAX_VSIE_VCPUS);
>>   
>>   	/* allocate new vsie_page - we will likely need it */
>>   	if (addr || kvm->arch.vsie.page_count < max_vsie_page) {
>> 


