* [PATCH 0/6] x86/kvm: add boot parameters for max vcpu configs
@ 2021-07-01 15:40 Juergen Gross
  2021-07-01 15:41 ` [PATCH 3/6] x86/kvm: add boot parameter for maximum vcpu-id Juergen Gross
                   ` (2 more replies)
  0 siblings, 3 replies; 11+ messages in thread
From: Juergen Gross @ 2021-07-01 15:40 UTC (permalink / raw)
  To: linux-kernel, x86, kvm, linux-doc, linux-arm-kernel
  Cc: Juergen Gross, Paolo Bonzini, Sean Christopherson,
	Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin,
	Jonathan Corbet, Marc Zyngier, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Catalin Marinas, Will Deacon, kvmarm

In order to support even huge numbers of vcpus per guest with a single
kernel, some arrays should be sized dynamically.

The easiest way to do that is to add boot parameters for the maximum
number of vcpus and the highest supported vcpu-id, overriding the
normal defaults.

This patch series does that for x86. The same scheme can easily be
adapted to other architectures, but I don't want to do that in the
first iteration.

In the long term I'd suggest a per-guest setting of the two parameters,
allowing some memory to be spared for smaller guests. OTOH this would
require new ioctl()s and corresponding qemu modifications, so I have
left those out for now.

I've verified on x86 that the series doesn't break normal guest
operation and that the new parameters are effective. For Arm64 I did a
compile test only.
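
As an illustration (the values here are made up, not taken from the
series), a host that should run larger guests could boot with both
parameters raised:

	kvm.max_vcpus=1024 kvm.max_vcpu_id=4095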

Juergen Gross (6):
  x86/kvm: fix vcpu-id indexed array sizes
  x86/kvm: remove non-x86 stuff from arch/x86/kvm/ioapic.h
  x86/kvm: add boot parameter for maximum vcpu-id
  x86/kvm: introduce per cpu vcpu masks
  kvm: allocate vcpu pointer array separately
  x86/kvm: add boot parameter for setting max number of vcpus per guest

 .../admin-guide/kernel-parameters.txt         | 18 +++++++
 arch/arm64/kvm/arm.c                          | 28 +++++++++--
 arch/x86/include/asm/kvm_host.h               | 22 ++++++---
 arch/x86/kvm/hyperv.c                         | 25 +++++++---
 arch/x86/kvm/ioapic.c                         | 14 +++++-
 arch/x86/kvm/ioapic.h                         |  8 +--
 arch/x86/kvm/irq_comm.c                       |  9 +++-
 arch/x86/kvm/x86.c                            | 49 ++++++++++++++++++-
 include/linux/kvm_host.h                      | 17 ++++++-
 9 files changed, 160 insertions(+), 30 deletions(-)

-- 
2.26.2



* [PATCH 3/6] x86/kvm: add boot parameter for maximum vcpu-id
  2021-07-01 15:40 [PATCH 0/6] x86/kvm: add boot parameters for max vcpu configs Juergen Gross
@ 2021-07-01 15:41 ` Juergen Gross
  2021-07-01 15:41 ` [PATCH 6/6] x86/kvm: add boot parameter for setting max number of vcpus per guest Juergen Gross
  2021-07-26 13:41 ` [PATCH 0/6] x86/kvm: add boot parameters for max vcpu configs Paolo Bonzini
  2 siblings, 0 replies; 11+ messages in thread
From: Juergen Gross @ 2021-07-01 15:41 UTC (permalink / raw)
  To: linux-kernel, x86, linux-doc, kvm
  Cc: Juergen Gross, Jonathan Corbet, Paolo Bonzini,
	Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin

Today the maximum vcpu-id of a kvm guest on x86 systems is set via a
#define in a header file.

In order to support higher vcpu-ids without generally increasing the
memory consumption of guests on the host (some guest structures contain
arrays sized by KVM_MAX_VCPU_ID), add a boot parameter for selecting
the maximum vcpu-id. By default it will still be the current value of
1023, but it can be set manually to higher or lower values.

This requires allocating the arrays that are sized by KVM_MAX_VCPU_ID
dynamically.
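
For readers unfamiliar with the pattern used in the ioapic.c hunk below,
here is a minimal userspace sketch of the same layout (names and helper
macros invented for illustration): the struct and both runtime-sized
arrays live in a single allocation, with the pointers fixed up to the
tail of the block.

	#include <stdlib.h>

	#define BITS_PER_LONG	(8 * sizeof(unsigned long))
	#define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

	struct dest_map {
		unsigned long *map;	/* bitmap: one bit per vcpu-id */
		unsigned char *vectors;	/* one vector byte per vcpu-id */
	};

	/* Allocate header plus both arrays for vcpu-ids 0..max_id. */
	static struct dest_map *dest_map_alloc(unsigned int max_id)
	{
		size_t sz = sizeof(struct dest_map) +
			    BITS_TO_LONGS(max_id + 1) * sizeof(unsigned long) +
			    (max_id + 1);
		struct dest_map *dm = calloc(1, sz);

		if (!dm)
			return NULL;
		dm->map = (unsigned long *)(dm + 1);
		dm->vectors = (unsigned char *)(dm->map +
						BITS_TO_LONGS(max_id + 1));
		return dm;
	}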

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 Documentation/admin-guide/kernel-parameters.txt |  8 ++++++++
 arch/x86/include/asm/kvm_host.h                 |  5 ++++-
 arch/x86/kvm/ioapic.c                           | 12 +++++++++++-
 arch/x86/kvm/ioapic.h                           |  4 ++--
 arch/x86/kvm/x86.c                              |  3 +++
 5 files changed, 28 insertions(+), 4 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index cb89dbdedc46..99bfa53a2bbd 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2365,6 +2365,14 @@
 			feature (tagged TLBs) on capable Intel chips.
 			Default is 1 (enabled)
 
+	kvm.max_vcpu_id=
+			[KVM,X86] Set the maximum allowed vcpu-id of a guest.
+			Some memory structure sizes depend on this value, so it
+			shouldn't be set too high. Note that each vcpu of a
+			guest needs to have a unique vcpu-id, so a single
+			guest can't have more vcpus than the set value + 1.
+			Default: 1023
+
 	l1tf=           [X86] Control mitigation of the L1TF vulnerability on
 			      affected CPUs
 
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 9c7ced0e3171..88b1ff898fb9 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -39,7 +39,8 @@
 
 #define KVM_MAX_VCPUS 288
 #define KVM_SOFT_MAX_VCPUS 240
-#define KVM_MAX_VCPU_ID 1023
+#define KVM_DEFAULT_MAX_VCPU_ID 1023
+#define KVM_MAX_VCPU_ID max_vcpu_id
 /* memory slots that are not exposed to userspace */
 #define KVM_PRIVATE_MEM_SLOTS 3
 
@@ -1511,6 +1512,8 @@ extern u64  kvm_max_tsc_scaling_ratio;
 extern u64  kvm_default_tsc_scaling_ratio;
 /* bus lock detection supported? */
 extern bool kvm_has_bus_lock_exit;
+/* maximum vcpu-id */
+extern unsigned int max_vcpu_id;
 
 extern u64 kvm_mce_cap_supported;
 
diff --git a/arch/x86/kvm/ioapic.c b/arch/x86/kvm/ioapic.c
index ff005fe738a4..52e8ea90c914 100644
--- a/arch/x86/kvm/ioapic.c
+++ b/arch/x86/kvm/ioapic.c
@@ -685,11 +685,21 @@ static const struct kvm_io_device_ops ioapic_mmio_ops = {
 int kvm_ioapic_init(struct kvm *kvm)
 {
 	struct kvm_ioapic *ioapic;
+	size_t sz;
 	int ret;
 
-	ioapic = kzalloc(sizeof(struct kvm_ioapic), GFP_KERNEL_ACCOUNT);
+	sz = sizeof(struct kvm_ioapic) +
+	     sizeof(*ioapic->rtc_status.dest_map.map) *
+		    BITS_TO_LONGS(KVM_MAX_VCPU_ID + 1) +
+	     sizeof(*ioapic->rtc_status.dest_map.vectors) *
+		    (KVM_MAX_VCPU_ID + 1);
+	ioapic = kzalloc(sz, GFP_KERNEL_ACCOUNT);
 	if (!ioapic)
 		return -ENOMEM;
+	ioapic->rtc_status.dest_map.map = (void *)(ioapic + 1);
+	ioapic->rtc_status.dest_map.vectors =
+		(void *)(ioapic->rtc_status.dest_map.map +
+			 BITS_TO_LONGS(KVM_MAX_VCPU_ID + 1));
 	spin_lock_init(&ioapic->lock);
 	INIT_DELAYED_WORK(&ioapic->eoi_inject, kvm_ioapic_eoi_inject_work);
 	kvm->arch.vioapic = ioapic;
diff --git a/arch/x86/kvm/ioapic.h b/arch/x86/kvm/ioapic.h
index bbd4a5d18b5d..623a3c5afad7 100644
--- a/arch/x86/kvm/ioapic.h
+++ b/arch/x86/kvm/ioapic.h
@@ -39,13 +39,13 @@ struct kvm_vcpu;
 
 struct dest_map {
 	/* vcpu bitmap where IRQ has been sent */
-	DECLARE_BITMAP(map, KVM_MAX_VCPU_ID + 1);
+	unsigned long *map;
 
 	/*
 	 * Vector sent to a given vcpu, only valid when
 	 * the vcpu's bit in map is set
 	 */
-	u8 vectors[KVM_MAX_VCPU_ID + 1];
+	u8 *vectors;
 };
 
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index e0f4a46649d7..0390d90fd360 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -177,6 +177,9 @@ module_param(force_emulation_prefix, bool, S_IRUGO);
 int __read_mostly pi_inject_timer = -1;
 module_param(pi_inject_timer, bint, S_IRUGO | S_IWUSR);
 
+unsigned int __read_mostly max_vcpu_id = KVM_DEFAULT_MAX_VCPU_ID;
+module_param(max_vcpu_id, uint, S_IRUGO);
+
 /*
  * Restoring the host value for MSRs that are only consumed when running in
  * usermode, e.g. SYSCALL MSRs and TSC_AUX, can be deferred until the CPU
-- 
2.26.2



* [PATCH 6/6] x86/kvm: add boot parameter for setting max number of vcpus per guest
  2021-07-01 15:40 [PATCH 0/6] x86/kvm: add boot parameters for max vcpu configs Juergen Gross
  2021-07-01 15:41 ` [PATCH 3/6] x86/kvm: add boot parameter for maximum vcpu-id Juergen Gross
@ 2021-07-01 15:41 ` Juergen Gross
  2021-07-14 11:15   ` Vitaly Kuznetsov
  2021-07-26 13:41 ` [PATCH 0/6] x86/kvm: add boot parameters for max vcpu configs Paolo Bonzini
  2 siblings, 1 reply; 11+ messages in thread
From: Juergen Gross @ 2021-07-01 15:41 UTC (permalink / raw)
  To: linux-kernel, x86, linux-doc, kvm
  Cc: Juergen Gross, Jonathan Corbet, Paolo Bonzini,
	Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin

Today the maximum number of vcpus of a kvm guest is set via a #define
in a header file.

In order to support higher vcpu numbers for guests without generally
increasing the memory consumption of guests on the host, especially on
very large systems, add a boot parameter for specifying the number of
allowed vcpus for guests.

The default will still be the current setting of 288. The special
value 0 limits the number of possible vcpus to the number of possible
cpus of the host.
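
As a usage note (illustrative output, not from the original posting):
since the parameter is registered with module_param(..., S_IRUGO), the
effective value can be read back after boot via sysfs:

	$ cat /sys/module/kvm/parameters/max_vcpus
	288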

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 Documentation/admin-guide/kernel-parameters.txt | 10 ++++++++++
 arch/x86/include/asm/kvm_host.h                 |  5 ++++-
 arch/x86/kvm/x86.c                              |  7 +++++++
 3 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 99bfa53a2bbd..8eb856396ffa 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2373,6 +2373,16 @@
 			guest can't have more vcpus than the set value + 1.
 			Default: 1023
 
+	kvm.max_vcpus=	[KVM,X86] Set the maximum allowed number of vcpus per
+			guest. The special value 0 sets the limit to the number
+			of physical cpus possible on the host (including not
+			yet hotplugged cpus). Higher values will result in
+			slightly higher memory consumption per guest. Depending
+			on the value and the virtual topology the maximum
+			allowed vcpu-id might need to be raised, too (see
+			kvm.max_vcpu_id parameter).
+			Default: 288
+
 	l1tf=           [X86] Control mitigation of the L1TF vulnerability on
 			      affected CPUs
 
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 39cbc4b6bffb..65ae82a5d444 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -37,7 +37,8 @@
 
 #define __KVM_HAVE_ARCH_VCPU_DEBUGFS
 
-#define KVM_MAX_VCPUS 288
+#define KVM_DEFAULT_MAX_VCPUS 288
+#define KVM_MAX_VCPUS max_vcpus
 #define KVM_SOFT_MAX_VCPUS 240
 #define KVM_DEFAULT_MAX_VCPU_ID 1023
 #define KVM_MAX_VCPU_ID max_vcpu_id
@@ -1509,6 +1510,8 @@ extern u64  kvm_max_tsc_scaling_ratio;
 extern u64  kvm_default_tsc_scaling_ratio;
 /* bus lock detection supported? */
 extern bool kvm_has_bus_lock_exit;
+/* maximum number of vcpus per guest */
+extern unsigned int max_vcpus;
 /* maximum vcpu-id */
 extern unsigned int max_vcpu_id;
 /* per cpu vcpu bitmasks (disable preemption during usage) */
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a9b0bb2221ea..888c4507504d 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -177,6 +177,10 @@ module_param(force_emulation_prefix, bool, S_IRUGO);
 int __read_mostly pi_inject_timer = -1;
 module_param(pi_inject_timer, bint, S_IRUGO | S_IWUSR);
 
+unsigned int __read_mostly max_vcpus = KVM_DEFAULT_MAX_VCPUS;
+module_param(max_vcpus, uint, S_IRUGO);
+EXPORT_SYMBOL_GPL(max_vcpus);
+
 unsigned int __read_mostly max_vcpu_id = KVM_DEFAULT_MAX_VCPU_ID;
 module_param(max_vcpu_id, uint, S_IRUGO);
 
@@ -10648,6 +10652,9 @@ int kvm_arch_hardware_setup(void *opaque)
 	if (boot_cpu_has(X86_FEATURE_XSAVES))
 		rdmsrl(MSR_IA32_XSS, host_xss);
 
+	if (max_vcpus == 0)
+		max_vcpus = num_possible_cpus();
+
 	kvm_pcpu_vcpu_mask = __alloc_percpu(KVM_VCPU_MASK_SZ,
 					    sizeof(unsigned long));
 	kvm_hv_vp_bitmap = __alloc_percpu(KVM_HV_VPMAP_SZ, sizeof(u64));
-- 
2.26.2



* Re: [PATCH 6/6] x86/kvm: add boot parameter for setting max number of vcpus per guest
  2021-07-01 15:41 ` [PATCH 6/6] x86/kvm: add boot parameter for setting max number of vcpus per guest Juergen Gross
@ 2021-07-14 11:15   ` Vitaly Kuznetsov
  2021-07-14 11:24     ` Juergen Gross
  0 siblings, 1 reply; 11+ messages in thread
From: Vitaly Kuznetsov @ 2021-07-14 11:15 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Jonathan Corbet, Paolo Bonzini, Sean Christopherson, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, H. Peter Anvin, linux-kernel, x86, linux-doc,
	kvm

Juergen Gross <jgross@suse.com> writes:

> Today the maximum number of vcpus of a kvm guest is set via a #define
> in a header file.
>
> In order to support higher vcpu numbers for guests without generally
> increasing the memory consumption of guests on the host, especially on
> very large systems, add a boot parameter for specifying the number of
> allowed vcpus for guests.
>
> The default will still be the current setting of 288. The special
> value 0 limits the number of possible vcpus to the number of possible
> cpus of the host.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  Documentation/admin-guide/kernel-parameters.txt | 10 ++++++++++
>  arch/x86/include/asm/kvm_host.h                 |  5 ++++-
>  arch/x86/kvm/x86.c                              |  7 +++++++
>  3 files changed, 21 insertions(+), 1 deletion(-)
>
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index 99bfa53a2bbd..8eb856396ffa 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -2373,6 +2373,16 @@
>  			guest can't have more vcpus than the set value + 1.
>  			Default: 1023
>  
> +	kvm.max_vcpus=	[KVM,X86] Set the maximum allowed number of vcpus per
> +			guest. The special value 0 sets the limit to the number
> +			of physical cpus possible on the host (including not
> +			yet hotplugged cpus). Higher values will result in
> +			slightly higher memory consumption per guest. Depending
> +			on the value and the virtual topology the maximum
> +			allowed vcpu-id might need to be raised, too (see
> +			kvm.max_vcpu_id parameter).

I'd suggest at least adding a sanity check: 'max_vcpu_id' should always
be >= 'max_vcpus'. Alternatively, we can replace 'max_vcpu_id' with,
say, 'vcpu_id_to_vcpus_ratio' and set it to e.g. '4' by default.
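
A minimal sketch of such a check (hypothetical code, not part of the
posted series), run once during hardware setup before the value is used
for any allocations:

	if (max_vcpu_id < max_vcpus) {
		pr_warn("kvm: raising max_vcpu_id (%u) to max_vcpus (%u)\n",
			max_vcpu_id, max_vcpus);
		max_vcpu_id = max_vcpus;
	}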

> +			Default: 288
> +
>  	l1tf=           [X86] Control mitigation of the L1TF vulnerability on
>  			      affected CPUs
>  
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 39cbc4b6bffb..65ae82a5d444 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -37,7 +37,8 @@
>  
>  #define __KVM_HAVE_ARCH_VCPU_DEBUGFS
>  
> -#define KVM_MAX_VCPUS 288
> +#define KVM_DEFAULT_MAX_VCPUS 288
> +#define KVM_MAX_VCPUS max_vcpus
>  #define KVM_SOFT_MAX_VCPUS 240
>  #define KVM_DEFAULT_MAX_VCPU_ID 1023
>  #define KVM_MAX_VCPU_ID max_vcpu_id
> @@ -1509,6 +1510,8 @@ extern u64  kvm_max_tsc_scaling_ratio;
>  extern u64  kvm_default_tsc_scaling_ratio;
>  /* bus lock detection supported? */
>  extern bool kvm_has_bus_lock_exit;
> +/* maximum number of vcpus per guest */
> +extern unsigned int max_vcpus;
>  /* maximum vcpu-id */
>  extern unsigned int max_vcpu_id;
>  /* per cpu vcpu bitmasks (disable preemption during usage) */
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index a9b0bb2221ea..888c4507504d 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -177,6 +177,10 @@ module_param(force_emulation_prefix, bool, S_IRUGO);
>  int __read_mostly pi_inject_timer = -1;
>  module_param(pi_inject_timer, bint, S_IRUGO | S_IWUSR);
>  
> +unsigned int __read_mostly max_vcpus = KVM_DEFAULT_MAX_VCPUS;
> +module_param(max_vcpus, uint, S_IRUGO);
> +EXPORT_SYMBOL_GPL(max_vcpus);
> +
>  unsigned int __read_mostly max_vcpu_id = KVM_DEFAULT_MAX_VCPU_ID;
>  module_param(max_vcpu_id, uint, S_IRUGO);
>  
> @@ -10648,6 +10652,9 @@ int kvm_arch_hardware_setup(void *opaque)
>  	if (boot_cpu_has(X86_FEATURE_XSAVES))
>  		rdmsrl(MSR_IA32_XSS, host_xss);
>  
> +	if (max_vcpus == 0)
> +		max_vcpus = num_possible_cpus();

Is this special case really needed? I mean 'max_vcpus' is not '0' by
default, so whoever sets it manually probably knows how big his guests
are going to be anyway, and it is not always obvious how many CPUs are
reported by 'num_possible_cpus()' (ACPI tables can be weird, for example).

> +
>  	kvm_pcpu_vcpu_mask = __alloc_percpu(KVM_VCPU_MASK_SZ,
>  					    sizeof(unsigned long));
>  	kvm_hv_vp_bitmap = __alloc_percpu(KVM_HV_VPMAP_SZ, sizeof(u64));

-- 
Vitaly



* Re: [PATCH 6/6] x86/kvm: add boot parameter for setting max number of vcpus per guest
  2021-07-14 11:15   ` Vitaly Kuznetsov
@ 2021-07-14 11:24     ` Juergen Gross
  2021-07-14 11:45       ` Vitaly Kuznetsov
  0 siblings, 1 reply; 11+ messages in thread
From: Juergen Gross @ 2021-07-14 11:24 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: Jonathan Corbet, Paolo Bonzini, Sean Christopherson, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, H. Peter Anvin, linux-kernel, x86, linux-doc,
	kvm


On 14.07.21 13:15, Vitaly Kuznetsov wrote:
> Juergen Gross <jgross@suse.com> writes:
> 
>> Today the maximum number of vcpus of a kvm guest is set via a #define
>> in a header file.
>>
>> In order to support higher vcpu numbers for guests without generally
>> increasing the memory consumption of guests on the host, especially on
>> very large systems, add a boot parameter for specifying the number of
>> allowed vcpus for guests.
>>
>> The default will still be the current setting of 288. The special
>> value 0 limits the number of possible vcpus to the number of possible
>> cpus of the host.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>>   Documentation/admin-guide/kernel-parameters.txt | 10 ++++++++++
>>   arch/x86/include/asm/kvm_host.h                 |  5 ++++-
>>   arch/x86/kvm/x86.c                              |  7 +++++++
>>   3 files changed, 21 insertions(+), 1 deletion(-)
>>
>> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
>> index 99bfa53a2bbd..8eb856396ffa 100644
>> --- a/Documentation/admin-guide/kernel-parameters.txt
>> +++ b/Documentation/admin-guide/kernel-parameters.txt
>> @@ -2373,6 +2373,16 @@
>>   			guest can't have more vcpus than the set value + 1.
>>   			Default: 1023
>>   
>> +	kvm.max_vcpus=	[KVM,X86] Set the maximum allowed number of vcpus per
>> +			guest. The special value 0 sets the limit to the number
>> +			of physical cpus possible on the host (including not
>> +			yet hotplugged cpus). Higher values will result in
>> +			slightly higher memory consumption per guest. Depending
>> +			on the value and the virtual topology the maximum
>> +			allowed vcpu-id might need to be raised, too (see
>> +			kvm.max_vcpu_id parameter).
> 
> I'd suggest at least adding a sanity check: 'max_vcpu_id' should always
> be >= 'max_vcpus'. Alternatively, we can replace 'max_vcpu_id' with,
> say, 'vcpu_id_to_vcpus_ratio' and set it to e.g. '4' by default.

Either would be fine with me.

A default of '2' for the ratio would seem more appropriate to me,
however. A thread count per core not being a power of 2 is quite
unlikely, and the worst case scenario for cores per socket would be
2^n + 1.

> 
>> +			Default: 288
>> +
>>   	l1tf=           [X86] Control mitigation of the L1TF vulnerability on
>>   			      affected CPUs
>>   
>> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
>> index 39cbc4b6bffb..65ae82a5d444 100644
>> --- a/arch/x86/include/asm/kvm_host.h
>> +++ b/arch/x86/include/asm/kvm_host.h
>> @@ -37,7 +37,8 @@
>>   
>>   #define __KVM_HAVE_ARCH_VCPU_DEBUGFS
>>   
>> -#define KVM_MAX_VCPUS 288
>> +#define KVM_DEFAULT_MAX_VCPUS 288
>> +#define KVM_MAX_VCPUS max_vcpus
>>   #define KVM_SOFT_MAX_VCPUS 240
>>   #define KVM_DEFAULT_MAX_VCPU_ID 1023
>>   #define KVM_MAX_VCPU_ID max_vcpu_id
>> @@ -1509,6 +1510,8 @@ extern u64  kvm_max_tsc_scaling_ratio;
>>   extern u64  kvm_default_tsc_scaling_ratio;
>>   /* bus lock detection supported? */
>>   extern bool kvm_has_bus_lock_exit;
>> +/* maximum number of vcpus per guest */
>> +extern unsigned int max_vcpus;
>>   /* maximum vcpu-id */
>>   extern unsigned int max_vcpu_id;
>>   /* per cpu vcpu bitmasks (disable preemption during usage) */
>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>> index a9b0bb2221ea..888c4507504d 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -177,6 +177,10 @@ module_param(force_emulation_prefix, bool, S_IRUGO);
>>   int __read_mostly pi_inject_timer = -1;
>>   module_param(pi_inject_timer, bint, S_IRUGO | S_IWUSR);
>>   
>> +unsigned int __read_mostly max_vcpus = KVM_DEFAULT_MAX_VCPUS;
>> +module_param(max_vcpus, uint, S_IRUGO);
>> +EXPORT_SYMBOL_GPL(max_vcpus);
>> +
>>   unsigned int __read_mostly max_vcpu_id = KVM_DEFAULT_MAX_VCPU_ID;
>>   module_param(max_vcpu_id, uint, S_IRUGO);
>>   
>> @@ -10648,6 +10652,9 @@ int kvm_arch_hardware_setup(void *opaque)
>>   	if (boot_cpu_has(X86_FEATURE_XSAVES))
>>   		rdmsrl(MSR_IA32_XSS, host_xss);
>>   
>> +	if (max_vcpus == 0)
>> +		max_vcpus = num_possible_cpus();
> 
> Is this special case really needed? I mean 'max_vcpus' is not '0' by
> default, so whoever sets it manually probably knows how big his guests
> are going to be anyway, and it is not always obvious how many CPUs are
> reported by 'num_possible_cpus()' (ACPI tables can be weird, for example).

The idea was to make it easy for anyone managing a large fleet of hosts
and wanting to have a common setting for all of them.

It would even be possible to use '0' as the default (probably via config
option only).
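
For illustration only (no such option exists in this series), a Kconfig
default along those lines could look like:

	config KVM_MAX_VCPUS_DEFAULT
		int "Default maximum number of vcpus per guest (0: all host cpus)"
		default 288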

> 
>> +
>>   	kvm_pcpu_vcpu_mask = __alloc_percpu(KVM_VCPU_MASK_SZ,
>>   					    sizeof(unsigned long));
>>   	kvm_hv_vp_bitmap = __alloc_percpu(KVM_HV_VPMAP_SZ, sizeof(u64));
> 

Thanks for the feedback,


Juergen


* Re: [PATCH 6/6] x86/kvm: add boot parameter for setting max number of vcpus per guest
  2021-07-14 11:24     ` Juergen Gross
@ 2021-07-14 11:45       ` Vitaly Kuznetsov
  2021-07-14 13:04         ` Juergen Gross
  0 siblings, 1 reply; 11+ messages in thread
From: Vitaly Kuznetsov @ 2021-07-14 11:45 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Jonathan Corbet, Paolo Bonzini, Sean Christopherson, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, H. Peter Anvin, linux-kernel, x86, linux-doc,
	kvm

Juergen Gross <jgross@suse.com> writes:

> On 14.07.21 13:15, Vitaly Kuznetsov wrote:
>> Juergen Gross <jgross@suse.com> writes:
>> 
>>> Today the maximum number of vcpus of a kvm guest is set via a #define
>>> in a header file.
>>>
>>> In order to support higher vcpu numbers for guests without generally
>>> increasing the memory consumption of guests on the host, especially on
>>> very large systems, add a boot parameter for specifying the number of
>>> allowed vcpus for guests.
>>>
>>> The default will still be the current setting of 288. The special
>>> value 0 limits the number of possible vcpus to the number of possible
>>> cpus of the host.
>>>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> ---
>>>   Documentation/admin-guide/kernel-parameters.txt | 10 ++++++++++
>>>   arch/x86/include/asm/kvm_host.h                 |  5 ++++-
>>>   arch/x86/kvm/x86.c                              |  7 +++++++
>>>   3 files changed, 21 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
>>> index 99bfa53a2bbd..8eb856396ffa 100644
>>> --- a/Documentation/admin-guide/kernel-parameters.txt
>>> +++ b/Documentation/admin-guide/kernel-parameters.txt
>>> @@ -2373,6 +2373,16 @@
>>>   			guest can't have more vcpus than the set value + 1.
>>>   			Default: 1023
>>>   
>>> +	kvm.max_vcpus=	[KVM,X86] Set the maximum allowed number of vcpus per
>>> +			guest. The special value 0 sets the limit to the number
>>> +			of physical cpus possible on the host (including not
>>> +			yet hotplugged cpus). Higher values will result in
>>> +			slightly higher memory consumption per guest. Depending
>>> +			on the value and the virtual topology the maximum
>>> +			allowed vcpu-id might need to be raised, too (see
>>> +			kvm.max_vcpu_id parameter).
>> 
>> I'd suggest at least adding a sanity check: 'max_vcpu_id' should always
>> be >= 'max_vcpus'. Alternatively, we can replace 'max_vcpu_id' with,
>> say, 'vcpu_id_to_vcpus_ratio' and set it to e.g. '4' by default.
>
> Either would be fine with me.
>
> A default of '2' for the ratio would seem more appropriate to me,
> however. A thread count per core not being a power of 2 is quite
> unlikely, and the worst case scenario for cores per socket would be
> 2^n + 1.
>

(I vaguely recall that AMD EPYC has more than the thread id (package
id?) encapsulated in the APIC id.)

Personally, I'd vote for introducing a 'ratio' parameter then so
generally users will only have to set 'kvm.max_vcpus'.
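
A minimal sketch of that idea (the parameter name is the one suggested
above; the derivation itself is invented here for illustration):

	unsigned int vcpu_id_to_vcpus_ratio = 4;	/* suggested default */

	/* A derived limit instead of a second knob: with max_vcpus = 288
	 * and a ratio of 4 this would allow vcpu-ids up to 1151. */
	#define KVM_MAX_VCPU_ID (max_vcpus * vcpu_id_to_vcpus_ratio - 1)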

>> 
>>> +			Default: 288
>>> +
>>>   	l1tf=           [X86] Control mitigation of the L1TF vulnerability on
>>>   			      affected CPUs
>>>   
>>> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
>>> index 39cbc4b6bffb..65ae82a5d444 100644
>>> --- a/arch/x86/include/asm/kvm_host.h
>>> +++ b/arch/x86/include/asm/kvm_host.h
>>> @@ -37,7 +37,8 @@
>>>   
>>>   #define __KVM_HAVE_ARCH_VCPU_DEBUGFS
>>>   
>>> -#define KVM_MAX_VCPUS 288
>>> +#define KVM_DEFAULT_MAX_VCPUS 288
>>> +#define KVM_MAX_VCPUS max_vcpus
>>>   #define KVM_SOFT_MAX_VCPUS 240
>>>   #define KVM_DEFAULT_MAX_VCPU_ID 1023
>>>   #define KVM_MAX_VCPU_ID max_vcpu_id
>>> @@ -1509,6 +1510,8 @@ extern u64  kvm_max_tsc_scaling_ratio;
>>>   extern u64  kvm_default_tsc_scaling_ratio;
>>>   /* bus lock detection supported? */
>>>   extern bool kvm_has_bus_lock_exit;
>>> +/* maximum number of vcpus per guest */
>>> +extern unsigned int max_vcpus;
>>>   /* maximum vcpu-id */
>>>   extern unsigned int max_vcpu_id;
>>>   /* per cpu vcpu bitmasks (disable preemption during usage) */
>>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>>> index a9b0bb2221ea..888c4507504d 100644
>>> --- a/arch/x86/kvm/x86.c
>>> +++ b/arch/x86/kvm/x86.c
>>> @@ -177,6 +177,10 @@ module_param(force_emulation_prefix, bool, S_IRUGO);
>>>   int __read_mostly pi_inject_timer = -1;
>>>   module_param(pi_inject_timer, bint, S_IRUGO | S_IWUSR);
>>>   
>>> +unsigned int __read_mostly max_vcpus = KVM_DEFAULT_MAX_VCPUS;
>>> +module_param(max_vcpus, uint, S_IRUGO);
>>> +EXPORT_SYMBOL_GPL(max_vcpus);
>>> +
>>>   unsigned int __read_mostly max_vcpu_id = KVM_DEFAULT_MAX_VCPU_ID;
>>>   module_param(max_vcpu_id, uint, S_IRUGO);
>>>   
>>> @@ -10648,6 +10652,9 @@ int kvm_arch_hardware_setup(void *opaque)
>>>   	if (boot_cpu_has(X86_FEATURE_XSAVES))
>>>   		rdmsrl(MSR_IA32_XSS, host_xss);
>>>   
>>> +	if (max_vcpus == 0)
>>> +		max_vcpus = num_possible_cpus();
>> 
>> Is this special case really needed? I mean 'max_vcpus' is not '0' by
>> default, so whoever sets it manually probably knows how big his guests
>> are going to be anyway, and it is not always obvious how many CPUs are
>> reported by 'num_possible_cpus()' (ACPI tables can be weird, for example).
>
> The idea was to make it easy for anyone managing a large fleet of hosts
> and wanting to have a common setting for all of them.
>

I see. It seems to be uncommon indeed to run guests with more vCPUs than
host pCPUs so everything >= num_online_cpus() should be OK. My only
concern about num_possible_cpus() is that it is going to be hard to
explain what 'possible CPUs' means (but whoever cares that much about
wasting memory can always set the required value manually).

> It would even be possible to use '0' as the default (probably via config
> option only).
>
>> 
>>> +
>>>   	kvm_pcpu_vcpu_mask = __alloc_percpu(KVM_VCPU_MASK_SZ,
>>>   					    sizeof(unsigned long));
>>>   	kvm_hv_vp_bitmap = __alloc_percpu(KVM_HV_VPMAP_SZ, sizeof(u64));
>> 

-- 
Vitaly



* Re: [PATCH 6/6] x86/kvm: add boot parameter for setting max number of vcpus per guest
  2021-07-14 11:45       ` Vitaly Kuznetsov
@ 2021-07-14 13:04         ` Juergen Gross
  2021-07-14 13:21           ` Vitaly Kuznetsov
  0 siblings, 1 reply; 11+ messages in thread
From: Juergen Gross @ 2021-07-14 13:04 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: Jonathan Corbet, Paolo Bonzini, Sean Christopherson, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, H. Peter Anvin, linux-kernel, x86, linux-doc,
	kvm


On 14.07.21 13:45, Vitaly Kuznetsov wrote:
> Juergen Gross <jgross@suse.com> writes:
> 
>> On 14.07.21 13:15, Vitaly Kuznetsov wrote:
>>> Juergen Gross <jgross@suse.com> writes:
>>>
>>>> Today the maximum number of vcpus of a kvm guest is set via a #define
>>>> in a header file.
>>>>
>>>> In order to support higher vcpu numbers for guests without generally
>>>> increasing the memory consumption of guests on the host, especially on
>>>> very large systems, add a boot parameter for specifying the number of
>>>> allowed vcpus for guests.
>>>>
>>>> The default will still be the current setting of 288. The special
>>>> value 0 limits the number of possible vcpus to the number of possible
>>>> cpus of the host.
>>>>
>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>> ---
>>>>    Documentation/admin-guide/kernel-parameters.txt | 10 ++++++++++
>>>>    arch/x86/include/asm/kvm_host.h                 |  5 ++++-
>>>>    arch/x86/kvm/x86.c                              |  7 +++++++
>>>>    3 files changed, 21 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
>>>> index 99bfa53a2bbd..8eb856396ffa 100644
>>>> --- a/Documentation/admin-guide/kernel-parameters.txt
>>>> +++ b/Documentation/admin-guide/kernel-parameters.txt
>>>> @@ -2373,6 +2373,16 @@
>>>>    			guest can't have more vcpus than the set value + 1.
>>>>    			Default: 1023
>>>>    
>>>> +	kvm.max_vcpus=	[KVM,X86] Set the maximum allowed number of vcpus per
>>>> +			guest. The special value 0 sets the limit to the number
>>>> +			of physical cpus possible on the host (including not
>>>> +			yet hotplugged cpus). Higher values will result in
>>>> +			slightly higher memory consumption per guest. Depending
>>>> +			on the value and the virtual topology the maximum
>>>> +			allowed vcpu-id might need to be raised, too (see
>>>> +			kvm.max_vcpu_id parameter).
>>>
>>> I'd suggest at least adding a sanity check: 'max_vcpu_id' should always
>>> be >= 'max_vcpus'. Alternatively, we can replace 'max_vcpu_id' with,
>>> say, 'vcpu_id_to_vcpus_ratio' and set it to e.g. '4' by default.
>>
>> Either would be fine with me.
>>
>> A default of '2' for the ratio would seem more appropriate to me,
>> however. A thread count per core not being a power of 2 is quite
>> unlikely, and the worst case scenario for cores per socket would be
>> 2^n + 1.
>>
> 
> (I vaguely recall that AMD EPYC has more than the thread id (package
> id?) encapsulated in the APIC id.)

Ah, yes, that rings a bell.

> Personally, I'd vote for introducing a 'ratio' parameter then so
> generally users will only have to set 'kvm.max_vcpus'.

Okay.

Default '4' then? Or '2 ^ (topology_levels - 2)' (assuming a
topology_level of 3 on Intel: thread/core/socket and 4 on EPYC:
thread/core/package/socket).

> 
>>>
>>>> +			Default: 288
>>>> +
>>>>    	l1tf=           [X86] Control mitigation of the L1TF vulnerability on
>>>>    			      affected CPUs
>>>>    
>>>> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
>>>> index 39cbc4b6bffb..65ae82a5d444 100644
>>>> --- a/arch/x86/include/asm/kvm_host.h
>>>> +++ b/arch/x86/include/asm/kvm_host.h
>>>> @@ -37,7 +37,8 @@
>>>>    
>>>>    #define __KVM_HAVE_ARCH_VCPU_DEBUGFS
>>>>    
>>>> -#define KVM_MAX_VCPUS 288
>>>> +#define KVM_DEFAULT_MAX_VCPUS 288
>>>> +#define KVM_MAX_VCPUS max_vcpus
>>>>    #define KVM_SOFT_MAX_VCPUS 240
>>>>    #define KVM_DEFAULT_MAX_VCPU_ID 1023
>>>>    #define KVM_MAX_VCPU_ID max_vcpu_id
>>>> @@ -1509,6 +1510,8 @@ extern u64  kvm_max_tsc_scaling_ratio;
>>>>    extern u64  kvm_default_tsc_scaling_ratio;
>>>>    /* bus lock detection supported? */
>>>>    extern bool kvm_has_bus_lock_exit;
>>>> +/* maximum number of vcpus per guest */
>>>> +extern unsigned int max_vcpus;
>>>>    /* maximum vcpu-id */
>>>>    extern unsigned int max_vcpu_id;
>>>>    /* per cpu vcpu bitmasks (disable preemption during usage) */
>>>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>>>> index a9b0bb2221ea..888c4507504d 100644
>>>> --- a/arch/x86/kvm/x86.c
>>>> +++ b/arch/x86/kvm/x86.c
>>>> @@ -177,6 +177,10 @@ module_param(force_emulation_prefix, bool, S_IRUGO);
>>>>    int __read_mostly pi_inject_timer = -1;
>>>>    module_param(pi_inject_timer, bint, S_IRUGO | S_IWUSR);
>>>>    
>>>> +unsigned int __read_mostly max_vcpus = KVM_DEFAULT_MAX_VCPUS;
>>>> +module_param(max_vcpus, uint, S_IRUGO);
>>>> +EXPORT_SYMBOL_GPL(max_vcpus);
>>>> +
>>>>    unsigned int __read_mostly max_vcpu_id = KVM_DEFAULT_MAX_VCPU_ID;
>>>>    module_param(max_vcpu_id, uint, S_IRUGO);
>>>>    
>>>> @@ -10648,6 +10652,9 @@ int kvm_arch_hardware_setup(void *opaque)
>>>>    	if (boot_cpu_has(X86_FEATURE_XSAVES))
>>>>    		rdmsrl(MSR_IA32_XSS, host_xss);
>>>>    
>>>> +	if (max_vcpus == 0)
>>>> +		max_vcpus = num_possible_cpus();
>>>
>>> Is this special case really needed? I mean 'max_vcpus' is not '0' by
>>> default, so whoever sets it manually probably knows how big his guests
>>> are going to be anyway, and it is not always obvious how many CPUs are
>>> reported by 'num_possible_cpus()' (ACPI tables can be weird, for example).
>>
>> The idea was to make it easy for anyone managing a large fleet of hosts
>> and wanting to have a common setting for all of them.
>>
> 
> I see. It seems to be uncommon indeed to run guests with more vCPUs than
> host pCPUs so everything >= num_online_cpus() should be OK. My only
> concern about num_possible_cpus() is that it is going to be hard to
> explain what 'possible CPUs' means (but whoever cares that much about
> wasting memory can always set the required value manually).

Right.


Juergen


* Re: [PATCH 6/6] x86/kvm: add boot parameter for setting max number of vcpus per guest
  2021-07-14 13:04         ` Juergen Gross
@ 2021-07-14 13:21           ` Vitaly Kuznetsov
  2021-09-03  7:01             ` Juergen Gross
  0 siblings, 1 reply; 11+ messages in thread
From: Vitaly Kuznetsov @ 2021-07-14 13:21 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Jonathan Corbet, Paolo Bonzini, Sean Christopherson, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, H. Peter Anvin, linux-kernel, x86, linux-doc,
	kvm

Juergen Gross <jgross@suse.com> writes:

> On 14.07.21 13:45, Vitaly Kuznetsov wrote:
>
>> Personally, I'd vote for introducing a 'ratio' parameter then so
>> generally users will only have to set 'kvm.max_vcpus'.
>
> Okay.
>
> Default '4' then? Or '2 ^ (topology_levels - 2)' (assuming a
> topology_level of 3 on Intel: thread/core/socket and 4 on EPYC:
> thread/core/package/socket).

I'd suggest we default to '4' for both Intel and AMD as we haven't given
up completely on cross-vendor VMs (running AMD VMs on Intel CPUs and
vice versa). It would be great to leave a comment explaining where the
number comes from, of course.

-- 
Vitaly



* Re: [PATCH 0/6] x86/kvm: add boot parameters for max vcpu configs
  2021-07-01 15:40 [PATCH 0/6] x86/kvm: add boot parameters for max vcpu configs Juergen Gross
  2021-07-01 15:41 ` [PATCH 3/6] x86/kvm: add boot parameter for maximum vcpu-id Juergen Gross
  2021-07-01 15:41 ` [PATCH 6/6] x86/kvm: add boot parameter for setting max number of vcpus per guest Juergen Gross
@ 2021-07-26 13:41 ` Paolo Bonzini
  2 siblings, 0 replies; 11+ messages in thread
From: Paolo Bonzini @ 2021-07-26 13:41 UTC (permalink / raw)
  To: Juergen Gross, linux-kernel, x86, kvm, linux-doc,
	linux-arm-kernel
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, Jonathan Corbet, Marc Zyngier, James Morse,
	Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Will Deacon,
	kvmarm

On 01/07/21 17:40, Juergen Gross wrote:
> In order to support even huge numbers of vcpus per guest with a single
> kernel, some arrays should be sized dynamically.
> 
> The easiest way to do that is to add boot parameters for the maximum
> number of vcpus and the highest supported vcpu-id, overriding the
> normal defaults.
> 
> This patch series does that for x86. The same scheme can easily be
> adapted to other architectures, but I don't want to do that in the
> first iteration.
> 
> In the long term I'd suggest a per-guest setting of the two parameters,
> allowing some memory to be spared for smaller guests. OTOH this would
> require new ioctl()s and corresponding qemu modifications, so I have
> left those out for now.
> 
> I've verified on x86 that the series doesn't break normal guest
> operation and that the new parameters are effective. For Arm64 I did a
> compile test only.
> 
> Juergen Gross (6):
>    x86/kvm: fix vcpu-id indexed array sizes
>    x86/kvm: remove non-x86 stuff from arch/x86/kvm/ioapic.h
>    x86/kvm: add boot parameter for maximum vcpu-id
>    x86/kvm: introduce per cpu vcpu masks
>    kvm: allocate vcpu pointer array separately
>    x86/kvm: add boot parameter for setting max number of vcpus per guest
> 
>   .../admin-guide/kernel-parameters.txt         | 18 +++++++
>   arch/arm64/kvm/arm.c                          | 28 +++++++++--
>   arch/x86/include/asm/kvm_host.h               | 22 ++++++---
>   arch/x86/kvm/hyperv.c                         | 25 +++++++---
>   arch/x86/kvm/ioapic.c                         | 14 +++++-
>   arch/x86/kvm/ioapic.h                         |  8 +--
>   arch/x86/kvm/irq_comm.c                       |  9 +++-
>   arch/x86/kvm/x86.c                            | 49 ++++++++++++++++++-
>   include/linux/kvm_host.h                      | 17 ++++++-
>   9 files changed, 160 insertions(+), 30 deletions(-)
> 

Queued patches 1-2, thanks (1 for stable too).

Paolo



* Re: [PATCH 6/6] x86/kvm: add boot parameter for setting max number of vcpus per guest
  2021-07-14 13:21           ` Vitaly Kuznetsov
@ 2021-09-03  7:01             ` Juergen Gross
  2021-09-03  7:40               ` Vitaly Kuznetsov
  0 siblings, 1 reply; 11+ messages in thread
From: Juergen Gross @ 2021-09-03  7:01 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: Jonathan Corbet, Paolo Bonzini, Sean Christopherson, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, H. Peter Anvin, linux-kernel, x86, linux-doc,
	kvm


On 14.07.21 15:21, Vitaly Kuznetsov wrote:
> Juergen Gross <jgross@suse.com> writes:
> 
>> On 14.07.21 13:45, Vitaly Kuznetsov wrote:
>>
>>> Personally, I'd vote for introducing a 'ratio' parameter then so
>>> generally users will only have to set 'kvm.max_vcpus'.
>>
>> Okay.
>>
>> Default '4' then? Or '2 ^ (topology_levels - 2)' (assuming a
>> topology_level of 3 on Intel: thread/core/socket and 4 on EPYC:
>> thread/core/package/socket).
> 
> I'd suggest we default to '4' for both Intel and AMD as we haven't given
> up completely on cross-vendor VMs (running AMD VMs on Intel CPUs and
> vice versa). It would be great to leave a comment explaining where the
> number comes from, of course.
> 

Thinking more about it, I believe it would be better to make the
parameter something like "additional vcpu-id bits" with a default of
topology_levels - 2 (cross-vendor VMs are so special that I think
needing to specify another value explicitly in this case is acceptable).

Reasons are:

- the ability to specify ratio values that are not a power of 2 is weird
- just specifying the additional number of bits would lead to compatible
   behavior (e.g. a max vcpu-id of 1023 with max_vcpus being 288 and the
   default value of 1)
- the max vcpu-id should (normally) be 2^n - 1
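
A rough sketch of that derivation (function and parameter names invented
here; roundup_pow_of_two() is the kernel helper from <linux/log2.h>):

	/* e.g. add_bits = 1: max_vcpus = 288 rounds up to 512, giving a
	 * max vcpu-id of (512 << 1) - 1 = 1023, today's default. */
	static unsigned int kvm_max_vcpu_id(unsigned int max_vcpus,
					    unsigned int add_bits)
	{
		return (roundup_pow_of_two(max_vcpus) << add_bits) - 1;
	}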


Juergen


* Re: [PATCH 6/6] x86/kvm: add boot parameter for setting max number of vcpus per guest
  2021-09-03  7:01             ` Juergen Gross
@ 2021-09-03  7:40               ` Vitaly Kuznetsov
  0 siblings, 0 replies; 11+ messages in thread
From: Vitaly Kuznetsov @ 2021-09-03  7:40 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Jonathan Corbet, Paolo Bonzini, Sean Christopherson, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, H. Peter Anvin, linux-kernel, x86, linux-doc,
	kvm, Eduardo Habkost

Juergen Gross <jgross@suse.com> writes:

> On 14.07.21 15:21, Vitaly Kuznetsov wrote:
>> Juergen Gross <jgross@suse.com> writes:
>> 
>>> On 14.07.21 13:45, Vitaly Kuznetsov wrote:
>>>
>>>> Personally, I'd vote for introducing a 'ratio' parameter then so
>>>> generally users will only have to set 'kvm.max_vcpus'.
>>>
>>> Okay.
>>>
>>> Default '4' then? Or '2 ^ (topology_levels - 2)' (assuming a
>>> topology_level of 3 on Intel: thread/core/socket and 4 on EPYC:
>>> thread/core/package/socket).
>> 
>> I'd suggest we default to '4' for both Intel and AMD as we haven't given
>> up completely on cross-vendor VMs (running AMD VMs on Intel CPUs and
>> vice versa). It would be great to leave a comment explaining where the
>> number comes from, of course.
>> 
>
> Thinking more about it, I believe it would be better to make the
> parameter something like "additional vcpu-id bits" with a default of
> topology_levels - 2 (cross-vendor VMs are so special that I think
> needing to specify another value explicitly in this case is acceptable).
>
> Reasons are:
>
> - the ability to specify ratio values that are not a power of 2 is weird
> - just specifying the additional number of bits would lead to compatible
>    behavior (e.g. a max vcpu-id of 1023 with max_vcpus being 288 and the
>    default value of 1)
> - the max vcpu-id should (normally) be 2^n - 1

Sounds good to me! 

Also, there's ongoing work to raise the default KVM_MAX_VCPUS number
by Eduardo (Cc):

https://lore.kernel.org/kvm/20210831204535.1594297-1-ehabkost@redhat.com/

It would be great if you could unify your efforts)

-- 
Vitaly


