From: Sean Christopherson <seanjc@google.com>
To: "Wang, Wei W" <wei.w.wang@intel.com>
Cc: "pbonzini@redhat.com" <pbonzini@redhat.com>,
"dmatlack@google.com" <dmatlack@google.com>,
"vipinsh@google.com" <vipinsh@google.com>,
"ajones@ventanamicro.com" <ajones@ventanamicro.com>,
"eric.auger@redhat.com" <eric.auger@redhat.com>,
"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v1 01/18] KVM: selftests/kvm_util: use array of pointers to maintain vcpus in kvm_vm
Date: Thu, 27 Oct 2022 15:27:21 +0000
Message-ID: <Y1qjWZ8Lpveyky9m@google.com>
In-Reply-To: <DS0PR11MB6373E471C1378CBEEBD40A82DC339@DS0PR11MB6373.namprd11.prod.outlook.com>
On Thu, Oct 27, 2022, Wang, Wei W wrote:
> On Thursday, October 27, 2022 7:48 AM, Sean Christopherson wrote:
> > > +	for (i = 0, vcpu = vm->vcpus[0]; \
> > > +	     vcpu && i < KVM_MAX_VCPUS; vcpu = vm->vcpus[++i])
> >
> > I hate pointer arithmetic more than most people, but in this case it avoids the
> > need to pass in 'i', which in turn cuts down on boilerplate and churn.
>
> Hmm, indeed, this can be improved, how about this one:
>
> +#define vm_iterate_over_vcpus(vm, vcpu) \
> +	for (vcpu = vm->vcpus[0]; vcpu; vcpu = vm->vcpus[vcpu->id + 1])
Needs to be bounded by the size of the array.
> > > #endif /* SELFTEST_KVM_UTIL_H */
> > > diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h
> > > b/tools/testing/selftests/kvm/include/kvm_util_base.h
> > > index e42a09cd24a0..c90a9609b853 100644
> > > --- a/tools/testing/selftests/kvm/include/kvm_util_base.h
> > > +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
> > > @@ -45,7 +45,6 @@ struct userspace_mem_region { };
> > >
> > > struct kvm_vcpu {
> > > - struct list_head list;
> > > uint32_t id;
> > > int fd;
> > > struct kvm_vm *vm;
> > > @@ -75,7 +74,6 @@ struct kvm_vm {
> > > unsigned int pa_bits;
> > > unsigned int va_bits;
> > > uint64_t max_gfn;
> > > - struct list_head vcpus;
> > > struct userspace_mem_regions regions;
> > > struct sparsebit *vpages_valid;
> > > struct sparsebit *vpages_mapped;
> > > @@ -92,6 +90,7 @@ struct kvm_vm {
> > > int stats_fd;
> > > struct kvm_stats_header stats_header;
> > > struct kvm_stats_desc *stats_desc;
> > > + struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
> >
> > We can dynamically allocate the array without too much trouble, though I'm not
> > sure it's worth shaving the few KiB of memory. For __vm_create(), the number
> > of vCPUs is known when the VM is created. For vm_create_barebones(), we
> > could do the simple thing of allocating KVM_MAX_VCPUS.
>
> The issue with dynamic allocation is that some users start with
> __vm_create(nr_vcpus), and later could add more cpus with vm_vcpu_add (e.g.
> x86_64/xapic_ipi_test.c). To support this we may need to re-allocate the
> array for later vm_vcpu_add(), and also need to add nr_vcpus to indicate the
> size.
Hrm, right, the number of runnable CPUs isn't a hard upper bound. Ideally it
would be, as the number of pages required for guest memory will fail to account
for the "extra" vcpus. E.g. that test should really do vm_create(2) and then
manually add each vCPU.
> It's userspace memory, and not a problem to use a bit larger virtual memory
> (memory is not really allocated until we have that many vcpus to touch the
> array entries), I think.
Yeah, just allocate the max for now, though the array still needs to be dynamically
allocated based on the actual maximum number of vCPUs. Oh, duh, we can do the
easy thing and just bump KVM_MAX_VCPUS to 1024 to match KVM. And then assert that
kvm_check_cap(KVM_CAP_MAX_VCPUS) == KVM_MAX_VCPUS in kvm_create_max_vcpus.c.