From: "Nicholas Piggin" <npiggin@gmail.com>
To: "Harsh Prateek Bora" <harshpb@linux.ibm.com>,
"Salil Mehta" <salil.mehta@huawei.com>,
"pbonzini@redhat.com" <pbonzini@redhat.com>,
"Peter Maydell" <peter.maydell@linaro.org>,
"Michael S. Tsirkin" <mst@redhat.com>,
"Igor Mammedov" <imammedo@redhat.com>
Cc: "Vaibhav Jain" <vaibhav@linux.ibm.com>,
"sbhat@linux.ibm.com" <sbhat@linux.ibm.com>,
"salil.mehta@opnsrc.net" <salil.mehta@opnsrc.net>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
"qemu-ppc@nongnu.org" <qemu-ppc@nongnu.org>
Subject: Re: [PATCH V13 1/8] accel/kvm: Extract common KVM vCPU {creation,parking} code
Date: Thu, 04 Jul 2024 17:35:09 +1000 [thread overview]
Message-ID: <D2GKXYCPW5KX.4P5GHFKH4YS8@gmail.com> (raw)
In-Reply-To: <4512d9d5-a2ee-4d83-9c44-510241d72225@linux.ibm.com>
Looks like there has been a bit of noise around this recently. Do we
think the hotplug patches can get over the line this time?
If not, perhaps we can work with Salil to get this patch 1 upstream
at least.
Thanks,
Nick
On Tue Jun 25, 2024 at 3:08 PM AEST, Harsh Prateek Bora wrote:
> +qemu-devel, qemu-ppc
>
> Ping!
>
> On 6/17/24 15:18, Harsh Prateek Bora wrote:
> >
> > + MST, Igor - to help with early review/merge. TIA.
> >
> > On 6/14/24 16:06, Salil Mehta wrote:
> >> Hello
> >>
> >>> From: Harsh Prateek Bora <harshpb@linux.ibm.com>
> >>> Sent: Friday, June 14, 2024 6:24 AM
> >>> Hi Paolo, Nick,
> >>> Can this patch 1/8 be merged earlier, provided we have got sufficient
> >>> R-bs for it, since review of the entire series may take longer?
> >>> We have some ppc64 patches based on it, hence the ask.
> >>> Hi Salil,
> >>> I am hoping we are not expecting any more changes to this patch; please
> >>> confirm.
> >>
> >>
> >> I do not expect any change. I had requested Michael to merge the complete
> >> series as it is stranding other users. He then requested Igor to take a
> >> final look, but he has not responded yet. I'll remind Michael again. BTW,
> >> can you reply to the patch below, explicitly indicating your interest in
> >> the series, so that MST knows who else the stakeholders are here:
> >>
> >> https://lore.kernel.org/qemu-devel/20240605160327.3c71f4ab@imammedo.users.ipa.redhat.com/
> >>
> >>
> >> Hi Paolo,
> >>
> >> A request: would it be possible to skim through this series from the
> >> KVM perspective? (Although nothing has changed that will affect KVM,
> >> and this is an architecture-agnostic patch set.)
> >>
> >> Many thanks!
> >>
> >> Best
> >> Salil.
> >>
> >>
> >>> regards,
> >>> Harsh
> >>> On 6/7/24 17:26, Salil Mehta wrote:
> >>> > KVM vCPU creation is done once, during vCPU realization, when the
> >>> > QEMU vCPU thread is spawned. This is common to all architectures as
> >>> > of now.
> >>> >
> >>> > Hot-unplug of a vCPU results in destruction of the vCPU object in
> >>> > QOM, but the corresponding KVM vCPU object in the host KVM is not
> >>> > destroyed, as KVM doesn't support vCPU removal. Therefore, its
> >>> > representative KVM vCPU object/context in QEMU is parked.
> >>> >
> >>> > Refactor the architecture-common logic so that some APIs can be
> >>> > reused by the vCPU hotplug code of architectures like ARM, Loongson
> >>> > etc. Update new/old APIs with trace events. No functional change is
> >>> > intended here.
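
As a quick illustration of the parking model described above, here is a rough
sketch of how an architecture's hotplug path could use these helpers once they
are exported. It is not part of this patch, and the arch_* function names below
are made up for the example.

/*
 * Illustrative sketch only (not from this series): arch_vcpu_unrealize()
 * and arch_vcpu_realize() are hypothetical placeholders for an
 * architecture's hot-(un)plug handlers.
 */
static void arch_vcpu_unrealize(CPUState *cpu)
{
    /*
     * The QOM vCPU object is being destroyed, but KVM cannot destroy the
     * underlying vCPU fd, so park it for later reuse.
     */
    kvm_park_vcpu(cpu);
}

static int arch_vcpu_realize(CPUState *cpu)
{
    /*
     * Reuses a parked fd for this vCPU id if one exists, otherwise
     * creates a fresh KVM vCPU via KVM_CREATE_VCPU.
     */
    return kvm_create_vcpu(cpu);
}
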
> >>> >
> >>> > Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
> >>> > Reviewed-by: Gavin Shan <gshan@redhat.com>
> >>> > Tested-by: Vishnu Pajjuri <vishnu@os.amperecomputing.com>
> >>> > Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> >>> > Tested-by: Xianglai Li <lixianglai@loongson.cn>
> >>> > Tested-by: Miguel Luis <miguel.luis@oracle.com>
> >>> > Reviewed-by: Shaoqin Huang <shahuang@redhat.com>
> >>> > Reviewed-by: Vishnu Pajjuri <vishnu@os.amperecomputing.com>
> >>> > Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
> >>> > Tested-by: Zhao Liu <zhao1.liu@intel.com>
> >>> > Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
> >>> > Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
> >>> > ---
> >>> > accel/kvm/kvm-all.c | 95 ++++++++++++++++++++++++++++--------------
> >>> > accel/kvm/kvm-cpus.h | 1 -
> >>> > accel/kvm/trace-events | 5 ++-
> >>> > include/sysemu/kvm.h | 25 +++++++++++
> >>> > 4 files changed, 92 insertions(+), 34 deletions(-)
> >>> >
> >>> > diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
> >>> > index c0be9f5eed..8f9128bb92 100644
> >>> > --- a/accel/kvm/kvm-all.c
> >>> > +++ b/accel/kvm/kvm-all.c
> >>> > @@ -340,14 +340,71 @@ err:
> >>> > return ret;
> >>> > }
> >>> >
> >>> > +void kvm_park_vcpu(CPUState *cpu)
> >>> > +{
> >>> > + struct KVMParkedVcpu *vcpu;
> >>> > +
> >>> > + trace_kvm_park_vcpu(cpu->cpu_index, kvm_arch_vcpu_id(cpu));
> >>> > +
> >>> > + vcpu = g_malloc0(sizeof(*vcpu));
> >>> > + vcpu->vcpu_id = kvm_arch_vcpu_id(cpu);
> >>> > + vcpu->kvm_fd = cpu->kvm_fd;
> >>> > + QLIST_INSERT_HEAD(&kvm_state->kvm_parked_vcpus, vcpu, node);
> >>> > +}
> >>> > +
> >>> > +int kvm_unpark_vcpu(KVMState *s, unsigned long vcpu_id)
> >>> > +{
> >>> > + struct KVMParkedVcpu *cpu;
> >>> > + int kvm_fd = -ENOENT;
> >>> > +
> >>> > + QLIST_FOREACH(cpu, &s->kvm_parked_vcpus, node) {
> >>> > + if (cpu->vcpu_id == vcpu_id) {
> >>> > + QLIST_REMOVE(cpu, node);
> >>> > + kvm_fd = cpu->kvm_fd;
> >>> > + g_free(cpu);
> >>> > + }
> >>> > + }
> >>> > +
> >>> > + trace_kvm_unpark_vcpu(vcpu_id, kvm_fd > 0 ? "unparked" : "not
> >>> > + found parked");
> >>> > +
> >>> > + return kvm_fd;
> >>> > +}
> >>> > +
> >>> > +int kvm_create_vcpu(CPUState *cpu)
> >>> > +{
> >>> > + unsigned long vcpu_id = kvm_arch_vcpu_id(cpu);
> >>> > + KVMState *s = kvm_state;
> >>> > + int kvm_fd;
> >>> > +
> >>> > + /* check if the KVM vCPU already exists but is parked */
> >>> > + kvm_fd = kvm_unpark_vcpu(s, vcpu_id);
> >>> > + if (kvm_fd < 0) {
> >>> > + /* vCPU not parked: create a new KVM vCPU */
> >>> > + kvm_fd = kvm_vm_ioctl(s, KVM_CREATE_VCPU, vcpu_id);
> >>> > + if (kvm_fd < 0) {
> >>> > + error_report("KVM_CREATE_VCPU IOCTL failed for vCPU
> >>> %lu",
> >>> vcpu_id);
> >>> > + return kvm_fd;
> >>> > + }
> >>> > + }
> >>> > +
> >>> > + cpu->kvm_fd = kvm_fd;
> >>> > + cpu->kvm_state = s;
> >>> > + cpu->vcpu_dirty = true;
> >>> > + cpu->dirty_pages = 0;
> >>> > + cpu->throttle_us_per_full = 0;
> >>> > +
> >>> > + trace_kvm_create_vcpu(cpu->cpu_index, vcpu_id, kvm_fd);
> >>> > +
> >>> > + return 0;
> >>> > +}
> >>> > +
> >>> > static int do_kvm_destroy_vcpu(CPUState *cpu)
> >>> > {
> >>> > KVMState *s = kvm_state;
> >>> > long mmap_size;
> >>> > - struct KVMParkedVcpu *vcpu = NULL;
> >>> > int ret = 0;
> >>> >
> >>> > - trace_kvm_destroy_vcpu();
> >>> > + trace_kvm_destroy_vcpu(cpu->cpu_index, kvm_arch_vcpu_id(cpu));
> >>> >
> >>> > ret = kvm_arch_destroy_vcpu(cpu);
> >>> > if (ret < 0) {
> >>> > @@ -373,10 +430,7 @@ static int do_kvm_destroy_vcpu(CPUState *cpu)
> >>> > }
> >>> > }
> >>> >
> >>> > - vcpu = g_malloc0(sizeof(*vcpu));
> >>> > - vcpu->vcpu_id = kvm_arch_vcpu_id(cpu);
> >>> > - vcpu->kvm_fd = cpu->kvm_fd;
> >>> > - QLIST_INSERT_HEAD(&kvm_state->kvm_parked_vcpus, vcpu, node);
> >>> > + kvm_park_vcpu(cpu);
> >>> > err:
> >>> > return ret;
> >>> > }
> >>> > @@ -389,24 +443,6 @@ void kvm_destroy_vcpu(CPUState *cpu)
> >>> > }
> >>> > }
> >>> >
> >>> > -static int kvm_get_vcpu(KVMState *s, unsigned long vcpu_id)
> >>> > -{
> >>> > - struct KVMParkedVcpu *cpu;
> >>> > -
> >>> > - QLIST_FOREACH(cpu, &s->kvm_parked_vcpus, node) {
> >>> > - if (cpu->vcpu_id == vcpu_id) {
> >>> > - int kvm_fd;
> >>> > -
> >>> > - QLIST_REMOVE(cpu, node);
> >>> > - kvm_fd = cpu->kvm_fd;
> >>> > - g_free(cpu);
> >>> > - return kvm_fd;
> >>> > - }
> >>> > - }
> >>> > -
> >>> > - return kvm_vm_ioctl(s, KVM_CREATE_VCPU, (void *)vcpu_id);
> >>> > -}
> >>> > -
> >>> > int kvm_init_vcpu(CPUState *cpu, Error **errp)
> >>> > {
> >>> > KVMState *s = kvm_state;
> >>> > @@ -415,19 +451,14 @@ int kvm_init_vcpu(CPUState *cpu, Error **errp)
> >>> >
> >>> > trace_kvm_init_vcpu(cpu->cpu_index, kvm_arch_vcpu_id(cpu));
> >>> >
> >>> > - ret = kvm_get_vcpu(s, kvm_arch_vcpu_id(cpu));
> >>> > + ret = kvm_create_vcpu(cpu);
> >>> > if (ret < 0) {
> >>> > - error_setg_errno(errp, -ret, "kvm_init_vcpu:
> >>> kvm_get_vcpu failed
> >>> (%lu)",
> >>> > + error_setg_errno(errp, -ret,
> >>> > + "kvm_init_vcpu: kvm_create_vcpu failed
> >>> > + (%lu)",
> >>> > kvm_arch_vcpu_id(cpu));
> >>> > goto err;
> >>> > }
> >>> >
> >>> > - cpu->kvm_fd = ret;
> >>> > - cpu->kvm_state = s;
> >>> > - cpu->vcpu_dirty = true;
> >>> > - cpu->dirty_pages = 0;
> >>> > - cpu->throttle_us_per_full = 0;
> >>> > -
> >>> > mmap_size = kvm_ioctl(s, KVM_GET_VCPU_MMAP_SIZE, 0);
> >>> > if (mmap_size < 0) {
> >>> > ret = mmap_size;
> >>> > diff --git a/accel/kvm/kvm-cpus.h b/accel/kvm/kvm-cpus.h
> >>> > index ca40add32c..171b22fd29 100644
> >>> > --- a/accel/kvm/kvm-cpus.h
> >>> > +++ b/accel/kvm/kvm-cpus.h
> >>> > @@ -22,5 +22,4 @@ bool kvm_supports_guest_debug(void);
> >>> > int kvm_insert_breakpoint(CPUState *cpu, int type, vaddr addr, vaddr len);
> >>> > int kvm_remove_breakpoint(CPUState *cpu, int type, vaddr addr, vaddr len);
> >>> > void kvm_remove_all_breakpoints(CPUState *cpu);
> >>> > -
> >>> > #endif /* KVM_CPUS_H */
> >>> > diff --git a/accel/kvm/trace-events b/accel/kvm/trace-events
> >>> > index 681ccb667d..37626c1ac5 100644
> >>> > --- a/accel/kvm/trace-events
> >>> > +++ b/accel/kvm/trace-events
> >>> > @@ -9,6 +9,10 @@ kvm_device_ioctl(int fd, int type, void *arg) "dev fd %d, type 0x%x, arg %p"
> >>> > kvm_failed_reg_get(uint64_t id, const char *msg) "Warning: Unable to retrieve ONEREG %" PRIu64 " from KVM: %s"
> >>> > kvm_failed_reg_set(uint64_t id, const char *msg) "Warning: Unable to set ONEREG %" PRIu64 " to KVM: %s"
> >>> > kvm_init_vcpu(int cpu_index, unsigned long arch_cpu_id) "index: %d id: %lu"
> >>> > +kvm_create_vcpu(int cpu_index, unsigned long arch_cpu_id, int kvm_fd) "index: %d, id: %lu, kvm fd: %d"
> >>> > +kvm_destroy_vcpu(int cpu_index, unsigned long arch_cpu_id) "index: %d id: %lu"
> >>> > +kvm_park_vcpu(int cpu_index, unsigned long arch_cpu_id) "index: %d id: %lu"
> >>> > +kvm_unpark_vcpu(unsigned long arch_cpu_id, const char *msg) "id: %lu %s"
> >>> > kvm_irqchip_commit_routes(void) ""
> >>> > kvm_irqchip_add_msi_route(char *name, int vector, int virq) "dev %s vector %d virq %d"
> >>> > kvm_irqchip_update_msi_route(int virq) "Updating MSI route virq=%d"
> >>> > @@ -25,7 +29,6 @@ kvm_dirty_ring_reaper(const char *s) "%s"
> >>> > kvm_dirty_ring_reap(uint64_t count, int64_t t) "reaped
> >>> %"PRIu64" pages
> >>> (took %"PRIi64" us)"
> >>> > kvm_dirty_ring_reaper_kick(const char *reason) "%s"
> >>> > kvm_dirty_ring_flush(int finished) "%d"
> >>> > -kvm_destroy_vcpu(void) ""
> >>> > kvm_failed_get_vcpu_mmap_size(void) ""
> >>> > kvm_cpu_exec(void) ""
> >>> > kvm_interrupt_exit_request(void) ""
> >>> > diff --git a/include/sysemu/kvm.h b/include/sysemu/kvm.h
> >>> > index c31d9c7356..c4a914b3d8 100644
> >>> > --- a/include/sysemu/kvm.h
> >>> > +++ b/include/sysemu/kvm.h
> >>> > @@ -313,6 +313,31 @@ int kvm_create_device(KVMState *s, uint64_t type, bool test);
> >>> > */
> >>> > bool kvm_device_supported(int vmfd, uint64_t type);
> >>> >
> >>> > +/**
> >>> > + * kvm_create_vcpu - Gets a parked KVM vCPU or creates a KVM vCPU
> >>> > + * @cpu: QOM CPUState object for which KVM vCPU has to be fetched/created.
> >>> > + *
> >>> > + * @returns: 0 when success, errno (<0) when failed.
> >>> > + */
> >>> > +int kvm_create_vcpu(CPUState *cpu);
> >>> > +
> >>> > +/**
> >>> > + * kvm_park_vcpu - Park QEMU KVM vCPU context
> >>> > + * @cpu: QOM CPUState object for which QEMU KVM vCPU context has to be parked.
> >>> > + *
> >>> > + * @returns: none
> >>> > + */
> >>> > +void kvm_park_vcpu(CPUState *cpu);
> >>> > +
> >>> > +/**
> >>> > + * kvm_unpark_vcpu - unpark QEMU KVM vCPU context
> >>> > + * @s: KVM State
> >>> > + * @vcpu_id: Architecture vCPU ID of the parked vCPU
> >>> > + *
> >>> > + * @returns: KVM fd
> >>> > + */
> >>> > +int kvm_unpark_vcpu(KVMState *s, unsigned long vcpu_id);
> >>> > +
> >>> > /* Arch specific hooks */
> >>> >
> >>> > extern const KVMCapabilityInfo kvm_arch_required_capabilities[];
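
One note on the caller-visible contract of the new helpers, with a small
illustrative sketch (not code from the series): kvm_unpark_vcpu() hands back
the parked fd when one exists for the given vCPU id and a negative errno
(-ENOENT in the implementation above) when it does not, while kvm_create_vcpu()
hides that distinction behind a plain 0/-errno return. The helper name below is
hypothetical.

/* Illustrative only: exercising the return conventions of the new API. */
static void example_check_parked(KVMState *s, CPUState *cpu)
{
    int fd = kvm_unpark_vcpu(s, kvm_arch_vcpu_id(cpu));

    if (fd >= 0) {
        /* Reuse the fd of a previously hot-unplugged vCPU. */
        cpu->kvm_fd = fd;
        return;
    }
    /*
     * No parked context for this vCPU id (negative errno, -ENOENT above);
     * kvm_create_vcpu() would now fall back to the KVM_CREATE_VCPU ioctl.
     */
}
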
Thread overview: 42+ messages in thread
2024-06-07 11:56 [PATCH V13 0/8] Add architecture agnostic code to support vCPU Hotplug Salil Mehta via
2024-06-07 11:56 ` [PATCH V13 1/8] accel/kvm: Extract common KVM vCPU {creation, parking} code Salil Mehta via
[not found] ` <a9fe2b0f-baf2-4684-9e98-86f4df4e4ff5@linux.ibm.com>
[not found] ` <66be5cd4b44f4ccebb00fd90ad145e48@huawei.com>
[not found] ` <40196533-103c-42d8-9229-d394154affe6@linux.ibm.com>
2024-06-25 5:08 ` [PATCH V13 1/8] accel/kvm: Extract common KVM vCPU {creation,parking} code Harsh Prateek Bora
2024-07-04 7:35 ` Nicholas Piggin [this message]
2024-07-06 13:11 ` Igor Mammedov
2024-07-06 15:43 ` Salil Mehta
2024-07-08 13:32 ` Igor Mammedov
2024-07-08 23:30 ` Salil Mehta
2024-07-09 8:06 ` Igor Mammedov
2024-07-09 10:18 ` Salil Mehta via
2024-06-07 11:56 ` [PATCH V13 2/8] hw/acpi: Move CPU ctrl-dev MMIO region len macro to common header file Salil Mehta via
2024-07-06 13:20 ` Igor Mammedov
2024-07-08 5:00 ` Salil Mehta
2024-06-07 11:56 ` [PATCH V13 3/8] hw/acpi: Update ACPI GED framework to support vCPU Hotplug Salil Mehta via
2024-07-04 3:03 ` Nicholas Piggin
2024-07-04 11:43 ` Salil Mehta via
2024-07-06 13:46 ` Igor Mammedov
2024-07-08 5:12 ` Salil Mehta
2024-07-08 13:40 ` Igor Mammedov
2024-06-07 11:56 ` [PATCH V13 4/8] hw/acpi: Update GED _EVT method AML with CPU scan Salil Mehta via
2024-07-06 14:28 ` Igor Mammedov
2024-07-08 5:21 ` Salil Mehta
2024-07-08 13:46 ` Igor Mammedov
2024-07-11 3:29 ` Salil Mehta
2024-07-11 8:34 ` Igor Mammedov
2024-07-11 10:49 ` Salil Mehta
2024-06-07 11:56 ` [PATCH V13 5/8] hw/acpi: Update CPUs AML with cpu-(ctrl)dev change Salil Mehta via
2024-07-06 14:35 ` Igor Mammedov
2024-07-08 5:26 ` Salil Mehta
2024-07-08 13:49 ` Igor Mammedov
2024-06-07 11:56 ` [PATCH V13 6/8] physmem: Add helper function to destroy CPU AddressSpace Salil Mehta via
2024-07-06 14:37 ` Igor Mammedov
2024-07-08 5:26 ` Salil Mehta
2024-06-07 11:56 ` [PATCH V13 7/8] gdbstub: Add helper function to unregister GDB register space Salil Mehta via
2024-07-06 14:41 ` Igor Mammedov
2024-07-08 5:30 ` Salil Mehta
2024-06-07 11:56 ` [PATCH V13 8/8] docs/specs/acpi_hw_reduced_hotplug: Add the CPU Hotplug Event Bit Salil Mehta via
2024-07-06 14:45 ` Igor Mammedov
2024-07-08 5:32 ` Salil Mehta
2024-07-08 13:55 ` Igor Mammedov
2024-07-08 23:52 ` Salil Mehta
2024-07-03 22:58 ` [PATCH V13 0/8] Add architecture agnostic code to support vCPU Hotplug Michael S. Tsirkin