From: "Alex Bennée" <alex.bennee@linaro.org>
To: Salil Mehta <salil.mehta@huawei.com>
Cc: qemu-devel@nongnu.org, qemu-arm@nongnu.org, maz@kernel.org,
jean-philippe@linaro.org, jonathan.cameron@huawei.com,
lpieralisi@kernel.org, peter.maydell@linaro.org,
richard.henderson@linaro.org, imammedo@redhat.com,
andrew.jones@linux.dev, david@redhat.com, philmd@linaro.org,
eric.auger@redhat.com, oliver.upton@linux.dev,
pbonzini@redhat.com, mst@redhat.com, will@kernel.org,
gshan@redhat.com, rafael@kernel.org, linux@armlinux.org.uk,
darren@os.amperecomputing.com, ilkka@os.amperecomputing.com,
vishnu@os.amperecomputing.com, karl.heubaum@oracle.com,
miguel.luis@oracle.com, salil.mehta@opnsrc.net,
zhukeqian1@huawei.com, wangxiongfeng2@huawei.com,
wangyanan55@huawei.com, jiakernel2@gmail.com,
maobibo@loongson.cn, lixianglai@loongson.cn, linuxarm@huawei.com
Subject: Re: [PATCH 1/9] accel/kvm: Extract common KVM vCPU {creation,parking} code
Date: Fri, 29 Sep 2023 14:32:11 +0100 [thread overview]
Message-ID: <877co9oz6m.fsf@linaro.org> (raw)
In-Reply-To: <20230929124304.13672-2-salil.mehta@huawei.com>
Salil Mehta <salil.mehta@huawei.com> writes:
> KVM vCPU creation is done once during the initialization of the VM when Qemu
> threads are spawned. This is common to all the architectures.
>
> Hot-unplug of vCPU results in destruction of the vCPU objects in QOM but
> the KVM vCPU objects in the Host KVM are not destroyed and their representative
> KVM vCPU objects/context in Qemu are parked.
>
> Refactor common logic so that some APIs could be reused by vCPU Hotplug code.
>
> Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
> ---
> accel/kvm/kvm-all.c | 61 ++++++++++++++++++++++++++++++++++----------
> include/sysemu/kvm.h | 2 ++
> 2 files changed, 49 insertions(+), 14 deletions(-)
>
> diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
> index ff1578bb32..57889c5f6c 100644
> --- a/accel/kvm/kvm-all.c
> +++ b/accel/kvm/kvm-all.c
> @@ -137,6 +137,7 @@ static QemuMutex kml_slots_lock;
> #define kvm_slots_unlock() qemu_mutex_unlock(&kml_slots_lock)
>
> static void kvm_slot_init_dirty_bitmap(KVMSlot *mem);
> +static int kvm_get_vcpu(KVMState *s, unsigned long vcpu_id);
It's weird to have a forward static declaration here if you are
declaring it in the header later on.
>
> static inline void kvm_resample_fd_remove(int gsi)
> {
> @@ -320,11 +321,51 @@ err:
> return ret;
> }
>
> +void kvm_park_vcpu(CPUState *cpu)
> +{
> + unsigned long vcpu_id = cpu->cpu_index;
cpu_index is a plain int in CPUState:

    int cpu_index;

but vcpu_id is defined as an unsigned long in KVMParkedVcpu:

    unsigned long vcpu_id;

I'm not sure if this is just a historical anomaly but I suspect we
should fix up the discrepancy first so all users of cpu_index use the
same size.
> + struct KVMParkedVcpu *vcpu;
> +
> + vcpu = g_malloc0(sizeof(*vcpu));
> + vcpu->vcpu_id = vcpu_id;
> + vcpu->kvm_fd = cpu->kvm_fd;
> + QLIST_INSERT_HEAD(&kvm_state->kvm_parked_vcpus, vcpu, node);
> +}
> +
> +int kvm_create_vcpu(CPUState *cpu)
> +{
I don't think you get anything other than -1 on failure so at this point
you might as well return a bool.
> + unsigned long vcpu_id = cpu->cpu_index;
Is this used?
> + KVMState *s = kvm_state;
> + int ret;
> +
> + DPRINTF("kvm_create_vcpu\n");
Please don't add new DPRINTFs - use tracepoints instead. Whether it's
worth cleaning up kvm-all first I leave up to you.
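i.e. add an entry to accel/kvm/trace-events and call the generated
helper instead of DPRINTF. Something along these lines (the exact
format string is up to you):

```c
/*
 * In accel/kvm/trace-events:
 *
 *   kvm_create_vcpu(unsigned long vcpu_id) "vcpu id: %lu"
 *
 * then in kvm_create_vcpu(), replacing the DPRINTF:
 */
trace_kvm_create_vcpu(vcpu_id);
```

That way the event can be enabled at runtime rather than requiring a
rebuild with debugging turned on.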
> + /* check if the KVM vCPU already exist but is parked */
> + ret = kvm_get_vcpu(s, kvm_arch_vcpu_id(cpu));
> + if (ret > 0) {
> + goto found;
> + }
> +
> + /* create a new KVM vCPU */
> + ret = kvm_vm_ioctl(s, KVM_CREATE_VCPU, (void *)vcpu_id);
> + if (ret < 0) {
> + return ret;
> + }
> +
> +found:
> + cpu->vcpu_dirty = true;
> + cpu->kvm_fd = ret;
> + cpu->kvm_state = s;
> + cpu->dirty_pages = 0;
> + cpu->throttle_us_per_full = 0;
> +
> + return 0;
> +}
> +
This is trivially nestable to avoid gotos:
bool kvm_create_vcpu(CPUState *cpu)
{
    unsigned long vcpu_id = cpu->cpu_index;
    KVMState *s = kvm_state;
    int ret;

    /* check if the KVM vCPU already exist but is parked */
    ret = kvm_get_vcpu(s, kvm_arch_vcpu_id(cpu));
    if (ret < 0) {
        /* not found, try to create a new KVM vCPU */
        ret = kvm_vm_ioctl(s, KVM_CREATE_VCPU, (void *)vcpu_id);
        if (ret < 0) {
            return false;
        }
    }

    cpu->vcpu_dirty = true;
    cpu->kvm_fd = ret;
    cpu->kvm_state = s;
    cpu->dirty_pages = 0;
    cpu->throttle_us_per_full = 0;

    return true;
}
> static int do_kvm_destroy_vcpu(CPUState *cpu)
> {
> KVMState *s = kvm_state;
> long mmap_size;
> - struct KVMParkedVcpu *vcpu = NULL;
> int ret = 0;
>
> DPRINTF("kvm_destroy_vcpu\n");
> @@ -353,10 +394,7 @@ static int do_kvm_destroy_vcpu(CPUState *cpu)
> }
> }
>
> - vcpu = g_malloc0(sizeof(*vcpu));
> - vcpu->vcpu_id = kvm_arch_vcpu_id(cpu);
> - vcpu->kvm_fd = cpu->kvm_fd;
> - QLIST_INSERT_HEAD(&kvm_state->kvm_parked_vcpus, vcpu, node);
> + kvm_park_vcpu(cpu);
> err:
> return ret;
> }
> @@ -384,7 +422,7 @@ static int kvm_get_vcpu(KVMState *s, unsigned long vcpu_id)
> }
> }
>
> - return kvm_vm_ioctl(s, KVM_CREATE_VCPU, (void *)vcpu_id);
> + return -1;
> }
>
> int kvm_init_vcpu(CPUState *cpu, Error **errp)
> @@ -395,19 +433,14 @@ int kvm_init_vcpu(CPUState *cpu, Error **errp)
Hmm, it looks like no one cares about the return value here, and the
fact that callers use &error_fatal should be enough to exit. You can
then just return early and avoid the error ladder.
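i.e. something along these lines (a sketch, assuming the int return
type is dropped and kvm_create_vcpu becomes the bool suggested above):

```c
void kvm_init_vcpu(CPUState *cpu, Error **errp)
{
    KVMState *s = kvm_state;
    long mmap_size;

    trace_kvm_init_vcpu(cpu->cpu_index, kvm_arch_vcpu_id(cpu));

    if (!kvm_create_vcpu(cpu)) {
        error_setg(errp, "kvm_init_vcpu: kvm_create_vcpu failed (%lu)",
                   kvm_arch_vcpu_id(cpu));
        return;
    }

    /* ... and so on for the rest, each failure just error_setg() and
     * return instead of jumping to a shared err: label. */
}
```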
>
> trace_kvm_init_vcpu(cpu->cpu_index, kvm_arch_vcpu_id(cpu));
>
> - ret = kvm_get_vcpu(s, kvm_arch_vcpu_id(cpu));
> + ret = kvm_create_vcpu(cpu);
> if (ret < 0) {
> - error_setg_errno(errp, -ret, "kvm_init_vcpu: kvm_get_vcpu failed (%lu)",
> + error_setg_errno(errp, -ret,
> + "kvm_init_vcpu: kvm_create_vcpu failed (%lu)",
> kvm_arch_vcpu_id(cpu));
> goto err;
> }
>
> - cpu->kvm_fd = ret;
> - cpu->kvm_state = s;
> - cpu->vcpu_dirty = true;
> - cpu->dirty_pages = 0;
> - cpu->throttle_us_per_full = 0;
> -
> mmap_size = kvm_ioctl(s, KVM_GET_VCPU_MMAP_SIZE, 0);
> if (mmap_size < 0) {
> ret = mmap_size;
> diff --git a/include/sysemu/kvm.h b/include/sysemu/kvm.h
> index ee9025f8e9..17919567a8 100644
> --- a/include/sysemu/kvm.h
> +++ b/include/sysemu/kvm.h
> @@ -464,6 +464,8 @@ void kvm_set_sigmask_len(KVMState *s, unsigned int sigmask_len);
>
> int kvm_physical_memory_addr_from_host(KVMState *s, void *ram_addr,
> hwaddr *phys_addr);
> +int kvm_create_vcpu(CPUState *cpu);
> +void kvm_park_vcpu(CPUState *cpu);
Please add kdoc comments for the public API functions describing their
usage and parameters.
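e.g. something along the lines of (wording obviously up to you):

```c
/**
 * kvm_create_vcpu:
 * @cpu: QOM CPUState object for which a KVM vCPU should be created.
 *
 * Creates a KVM vCPU for @cpu, or re-uses a previously parked vCPU
 * context if one exists for its id, and fills in the KVM-related
 * fields of @cpu.
 *
 * Returns: 0 on success, negative errno on failure.
 */
int kvm_create_vcpu(CPUState *cpu);

/**
 * kvm_park_vcpu:
 * @cpu: QOM CPUState object whose KVM vCPU context should be parked.
 *
 * Adds the KVM fd of @cpu to the list of parked vCPUs so it can be
 * re-used on a later hotplug rather than destroyed.
 */
void kvm_park_vcpu(CPUState *cpu);
```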
>
> #endif /* NEED_CPU_H */
--
Alex Bennée
Virtualisation Tech Lead @ Linaro