* [PATCH v2 0/4] target/ppc: vcpu hotplug failure handling fixes
@ 2024-05-16 5:32 Harsh Prateek Bora
2024-05-16 5:32 ` [PATCH v2 1/4] accel/kvm: Extract common KVM vCPU {creation, parking} code Harsh Prateek Bora
` (3 more replies)
0 siblings, 4 replies; 14+ messages in thread
From: Harsh Prateek Bora @ 2024-05-16 5:32 UTC (permalink / raw)
To: npiggin, qemu-ppc, qemu-devel; +Cc: danielhb413, vaibhav, sbhat, salil.mehta
On ppc64, the PowerVM hypervisor runs with limited memory, and vCPU
creation during hotplug may fail in the KVM_CREATE_VCPU ioctl. Since
kvm_init_vcpu is called with errp set to &error_fatal, such a failure
currently terminates the guest. This unexpected behaviour can be avoided
by pre-creating the vCPU and parking it on success, or returning an
error otherwise. This enables graceful error delivery for any vCPU
hotplug failure while the guest keeps running.
This series adds another helper to create and park a vCPU (based on the
patch by Salil below), exports cpu_get_free_index to be reused later,
and adds ppc arch-specific handling for vCPU hotplug failure.
Based on the API refactoring to create/park vCPUs introduced in patch
1/8 of the series:
https://lore.kernel.org/qemu-devel/20240312020000.12992-2-salil.mehta@huawei.com/
PS: Patch 1 of the above series is included here (with a rebase failure
fixed) only to ease review of this series.
Changelog:
v2: Addressed review comments from Nick
v1: Initial patch
Harsh Prateek Bora (3):
accel/kvm: Introduce kvm_create_and_park_vcpu() helper
cpu-common.c: export cpu_get_free_index to be reused later
target/ppc: handle vcpu hotplug failure gracefully
Salil Mehta (1):
accel/kvm: Extract common KVM vCPU {creation, parking} code
include/exec/cpu-common.h | 2 ++
include/sysemu/kvm.h | 23 ++++++++++++
accel/kvm/kvm-all.c | 76 +++++++++++++++++++++++++++++++--------
cpu-common.c | 7 ++--
target/ppc/kvm.c | 24 +++++++++++++
accel/kvm/trace-events | 5 ++-
6 files changed, 118 insertions(+), 19 deletions(-)
--
2.39.3
^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH v2 1/4] accel/kvm: Extract common KVM vCPU {creation, parking} code
2024-05-16 5:32 [PATCH v2 0/4] target/ppc: vcpu hotplug failure handling fixes Harsh Prateek Bora
@ 2024-05-16 5:32 ` Harsh Prateek Bora
2024-05-16 8:30 ` Salil Mehta via
2024-05-16 5:32 ` [PATCH v2 2/4] accel/kvm: Introduce kvm_create_and_park_vcpu() helper Harsh Prateek Bora
` (2 subsequent siblings)
3 siblings, 1 reply; 14+ messages in thread
From: Harsh Prateek Bora @ 2024-05-16 5:32 UTC (permalink / raw)
To: npiggin, qemu-ppc, qemu-devel; +Cc: danielhb413, vaibhav, sbhat, salil.mehta
From: Salil Mehta <salil.mehta@huawei.com>
KVM vCPU creation is done once, during vCPU realization, when the QEMU vCPU
thread is spawned. This is common to all architectures as of now.
Hot-unplug of a vCPU results in destruction of the vCPU object in QOM, but the
corresponding KVM vCPU object in the host KVM is not destroyed, as KVM doesn't
support vCPU removal. Therefore, its representative KVM vCPU object/context in
QEMU is parked.
Refactor the architecture-common logic so that these APIs can be reused by the
vCPU hotplug code of architectures like ARM, Loongson etc. Update the new/old
APIs with trace events instead of DPRINTF. No functional change is intended here.
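For illustration, the park/reuse lifecycle after this refactor is roughly
the following (a sketch of the code below, not part of the patch):

    /* on hot-unplug: the QOM vCPU object is destroyed, but the KVM fd
     * is kept on kvm_state->kvm_parked_vcpus */
    kvm_park_vcpu(cpu);

    /* on a later hotplug of the same vcpu_id: kvm_create_vcpu() first
     * checks the parked list via kvm_get_vcpu() and reuses the fd;
     * only if no parked vCPU exists does it issue KVM_CREATE_VCPU */
    ret = kvm_create_vcpu(cpu);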
Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Tested-by: Vishnu Pajjuri <vishnu@os.amperecomputing.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Tested-by: Xianglai Li <lixianglai@loongson.cn>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
Reviewed-by: Shaoqin Huang <shahuang@redhat.com>
[harshpb: fixed rebase failures in include/sysemu/kvm.h]
Signed-off-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
---
include/sysemu/kvm.h | 15 ++++++++++
accel/kvm/kvm-all.c | 64 ++++++++++++++++++++++++++++++++----------
accel/kvm/trace-events | 5 +++-
3 files changed, 68 insertions(+), 16 deletions(-)
diff --git a/include/sysemu/kvm.h b/include/sysemu/kvm.h
index eaf801bc93..fa3ec74442 100644
--- a/include/sysemu/kvm.h
+++ b/include/sysemu/kvm.h
@@ -434,6 +434,21 @@ void kvm_set_sigmask_len(KVMState *s, unsigned int sigmask_len);
int kvm_physical_memory_addr_from_host(KVMState *s, void *ram_addr,
hwaddr *phys_addr);
+/**
+ * kvm_create_vcpu - Gets a parked KVM vCPU or creates a KVM vCPU
+ * @cpu: QOM CPUState object for which KVM vCPU has to be fetched/created.
+ *
+ * @returns: 0 when success, errno (<0) when failed.
+ */
+int kvm_create_vcpu(CPUState *cpu);
+
+/**
+ * kvm_park_vcpu - Park QEMU KVM vCPU context
+ * @cpu: QOM CPUState object for which QEMU KVM vCPU context has to be parked.
+ *
+ * @returns: none
+ */
+void kvm_park_vcpu(CPUState *cpu);
#endif /* COMPILING_PER_TARGET */
diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index d7281b93f3..30d42847de 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -128,6 +128,7 @@ static QemuMutex kml_slots_lock;
#define kvm_slots_unlock() qemu_mutex_unlock(&kml_slots_lock)
static void kvm_slot_init_dirty_bitmap(KVMSlot *mem);
+static int kvm_get_vcpu(KVMState *s, unsigned long vcpu_id);
static inline void kvm_resample_fd_remove(int gsi)
{
@@ -340,14 +341,53 @@ err:
return ret;
}
+void kvm_park_vcpu(CPUState *cpu)
+{
+ struct KVMParkedVcpu *vcpu;
+
+ trace_kvm_park_vcpu(cpu->cpu_index, kvm_arch_vcpu_id(cpu));
+
+ vcpu = g_malloc0(sizeof(*vcpu));
+ vcpu->vcpu_id = kvm_arch_vcpu_id(cpu);
+ vcpu->kvm_fd = cpu->kvm_fd;
+ QLIST_INSERT_HEAD(&kvm_state->kvm_parked_vcpus, vcpu, node);
+}
+
+int kvm_create_vcpu(CPUState *cpu)
+{
+ unsigned long vcpu_id = kvm_arch_vcpu_id(cpu);
+ KVMState *s = kvm_state;
+ int kvm_fd;
+
+ trace_kvm_create_vcpu(cpu->cpu_index, kvm_arch_vcpu_id(cpu));
+
+ /* check if the KVM vCPU already exists but is parked */
+ kvm_fd = kvm_get_vcpu(s, vcpu_id);
+ if (kvm_fd < 0) {
+ /* vCPU not parked: create a new KVM vCPU */
+ kvm_fd = kvm_vm_ioctl(s, KVM_CREATE_VCPU, vcpu_id);
+ if (kvm_fd < 0) {
+ error_report("KVM_CREATE_VCPU IOCTL failed for vCPU %lu", vcpu_id);
+ return kvm_fd;
+ }
+ }
+
+ cpu->kvm_fd = kvm_fd;
+ cpu->kvm_state = s;
+ cpu->vcpu_dirty = true;
+ cpu->dirty_pages = 0;
+ cpu->throttle_us_per_full = 0;
+
+ return 0;
+}
+
static int do_kvm_destroy_vcpu(CPUState *cpu)
{
KVMState *s = kvm_state;
long mmap_size;
- struct KVMParkedVcpu *vcpu = NULL;
int ret = 0;
- trace_kvm_destroy_vcpu();
+ trace_kvm_destroy_vcpu(cpu->cpu_index, kvm_arch_vcpu_id(cpu));
ret = kvm_arch_destroy_vcpu(cpu);
if (ret < 0) {
@@ -373,10 +413,7 @@ static int do_kvm_destroy_vcpu(CPUState *cpu)
}
}
- vcpu = g_malloc0(sizeof(*vcpu));
- vcpu->vcpu_id = kvm_arch_vcpu_id(cpu);
- vcpu->kvm_fd = cpu->kvm_fd;
- QLIST_INSERT_HEAD(&kvm_state->kvm_parked_vcpus, vcpu, node);
+ kvm_park_vcpu(cpu);
err:
return ret;
}
@@ -397,6 +434,8 @@ static int kvm_get_vcpu(KVMState *s, unsigned long vcpu_id)
if (cpu->vcpu_id == vcpu_id) {
int kvm_fd;
+ trace_kvm_get_vcpu(vcpu_id);
+
QLIST_REMOVE(cpu, node);
kvm_fd = cpu->kvm_fd;
g_free(cpu);
@@ -404,7 +443,7 @@ static int kvm_get_vcpu(KVMState *s, unsigned long vcpu_id)
}
}
- return kvm_vm_ioctl(s, KVM_CREATE_VCPU, (void *)vcpu_id);
+ return -ENOENT;
}
int kvm_init_vcpu(CPUState *cpu, Error **errp)
@@ -415,19 +454,14 @@ int kvm_init_vcpu(CPUState *cpu, Error **errp)
trace_kvm_init_vcpu(cpu->cpu_index, kvm_arch_vcpu_id(cpu));
- ret = kvm_get_vcpu(s, kvm_arch_vcpu_id(cpu));
+ ret = kvm_create_vcpu(cpu);
if (ret < 0) {
- error_setg_errno(errp, -ret, "kvm_init_vcpu: kvm_get_vcpu failed (%lu)",
+ error_setg_errno(errp, -ret,
+ "kvm_init_vcpu: kvm_create_vcpu failed (%lu)",
kvm_arch_vcpu_id(cpu));
goto err;
}
- cpu->kvm_fd = ret;
- cpu->kvm_state = s;
- cpu->vcpu_dirty = true;
- cpu->dirty_pages = 0;
- cpu->throttle_us_per_full = 0;
-
mmap_size = kvm_ioctl(s, KVM_GET_VCPU_MMAP_SIZE, 0);
if (mmap_size < 0) {
ret = mmap_size;
diff --git a/accel/kvm/trace-events b/accel/kvm/trace-events
index 681ccb667d..75c1724e78 100644
--- a/accel/kvm/trace-events
+++ b/accel/kvm/trace-events
@@ -9,6 +9,10 @@ kvm_device_ioctl(int fd, int type, void *arg) "dev fd %d, type 0x%x, arg %p"
kvm_failed_reg_get(uint64_t id, const char *msg) "Warning: Unable to retrieve ONEREG %" PRIu64 " from KVM: %s"
kvm_failed_reg_set(uint64_t id, const char *msg) "Warning: Unable to set ONEREG %" PRIu64 " to KVM: %s"
kvm_init_vcpu(int cpu_index, unsigned long arch_cpu_id) "index: %d id: %lu"
+kvm_create_vcpu(int cpu_index, unsigned long arch_cpu_id) "index: %d id: %lu"
+kvm_get_vcpu(unsigned long arch_cpu_id) "id: %lu"
+kvm_destroy_vcpu(int cpu_index, unsigned long arch_cpu_id) "index: %d id: %lu"
+kvm_park_vcpu(int cpu_index, unsigned long arch_cpu_id) "index: %d id: %lu"
kvm_irqchip_commit_routes(void) ""
kvm_irqchip_add_msi_route(char *name, int vector, int virq) "dev %s vector %d virq %d"
kvm_irqchip_update_msi_route(int virq) "Updating MSI route virq=%d"
@@ -25,7 +29,6 @@ kvm_dirty_ring_reaper(const char *s) "%s"
kvm_dirty_ring_reap(uint64_t count, int64_t t) "reaped %"PRIu64" pages (took %"PRIi64" us)"
kvm_dirty_ring_reaper_kick(const char *reason) "%s"
kvm_dirty_ring_flush(int finished) "%d"
-kvm_destroy_vcpu(void) ""
kvm_failed_get_vcpu_mmap_size(void) ""
kvm_cpu_exec(void) ""
kvm_interrupt_exit_request(void) ""
--
2.39.3
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH v2 2/4] accel/kvm: Introduce kvm_create_and_park_vcpu() helper
2024-05-16 5:32 [PATCH v2 0/4] target/ppc: vcpu hotplug failure handling fixes Harsh Prateek Bora
2024-05-16 5:32 ` [PATCH v2 1/4] accel/kvm: Extract common KVM vCPU {creation, parking} code Harsh Prateek Bora
@ 2024-05-16 5:32 ` Harsh Prateek Bora
2024-05-16 5:32 ` [PATCH v2 3/4] cpu-common.c: export cpu_get_free_index to be reused later Harsh Prateek Bora
2024-05-16 5:32 ` [PATCH v2 4/4] target/ppc: handle vcpu hotplug failure gracefully Harsh Prateek Bora
3 siblings, 0 replies; 14+ messages in thread
From: Harsh Prateek Bora @ 2024-05-16 5:32 UTC (permalink / raw)
To: npiggin, qemu-ppc, qemu-devel; +Cc: danielhb413, vaibhav, sbhat, salil.mehta
There are distinct helpers for creating and parking a KVM vCPU.
However, a platform may need to create and immediately park a vCPU
during the early stages of vCPU init, to be reused later when the vCPU
thread gets initialized. This helps detect failures with
kvm_create_vcpu at an early stage.
Based on the API refactoring to create/park vCPUs introduced in patch
1/8 of the series:
https://lore.kernel.org/qemu-devel/20240312020000.12992-2-salil.mehta@huawei.com/
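For example, an early realize path could use the helper as sketched below
(hypothetical caller; patch 4/4 is the actual ppc user):

    int ret = kvm_create_and_park_vcpu(cs);
    if (ret) {
        error_setg(errp, "vcpu creation failed with %d", ret);
        return false;   /* fail the hotplug gracefully */
    }
    /* the parked KVM fd is picked up later by kvm_init_vcpu() when the
     * vCPU thread starts */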
Suggested-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
---
include/sysemu/kvm.h | 8 ++++++++
accel/kvm/kvm-all.c | 12 ++++++++++++
2 files changed, 20 insertions(+)
diff --git a/include/sysemu/kvm.h b/include/sysemu/kvm.h
index fa3ec74442..221e6bd55b 100644
--- a/include/sysemu/kvm.h
+++ b/include/sysemu/kvm.h
@@ -450,6 +450,14 @@ int kvm_create_vcpu(CPUState *cpu);
*/
void kvm_park_vcpu(CPUState *cpu);
+/**
+ * kvm_create_and_park_vcpu - Create and park a KVM vCPU
+ * @cpu: QOM CPUState object for which KVM vCPU has to be created and parked.
+ *
+ * @returns: 0 when success, errno (<0) when failed.
+ */
+int kvm_create_and_park_vcpu(CPUState *cpu);
+
#endif /* COMPILING_PER_TARGET */
void kvm_cpu_synchronize_state(CPUState *cpu);
diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index 30d42847de..3d7e5eaf0b 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -381,6 +381,18 @@ int kvm_create_vcpu(CPUState *cpu)
return 0;
}
+int kvm_create_and_park_vcpu(CPUState *cpu)
+{
+ int ret = 0;
+
+ ret = kvm_create_vcpu(cpu);
+ if (!ret) {
+ kvm_park_vcpu(cpu);
+ }
+
+ return ret;
+}
+
static int do_kvm_destroy_vcpu(CPUState *cpu)
{
KVMState *s = kvm_state;
--
2.39.3
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH v2 3/4] cpu-common.c: export cpu_get_free_index to be reused later
2024-05-16 5:32 [PATCH v2 0/4] target/ppc: vcpu hotplug failure handling fixes Harsh Prateek Bora
2024-05-16 5:32 ` [PATCH v2 1/4] accel/kvm: Extract common KVM vCPU {creation, parking} code Harsh Prateek Bora
2024-05-16 5:32 ` [PATCH v2 2/4] accel/kvm: Introduce kvm_create_and_park_vcpu() helper Harsh Prateek Bora
@ 2024-05-16 5:32 ` Harsh Prateek Bora
2024-05-16 5:32 ` [PATCH v2 4/4] target/ppc: handle vcpu hotplug failure gracefully Harsh Prateek Bora
3 siblings, 0 replies; 14+ messages in thread
From: Harsh Prateek Bora @ 2024-05-16 5:32 UTC (permalink / raw)
To: npiggin, qemu-ppc, qemu-devel; +Cc: danielhb413, vaibhav, sbhat, salil.mehta
This helper provides an easy way to identify the next available free CPU
index, which can be used for vCPU creation. Until now, it has been called
at a much later stage, and there is a need to call it earlier (for now,
on ppc64), hence the need to export it.
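For example, an arch realize hook can now assign the index up front, as
patch 4/4 does:

    cs->cpu_index = cpu_get_free_index();
    POWERPC_CPU(cs)->vcpu_id = cs->cpu_index;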
Suggested-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
---
include/exec/cpu-common.h | 2 ++
cpu-common.c | 7 ++++---
2 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
index 6d5318895a..0386f1ab29 100644
--- a/include/exec/cpu-common.h
+++ b/include/exec/cpu-common.h
@@ -29,6 +29,8 @@ void cpu_list_lock(void);
void cpu_list_unlock(void);
unsigned int cpu_list_generation_id_get(void);
+int cpu_get_free_index(void);
+
void tcg_iommu_init_notifier_list(CPUState *cpu);
void tcg_iommu_free_notifier_list(CPUState *cpu);
diff --git a/cpu-common.c b/cpu-common.c
index ce78273af5..82bd1b432d 100644
--- a/cpu-common.c
+++ b/cpu-common.c
@@ -57,14 +57,12 @@ void cpu_list_unlock(void)
qemu_mutex_unlock(&qemu_cpu_list_lock);
}
-static bool cpu_index_auto_assigned;
-static int cpu_get_free_index(void)
+int cpu_get_free_index(void)
{
CPUState *some_cpu;
int max_cpu_index = 0;
- cpu_index_auto_assigned = true;
CPU_FOREACH(some_cpu) {
if (some_cpu->cpu_index >= max_cpu_index) {
max_cpu_index = some_cpu->cpu_index + 1;
@@ -83,8 +81,11 @@ unsigned int cpu_list_generation_id_get(void)
void cpu_list_add(CPUState *cpu)
{
+ static bool cpu_index_auto_assigned;
+
QEMU_LOCK_GUARD(&qemu_cpu_list_lock);
if (cpu->cpu_index == UNASSIGNED_CPU_INDEX) {
+ cpu_index_auto_assigned = true;
cpu->cpu_index = cpu_get_free_index();
assert(cpu->cpu_index != UNASSIGNED_CPU_INDEX);
} else {
--
2.39.3
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH v2 4/4] target/ppc: handle vcpu hotplug failure gracefully
2024-05-16 5:32 [PATCH v2 0/4] target/ppc: vcpu hotplug failure handling fixes Harsh Prateek Bora
` (2 preceding siblings ...)
2024-05-16 5:32 ` [PATCH v2 3/4] cpu-common.c: export cpu_get_free_index to be reused later Harsh Prateek Bora
@ 2024-05-16 5:32 ` Harsh Prateek Bora
3 siblings, 0 replies; 14+ messages in thread
From: Harsh Prateek Bora @ 2024-05-16 5:32 UTC (permalink / raw)
To: npiggin, qemu-ppc, qemu-devel; +Cc: danielhb413, vaibhav, sbhat, salil.mehta
On ppc64, the PowerVM hypervisor runs with limited memory, and vCPU
creation during hotplug may fail in the KVM_CREATE_VCPU ioctl. Since
kvm_init_vcpu is called with errp set to &error_fatal, such a failure
currently terminates the guest. This unexpected behaviour can be avoided
by pre-creating the vCPU and parking it on success, or returning an
error otherwise. This enables graceful error delivery for any vCPU
hotplug failure while the guest keeps running.
Based on the API refactoring to create/park vCPUs introduced in patch
1/8 of the series:
https://lore.kernel.org/qemu-devel/20240312020000.12992-2-salil.mehta@huawei.com/
Tested OK by repeatedly hotplugging and unplugging vCPUs as below:
#virsh setvcpus hotplug 40
#virsh setvcpus hotplug 70
error: internal error: unable to execute QEMU command 'device_add':
kvmppc_cpu_realize: vcpu hotplug failed with -12
Reported-by: Anushree Mathur <anushree.mathur@linux.vnet.ibm.com>
Suggested-by: Shivaprasad G Bhat <sbhat@linux.ibm.com>
Suggested-by: Vaibhav Jain <vaibhav@linux.ibm.com>
Signed-off-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Tested-by: Anushree Mathur <anushree.mathur@linux.vnet.ibm.com>
---
target/ppc/kvm.c | 24 ++++++++++++++++++++++++
1 file changed, 24 insertions(+)
diff --git a/target/ppc/kvm.c b/target/ppc/kvm.c
index 63930d4a77..25f0cf0ba8 100644
--- a/target/ppc/kvm.c
+++ b/target/ppc/kvm.c
@@ -48,6 +48,8 @@
#include "qemu/mmap-alloc.h"
#include "elf.h"
#include "sysemu/kvm_int.h"
+#include "sysemu/kvm.h"
+#include "hw/core/accel-cpu.h"
#define PROC_DEVTREE_CPU "/proc/device-tree/cpus/"
@@ -2339,6 +2341,26 @@ static void alter_insns(uint64_t *word, uint64_t flags, bool on)
}
}
+static bool kvmppc_cpu_realize(CPUState *cs, Error **errp)
+{
+ int ret;
+
+ cs->cpu_index = cpu_get_free_index();
+
+ POWERPC_CPU(cs)->vcpu_id = cs->cpu_index;
+
+ if (cs->parent_obj.hotplugged) {
+ /* create and park the vCPU early so a hotplug failure is reported gracefully */
+ ret = kvm_create_and_park_vcpu(cs);
+ if (ret) {
+ error_setg(errp, "%s: vcpu hotplug failed with %d",
+ __func__, ret);
+ return false;
+ }
+ }
+ return true;
+}
+
static void kvmppc_host_cpu_class_init(ObjectClass *oc, void *data)
{
PowerPCCPUClass *pcc = POWERPC_CPU_CLASS(oc);
@@ -2958,4 +2980,6 @@ void kvmppc_set_reg_tb_offset(PowerPCCPU *cpu, int64_t tb_offset)
void kvm_arch_accel_class_init(ObjectClass *oc)
{
+ AccelClass *ac = ACCEL_CLASS(oc);
+ ac->cpu_common_realize = kvmppc_cpu_realize;
}
--
2.39.3
^ permalink raw reply related [flat|nested] 14+ messages in thread
* RE: [PATCH v2 1/4] accel/kvm: Extract common KVM vCPU {creation, parking} code
2024-05-16 5:32 ` [PATCH v2 1/4] accel/kvm: Extract common KVM vCPU {creation, parking} code Harsh Prateek Bora
@ 2024-05-16 8:30 ` Salil Mehta via
2024-05-16 10:15 ` Harsh Prateek Bora
0 siblings, 1 reply; 14+ messages in thread
From: Salil Mehta via @ 2024-05-16 8:30 UTC (permalink / raw)
To: Harsh Prateek Bora, npiggin@gmail.com, qemu-ppc@nongnu.org,
qemu-devel@nongnu.org
Cc: danielhb413@gmail.com, vaibhav@linux.ibm.com, sbhat@linux.ibm.com
Hi Harsh,
Thanks for your interest in the patch-set, but taking patches like this
out of another series without any discussion can disrupt others' work
and its acceptance on time. This is because we will have to put a lot of
effort into rebasing the bigger series, and then the testing overhead
comes along with it.
The patch-set (from which this patch has been taken) is part of an even
bigger series, and many people and companies have been toiling for years
to fix the bugs in that series collectively.
I'm about to float the v9 version of the arch-agnostic series which this
patch is part of, and you can rebase your patch-set on that. I'm hopeful
that it will get accepted in this cycle.
Many thanks
Salil.
> From: Harsh Prateek Bora <harshpb@linux.ibm.com>
> Sent: Thursday, May 16, 2024 6:32 AM
>
> [snip: full patch quoted, identical to PATCH v2 1/4 above]
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v2 1/4] accel/kvm: Extract common KVM vCPU {creation, parking} code
2024-05-16 8:30 ` Salil Mehta via
@ 2024-05-16 10:15 ` Harsh Prateek Bora
2024-05-16 12:12 ` Salil Mehta via
0 siblings, 1 reply; 14+ messages in thread
From: Harsh Prateek Bora @ 2024-05-16 10:15 UTC (permalink / raw)
To: Salil Mehta, npiggin@gmail.com, qemu-ppc@nongnu.org,
qemu-devel@nongnu.org
Cc: danielhb413@gmail.com, vaibhav@linux.ibm.com, sbhat@linux.ibm.com
Hi Salil,
Thanks for your email.
Your patch 1/8 is included here based on review comments on my previous
patch from one of the maintainers in the community, and therefore I kept
you in CC so you'd be aware of the desire to have this independent patch
merged earlier, even if the other patches in your series go through
further reviews.
I am hoping to see your v9 soon, and thereafter the maintainer(s) may
choose to pick the latest independent patch if it needs to be merged
earlier.
Thanks for your work, and let's be hopeful it gets merged soon.
regards,
Harsh
On 5/16/24 14:00, Salil Mehta wrote:
> [snip]
^ permalink raw reply [flat|nested] 14+ messages in thread
* RE: [PATCH v2 1/4] accel/kvm: Extract common KVM vCPU {creation, parking} code
2024-05-16 10:15 ` Harsh Prateek Bora
@ 2024-05-16 12:12 ` Salil Mehta via
2024-05-16 13:06 ` Harsh Prateek Bora
0 siblings, 1 reply; 14+ messages in thread
From: Salil Mehta via @ 2024-05-16 12:12 UTC (permalink / raw)
To: Harsh Prateek Bora, npiggin@gmail.com, qemu-ppc@nongnu.org,
qemu-devel@nongnu.org
Cc: danielhb413@gmail.com, vaibhav@linux.ibm.com, sbhat@linux.ibm.com
Hi Harsh,
> From: Harsh Prateek Bora <harshpb@linux.ibm.com>
> Sent: Thursday, May 16, 2024 11:15 AM
>
> Hi Salil,
>
> Thanks for your email.
> Your patch 1/8 is included here based on review comments on my previous
> patch from one of the maintainers in the community, and therefore I kept
> you in CC so you'd be aware of the desire to have this independent patch
> merged earlier, even if the other patches in your series go through
> further reviews.
I really don't know which discussion you are pointing at. Please
understand that you are fixing a bug while we are pushing a feature
which is a large series. It will break a patch-set which is about to be
merged.
There will be significant testing overhead on us for work we have been
carrying forward for a long time. This will be disruptive. Please don't!
>
> I am hoping to see your v9 soon, and thereafter the maintainer(s) may
> choose to pick the latest independent patch if it needs to be merged earlier.
I don't think you understand what problem it is causing. For your small
bug fix you are causing significant delays at our end.
Thanks
Salil.
> [snip]
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v2 1/4] accel/kvm: Extract common KVM vCPU {creation, parking} code
2024-05-16 12:12 ` Salil Mehta via
@ 2024-05-16 13:06 ` Harsh Prateek Bora
2024-05-16 13:35 ` Salil Mehta via
0 siblings, 1 reply; 14+ messages in thread
From: Harsh Prateek Bora @ 2024-05-16 13:06 UTC (permalink / raw)
To: Salil Mehta, npiggin@gmail.com, qemu-ppc@nongnu.org,
qemu-devel@nongnu.org
Cc: danielhb413@gmail.com, vaibhav@linux.ibm.com, sbhat@linux.ibm.com
Hi Salil,
On 5/16/24 17:42, Salil Mehta wrote:
> Hi Harsh,
>
>> From: Harsh Prateek Bora <harshpb@linux.ibm.com>
>> Sent: Thursday, May 16, 2024 11:15 AM
>>
>> Hi Salil,
>>
>> Thanks for your email.
>> Your patch 1/8 is included here based on review comments on my previous
>> patch from one of the maintainers in the community, and therefore I kept
>> you in CC so you'd be aware of the desire to have this independent patch
>> merged earlier, even if the other patches in your series go through
>> further reviews.
>
> I really don't know which discussion you are pointing at. Please
> understand that you are fixing a bug while we are pushing a feature
> which is a large series. It will break a patch-set which is about to be merged.
>
> There will be significant testing overhead on us for work we have been
> carrying forward for a long time. This will be disruptive. Please don't!
>
I was referring to the review discussion on my previous patch here:
https://lore.kernel.org/qemu-devel/D191D2JFAR7L.2EH4S445M4TGK@gmail.com/
Your patch was included with this series only to facilitate review of
the additional patches, which depend on just one patch of yours.
I am not sure what appears disruptive here. It is common practice in
the community that maintainer(s) pick individual patches from a series
if they have been vetted by a significant number of reviewers.
However, in this case, since you have mentioned you will post the next
version soon, you need not worry about it, as that would be the
preferred version for both series.
>
>>
>> I am hoping to see your v9 soon, and thereafter the maintainer(s) may
>> choose to pick the latest independent patch if it needs to be merged earlier.
>
>
> I don't think you understand what problem it is causing. For your
> small bug fix you are causing significant delays at our end.
>
I hope I clarified above that including your patch here doesn't delay
anything. Hoping to see your v9 soon!
Thanks
Harsh
>
> Thanks
> Salil.
>>
>> Thanks for your work and let's be hopeful it gets merged soon.
>>
>> regards,
>> Harsh
>>
>> On 5/16/24 14:00, Salil Mehta wrote:
>> > Hi Harsh,
>> >
>> > Thanks for your interest in the patch-set but taking away patches like
>> > this from other series without any discussion can disrupt others work
>> > and its acceptance on time. This is because we will have to put lot of
>> > effort in rebasing bigger series and then testing overhead comes along
>> > with it.
>> >
>> > The patch-set (from where this patch has been taken) is part of even
>> > bigger series and there have been many people and companies toiling to
>> > fix the bugs collectively in that series and for years.
>> >
>> > I'm about float the V9 version of the Arch agnostic series which this
>> > patch is part of and you can rebase your patch-set from there. I'm
>> > hopeful that it will get accepted in this cycle.
>> >
>> >
>> > Many thanks
>> > Salil.
>> >
>> >> From: Harsh Prateek Bora <harshpb@linux.ibm.com>
>> >> Sent: Thursday, May 16, 2024 6:32 AM
>> >>
>> >> From: Salil Mehta <salil.mehta@huawei.com>
>> >>
>> >> KVM vCPU creation is done once during the vCPU realization when
>> Qemu
>> >> vCPU thread is spawned. This is common to all the architectures as of
>> now.
>> >>
>> >> Hot-unplug of vCPU results in destruction of the vCPU object in QOM
>> but
>> >> the corresponding KVM vCPU object in the Host KVM is not destroyed
>> as
>> >> KVM doesn't support vCPU removal. Therefore, its representative KVM
>> >> vCPU object/context in Qemu is parked.
>> >>
>> >> Refactor architecture common logic so that some APIs could be reused
>> by
>> >> vCPU Hotplug code of some architectures likes ARM, Loongson etc.
>> Update
>> >> new/old APIs with trace events instead of DPRINTF. No functional
>> change is
>> >> intended here.
>> >>
>> >> Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
>> >> Reviewed-by: Gavin Shan <gshan@redhat.com>
>> >> Tested-by: Vishnu Pajjuri <vishnu@os.amperecomputing.com>
>> >> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
>> >> Tested-by: Xianglai Li <lixianglai@loongson.cn>
>> >> Tested-by: Miguel Luis <miguel.luis@oracle.com>
>> >> Reviewed-by: Shaoqin Huang <shahuang@redhat.com>
>> >> [harshpb: fixed rebase failures in include/sysemu/kvm.h]
>> >> Signed-off-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
>> >> ---
>> >> include/sysemu/kvm.h | 15 ++++++++++
>> >> accel/kvm/kvm-all.c | 64 ++++++++++++++++++++++++++++++++---
>> -----
>> >> --
>> >> accel/kvm/trace-events | 5 +++-
>> >> 3 files changed, 68 insertions(+), 16 deletions(-)
>> >>
>> >> diff --git a/include/sysemu/kvm.h b/include/sysemu/kvm.h index
>> >> eaf801bc93..fa3ec74442 100644
>> >> --- a/include/sysemu/kvm.h
>> >> +++ b/include/sysemu/kvm.h
>> >> @@ -434,6 +434,21 @@ void kvm_set_sigmask_len(KVMState *s,
>> unsigned
>> >> int sigmask_len);
>> >>
>> >> int kvm_physical_memory_addr_from_host(KVMState *s, void
>> >> *ram_addr,
>> >> hwaddr *phys_addr);
>> >> +/**
>> >> + * kvm_create_vcpu - Gets a parked KVM vCPU or creates a KVM
>> vCPU
>> >> + * @cpu: QOM CPUState object for which KVM vCPU has to be
>> >> fetched/created.
>> >> + *
>> >> + * @returns: 0 when success, errno (<0) when failed.
>> >> + */
>> >> +int kvm_create_vcpu(CPUState *cpu);
>> >> +
>> >> +/**
>> >> + * kvm_park_vcpu - Park QEMU KVM vCPU context
>> >> + * @cpu: QOM CPUState object for which QEMU KVM vCPU context
>> has to
>> >> be parked.
>> >> + *
>> >> + * @returns: none
>> >> + */
>> >> +void kvm_park_vcpu(CPUState *cpu);
>> >>
>> >> #endif /* COMPILING_PER_TARGET */
>> >>
>> >> diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c index
>> >> d7281b93f3..30d42847de 100644
>> >> --- a/accel/kvm/kvm-all.c
>> >> +++ b/accel/kvm/kvm-all.c
>> >> @@ -128,6 +128,7 @@ static QemuMutex kml_slots_lock; #define
>> >> kvm_slots_unlock() qemu_mutex_unlock(&kml_slots_lock)
>> >>
>> >> static void kvm_slot_init_dirty_bitmap(KVMSlot *mem);
>> >> +static int kvm_get_vcpu(KVMState *s, unsigned long vcpu_id);
>> >>
>> >> static inline void kvm_resample_fd_remove(int gsi) { @@ -340,14
>> +341,53
>> >> @@ err:
>> >> return ret;
>> >> }
>> >>
>> >> +void kvm_park_vcpu(CPUState *cpu)
>> >> +{
>> >> + struct KVMParkedVcpu *vcpu;
>> >> +
>> >> + trace_kvm_park_vcpu(cpu->cpu_index, kvm_arch_vcpu_id(cpu));
>> >> +
>> >> + vcpu = g_malloc0(sizeof(*vcpu));
>> >> + vcpu->vcpu_id = kvm_arch_vcpu_id(cpu);
>> >> + vcpu->kvm_fd = cpu->kvm_fd;
>> >> + QLIST_INSERT_HEAD(&kvm_state->kvm_parked_vcpus, vcpu,
>> node); }
>> >> +
>> >> +int kvm_create_vcpu(CPUState *cpu)
>> >> +{
>> >> + unsigned long vcpu_id = kvm_arch_vcpu_id(cpu);
>> >> + KVMState *s = kvm_state;
>> >> + int kvm_fd;
>> >> +
>> >> + trace_kvm_create_vcpu(cpu->cpu_index, kvm_arch_vcpu_id(cpu));
>> >> +
>> >> + /* check if the KVM vCPU already exist but is parked */
>> >> + kvm_fd = kvm_get_vcpu(s, vcpu_id);
>> >> + if (kvm_fd < 0) {
>> >> + /* vCPU not parked: create a new KVM vCPU */
>> >> + kvm_fd = kvm_vm_ioctl(s, KVM_CREATE_VCPU, vcpu_id);
>> >> + if (kvm_fd < 0) {
>> >> + error_report("KVM_CREATE_VCPU IOCTL failed for vCPU %lu",
>> >> vcpu_id);
>> >> + return kvm_fd;
>> >> + }
>> >> + }
>> >> +
>> >> + cpu->kvm_fd = kvm_fd;
>> >> + cpu->kvm_state = s;
>> >> + cpu->vcpu_dirty = true;
>> >> + cpu->dirty_pages = 0;
>> >> + cpu->throttle_us_per_full = 0;
>> >> +
>> >> + return 0;
>> >> +}
>> >> +
>> >> static int do_kvm_destroy_vcpu(CPUState *cpu) {
>> >> KVMState *s = kvm_state;
>> >>      long mmap_size;
>> >> -    struct KVMParkedVcpu *vcpu = NULL;
>> >>      int ret = 0;
>> >>
>> >> -    trace_kvm_destroy_vcpu();
>> >> +    trace_kvm_destroy_vcpu(cpu->cpu_index, kvm_arch_vcpu_id(cpu));
>> >>
>> >>      ret = kvm_arch_destroy_vcpu(cpu);
>> >>      if (ret < 0) {
>> >> @@ -373,10 +413,7 @@ static int do_kvm_destroy_vcpu(CPUState *cpu)
>> >>          }
>> >>      }
>> >>
>> >> -    vcpu = g_malloc0(sizeof(*vcpu));
>> >> -    vcpu->vcpu_id = kvm_arch_vcpu_id(cpu);
>> >> -    vcpu->kvm_fd = cpu->kvm_fd;
>> >> -    QLIST_INSERT_HEAD(&kvm_state->kvm_parked_vcpus, vcpu, node);
>> >> +    kvm_park_vcpu(cpu);
>> >>  err:
>> >>      return ret;
>> >>  }
>> >> @@ -397,6 +434,8 @@ static int kvm_get_vcpu(KVMState *s, unsigned long vcpu_id)
>> >>          if (cpu->vcpu_id == vcpu_id) {
>> >>              int kvm_fd;
>> >>
>> >> +            trace_kvm_get_vcpu(vcpu_id);
>> >> +
>> >>              QLIST_REMOVE(cpu, node);
>> >>              kvm_fd = cpu->kvm_fd;
>> >>              g_free(cpu);
>> >> @@ -404,7 +443,7 @@ static int kvm_get_vcpu(KVMState *s, unsigned long vcpu_id)
>> >>          }
>> >>      }
>> >>
>> >> -    return kvm_vm_ioctl(s, KVM_CREATE_VCPU, (void *)vcpu_id);
>> >> +    return -ENOENT;
>> >>  }
>> >>
>> >>  int kvm_init_vcpu(CPUState *cpu, Error **errp)
>> >> @@ -415,19 +454,14 @@ int kvm_init_vcpu(CPUState *cpu, Error **errp)
>> >>
>> >>      trace_kvm_init_vcpu(cpu->cpu_index, kvm_arch_vcpu_id(cpu));
>> >>
>> >> -    ret = kvm_get_vcpu(s, kvm_arch_vcpu_id(cpu));
>> >> +    ret = kvm_create_vcpu(cpu);
>> >>      if (ret < 0) {
>> >> -        error_setg_errno(errp, -ret, "kvm_init_vcpu: kvm_get_vcpu failed (%lu)",
>> >> +        error_setg_errno(errp, -ret,
>> >> +                         "kvm_init_vcpu: kvm_create_vcpu failed (%lu)",
>> >>                           kvm_arch_vcpu_id(cpu));
>> >>          goto err;
>> >>      }
>> >>
>> >> -    cpu->kvm_fd = ret;
>> >> -    cpu->kvm_state = s;
>> >> -    cpu->vcpu_dirty = true;
>> >> -    cpu->dirty_pages = 0;
>> >> -    cpu->throttle_us_per_full = 0;
>> >> -
>> >>      mmap_size = kvm_ioctl(s, KVM_GET_VCPU_MMAP_SIZE, 0);
>> >>      if (mmap_size < 0) {
>> >>          ret = mmap_size;
>> >> diff --git a/accel/kvm/trace-events b/accel/kvm/trace-events
>> >> index 681ccb667d..75c1724e78 100644
>> >> --- a/accel/kvm/trace-events
>> >> +++ b/accel/kvm/trace-events
>> >> @@ -9,6 +9,10 @@ kvm_device_ioctl(int fd, int type, void *arg) "dev fd %d, type 0x%x, arg %p"
>> >>  kvm_failed_reg_get(uint64_t id, const char *msg) "Warning: Unable to retrieve ONEREG %" PRIu64 " from KVM: %s"
>> >>  kvm_failed_reg_set(uint64_t id, const char *msg) "Warning: Unable to set ONEREG %" PRIu64 " to KVM: %s"
>> >>  kvm_init_vcpu(int cpu_index, unsigned long arch_cpu_id) "index: %d id: %lu"
>> >> +kvm_create_vcpu(int cpu_index, unsigned long arch_cpu_id) "index: %d id: %lu"
>> >> +kvm_get_vcpu(unsigned long arch_cpu_id) "id: %lu"
>> >> +kvm_destroy_vcpu(int cpu_index, unsigned long arch_cpu_id) "index: %d id: %lu"
>> >> +kvm_park_vcpu(int cpu_index, unsigned long arch_cpu_id) "index: %d id: %lu"
>> >>  kvm_irqchip_commit_routes(void) ""
>> >>  kvm_irqchip_add_msi_route(char *name, int vector, int virq) "dev %s vector %d virq %d"
>> >>  kvm_irqchip_update_msi_route(int virq) "Updating MSI route virq=%d"
>> >> @@ -25,7 +29,6 @@ kvm_dirty_ring_reaper(const char *s) "%s"
>> >>  kvm_dirty_ring_reap(uint64_t count, int64_t t) "reaped %"PRIu64" pages (took %"PRIi64" us)"
>> >>  kvm_dirty_ring_reaper_kick(const char *reason) "%s"
>> >>  kvm_dirty_ring_flush(int finished) "%d"
>> >> -kvm_destroy_vcpu(void) ""
>> >>  kvm_failed_get_vcpu_mmap_size(void) ""
>> >>  kvm_cpu_exec(void) ""
>> >>  kvm_interrupt_exit_request(void) ""
>> >> --
>> >> 2.39.3
>> >
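A note for readers following the series from the archive: after the refactor above, kvm_get_vcpu() only searches the parked-vCPU list (returning -ENOENT on a miss) and kvm_create_vcpu() falls back to the KVM_CREATE_VCPU ioctl. Patch 2/4 of this series then introduces a kvm_create_and_park_vcpu() helper on top. Below is a minimal sketch of a plausible implementation, assuming the helper simply composes the two exported calls; it is an illustration, not code taken from the series:

/*
 * Plausible shape of the patch 2/4 helper (sketch, assumed to live in
 * accel/kvm/kvm-all.c): create (or unpark) the KVM vCPU fd, then park
 * it again so it is held until the vCPU is actually realized.
 */
int kvm_create_and_park_vcpu(CPUState *cpu)
{
    int ret;

    ret = kvm_create_vcpu(cpu);   /* unpark an existing fd or KVM_CREATE_VCPU */
    if (ret == 0) {
        kvm_park_vcpu(cpu);       /* stash the fd on the kvm_parked_vcpus list */
    }

    return ret;                   /* 0 on success, negative errno on failure */
}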
^ permalink raw reply [flat|nested] 14+ messages in thread
* RE: [PATCH v2 1/4] accel/kvm: Extract common KVM vCPU {creation, parking} code
2024-05-16 13:06 ` Harsh Prateek Bora
@ 2024-05-16 13:35 ` Salil Mehta via
[not found] ` <5bd52d8f5aaa49d6bc0ae419bb16c27c@huawei.com>
` (2 more replies)
0 siblings, 3 replies; 14+ messages in thread
From: Salil Mehta via @ 2024-05-16 13:35 UTC (permalink / raw)
To: Harsh Prateek Bora, npiggin@gmail.com, qemu-ppc@nongnu.org,
qemu-devel@nongnu.org
Cc: danielhb413@gmail.com, vaibhav@linux.ibm.com, sbhat@linux.ibm.com
> From: Harsh Prateek Bora <harshpb@linux.ibm.com>
> Sent: Thursday, May 16, 2024 2:07 PM
>
> Hi Salil,
>
> On 5/16/24 17:42, Salil Mehta wrote:
> > Hi Harsh,
> >
> >> From: Harsh Prateek Bora <harshpb@linux.ibm.com>
> >> Sent: Thursday, May 16, 2024 11:15 AM
> >>
> >> Hi Salil,
> >>
> >> Thanks for your email.
> >> Your patch 1/8 is included here based on review comments on my previous
> >> patch from one of the maintainers in the community and therefore I had
> >> kept you in CC to be aware of the desire of having this independent patch to
> >> get merged earlier even if your other patches in the series may go through
> >> further reviews.
> >
> > I really don’t know which discussion you are pointing at? Please
> > understand you are fixing a bug and we are pushing a feature which has a large series.
> > It will break the patch-set which is about to be merged.
> >
> > There will be significant overhead of testing on us for the work we
> > have been carrying forward for a long time. This will be disruptive. Please don't!
> >
>
> I was referring to the review discussion on my prev patch here:
> https://lore.kernel.org/qemu-devel/D191D2JFAR7L.2EH4S445M4TGK@gmail.com/
Sure; I'm not sure what this means.
> Although your patch was included with this series only to facilitate review of
> the additional patches depending on just one of your patches.
Generally you rebase your patch-set over the other and clearly state in the cover
letter that this patch-set is dependent upon such and such patch-set. Just imagine
if everyone starts to unilaterally pick up patches from each other's patch-sets, it will
create chaos not only for the feature owners but also for the maintainers.
>
> I am not sure what is appearing disruptive here. It is a common practice in
> the community that maintainer(s) can pick individual patches from the
> series if it has been vetted by a significant number of reviewers.
Don’t you think this patch-set is asking for acceptance for a patch already
part of another patch-set which is about to be accepted and is a bigger feature?
Will it cause maintenance overhead at the last moment? Yes, of course!
> However, in this case, since you have mentioned you will post the next version soon,
> you need not worry about it as that would be the preferred version for both
> of the series.
Yes, but please understand we are working for the benefit of the overall community.
Please cooperate here.
>
> >
> >>
> >> I am hoping to see your v9 soon, and thereafter maintainer(s) may choose to
> >> pick the latest independent patch if it needs to be merged earlier.
> >
> >
> > I don’t think you understand what problem this is causing. For
> > your small bug fix you are causing significant delays at our end.
> >
>
> I hope I clarified above that including your patch here doesn't delay anything.
> Hoping to see your v9 soon!
>
> Thanks
> Harsh
> >
> > Thanks
> > Salil.
> >>
> >> Thanks for your work and let's be hopeful it gets merged soon.
> >>
> >> regards,
> >> Harsh
> >>
> >> On 5/16/24 14:00, Salil Mehta wrote:
> >> > Hi Harsh,
> >> >
> >> > Thanks for your interest in the patch-set, but taking away patches like
> >> > this from other series without any discussion can disrupt others' work
> >> > and its acceptance on time. This is because we will have to put a lot of
> >> > effort into rebasing the bigger series, and then the testing overhead
> >> > comes along with it.
> >> >
> >> > The patch-set (from which this patch has been taken) is part of an even
> >> > bigger series, and there have been many people and companies toiling to
> >> > fix the bugs collectively in that series for years.
> >> >
> >> > I'm about to float the V9 version of the arch-agnostic series which this
> >> > patch is part of, and you can rebase your patch-set from there. I'm
> >> > hopeful that it will get accepted in this cycle.
> >> >
> >> >
> >> > Many thanks
> >> > Salil.
> >> >
> >> >> [full patch (commit message and diff) quoted here in the original mail;
> >> >> trimmed as a verbatim duplicate of the patch posted at the top of the thread]
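The ppc consumer of that helper (patch 4/4, per the cover letter) pre-creates and parks the vCPU at hotplug time, so a failing KVM_CREATE_VCPU can be reported through errp instead of terminating the guest via &error_fatal. A rough sketch of that shape follows; the function name below is an illustrative assumption, not code taken from the series:

#include "qemu/osdep.h"
#include "qapi/error.h"
#include "sysemu/kvm.h"

/* Illustrative only: reject a vCPU hotplug gracefully instead of dying. */
static bool ppc_try_precreate_vcpu(CPUState *cs, Error **errp)
{
    int ret = kvm_create_and_park_vcpu(cs);

    if (ret < 0) {
        /* e.g. PowerVM under memory pressure: KVM_CREATE_VCPU fails */
        error_setg_errno(errp, -ret,
                         "vCPU hotplug failed: could not create KVM vCPU");
        return false;   /* the guest keeps running; the hotplug is refused */
    }

    return true;
}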
^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH v2 1/4] accel/kvm: Extract common KVM vCPU {creation, parking} code
[not found] ` <5bd52d8f5aaa49d6bc0ae419bb16c27c@huawei.com>
@ 2024-05-16 14:19 ` Salil Mehta
0 siblings, 0 replies; 14+ messages in thread
From: Salil Mehta @ 2024-05-16 14:19 UTC (permalink / raw)
To: Harsh Prateek Bora
Cc: Salil Mehta, npiggin@gmail.com, qemu-ppc@nongnu.org,
qemu-devel@nongnu.org, danielhb413@gmail.com,
vaibhav@linux.ibm.com, sbhat@linux.ibm.com
[+] Adding this email address to the conversation.
(sorry for the noise)
> From: Salil Mehta
> Sent: Thursday, May 16, 2024 2:36 PM
>
> > [the entire previous reply, including the full quoted patch, was repeated
> > here verbatim; trimmed]
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v2 1/4] accel/kvm: Extract common KVM vCPU {creation, parking} code
2024-05-16 13:35 ` Salil Mehta via
[not found] ` <5bd52d8f5aaa49d6bc0ae419bb16c27c@huawei.com>
@ 2024-05-16 14:53 ` Harsh Prateek Bora
2024-05-17 3:44 ` Nicholas Piggin
2 siblings, 0 replies; 14+ messages in thread
From: Harsh Prateek Bora @ 2024-05-16 14:53 UTC (permalink / raw)
To: Salil Mehta, npiggin@gmail.com, qemu-ppc@nongnu.org,
qemu-devel@nongnu.org
Cc: danielhb413@gmail.com, vaibhav@linux.ibm.com, sbhat@linux.ibm.com
Hi Salil,
On 5/16/24 19:05, Salil Mehta wrote:
>
>> From: Harsh Prateek Bora <harshpb@linux.ibm.com>
>> Sent: Thursday, May 16, 2024 2:07 PM
>>
>> Hi Salil,
>>
>> On 5/16/24 17:42, Salil Mehta wrote:
>> > Hi Harsh,
>> >
>> >> From: Harsh Prateek Bora <harshpb@linux.ibm.com>
>> >> Sent: Thursday, May 16, 2024 11:15 AM
>> >>
>> >> Hi Salil,
>> >>
>> >> Thanks for your email.
>> >> Your patch 1/8 is included here based on review comments on my previous
>> >> patch from one of the maintainers in the community and therefore I had
>> >> kept you in CC to be aware of the desire of having this independent patch to
>> >> get merged earlier even if your other patches in the series may go through
>> >> further reviews.
>> >
>> > I really don’t know which discussion you are pointing at? Please
>> > understand you are fixing a bug and we are pushing a feature which has a large series.
>> > It will break the patch-set which is about to be merged.
>> >
>> > There will be significant overhead of testing on us for the work we
>> > have been carrying forward for a long time. This will be disruptive. Please don't!
>> >
>>
>> I was referring to the review discussion on my prev patch here:
>> https://lore.kernel.org/qemu-devel/D191D2JFAR7L.2EH4S445M4TGK@gmail.com/
>
>
> Sure; I'm not sure what this means.
>
No worries. If you had followed the conversation on the review
link I shared, I had made it clear that we are expecting a patch update
from you and it is included here just to facilitate review of additional
patches on top.
>
>> Although your patch was included with this series only to facilitate review of
>> the additional patches depending on just one of your patches.
>
>
> Generally you rebase your patch-set over the other and clearly state in the cover
> letter that this patch-set is dependent upon such and such patch-set. Just imagine
> if everyone starts to unilaterally pick up patches from each other's patch-sets, it will
> create chaos not only for the feature owners but also for the maintainers.
>
Please go through the review discussion on the link I shared above. It
was included on the suggestion of one of the maintainers. However, if
you are going to send v9 soon, everyone would be happy to wait.
>
>>
>> I am not sure what is appearing disruptive here. It is a common practice in
>> the community that maintainer(s) can pick individual patches from the
>> series if it has been vetted by a significant number of reviewers.
>
>
> Don’t you think this patch-set is asking for acceptance for a patch already
> part of another patch-set which is about to be accepted and is a bigger feature?
> Will it cause maintenance overhead at the last moment? Yes, of course!
No, I don't think so.
>
>
>> However, in this case, since you have mentioned to post next version soon,
>> you need not worry about it as that would be the preferred version for both
>> of the series.
>
>
> Yes, but please understand we are working for the benefit of the overall community.
> Please cooperate here.
>
Hope I cleared your confusion. We are waiting to see your v9 soon.
>> [the remainder of the quote repeated the earlier exchange and the full
>> patch verbatim; trimmed]
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v2 1/4] accel/kvm: Extract common KVM vCPU {creation, parking} code
2024-05-16 13:35 ` Salil Mehta via
[not found] ` <5bd52d8f5aaa49d6bc0ae419bb16c27c@huawei.com>
2024-05-16 14:53 ` Harsh Prateek Bora
@ 2024-05-17 3:44 ` Nicholas Piggin
2024-05-17 10:13 ` Salil Mehta via
2 siblings, 1 reply; 14+ messages in thread
From: Nicholas Piggin @ 2024-05-17 3:44 UTC (permalink / raw)
To: Salil Mehta, Harsh Prateek Bora, qemu-ppc@nongnu.org,
qemu-devel@nongnu.org
Cc: danielhb413@gmail.com, vaibhav@linux.ibm.com, sbhat@linux.ibm.com
On Thu May 16, 2024 at 11:35 PM AEST, Salil Mehta wrote:
> [quoted exchange trimmed; it repeats the messages above verbatim]
There might be a misunderstanding, Harsh just said there had not been
much progress on your series for a while and he wasn't sure what the
status was. I mentioned that we *could* take your patch 1 (with your
blessing) if there was a hold up with the rest of the series. He was
going to check in with you to see how it was going.
This patch 1 was not intended to be merged as is without syncing up with
you first, but it's understandable you were concerned because that was
probably not communicated with you clearly.
I appreciate you bringing up your concerns, we'll try to do better.
Thanks,
Nick
^ permalink raw reply [flat|nested] 14+ messages in thread
* RE: [PATCH v2 1/4] accel/kvm: Extract common KVM vCPU {creation, parking} code
2024-05-17 3:44 ` Nicholas Piggin
@ 2024-05-17 10:13 ` Salil Mehta via
0 siblings, 0 replies; 14+ messages in thread
From: Salil Mehta via @ 2024-05-17 10:13 UTC (permalink / raw)
To: Nicholas Piggin, Harsh Prateek Bora, qemu-ppc@nongnu.org,
qemu-devel@nongnu.org
Cc: danielhb413@gmail.com, vaibhav@linux.ibm.com, sbhat@linux.ibm.com
Hi Nick,
> From: Nicholas Piggin <npiggin@gmail.com>
> Sent: Friday, May 17, 2024 4:44 AM
>
> On Thu May 16, 2024 at 11:35 PM AEST, Salil Mehta wrote:
> >
> > [earlier exchange trimmed; it repeats the messages above verbatim]
>
> There might be a misunderstanding, Harsh just said there had not been
> much progress on your series for a while and he wasn't sure what the status
> was. I mentioned that we *could* take your patch 1 (with your
> blessing) if there was a hold up with the rest of the series. He was going to
> check in with you to see how it was going.
Thanks for the clarification. No issues. I'm planning to float V9 of this series by
Monday and perhaps that’s all you want. 😊
As such, the new cycle started on 23rd April and we have been busy rebasing and
testing. This series works in conjunction with another series, and we have to ensure
both are compatible.
> This patch 1 was not intended to be merged as is without syncing up with
> you first, but it's understandable you were concerned because that was
> probably not communicated with you clearly.
No issues. I think we are all on the same page now. I understand your
requirement. We are trying our best to expedite acceptance of this series.
Perhaps your reviews on V9 might help.
>
> I appreciate you bringing up your concerns, we'll try to do better.
No problem. Thanks
Salil.
>
> Thanks,
> Nick
^ permalink raw reply [flat|nested] 14+ messages in thread
end of thread, other threads:[~2024-05-17 10:14 UTC | newest]
Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-05-16 5:32 [PATCH v2 0/4] target/ppc: vcpu hotplug failure handling fixes Harsh Prateek Bora
2024-05-16 5:32 ` [PATCH v2 1/4] accel/kvm: Extract common KVM vCPU {creation, parking} code Harsh Prateek Bora
2024-05-16 8:30 ` Salil Mehta via
2024-05-16 10:15 ` Harsh Prateek Bora
2024-05-16 12:12 ` Salil Mehta via
2024-05-16 13:06 ` Harsh Prateek Bora
2024-05-16 13:35 ` Salil Mehta via
[not found] ` <5bd52d8f5aaa49d6bc0ae419bb16c27c@huawei.com>
2024-05-16 14:19 ` Salil Mehta
2024-05-16 14:53 ` Harsh Prateek Bora
2024-05-17 3:44 ` Nicholas Piggin
2024-05-17 10:13 ` Salil Mehta via
2024-05-16 5:32 ` [PATCH v2 2/4] accel/kvm: Introduce kvm_create_and_park_vcpu() helper Harsh Prateek Bora
2024-05-16 5:32 ` [PATCH v2 3/4] cpu-common.c: export cpu_get_free_index to be reused later Harsh Prateek Bora
2024-05-16 5:32 ` [PATCH v2 4/4] target/ppc: handle vcpu hotplug failure gracefully Harsh Prateek Bora
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).