* [PATCH 1/2] KVM: For manual-protect GET_DIRTY_LOG, do not hold slots lock
@ 2025-09-30 17:28 James Houghton
2025-09-30 17:28 ` [PATCH 2/2] KVM: selftests: Add parallel KVM_GET_DIRTY_LOG to dirty_log_perf_test James Houghton
0 siblings, 3 replies; 5+ messages in thread
From: James Houghton @ 2025-09-30 17:28 UTC (permalink / raw)
To: Paolo Bonzini, Sean Christopherson; +Cc: James Houghton, kvm, linux-kernel
For users that have enabled manual-protect, holding the srcu lock
instead of the slots lock allows KVM to copy the dirty bitmap for
multiple memslots in parallel.
Userspace can take advantage of this by creating multiple memslots and
calling GET_DIRTY_LOG on all of them at the same time, reducing the
dirty log collection time.
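The userspace side of this scheme is just one thread per memslot, each
issuing KVM_GET_DIRTY_LOG against its own slot. A minimal sketch of the
pattern (the helper names are made up for illustration, and a stub
stands in for the real ioctl, since that needs a live VM fd and a
struct kvm_dirty_log):

```c
#include <pthread.h>
#include <string.h>

#define BITMAP_LONGS 4	/* per-slot dirty bitmap size, for illustration */

struct slot_log {
	int slot;
	unsigned long bitmap[BITMAP_LONGS];
};

/*
 * Stand-in for ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log). With
 * KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 enabled, these calls no longer
 * serialize on the slots lock, so issuing them concurrently shrinks
 * the total collection time.
 */
static void get_dirty_log_stub(struct slot_log *log)
{
	memset(log->bitmap, 0xff, sizeof(log->bitmap));	/* pretend all dirty */
}

static void *get_dirty_log_worker(void *arg)
{
	get_dirty_log_stub(arg);
	return NULL;
}

/* One KVM_GET_DIRTY_LOG per memslot, all in flight at once. */
static int collect_all_parallel(struct slot_log *logs, int nr_slots)
{
	pthread_t thd[nr_slots];
	int i;

	for (i = 0; i < nr_slots; i++) {
		logs[i].slot = i;
		if (pthread_create(&thd[i], NULL, get_dirty_log_worker,
				   &logs[i]))
			return -1;
	}
	for (i = 0; i < nr_slots; i++)
		pthread_join(thd[i], NULL);
	return 0;
}
```

Patch 2/2 below adds essentially this pattern to dirty_log_perf_test.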
For VM live migration, the final dirty memory state can only be
collected after the VM has been paused (blackout). We can resume the VM
on the target host without this bitmap, but doing so requires the
post-copy implementation to assume that nothing is clean, which is very
slow. Receiving the bitmap sooner therefore improves VM responsiveness
dramatically.
On 12TiB Cascade Lake hosts, we observe GET_DIRTY_LOG times of about
25-40ms for each memslot when splitting the memslots 8 ways. This patch
reduces the total wall time spent calling GET_DIRTY_LOG from ~300ms to
~40ms. This means that the dirty log can be transferred to the target
~250ms faster, which is a significant responsiveness improvement. It
takes about 800ms to send the bitmap to the target today, so the 250ms
improvement represents a ~20% reduction in total time spent without the
dirty bitmap.
The operations that must be safe without the slots lock are:
1. The copy_to_user() that stores the bitmap
2. kvm_arch_sync_dirty_log()
(1) is trivially safe.
(2) kvm_arch_sync_dirty_log() is non-trivially implemented only for x86
and s390. s390 does not set KVM_GENERIC_DIRTYLOG_READ_PROTECT, so the
optimization here does not apply to it. On x86, parallelization is safe.
The extra vCPU kicks that come from having more memslots should not be
an issue for the final dirty logging pass (the one I care about most
here), as vCPUs will have been kicked out to userspace at that point.
$ ./dirty_log_perf_test -x 8 -b 512G -s anonymous_hugetlb_1gb # serial
Iteration 1 get dirty log time: 0.004699057s
Iteration 2 get dirty log time: 0.003918316s
Iteration 3 get dirty log time: 0.003903790s
Iteration 4 get dirty log time: 0.003944732s
Iteration 5 get dirty log time: 0.003885857s
$ ./dirty_log_perf_test -x 8 -b 512G -s anonymous_hugetlb_1gb # parallel
Iteration 1 get dirty log time: 0.002352174s
Iteration 2 get dirty log time: 0.001064265s
Iteration 3 get dirty log time: 0.001102144s
Iteration 4 get dirty log time: 0.000960649s
Iteration 5 get dirty log time: 0.000972533s
So with 8 memslots, we get about a 4x reduction on this platform
(Skylake).
Signed-off-by: James Houghton <jthoughton@google.com>
---
include/linux/kvm_dirty_ring.h | 4 +-
virt/kvm/dirty_ring.c | 11 +++++-
virt/kvm/kvm_main.c | 68 +++++++++++++++++++++++-----------
3 files changed, 58 insertions(+), 25 deletions(-)
diff --git a/include/linux/kvm_dirty_ring.h b/include/linux/kvm_dirty_ring.h
index eb10d87adf7d..b7d18abec4d0 100644
--- a/include/linux/kvm_dirty_ring.h
+++ b/include/linux/kvm_dirty_ring.h
@@ -37,7 +37,7 @@ static inline u32 kvm_dirty_ring_get_rsvd_entries(struct kvm *kvm)
return 0;
}
-static inline bool kvm_use_dirty_bitmap(struct kvm *kvm)
+static inline bool kvm_use_dirty_bitmap(struct kvm *kvm, bool shared)
{
return true;
}
@@ -73,7 +73,7 @@ static inline void kvm_dirty_ring_free(struct kvm_dirty_ring *ring)
#else /* CONFIG_HAVE_KVM_DIRTY_RING */
int kvm_cpu_dirty_log_size(struct kvm *kvm);
-bool kvm_use_dirty_bitmap(struct kvm *kvm);
+bool kvm_use_dirty_bitmap(struct kvm *kvm, bool shared);
bool kvm_arch_allow_write_without_running_vcpu(struct kvm *kvm);
u32 kvm_dirty_ring_get_rsvd_entries(struct kvm *kvm);
int kvm_dirty_ring_alloc(struct kvm *kvm, struct kvm_dirty_ring *ring,
diff --git a/virt/kvm/dirty_ring.c b/virt/kvm/dirty_ring.c
index 02bc6b00d76c..6662c302a3fa 100644
--- a/virt/kvm/dirty_ring.c
+++ b/virt/kvm/dirty_ring.c
@@ -21,9 +21,16 @@ u32 kvm_dirty_ring_get_rsvd_entries(struct kvm *kvm)
return KVM_DIRTY_RING_RSVD_ENTRIES + kvm_cpu_dirty_log_size(kvm);
}
-bool kvm_use_dirty_bitmap(struct kvm *kvm)
+bool kvm_use_dirty_bitmap(struct kvm *kvm, bool shared)
{
- lockdep_assert_held(&kvm->slots_lock);
+ if (shared)
+ /*
+ * In shared mode, racing with dirty log mode changes is
+ * tolerated.
+ */
+ lockdep_assert_held(&kvm->srcu);
+ else
+ lockdep_assert_held(&kvm->slots_lock);
return !kvm->dirty_ring_size || kvm->dirty_ring_with_bitmap;
}
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 18f29ef93543..bb1ec5556d76 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1687,7 +1687,7 @@ static int kvm_prepare_memory_region(struct kvm *kvm,
new->dirty_bitmap = NULL;
else if (old && old->dirty_bitmap)
new->dirty_bitmap = old->dirty_bitmap;
- else if (kvm_use_dirty_bitmap(kvm)) {
+ else if (kvm_use_dirty_bitmap(kvm, false)) {
r = kvm_alloc_dirty_bitmap(new);
if (r)
return r;
@@ -2162,7 +2162,7 @@ int kvm_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log,
unsigned long any = 0;
/* Dirty ring tracking may be exclusive to dirty log tracking */
- if (!kvm_use_dirty_bitmap(kvm))
+ if (!kvm_use_dirty_bitmap(kvm, false))
return -ENXIO;
*memslot = NULL;
@@ -2216,7 +2216,8 @@ EXPORT_SYMBOL_GPL(kvm_get_dirty_log);
* exiting to userspace will be logged for the next call.
*
*/
-static int kvm_get_dirty_log_protect(struct kvm *kvm, struct kvm_dirty_log *log)
+static int kvm_get_dirty_log_protect(struct kvm *kvm, struct kvm_dirty_log *log,
+ bool protect)
{
struct kvm_memslots *slots;
struct kvm_memory_slot *memslot;
@@ -2224,10 +2225,9 @@ static int kvm_get_dirty_log_protect(struct kvm *kvm, struct kvm_dirty_log *log)
unsigned long n;
unsigned long *dirty_bitmap;
unsigned long *dirty_bitmap_buffer;
- bool flush;
/* Dirty ring tracking may be exclusive to dirty log tracking */
- if (!kvm_use_dirty_bitmap(kvm))
+ if (!kvm_use_dirty_bitmap(kvm, !protect))
return -ENXIO;
as_id = log->slot >> 16;
@@ -2242,21 +2242,30 @@ static int kvm_get_dirty_log_protect(struct kvm *kvm, struct kvm_dirty_log *log)
dirty_bitmap = memslot->dirty_bitmap;
+ if (protect)
+ lockdep_assert_held(&kvm->slots_lock);
+ else
+ lockdep_assert_held(&kvm->srcu);
+ /*
+ * kvm_arch_sync_dirty_log() must be safe to call with either kvm->srcu
+ * held OR the slots lock held.
+ */
kvm_arch_sync_dirty_log(kvm, memslot);
n = kvm_dirty_bitmap_bytes(memslot);
- flush = false;
- if (kvm->manual_dirty_log_protect) {
+ if (!protect) {
/*
- * Unlike kvm_get_dirty_log, we always return false in *flush,
- * because no flush is needed until KVM_CLEAR_DIRTY_LOG. There
- * is some code duplication between this function and
- * kvm_get_dirty_log, but hopefully all architecture
- * transition to kvm_get_dirty_log_protect and kvm_get_dirty_log
- * can be eliminated.
+ * Unlike kvm_get_dirty_log, we never flush, because no flush is
+ * needed until KVM_CLEAR_DIRTY_LOG. There is some code
+ * duplication between this function and kvm_get_dirty_log, but
+ * hopefully all architecture transition to
+ * kvm_get_dirty_log_protect and kvm_get_dirty_log can be
+ * eliminated.
*/
dirty_bitmap_buffer = dirty_bitmap;
} else {
+ bool flush;
+
dirty_bitmap_buffer = kvm_second_dirty_bitmap(memslot);
memset(dirty_bitmap_buffer, 0, n);
@@ -2277,10 +2286,10 @@ static int kvm_get_dirty_log_protect(struct kvm *kvm, struct kvm_dirty_log *log)
offset, mask);
}
KVM_MMU_UNLOCK(kvm);
- }
- if (flush)
- kvm_flush_remote_tlbs_memslot(kvm, memslot);
+ if (flush)
+ kvm_flush_remote_tlbs_memslot(kvm, memslot);
+ }
if (copy_to_user(log->dirty_bitmap, dirty_bitmap_buffer, n))
return -EFAULT;
@@ -2310,13 +2319,30 @@ static int kvm_get_dirty_log_protect(struct kvm *kvm, struct kvm_dirty_log *log)
static int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
struct kvm_dirty_log *log)
{
- int r;
+ bool protect;
+ int r, idx;
- mutex_lock(&kvm->slots_lock);
+ /*
+ * Only protect if manual protection isn't enabled.
+ */
+ protect = !kvm->manual_dirty_log_protect;
- r = kvm_get_dirty_log_protect(kvm, log);
+ /*
+ * If we are only collecting the dirty log and not clearing it,
+ * the srcu lock is sufficient.
+ */
+ if (protect)
+ mutex_lock(&kvm->slots_lock);
+ else
+ idx = srcu_read_lock(&kvm->srcu);
+
+ r = kvm_get_dirty_log_protect(kvm, log, protect);
+
+ if (protect)
+ mutex_unlock(&kvm->slots_lock);
+ else
+ srcu_read_unlock(&kvm->srcu, idx);
- mutex_unlock(&kvm->slots_lock);
return r;
}
@@ -2339,7 +2365,7 @@ static int kvm_clear_dirty_log_protect(struct kvm *kvm,
bool flush;
/* Dirty ring tracking may be exclusive to dirty log tracking */
- if (!kvm_use_dirty_bitmap(kvm))
+ if (!kvm_use_dirty_bitmap(kvm, false))
return -ENXIO;
as_id = log->slot >> 16;
base-commit: a6ad54137af92535cfe32e19e5f3bc1bb7dbd383
--
2.51.0.618.g983fd99d29-goog
* [PATCH 2/2] KVM: selftests: Add parallel KVM_GET_DIRTY_LOG to dirty_log_perf_test
2025-09-30 17:28 [PATCH 1/2] KVM: For manual-protect GET_DIRTY_LOG, do not hold slots lock James Houghton
@ 2025-09-30 17:28 ` James Houghton
2025-10-01 11:50 ` [PATCH 1/2] KVM: For manual-protect GET_DIRTY_LOG, do not hold slots lock kernel test robot
2025-10-06 7:33 ` Dan Carpenter
2 siblings, 0 replies; 5+ messages in thread
From: James Houghton @ 2025-09-30 17:28 UTC (permalink / raw)
To: Paolo Bonzini, Sean Christopherson; +Cc: James Houghton, kvm, linux-kernel
The parallelism is by memslot. This is useful because KVM no longer
serializes KVM_GET_DIRTY_LOG if KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 is
enabled.
Signed-off-by: James Houghton <jthoughton@google.com>
---
.../selftests/kvm/dirty_log_perf_test.c | 20 ++++++++--
.../testing/selftests/kvm/include/memstress.h | 2 +
tools/testing/selftests/kvm/lib/kvm_util.c | 2 +
tools/testing/selftests/kvm/lib/memstress.c | 40 +++++++++++++++++++
4 files changed, 61 insertions(+), 3 deletions(-)
diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c
index e79817bd0e29..8a5f289c4966 100644
--- a/tools/testing/selftests/kvm/dirty_log_perf_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c
@@ -131,8 +131,18 @@ struct test_params {
int slots;
uint32_t write_percent;
bool random_access;
+ bool parallel_get_dirty_log;
};
+static void get_dirty_log(struct kvm_vm *vm, unsigned long *bitmaps[],
+ struct test_params *p)
+{
+ if (p->parallel_get_dirty_log)
+ memstress_get_dirty_log_parallel(vm, bitmaps, p->slots);
+ else
+ memstress_get_dirty_log(vm, bitmaps, p->slots);
+}
+
static void run_test(enum vm_guest_mode mode, void *arg)
{
struct test_params *p = arg;
@@ -230,7 +240,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
iteration, ts_diff.tv_sec, ts_diff.tv_nsec);
clock_gettime(CLOCK_MONOTONIC, &start);
- memstress_get_dirty_log(vm, bitmaps, p->slots);
+ get_dirty_log(vm, bitmaps, p);
ts_diff = timespec_elapsed(start);
get_dirty_log_total = timespec_add(get_dirty_log_total,
ts_diff);
@@ -292,7 +302,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
static void help(char *name)
{
puts("");
- printf("usage: %s [-h] [-a] [-i iterations] [-p offset] [-g] "
+ printf("usage: %s [-h] [-a] [-i iterations] [-p offset] [-g] [-l] "
"[-m mode] [-n] [-b vcpu bytes] [-v vcpus] [-o] [-r random seed ] [-s mem type]"
"[-x memslots] [-w percentage] [-c physical cpus to run test on]\n", name);
puts("");
@@ -305,6 +315,7 @@ static void help(char *name)
" and writes will be tracked as soon as dirty logging is\n"
" enabled on the memslot (i.e. KVM_DIRTY_LOG_INITIALLY_SET\n"
" is not enabled).\n");
+ printf(" -l: Do KVM_GET_DIRTY_LOG calls for each memslot in parallel.\n");
printf(" -p: specify guest physical test memory offset\n"
" Warning: a low offset can conflict with the loaded test code.\n");
guest_modes_help();
@@ -355,7 +366,7 @@ int main(int argc, char *argv[])
guest_modes_append_default();
- while ((opt = getopt(argc, argv, "ab:c:eghi:m:nop:r:s:v:x:w:")) != -1) {
+ while ((opt = getopt(argc, argv, "ab:c:eghi:lm:nop:r:s:v:x:w:")) != -1) {
switch (opt) {
case 'a':
p.random_access = true;
@@ -379,6 +390,9 @@ int main(int argc, char *argv[])
case 'i':
p.iterations = atoi_positive("Number of iterations", optarg);
break;
+ case 'l':
+ p.parallel_get_dirty_log = true;
+ break;
case 'm':
guest_modes_cmdline(optarg);
break;
diff --git a/tools/testing/selftests/kvm/include/memstress.h b/tools/testing/selftests/kvm/include/memstress.h
index 9071eb6dea60..3e6ad2cdec80 100644
--- a/tools/testing/selftests/kvm/include/memstress.h
+++ b/tools/testing/selftests/kvm/include/memstress.h
@@ -74,6 +74,8 @@ void memstress_setup_nested(struct kvm_vm *vm, int nr_vcpus, struct kvm_vcpu *vc
void memstress_enable_dirty_logging(struct kvm_vm *vm, int slots);
void memstress_disable_dirty_logging(struct kvm_vm *vm, int slots);
void memstress_get_dirty_log(struct kvm_vm *vm, unsigned long *bitmaps[], int slots);
+void memstress_get_dirty_log_parallel(struct kvm_vm *vm, unsigned long *bitmaps[],
+ int slots);
void memstress_clear_dirty_log(struct kvm_vm *vm, unsigned long *bitmaps[],
int slots, uint64_t pages_per_slot);
unsigned long **memstress_alloc_bitmaps(int slots, uint64_t pages_per_slot);
diff --git a/tools/testing/selftests/kvm/lib/memstress.c b/tools/testing/selftests/kvm/lib/memstress.c
index 557c0a0a5658..abbd96a1c3ba 100644
--- a/tools/testing/selftests/kvm/lib/memstress.c
+++ b/tools/testing/selftests/kvm/lib/memstress.c
@@ -40,6 +40,12 @@ static bool all_vcpu_threads_running;
static struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
+struct get_dirty_log_args {
+ struct kvm_vm *vm;
+ unsigned long *bitmap;
+ int slot;
+};
+
/*
* Continuously write to the first 8 bytes of each page in the
* specified region.
@@ -341,6 +347,15 @@ void memstress_disable_dirty_logging(struct kvm_vm *vm, int slots)
toggle_dirty_logging(vm, slots, false);
}
+static void *get_dirty_log_worker(void *arg)
+{
+ struct get_dirty_log_args *args = arg;
+
+ kvm_vm_get_dirty_log(args->vm, args->slot, args->bitmap);
+
+ return NULL;
+}
+
void memstress_get_dirty_log(struct kvm_vm *vm, unsigned long *bitmaps[], int slots)
{
int i;
@@ -352,6 +367,31 @@ void memstress_get_dirty_log(struct kvm_vm *vm, unsigned long *bitmaps[], int sl
}
}
+void memstress_get_dirty_log_parallel(struct kvm_vm *vm, unsigned long *bitmaps[],
+ int slots)
+{
+ struct {
+ pthread_t thd;
+ struct get_dirty_log_args args;
+ } *threads;
+ int i;
+
+ threads = malloc(slots * sizeof(*threads));
+
+ for (i = 0; i < slots; i++) {
+ threads[i].args.vm = vm;
+ threads[i].args.slot = MEMSTRESS_MEM_SLOT_INDEX + i;
+ threads[i].args.bitmap = bitmaps[i];
+ pthread_create(&threads[i].thd, NULL, get_dirty_log_worker,
+ &threads[i].args);
+ }
+
+ for (i = 0; i < slots; i++)
+ pthread_join(threads[i].thd, NULL);
+
+ free(threads);
+}
+
void memstress_clear_dirty_log(struct kvm_vm *vm, unsigned long *bitmaps[],
int slots, uint64_t pages_per_slot)
{
--
2.51.0.618.g983fd99d29-goog
* Re: [PATCH 1/2] KVM: For manual-protect GET_DIRTY_LOG, do not hold slots lock
2025-09-30 17:28 [PATCH 1/2] KVM: For manual-protect GET_DIRTY_LOG, do not hold slots lock James Houghton
2025-09-30 17:28 ` [PATCH 2/2] KVM: selftests: Add parallel KVM_GET_DIRTY_LOG to dirty_log_perf_test James Houghton
@ 2025-10-01 11:50 ` kernel test robot
2025-10-06 7:33 ` Dan Carpenter
2 siblings, 0 replies; 5+ messages in thread
From: kernel test robot @ 2025-10-01 11:50 UTC (permalink / raw)
To: James Houghton, Paolo Bonzini, Sean Christopherson
Cc: llvm, oe-kbuild-all, James Houghton, kvm, linux-kernel
Hi James,
kernel test robot noticed the following build warnings:
[auto build test WARNING on a6ad54137af92535cfe32e19e5f3bc1bb7dbd383]
url: https://github.com/intel-lab-lkp/linux/commits/James-Houghton/KVM-selftests-Add-parallel-KVM_GET_DIRTY_LOG-to-dirty_log_perf_test/20251001-013306
base: a6ad54137af92535cfe32e19e5f3bc1bb7dbd383
patch link: https://lore.kernel.org/r/20250930172850.598938-1-jthoughton%40google.com
patch subject: [PATCH 1/2] KVM: For manual-protect GET_DIRTY_LOG, do not hold slots lock
config: x86_64-buildonly-randconfig-005-20251001 (https://download.01.org/0day-ci/archive/20251001/202510011941.dJGxEiZE-lkp@intel.com/config)
compiler: clang version 20.1.8 (https://github.com/llvm/llvm-project 87f0227cb60147a26a1eeb4fb06e3b505e9c7261)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20251001/202510011941.dJGxEiZE-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202510011941.dJGxEiZE-lkp@intel.com/
All warnings (new ones prefixed by >>):
>> Warning: arch/x86/kvm/../../../virt/kvm/kvm_main.c:2220 function parameter 'protect' not described in 'kvm_get_dirty_log_protect'
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [PATCH 1/2] KVM: For manual-protect GET_DIRTY_LOG, do not hold slots lock
2025-09-30 17:28 [PATCH 1/2] KVM: For manual-protect GET_DIRTY_LOG, do not hold slots lock James Houghton
2025-09-30 17:28 ` [PATCH 2/2] KVM: selftests: Add parallel KVM_GET_DIRTY_LOG to dirty_log_perf_test James Houghton
2025-10-01 11:50 ` [PATCH 1/2] KVM: For manual-protect GET_DIRTY_LOG, do not hold slots lock kernel test robot
@ 2025-10-06 7:33 ` Dan Carpenter
2025-10-08 22:31 ` James Houghton
2 siblings, 1 reply; 5+ messages in thread
From: Dan Carpenter @ 2025-10-06 7:33 UTC (permalink / raw)
To: oe-kbuild, James Houghton, Paolo Bonzini, Sean Christopherson
Cc: lkp, oe-kbuild-all, James Houghton, kvm, linux-kernel
Hi James,
kernel test robot noticed the following build warnings:
url: https://github.com/intel-lab-lkp/linux/commits/James-Houghton/KVM-selftests-Add-parallel-KVM_GET_DIRTY_LOG-to-dirty_log_perf_test/20251001-013306
base: a6ad54137af92535cfe32e19e5f3bc1bb7dbd383
patch link: https://lore.kernel.org/r/20250930172850.598938-1-jthoughton%40google.com
patch subject: [PATCH 1/2] KVM: For manual-protect GET_DIRTY_LOG, do not hold slots lock
config: x86_64-randconfig-161-20251004 (https://download.01.org/0day-ci/archive/20251004/202510041919.LaZWBcDN-lkp@intel.com/config)
compiler: clang version 20.1.8 (https://github.com/llvm/llvm-project 87f0227cb60147a26a1eeb4fb06e3b505e9c7261)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
| Closes: https://lore.kernel.org/r/202510041919.LaZWBcDN-lkp@intel.com/
New smatch warnings:
arch/x86/kvm/../../../virt/kvm/kvm_main.c:2290 kvm_get_dirty_log_protect() error: uninitialized symbol 'flush'.
vim +/flush +2290 arch/x86/kvm/../../../virt/kvm/kvm_main.c
ba0513b5b8ffbcb virt/kvm/kvm_main.c Mario Smarduch 2015-01-15 2255 n = kvm_dirty_bitmap_bytes(memslot);
82fb1294f7ad3ee virt/kvm/kvm_main.c James Houghton 2025-09-30 2256 if (!protect) {
2a31b9db153530d virt/kvm/kvm_main.c Paolo Bonzini 2018-10-23 2257 /*
82fb1294f7ad3ee virt/kvm/kvm_main.c James Houghton 2025-09-30 2258 * Unlike kvm_get_dirty_log, we never flush, because no flush is
82fb1294f7ad3ee virt/kvm/kvm_main.c James Houghton 2025-09-30 2259 * needed until KVM_CLEAR_DIRTY_LOG. There is some code
82fb1294f7ad3ee virt/kvm/kvm_main.c James Houghton 2025-09-30 2260 * duplication between this function and kvm_get_dirty_log, but
82fb1294f7ad3ee virt/kvm/kvm_main.c James Houghton 2025-09-30 2261 * hopefully all architecture transition to
82fb1294f7ad3ee virt/kvm/kvm_main.c James Houghton 2025-09-30 2262 * kvm_get_dirty_log_protect and kvm_get_dirty_log can be
82fb1294f7ad3ee virt/kvm/kvm_main.c James Houghton 2025-09-30 2263 * eliminated.
2a31b9db153530d virt/kvm/kvm_main.c Paolo Bonzini 2018-10-23 2264 */
2a31b9db153530d virt/kvm/kvm_main.c Paolo Bonzini 2018-10-23 2265 dirty_bitmap_buffer = dirty_bitmap;
2a31b9db153530d virt/kvm/kvm_main.c Paolo Bonzini 2018-10-23 2266 } else {
82fb1294f7ad3ee virt/kvm/kvm_main.c James Houghton 2025-09-30 2267 bool flush;
flush needs to be initialized to false.
82fb1294f7ad3ee virt/kvm/kvm_main.c James Houghton 2025-09-30 2268
03133347b4452ef virt/kvm/kvm_main.c Claudio Imbrenda 2018-04-30 2269 dirty_bitmap_buffer = kvm_second_dirty_bitmap(memslot);
ba0513b5b8ffbcb virt/kvm/kvm_main.c Mario Smarduch 2015-01-15 2270 memset(dirty_bitmap_buffer, 0, n);
ba0513b5b8ffbcb virt/kvm/kvm_main.c Mario Smarduch 2015-01-15 2271
531810caa9f4bc9 virt/kvm/kvm_main.c Ben Gardon 2021-02-02 2272 KVM_MMU_LOCK(kvm);
ba0513b5b8ffbcb virt/kvm/kvm_main.c Mario Smarduch 2015-01-15 2273 for (i = 0; i < n / sizeof(long); i++) {
ba0513b5b8ffbcb virt/kvm/kvm_main.c Mario Smarduch 2015-01-15 2274 unsigned long mask;
ba0513b5b8ffbcb virt/kvm/kvm_main.c Mario Smarduch 2015-01-15 2275 gfn_t offset;
ba0513b5b8ffbcb virt/kvm/kvm_main.c Mario Smarduch 2015-01-15 2276
ba0513b5b8ffbcb virt/kvm/kvm_main.c Mario Smarduch 2015-01-15 2277 if (!dirty_bitmap[i])
ba0513b5b8ffbcb virt/kvm/kvm_main.c Mario Smarduch 2015-01-15 2278 continue;
ba0513b5b8ffbcb virt/kvm/kvm_main.c Mario Smarduch 2015-01-15 2279
0dff084607bd555 virt/kvm/kvm_main.c Sean Christopherson 2020-02-18 2280 flush = true;
ba0513b5b8ffbcb virt/kvm/kvm_main.c Mario Smarduch 2015-01-15 2281 mask = xchg(&dirty_bitmap[i], 0);
ba0513b5b8ffbcb virt/kvm/kvm_main.c Mario Smarduch 2015-01-15 2282 dirty_bitmap_buffer[i] = mask;
ba0513b5b8ffbcb virt/kvm/kvm_main.c Mario Smarduch 2015-01-15 2283
ba0513b5b8ffbcb virt/kvm/kvm_main.c Mario Smarduch 2015-01-15 2284 offset = i * BITS_PER_LONG;
58d2930f4ee335a virt/kvm/kvm_main.c Takuya Yoshikawa 2015-03-17 2285 kvm_arch_mmu_enable_log_dirty_pt_masked(kvm, memslot,
58d2930f4ee335a virt/kvm/kvm_main.c Takuya Yoshikawa 2015-03-17 2286 offset, mask);
58d2930f4ee335a virt/kvm/kvm_main.c Takuya Yoshikawa 2015-03-17 2287 }
531810caa9f4bc9 virt/kvm/kvm_main.c Ben Gardon 2021-02-02 2288 KVM_MMU_UNLOCK(kvm);
2a31b9db153530d virt/kvm/kvm_main.c Paolo Bonzini 2018-10-23 2289
0dff084607bd555 virt/kvm/kvm_main.c Sean Christopherson 2020-02-18 @2290 if (flush)
Either uninitialized or true. Never false.
619b5072443c05c virt/kvm/kvm_main.c David Matlack 2023-08-11 2291 kvm_flush_remote_tlbs_memslot(kvm, memslot);
82fb1294f7ad3ee virt/kvm/kvm_main.c James Houghton 2025-09-30 2292 }
0dff084607bd555 virt/kvm/kvm_main.c Sean Christopherson 2020-02-18 2293
ba0513b5b8ffbcb virt/kvm/kvm_main.c Mario Smarduch 2015-01-15 2294 if (copy_to_user(log->dirty_bitmap, dirty_bitmap_buffer, n))
58d6db349172786 virt/kvm/kvm_main.c Markus Elfring 2017-01-22 2295 return -EFAULT;
58d6db349172786 virt/kvm/kvm_main.c Markus Elfring 2017-01-22 2296 return 0;
ba0513b5b8ffbcb virt/kvm/kvm_main.c Mario Smarduch 2015-01-15 2297 }
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
2025-10-06 7:33 ` Dan Carpenter
@ 2025-10-08 22:31 ` James Houghton
0 siblings, 0 replies; 5+ messages in thread
From: James Houghton @ 2025-10-08 22:31 UTC (permalink / raw)
To: Dan Carpenter
Cc: oe-kbuild, Paolo Bonzini, Sean Christopherson, lkp, oe-kbuild-all,
kvm, linux-kernel
On Mon, Oct 6, 2025 at 12:33 AM Dan Carpenter <dan.carpenter@linaro.org> wrote:
>
> Hi James,
>
> kernel test robot noticed the following build warnings:
>
> url: https://github.com/intel-lab-lkp/linux/commits/James-Houghton/KVM-selftests-Add-parallel-KVM_GET_DIRTY_LOG-to-dirty_log_perf_test/20251001-013306
> base: a6ad54137af92535cfe32e19e5f3bc1bb7dbd383
> patch link: https://lore.kernel.org/r/20250930172850.598938-1-jthoughton%40google.com
> patch subject: [PATCH 1/2] KVM: For manual-protect GET_DIRTY_LOG, do not hold slots lock
> config: x86_64-randconfig-161-20251004 (https://download.01.org/0day-ci/archive/20251004/202510041919.LaZWBcDN-lkp@intel.com/config)
> compiler: clang version 20.1.8 (https://github.com/llvm/llvm-project 87f0227cb60147a26a1eeb4fb06e3b505e9c7261)
>
> If you fix the issue in a separate patch/commit (i.e. not just a new version of
> the same patch/commit), kindly add following tags
> | Reported-by: kernel test robot <lkp@intel.com>
> | Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
> | Closes: https://lore.kernel.org/r/202510041919.LaZWBcDN-lkp@intel.com/
>
> New smatch warnings:
> arch/x86/kvm/../../../virt/kvm/kvm_main.c:2290 kvm_get_dirty_log_protect() error: uninitialized symbol 'flush'.
>
> vim +/flush +2290 arch/x86/kvm/../../../virt/kvm/kvm_main.c
>
> ba0513b5b8ffbcb virt/kvm/kvm_main.c Mario Smarduch 2015-01-15 2255 n = kvm_dirty_bitmap_bytes(memslot);
> 82fb1294f7ad3ee virt/kvm/kvm_main.c James Houghton 2025-09-30 2256 if (!protect) {
> 2a31b9db153530d virt/kvm/kvm_main.c Paolo Bonzini 2018-10-23 2257 /*
> 82fb1294f7ad3ee virt/kvm/kvm_main.c James Houghton 2025-09-30 2258 * Unlike kvm_get_dirty_log, we never flush, because no flush is
> 82fb1294f7ad3ee virt/kvm/kvm_main.c James Houghton 2025-09-30 2259 * needed until KVM_CLEAR_DIRTY_LOG. There is some code
> 82fb1294f7ad3ee virt/kvm/kvm_main.c James Houghton 2025-09-30 2260 * duplication between this function and kvm_get_dirty_log, but
> 82fb1294f7ad3ee virt/kvm/kvm_main.c James Houghton 2025-09-30 2261 * hopefully all architecture transition to
> 82fb1294f7ad3ee virt/kvm/kvm_main.c James Houghton 2025-09-30 2262 * kvm_get_dirty_log_protect and kvm_get_dirty_log can be
> 82fb1294f7ad3ee virt/kvm/kvm_main.c James Houghton 2025-09-30 2263 * eliminated.
> 2a31b9db153530d virt/kvm/kvm_main.c Paolo Bonzini 2018-10-23 2264 */
> 2a31b9db153530d virt/kvm/kvm_main.c Paolo Bonzini 2018-10-23 2265 dirty_bitmap_buffer = dirty_bitmap;
> 2a31b9db153530d virt/kvm/kvm_main.c Paolo Bonzini 2018-10-23 2266 } else {
> 82fb1294f7ad3ee virt/kvm/kvm_main.c James Houghton 2025-09-30 2267 bool flush;
>
> flush needs to be initialized to false.
I'll fix this and the other bug about not documenting the new
parameter, my mistake. :(
I think in a v2 I'll also merge kvm_get_dirty_log() into
kvm_get_dirty_log_protect(); might as well.
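For reference, the pattern smatch is flagging is a conditionally-set
flag: the xchg loop only writes flush when it finds a nonzero bitmap
word, so if nothing is dirty the later read is of an uninitialized
variable. A standalone sketch of the structure and the suggested
one-line fix (needs_flush is a hypothetical name, not KVM code):

```c
/*
 * Mirrors the structure of the flagged code: the loop may set the
 * flag, but if no bitmap word is nonzero it is never written, and the
 * caller then reads it uninitialized. Initializing it to false makes
 * the read well-defined in the all-clean case.
 */
static int needs_flush(const unsigned long *bitmap, int n)
{
	int flush = 0;	/* the fix: without "= 0" this may be read uninitialized */
	int i;

	for (i = 0; i < n; i++) {
		if (!bitmap[i])
			continue;
		flush = 1;	/* mirrors "flush = true;" in the xchg loop */
	}
	return flush;
}
```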