* [PATCH v5 0/2] KVM: x86: Include host suspended time in steal time
@ 2025-03-25 4:13 Suleiman Souhlal
2025-03-25 4:13 ` [PATCH v5 1/2] KVM: x86: Advance guest TSC after deep suspend Suleiman Souhlal
` (2 more replies)
0 siblings, 3 replies; 8+ messages in thread
From: Suleiman Souhlal @ 2025-03-25 4:13 UTC (permalink / raw)
To: Paolo Bonzini, Sean Christopherson
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
H. Peter Anvin, Chao Gao, David Woodhouse, Sergey Senozhatsky,
Konrad Rzeszutek Wilk, kvm, linux-kernel, ssouhlal,
Suleiman Souhlal
This series makes it so that the time that the host is suspended is
included in guests' steal time.
When the host resumes from a suspend, the guest thinks any task
that was running during the suspend ran for a long time, even though
the effective run time was much shorter, which can have
negative effects on scheduling.
To mitigate this issue, we include the time that the host was
suspended in steal time, which lets the guest subtract the
duration from the tasks' runtime.
In addition, we make the guest TSC behavior consistent whether the
host TSC went backwards or not.
v5:
- Fix grammar mistakes in commit message.
v4: https://lore.kernel.org/kvm/20250221053927.486476-1-suleiman@google.com/T/
- Advance guest TSC on suspends where host TSC goes backwards.
- Block vCPUs from running until resume notifier.
- Move suspend duration accounting out of machine-independent kvm to
x86.
- Merge code and documentation patches.
- Reworded documentation.
v3: https://lore.kernel.org/kvm/Z5AB-6bLRNLle27G@google.com/T/
- Use PM notifier instead of syscore ops (kvm_suspend()/kvm_resume()),
because the latter doesn't get called on shallow suspend.
- Don't call function under UACCESS.
- Whitespace.
v2: https://lore.kernel.org/lkml/20241118043745.1857272-1-suleiman@google.com/
- Accumulate suspend time at machine-independent kvm layer and track per-VCPU
instead of per-VM.
- Document changes.
v1: https://lore.kernel.org/kvm/20240710074410.770409-1-suleiman@google.com/
Suleiman Souhlal (2):
KVM: x86: Advance guest TSC after deep suspend.
KVM: x86: Include host suspended time in steal time
Documentation/virt/kvm/x86/msr.rst | 10 +++-
arch/x86/include/asm/kvm_host.h | 7 +++
arch/x86/kvm/x86.c | 84 +++++++++++++++++++++++++++++-
3 files changed, 98 insertions(+), 3 deletions(-)
--
2.49.0.395.g12beb8f557-goog
^ permalink raw reply [flat|nested] 8+ messages in thread
* [PATCH v5 1/2] KVM: x86: Advance guest TSC after deep suspend.
2025-03-25 4:13 [PATCH v5 0/2] KVM: x86: Include host suspended time in steal time Suleiman Souhlal
@ 2025-03-25 4:13 ` Suleiman Souhlal
2025-04-22 4:47 ` Tzung-Bi Shih
2025-03-25 4:13 ` [PATCH v5 2/2] KVM: x86: Include host suspended time in steal time Suleiman Souhlal
2025-04-08 1:36 ` [PATCH v5 0/2] " Suleiman Souhlal
2 siblings, 1 reply; 8+ messages in thread
From: Suleiman Souhlal @ 2025-03-25 4:13 UTC (permalink / raw)
To: Paolo Bonzini, Sean Christopherson
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
H. Peter Anvin, Chao Gao, David Woodhouse, Sergey Senozhatsky,
Konrad Rzeszutek Wilk, kvm, linux-kernel, ssouhlal,
Suleiman Souhlal
Advance the guest TSC to the current time after suspend when the host
TSC went backwards.
This makes the behavior consistent between suspends where the host TSC
resets and suspends where it doesn't, such as suspend-to-idle: in the
former case, the guests' TSCs would previously be "frozen" due to KVM's
backwards TSC prevention, while in the latter case they would advance.
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Suleiman Souhlal <suleiman@google.com>
---
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/x86.c | 33 ++++++++++++++++++++++++++++++++-
2 files changed, 33 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 32ae3aa50c7e38..f5ce2c2782142b 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1399,6 +1399,7 @@ struct kvm_arch {
u64 cur_tsc_offset;
u64 cur_tsc_generation;
int nr_vcpus_matched_tsc;
+ bool host_was_suspended;
u32 default_tsc_khz;
bool user_set_tsc;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 4b64ab350bcd4d..6b4ea3be66e814 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4971,7 +4971,37 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
/* Apply any externally detected TSC adjustments (due to suspend) */
if (unlikely(vcpu->arch.tsc_offset_adjustment)) {
- adjust_tsc_offset_host(vcpu, vcpu->arch.tsc_offset_adjustment);
+ unsigned long flags;
+ struct kvm *kvm;
+ bool advance;
+ u64 kernel_ns, l1_tsc, offset, tsc_now;
+
+ kvm = vcpu->kvm;
+ advance = kvm_get_time_and_clockread(&kernel_ns,
+ &tsc_now);
+ raw_spin_lock_irqsave(&kvm->arch.tsc_write_lock, flags);
+ /*
+ * Advance the guest's TSC to current time instead of only
+ * preventing it from going backwards, while making sure
+ * all the vCPUs use the same offset.
+ */
+ if (kvm->arch.host_was_suspended && advance) {
+ l1_tsc = nsec_to_cycles(vcpu,
+ vcpu->kvm->arch.kvmclock_offset +
+ kernel_ns);
+ offset = kvm_compute_l1_tsc_offset(vcpu,
+ l1_tsc);
+ kvm->arch.cur_tsc_offset = offset;
+ kvm_vcpu_write_tsc_offset(vcpu, offset);
+ } else if (advance)
+ kvm_vcpu_write_tsc_offset(vcpu,
+ vcpu->kvm->arch.cur_tsc_offset);
+ else
+ adjust_tsc_offset_host(vcpu,
+ vcpu->arch.tsc_offset_adjustment);
+ kvm->arch.host_was_suspended = 0;
+ raw_spin_unlock_irqrestore(&kvm->arch.tsc_write_lock,
+ flags);
vcpu->arch.tsc_offset_adjustment = 0;
kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
}
@@ -12640,6 +12670,7 @@ int kvm_arch_enable_virtualization_cpu(void)
kvm_make_request(KVM_REQ_MASTERCLOCK_UPDATE, vcpu);
}
+ kvm->arch.host_was_suspended = 1;
/*
* We have to disable TSC offset matching.. if you were
* booting a VM while issuing an S4 host suspend....
--
2.49.0.395.g12beb8f557-goog
^ permalink raw reply related [flat|nested] 8+ messages in thread
* [PATCH v5 2/2] KVM: x86: Include host suspended time in steal time
2025-03-25 4:13 [PATCH v5 0/2] KVM: x86: Include host suspended time in steal time Suleiman Souhlal
2025-03-25 4:13 ` [PATCH v5 1/2] KVM: x86: Advance guest TSC after deep suspend Suleiman Souhlal
@ 2025-03-25 4:13 ` Suleiman Souhlal
2025-04-23 7:57 ` Tzung-Bi Shih
2025-05-02 1:17 ` Sean Christopherson
2025-04-08 1:36 ` [PATCH v5 0/2] " Suleiman Souhlal
2 siblings, 2 replies; 8+ messages in thread
From: Suleiman Souhlal @ 2025-03-25 4:13 UTC (permalink / raw)
To: Paolo Bonzini, Sean Christopherson
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
H. Peter Anvin, Chao Gao, David Woodhouse, Sergey Senozhatsky,
Konrad Rzeszutek Wilk, kvm, linux-kernel, ssouhlal,
Suleiman Souhlal
When the host resumes from a suspend, the guest thinks any task
that was running during the suspend ran for a long time, even though
the effective run time was much shorter, which can have
negative effects on scheduling.
To mitigate this issue, the time that the host was suspended is included
in steal time, which lets the guest subtract the duration from the
tasks' runtime.
In order to implement this behavior, once the suspend notifier fires,
vCPUs trying to run will block until the resume notifier finishes. This is
because the freezing of userspace tasks happens between these two points.
Otherwise, vCPUs could run and get their suspend steal time
misaccounted, particularly if a vCPU runs after resume but before the
resume notifier fires.
Incidentally, doing this also addresses a potential race with the
suspend notifier setting PVCLOCK_GUEST_STOPPED, which could then get
cleared before the suspend actually happened.
One potential caveat is that in the case of a suspend happening during
a VM migration, the suspend time might not be accounted for.
A workaround would be for the VMM to ensure that the guest is entered
with KVM_RUN after resuming from suspend.
Signed-off-by: Suleiman Souhlal <suleiman@google.com>
---
Documentation/virt/kvm/x86/msr.rst | 10 ++++--
arch/x86/include/asm/kvm_host.h | 6 ++++
arch/x86/kvm/x86.c | 51 ++++++++++++++++++++++++++++++
3 files changed, 65 insertions(+), 2 deletions(-)
diff --git a/Documentation/virt/kvm/x86/msr.rst b/Documentation/virt/kvm/x86/msr.rst
index 3aecf2a70e7b43..48f2a8ca519548 100644
--- a/Documentation/virt/kvm/x86/msr.rst
+++ b/Documentation/virt/kvm/x86/msr.rst
@@ -294,8 +294,14 @@ data:
steal:
the amount of time in which this vCPU did not run, in
- nanoseconds. Time during which the vcpu is idle, will not be
- reported as steal time.
+ nanoseconds. This includes the time during which the host is
+ suspended. Time during which the vcpu is idle, might not be
+ reported as steal time. The case where the host suspends
+ during a VM migration might not be accounted if VCPUs aren't
+ entered post-resume, because KVM does not currently support
+ suspend/resuming the associated metadata. A workaround would
+ be for the VMM to ensure that the guest is entered with
+ KVM_RUN after resuming from suspend.
preempted:
indicate the vCPU who owns this struct is running or
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index f5ce2c2782142b..10634bbf2f5d21 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -124,6 +124,7 @@
#define KVM_REQ_HV_TLB_FLUSH \
KVM_ARCH_REQ_FLAGS(32, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
#define KVM_REQ_UPDATE_PROTECTED_GUEST_STATE KVM_ARCH_REQ(34)
+#define KVM_REQ_WAIT_FOR_RESUME KVM_ARCH_REQ(35)
#define CR0_RESERVED_BITS \
(~(unsigned long)(X86_CR0_PE | X86_CR0_MP | X86_CR0_EM | X86_CR0_TS \
@@ -917,8 +918,13 @@ struct kvm_vcpu_arch {
struct {
u8 preempted;
+ bool host_suspended;
u64 msr_val;
u64 last_steal;
+ u64 last_suspend;
+ u64 suspend_ns;
+ u64 last_suspend_ns;
+ wait_queue_head_t resume_waitq;
struct gfn_to_hva_cache cache;
} st;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 6b4ea3be66e814..327d1831dc0746 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3717,6 +3717,8 @@ static void record_steal_time(struct kvm_vcpu *vcpu)
steal += current->sched_info.run_delay -
vcpu->arch.st.last_steal;
vcpu->arch.st.last_steal = current->sched_info.run_delay;
+ steal += vcpu->arch.st.suspend_ns - vcpu->arch.st.last_suspend_ns;
+ vcpu->arch.st.last_suspend_ns = vcpu->arch.st.suspend_ns;
unsafe_put_user(steal, &st->steal, out);
version += 1;
@@ -6930,6 +6932,19 @@ long kvm_arch_vm_compat_ioctl(struct file *filp, unsigned int ioctl,
}
#endif
+static void wait_for_resume(struct kvm_vcpu *vcpu)
+{
+ wait_event_interruptible(vcpu->arch.st.resume_waitq,
+ vcpu->arch.st.host_suspended == 0);
+
+ /*
+ * This might happen if we blocked here before the freezing of tasks
+ * and we get woken up by the freezer.
+ */
+ if (vcpu->arch.st.host_suspended)
+ kvm_make_request(KVM_REQ_WAIT_FOR_RESUME, vcpu);
+}
+
#ifdef CONFIG_HAVE_KVM_PM_NOTIFIER
static int kvm_arch_suspend_notifier(struct kvm *kvm)
{
@@ -6939,6 +6954,19 @@ static int kvm_arch_suspend_notifier(struct kvm *kvm)
mutex_lock(&kvm->lock);
kvm_for_each_vcpu(i, vcpu, kvm) {
+ vcpu->arch.st.last_suspend = ktime_get_boottime_ns();
+ /*
+ * Tasks get thawed before the resume notifier has been called
+ * so we need to block vCPUs until the resume notifier has run.
+ * Otherwise, suspend steal time might get applied too late,
+ * and get accounted to the wrong guest task.
+ * This also ensures that the guest paused bit set below
+ * doesn't get checked and cleared before the host actually
+ * suspends.
+ */
+ vcpu->arch.st.host_suspended = 1;
+ kvm_make_request(KVM_REQ_WAIT_FOR_RESUME, vcpu);
+
if (!vcpu->arch.pv_time.active)
continue;
@@ -6954,12 +6982,32 @@ static int kvm_arch_suspend_notifier(struct kvm *kvm)
return ret ? NOTIFY_BAD : NOTIFY_DONE;
}
+static int kvm_arch_resume_notifier(struct kvm *kvm)
+{
+ struct kvm_vcpu *vcpu;
+ unsigned long i;
+
+ mutex_lock(&kvm->lock);
+ kvm_for_each_vcpu(i, vcpu, kvm) {
+ vcpu->arch.st.host_suspended = 0;
+ vcpu->arch.st.suspend_ns += ktime_get_boottime_ns() -
+ vcpu->arch.st.last_suspend;
+ wake_up_interruptible(&vcpu->arch.st.resume_waitq);
+ }
+ mutex_unlock(&kvm->lock);
+
+ return NOTIFY_DONE;
+}
+
int kvm_arch_pm_notifier(struct kvm *kvm, unsigned long state)
{
switch (state) {
case PM_HIBERNATION_PREPARE:
case PM_SUSPEND_PREPARE:
return kvm_arch_suspend_notifier(kvm);
+ case PM_POST_HIBERNATION:
+ case PM_POST_SUSPEND:
+ return kvm_arch_resume_notifier(kvm);
}
return NOTIFY_DONE;
@@ -10813,6 +10861,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
r = 1;
goto out;
}
+ if (kvm_check_request(KVM_REQ_WAIT_FOR_RESUME, vcpu))
+ wait_for_resume(vcpu);
if (kvm_check_request(KVM_REQ_STEAL_UPDATE, vcpu))
record_steal_time(vcpu);
if (kvm_check_request(KVM_REQ_PMU, vcpu))
@@ -12343,6 +12393,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
if (r)
goto free_guest_fpu;
+ init_waitqueue_head(&vcpu->arch.st.resume_waitq);
kvm_xen_init_vcpu(vcpu);
vcpu_load(vcpu);
kvm_vcpu_after_set_cpuid(vcpu);
--
2.49.0.395.g12beb8f557-goog
^ permalink raw reply related [flat|nested] 8+ messages in thread
* Re: [PATCH v5 0/2] KVM: x86: Include host suspended time in steal time
2025-03-25 4:13 [PATCH v5 0/2] KVM: x86: Include host suspended time in steal time Suleiman Souhlal
2025-03-25 4:13 ` [PATCH v5 1/2] KVM: x86: Advance guest TSC after deep suspend Suleiman Souhlal
2025-03-25 4:13 ` [PATCH v5 2/2] KVM: x86: Include host suspended time in steal time Suleiman Souhlal
@ 2025-04-08 1:36 ` Suleiman Souhlal
2 siblings, 0 replies; 8+ messages in thread
From: Suleiman Souhlal @ 2025-04-08 1:36 UTC (permalink / raw)
To: Paolo Bonzini, Sean Christopherson
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
H. Peter Anvin, Chao Gao, David Woodhouse, Sergey Senozhatsky,
Konrad Rzeszutek Wilk, kvm, linux-kernel, ssouhlal
On Tue, Mar 25, 2025 at 1:13 PM Suleiman Souhlal <suleiman@google.com> wrote:
>
> This series makes it so that the time that the host is suspended is
> included in guests' steal time.
Friendly ping.
-- Suleiman
>
> When the host resumes from a suspend, the guest thinks any task
> that was running during the suspend ran for a long time, even though
> the effective run time was much shorter, which can end up having
> negative effects with scheduling.
>
> To mitigate this issue, we include the time that the host was
> suspended in steal time, which lets the guest subtract the
> duration from the tasks' runtime.
>
> In addition, we make the guest TSC behavior consistent whether the
> host TSC went backwards or not.
>
> v5:
> - Fix grammar mistakes in commit message.
>
> v4: https://lore.kernel.org/kvm/20250221053927.486476-1-suleiman@google.com/T/
> - Advance guest TSC on suspends where host TSC goes backwards.
> - Block vCPUs from running until resume notifier.
> - Move suspend duration accounting out of machine-independent kvm to
> x86.
> - Merge code and documentation patches.
> - Reworded documentation.
>
> v3: https://lore.kernel.org/kvm/Z5AB-6bLRNLle27G@google.com/T/
> - Use PM notifier instead of syscore ops (kvm_suspend()/kvm_resume()),
> because the latter doesn't get called on shallow suspend.
> - Don't call function under UACCESS.
> - Whitespace.
>
> v2: https://lore.kernel.org/lkml/20241118043745.1857272-1-suleiman@google.com/
> - Accumulate suspend time at machine-independent kvm layer and track per-VCPU
> instead of per-VM.
> - Document changes.
>
> v1: https://lore.kernel.org/kvm/20240710074410.770409-1-suleiman@google.com/
>
> Suleiman Souhlal (2):
> KVM: x86: Advance guest TSC after deep suspend.
> KVM: x86: Include host suspended time in steal time
>
> Documentation/virt/kvm/x86/msr.rst | 10 +++-
> arch/x86/include/asm/kvm_host.h | 7 +++
> arch/x86/kvm/x86.c | 84 +++++++++++++++++++++++++++++-
> 3 files changed, 98 insertions(+), 3 deletions(-)
>
> --
> 2.49.0.395.g12beb8f557-goog
>
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [PATCH v5 1/2] KVM: x86: Advance guest TSC after deep suspend.
2025-03-25 4:13 ` [PATCH v5 1/2] KVM: x86: Advance guest TSC after deep suspend Suleiman Souhlal
@ 2025-04-22 4:47 ` Tzung-Bi Shih
2025-05-01 23:49 ` Sean Christopherson
0 siblings, 1 reply; 8+ messages in thread
From: Tzung-Bi Shih @ 2025-04-22 4:47 UTC (permalink / raw)
To: Suleiman Souhlal
Cc: Paolo Bonzini, Sean Christopherson, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Chao Gao,
David Woodhouse, Sergey Senozhatsky, Konrad Rzeszutek Wilk, kvm,
linux-kernel, ssouhlal
On Tue, Mar 25, 2025 at 01:13:49PM +0900, Suleiman Souhlal wrote:
> Advance the guest TSC to the current time after suspend when the
> host TSC went backwards.
>
> This makes the behavior consistent between suspends where the host
> TSC resets and suspends where it doesn't, such as suspend-to-idle:
> in the former case, the guests' TSCs would previously be "frozen"
> due to KVM's backwards TSC prevention, while in the latter case they
> would advance.
>
> Suggested-by: Sean Christopherson <seanjc@google.com>
> Signed-off-by: Suleiman Souhlal <suleiman@google.com>
Tested by comparing `date` before and after suspend-to-RAM[1]:
echo deep >/sys/power/mem_sleep
echo $(date '+%s' -d '+3 minutes') >/sys/class/rtc/rtc0/wakealarm
echo mem >/sys/power/state
Without the patch, the guest's `date` is slower (~3 mins) than the host's
after resuming.
Tested-by: Tzung-Bi Shih <tzungbi@kernel.org>
[1]: https://www.kernel.org/doc/Documentation/power/states.txt
Some non-functional comments inline below.
> @@ -4971,7 +4971,37 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>
> /* Apply any externally detected TSC adjustments (due to suspend) */
> if (unlikely(vcpu->arch.tsc_offset_adjustment)) {
> - adjust_tsc_offset_host(vcpu, vcpu->arch.tsc_offset_adjustment);
> + unsigned long flags;
> + struct kvm *kvm;
> + bool advance;
> + u64 kernel_ns, l1_tsc, offset, tsc_now;
> +
> + kvm = vcpu->kvm;
It would be clearer (at least to me) to move the assignment into the declaration:
struct kvm *kvm = vcpu->kvm;
Other than that, the following code should make better use of the local
variable, e.g. s/vcpu->kvm/kvm/g.
> + advance = kvm_get_time_and_clockread(&kernel_ns,
> + &tsc_now);
> + raw_spin_lock_irqsave(&kvm->arch.tsc_write_lock, flags);
> + /*
> + * Advance the guest's TSC to current time instead of only
> + * preventing it from going backwards, while making sure
> + * all the vCPUs use the same offset.
> + */
> + if (kvm->arch.host_was_suspended && advance) {
> + l1_tsc = nsec_to_cycles(vcpu,
> + vcpu->kvm->arch.kvmclock_offset +
^^^^^^^^^
kvm
> + kernel_ns);
> + offset = kvm_compute_l1_tsc_offset(vcpu,
> + l1_tsc);
> + kvm->arch.cur_tsc_offset = offset;
> + kvm_vcpu_write_tsc_offset(vcpu, offset);
> + } else if (advance)
> + kvm_vcpu_write_tsc_offset(vcpu,
> + vcpu->kvm->arch.cur_tsc_offset);
^^^^^^^^^
kvm
> + else
> + adjust_tsc_offset_host(vcpu,
> + vcpu->arch.tsc_offset_adjustment);
Need braces in `else if` and `else` cases [2].
[2]: https://www.kernel.org/doc/html/latest/process/coding-style.html#placing-braces-and-spaces
> @@ -12640,6 +12670,7 @@ int kvm_arch_enable_virtualization_cpu(void)
> kvm_make_request(KVM_REQ_MASTERCLOCK_UPDATE, vcpu);
> }
>
> + kvm->arch.host_was_suspended = 1;
Given that it is a bool, how about using `true`?
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [PATCH v5 2/2] KVM: x86: Include host suspended time in steal time
2025-03-25 4:13 ` [PATCH v5 2/2] KVM: x86: Include host suspended time in steal time Suleiman Souhlal
@ 2025-04-23 7:57 ` Tzung-Bi Shih
2025-05-02 1:17 ` Sean Christopherson
1 sibling, 0 replies; 8+ messages in thread
From: Tzung-Bi Shih @ 2025-04-23 7:57 UTC (permalink / raw)
To: Suleiman Souhlal
Cc: Paolo Bonzini, Sean Christopherson, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Chao Gao,
David Woodhouse, Sergey Senozhatsky, Konrad Rzeszutek Wilk, kvm,
linux-kernel, ssouhlal
On Tue, Mar 25, 2025 at 01:13:50PM +0900, Suleiman Souhlal wrote:
> When the host resumes from a suspend, the guest thinks any task
> that was running during the suspend ran for a long time, even though
> the effective run time was much shorter, which can end up having
> negative effects with scheduling.
>
> [...]
>
> Signed-off-by: Suleiman Souhlal <suleiman@google.com>
Saw the corresponding host suspended time has been compensated in
update_rq_clock_task():
Tested-by: Tzung-Bi Shih <tzungbi@kernel.org>
With 1 minor comment:
Reviewed-by: Tzung-Bi Shih <tzungbi@kernel.org>
> @@ -917,8 +918,13 @@ struct kvm_vcpu_arch {
>
> struct {
> u8 preempted;
> + bool host_suspended;
Use it in a bool manner.
> +static void wait_for_resume(struct kvm_vcpu *vcpu)
> +{
> + wait_event_interruptible(vcpu->arch.st.resume_waitq,
> + vcpu->arch.st.host_suspended == 0);
E.g.: !vcpu->arch.st.host_suspended.
> @@ -6939,6 +6954,19 @@ static int kvm_arch_suspend_notifier(struct kvm *kvm)
>
> mutex_lock(&kvm->lock);
> kvm_for_each_vcpu(i, vcpu, kvm) {
> + vcpu->arch.st.last_suspend = ktime_get_boottime_ns();
> + /*
> + * Tasks get thawed before the resume notifier has been called
> + * so we need to block vCPUs until the resume notifier has run.
> + * Otherwise, suspend steal time might get applied too late,
> + * and get accounted to the wrong guest task.
> + * This also ensures that the guest paused bit set below
> + * doesn't get checked and cleared before the host actually
> + * suspends.
> + */
> + vcpu->arch.st.host_suspended = 1;
E.g.: true.
> +static int kvm_arch_resume_notifier(struct kvm *kvm)
> +{
> + struct kvm_vcpu *vcpu;
> + unsigned long i;
> +
> + mutex_lock(&kvm->lock);
> + kvm_for_each_vcpu(i, vcpu, kvm) {
> + vcpu->arch.st.host_suspended = 0;
E.g.: false.
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [PATCH v5 1/2] KVM: x86: Advance guest TSC after deep suspend.
2025-04-22 4:47 ` Tzung-Bi Shih
@ 2025-05-01 23:49 ` Sean Christopherson
0 siblings, 0 replies; 8+ messages in thread
From: Sean Christopherson @ 2025-05-01 23:49 UTC (permalink / raw)
To: Tzung-Bi Shih
Cc: Suleiman Souhlal, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Chao Gao,
David Woodhouse, Sergey Senozhatsky, Konrad Rzeszutek Wilk, kvm,
linux-kernel, ssouhlal
On Tue, Apr 22, 2025, Tzung-Bi Shih wrote:
> On Tue, Mar 25, 2025 at 01:13:49PM +0900, Suleiman Souhlal wrote:
> > Advance the guest TSC to the current time after suspend when the
> > host TSC went backwards.
> >
> > This makes the behavior consistent between suspends where the host
> > TSC resets and suspends where it doesn't, such as suspend-to-idle:
> > in the former case, the guests' TSCs would previously be "frozen"
> > due to KVM's backwards TSC prevention, while in the latter case
> > they would advance.
> >
> > Suggested-by: Sean Christopherson <seanjc@google.com>
> > Signed-off-by: Suleiman Souhlal <suleiman@google.com>
>
> Tested with comparing `date` before and after suspend-to-RAM[1]:
> echo deep >/sys/power/mem_sleep
> echo $(date '+%s' -d '+3 minutes') >/sys/class/rtc/rtc0/wakealarm
> echo mem >/sys/power/state
>
> Without the patch, the guest's `date` is slower (~3 mins) than the host's
> after resuming.
>
> Tested-by: Tzung-Bi Shih <tzungbi@kernel.org>
>
> [1]: https://www.kernel.org/doc/Documentation/power/states.txt
>
> Some non-functional comments inline below.
>
> > @@ -4971,7 +4971,37 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
> >
> > /* Apply any externally detected TSC adjustments (due to suspend) */
> > if (unlikely(vcpu->arch.tsc_offset_adjustment)) {
> > - adjust_tsc_offset_host(vcpu, vcpu->arch.tsc_offset_adjustment);
> > + unsigned long flags;
> > + struct kvm *kvm;
> > + bool advance;
> > + u64 kernel_ns, l1_tsc, offset, tsc_now;
> > +
> > + kvm = vcpu->kvm;
>
> It would be clearer (at least to me) to move the assignment into its declaration:
> struct kvm *kvm = vcpu->kvm;
>
> Other than that, the following code should make better use of the local
> variable, e.g. s/vcpu->kvm/kvm/g.
>
> > + advance = kvm_get_time_and_clockread(&kernel_ns,
> > + &tsc_now);
In addition to Tzung-Bi's feedback...
Please don't wrap at weird points, and align when you do wrap. The 80 char limit
isn't a super hard limit, and many of these wraps are well below that anyways.
advance = kvm_get_time_and_clockread(&kernel_ns, &tsc_now);
raw_spin_lock_irqsave(&kvm->arch.tsc_write_lock, flags);
/*
* Advance the guest's TSC to current time instead of only
* preventing it from going backwards, while making sure
* all the vCPUs use the same offset.
*/
if (kvm->arch.host_was_suspended && advance) {
l1_tsc = nsec_to_cycles(vcpu,
vcpu->kvm->arch.kvmclock_offset + kernel_ns);
offset = kvm_compute_l1_tsc_offset(vcpu, l1_tsc);
kvm->arch.cur_tsc_offset = offset;
kvm_vcpu_write_tsc_offset(vcpu, offset);
} else if (advance) {
kvm_vcpu_write_tsc_offset(vcpu, vcpu->kvm->arch.cur_tsc_offset);
} else {
adjust_tsc_offset_host(vcpu, vcpu->arch.tsc_offset_adjustment);
}
kvm->arch.host_was_suspended = 0;
raw_spin_unlock_irqrestore(&kvm->arch.tsc_write_lock, flags);
As for the correctness of this code with respect to masterclock and TSC
synchronization, I'm definitely going to have to stare even more, and probably
bring in at least Paolo for a consult, because KVM's TSC code is all kinds of
brittle and complex.
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [PATCH v5 2/2] KVM: x86: Include host suspended time in steal time
2025-03-25 4:13 ` [PATCH v5 2/2] KVM: x86: Include host suspended time in steal time Suleiman Souhlal
2025-04-23 7:57 ` Tzung-Bi Shih
@ 2025-05-02 1:17 ` Sean Christopherson
1 sibling, 0 replies; 8+ messages in thread
From: Sean Christopherson @ 2025-05-02 1:17 UTC (permalink / raw)
To: Suleiman Souhlal
Cc: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, x86, H. Peter Anvin, Chao Gao, David Woodhouse,
Sergey Senozhatsky, Konrad Rzeszutek Wilk, kvm, linux-kernel,
ssouhlal
On Tue, Mar 25, 2025, Suleiman Souhlal wrote:
> When the host resumes from a suspend, the guest thinks any task
> that was running during the suspend ran for a long time, even though
> the effective run time was much shorter, which can end up having
> negative effects with scheduling.
>
> To mitigate this issue, the time that the host was suspended is included
> in steal time, which lets the guest subtract the duration from the
> tasks' runtime.
>
> In order to implement this behavior, once the suspend notifier fires,
> vCPUs trying to run will block until the resume notifier finishes. This is
> because the freezing of userspace tasks happens between these two points.
> It means that vCPUs could otherwise run and get their suspend steal
> time misaccounted, particularly if a vCPU would run after resume before
> the resume notifier fires.
> Incidentally, doing this also addresses a potential race with the
> suspend notifier setting PVCLOCK_GUEST_STOPPED, which could then get
> cleared before the suspend actually happened.
>
> One potential caveat is that in the case of a suspend happening during
> a VM migration, the suspend time might not be accounted for.
> A workaround would be for the VMM to ensure that the guest is entered
> with KVM_RUN after resuming from suspend.
Please rewrite this to state what changes are being made in imperative
mood, as commands. Describing the _effects_ of a change makes it
extremely difficult to understand whether the behavior is pre-patch or
post-patch.
E.g. for this
vCPUs trying to run will block until the resume notifier finishes
I had to look at the code to understand what this was saying, which largely
defeats the purpose of the changelog.
> Signed-off-by: Suleiman Souhlal <suleiman@google.com>
> ---
> Documentation/virt/kvm/x86/msr.rst | 10 ++++--
> arch/x86/include/asm/kvm_host.h | 6 ++++
> arch/x86/kvm/x86.c | 51 ++++++++++++++++++++++++++++++
> 3 files changed, 65 insertions(+), 2 deletions(-)
>
> diff --git a/Documentation/virt/kvm/x86/msr.rst b/Documentation/virt/kvm/x86/msr.rst
> index 3aecf2a70e7b43..48f2a8ca519548 100644
> --- a/Documentation/virt/kvm/x86/msr.rst
> +++ b/Documentation/virt/kvm/x86/msr.rst
> @@ -294,8 +294,14 @@ data:
>
> steal:
> the amount of time in which this vCPU did not run, in
> - nanoseconds. Time during which the vcpu is idle, will not be
> - reported as steal time.
> + nanoseconds. This includes the time during which the host is
> + suspended. Time during which the vcpu is idle, might not be
> + reported as steal time. The case where the host suspends
> + during a VM migration might not be accounted if VCPUs aren't
> + entered post-resume, because KVM does not currently support
> + suspend/resuming the associated metadata. A workaround would
> + be for the VMM to ensure that the guest is entered with
> + KVM_RUN after resuming from suspend.
Coming back to this with fresh eyes, I kinda feel like this needs an opt-in
somewhere. E.g. a KVM capability, or maybe a guest-side steal-time feature? Or
maybe we can squeak by with a module param based on your use case?
IIRC, there is a guest-side fix that is needed to not go completely off the rails
for large steal-time values. I.e. enabling this blindly could negatively effect
existings guests.
The forced wait behavior introduced in v4 also gives me pause, but that should
really just be about getting the code right, i.e. shouldn't go sideways as long
as the host kernel is bug free.
Ugh, actually, yeah, that part needs a guard. At the very least, it needs to be
conditional on steal-time being enabled. KVM most definitely should not block
vCPUs that aren't using steal-time, as that's a complete waste and will only make
the effects of suspend worse for the guest. At that point, having the guest
opt-in to the behavior is a pretty minor change, and it gives users a way to
opt-out if this is causing pain.
> preempted:
> indicate the vCPU who owns this struct is running or
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index f5ce2c2782142b..10634bbf2f5d21 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -124,6 +124,7 @@
> #define KVM_REQ_HV_TLB_FLUSH \
> KVM_ARCH_REQ_FLAGS(32, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
> #define KVM_REQ_UPDATE_PROTECTED_GUEST_STATE KVM_ARCH_REQ(34)
> +#define KVM_REQ_WAIT_FOR_RESUME KVM_ARCH_REQ(35)
>
> #define CR0_RESERVED_BITS \
> (~(unsigned long)(X86_CR0_PE | X86_CR0_MP | X86_CR0_EM | X86_CR0_TS \
> @@ -917,8 +918,13 @@ struct kvm_vcpu_arch {
>
> struct {
> u8 preempted;
> + bool host_suspended;
> u64 msr_val;
> u64 last_steal;
> + u64 last_suspend;
> + u64 suspend_ns;
> + u64 last_suspend_ns;
> + wait_queue_head_t resume_waitq;
> struct gfn_to_hva_cache cache;
> } st;
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 6b4ea3be66e814..327d1831dc0746 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -3717,6 +3717,8 @@ static void record_steal_time(struct kvm_vcpu *vcpu)
> steal += current->sched_info.run_delay -
> vcpu->arch.st.last_steal;
> vcpu->arch.st.last_steal = current->sched_info.run_delay;
> + steal += vcpu->arch.st.suspend_ns - vcpu->arch.st.last_suspend_ns;
> + vcpu->arch.st.last_suspend_ns = vcpu->arch.st.suspend_ns;
Isn't this just:
steal += vcpu->arch.st.suspend_ns;
vcpu->arch.st.suspend_ns = 0;
or am I missing something? I suspect you implemented the code this way to avoid
writing vcpu->arch.st.suspend_ns in this context, because you discovered that
record_steal_time() can run concurrently with kvm_arch_suspend_notifier(), i.e.
because vcpu->arch.st.suspend_ns was getting corrupted.
The above doesn't fully solve the problem; it just makes the badness less bad
and/or much less likely to be hit. E.g. if vcpu->arch.st.suspend_ns is advanced
between the first and second loads, KVM would fail to account the delta between
the two loads.
Unless I'm missing something, the obvious/easy thing is to make arch.st.suspend_ns
an atomic64_t, e.g.
if (unlikely(atomic64_read(&vcpu->arch.st.suspend_ns)))
steal += atomic64_xchg(&vcpu->arch.st.suspend_ns, 0);
and then on the resume side:
atomic64_add(suspend_ns, &vcpu->arch.st.suspend_ns);
kvm_make_request(KVM_REQ_STEAL_UPDATE, vcpu);
> unsafe_put_user(steal, &st->steal, out);
>
> version += 1;
> @@ -6930,6 +6932,19 @@ long kvm_arch_vm_compat_ioctl(struct file *filp, unsigned int ioctl,
> }
> #endif
>
> +static void wait_for_resume(struct kvm_vcpu *vcpu)
> +{
> + wait_event_interruptible(vcpu->arch.st.resume_waitq,
> + vcpu->arch.st.host_suspended == 0);
> +
> + /*
> + * This might happen if we blocked here before the freezing of tasks
> + * and we get woken up by the freezer.
> + */
> + if (vcpu->arch.st.host_suspended)
> + kvm_make_request(KVM_REQ_WAIT_FOR_RESUME, vcpu);
I most definitely don't want to add custom waiting behavior for this. As this
code shows, ensuring a wakeup doesn't race with blocking isn't the easiest thing
in the world.
Off the top of my head, I can't think of any reason why we can't simply send the
vCPU into kvm_vcpu_block(), by treating the vCPU as completely non-runnable while
it is suspended.
> #ifdef CONFIG_HAVE_KVM_PM_NOTIFIER
> static int kvm_arch_suspend_notifier(struct kvm *kvm)
> {
> @@ -6939,6 +6954,19 @@ static int kvm_arch_suspend_notifier(struct kvm *kvm)
>
> mutex_lock(&kvm->lock);
> kvm_for_each_vcpu(i, vcpu, kvm) {
> + vcpu->arch.st.last_suspend = ktime_get_boottime_ns();
> + /*
> + * Tasks get thawed before the resume notifier has been called
> + * so we need to block vCPUs until the resume notifier has run.
> + * Otherwise, suspend steal time might get applied too late,
> + * and get accounted to the wrong guest task.
> + * This also ensures that the guest paused bit set below
> + * doesn't get checked and cleared before the host actually
> + * suspends.
> + */
> + vcpu->arch.st.host_suspended = 1;
We can definitely avoid this flag, e.g. by zeroing last_suspend in the resume
notifier, and using that to detect "host suspended".
> + kvm_make_request(KVM_REQ_WAIT_FOR_RESUME, vcpu);
> +
> if (!vcpu->arch.pv_time.active)
> continue;
>
> @@ -6954,12 +6982,32 @@ static int kvm_arch_suspend_notifier(struct kvm *kvm)
> return ret ? NOTIFY_BAD : NOTIFY_DONE;
> }
>
> +static int kvm_arch_resume_notifier(struct kvm *kvm)
> +{
> + struct kvm_vcpu *vcpu;
> + unsigned long i;
> +
> + mutex_lock(&kvm->lock);
No need for this, it provides zero protection and can (very, very theoretically)
trigger deadlock. The lock has already been dropped from the suspend notifier.
> + kvm_for_each_vcpu(i, vcpu, kvm) {
> + vcpu->arch.st.host_suspended = 0;
> + vcpu->arch.st.suspend_ns += ktime_get_boottime_ns() -
> + vcpu->arch.st.last_suspend;
> + wake_up_interruptible(&vcpu->arch.st.resume_waitq);
This needs a
kvm_make_request(KVM_REQ_STEAL_UPDATE, vcpu);
to ensure the suspend_ns time is accounted. kvm_arch_vcpu_load() probably
guarantees KVM_REQ_STEAL_UPDATE is set, but KVM shouldn't rely on that.
Completely untested, and I didn't add any new ABI, but something like this?
---
Documentation/virt/kvm/x86/msr.rst | 10 ++++--
arch/x86/include/asm/kvm_host.h | 2 ++
arch/x86/kvm/x86.c | 56 +++++++++++++++++++++++++++++-
3 files changed, 65 insertions(+), 3 deletions(-)
diff --git a/Documentation/virt/kvm/x86/msr.rst b/Documentation/virt/kvm/x86/msr.rst
index 3aecf2a70e7b..48f2a8ca5195 100644
--- a/Documentation/virt/kvm/x86/msr.rst
+++ b/Documentation/virt/kvm/x86/msr.rst
@@ -294,8 +294,14 @@ data:
steal:
the amount of time in which this vCPU did not run, in
- nanoseconds. Time during which the vcpu is idle, will not be
- reported as steal time.
+ nanoseconds. This includes the time during which the host is
+ suspended. Time during which the vcpu is idle might not be
+ reported as steal time. The case where the host suspends
+ during a VM migration might not be accounted if VCPUs aren't
+ entered post-resume, because KVM does not currently support
+ suspend/resuming the associated metadata. A workaround would
+ be for the VMM to ensure that the guest is entered with
+ KVM_RUN after resuming from suspend.
preempted:
indicate the vCPU who owns this struct is running or
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 8becf50d9ade..8a5ff888037a 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -931,6 +931,8 @@ struct kvm_vcpu_arch {
u8 preempted;
u64 msr_val;
u64 last_steal;
+ atomic64_t suspend_ns;
+ u64 suspend_ts;
struct gfn_to_hva_cache cache;
} st;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 73f4a85c72aa..b6120ebbb8fa 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3751,6 +3751,10 @@ static void record_steal_time(struct kvm_vcpu *vcpu)
steal += current->sched_info.run_delay -
vcpu->arch.st.last_steal;
vcpu->arch.st.last_steal = current->sched_info.run_delay;
+
+ if (unlikely(atomic64_read(&vcpu->arch.st.suspend_ns)))
+ steal += atomic64_xchg(&vcpu->arch.st.suspend_ns, 0);
+
unsafe_put_user(steal, &st->steal, out);
version += 1;
@@ -6992,6 +6996,7 @@ long kvm_arch_vm_compat_ioctl(struct file *filp, unsigned int ioctl,
#ifdef CONFIG_HAVE_KVM_PM_NOTIFIER
static int kvm_arch_suspend_notifier(struct kvm *kvm)
{
+ bool kick_vcpus = false;
struct kvm_vcpu *vcpu;
unsigned long i;
@@ -6999,9 +7004,45 @@ static int kvm_arch_suspend_notifier(struct kvm *kvm)
* Ignore the return, marking the guest paused only "fails" if the vCPU
* isn't using kvmclock; continuing on is correct and desirable.
*/
- kvm_for_each_vcpu(i, vcpu, kvm)
+ kvm_for_each_vcpu(i, vcpu, kvm) {
(void)kvm_set_guest_paused(vcpu);
+ if (vcpu->arch.st.msr_val & KVM_MSR_ENABLED) {
+ kick_vcpus = true;
+ WRITE_ONCE(vcpu->arch.st.suspend_ts,
+ ktime_get_boottime_ns());
+ }
+ }
+
+ if (kick_vcpus)
+ kvm_make_all_cpus_request(kvm, KVM_REQ_OUTSIDE_GUEST_MODE);
+
+ return NOTIFY_DONE;
+}
+
+static int kvm_arch_resume_notifier(struct kvm *kvm)
+{
+ struct kvm_vcpu *vcpu;
+ unsigned long i;
+
+ kvm_for_each_vcpu(i, vcpu, kvm) {
+ u64 suspend_ns = ktime_get_boottime_ns() -
+ vcpu->arch.st.suspend_ts;
+
+ WRITE_ONCE(vcpu->arch.st.suspend_ts, 0);
+
+ /*
+ * Only accumulate the suspend time if steal-time is enabled,
+ * but always clear suspend_ts and kick the vCPU as the vCPU
+ * could have disabled steal-time after the suspend notifier
+ * grabbed suspend_ts.
+ */
+ if (vcpu->arch.st.msr_val & KVM_MSR_ENABLED)
+ atomic64_add(suspend_ns, &vcpu->arch.st.suspend_ns);
+
+ kvm_make_request(KVM_REQ_STEAL_UPDATE, vcpu);
+ }
+
return NOTIFY_DONE;
}
@@ -7011,6 +7052,9 @@ int kvm_arch_pm_notifier(struct kvm *kvm, unsigned long state)
case PM_HIBERNATION_PREPARE:
case PM_SUSPEND_PREPARE:
return kvm_arch_suspend_notifier(kvm);
+ case PM_POST_HIBERNATION:
+ case PM_POST_SUSPEND:
+ return kvm_arch_resume_notifier(kvm);
}
return NOTIFY_DONE;
@@ -11251,6 +11295,16 @@ EXPORT_SYMBOL_GPL(kvm_vcpu_has_events);
int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
{
+ /*
+ * During host SUSPEND/RESUME tasks get frozen after SUSPEND notifiers
+ * run, and thawed before RESUME notifiers, i.e. vCPUs can be actively
+ * running when KVM sees the system as suspended. Block the vCPU if
+ * KVM sees the vCPU as suspended to ensure the suspend steal time is
+ * accounted before the guest can run, and to the correct guest task.
+ */
+ if (READ_ONCE(vcpu->arch.st.suspend_ts))
+ return false;
+
return kvm_vcpu_running(vcpu) || vcpu->arch.pv.pv_unhalted ||
kvm_vcpu_has_events(vcpu);
}
base-commit: 17cfb61855eafd72fd6a22d713a39be0d74660e1
--
Thread overview: 8+ messages
2025-03-25 4:13 [PATCH v5 0/2] KVM: x86: Include host suspended time in steal time Suleiman Souhlal
2025-03-25 4:13 ` [PATCH v5 1/2] KVM: x86: Advance guest TSC after deep suspend Suleiman Souhlal
2025-04-22 4:47 ` Tzung-Bi Shih
2025-05-01 23:49 ` Sean Christopherson
2025-03-25 4:13 ` [PATCH v5 2/2] KVM: x86: Include host suspended time in steal time Suleiman Souhlal
2025-04-23 7:57 ` Tzung-Bi Shih
2025-05-02 1:17 ` Sean Christopherson
2025-04-08 1:36 ` [PATCH v5 0/2] " Suleiman Souhlal