From: Sean Christopherson <seanjc@google.com>
To: David Woodhouse <dwmw2@infradead.org>
Cc: kvm@vger.kernel.org, Paolo Bonzini <pbonzini@redhat.com>,
Jonathan Corbet <corbet@lwn.net>,
Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
Dave Hansen <dave.hansen@linux.intel.com>,
x86@kernel.org, "H. Peter Anvin" <hpa@zytor.com>,
Paul Durrant <paul@xen.org>,
Peter Zijlstra <peterz@infradead.org>,
Juri Lelli <juri.lelli@redhat.com>,
Vincent Guittot <vincent.guittot@linaro.org>,
Dietmar Eggemann <dietmar.eggemann@arm.com>,
Steven Rostedt <rostedt@goodmis.org>,
Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
Daniel Bristot de Oliveira <bristot@redhat.com>,
Valentin Schneider <vschneid@redhat.com>,
Shuah Khan <shuah@kernel.org>,
linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
jalliste@amazon.co.uk, sveith@amazon.de, zide.chen@intel.com,
Dongli Zhang <dongli.zhang@oracle.com>,
Chenyi Qiang <chenyi.qiang@intel.com>
Subject: Re: [RFC PATCH v3 21/21] sched/cputime: Cope with steal time going backwards or negative
Date: Thu, 15 Aug 2024 21:35:35 -0700
Message-ID: <Zr7XF25TDKN-RR8E@google.com>
In-Reply-To: <20240522001817.619072-22-dwmw2@infradead.org>

This should be posted To: something other than kvm@, in a separate series, else
it's bound to get lost/ignored.

On Wed, May 22, 2024, David Woodhouse wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
>
> In steal_account_process_time(), a delta is calculated between the value
> returned by paravirt_steal_clock(), and this_rq()->prev_steal_time which
> is assumed to be the *previous* value returned by paravirt_steal_clock().
>
> However, instead of just assigning the newly-read value directly into
> ->prev_steal_time for use in the next iteration, ->prev_steal_time is
> *incremented* by the calculated delta.
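>
> In simplified form, the resulting pattern looks like this (a sketch,
> not the exact kernel code, eliding the CONFIG_PARAVIRT guard):
>
>	u64 steal = paravirt_steal_clock(smp_processor_id());
>
>	steal -= this_rq()->prev_steal_time;	/* delta since last read */
>	steal = min(steal, maxtime);		/* clamp, see below */
>	account_steal_time(steal);
>	this_rq()->prev_steal_time += steal;	/* advances by the *accounted*
>						 * time, not the observed time */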
>
> This used to be roughly the same, modulo conversion to jiffies and back,
> until commit 807e5b80687c0 ("sched/cputime: Add steal time support to
> full dynticks CPU time accounting") started clamping that delta to a
> maximum of the actual time elapsed. So now, if the value returned by
> paravirt_steal_clock() jumps by a large amount, instead of a *single*
> period of reporting 100% steal time, the system will report 100% steal
> time for as long as it takes to "catch up" with the reported value,
> which can take up to 584 years.
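>
> (The 584 years is simply how long it takes a nanosecond counter to
> traverse the full u64 range: 2^64 ns ≈ 1.8e10 seconds ≈ 584.5 years.)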
>
> But there is a benefit to advancing ->prev_steal_time only by the time
> which was *accounted* as having been stolen. It means that any extra
> time truncated by the clamping will be accounted in the next sample
> period rather than lost. Given the stochastic nature of the sampling,
> that is more accurate overall.
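>
> For example, if 150us of steal time accrues during a 100us sampling
> window, 100us is accounted immediately and the remaining 50us shows up
> in the next window's delta instead of being dropped.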
>
> So, continue to advance ->prev_steal_time by the accounted value as
> long as the delta isn't egregiously large (taking maxtime * 2 as the
> threshold). If the delta exceeds that threshold, just set
> ->prev_steal_time directly to the value returned by
> paravirt_steal_clock().
>
> Fixes: 807e5b80687c0 ("sched/cputime: Add steal time support to full dynticks CPU time accounting")
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> ---
> kernel/sched/cputime.c | 20 ++++++++++++++------
> 1 file changed, 14 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
> index af7952f12e6c..3a8a8b38966d 100644
> --- a/kernel/sched/cputime.c
> +++ b/kernel/sched/cputime.c
> @@ -254,13 +254,21 @@ static __always_inline u64 steal_account_process_time(u64 maxtime)
>  {
>  #ifdef CONFIG_PARAVIRT
>  	if (static_key_false(&paravirt_steal_enabled)) {
> -		u64 steal;
> -
> -		steal = paravirt_steal_clock(smp_processor_id());
> -		steal -= this_rq()->prev_steal_time;
> -		steal = min(steal, maxtime);
> +		u64 steal, abs_steal;
> +
> +		abs_steal = paravirt_steal_clock(smp_processor_id());
> +		steal = abs_steal - this_rq()->prev_steal_time;
> +		if (unlikely(steal > maxtime)) {
> +			/*
> +			 * If the delta isn't egregious, it can be counted
> +			 * in the next time period. Only advance by maxtime.
> +			 */
> +			if (steal < maxtime * 2)
> +				abs_steal = this_rq()->prev_steal_time + maxtime;
> +			steal = maxtime;
> +		}
>  		account_steal_time(steal);
> -		this_rq()->prev_steal_time += steal;
> +		this_rq()->prev_steal_time = abs_steal;
>
>  		return steal;
>  	}
> --
> 2.44.0
>
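
FWIW, the clamping behaviour is easy to sanity-check in userspace. Here
is a self-contained model (a hypothetical test harness, not kernel code;
steal_account() mirrors the patched logic, with the runqueue and
paravirt plumbing replaced by plain variables):

	#include <inttypes.h>
	#include <stdint.h>
	#include <stdio.h>

	static uint64_t prev_steal_time;

	/* Mirrors the patched steal_account_process_time() logic. */
	static uint64_t steal_account(uint64_t clock, uint64_t maxtime)
	{
		uint64_t abs_steal = clock;
		uint64_t steal = abs_steal - prev_steal_time;

		if (steal > maxtime) {
			/* Modest overshoot: defer the excess to the next period. */
			if (steal < maxtime * 2)
				abs_steal = prev_steal_time + maxtime;
			steal = maxtime;
		}
		prev_steal_time = abs_steal;
		return steal;
	}

	int main(void)
	{
		/* 150 observed in a 100 window: 100 now, 50 deferred. */
		printf("%" PRIu64 "\n", steal_account(150, 100));	/* 100 */
		printf("%" PRIu64 "\n", steal_account(150, 100));	/* 50 */

		/* Egregious jump: snap to the reported value, no catch-up. */
		printf("%" PRIu64 "\n", steal_account(1000150, 100));	/* 100 */
		printf("%" PRIu64 "\n", steal_account(1000200, 100));	/* 50 */
		return 0;
	}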