From: Sean Christopherson <seanjc@google.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
michael.christie@oracle.com, Tejun Heo <tj@kernel.org>,
Luca Boccassi <bluca@debian.org>
Subject: Re: [PATCH] KVM: x86: switch hugepage recovery thread to vhost_task
Date: Thu, 14 Nov 2024 07:38:15 -0800
Message-ID: <ZzYZZ4MgMhavYDM2@google.com>
In-Reply-To: <70ee319f-b9ec-448a-a068-8165c8e38e6d@redhat.com>
On Thu, Nov 14, 2024, Paolo Bonzini wrote:
> On 11/14/24 00:56, Sean Christopherson wrote:
> > > +static bool kvm_nx_huge_page_recovery_worker(void *data)
> > > +{
> > > +	struct kvm *kvm = data;
> > > 	long remaining_time;
> > > -	while (true) {
> > > -		start_time = get_jiffies_64();
> > > -		remaining_time = get_nx_huge_page_recovery_timeout(start_time);
> > > +	if (kvm->arch.nx_huge_page_next == NX_HUGE_PAGE_DISABLED)
> > > +		return false;
> >
> > The "next" concept is broken. Once KVM sees NX_HUGE_PAGE_DISABLED for a given VM,
> > KVM will never re-evaluate nx_huge_page_next. Similarly, if the recovery period
> > and/or ratio changes, KVM won't recompute the "next" time until the current timeout
> > has expired.
> >
> > I fiddled around with various ideas, but I don't see a better solution than
> > something along the lines of KVM's request system, e.g. set a bool to indicate
> > the params changed, and sprinkle smp_{r,w}mb() barriers to ensure the vhost
> > task sees the new params.
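For illustration only, the request-style idea above can be sketched in userspace with C11 acquire/release atomics standing in for the kernel's smp_{r,w}mb() barriers. All names here (nx_params, params_changed, etc.) are hypothetical, not the actual KVM code:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical parameter block; in KVM these would be the module params. */
struct nx_params {
	unsigned int period;	/* recovery period */
	unsigned int ratio;	/* recovery ratio */
};

static struct nx_params params;
static atomic_bool params_changed;	/* the "request" flag */

/* Writer side: publish the new params, then set the flag with release
 * semantics (kernel analogue: write params, smp_wmb(), set flag). */
static void update_params(unsigned int period, unsigned int ratio)
{
	params.period = period;
	params.ratio = ratio;
	atomic_store_explicit(&params_changed, true, memory_order_release);
}

/* Worker side: consume the flag with acquire semantics (kernel analogue:
 * test-and-clear flag, smp_rmb(), reload params).  Returns true if a
 * fresh snapshot was taken. */
static bool worker_reload_params(struct nx_params *snap)
{
	if (!atomic_exchange_explicit(&params_changed, false,
				      memory_order_acquire))
		return false;
	*snap = params;
	return true;
}
```

The acquire/release pairing guarantees that once the worker observes the flag, it also observes the parameter writes that preceded it.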
>
> "next" is broken, but there is a much better way to fix it: just
> track the *last* time that the recovery ran. This is also better
> behaved when you flip recovery to disabled and back to enabled:
> if your recovery period is 1 minute, the next recovery will run
> 1 minute after the last one, independent of how many times you
> flipped the parameter.
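A minimal sketch of the "last run" scheme Paolo describes (hypothetical names, not the actual patch): the worker records when it last ran, and the deadline is always recomputed from that timestamp with the *current* period, so period changes and enable/disable flips take effect immediately:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Time of the last completed recovery run, in jiffies-like ticks. */
static uint64_t nx_last_run;

/* The deadline is derived from the last run each time it is needed,
 * never cached, so a changed period is honored on the next check. */
static uint64_t next_run_deadline(uint64_t period)
{
	return nx_last_run + period;
}

static bool recovery_due(uint64_t now, uint64_t period)
{
	return now >= next_run_deadline(period);
}
```

Because nothing about the *next* run is stored, flipping parameters back and forth cannot push the deadline out past one full period after the last run.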
Heh, my brain was trying to get there last night, but I couldn't quite piece
things together.
Reviewed-by: Sean Christopherson <seanjc@google.com>
Thread overview: 26+ messages
2024-11-08 13:07 [PATCH] KVM: x86: switch hugepage recovery thread to vhost_task Paolo Bonzini
2024-11-08 16:53 ` Tejun Heo
2024-11-09 0:23 ` Luca Boccassi
2024-11-13 23:56 ` Sean Christopherson
2024-11-14 12:02 ` Paolo Bonzini
2024-11-14 15:38 ` Sean Christopherson [this message]
2024-11-15 16:59 ` Michal Koutný
2024-11-18 12:42 ` Paolo Bonzini
2024-11-25 9:01 ` Michal Koutný
2024-11-25 11:22 ` Paolo Bonzini
2024-12-19 17:32 ` Keith Busch
2024-12-19 17:42 ` Paolo Bonzini
2024-12-19 18:08 ` Keith Busch
2024-12-19 20:30 ` Paolo Bonzini
2024-12-19 22:23 ` Keith Busch
2024-12-19 22:57 ` Paolo Bonzini
2024-12-19 23:31 ` Keith Busch
2025-01-13 15:35 ` Keith Busch
2025-01-14 18:10 ` Paolo Bonzini
2025-01-15 3:06 ` Sean Christopherson
2025-01-15 16:51 ` Keith Busch
2025-01-15 17:10 ` Paolo Bonzini
2025-01-15 19:03 ` Keith Busch
2025-01-22 11:38 ` Alyssa Ross
2025-01-22 14:56 ` Keith Busch
2025-01-22 22:32 ` Alyssa Ross