public inbox for kvm@vger.kernel.org
From: Paolo Bonzini <pbonzini@redhat.com>
To: Keith Busch <kbusch@kernel.org>, Sean Christopherson <seanjc@google.com>
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] KVM: x86/mmu: Ensure NX huge page recovery thread is alive before waking
Date: Tue, 28 Jan 2025 16:41:41 +0100	[thread overview]
Message-ID: <a0d9ad95-ea69-45dc-a07f-b6dc43e9731e@redhat.com> (raw)
In-Reply-To: <Z5fO5bac8ohqUH1D@kbusch-mbp>

On 1/27/25 19:22, Keith Busch wrote:
>> It's not clear to me that calling vhost_task_wake() before vhost_task_start() is
>> allowed, which is why I deliberately waited until the task was started to make it
>> visible.  Though FWIW, doing "vhost_task_wake(nx_thread)" before vhost_task_start()
>> doesn't explode.
>
> Hm, it does look questionable to try to wake a process that hasn't been
> started yet, but I think it may be okay: the task state will be TASK_NEW
> before vhost_task_start(), which looks like it will cause
> wake_up_process() to do nothing.

Yes, it's okay because both wake_up_new_task() and try_to_wake_up() take
p->pi_lock.  With the task still in TASK_NEW, try_to_wake_up() matches
neither bit in TASK_NORMAL (which is TASK_INTERRUPTIBLE |
TASK_UNINTERRUPTIBLE) and does nothing.

I'm queuing the patch with the store moved before vhost_task_start(),
and with acquire/release ordering instead of plain READ_ONCE()/WRITE_ONCE().

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 74c20dbb92da..6d5708146384 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -7127,7 +7127,8 @@ static void kvm_wake_nx_recovery_thread(struct kvm *kvm)
  	 * may not be valid even though the VM is globally visible.  Do nothing,
  	 * as such a VM can't have any possible NX huge pages.
  	 */
-	struct vhost_task *nx_thread = READ_ONCE(kvm->arch.nx_huge_page_recovery_thread);
+	struct vhost_task *nx_thread =
+		smp_load_acquire(&kvm->arch.nx_huge_page_recovery_thread);
  
  	if (nx_thread)
  		vhost_task_wake(nx_thread);
@@ -7474,10 +7475,10 @@ static void kvm_mmu_start_lpage_recovery(struct once *once)
  	if (!nx_thread)
  		return;
  
-	vhost_task_start(nx_thread);
+	/* Make the task visible only once it is fully created. */
+	smp_store_release(&kvm->arch.nx_huge_page_recovery_thread, nx_thread);
  
-	/* Make the task visible only once it is fully started. */
-	WRITE_ONCE(kvm->arch.nx_huge_page_recovery_thread, nx_thread);
+	vhost_task_start(nx_thread);
  }
  
  int kvm_mmu_post_init_vm(struct kvm *kvm)


Thread overview: 10+ messages
2025-01-24 23:46 [PATCH] KVM: x86/mmu: Ensure NX huge page recovery thread is alive before waking Sean Christopherson
2025-01-25  0:50 ` Sean Christopherson
2025-01-25  4:11 ` Keith Busch
2025-01-27 16:48   ` Sean Christopherson
2025-01-27 17:04     ` Keith Busch
2025-01-27 17:19       ` Sean Christopherson
2025-01-27 18:22     ` Keith Busch
2025-01-28 15:41       ` Paolo Bonzini [this message]
2025-01-28 15:44         ` Keith Busch
2025-02-04 16:28 ` Paolo Bonzini
