From: Ryosuke Yasuoka <ryasuoka@redhat.com>
To: pbonzini@redhat.com, vkuznets@redhat.com, tglx@linutronix.de,
mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com,
hpa@zytor.com
Cc: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] x86/kvm: Avoid freeing stack-allocated node in kvm_async_pf_queue_task
Date: Tue, 2 Dec 2025 12:48:10 +0900
Message-ID: <aS5hekMmIrsJPK-L@zeus>
In-Reply-To: <20251122090828.1416464-1-ryasuoka@redhat.com>
On Sat, Nov 22, 2025 at 06:08:24PM +0900, Ryosuke Yasuoka wrote:
> kvm_async_pf_queue_task() can incorrectly free a node allocated on the
> stack of kvm_async_pf_task_wait_schedule(). This occurs when a task
> requests a PF while another task's PF request with the same token is
> still pending. Currently, kvm_async_pf_queue_task() assumes that any
> matching entry in the list is a dummy entry and tries to kfree() it. To
> fix this, add a dummy flag to the node structure and make the function
> check this flag, calling kfree() only on dummy entries.
>
> Signed-off-by: Ryosuke Yasuoka <ryasuoka@redhat.com>
> ---
> arch/x86/kernel/kvm.c | 13 +++++++++++--
> 1 file changed, 11 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index b67d7c59dca0..2c92ec528379 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -88,6 +88,7 @@ struct kvm_task_sleep_node {
> struct swait_queue_head wq;
> u32 token;
> int cpu;
> + bool dummy;
> };
>
> static struct kvm_task_sleep_head {
> @@ -119,10 +120,17 @@ static bool kvm_async_pf_queue_task(u32 token, struct kvm_task_sleep_node *n)
> raw_spin_lock(&b->lock);
> e = _find_apf_task(b, token);
> if (e) {
> + struct kvm_task_sleep_node *dummy = NULL;
> +
> /* dummy entry exist -> wake up was delivered ahead of PF */
> - hlist_del(&e->link);
> + /* Otherwise it should not be freed here. */
> + if (e->dummy) {
> + hlist_del(&e->link);
> + dummy = e;
> + }
> +
> raw_spin_unlock(&b->lock);
> - kfree(e);
> + kfree(dummy);
> return false;
> }
>
> @@ -230,6 +238,7 @@ static void kvm_async_pf_task_wake(u32 token)
> }
> dummy->token = token;
> dummy->cpu = smp_processor_id();
> + dummy->dummy = true;
> init_swait_queue_head(&dummy->wq);
> hlist_add_head(&dummy->link, &b->list);
> dummy = NULL;
>
> base-commit: 2eba5e05d9bcf4cdea995ed51b0f07ba0275794a
> --
> 2.51.0
Hi all,
This is a gentle ping on this patch.
Please let me know if there are any changes required. Any feedback is
welcome.
Here is a link to the original patch:
https://lore.kernel.org/all/20251122090828.1416464-1-ryasuoka@redhat.com/
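
To make the race easier to see, here is a minimal userspace sketch of
the scenario and of the fixed check. It is illustrative only: a plain
singly-linked list and free() stand in for the kernel's hlist, per-CPU
buckets, raw spinlock, and kfree(), and the node/queue_task names are
made up for this example rather than taken from the kernel source.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct node {
	struct node *next;
	unsigned int token;
	bool dummy;	/* set only on malloc'ed wakeup placeholders */
};

static struct node *head;

/*
 * Mirrors the fixed kvm_async_pf_queue_task() logic: when an entry
 * with the same token already exists, free it only if it is a dummy
 * left behind by an early wakeup. A non-dummy entry is another task's
 * stack-allocated node and must stay on the list untouched.
 */
static bool queue_task(unsigned int token, struct node *n)
{
	struct node **pp;

	for (pp = &head; *pp; pp = &(*pp)->next) {
		struct node *e = *pp;

		if (e->token != token)
			continue;
		if (e->dummy) {
			*pp = e->next;	/* unlink the placeholder */
			free(e);
		}
		return false;	/* in either case, do not queue n */
	}
	n->next = head;
	head = n;
	return true;	/* queued; the caller would go to sleep */
}

int main(void)
{
	/* Both nodes live on the stack, as in the real wait path. */
	struct node waiter = { .token = 42 };
	struct node second = { .token = 42 };

	queue_task(42, &waiter);	/* task A queues and sleeps */

	/*
	 * Task B faults on the same token while A still waits. Before
	 * the fix this call would have freed A's stack node; now it
	 * just returns false and leaves the list intact.
	 */
	if (!queue_task(42, &second) && head == &waiter)
		printf("waiter kept on the list, nothing freed\n");

	return 0;
}

Before the fix, the second queue_task() call freed task A's
stack-allocated node and corrupted both the list and A's stack; with
the dummy check, only malloc'ed placeholders are ever freed.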
Thank you
Ryosuke