From: Gavin Shan <gshan@redhat.com>
To: Eric Auger <eauger@redhat.com>, kvmarm@lists.cs.columbia.edu
Cc: kvm@vger.kernel.org, maz@kernel.org,
linux-kernel@vger.kernel.org, shan.gavin@gmail.com,
Jonathan.Cameron@huawei.com, pbonzini@redhat.com,
vkuznets@redhat.com, will@kernel.org
Subject: Re: [PATCH v4 02/15] KVM: async_pf: Add helper function to check completion queue
Date: Thu, 13 Jan 2022 15:38:43 +0800 [thread overview]
Message-ID: <2543ace0-444a-7777-460b-c3eab9eb612a@redhat.com> (raw)
In-Reply-To: <56d8dbec-a8fd-b109-0c0f-b01c1aef4741@redhat.com>
Hi Eric,
On 11/10/21 11:37 PM, Eric Auger wrote:
> On 8/15/21 2:59 AM, Gavin Shan wrote:
>> This adds the inline helper kvm_check_async_pf_completion_queue() to
>> check if there are pending completions in the queue. An empty stub
>> is also added for !CONFIG_KVM_ASYNC_PF so that callers needn't
>> check whether CONFIG_KVM_ASYNC_PF is enabled.
>>
>> All checks on the completion queue are done by the newly added inline
>> function since list_empty() and list_empty_careful() are interchangeable.
> why is it interchangeable?
>
I think the commit log is misleading. list_empty_careful() is stricter
than list_empty(). In this patch, we replace list_empty() with
list_empty_careful(). I will correct the commit log in the next respin,
like below:

  All checks on the completion queue are done by the newly added inline
  function, where list_empty_careful() is used instead of list_empty().
>>
>> Signed-off-by: Gavin Shan <gshan@redhat.com>
>> ---
>> arch/x86/kvm/x86.c | 2 +-
>> include/linux/kvm_host.h | 10 ++++++++++
>> virt/kvm/async_pf.c | 10 +++++-----
>> virt/kvm/kvm_main.c | 4 +---
>> 4 files changed, 17 insertions(+), 9 deletions(-)
>>
>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>> index e5d5c5ed7dd4..7f35d9324b99 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -11591,7 +11591,7 @@ static inline bool kvm_guest_apic_has_interrupt(struct kvm_vcpu *vcpu)
>>
>> static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
>> {
>> - if (!list_empty_careful(&vcpu->async_pf.done))
>> + if (kvm_check_async_pf_completion_queue(vcpu))
>> return true;
>>
>> if (kvm_apic_has_events(vcpu))
>> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
>> index 85b61a456f1c..a5f990f6dc35 100644
>> --- a/include/linux/kvm_host.h
>> +++ b/include/linux/kvm_host.h
>> @@ -339,12 +339,22 @@ struct kvm_async_pf {
>> bool notpresent_injected;
>> };
>>
>> +static inline bool kvm_check_async_pf_completion_queue(struct kvm_vcpu *vcpu)
>> +{
>> + return !list_empty_careful(&vcpu->async_pf.done);
>> +}
>> +
>> void kvm_clear_async_pf_completion_queue(struct kvm_vcpu *vcpu);
>> void kvm_check_async_pf_completion(struct kvm_vcpu *vcpu);
>> bool kvm_setup_async_pf(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
>> unsigned long hva, struct kvm_arch_async_pf *arch);
>> int kvm_async_pf_wakeup_all(struct kvm_vcpu *vcpu);
>> #else
>> +static inline bool kvm_check_async_pf_completion_queue(struct kvm_vcpu *vcpu)
>> +{
>> + return false;
>> +}
>> +
>> static inline void kvm_check_async_pf_completion(struct kvm_vcpu *vcpu) { }
>> #endif
>>
>> diff --git a/virt/kvm/async_pf.c b/virt/kvm/async_pf.c
>> index dd777688d14a..d145a61a046a 100644
>> --- a/virt/kvm/async_pf.c
>> +++ b/virt/kvm/async_pf.c
>> @@ -70,7 +70,7 @@ static void async_pf_execute(struct work_struct *work)
>> kvm_arch_async_page_present(vcpu, apf);
>>
>> spin_lock(&vcpu->async_pf.lock);
>> - first = list_empty(&vcpu->async_pf.done);
>> + first = !kvm_check_async_pf_completion_queue(vcpu);
>> list_add_tail(&apf->link, &vcpu->async_pf.done);
>> apf->vcpu = NULL;
>> spin_unlock(&vcpu->async_pf.lock);
>> @@ -122,7 +122,7 @@ void kvm_clear_async_pf_completion_queue(struct kvm_vcpu *vcpu)
>> spin_lock(&vcpu->async_pf.lock);
>> }
>>
>> - while (!list_empty(&vcpu->async_pf.done)) {
>> + while (kvm_check_async_pf_completion_queue(vcpu)) {
> this is replaced by a stronger check. Can you please explain why it is
> equivalent?
Access to the completion queue is protected by a spinlock, so the extra
check in list_empty_careful(), which guards against the head's prev/next
pointers being modified on the fly, can never make a difference here.
The two checks are therefore equivalent in this case.
>> struct kvm_async_pf *work =
>> list_first_entry(&vcpu->async_pf.done,
>> typeof(*work), link);
>> @@ -138,7 +138,7 @@ void kvm_check_async_pf_completion(struct kvm_vcpu *vcpu)
>> {
>> struct kvm_async_pf *work;
>>
>> - while (!list_empty_careful(&vcpu->async_pf.done) &&
>> + while (kvm_check_async_pf_completion_queue(vcpu) &&
>> kvm_arch_can_dequeue_async_page_present(vcpu)) {
>> spin_lock(&vcpu->async_pf.lock);
>> work = list_first_entry(&vcpu->async_pf.done, typeof(*work),
>> @@ -205,7 +205,7 @@ int kvm_async_pf_wakeup_all(struct kvm_vcpu *vcpu)
>> struct kvm_async_pf *work;
>> bool first;
>>
>> - if (!list_empty_careful(&vcpu->async_pf.done))
>> + if (kvm_check_async_pf_completion_queue(vcpu))
>> return 0;
>>
>> work = kmem_cache_zalloc(async_pf_cache, GFP_ATOMIC);
>> @@ -216,7 +216,7 @@ int kvm_async_pf_wakeup_all(struct kvm_vcpu *vcpu)
>> INIT_LIST_HEAD(&work->queue); /* for list_del to work */
>>
>> spin_lock(&vcpu->async_pf.lock);
>> - first = list_empty(&vcpu->async_pf.done);
>> + first = !kvm_check_async_pf_completion_queue(vcpu);
>> list_add_tail(&work->link, &vcpu->async_pf.done);
>> spin_unlock(&vcpu->async_pf.lock);
>>
>> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
>> index b50dbe269f4b..8795503651b1 100644
>> --- a/virt/kvm/kvm_main.c
>> +++ b/virt/kvm/kvm_main.c
>> @@ -3282,10 +3282,8 @@ static bool vcpu_dy_runnable(struct kvm_vcpu *vcpu)
>> if (kvm_arch_dy_runnable(vcpu))
>> return true;
>>
>> -#ifdef CONFIG_KVM_ASYNC_PF
>> - if (!list_empty_careful(&vcpu->async_pf.done))
>> + if (kvm_check_async_pf_completion_queue(vcpu))
>> return true;
>> -#endif
>>
>> return false;
>> }
>>
Thanks,
Gavin