public inbox for linux-kernel@vger.kernel.org
* [PATCH] x86/hyper-v: Validate entire GVA range for non-canonical addresses during PV TLB flush
@ 2026-02-19 20:05 Manuel Andreas
  2026-02-20 17:23 ` Sean Christopherson
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Manuel Andreas @ 2026-02-19 20:05 UTC (permalink / raw)
  To: kvm, linux-kernel

In KVM guests with Hyper-V hypercalls enabled, the hypercalls
HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST and HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX
allow a guest to request invalidation of portions of a virtual TLB.
For this, the hypercall parameter includes a list of GVAs that are supposed
to be invalidated.

Currently, only the base GVA is checked to be canonical, but the
check needs to be performed for the entire range of GVAs. Because
of this, guests running on Intel hardware can still trigger a
WARN_ONCE in the host (see the commit referenced in the Fixes: tag
below).

Move the check for non-canonical addresses so that it is performed
for every single GVA of the supplied range. This is also more in
line with the Hyper-V specification: although unlikely, a range
starting with an invalid GVA may still contain GVAs that are valid.

Fixes: fa787ac07b3c ("KVM: x86/hyper-v: Skip non-canonical addresses during PV TLB flush")
Signed-off-by: Manuel Andreas <manuel.andreas@tum.de>
---
 arch/x86/kvm/hyperv.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index de92292eb1f5..f4f6accf1a33 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -1981,16 +1981,17 @@ int kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu)
 		if (entries[i] == KVM_HV_TLB_FLUSHALL_ENTRY)
 			goto out_flush_all;
 
-		if (is_noncanonical_invlpg_address(entries[i], vcpu))
-			continue;
-
 		/*
 		 * Lower 12 bits of 'address' encode the number of additional
 		 * pages to flush.
 		 */
 		gva = entries[i] & PAGE_MASK;
-		for (j = 0; j < (entries[i] & ~PAGE_MASK) + 1; j++)
+		for (j = 0; j < (entries[i] & ~PAGE_MASK) + 1; j++) {
+			if (is_noncanonical_invlpg_address(gva + j * PAGE_SIZE, vcpu))
+				continue;
+
 			kvm_x86_call(flush_tlb_gva)(vcpu, gva + j * PAGE_SIZE);
+		}
 
 		++vcpu->stat.tlb_flush;
 	}
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 5+ messages in thread

* Re: [PATCH] x86/hyper-v: Validate entire GVA range for non-canonical addresses during PV TLB flush
  2026-02-19 20:05 [PATCH] x86/hyper-v: Validate entire GVA range for non-canonical addresses during PV TLB flush Manuel Andreas
@ 2026-02-20 17:23 ` Sean Christopherson
  2026-02-23  8:55   ` Vitaly Kuznetsov
  2026-02-23  8:58 ` Vitaly Kuznetsov
  2026-03-05 17:07 ` Sean Christopherson
  2 siblings, 1 reply; 5+ messages in thread
From: Sean Christopherson @ 2026-02-20 17:23 UTC (permalink / raw)
  To: Manuel Andreas; +Cc: kvm, linux-kernel, Vitaly Kuznetsov, Paolo Bonzini

+Vitaly and Paolo

Please use scripts/get_maintainer.pl, otherwise your emails might not reach the
right eyeballs.

On Thu, Feb 19, 2026, Manuel Andreas wrote:
> In KVM guests with Hyper-V hypercalls enabled, the hypercalls
> HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST and HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX
> allow a guest to request invalidation of portions of a virtual TLB.
> For this, the hypercall parameter includes a list of GVAs that are supposed
> to be invalidated.
> 
> Currently, only the base GVA is checked to be canonical, but the
> check needs to be performed for the entire range of GVAs. Because
> of this, guests running on Intel hardware can still trigger a
> WARN_ONCE in the host (see the commit referenced in the Fixes: tag
> below).
> 
> Move the check for non-canonical addresses so that it is performed
> for every single GVA of the supplied range. This is also more in
> line with the Hyper-V specification: although unlikely, a range
> starting with an invalid GVA may still contain GVAs that are valid.
> 
> Fixes: fa787ac07b3c ("KVM: x86/hyper-v: Skip non-canonical addresses during PV TLB flush")
> Signed-off-by: Manuel Andreas <manuel.andreas@tum.de>
> ---
>  arch/x86/kvm/hyperv.c | 9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
> index de92292eb1f5..f4f6accf1a33 100644
> --- a/arch/x86/kvm/hyperv.c
> +++ b/arch/x86/kvm/hyperv.c
> @@ -1981,16 +1981,17 @@ int kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu)
>  		if (entries[i] == KVM_HV_TLB_FLUSHALL_ENTRY)
>  			goto out_flush_all;
>  
> -		if (is_noncanonical_invlpg_address(entries[i], vcpu))
> -			continue;
> -
>  		/*
>  		 * Lower 12 bits of 'address' encode the number of additional
>  		 * pages to flush.
>  		 */
>  		gva = entries[i] & PAGE_MASK;
> -		for (j = 0; j < (entries[i] & ~PAGE_MASK) + 1; j++)
> +		for (j = 0; j < (entries[i] & ~PAGE_MASK) + 1; j++) {
> +			if (is_noncanonical_invlpg_address(gva + j * PAGE_SIZE, vcpu))
> +				continue;
> +
>  			kvm_x86_call(flush_tlb_gva)(vcpu, gva + j * PAGE_SIZE);
> +		}

Vitaly, can we treat the entire request as garbage and throw it away if any part
isn't valid?  Or do you think we should go with the more conservative approach
as above?

diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index de92292eb1f5..f568f3d4f6e5 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -1967,8 +1967,8 @@ int kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu)
        struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo;
        struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
        u64 entries[KVM_HV_TLB_FLUSH_FIFO_SIZE];
+       gva_t gva, extra_pages;
        int i, j, count;
-       gva_t gva;
 
        if (!tdp_enabled || !hv_vcpu)
                return -EINVAL;
@@ -1978,18 +1978,22 @@ int kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu)
        count = kfifo_out(&tlb_flush_fifo->entries, entries, KVM_HV_TLB_FLUSH_FIFO_SIZE);
 
        for (i = 0; i < count; i++) {
+
                if (entries[i] == KVM_HV_TLB_FLUSHALL_ENTRY)
                        goto out_flush_all;
 
-               if (is_noncanonical_invlpg_address(entries[i], vcpu))
-                       continue;
-
                /*
                 * Lower 12 bits of 'address' encode the number of additional
                 * pages to flush.
                 */
                gva = entries[i] & PAGE_MASK;
-               for (j = 0; j < (entries[i] & ~PAGE_MASK) + 1; j++)
+               extra_pages = (entries[i] & ~PAGE_MASK);
+
+               if (is_noncanonical_invlpg_address(gva, vcpu) ||
+                   is_noncanonical_invlpg_address(gva + extra_pages * PAGE_SIZE, vcpu))
+                       continue;
+
+               for (j = 0; j < extra_pages + 1; j++)
                        kvm_x86_call(flush_tlb_gva)(vcpu, gva + j * PAGE_SIZE);
 
                ++vcpu->stat.tlb_flush;

^ permalink raw reply related	[flat|nested] 5+ messages in thread

* Re: [PATCH] x86/hyper-v: Validate entire GVA range for non-canonical addresses during PV TLB flush
  2026-02-20 17:23 ` Sean Christopherson
@ 2026-02-23  8:55   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 5+ messages in thread
From: Vitaly Kuznetsov @ 2026-02-23  8:55 UTC (permalink / raw)
  To: Sean Christopherson, Manuel Andreas; +Cc: kvm, linux-kernel, Paolo Bonzini

Sean Christopherson <seanjc@google.com> writes:

> +Vitaly and Paolo
>
> Please use scripts/get_maintainer.pl, otherwise your emails might not reach the
> right eyeballs.
>
> On Thu, Feb 19, 2026, Manuel Andreas wrote:
>> In KVM guests with Hyper-V hypercalls enabled, the hypercalls
>> HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST and HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX
>> allow a guest to request invalidation of portions of a virtual TLB.
>> For this, the hypercall parameter includes a list of GVAs that are supposed
>> to be invalidated.
>> 
>> Currently, only the base GVA is checked to be canonical, but the
>> check needs to be performed for the entire range of GVAs. Because
>> of this, guests running on Intel hardware can still trigger a
>> WARN_ONCE in the host (see the commit referenced in the Fixes: tag
>> below).
>> 
>> Move the check for non-canonical addresses so that it is performed
>> for every single GVA of the supplied range. This is also more in
>> line with the Hyper-V specification: although unlikely, a range
>> starting with an invalid GVA may still contain GVAs that are valid.
>> 
>> Fixes: fa787ac07b3c ("KVM: x86/hyper-v: Skip non-canonical addresses during PV TLB flush")
>> Signed-off-by: Manuel Andreas <manuel.andreas@tum.de>
>> ---
>>  arch/x86/kvm/hyperv.c | 9 +++++----
>>  1 file changed, 5 insertions(+), 4 deletions(-)
>> 
>> diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
>> index de92292eb1f5..f4f6accf1a33 100644
>> --- a/arch/x86/kvm/hyperv.c
>> +++ b/arch/x86/kvm/hyperv.c
>> @@ -1981,16 +1981,17 @@ int kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu)
>>  		if (entries[i] == KVM_HV_TLB_FLUSHALL_ENTRY)
>>  			goto out_flush_all;
>>  
>> -		if (is_noncanonical_invlpg_address(entries[i], vcpu))
>> -			continue;
>> -
>>  		/*
>>  		 * Lower 12 bits of 'address' encode the number of additional
>>  		 * pages to flush.
>>  		 */
>>  		gva = entries[i] & PAGE_MASK;
>> -		for (j = 0; j < (entries[i] & ~PAGE_MASK) + 1; j++)
>> +		for (j = 0; j < (entries[i] & ~PAGE_MASK) + 1; j++) {
>> +			if (is_noncanonical_invlpg_address(gva + j * PAGE_SIZE, vcpu))
>> +				continue;
>> +
>>  			kvm_x86_call(flush_tlb_gva)(vcpu, gva + j * PAGE_SIZE);
>> +		}
>
> Vitaly, can we treat the entire request as garbage and throw it away if any part
> isn't valid?  Or do you think we should go with the more conservative approach
> as above?

I don't remember if I have ever seen real Windows trying to flush
anything non-canonical at all but my gut feeling tells me we should
rather play safe and use Manuel's 'conservative' approach. Also, this
should be consistent with TLFS which says:

"Invalid GVAs (those that specify addresses beyond the end of the
partition’s GVA space) are ignored."

i.e. it doesn't say 'Invalid GVA RANGES are ignored'.

>
> diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
> index de92292eb1f5..f568f3d4f6e5 100644
> --- a/arch/x86/kvm/hyperv.c
> +++ b/arch/x86/kvm/hyperv.c
> @@ -1967,8 +1967,8 @@ int kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu)
>         struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo;
>         struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
>         u64 entries[KVM_HV_TLB_FLUSH_FIFO_SIZE];
> +       gva_t gva, extra_pages;
>         int i, j, count;
> -       gva_t gva;
>  
>         if (!tdp_enabled || !hv_vcpu)
>                 return -EINVAL;
> @@ -1978,18 +1978,22 @@ int kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu)
>         count = kfifo_out(&tlb_flush_fifo->entries, entries, KVM_HV_TLB_FLUSH_FIFO_SIZE);
>  
>         for (i = 0; i < count; i++) {
> +
>                 if (entries[i] == KVM_HV_TLB_FLUSHALL_ENTRY)
>                         goto out_flush_all;
>  
> -               if (is_noncanonical_invlpg_address(entries[i], vcpu))
> -                       continue;
> -
>                 /*
>                  * Lower 12 bits of 'address' encode the number of additional
>                  * pages to flush.
>                  */
>                 gva = entries[i] & PAGE_MASK;
> -               for (j = 0; j < (entries[i] & ~PAGE_MASK) + 1; j++)
> +               extra_pages = (entries[i] & ~PAGE_MASK);
> +
> +               if (is_noncanonical_invlpg_address(gva, vcpu) ||
> +                   is_noncanonical_invlpg_address(gva + extra_pages * PAGE_SIZE, vcpu))
> +                       continue;
> +
> +               for (j = 0; j < extra_pages + 1; j++)
>                         kvm_x86_call(flush_tlb_gva)(vcpu, gva + j * PAGE_SIZE);
>  
>                 ++vcpu->stat.tlb_flush;
>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [PATCH] x86/hyper-v: Validate entire GVA range for non-canonical addresses during PV TLB flush
  2026-02-19 20:05 [PATCH] x86/hyper-v: Validate entire GVA range for non-canonical addresses during PV TLB flush Manuel Andreas
  2026-02-20 17:23 ` Sean Christopherson
@ 2026-02-23  8:58 ` Vitaly Kuznetsov
  2026-03-05 17:07 ` Sean Christopherson
  2 siblings, 0 replies; 5+ messages in thread
From: Vitaly Kuznetsov @ 2026-02-23  8:58 UTC (permalink / raw)
  To: Manuel Andreas, kvm, linux-kernel

Manuel Andreas <manuel.andreas@tum.de> writes:

> In KVM guests with Hyper-V hypercalls enabled, the hypercalls
> HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST and HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX
> allow a guest to request invalidation of portions of a virtual TLB.
> For this, the hypercall parameter includes a list of GVAs that are supposed
> to be invalidated.
>
> Currently, only the base GVA is checked to be canonical, but the
> check needs to be performed for the entire range of GVAs. Because
> of this, guests running on Intel hardware can still trigger a
> WARN_ONCE in the host (see the commit referenced in the Fixes: tag
> below).
>
> Move the check for non-canonical addresses so that it is performed
> for every single GVA of the supplied range. This is also more in
> line with the Hyper-V specification: although unlikely, a range
> starting with an invalid GVA may still contain GVAs that are valid.
>
> Fixes: fa787ac07b3c ("KVM: x86/hyper-v: Skip non-canonical addresses during PV TLB flush")
> Signed-off-by: Manuel Andreas <manuel.andreas@tum.de>
> ---
>  arch/x86/kvm/hyperv.c | 9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
> index de92292eb1f5..f4f6accf1a33 100644
> --- a/arch/x86/kvm/hyperv.c
> +++ b/arch/x86/kvm/hyperv.c
> @@ -1981,16 +1981,17 @@ int kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu)
>  		if (entries[i] == KVM_HV_TLB_FLUSHALL_ENTRY)
>  			goto out_flush_all;
>  
> -		if (is_noncanonical_invlpg_address(entries[i], vcpu))
> -			continue;
> -
>  		/*
>  		 * Lower 12 bits of 'address' encode the number of additional
>  		 * pages to flush.
>  		 */
>  		gva = entries[i] & PAGE_MASK;
> -		for (j = 0; j < (entries[i] & ~PAGE_MASK) + 1; j++)
> +		for (j = 0; j < (entries[i] & ~PAGE_MASK) + 1; j++) {
> +			if (is_noncanonical_invlpg_address(gva + j * PAGE_SIZE, vcpu))
> +				continue;
> +
>  			kvm_x86_call(flush_tlb_gva)(vcpu, gva + j * PAGE_SIZE);
> +		}
>  
>  		++vcpu->stat.tlb_flush;
>  	}

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [PATCH] x86/hyper-v: Validate entire GVA range for non-canonical addresses during PV TLB flush
  2026-02-19 20:05 [PATCH] x86/hyper-v: Validate entire GVA range for non-canonical addresses during PV TLB flush Manuel Andreas
  2026-02-20 17:23 ` Sean Christopherson
  2026-02-23  8:58 ` Vitaly Kuznetsov
@ 2026-03-05 17:07 ` Sean Christopherson
  2 siblings, 0 replies; 5+ messages in thread
From: Sean Christopherson @ 2026-03-05 17:07 UTC (permalink / raw)
  To: Sean Christopherson, kvm, linux-kernel, Manuel Andreas

On Thu, 19 Feb 2026 21:05:49 +0100, Manuel Andreas wrote:
> In KVM guests with Hyper-V hypercalls enabled, the hypercalls
> HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST and HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX
> allow a guest to request invalidation of portions of a virtual TLB.
> For this, the hypercall parameter includes a list of GVAs that are supposed
> to be invalidated.
> 
> Currently, only the base GVA is checked to be canonical, but the
> check needs to be performed for the entire range of GVAs. Because
> of this, guests running on Intel hardware can still trigger a
> WARN_ONCE in the host (see the commit referenced in the Fixes: tag
> below).
> 
> [...]

Applied to kvm-x86 fixes, thanks!

[1/1] x86/hyper-v: Validate entire GVA range for non-canonical addresses during PV TLB flush
      https://github.com/kvm-x86/linux/commit/45692aa4a7ce

--
https://github.com/kvm-x86/linux/tree/next

^ permalink raw reply	[flat|nested] 5+ messages in thread

end of thread, other threads:[~2026-03-05 17:08 UTC | newest]

Thread overview: 5+ messages
2026-02-19 20:05 [PATCH] x86/hyper-v: Validate entire GVA range for non-canonical addresses during PV TLB flush Manuel Andreas
2026-02-20 17:23 ` Sean Christopherson
2026-02-23  8:55   ` Vitaly Kuznetsov
2026-02-23  8:58 ` Vitaly Kuznetsov
2026-03-05 17:07 ` Sean Christopherson
