public inbox for kvmarm@lists.cs.columbia.edu
From: Punit Agrawal <punit.agrawal@arm.com>
To: Christoffer Dall <christoffer.dall@linaro.org>
Cc: kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org,
	Marc Zyngier <marc.zyngier@arm.com>
Subject: Re: [PATCH] KVM: arm/arm64: Check pagesize when allocating a hugepage at Stage 2
Date: Thu, 11 Jan 2018 13:01:07 +0000	[thread overview]
Message-ID: <87mv1kr1m4.fsf@e105922-lin.cambridge.arm.com> (raw)
In-Reply-To: <20180111121542.GG15307@cbox> (Christoffer Dall's message of "Thu, 11 Jan 2018 13:15:42 +0100")

Christoffer Dall <christoffer.dall@linaro.org> writes:

> On Thu, Jan 04, 2018 at 06:24:33PM +0000, Punit Agrawal wrote:
>> KVM only supports PMD hugepages at stage 2 but doesn't actually check
>> that the provided hugepage memory pagesize is PMD_SIZE before populating
>> stage 2 entries.
>> 
>> In cases where the backing hugepage size is smaller than PMD_SIZE (such
>> as when using contiguous hugepages),
>
> what are contiguous hugepages and how are they created vs. a normal
> hugetlbfs?  Is this a kernel config thing, or how does it work?

Contiguous hugepages use the "Contiguous" bit (bit 52) in the page table
entry (pte), to mark successive entries as forming a block mapping.

The number of successive ptes that can be combined depends on the
granule size. E.g., with a 4KB granule, 16 last-level ptes can form a
64KB hugepage, or 16 adjacent PMD entries can form a 32MB hugepage.

There's no difference in instantiating contiguous hugepages vs normal
hugepages from a user's perspective other than passing in the
appropriate hugepage size.

There is no explicit config option for contiguous hugepages - instead
the architecture-specific "hugepagesz" setup helper (see
setup_hugepagesz() in arch/arm64/mm/hugetlbpage.c) dictates the
supported sizes.

Contiguous hugepage support has been enabled/disabled a few times for
arm64 - the latest of which is 5cd028b9d90403b ("arm64: Re-enable
support for contiguous hugepages").

>
>> KVM can end up creating stage 2
>> mappings that extend beyond the supplied memory.
>> 
>> Fix this by checking for the pagesize of userspace vma before creating
>> PMD hugepage at stage 2.
>> 
>> Fixes: ad361f093c1e31d ("KVM: ARM: Support hugetlbfs backed huge pages")
>> Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
>> Cc: Christoffer Dall <christoffer.dall@linaro.org>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> ---
>>  virt/kvm/arm/mmu.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>> 
>> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
>> index b4b69c2d1012..9dea96380339 100644
>> --- a/virt/kvm/arm/mmu.c
>> +++ b/virt/kvm/arm/mmu.c
>> @@ -1310,7 +1310,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>  		return -EFAULT;
>>  	}
>>  
>> -	if (is_vm_hugetlb_page(vma) && !logging_active) {
>> +	if (vma_kernel_pagesize(vma) == PMD_SIZE && !logging_active) {
>
> Don't we need to also fix this in kvm_send_hwpoison_signal?

I think we are OK here as the signal is delivered to userspace using the
hva and the lsb_shift is derived from the vma as well, i.e., stage 2 is
not involved here.

Does that make sense?

>
> (which probably implies this will then need a backport without that for
> older stable kernels.  Has this been an issue from the start or did we
> add contiguous hugepage support at some point?)

I think KVM was missed out in the first (and subsequent) enabling of
contiguous hugepage support. The functionality didn't start out broken
- it only became an issue once contiguous hugepages were enabled.

Note that backporting the fix as far back as it applies isn't harmful
though.

Thanks,
Punit

>
>>  		hugetlb = true;
>>  		gfn = (fault_ipa & PMD_MASK) >> PAGE_SHIFT;
>>  	} else {
>> -- 
>> 2.15.1
>> 
>
> Thanks,
> -Christoffer

Thread overview: 6 messages
2018-01-04 18:24 [PATCH] KVM: arm/arm64: Check pagesize when allocating a hugepage at Stage 2 Punit Agrawal
2018-01-11 12:15 ` Christoffer Dall
2018-01-11 13:01   ` Punit Agrawal [this message]
2018-01-11 13:49     ` Christoffer Dall
2018-01-11 14:23       ` Punit Agrawal
2018-01-11 14:25         ` Christoffer Dall
