From: Marc Zyngier <maz@kernel.org>
To: Alexandru Elisei <alexandru.elisei@arm.com>
Cc: julien.thierry.kdev@gmail.com, james.morse@arm.com,
kvmarm@lists.cs.columbia.edu,
linux-arm-kernel@lists.infradead.org, suzuki.poulose@arm.com
Subject: Re: [PATCH 2/2] KVM: arm64: Try PMD block mappings if PUD mappings are not supported
Date: Tue, 08 Sep 2020 13:41:27 +0100 [thread overview]
Message-ID: <1997ea2bb47f81dcc689489dba596e2d@kernel.org> (raw)
In-Reply-To: <4002ad2c-59a2-00a6-2bb5-797cf62763c9@arm.com>
On 2020-09-08 13:23, Alexandru Elisei wrote:
> Hi Marc,
>
> On 9/4/20 10:58 AM, Marc Zyngier wrote:
>> Hi Alex,
>>
>> On Tue, 01 Sep 2020 14:33:57 +0100,
>> Alexandru Elisei <alexandru.elisei@arm.com> wrote:
>>> When userspace uses hugetlbfs for the VM memory, user_mem_abort()
>>> tries to use the same block size to map the faulting IPA in stage 2.
>>> If stage 2 cannot use the same size mapping because the block size
>>> doesn't fit in the memslot or the memslot is not properly aligned,
>>> user_mem_abort() will fall back to a page mapping, regardless of the
>>> block size. We can do better for PUD backed hugetlbfs by checking if
>>> a PMD block mapping is possible before deciding to use a page.
>>>
>>> vma_pagesize is an unsigned long, use 1UL instead of 1ULL when
>>> assigning its value.
>>>
>>> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
>>> ---
>>> arch/arm64/kvm/mmu.c | 19 ++++++++++++++-----
>>> 1 file changed, 14 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>>> index 25e7dc52c086..f590f7355cda 100644
>>> --- a/arch/arm64/kvm/mmu.c
>>> +++ b/arch/arm64/kvm/mmu.c
>>> @@ -1871,15 +1871,24 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>>  	else
>>>  		vma_shift = PAGE_SHIFT;
>>>
>>> -	vma_pagesize = 1ULL << vma_shift;
>>>  	if (logging_active ||
>>> -	    (vma->vm_flags & VM_PFNMAP) ||
>>> -	    !fault_supports_stage2_huge_mapping(memslot, hva, vma_pagesize)) {
>>> +	    (vma->vm_flags & VM_PFNMAP)) {
>>>  		force_pte = true;
>>> -		vma_pagesize = PAGE_SIZE;
>>>  		vma_shift = PAGE_SHIFT;
>>>  	}
>>>
>>> +	if (vma_shift == PUD_SHIFT &&
>>> +	    !fault_supports_stage2_huge_mapping(memslot, hva, PUD_SIZE))
>>> +		vma_shift = PMD_SHIFT;
>>> +
>>> +	if (vma_shift == PMD_SHIFT &&
>>> +	    !fault_supports_stage2_huge_mapping(memslot, hva, PMD_SIZE)) {
>>> +		force_pte = true;
>>> +		vma_shift = PAGE_SHIFT;
>>> +	}
>>> +
>>> +	vma_pagesize = 1UL << vma_shift;
>>> +
>>>  	/*
>>>  	 * The stage2 has a minimum of 2 level table (For arm64 see
>>>  	 * kvm_arm_setup_stage2()). Hence, we are guaranteed that we can
>>> @@ -1889,7 +1898,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>>  	 */
>>>  	if (vma_pagesize == PMD_SIZE ||
>>>  	    (vma_pagesize == PUD_SIZE && kvm_stage2_has_pmd(kvm)))
>>> -		gfn = (fault_ipa & huge_page_mask(hstate_vma(vma))) >> PAGE_SHIFT;
>>> +		gfn = (fault_ipa & ~(vma_pagesize - 1)) >> PAGE_SHIFT;
>>>  	mmap_read_unlock(current->mm);
>>>
>>>  	/* We need minimum second+third level pages */
>> Although this looks like a sensible change, I'm reluctant to take it
>> at this stage, given that we already have a bunch of patches from Will
>> to change the way we deal with PTs.
>>
>> Could you look into how this could fit into the new code instead?
>
> Sure, that sounds very sensible. I'm in the process of reviewing Will's
> series, and after I'm done I'll rebase this on top of his patches and
> send it as v2. Does that sound ok to you? Or do you want me to base
> this patch on one of your branches?
Either way is fine (kvmarm/next has his patches). Just let me know
what this is based on when you post the patches.
M.
--
Jazz is not dead. It just smells funny...
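[Editorial sketch] For readers without the kernel sources at hand, the stepwise fallback the patch introduces (PUD block -> PMD block -> single page) can be illustrated as ordinary user-space C. This is a toy model, not kernel code: block_fits() is a hypothetical, deliberately simplified stand-in for fault_supports_stage2_huge_mapping() (the real check also compares the hva and IPA offsets within the memslot), the MY_* shift constants assume a 4K page configuration, and pick_vma_pagesize() and its memslot_base/memslot_size parameters exist only for this example.

#include <stdbool.h>
#include <stdio.h>

/* Shift values for a 4K page configuration (assumption for this sketch). */
#define MY_PAGE_SHIFT 12UL
#define MY_PMD_SHIFT  21UL
#define MY_PUD_SHIFT  30UL

/*
 * Hypothetical stand-in for fault_supports_stage2_huge_mapping(): a block
 * mapping is only usable if the block containing hva is aligned to the
 * block size and lies entirely inside the memslot.
 */
static bool block_fits(unsigned long hva, unsigned long memslot_base,
		       unsigned long memslot_size, unsigned long block_size)
{
	unsigned long start = hva & ~(block_size - 1);

	return start >= memslot_base &&
	       start + block_size <= memslot_base + memslot_size;
}

/* Stepwise fallback: try a PUD block, then a PMD block, then a page. */
static unsigned long pick_vma_pagesize(unsigned long vma_shift,
				       unsigned long hva,
				       unsigned long memslot_base,
				       unsigned long memslot_size)
{
	if (vma_shift == MY_PUD_SHIFT &&
	    !block_fits(hva, memslot_base, memslot_size, 1UL << MY_PUD_SHIFT))
		vma_shift = MY_PMD_SHIFT;

	if (vma_shift == MY_PMD_SHIFT &&
	    !block_fits(hva, memslot_base, memslot_size, 1UL << MY_PMD_SHIFT))
		vma_shift = MY_PAGE_SHIFT;

	return 1UL << vma_shift;
}

int main(void)
{
	/*
	 * A 1GiB hugetlbfs-backed VMA, but the memslot only covers 512MiB:
	 * a 1GiB block cannot be used, yet 2MiB blocks still can.
	 */
	unsigned long size = pick_vma_pagesize(MY_PUD_SHIFT,
					       0x40000000UL,  /* hva          */
					       0x40000000UL,  /* memslot base */
					       512UL << 20);  /* memslot size */

	printf("mapping size: %lu MiB\n", size >> 20);
	return 0;
}

Compiled and run, this prints "mapping size: 2 MiB": before the patch the same situation collapsed straight to 4K pages, which is the behaviour the commit message describes improving.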
Thread overview: 11+ messages
2020-09-01 13:33 [PATCH 0/2] KVM: arm64: user_mem_abort() improvements Alexandru Elisei
2020-09-01 13:33 ` [PATCH 1/2] KVM: arm64: Update page shift if stage 2 block mapping not supported Alexandru Elisei
2020-09-02 0:57 ` Gavin Shan
2020-09-01 13:33 ` [PATCH 2/2] KVM: arm64: Try PMD block mappings if PUD mappings are " Alexandru Elisei
2020-09-02 1:23 ` Gavin Shan
2020-09-02 9:01 ` Alexandru Elisei
2020-09-03 0:06 ` Gavin Shan
2020-09-04 9:58 ` Marc Zyngier
2020-09-08 12:23 ` Alexandru Elisei
2020-09-08 12:41 ` Marc Zyngier [this message]
2020-09-04 10:18 ` [PATCH 0/2] KVM: arm64: user_mem_abort() improvements Marc Zyngier