Subject: Re: [PATCH 1/5] KVM: arm64: Walk userspace page tables to compute the THP mapping size
To: Marc Zyngier, linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org,
 kvmarm@lists.cs.columbia.edu, linux-mm@kvack.org
Cc: Sean Christopherson, Matthew Wilcox, Will Deacon, Quentin Perret,
 James Morse, Suzuki K Poulose, Alexandru Elisei, kernel-team@android.com
References: <20210717095541.1486210-1-maz@kernel.org> <20210717095541.1486210-2-maz@kernel.org>
From: Paolo Bonzini
Date: Mon, 19 Jul 2021 08:31:30 +0200
In-Reply-To: <20210717095541.1486210-2-maz@kernel.org>

On 17/07/21 11:55, Marc Zyngier wrote:
> We currently rely on the kvm_is_transparent_hugepage() helper to
> discover whether a given page has the potential to be mapped as
> a block mapping.
>
> However, this API doesn't really give us everything we want:
> - we don't get the size: this is not crucial today as we only
>   support PMD-sized THPs, but we'd like to have larger sizes
>   in the future
> - we're the only user left of the API, and there is a will
>   to remove it altogether
>
> To address the above, implement a simple walker using the existing
> page table infrastructure, and plumb it into transparent_hugepage_adjust().
> No new page sizes are supported in the process.
>
> Signed-off-by: Marc Zyngier

If it's okay for you to reuse the KVM page walker that's fine of course,
but the arch/x86/mm functions lookup_address_in_{mm,pgd} are mostly
machine-independent and it may make sense to move them to mm/.  That
would also allow reusing the x86 function host_pfn_mapping_level.
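To make the suggestion concrete, here is a rough sketch of what the arm64
helper could collapse to if lookup_address_in_mm() and the
page_level_size()/PG_LEVEL_* machinery (both x86-only today) were moved
into common code with arch-neutral level reporting.  Purely illustrative,
not a tested implementation:

	/*
	 * Illustrative sketch only: assumes lookup_address_in_mm() and
	 * page_level_size() have been moved out of arch/x86 into mm/.
	 * Neither is available on arm64 as of this series.
	 */
	static unsigned long get_user_mapping_size(struct kvm *kvm, u64 addr)
	{
		unsigned int level;
		pte_t *ptep;

		ptep = lookup_address_in_mm(kvm->mm, addr, &level);
		if (!ptep || pte_none(*ptep))
			return PAGE_SIZE;	/* nothing mapped yet */

		/* e.g. 2MiB for a PMD-level leaf, 1GiB for a PUD-level one */
		return page_level_size(level);
	}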
Paolo

> ---
>  arch/arm64/kvm/mmu.c | 46 ++++++++++++++++++++++++++++++++++++++++----
>  1 file changed, 42 insertions(+), 4 deletions(-)
>
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 3155c9e778f0..db6314b93e99 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -433,6 +433,44 @@ int create_hyp_exec_mappings(phys_addr_t phys_addr, size_t size,
>  	return 0;
>  }
>
> +static struct kvm_pgtable_mm_ops kvm_user_mm_ops = {
> +	/* We shouldn't need any other callback to walk the PT */
> +	.phys_to_virt		= kvm_host_va,
> +};
> +
> +struct user_walk_data {
> +	u32	level;
> +};
> +
> +static int user_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
> +		       enum kvm_pgtable_walk_flags flag, void * const arg)
> +{
> +	struct user_walk_data *data = arg;
> +
> +	data->level = level;
> +	return 0;
> +}
> +
> +static int get_user_mapping_size(struct kvm *kvm, u64 addr)
> +{
> +	struct user_walk_data data;
> +	struct kvm_pgtable pgt = {
> +		.pgd		= (kvm_pte_t *)kvm->mm->pgd,
> +		.ia_bits	= VA_BITS,
> +		.start_level	= 4 - CONFIG_PGTABLE_LEVELS,
> +		.mm_ops		= &kvm_user_mm_ops,
> +	};
> +	struct kvm_pgtable_walker walker = {
> +		.cb		= user_walker,
> +		.flags		= KVM_PGTABLE_WALK_LEAF,
> +		.arg		= &data,
> +	};
> +
> +	kvm_pgtable_walk(&pgt, ALIGN_DOWN(addr, PAGE_SIZE), PAGE_SIZE, &walker);
> +
> +	return BIT(ARM64_HW_PGTABLE_LEVEL_SHIFT(data.level));
> +}
> +
>  static struct kvm_pgtable_mm_ops kvm_s2_mm_ops = {
>  	.zalloc_page		= stage2_memcache_zalloc_page,
>  	.zalloc_pages_exact	= kvm_host_zalloc_pages_exact,
> @@ -780,7 +818,7 @@ static bool fault_supports_stage2_huge_mapping(struct kvm_memory_slot *memslot,
>   * Returns the size of the mapping.
>   */
>  static unsigned long
> -transparent_hugepage_adjust(struct kvm_memory_slot *memslot,
> +transparent_hugepage_adjust(struct kvm *kvm, struct kvm_memory_slot *memslot,
>  			    unsigned long hva, kvm_pfn_t *pfnp,
>  			    phys_addr_t *ipap)
>  {
> @@ -791,8 +829,8 @@ transparent_hugepage_adjust(struct kvm_memory_slot *memslot,
>  	 * sure that the HVA and IPA are sufficiently aligned and that the
>  	 * block map is contained within the memslot.
>  	 */
> -	if (kvm_is_transparent_hugepage(pfn) &&
> -	    fault_supports_stage2_huge_mapping(memslot, hva, PMD_SIZE)) {
> +	if (fault_supports_stage2_huge_mapping(memslot, hva, PMD_SIZE) &&
> +	    get_user_mapping_size(kvm, hva) >= PMD_SIZE) {
>  		/*
>  		 * The address we faulted on is backed by a transparent huge
>  		 * page. However, because we map the compound huge page and
> @@ -1051,7 +1089,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	 * backed by a THP and thus use block mapping if possible.
>  	 */
>  	if (vma_pagesize == PAGE_SIZE && !(force_pte || device))
> -		vma_pagesize = transparent_hugepage_adjust(memslot, hva,
> +		vma_pagesize = transparent_hugepage_adjust(kvm, memslot, hva,
>  							   &pfn, &fault_ipa);
>
>  	if (fault_status != FSC_PERM && !device && kvm_has_mte(kvm)) {
>
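As a quick sanity check of the size computation above (assuming a typical
4KiB-granule, 4-level host, i.e. CONFIG_PGTABLE_LEVELS=4, so the walk
starts at level 0): a THP-backed address ends in a level-2 leaf, and

	ARM64_HW_PGTABLE_LEVEL_SHIFT(2) = (PAGE_SHIFT - 3) * (4 - 2) + 3
	                                = 9 * 2 + 3 = 21

so get_user_mapping_size() returns BIT(21) = 2MiB = PMD_SIZE and the new
">= PMD_SIZE" test passes.  An ordinary page ends in a level-3 leaf and
yields BIT(12) = PAGE_SIZE, so the block-mapping adjustment is skipped,
matching what kvm_is_transparent_hugepage() used to decide.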