Date: Mon, 10 Dec 2018 09:56:16 +0100
From: Christoffer Dall
To: Suzuki K Poulose
Subject: Re: [PATCH v9 1/8] KVM: arm/arm64: Share common code in user_mem_abort()
Message-ID: <20181210085616.GB30263@e113682-lin.lund.arm.com>
References: <20181031175745.18650-1-punit.agrawal@arm.com>
 <20181031175745.18650-2-punit.agrawal@arm.com>
 <8fd34e5f-7d75-4de2-3fee-d6d70805685c@arm.com>
In-Reply-To: <8fd34e5f-7d75-4de2-3fee-d6d70805685c@arm.com>
Cc: Anshuman Khandual, marc.zyngier@arm.com, will.deacon@arm.com,
 linux-kernel@vger.kernel.org, punitagrawal@gmail.com,
 kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org

On Mon, Dec 03, 2018 at 01:37:37PM +0000, Suzuki K Poulose wrote:
> Hi Anshuman,
>
> On 03/12/2018 12:11, Anshuman Khandual wrote:
> > On 10/31/2018 11:27 PM, Punit Agrawal wrote:
> >> The code for operations such as marking the pfn as dirty, and
> >> dcache/icache maintenance during stage 2 fault handling is duplicated
> >> between normal pages and PMD hugepages.
> >>
> >> Instead of creating another copy of the operations when we introduce
> >> PUD hugepages, let's share them across the different pagesizes.
> >>
> >> Signed-off-by: Punit Agrawal
> >> Reviewed-by: Suzuki K Poulose
> >> Cc: Christoffer Dall
> >> Cc: Marc Zyngier
> >> ---
> >>  virt/kvm/arm/mmu.c | 49 ++++++++++++++++++++++++++++------------------
> >>  1 file changed, 30 insertions(+), 19 deletions(-)
> >>
> >> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> >> index 5eca48bdb1a6..59595207c5e1 100644
> >> --- a/virt/kvm/arm/mmu.c
> >> +++ b/virt/kvm/arm/mmu.c
> >> @@ -1475,7 +1475,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> >>  			  unsigned long fault_status)
> >>  {
> >>  	int ret;
> >> -	bool write_fault, exec_fault, writable, hugetlb = false, force_pte = false;
> >> +	bool write_fault, exec_fault, writable, force_pte = false;
> >>  	unsigned long mmu_seq;
> >>  	gfn_t gfn = fault_ipa >> PAGE_SHIFT;
> >>  	struct kvm *kvm = vcpu->kvm;
> >> @@ -1484,7 +1484,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> >>  	kvm_pfn_t pfn;
> >>  	pgprot_t mem_type = PAGE_S2;
> >>  	bool logging_active = memslot_is_logging(memslot);
> >> -	unsigned long flags = 0;
> >> +	unsigned long vma_pagesize, flags = 0;
> >
> > A small nit: s/vma_pagesize/pagesize. Why call it VMA? It's implicit.
>
> Maybe we could call it mapsize. pagesize is confusing.

I'm ok with mapsize. I see the vma_pagesize name coming from the fact
that this is initially set to the return value of vma_kernel_pagesize().
I have no problems with either vma_pagesize or mapsize.

> >>  	write_fault = kvm_is_write_fault(vcpu);
> >>  	exec_fault = kvm_vcpu_trap_is_iabt(vcpu);
> >> @@ -1504,10 +1504,16 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> >>  		return -EFAULT;
> >>  	}
> >>
> >> -	if (vma_kernel_pagesize(vma) == PMD_SIZE && !logging_active) {
> >> -		hugetlb = true;
> >> +	vma_pagesize = vma_kernel_pagesize(vma);
> >> +	if (vma_pagesize == PMD_SIZE && !logging_active) {
> >>  		gfn = (fault_ipa & PMD_MASK) >> PAGE_SHIFT;
> >>  	} else {
> >> +		/*
> >> +		 * Fallback to PTE if it's not one of the Stage 2
> >> +		 * supported hugepage sizes
> >> +		 */
> >> +		vma_pagesize = PAGE_SIZE;
> >
> > This seems redundant and should be dropped. vma_kernel_pagesize() here
> > either calls hugetlb_vm_op_pagesize (via hugetlb_vm_ops->pagesize) or
> > simply returns PAGE_SIZE. The vm_ops path is taken if the QEMU VMA
> > covering any given HVA is backed either by HugeTLB pages or simply
> > normal pages. vma_pagesize would either have a value of PMD_SIZE
> > (HugeTLB hstate based) or PAGE_SIZE. Hence if it's not PMD_SIZE it
> > must be PAGE_SIZE, which should not be assigned again.
>
> We may want to force using PTE mappings when logging_active (e.g.,
> migration?) to avoid having to track huge pages. So the check is
> still valid.

Agreed, and let's not additionally try to change the logic and flow
with this patch.
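As an aside, the point about logging can be modelled in userspace. This is purely an illustrative sketch, not the kernel code: the MODEL_* constants and the pick_map_size() helper are made up here to show why the PAGE_SIZE fallback is not redundant when dirty logging is active.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative constants only -- the real values come from kernel headers. */
#define MODEL_PAGE_SIZE (4096UL)
#define MODEL_PMD_SIZE  (2UL * 1024 * 1024)

/*
 * Userspace model of the size selection in user_mem_abort(): even when
 * the VMA is backed by PMD-sized hugetlb pages, active dirty logging
 * forces PTE-granularity mappings, so the explicit fallback assignment
 * is reachable and meaningful.
 */
static unsigned long pick_map_size(unsigned long vma_pagesize,
				   bool logging_active)
{
	if (vma_pagesize == MODEL_PMD_SIZE && !logging_active)
		return MODEL_PMD_SIZE;	/* map at hugepage granularity */
	return MODEL_PAGE_SIZE;		/* fall back to PTE mappings */
}
```

In other words, vma_kernel_pagesize() returning PMD_SIZE is not sufficient on its own; the logging_active test can still force the PTE path.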
> >> +
> >>  		/*
> >>  		 * Pages belonging to memslots that don't have the same
> >>  		 * alignment for userspace and IPA cannot be mapped using
> >> @@ -1573,23 +1579,33 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> >>  	if (mmu_notifier_retry(kvm, mmu_seq))
> >>  		goto out_unlock;
> >>
> >> -	if (!hugetlb && !force_pte)
> >> -		hugetlb = transparent_hugepage_adjust(&pfn, &fault_ipa);
> >> +	if (vma_pagesize == PAGE_SIZE && !force_pte) {
> >> +		/*
> >> +		 * Only PMD_SIZE transparent hugepages(THP) are
> >> +		 * currently supported. This code will need to be
> >> +		 * updated to support other THP sizes.
> >> +		 */
> >
> > This comment belongs in transparent_hugepage_adjust(), not here.
>
> I think this is more relevant here than in thp_adjust, unless we rename
> the function below to something generic, e.g. handle_hugepage_adjust().

Agreed.

> >> +		if (transparent_hugepage_adjust(&pfn, &fault_ipa))
> >> +			vma_pagesize = PMD_SIZE;
> >
> > IIUC transparent_hugepage_adjust() is only getting called here. Instead
> > of returning 'true' when it is able to detect a huge page backing and
> > doing an adjustment thereafter, it should rather return the THP size
> > (PMD_SIZE) to accommodate probable multi-size THP support in future.
>
> That makes sense.

That's fine.
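For what it's worth, Anshuman's return-a-size suggestion could take a shape like the following. This is a hypothetical userspace sketch, not actual kernel API: model_thp_adjust() and its bool parameter are invented for illustration (the real function inspects the pfn's backing page), and the MODEL_* constants stand in for the kernel's.

```c
#include <assert.h>
#include <stdbool.h>

#define MODEL_PAGE_SIZE (4096UL)
#define MODEL_PMD_SIZE  (2UL * 1024 * 1024)

/*
 * Hypothetical reworked helper: instead of returning true/false, it
 * returns the size it adjusted the mapping to, or 0 when no THP backing
 * was found. A future PUD-sized THP would then only change the helper's
 * return value, not the call site.
 */
static unsigned long model_thp_adjust(bool backed_by_thp)
{
	return backed_by_thp ? MODEL_PMD_SIZE : 0;
}

/* Call-site shape in user_mem_abort(): adopt whatever size is reported. */
static unsigned long adjust_map_size(unsigned long vma_pagesize,
				     bool backed_by_thp)
{
	unsigned long thp_size = model_thp_adjust(backed_by_thp);

	if (thp_size)
		vma_pagesize = thp_size;
	return vma_pagesize;
}
```

The call site then stays size-agnostic, which is the future-proofing being discussed.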
> >> +	}
> >> +
> >> +	if (writable)
> >> +		kvm_set_pfn_dirty(pfn);
>
> >> -	if (hugetlb) {
> >> +	if (fault_status != FSC_PERM)
> >> +		clean_dcache_guest_page(pfn, vma_pagesize);
> >> +
> >> +	if (exec_fault)
> >> +		invalidate_icache_guest_page(pfn, vma_pagesize);
> >> +
> >> +	if (vma_pagesize == PMD_SIZE) {
> >>  		pmd_t new_pmd = pfn_pmd(pfn, mem_type);
> >>  		new_pmd = pmd_mkhuge(new_pmd);
> >> -		if (writable) {
> >> +		if (writable)
> >>  			new_pmd = kvm_s2pmd_mkwrite(new_pmd);
> >> -			kvm_set_pfn_dirty(pfn);
> >> -		}
> >> -
> >> -		if (fault_status != FSC_PERM)
> >> -			clean_dcache_guest_page(pfn, PMD_SIZE);
> >>
> >>  		if (exec_fault) {
> >>  			new_pmd = kvm_s2pmd_mkexec(new_pmd);
> >> -			invalidate_icache_guest_page(pfn, PMD_SIZE);
> >>  		} else if (fault_status == FSC_PERM) {
> >>  			/* Preserve execute if XN was already cleared */
> >>  			if (stage2_is_exec(kvm, fault_ipa))
> >> @@ -1602,16 +1618,11 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> >>
> >>  		if (writable) {
> >>  			new_pte = kvm_s2pte_mkwrite(new_pte);
> >> -			kvm_set_pfn_dirty(pfn);
> >>  			mark_page_dirty(kvm, gfn);
> >>  		}
> >>
> >> -		if (fault_status != FSC_PERM)
> >> -			clean_dcache_guest_page(pfn, PAGE_SIZE);
> >> -
> >>  		if (exec_fault) {
> >>  			new_pte = kvm_s2pte_mkexec(new_pte);
> >> -			invalidate_icache_guest_page(pfn, PAGE_SIZE);
> >>  		} else if (fault_status == FSC_PERM) {
> >>  			/* Preserve execute if XN was already cleared */
> >>  			if (stage2_is_exec(kvm, fault_ipa))
> >
> > kvm_set_pfn_dirty, clean_dcache_guest_page, and
> > invalidate_icache_guest_page can all safely be moved before setting
> > the page table entries, either as PMD or PTE.
>
> I think this is what we do currently. So I assume this is fine.

Agreed, I don't understand the comment raised by Anshuman here.

Thanks,

Christoffer