From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 6 Feb 2018 15:55:08 +0100
From: Christoffer Dall
To: Punit Agrawal
Cc: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, suzuki.poulose@arm.com, Marc Zyngier
Subject: Re: [RFC 2/4] KVM: arm64: Support dirty page tracking for PUD hugepages
Message-ID: <20180206145508.GC23160@cbox>
References: <20180110190729.18383-1-punit.agrawal@arm.com>
 <20180110190729.18383-3-punit.agrawal@arm.com>
In-Reply-To: <20180110190729.18383-3-punit.agrawal@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Jan 10, 2018 at 07:07:27PM +0000, Punit Agrawal wrote:
> In preparation for creating PUD hugepages at stage 2, add support for
> write protecting PUD hugepages when they are encountered. Write
> protecting guest tables is used to track dirty pages when migrating VMs.
>
> Also, provide trivial implementations of required kvm_s2pud_* helpers to
> allow code to compile on arm32.
>
> Signed-off-by: Punit Agrawal
> Cc: Christoffer Dall
> Cc: Marc Zyngier
> ---
>  arch/arm/include/asm/kvm_mmu.h   |  9 +++++++++
>  arch/arm64/include/asm/kvm_mmu.h | 10 ++++++++++
>  virt/kvm/arm/mmu.c               |  9 ++++++---
>  3 files changed, 25 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
> index fa6f2174276b..3fbe919b9181 100644
> --- a/arch/arm/include/asm/kvm_mmu.h
> +++ b/arch/arm/include/asm/kvm_mmu.h
> @@ -103,6 +103,15 @@ static inline bool kvm_s2pmd_readonly(pmd_t *pmd)
>  	return (pmd_val(*pmd) & L_PMD_S2_RDWR) == L_PMD_S2_RDONLY;
>  }
>
> +static inline void kvm_set_s2pud_readonly(pud_t *pud)
> +{
> +}
> +
> +static inline bool kvm_s2pud_readonly(pud_t *pud)
> +{
> +	return true;

Why true? Shouldn't this return the pgd's readonly value, strictly
speaking, or, if we rely on this never being called, have a VM_BUG_ON()?

In any case, a comment explaining why we unconditionally return true
would be nice.

> +}
> +
>  static inline bool kvm_page_empty(void *ptr)
>  {
>  	struct page *ptr_page = virt_to_page(ptr);
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index 672c8684d5c2..dbfd18e08cfb 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -201,6 +201,16 @@ static inline bool kvm_s2pmd_readonly(pmd_t *pmd)
>  	return kvm_s2pte_readonly((pte_t *)pmd);
>  }
>
> +static inline void kvm_set_s2pud_readonly(pud_t *pud)
> +{
> +	kvm_set_s2pte_readonly((pte_t *)pud);
> +}
> +
> +static inline bool kvm_s2pud_readonly(pud_t *pud)
> +{
> +	return kvm_s2pte_readonly((pte_t *)pud);
> +}
> +
>  static inline bool kvm_page_empty(void *ptr)
>  {
>  	struct page *ptr_page = virt_to_page(ptr);
> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> index 9dea96380339..02eefda5d71e 100644
> --- a/virt/kvm/arm/mmu.c
> +++ b/virt/kvm/arm/mmu.c
> @@ -1155,9 +1155,12 @@ static void stage2_wp_puds(pgd_t *pgd, phys_addr_t addr, phys_addr_t end)
>  	do {
>  		next = stage2_pud_addr_end(addr, end);
>  		if (!stage2_pud_none(*pud)) {
> -			/* TODO:PUD not supported, revisit later if supported */
> -			BUG_ON(stage2_pud_huge(*pud));
> -			stage2_wp_pmds(pud, addr, next);
> +			if (stage2_pud_huge(*pud)) {
> +				if (!kvm_s2pud_readonly(pud))
> +					kvm_set_s2pud_readonly(pud);
> +			} else {
> +				stage2_wp_pmds(pud, addr, next);
> +			}
>  		}
>  	} while (pud++, addr = next, addr != end);
>  }
> --
> 2.15.1
>

Otherwise:

Reviewed-by: Christoffer Dall