Date: Tue, 25 Feb 2020 12:12:19 -0800
From: Kees Cook
To: Yu-cheng Yu
Cc: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
    linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org,
    Arnd Bergmann, Andy Lutomirski, Balbir Singh, Borislav Petkov,
    Cyrill Gorcunov, Dave Hansen, Eugene Syromiatnikov, Florian Weimer,
    "H.J. Lu", Jann Horn, Jonathan Corbet, Mike Kravetz, Nadav Amit,
    Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
    "Ravi V. Shankar", Vedvyas Shanbhogue, Dave Martin,
    x86-patch-review@intel.com
Subject: Re: [RFC PATCH v9 08/27] x86/mm: Change _PAGE_DIRTY to _PAGE_DIRTY_HW
Message-ID: <202002251212.6A0968F@keescook>
References: <20200205181935.3712-1-yu-cheng.yu@intel.com>
 <20200205181935.3712-9-yu-cheng.yu@intel.com>
In-Reply-To: <20200205181935.3712-9-yu-cheng.yu@intel.com>

On Wed, Feb 05, 2020 at 10:19:16AM -0800, Yu-cheng Yu wrote:
> Before introducing _PAGE_DIRTY_SW for non-hardware memory management
> purposes in the next patch, rename _PAGE_DIRTY to _PAGE_DIRTY_HW and
> _PAGE_BIT_DIRTY to _PAGE_BIT_DIRTY_HW to make these PTE dirty bits
> more clear.  There are no functional changes from this patch.
>
> v9:
> - In some places _PAGE_DIRTY was not changed to _PAGE_DIRTY_HW, because
>   it will be changed again in the next patch to _PAGE_DIRTY_BITS.
>   However, this causes compile issues if the next patch is not yet
>   applied.  Fix it by changing all _PAGE_DIRTY to _PAGE_DIRTY_HW.
>
> Signed-off-by: Yu-cheng Yu

Reviewed-by: Kees Cook

-Kees

> ---
>  arch/x86/include/asm/pgtable.h       | 18 +++++++++---------
>  arch/x86/include/asm/pgtable_types.h | 17 +++++++++--------
>  arch/x86/kernel/relocate_kernel_64.S |  2 +-
>  arch/x86/kvm/vmx/vmx.c               |  2 +-
>  4 files changed, 20 insertions(+), 19 deletions(-)
>
> diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
> index ad97dc155195..ab50d25f9afc 100644
> --- a/arch/x86/include/asm/pgtable.h
> +++ b/arch/x86/include/asm/pgtable.h
> @@ -122,7 +122,7 @@ extern pmdval_t early_pmd_flags;
>   */
>  static inline int pte_dirty(pte_t pte)
>  {
> -        return pte_flags(pte) & _PAGE_DIRTY;
> +        return pte_flags(pte) & _PAGE_DIRTY_HW;
>  }
>
> @@ -161,7 +161,7 @@ static inline int pte_young(pte_t pte)
>
>  static inline int pmd_dirty(pmd_t pmd)
>  {
> -        return pmd_flags(pmd) & _PAGE_DIRTY;
> +        return pmd_flags(pmd) & _PAGE_DIRTY_HW;
>  }
>
>  static inline int pmd_young(pmd_t pmd)
> @@ -171,7 +171,7 @@ static inline int pmd_young(pmd_t pmd)
>
>  static inline int pud_dirty(pud_t pud)
>  {
> -        return pud_flags(pud) & _PAGE_DIRTY;
> +        return pud_flags(pud) & _PAGE_DIRTY_HW;
>  }
>
>  static inline int pud_young(pud_t pud)
> @@ -312,7 +312,7 @@ static inline pte_t pte_clear_flags(pte_t pte, pteval_t clear)
>
>  static inline pte_t pte_mkclean(pte_t pte)
>  {
> -        return pte_clear_flags(pte, _PAGE_DIRTY);
> +        return pte_clear_flags(pte, _PAGE_DIRTY_HW);
>  }
>
>  static inline pte_t pte_mkold(pte_t pte)
> @@ -332,7 +332,7 @@ static inline pte_t pte_mkexec(pte_t pte)
>
>  static inline pte_t pte_mkdirty(pte_t pte)
>  {
> -        return pte_set_flags(pte, _PAGE_DIRTY | _PAGE_SOFT_DIRTY);
> +        return pte_set_flags(pte, _PAGE_DIRTY_HW | _PAGE_SOFT_DIRTY);
>  }
>
>  static inline pte_t pte_mkyoung(pte_t pte)
> @@ -396,7 +396,7 @@ static inline pmd_t pmd_mkold(pmd_t pmd)
>
>  static inline pmd_t pmd_mkclean(pmd_t pmd)
>  {
> -        return pmd_clear_flags(pmd, _PAGE_DIRTY);
> +        return pmd_clear_flags(pmd, _PAGE_DIRTY_HW);
>  }
>
>  static inline pmd_t pmd_wrprotect(pmd_t pmd)
> @@ -406,7 +406,7 @@ static inline pmd_t pmd_wrprotect(pmd_t pmd)
>
>  static inline pmd_t pmd_mkdirty(pmd_t pmd)
>  {
> -        return pmd_set_flags(pmd, _PAGE_DIRTY | _PAGE_SOFT_DIRTY);
> +        return pmd_set_flags(pmd, _PAGE_DIRTY_HW | _PAGE_SOFT_DIRTY);
>  }
>
>  static inline pmd_t pmd_mkdevmap(pmd_t pmd)
> @@ -450,7 +450,7 @@ static inline pud_t pud_mkold(pud_t pud)
>
>  static inline pud_t pud_mkclean(pud_t pud)
>  {
> -        return pud_clear_flags(pud, _PAGE_DIRTY);
> +        return pud_clear_flags(pud, _PAGE_DIRTY_HW);
>  }
>
>  static inline pud_t pud_wrprotect(pud_t pud)
> @@ -460,7 +460,7 @@ static inline pud_t pud_wrprotect(pud_t pud)
>
>  static inline pud_t pud_mkdirty(pud_t pud)
>  {
> -        return pud_set_flags(pud, _PAGE_DIRTY | _PAGE_SOFT_DIRTY);
> +        return pud_set_flags(pud, _PAGE_DIRTY_HW | _PAGE_SOFT_DIRTY);
>  }
>
>  static inline pud_t pud_mkdevmap(pud_t pud)
> diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
> index b5e49e6bac63..e647e3c75578 100644
> --- a/arch/x86/include/asm/pgtable_types.h
> +++ b/arch/x86/include/asm/pgtable_types.h
> @@ -15,7 +15,7 @@
>  #define _PAGE_BIT_PWT          3   /* page write through */
>  #define _PAGE_BIT_PCD          4   /* page cache disabled */
>  #define _PAGE_BIT_ACCESSED     5   /* was accessed (raised by CPU) */
> -#define _PAGE_BIT_DIRTY        6   /* was written to (raised by CPU) */
> +#define _PAGE_BIT_DIRTY_HW     6   /* was written to (raised by CPU) */
>  #define _PAGE_BIT_PSE          7   /* 4 MB (or 2MB) page */
>  #define _PAGE_BIT_PAT          7   /* on 4KB pages */
>  #define _PAGE_BIT_GLOBAL       8   /* Global TLB entry PPro+ */
> @@ -45,7 +45,7 @@
>  #define _PAGE_PWT       (_AT(pteval_t, 1) << _PAGE_BIT_PWT)
>  #define _PAGE_PCD       (_AT(pteval_t, 1) << _PAGE_BIT_PCD)
>  #define _PAGE_ACCESSED  (_AT(pteval_t, 1) << _PAGE_BIT_ACCESSED)
> -#define _PAGE_DIRTY     (_AT(pteval_t, 1) << _PAGE_BIT_DIRTY)
> +#define _PAGE_DIRTY_HW  (_AT(pteval_t, 1) << _PAGE_BIT_DIRTY_HW)
>  #define _PAGE_PSE       (_AT(pteval_t, 1) << _PAGE_BIT_PSE)
>  #define _PAGE_GLOBAL    (_AT(pteval_t, 1) << _PAGE_BIT_GLOBAL)
>  #define _PAGE_SOFTW1    (_AT(pteval_t, 1) << _PAGE_BIT_SOFTW1)
> @@ -73,7 +73,7 @@
>                           _PAGE_PKEY_BIT3)
>
>  #if defined(CONFIG_X86_64) || defined(CONFIG_X86_PAE)
> -#define _PAGE_KNL_ERRATUM_MASK (_PAGE_DIRTY | _PAGE_ACCESSED)
> +#define _PAGE_KNL_ERRATUM_MASK (_PAGE_DIRTY_HW | _PAGE_ACCESSED)
>  #else
>  #define _PAGE_KNL_ERRATUM_MASK 0
>  #endif
> @@ -111,9 +111,9 @@
>  #define _PAGE_PROTNONE  (_AT(pteval_t, 1) << _PAGE_BIT_PROTNONE)
>
>  #define _PAGE_TABLE_NOENC       (_PAGE_PRESENT | _PAGE_RW | _PAGE_USER |\
> -                                 _PAGE_ACCESSED | _PAGE_DIRTY)
> +                                 _PAGE_ACCESSED | _PAGE_DIRTY_HW)
>  #define _KERNPG_TABLE_NOENC     (_PAGE_PRESENT | _PAGE_RW |            \
> -                                 _PAGE_ACCESSED | _PAGE_DIRTY)
> +                                 _PAGE_ACCESSED | _PAGE_DIRTY_HW)
>
>  /*
>   * Set of bits not changed in pte_modify.  The pte's
> @@ -122,7 +122,7 @@
>   * pte_modify() does modify it.
>   */
>  #define _PAGE_CHG_MASK  (PTE_PFN_MASK | _PAGE_PCD | _PAGE_PWT |        \
> -                         _PAGE_SPECIAL | _PAGE_ACCESSED | _PAGE_DIRTY |        \
> +                         _PAGE_SPECIAL | _PAGE_ACCESSED | _PAGE_DIRTY_HW | \
>                           _PAGE_SOFT_DIRTY | _PAGE_DEVMAP)
>  #define _HPAGE_CHG_MASK (_PAGE_CHG_MASK | _PAGE_PSE)
>
> @@ -167,7 +167,8 @@ enum page_cache_mode {
>                           _PAGE_ACCESSED)
>
>  #define __PAGE_KERNEL_EXEC                                              \
> -        (_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_GLOBAL)
> +        (_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY_HW | _PAGE_ACCESSED |  \
> +         _PAGE_GLOBAL)
>  #define __PAGE_KERNEL           (__PAGE_KERNEL_EXEC | _PAGE_NX)
>
>  #define __PAGE_KERNEL_RO        (__PAGE_KERNEL & ~_PAGE_RW)
> @@ -186,7 +187,7 @@ enum page_cache_mode {
>  #define _PAGE_ENC       (_AT(pteval_t, sme_me_mask))
>
>  #define _KERNPG_TABLE   (_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED |   \
> -                         _PAGE_DIRTY | _PAGE_ENC)
> +                         _PAGE_DIRTY_HW | _PAGE_ENC)
>  #define _PAGE_TABLE     (_KERNPG_TABLE | _PAGE_USER)
>
>  #define __PAGE_KERNEL_ENC       (__PAGE_KERNEL | _PAGE_ENC)
> diff --git a/arch/x86/kernel/relocate_kernel_64.S b/arch/x86/kernel/relocate_kernel_64.S
> index ef3ba99068d3..3acd75f97b61 100644
> --- a/arch/x86/kernel/relocate_kernel_64.S
> +++ b/arch/x86/kernel/relocate_kernel_64.S
> @@ -15,7 +15,7 @@
>   */
>
>  #define PTR(x) (x << 3)
> -#define PAGE_ATTR (_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED | _PAGE_DIRTY)
> +#define PAGE_ATTR (_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED | _PAGE_DIRTY_HW)
>
>  /*
>   * control_page + KEXEC_CONTROL_CODE_MAX_SIZE
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index e3394c839dea..fbbbf621b0d9 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -3503,7 +3503,7 @@ static int init_rmode_identity_map(struct kvm *kvm)
>          /* Set up identity-mapping pagetable for EPT in real mode */
>          for (i = 0; i < PT32_ENT_PER_PAGE; i++) {
>                  tmp = (i << 22) + (_PAGE_PRESENT | _PAGE_RW | _PAGE_USER |
> -                        _PAGE_ACCESSED | _PAGE_DIRTY | _PAGE_PSE);
> +                        _PAGE_ACCESSED | _PAGE_DIRTY_HW | _PAGE_PSE);
>                  r = kvm_write_guest_page(kvm, identity_map_pfn,
>                                  &tmp, i * sizeof(tmp), sizeof(tmp));
>                  if (r < 0)
> --
> 2.21.0
>

--
Kees Cook
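
For context on what the rename amounts to in practice, below is a minimal,
self-contained C sketch of the bit-index/bit-mask pattern touched by the
quoted diff. The pteval_t, _AT(), _PAGE_BIT_DIRTY_HW, and _PAGE_DIRTY_HW
shapes and the pte_dirty()/pte_mkclean() helpers mirror the patch; the
standalone typedef, the main() driver, and the sample PTE value are
illustrative assumptions, not kernel code.

/*
 * Minimal sketch (not kernel code): the hardware dirty bit is defined once
 * as a bit index and once as a mask derived from it, and helpers test or
 * clear that mask, which is why the patch renames both names together.
 */
#include <stdint.h>
#include <stdio.h>

typedef uint64_t pteval_t;                 /* assumption: stand-in for the kernel type */

#define _AT(T, x)           ((T)(x))
#define _PAGE_BIT_DIRTY_HW  6              /* was written to (raised by CPU) */
#define _PAGE_DIRTY_HW      (_AT(pteval_t, 1) << _PAGE_BIT_DIRTY_HW)

/* Same shape as pte_dirty()/pte_mkclean() in the quoted diff. */
static inline int pte_dirty(pteval_t pte)
{
        return (pte & _PAGE_DIRTY_HW) != 0;
}

static inline pteval_t pte_mkclean(pteval_t pte)
{
        return pte & ~_PAGE_DIRTY_HW;
}

int main(void)
{
        pteval_t pte = _PAGE_DIRTY_HW | 0x1;   /* illustrative value: present + hw-dirty */

        printf("dirty=%d\n", pte_dirty(pte));                            /* prints 1 */
        printf("dirty after mkclean=%d\n", pte_dirty(pte_mkclean(pte))); /* prints 0 */
        return 0;
}

Compiled and run, this prints dirty=1 and then dirty after mkclean=0, i.e.
the behavior the renamed helpers keep unchanged.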