Date: Tue, 25 Feb 2020 12:59:00 -0800
From: Kees Cook
To: Yu-cheng Yu
Cc: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
	linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-mm@kvack.org, linux-arch@vger.kernel.org,
	linux-api@vger.kernel.org, Arnd Bergmann, Andy Lutomirski,
	Balbir Singh, Borislav Petkov, Cyrill Gorcunov, Dave Hansen,
	Eugene Syromiatnikov, Florian Weimer, "H.J. Lu", Jann Horn,
	Jonathan Corbet, Mike Kravetz, Nadav Amit, Oleg Nesterov,
	Pavel Machek, Peter Zijlstra, Randy Dunlap, "Ravi V. Shankar",
	Vedvyas Shanbhogue, Dave Martin, x86-patch-review@intel.com
Subject: Re: [RFC PATCH v9 15/27] mm: Handle THP/HugeTLB Shadow Stack page fault
Message-ID: <202002251258.7D6DA92@keescook>
References: <20200205181935.3712-1-yu-cheng.yu@intel.com>
 <20200205181935.3712-16-yu-cheng.yu@intel.com>
In-Reply-To: <20200205181935.3712-16-yu-cheng.yu@intel.com>

On Wed, Feb 05, 2020 at 10:19:23AM -0800, Yu-cheng Yu wrote:
> This patch implements THP Shadow Stack (SHSTK) copying in the same way
> as in the previous patch for regular PTEs.
> 
> In copy_huge_pmd(), clear the dirty bit from the PMD to cause a page
> fault upon the next SHSTK access to the PMD. At that time, fix the PMD
> and copy/re-use the page.

Now is as good a time as any to ask: do you have selftests for all
this? It seems like it would be really nice to have a way to verify
SHSTK is working correctly.
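Even something small would be a good start. Just to sketch what I have
in mind (untested and purely illustrative: the file name is invented,
it assumes an SHSTK-capable CPU running a kernel with this series
applied, and it leans on RDSSP executing as a NOP, leaving its operand
untouched, when shadow stacks are off):

/* Hypothetical tools/testing/selftests/x86/shstk_fork.c sketch. */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static unsigned long read_ssp(void)
{
	unsigned long ssp = 0;

	/* Reads the shadow stack pointer; ssp stays 0 if SHSTK is off. */
	asm volatile("rdsspq %0" : "+r" (ssp));
	return ssp;
}

int main(void)
{
	int status;
	pid_t pid;

	if (!read_ssp()) {
		printf("SKIP: shadow stack not active\n");
		return 0;
	}

	/*
	 * fork() copies the shadow stack with the dirty bit cleared
	 * (as this series describes), so the child's first shadow
	 * stack use takes the fault path being added here. If the
	 * child can still make calls and read a sane SSP, the fault
	 * fixup worked.
	 */
	pid = fork();
	if (pid == 0)
		_exit(read_ssp() ? 0 : 1);

	waitpid(pid, &status, 0);
	if (WIFEXITED(status) && WEXITSTATUS(status) == 0) {
		printf("PASS: SHSTK survived fork()\n");
		return 0;
	}
	printf("FAIL: child lost its shadow stack\n");
	return 1;
}

(Caveats: building this needs an assembler that knows rdsspq, and
fork() only exercises the THP path if the shadow stack actually ends
up THP-backed, so treat it as a smoke test rather than real coverage.)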
-Kees

> 
> Signed-off-by: Yu-cheng Yu
> ---
>  arch/x86/mm/pgtable.c         |  8 ++++++++
>  include/asm-generic/pgtable.h | 11 +++++++++++
>  mm/huge_memory.c              |  4 ++++
>  3 files changed, 23 insertions(+)
> 
> diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
> index 2eb33794c08d..3340b1d4e9da 100644
> --- a/arch/x86/mm/pgtable.c
> +++ b/arch/x86/mm/pgtable.c
> @@ -886,4 +886,12 @@ inline pte_t pte_set_vma_features(pte_t pte, struct vm_area_struct *vma)
>  	else
>  		return pte;
>  }
> +
> +inline pmd_t pmd_set_vma_features(pmd_t pmd, struct vm_area_struct *vma)
> +{
> +	if (vma->vm_flags & VM_SHSTK)
> +		return pmd_mkdirty_shstk(pmd);
> +	else
> +		return pmd;
> +}
>  #endif /* CONFIG_X86_INTEL_SHADOW_STACK_USER */
> diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
> index 9cb2f9ba5895..a9df093fdf45 100644
> --- a/include/asm-generic/pgtable.h
> +++ b/include/asm-generic/pgtable.h
> @@ -1201,9 +1201,20 @@ static inline pte_t pte_set_vma_features(pte_t pte, struct vm_area_struct *vma)
>  {
>  	return pte;
>  }
> +
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +static inline pmd_t pmd_set_vma_features(pmd_t pmd, struct vm_area_struct *vma)
> +{
> +	return pmd;
> +}
> +#endif
>  #else
>  bool arch_copy_pte_mapping(vm_flags_t vm_flags);
>  pte_t pte_set_vma_features(pte_t pte, struct vm_area_struct *vma);
> +
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +pmd_t pmd_set_vma_features(pmd_t pmd, struct vm_area_struct *vma);
> +#endif
>  #endif
>  #endif /* CONFIG_MMU */
> 
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index a88093213674..93ef368df2dd 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -636,6 +636,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
> 
>  	entry = mk_huge_pmd(page, vma->vm_page_prot);
>  	entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
> +	entry = pmd_set_vma_features(entry, vma);
>  	page_add_new_anon_rmap(page, vma, haddr, true);
>  	mem_cgroup_commit_charge(page, memcg, false, true);
>  	lru_cache_add_active_or_unevictable(page, vma);
> @@ -1278,6 +1279,7 @@ static vm_fault_t do_huge_pmd_wp_page_fallback(struct vm_fault *vmf,
>  		pte_t entry;
>  		entry = mk_pte(pages[i], vma->vm_page_prot);
>  		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
> +		entry = pte_set_vma_features(entry, vma);
>  		memcg = (void *)page_private(pages[i]);
>  		set_page_private(pages[i], 0);
>  		page_add_new_anon_rmap(pages[i], vmf->vma, haddr, false);
> @@ -1360,6 +1362,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
>  		pmd_t entry;
>  		entry = pmd_mkyoung(orig_pmd);
>  		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
> +		entry = pmd_set_vma_features(entry, vma);
>  		if (pmdp_set_access_flags(vma, haddr, vmf->pmd, entry, 1))
>  			update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
>  		ret |= VM_FAULT_WRITE;
> @@ -1432,6 +1435,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
>  		pmd_t entry;
>  		entry = mk_huge_pmd(new_page, vma->vm_page_prot);
>  		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
> +		entry = pmd_set_vma_features(entry, vma);
>  		pmdp_huge_clear_flush_notify(vma, haddr, vmf->pmd);
>  		page_add_new_anon_rmap(new_page, vma, haddr, true);
>  		mem_cgroup_commit_charge(new_page, memcg, false, true);
> -- 
> 2.21.0
> 

-- 
Kees Cook