From mboxrd@z Thu Jan 1 00:00:00 1970
From: Peter Xu <peterx@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Jason Gunthorpe, peterx@redhat.com, Matthew Wilcox, Andrew Morton,
	Axel Rasmussen, Nadav Amit, Jerome Glisse, Mike Rapoport, Miaohe Lin,
	Hugh Dickins, Alistair Popple, Andrea Arcangeli, Mike Kravetz,
	"Kirill A . Shutemov", David Hildenbrand
Subject: [PATCH v4 04/26] mm/userfaultfd: Introduce special pte for unmapped file-backed mem
Date: Wed, 14 Jul 2021 18:20:55 -0400
Message-Id: <20210714222117.47648-5-peterx@redhat.com>
In-Reply-To: <20210714222117.47648-1-peterx@redhat.com>
References: <20210714222117.47648-1-peterx@redhat.com>

This patch introduces a very special swap-like pte for file-backed
memories.

Currently it is defined only for x86_64, but any arch that can properly
define the UFFD_WP_SWP_PTE_SPECIAL value as requested should
conceptually work too.

We will use this special pte to arm the ptes that were either unmapped
or swapped out for a file-backed region that was previously
wr-protected.  This special pte can trigger a page fault just like a
swap entry, as long as the page fault satisfies pte_none()==false &&
pte_present()==false.  We can then revive the special pte into a normal
pte backed by the page cache.

This idea is greatly inspired by Hugh and Andrea in the discussion,
which is referenced in the links below.

The other idea (from Hugh) is to use swp_type==1 and swp_offset=0 as
the special pte.  The current solution (as pointed out by Andrea) is
slightly preferred in that we do not need any swp_entry_t knowledge at
all when trapping these accesses.  Meanwhile, we also reuse
_PAGE_SWP_UFFD_WP from the anonymous swp entries.

This patch only introduces the special pte and its operators; nothing
uses it yet, so there is no functional change.
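As an illustration (not part of this patch), a fault path could classify
such a pte using only the pte_none()/pte_present() checks described
above plus the pte_swp_uffd_wp_special() helper added below; the
classify_uffd_wp_special() wrapper is a hypothetical sketch:

/*
 * Hypothetical sketch, not from this series: classify a pte seen on the
 * fault path.  Returns true only for the uffd-wp special pte, which
 * carries no swap entry inside.
 */
#include <linux/mm.h>
#include <linux/swapops.h>
#include <linux/userfaultfd_k.h>

static bool classify_uffd_wp_special(pte_t pte)
{
	if (pte_none(pte) || pte_present(pte))
		return false;	/* missing-page or present-pte paths */

	/* From here the pte is swap-like, i.e. is_swap_pte() holds. */
	if (pte_swp_uffd_wp_special(pte))
		return true;	/* special pte: no swap entry inside */

	/* Otherwise it carries a real swp_entry_t (swap, migration, ...). */
	return false;
}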
Link: https://lore.kernel.org/lkml/20201126222359.8120-1-peterx@redhat.com/
Link: https://lore.kernel.org/lkml/20201130230603.46187-1-peterx@redhat.com/
Suggested-by: Andrea Arcangeli
Suggested-by: Hugh Dickins
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 arch/x86/include/asm/pgtable.h     | 28 ++++++++++++++++++++++++++++
 include/asm-generic/pgtable_uffd.h |  3 +++
 include/linux/userfaultfd_k.h      | 25 +++++++++++++++++++++++++
 3 files changed, 56 insertions(+)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 448cd01eb3ec..71b1e73d5b26 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1300,6 +1300,34 @@ static inline pmd_t pmd_swp_clear_soft_dirty(pmd_t pmd)
 #endif
 
 #ifdef CONFIG_HAVE_ARCH_USERFAULTFD_WP
+
+/*
+ * This is a very special swap-like pte that marks this pte as "wr-protected"
+ * by userfaultfd-wp.  It should only exist for file-backed memory where the
+ * page (previously got wr-protected) has been unmapped or swapped out.
+ *
+ * For anonymous memories, the userfaultfd-wp _PAGE_SWP_UFFD_WP bit is kept
+ * along with a real swp entry instead.
+ *
+ * Let's make some rules for this special pte:
+ *
+ * (1) pte_none()==false, so that it'll not trigger a missing page fault.
+ *
+ * (2) pte_present()==false, so that it's recognized as swap (is_swap_pte).
+ *
+ * (3) pte_swp_uffd_wp()==true, so it can be tested just like a swap pte that
+ *     contains a valid swap entry, so that we can check a swap pte always
+ *     using "is_swap_pte() && pte_swp_uffd_wp()" without caring about whether
+ *     there's one swap entry inside of the pte.
+ *
+ * (4) It should not be a valid swap pte anywhere, so that when we see this pte
+ *     we know it does not contain a swap entry.
+ *
+ * For x86, the simplest special pte which satisfies all of above should be the
+ * pte with only _PAGE_SWP_UFFD_WP bit set (where swp_type==swp_offset==0).
+ */
+#define UFFD_WP_SWP_PTE_SPECIAL	__pte(_PAGE_SWP_UFFD_WP)
+
 static inline pte_t pte_swp_mkuffd_wp(pte_t pte)
 {
 	return pte_set_flags(pte, _PAGE_SWP_UFFD_WP);
diff --git a/include/asm-generic/pgtable_uffd.h b/include/asm-generic/pgtable_uffd.h
index 828966d4c281..95e9811ce9d1 100644
--- a/include/asm-generic/pgtable_uffd.h
+++ b/include/asm-generic/pgtable_uffd.h
@@ -2,6 +2,9 @@
 #define _ASM_GENERIC_PGTABLE_UFFD_H
 
 #ifndef CONFIG_HAVE_ARCH_USERFAULTFD_WP
+
+#define UFFD_WP_SWP_PTE_SPECIAL	__pte(0)
+
 static __always_inline int pte_uffd_wp(pte_t pte)
 {
 	return 0;
diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index 331d2ccf0bcc..bb5a72a2b07a 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -145,6 +145,21 @@ extern int userfaultfd_unmap_prep(struct vm_area_struct *vma,
 extern void userfaultfd_unmap_complete(struct mm_struct *mm,
 				       struct list_head *uf);
 
+static inline pte_t pte_swp_mkuffd_wp_special(struct vm_area_struct *vma)
+{
+	WARN_ON_ONCE(vma_is_anonymous(vma));
+	return UFFD_WP_SWP_PTE_SPECIAL;
+}
+
+static inline bool pte_swp_uffd_wp_special(pte_t pte)
+{
+#ifdef CONFIG_HAVE_ARCH_USERFAULTFD_WP
+	return pte_same(pte, UFFD_WP_SWP_PTE_SPECIAL);
+#else
+	return false;
+#endif
+}
+
 #else /* CONFIG_USERFAULTFD */
 
 /* mm helpers */
@@ -234,6 +249,16 @@ static inline void userfaultfd_unmap_complete(struct mm_struct *mm,
 {
 }
 
+static inline pte_t pte_swp_mkuffd_wp_special(struct vm_area_struct *vma)
+{
+	return __pte(0);
+}
+
+static inline bool pte_swp_uffd_wp_special(pte_t pte)
+{
+	return false;
+}
+
 #endif /* CONFIG_USERFAULTFD */
 
 #endif /* _LINUX_USERFAULTFD_K_H */
-- 
2.31.1
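For reference, the helpers above are presumably wired into the unmap and
fault paths by later patches in the series; the fragment below is only a
hedged usage sketch, and the install_uffd_wp_special() wrapper with its
arguments is hypothetical, not code from this series:

/*
 * Hypothetical usage sketch: when unmapping a wr-protected pte of a
 * file-backed VMA, leave the special pte behind instead of clearing the
 * slot, so that the wr-protect state is not lost across the unmap.
 */
#include <linux/mm.h>
#include <linux/userfaultfd_k.h>

static void install_uffd_wp_special(struct vm_area_struct *vma,
				    unsigned long addr, pte_t *ptep,
				    pte_t old_pte)
{
	/* Only file-backed, uffd-wp protected ptes need the marker. */
	if (vma_is_anonymous(vma) || !pte_uffd_wp(old_pte))
		return;

	set_pte_at(vma->vm_mm, addr, ptep, pte_swp_mkuffd_wp_special(vma));
}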