From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Kiryl Shutsemau (Meta)"
To: akpm@linux-foundation.org, rppt@kernel.org, peterx@redhat.com,
	david@kernel.org
Cc: ljs@kernel.org, surenb@google.com, vbabka@kernel.org,
	Liam.Howlett@oracle.com, ziy@nvidia.com, corbet@lwn.net,
	skhan@linuxfoundation.org, seanjc@google.com, pbonzini@redhat.com,
	jthoughton@google.com, aarcange@redhat.com, sj@kernel.org,
	usama.arif@linux.dev, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-kselftest@vger.kernel.org, kvm@vger.kernel.org,
	kernel-team@meta.com, "Kiryl Shutsemau (Meta)"
Subject: [PATCH v2 11/14] userfaultfd: add UFFD_FEATURE_RWP_ASYNC for async fault resolution
Date: Fri, 8 May 2026 16:55:23 +0100
Message-ID: <65492c7b535080c7e85e90cb7ca962a52871e8b9.1778254670.git.kas@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Sync RWP delivers a message and blocks the faulting thread until the handler resolves
the fault. For working-set tracking the VMM does not need the message:
it just needs to know, at scan time, which pages were touched. Async
RWP serves that use case — the kernel restores access in-place and the
faulting thread continues without blocking. The VMM reconstructs the
access pattern after the fact via PAGEMAP_SCAN: pages whose uffd bit
is still set (inverted PAGE_IS_ACCESSED) were not re-accessed since
the last RWP cycle.

Worth calling out: async resolution upgrades writable private anon
PTEs via pte_mkwrite() when can_change_pte_writable() allows,
mirroring do_numa_page(). Without it, every re-access of an RWP'd
writable page would COW-fault a second time.

UFFD_FEATURE_RWP_ASYNC requires UFFD_FEATURE_RWP.

Signed-off-by: Kiryl Shutsemau
Assisted-by: Claude:claude-opus-4-6
---
 fs/userfaultfd.c                 | 19 ++++++++++++++++++-
 include/linux/userfaultfd_k.h    |  6 ++++++
 include/uapi/linux/userfaultfd.h | 11 ++++++++++-
 mm/huge_memory.c                 | 25 ++++++++++++++++++++++++-
 mm/hugetlb.c                     | 32 +++++++++++++++++++++++++++++++-
 mm/memory.c                      | 27 +++++++++++++++++++++++++--
 6 files changed, 114 insertions(+), 6 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 6e577c4ac4dd..4a701ac830f4 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -89,6 +89,11 @@ static bool userfaultfd_wp_async_ctx(struct userfaultfd_ctx *ctx)
 	return ctx && (ctx->features & UFFD_FEATURE_WP_ASYNC);
 }
 
+static bool userfaultfd_rwp_async_ctx(struct userfaultfd_ctx *ctx)
+{
+	return ctx && (ctx->features & UFFD_FEATURE_RWP_ASYNC);
+}
+
 /*
  * Whether WP_UNPOPULATED is enabled on the uffd context. It is only
  * meaningful when userfaultfd_wp()==true on the vma and when it's
@@ -1989,6 +1994,11 @@ bool userfaultfd_wp_async(struct vm_area_struct *vma)
 	return userfaultfd_wp_async_ctx(vma->vm_userfaultfd_ctx.ctx);
 }
 
+bool userfaultfd_rwp_async(struct vm_area_struct *vma)
+{
+	return userfaultfd_rwp_async_ctx(vma->vm_userfaultfd_ctx.ctx);
+}
+
 static inline unsigned int uffd_ctx_features(__u64 user_features)
 {
 	/*
@@ -2092,6 +2102,12 @@ static int userfaultfd_api(struct userfaultfd_ctx *ctx,
 	if (features & UFFD_FEATURE_WP_ASYNC)
 		features |= UFFD_FEATURE_WP_UNPOPULATED;
 
+	ret = -EINVAL;
+	/* RWP_ASYNC requires RWP */
+	if ((features & UFFD_FEATURE_RWP_ASYNC) &&
+	    !(features & UFFD_FEATURE_RWP))
+		goto err_out;
+
 	/* report all available features and ioctls to userland */
 	uffdio_api.features = UFFD_API_FEATURES;
 #ifndef CONFIG_HAVE_ARCH_USERFAULTFD_MINOR
@@ -2114,7 +2130,8 @@ static int userfaultfd_api(struct userfaultfd_ctx *ctx,
 	 * but not actually usable.
 	 */
 	if (VM_UFFD_RWP == VM_NONE || !pgtable_supports_uffd())
-		uffdio_api.features &= ~UFFD_FEATURE_RWP;
+		uffdio_api.features &=
+			~(UFFD_FEATURE_RWP | UFFD_FEATURE_RWP_ASYNC);
 
 	ret = -EINVAL;
 	if (features & ~uffdio_api.features)
diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index 37e8d0d29353..777e332edeff 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -295,6 +295,7 @@ extern void userfaultfd_unmap_complete(struct mm_struct *mm,
 				       struct list_head *uf);
 extern bool userfaultfd_wp_unpopulated(struct vm_area_struct *vma);
 extern bool userfaultfd_wp_async(struct vm_area_struct *vma);
+extern bool userfaultfd_rwp_async(struct vm_area_struct *vma);
 
 void userfaultfd_reset_ctx(struct vm_area_struct *vma);
 
@@ -492,6 +493,11 @@ static inline bool userfaultfd_wp_async(struct vm_area_struct *vma)
 	return false;
 }
 
+static inline bool userfaultfd_rwp_async(struct vm_area_struct *vma)
+{
+	return false;
+}
+
 static inline bool vma_has_uffd_without_event_remap(struct vm_area_struct *vma)
 {
 	return false;
diff --git a/include/uapi/linux/userfaultfd.h b/include/uapi/linux/userfaultfd.h
index d803e76d47ad..c10f08f8a618 100644
--- a/include/uapi/linux/userfaultfd.h
+++ b/include/uapi/linux/userfaultfd.h
@@ -44,7 +44,8 @@
 			   UFFD_FEATURE_POISON |		\
 			   UFFD_FEATURE_WP_ASYNC |		\
 			   UFFD_FEATURE_MOVE |			\
-			   UFFD_FEATURE_RWP)
+			   UFFD_FEATURE_RWP |			\
+			   UFFD_FEATURE_RWP_ASYNC)
 #define UFFD_API_IOCTLS				\
 	((__u64)1 << _UFFDIO_REGISTER |		\
 	 (__u64)1 << _UFFDIO_UNREGISTER |	\
@@ -243,6 +244,13 @@ struct uffdio_api {
 	 * UFFDIO_REGISTER_MODE_RWP for read-write protection tracking.
 	 * Pages are made inaccessible via UFFDIO_RWPROTECT and faults
 	 * are delivered when the pages are re-accessed.
+	 *
+	 * UFFD_FEATURE_RWP_ASYNC indicates asynchronous mode for
+	 * UFFDIO_REGISTER_MODE_RWP. When set, faults on read-write
+	 * protected pages are auto-resolved by the kernel (PTE
+	 * permissions restored immediately) without delivering a message
+	 * to the userfaultfd handler. Use PAGEMAP_SCAN with inverted
+	 * PAGE_IS_ACCESSED to find pages that were not re-accessed.
 	 */
 #define UFFD_FEATURE_PAGEFAULT_FLAG_WP		(1<<0)
 #define UFFD_FEATURE_EVENT_FORK			(1<<1)
@@ -262,6 +270,7 @@ struct uffdio_api {
 #define UFFD_FEATURE_WP_ASYNC			(1<<15)
 #define UFFD_FEATURE_MOVE			(1<<16)
 #define UFFD_FEATURE_RWP			(1<<17)
+#define UFFD_FEATURE_RWP_ASYNC			(1<<18)
 	__u64 features;
 
 	__u64 ioctls;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 631e0355919f..d49facfdb16b 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2266,7 +2266,30 @@ static inline bool can_change_pmd_writable(struct vm_area_struct *vma,
 
 vm_fault_t do_huge_pmd_uffd_rwp(struct vm_fault *vmf)
 {
-	return handle_userfault(vmf, VM_UFFD_RWP);
+	struct vm_area_struct *vma = vmf->vma;
+	pmd_t pmd;
+
+	if (!userfaultfd_rwp_async(vma))
+		return handle_userfault(vmf, VM_UFFD_RWP);
+
+	vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
+	if (unlikely(!pmd_same(pmdp_get(vmf->pmd), vmf->orig_pmd))) {
+		spin_unlock(vmf->ptl);
+		return 0;
+	}
+	pmd = pmd_modify(vmf->orig_pmd, vma->vm_page_prot);
+	/* pmd_modify() preserves _PAGE_UFFD; drop it on resolution */
+	pmd = pmd_clear_uffd(pmd);
+	pmd = pmd_mkyoung(pmd);
+	if (!pmd_write(pmd) &&
+	    vma_wants_manual_pte_write_upgrade(vma) &&
+	    can_change_pmd_writable(vma, vmf->address, pmd))
+		pmd = pmd_mkwrite(pmd, vma);
+	set_pmd_at(vma->vm_mm, vmf->address & HPAGE_PMD_MASK,
+		   vmf->pmd, pmd);
+	update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
+	spin_unlock(vmf->ptl);
+	return 0;
 }
 
 /* NUMA hinting page fault entry point for trans huge pmds */
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index bac9aa852f6b..dc581adcb0ab 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6075,7 +6075,37 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	 */
 	if (pte_protnone(vmf.orig_pte) && vma_is_accessible(vma) &&
 	    userfaultfd_rwp(vma) && huge_pte_uffd(vmf.orig_pte)) {
-		return hugetlb_handle_userfault(&vmf, mapping, VM_UFFD_RWP);
+		spinlock_t *ptl;
+		pte_t pte;
+
+		/* Sync: drop hugetlb locks before blocking in handle_userfault() */
+		if (!userfaultfd_rwp_async(vma))
+			return hugetlb_handle_userfault(&vmf, mapping, VM_UFFD_RWP);
+
+		ptl = huge_pte_lock(h, mm, vmf.pte);
+		pte = huge_ptep_get(mm, vmf.address, vmf.pte);
+		if (pte_protnone(pte) && huge_pte_uffd(pte)) {
+			unsigned int shift = huge_page_shift(h);
+
+			pte = huge_pte_modify(pte, vma->vm_page_prot);
+			pte = arch_make_huge_pte(pte, shift, vma->vm_flags);
+			/* huge_pte_modify() preserves _PAGE_UFFD; drop it on resolution */
+			pte = huge_pte_clear_uffd(pte);
+			pte = pte_mkyoung(pte);
+			/*
+			 * Unlike do_uffd_rwp(), do not upgrade to writable
+			 * here. Hugetlb lacks a can_change_huge_pte_writable()
+			 * equivalent, so a write access will take a separate
+			 * COW fault — acceptable for the rare private hugetlb
+			 * case.
+			 */
+			set_huge_pte_at(mm, vmf.address, vmf.pte, pte,
+					huge_page_size(h));
+			update_mmu_cache(vma, vmf.address, vmf.pte);
+		}
+		spin_unlock(ptl);
+		ret = 0;
+		goto out_mutex;
 	}
 
 	/*
diff --git a/mm/memory.c b/mm/memory.c
index e0dcf2c28d9d..bfe6f218fb16 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6174,8 +6174,31 @@ static void numa_rebuild_large_mapping(struct vm_fault *vmf, struct vm_area_stru
 
 static vm_fault_t do_uffd_rwp(struct vm_fault *vmf)
 {
-	pte_unmap(vmf->pte);
-	return handle_userfault(vmf, VM_UFFD_RWP);
+	pte_t pte;
+
+	if (!userfaultfd_rwp_async(vmf->vma)) {
+		/* Sync mode: unmap PTE and deliver to userfaultfd handler */
+		pte_unmap(vmf->pte);
+		return handle_userfault(vmf, VM_UFFD_RWP);
+	}
+
+	spin_lock(vmf->ptl);
+	if (unlikely(!pte_same(ptep_get(vmf->pte), vmf->orig_pte))) {
+		pte_unmap_unlock(vmf->pte, vmf->ptl);
+		return 0;
+	}
+	pte = pte_modify(vmf->orig_pte, vmf->vma->vm_page_prot);
+	/* pte_modify() preserves _PAGE_UFFD; drop it on resolution */
+	pte = pte_clear_uffd(pte);
+	pte = pte_mkyoung(pte);
+	if (!pte_write(pte) &&
+	    vma_wants_manual_pte_write_upgrade(vmf->vma) &&
+	    can_change_pte_writable(vmf->vma, vmf->address, pte))
+		pte = pte_mkwrite(pte, vmf->vma);
+	set_pte_at(vmf->vma->vm_mm, vmf->address, vmf->pte, pte);
+	update_mmu_cache(vmf->vma, vmf->address, vmf->pte);
+	pte_unmap_unlock(vmf->pte, vmf->ptl);
+	return 0;
}
 
 static vm_fault_t do_numa_page(struct vm_fault *vmf)
-- 
2.51.2