From: Chris Li <chrisl@kernel.org>
Date: Sat, 27 Jan 2024 11:53:26 -0800
Subject: Re: [PATCH RFC 4/6] mm: support large folios swapin as a whole
To: Barry Song <21cnbao@gmail.com>
Cc: ryan.roberts@arm.com, akpm@linux-foundation.org, david@redhat.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, mhocko@suse.com, shy828301@gmail.com, wangkefeng.wang@huawei.com, willy@infradead.org, xiang@kernel.org, ying.huang@intel.com, yuzhao@google.com, surenb@google.com, steven.price@arm.com, Chuanhua Han, Barry Song
In-Reply-To: <20240118111036.72641-5-21cnbao@gmail.com>
References: <20231025144546.577640-1-ryan.roberts@arm.com> <20240118111036.72641-1-21cnbao@gmail.com> <20240118111036.72641-5-21cnbao@gmail.com>
On Thu, Jan 18, 2024 at 3:12 AM Barry Song <21cnbao@gmail.com> wrote:
>
> From: Chuanhua Han
>
> On an embedded system like Android, more than half of anon memory is
> actually in swap devices such as zRAM. For example, while an app is
> switched to background, most of its memory might be swapped out.
>
> Now we have mTHP features; unfortunately, if we don't support large folio
> swap-in, once those large folios are swapped out, we immediately lose the
> performance gain we can get through large folios and hardware optimization
> such as CONT-PTE.
>
> This patch brings up mTHP swap-in support. Right now, we limit mTHP swap-in
> to those contiguous swaps which were likely swapped out from mTHP as a whole.
>
> On the other hand, the current implementation only covers the SWAP_SYNCHRONOUS
> case. It doesn't support swapin_readahead as large folios yet.
>
> Right now, we are re-faulting large folios which are still in swapcache as a
> whole. This can effectively decrease the extra loops and early exits which we
> have introduced in arch_swap_restore() while supporting MTE restore for folios
> rather than pages.
>
> Signed-off-by: Chuanhua Han
> Co-developed-by: Barry Song
> Signed-off-by: Barry Song
> ---
>  mm/memory.c | 108 +++++++++++++++++++++++++++++++++++++++++++++-------
>  1 file changed, 94 insertions(+), 14 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index f61a48929ba7..928b3f542932 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -107,6 +107,8 @@ EXPORT_SYMBOL(mem_map);
>  static vm_fault_t do_fault(struct vm_fault *vmf);
>  static vm_fault_t do_anonymous_page(struct vm_fault *vmf);
>  static bool vmf_pte_changed(struct vm_fault *vmf);
> +static struct folio *alloc_anon_folio(struct vm_fault *vmf,
> +                                      bool (*pte_range_check)(pte_t *, int));

Instead of returning "bool", pte_range_check() can return the start swap
entry of the large folio. That will save some of the later code needed to
get the start of the large folio.
>
>  /*
>   * Return true if the original pte was a uffd-wp pte marker (so the pte was
> @@ -3784,6 +3786,34 @@ static vm_fault_t handle_pte_marker(struct vm_fault *vmf)
>  	return VM_FAULT_SIGBUS;
>  }
>
> +static bool pte_range_swap(pte_t *pte, int nr_pages)

This function name seems to suggest it will perform the range swap. That
is not what it is doing. Suggest changing it to some other name reflecting
that it is only a condition test without an actual swap action. I am not
very good at naming functions. Just thinking out loud: e.g.
pte_range_swap_check, pte_test_range_swap. You can come up with something
better.

> +{
> +	int i;
> +	swp_entry_t entry;
> +	unsigned type;
> +	pgoff_t start_offset;
> +
> +	entry = pte_to_swp_entry(ptep_get_lockless(pte));
> +	if (non_swap_entry(entry))
> +		return false;
> +	start_offset = swp_offset(entry);
> +	if (start_offset % nr_pages)
> +		return false;

This suggests the pte argument needs to point to the beginning of the
large-folio equivalent of the swap entry (not sure what to call it; let me
call it "large folio swap" here). We might want to unify the terms for
that. Anyway, we might want to document this requirement, otherwise a
caller might consider passing the current pte that generates the fault.
From the function name it is not obvious which pte should be passed in.

> +
> +	type = swp_type(entry);
> +	for (i = 1; i < nr_pages; i++) {

You might want to test the last page first, because if the range is not a
large folio swap, most likely the last entry will be the invalid one. Some
of the beginning swap entries might match anyway due to batch allocation
etc.; the SSD likes to group nearby swap entries written out together on
the disk.
> +		entry = pte_to_swp_entry(ptep_get_lockless(pte + i));
> +		if (non_swap_entry(entry))
> +			return false;
> +		if (swp_offset(entry) != start_offset + i)
> +			return false;
> +		if (swp_type(entry) != type)
> +			return false;
> +	}
> +
> +	return true;
> +}
> +
>  /*
>   * We enter with non-exclusive mmap_lock (to exclude vma changes,
>   * but allow concurrent faults), and pte mapped but not yet locked.
> @@ -3804,6 +3834,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	pte_t pte;
>  	vm_fault_t ret = 0;
>  	void *shadow = NULL;
> +	int nr_pages = 1;
> +	unsigned long start_address;
> +	pte_t *start_pte;
>
>  	if (!pte_unmap_same(vmf))
>  		goto out;
> @@ -3868,13 +3901,20 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  		if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
>  		    __swap_count(entry) == 1) {
>  			/* skip swapcache */
> -			folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
> -						vma, vmf->address, false);
> +			folio = alloc_anon_folio(vmf, pte_range_swap);

This path can end up calling pte_range_swap() twice: once here inside
alloc_anon_folio(), and again later in the folio_test_large() branch.
Consider caching the result so it does not need to walk the pte range
twice.

I think alloc_anon_folio() should either be told the size (preferred) or
just figure out the right size itself. I don't think it needs the checking
function passed in as a callback. There are only two call sites of
alloc_anon_folio(), both within this file. The callback seems a bit
overkill here, and it duplicates the range swap walk.
>  			page = &folio->page;
>  			if (folio) {
>  				__folio_set_locked(folio);
>  				__folio_set_swapbacked(folio);
>
> +				if (folio_test_large(folio)) {
> +					unsigned long start_offset;
> +
> +					nr_pages = folio_nr_pages(folio);
> +					start_offset = swp_offset(entry) & ~(nr_pages - 1);

Here is the first place we roll up the start offset with the folio size.

> +					entry = swp_entry(swp_type(entry), start_offset);
> +				}
> +
>  				if (mem_cgroup_swapin_charge_folio(folio,
>  							vma->vm_mm, GFP_KERNEL,
>  							entry)) {
> @@ -3980,6 +4020,39 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	 */
>  	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
>  				       &vmf->ptl);
> +
> +	start_address = vmf->address;
> +	start_pte = vmf->pte;
> +	if (folio_test_large(folio)) {
> +		unsigned long nr = folio_nr_pages(folio);
> +		unsigned long addr = ALIGN_DOWN(vmf->address, nr * PAGE_SIZE);
> +		pte_t *pte_t = vmf->pte - (vmf->address - addr) / PAGE_SIZE;

Here is the second place we roll up the folio size. Maybe we can cache the
result and avoid the repetition?

> +
> +		/*
> +		 * case 1: we are allocating large_folio, try to map it as a whole
> +		 * iff the swap entries are still entirely mapped;
> +		 * case 2: we hit a large folio in swapcache, and all swap entries
> +		 * are still entirely mapped, try to map a large folio as a whole.
> +		 * otherwise, map only the faulting page within the large folio
> +		 * which is swapcache
> +		 */

One question I have in mind: the swap device is locked, so the swap slot
allocations can't change. That does not stop the pte entries from getting
changed, right? Then we can have someone in user space racing to change
the PTEs while we are checking them here.

> +		if (pte_range_swap(pte_t, nr)) {

After this pte_range_swap() check, can some of the PTE entries get changed
so that we no longer have the full large-folio swap? At least I can't
conclude yet that this can't happen; please enlighten me.
> +			start_address = addr;
> +			start_pte = pte_t;
> +			if (unlikely(folio == swapcache)) {
> +				/*
> +				 * the below has been done before swap_read_folio()
> +				 * for case 1
> +				 */
> +				nr_pages = nr;
> +				entry = pte_to_swp_entry(ptep_get(start_pte));

If we make pte_range_swap() return the entry, we can avoid refetching the
swap entry here.

> +				page = &folio->page;
> +			}
> +		} else if (nr_pages > 1) { /* ptes have changed for case 1 */
> +			goto out_nomap;
> +		}
> +	}
> +

I rewrote the above to make the code indentation match the execution flow.
No functional change, just rearranging the code to be a bit more
streamlined and to get rid of the "else if goto":

		if (!pte_range_swap(pte_t, nr)) {
			if (nr_pages > 1) /* ptes have changed for case 1 */
				goto out_nomap;
			goto check_pte;
		}

		start_address = addr;
		start_pte = pte_t;
		if (unlikely(folio == swapcache)) {
			/*
			 * the below has been done before swap_read_folio()
			 * for case 1
			 */
			nr_pages = nr;
			entry = pte_to_swp_entry(ptep_get(start_pte));
			page = &folio->page;
		}
	}	/* closes if (folio_test_large(folio)) */

check_pte:

>  	if (unlikely(!vmf->pte || !pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
>  		goto out_nomap;
>
> @@ -4047,12 +4120,14 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	 * We're already holding a reference on the page but haven't mapped it
>  	 * yet.
>  	 */
> -	swap_free(entry);
> +	swap_nr_free(entry, nr_pages);
>  	if (should_try_to_free_swap(folio, vma, vmf->flags))
>  		folio_free_swap(folio);
>
> -	inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
> -	dec_mm_counter(vma->vm_mm, MM_SWAPENTS);
> +	folio_ref_add(folio, nr_pages - 1);
> +	add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages);
> +	add_mm_counter(vma->vm_mm, MM_SWAPENTS, -nr_pages);
> +
>  	pte = mk_pte(page, vma->vm_page_prot);
>
>  	/*
> @@ -4062,14 +4137,14 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	 * exclusivity.
>  	 */
>  	if (!folio_test_ksm(folio) &&
> -	    (exclusive || folio_ref_count(folio) == 1)) {
> +	    (exclusive || folio_ref_count(folio) == nr_pages)) {
>  		if (vmf->flags & FAULT_FLAG_WRITE) {
>  			pte = maybe_mkwrite(pte_mkdirty(pte), vma);
>  			vmf->flags &= ~FAULT_FLAG_WRITE;
>  		}
>  		rmap_flags |= RMAP_EXCLUSIVE;
>  	}
> -	flush_icache_page(vma, page);
> +	flush_icache_pages(vma, page, nr_pages);
>  	if (pte_swp_soft_dirty(vmf->orig_pte))
>  		pte = pte_mksoft_dirty(pte);
>  	if (pte_swp_uffd_wp(vmf->orig_pte))
> @@ -4081,14 +4156,15 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  		folio_add_new_anon_rmap(folio, vma, vmf->address);
>  		folio_add_lru_vma(folio, vma);
>  	} else {
> -		folio_add_anon_rmap_pte(folio, page, vma, vmf->address,
> +		folio_add_anon_rmap_ptes(folio, page, nr_pages, vma, start_address,
>  					rmap_flags);
>  	}
>
>  	VM_BUG_ON(!folio_test_anon(folio) ||
>  		  (pte_write(pte) && !PageAnonExclusive(page)));
> -	set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
> -	arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
> +	set_ptes(vma->vm_mm, start_address, start_pte, pte, nr_pages);
> +
> +	arch_do_swap_page(vma->vm_mm, vma, start_address, pte, vmf->orig_pte);
>
>  	folio_unlock(folio);
>  	if (folio != swapcache && swapcache) {
> @@ -4105,6 +4181,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	}
>
>  	if (vmf->flags & FAULT_FLAG_WRITE) {
> +		if (folio_test_large(folio) && nr_pages > 1)
> +			vmf->orig_pte = ptep_get(vmf->pte);
> +
>  		ret |= do_wp_page(vmf);
>  		if (ret & VM_FAULT_ERROR)
>  			ret &= VM_FAULT_ERROR;
> @@ -4112,7 +4191,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	}
>
>  	/* No need to invalidate - it was non-present before */
> -	update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
> +	update_mmu_cache_range(vmf, vma, start_address, start_pte, nr_pages);
>  unlock:
>  	if (vmf->pte)
>  		pte_unmap_unlock(vmf->pte, vmf->ptl);
> @@ -4148,7 +4227,8 @@ static bool pte_range_none(pte_t *pte, int nr_pages)
>  	return true;
>
>  }
>
> -static struct folio *alloc_anon_folio(struct vm_fault *vmf)
> +static struct folio *alloc_anon_folio(struct vm_fault *vmf,
> +				      bool (*pte_range_check)(pte_t *, int))
>  {
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  	struct vm_area_struct *vma = vmf->vma;
> @@ -4190,7 +4270,7 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)

In this patch context we have the following comment in the source code:

	/*
	 * Find the highest order where the aligned range is completely
	 * pte_none(). Note that all remaining orders will be completely
	 * pte_none().
	 */

>  	order = highest_order(orders);
>  	while (orders) {
>  		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
> -		if (pte_range_none(pte + pte_index(addr), 1 << order))
> +		if (pte_range_check(pte + pte_index(addr), 1 << order))

Again, I don't think we need to pass pte_range_check() in as a callback
function. There are only two call sites, both within this file, and the
callback totally invalidates the above comment about pte_none(). In the
worst case, just make the function accept one extra argument saying
whether it is checking for a swap range or a none range, and check
accordingly. We should make the range check blend in with
alloc_anon_folio() better.

My gut feeling is that there is a better way to integrate the range check
into alloc_anon_folio(), e.g. maybe store some of the large-swap context
in the vmf and pass it to the different places. I need to spend more time
thinking about it to come up with happier solutions.

Chris

>  			break;
>  		order = next_order(&orders, order);
>  	}
> @@ -4269,7 +4349,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
>  	if (unlikely(anon_vma_prepare(vma)))
>  		goto oom;
>  	/* Returns NULL on OOM or ERR_PTR(-EAGAIN) if we must retry the fault */
> -	folio = alloc_anon_folio(vmf);
> +	folio = alloc_anon_folio(vmf, pte_range_none);
>  	if (IS_ERR(folio))
>  		return 0;
>  	if (!folio)
> --
> 2.34.1