Date: Fri, 16 Jul 2021 15:11:33 -0400
From: Peter Xu <peterx@redhat.com>
To: Alistair Popple
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Jason Gunthorpe,
    Mike Kravetz, David Hildenbrand, Matthew Wilcox, "Kirill A. Shutemov",
    Hugh Dickins, Tiberiu Georgescu, Andrea Arcangeli, Axel Rasmussen,
    Nadav Amit, Mike Rapoport, Jerome Glisse, Andrew Morton, Miaohe Lin
Subject: Re: [PATCH v5 05/26] mm/swap: Introduce the idea of special swap ptes
References: <20210715201422.211004-1-peterx@redhat.com>
 <20210715201422.211004-6-peterx@redhat.com>
 <6116877.MhgVfB7NV9@nvdebian>
In-Reply-To: <6116877.MhgVfB7NV9@nvdebian>

On Fri, Jul 16, 2021 at 03:50:52PM +1000, Alistair Popple wrote:
> Hi Peter,
>
> [...]
>
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index ae1f5d0cb581..4b46c099ad94 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -5738,7 +5738,7 @@ static enum mc_target_type get_mctgt_type(struct vm_area_struct *vma,
> >
> >  	if (pte_present(ptent))
> >  		page = mc_handle_present_pte(vma, addr, ptent);
> > -	else if (is_swap_pte(ptent))
> > +	else if (pte_has_swap_entry(ptent))
> >  		page = mc_handle_swap_pte(vma, ptent, &ent);
> >  	else if (pte_none(ptent))
> >  		page = mc_handle_file_pte(vma, addr, ptent, &ent);
>
> As I understand things pte_none() == False for a special swap pte, but
> shouldn't this be treated as pte_none() here? Ie. does this need to be
> pte_none(ptent) || is_swap_special_pte() here?

Looks correct; here the page/swap cache could hide behind the special pte
just like a none pte.  Will fix it.  Thanks!

> > diff --git a/mm/memory.c b/mm/memory.c
> > index 0e0de08a2cd5..998a4f9a3744 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -3491,6 +3491,13 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >  	if (!pte_unmap_same(vmf))
> >  		goto out;
> >
> > +	/*
> > +	 * We should never call do_swap_page upon a swap special pte; just be
> > +	 * safe to bail out if it happens.
> > +	 */
> > +	if (WARN_ON_ONCE(is_swap_special_pte(vmf->orig_pte)))
> > +		goto out;
> > +
> >  	entry = pte_to_swp_entry(vmf->orig_pte);
> >  	if (unlikely(non_swap_entry(entry))) {
> >  		if (is_migration_entry(entry)) {
>
> Are there other changes required here? Because we can end up with stale
> special pte's and a special pte is !pte_none don't we need to fix some of
> the !pte_none checks in these functions:
>
>   insert_pfn() -> checks for !pte_none
>   remap_pte_range() -> BUG_ON(!pte_none)
>   apply_to_pte_range() -> didn't check further but it tests for !pte_none
>
> In general it feels like I might be missing something here though. There
> are plenty of checks in the kernel for pte_none() which haven't been
> updated.
> Is there some rule that says none of those paths can see a special pte?

My rule when doing this was to only care about vmas that can be backed by
RAM, mainly shmem and hugetlb, so the special pte can only exist within
those vmas.  I believe the special pte won't show up for most pte_none()
users, so if a path is not related to RAM-backed memory at all, it should
be fine to keep its pte_none() usage as before.

Take the example of insert_pfn() referenced above: I think it can be used
to map some MMIO regions, but I don't think we'll call it upon a RAM
region (either shmem or hugetlb), nor can such a region be uffd
wr-protected, so I'm not sure adding a special pte check there would be
helpful.

apply_to_pte_range() seems to be a bit special: I think it's the pte_fn_t
callback that matters more for whether the special pte is relevant.  From
a quick look it is still used mostly by all kinds of driver code, not mm
core.  It's used in two forms:

        apply_to_page_range
        apply_to_existing_page_range

The first one creates ptes only, so it skips the pte_none() concern and I
skipped it too.  The second one has these call sites:

*** arch/powerpc/mm/pageattr.c:
change_memory_attr[99]      return apply_to_existing_page_range(&init_mm, start, size,
set_memory_attr[132]        return apply_to_existing_page_range(&init_mm, start, sz, set_page_attr,

*** mm/kasan/shadow.c:
kasan_release_vmalloc[485]  apply_to_existing_page_range(&init_mm,

I'll leave the ppc callers for now as uffd-wp is not even supported there.
kasan_release_vmalloc() should only deal with kernel-allocated memory, so
it should not be a target for the special pte either.

So indeed it's hard to cover 100% of the pte_none() users and make sure
they are all used right.  As stated above I still believe most callers
don't need the extra check, and in the worst case, if someone triggers a
uffd-wp issue with a specific feature, we can look into it then.  I am not
sure it's good to add the check for all pte_none() users, because mostly
they would be useless checks, imho.
So far what I planned to do is to cover most of the spots we know may be
affected, like this patch does, so the change makes a difference where it
matters; hopefully we won't miss any important spots.

>
> > diff --git a/mm/migrate.c b/mm/migrate.c
> > index 23cbd9de030b..b477d0d5f911 100644
> > --- a/mm/migrate.c
> > +++ b/mm/migrate.c
> > @@ -294,7 +294,7 @@ void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
> >
> >  	spin_lock(ptl);
> >  	pte = *ptep;
> > -	if (!is_swap_pte(pte))
> > +	if (!pte_has_swap_entry(pte))
> >  		goto out;
> >
> >  	entry = pte_to_swp_entry(pte);
> > @@ -2276,7 +2276,7 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
> >
> >  	pte = *ptep;
> >
> > -	if (pte_none(pte)) {
> > +	if (pte_none(pte) || is_swap_special_pte(pte)) {
>
> I was wondering if we can lose the special pte information here? However I
> see that in migrate_vma_insert_page() we check again and fail the
> migration if !pte_none() so I think this is ok.
>
> I think it would be better if this check was moved below so the migration
> fails early. Ie:
>
>     if (pte_none(pte)) {
>         if (vma_is_anonymous(vma) && !is_swap_special_pte(pte)) {

Hmm.. but shouldn't vma_is_anonymous()==true already mean it must not be a
swap special pte?  The swap special pte only exists when
!vma_is_anonymous().

> Also how does this work for page migration in general? I can see in
> page_vma_mapped_walk() that we skip special pte's, but doesn't this mean
> we lose the special pte in that instance? Or is that ok for some reason?

Do you mean try_to_migrate_one()?  Does it need to be aware of that?  Per
my understanding that's only for anonymous private memory, and in that
world there should be no swap special pte (page_lock_anon_vma_read() will
return NULL early for !vma_is_anonymous).

Thanks,

-- 
Peter Xu