Date: Thu, 12 May 2022 13:54:50 -0700
From: Minchan Kim
To: John Hubbard
Cc: Andrew Morton, linux-mm, LKML, "Paul E. McKenney", John Dias, David Hildenbrand
Subject: Re: [PATCH v5] mm: fix is_pinnable_page against on cma page
References: <20220512204143.3961150-1-minchan@kernel.org> <5d9eb30e-6e0e-81a3-2b2c-47adc4e85470@nvidia.com>
In-Reply-To: <5d9eb30e-6e0e-81a3-2b2c-47adc4e85470@nvidia.com>

On Thu, May 12, 2022 at 01:51:47PM -0700, John Hubbard wrote:
> On 5/12/22 13:41, Minchan Kim wrote:
> > Pages on CMA area could have MIGRATE_ISOLATE as well as MIGRATE_CMA
> > so current is_pinnable_page could miss CMA pages which has MIGRATE_
> > ISOLATE. It ends up pinning CMA pages as longterm at pin_user_pages
> > APIs so CMA allocation keep failed until the pin is released.
> >
> >      CPU 0                              CPU 1 - Task B
> >
> > cma_alloc
> > alloc_contig_range
> >                                         pin_user_pages_fast(FOLL_LONGTERM)
> > change pageblock as MIGRATE_ISOLATE
> >                                         internal_get_user_pages_fast
> >                                         lockless_pages_from_mm
> >                                         gup_pte_range
> >                                         try_grab_folio
> >                                         is_pinnable_page
> >                                           return true;
> >                                         So, pinned the page successfully.
> > page migration failure with pinned page
> >                                         ..
> >                                         .. After 30 sec
> >                                         unpin_user_page(page)
> >
> > CMA allocation succeeded after 30 sec.
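[Editor's note] The timeline above can be sketched as a toy userspace model. All names below are hypothetical stand-ins, not the real mm/cma.c and mm/gup.c paths: a long-term pin holds an extra reference, a pinned page cannot be migrated, and the CMA allocation keeps failing until the pin is dropped.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy page: any outstanding long-term pin makes migration fail. */
struct toy_page { int pincount; };

/* Stand-in for pin_user_pages_fast(FOLL_LONGTERM) succeeding because
 * is_pinnable_page() missed a MIGRATE_ISOLATE pageblock on CMA. */
static void longterm_pin(struct toy_page *p)   { p->pincount++; }
static void longterm_unpin(struct toy_page *p) { p->pincount--; }

/* Stand-in for cma_alloc()/alloc_contig_range(): every page in the
 * range must be migrated away; a pinned page cannot be migrated, so
 * the whole contiguous allocation fails. */
static bool cma_alloc_model(struct toy_page *range, int n)
{
	for (int i = 0; i < n; i++)
		if (range[i].pincount > 0)
			return false;	/* "page migration failure with pinned page" */
	return true;
}
```

Pinning one page in the range makes cma_alloc_model() fail; dropping the pin (the "After 30 sec" step in the diagram) lets it succeed.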
> >
> > The CMA allocation path protects the migration type change race
> > using zone->lock but what GUP path need to know is just whether the
> > page is on CMA area or not rather than exact migration type.
> > Thus, we don't need zone->lock but just checks migration type in
> > either of (MIGRATE_ISOLATE and MIGRATE_CMA).
> >
> > Adding the MIGRATE_ISOLATE check in is_pinnable_page could cause
> > rejecting of pinning pages on MIGRATE_ISOLATE pageblocks even
> > though it's neither CMA nor movable zone if the page is temporarily
> > unmovable. However, such a migration failure by unexpected temporal
> > refcount holding is general issue, not only come from MIGRATE_ISOLATE
> > and the MIGRATE_ISOLATE is also transient state like other temporal
> > elevated refcount problem.
> >
> > Cc: "Paul E. McKenney"
> > Cc: John Hubbard
> > Cc: David Hildenbrand
> > Signed-off-by: Minchan Kim
> > ---
> > * from v4 - https://lore.kernel.org/all/20220510211743.95831-1-minchan@kernel.org/
> >   * clarification why we need READ_ONCE - Paul
> >   * Adding a comment about READ_ONCE - John
> >
> > * from v3 - https://lore.kernel.org/all/20220509153430.4125710-1-minchan@kernel.org/
> >   * Fix typo and adding more description - akpm
> >
> > * from v2 - https://lore.kernel.org/all/20220505064429.2818496-1-minchan@kernel.org/
> >   * Use __READ_ONCE instead of volatile - akpm
> >
> > * from v1 - https://lore.kernel.org/all/20220502173558.2510641-1-minchan@kernel.org/
> >   * fix build warning - lkp
> >   * fix refetching issue of migration type
> >   * add side effect on !ZONE_MOVABLE and !MIGRATE_CMA in description - david
> >
> >  include/linux/mm.h | 16 ++++++++++++++--
> >  1 file changed, 14 insertions(+), 2 deletions(-)
> >
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index 6acca5cecbc5..2d7a5d87decd 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -1625,8 +1625,20 @@ static inline bool page_needs_cow_for_dma(struct vm_area_struct *vma,
> >  #ifdef CONFIG_MIGRATION
> >  static inline bool is_pinnable_page(struct page *page)
> >  {
> > -	return !(is_zone_movable_page(page) || is_migrate_cma_page(page)) ||
> > -		is_zero_pfn(page_to_pfn(page));
> > +#ifdef CONFIG_CMA
> > +	/*
> > +	 * Defend against future compiler LTO features, or code refactoring
> > +	 * that inlines the above function, by forcing a single read. Because,
> > +	 * this routine races with set_pageblock_migratetype(), and we want to
> > +	 * avoid reading zero, when actually one or the other flags was set.
> > +	 */
>
> The most interesting line got dropped in this version. :)
>
> This is missing:
>
>     int __mt = get_pageblock_migratetype(page);
>
> Assuming that that is restored, please feel free to add:
>
> Reviewed-by: John Hubbard

Just caught it after I clicked the send button with my fat finger :(

Thanks, John!

Andrew, could you pick this up?

From 90ad049d48f5c36075f17ac996dfe3c33127aeb6 Mon Sep 17 00:00:00 2001
From: Minchan Kim
Date: Mon, 2 May 2022 10:03:48 -0700
Subject: [PATCH v5] mm: fix is_pinnable_page against on cma page

Pages on CMA area could have MIGRATE_ISOLATE as well as MIGRATE_CMA,
so the current is_pinnable_page() could miss CMA pages which have
MIGRATE_ISOLATE. That ends up pinning CMA pages as longterm via the
pin_user_pages() APIs, so CMA allocation keeps failing until the pin
is released.

     CPU 0                              CPU 1 - Task B

cma_alloc
alloc_contig_range
                                        pin_user_pages_fast(FOLL_LONGTERM)
change pageblock as MIGRATE_ISOLATE
                                        internal_get_user_pages_fast
                                        lockless_pages_from_mm
                                        gup_pte_range
                                        try_grab_folio
                                        is_pinnable_page
                                          return true;
                                        So, pinned the page successfully.
page migration failure with pinned page
                                        ..
                                        .. After 30 sec
                                        unpin_user_page(page)

CMA allocation succeeded after 30 sec.

The CMA allocation path protects against the migration type change
race using zone->lock, but all the GUP path needs to know is whether
the page is on a CMA area, not the exact migration type. Thus, we
don't need zone->lock; just check whether the migration type is
either of MIGRATE_ISOLATE and MIGRATE_CMA.

Adding the MIGRATE_ISOLATE check in is_pinnable_page() could cause
rejection of pinning pages on MIGRATE_ISOLATE pageblocks even when
the page is in neither CMA nor a movable zone, if the page is
temporarily unmovable. However, such a migration failure caused by an
unexpectedly held refcount is a general issue that does not only come
from MIGRATE_ISOLATE, and MIGRATE_ISOLATE is a transient state just
like other temporarily elevated refcount problems.

Cc: "Paul E. McKenney"
Cc: David Hildenbrand
Reviewed-by: John Hubbard
Signed-off-by: Minchan Kim
---
* from v4 - https://lore.kernel.org/all/20220510211743.95831-1-minchan@kernel.org/
  * clarification why we need READ_ONCE - Paul
  * Adding a comment about READ_ONCE - John

* from v3 - https://lore.kernel.org/all/20220509153430.4125710-1-minchan@kernel.org/
  * Fix typo and adding more description - akpm

* from v2 - https://lore.kernel.org/all/20220505064429.2818496-1-minchan@kernel.org/
  * Use __READ_ONCE instead of volatile - akpm

* from v1 - https://lore.kernel.org/all/20220502173558.2510641-1-minchan@kernel.org/
  * fix build warning - lkp
  * fix refetching issue of migration type
  * add side effect on !ZONE_MOVABLE and !MIGRATE_CMA in description - david

 include/linux/mm.h | 17 +++++++++++++++--
 1 file changed, 15 insertions(+), 2 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 6acca5cecbc5..b23c6f1b90b5 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1625,8 +1625,21 @@ static inline bool page_needs_cow_for_dma(struct vm_area_struct *vma,
 #ifdef CONFIG_MIGRATION
 static inline bool is_pinnable_page(struct page *page)
 {
-	return !(is_zone_movable_page(page) || is_migrate_cma_page(page)) ||
-		is_zero_pfn(page_to_pfn(page));
+#ifdef CONFIG_CMA
+	/*
+	 * Defend against future compiler LTO features, or code refactoring
+	 * that inlines the above function, by forcing a single read. Because,
+	 * this routine races with set_pageblock_migratetype(), and we want to
+	 * avoid reading zero, when actually one or the other flags was set.
+	 */
+	int __mt = get_pageblock_migratetype(page);
+	int mt = __READ_ONCE(__mt);
+
+	if (mt & (MIGRATE_CMA | MIGRATE_ISOLATE))
+		return false;
+#endif
+
+	return !(is_zone_movable_page(page) || is_zero_pfn(page_to_pfn(page)));
 }
 #else
 static inline bool is_pinnable_page(struct page *page)
-- 
2.36.0.550.gb090851708-goog
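[Editor's note] For readers outside the kernel tree, the patched check can be modeled in plain userspace C. Everything below is a hypothetical sketch: the flag values, struct fields, and helper names are stand-ins (the kernel's migratetype comes from get_pageblock_migratetype() and the single read is enforced with __READ_ONCE()).

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical bit-flag stand-ins for the pageblock migratetype. */
#define MT_CMA     (1 << 0)
#define MT_ISOLATE (1 << 1)

/* Toy page descriptor; the real code derives these from struct page. */
struct toy_page {
	int  migratetype;    /* pageblock migratetype snapshot */
	bool zone_movable;   /* page sits in ZONE_MOVABLE */
	bool zero_pfn;       /* shared zero page */
};

/* Mirrors the shape of the patched is_pinnable_page(): take ONE
 * snapshot of the migratetype, so a concurrent
 * set_pageblock_migratetype() cannot make two separate loads each
 * miss the flag that was actually set; reject CMA and isolated
 * pageblocks; then fall through to the remaining checks. */
static bool is_pinnable_model(const struct toy_page *p)
{
	int mt = p->migratetype;	/* single read; the kernel uses __READ_ONCE() */

	if (mt & (MT_CMA | MT_ISOLATE))
		return false;

	return !(p->zone_movable || p->zero_pfn);
}
```

In this model, a CMA pageblock being isolated for alloc_contig_range() has at least one of the two flags set at every instant, so a single snapshot always classifies the page as unpinnable; that is the property the READ_ONCE comment in the diff is protecting against compiler refetches.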