Date: Mon, 23 May 2022 22:16:58 -0700
From: Minchan Kim
To: John Hubbard
Cc: Jason Gunthorpe, "Paul E. McKenney", Andrew Morton, linux-mm, LKML,
 John Dias, David Hildenbrand
Subject: Re: [PATCH v4] mm: fix is_pinnable_page against on cma page

On Mon, May 23, 2022 at 07:55:25PM -0700, John Hubbard wrote:
> On 5/23/22 09:33, Minchan Kim wrote:
> ...
> > > So then:
> > >
> > > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > > index 0e42038382c1..b404f87e2682 100644
> > > --- a/mm/page_alloc.c
> > > +++ b/mm/page_alloc.c
> > > @@ -482,7 +482,12 @@ unsigned long __get_pfnblock_flags_mask(const struct page *page,
> > >  	word_bitidx = bitidx / BITS_PER_LONG;
> > >  	bitidx &= (BITS_PER_LONG-1);
> > >
> > > -	word = bitmap[word_bitidx];
> > > +	/*
> > > +	 * This races, without locks, with set_pageblock_migratetype(). Ensure

set_pfnblock_flags_mask would be better?

> > > +	 * a consistent (non-tearing) read of the memory array, so that results,
> >
> > Thanks for proceeding and the suggestion, John.
> >
> > IIUC, the load tearing wouldn't be an issue since [1] fixed it.
>
> Did it?
> [1] fixed something, but I'm not sure we can claim that that
> code is now safe against tearing in all possible cases, especially given
> the recent discussion here. Specifically, having this code do a read,
> then follow that up with calculations, seems correct. Anything else is

The load tearing you are trying to describe in the comment would be solved
by [1]: the bits always align within a word, and a word-sized access at a
word-aligned address is atomic, so there is no load-tearing problem, IIUC.
What we are trying to solve with READ_ONCE is not tearing but refetching:
preventing the compiler from re-reading bitmap[word_bitidx] if the function
is inlined in the future.

> sketchy...
>
> >
> > The concern in our discussion was that an aggressive compiler (e.g., with
> > LTO) or future code refactoring that inlines this function could force a
> > refetch (i.e., a re-read) of bitmap[word_bitidx].
> >
> > If so, shouldn't the comment be the one you helped with before?
>
> Well, maybe updated to something like this?
>
> /*
>  * This races, without locks, with set_pageblock_migratetype(). Ensure

set_pageblock_migratetype() is a higher-level function, so
set_pfnblock_flags_mask() would be a better fit here.

>  * a consistent (non-tearing) read of the memory array, so that results,

The tearing problem is already prevented by [1], so I am trying to explain
the refetching (re-read) problem in the comment instead.

>  * even though racy, are not corrupted--even if this function is

The value is read atomically, so I don't think it could be corrupted even
if this function were inlined in the future. Please correct me if I am
missing something.

>  * refactored and/or inlined.
>  */