Date: Mon, 23 May 2022 09:33:07 -0700
From: Minchan Kim
To: John Hubbard
Cc: Jason Gunthorpe, "Paul E.
McKenney", Andrew Morton, linux-mm, LKML, John Dias, David Hildenbrand
Subject: Re: [PATCH v4] mm: fix is_pinnable_page against on cma page
References: <20220512004949.GK1790663@paulmck-ThinkPad-P17-Gen-1>
 <0accce46-fac6-cdfb-db7f-d08396bf9d35@nvidia.com>
 <20220517140049.GF63055@ziepe.ca>
 <20220517192825.GM63055@ziepe.ca>

On Tue, May 17, 2022 at 01:12:02PM -0700, John Hubbard wrote:
> On 5/17/22 12:28, Jason Gunthorpe wrote:
> > > If you compare this to the snippet above, you'll see that there is
> > > an extra mov statement, and that one dereferences a pointer from
> > > %rax:
> > >
> > >     mov (%rax),%rbx
> >
> > That is the same move as:
> >
> >     mov 0x8(%rdx,%rax,8),%rbx
> >
> > Except that the EA calculation was done in advance and stored in %rax.
> >
> > lea isn't a memory reference; it is just computing the pointer value
> > that 0x8(%rdx,%rax,8) represents, ie the lea computes
> >
> >     %rax = %rdx + %rax*8 + 8
> >
> > Which is then fed into the mov. Maybe it is an optimization to allow
> > one pipe to do the shr and another to do the EA - IDK, seems like a
> > random thing for the compiler to do.
>
> Apologies for getting that wrong, and thanks for walking me through the
> asm.
> > [...]
> >
> > Paul can correct me, but I understand we do not have a list of allowed
> > operations that are exempted from the READ_ONCE() requirement, ie it
> > is not just conditional branching that requires READ_ONCE().
> >
> > This is why READ_ONCE() must always be on the memory load: the point
> > is to sanitize away the uncertainty that comes with an unlocked read
> > of unstable memory contents. READ_ONCE() samples the value in memory,
> > and removes all tearing, multi-load, etc. "instability" that may
> > affect downstream computations. In this way downstream computations
> > become reliable.
> >
> > Jason
>
> So then:
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 0e42038382c1..b404f87e2682 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -482,7 +482,12 @@ unsigned long __get_pfnblock_flags_mask(const struct page *page,
>  	word_bitidx = bitidx / BITS_PER_LONG;
>  	bitidx &= (BITS_PER_LONG-1);
>
> -	word = bitmap[word_bitidx];
> +	/*
> +	 * This races, without locks, with set_pageblock_migratetype(). Ensure

set_pfnblock_flags_mask would be better?

> +	 * a consistent (non-tearing) read of the memory array, so that results,

Thanks for proceeding with this, and for the suggestion, John.

IIUC, load tearing wouldn't be an issue, since [1] fixed that. The
concern in our discussion was that an aggressive compiler (e.g., LTO)
or future code refactoring that inlines this function could force a
refetch (i.e., a re-read) of bitmap[word_bitidx]. If so, shouldn't the
comment be the one you suggested before?

        /*
         * Defend against future compiler LTO features, or code refactoring
         * that inlines the above function, by forcing a single read. Because
         * re-reads of bitmap[word_bitidx] by inlining could cause trouble
         * for those who believe they are using a local variable for the value.
         */

[1] e58469bafd05, mm: page_alloc: use word-based accesses for get/set pageblock bitmaps

> +	 * even though racy, are not corrupted.
> +	 */
> +	word = READ_ONCE(bitmap[word_bitidx]);
>  	return (word >> bitidx) & mask;
>  }
>
>
> thanks,
> --
> John Hubbard
> NVIDIA