From: John Hubbard <jhubbard@nvidia.com>
To: Minchan Kim <minchan@kernel.org>
Cc: Jason Gunthorpe <jgg@ziepe.ca>,
"Paul E. McKenney" <paulmck@kernel.org>,
Andrew Morton <akpm@linux-foundation.org>,
linux-mm <linux-mm@kvack.org>,
LKML <linux-kernel@vger.kernel.org>,
John Dias <joaodias@google.com>,
David Hildenbrand <david@redhat.com>
Subject: Re: [PATCH v4] mm: fix is_pinnable_page against on cma page
Date: Mon, 23 May 2022 23:22:11 -0700
Message-ID: <1fab652f-4cd3-e45c-19b0-cf22bcb36cf5@nvidia.com>
In-Reply-To: <YoxqSud9fvNXqo89@google.com>
On 5/23/22 10:16 PM, Minchan Kim wrote:
> On Mon, May 23, 2022 at 07:55:25PM -0700, John Hubbard wrote:
>> On 5/23/22 09:33, Minchan Kim wrote:
>> ...
>>>> So then:
>>>>
>>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>>> index 0e42038382c1..b404f87e2682 100644
>>>> --- a/mm/page_alloc.c
>>>> +++ b/mm/page_alloc.c
>>>> @@ -482,7 +482,12 @@ unsigned long __get_pfnblock_flags_mask(const struct page *page,
>>>> word_bitidx = bitidx / BITS_PER_LONG;
>>>> bitidx &= (BITS_PER_LONG-1);
>>>>
>>>> - word = bitmap[word_bitidx];
>>>> + /*
>>>> + * This races, without locks, with set_pageblock_migratetype(). Ensure
>>> set_pfnblock_flags_mask would be better?
>>>> + * a consistent (non-tearing) read of the memory array, so that results,
>>>
>>> Thanks for proceeding and suggestion, John.
>>>
>>> IIUC, the load tearing wouldn't be an issue since [1] fixed the issue.
>>
>> Did it? [1] fixed something, but I'm not sure we can claim that that
>> code is now safe against tearing in all possible cases, especially given
>> the recent discussion here. Specifically, having this code do a read,
>> then follow that up with calculations, seems correct. Anything else is
>
> The load tearing you are trying to explain in the comment would be
> solved by [1], since the bits always fall within a single word, and a
> word-sized access to a word-aligned address is atomic, so there is
> no load tearing problem IIUC.
>
> Instead of the tearing problem, what we are trying to solve with
> READ_ONCE is to prevent refetching if the function gets
> inlined in the future.
>
I'm perhaps using "tearing" as too broad a term; maybe just removing
the "(non-tearing)" part would fix up the comment.
>> sketchy...
>>
>>>
>>> The concern in our discussion was that an aggressive compiler (e.g., LTO) or code
>>> refactoring to make the code inline in the *future* could potentially force
>>> refetching (i.e., re-reading) the bitmap[word_bitidx].
>>>
>>> If so, shouldn't the comment be the one you helped before?
>>
>> Well, maybe updated to something like this?
>>
>> /*
>> * This races, without locks, with set_pageblock_migratetype(). Ensure
>
> set_pageblock_migratetype is a higher-level function, so it would
> be a better fit to say set_pfnblock_flags_mask.
OK
>
>> * a consistent (non-tearing) read of the memory array, so that results,
>
> So the tearing problem shouldn't happen anymore after [1], so I am trying to
> explain the refetching (or re-read) problem in the comment.
>
>> * even though racy, are not corrupted--even if this function is
>
> The value is already read atomically, so I don't think it could be corrupted
> even if the function were inlined in the future.
>
> Please correct me if I miss something.
>
>> * refactored and/or inlined.
>> */
>
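
FWIW, pulling your points together, maybe something roughly like this
(just a sketch; the exact comment wording is still open):

    	/*
    	 * This races, without locks, with set_pfnblock_flags_mask(). Use
    	 * READ_ONCE() so the word is read exactly once and cannot be
    	 * refetched (re-read), even if this function is refactored
    	 * and/or inlined in the future.
    	 */
    	word = READ_ONCE(bitmap[word_bitidx]);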
thanks,
--
John Hubbard
NVIDIA