linux-mm.kvack.org archive mirror
From: Sidhartha Kumar <sidhartha.kumar@oracle.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	akpm@linux-foundation.org, linmiaohe@huawei.com,
	jane.chu@oracle.com, nao.horiguchi@gmail.com, osalvador@suse.de
Subject: Re: [PATCH] mm/memory-failure: remove shake_page()
Date: Fri, 26 Apr 2024 10:57:31 -0700	[thread overview]
Message-ID: <d05cbf22-f3e8-414a-a2e3-03e0b857eaca@oracle.com> (raw)
In-Reply-To: <ZivlrMAwRI6xJhc-@casper.infradead.org>

On 4/26/24 10:34 AM, Matthew Wilcox wrote:
> On Fri, Apr 26, 2024 at 10:15:11AM -0700, Sidhartha Kumar wrote:
>> Use a folio in get_any_page() to save 5 calls to compound head and
>> convert the last user of shake_page() to shake_folio(). This allows us
>> to remove the shake_page() definition.
> 
> So I didn't do this before because I wasn't convinced it was safe.
> We don't have a refcount on the folio, so the page might no longer
> be part of this folio by the time we get the refcount on the folio.
> 
> I'd really like to see some argumentation for why this is safe.
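
If I understand the concern correctly, the race would be roughly:

	folio = page_folio(p);
	/* p is freed, the compound page is split or reallocated */
	/* ... only now do we take the refcount ... */

and by the time we hold the reference, p may no longer be part of
"folio", so the later folio_test_hugetlb()/shake_folio()/folio_put()
calls could operate on the wrong folio.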

If I move the folio = page_folio() line down to after we verify that
__get_hwpoison_page() has returned 1, which indicates the reference count
was successfully incremented via folio_try_get(), then the folio
conversion only happens once we hold a refcount. In the case where we
don't call __get_hwpoison_page(), the MF_COUNT_INCREASED flag is set,
which means the page already has existing users, so that path would be
safe as well. So I think this is safe after moving page_folio() to after
__get_hwpoison_page().
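
Something along these lines (rough sketch only, with the existing
retry/error handling elided, so not the exact hunk):

static int get_any_page(struct page *p, unsigned long flags)
{
	struct folio *folio;
	int ret = 0, pass = 0;
	bool count_increased = false;

	if (flags & MF_COUNT_INCREASED)
		count_increased = true;

try_again:
	if (!count_increased) {
		ret = __get_hwpoison_page(p, flags);
		/* ... existing ret == 0 / ret == -EBUSY handling ... */
	}

	/*
	 * Either __get_hwpoison_page() took a reference for us via
	 * folio_try_get(), or the caller passed MF_COUNT_INCREASED and
	 * already holds one, so the folio lookup below happens under a
	 * refcount.
	 */
	folio = page_folio(p);

	if (folio_test_hugetlb(folio) || HWPoisonHandlable(p, flags))
		ret = 1;
	/* ... rest as in the patch ... */
}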

Does that seem correct?

Thanks,
Sid

> 
>> Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
>> ---
>>   mm/memory-failure.c | 20 ++++++++------------
>>   1 file changed, 8 insertions(+), 12 deletions(-)
>>
>> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
>> index 16ada4fb02b79..273f6fef29f25 100644
>> --- a/mm/memory-failure.c
>> +++ b/mm/memory-failure.c
>> @@ -385,11 +385,6 @@ void shake_folio(struct folio *folio)
>>   }
>>   EXPORT_SYMBOL_GPL(shake_folio);
>>   
>> -static void shake_page(struct page *page)
>> -{
>> -	shake_folio(page_folio(page));
>> -}
>> -
>>   static unsigned long dev_pagemap_mapping_shift(struct vm_area_struct *vma,
>>   		unsigned long address)
>>   {
>> @@ -1433,6 +1428,7 @@ static int get_any_page(struct page *p, unsigned long flags)
>>   {
>>   	int ret = 0, pass = 0;
>>   	bool count_increased = false;
>> +	struct folio *folio = page_folio(p);
>>   
>>   	if (flags & MF_COUNT_INCREASED)
>>   		count_increased = true;
>> @@ -1446,7 +1442,7 @@ static int get_any_page(struct page *p, unsigned long flags)
>>   				if (pass++ < 3)
>>   					goto try_again;
>>   				ret = -EBUSY;
>> -			} else if (!PageHuge(p) && !is_free_buddy_page(p)) {
>> +			} else if (!folio_test_hugetlb(folio) && !is_free_buddy_page(p)) {
>>   				/* We raced with put_page, retry. */
>>   				if (pass++ < 3)
>>   					goto try_again;
>> @@ -1459,7 +1455,7 @@ static int get_any_page(struct page *p, unsigned long flags)
>>   			 * page, retry.
>>   			 */
>>   			if (pass++ < 3) {
>> -				shake_page(p);
>> +				shake_folio(folio);
>>   				goto try_again;
>>   			}
>>   			ret = -EIO;
>> @@ -1467,7 +1463,7 @@ static int get_any_page(struct page *p, unsigned long flags)
>>   		}
>>   	}
>>   
>> -	if (PageHuge(p) || HWPoisonHandlable(p, flags)) {
>> +	if (folio_test_hugetlb(folio) || HWPoisonHandlable(p, flags)) {
>>   		ret = 1;
>>   	} else {
>>   		/*
>> @@ -1475,12 +1471,12 @@ static int get_any_page(struct page *p, unsigned long flags)
>>   		 * it into something we can handle.
>>   		 */
>>   		if (pass++ < 3) {
>> -			put_page(p);
>> -			shake_page(p);
>> +			folio_put(folio);
>> +			shake_folio(folio);
>>   			count_increased = false;
>>   			goto try_again;
>>   		}
>> -		put_page(p);
>> +		folio_put(folio);
>>   		ret = -EIO;
>>   	}
>>   out:
>> @@ -1643,7 +1639,7 @@ static bool hwpoison_user_mappings(struct folio *folio, struct page *p,
>>   
>>   	/*
>>   	 * try_to_unmap() might put mlocked page in lru cache, so call
>> -	 * shake_page() again to ensure that it's flushed.
>> +	 * shake_folio() again to ensure that it's flushed.
>>   	 */
>>   	if (mlocked)
>>   		shake_folio(folio);
>> -- 
>> 2.44.0
>>
> 



Thread overview: 9+ messages
2024-04-26 17:15 [PATCH] mm/memory-failure: remove shake_page() Sidhartha Kumar
2024-04-26 17:34 ` Matthew Wilcox
2024-04-26 17:57   ` Sidhartha Kumar [this message]
2024-04-26 18:27     ` Matthew Wilcox
2024-04-26 18:53       ` Sidhartha Kumar
2024-04-26 19:05         ` Matthew Wilcox
2024-04-26 19:52           ` Jane Chu
2024-04-26 20:33             ` Jane Chu
2024-04-28  2:24               ` Miaohe Lin
