From: Chao Yu <chao@kernel.org>
To: Jack Qiu <jack.qiu@huawei.com>, linux-f2fs-devel@lists.sourceforge.net
Subject: Re: [f2fs-dev] [PATCH] f2fs: fix page leak in redirty_blocks
Date: Mon, 30 May 2022 20:02:58 +0800 [thread overview]
Message-ID: <b9047826-d171-69f9-7c97-c9fc71a4d218@kernel.org> (raw)
In-Reply-To: <b5b3ecce-d4d9-eeff-cdbb-3441885cf538@huawei.com>
On 2022/5/30 9:02, Jack Qiu wrote:
> On 2022/5/29 14:58, Chao Yu wrote:
>> On 2022/5/28 17:35, Jack Qiu via Linux-f2fs-devel wrote:
>>> When find_lock_page() fails, the pages in [i, page_len) will leak.
>>
>> I doubt find_lock_page() can fail here, since an extra reference
>> count was taken on each page by the previous read_cache_page().
>
> Thanks for review.
> I'm not sure about that; my knowledge here is limited.
I guess those pages were pinned by the extra reference, so
find_lock_page() should never miss them...
> If it is true, maybe use f2fs_bug_on(sbi, !page) is better?
I think so.
Thanks,
>
>>
>> Thanks,
>>
>>>
>>> Signed-off-by: Jack Qiu <jack.qiu@huawei.com>
>>> ---
>>> fs/f2fs/file.c | 13 +++++++++++++
>>> 1 file changed, 13 insertions(+)
>>>
>>> diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
>>> index 100637b1adb3..0e8938c5918e 100644
>>> --- a/fs/f2fs/file.c
>>> +++ b/fs/f2fs/file.c
>>> @@ -3954,6 +3954,12 @@ static int redirty_blocks(struct inode *inode, pgoff_t page_idx, int len)
>>> struct page *page;
>>> pgoff_t redirty_idx = page_idx;
>>> int i, page_len = 0, ret = 0;
>>> + struct page **pages;
>>> +
>>> + pages = f2fs_kvzalloc(F2FS_I_SB(inode),
>>> + sizeof(struct page *) * len, GFP_NOFS);
>>> + if (!pages)
>>> + return -ENOMEM;
>>>
>>> page_cache_ra_unbounded(&ractl, len, 0);
>>>
>>> @@ -3964,6 +3970,7 @@ static int redirty_blocks(struct inode *inode, pgoff_t page_idx, int len)
>>> break;
>>> }
>>> page_len++;
>>> + pages[i] = page;
>>> }
>>>
>>> for (i = 0; i < page_len; i++, redirty_idx++) {
>>> @@ -3975,8 +3982,14 @@ static int redirty_blocks(struct inode *inode, pgoff_t page_idx, int len)
>>> set_page_dirty(page);
>>> f2fs_put_page(page, 1);
>>> f2fs_put_page(page, 0);
>>> + pages[i] = NULL;
>>> }
>>>
>>> + /* put pages[i, page_len) when error happens */
>>> + for (; ret < 0 && i < page_len; i++)
>>> + f2fs_put_page(pages[i], 0);
>>> + kvfree(pages);
>>> +
>>> return ret;
>>> }
>>>
>>> --
>>> 2.31.1
>>>
>>>
>>>
>>> _______________________________________________
>>> Linux-f2fs-devel mailing list
>>> Linux-f2fs-devel@lists.sourceforge.net
>>> https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel
>
Thread overview: 4+ messages
2022-05-28 9:35 [f2fs-dev] [PATCH] f2fs: fix page leak in redirty_blocks Jack Qiu via Linux-f2fs-devel
2022-05-29 6:58 ` Chao Yu
2022-05-30 1:02 ` Jack Qiu via Linux-f2fs-devel
2022-05-30 12:02 ` Chao Yu [this message]