linux-f2fs-devel.lists.sourceforge.net archive mirror
From: Chao Yu <chao@kernel.org>
To: Vishal Moola <vishal.moola@gmail.com>
Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net
Subject: Re: [f2fs-dev] [PATCH v3 14/23] f2fs: Convert f2fs_write_cache_pages() to use filemap_get_folios_tag()
Date: Mon, 12 Dec 2022 22:41:52 +0800	[thread overview]
Message-ID: <0a95ba7b-9335-ce03-0f47-5d9f4cce988f@kernel.org>
In-Reply-To: <CAOzc2pzp0JEanJTgzSrRt3ziRCrR6rGCjpwJvAD8uCqsHqXnHg@mail.gmail.com>

Hi Vishal,

Sorry for the delayed reply.

On 2022/12/6 4:34, Vishal Moola wrote:
> On Tue, Nov 22, 2022 at 6:26 PM Vishal Moola <vishal.moola@gmail.com> wrote:
>>
>> On Mon, Nov 14, 2022 at 1:38 PM Vishal Moola <vishal.moola@gmail.com> wrote:
>>>
>>> On Sun, Nov 13, 2022 at 11:02 PM Chao Yu <chao@kernel.org> wrote:
>>>>
>>>> On 2022/10/18 4:24, Vishal Moola (Oracle) wrote:
>>>>> Converted the function to use a folio_batch instead of pagevec. This is in
>>>>> preparation for the removal of find_get_pages_range_tag().
>>>>>
>>>>> Also modified f2fs_all_cluster_page_ready to take in a folio_batch instead
>>>>> of pagevec. This does NOT support large folios. The function currently
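
For reference, the general shape of a tagged-lookup writeback loop after a
pagevec -> folio_batch conversion is roughly the sketch below. It is
illustrative only, not the code from this patch, and assumes nothing beyond
the generic filemap_get_folios_tag()/folio_batch API (mapping, index, end
and tag are assumed to come from the enclosing function):

	struct folio_batch fbatch;
	unsigned int nr_folios, i;

	folio_batch_init(&fbatch);

	while (index <= end) {
		/* grab up to one batch of tagged folios, a ref held on each */
		nr_folios = filemap_get_folios_tag(mapping, &index, end,
				tag, &fbatch);
		if (!nr_folios)
			break;

		for (i = 0; i < nr_folios; i++) {
			struct folio *folio = fbatch.folios[i];

			/* per-folio writeback work goes here */
		}
		/* drop the references taken by the lookup */
		folio_batch_release(&fbatch);
		cond_resched();
	}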
>>>>
>>>> Vishal,
>>>>
>>>> It looks like this patch tries to revert Fengnan's change:
>>>>
>>>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=01fc4b9a6ed8eacb64e5609bab7ac963e1c7e486
>>>>
>>>> How about doing some tests to evaluate its performance effect?
>>>
>>> Yeah I'll play around with it to see how much of a difference it makes.
>>
>> I did some testing. It looks like reverting Fengnan's change introduces
>> occasional, but significant, spikes in write latency. I'll work on a variation
>> of the patch that maintains the use of F2FS_ONSTACK_PAGES and send
>> that in the next version of the patch series. Thanks for pointing that out!
> 
> Following Matthew's comment, I'm thinking we should go with this patch
> as is. The numbers for the two variations showed no substantial
> differences in latency.
> 
> While the new variant would maintain the use of F2FS_ONSTACK_PAGES,
> the code becomes messier and would end up limiting the number of
> folios written back once large folio support is added. This means it would
> have to be converted down to this version later anyway.
> 
> Does leaving this patch as is sound good to you?
> 
> For reference, here's what the version continuing to use a page
> array of size F2FS_ONSTACK_PAGES would change:
> 
> +               nr_pages = 0;
> +again:
> +               nr_folios = filemap_get_folios_tag(mapping, &index, end,
> +                               tag, &fbatch);
> +               if (nr_folios == 0) {
> +                       if (nr_pages)
> +                               goto write;
> +                               goto write;

Duplicated code.

>                          break;
> +               }
> 
> +               for (i = 0; i < nr_folios; i++) {
> +                       struct folio *folio = fbatch.folios[i];
> +
> +                       idx = 0;
> +                       p = folio_nr_pages(folio);
> +add_more:
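> +                       /* peel one page (and one ref) at a time off the folio */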
> +                       pages[nr_pages] = folio_page(folio, idx);
> +                       folio_ref_inc(folio);
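> +                       /* array full: note the resume index and write what we have */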
> +                       if (++nr_pages == F2FS_ONSTACK_PAGES) {
> +                               index = folio->index + idx + 1;
> +                               folio_batch_release(&fbatch);
> +                               goto write;
> +                       }
> +                       if (++idx < p)
> +                               goto add_more;
> +               }
> +               folio_batch_release(&fbatch);
> +               goto again;
> +write:

Looks fine to me. Can you please send a formal patch?

Thanks,

> 
>> How do the remaining f2fs patches in the series look to you?
>> Patch 16/23 f2fs_sync_meta_pages() in particular seems like it may
>> be prone to problems. If there are any changes that need to be made to
>> it I can include those in the next version as well.
> 
> Thanks for reviewing the patches so far. I wanted to follow up to ask
> for review of the last couple of patches.


_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

Thread overview: 60+ messages
2022-10-17 20:24 [f2fs-dev] [PATCH v3 00/23] Convert to filemap_get_folios_tag() Vishal Moola (Oracle)
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 01/23] pagemap: Add filemap_grab_folio() Vishal Moola (Oracle)
2022-10-24 19:36   ` Vishal Moola
2022-10-24 19:38   ` Matthew Wilcox
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 02/23] filemap: Added filemap_get_folios_tag() Vishal Moola (Oracle)
2022-10-24 19:42   ` Matthew Wilcox
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 03/23] filemap: Convert __filemap_fdatawait_range() to use filemap_get_folios_tag() Vishal Moola (Oracle)
2022-10-24 20:06   ` Matthew Wilcox
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 04/23] page-writeback: Convert write_cache_pages() " Vishal Moola (Oracle)
2022-10-24 20:12   ` Matthew Wilcox
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 05/23] afs: Convert afs_writepages_region() " Vishal Moola (Oracle)
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 06/23] btrfs: Convert btree_write_cache_pages() to use filemap_get_folio_tag() Vishal Moola (Oracle)
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 07/23] btrfs: Convert extent_write_cache_pages() to use filemap_get_folios_tag() Vishal Moola (Oracle)
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 08/23] ceph: Convert ceph_writepages_start() " Vishal Moola (Oracle)
2022-10-28 17:20   ` Jeff Layton
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 09/23] cifs: Convert wdata_alloc_and_fillpages() " Vishal Moola (Oracle)
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 10/23] ext4: Convert mpage_prepare_extent_to_map() " Vishal Moola (Oracle)
2022-10-24 19:26   ` Vishal Moola
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 11/23] f2fs: Convert f2fs_fsync_node_pages() " Vishal Moola (Oracle)
2022-10-24 19:31   ` Vishal Moola
2022-11-10 18:51     ` Vishal Moola
2022-10-29  4:46   ` Chao Yu
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 12/23] f2fs: Convert f2fs_flush_inline_data() " Vishal Moola (Oracle)
2022-10-29  4:47   ` Chao Yu
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 13/23] f2fs: Convert f2fs_sync_node_pages() " Vishal Moola (Oracle)
2022-10-29  4:47   ` Chao Yu
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 14/23] f2fs: Convert f2fs_write_cache_pages() " Vishal Moola (Oracle)
2022-11-14  7:02   ` Chao Yu
2022-11-14 21:38     ` Vishal Moola
2022-11-23  2:26       ` Vishal Moola
2022-11-23  7:51         ` Vishal Moola
2022-12-05 20:34         ` Vishal Moola
2022-12-12 14:41           ` Chao Yu [this message]
2022-12-12 19:13             ` [f2fs-dev] [RFC PATCH] " Vishal Moola (Oracle)
2022-12-15  1:48               ` Chao Yu
2022-12-15 18:45                 ` Matthew Wilcox
2022-12-21 17:17                   ` Vishal Moola
2022-12-23  8:07                     ` Christoph Hellwig
2022-12-15 19:02               ` Jaegeuk Kim
2023-01-03 20:53                 ` Matthew Wilcox
2022-11-29 19:14     ` [f2fs-dev] [PATCH v3 14/23] " Matthew Wilcox
2022-11-30 12:48       ` [f2fs-dev] [PATCH] f2fs: Support enhanced hot/cold data separation for f2fs Yangtao Li via Linux-f2fs-devel
2022-11-30 15:18         ` Matthew Wilcox
2022-12-07 20:51           ` Luis Chamberlain
2024-01-25 20:47             ` Matthew Wilcox
2024-01-25 20:54               ` Luis Chamberlain
2024-01-26 21:01                 ` Matthew Wilcox
2024-01-26 21:32                   ` Luis Chamberlain
2024-01-27  7:05                     ` Eric Biggers
2022-11-30 12:51       ` [f2fs-dev] [PATCH]f2fs: Convert f2fs_write_cache_pages() to use filemap_get_folios_tag() Yangtao Li via Linux-f2fs-devel
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 15/23] f2fs: Convert last_fsync_dnode() " Vishal Moola (Oracle)
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 16/23] f2fs: Convert f2fs_sync_meta_pages() " Vishal Moola (Oracle)
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 17/23] gfs2: Convert gfs2_write_cache_jdata() " Vishal Moola (Oracle)
2022-10-24 19:23   ` Vishal Moola
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 18/23] nilfs2: Convert nilfs_lookup_dirty_data_buffers() " Vishal Moola (Oracle)
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 19/23] nilfs2: Convert nilfs_lookup_dirty_node_buffers() " Vishal Moola (Oracle)
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 20/23] nilfs2: Convert nilfs_btree_lookup_dirty_buffers() " Vishal Moola (Oracle)
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 21/23] nilfs2: Convert nilfs_copy_dirty_pages() " Vishal Moola (Oracle)
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 22/23] nilfs2: Convert nilfs_clear_dirty_pages() " Vishal Moola (Oracle)
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 23/23] filemap: Remove find_get_pages_range_tag() Vishal Moola (Oracle)
