public inbox for linux-mm@kvack.org
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Barry Song <21cnbao@gmail.com>
Cc: Matthew Wilcox <willy@infradead.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Hugh Dickins <hughd@google.com>,
	David Hildenbrand <david@redhat.com>,
	Baolin Wang <baolin.wang@linux.alibaba.com>, <linux-mm@kvack.org>
Subject: Re: [PATCH] mm: shmem: convert to use folio_zero_range()
Date: Mon, 21 Oct 2024 16:14:40 +0800	[thread overview]
Message-ID: <e7a1ff17-d7c8-45bf-ae2c-18ac3a37c22d@huawei.com> (raw)
In-Reply-To: <CAGsJ_4xDdBtOwHqGSrtmJv=p6XDHFDT8RC==PybCc6e1qib=Fw@mail.gmail.com>



On 2024/10/21 15:55, Barry Song wrote:
> On Mon, Oct 21, 2024 at 8:47 PM Barry Song <21cnbao@gmail.com> wrote:
>>
>> On Mon, Oct 21, 2024 at 7:09 PM Kefeng Wang <wangkefeng.wang@huawei.com> wrote:
>>>
>>>
>>>
>>> On 2024/10/21 13:38, Barry Song wrote:
>>>> On Mon, Oct 21, 2024 at 6:16 PM Kefeng Wang <wangkefeng.wang@huawei.com> wrote:
>>>>>
>>>>>
>>>>>
>>>>> On 2024/10/21 12:15, Barry Song wrote:
>>>>>> On Fri, Oct 18, 2024 at 8:48 PM Kefeng Wang <wangkefeng.wang@huawei.com> wrote:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 2024/10/18 15:32, Kefeng Wang wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> On 2024/10/18 13:23, Barry Song wrote:
>>>>>>>>> On Fri, Oct 18, 2024 at 6:20 PM Kefeng Wang
>>>>>>>>> <wangkefeng.wang@huawei.com> wrote:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 2024/10/17 23:09, Matthew Wilcox wrote:
>>>>>>>>>>> On Thu, Oct 17, 2024 at 10:25:04PM +0800, Kefeng Wang wrote:
>>>>>>>>>>>> Directly use folio_zero_range() to clean up the code.
>>>>>>>>>>>
>>>>>>>>>>> Are you sure there's no performance regression introduced by this?
>>>>>>>>>>> clear_highpage() is often optimised in ways that we can't optimise for
>>>>>>>>>>> a plain memset().  On the other hand, if the folio is large, maybe a
>>>>>>>>>>> modern CPU will be able to do better than clear-one-page-at-a-time.
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Right, I missed this; clear_page() might be better than memset(). I changed
>>>>>>>>>> this one while looking at shmem_writepage(), which was already converted
>>>>>>>>>> from clear_highpage() to folio_zero_range(). I also grepped for
>>>>>>>>>> folio_zero_range(); there are some other callers of it:
>>>>>>>>>>
>>>>>>>>>> fs/bcachefs/fs-io-buffered.c:   folio_zero_range(folio, 0, folio_size(folio));
>>>>>>>>>> fs/bcachefs/fs-io-buffered.c:   folio_zero_range(f, 0, folio_size(f));
>>>>>>>>>> fs/bcachefs/fs-io-buffered.c:   folio_zero_range(f, 0, folio_size(f));
>>>>>>>>>> fs/libfs.c:                     folio_zero_range(folio, 0, folio_size(folio));
>>>>>>>>>> fs/ntfs3/frecord.c:             folio_zero_range(folio, 0, folio_size(folio));
>>>>>>>>>> mm/page_io.c:                   folio_zero_range(folio, 0, folio_size(folio));
>>>>>>>>>> mm/shmem.c:                     folio_zero_range(folio, 0, folio_size(folio));
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>> IOW, what performance testing have you done with this patch?
>>>>>>>>>>
>>>>>>>>>> No performance test before, but I wrote a test case (a rough sketch of the
>>>>>>>>>> measurement loop follows the list):
>>>>>>>>>>
>>>>>>>>>> 1) allocate N large folios (folio_alloc(PMD_ORDER))
>>>>>>>>>> 2) measure the time (us) to clear all N folios with
>>>>>>>>>>        clear_highpage/folio_zero_range/folio_zero_user
>>>>>>>>>> 3) release the N folios
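
A minimal sketch of that measurement loop, assuming a kernel-module context; the function name folio_zero_bench() and the ktime_get() timing are illustrative only, and error handling is omitted:

static struct folio *folios[512];

static void folio_zero_bench(unsigned int nr)
{
	ktime_t t0, t1;
	unsigned int i;

	/* 1) allocate N large (PMD_ORDER) folios */
	for (i = 0; i < nr; i++)
		folios[i] = folio_alloc(GFP_KERNEL, PMD_ORDER);

	/* 2) time how long clearing all of them takes; swap in a
	 *    clear_highpage() loop or folio_zero_user() to compare variants */
	t0 = ktime_get();
	for (i = 0; i < nr; i++)
		folio_zero_range(folios[i], 0, folio_size(folios[i]));
	t1 = ktime_get();
	pr_info("cleared %u folios in %lld us\n", nr, ktime_us_delta(t1, t0));

	/* 3) release the folios */
	for (i = 0; i < nr; i++)
		folio_put(folios[i]);
}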
>>>>>>>>>>
>>>>>>>>>> The results (5 runs) on my machine are shown below:
>>>>>>>>>>
>>>>>>>>>> N=1 (us):
>>>>>>>>>>        clear_highpage  folio_zero_range  folio_zero_user
>>>>>>>>>>   1        69                74               177
>>>>>>>>>>   2        57                62               168
>>>>>>>>>>   3        54                58               234
>>>>>>>>>>   4        54                58               157
>>>>>>>>>>   5        56                62               148
>>>>>>>>>> avg        58                62.8             176.8
>>>>>>>>>>
>>>>>>>>>> N=100 (us):
>>>>>>>>>>        clear_highpage  folio_zero_range  folio_zero_user
>>>>>>>>>>   1     11015             11309             32833
>>>>>>>>>>   2     10385             11110             49751
>>>>>>>>>>   3     10369             11056             33095
>>>>>>>>>>   4     10332             11017             33106
>>>>>>>>>>   5     10483             11000             49032
>>>>>>>>>> avg     10516.8           11098.4           39563.4
>>>>>>>>>>
>>>>>>>>>> N=512 (us):
>>>>>>>>>>        clear_highpage  folio_zero_range  folio_zero_user
>>>>>>>>>>   1     55560             60055            156876
>>>>>>>>>>   2     55485             60024            157132
>>>>>>>>>>   3     55474             60129            156658
>>>>>>>>>>   4     55555             59867            157259
>>>>>>>>>>   5     55528             59932            157108
>>>>>>>>>> avg     55520.4           60001.4          157006.6
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> folio_zero_user() calls cond_resched() many times, so its timing fluctuates
>>>>>>>>>> a lot, and clear_highpage() is better than folio_zero_range(), as you said.
>>>>>>>>>>
>>>>>>>>>> Maybe add a new helper so that all folio_zero_range(folio, 0, folio_size(folio))
>>>>>>>>>> callers can use clear_highpage() + flush_dcache_folio() instead?
>>>>>>>>>
>>>>>>>>> If this also improves performance for other existing callers of
>>>>>>>>> folio_zero_range(), then that's a positive outcome.
>>>>>>>>
>>> ...
>>>
>>>>>> Hi Kefeng,
>>>>>> what is your plan? Providing a helper like clear_highfolio() or similar?
>>>>>
>>>>> Yes, from the test above, using clear_highpage()/flush_dcache_folio() is
>>>>> better than using folio_zero_range() for zeroing a folio (especially a
>>>>> large folio), so I'd like to add a new helper, maybe named folio_zero()
>>>>> since it zeroes the whole folio.
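
A minimal sketch of what such a helper might look like; folio_zero() is only the name proposed above and is not an existing kernel API:

/* Hypothetical helper, as proposed above: zero a whole folio one page at a
 * time so the architecture's optimised clear_highpage() is used instead of
 * the plain memset() behind folio_zero_range(). */
static inline void folio_zero(struct folio *folio)
{
	long i, nr = folio_nr_pages(folio);

	for (i = 0; i < nr; i++)
		clear_highpage(folio_page(folio, i));
	flush_dcache_folio(folio);
}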
>>>>
>>>> We already have a helper like folio_zero_user()?
>>>> Is it not good enough?
>>>
>>> Since it calls cond_resched() many times, its performance is the worst...
>>
>> Not exactly? It should have zero cost on a preemptible kernel.
>> On a non-preemptible kernel, it keeps the folio-clearing loop from
>> occupying the CPU and starving other processes, right?
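
A simplified illustration of the trade-off being discussed; this is not the actual folio_zero_user() implementation, only the shape of the two loops being compared:

/* (a) Plain per-page loop: no scheduling points, so on a non-preemptible
 *     kernel clearing a large folio can hold the CPU for the whole clear. */
static void clear_folio_plain(struct folio *folio)
{
	long i, nr = folio_nr_pages(folio);

	for (i = 0; i < nr; i++)
		clear_highpage(folio_page(folio, i));
}

/* (b) Loop with scheduling points: cond_resched() is essentially free on a
 *     preemptible kernel, but on a non-preemptible one it is what prevents
 *     the clear from starving other tasks. */
static void clear_folio_resched(struct folio *folio, unsigned long addr)
{
	long i, nr = folio_nr_pages(folio);

	for (i = 0; i < nr; i++) {
		cond_resched();
		clear_user_highpage(folio_page(folio, i), addr + i * PAGE_SIZE);
	}
}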
> 
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> 
> @@ -2393,10 +2393,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
>           * it now, lest undo on failure cancel our earlier guarantee.
>           */
> 
>          if (sgp != SGP_WRITE && !folio_test_uptodate(folio)) {
> -               long i, n = folio_nr_pages(folio);
> -
> -               for (i = 0; i < n; i++)
> -                       clear_highpage(folio_page(folio, i));
> +               folio_zero_user(folio, vmf->address);
>                  flush_dcache_folio(folio);
>                  folio_mark_uptodate(folio);
>          }
> 
> Do we perform better or worse with the above?

This path is also used for SGP_FALLOC, where vmf = NULL, so we would have to
use folio_zero_user(folio, 0). I think the performance is worse; I will retest
once I can access the hardware.
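
For reference, a sketch of how that hunk could cover both the page-fault path and SGP_FALLOC (where vmf is NULL), passing 0 as the address hint as suggested above; this is an untested assumption, not a posted patch:

	if (sgp != SGP_WRITE && !folio_test_uptodate(folio)) {
		/* vmf is NULL on the SGP_FALLOC path, so fall back to 0 as
		 * the address hint for folio_zero_user(). */
		folio_zero_user(folio, vmf ? vmf->address : 0);
		flush_dcache_folio(folio);
		folio_mark_uptodate(folio);
	}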




Thread overview: 30+ messages
2024-10-17 14:25 [PATCH] mm: shmem: avoid repeated flush dcache in shmem_writepage() Kefeng Wang
2024-10-17 14:25 ` [PATCH] mm: shmem: convert to use folio_zero_range() Kefeng Wang
2024-10-17 15:09   ` Matthew Wilcox
2024-10-18  5:20     ` Kefeng Wang
2024-10-18  5:23       ` Barry Song
2024-10-18  7:32         ` Kefeng Wang
2024-10-18  7:47           ` Kefeng Wang
2024-10-21  4:15             ` Barry Song
2024-10-21  5:16               ` Kefeng Wang
2024-10-21  5:38                 ` Barry Song
2024-10-21  6:09                   ` Kefeng Wang
2024-10-21  7:47                     ` Barry Song
2024-10-21  7:55                       ` Barry Song
2024-10-21  8:14                         ` Kefeng Wang [this message]
2024-10-21  9:17                           ` Barry Song
2024-10-21 15:33                             ` Kefeng Wang
2024-10-21 20:32                               ` Barry Song
2024-10-22 15:10                                 ` Kefeng Wang
2024-10-22 22:56                                   ` Barry Song
2024-10-24 10:10                                     ` Kefeng Wang
2024-10-25  2:59                                       ` Huang, Ying
2024-10-25  7:42                                         ` Kefeng Wang
2024-10-25  7:47                                           ` Huang, Ying
2024-10-25 10:21                                             ` Kefeng Wang
2024-10-25 12:21                                               ` Huang, Ying
2024-10-25 13:35                                                 ` Kefeng Wang
2024-10-28  2:39                                                   ` Huang, Ying
2024-10-28  6:37                                                     ` Kefeng Wang
2024-10-28 11:41                                                       ` Kefeng Wang
2024-10-30  1:26                                                         ` Huang, Ying
