From: David Hildenbrand <david@redhat.com>
To: Yang Shi <shy828301@gmail.com>
Cc: Chih-En Lin <shiyn.lin@gmail.com>,
	Pasha Tatashin <pasha.tatashin@soleen.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Qi Zheng <zhengqi.arch@bytedance.com>,
	"Matthew Wilcox (Oracle)" <willy@infradead.org>,
	Christophe Leroy <christophe.leroy@csgroup.eu>,
	John Hubbard <jhubbard@nvidia.com>, Nadav Amit <namit@vmware.com>,
	Barry Song <baohua@kernel.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	Masami Hiramatsu <mhiramat@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@redhat.com>,
	Arnaldo Carvalho de Melo <acme@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	Jiri Olsa <jolsa@kernel.org>, Namhyung Kim <namhyung@kernel.org>,
	Peter Xu <peterx@redhat.com>, Vlastimil Babka <vbabka@suse.cz>,
	Zach O'Keefe <zokeefe@google.com>,
	Yun Zhou <yun.zhou@windriver.com>,
	Hugh Dickins <hughd@google.com>,
	Suren Baghdasaryan <surenb@google.com>,
	Yu Zhao <yuzhao@google.com>, Juergen Gross <jgross@suse.com>,
	Tong Tiangen <tongtiangen@huawei.com>,
	Liu Shixin <liushixin2@huawei.com>,
	Anshuman Khandual <anshuman.khandual@arm.com>,
	Li kunyu <kunyu@nfschina.com>, Minchan Kim <minchan@kernel.org>,
	Miaohe Lin <linmiaohe@huawei.com>,
	Gautam Menghani <gautammenghani201@gmail.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Mark Brown <broonie@kernel.org>, Will Deacon <will@kernel.org>,
	Vincenzo Frascino <Vincenzo.Frascino@arm.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	"Eric W. Biederman" <ebiederm@xmission.com>,
	Andy Lutomirski <luto@kernel.org>,
	Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
	"Liam R. Howlett" <Liam.Howlett@oracle.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	Andrei Vagin <avagin@gmail.com>, Barret Rhoden <brho@google.com>,
	Michal Hocko <mhocko@suse.com>,
	"Jason A. Donenfeld" <Jason@zx2c4.com>,
	Alexey Gladkov <legion@kernel.org>,
	linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org,
	linux-perf-users@vger.kernel.org,
	Dinglan Peng <peng301@purdue.edu>,
	Pedro Fonseca <pfonseca@purdue.edu>,
	Jim Huang <jserv@ccns.ncku.edu.tw>,
	Huichun Feng <foxhoundsk.tw@gmail.com>
Subject: Re: [PATCH v4 00/14] Introduce Copy-On-Write to Page Table
Date: Tue, 14 Feb 2023 18:39:21 +0100
Message-ID: <b1cada27-33b0-f53a-4059-07c54d9f1bc4@redhat.com>
In-Reply-To: <CAHbLzkoYo3Fwz2H=GM3X+ao33NN2fc2qh6y_ir4A-RL0LvJaZA@mail.gmail.com>

On 14.02.23 18:23, Yang Shi wrote:
> On Tue, Feb 14, 2023 at 1:58 AM David Hildenbrand <david@redhat.com> wrote:
>>
>> On 10.02.23 18:20, Chih-En Lin wrote:
>>> On Fri, Feb 10, 2023 at 11:21:16AM -0500, Pasha Tatashin wrote:
>>>>>>> Currently, copy-on-write is only used for the mapped memory; the child
>>>>>>> process still needs to copy the entire page table from the parent
>>>>>>> process during forking. The parent process might take a lot of time and
>>>>>>> memory to copy the page table when the parent has a big page table
>>>>>>> allocated. For example, the memory usage of a process after forking with
>>>>>>> 1 GB mapped memory is as follows:
>>>>>>
>>>>>> For some reason, I was not able to reproduce performance improvements
>>>>>> with a simple fork() performance measurement program. The results that
>>>>>> I saw are the following:
>>>>>>
>>>>>> Base:
>>>>>> Fork latency per gigabyte: 0.004416 seconds
>>>>>> Fork latency per gigabyte: 0.004382 seconds
>>>>>> Fork latency per gigabyte: 0.004442 seconds
>>>>>> COW kernel:
>>>>>> Fork latency per gigabyte: 0.004524 seconds
>>>>>> Fork latency per gigabyte: 0.004764 seconds
>>>>>> Fork latency per gigabyte: 0.004547 seconds
>>>>>>
>>>>>> AMD EPYC 7B12 64-Core Processor
>>>>>> Base:
>>>>>> Fork latency per gigabyte: 0.003923 seconds
>>>>>> Fork latency per gigabyte: 0.003909 seconds
>>>>>> Fork latency per gigabyte: 0.003955 seconds
>>>>>> COW kernel:
>>>>>> Fork latency per gigabyte: 0.004221 seconds
>>>>>> Fork latency per gigabyte: 0.003882 seconds
>>>>>> Fork latency per gigabyte: 0.003854 seconds
>>>>>>
>>>>>> Given that the page table for the child is not copied, I was
>>>>>> expecting the performance to be better with the COW kernel, and also
>>>>>> not to depend on the size of the parent.
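
For reference, a minimal sketch of such a measurement (not necessarily the
program used for the numbers above; it assumes anonymous memory that is
touched before fork() so the parent actually has populated PTEs) could look
like:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
        size_t size = 1UL << 30;        /* 1 GiB of anonymous memory */
        char *mem = mmap(NULL, size, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        struct timespec t0, t1;
        pid_t pid;

        if (mem == MAP_FAILED)
                return 1;
        memset(mem, 1, size);           /* populate every PTE in the parent */

        clock_gettime(CLOCK_MONOTONIC, &t0);
        pid = fork();
        if (pid == 0)
                _exit(0);               /* child exits right away */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        waitpid(pid, NULL, 0);

        printf("Fork latency per gigabyte: %f seconds\n",
               (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
        return 0;
}
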
>>>>>
>>>>> Yes, the child won't duplicate the page table, but fork() will still
>>>>> traverse all the page table entries to do the accounting.
>>>>> And, since this patch extends COW to the PTE table level, it is no
>>>>> longer done at the granularity of a single mapped page (page table
>>>>> entry), so we have to guarantee that every mapped page in such a page
>>>>> table is eligible for COW mapping.
>>>>> This kind of checking also costs some time.
>>>>> As a result, because of the accounting and the checking, a COW PTE
>>>>> fork() still depends on the size of the parent, so the improvement
>>>>> might not be significant.
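
Back-of-the-envelope, assuming 4K pages: 1 GB of mapped memory means
1 GB / 4 KB = 262,144 PTEs spread over 512 PTE tables, so even a fork()
that only shares the 512 tables still walks a quarter of a million entries
for the accounting/checking -- which is why it remains O(N) in the mapped
size.
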
>>>>
>>>> The current version of the series does not provide any performance
>>>> improvements for fork(). I would recommend removing claims from the
>>>> cover letter about better fork() performance, as this may be
>>>> misleading for those looking for a way to speed up forking. In my
>>>
>>> From v3 to v4, I changed the implementation of the COW fork() part to do
>>> the accounting and checking. At the same time, I also removed most of the
>>> descriptions about better fork() performance. Maybe that's not enough and
>>> it is still somewhat misleading. I will fix this in the next version.
>>> Thanks.
>>>
>>>> case, I was looking to speed up Redis OSS, which relies on fork() to
>>>> create consistent snapshots for driving replicas/backups. The O(N)
>>>> per-page operation causes fork() to be slow, so I was hoping that this
>>>> series, which does not duplicate the VA during fork(), would make the
>>>> operation much quicker.
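
As a rough illustration of that usage pattern (a generic sketch, not actual
Redis code): the parent keeps serving and mutating its data set while the
child writes out the copy-on-write-frozen view it inherited at fork() time:

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static char dataset[1 << 20];           /* stand-in for the real data set */

static void write_snapshot(const char *path)
{
        FILE *f = fopen(path, "w");

        if (!f)
                _exit(1);
        fwrite(dataset, 1, sizeof(dataset), f); /* the frozen CoW view */
        fclose(f);
        _exit(0);
}

int main(void)
{
        pid_t pid = fork();             /* no data is copied up front */

        if (pid == 0)
                write_snapshot("snapshot.bin");

        /* Parent keeps serving; its writes only touch its own copies. */
        dataset[0] = 42;
        waitpid(pid, NULL, 0);
        return 0;
}
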
>>>
>>> Indeed, at first, I tried to avoid the O(N) per-page operation by
>>> deferring the accounting and the swap handling to the page fault. But,
>>> as I mentioned, that is not suitable for mainline.
>>>
>>> Honestly, for improving fork(), I have an idea for skipping the per-page
>>> operation without breaking the logic. However, it would introduce a
>>> complicated mechanism and may add overhead for other features, so it
>>> might not be worth it. It's hard to strike a balance between an
>>> over-complicated mechanism with (probably) better performance and
>>> keeping the data consistent with the page status. So, I would focus on
>>> the safe and stable approach first.
>>
>> Yes, it is most probably possible, but complexity, robustness and
>> maintainability have to be considered as well.
>>
>> Thanks for implementing this approach (only deduplication without other
>> optimizations) and evaluating it accordingly. It's certainly "cleaner",
>> such that we only have to mess with unsharing and not with other
>> accounting/pinning/mapcount thingies. But it also highlights how
>> intrusive even this basic deduplication approach already is -- and that
>> most benefits of the original approach require even more complexity on top.
>>
>> I am not quite sure if the benefit is worth the price (I am not the one
>> to decide, and I would like to hear other opinions).
>>
>> My quick thoughts after skimming over the core parts of this series:
>>
>> (1) Forgetting to break COW on a PTE table in some pgtable walker feels
>>       quite likely (meaning that it might be fairly error-prone), and
>>       whoever forgets to break COW on a PTE table ends up accidentally
>>       modifying the shared table.
>> (2) break_cow_pte() can fail, which means that we can now fail some
>>       operations (possibly silently halfway through). For example,
>>       looking at your change_pte_range() change, I suspect it's wrong.
>> (3) handle_cow_pte_fault() looks quite complicated and needs quite some
>>       double-checking: we temporarily clear the PMD, to reset it
>>       afterwards. I am not sure if that is correct. For example, what
>>       stops another page fault from stumbling over that pmd_none() and
>>       allocating an empty page table? Maybe there are some locking
>>       details missing, or they are so subtle that we'd better document
>>       them. I recall that THP played quite some tricks to make such
>>       cases work ...
>>
>>>
>>>>> Actually, in RFC v1 and v2, we proposed a version that skipped that
>>>>> work, and we got a significant improvement. You can see the numbers
>>>>> in the RFC v2 cover letter [1]:
>>>>> "In short, with 512 MB mapped memory, COW PTE decreases latency by 93%
>>>>> for normal fork"
>>>>
>>>> I suspect the 93% improvement (when the mapcount was not updated) was
>>>> only for VAs with 4K pages. With 2M mappings this series did not
>>>> provide any benefit, is this correct?
>>>
>>> Yes. In this case, the COW PTE performance is similar to the normal
>>> fork().
>>
>>
>> The thing with THP is that during fork(), we always allocate a backup
>> PTE table, to be able to PTE-map the THP whenever we have to. Otherwise
>> we'd eventually have to fail some operations we don't want to fail --
>> similar to the case where break_cow_pte() can now fail due to -ENOMEM
>> although we really don't want to fail (e.g., change_pte_range()).
>>
>> I always considered that wasteful, because in many scenarios we'll
>> never ever split the THP, so the backup table is just wasted memory.
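
To put a number on the waste, taking the above at face value: every 2 MB
THP gets a 4 KB backup PTE table, so a 1 GB THP-backed range costs
512 * 4 KB = 2 MB of extra page-table memory per fork() -- about 0.2% of
the mapped size -- even if none of those THPs ever has to be PTE-mapped.
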
> 
> When you say "split THP", do you mean splitting the compound page into
> base pages? IIUC the backup PTE table page is used to guarantee that the
> PMD split (just converting a PMD-mapped THP to PTE-mapped without
> splitting the compound page) succeeds. You may have already noticed
> there is no return value for the PMD split.

Yes, as I raised in my other reply.

> 
> The PMD split may be called quite often, for example from MADV_DONTNEED,
> mbind, mlock, and even in the memory reclamation context (THP swap).

Yes, but with a single MADV_DONTNEED call you cannot PTE-map more than 2 
THPs (all other overlapping THPs will get zapped). The same holds for most 
other operations.
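
A minimal sketch of what I mean (assuming the mapping is actually backed by
THPs): discarding an unaligned range in the middle can only leave the two
THPs straddling the edges of the hole PTE-mapped; everything fully inside
the hole is simply zapped:

#include <string.h>
#include <sys/mman.h>

int main(void)
{
        size_t sz = 64UL << 20;         /* 64 MiB anonymous mapping */
        char *p = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED)
                return 1;
        madvise(p, sz, MADV_HUGEPAGE);  /* ask for THPs */
        memset(p, 1, sz);               /* fault them in */

        /*
         * Discard everything except the first and last page: at most the
         * THP containing the start and the THP containing the end of the
         * range need to be PTE-mapped; the THPs fully covered by the range
         * are zapped as a whole.
         */
        madvise(p + 4096, sz - 2 * 4096, MADV_DONTNEED);
        return 0;
}
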

There are corner cases, though. I recall that s390x/kvm wants to break 
all THP in a given VMA range. But that operation could safely fail if we 
can't do that.

Certainly needs some investigation; that's most probably why it hasn't 
been done yet.

> 
>>
>> Optimizing that for THP (e.g., don't always allocate a backup PTE table;
>> have some global backup pool of tables for splits and refill it when it
>> gets close to empty) might provide similar fork() improvements, both in
>> speed and memory consumption, when it comes to anonymous memory.
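
Purely to illustrate the shape of that idea (a generic userspace sketch,
nothing kernel-specific; all names are made up): keep a small pool of
pre-allocated tables, hand one out on split, and refill once the pool
drops below a watermark:

#include <stdio.h>
#include <stdlib.h>

#define POOL_MAX        16
#define POOL_LOW        4
#define TABLE_SIZE      4096

static void *pool[POOL_MAX];
static int pool_count;

static void pool_refill(void)
{
        while (pool_count < POOL_MAX)
                pool[pool_count++] = calloc(1, TABLE_SIZE);
}

/* Hand out one backup table; kick off a refill when running low. */
static void *pool_take(void)
{
        if (pool_count == 0)
                return NULL;            /* caller must be able to fail/retry */
        if (pool_count <= POOL_LOW)
                pool_refill();          /* in-kernel this would be deferred */
        return pool[--pool_count];
}

int main(void)
{
        void *table;

        pool_refill();
        table = pool_take();
        printf("got table at %p, %d tables left in the pool\n",
               table, pool_count);
        return 0;
}
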
> 
> It might work, but it may be much more complicated than you think when
> handling multiple parallel PMD splits.


I consider the whole linking of PTE tables to THPs complicated enough to 
eventually replace it with something differently complicated that wastes 
less memory ;)

-- 
Thanks,

David / dhildenb

