linux-mm.kvack.org archive mirror
* mmap_sem bottleneck
@ 2016-10-17 12:33 Laurent Dufour
  2016-10-17 12:51 ` Peter Zijlstra
  2016-10-17 12:57 ` mmap_sem bottleneck Michal Hocko
  0 siblings, 2 replies; 21+ messages in thread
From: Laurent Dufour @ 2016-10-17 12:33 UTC (permalink / raw)
  To: Linux MM, Andi Kleen, Peter Zijlstra, Mel Gorman, Jan Kara,
	Michal Hocko, Davidlohr Bueso, Hugh Dickins, Andrew Morton,
	Al Viro, Paul E. McKenney, Aneesh Kumar K.V

Hi all,

I'm sorry to resurrect this topic, but with the increasing number of
CPUs, it is becoming more and more common for the mmap_sem to be a
bottleneck, especially between the page fault handlers and the memory
management calls made by other threads.

In the case I'm seeing, a lot of page faults occur while other threads
are manipulating the process memory layout through mmap/munmap.

There is no *real* conflict between these operations: the page faults
happen on different pages and areas than the ones addressed by the
mmap/munmap operations, so the threads are dealing with different parts
of the process's memory space. However, since both the page fault
handlers and the mmap/munmap operations grab the mmap_sem, page fault
handling is serialized against the mmap operations, which impacts
performance on large systems.

For the record, the page faults are taken while reading data from a file
system, and I/O is really impacted by this serialization when dealing
with a large number of parallel threads, in my case 192 threads (1 per
online CPU). But the source of the page faults doesn't really matter, I
guess.

I took some time trying to figure out how to get rid of this bottleneck,
but it is definitely too complex for me.
I read this mailing list's history and some LWN articles on the subject,
and my feeling is that there is no clear way to limit the impact of this
semaphore. The last discussion on this topic seems to have happened last
March during the LSF/MM summit (https://lwn.net/Articles/636334/), but
it doesn't appear to have led to major changes, or maybe I missed them.

I'm now seeing that this is a big undertaking, and that it would be hard
and potentially massively intrusive to get rid of this bottleneck, so
I'm wondering what the best approach here would be: RCU, range locks,
etc.

Does anyone have an idea?

Thanks,
Laurent.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org

^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2016-12-02 14:10 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-10-17 12:33 mmap_sem bottleneck Laurent Dufour
2016-10-17 12:51 ` Peter Zijlstra
2016-10-18 14:50   ` Laurent Dufour
2016-10-18 15:01     ` Kirill A. Shutemov
2016-10-18 15:02     ` Peter Zijlstra
2016-11-18 11:08       ` [RFC PATCH v2 0/7] Speculative page faults Laurent Dufour
2016-11-18 11:08         ` [RFC PATCH v2 1/7] mm: Dont assume page-table invariance during faults Laurent Dufour
2016-11-18 11:08         ` [RFC PATCH v2 2/7] mm: Prepare for FAULT_FLAG_SPECULATIVE Laurent Dufour
2016-11-18 11:08         ` [RFC PATCH v2 3/7] mm: Introduce pte_spinlock Laurent Dufour
2016-11-18 11:08         ` [RFC PATCH v2 4/7] mm: VMA sequence count Laurent Dufour
2016-11-18 11:08         ` [RFC PATCH v2 5/7] SRCU free VMAs Laurent Dufour
2016-11-18 11:08         ` [RFC PATCH v2 6/7] mm: Provide speculative fault infrastructure Laurent Dufour
2016-11-18 11:08         ` [RFC PATCH v2 7/7] mm,x86: Add speculative pagefault handling Laurent Dufour
2016-11-18 14:08         ` [RFC PATCH v2 0/7] Speculative page faults Andi Kleen
2016-12-01  8:34           ` Laurent Dufour
2016-12-01 12:50             ` Balbir Singh
2016-12-01 13:26               ` Laurent Dufour
2016-12-02 14:10         ` Michal Hocko
2016-10-17 12:57 ` mmap_sem bottleneck Michal Hocko
2016-10-20  7:23   ` Laurent Dufour
2016-10-20 10:55     ` Michal Hocko
