linux-mm.kvack.org archive mirror
From: Laurent Dufour <ldufour@linux.vnet.ibm.com>
To: Michal Hocko <mhocko@suse.cz>
Cc: Linux MM <linux-mm@kvack.org>, Andi Kleen <ak@linux.intel.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Mel Gorman <mgorman@techsingularity.net>, Jan Kara <jack@suse.cz>,
	Davidlohr Bueso <dbueso@suse.de>, Hugh Dickins <hughd@google.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Al Viro <viro@ZenIV.linux.org.uk>,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	"Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>,
	"Kirill A. Shutemov" <kirill@shutemov.name>
Subject: Re: mmap_sem bottleneck
Date: Thu, 20 Oct 2016 09:23:37 +0200	[thread overview]
Message-ID: <e1e865c5-51ab-fce1-0958-b5c668da4dac@linux.vnet.ibm.com> (raw)
In-Reply-To: <20161017125717.GK23322@dhcp22.suse.cz>

On 17/10/2016 14:57, Michal Hocko wrote:
> On Mon 17-10-16 14:33:53, Laurent Dufour wrote:
>> Hi all,
>>
>> I'm sorry to resurrect this topic, but with the increasing number of
>> CPUs, it is becoming more frequent that the mmap_sem is a bottleneck,
>> especially between the page fault handling and the memory management
>> calls made by other threads.
>>
>> In the case I'm seeing, there are a lot of page faults occurring while
>> other threads are trying to manipulate the process memory layout
>> through mmap/munmap.
>>
>> There is no *real* conflict between these operations: the page faults
>> happen on different pages and areas than the ones addressed by the
>> mmap/munmap operations, so the threads are dealing with different parts
>> of the process's memory space. However, since both the page fault
>> handlers and the mmap/munmap operations grab the mmap_sem, the page
>> fault handling is serialized with the mmap operations, which impacts
>> performance on large systems.
> 
> Could you quantify how much overhead we are talking about here?

I recorded perf data using a sampler which recreates the bottleneck
issue by simulating the database initialization process: it spawns one
thread per CPU, each in charge of allocating a piece of memory and
requesting a disk read into it.
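For reference, a minimal user-space sketch of such a reproducer could
look like the following. This is an illustration only, not the actual
sampler I used; the file path, chunk size and loop count are arbitrary.
Each thread maps a region (mmap_sem taken for write) and reads file data
into it, faulting in the pages (mmap_sem taken for read), so the threads
contend on the semaphore even though they touch disjoint areas.

/* build with: cc -O2 -pthread repro.c -o repro */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/mman.h>

#define CHUNK (64UL << 20)      /* 64 MB per thread, arbitrary */
#define LOOPS 32                /* iterations per thread, arbitrary */

static const char *path;        /* file to read from */

static void *worker(void *arg)
{
	int fd = open(path, O_RDONLY);

	if (fd < 0)
		return NULL;

	for (int i = 0; i < LOOPS; i++) {
		/* mmap()/munmap() take mmap_sem for write */
		char *buf = mmap(NULL, CHUNK, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (buf == MAP_FAILED)
			break;
		/* the read faults in the destination pages: mmap_sem for read */
		if (pread(fd, buf, CHUNK, 0) < 0)
			perror("pread");
		munmap(buf, CHUNK);
	}
	close(fd);
	return NULL;
}

int main(int argc, char **argv)
{
	long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
	pthread_t *tids = calloc(ncpus, sizeof(*tids));

	path = argc > 1 ? argv[1] : "/dev/zero";

	for (long i = 0; i < ncpus; i++)
		pthread_create(&tids[i], NULL, worker, NULL);
	for (long i = 0; i < ncpus; i++)
		pthread_join(tids[i], NULL);

	free(tids);
	return 0;
}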

The perf data shows that 23% of the time is spent waiting for the
mm semaphore in do_page_fault(). This was recorded using a 4.8-rc8
kernel on the ppc64le architecture.

>> For the record, the page faults are done while reading data from a file
>> system, and the I/O is really impacted by this serialization when
>> dealing with a large number of parallel threads, in my case 192 threads
>> (1 per online CPU). But the source of the page faults doesn't really
>> matter, I guess.
> 
> But we are dropping the mmap_sem for the IO and retrying the page fault.
> I am not sure I understood you correctly here though.
> 
>> I took time trying to figure out how to get rid of this bottleneck, but
>> this is definitely too complex for me.
>> I read this mailing list's history, and some LWN articles about it, and
>> my feeling is that there is no clear way to limit the impact of this
>> semaphore. The last discussion on this topic seems to have happened last
>> March during the LSF/MM summit (https://lwn.net/Articles/636334/), but
>> this doesn't seem to have led to major changes, or maybe I missed them.
> 
> At least mmap/munmap write lock contention could be reduced by the above
> proposed range locking. Jan Kara has implemented a prototype [1] of the
> lock for the mapping (which could be used for mmap_sem as well) but it
> had some performance implications AFAIR. There wasn't a strong use case
> for this so far. If there is one, please describe it and we can think
> what to do about it.

When recreating the issue with a sampler there is no file system I/O in
the picture, just pure mmap/memcpy and a lot of threads (I need about
192 CPUs to recreate it).
But there is a real use case beyond that. The SAP HANA database uses
all the available CPUs to read the database from disk when starting.
When run on top of flash storage with a large number of CPUs (>192),
we hit the mm semaphore bottleneck, which impacts the loading
performance by serializing the memory management.
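To make the serialization concrete: the fault path takes the mmap_sem
for read while mmap/munmap take it for write, so a single writer stalls
every fault in the process, even on unrelated areas. Very roughly (a
heavily simplified and abridged sketch, not the exact 4.8 code), the two
sides look like this:

	/* page fault path (arch fault handler), heavily simplified */
	down_read(&mm->mmap_sem);
	vma = find_vma(mm, address);
	fault = handle_mm_fault(vma, address, flags);
	up_read(&mm->mmap_sem);

	/* mmap path (vm_mmap_pgoff() and friends), heavily simplified */
	down_write(&mm->mmap_sem);
	addr = do_mmap_pgoff(file, addr, len, prot, flags, pgoff, &populate);
	up_write(&mm->mmap_sem);

Since mmap_sem is a single rwsem per mm_struct, any writer excludes all
concurrent readers, which is exactly the page-fault vs mmap/munmap
serialization described above.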

I think there is room for enhancements in the user space part (the
database loader), but the mm semaphore is still a bottleneck when a
massively multi-threaded process is dealing with its memory while page
faulting on it.
Unfortunately, recreating such an issue requires a big system, which
makes it harder to track and investigate.


> There were also some attempts to replace mmap_sem by RCU AFAIR but my
> vague recollection is that they had some issues as well.
> 
> [1] http://linux-kernel.2935.n7.nabble.com/PATCH-0-6-RFC-Mapping-range-lock-td592872.html

I took a look at this series, which is very interesting, but it is
quite old now, and I'm wondering whether it is still applicable.

Cheers,
Laurent.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>


Thread overview: 21+ messages
2016-10-17 12:33 mmap_sem bottleneck Laurent Dufour
2016-10-17 12:51 ` Peter Zijlstra
2016-10-18 14:50   ` Laurent Dufour
2016-10-18 15:01     ` Kirill A. Shutemov
2016-10-18 15:02     ` Peter Zijlstra
2016-11-18 11:08       ` [RFC PATCH v2 0/7] Speculative page faults Laurent Dufour
2016-11-18 11:08         ` [RFC PATCH v2 1/7] mm: Dont assume page-table invariance during faults Laurent Dufour
2016-11-18 11:08         ` [RFC PATCH v2 2/7] mm: Prepare for FAULT_FLAG_SPECULATIVE Laurent Dufour
2016-11-18 11:08         ` [RFC PATCH v2 3/7] mm: Introduce pte_spinlock Laurent Dufour
2016-11-18 11:08         ` [RFC PATCH v2 4/7] mm: VMA sequence count Laurent Dufour
2016-11-18 11:08         ` [RFC PATCH v2 5/7] SRCU free VMAs Laurent Dufour
2016-11-18 11:08         ` [RFC PATCH v2 6/7] mm: Provide speculative fault infrastructure Laurent Dufour
2016-11-18 11:08         ` [RFC PATCH v2 7/7] mm,x86: Add speculative pagefault handling Laurent Dufour
2016-11-18 14:08         ` [RFC PATCH v2 0/7] Speculative page faults Andi Kleen
2016-12-01  8:34           ` Laurent Dufour
2016-12-01 12:50             ` Balbir Singh
2016-12-01 13:26               ` Laurent Dufour
2016-12-02 14:10         ` Michal Hocko
2016-10-17 12:57 ` mmap_sem bottleneck Michal Hocko
2016-10-20  7:23   ` Laurent Dufour [this message]
2016-10-20 10:55     ` Michal Hocko
