Date: Fri, 15 Dec 2017 15:04:29 -0800
From: Andrew Morton
Subject: Re: [patch v2 1/2] mm, mmu_notifier: annotate mmu notifiers with
 blockable invalidate callbacks
Message-Id: <20171215150429.f68862867392337f35a49848@linux-foundation.org>
To: David Rientjes
Cc: Michal Hocko, Andrea Arcangeli, Benjamin Herrenschmidt,
 Paul Mackerras, Oded Gabbay, Alex Deucher, Christian König,
 David Airlie, Joerg Roedel, Doug Ledford, Jani Nikula,
 Mike Marciniszyn, Sean Hefty, Dimitri Sivanich, Boris Ostrovsky,
 Jérôme Glisse, Paolo Bonzini, Radim Krčmář,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org

On Thu, 14 Dec 2017 13:30:56 -0800 (PST) David Rientjes wrote:

> Commit 4d4bbd8526a8 ("mm, oom_reaper: skip mm structs with mmu notifiers")
> prevented the oom reaper from unmapping private anonymous memory when the
> oom victim's mm had mmu notifiers registered.
>
> The rationale is that doing mmu_notifier_invalidate_range_{start,end}()
> around the unmap_page_range(), which is needed, can block, and the oom
> killer will stall forever waiting for the victim to exit, which may not
> be possible without reaping.
> That concern is real, but only true for mmu notifiers that have blockable
> invalidate_range_{start,end}() callbacks.  This patch adds a "flags" field
> to mmu notifier ops that can set a bit to indicate that these callbacks do
> not block.
>
> The implementation is steered toward an expensive slowpath, such as after
> the oom reaper has grabbed mm->mmap_sem of a still alive oom victim.

some tweakage, please review.

From: Andrew Morton
Subject: mm-mmu_notifier-annotate-mmu-notifiers-with-blockable-invalidate-callbacks-fix

make mm_has_blockable_invalidate_notifiers() return bool, use
rwsem_is_locked()

Cc: Alex Deucher
Cc: Andrea Arcangeli
Cc: Benjamin Herrenschmidt
Cc: Boris Ostrovsky
Cc: Christian König
Cc: David Airlie
Cc: David Rientjes
Cc: Dimitri Sivanich
Cc: Doug Ledford
Cc: Jani Nikula
Cc: Jérôme Glisse
Cc: Joerg Roedel
Cc: Michal Hocko
Cc: Mike Marciniszyn
Cc: Oded Gabbay
Cc: Paolo Bonzini
Cc: Paul Mackerras
Cc: Radim Krčmář
Cc: Sean Hefty
Signed-off-by: Andrew Morton
---

 include/linux/mmu_notifier.h |    7 ++++---
 mm/mmu_notifier.c            |    8 ++++----
 2 files changed, 8 insertions(+), 7 deletions(-)

diff -puN include/linux/mmu_notifier.h~mm-mmu_notifier-annotate-mmu-notifiers-with-blockable-invalidate-callbacks-fix include/linux/mmu_notifier.h
--- a/include/linux/mmu_notifier.h~mm-mmu_notifier-annotate-mmu-notifiers-with-blockable-invalidate-callbacks-fix
+++ a/include/linux/mmu_notifier.h
@@ -2,6 +2,7 @@
 #ifndef _LINUX_MMU_NOTIFIER_H
 #define _LINUX_MMU_NOTIFIER_H
 
+#include
 #include
 #include
 #include
@@ -233,7 +234,7 @@ extern void __mmu_notifier_invalidate_ra
 		bool only_end);
 extern void __mmu_notifier_invalidate_range(struct mm_struct *mm,
 				  unsigned long start, unsigned long end);
-extern int mm_has_blockable_invalidate_notifiers(struct mm_struct *mm);
+extern bool mm_has_blockable_invalidate_notifiers(struct mm_struct *mm);
 
 static inline void mmu_notifier_release(struct mm_struct *mm)
 {
@@ -473,9 +474,9 @@ static inline void mmu_notifier_invalida
 {
 }
-static inline int mm_has_blockable_invalidate_notifiers(struct mm_struct *mm)
+static inline bool mm_has_blockable_invalidate_notifiers(struct mm_struct *mm)
 {
-	return 0;
+	return false;
 }
 
 static inline void mmu_notifier_mm_init(struct mm_struct *mm)
diff -puN mm/mmu_notifier.c~mm-mmu_notifier-annotate-mmu-notifiers-with-blockable-invalidate-callbacks-fix mm/mmu_notifier.c
--- a/mm/mmu_notifier.c~mm-mmu_notifier-annotate-mmu-notifiers-with-blockable-invalidate-callbacks-fix
+++ a/mm/mmu_notifier.c
@@ -240,13 +240,13 @@ EXPORT_SYMBOL_GPL(__mmu_notifier_invalid
  * Must be called while holding mm->mmap_sem for either read or write.
  * The result is guaranteed to be valid until mm->mmap_sem is dropped.
  */
-int mm_has_blockable_invalidate_notifiers(struct mm_struct *mm)
+bool mm_has_blockable_invalidate_notifiers(struct mm_struct *mm)
 {
 	struct mmu_notifier *mn;
 	int id;
-	int ret = 0;
+	bool ret = false;
 
-	WARN_ON_ONCE(down_write_trylock(&mm->mmap_sem));
+	WARN_ON_ONCE(!rwsem_is_locked(&mm->mmap_sem));
 
 	if (!mm_has_notifiers(mm))
 		return ret;
@@ -259,7 +259,7 @@ int mm_has_blockable_invalidate_notifier
 			continue;
 
 		if (!(mn->ops->flags & MMU_INVALIDATE_DOES_NOT_BLOCK)) {
-			ret = 1;
+			ret = true;
 			break;
 		}
 	}
_

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org