From mboxrd@z Thu Jan  1 00:00:00 1970
From: Leonardo Bras
Subject: [PATCH v5 03/11] mm/gup: Applies counting method to monitor gup_pgd_range
Date: Wed,  2 Oct 2019 22:33:17 -0300
Message-ID: <20191003013325.2614-4-leonardo@linux.ibm.com>
References: <20191003013325.2614-1-leonardo@linux.ibm.com>
In-Reply-To: <20191003013325.2614-1-leonardo@linux.ibm.com>
Mime-Version: 1.0
Content-Transfer-Encoding: 8bit
To: linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org,
	kvm-ppc@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org
Cc: Song Liu, Michal Hocko, "Peter Zijlstra (Intel)", "Dmitry V. Levin",
	Keith Busch, Paul Mackerras, Christoph Lameter, Ira Weiny,
	Thomas Gleixner, Elena Reshetova, Andrea Arcangeli, Santosh Sivaraj,
	Davidlohr Bueso, "Aneesh Kumar K.V", Bartlomiej Zolnierkiewicz,
	Mike Rapoport, Jason Gunthorpe, Allison Randal, Mahesh Salgaonkar,
	Leonardo Bras, Alexey Dobriyan, Ingo Molnar, Ralph Campbell
List-Id: linux-arch.vger.kernel.org

As described earlier in this series, gup_pgd_range() is a lockless
pagetable walk, so in order to monitor it against THP split/collapse
with the counting method it needs to be surrounded by
{begin,end}_lockless_pgtbl_walk().

local_irq_{save,restore} is already called inside
{begin,end}_lockless_pgtbl_walk, so there is no need to repeat it here.

Signed-off-by: Leonardo Bras
---
 mm/gup.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 23a9f9c9d377..52e53b4f39d8 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2319,7 +2319,7 @@ int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
 			  struct page **pages)
 {
 	unsigned long len, end;
-	unsigned long flags;
+	unsigned long irq_mask;
 	int nr = 0;
 
 	start = untagged_addr(start) & PAGE_MASK;
@@ -2345,9 +2345,9 @@ int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
 
 	if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) &&
 	    gup_fast_permitted(start, end)) {
-		local_irq_save(flags);
+		irq_mask = begin_lockless_pgtbl_walk(current->mm);
 		gup_pgd_range(start, end, write ? FOLL_WRITE : 0, pages, &nr);
-		local_irq_restore(flags);
+		end_lockless_pgtbl_walk(current->mm, irq_mask);
 	}
 
 	return nr;
@@ -2414,9 +2414,9 @@ int get_user_pages_fast(unsigned long start, int nr_pages,
 
 	if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) &&
 	    gup_fast_permitted(start, end)) {
-		local_irq_disable();
+		begin_lockless_pgtbl_walk(current->mm);
 		gup_pgd_range(addr, end, gup_flags, pages, &nr);
-		local_irq_enable();
+		end_lockless_pgtbl_walk(current->mm, IRQS_ENABLED);
 		ret = nr;
 	}
 
-- 
2.20.1
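
For readers who want a concrete picture of the counting method used
above, here is a minimal sketch of what the two helpers could look
like. It is only an illustration, not the implementation introduced by
this series: the atomic_t field lockless_pgtbl_walkers in struct
mm_struct is a name made up for this example, and the real helpers and
counter are defined by earlier patches in the series and may be
arch-specific.

#include <linux/atomic.h>
#include <linux/irqflags.h>
#include <linux/mm_types.h>

/*
 * Illustrative sketch only -- not this series' actual implementation.
 * Assumes a hypothetical atomic_t counter, lockless_pgtbl_walkers,
 * added to struct mm_struct for the purpose of this example.
 */
static inline unsigned long begin_lockless_pgtbl_walk(struct mm_struct *mm)
{
	unsigned long irq_mask;

	/* Make this walker visible to THP split/collapse serialization. */
	atomic_inc(&mm->lockless_pgtbl_walkers);
	smp_mb__after_atomic();

	/*
	 * Keep disabling interrupts, as the gup_pgd_range() callers did
	 * before this patch, so a racing page table free / THP collapse
	 * can still serialize against this walker with an IPI.
	 */
	local_irq_save(irq_mask);
	return irq_mask;
}

static inline void end_lockless_pgtbl_walk(struct mm_struct *mm,
					   unsigned long irq_mask)
{
	/* Restore the interrupt state saved by begin_lockless_pgtbl_walk(). */
	local_irq_restore(irq_mask);

	smp_mb__before_atomic();
	/* Walk finished: drop this walker from the count. */
	atomic_dec(&mm->lockless_pgtbl_walkers);
}

Note that the second hunk above passes IRQS_ENABLED instead of a saved
mask: the code it replaces used local_irq_disable()/local_irq_enable(),
so the state to restore is known to be "interrupts enabled". IRQS_ENABLED
is a value defined elsewhere in the series, not by this sketch.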