From: Arun KS <arunks@codeaurora.org>
Subject: [PATCH v1 4/4] mm: Remove managed_page_count spinlock
Date: Fri, 26 Oct 2018 16:31:02 +0530
Message-Id: <1540551662-26458-5-git-send-email-arunks@codeaurora.org>
In-Reply-To: <1540551662-26458-1-git-send-email-arunks@codeaurora.org>
References: <1540551662-26458-1-git-send-email-arunks@codeaurora.org>
Cc: keescook@chromium.org, khlebnikov@yandex-team.ru, minchan@kernel.org,
    getarunks@gmail.com, gregkh@linuxfoundation.org,
    akpm@linux-foundation.org, mhocko@kernel.org, vbabka@suse.cz,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    Arun KS <arunks@codeaurora.org>

totalram_pages and managed_pages are now atomic variables, so the
managed_page_count spinlock is no longer needed.

Signed-off-by: Arun KS <arunks@codeaurora.org>
---
 include/linux/mmzone.h | 6 ------
 mm/page_alloc.c        | 5 -----
 2 files changed, 11 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 597b0c7..aa960f6 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -428,12 +428,6 @@ struct zone {
	 * Write access to present_pages at runtime should be protected by
	 * mem_hotplug_begin/end(). Any reader who can't tolerant drift of
	 * present_pages should get_online_mems() to get a stable value.
-	 *
-	 * Read access to managed_pages should be safe because it's unsigned
-	 * long. Write access to zone->managed_pages and totalram_pages are
-	 * protected by managed_page_count_lock at runtime. Idealy only
-	 * adjust_managed_page_count() should be used instead of directly
-	 * touching zone->managed_pages and totalram_pages.
	 */
	atomic_long_t managed_pages;
	unsigned long spanned_pages;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index af832de..e29e78f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -122,9 +122,6 @@
 };
 EXPORT_SYMBOL(node_states);

-/* Protect totalram_pages and zone->managed_pages */
-static DEFINE_SPINLOCK(managed_page_count_lock);
-
 atomic_long_t _totalram_pages __read_mostly;
 unsigned long totalreserve_pages __read_mostly;
 unsigned long totalcma_pages __read_mostly;
@@ -7062,14 +7059,12 @@ static int __init cmdline_parse_movablecore(char *p)

 void adjust_managed_page_count(struct page *page, long count)
 {
-	spin_lock(&managed_page_count_lock);
	atomic_long_add(count, &page_zone(page)->managed_pages);
	totalram_pages_add(count);
 #ifdef CONFIG_HIGHMEM
	if (PageHighMem(page))
		totalhigh_pages_add(count);
 #endif
-	spin_unlock(&managed_page_count_lock);
 }
 EXPORT_SYMBOL(adjust_managed_page_count);
--
1.9.1
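
As background, the change above reduces to a standard pattern: a counter
that used to be updated under a spinlock becomes a lock-free atomic, and
the lock turns redundant because the read-modify-write is now a single
atomic operation. The following is a minimal userspace sketch of that
pattern using C11 atomics rather than the kernel's atomic_long_t API;
'managed' and adjust_count() are hypothetical stand-ins for
zone->managed_pages and adjust_managed_page_count(), not kernel code.

#include <stdatomic.h>
#include <stdio.h>

/*
 * Userspace analogue of zone->managed_pages after the conversion:
 * the counter itself is atomic, so no external lock protects it.
 */
static atomic_long managed = 0;

/*
 * Analogue of adjust_managed_page_count() after this patch: the
 * add is a single atomic read-modify-write, so the spinlock that
 * previously serialized writers can simply be dropped.
 */
static void adjust_count(long delta)
{
	atomic_fetch_add(&managed, delta);
}

int main(void)
{
	adjust_count(128);	/* e.g. pages onlined */
	adjust_count(-32);	/* e.g. pages reserved */
	printf("managed = %ld\n", atomic_load(&managed));
	return 0;
}

Readers that can tolerate a slightly stale value just load the atomic
without taking any lock, which mirrors how managed_pages readers behave
after this conversion.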