From: Balbir Singh <bsingharora@gmail.com>
To: Mel Gorman <mgorman@techsingularity.net>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Linux-MM <linux-mm@kvack.org>, Rik van Riel <riel@surriel.com>,
Vlastimil Babka <vbabka@suse.cz>,
Johannes Weiner <hannes@cmpxchg.org>,
Minchan Kim <minchan@kernel.org>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 02/34] mm, vmscan: move lru_lock to the node
Date: Tue, 12 Jul 2016 21:06:04 +1000
Message-ID: <20160712110604.GA5981@350D>
In-Reply-To: <1467970510-21195-3-git-send-email-mgorman@techsingularity.net>

On Fri, Jul 08, 2016 at 10:34:38AM +0100, Mel Gorman wrote:
> Node-based reclaim requires node-based LRUs and locking. This is a
> preparation patch that just moves the lru_lock to the node so later
> patches are easier to review. It is a mechanical change but note this
> patch makes contention worse because the LRU lock is hotter and direct
> reclaim and kswapd can contend on the same lock even when reclaiming from
> different zones.
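Just to check my reading of the mechanical part: callers that used to take
the per-zone lock now all funnel into the single per-node lock via the new
accessor, something like the sketch below (illustrative only, not one of the
hunks quoted here, and example_isolate() is a made-up caller):

	/* Hypothetical caller: pull one page off a zone's LRU. */
	static void example_isolate(struct zone *zone, struct page *page)
	{
		/* Previously: spin_lock_irq(&zone->lru_lock); */

		/* Now the lock lives in the zone's pglist_data, so kswapd
		 * and direct reclaim hitting different zones of the same
		 * node serialise on the one lock, hence the extra
		 * contention mentioned above. */
		spin_lock_irq(zone_lru_lock(zone));
		list_del(&page->lru);
		spin_unlock_irq(zone_lru_lock(zone));
	}

Is that the right mental model?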
>
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
> Acked-by: Johannes Weiner <hannes@cmpxchg.org>
> Acked-by: Vlastimil Babka <vbabka@suse.cz>
> Reviewed-by: Minchan Kim <minchan@kernel.org>
> ---
> Documentation/cgroup-v1/memcg_test.txt | 4 +--
> Documentation/cgroup-v1/memory.txt | 4 +--
> include/linux/mm_types.h | 2 +-
> include/linux/mmzone.h | 10 +++++--
> mm/compaction.c | 10 +++----
> mm/filemap.c | 4 +--
> mm/huge_memory.c | 6 ++---
> mm/memcontrol.c | 6 ++---
> mm/mlock.c | 10 +++----
> mm/page_alloc.c | 4 +--
> mm/page_idle.c | 4 +--
> mm/rmap.c | 2 +-
> mm/swap.c | 30 ++++++++++-----------
> mm/vmscan.c | 48 +++++++++++++++++-----------------
> 14 files changed, 75 insertions(+), 69 deletions(-)
>
> diff --git a/Documentation/cgroup-v1/memcg_test.txt b/Documentation/cgroup-v1/memcg_test.txt
> index 8870b0212150..78a8c2963b38 100644
> --- a/Documentation/cgroup-v1/memcg_test.txt
> +++ b/Documentation/cgroup-v1/memcg_test.txt
> @@ -107,9 +107,9 @@ Under below explanation, we assume CONFIG_MEM_RES_CTRL_SWAP=y.
>
> 8. LRU
> Each memcg has its own private LRU. Now, its handling is under global
> - VM's control (means that it's handled under global zone->lru_lock).
> + VM's control (means that it's handled under global zone_lru_lock).
> Almost all routines around memcg's LRU is called by global LRU's
> - list management functions under zone->lru_lock().
> + list management functions under zone_lru_lock().
>
> A special function is mem_cgroup_isolate_pages(). This scans
> memcg's private LRU and call __isolate_lru_page() to extract a page
> diff --git a/Documentation/cgroup-v1/memory.txt b/Documentation/cgroup-v1/memory.txt
> index b14abf217239..946e69103cdd 100644
> --- a/Documentation/cgroup-v1/memory.txt
> +++ b/Documentation/cgroup-v1/memory.txt
> @@ -267,11 +267,11 @@ When oom event notifier is registered, event will be delivered.
> Other lock order is following:
> PG_locked.
> mm->page_table_lock
> - zone->lru_lock
> + zone_lru_lock
zone_lru_lock is a little confusing; since the lock now lives in the node
rather than the zone, can't we just call it node_lru_lock?
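If I'm reading the rest of the patch correctly, the accessor takes a zone
but hands back the node's lock, roughly (sketch from memory, the mmzone.h
hunk isn't quoted here):

	static inline spinlock_t *zone_lru_lock(struct zone *zone)
	{
		/* Every zone of a node returns the same per-node lock. */
		return &zone->zone_pgdat->lru_lock;
	}

Since all zones of a node map to the one lock, a node_ prefix would
describe what callers actually serialise on.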
> lock_page_cgroup.
> In many cases, just lock_page_cgroup() is called.
> per-zone-per-cgroup LRU (cgroup's private LRU) is just guarded by
> - zone->lru_lock, it has no lock of its own.
> + zone_lru_lock, it has no lock of its own.
>
> 2.7 Kernel Memory Extension (CONFIG_MEMCG_KMEM)
>
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index e093e1d3285b..ca2ed9a6c8d8 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -118,7 +118,7 @@ struct page {
> */
> union {
> struct list_head lru; /* Pageout list, eg. active_list
> - * protected by zone->lru_lock !
> + * protected by zone_lru_lock !
> * Can be used as a generic list
> * by the page owner.
> */
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 078ecb81e209..cfa870107abe 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -93,7 +93,7 @@ struct free_area {
> struct pglist_data;
>
> /*
> - * zone->lock and zone->lru_lock are two of the hottest locks in the kernel.
> + * zone->lock and the zone lru_lock are two of the hottest locks in the kernel.
> * So add a wild amount of padding here to ensure that they fall into separate
> * cachelines. There are very few zone structures in the machine, so space
> * consumption is not a concern here.
> @@ -496,7 +496,6 @@ struct zone {
> /* Write-intensive fields used by page reclaim */
>
> /* Fields commonly accessed by the page reclaim scanner */
> - spinlock_t lru_lock;
> struct lruvec lruvec;
>
> /*
> @@ -690,6 +689,9 @@ typedef struct pglist_data {
> /* Number of pages migrated during the rate limiting time interval */
> unsigned long numabalancing_migrate_nr_pages;
> #endif
> + /* Write-intensive fields used by page reclaim */
> + ZONE_PADDING(_pad1_)
I thought this padding was there to keep zone->lock and zone->lru_lock on
different cachelines within struct zone. Now that lru_lock moves to
pglist_data, do we still need the padding here?
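For reference, ZONE_PADDING is (if I remember the definition right) just a
zero-size, cacheline-aligned member that pushes whatever follows it onto a
fresh cacheline on SMP:

	/* include/linux/mmzone.h, SMP case, as I recall it */
	struct zone_padding {
		char x[0];
	} ____cacheline_internodealigned_in_smp;
	#define ZONE_PADDING(name)	struct zone_padding name;

So it only buys anything if the fields sitting just above it in pglist_data
are hot enough to be worth keeping off lru_lock's cacheline.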
> + spinlock_t lru_lock;
>
Balbir Singh.