From: Minchan Kim <minchan@kernel.org>
To: Mel Gorman <mgorman@techsingularity.net>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Linux-MM <linux-mm@kvack.org>, Rik van Riel <riel@surriel.com>,
	Vlastimil Babka <vbabka@suse.cz>,
	Johannes Weiner <hannes@cmpxchg.org>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 02/31] mm, vmscan: move lru_lock to the node
Date: Tue, 5 Jul 2016 09:03:56 +0900
Message-ID: <20160705000356.GA28164@bbox>
In-Reply-To: <1467403299-25786-3-git-send-email-mgorman@techsingularity.net>

On Fri, Jul 01, 2016 at 09:01:10PM +0100, Mel Gorman wrote:
> Node-based reclaim requires node-based LRUs and locking.  This is a
> preparation patch that just moves the lru_lock to the node so later
> patches are easier to review.  It is a mechanical change but note this
> patch makes contention worse because the LRU lock is hotter and direct
> reclaim and kswapd can contend on the same lock even when reclaiming from
> different zones.
> 
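The documentation hunks below already use the new zone_lru_lock() name, but
its definition sits in a part of the diff that is not quoted here. As a rough
sketch only -- assuming nothing beyond what the mmzone.h hunk shows (the
spinlock moves into struct pglist_data, and each zone keeps its zone_pgdat
back-pointer) -- such a compatibility wrapper could look like:

	/* Sketch, not the actual patch hunk: map a zone to its node's
	 * LRU lock so zone-based callers stay mechanical to convert.
	 */
	static inline spinlock_t *zone_lru_lock(struct zone *zone)
	{
		return &zone->zone_pgdat->lru_lock;
	}

A former spin_lock_irq(&zone->lru_lock) then becomes
spin_lock_irq(zone_lru_lock(zone)), which is how the conversion can stay
mechanical even though the lock now covers the whole node.
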
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
> Acked-by: Johannes Weiner <hannes@cmpxchg.org>
> Acked-by: Vlastimil Babka <vbabka@suse.cz>
> ---
>  Documentation/cgroup-v1/memcg_test.txt |  4 +--
>  Documentation/cgroup-v1/memory.txt     |  4 +--
>  include/linux/mm_types.h               |  2 +-
>  include/linux/mmzone.h                 | 10 +++++--
>  mm/compaction.c                        | 10 +++----
>  mm/filemap.c                           |  4 +--
>  mm/huge_memory.c                       |  6 ++---
>  mm/memcontrol.c                        |  6 ++---
>  mm/mlock.c                             | 10 +++----
>  mm/page_alloc.c                        |  4 +--
>  mm/page_idle.c                         |  4 +--
>  mm/rmap.c                              |  2 +-
>  mm/swap.c                              | 30 ++++++++++-----------
>  mm/vmscan.c                            | 48 +++++++++++++++++-----------------
>  14 files changed, 75 insertions(+), 69 deletions(-)
> 
> diff --git a/Documentation/cgroup-v1/memcg_test.txt b/Documentation/cgroup-v1/memcg_test.txt
> index 8870b0212150..78a8c2963b38 100644
> --- a/Documentation/cgroup-v1/memcg_test.txt
> +++ b/Documentation/cgroup-v1/memcg_test.txt
> @@ -107,9 +107,9 @@ Under below explanation, we assume CONFIG_MEM_RES_CTRL_SWAP=y.
>  
>  8. LRU
>          Each memcg has its own private LRU. Now, its handling is under global
> -	VM's control (means that it's handled under global zone->lru_lock).
> +	VM's control (means that it's handled under global zone_lru_lock).
>  	Almost all routines around memcg's LRU is called by global LRU's
> -	list management functions under zone->lru_lock().
> +	list management functions under zone_lru_lock().
>  
>  	A special function is mem_cgroup_isolate_pages(). This scans
>  	memcg's private LRU and call __isolate_lru_page() to extract a page
> diff --git a/Documentation/cgroup-v1/memory.txt b/Documentation/cgroup-v1/memory.txt
> index b14abf217239..946e69103cdd 100644
> --- a/Documentation/cgroup-v1/memory.txt
> +++ b/Documentation/cgroup-v1/memory.txt
> @@ -267,11 +267,11 @@ When oom event notifier is registered, event will be delivered.
>     Other lock order is following:
>     PG_locked.
>     mm->page_table_lock
> -       zone->lru_lock
> +       zone_lru_lock
>  	  lock_page_cgroup.
>    In many cases, just lock_page_cgroup() is called.
>    per-zone-per-cgroup LRU (cgroup's private LRU) is just guarded by
> -  zone->lru_lock, it has no lock of its own.
> +  zone_lru_lock, it has no lock of its own.
>  
>  2.7 Kernel Memory Extension (CONFIG_MEMCG_KMEM)
>  
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index e093e1d3285b..ca2ed9a6c8d8 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -118,7 +118,7 @@ struct page {
>  	 */
>  	union {
>  		struct list_head lru;	/* Pageout list, eg. active_list
> -					 * protected by zone->lru_lock !
> +					 * protected by zone_lru_lock !
>  					 * Can be used as a generic list
>  					 * by the page owner.
>  					 */
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 078ecb81e209..2d5087e3c034 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -93,7 +93,7 @@ struct free_area {
>  struct pglist_data;
>  
>  /*
> - * zone->lock and zone->lru_lock are two of the hottest locks in the kernel.
> + * zone->lock and the zone lru_lock are two of the hottest locks in the kernel.
>   * So add a wild amount of padding here to ensure that they fall into separate
>   * cachelines.  There are very few zone structures in the machine, so space
>   * consumption is not a concern here.
> @@ -496,7 +496,6 @@ struct zone {
>  	/* Write-intensive fields used by page reclaim */
>  
>  	/* Fields commonly accessed by the page reclaim scanner */
> -	spinlock_t		lru_lock;
>  	struct lruvec		lruvec;
>  
>  	/*
> @@ -690,6 +689,9 @@ typedef struct pglist_data {
>  	/* Number of pages migrated during the rate limiting time interval */
>  	unsigned long numabalancing_migrate_nr_pages;
>  #endif
> +	/* Write-intensive fields used from the page allocator */

Nit: s/page allocator/page reclaim/ -- lru_lock here is used by page reclaim, not by the page allocator.
> +	ZONE_PADDING(_pad1_)
> +	spinlock_t		lru_lock;
>  
>  #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
>  	/*

Otherwise, looks good to me.
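
One way to picture the extra contention the changelog warns about (purely
illustrative, not taken from the trimmed vmscan.c hunks): after this patch the
LRU lists are still per zone, yet kswapd scanning one zone and direct reclaim
scanning another zone of the same node serialize on a single lock:

	/* Illustrative sketch: both reclaim paths now take the same
	 * per-node lock, regardless of which zone they are scanning.
	 */
	spin_lock_irq(zone_lru_lock(zone));	/* was &zone->lru_lock */
	/* ... isolate pages from this zone's LRU lists ... */
	spin_unlock_irq(zone_lru_lock(zone));

Before the patch each zone had its own lru_lock, so those two scans could run
in parallel; once 03/31 moves the LRU lists themselves to the node, the
per-node lock scope matches the per-node lists.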

Reviewed-by: Minchan Kim <minchan@kernel.org>

Thread overview: 90+ messages
2016-07-01 20:01 [PATCH 00/31] Move LRU page reclaim from zones to nodes v8 Mel Gorman
2016-07-01 20:01 ` [PATCH 01/31] mm, vmstat: add infrastructure for per-node vmstats Mel Gorman
2016-07-04 23:50   ` Minchan Kim
2016-07-05  8:14     ` Mel Gorman
2016-07-06  0:15       ` Minchan Kim
2016-07-01 20:01 ` [PATCH 02/31] mm, vmscan: move lru_lock to the node Mel Gorman
2016-07-05  0:03   ` Minchan Kim [this message]
2016-07-01 20:01 ` [PATCH 03/31] mm, vmscan: move LRU lists to node Mel Gorman
2016-07-05  1:19   ` Minchan Kim
2016-07-05 10:14     ` Mel Gorman
2016-07-01 20:01 ` [PATCH 04/31] mm, vmscan: begin reclaiming pages on a per-node basis Mel Gorman
2016-07-07  1:12   ` Joonsoo Kim
2016-07-07  9:48     ` Mel Gorman
2016-07-08  2:28       ` Joonsoo Kim
2016-07-08 10:05         ` Mel Gorman
2016-07-14  6:28           ` Joonsoo Kim
2016-07-14  7:48             ` Vlastimil Babka
2016-07-18  4:52               ` Joonsoo Kim
2016-07-18 12:11             ` Mel Gorman
2016-07-18 14:27               ` Mel Gorman
2016-07-19  8:30                 ` Joonsoo Kim
2016-07-19 14:25                   ` Mel Gorman
2016-07-01 20:01 ` [PATCH 05/31] mm, vmscan: have kswapd only scan based on the highest requested zone Mel Gorman
2016-07-01 20:01 ` [PATCH 06/31] mm, vmscan: make kswapd reclaim in terms of nodes Mel Gorman
2016-07-01 20:01 ` [PATCH 07/31] mm, vmscan: remove balance gap Mel Gorman
2016-07-01 20:01 ` [PATCH 08/31] mm, vmscan: simplify the logic deciding whether kswapd sleeps Mel Gorman
2016-07-05  5:59   ` Minchan Kim
2016-07-05 10:26     ` Mel Gorman
2016-07-06  0:30       ` Minchan Kim
2016-07-06  8:31         ` Mel Gorman
2016-07-07  5:51           ` Minchan Kim
2016-07-07  9:56             ` Mel Gorman
2016-07-07  1:20   ` Joonsoo Kim
2016-07-07 10:17     ` Mel Gorman
2016-07-08  2:44       ` Joonsoo Kim
2016-07-08 10:11         ` Mel Gorman
2016-07-14  5:23           ` Joonsoo Kim
2016-07-14  8:32             ` Vlastimil Babka
2016-07-18  5:07               ` Joonsoo Kim
2016-07-18  6:51                 ` Vlastimil Babka
2016-07-18  7:24                   ` Joonsoo Kim
2016-07-14  9:05             ` Mel Gorman
2016-07-18  5:03               ` Joonsoo Kim
2016-07-01 20:01 ` [PATCH 09/31] mm, vmscan: by default have direct reclaim only shrink once per node Mel Gorman
2016-07-07  1:43   ` Joonsoo Kim
2016-07-07 10:27     ` Mel Gorman
2016-07-01 20:01 ` [PATCH 10/31] mm, vmscan: remove duplicate logic clearing node congestion and dirty state Mel Gorman
2016-07-01 20:01 ` [PATCH 11/31] mm: vmscan: do not reclaim from kswapd if there is any eligible zone Mel Gorman
2016-07-05  6:11   ` Minchan Kim
2016-07-05 10:38     ` Mel Gorman
2016-07-06  1:25       ` Minchan Kim
2016-07-06  8:42         ` Mel Gorman
2016-07-07  6:27           ` Minchan Kim
2016-07-07 10:55             ` Mel Gorman
2016-07-01 20:01 ` [PATCH 12/31] mm, vmscan: make shrink_node decisions more node-centric Mel Gorman
2016-07-05  6:24   ` Minchan Kim
2016-07-05 10:40     ` Mel Gorman
2016-07-01 20:01 ` [PATCH 13/31] mm, memcg: move memcg limit enforcement from zones to nodes Mel Gorman
2016-07-01 20:01 ` [PATCH 14/31] mm, workingset: make working set detection node-aware Mel Gorman
2016-07-01 20:01 ` [PATCH 15/31] mm, page_alloc: consider dirtyable memory in terms of nodes Mel Gorman
2016-07-01 20:01 ` [PATCH 16/31] mm: move page mapped accounting to the node Mel Gorman
2016-07-01 20:01 ` [PATCH 17/31] mm: rename NR_ANON_PAGES to NR_ANON_MAPPED Mel Gorman
2016-07-01 20:01 ` [PATCH 18/31] mm: move most file-based accounting to the node Mel Gorman
2016-07-01 20:01 ` [PATCH 19/31] mm: move vmscan writes and file write " Mel Gorman
2016-07-01 20:01 ` [PATCH 20/31] mm, vmscan: only wakeup kswapd once per node for the requested classzone Mel Gorman
2016-07-07  1:24   ` Joonsoo Kim
2016-07-07 10:58     ` Mel Gorman
2016-07-01 20:01 ` [PATCH 21/31] mm, page_alloc: Wake kswapd based on the highest eligible zone Mel Gorman
2016-07-01 20:01 ` [PATCH 22/31] mm: convert zone_reclaim to node_reclaim Mel Gorman
2016-07-01 20:01 ` [PATCH 23/31] mm, vmscan: Avoid passing in classzone_idx unnecessarily to shrink_node Mel Gorman
2016-07-01 20:01 ` [PATCH 24/31] mm, vmscan: Avoid passing in classzone_idx unnecessarily to compaction_ready Mel Gorman
2016-07-01 20:01 ` [PATCH 25/31] mm, vmscan: add classzone information to tracepoints Mel Gorman
2016-07-01 20:01 ` [PATCH 26/31] mm, page_alloc: remove fair zone allocation policy Mel Gorman
2016-07-01 20:01 ` [PATCH 27/31] mm: page_alloc: cache the last node whose dirty limit is reached Mel Gorman
2016-07-01 20:01 ` [PATCH 28/31] mm: vmstat: replace __count_zone_vm_events with a zone id equivalent Mel Gorman
2016-07-01 20:01 ` [PATCH 29/31] mm: vmstat: account per-zone stalls and pages skipped during reclaim Mel Gorman
2016-07-01 20:01 ` [PATCH 30/31] mm, vmstat: print node-based stats in zoneinfo file Mel Gorman
2016-07-01 20:01 ` [PATCH 31/31] mm, vmstat: Remove zone and node double accounting by approximating retries Mel Gorman
2016-07-06  0:02   ` Minchan Kim
2016-07-06  8:58     ` Mel Gorman
2016-07-06  9:33       ` Mel Gorman
2016-07-07  6:47       ` Minchan Kim
2016-07-06 18:12   ` Dave Hansen
2016-07-07 11:26     ` Mel Gorman
2016-07-04  1:37 ` [PATCH 00/31] Move LRU page reclaim from zones to nodes v8 Minchan Kim
2016-07-04  4:34   ` Mel Gorman
2016-07-04  8:04     ` Minchan Kim
2016-07-04  9:55       ` Mel Gorman
2016-07-06  1:51         ` Minchan Kim