linux-mm.kvack.org archive mirror
* [PATCH for mmotm 0/5] introduce swap-backed-file-mapped count and fix vmscan-change-the-number-of-the-unmapped-files-in-zone-reclaim.patch
@ 2009-06-11 10:25 KOSAKI Motohiro
  2009-06-11 10:26 ` [PATCH for mmotm 1/5] cleanup page_remove_rmap() KOSAKI Motohiro
                   ` (5 more replies)
  0 siblings, 6 replies; 17+ messages in thread
From: KOSAKI Motohiro @ 2009-06-11 10:25 UTC (permalink / raw)
  To: Mel Gorman, Wu Fengguang, linux-mm, LKML, Andrew Morton; +Cc: kosaki.motohiro

Recently, Wu Fengguang pointed out that vmscan-change-the-number-of-the-unmapped-files-in-zone-reclaim.patch
has an underflow problem.

This patch series introduces a new swap-backed-file-mapped vmstat counter and
uses it to fix the above patch.
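
The resulting calculation (as implemented by patch 5 below) is, roughly:

	unmapped_file = (NR_INACTIVE_FILE + NR_ACTIVE_FILE)
	              - (NR_FILE_MAPPED - NR_SWAP_BACKED_FILE_MAPPED)

clamped to zero, so that mapped tmpfs pages (counted in NR_FILE_MAPPED but
not in NR_*_FILE) can no longer drive the result negative.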






* [PATCH for mmotm 1/5] cleanup page_remove_rmap()
  2009-06-11 10:25 [PATCH for mmotm 0/5] introduce swap-backed-file-mapped count and fix vmscan-change-the-number-of-the-unmapped-files-in-zone-reclaim.patch KOSAKI Motohiro
@ 2009-06-11 10:26 ` KOSAKI Motohiro
  2009-06-11 11:01   ` Mel Gorman
  2009-06-11 23:21   ` Wu Fengguang
  2009-06-11 10:26 ` [PATCH for mmotm 2/5] KOSAKI Motohiro
                   ` (4 subsequent siblings)
  5 siblings, 2 replies; 17+ messages in thread
From: KOSAKI Motohiro @ 2009-06-11 10:26 UTC (permalink / raw)
  To: linux-mm, LKML; +Cc: kosaki.motohiro, Mel Gorman, Wu Fengguang, Andrew Morton

Subject: [PATCH] cleanup page_remove_rmap()

page_remove_rmap() tests PageAnon() multiple times and is nested a bit
deeply.

Clean it up here.

Note: this patch does not change behavior.


Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Wu Fengguang <fengguang.wu@intel.com> 
---
 mm/rmap.c |   59 ++++++++++++++++++++++++++++++++---------------------------
 1 file changed, 32 insertions(+), 27 deletions(-)

Index: b/mm/rmap.c
===================================================================
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -862,34 +862,39 @@ void page_dup_rmap(struct page *page, st
  */
 void page_remove_rmap(struct page *page)
 {
-	if (atomic_add_negative(-1, &page->_mapcount)) {
-		/*
-		 * Now that the last pte has gone, s390 must transfer dirty
-		 * flag from storage key to struct page.  We can usually skip
-		 * this if the page is anon, so about to be freed; but perhaps
-		 * not if it's in swapcache - there might be another pte slot
-		 * containing the swap entry, but page not yet written to swap.
-		 */
-		if ((!PageAnon(page) || PageSwapCache(page)) &&
-		    page_test_dirty(page)) {
-			page_clear_dirty(page);
-			set_page_dirty(page);
-		}
-		if (PageAnon(page))
-			mem_cgroup_uncharge_page(page);
-		__dec_zone_page_state(page,
-			PageAnon(page) ? NR_ANON_PAGES : NR_FILE_MAPPED);
-		mem_cgroup_update_mapped_file_stat(page, -1);
-		/*
-		 * It would be tidy to reset the PageAnon mapping here,
-		 * but that might overwrite a racing page_add_anon_rmap
-		 * which increments mapcount after us but sets mapping
-		 * before us: so leave the reset to free_hot_cold_page,
-		 * and remember that it's only reliable while mapped.
-		 * Leaving it set also helps swapoff to reinstate ptes
-		 * faster for those pages still in swapcache.
-		 */
+	if (!atomic_add_negative(-1, &page->_mapcount)) {
+		/* the page is still mapped by someone else */
+		return;
 	}
+
+	/*
+	 * Now that the last pte has gone, s390 must transfer dirty
+	 * flag from storage key to struct page.  We can usually skip
+	 * this if the page is anon, so about to be freed; but perhaps
+	 * not if it's in swapcache - there might be another pte slot
+	 * containing the swap entry, but page not yet written to swap.
+	 */
+	if ((!PageAnon(page) || PageSwapCache(page)) &&
+		page_test_dirty(page)) {
+		page_clear_dirty(page);
+		set_page_dirty(page);
+	}
+	if (PageAnon(page)) {
+		mem_cgroup_uncharge_page(page);
+		__dec_zone_page_state(page, NR_ANON_PAGES);
+	} else {
+		__dec_zone_page_state(page, NR_FILE_MAPPED);
+	}
+	mem_cgroup_update_mapped_file_stat(page, -1);
+	/*
+	 * It would be tidy to reset the PageAnon mapping here,
+	 * but that might overwrite a racing page_add_anon_rmap
+	 * which increments mapcount after us but sets mapping
+	 * before us: so leave the reset to free_hot_cold_page,
+	 * and remember that it's only reliable while mapped.
+	 * Leaving it set also helps swapoff to reinstate ptes
+	 * faster for those pages still in swapcache.
+	 */
 }
 
 /*



* [PATCH for mmotm 2/5]
  2009-06-11 10:25 [PATCH for mmotm 0/5] introduce swap-backed-file-mapped count and fix vmscan-change-the-number-of-the-unmapped-files-in-zone-reclaim.patch KOSAKI Motohiro
  2009-06-11 10:26 ` [PATCH for mmotm 1/5] cleanup page_remove_rmap() KOSAKI Motohiro
@ 2009-06-11 10:26 ` KOSAKI Motohiro
  2009-06-11 11:13   ` Mel Gorman
  2009-06-11 23:29   ` Wu Fengguang
  2009-06-11 10:27 ` [PATCH for mmotm 3/5] add Mapped(SwapBacked) field to /proc/meminfo KOSAKI Motohiro
                   ` (3 subsequent siblings)
  5 siblings, 2 replies; 17+ messages in thread
From: KOSAKI Motohiro @ 2009-06-11 10:26 UTC (permalink / raw)
  To: linux-mm, LKML; +Cc: kosaki.motohiro, Mel Gorman, Wu Fengguang, Andrew Morton

Changes since Wu's original patch
  - adding vmstat
  - rename NR_TMPFS_MAPPED to NR_SWAP_BACKED_FILE_MAPPED


----------------------
Subject: [PATCH] introduce NR_SWAP_BACKED_FILE_MAPPED zone stat

A desirable zone reclaim implementation wants to know the number of
file-backed and unmapped pages.

Thus, we need to know the number of swap-backed mapped pages to
calculate the above number.


Cc: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
---
 include/linux/mmzone.h |    2 ++
 mm/rmap.c              |    7 +++++++
 mm/vmstat.c            |    1 +
 3 files changed, 10 insertions(+)

Index: b/include/linux/mmzone.h
===================================================================
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -88,6 +88,8 @@ enum zone_stat_item {
 	NR_ANON_PAGES,	/* Mapped anonymous pages */
 	NR_FILE_MAPPED,	/* pagecache pages mapped into pagetables.
 			   only modified from process context */
+	NR_SWAP_BACKED_FILE_MAPPED, /* Similar to NR_FILE_MAPPED. but
+				       only account swap-backed pages */
 	NR_FILE_PAGES,
 	NR_FILE_DIRTY,
 	NR_WRITEBACK,
Index: b/mm/rmap.c
===================================================================
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -829,6 +829,10 @@ void page_add_file_rmap(struct page *pag
 {
 	if (atomic_inc_and_test(&page->_mapcount)) {
 		__inc_zone_page_state(page, NR_FILE_MAPPED);
+		if (PageSwapBacked(page))
+			__inc_zone_page_state(page,
+					      NR_SWAP_BACKED_FILE_MAPPED);
+
 		mem_cgroup_update_mapped_file_stat(page, 1);
 	}
 }
@@ -884,6 +888,9 @@ void page_remove_rmap(struct page *page)
 		__dec_zone_page_state(page, NR_ANON_PAGES);
 	} else {
 		__dec_zone_page_state(page, NR_FILE_MAPPED);
+		if (PageSwapBacked(page))
+			__dec_zone_page_state(page,
+					NR_SWAP_BACKED_FILE_MAPPED);
 	}
 	mem_cgroup_update_mapped_file_stat(page, -1);
 	/*
Index: b/mm/vmstat.c
===================================================================
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -633,6 +633,7 @@ static const char * const vmstat_text[] 
 	"nr_mlock",
 	"nr_anon_pages",
 	"nr_mapped",
+	"nr_swap_backed_file_mapped",
 	"nr_file_pages",
 	"nr_dirty",
 	"nr_writeback",



* [PATCH for mmotm 3/5] add Mapped(SwapBacked) field to /proc/meminfo
  2009-06-11 10:25 [PATCH for mmotm 0/5] introduce swap-backed-file-mapped count and fix vmscan-change-the-number-of-the-unmapped-files-in-zone-reclaim.patch KOSAKI Motohiro
  2009-06-11 10:26 ` [PATCH for mmotm 1/5] cleanup page_remove_rmap() KOSAKI Motohiro
  2009-06-11 10:26 ` [PATCH for mmotm 2/5] KOSAKI Motohiro
@ 2009-06-11 10:27 ` KOSAKI Motohiro
  2009-06-11 10:27 ` [PATCH for mmotm 4/5] adjust field lengths of /proc/meminfo KOSAKI Motohiro
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 17+ messages in thread
From: KOSAKI Motohiro @ 2009-06-11 10:27 UTC (permalink / raw)
  To: linux-mm, LKML; +Cc: kosaki.motohiro, Mel Gorman, Wu Fengguang, Andrew Morton

Subject: [PATCH] add Mapped(SwapBacked) field to /proc/meminfo

Now we have the NR_SWAP_BACKED_FILE_MAPPED statistic, so we can also
display it in /proc/meminfo.


example:

$ cat /proc/meminfo
MemTotal:       32275164 kB
MemFree:        31880212 kB
(snip)
Mapped:            28048 kB
Mapped(SwapBacked):      836 kB

Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Wu Fengguang <fengguang.wu@intel.com>
---
 fs/proc/meminfo.c |    2 ++
 1 file changed, 2 insertions(+)

Index: b/fs/proc/meminfo.c
===================================================================
--- a/fs/proc/meminfo.c
+++ b/fs/proc/meminfo.c
@@ -81,6 +81,7 @@ static int meminfo_proc_show(struct seq_
 		"Writeback:      %8lu kB\n"
 		"AnonPages:      %8lu kB\n"
 		"Mapped:         %8lu kB\n"
+		"Mapped(SwapBacked): %8lu kB\n"
 		"Slab:           %8lu kB\n"
 		"SReclaimable:   %8lu kB\n"
 		"SUnreclaim:     %8lu kB\n"
@@ -124,6 +125,7 @@ static int meminfo_proc_show(struct seq_
 		K(global_page_state(NR_WRITEBACK)),
 		K(global_page_state(NR_ANON_PAGES)),
 		K(global_page_state(NR_FILE_MAPPED)),
+		K(global_page_state(NR_SWAP_BACKED_FILE_MAPPED)),
 		K(global_page_state(NR_SLAB_RECLAIMABLE) +
 				global_page_state(NR_SLAB_UNRECLAIMABLE)),
 		K(global_page_state(NR_SLAB_RECLAIMABLE)),



* [PATCH for mmotm 4/5] adjust field lengths of /proc/meminfo
  2009-06-11 10:25 [PATCH for mmotm 0/5] introduce swap-backed-file-mapped count and fix vmscan-change-the-number-of-the-unmapped-files-in-zone-reclaim.patch KOSAKI Motohiro
                   ` (2 preceding siblings ...)
  2009-06-11 10:27 ` [PATCH for mmotm 3/5] add Mapped(SwapBacked) field to /proc/meminfo KOSAKI Motohiro
@ 2009-06-11 10:27 ` KOSAKI Motohiro
  2009-06-11 10:28 ` [PATCH for mmotm 5/5] fix vmscan-change-the-number-of-the-unmapped-files-in-zone-reclaim.patch KOSAKI Motohiro
  2009-06-11 10:38 ` [PATCH for mmotm 0/5] introduce swap-backed-file-mapped count and " Mel Gorman
  5 siblings, 0 replies; 17+ messages in thread
From: KOSAKI Motohiro @ 2009-06-11 10:27 UTC (permalink / raw)
  To: linux-mm, LKML; +Cc: kosaki.motohiro, Mel Gorman, Wu Fengguang, Andrew Morton

Subject: [PATCH] adjust field lengths of /proc/meminfo

This patch adjusts the field lengths of /proc/meminfo. It has no
functional change.


<before>
$ cat /proc/meminfo
MemTotal:       32275164 kB
MemFree:        31880212 kB
Buffers:            8824 kB
Cached:           175304 kB
SwapCached:            0 kB
Active:            97236 kB
Inactive:         161336 kB
Active(anon):      75344 kB
Inactive(anon):        0 kB
Active(file):      21892 kB
Inactive(file):   161336 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:       4192956 kB
SwapFree:        4192956 kB
Dirty:                 0 kB
Writeback:             0 kB
AnonPages:         74480 kB
Mapped:            28048 kB
Mapped(SwapBacked):      836 kB
Slab:              45904 kB
SReclaimable:      23460 kB
SUnreclaim:        22444 kB
PageTables:         8484 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    20330536 kB
Committed_AS:     162652 kB
VmallocTotal:   34359738367 kB
VmallocUsed:       85348 kB
VmallocChunk:   34359638395 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:        7680 kB
DirectMap2M:    33546240 kB


<after>
$ cat /proc/meminfo
MemTotal:           32275164 kB
MemFree:            32000220 kB
Buffers:                8132 kB
Cached:                81224 kB
SwapCached:                0 kB
Active:                70840 kB
Inactive:              72244 kB
Active(anon):          54492 kB
Inactive(anon):            0 kB
Active(file):          16348 kB
Inactive(file):        72244 kB
Unevictable:               0 kB
Mlocked:                   0 kB
SwapTotal:           4192956 kB
SwapFree:            4192956 kB
Dirty:                    60 kB
Writeback:                 0 kB
AnonPages:             53764 kB
Mapped:                27672 kB
Mapped(SwapBacked):      708 kB
Slab:                  41544 kB
SReclaimable:          18648 kB
SUnreclaim:            22896 kB
PageTables:             8440 kB
NFS_Unstable:              0 kB
Bounce:                    0 kB
WritebackTmp:              0 kB
CommitLimit:        20330536 kB
Committed_AS:         141696 kB
VmallocTotal:    34359738367 kB
VmallocUsed:           85348 kB
VmallocChunk:    34359638395 kB
HugePages_Total:           0
HugePages_Free:            0
HugePages_Rsvd:            0
HugePages_Surp:            0
Hugepagesize:           2048 kB
DirectMap4k:            7680 kB
DirectMap2M:        33546240 kB


Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
---
 arch/x86/mm/pageattr.c |    8 ++---
 fs/proc/meminfo.c      |   74 ++++++++++++++++++++++++-------------------------
 mm/hugetlb.c           |   10 +++---
 3 files changed, 46 insertions(+), 46 deletions(-)

Index: b/arch/x86/mm/pageattr.c
===================================================================
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -70,18 +70,18 @@ static void split_page_count(int level)
 
 void arch_report_meminfo(struct seq_file *m)
 {
-	seq_printf(m, "DirectMap4k:    %8lu kB\n",
+	seq_printf(m, "DirectMap4k:        %8lu kB\n",
 			direct_pages_count[PG_LEVEL_4K] << 2);
 #if defined(CONFIG_X86_64) || defined(CONFIG_X86_PAE)
-	seq_printf(m, "DirectMap2M:    %8lu kB\n",
+	seq_printf(m, "DirectMap2M:        %8lu kB\n",
 			direct_pages_count[PG_LEVEL_2M] << 11);
 #else
-	seq_printf(m, "DirectMap4M:    %8lu kB\n",
+	seq_printf(m, "DirectMap4M:        %8lu kB\n",
 			direct_pages_count[PG_LEVEL_2M] << 12);
 #endif
 #ifdef CONFIG_X86_64
 	if (direct_gbpages)
-		seq_printf(m, "DirectMap1G:    %8lu kB\n",
+		seq_printf(m, "DirectMap1G:        %8lu kB\n",
 			direct_pages_count[PG_LEVEL_1G] << 20);
 #endif
 }
Index: b/fs/proc/meminfo.c
===================================================================
--- a/fs/proc/meminfo.c
+++ b/fs/proc/meminfo.c
@@ -53,50 +53,50 @@ static int meminfo_proc_show(struct seq_
 	 * Tagged format, for easy grepping and expansion.
 	 */
 	seq_printf(m,
-		"MemTotal:       %8lu kB\n"
-		"MemFree:        %8lu kB\n"
-		"Buffers:        %8lu kB\n"
-		"Cached:         %8lu kB\n"
-		"SwapCached:     %8lu kB\n"
-		"Active:         %8lu kB\n"
-		"Inactive:       %8lu kB\n"
-		"Active(anon):   %8lu kB\n"
-		"Inactive(anon): %8lu kB\n"
-		"Active(file):   %8lu kB\n"
-		"Inactive(file): %8lu kB\n"
-		"Unevictable:    %8lu kB\n"
-		"Mlocked:        %8lu kB\n"
+		"MemTotal:           %8lu kB\n"
+		"MemFree:            %8lu kB\n"
+		"Buffers:            %8lu kB\n"
+		"Cached:             %8lu kB\n"
+		"SwapCached:         %8lu kB\n"
+		"Active:             %8lu kB\n"
+		"Inactive:           %8lu kB\n"
+		"Active(anon):       %8lu kB\n"
+		"Inactive(anon):     %8lu kB\n"
+		"Active(file):       %8lu kB\n"
+		"Inactive(file):     %8lu kB\n"
+		"Unevictable:        %8lu kB\n"
+		"Mlocked:            %8lu kB\n"
 #ifdef CONFIG_HIGHMEM
-		"HighTotal:      %8lu kB\n"
-		"HighFree:       %8lu kB\n"
-		"LowTotal:       %8lu kB\n"
-		"LowFree:        %8lu kB\n"
+		"HighTotal:          %8lu kB\n"
+		"HighFree:           %8lu kB\n"
+		"LowTotal:           %8lu kB\n"
+		"LowFree:            %8lu kB\n"
 #endif
 #ifndef CONFIG_MMU
-		"MmapCopy:       %8lu kB\n"
+		"MmapCopy:           %8lu kB\n"
 #endif
-		"SwapTotal:      %8lu kB\n"
-		"SwapFree:       %8lu kB\n"
-		"Dirty:          %8lu kB\n"
-		"Writeback:      %8lu kB\n"
-		"AnonPages:      %8lu kB\n"
-		"Mapped:         %8lu kB\n"
+		"SwapTotal:          %8lu kB\n"
+		"SwapFree:           %8lu kB\n"
+		"Dirty:              %8lu kB\n"
+		"Writeback:          %8lu kB\n"
+		"AnonPages:          %8lu kB\n"
+		"Mapped:             %8lu kB\n"
 		"Mapped(SwapBacked): %8lu kB\n"
-		"Slab:           %8lu kB\n"
-		"SReclaimable:   %8lu kB\n"
-		"SUnreclaim:     %8lu kB\n"
-		"PageTables:     %8lu kB\n"
+		"Slab:               %8lu kB\n"
+		"SReclaimable:       %8lu kB\n"
+		"SUnreclaim:         %8lu kB\n"
+		"PageTables:         %8lu kB\n"
 #ifdef CONFIG_QUICKLIST
-		"Quicklists:     %8lu kB\n"
+		"Quicklists:         %8lu kB\n"
 #endif
-		"NFS_Unstable:   %8lu kB\n"
-		"Bounce:         %8lu kB\n"
-		"WritebackTmp:   %8lu kB\n"
-		"CommitLimit:    %8lu kB\n"
-		"Committed_AS:   %8lu kB\n"
-		"VmallocTotal:   %8lu kB\n"
-		"VmallocUsed:    %8lu kB\n"
-		"VmallocChunk:   %8lu kB\n",
+		"NFS_Unstable:       %8lu kB\n"
+		"Bounce:             %8lu kB\n"
+		"WritebackTmp:       %8lu kB\n"
+		"CommitLimit:    %12lu kB\n"
+		"Committed_AS:   %12lu kB\n"
+		"VmallocTotal:   %12lu kB\n"
+		"VmallocUsed:    %12lu kB\n"
+		"VmallocChunk:   %12lu kB\n",
 		K(i.totalram),
 		K(i.freeram),
 		K(i.bufferram),
Index: b/mm/hugetlb.c
===================================================================
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1565,11 +1565,11 @@ void hugetlb_report_meminfo(struct seq_f
 {
 	struct hstate *h = &default_hstate;
 	seq_printf(m,
-			"HugePages_Total:   %5lu\n"
-			"HugePages_Free:    %5lu\n"
-			"HugePages_Rsvd:    %5lu\n"
-			"HugePages_Surp:    %5lu\n"
-			"Hugepagesize:   %8lu kB\n",
+			"HugePages_Total:    %8lu\n"
+			"HugePages_Free:     %8lu\n"
+			"HugePages_Rsvd:     %8lu\n"
+			"HugePages_Surp:     %8lu\n"
+			"Hugepagesize:       %8lu kB\n",
 			h->nr_huge_pages,
 			h->free_huge_pages,
 			h->resv_huge_pages,



* [PATCH for mmotm 5/5] fix vmscan-change-the-number-of-the-unmapped-files-in-zone-reclaim.patch
  2009-06-11 10:25 [PATCH for mmotm 0/5] introduce swap-backed-file-mapped count and fix vmscan-change-the-number-of-the-unmapped-files-in-zone-reclaim.patch KOSAKI Motohiro
                   ` (3 preceding siblings ...)
  2009-06-11 10:27 ` [PATCH for mmotm 4/5] adjust field lengths of /proc/meminfo KOSAKI Motohiro
@ 2009-06-11 10:28 ` KOSAKI Motohiro
  2009-06-11 11:15   ` Mel Gorman
  2009-06-11 10:38 ` [PATCH for mmotm 0/5] introduce swap-backed-file-mapped count and " Mel Gorman
  5 siblings, 1 reply; 17+ messages in thread
From: KOSAKI Motohiro @ 2009-06-11 10:28 UTC (permalink / raw)
  To: linux-mm, LKML; +Cc: kosaki.motohiro, Mel Gorman, Wu Fengguang, Andrew Morton

Subject: [PATCH] fix vmscan-change-the-number-of-the-unmapped-files-in-zone-reclaim.patch 


+	nr_unmapped_file_pages = zone_page_state(zone, NR_INACTIVE_FILE) +
+				 zone_page_state(zone, NR_ACTIVE_FILE) -
+				 zone_page_state(zone, NR_FILE_MAPPED);

is wrong. It can underflow because tmpfs pages are not counted in NR_*_FILE,
but they are counted in NR_FILE_MAPPED.

Fix it here.
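
A worked example (hypothetical numbers): in a zone whose page cache is
mostly tmpfs, the mapped tmpfs pages sit on the anon LRU, so the zone may
have NR_INACTIVE_FILE + NR_ACTIVE_FILE = 1000 while NR_FILE_MAPPED = 5000.
The old calculation yields 1000 - 5000 = -4000. With the new counter, say
NR_SWAP_BACKED_FILE_MAPPED = 4500, the mapped count comparable to
NR_*_FILE is 5000 - 4500 = 500, giving 1000 - 500 = 500 unmapped file
pages.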


Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Wu Fengguang <fengguang.wu@intel.com>
---
 mm/vmscan.c |   32 ++++++++++++++++++++------------
 1 file changed, 20 insertions(+), 12 deletions(-)

Index: b/mm/vmscan.c
===================================================================
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2333,6 +2333,23 @@ int sysctl_min_unmapped_ratio = 1;
  */
 int sysctl_min_slab_ratio = 5;
 
+static unsigned long zone_unmapped_file_pages(struct zone *zone)
+{
+	long nr_file_pages;
+	long nr_file_mapped;
+	long nr_unmapped_file_pages;
+
+	nr_file_pages = zone_page_state(zone, NR_INACTIVE_FILE) +
+			zone_page_state(zone, NR_ACTIVE_FILE);
+	nr_file_mapped = zone_page_state(zone, NR_FILE_MAPPED) -
+			 zone_page_state(zone,
+					NR_SWAP_BACKED_FILE_MAPPED);
+	nr_unmapped_file_pages = nr_file_pages - nr_file_mapped;
+
+	return nr_unmapped_file_pages > 0 ? nr_unmapped_file_pages : 0;
+}
+
+
 /*
  * Try to free up some pages from this zone through reclaim.
  */
@@ -2355,7 +2372,6 @@ static int __zone_reclaim(struct zone *z
 		.isolate_pages = isolate_pages_global,
 	};
 	unsigned long slab_reclaimable;
-	long nr_unmapped_file_pages;
 
 	disable_swap_token();
 	cond_resched();
@@ -2368,11 +2384,7 @@ static int __zone_reclaim(struct zone *z
 	reclaim_state.reclaimed_slab = 0;
 	p->reclaim_state = &reclaim_state;
 
-	nr_unmapped_file_pages = zone_page_state(zone, NR_INACTIVE_FILE) +
-				 zone_page_state(zone, NR_ACTIVE_FILE) -
-				 zone_page_state(zone, NR_FILE_MAPPED);
-
-	if (nr_unmapped_file_pages > zone->min_unmapped_pages) {
+	if (zone_unmapped_file_pages(zone) > zone->min_unmapped_pages) {
 		/*
 		 * Free memory by calling shrink zone with increasing
 		 * priorities until we have enough memory freed.
@@ -2419,8 +2431,7 @@ int zone_reclaim(struct zone *zone, gfp_
 {
 	int node_id;
 	int ret;
-	long nr_unmapped_file_pages;
-	long nr_slab_reclaimable;
+	unsigned long nr_slab_reclaimable;
 
 	/*
 	 * Zone reclaim reclaims unmapped file backed pages and
@@ -2432,11 +2443,8 @@ int zone_reclaim(struct zone *zone, gfp_
 	 * if less than a specified percentage of the zone is used by
 	 * unmapped file backed pages.
 	 */
-	nr_unmapped_file_pages = zone_page_state(zone, NR_INACTIVE_FILE) +
-				 zone_page_state(zone, NR_ACTIVE_FILE) -
-				 zone_page_state(zone, NR_FILE_MAPPED);
 	nr_slab_reclaimable = zone_page_state(zone, NR_SLAB_RECLAIMABLE);
-	if (nr_unmapped_file_pages <= zone->min_unmapped_pages &&
+	if (zone_unmapped_file_pages(zone) <= zone->min_unmapped_pages &&
 	    nr_slab_reclaimable <= zone->min_slab_pages)
 		return 0;
 



* Re: [PATCH for mmotm 0/5] introduce swap-backed-file-mapped count and fix vmscan-change-the-number-of-the-unmapped-files-in-zone-reclaim.patch
  2009-06-11 10:25 [PATCH for mmotm 0/5] introduce swap-backed-file-mapped count and fix vmscan-change-the-number-of-the-unmapped-files-in-zone-reclaim.patch KOSAKI Motohiro
                   ` (4 preceding siblings ...)
  2009-06-11 10:28 ` [PATCH for mmotm 5/5] fix vmscan-change-the-number-of-the-unmapped-files-in-zone-reclaim.patch KOSAKI Motohiro
@ 2009-06-11 10:38 ` Mel Gorman
  2009-06-11 10:42   ` KOSAKI Motohiro
  5 siblings, 1 reply; 17+ messages in thread
From: Mel Gorman @ 2009-06-11 10:38 UTC (permalink / raw)
  To: KOSAKI Motohiro; +Cc: Wu Fengguang, linux-mm, LKML, Andrew Morton

On Thu, Jun 11, 2009 at 07:25:09PM +0900, KOSAKI Motohiro wrote:
> Recently, Wu Fengguang pointed out that vmscan-change-the-number-of-the-unmapped-files-in-zone-reclaim.patch
> has an underflow problem.
> 

Can you drop this aspect of the patchset please? I'm doing a final test
on the scan-avoidance heuristic that incorporates this patch and the
underflow fix. Ram (the tester of the malloc()-stall) confirms the patch
fixes his problem.

> This patch series introduces a new swap-backed-file-mapped vmstat counter and
> uses it to fix the above patch.
> 

-- 
Mel Gorman
Part-time Phd Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab


* Re: [PATCH for mmotm 0/5] introduce swap-backed-file-mapped count and fix vmscan-change-the-number-of-the-unmapped-files-in-zone-reclaim.patch
  2009-06-11 10:38 ` [PATCH for mmotm 0/5] introduce swap-backed-file-mapped count and " Mel Gorman
@ 2009-06-11 10:42   ` KOSAKI Motohiro
  2009-06-11 10:53     ` Mel Gorman
  0 siblings, 1 reply; 17+ messages in thread
From: KOSAKI Motohiro @ 2009-06-11 10:42 UTC (permalink / raw)
  To: Mel Gorman; +Cc: kosaki.motohiro, Wu Fengguang, linux-mm, LKML, Andrew Morton

> On Thu, Jun 11, 2009 at 07:25:09PM +0900, KOSAKI Motohiro wrote:
> > Recently, Wu Fengguang pointed out that vmscan-change-the-number-of-the-unmapped-files-in-zone-reclaim.patch
> > has an underflow problem.
> > 
> 
> Can you drop this aspect of the patchset please? I'm doing a final test
> on the scan-avoidance heuristic that incorporates this patch and the
> underflow fix. Ram (the tester of the malloc()-stall) confirms the patch
> fixes his problem.

OK.
Instead, I'll join in reviewing your patch :)



> > This patch series introduces a new swap-backed-file-mapped vmstat counter and
> > uses it to fix the above patch.





* Re: [PATCH for mmotm 0/5] introduce swap-backed-file-mapped count and fix vmscan-change-the-number-of-the-unmapped-files-in-zone-reclaim.patch
  2009-06-11 10:42   ` KOSAKI Motohiro
@ 2009-06-11 10:53     ` Mel Gorman
  2009-06-11 11:32       ` KOSAKI Motohiro
  0 siblings, 1 reply; 17+ messages in thread
From: Mel Gorman @ 2009-06-11 10:53 UTC (permalink / raw)
  To: KOSAKI Motohiro; +Cc: Wu Fengguang, linux-mm, LKML, Andrew Morton

On Thu, Jun 11, 2009 at 07:42:33PM +0900, KOSAKI Motohiro wrote:
> > On Thu, Jun 11, 2009 at 07:25:09PM +0900, KOSAKI Motohiro wrote:
> > > Recently, Wu Fengguang pointed out that vmscan-change-the-number-of-the-unmapped-files-in-zone-reclaim.patch
> > > has an underflow problem.
> > > 
> > 
> > Can you drop this aspect of the patchset please? I'm doing a final test
> > on the scan-avoidance heuristic that incorporates this patch and the
> > underflow fix. Ram (the tester of the malloc()-stall) confirms the patch
> > fixes his problem.
> 
> OK.
> Instead, I'll join in reviewing your patch :)
> 

Thanks. You should have it now. In particular, I'm interested in hearing your
opinion about patch 1 of the series "Fix malloc() stall in zone_reclaim()
and bring behaviour more in line with expectations V3" and whether it addresses:

1. Does patch 1 address the problem that first led you to develop the patch
vmscan-change-the-number-of-the-unmapped-files-in-zone-reclaim.patch?

2. Do you think patch 1 should merge with and replace
vmscan-change-the-number-of-the-unmapped-files-in-zone-reclaim.patch?

> > > This patch series introduces a new swap-backed-file-mapped vmstat counter and
> > > uses it to fix the above patch.
> 

I don't think the patch above needs to be fixed by another counter. At
least, once the underflow was fixed up, it handled the malloc-stall without
additional counters. If we need to account swap-backed-file-mapped, we need
another failure case that it addresses to be sure we're doing the right thing.

-- 
Mel Gorman
Part-time Phd Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab


* Re: [PATCH for mmotm 1/5] cleanup page_remove_rmap()
  2009-06-11 10:26 ` [PATCH for mmotm 1/5] cleanup page_remove_rmap() KOSAKI Motohiro
@ 2009-06-11 11:01   ` Mel Gorman
  2009-06-11 23:21   ` Wu Fengguang
  1 sibling, 0 replies; 17+ messages in thread
From: Mel Gorman @ 2009-06-11 11:01 UTC (permalink / raw)
  To: KOSAKI Motohiro; +Cc: linux-mm, LKML, Wu Fengguang, Andrew Morton

On Thu, Jun 11, 2009 at 07:26:04PM +0900, KOSAKI Motohiro wrote:
> Subject: [PATCH] cleanup page_remove_rmap()
> 
> page_remove_rmap() tests PageAnon() multiple times and is nested a bit
> deeply.
> 
> Clean it up here.
> 
> Note: this patch does not change behavior.
> 
> 
> Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> Cc: Mel Gorman <mel@csn.ul.ie>
> Cc: Wu Fengguang <fengguang.wu@intel.com> 
> ---
>  mm/rmap.c |   59 ++++++++++++++++++++++++++++++++---------------------------
>  1 file changed, 32 insertions(+), 27 deletions(-)
> 
> Index: b/mm/rmap.c
> ===================================================================
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -862,34 +862,39 @@ void page_dup_rmap(struct page *page, st
>   */
>  void page_remove_rmap(struct page *page)
>  {
> -	if (atomic_add_negative(-1, &page->_mapcount)) {
> -		/*
> -		 * Now that the last pte has gone, s390 must transfer dirty
> -		 * flag from storage key to struct page.  We can usually skip
> -		 * this if the page is anon, so about to be freed; but perhaps
> -		 * not if it's in swapcache - there might be another pte slot
> -		 * containing the swap entry, but page not yet written to swap.
> -		 */
> -		if ((!PageAnon(page) || PageSwapCache(page)) &&
> -		    page_test_dirty(page)) {
> -			page_clear_dirty(page);
> -			set_page_dirty(page);
> -		}
> -		if (PageAnon(page))
> -			mem_cgroup_uncharge_page(page);
> -		__dec_zone_page_state(page,
> -			PageAnon(page) ? NR_ANON_PAGES : NR_FILE_MAPPED);
> -		mem_cgroup_update_mapped_file_stat(page, -1);
> -		/*
> -		 * It would be tidy to reset the PageAnon mapping here,
> -		 * but that might overwrite a racing page_add_anon_rmap
> -		 * which increments mapcount after us but sets mapping
> -		 * before us: so leave the reset to free_hot_cold_page,
> -		 * and remember that it's only reliable while mapped.
> -		 * Leaving it set also helps swapoff to reinstate ptes
> -		 * faster for those pages still in swapcache.
> -		 */
> +	if (!atomic_add_negative(-1, &page->_mapcount)) {
> +		/* the page is still mapped by someone else */
> +		return;
>  	}
> +
> +	/*
> +	 * Now that the last pte has gone, s390 must transfer dirty
> +	 * flag from storage key to struct page.  We can usually skip
> +	 * this if the page is anon, so about to be freed; but perhaps
> +	 * not if it's in swapcache - there might be another pte slot
> +	 * containing the swap entry, but page not yet written to swap.
> +	 */
> +	if ((!PageAnon(page) || PageSwapCache(page)) &&
> +		page_test_dirty(page)) {
> +		page_clear_dirty(page);
> +		set_page_dirty(page);
> +	}

Pure nitpick. It looks like page_test_dirty() can merge with the line
above it now. Then the condition won't be at the same indentation as the
statements.
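
i.e., something like this (a sketch of the suggested merge only):

	if ((!PageAnon(page) || PageSwapCache(page)) && page_test_dirty(page)) {
		page_clear_dirty(page);
		set_page_dirty(page);
	}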


> +	if (PageAnon(page)) {
> +		mem_cgroup_uncharge_page(page);
> +		__dec_zone_page_state(page, NR_ANON_PAGES);
> +	} else {
> +		__dec_zone_page_state(page, NR_FILE_MAPPED);
> +	}

Ok, first actual change and it looks functionally equivalent and avoids a
second PageAnon test. I suspect it fractionally increases text size but as
PageAnon is an atomic bit operation, we want to avoid calling that twice too.

> +	mem_cgroup_update_mapped_file_stat(page, -1);
> +	/*
> +	 * It would be tidy to reset the PageAnon mapping here,
> +	 * but that might overwrite a racing page_add_anon_rmap
> +	 * which increments mapcount after us but sets mapping
> +	 * before us: so leave the reset to free_hot_cold_page,
> +	 * and remember that it's only reliable while mapped.
> +	 * Leaving it set also helps swapoff to reinstate ptes
> +	 * faster for those pages still in swapcache.
> +	 */
>  }
>  

Ok, patch looks good to me. I'm not seeing what it has to do with the
zone_reclaim() problem though, so you might want to send it separately from
the set for clarity.

Acked-by: Mel Gorman <mel@csn.ul.ie>

-- 
Mel Gorman
Part-time Phd Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab


* Re: [PATCH for mmotm 2/5]
  2009-06-11 10:26 ` [PATCH for mmotm 2/5] KOSAKI Motohiro
@ 2009-06-11 11:13   ` Mel Gorman
  2009-06-11 11:50     ` KOSAKI Motohiro
  2009-06-11 23:29   ` Wu Fengguang
  1 sibling, 1 reply; 17+ messages in thread
From: Mel Gorman @ 2009-06-11 11:13 UTC (permalink / raw)
  To: KOSAKI Motohiro; +Cc: linux-mm, LKML, Wu Fengguang, Andrew Morton

On Thu, Jun 11, 2009 at 07:26:48PM +0900, KOSAKI Motohiro wrote:
> Changes since Wu's original patch
>   - adding vmstat
>   - rename NR_TMPFS_MAPPED to NR_SWAP_BACKED_FILE_MAPPED
> 
> 
> ----------------------
> Subject: [PATCH] introduce NR_SWAP_BACKED_FILE_MAPPED zone stat

This got lost in the actual subject line.

> A desirable zone reclaim implementation wants to know the number of
> file-backed and unmapped pages.
> 

There needs to be more justification for this. We need an example
failure case that this addresses. For example, Patch 1 of my series was
to address the following problem included with the patchset leader

"The reported problem was that malloc() stalled for a long time (minutes
in some cases) if a large tmpfs mount was occupying a large percentage of
memory overall. The pages did not get cleaned or reclaimed by zone_reclaim()
because the zone_reclaim_mode was unsuitable, but the lists are uselessly
scanned frequently making the CPU spin at near 100%."

We should have a similar case.

What "desirable" zone_reclaim() should be spelled out as well. Minimally
something like

"For zone_reclaim() to be efficient, it must be able to detect in advance
if the LRU scan will reclaim the necessary pages with the limitations of
the current zone_reclaim_mode. Otherwise, CPU usage increases as
zone_reclaim() uselessly scans the LRU list.

The problem with the heuristic is ....

This patch fixes the heuristic by ...."

etc?

I'm not trying to be awkward. I believe I provided similar reasoning
with my own patchset.

> Thus, we need to know the number of swap-backed mapped pages to
> calculate the above number.
> 
> 
> Cc: Mel Gorman <mel@csn.ul.ie>
> Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
> Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
> ---
>  include/linux/mmzone.h |    2 ++
>  mm/rmap.c              |    7 +++++++
>  mm/vmstat.c            |    1 +
>  3 files changed, 10 insertions(+)
> 
> Index: b/include/linux/mmzone.h
> ===================================================================
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -88,6 +88,8 @@ enum zone_stat_item {
>  	NR_ANON_PAGES,	/* Mapped anonymous pages */
>  	NR_FILE_MAPPED,	/* pagecache pages mapped into pagetables.
>  			   only modified from process context */
> +	NR_SWAP_BACKED_FILE_MAPPED, /* Similar to NR_FILE_MAPPED. but
> +				       only account swap-backed pages */
>  	NR_FILE_PAGES,
>  	NR_FILE_DIRTY,
>  	NR_WRITEBACK,
> Index: b/mm/rmap.c
> ===================================================================
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -829,6 +829,10 @@ void page_add_file_rmap(struct page *pag
>  {
>  	if (atomic_inc_and_test(&page->_mapcount)) {
>  		__inc_zone_page_state(page, NR_FILE_MAPPED);
> +		if (PageSwapBacked(page))
> +			__inc_zone_page_state(page,
> +					      NR_SWAP_BACKED_FILE_MAPPED);
> +
>  		mem_cgroup_update_mapped_file_stat(page, 1);
>  	}
>  }
> @@ -884,6 +888,9 @@ void page_remove_rmap(struct page *page)
>  		__dec_zone_page_state(page, NR_ANON_PAGES);
>  	} else {
>  		__dec_zone_page_state(page, NR_FILE_MAPPED);
> +		if (PageSwapBacked(page))
> +			__dec_zone_page_state(page,
> +					NR_SWAP_BACKED_FILE_MAPPED);
>  	}
>  	mem_cgroup_update_mapped_file_stat(page, -1);
>  	/*
> Index: b/mm/vmstat.c
> ===================================================================
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -633,6 +633,7 @@ static const char * const vmstat_text[] 
>  	"nr_mlock",
>  	"nr_anon_pages",
>  	"nr_mapped",
> +	"nr_swap_backed_file_mapped",
>  	"nr_file_pages",
>  	"nr_dirty",
>  	"nr_writeback",
> 

Otherwise the patch seems reasonable.

-- 
Mel Gorman
Part-time Phd Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab


* Re: [PATCH for mmotm 5/5] fix vmscan-change-the-number-of-the-unmapped-files-in-zone-reclaim.patch
  2009-06-11 10:28 ` [PATCH for mmotm 5/5] fix vmscan-change-the-number-of-the-unmapped-files-in-zone-reclaim.patch KOSAKI Motohiro
@ 2009-06-11 11:15   ` Mel Gorman
  0 siblings, 0 replies; 17+ messages in thread
From: Mel Gorman @ 2009-06-11 11:15 UTC (permalink / raw)
  To: KOSAKI Motohiro; +Cc: linux-mm, LKML, Wu Fengguang, Andrew Morton

On Thu, Jun 11, 2009 at 07:28:30PM +0900, KOSAKI Motohiro wrote:
> Subject: [PATCH] fix vmscan-change-the-number-of-the-unmapped-files-in-zone-reclaim.patch 
> 
> 
> +	nr_unmapped_file_pages = zone_page_state(zone, NR_INACTIVE_FILE) +
> +				 zone_page_state(zone, NR_ACTIVE_FILE) -
> +				 zone_page_state(zone, NR_FILE_MAPPED);
> 
> is wrong. It can underflow because tmpfs pages are not counted in NR_*_FILE,
> but they are counted in NR_FILE_MAPPED.
> 
> Fix it here.
> 
> 
> Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
> Cc: Mel Gorman <mel@csn.ul.ie>
> Cc: Wu Fengguang <fengguang.wu@intel.com>
> ---
>  mm/vmscan.c |   32 ++++++++++++++++++++------------
>  1 file changed, 20 insertions(+), 12 deletions(-)
> 
> Index: b/mm/vmscan.c
> ===================================================================
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2333,6 +2333,23 @@ int sysctl_min_unmapped_ratio = 1;
>   */
>  int sysctl_min_slab_ratio = 5;
>  
> +static unsigned long zone_unmapped_file_pages(struct zone *zone)
> +{
> +	long nr_file_pages;
> +	long nr_file_mapped;
> +	long nr_unmapped_file_pages;
> +
> +	nr_file_pages = zone_page_state(zone, NR_INACTIVE_FILE) +
> +			zone_page_state(zone, NR_ACTIVE_FILE);
> +	nr_file_mapped = zone_page_state(zone, NR_FILE_MAPPED) -
> +			 zone_page_state(zone,
> +					NR_SWAP_BACKED_FILE_MAPPED);
> +	nr_unmapped_file_pages = nr_file_pages - nr_file_mapped;
> +
> +	return nr_unmapped_file_pages > 0 ? nr_unmapped_file_pages : 0;
> +}

This is a more accurate calculation, for sure. The question is: is it
necessary?

> +
> +
>  /*
>   * Try to free up some pages from this zone through reclaim.
>   */
> @@ -2355,7 +2372,6 @@ static int __zone_reclaim(struct zone *z
>  		.isolate_pages = isolate_pages_global,
>  	};
>  	unsigned long slab_reclaimable;
> -	long nr_unmapped_file_pages;
>  
>  	disable_swap_token();
>  	cond_resched();
> @@ -2368,11 +2384,7 @@ static int __zone_reclaim(struct zone *z
>  	reclaim_state.reclaimed_slab = 0;
>  	p->reclaim_state = &reclaim_state;
>  
> -	nr_unmapped_file_pages = zone_page_state(zone, NR_INACTIVE_FILE) +
> -				 zone_page_state(zone, NR_ACTIVE_FILE) -
> -				 zone_page_state(zone, NR_FILE_MAPPED);
> -
> -	if (nr_unmapped_file_pages > zone->min_unmapped_pages) {
> +	if (zone_unmapped_file_pages(zone) > zone->min_unmapped_pages) {
>  		/*
>  		 * Free memory by calling shrink zone with increasing
>  		 * priorities until we have enough memory freed.
> @@ -2419,8 +2431,7 @@ int zone_reclaim(struct zone *zone, gfp_
>  {
>  	int node_id;
>  	int ret;
> -	long nr_unmapped_file_pages;
> -	long nr_slab_reclaimable;
> +	unsigned long nr_slab_reclaimable;
>  
>  	/*
>  	 * Zone reclaim reclaims unmapped file backed pages and
> @@ -2432,11 +2443,8 @@ int zone_reclaim(struct zone *zone, gfp_
>  	 * if less than a specified percentage of the zone is used by
>  	 * unmapped file backed pages.
>  	 */
> -	nr_unmapped_file_pages = zone_page_state(zone, NR_INACTIVE_FILE) +
> -				 zone_page_state(zone, NR_ACTIVE_FILE) -
> -				 zone_page_state(zone, NR_FILE_MAPPED);
>  	nr_slab_reclaimable = zone_page_state(zone, NR_SLAB_RECLAIMABLE);
> -	if (nr_unmapped_file_pages <= zone->min_unmapped_pages &&
> +	if (zone_unmapped_file_pages(zone) <= zone->min_unmapped_pages &&
>  	    nr_slab_reclaimable <= zone->min_slab_pages)
>  		return 0;
>  
> 
> 

-- 
Mel Gorman
Part-time Phd Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab


* Re: [PATCH for mmotm 0/5] introduce swap-backed-file-mapped count and fix vmscan-change-the-number-of-the-unmapped-files-in-zone-reclaim.patch
  2009-06-11 10:53     ` Mel Gorman
@ 2009-06-11 11:32       ` KOSAKI Motohiro
  0 siblings, 0 replies; 17+ messages in thread
From: KOSAKI Motohiro @ 2009-06-11 11:32 UTC (permalink / raw)
  To: Mel Gorman; +Cc: kosaki.motohiro, Wu Fengguang, linux-mm, LKML, Andrew Morton

> On Thu, Jun 11, 2009 at 07:42:33PM +0900, KOSAKI Motohiro wrote:
> > > On Thu, Jun 11, 2009 at 07:25:09PM +0900, KOSAKI Motohiro wrote:
> > > > Recently, Wu Fengguang pointed out that vmscan-change-the-number-of-the-unmapped-files-in-zone-reclaim.patch
> > > > has an underflow problem.
> > > > 
> > > 
> > > Can you drop this aspect of the patchset please? I'm doing a final test
> > > on the scan-avoidance heuristic that incorporates this patch and the
> > > underflow fix. Ram (the tester of the malloc()-stall) confirms the patch
> > > fixes his problem.
> > 
> > OK.
> > Instead, I'll join in reviewing your patch :)
> > 
> 
> Thanks. You should have it now. In particular, I'm interested in hearing your
> opinion about patch 1 of the series "Fix malloc() stall in zone_reclaim()
> and bring behaviour more in line with expectations V3" and whether it addresses:
> 
> 1. Does patch 1 address the problem that first led you to develop the patch
> vmscan-change-the-number-of-the-unmapped-files-in-zone-reclaim.patch?

Yes, thanks. My original issue was:

1. A mem-hog process eats all pages in one node of a NUMA machine.
2. kswapd runs and creates many swapcache pages,
   which means NR_FILE_PAGES - NR_FILE_MAPPED increases.
3. Any page allocation then invokes zone reclaim, but the zone doesn't have
   any file-backed pages at all.

A distro kernel can reproduce this easily; a mainline kernel not so easily.
But I think the root cause remains: the (NR_FILE_PAGES - NR_FILE_MAPPED)
calculation.
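
One way to watch this from userspace (hypothetical numbers; counter names
as listed in /proc/vmstat, per-zone values appear in /proc/zoneinfo):

	$ egrep 'nr_(in)?active_file|nr_mapped|nr_file_pages' /proc/vmstat
	nr_inactive_file 120
	nr_active_file 40
	nr_mapped 7012
	nr_file_pages 45807

Here nr_inactive_file + nr_active_file - nr_mapped is already negative.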


> 2. Do you think patch 1 should merge with and replace
> vmscan-change-the-number-of-the-unmapped-files-in-zone-reclaim.patch?

My personal preference: your patch has a very good description and rewrites
almost all of mine.
Thus replacing is better. Can you please make a replacement patch?



> > > > This patch series introduces a new swap-backed-file-mapped vmstat counter and
> > > > uses it to fix the above patch.
> > 
> 
> I don't think the patch above needs to be fixed by another counter. At
> least, once the underflow was fixed up, it handled the malloc-stall without
> additional counters. If we need to account swap-backed-file-mapped, we need
> another failure case that it addresses to be sure we're doing the right thing.

OK, I'll drop this.




* Re: [PATCH for mmotm 2/5]
  2009-06-11 11:13   ` Mel Gorman
@ 2009-06-11 11:50     ` KOSAKI Motohiro
  2009-06-12 10:22       ` Mel Gorman
  0 siblings, 1 reply; 17+ messages in thread
From: KOSAKI Motohiro @ 2009-06-11 11:50 UTC (permalink / raw)
  To: Mel Gorman; +Cc: kosaki.motohiro, linux-mm, LKML, Wu Fengguang, Andrew Morton

> On Thu, Jun 11, 2009 at 07:26:48PM +0900, KOSAKI Motohiro wrote:
> > Changes since Wu's original patch
> >   - adding vmstat
> >   - rename NR_TMPFS_MAPPED to NR_SWAP_BACKED_FILE_MAPPED
> > 
> > 
> > ----------------------
> > Subject: [PATCH] introduce NR_SWAP_BACKED_FILE_MAPPED zone stat
> 
> This got lost in the actual subject line.
> 
> > A desirable zone reclaim implementation wants to know the number of
> > file-backed and unmapped pages.
> > 
> 
> There needs to be more justification for this. We need an example
> failure case that this addresses. For example, Patch 1 of my series was
> to address the following problem included with the patchset leader
> 
> "The reported problem was that malloc() stalled for a long time (minutes
> in some cases) if a large tmpfs mount was occupying a large percentage of
> memory overall. The pages did not get cleaned or reclaimed by zone_reclaim()
> because the zone_reclaim_mode was unsuitable, but the lists are uselessly
> scanned frequently making the CPU spin at near 100%."
> 
> We should have a similar case.
> 
> What "desirable" zone_reclaim() should be spelled out as well. Minimally
> something like
> 
> "For zone_reclaim() to be efficient, it must be able to detect in advance
> if the LRU scan will reclaim the necessary pages with the limitations of
> the current zone_reclaim_mode. Otherwise, CPU usage increases as
> zone_reclaim() uselessly scans the LRU list.
> 
> The problem with the heuristic is ....
> 
> This patch fixes the heuristic by ...."
> 
> etc?
> 
> I'm not trying to be awkward. I believe I provided similar reasoning
> with my own patchset.

You are right. My intention was not to fix an actual issue, only to fix a
documentation lie.

Documentation/sysctl/vm.txt says
=============================================================

min_unmapped_ratio:

This is available only on NUMA kernels.

A percentage of the total pages in each zone.  Zone reclaim will only
occur if more than this percentage of pages are file backed and unmapped.
This is to insure that a minimal amount of local pages is still available for
file I/O even if the node is overallocated.

The default is 1 percent.
==============================================================
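
For example, per this documentation, with the default of 1 percent a zone
of 2,000,000 pages (hypothetical number) should only enter zone reclaim
while more than 20,000 of its pages are file backed and unmapped.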

but the actual code does not account for the "percentage of pages that are
file backed and unmapped". An administrator can't imagine the current
implementation from this documentation.

Plus, I don't think this patch is too messy, so I decided to make
this fix.

If anyone provides a good documentation fix, my worry will vanish.



> > Thus, we need to know the number of swap-backed mapped pages to
> > calculate the above number.
> > 
> > 
> > Cc: Mel Gorman <mel@csn.ul.ie>
> > Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
> > Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
> > ---
> >  include/linux/mmzone.h |    2 ++
> >  mm/rmap.c              |    7 +++++++
> >  mm/vmstat.c            |    1 +
> >  3 files changed, 10 insertions(+)
> > 
> > Index: b/include/linux/mmzone.h
> > ===================================================================
> > --- a/include/linux/mmzone.h
> > +++ b/include/linux/mmzone.h
> > @@ -88,6 +88,8 @@ enum zone_stat_item {
> >  	NR_ANON_PAGES,	/* Mapped anonymous pages */
> >  	NR_FILE_MAPPED,	/* pagecache pages mapped into pagetables.
> >  			   only modified from process context */
> > +	NR_SWAP_BACKED_FILE_MAPPED, /* Similar to NR_FILE_MAPPED. but
> > +				       only account swap-backed pages */
> >  	NR_FILE_PAGES,
> >  	NR_FILE_DIRTY,
> >  	NR_WRITEBACK,
> > Index: b/mm/rmap.c
> > ===================================================================
> > --- a/mm/rmap.c
> > +++ b/mm/rmap.c
> > @@ -829,6 +829,10 @@ void page_add_file_rmap(struct page *pag
> >  {
> >  	if (atomic_inc_and_test(&page->_mapcount)) {
> >  		__inc_zone_page_state(page, NR_FILE_MAPPED);
> > +		if (PageSwapBacked(page))
> > +			__inc_zone_page_state(page,
> > +					      NR_SWAP_BACKED_FILE_MAPPED);
> > +
> >  		mem_cgroup_update_mapped_file_stat(page, 1);
> >  	}
> >  }
> > @@ -884,6 +888,9 @@ void page_remove_rmap(struct page *page)
> >  		__dec_zone_page_state(page, NR_ANON_PAGES);
> >  	} else {
> >  		__dec_zone_page_state(page, NR_FILE_MAPPED);
> > +		if (PageSwapBacked(page))
> > +			__dec_zone_page_state(page,
> > +					NR_SWAP_BACKED_FILE_MAPPED);
> >  	}
> >  	mem_cgroup_update_mapped_file_stat(page, -1);
> >  	/*
> > Index: b/mm/vmstat.c
> > ===================================================================
> > --- a/mm/vmstat.c
> > +++ b/mm/vmstat.c
> > @@ -633,6 +633,7 @@ static const char * const vmstat_text[] 
> >  	"nr_mlock",
> >  	"nr_anon_pages",
> >  	"nr_mapped",
> > +	"nr_swap_backed_file_mapped",
> >  	"nr_file_pages",
> >  	"nr_dirty",
> >  	"nr_writeback",
> > 
> 
> Otherwise the patch seems reasonable.
> 
> -- 
> Mel Gorman
> Part-time Phd Student                          Linux Technology Center
> University of Limerick                         IBM Dublin Software Lab




* Re: [PATCH for mmotm 1/5] cleanup page_remove_rmap()
  2009-06-11 10:26 ` [PATCH for mmotm 1/5] cleanup page_remove_rmap() KOSAKI Motohiro
  2009-06-11 11:01   ` Mel Gorman
@ 2009-06-11 23:21   ` Wu Fengguang
  1 sibling, 0 replies; 17+ messages in thread
From: Wu Fengguang @ 2009-06-11 23:21 UTC (permalink / raw)
  To: KOSAKI Motohiro; +Cc: linux-mm, LKML, Mel Gorman, Andrew Morton

On Thu, Jun 11, 2009 at 06:26:04PM +0800, KOSAKI Motohiro wrote:
> Subject: [PATCH] cleanup page_remove_rmap()
> 
> page_remove_rmap() tests PageAnon() multiple times and is nested a bit
> deeply.
> 
> Clean it up here.
> 
> Note: this patch does not change behavior.
> 
> 
> Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> Cc: Mel Gorman <mel@csn.ul.ie>
> Cc: Wu Fengguang <fengguang.wu@intel.com> 

> ---
>  mm/rmap.c |   59 ++++++++++++++++++++++++++++++++---------------------------
>  1 file changed, 32 insertions(+), 27 deletions(-)
> 
> Index: b/mm/rmap.c
> ===================================================================
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -862,34 +862,39 @@ void page_dup_rmap(struct page *page, st
>   */
>  void page_remove_rmap(struct page *page)
>  {
> -	if (atomic_add_negative(-1, &page->_mapcount)) {
> -		/*
> -		 * Now that the last pte has gone, s390 must transfer dirty
> -		 * flag from storage key to struct page.  We can usually skip
> -		 * this if the page is anon, so about to be freed; but perhaps
> -		 * not if it's in swapcache - there might be another pte slot
> -		 * containing the swap entry, but page not yet written to swap.
> -		 */
> -		if ((!PageAnon(page) || PageSwapCache(page)) &&
> -		    page_test_dirty(page)) {
> -			page_clear_dirty(page);
> -			set_page_dirty(page);
> -		}
> -		if (PageAnon(page))
> -			mem_cgroup_uncharge_page(page);
> -		__dec_zone_page_state(page,
> -			PageAnon(page) ? NR_ANON_PAGES : NR_FILE_MAPPED);
> -		mem_cgroup_update_mapped_file_stat(page, -1);
> -		/*
> -		 * It would be tidy to reset the PageAnon mapping here,
> -		 * but that might overwrite a racing page_add_anon_rmap
> -		 * which increments mapcount after us but sets mapping
> -		 * before us: so leave the reset to free_hot_cold_page,
> -		 * and remember that it's only reliable while mapped.
> -		 * Leaving it set also helps swapoff to reinstate ptes
> -		 * faster for those pages still in swapcache.
> -		 */
> +	if (!atomic_add_negative(-1, &page->_mapcount)) {
> +		/* the page is still mapped by someone else */
> +		return;
>  	}

Coding style nitpick.
I'd prefer to remove the brackets and move the comment above the line:

	/* page still mapped by someone else? */
	if (!atomic_add_negative(-1, &page->_mapcount))
		return;

Reviewed-by: Wu Fengguang <fengguang.wu@intel.com>


* Re: [PATCH for mmotm 2/5]
  2009-06-11 10:26 ` [PATCH for mmotm 2/5] KOSAKI Motohiro
  2009-06-11 11:13   ` Mel Gorman
@ 2009-06-11 23:29   ` Wu Fengguang
  1 sibling, 0 replies; 17+ messages in thread
From: Wu Fengguang @ 2009-06-11 23:29 UTC (permalink / raw)
  To: KOSAKI Motohiro; +Cc: linux-mm, LKML, Mel Gorman, Andrew Morton

On Thu, Jun 11, 2009 at 06:26:48PM +0800, KOSAKI Motohiro wrote:
> Changes since Wu's original patch
>   - adding vmstat
>   - rename NR_TMPFS_MAPPED to NR_SWAP_BACKED_FILE_MAPPED
> 
> 
> ----------------------
> Subject: [PATCH] introduce NR_SWAP_BACKED_FILE_MAPPED zone stat
> 
> A desirable zone reclaim implementation wants to know the number of
> file-backed and unmapped pages.
> 
> Thus, we need to know the number of swap-backed mapped pages to
> calculate the above number.
> 
> 
> Cc: Mel Gorman <mel@csn.ul.ie>
> Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
> Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
> ---
>  include/linux/mmzone.h |    2 ++
>  mm/rmap.c              |    7 +++++++
>  mm/vmstat.c            |    1 +
>  3 files changed, 10 insertions(+)
> 
> Index: b/include/linux/mmzone.h
> ===================================================================
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -88,6 +88,8 @@ enum zone_stat_item {
>  	NR_ANON_PAGES,	/* Mapped anonymous pages */
>  	NR_FILE_MAPPED,	/* pagecache pages mapped into pagetables.
>  			   only modified from process context */
> +	NR_SWAP_BACKED_FILE_MAPPED, /* Similar to NR_FILE_MAPPED. but

Comment it as "a subset of NR_FILE_MAPPED"?

Why move this 'cold' item to the first hot cache line?

> +				       only account swap-backed pages */
>  	NR_FILE_PAGES,
>  	NR_FILE_DIRTY,
>  	NR_WRITEBACK,
> Index: b/mm/rmap.c
> ===================================================================
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -829,6 +829,10 @@ void page_add_file_rmap(struct page *pag
>  {
>  	if (atomic_inc_and_test(&page->_mapcount)) {
>  		__inc_zone_page_state(page, NR_FILE_MAPPED);
> +		if (PageSwapBacked(page))
> +			__inc_zone_page_state(page,
> +					      NR_SWAP_BACKED_FILE_MAPPED);
> +

The line wrapping is not necessary here.

>  		mem_cgroup_update_mapped_file_stat(page, 1);
>  	}
>  }
> @@ -884,6 +888,9 @@ void page_remove_rmap(struct page *page)
>  		__dec_zone_page_state(page, NR_ANON_PAGES);
>  	} else {
>  		__dec_zone_page_state(page, NR_FILE_MAPPED);
> +		if (PageSwapBacked(page))
> +			__dec_zone_page_state(page,
> +					NR_SWAP_BACKED_FILE_MAPPED);

ditto.

>  	}
>  	mem_cgroup_update_mapped_file_stat(page, -1);
>  	/*
> Index: b/mm/vmstat.c
> ===================================================================
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -633,6 +633,7 @@ static const char * const vmstat_text[] 
>  	"nr_mlock",
>  	"nr_anon_pages",
>  	"nr_mapped",
> +	"nr_swap_backed_file_mapped",

An overlong name; in my updated patch, I do it this way.

        "nr_bounce",
        "nr_vmscan_write",
        "nr_writeback_temp",
+       "nr_mapped_swapbacked",
 

The "mapped" comes first because I want to emphasis that
this is a subset of nr_mapped.
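
To make the subset relationship concrete: a consumer such as zone_reclaim()
could derive the number of unmapped file-backed pages from the two counters
along these lines. This is a sketch under the naming in KOSAKI's patch;
zone_unmapped_file_pages() is an illustrative helper name, not something
from the posted series:

	static unsigned long zone_unmapped_file_pages(struct zone *zone)
	{
		/*
		 * Pages mapped from regular files only: tmpfs/shmem pages
		 * are swap backed and sit on the anon LRU, so exclude them.
		 */
		unsigned long file_mapped =
			zone_page_state(zone, NR_FILE_MAPPED) -
			zone_page_state(zone, NR_SWAP_BACKED_FILE_MAPPED);
		unsigned long file_lru =
			zone_page_state(zone, NR_INACTIVE_FILE) +
			zone_page_state(zone, NR_ACTIVE_FILE);

		/*
		 * Per-cpu counter drift can make the difference transiently
		 * negative, hence the explicit underflow check.
		 */
		return file_lru > file_mapped ? file_lru - file_mapped : 0;
	}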

Thanks,
Fengguang


* Re: [PATCH for mmotm 2/5]
  2009-06-11 11:50     ` KOSAKI Motohiro
@ 2009-06-12 10:22       ` Mel Gorman
  0 siblings, 0 replies; 17+ messages in thread
From: Mel Gorman @ 2009-06-12 10:22 UTC (permalink / raw)
  To: KOSAKI Motohiro; +Cc: linux-mm, LKML, Wu Fengguang, Andrew Morton

On Thu, Jun 11, 2009 at 08:50:06PM +0900, KOSAKI Motohiro wrote:
> > On Thu, Jun 11, 2009 at 07:26:48PM +0900, KOSAKI Motohiro wrote:
> > > Changes since Wu's original patch
> > >   - adding vmstat
> > >   - rename NR_TMPFS_MAPPED to NR_SWAP_BACKED_FILE_MAPPED
> > > 
> > > 
> > > ----------------------
> > > Subject: [PATCH] introduce NR_SWAP_BACKED_FILE_MAPPED zone stat
> > 
> > This got lost in the actual subject line.
> > 
> > > A desirable zone reclaim implementation wants to know the number of
> > > file-backed and unmapped pages.
> > > 
> > 
> > There needs to be more justification for this. We need an example
> > failure case that this addresses. For example, Patch 1 of my series was
> > to address the following problem included with the patchset leader
> > 
> > "The reported problem was that malloc() stalled for a long time (minutes
> > in some cases) if a large tmpfs mount was occupying a large percentage of
> > memory overall. The pages did not get cleaned or reclaimed by zone_reclaim()
> > because the zone_reclaim_mode was unsuitable, but the lists are uselessly
> > scanned frequently, making the CPU spin at near 100%."
> > 
> > We should have a similar case.
> > 
> > What "desirable" zone_reclaim() means should be spelled out as well. Minimally
> > something like
> > 
> > "For zone_reclaim() to be efficient, it must be able to detect in advance
> > if the LRU scan will reclaim the necessary pages with the limitations of
> > the current zone_reclaim_mode. Otherwise, the CPU usage is increases as
> > zone_reclaim() uselessly scans the LRU list.
> > 
> > The problem with the heuristic is ....
> > 
> > This patch fixes the heuristic by ...."
> > 
> > etc?
> > 
> > I'm not trying to be awkward. I believe I provided similar reasoning
> > with my own patchset.
> 
> You are right. My intention is not to fix an actual issue; it only fixes
> a documentation lie.
> 
> Documentation/sysctl/vm.txt says
> =============================================================
> 
> min_unmapped_ratio:
> 
> This is available only on NUMA kernels.
> 
> A percentage of the total pages in each zone.  Zone reclaim will only
> occur if more than this percentage of pages are file backed and unmapped.
> This is to insure that a minimal amount of local pages is still available for
> file I/O even if the node is overallocated.
> 
> The default is 1 percent.
> ==============================================================
> 
> but the actual code doesn't account for the "percentage of file backed and
> unmapped" pages. Administrators can't imagine the current implementation
> from this documentation.
> 

That's a good point. I've suggested alternative documentation in another
thread.
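
For reference, the behaviour the documentation describes would correspond
to a gate in zone_reclaim() along these lines -- a sketch reusing the
hypothetical zone_unmapped_file_pages() helper from earlier in the thread,
where zone->min_unmapped_pages is the per-zone page count derived from
min_unmapped_ratio:

	/* skip the zone if it has too few unmapped file-backed pages */
	if (zone_unmapped_file_pages(zone) <= zone->min_unmapped_pages)
		return 0;	/* tell the caller nothing was reclaimed */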

> Plus, I don't think this patch is too messy, so I decided to make
> this fix.
> 
> If anyone provides a good documentation fix, my worry will vanish.
> 

Hopefully your worry has vanished.

While I have no objection to the patch as such, I would like to know
what it's fixing. Believe me, if the scan-heuristic breaks again, this
patch would be one of the first things I'd consider as a fix :/

> 
> 
> > > Thus, we need to know the number of swap-backed mapped pages to
> > > calculate the above number.
> > > 
> > > 
> > > Cc: Mel Gorman <mel@csn.ul.ie>
> > > Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
> > > Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
> > > ---
> > >  include/linux/mmzone.h |    2 ++
> > >  mm/rmap.c              |    7 +++++++
> > >  mm/vmstat.c            |    1 +
> > >  3 files changed, 10 insertions(+)
> > > 
> > > Index: b/include/linux/mmzone.h
> > > ===================================================================
> > > --- a/include/linux/mmzone.h
> > > +++ b/include/linux/mmzone.h
> > > @@ -88,6 +88,8 @@ enum zone_stat_item {
> > >  	NR_ANON_PAGES,	/* Mapped anonymous pages */
> > >  	NR_FILE_MAPPED,	/* pagecache pages mapped into pagetables.
> > >  			   only modified from process context */
> > > +	NR_SWAP_BACKED_FILE_MAPPED, /* Similar to NR_FILE_MAPPED. but
> > > +				       only account swap-backed pages */
> > >  	NR_FILE_PAGES,
> > >  	NR_FILE_DIRTY,
> > >  	NR_WRITEBACK,
> > > Index: b/mm/rmap.c
> > > ===================================================================
> > > --- a/mm/rmap.c
> > > +++ b/mm/rmap.c
> > > @@ -829,6 +829,10 @@ void page_add_file_rmap(struct page *pag
> > >  {
> > >  	if (atomic_inc_and_test(&page->_mapcount)) {
> > >  		__inc_zone_page_state(page, NR_FILE_MAPPED);
> > > +		if (PageSwapBacked(page))
> > > +			__inc_zone_page_state(page,
> > > +					      NR_SWAP_BACKED_FILE_MAPPED);
> > > +
> > >  		mem_cgroup_update_mapped_file_stat(page, 1);
> > >  	}
> > >  }
> > > @@ -884,6 +888,9 @@ void page_remove_rmap(struct page *page)
> > >  		__dec_zone_page_state(page, NR_ANON_PAGES);
> > >  	} else {
> > >  		__dec_zone_page_state(page, NR_FILE_MAPPED);
> > > +		if (PageSwapBacked(page))
> > > +			__dec_zone_page_state(page,
> > > +					NR_SWAP_BACKED_FILE_MAPPED);
> > >  	}
> > >  	mem_cgroup_update_mapped_file_stat(page, -1);
> > >  	/*
> > > Index: b/mm/vmstat.c
> > > ===================================================================
> > > --- a/mm/vmstat.c
> > > +++ b/mm/vmstat.c
> > > @@ -633,6 +633,7 @@ static const char * const vmstat_text[] 
> > >  	"nr_mlock",
> > >  	"nr_anon_pages",
> > >  	"nr_mapped",
> > > +	"nr_swap_backed_file_mapped",
> > >  	"nr_file_pages",
> > >  	"nr_dirty",
> > >  	"nr_writeback",
> > > 
> > 
> > Otherwise the patch seems reasonable.
> > 
> > -- 
> > Mel Gorman
> > Part-time Phd Student                          Linux Technology Center
> > University of Limerick                         IBM Dublin Software Lab
> 
> 
> 

-- 
Mel Gorman
Part-time Phd Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab


Thread overview: 17+ messages
2009-06-11 10:25 [PATCH for mmotm 0/5] introduce swap-backed-file-mapped count and fix vmscan-change-the-number-of-the-unmapped-files-in-zone-reclaim.patch KOSAKI Motohiro
2009-06-11 10:26 ` [PATCH for mmotm 1/5] cleanp page_remove_rmap() KOSAKI Motohiro
2009-06-11 11:01   ` Mel Gorman
2009-06-11 23:21   ` Wu Fengguang
2009-06-11 10:26 ` [PATCH for mmotm 2/5] KOSAKI Motohiro
2009-06-11 11:13   ` Mel Gorman
2009-06-11 11:50     ` KOSAKI Motohiro
2009-06-12 10:22       ` Mel Gorman
2009-06-11 23:29   ` Wu Fengguang
2009-06-11 10:27 ` [PATCH for mmotm 3/5] add Mapped(SwapBacked) field to /proc/meminfo KOSAKI Motohiro
2009-06-11 10:27 ` [PATCH for mmotm 4/5] adjust fields length of /proc/meminfo KOSAKI Motohiro
2009-06-11 10:28 ` [PATCH for mmotm 5/5] fix vmscan-change-the-number-of-the-unmapped-files-in-zone-reclaim.patch KOSAKI Motohiro
2009-06-11 11:15   ` Mel Gorman
2009-06-11 10:38 ` [PATCH for mmotm 0/5] introduce swap-backed-file-mapped count and " Mel Gorman
2009-06-11 10:42   ` KOSAKI Motohiro
2009-06-11 10:53     ` Mel Gorman
2009-06-11 11:32       ` KOSAKI Motohiro
