* [PATCH -mm] throttle direct reclaim when too many pages are isolated already
@ 2009-07-16  2:38 Rik van Riel
  2009-07-16  2:48 ` Andrew Morton
  2009-07-16  3:19 ` [PATCH -mm] throttle direct reclaim when too many pages are isolated already KAMEZAWA Hiroyuki
  0 siblings, 2 replies; 19+ messages in thread
From: Rik van Riel @ 2009-07-16  2:38 UTC (permalink / raw)
  To: KOSAKI Motohiro; +Cc: LKML, linux-mm, Andrew Morton, Wu Fengguang

When far too many processes enter direct reclaim at once, it is
possible for all of the pages to be isolated off the LRU lists.  One
consequence is that the next process entering the page reclaim code
finds no reclaimable pages left and triggers an out-of-memory kill.

One solution to this problem is to never let so many processes into
the page reclaim path that the entire LRU can be emptied.  Limiting
the system so that at most half of each inactive list is isolated
for reclaim at any one time should be safe.
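
To make the "half of each inactive list" bound concrete, here is a
hedged illustration (compile-and-run userspace C, not kernel code;
the 1000-page starting point is invented for the example):

#include <stdio.h>

/* The patch's predicate, lifted out of the kernel for illustration. */
static int too_many_isolated(unsigned long inactive, unsigned long isolated)
{
	return isolated > inactive;
}

int main(void)
{
	/*
	 * Isolating a page moves one unit from the inactive counter to
	 * the isolated counter, so the two always sum to the original
	 * 1000 pages.
	 */
	for (unsigned long n = 0; n <= 1000; n += 100)
		printf("isolated=%4lu inactive=%4lu throttled=%d\n",
		       n, 1000 - n, too_many_isolated(1000 - n, n));
	return 0;
}

The flag first prints as 1 at the isolated=600 sample (600 > 400);
the exact cutoff is crossed just past 500 pages, i.e. half of the
original list.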

Signed-off-by: Rik van Riel <riel@redhat.com>
---
This patch goes on top of Kosaki's "Account the number of isolated pages"
patch series.

 mm/vmscan.c |   25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

Index: mmotm/mm/vmscan.c
===================================================================
--- mmotm.orig/mm/vmscan.c	2009-07-08 21:37:01.000000000 -0400
+++ mmotm/mm/vmscan.c	2009-07-08 21:39:02.000000000 -0400
@@ -1035,6 +1035,27 @@ int isolate_lru_page(struct page *page)
 }
 
 /*
+ * Are there way too many processes in the direct reclaim path already?
+ */
+static int too_many_isolated(struct zone *zone, int file)
+{
+	unsigned long inactive, isolated;
+
+	if (current_is_kswapd())
+		return 0;
+
+	if (file) {
+		inactive = zone_page_state(zone, NR_INACTIVE_FILE);
+		isolated = zone_page_state(zone, NR_ISOLATED_FILE);
+	} else {
+		inactive = zone_page_state(zone, NR_INACTIVE_ANON);
+		isolated = zone_page_state(zone, NR_ISOLATED_ANON);
+	}
+
+	return isolated > inactive;
+}
+
+/*
  * shrink_inactive_list() is a helper for shrink_zone().  It returns the number
  * of reclaimed pages
  */
@@ -1049,6 +1070,10 @@ static unsigned long shrink_inactive_lis
 	struct zone_reclaim_stat *reclaim_stat = get_reclaim_stat(zone, sc);
 	int lumpy_reclaim = 0;
 
+	while (unlikely(too_many_isolated(zone, file))) {
+		schedule_timeout_interruptible(HZ/10);
+	}
+
 	/*
 	 * If we need a large contiguous chunk of memory, or have
 	 * trouble getting a small set of contiguous pages, we

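For readers who want to poke at the behaviour outside a kernel tree,
below is a hedged simulation of the scheme (userspace C with pthreads;
every name and number is invented for illustration, and the 100 ms
usleep() stands in for schedule_timeout_interruptible(HZ/10), which
sleeps for roughly a tenth of a second per iteration):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NTHREADS 8
#define BATCH    32

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long inactive = 200;	/* pages on the inactive list */
static unsigned long isolated;		/* pages taken off for reclaim */
static unsigned long max_isolated;	/* high-water mark observed */

static int too_many_isolated(void)
{
	return isolated > inactive;	/* the patch's predicate */
}

static void *direct_reclaimer(void *arg)
{
	(void)arg;
	for (int i = 0; i < 100; i++) {
		pthread_mutex_lock(&lock);
		while (too_many_isolated()) {
			/* Throttle: drop the lock, wait ~100 ms, recheck. */
			pthread_mutex_unlock(&lock);
			usleep(100 * 1000);
			pthread_mutex_lock(&lock);
		}
		unsigned long take = inactive < BATCH ? inactive : BATCH;
		inactive -= take;	/* isolate a batch off the list */
		isolated += take;
		if (isolated > max_isolated)
			max_isolated = isolated;
		pthread_mutex_unlock(&lock);

		usleep(1000);		/* pretend to reclaim the batch */

		pthread_mutex_lock(&lock);
		isolated -= take;	/* the batch leaves isolation and,
					 * here, returns to the list so the
					 * simulation stays bounded */
		inactive += take;
		pthread_mutex_unlock(&lock);
	}
	return NULL;
}

int main(void)
{
	pthread_t tid[NTHREADS];

	for (int i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, direct_reclaimer, NULL);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);

	printf("max isolated: %lu of 200 pages\n", max_isolated);
	return 0;
}

Without the while loop, eight reclaimers taking 32 pages each can
momentarily empty the whole 200-page list, which is exactly the OOM
scenario described above.  With it, the check and the take happen
under one lock, so the high-water mark stays at or below about half
the list plus one batch (128 of 200 here).  Note that revised
versions of this patch (v2, v3) follow later in the thread.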


Thread overview: 19+ messages
2009-07-16  2:38 [PATCH -mm] throttle direct reclaim when too many pages are isolated already Rik van Riel
2009-07-16  2:48 ` Andrew Morton
2009-07-16  3:10   ` Rik van Riel
2009-07-16  3:21     ` Andrew Morton
2009-07-16  3:28       ` Rik van Riel
2009-07-16  3:38         ` Andrew Morton
2009-07-16  3:42           ` Rik van Riel
2009-07-16  3:51             ` Andrew Morton
2009-07-16  3:53           ` [PATCH -mm] throttle direct reclaim when too many pages are isolated already (v3) Rik van Riel
2009-07-16  4:02             ` Andrew Morton
2009-07-16  4:09               ` Rik van Riel
2009-07-16  4:26                 ` Andrew Morton
2009-07-29 15:04             ` Pavel Machek
2009-07-29 16:19               ` Rik van Riel
2009-07-16  3:36   ` [PATCH -mm] throttle direct reclaim when too many pages are isolated already (v2) Rik van Riel
2009-07-16  3:19 ` [PATCH -mm] throttle direct reclaim when too many pages are isolated already KAMEZAWA Hiroyuki
2009-07-16  3:32   ` Rik van Riel
2009-07-16  3:42     ` KAMEZAWA Hiroyuki
2009-07-16  3:47       ` Rik van Riel
