From: Minchan Kim
Subject: [PATCH v3 05/10] vmscan: make isolate_lru_page with filter aware
Date: Tue, 7 Jun 2011 23:38:18 +0900
To: Andrew Morton
Cc: linux-mm, LKML, KOSAKI Motohiro, Mel Gorman, Andrea Arcangeli,
	Rik van Riel, Johannes Weiner, KAMEZAWA Hiroyuki, Minchan Kim

In the __zone_reclaim case, we do not want to shrink mapped pages.
Nonetheless, we isolate mapped pages and then just re-add them to the
head of the LRU.  That is unnecessary CPU overhead and it churns the
LRU.

Of course, a page might be mapped when we isolate it but no longer
mapped by the time we try to migrate it, so it could still be
migrated.  That race is rare, though, and even when it happens it is
no big deal.

Cc: KOSAKI Motohiro
Cc: Mel Gorman
Cc: Rik van Riel
Cc: Andrea Arcangeli
Reviewed-by: KAMEZAWA Hiroyuki
Acked-by: Johannes Weiner
Signed-off-by: Minchan Kim
---
 mm/vmscan.c |   17 +++++++++++++++--
 1 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 26aa627..c08911d 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1408,6 +1408,12 @@ shrink_inactive_list(unsigned long nr_to_scan, struct zone *zone,
 		reclaim_mode |= ISOLATE_ACTIVE;
 
 	lru_add_drain();
+
+	if (!sc->may_unmap)
+		reclaim_mode |= ISOLATE_UNMAPPED;
+	if (!sc->may_writepage)
+		reclaim_mode |= ISOLATE_CLEAN;
+
 	spin_lock_irq(&zone->lru_lock);
 
 	if (scanning_global_lru(sc)) {
@@ -1525,19 +1531,26 @@ static void shrink_active_list(unsigned long nr_pages, struct zone *zone,
 	struct page *page;
 	struct zone_reclaim_stat *reclaim_stat = get_reclaim_stat(zone, sc);
 	unsigned long nr_rotated = 0;
+	enum ISOLATE_MODE reclaim_mode = ISOLATE_ACTIVE;
 
 	lru_add_drain();
+
+	if (!sc->may_unmap)
+		reclaim_mode |= ISOLATE_UNMAPPED;
+	if (!sc->may_writepage)
+		reclaim_mode |= ISOLATE_CLEAN;
+
 	spin_lock_irq(&zone->lru_lock);
 	if (scanning_global_lru(sc)) {
 		nr_taken = isolate_pages_global(nr_pages, &l_hold,
 						&pgscanned, sc->order,
-						ISOLATE_ACTIVE, zone,
+						reclaim_mode, zone,
 						1, file);
 		zone->pages_scanned += pgscanned;
 	} else {
 		nr_taken = mem_cgroup_isolate_pages(nr_pages, &l_hold,
 						&pgscanned, sc->order,
-						ISOLATE_ACTIVE, zone,
+						reclaim_mode, zone,
 						sc->mem_cgroup, 1, file);
 		/*
 		 * mem_cgroup_isolate_pages() keeps track of
-- 
1.7.0.4
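
[Editor's note] For readers skimming the archive, here is a minimal
user-space sketch of the idea the changelog describes: the caller's scan
constraints are folded into the isolation mode up front, so pages the
current pass could not reclaim anyway (mapped pages when may_unmap is
clear, dirty pages when may_writepage is clear) are never taken off the
LRU only to be rotated back.  This is not the kernel implementation; the
names isolate_mode_bits, scan_ctl, page_info, build_isolate_mode and
can_isolate are illustrative stand-ins for the kernel's ISOLATE_* flags,
struct scan_control and struct page.

/*
 * Illustrative sketch only -- mirrors the filter-aware isolation idea of
 * the patch, not mm/vmscan.c itself.
 */
#include <stdbool.h>
#include <stdio.h>

enum isolate_mode_bits {
	ISO_INACTIVE = 1 << 0,	/* take inactive pages            */
	ISO_ACTIVE   = 1 << 1,	/* take active pages              */
	ISO_UNMAPPED = 1 << 2,	/* take only unmapped pages       */
	ISO_CLEAN    = 1 << 3,	/* take only clean pages          */
};

struct scan_ctl {
	bool may_unmap;		/* false in the __zone_reclaim case */
	bool may_writepage;
};

struct page_info {
	bool mapped;
	bool dirty;
};

/* Build the isolation filter from the caller's constraints. */
static int build_isolate_mode(const struct scan_ctl *sc, int base)
{
	int mode = base;

	if (!sc->may_unmap)
		mode |= ISO_UNMAPPED;
	if (!sc->may_writepage)
		mode |= ISO_CLEAN;
	return mode;
}

/* Refuse pages this reclaim pass would only put back on the LRU. */
static bool can_isolate(const struct page_info *page, int mode)
{
	if ((mode & ISO_UNMAPPED) && page->mapped)
		return false;
	if ((mode & ISO_CLEAN) && page->dirty)
		return false;
	return true;
}

int main(void)
{
	struct scan_ctl sc = { .may_unmap = false, .may_writepage = true };
	struct page_info mapped_page = { .mapped = true, .dirty = false };
	int mode = build_isolate_mode(&sc, ISO_ACTIVE);

	/* Prints "no": the mapped page is skipped instead of churned. */
	printf("isolate mapped page? %s\n",
	       can_isolate(&mapped_page, mode) ? "yes" : "no");
	return 0;
}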