* [PATCH v2] shrink_all_memory() use sc.nr_reclaimed
@ 2009-02-12 7:33 MinChan Kim
2009-02-12 9:31 ` MinChan Kim
2009-02-12 11:25 ` Johannes Weiner
0 siblings, 2 replies; 4+ messages in thread
From: MinChan Kim @ 2009-02-12 7:33 UTC (permalink / raw)
To: Andrew Morton, KOSAKI Motohiro
Cc: linux-mm, LKML, Johannes Weiner, Rafael J. Wysocki, Rik van Riel
Impact: cleanup
Commit a79311c14eae4bb946a97af25f3e1b17d625985d "vmscan: bail out of
direct reclaim after swap_cluster_max pages" moved the nr_reclaimed
counter into the scan control to accumulate the number of all
reclaimed pages in a reclaim invocation.
shrink_all_memory() can use the same mechanism; it increases code
consistency and readability.
It's based on mmotm 2009-02-11-17-15.
Signed-off-by: MinChan Kim <minchan.kim@gmail.com>
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Cc: Rik van Riel <riel@redhat.com>
---
mm/vmscan.c | 51 ++++++++++++++++++++++++++++++---------------------
1 files changed, 30 insertions(+), 21 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index ae4202b..caa2de5 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2055,16 +2055,15 @@ unsigned long global_lru_pages(void)
#ifdef CONFIG_PM
/*
* Helper function for shrink_all_memory(). Tries to reclaim 'nr_pages' pages
- * from LRU lists system-wide, for given pass and priority, and returns the
- * number of reclaimed pages
+ * from LRU lists system-wide, for given pass and priority.
*
* For pass > 3 we also try to shrink the LRU lists that contain a few pages
*/
-static unsigned long shrink_all_zones(unsigned long nr_pages, int prio,
+static void shrink_all_zones(unsigned long nr_pages, int prio,
int pass, struct scan_control *sc)
{
struct zone *zone;
- unsigned long ret = 0;
+ unsigned long nr_reclaimed = 0;
for_each_populated_zone(zone) {
enum lru_list l;
@@ -2087,14 +2086,16 @@ static unsigned long shrink_all_zones(unsigned long nr_pages, int prio,
zone->lru[l].nr_scan = 0;
nr_to_scan = min(nr_pages, lru_pages);
- ret += shrink_list(l, nr_to_scan, zone,
+ nr_reclaimed += shrink_list(l, nr_to_scan, zone,
sc, prio);
- if (ret >= nr_pages)
- return ret;
+ if (nr_reclaimed >= nr_pages) {
+ sc->nr_reclaimed = nr_reclaimed;
+ return;
+ }
}
}
}
- return ret;
+ sc->nr_reclaimed = nr_reclaimed;
}
/*
@@ -2126,13 +2127,15 @@ unsigned long shrink_all_memory(unsigned long nr_pages)
/* If slab caches are huge, it's better to hit them first */
while (nr_slab >= lru_pages) {
reclaim_state.reclaimed_slab = 0;
- shrink_slab(nr_pages, sc.gfp_mask, lru_pages);
+ shrink_slab(sc.swap_cluster_max, sc.gfp_mask, lru_pages);
if (!reclaim_state.reclaimed_slab)
break;
- ret += reclaim_state.reclaimed_slab;
- if (ret >= nr_pages)
+ sc.nr_reclaimed += reclaim_state.reclaimed_slab;
+ if (sc.nr_reclaimed >= sc.swap_cluster_max) {
+ ret = sc.nr_reclaimed;
goto out;
+ }
nr_slab -= reclaim_state.reclaimed_slab;
}
@@ -2153,19 +2156,23 @@ unsigned long shrink_all_memory(unsigned long nr_pages)
sc.may_unmap = 1;
for (prio = DEF_PRIORITY; prio >= 0; prio--) {
- unsigned long nr_to_scan = nr_pages - ret;
+ unsigned long nr_to_scan = sc.swap_cluster_max - sc.nr_reclaimed;
sc.nr_scanned = 0;
- ret += shrink_all_zones(nr_to_scan, prio, pass, &sc);
- if (ret >= nr_pages)
+ shrink_all_zones(nr_to_scan, prio, pass, &sc);
+ if (sc.nr_reclaimed >= sc.swap_cluster_max) {
+ ret = sc.nr_reclaimed;
goto out;
+ }
reclaim_state.reclaimed_slab = 0;
shrink_slab(sc.nr_scanned, sc.gfp_mask,
global_lru_pages());
- ret += reclaim_state.reclaimed_slab;
- if (ret >= nr_pages)
+ sc.nr_reclaimed += reclaim_state.reclaimed_slab;
+ if (sc.nr_reclaimed >= sc.swap_cluster_max) {
+ ret = sc.nr_reclaimed;
goto out;
+ }
if (sc.nr_scanned && prio < DEF_PRIORITY - 2)
congestion_wait(WRITE, HZ / 10);
@@ -2173,17 +2180,19 @@ unsigned long shrink_all_memory(unsigned long nr_pages)
}
/*
- * If ret = 0, we could not shrink LRUs, but there may be something
+ * If sc.nr_reclaimed = 0, we could not shrink LRUs, but there may be something
* in slab caches
*/
- if (!ret) {
+ if (!sc.nr_reclaimed) {
do {
reclaim_state.reclaimed_slab = 0;
- shrink_slab(nr_pages, sc.gfp_mask, global_lru_pages());
- ret += reclaim_state.reclaimed_slab;
- } while (ret < nr_pages && reclaim_state.reclaimed_slab > 0);
+ shrink_slab(sc.swap_cluster_max, sc.gfp_mask, global_lru_pages());
+ sc.nr_reclaimed += reclaim_state.reclaimed_slab;
+ } while (sc.nr_reclaimed < sc.swap_cluster_max && reclaim_state.reclaimed_slab > 0);
}
+ ret = sc.nr_reclaimed;
+
out:
current->reclaim_state = NULL;
--
1.5.4.3
--
Kind regards,
MinChan Kim
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org
* Re: [PATCH v2] shrink_all_memory() use sc.nr_reclaimed
2009-02-12 7:33 [PATCH v2] shrink_all_memory() use sc.nr_reclaimed MinChan Kim
@ 2009-02-12 9:31 ` MinChan Kim
2009-02-12 11:25 ` Johannes Weiner
1 sibling, 0 replies; 4+ messages in thread
From: MinChan Kim @ 2009-02-12 9:31 UTC (permalink / raw)
To: Andrew Morton, KOSAKI Motohiro
Cc: linux-mm, LKML, Johannes Weiner, Rafael J. Wysocki, Rik van Riel
On Thu, Feb 12, 2009 at 4:33 PM, MinChan Kim <minchan.kim@gmail.com> wrote:
>
> Impact: cleanup
>
> Commit a79311c14eae4bb946a97af25f3e1b17d625985d "vmscan: bail out of
> direct reclaim after swap_cluster_max pages" moved the nr_reclaimed
> counter into the scan control to accumulate the number of all
> reclaimed pages in a reclaim invocation.
>
> shrink_all_memory() can use the same mechanism; it increases code
> consistency and readability.
>
> It's based on mmotm 2009-02-11-17-15.
Andrew, sorry for the confusion.
The base tree above is wrong: it's actually based on mmotm 2009-02-11-18-32.
> Signed-off-by: MinChan Kim <minchan.kim@gmail.com>
> Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
> Cc: Rik van Riel <riel@redhat.com>
>
>
> ---
> mm/vmscan.c | 51 ++++++++++++++++++++++++++++++---------------------
> 1 files changed, 30 insertions(+), 21 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index ae4202b..caa2de5 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2055,16 +2055,15 @@ unsigned long global_lru_pages(void)
> #ifdef CONFIG_PM
> /*
> * Helper function for shrink_all_memory(). Tries to reclaim 'nr_pages' pages
> - * from LRU lists system-wide, for given pass and priority, and returns the
> - * number of reclaimed pages
> + * from LRU lists system-wide, for given pass and priority.
> *
> * For pass > 3 we also try to shrink the LRU lists that contain a few pages
> */
> -static unsigned long shrink_all_zones(unsigned long nr_pages, int prio,
> +static void shrink_all_zones(unsigned long nr_pages, int prio,
> int pass, struct scan_control *sc)
> {
> struct zone *zone;
> - unsigned long ret = 0;
> + unsigned long nr_reclaimed = 0;
>
> for_each_populated_zone(zone) {
> enum lru_list l;
> @@ -2087,14 +2086,16 @@ static unsigned long shrink_all_zones(unsigned long nr_pages, int prio,
>
> zone->lru[l].nr_scan = 0;
> nr_to_scan = min(nr_pages, lru_pages);
> - ret += shrink_list(l, nr_to_scan, zone,
> + nr_reclaimed += shrink_list(l, nr_to_scan, zone,
> sc, prio);
> - if (ret >= nr_pages)
> - return ret;
> + if (nr_reclaimed >= nr_pages) {
> + sc->nr_reclaimed = nr_reclaimed;
> + return;
> + }
> }
> }
> }
> - return ret;
> + sc->nr_reclaimed = nr_reclaimed;
> }
>
> /*
> @@ -2126,13 +2127,15 @@ unsigned long shrink_all_memory(unsigned long nr_pages)
> /* If slab caches are huge, it's better to hit them first */
> while (nr_slab >= lru_pages) {
> reclaim_state.reclaimed_slab = 0;
> - shrink_slab(nr_pages, sc.gfp_mask, lru_pages);
> + shrink_slab(sc.swap_cluster_max, sc.gfp_mask, lru_pages);
> if (!reclaim_state.reclaimed_slab)
> break;
>
> - ret += reclaim_state.reclaimed_slab;
> - if (ret >= nr_pages)
> + sc.nr_reclaimed += reclaim_state.reclaimed_slab;
> + if (sc.nr_reclaimed >= sc.swap_cluster_max) {
> + ret = sc.nr_reclaimed;
> goto out;
> + }
>
> nr_slab -= reclaim_state.reclaimed_slab;
> }
> @@ -2153,19 +2156,23 @@ unsigned long shrink_all_memory(unsigned long nr_pages)
> sc.may_unmap = 1;
>
> for (prio = DEF_PRIORITY; prio >= 0; prio--) {
> - unsigned long nr_to_scan = nr_pages - ret;
> + unsigned long nr_to_scan = sc.swap_cluster_max - sc.nr_reclaimed;
>
> sc.nr_scanned = 0;
> - ret += shrink_all_zones(nr_to_scan, prio, pass, &sc);
> - if (ret >= nr_pages)
> + shrink_all_zones(nr_to_scan, prio, pass, &sc);
> + if (sc.nr_reclaimed >= sc.swap_cluster_max) {
> + ret = sc.nr_reclaimed;
> goto out;
> + }
>
> reclaim_state.reclaimed_slab = 0;
> shrink_slab(sc.nr_scanned, sc.gfp_mask,
> global_lru_pages());
> - ret += reclaim_state.reclaimed_slab;
> - if (ret >= nr_pages)
> + sc.nr_reclaimed += reclaim_state.reclaimed_slab;
> + if (sc.nr_reclaimed >= sc.swap_cluster_max) {
> + ret = sc.nr_reclaimed;
> goto out;
> + }
>
> if (sc.nr_scanned && prio < DEF_PRIORITY - 2)
> congestion_wait(WRITE, HZ / 10);
> @@ -2173,17 +2180,19 @@ unsigned long shrink_all_memory(unsigned long nr_pages)
> }
>
> /*
> - * If ret = 0, we could not shrink LRUs, but there may be something
> + * If sc.nr_reclaimed = 0, we could not shrink LRUs, but there may be something
> * in slab caches
> */
> - if (!ret) {
> + if (!sc.nr_reclaimed) {
> do {
> reclaim_state.reclaimed_slab = 0;
> - shrink_slab(nr_pages, sc.gfp_mask, global_lru_pages());
> - ret += reclaim_state.reclaimed_slab;
> - } while (ret < nr_pages && reclaim_state.reclaimed_slab > 0);
> + shrink_slab(sc.swap_cluster_max, sc.gfp_mask, global_lru_pages());
> + sc.nr_reclaimed += reclaim_state.reclaimed_slab;
> + } while (sc.nr_reclaimed < sc.swap_cluster_max && reclaim_state.reclaimed_slab > 0);
> }
>
> + ret = sc.nr_reclaimed;
> +
> out:
> current->reclaim_state = NULL;
>
> --
> 1.5.4.3
>
>
>
> --
> Kind regards,
> MinChan Kim
>
--
Kind regards,
MinChan Kim
* Re: [PATCH v2] shrink_all_memory() use sc.nr_reclaimed
2009-02-12 7:33 [PATCH v2] shrink_all_memory() use sc.nr_reclaimed MinChan Kim
2009-02-12 9:31 ` MinChan Kim
@ 2009-02-12 11:25 ` Johannes Weiner
2009-02-12 13:11 ` MinChan Kim
1 sibling, 1 reply; 4+ messages in thread
From: Johannes Weiner @ 2009-02-12 11:25 UTC (permalink / raw)
To: MinChan Kim
Cc: Andrew Morton, KOSAKI Motohiro, linux-mm, LKML, Rafael J. Wysocki,
Rik van Riel
On Thu, Feb 12, 2009 at 04:33:10PM +0900, MinChan Kim wrote:
>
> Impact: cleanup
>
> Commit a79311c14eae4bb946a97af25f3e1b17d625985d "vmscan: bail out of
> direct reclaim after swap_cluster_max pages" moved the nr_reclaimed
> counter into the scan control to accumulate the number of all
> reclaimed pages in a reclaim invocation.
>
> shrink_all_memory() can use the same mechanism; it increases code
> consistency and readability.
>
> It's based on mmotm 2009-02-11-17-15.
>
> Signed-off-by: MinChan Kim <minchan.kim@gmail.com>
> Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
> Cc: Rik van Riel <riel@redhat.com>
>
>
> ---
> mm/vmscan.c | 51 ++++++++++++++++++++++++++++++---------------------
> 1 files changed, 30 insertions(+), 21 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index ae4202b..caa2de5 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2055,16 +2055,15 @@ unsigned long global_lru_pages(void)
> #ifdef CONFIG_PM
> /*
> * Helper function for shrink_all_memory(). Tries to reclaim 'nr_pages' pages
> - * from LRU lists system-wide, for given pass and priority, and returns the
> - * number of reclaimed pages
> + * from LRU lists system-wide, for given pass and priority.
> *
> * For pass > 3 we also try to shrink the LRU lists that contain a few pages
> */
> -static unsigned long shrink_all_zones(unsigned long nr_pages, int prio,
> +static void shrink_all_zones(unsigned long nr_pages, int prio,
> int pass, struct scan_control *sc)
> {
> struct zone *zone;
> - unsigned long ret = 0;
> + unsigned long nr_reclaimed = 0;
Why this extra variable? You could use sc->nr_reclaimed throughout,
like you do in shrink_all_memory().
> for_each_populated_zone(zone) {
> enum lru_list l;
> @@ -2087,14 +2086,16 @@ static unsigned long shrink_all_zones(unsigned long nr_pages, int prio,
>
> zone->lru[l].nr_scan = 0;
> nr_to_scan = min(nr_pages, lru_pages);
> - ret += shrink_list(l, nr_to_scan, zone,
> + nr_reclaimed += shrink_list(l, nr_to_scan, zone,
> sc, prio);
> - if (ret >= nr_pages)
> - return ret;
> + if (nr_reclaimed >= nr_pages) {
> + sc->nr_reclaimed = nr_reclaimed;
> + return;
> + }
> }
> }
> }
> - return ret;
> + sc->nr_reclaimed = nr_reclaimed;
> }
>
> /*
> @@ -2126,13 +2127,15 @@ unsigned long shrink_all_memory(unsigned long nr_pages)
> /* If slab caches are huge, it's better to hit them first */
> while (nr_slab >= lru_pages) {
> reclaim_state.reclaimed_slab = 0;
> - shrink_slab(nr_pages, sc.gfp_mask, lru_pages);
> + shrink_slab(sc.swap_cluster_max, sc.gfp_mask, lru_pages);
> if (!reclaim_state.reclaimed_slab)
> break;
>
> - ret += reclaim_state.reclaimed_slab;
> - if (ret >= nr_pages)
> + sc.nr_reclaimed += reclaim_state.reclaimed_slab;
> + if (sc.nr_reclaimed >= sc.swap_cluster_max) {
> + ret = sc.nr_reclaimed;
Why do you still maintain `ret'? Just return sc.nr_reclaimed at the
end and get rid of ret altogether.
Using sc.swap_cluster_max here seems like a good idea at first sight,
but it really is not.
Usually, swap_cluster_max is smaller than the reclaim goal and reclaim
code uses it combined with other conditions to bail out BEFORE the
original reclaim goal is met. But sc.swap_cluster_max IS our original
reclaim goal, so it means something different.
By the way, it's buggy: we never decrease swap_cluster_max, which
leads to funky overreclaim in shrink_inactive_list(). I will send the
original patch from Kosaki-san for using sc->nr_reclaimed and a patch
for the overreclaim problem.
Hannes
* Re: [PATCH v2] shrink_all_memory() use sc.nr_reclaimed
2009-02-12 11:25 ` Johannes Weiner
@ 2009-02-12 13:11 ` MinChan Kim
0 siblings, 0 replies; 4+ messages in thread
From: MinChan Kim @ 2009-02-12 13:11 UTC (permalink / raw)
To: Johannes Weiner
Cc: Andrew Morton, KOSAKI Motohiro, linux-mm, LKML, Rafael J. Wysocki,
Rik van Riel
On Thu, Feb 12, 2009 at 8:25 PM, Johannes Weiner <hannes@cmpxchg.org> wrote:
> On Thu, Feb 12, 2009 at 04:33:10PM +0900, MinChan Kim wrote:
>>
>> Impact: cleanup
>>
>> Commit a79311c14eae4bb946a97af25f3e1b17d625985d "vmscan: bail out of
>> direct reclaim after swap_cluster_max pages" moved the nr_reclaimed
>> counter into the scan control to accumulate the number of all
>> reclaimed pages in a reclaim invocation.
>>
>> shrink_all_memory() can use the same mechanism; it increases code
>> consistency and readability.
>>
>> It's based on mmotm 2009-02-11-17-15.
>>
>> Signed-off-by: MinChan Kim <minchan.kim@gmail.com>
>> Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
>> Cc: Johannes Weiner <hannes@cmpxchg.org>
>> Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
>> Cc: Rik van Riel <riel@redhat.com>
>>
>>
>> ---
>> mm/vmscan.c | 51 ++++++++++++++++++++++++++++++---------------------
>> 1 files changed, 30 insertions(+), 21 deletions(-)
>>
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index ae4202b..caa2de5 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -2055,16 +2055,15 @@ unsigned long global_lru_pages(void)
>> #ifdef CONFIG_PM
>> /*
>> * Helper function for shrink_all_memory(). Tries to reclaim 'nr_pages' pages
>> - * from LRU lists system-wide, for given pass and priority, and returns the
>> - * number of reclaimed pages
>> + * from LRU lists system-wide, for given pass and priority.
>> *
>> * For pass > 3 we also try to shrink the LRU lists that contain a few pages
>> */
>> -static unsigned long shrink_all_zones(unsigned long nr_pages, int prio,
>> +static void shrink_all_zones(unsigned long nr_pages, int prio,
>> int pass, struct scan_control *sc)
>> {
>> struct zone *zone;
>> - unsigned long ret = 0;
>> + unsigned long nr_reclaimed = 0;
>
> Why this extra variable? You could use sc->nr_reclaimed throughout,
> like you do in shrink_all_memory().
It's just to match shrink_zone()'s style for code consistency.
But I have no objection to removing the extra variable.
>
>> for_each_populated_zone(zone) {
>> enum lru_list l;
>> @@ -2087,14 +2086,16 @@ static unsigned long shrink_all_zones(unsigned long nr_pages, int prio,
>>
>> zone->lru[l].nr_scan = 0;
>> nr_to_scan = min(nr_pages, lru_pages);
>> - ret += shrink_list(l, nr_to_scan, zone,
>> + nr_reclaimed += shrink_list(l, nr_to_scan, zone,
>> sc, prio);
>> - if (ret >= nr_pages)
>> - return ret;
>> + if (nr_reclaimed >= nr_pages) {
>> + sc->nr_reclaimed = nr_reclaimed;
>> + return;
>> + }
>> }
>> }
>> }
>> - return ret;
>> + sc->nr_reclaimed = nr_reclaimed;
>> }
>>
>> /*
>> @@ -2126,13 +2127,15 @@ unsigned long shrink_all_memory(unsigned long nr_pages)
>> /* If slab caches are huge, it's better to hit them first */
>> while (nr_slab >= lru_pages) {
>> reclaim_state.reclaimed_slab = 0;
>> - shrink_slab(nr_pages, sc.gfp_mask, lru_pages);
>> + shrink_slab(sc.swap_cluster_max, sc.gfp_mask, lru_pages);
>> if (!reclaim_state.reclaimed_slab)
>> break;
>>
>> - ret += reclaim_state.reclaimed_slab;
>> - if (ret >= nr_pages)
>> + sc.nr_reclaimed += reclaim_state.reclaimed_slab;
>> + if (sc.nr_reclaimed >= sc.swap_cluster_max) {
>> + ret = sc.nr_reclaimed;
>
> Why do you still maintain `ret'? Just return sc.nr_reclaimed at the
> end and get rid of ret altogether.
It's just for emphasis on the return variable.
Of course, I have no objection to removing 'ret'. ;)
> Using sc.swap_cluster_max here seems to be a good idea at first sight
> but really it is not.
>
> Usually, swap_cluster_max is smaller than the reclaim goal and reclaim
> code uses it combined with other conditions to bail out BEFORE the
> original reclaim goal is met. But sc.swap_cluster_max IS our original
> reclaim goal, so it means something different.
>
> It's btw buggy, we never decrease swap_cluster_max which leads to
> funky overreclaim in shrink_inactive_list(). I will send the original
> patch from Kosaki-san for using sc->nr_reclaimed and a patch for the
> overreclaim problem.
>
> Hannes
>
--
Kind regards,
MinChan Kim