linux-mm.kvack.org archive mirror
* [patch] mm: adjust kswapd nice level for high priority page allocators
@ 2010-03-01 10:14 David Rientjes
  2010-03-01 13:52 ` Mel Gorman
  2010-03-01 16:02 ` Minchan Kim
  0 siblings, 2 replies; 10+ messages in thread
From: David Rientjes @ 2010-03-01 10:14 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Mel Gorman, Con Kolivas, linux-mm

From: Con Kolivas <kernel@kolivas.org>

When kswapd is awoken due to reclaim by a running task, set the priority
of kswapd to that of the task allocating pages thus making memory reclaim
cpu activity affected by nice level.

[rientjes@google.com: refactor for current]
Cc: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Con Kolivas <kernel@kolivas.org>
Signed-off-by: David Rientjes <rientjes@google.com>
---
 mm/vmscan.c |   33 ++++++++++++++++++++++++++++++++-
 1 files changed, 32 insertions(+), 1 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1658,6 +1658,33 @@ static void shrink_zone(int priority, struct zone *zone,
 }
 
 /*
+ * Helper functions to adjust nice level of kswapd, based on the priority of
+ * the task allocating pages. If it is already higher priority we do not
+ * demote its nice level since it is still working on behalf of a higher
+ * priority task. With kernel threads we leave it at nice 0.
+ *
+ * We don't ever run kswapd real time, so if a real time task calls kswapd we
+ * set it to highest SCHED_NORMAL priority.
+ */
+static int effective_sc_prio(struct task_struct *p)
+{
+	if (likely(p->mm)) {
+		if (rt_task(p))
+			return -20;
+		return task_nice(p);
+	}
+	return 0;
+}
+
+static void set_kswapd_nice(struct task_struct *kswapd, int active)
+{
+	long nice = effective_sc_prio(current);
+
+	if (task_nice(kswapd) > nice || !active)
+		set_user_nice(kswapd, nice);
+}
+
+/*
  * This is the direct reclaim path, for page-allocating processes.  We only
  * try to reclaim pages from zones which will satisfy the caller's allocation
  * request.
@@ -2257,6 +2284,7 @@ static int kswapd(void *p)
 				}
 			}
 
+			set_user_nice(tsk, 0);
 			order = pgdat->kswapd_max_order;
 		}
 		finish_wait(&pgdat->kswapd_wait, &wait);
@@ -2281,6 +2309,7 @@ static int kswapd(void *p)
 void wakeup_kswapd(struct zone *zone, int order)
 {
 	pg_data_t *pgdat;
+	int active;
 
 	if (!populated_zone(zone))
 		return;
@@ -2292,7 +2321,9 @@ void wakeup_kswapd(struct zone *zone, int order)
 		pgdat->kswapd_max_order = order;
 	if (!cpuset_zone_allowed_hardwall(zone, GFP_KERNEL))
 		return;
-	if (!waitqueue_active(&pgdat->kswapd_wait))
+	active = waitqueue_active(&pgdat->kswapd_wait);
+	set_kswapd_nice(pgdat->kswapd, active);
+	if (!active)
 		return;
 	wake_up_interruptible(&pgdat->kswapd_wait);
 }

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: dont@kvack.org

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [patch] mm: adjust kswapd nice level for high priority page allocators
  2010-03-01 10:14 [patch] mm: adjust kswapd nice level for high priority page allocators David Rientjes
@ 2010-03-01 13:52 ` Mel Gorman
  2010-03-01 17:56   ` David Rientjes
  2010-03-02 23:48   ` Andrew Morton
  2010-03-01 16:02 ` Minchan Kim
  1 sibling, 2 replies; 10+ messages in thread
From: Mel Gorman @ 2010-03-01 13:52 UTC (permalink / raw)
  To: David Rientjes; +Cc: Andrew Morton, Con Kolivas, linux-mm

On Mon, Mar 01, 2010 at 02:14:39AM -0800, David Rientjes wrote:
> From: Con Kolivas <kernel@kolivas.org>
> 
> When kswapd is awoken due to reclaim by a running task, set the priority
> of kswapd to that of the task allocating pages thus making memory reclaim
> cpu activity affected by nice level.
> 

Why?

When a process kicks kswapd, the watermark at which a process enters
direct reclaim has not been reached yet. In other words, there is no
guarantee that a process will stall due to memory pressure.

The exception would be if there are many high-priority processes allocating
pages at a steady rate that are starving kswapd of CPU time and
consequently entering direct reclaim. In this case, the high-priority
processes effectively should stall until they have reclaimed the pages.
As Con is involved, I'm guessing there are high-priority interactive
processes that jitter in low-memory situations, but as I've never
observed such a scenario, I'm not sure.

My main concern is that, in the case of a mix of high- and low-priority
processes with kswapd towards the higher priority as a result of this patch,
kswapd could be keeping CPU time from well-behaved low-priority processes,
which would then make less forward progress.

I'm not against it as such, but I'd like to know more about the problem
this solves and what the before and after behaviour looks like.

> [rientjes@google.com: refactor for current]
> Cc: Mel Gorman <mel@csn.ul.ie>
> Signed-off-by: Con Kolivas <kernel@kolivas.org>
> Signed-off-by: David Rientjes <rientjes@google.com>
> ---
>  mm/vmscan.c |   33 ++++++++++++++++++++++++++++++++-
>  1 files changed, 32 insertions(+), 1 deletions(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1658,6 +1658,33 @@ static void shrink_zone(int priority, struct zone *zone,
>  }
>  
>  /*
> + * Helper functions to adjust nice level of kswapd, based on the priority of
> + * the task allocating pages. If it is already higher priority we do not
> + * demote its nice level since it is still working on behalf of a higher
> + * priority task. With kernel threads we leave it at nice 0.
> + *
> + * We don't ever run kswapd real time, so if a real time task calls kswapd we
> + * set it to highest SCHED_NORMAL priority.
> + */
> +static int effective_sc_prio(struct task_struct *p)
> +{
> +	if (likely(p->mm)) {
> +		if (rt_task(p))
> +			return -20;
> +		return task_nice(p);
> +	}
> +	return 0;
> +}
> +
> +static void set_kswapd_nice(struct task_struct *kswapd, int active)
> +{
> +	long nice = effective_sc_prio(current);
> +
> +	if (task_nice(kswapd) > nice || !active)
> +		set_user_nice(kswapd, nice);
> +}
> +
> +/*
>   * This is the direct reclaim path, for page-allocating processes.  We only
>   * try to reclaim pages from zones which will satisfy the caller's allocation
>   * request.
> @@ -2257,6 +2284,7 @@ static int kswapd(void *p)
>  				}
>  			}
>  
> +			set_user_nice(tsk, 0);
>  			order = pgdat->kswapd_max_order;
>  		}
>  		finish_wait(&pgdat->kswapd_wait, &wait);
> @@ -2281,6 +2309,7 @@ static int kswapd(void *p)
>  void wakeup_kswapd(struct zone *zone, int order)
>  {
>  	pg_data_t *pgdat;
> +	int active;
>  
>  	if (!populated_zone(zone))
>  		return;
> @@ -2292,7 +2321,9 @@ void wakeup_kswapd(struct zone *zone, int order)
>  		pgdat->kswapd_max_order = order;
>  	if (!cpuset_zone_allowed_hardwall(zone, GFP_KERNEL))
>  		return;
> -	if (!waitqueue_active(&pgdat->kswapd_wait))
> +	active = waitqueue_active(&pgdat->kswapd_wait);
> +	set_kswapd_nice(pgdat->kswapd, active);
> +	if (!active)
>  		return;
>  	wake_up_interruptible(&pgdat->kswapd_wait);
>  }
> 

-- 
Mel Gorman
Part-time Phd Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab


* Re: [patch] mm: adjust kswapd nice level for high priority page allocators
  2010-03-01 10:14 [patch] mm: adjust kswapd nice level for high priority page allocators David Rientjes
  2010-03-01 13:52 ` Mel Gorman
@ 2010-03-01 16:02 ` Minchan Kim
  2010-03-02  4:29   ` Minchan Kim
  1 sibling, 1 reply; 10+ messages in thread
From: Minchan Kim @ 2010-03-01 16:02 UTC (permalink / raw)
  To: David Rientjes; +Cc: Andrew Morton, Mel Gorman, Con Kolivas, linux-mm

On Mon, Mar 1, 2010 at 7:14 PM, David Rientjes <rientjes@google.com> wrote:
> From: Con Kolivas <kernel@kolivas.org>
>
> When kswapd is awoken due to reclaim by a running task, set the priority
> of kswapd to that of the task allocating pages thus making memory reclaim
> cpu activity affected by nice level.
>
> [rientjes@google.com: refactor for current]
> Cc: Mel Gorman <mel@csn.ul.ie>
> Signed-off-by: Con Kolivas <kernel@kolivas.org>
> Signed-off-by: David Rientjes <rientjes@google.com>
> ---
>  mm/vmscan.c |   33 ++++++++++++++++++++++++++++++++-
>  1 files changed, 32 insertions(+), 1 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1658,6 +1658,33 @@ static void shrink_zone(int priority, struct zone *zone,
>  }
>
>  /*
> + * Helper functions to adjust nice level of kswapd, based on the priority of
> + * the task allocating pages. If it is already higher priority we do not
> + * demote its nice level since it is still working on behalf of a higher
> + * priority task. With kernel threads we leave it at nice 0.
> + *
> + * We don't ever run kswapd real time, so if a real time task calls kswapd we
> + * set it to highest SCHED_NORMAL priority.
> + */
> +static int effective_sc_prio(struct task_struct *p)
> +{
> +       if (likely(p->mm)) {
> +               if (rt_task(p))
> +                       return -20;
> +               return task_nice(p);
> +       }
> +       return 0;
> +}
> +
> +static void set_kswapd_nice(struct task_struct *kswapd, int active)
> +{
> +       long nice = effective_sc_prio(current);
> +
> +       if (task_nice(kswapd) > nice || !active)
> +               set_user_nice(kswapd, nice);
> +}
> +
> +/*
>  * This is the direct reclaim path, for page-allocating processes.  We only
>  * try to reclaim pages from zones which will satisfy the caller's allocation
>  * request.
> @@ -2257,6 +2284,7 @@ static int kswapd(void *p)
>                                }
>                        }
>
> +                       set_user_nice(tsk, 0);

Why do you reset the nice value which was set by set_kswapd_nice()?


-- 
Kind regards,
Minchan Kim


* Re: [patch] mm: adjust kswapd nice level for high priority page allocators
  2010-03-01 13:52 ` Mel Gorman
@ 2010-03-01 17:56   ` David Rientjes
  2010-03-01 18:04     ` Mel Gorman
  2010-03-02 23:48   ` Andrew Morton
  1 sibling, 1 reply; 10+ messages in thread
From: David Rientjes @ 2010-03-01 17:56 UTC (permalink / raw)
  To: Mel Gorman; +Cc: Andrew Morton, Con Kolivas, linux-mm

On Mon, 1 Mar 2010, Mel Gorman wrote:

> > When kswapd is awoken due to reclaim by a running task, set the priority
> > of kswapd to that of the task allocating pages thus making memory reclaim
> > cpu activity affected by nice level.
> > 
> 
> Why?
> 
> When a process kicks kswapd, the watermark at which a process enters
> direct reclaim has not been reached yet. In other words, there is no
> guarantee that a process will stall due to memory pressure.
> 
> The exception would be if there are many high-priority processes allocating
> pages at a steady rate that are starving kswapd of CPU time and
> consequently entering direct reclaim.

They don't necessarily need to be allocating pages; they may simply be 
starving kswapd of CPU time, which increases the likelihood of subsequently 
entering direct reclaim because of a low watermark on a later allocation.  
Without this patch, it's trivial, especially on smaller desktop machines or 
on servers using cpusets, to prevent kswapd from running by setting 
high-priority nice levels for processes from userspace.

If we're going to be doing background reclaim, it should not be done 
more slowly than the processes allocating pages; otherwise, we penalize 
high-priority tasks trying to allocate pages and favor lower-priority ones.

> My main concern is that, in the case of a mix of high- and low-priority
> processes with kswapd towards the higher priority as a result of this patch,
> kswapd could be keeping CPU time from well-behaved low-priority processes,
> which would then make less forward progress.
> 

That would only be the case if we constantly follow the slowpath in the 
page allocator, in which case we want kswapd to run and reclaim memory so 
that all processes can use the fastpath.


* Re: [patch] mm: adjust kswapd nice level for high priority page allocators
  2010-03-01 17:56   ` David Rientjes
@ 2010-03-01 18:04     ` Mel Gorman
  2010-03-08 23:23       ` David Rientjes
  0 siblings, 1 reply; 10+ messages in thread
From: Mel Gorman @ 2010-03-01 18:04 UTC (permalink / raw)
  To: David Rientjes; +Cc: Andrew Morton, Con Kolivas, linux-mm

On Mon, Mar 01, 2010 at 09:56:02AM -0800, David Rientjes wrote:
> On Mon, 1 Mar 2010, Mel Gorman wrote:
> 
> > > When kswapd is awoken due to reclaim by a running task, set the priority
> > > of kswapd to that of the task allocating pages thus making memory reclaim
> > > cpu activity affected by nice level.
> > > 
> > 
> > Why?
> > 
> > When a process kicks kswapd, the watermark at which a process enters
> > direct reclaim has not been reached yet. In other words, there is no
> > guarantee that a process will stall due to memory pressure.
> > 
> > The exception would be if there are many high-priority processes allocating
> > pages at a steady rate that are starving kswapd of CPU time and
> > consequently entering direct reclaim.
> 
> They don't necessarily need to be allocating pages; they may simply be 
> starving kswapd of CPU time, which increases the likelihood of subsequently 
> entering direct reclaim because of a low watermark on a later allocation.  

True.

> Without this patch, it's trivial, especially on smaller desktop machines or 
> on servers using cpusets, to prevent kswapd from running by setting 
> high-priority nice levels for processes from userspace.
> 

Can that be included with the changelog then please?

Can figures also be shown then as part of the patch? It would appear that
one possibility would be to boot a machine with 1G and simply measure the
time taken to complete 7 simultaneous kernel compiles (so that kswapd is
active) and measure the number of pages direct reclaimed and reclaimed by
kswapd. Rerun the test except that all the kernel builds are at a higher
priority than kswapd.

When all the priorities are the same, the reclaim figures should match
with or without the patch. With the priorities higher, then the direct
reclaims should be higher without this patch reflecting the fact that
kswapd was starved of CPU.

> If we're going to be doing background reclaim, it should not be done 
> more slowly than the processes allocating pages; otherwise, we penalize 
> high-priority tasks trying to allocate pages and favor lower-priority ones.
> 
> > My main concern is that, in the case of a mix of high- and low-priority
> > processes with kswapd towards the higher priority as a result of this patch,
> > kswapd could be keeping CPU time from well-behaved low-priority processes,
> > which would then make less forward progress.
> > 
> 
> That would only be the case if we constantly follow the slowpath in the 
> page allocator, in which case we want kswapd to run and reclaim memory so 
> that all processes can use the fastpath.
> 

Not necessarily. The CPU time used by the low-priority processes is not
necessarily allocator related. It could just be doing normal work, but
less of it because kswapd is getting more CPU time. Maybe it wouldn't
matter in practice because the lower CPU time is offset by the avoidance
of direct reclaims at some future point.

-- 
Mel Gorman
Part-time Phd Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab


* Re: [patch] mm: adjust kswapd nice level for high priority page allocators
  2010-03-01 16:02 ` Minchan Kim
@ 2010-03-02  4:29   ` Minchan Kim
  2010-03-03  0:14     ` David Rientjes
  0 siblings, 1 reply; 10+ messages in thread
From: Minchan Kim @ 2010-03-02  4:29 UTC (permalink / raw)
  To: David Rientjes; +Cc: Andrew Morton, Mel Gorman, Con Kolivas, linux-mm

On Tue, Mar 2, 2010 at 1:02 AM, Minchan Kim <minchan.kim@gmail.com> wrote:
> On Mon, Mar 1, 2010 at 7:14 PM, David Rientjes <rientjes@google.com> wrote:
>> From: Con Kolivas <kernel@kolivas.org>
>>
>> When kswapd is awoken due to reclaim by a running task, set the priority
>> of kswapd to that of the task allocating pages thus making memory reclaim
>> cpu activity affected by nice level.
>>
>> [rientjes@google.com: refactor for current]
>> Cc: Mel Gorman <mel@csn.ul.ie>
>> Signed-off-by: Con Kolivas <kernel@kolivas.org>
>> Signed-off-by: David Rientjes <rientjes@google.com>
>> ---
>>  mm/vmscan.c |   33 ++++++++++++++++++++++++++++++++-
>>  1 files changed, 32 insertions(+), 1 deletions(-)
>>
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -1658,6 +1658,33 @@ static void shrink_zone(int priority, struct zone *zone,
>>  }
>>
>>  /*
>> + * Helper functions to adjust nice level of kswapd, based on the priority of
>> + * the task allocating pages. If it is already higher priority we do not
>> + * demote its nice level since it is still working on behalf of a higher
>> + * priority task. With kernel threads we leave it at nice 0.
>> + *
>> + * We don't ever run kswapd real time, so if a real time task calls kswapd we
>> + * set it to highest SCHED_NORMAL priority.
>> + */
>> +static int effective_sc_prio(struct task_struct *p)
>> +{
>> +       if (likely(p->mm)) {
>> +               if (rt_task(p))
>> +                       return -20;
>> +               return task_nice(p);
>> +       }
>> +       return 0;
>> +}
>> +
>> +static void set_kswapd_nice(struct task_struct *kswapd, int active)
>> +{
>> +       long nice = effective_sc_prio(current);
>> +
>> +       if (task_nice(kswapd) > nice || !active)
>> +               set_user_nice(kswapd, nice);
>> +}
>> +
>> +/*
>>  * This is the direct reclaim path, for page-allocating processes.  We only
>>  * try to reclaim pages from zones which will satisfy the caller's allocation
>>  * request.
>> @@ -2257,6 +2284,7 @@ static int kswapd(void *p)
>>                                }
>>                        }
>>
>> +                       set_user_nice(tsk, 0);
>
> Why do you reset the nice value which was set by set_kswapd_nice()?

My point is that you reset the nice value (which was boosted at wakeup_kswapd)
to 0 before calling balance_pgdat. It means kswapd could be scheduled at nice 0
before reclaim really happens in balance_pgdat.
I think that would defeat your goal of having kswapd inherit the priority of
the direct-reclaiming process.

What am I missing now?

>
> --
> Kind regards,
> Minchan Kim
>



-- 
Kind regards,
Minchan Kim


* Re: [patch] mm: adjust kswapd nice level for high priority page allocators
  2010-03-01 13:52 ` Mel Gorman
  2010-03-01 17:56   ` David Rientjes
@ 2010-03-02 23:48   ` Andrew Morton
  1 sibling, 0 replies; 10+ messages in thread
From: Andrew Morton @ 2010-03-02 23:48 UTC (permalink / raw)
  To: Mel Gorman; +Cc: David Rientjes, Con Kolivas, linux-mm

On Mon, 1 Mar 2010 13:52:42 +0000
Mel Gorman <mel@csn.ul.ie> wrote:

> I'm not against it as such, but I'd like to know more about the problem
> this solves and what the before and after behaviour looks like.


^^ this


* Re: [patch] mm: adjust kswapd nice level for high priority page allocators
  2010-03-02  4:29   ` Minchan Kim
@ 2010-03-03  0:14     ` David Rientjes
  2010-03-03  6:25       ` Minchan Kim
  0 siblings, 1 reply; 10+ messages in thread
From: David Rientjes @ 2010-03-03  0:14 UTC (permalink / raw)
  To: Minchan Kim; +Cc: Andrew Morton, Mel Gorman, Con Kolivas, linux-mm

On Tue, 2 Mar 2010, Minchan Kim wrote:

> > Why do you reset the nice value which was set by set_kswapd_nice()?
> 
> My point is that you reset the nice value (which was boosted at wakeup_kswapd)
> to 0 before calling balance_pgdat. It means kswapd could be scheduled at nice 0
> before reclaim really happens in balance_pgdat.

wakeup_kswapd() wakes up kswapd at the finish_wait() point so that it has 
the nice value set by set_kswapd_nice() when it calls balance_pgdat(), 
loops, and then sets it back to the default nice level of 0.


* Re: [patch] mm: adjust kswapd nice level for high priority page allocators
  2010-03-03  0:14     ` David Rientjes
@ 2010-03-03  6:25       ` Minchan Kim
  0 siblings, 0 replies; 10+ messages in thread
From: Minchan Kim @ 2010-03-03  6:25 UTC (permalink / raw)
  To: David Rientjes; +Cc: Andrew Morton, Mel Gorman, Con Kolivas, linux-mm

On Wed, Mar 3, 2010 at 9:14 AM, David Rientjes <rientjes@google.com> wrote:
> On Tue, 2 Mar 2010, Minchan Kim wrote:
>
>> > Why do you reset the nice value which was set by set_kswapd_nice()?
>>
>> My point is that you reset the nice value (which was boosted at wakeup_kswapd)
>> to 0 before calling balance_pgdat. It means kswapd could be scheduled at nice 0
>> before reclaim really happens in balance_pgdat.
>
> wakeup_kswapd() wakes up kswapd at the finish_wait() point so that it has
> the nice value set by set_kswapd_nice() when it calls balance_pgdat(),
> loops, and then sets it back to the default nice level of 0.

I can't understand your point.

Now kswapd works as follows:

for ( ; ; ) {
  prepare_to_wait();
  if ( ... ) {
    ...
    ...
    schedule(); <--- wakeup point
    ...
    set_user_nice(tsk, 0); <-- You reset the nice value to zero.
    order = pgdat->kswapd_max_order;
  }
  finish_wait();
  balance_pgdat(); <-- before entering balance_pgdat, the nice value
                       will be invalidated.
}

As the above code shows, wakeup_kswapd() wakes up kswapd not at finish_wait()
but at the line after schedule(). So I think the nice value promoted by
wakeup_kswapd() would be invalidated.


-- 
Kind regards,
Minchan Kim


* Re: [patch] mm: adjust kswapd nice level for high priority page allocators
  2010-03-01 18:04     ` Mel Gorman
@ 2010-03-08 23:23       ` David Rientjes
  0 siblings, 0 replies; 10+ messages in thread
From: David Rientjes @ 2010-03-08 23:23 UTC (permalink / raw)
  To: Mel Gorman; +Cc: Andrew Morton, Con Kolivas, linux-mm

On Mon, 1 Mar 2010, Mel Gorman wrote:

> Can figures also be shown then as part of the patch? It would appear that
> one possibility would be to boot a machine with 1G and simply measure the
> time taken to complete 7 simultaneous kernel compiles (so that kswapd is
> active) and measure the number of pages direct reclaimed and reclaimed by
> kswapd. Rerun the test except that all the kernel builds are at a higher
> priority than kswapd.
> 

Ok, I'll collect those statistics.

> When all the priorities are the same, the reclaim figures should match
> with or without the patch. With the priorities higher, then the direct
> reclaims should be higher without this patch reflecting the fact that
> kswapd was starved of CPU.
> 

Agreed.


end of thread, other threads:[~2010-03-08 23:23 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2010-03-01 10:14 [patch] mm: adjust kswapd nice level for high priority page allocators David Rientjes
2010-03-01 13:52 ` Mel Gorman
2010-03-01 17:56   ` David Rientjes
2010-03-01 18:04     ` Mel Gorman
2010-03-08 23:23       ` David Rientjes
2010-03-02 23:48   ` Andrew Morton
2010-03-01 16:02 ` Minchan Kim
2010-03-02  4:29   ` Minchan Kim
2010-03-03  0:14     ` David Rientjes
2010-03-03  6:25       ` Minchan Kim

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).