linux-mm.kvack.org archive mirror
* [RFC PATCH -mm] add extra free kbytes tunable
@ 2011-05-03  1:24 Rik van Riel
  2011-05-04  1:33 ` Ying Han
  2011-05-12  3:46 ` Satoru Moriya
  0 siblings, 2 replies; 3+ messages in thread
From: Rik van Riel @ 2011-05-03  1:24 UTC (permalink / raw)
  To: Satoru Moriya
  Cc: linux-mm, Ying Han, Mel Gorman, Minchan Kim, KOSAKI Motohiro,
	Hugh Dickins, Johannes Weiner

Add a userspace visible knob to tell the VM to keep an extra amount
of memory free, by increasing the gap between each zone's min and
low watermarks.

This can be used to make the VM free up memory, for when an extra
workload is to be added to a system, or to temporarily reduce the
memory use of a virtual machine. In this application, extra_free_kbytes
would be raised temporarily and reduced again later.  The workload
management system could also monitor the current workloads with
reduced memory, to verify that there really is memory space for
an additional workload, before starting it.
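As a concrete usage sketch (hypothetical values; the knob only exists once this patch is applied, and the 256 MB figure is arbitrary):

```shell
# Ask the VM to keep an extra 256 MB free (the tunable is in kbytes).
extra_kbytes=$((256 * 1024))
knob=/proc/sys/vm/extra_free_kbytes

# Guard: the knob is only present on a patched kernel.
if [ -w "$knob" ]; then
    echo "$extra_kbytes" > "$knob"   # widen the min->low watermark gap
    # ... verify free memory, start the new workload, monitor it ...
    echo 0 > "$knob"                 # later: return to the default
fi
echo "$extra_kbytes"
```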

It may also be useful for realtime applications that call system
calls and have a bound on the number of allocations that happen
in any short time period.  In this application, extra_free_kbytes
would be left at an amount equal to or larger than the
maximum number of allocations that happen in any burst.

I realize nobody really likes this solution to their particular
issue, but it is hard to deny the simplicity - especially
considering that this one knob could solve three different issues
and is fairly simple to understand.

Signed-off-by: Rik van Riel <riel@redhat.com>

diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index c0bb324..feecc1a 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -95,6 +95,7 @@ extern char core_pattern[];
 extern unsigned int core_pipe_limit;
 extern int pid_max;
 extern int min_free_kbytes;
+extern int extra_free_kbytes;
 extern int pid_max_min, pid_max_max;
 extern int sysctl_drop_caches;
 extern int percpu_pagelist_fraction;
@@ -1173,6 +1174,14 @@ static struct ctl_table vm_table[] = {
 		.extra1		= &zero,
 	},
 	{
+		.procname	= "extra_free_kbytes",
+		.data		= &extra_free_kbytes,
+		.maxlen		= sizeof(extra_free_kbytes),
+		.mode		= 0644,
+		.proc_handler	= min_free_kbytes_sysctl_handler,
+		.extra1		= &zero,
+	},
+	{
 		.procname	= "percpu_pagelist_fraction",
 		.data		= &percpu_pagelist_fraction,
 		.maxlen		= sizeof(percpu_pagelist_fraction),
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9f8a97b..b85dcb1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -172,8 +172,20 @@ static char * const zone_names[MAX_NR_ZONES] = {
 	 "Movable",
 };
 
+/*
+ * Try to keep at least this much lowmem free.  Do not allow normal
+ * allocations below this point, only high priority ones. Automatically
+ * tuned according to the amount of memory in the system.
+ */
 int min_free_kbytes = 1024;
 
+/*
+ * Extra memory for the system to try freeing. Used to temporarily
+ * free memory, to make space for new workloads. Anyone can allocate
+ * down to the min watermarks controlled by min_free_kbytes above.
+ */
+int extra_free_kbytes = 0;
+
 static unsigned long __meminitdata nr_kernel_pages;
 static unsigned long __meminitdata nr_all_pages;
 static unsigned long __meminitdata dma_reserve;
@@ -4999,6 +5011,7 @@ static void setup_per_zone_lowmem_reserve(void)
 void setup_per_zone_wmarks(void)
 {
 	unsigned long pages_min = min_free_kbytes >> (PAGE_SHIFT - 10);
+	unsigned long pages_low = extra_free_kbytes >> (PAGE_SHIFT - 10);
 	unsigned long lowmem_pages = 0;
 	struct zone *zone;
 	unsigned long flags;
@@ -5010,11 +5023,14 @@ void setup_per_zone_wmarks(void)
 	}
 
 	for_each_zone(zone) {
-		u64 tmp;
+		u64 min, low;
 
 		spin_lock_irqsave(&zone->lock, flags);
-		tmp = (u64)pages_min * zone->present_pages;
-		do_div(tmp, lowmem_pages);
+		min = (u64)pages_min * zone->present_pages;
+		do_div(min, lowmem_pages);
+		low = (u64)pages_low * zone->present_pages;
+		do_div(low, vm_total_pages);
+
 		if (is_highmem(zone)) {
 			/*
 			 * __GFP_HIGH and PF_MEMALLOC allocations usually don't
@@ -5038,11 +5054,13 @@ void setup_per_zone_wmarks(void)
 			 * If it's a lowmem zone, reserve a number of pages
 			 * proportionate to the zone's size.
 			 */
-			zone->watermark[WMARK_MIN] = tmp;
+			zone->watermark[WMARK_MIN] = min;
 		}
 
-		zone->watermark[WMARK_LOW]  = min_wmark_pages(zone) + (tmp >> 2);
-		zone->watermark[WMARK_HIGH] = min_wmark_pages(zone) + (tmp >> 1);
+		zone->watermark[WMARK_LOW]  = min_wmark_pages(zone) +
+					low + (min >> 2);
+		zone->watermark[WMARK_HIGH] = min_wmark_pages(zone) +
+					low + (min >> 1);
 		setup_zone_migrate_reserve(zone);
 		spin_unlock_irqrestore(&zone->lock, flags);
 	}
@@ -5139,7 +5157,7 @@ module_init(init_per_zone_wmark_min)
 /*
  * min_free_kbytes_sysctl_handler - just a wrapper around proc_dointvec() so 
  *	that we can call two helper functions whenever min_free_kbytes
- *	changes.
+ *	or extra_free_kbytes changes.
  */
 int min_free_kbytes_sysctl_handler(ctl_table *table, int write, 
 	void __user *buffer, size_t *length, loff_t *ppos)
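The new watermark arithmetic in setup_per_zone_wmarks() can be traced by hand. Here is a shell sketch of the formula, using illustrative numbers (assumptions: 4 KB pages, a hypothetical single 1 GB lowmem zone; `>> 2` and `>> 1` become /4 and /2):

```shell
# Illustrative inputs (assumptions, not taken from the patch):
pages_min=$(( 1024 / 4 ))        # min_free_kbytes=1024 -> pages (4 KB pages)
pages_low=$(( 64 * 1024 / 4 ))   # extra_free_kbytes=64*1024 -> pages
zone_present=262144              # hypothetical zone->present_pages (1 GB)
lowmem_pages=262144              # hypothetical total lowmem pages
vm_total_pages=262144            # hypothetical total managed pages

# Per-zone shares, as computed in the patched setup_per_zone_wmarks():
min=$(( pages_min * zone_present / lowmem_pages ))
low=$(( pages_low * zone_present / vm_total_pages ))

wmark_min=$min                        # WMARK_MIN (lowmem zone case)
wmark_low=$(( min + low + min / 4 ))  # WMARK_LOW  = min + low + (min >> 2)
wmark_high=$(( min + low + min / 2 )) # WMARK_HIGH = min + low + (min >> 1)
echo "min=$wmark_min low=$wmark_low high=$wmark_high"
```

With extra_free_kbytes left at 0, `low` collapses to 0 and the expressions reduce to the pre-patch `min + (min >> 2)` and `min + (min >> 1)` behavior.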

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Fight unfair telecom internet charges in Canada: sign http://stopthemeter.ca/


* Re: [RFC PATCH -mm] add extra free kbytes tunable
  2011-05-03  1:24 [RFC PATCH -mm] add extra free kbytes tunable Rik van Riel
@ 2011-05-04  1:33 ` Ying Han
  2011-05-12  3:46 ` Satoru Moriya
  1 sibling, 0 replies; 3+ messages in thread
From: Ying Han @ 2011-05-04  1:33 UTC (permalink / raw)
  To: Rik van Riel
  Cc: Satoru Moriya, linux-mm, Mel Gorman, Minchan Kim, KOSAKI Motohiro,
	Hugh Dickins, Johannes Weiner

On Mon, May 2, 2011 at 6:24 PM, Rik van Riel <riel@redhat.com> wrote:
> Add a userspace visible knob to tell the VM to keep an extra amount
> of memory free, by increasing the gap between each zone's min and
> low watermarks.
>
> This can be used to make the VM free up memory, for when an extra
> workload is to be added to a system, or to temporarily reduce the
> memory use of a virtual machine. In this application, extra_free_kbytes
> would be raised temporarily and reduced again later.  The workload
> management system could also monitor the current workloads with
> reduced memory, to verify that there really is memory space for
> an additional workload, before starting it.
>
> It may also be useful for realtime applications that call system
> calls and have a bound on the number of allocations that happen
> in any short time period.  In this application, extra_free_kbytes
> would be left at an amount equal to or larger than the
> maximum number of allocations that happen in any burst.
>
> I realize nobody really likes this solution to their particular
> issue, but it is hard to deny the simplicity - especially
> considering that this one knob could solve three different issues
> and is fairly simple to understand.

Hi Rik:

In general, I wonder what specific use case requires the
extra tunable in the kernel. I think I can see the point you are
making, but it would be hard for an admin to adjust the per-zone
extra_free_kbytes based on the workload.

In the memcg case, we are proposing to add a per-memcg
"high_wmark_distance", which lets us tune the high/low_wmark for
per-memcg background reclaim. One of the use cases overlaps with
yours: proactively reclaiming memory before launching new jobs.
Moreover, that tunable gives us more targeted reclaim. Looking at
this patch, some of the motivations are common, but not all. At
least for memcg it might be hard to use the per-zone
extra_free_kbytes to serve the same purpose, because the tunable
applies to every zone in the system. We might end up starting
soft_limit reclaim on all memcgs on all zones, which sounds like
overkill when we could pick a few memcgs to reclaim from.

In the non-memcg case, we can imagine that lifting the low_wmark
would introduce more background reclaim, and in turn less direct
reclaim. But tuning the knob would be tricky, and we need data to
support that.

Thanks

--Ying

>
> Signed-off-by: Rik van Riel <riel@redhat.com>
>
> diff --git a/kernel/sysctl.c b/kernel/sysctl.c
> index c0bb324..feecc1a 100644
> --- a/kernel/sysctl.c
> +++ b/kernel/sysctl.c
> @@ -95,6 +95,7 @@ extern char core_pattern[];
>  extern unsigned int core_pipe_limit;
>  extern int pid_max;
>  extern int min_free_kbytes;
> +extern int extra_free_kbytes;
>  extern int pid_max_min, pid_max_max;
>  extern int sysctl_drop_caches;
>  extern int percpu_pagelist_fraction;
> @@ -1173,6 +1174,14 @@ static struct ctl_table vm_table[] = {
>                .extra1         = &zero,
>        },
>        {
> +               .procname       = "extra_free_kbytes",
> +               .data           = &extra_free_kbytes,
> +               .maxlen         = sizeof(extra_free_kbytes),
> +               .mode           = 0644,
> +               .proc_handler   = min_free_kbytes_sysctl_handler,
> +               .extra1         = &zero,
> +       },
> +       {
>                .procname       = "percpu_pagelist_fraction",
>                .data           = &percpu_pagelist_fraction,
>                .maxlen         = sizeof(percpu_pagelist_fraction),
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 9f8a97b..b85dcb1 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -172,8 +172,20 @@ static char * const zone_names[MAX_NR_ZONES] = {
>         "Movable",
>  };
>
> +/*
> + * Try to keep at least this much lowmem free.  Do not allow normal
> + * allocations below this point, only high priority ones. Automatically
> + * tuned according to the amount of memory in the system.
> + */
>  int min_free_kbytes = 1024;
>
> +/*
> + * Extra memory for the system to try freeing. Used to temporarily
> + * free memory, to make space for new workloads. Anyone can allocate
> + * down to the min watermarks controlled by min_free_kbytes above.
> + */
> +int extra_free_kbytes = 0;
> +
>  static unsigned long __meminitdata nr_kernel_pages;
>  static unsigned long __meminitdata nr_all_pages;
>  static unsigned long __meminitdata dma_reserve;
> @@ -4999,6 +5011,7 @@ static void setup_per_zone_lowmem_reserve(void)
>  void setup_per_zone_wmarks(void)
>  {
>        unsigned long pages_min = min_free_kbytes >> (PAGE_SHIFT - 10);
> +       unsigned long pages_low = extra_free_kbytes >> (PAGE_SHIFT - 10);
>        unsigned long lowmem_pages = 0;
>        struct zone *zone;
>        unsigned long flags;
> @@ -5010,11 +5023,14 @@ void setup_per_zone_wmarks(void)
>        }
>
>        for_each_zone(zone) {
> -               u64 tmp;
> +               u64 min, low;
>
>                spin_lock_irqsave(&zone->lock, flags);
> -               tmp = (u64)pages_min * zone->present_pages;
> -               do_div(tmp, lowmem_pages);
> +               min = (u64)pages_min * zone->present_pages;
> +               do_div(min, lowmem_pages);
> +               low = (u64)pages_low * zone->present_pages;
> +               do_div(low, vm_total_pages);
> +
>                if (is_highmem(zone)) {
>                        /*
>                         * __GFP_HIGH and PF_MEMALLOC allocations usually don't
> @@ -5038,11 +5054,13 @@ void setup_per_zone_wmarks(void)
>                         * If it's a lowmem zone, reserve a number of pages
>                         * proportionate to the zone's size.
>                         */
> -                       zone->watermark[WMARK_MIN] = tmp;
> +                       zone->watermark[WMARK_MIN] = min;
>                }
>
> -               zone->watermark[WMARK_LOW]  = min_wmark_pages(zone) + (tmp >> 2);
> -               zone->watermark[WMARK_HIGH] = min_wmark_pages(zone) + (tmp >> 1);
> +               zone->watermark[WMARK_LOW]  = min_wmark_pages(zone) +
> +                                       low + (min >> 2);
> +               zone->watermark[WMARK_HIGH] = min_wmark_pages(zone) +
> +                                       low + (min >> 1);
>                setup_zone_migrate_reserve(zone);
>                spin_unlock_irqrestore(&zone->lock, flags);
>        }
> @@ -5139,7 +5157,7 @@ module_init(init_per_zone_wmark_min)
>  /*
>  * min_free_kbytes_sysctl_handler - just a wrapper around proc_dointvec() so
>  *     that we can call two helper functions whenever min_free_kbytes
> - *     changes.
> + *     or extra_free_kbytes changes.
>  */
>  int min_free_kbytes_sysctl_handler(ctl_table *table, int write,
>        void __user *buffer, size_t *length, loff_t *ppos)
>
>


* RE: [RFC PATCH -mm] add extra free kbytes tunable
  2011-05-03  1:24 [RFC PATCH -mm] add extra free kbytes tunable Rik van Riel
  2011-05-04  1:33 ` Ying Han
@ 2011-05-12  3:46 ` Satoru Moriya
  1 sibling, 0 replies; 3+ messages in thread
From: Satoru Moriya @ 2011-05-12  3:46 UTC (permalink / raw)
  To: Rik van Riel
  Cc: linux-mm@kvack.org, Ying Han, Mel Gorman, Minchan Kim,
	KOSAKI Motohiro, Hugh Dickins, Johannes Weiner

Hi Rik,

Sorry for the slow response.

On 05/02/2011 09:24 PM, Rik van Riel wrote:
>
> This can be used to make the VM free up memory, for when an extra
> workload is to be added to a system, or to temporarily reduce the
> memory use of a virtual machine. In this application, extra_free_kbytes
> would be raised temporarily and reduced again later.  The workload
> management system could also monitor the current workloads with
> reduced memory, to verify that there really is memory space for
> an additional workload, before starting it.
> 
> It may also be useful for realtime applications that call system
> calls and have a bound on the number of allocations that happen
> in any short time period.  In this application, extra_free_kbytes
> would be left at an amount equal to or larger than the
> maximum number of allocations that happen in any burst.

I tested it with my simple test case and it worked well.

- System memory: 2GB

- Background load:
  $ dd if=/dev/zero of=/tmp/tmp_file1 &
  $ dd if=/dev/zero of=/tmp/tmp_file2 &

- Main load:
  $ mapped-file-stream 1 $((1024 * 1024 * 256)) 

The result is following:

                   | default |  case 1  |  case 2  |  case 3  | 
----------------------------------------------------------------------
min_free_kbytes    |  5752   |   5752   |   5752   |   5752   |
extra_free_kbytes  |     0   |  64*1024 | 128*1024 | 256*1024 | (KB)
----------------------------------------------------------------------
average latency(*) |     4   |      4   |      4   |      4   |
worst latency(*)   |   192   |     52   |     60   |     54   | (usec)
----------------------------------------------------------------------
page fault         | 65535   |  65535   |  65535   |  65535   |
direct reclaim     |    21   |      0   |      0   |      0   | (times)
----------------------------------------------------------------------
vmstat result (**) |         |          |          |          |
 allocstall        |     69  |       0  |       0  |       0  |
 pgscan_steal_*    | 130573  |  128826  |  126258  |  133778  |
 kswapd_steal_*    | 127505  |  128826  |  126258  |  133778  | (times)
 (direct reclaim   |   3068  |       0  |       0  |       0  |
  steal)      

(*) Latency per page allocation (page fault)
(**)
 $ cat /proc/vmstat > vmstat_start
 $ mapped_file_stream....
 $ cat /proc/vmstat > vmstat_end

  -> "vmstat_end - vmstat_start" for each entry
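The snapshot diff above can be wrapped in a small helper (a sketch; `vmstat_delta` is a name I made up, not part of the test harness):

```shell
# Print per-counter deltas between two /proc/vmstat snapshots.
# Each snapshot line has the form "<counter> <value>".
vmstat_delta() {
    # First pass records the start values; second pass prints the deltas.
    awk 'NR==FNR { start[$1] = $2; next } { print $1, $2 - start[$1] }' "$1" "$2"
}

# Usage during a run:
#   cat /proc/vmstat > vmstat_start
#   <workload>
#   cat /proc/vmstat > vmstat_end
#   vmstat_delta vmstat_start vmstat_end | grep -E 'allocstall|pgscan|pgsteal'
```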

As you can see, in the default case there were 21 direct reclaims in
the main load, and its worst latency was 192 usecs. This could be even
bigger if a process were to sleep in the direct reclaim path
(congestion_wait etc.). In the other cases there was no direct
reclaim, and the worst latencies were at or below 60 usecs.

We can avoid direct reclaim and keep latency low with this knob.

> 
> I realize nobody really likes this solution to their particular
> issue, but it is hard to deny the simplicity - especially
> considering that this one knob could solve three different issues
> and is fairly simple to understand.

Yeah, this is much simpler than what I proposed several months ago.

Thanks,
Satoru

> 
> Signed-off-by: Rik van Riel <riel@redhat.com>
> 

