linux-fsdevel.vger.kernel.org archive mirror
* [PATCH v2 0/2] slightly change shrinker behaviour for very small object sets
@ 2012-12-21 10:46 Glauber Costa
  2012-12-21 10:46 ` [PATCH v2 1/2] super: fix calculation of shrinkable objects for small numbers Glauber Costa
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Glauber Costa @ 2012-12-21 10:46 UTC (permalink / raw)
  To: linux-fsdevel-u79uwXL29TY76Z2rM5mHXA
  Cc: linux-mm-Bw31MaZKKs3YtjvyW6yDsg, cgroups-u79uwXL29TY76Z2rM5mHXA,
	Dave Chinner, Mel Gorman, Johannes Weiner, Michal Hocko,
	kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A

Hi,

* v2: fix sysctl_vfs_cache_pressure for all users.

I've recently noticed some glitches in the object shrinker mechanism
when the number of shrinkable objects is very small. Such situations are
theoretically possible but unlikely on an ordinary box; they can,
however, become common in environments with many small containers
(cgroups) on a single machine.

These patches came out of some experimentation I am doing with targeted
shrinking for kmem-limited memory cgroups (Dave Chinner is already aware
of this work). In such scenarios one can set the available memory to
very low limits, and the problem becomes easy to reproduce.



Glauber Costa (2):
  super: fix calculation of shrinkable objects for small numbers
  vmscan: take at least one pass with shrinkers

 fs/gfs2/glock.c        | 2 +-
 fs/gfs2/quota.c        | 2 +-
 fs/mbcache.c           | 2 +-
 fs/nfs/dir.c           | 2 +-
 fs/quota/dquot.c       | 5 ++---
 fs/super.c             | 2 +-
 fs/xfs/xfs_qm.c        | 2 +-
 include/linux/dcache.h | 4 ++++
 mm/vmscan.c            | 4 ++--
 9 files changed, 14 insertions(+), 11 deletions(-)

-- 
1.7.11.7

^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH v2 1/2] super: fix calculation of shrinkable objects for small numbers
  2012-12-21 10:46 [PATCH v2 0/2] slightly change shrinker behaviour for very small object sets Glauber Costa
@ 2012-12-21 10:46 ` Glauber Costa
  2012-12-21 11:08   ` Carlos Maiolino
  2012-12-21 10:46 ` [PATCH v2 2/2] vmscan: take at least one pass with shrinkers Glauber Costa
  2013-01-14 19:58 ` [PATCH v2 0/2] slightly change shrinker behaviour for very small object sets Glauber Costa
  2 siblings, 1 reply; 8+ messages in thread
From: Glauber Costa @ 2012-12-21 10:46 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: linux-mm, cgroups, Dave Chinner, Mel Gorman, Johannes Weiner,
	Michal Hocko, kamezawa.hiroyu, Glauber Costa, Theodore Ts'o,
	Al Viro

The sysctl knob sysctl_vfs_cache_pressure is used to determine what
percentage of the shrinkable objects in our cache we should actively try
to shrink.

It works well when we have many objects (at least more than 100),
because the approximation error is then negligible. But when
total_objects < 100, the integer division makes us conclude that we have
no objects at all (total / 100 == 0 whenever total < 100).

This is certainly not the biggest killer in the world, but it may matter
in very low kernel memory situations.

[ v2: fix it for all occurrences of sysctl_vfs_cache_pressure ]

Signed-off-by: Glauber Costa <glommer@parallels.com>
CC: Dave Chinner <david@fromorbit.com>
CC: "Theodore Ts'o" <tytso@mit.edu>
CC: Al Viro <viro@zeniv.linux.org.uk>
---
 fs/gfs2/glock.c        | 2 +-
 fs/gfs2/quota.c        | 2 +-
 fs/mbcache.c           | 2 +-
 fs/nfs/dir.c           | 2 +-
 fs/quota/dquot.c       | 5 ++---
 fs/super.c             | 2 +-
 fs/xfs/xfs_qm.c        | 2 +-
 include/linux/dcache.h | 4 ++++
 8 files changed, 12 insertions(+), 9 deletions(-)

diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
index 0f22d09..b2b65dc 100644
--- a/fs/gfs2/glock.c
+++ b/fs/gfs2/glock.c
@@ -1415,7 +1415,7 @@ static int gfs2_shrink_glock_memory(struct shrinker *shrink,
 	atomic_add(nr_skipped, &lru_count);
 	spin_unlock(&lru_lock);
 out:
-	return (atomic_read(&lru_count) / 100) * sysctl_vfs_cache_pressure;
+	return vfs_pressure_ratio(atomic_read(&lru_count));
 }
 
 static struct shrinker glock_shrinker = {
diff --git a/fs/gfs2/quota.c b/fs/gfs2/quota.c
index c5af8e1..fcc92de 100644
--- a/fs/gfs2/quota.c
+++ b/fs/gfs2/quota.c
@@ -117,7 +117,7 @@ int gfs2_shrink_qd_memory(struct shrinker *shrink, struct shrink_control *sc)
 	spin_unlock(&qd_lru_lock);
 
 out:
-	return (atomic_read(&qd_lru_count) * sysctl_vfs_cache_pressure) / 100;
+	return vfs_pressure_ratio(atomic_read(&qd_lru_count));
 }
 
 static u64 qd2offset(struct gfs2_quota_data *qd)
diff --git a/fs/mbcache.c b/fs/mbcache.c
index 8c32ef3..5eb0476 100644
--- a/fs/mbcache.c
+++ b/fs/mbcache.c
@@ -189,7 +189,7 @@ mb_cache_shrink_fn(struct shrinker *shrink, struct shrink_control *sc)
 	list_for_each_entry_safe(entry, tmp, &free_list, e_lru_list) {
 		__mb_cache_entry_forget(entry, gfp_mask);
 	}
-	return (count / 100) * sysctl_vfs_cache_pressure;
+	return vfs_pressure_ratio(count);
 }
 
 
diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
index b9e66b7..6002971 100644
--- a/fs/nfs/dir.c
+++ b/fs/nfs/dir.c
@@ -1950,7 +1950,7 @@ remove_lru_entry:
 	}
 	spin_unlock(&nfs_access_lru_lock);
 	nfs_access_free_list(&head);
-	return (atomic_long_read(&nfs_access_nr_entries) / 100) * sysctl_vfs_cache_pressure;
+	return vfs_pressure_ratio(atomic_long_read(&nfs_access_nr_entries));
 }
 
 static void __nfs_access_zap_cache(struct nfs_inode *nfsi, struct list_head *head)
diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
index 05ae3c9..f90bdf2 100644
--- a/fs/quota/dquot.c
+++ b/fs/quota/dquot.c
@@ -719,9 +719,8 @@ static int shrink_dqcache_memory(struct shrinker *shrink,
 		prune_dqcache(nr);
 		spin_unlock(&dq_list_lock);
 	}
-	return ((unsigned)
-		percpu_counter_read_positive(&dqstats.counter[DQST_FREE_DQUOTS])
-		/100) * sysctl_vfs_cache_pressure;
+	return vfs_pressure_ratio(
+	percpu_counter_read_positive(&dqstats.counter[DQST_FREE_DQUOTS]));
 }
 
 static struct shrinker dqcache_shrinker = {
diff --git a/fs/super.c b/fs/super.c
index 12f1237..1302f63 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -104,7 +104,7 @@ static int prune_super(struct shrinker *shrink, struct shrink_control *sc)
 				sb->s_nr_inodes_unused + fs_objects;
 	}
 
-	total_objects = (total_objects / 100) * sysctl_vfs_cache_pressure;
+	total_objects = vfs_pressure_ratio(total_objects);
 	drop_super(sb);
 	return total_objects;
 }
diff --git a/fs/xfs/xfs_qm.c b/fs/xfs/xfs_qm.c
index 2e86fa0..269ca79 100644
--- a/fs/xfs/xfs_qm.c
+++ b/fs/xfs/xfs_qm.c
@@ -1566,7 +1566,7 @@ xfs_qm_shake(
 	}
 
 out:
-	return (qi->qi_lru_count / 100) * sysctl_vfs_cache_pressure;
+	return vfs_pressure_ratio(qi->qi_lru_count);
 }
 
 /*
diff --git a/include/linux/dcache.h b/include/linux/dcache.h
index 5920079..f30006c 100644
--- a/include/linux/dcache.h
+++ b/include/linux/dcache.h
@@ -417,4 +417,8 @@ extern void d_clear_need_lookup(struct dentry *dentry);
 
 extern int sysctl_vfs_cache_pressure;
 
+static inline unsigned long vfs_pressure_ratio(unsigned long val)
+{
+	return mult_frac(val, sysctl_vfs_cache_pressure, 100);
+}
 #endif	/* __LINUX_DCACHE_H */
-- 
1.7.11.7

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org

^ permalink raw reply related	[flat|nested] 8+ messages in thread

* [PATCH v2 2/2] vmscan: take at least one pass with shrinkers
  2012-12-21 10:46 [PATCH v2 0/2] slightly change shrinker behaviour for very small object sets Glauber Costa
  2012-12-21 10:46 ` [PATCH v2 1/2] super: fix calculation of shrinkable objects for small numbers Glauber Costa
@ 2012-12-21 10:46 ` Glauber Costa
  2012-12-21 11:10   ` Carlos Maiolino
  2012-12-22 23:53   ` Dave Chinner
  2013-01-14 19:58 ` [PATCH v2 0/2] slightly change shrinker behaviour for very small object sets Glauber Costa
  2 siblings, 2 replies; 8+ messages in thread
From: Glauber Costa @ 2012-12-21 10:46 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: linux-mm, cgroups, Dave Chinner, Mel Gorman, Johannes Weiner,
	Michal Hocko, kamezawa.hiroyu, Glauber Costa, Theodore Ts'o,
	Al Viro

In very low free kernel memory situations, it may be the case that we
have fewer objects to free than our initial batch size. If so, it is
better to shrink those and open space for the new workload than to keep
them and fail the new allocations.

More specifically, this happens because we encode the scan in a loop
with the condition "while (total_scan >= batch_size)", so in such a case
we never even enter the loop.

This patch turns it into a do {} while () loop, which guarantees that we
scan at least once, while keeping the behaviour exactly the same for the
cases in which total_scan >= batch_size.

Signed-off-by: Glauber Costa <glommer@parallels.com>
Acked-by: Dave Chinner <david@fromorbit.com>
CC: "Theodore Ts'o" <tytso@mit.edu>
CC: Al Viro <viro@zeniv.linux.org.uk>
---
 mm/vmscan.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 7f30961..fcd1aa0 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -280,7 +280,7 @@ unsigned long shrink_slab(struct shrink_control *shrink,
 					nr_pages_scanned, lru_pages,
 					max_pass, delta, total_scan);
 
-		while (total_scan >= batch_size) {
+		do {
 			int nr_before;
 
 			nr_before = do_shrinker_shrink(shrinker, shrink, 0);
@@ -294,7 +294,7 @@ unsigned long shrink_slab(struct shrink_control *shrink,
 			total_scan -= batch_size;
 
 			cond_resched();
-		}
+		} while (total_scan >= batch_size);
 
 		/*
 		 * move the unused scan count back into the shrinker in a
-- 
1.7.11.7


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [PATCH v2 1/2] super: fix calculation of shrinkable objects for small numbers
  2012-12-21 10:46 ` [PATCH v2 1/2] super: fix calculation of shrinkable objects for small numbers Glauber Costa
@ 2012-12-21 11:08   ` Carlos Maiolino
  0 siblings, 0 replies; 8+ messages in thread
From: Carlos Maiolino @ 2012-12-21 11:08 UTC (permalink / raw)
  To: Glauber Costa
  Cc: linux-fsdevel, linux-mm, cgroups, Dave Chinner, Mel Gorman,
	Johannes Weiner, Michal Hocko, kamezawa.hiroyu, Theodore Ts'o,
	Al Viro

On Fri, Dec 21, 2012 at 02:46:49PM +0400, Glauber Costa wrote:
> The sysctl knob sysctl_vfs_cache_pressure is used to determine what
> percentage of the shrinkable objects in our cache we should actively try
> to shrink.
> 
> It works well when we have many objects (at least more than 100),
> because the approximation error is then negligible. But when
> total_objects < 100, the integer division makes us conclude that we have
> no objects at all (total / 100 == 0 whenever total < 100).
> 
> This is certainly not the biggest killer in the world, but it may matter
> in very low kernel memory situations.
> 
> [ v2: fix it for all occurrences of sysctl_vfs_cache_pressure ]
> 
> Signed-off-by: Glauber Costa <glommer@parallels.com>
> CC: Dave Chinner <david@fromorbit.com>
> CC: "Theodore Ts'o" <tytso@mit.edu>
> CC: Al Viro <viro@zeniv.linux.org.uk>
> ---
>  fs/gfs2/glock.c        | 2 +-
>  fs/gfs2/quota.c        | 2 +-
>  fs/mbcache.c           | 2 +-
>  fs/nfs/dir.c           | 2 +-
>  fs/quota/dquot.c       | 5 ++---
>  fs/super.c             | 2 +-
>  fs/xfs/xfs_qm.c        | 2 +-
>  include/linux/dcache.h | 4 ++++
>  8 files changed, 12 insertions(+), 9 deletions(-)
> 
> diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
> index 0f22d09..b2b65dc 100644
> --- a/fs/gfs2/glock.c
> +++ b/fs/gfs2/glock.c
> @@ -1415,7 +1415,7 @@ static int gfs2_shrink_glock_memory(struct shrinker *shrink,
>  	atomic_add(nr_skipped, &lru_count);
>  	spin_unlock(&lru_lock);
>  out:
> -	return (atomic_read(&lru_count) / 100) * sysctl_vfs_cache_pressure;
> +	return vfs_pressure_ratio(atomic_read(&lru_count));
>  }
>  
>  static struct shrinker glock_shrinker = {
> diff --git a/fs/gfs2/quota.c b/fs/gfs2/quota.c
> index c5af8e1..fcc92de 100644
> --- a/fs/gfs2/quota.c
> +++ b/fs/gfs2/quota.c
> @@ -117,7 +117,7 @@ int gfs2_shrink_qd_memory(struct shrinker *shrink, struct shrink_control *sc)
>  	spin_unlock(&qd_lru_lock);
>  
>  out:
> -	return (atomic_read(&qd_lru_count) * sysctl_vfs_cache_pressure) / 100;
> +	return vfs_pressure_ratio(atomic_read(&qd_lru_count));
>  }
>  
>  static u64 qd2offset(struct gfs2_quota_data *qd)
> diff --git a/fs/mbcache.c b/fs/mbcache.c
> index 8c32ef3..5eb0476 100644
> --- a/fs/mbcache.c
> +++ b/fs/mbcache.c
> @@ -189,7 +189,7 @@ mb_cache_shrink_fn(struct shrinker *shrink, struct shrink_control *sc)
>  	list_for_each_entry_safe(entry, tmp, &free_list, e_lru_list) {
>  		__mb_cache_entry_forget(entry, gfp_mask);
>  	}
> -	return (count / 100) * sysctl_vfs_cache_pressure;
> +	return vfs_pressure_ratio(count);
>  }
>  
>  
> diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
> index b9e66b7..6002971 100644
> --- a/fs/nfs/dir.c
> +++ b/fs/nfs/dir.c
> @@ -1950,7 +1950,7 @@ remove_lru_entry:
>  	}
>  	spin_unlock(&nfs_access_lru_lock);
>  	nfs_access_free_list(&head);
> -	return (atomic_long_read(&nfs_access_nr_entries) / 100) * sysctl_vfs_cache_pressure;
> +	return vfs_pressure_ratio(atomic_long_read(&nfs_access_nr_entries));
>  }
>  
>  static void __nfs_access_zap_cache(struct nfs_inode *nfsi, struct list_head *head)
> diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
> index 05ae3c9..f90bdf2 100644
> --- a/fs/quota/dquot.c
> +++ b/fs/quota/dquot.c
> @@ -719,9 +719,8 @@ static int shrink_dqcache_memory(struct shrinker *shrink,
>  		prune_dqcache(nr);
>  		spin_unlock(&dq_list_lock);
>  	}
> -	return ((unsigned)
> -		percpu_counter_read_positive(&dqstats.counter[DQST_FREE_DQUOTS])
> -		/100) * sysctl_vfs_cache_pressure;
> +	return vfs_pressure_ratio(
> +	percpu_counter_read_positive(&dqstats.counter[DQST_FREE_DQUOTS]));
>  }
>  
>  static struct shrinker dqcache_shrinker = {
> diff --git a/fs/super.c b/fs/super.c
> index 12f1237..1302f63 100644
> --- a/fs/super.c
> +++ b/fs/super.c
> @@ -104,7 +104,7 @@ static int prune_super(struct shrinker *shrink, struct shrink_control *sc)
>  				sb->s_nr_inodes_unused + fs_objects;
>  	}
>  
> -	total_objects = (total_objects / 100) * sysctl_vfs_cache_pressure;
> +	total_objects = vfs_pressure_ratio(total_objects);
>  	drop_super(sb);
>  	return total_objects;
>  }
> diff --git a/fs/xfs/xfs_qm.c b/fs/xfs/xfs_qm.c
> index 2e86fa0..269ca79 100644
> --- a/fs/xfs/xfs_qm.c
> +++ b/fs/xfs/xfs_qm.c
> @@ -1566,7 +1566,7 @@ xfs_qm_shake(
>  	}
>  
>  out:
> -	return (qi->qi_lru_count / 100) * sysctl_vfs_cache_pressure;
> +	return vfs_pressure_ratio(qi->qi_lru_count);
>  }
>  
>  /*
> diff --git a/include/linux/dcache.h b/include/linux/dcache.h
> index 5920079..f30006c 100644
> --- a/include/linux/dcache.h
> +++ b/include/linux/dcache.h
> @@ -417,4 +417,8 @@ extern void d_clear_need_lookup(struct dentry *dentry);
>  
>  extern int sysctl_vfs_cache_pressure;
>  
> +static inline unsigned long vfs_pressure_ratio(unsigned long val)
> +{
> +	return mult_frac(val, sysctl_vfs_cache_pressure, 100);
> +}
>  #endif	/* __LINUX_DCACHE_H */
> -- 
> 1.7.11.7
> 

Looks Good,

Reviewed-by: Carlos Maiolino <cmaiolino@redhat.com>
-- 
Carlos


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH v2 2/2] vmscan: take at least one pass with shrinkers
  2012-12-21 10:46 ` [PATCH v2 2/2] vmscan: take at least one pass with shrinkers Glauber Costa
@ 2012-12-21 11:10   ` Carlos Maiolino
  2012-12-22 23:53   ` Dave Chinner
  1 sibling, 0 replies; 8+ messages in thread
From: Carlos Maiolino @ 2012-12-21 11:10 UTC (permalink / raw)
  To: Glauber Costa
  Cc: linux-fsdevel, linux-mm, cgroups, Dave Chinner, Mel Gorman,
	Johannes Weiner, Michal Hocko, kamezawa.hiroyu, Theodore Ts'o,
	Al Viro

On Fri, Dec 21, 2012 at 02:46:50PM +0400, Glauber Costa wrote:
> In very low free kernel memory situations, it may be the case that we
> have fewer objects to free than our initial batch size. If so, it is
> better to shrink those and open space for the new workload than to keep
> them and fail the new allocations.
> 
> More specifically, this happens because we encode the scan in a loop
> with the condition "while (total_scan >= batch_size)", so in such a case
> we never even enter the loop.
> 
> This patch turns it into a do {} while () loop, which guarantees that we
> scan at least once, while keeping the behaviour exactly the same for the
> cases in which total_scan >= batch_size.
> 
> Signed-off-by: Glauber Costa <glommer@parallels.com>
> Acked-by: Dave Chinner <david@fromorbit.com>
> CC: "Theodore Ts'o" <tytso@mit.edu>
> CC: Al Viro <viro@zeniv.linux.org.uk>
> ---
>  mm/vmscan.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 7f30961..fcd1aa0 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -280,7 +280,7 @@ unsigned long shrink_slab(struct shrink_control *shrink,
>  					nr_pages_scanned, lru_pages,
>  					max_pass, delta, total_scan);
>  
> -		while (total_scan >= batch_size) {
> +		do {
>  			int nr_before;
>  
>  			nr_before = do_shrinker_shrink(shrinker, shrink, 0);
> @@ -294,7 +294,7 @@ unsigned long shrink_slab(struct shrink_control *shrink,
>  			total_scan -= batch_size;
>  
>  			cond_resched();
> -		}
> +		} while (total_scan >= batch_size);
>  
>  		/*
>  		 * move the unused scan count back into the shrinker in a
> -- 
> 1.7.11.7
> 

Looks Good,
Reviewed-by: Carlos Maiolino <cmaiolino@redhat.com>
-- 
Carlos


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH v2 2/2] vmscan: take at least one pass with shrinkers
  2012-12-21 10:46 ` [PATCH v2 2/2] vmscan: take at least one pass with shrinkers Glauber Costa
  2012-12-21 11:10   ` Carlos Maiolino
@ 2012-12-22 23:53   ` Dave Chinner
  2012-12-22 23:56     ` Glauber Costa
  1 sibling, 1 reply; 8+ messages in thread
From: Dave Chinner @ 2012-12-22 23:53 UTC (permalink / raw)
  To: Glauber Costa
  Cc: linux-fsdevel, linux-mm, cgroups, Mel Gorman, Johannes Weiner,
	Michal Hocko, kamezawa.hiroyu, Theodore Ts'o, Al Viro

On Fri, Dec 21, 2012 at 02:46:50PM +0400, Glauber Costa wrote:
> In very low free kernel memory situations, it may be the case that we
> have fewer objects to free than our initial batch size. If so, it is
> better to shrink those and open space for the new workload than to keep
> them and fail the new allocations.
> 
> More specifically, this happens because we encode the scan in a loop
> with the condition "while (total_scan >= batch_size)", so in such a case
> we never even enter the loop.
> 
> This patch turns it into a do {} while () loop, which guarantees that we
> scan at least once, while keeping the behaviour exactly the same for the
> cases in which total_scan >= batch_size.
> 
> Signed-off-by: Glauber Costa <glommer@parallels.com>
> Acked-by: Dave Chinner <david@fromorbit.com>

I think you'll find I said:

Reviewed-by: Dave Chinner <dchinner@redhat.com>

That has a significantly different meaning to Acked-by, so you
should be careful to correctly transcribe tags back to the
patches...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH v2 2/2] vmscan: take at least one pass with shrinkers
  2012-12-22 23:53   ` Dave Chinner
@ 2012-12-22 23:56     ` Glauber Costa
  0 siblings, 0 replies; 8+ messages in thread
From: Glauber Costa @ 2012-12-22 23:56 UTC (permalink / raw)
  To: Dave Chinner
  Cc: linux-fsdevel, linux-mm, cgroups, Mel Gorman, Johannes Weiner,
	Michal Hocko, kamezawa.hiroyu, Theodore Ts'o, Al Viro

On 12/23/2012 03:53 AM, Dave Chinner wrote:
> On Fri, Dec 21, 2012 at 02:46:50PM +0400, Glauber Costa wrote:
>> In very low free kernel memory situations, it may be the case that we
>> have fewer objects to free than our initial batch size. If so, it is
>> better to shrink those and open space for the new workload than to keep
>> them and fail the new allocations.
>>
>> More specifically, this happens because we encode the scan in a loop
>> with the condition "while (total_scan >= batch_size)", so in such a case
>> we never even enter the loop.
>>
>> This patch turns it into a do {} while () loop, which guarantees that we
>> scan at least once, while keeping the behaviour exactly the same for the
>> cases in which total_scan >= batch_size.
>>
>> Signed-off-by: Glauber Costa <glommer@parallels.com>
>> Acked-by: Dave Chinner <david@fromorbit.com>
> 
> I think you'll find I said:
> 
> Reviewed-by: Dave Chinner <dchinner@redhat.com>
> 
> That has a significantly different meaning to Acked-by, so you
> should be careful to correctly transcribe tags back to the
> patches...
> 

Ooops

You are right Dave. That was obviously just lack of attention on my
side, not any attempt to upgrade your tag.

Thanks for spotting



^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH v2 0/2] slightly change shrinker behaviour for very small object sets
  2012-12-21 10:46 [PATCH v2 0/2] slightly change shrinker behaviour for very small object sets Glauber Costa
  2012-12-21 10:46 ` [PATCH v2 1/2] super: fix calculation of shrinkable objects for small numbers Glauber Costa
  2012-12-21 10:46 ` [PATCH v2 2/2] vmscan: take at least one pass with shrinkers Glauber Costa
@ 2013-01-14 19:58 ` Glauber Costa
  2 siblings, 0 replies; 8+ messages in thread
From: Glauber Costa @ 2013-01-14 19:58 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: linux-mm, cgroups, Dave Chinner, Mel Gorman, Johannes Weiner,
	Michal Hocko, kamezawa.hiroyu

On 12/21/2012 02:46 AM, Glauber Costa wrote:
> Hi,
> 
> * v2: fix sysctl_vfs_cache_pressure for all users.
> 
> I've recently noticed some glitches in the object shrinker mechanism
> when the number of shrinkable objects is very small. Such situations are
> theoretically possible but unlikely on an ordinary box; they can,
> however, become common in environments with many small containers
> (cgroups) on a single machine.
> 
> These patches came out of some experimentation I am doing with targeted
> shrinking for kmem-limited memory cgroups (Dave Chinner is already aware
> of this work). In such scenarios one can set the available memory to
> very low limits, and the problem becomes easy to reproduce.
> 
> 
Hi,

Who should pick this one up?

Are there any comments aside from Dave's Reviewed-by tag that I wrongly
transcribed?

Thanks!


^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2013-01-14 19:58 UTC | newest]

Thread overview: 8+ messages
2012-12-21 10:46 [PATCH v2 0/2] slightly change shrinker behaviour for very small object sets Glauber Costa
2012-12-21 10:46 ` [PATCH v2 1/2] super: fix calculation of shrinkable objects for small numbers Glauber Costa
2012-12-21 11:08   ` Carlos Maiolino
2012-12-21 10:46 ` [PATCH v2 2/2] vmscan: take at least one pass with shrinkers Glauber Costa
2012-12-21 11:10   ` Carlos Maiolino
2012-12-22 23:53   ` Dave Chinner
2012-12-22 23:56     ` Glauber Costa
2013-01-14 19:58 ` [PATCH v2 0/2] slightly change shrinker behaviour for very small object sets Glauber Costa
