public inbox for linux-kernel@vger.kernel.org
* [PATCH] memcg: fix shmem_unuse_inode charging
@ 2008-06-29  0:13 Hugh Dickins
  2008-06-29  0:14 ` [PATCH] memcg: shmem_getpage release page sooner Hugh Dickins
                   ` (3 more replies)
  0 siblings, 4 replies; 12+ messages in thread
From: Hugh Dickins @ 2008-06-29  0:13 UTC (permalink / raw)
  To: Andrew Morton; +Cc: KAMEZAWA Hiroyuki, Balbir Singh, linux-kernel

The mem_cgroup_cache_charge in shmem_unuse_inode has been moved after
the radix_tree_preload, so generating a BUG: sleeping function called
from invalid context.  Move it back, uncharging where necessary.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
---
Should follow mmotm's memcg-remove-refcnt-from-page_cgroup.patch

 mm/shmem.c |   28 ++++++++++++----------------
 1 file changed, 12 insertions(+), 16 deletions(-)

--- mmotm/mm/shmem.c	2008-06-27 13:39:20.000000000 +0100
+++ linux/mm/shmem.c	2008-06-27 17:25:41.000000000 +0100
@@ -922,27 +922,24 @@ found:
 	error = 1;
 	if (!inode)
 		goto out;
-	error = radix_tree_preload(GFP_KERNEL);
-	if (error)
-		goto out;
-	/*
-	 * Because we use GFP_NOWAIT in add_to_page_cache(), we can see -ENOMEM
-	 * failure because of memory pressure in memory resource controller.
-	 * Then, precharge page while we can wait, uncharge at failure will be
-	 * automatically done in add_to_page_cache()
-	 */
+	/* Precharge page using GFP_KERNEL while we can wait */
 	error = mem_cgroup_cache_charge(page, current->mm, GFP_KERNEL);
 	if (error)
-		goto preload_out;
-
+		goto out;
+	error = radix_tree_preload(GFP_KERNEL);
+	if (error) {
+		mem_cgroup_uncharge_cache_page(page);
+		goto out;
+	}
 	error = 1;
 
 	spin_lock(&info->lock);
 	ptr = shmem_swp_entry(info, idx, NULL);
-	if (ptr && ptr->val == entry.val)
-		error = add_to_page_cache(page, inode->i_mapping, idx,
-					GFP_NOWAIT);
-	else /* we don't have to account this page. */
+	if (ptr && ptr->val == entry.val) {
+		error = add_to_page_cache(page, inode->i_mapping,
+						idx, GFP_NOWAIT);
+		/* does mem_cgroup_uncharge_cache_page on error */
+	} else	/* we must compensate for our precharge above */
 		mem_cgroup_uncharge_cache_page(page);
 
 	if (error == -EEXIST) {
@@ -969,7 +966,6 @@ found:
 	if (ptr)
 		shmem_swp_unmap(ptr);
 	spin_unlock(&info->lock);
-preload_out:
 	radix_tree_preload_end();
 out:
 	unlock_page(page);

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH] memcg: shmem_getpage release page sooner
  2008-06-29  0:13 [PATCH] memcg: fix shmem_unuse_inode charging Hugh Dickins
@ 2008-06-29  0:14 ` Hugh Dickins
  2008-06-30  2:28   ` KAMEZAWA Hiroyuki
  2008-06-29  0:15 ` [PATCH] memcg: mem_cgroup_shrink_usage css_put Hugh Dickins
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 12+ messages in thread
From: Hugh Dickins @ 2008-06-29  0:14 UTC (permalink / raw)
  To: Andrew Morton; +Cc: KAMEZAWA Hiroyuki, Balbir Singh, linux-kernel

No big deal, but since mem_cgroup_shrink_usage doesn't require a page to
operate upon, page_cache_release the swappage before calling it, so it's
not pinned across the reclaim.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
---
Should follow mmotm's memcg-helper-function-for-relcaim-from-shmem.patch

 mm/shmem.c |    8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

--- mmotm/mm/shmem.c	2008-06-27 13:39:20.000000000 +0100
+++ linux/mm/shmem.c	2008-06-27 17:25:41.000000000 +0100
@@ -1315,16 +1315,14 @@ repeat:
 			shmem_swp_unmap(entry);
 			spin_unlock(&info->lock);
 			unlock_page(swappage);
+			page_cache_release(swappage);
 			if (error == -ENOMEM) {
 				/* allow reclaim from this memory cgroup */
 				error = mem_cgroup_shrink_usage(current->mm,
-					gfp & ~__GFP_HIGHMEM);
-				if (error) {
-					page_cache_release(swappage);
+								gfp);
+				if (error)
 					goto failed;
-				}
 			}
-			page_cache_release(swappage);
 			goto repeat;
 		}
 	} else if (sgp == SGP_READ && !filepage) {


* [PATCH] memcg: mem_cgroup_shrink_usage css_put
  2008-06-29  0:13 [PATCH] memcg: fix shmem_unuse_inode charging Hugh Dickins
  2008-06-29  0:14 ` [PATCH] memcg: shmem_getpage release page sooner Hugh Dickins
@ 2008-06-29  0:15 ` Hugh Dickins
  2008-06-30  2:27   ` KAMEZAWA Hiroyuki
  2008-06-29  0:17 ` [PATCH] memcg: further checking of disabled flag Hugh Dickins
  2008-06-30  2:23 ` [PATCH] memcg: fix shmem_unuse_inode charging KAMEZAWA Hiroyuki
  3 siblings, 1 reply; 12+ messages in thread
From: Hugh Dickins @ 2008-06-29  0:15 UTC (permalink / raw)
  To: Andrew Morton; +Cc: KAMEZAWA Hiroyuki, Balbir Singh, linux-kernel

mem_cgroup_shrink_usage makes no charge: balance its css_get with a css_put.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
---
Should follow mmotm's memcg-helper-function-for-relcaim-from-shmem.patch

 mm/memcontrol.c |    1 +
 1 file changed, 1 insertion(+)

--- mmotm/mm/memcontrol.c	2008-06-27 13:39:20.000000000 +0100
+++ linux/mm/memcontrol.c	2008-06-27 17:32:29.000000000 +0100
@@ -801,6 +801,7 @@ int mem_cgroup_shrink_usage(struct mm_st
 		progress = try_to_free_mem_cgroup_pages(mem, gfp_mask);
 	} while (!progress && --retry);
 
+	css_put(&mem->css);
 	if (!retry)
 		return -ENOMEM;
 	return 0;


* [PATCH] memcg: further checking of disabled flag
  2008-06-29  0:13 [PATCH] memcg: fix shmem_unuse_inode charging Hugh Dickins
  2008-06-29  0:14 ` [PATCH] memcg: shmem_getpage release page sooner Hugh Dickins
  2008-06-29  0:15 ` [PATCH] memcg: mem_cgroup_shrink_usage css_put Hugh Dickins
@ 2008-06-29  0:17 ` Hugh Dickins
  2008-06-30  2:26   ` KAMEZAWA Hiroyuki
  2008-06-30  6:05   ` Li Zefan
  2008-06-30  2:23 ` [PATCH] memcg: fix shmem_unuse_inode charging KAMEZAWA Hiroyuki
  3 siblings, 2 replies; 12+ messages in thread
From: Hugh Dickins @ 2008-06-29  0:17 UTC (permalink / raw)
  To: Andrew Morton; +Cc: KAMEZAWA Hiroyuki, Balbir Singh, Li Zefan, linux-kernel

Further adjustments to the mem_cgroup_subsys.disabled tests: add one to
mem_cgroup_shrink_usage; move mem_cgroup_charge_common's into its callers,
before they've done any work; and add one to mem_cgroup_move_lists, to
avoid the overhead of its bit spin locking and unlocking.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
---
Should follow mmotm's memcg-clean-up-checking-of-the-disabled-flag.patch

 mm/memcontrol.c |   15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

--- mmotm/mm/memcontrol.c	2008-06-27 13:39:20.000000000 +0100
+++ linux/mm/memcontrol.c	2008-06-27 17:32:29.000000000 +0100
@@ -354,6 +354,9 @@ void mem_cgroup_move_lists(struct page *
 	struct mem_cgroup_per_zone *mz;
 	unsigned long flags;
 
+	if (mem_cgroup_subsys.disabled)
+		return;
+
 	/*
 	 * We cannot lock_page_cgroup while holding zone's lru_lock,
 	 * because other holders of lock_page_cgroup can be interrupted
@@ -533,9 +536,6 @@ static int mem_cgroup_charge_common(stru
 	unsigned long nr_retries = MEM_CGROUP_RECLAIM_RETRIES;
 	struct mem_cgroup_per_zone *mz;
 
-	if (mem_cgroup_subsys.disabled)
-		return 0;
-
 	pc = kmem_cache_alloc(page_cgroup_cache, gfp_mask);
 	if (unlikely(pc == NULL))
 		goto err;
@@ -620,6 +620,9 @@ err:
 
 int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
 {
+	if (mem_cgroup_subsys.disabled)
+		return 0;
+
 	/*
 	 * If already mapped, we don't have to account.
 	 * If page cache, page->mapping has address_space.
@@ -638,6 +641,9 @@ int mem_cgroup_charge(struct page *page,
 int mem_cgroup_cache_charge(struct page *page, struct mm_struct *mm,
 				gfp_t gfp_mask)
 {
+	if (mem_cgroup_subsys.disabled)
+		return 0;
+
 	/*
 	 * Corner case handling. This is called from add_to_page_cache()
 	 * in usual. But some FS (shmem) precharges this page before calling it
@@ -789,6 +795,9 @@ int mem_cgroup_shrink_usage(struct mm_st
 	int progress = 0;
 	int retry = MEM_CGROUP_RECLAIM_RETRIES;
 
+	if (mem_cgroup_subsys.disabled)
+		return 0;
+
 	rcu_read_lock();
 	mem = mem_cgroup_from_task(rcu_dereference(mm->owner));
 	css_get(&mem->css);


* Re: [PATCH] memcg: fix shmem_unuse_inode charging
  2008-06-29  0:13 [PATCH] memcg: fix shmem_unuse_inode charging Hugh Dickins
                   ` (2 preceding siblings ...)
  2008-06-29  0:17 ` [PATCH] memcg: further checking of disabled flag Hugh Dickins
@ 2008-06-30  2:23 ` KAMEZAWA Hiroyuki
  2008-06-30  6:26   ` Balbir Singh
  3 siblings, 1 reply; 12+ messages in thread
From: KAMEZAWA Hiroyuki @ 2008-06-30  2:23 UTC (permalink / raw)
  To: Hugh Dickins; +Cc: Andrew Morton, Balbir Singh, linux-kernel

On Sun, 29 Jun 2008 01:13:20 +0100 (BST)
Hugh Dickins <hugh@veritas.com> wrote:

> The mem_cgroup_cache_charge in shmem_unuse_inode has been moved after
> the radix_tree_preload, so generating a BUG: sleeping function called
> from invalid context.  Move it back, uncharging where necessary.
> 
> Signed-off-by: Hugh Dickins <hugh@veritas.com>

Sure, thanks,

Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>


* Re: [PATCH] memcg: further checking of disabled flag
  2008-06-29  0:17 ` [PATCH] memcg: further checking of disabled flag Hugh Dickins
@ 2008-06-30  2:26   ` KAMEZAWA Hiroyuki
  2008-06-30  6:27     ` Balbir Singh
  2008-06-30  6:05   ` Li Zefan
  1 sibling, 1 reply; 12+ messages in thread
From: KAMEZAWA Hiroyuki @ 2008-06-30  2:26 UTC (permalink / raw)
  To: Hugh Dickins; +Cc: Andrew Morton, Balbir Singh, Li Zefan, linux-kernel

On Sun, 29 Jun 2008 01:17:17 +0100 (BST)
Hugh Dickins <hugh@veritas.com> wrote:

> Further adjustments to the mem_cgroup_subsys.disabled tests: add one to
> mem_cgroup_shrink_usage; move mem_cgroup_charge_common's into its callers,
> before they've done any work; and add one to mem_cgroup_move_lists, to
> avoid the overhead of its bit spin locking and unlocking.
> 
> Signed-off-by: Hugh Dickins <hugh@veritas.com>

seems better

Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>



* Re: [PATCH] memcg: mem_cgroup_shrink_usage css_put
  2008-06-29  0:15 ` [PATCH] memcg: mem_cgroup_shrink_usage css_put Hugh Dickins
@ 2008-06-30  2:27   ` KAMEZAWA Hiroyuki
  2008-06-30  6:28     ` Balbir Singh
  0 siblings, 1 reply; 12+ messages in thread
From: KAMEZAWA Hiroyuki @ 2008-06-30  2:27 UTC (permalink / raw)
  To: Hugh Dickins; +Cc: Andrew Morton, Balbir Singh, linux-kernel

On Sun, 29 Jun 2008 01:15:38 +0100 (BST)
Hugh Dickins <hugh@veritas.com> wrote:

> mem_cgroup_shrink_usage makes no charge: balance its css_get with a css_put.
> 
> Signed-off-by: Hugh Dickins <hugh@veritas.com>

Sorry for my mistakes

Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>




* Re: [PATCH] memcg: shmem_getpage release page sooner
  2008-06-29  0:14 ` [PATCH] memcg: shmem_getpage release page sooner Hugh Dickins
@ 2008-06-30  2:28   ` KAMEZAWA Hiroyuki
  0 siblings, 0 replies; 12+ messages in thread
From: KAMEZAWA Hiroyuki @ 2008-06-30  2:28 UTC (permalink / raw)
  To: Hugh Dickins; +Cc: Andrew Morton, Balbir Singh, linux-kernel

On Sun, 29 Jun 2008 01:14:30 +0100 (BST)
Hugh Dickins <hugh@veritas.com> wrote:

> No big deal, but since mem_cgroup_shrink_usage doesn't require a page to
> operate upon, page_cache_release the swappage before calling it, so it's
> not pinned across the reclaim.
> 
> Signed-off-by: Hugh Dickins <hugh@veritas.com>

this one is better, thanks.

Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>



* Re: [PATCH] memcg: further checking of disabled flag
  2008-06-29  0:17 ` [PATCH] memcg: further checking of disabled flag Hugh Dickins
  2008-06-30  2:26   ` KAMEZAWA Hiroyuki
@ 2008-06-30  6:05   ` Li Zefan
  1 sibling, 0 replies; 12+ messages in thread
From: Li Zefan @ 2008-06-30  6:05 UTC (permalink / raw)
  To: Hugh Dickins; +Cc: Andrew Morton, KAMEZAWA Hiroyuki, Balbir Singh, linux-kernel

Hugh Dickins wrote:
> Further adjustments to the mem_cgroup_subsys.disabled tests: add one to
> mem_cgroup_shrink_usage; move mem_cgroup_charge_common's into its callers,
> before they've done any work; and add one to mem_cgroup_move_lists, to
> avoid the overhead of its bit spin locking and unlocking.
> 
> Signed-off-by: Hugh Dickins <hugh@veritas.com>

Reviewed-by: Li Zefan <lizf@cn.fujitsu.com>

> ---
> Should follow mmotm's memcg-clean-up-checking-of-the-disabled-flag.patch
> 

But this needn't follow memcg-clean-up-checking-of-the-disabled-flag.patch,
because this patch is not a fix to it. :)


* Re: [PATCH] memcg: fix shmem_unuse_inode charging
  2008-06-30  2:23 ` [PATCH] memcg: fix shmem_unuse_inode charging KAMEZAWA Hiroyuki
@ 2008-06-30  6:26   ` Balbir Singh
  0 siblings, 0 replies; 12+ messages in thread
From: Balbir Singh @ 2008-06-30  6:26 UTC (permalink / raw)
  To: KAMEZAWA Hiroyuki; +Cc: Hugh Dickins, Andrew Morton, Balbir Singh, linux-kernel

KAMEZAWA Hiroyuki wrote:
> On Sun, 29 Jun 2008 01:13:20 +0100 (BST)
> Hugh Dickins <hugh@veritas.com> wrote:
> 
>> The mem_cgroup_cache_charge in shmem_unuse_inode has been moved after
>> the radix_tree_preload, so generating a BUG: sleeping function called
>> from invalid context.  Move it back, uncharging where necessary.
>>
>> Signed-off-by: Hugh Dickins <hugh@veritas.com>
> 
> Sure, thanks,
> 
> Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>

Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>


-- 
	Warm Regards,
	Balbir Singh
	Linux Technology Center
	IBM, ISTL


* Re: [PATCH] memcg: further checking of disabled flag
  2008-06-30  2:26   ` KAMEZAWA Hiroyuki
@ 2008-06-30  6:27     ` Balbir Singh
  0 siblings, 0 replies; 12+ messages in thread
From: Balbir Singh @ 2008-06-30  6:27 UTC (permalink / raw)
  To: KAMEZAWA Hiroyuki
  Cc: Hugh Dickins, Andrew Morton, Balbir Singh, Li Zefan, linux-kernel

KAMEZAWA Hiroyuki wrote:
> On Sun, 29 Jun 2008 01:17:17 +0100 (BST)
> Hugh Dickins <hugh@veritas.com> wrote:
> 
>> Further adjustments to the mem_cgroup_subsys.disabled tests: add one to
>> mem_cgroup_shrink_usage; move mem_cgroup_charge_common's into its callers,
>> before they've done any work; and add one to mem_cgroup_move_lists, to
>> avoid the overhead of its bit spin locking and unlocking.
>>
>> Signed-off-by: Hugh Dickins <hugh@veritas.com>
> 
> seems better
> 
> Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>

Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>

-- 
	Warm Regards,
	Balbir Singh
	Linux Technology Center
	IBM, ISTL


* Re: [PATCH] memcg: mem_cgroup_shrink_usage css_put
  2008-06-30  2:27   ` KAMEZAWA Hiroyuki
@ 2008-06-30  6:28     ` Balbir Singh
  0 siblings, 0 replies; 12+ messages in thread
From: Balbir Singh @ 2008-06-30  6:28 UTC (permalink / raw)
  To: KAMEZAWA Hiroyuki; +Cc: Hugh Dickins, Andrew Morton, Balbir Singh, linux-kernel

KAMEZAWA Hiroyuki wrote:
> On Sun, 29 Jun 2008 01:15:38 +0100 (BST)
> Hugh Dickins <hugh@veritas.com> wrote:
> 
>> mem_cgroup_shrink_usage makes no charge: balance its css_get with a css_put.
>>
>> Signed-off-by: Hugh Dickins <hugh@veritas.com>
> 
> Sorry for my mistakes
> 
> Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>

Reviewed-by: Balbir Singh <balbir@linux.vnet.ibm.com>

-- 
	Warm Regards,
	Balbir Singh
	Linux Technology Center
	IBM, ISTL


Thread overview: 12+ messages
-- links below jump to the message on this page --
2008-06-29  0:13 [PATCH] memcg: fix shmem_unuse_inode charging Hugh Dickins
2008-06-29  0:14 ` [PATCH] memcg: shmem_getpage release page sooner Hugh Dickins
2008-06-30  2:28   ` KAMEZAWA Hiroyuki
2008-06-29  0:15 ` [PATCH] memcg: mem_cgroup_shrink_usage css_put Hugh Dickins
2008-06-30  2:27   ` KAMEZAWA Hiroyuki
2008-06-30  6:28     ` Balbir Singh
2008-06-29  0:17 ` [PATCH] memcg: further checking of disabled flag Hugh Dickins
2008-06-30  2:26   ` KAMEZAWA Hiroyuki
2008-06-30  6:27     ` Balbir Singh
2008-06-30  6:05   ` Li Zefan
2008-06-30  2:23 ` [PATCH] memcg: fix shmem_unuse_inode charging KAMEZAWA Hiroyuki
2008-06-30  6:26   ` Balbir Singh
