* [PATCH 02/15] mm,ksm: use new hashtable implementation
       [not found] <1355756497-15834-1-git-send-email-sasha.levin@oracle.com>
@ 2012-12-17 15:01 ` Sasha Levin
  2012-12-18  3:25   ` Hugh Dickins
  2012-12-17 15:01 ` [PATCH 04/15] mm/huge_memory: " Sasha Levin
  1 sibling, 1 reply; 7+ messages in thread
From: Sasha Levin @ 2012-12-17 15:01 UTC (permalink / raw)
  To: Andrew Morton, Hugh Dickins, Michal Hocko, Konstantin Khlebnikov,
	Bob Liu, linux-mm, linux-kernel
  Cc: Sasha Levin

Switch ksm to use the new hashtable implementation. This reduces the amount of
generic unrelated code in the ksm module.

This patch depends on d9b482c ("hashtable: introduce a small and naive
hashtable") which was merged in v3.6.

Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
---
 mm/ksm.c | 31 +++++++++++++------------------
 1 file changed, 13 insertions(+), 18 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 382d930..e888f54 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -33,7 +33,7 @@
 #include <linux/mmu_notifier.h>
 #include <linux/swap.h>
 #include <linux/ksm.h>
-#include <linux/hash.h>
+#include <linux/hashtable.h>
 #include <linux/freezer.h>
 #include <linux/oom.h>
 
@@ -156,9 +156,8 @@ struct rmap_item {
 static struct rb_root root_stable_tree = RB_ROOT;
 static struct rb_root root_unstable_tree = RB_ROOT;
 
-#define MM_SLOTS_HASH_SHIFT 10
-#define MM_SLOTS_HASH_HEADS (1 << MM_SLOTS_HASH_SHIFT)
-static struct hlist_head mm_slots_hash[MM_SLOTS_HASH_HEADS];
+#define MM_SLOTS_HASH_BITS 10
+static DEFINE_HASHTABLE(mm_slots_hash, MM_SLOTS_HASH_BITS);
 
 static struct mm_slot ksm_mm_head = {
 	.mm_list = LIST_HEAD_INIT(ksm_mm_head.mm_list),
@@ -275,26 +274,21 @@ static inline void free_mm_slot(struct mm_slot *mm_slot)
 
 static struct mm_slot *get_mm_slot(struct mm_struct *mm)
 {
-	struct mm_slot *mm_slot;
-	struct hlist_head *bucket;
 	struct hlist_node *node;
+	struct mm_slot *slot;
+
+	hash_for_each_possible(mm_slots_hash, slot, node, link, (unsigned long)mm) 
+		if (slot->mm == mm)
+			return slot;
 
-	bucket = &mm_slots_hash[hash_ptr(mm, MM_SLOTS_HASH_SHIFT)];
-	hlist_for_each_entry(mm_slot, node, bucket, link) {
-		if (mm == mm_slot->mm)
-			return mm_slot;
-	}
 	return NULL;
 }
 
 static void insert_to_mm_slots_hash(struct mm_struct *mm,
 				    struct mm_slot *mm_slot)
 {
-	struct hlist_head *bucket;
-
-	bucket = &mm_slots_hash[hash_ptr(mm, MM_SLOTS_HASH_SHIFT)];
 	mm_slot->mm = mm;
-	hlist_add_head(&mm_slot->link, bucket);
+	hash_add(mm_slots_hash, &mm_slot->link, (unsigned long)mm);
 }
 
 static inline int in_stable_tree(struct rmap_item *rmap_item)
@@ -647,7 +641,7 @@ static int unmerge_and_remove_all_rmap_items(void)
 		ksm_scan.mm_slot = list_entry(mm_slot->mm_list.next,
 						struct mm_slot, mm_list);
 		if (ksm_test_exit(mm)) {
-			hlist_del(&mm_slot->link);
+			hash_del(&mm_slot->link);
 			list_del(&mm_slot->mm_list);
 			spin_unlock(&ksm_mmlist_lock);
 
@@ -1392,7 +1386,7 @@ next_mm:
 		 * or when all VM_MERGEABLE areas have been unmapped (and
 		 * mmap_sem then protects against race with MADV_MERGEABLE).
 		 */
-		hlist_del(&slot->link);
+		hash_del(&slot->link);
 		list_del(&slot->mm_list);
 		spin_unlock(&ksm_mmlist_lock);
 
@@ -1559,7 +1553,7 @@ void __ksm_exit(struct mm_struct *mm)
 	mm_slot = get_mm_slot(mm);
 	if (mm_slot && ksm_scan.mm_slot != mm_slot) {
 		if (!mm_slot->rmap_list) {
-			hlist_del(&mm_slot->link);
+			hash_del(&mm_slot->link);
 			list_del(&mm_slot->mm_list);
 			easy_to_free = 1;
 		} else {
@@ -2035,6 +2029,7 @@ static int __init ksm_init(void)
 	 */
 	hotplug_memory_notifier(ksm_memory_callback, 100);
 #endif
+
 	return 0;
 
 out_free:
-- 
1.8.0


* [PATCH 04/15] mm/huge_memory: use new hashtable implementation
       [not found] <1355756497-15834-1-git-send-email-sasha.levin@oracle.com>
  2012-12-17 15:01 ` [PATCH 02/15] mm,ksm: use new hashtable implementation Sasha Levin
@ 2012-12-17 15:01 ` Sasha Levin
  2012-12-19 22:26   ` David Rientjes
  1 sibling, 1 reply; 7+ messages in thread
From: Sasha Levin @ 2012-12-17 15:01 UTC (permalink / raw)
  To: Andrew Morton, Kirill A. Shutemov, Xiao Guangrong,
	Andrea Arcangeli, David Rientjes, linux-mm, linux-kernel
  Cc: Sasha Levin

Switch huge_memory to use the new hashtable implementation. This reduces the
amount of generic unrelated code in huge_memory.

This also removes the dynamic allocation of the hash table. The size of the
table is constant, so there's no point in paying the price of an extra
dereference when accessing it.
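
To make the dereference point concrete, a rough before/after sketch
(declarations taken from the diff below, comments added for illustration):

        /* before: the table sits behind a pointer filled in at init time,
         * so every access loads mm_slots_hash first and then indexes it */
        static struct hlist_head *mm_slots_hash __read_mostly;

        /* after: the table is a static array whose address is a link-time
         * constant, so that extra pointer load disappears (at the cost of
         * the array always being part of the image) */
        static DEFINE_HASHTABLE(mm_slots_hash, MM_SLOTS_HASH_BITS);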

This patch depends on d9b482c ("hashtable: introduce a small and naive
hashtable") which was merged in v3.6.

Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
---
 mm/huge_memory.c | 53 ++++++++++++-----------------------------------------
 1 file changed, 12 insertions(+), 41 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 827d9c8..2a0ef01 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -20,6 +20,7 @@
 #include <linux/mman.h>
 #include <linux/pagemap.h>
 
+#include <linux/hashtable.h>
 #include <asm/tlb.h>
 #include <asm/pgalloc.h>
 #include "internal.h"
@@ -61,12 +62,12 @@ static DECLARE_WAIT_QUEUE_HEAD(khugepaged_wait);
 static unsigned int khugepaged_max_ptes_none __read_mostly = HPAGE_PMD_NR-1;
 
 static int khugepaged(void *none);
-static int mm_slots_hash_init(void);
 static int khugepaged_slab_init(void);
 static void khugepaged_slab_free(void);
 
-#define MM_SLOTS_HASH_HEADS 1024
-static struct hlist_head *mm_slots_hash __read_mostly;
+#define MM_SLOTS_HASH_BITS 10
+static DEFINE_HASHTABLE(mm_slots_hash, MM_SLOTS_HASH_BITS);
+
 static struct kmem_cache *mm_slot_cache __read_mostly;
 
 /**
@@ -633,12 +634,6 @@ static int __init hugepage_init(void)
 	if (err)
 		goto out;
 
-	err = mm_slots_hash_init();
-	if (err) {
-		khugepaged_slab_free();
-		goto out;
-	}
-
 	register_shrinker(&huge_zero_page_shrinker);
 
 	/*
@@ -1821,47 +1816,23 @@ static inline void free_mm_slot(struct mm_slot *mm_slot)
 	kmem_cache_free(mm_slot_cache, mm_slot);
 }
 
-static int __init mm_slots_hash_init(void)
-{
-	mm_slots_hash = kzalloc(MM_SLOTS_HASH_HEADS * sizeof(struct hlist_head),
-				GFP_KERNEL);
-	if (!mm_slots_hash)
-		return -ENOMEM;
-	return 0;
-}
-
-#if 0
-static void __init mm_slots_hash_free(void)
-{
-	kfree(mm_slots_hash);
-	mm_slots_hash = NULL;
-}
-#endif
-
 static struct mm_slot *get_mm_slot(struct mm_struct *mm)
 {
-	struct mm_slot *mm_slot;
-	struct hlist_head *bucket;
+	struct mm_slot *slot;
 	struct hlist_node *node;
 
-	bucket = &mm_slots_hash[((unsigned long)mm / sizeof(struct mm_struct))
-				% MM_SLOTS_HASH_HEADS];
-	hlist_for_each_entry(mm_slot, node, bucket, hash) {
-		if (mm == mm_slot->mm)
-			return mm_slot;
-	}
+	hash_for_each_possible(mm_slots_hash, slot, node, hash, (unsigned long) mm)
+		if (slot->mm == mm)
+			return slot;
+
 	return NULL;
 }
 
 static void insert_to_mm_slots_hash(struct mm_struct *mm,
 				    struct mm_slot *mm_slot)
 {
-	struct hlist_head *bucket;
-
-	bucket = &mm_slots_hash[((unsigned long)mm / sizeof(struct mm_struct))
-				% MM_SLOTS_HASH_HEADS];
 	mm_slot->mm = mm;
-	hlist_add_head(&mm_slot->hash, bucket);
+	hash_add(mm_slots_hash, &mm_slot->hash, (long)mm);
 }
 
 static inline int khugepaged_test_exit(struct mm_struct *mm)
@@ -1930,7 +1901,7 @@ void __khugepaged_exit(struct mm_struct *mm)
 	spin_lock(&khugepaged_mm_lock);
 	mm_slot = get_mm_slot(mm);
 	if (mm_slot && khugepaged_scan.mm_slot != mm_slot) {
-		hlist_del(&mm_slot->hash);
+		hash_del(&mm_slot->hash);
 		list_del(&mm_slot->mm_node);
 		free = 1;
 	}
@@ -2379,7 +2350,7 @@ static void collect_mm_slot(struct mm_slot *mm_slot)
 
 	if (khugepaged_test_exit(mm)) {
 		/* free mm_slot */
-		hlist_del(&mm_slot->hash);
+		hash_del(&mm_slot->hash);
 		list_del(&mm_slot->mm_node);
 
 		/*
-- 
1.8.0


* Re: [PATCH 02/15] mm,ksm: use new hashtable implementation
  2012-12-17 15:01 ` [PATCH 02/15] mm,ksm: use new hashtable implementation Sasha Levin
@ 2012-12-18  3:25   ` Hugh Dickins
  0 siblings, 0 replies; 7+ messages in thread
From: Hugh Dickins @ 2012-12-18  3:25 UTC (permalink / raw)
  To: Sasha Levin
  Cc: Andrew Morton, Michal Hocko, Konstantin Khlebnikov, Bob Liu,
	linux-mm, linux-kernel

On Mon, 17 Dec 2012, Sasha Levin wrote:
> Switch ksm to use the new hashtable implementation. This reduces the amount of
> generic unrelated code in the ksm module.
> 
> This patch depends on d9b482c ("hashtable: introduce a small and naive
> hashtable") which was merged in v3.6.
> 
> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>

This seems fine, thanks:
except please drop that irrelevant final hunk to ksm_init(), then
Acked-by: Hugh Dickins <hughd@google.com>

> ---
>  mm/ksm.c | 31 +++++++++++++------------------
>  1 file changed, 13 insertions(+), 18 deletions(-)
> 
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 382d930..e888f54 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -33,7 +33,7 @@
>  #include <linux/mmu_notifier.h>
>  #include <linux/swap.h>
>  #include <linux/ksm.h>
> -#include <linux/hash.h>
> +#include <linux/hashtable.h>
>  #include <linux/freezer.h>
>  #include <linux/oom.h>
>  
> @@ -156,9 +156,8 @@ struct rmap_item {
>  static struct rb_root root_stable_tree = RB_ROOT;
>  static struct rb_root root_unstable_tree = RB_ROOT;
>  
> -#define MM_SLOTS_HASH_SHIFT 10
> -#define MM_SLOTS_HASH_HEADS (1 << MM_SLOTS_HASH_SHIFT)
> -static struct hlist_head mm_slots_hash[MM_SLOTS_HASH_HEADS];
> +#define MM_SLOTS_HASH_BITS 10
> +static DEFINE_HASHTABLE(mm_slots_hash, MM_SLOTS_HASH_BITS);
>  
>  static struct mm_slot ksm_mm_head = {
>  	.mm_list = LIST_HEAD_INIT(ksm_mm_head.mm_list),
> @@ -275,26 +274,21 @@ static inline void free_mm_slot(struct mm_slot *mm_slot)
>  
>  static struct mm_slot *get_mm_slot(struct mm_struct *mm)
>  {
> -	struct mm_slot *mm_slot;
> -	struct hlist_head *bucket;
>  	struct hlist_node *node;
> +	struct mm_slot *slot;
> +
> +	hash_for_each_possible(mm_slots_hash, slot, node, link, (unsigned long)mm) 
> +		if (slot->mm == mm)
> +			return slot;
>  
> -	bucket = &mm_slots_hash[hash_ptr(mm, MM_SLOTS_HASH_SHIFT)];
> -	hlist_for_each_entry(mm_slot, node, bucket, link) {
> -		if (mm == mm_slot->mm)
> -			return mm_slot;
> -	}
>  	return NULL;
>  }
>  
>  static void insert_to_mm_slots_hash(struct mm_struct *mm,
>  				    struct mm_slot *mm_slot)
>  {
> -	struct hlist_head *bucket;
> -
> -	bucket = &mm_slots_hash[hash_ptr(mm, MM_SLOTS_HASH_SHIFT)];
>  	mm_slot->mm = mm;
> -	hlist_add_head(&mm_slot->link, bucket);
> +	hash_add(mm_slots_hash, &mm_slot->link, (unsigned long)mm);
>  }
>  
>  static inline int in_stable_tree(struct rmap_item *rmap_item)
> @@ -647,7 +641,7 @@ static int unmerge_and_remove_all_rmap_items(void)
>  		ksm_scan.mm_slot = list_entry(mm_slot->mm_list.next,
>  						struct mm_slot, mm_list);
>  		if (ksm_test_exit(mm)) {
> -			hlist_del(&mm_slot->link);
> +			hash_del(&mm_slot->link);
>  			list_del(&mm_slot->mm_list);
>  			spin_unlock(&ksm_mmlist_lock);
>  
> @@ -1392,7 +1386,7 @@ next_mm:
>  		 * or when all VM_MERGEABLE areas have been unmapped (and
>  		 * mmap_sem then protects against race with MADV_MERGEABLE).
>  		 */
> -		hlist_del(&slot->link);
> +		hash_del(&slot->link);
>  		list_del(&slot->mm_list);
>  		spin_unlock(&ksm_mmlist_lock);
>  
> @@ -1559,7 +1553,7 @@ void __ksm_exit(struct mm_struct *mm)
>  	mm_slot = get_mm_slot(mm);
>  	if (mm_slot && ksm_scan.mm_slot != mm_slot) {
>  		if (!mm_slot->rmap_list) {
> -			hlist_del(&mm_slot->link);
> +			hash_del(&mm_slot->link);
>  			list_del(&mm_slot->mm_list);
>  			easy_to_free = 1;
>  		} else {
> @@ -2035,6 +2029,7 @@ static int __init ksm_init(void)
>  	 */
>  	hotplug_memory_notifier(ksm_memory_callback, 100);
>  #endif
> +
>  	return 0;
>  
>  out_free:
> -- 
> 1.8.0


* Re: [PATCH 04/15] mm/huge_memory: use new hashtable implementation
  2012-12-17 15:01 ` [PATCH 04/15] mm/huge_memory: " Sasha Levin
@ 2012-12-19 22:26   ` David Rientjes
  2012-12-20  2:25     ` Sasha Levin
  0 siblings, 1 reply; 7+ messages in thread
From: David Rientjes @ 2012-12-19 22:26 UTC (permalink / raw)
  To: Sasha Levin
  Cc: Andrew Morton, Kirill A. Shutemov, Xiao Guangrong,
	Andrea Arcangeli, linux-mm, linux-kernel

On Mon, 17 Dec 2012, Sasha Levin wrote:

> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 827d9c8..2a0ef01 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -20,6 +20,7 @@
>  #include <linux/mman.h>
>  #include <linux/pagemap.h>
>  
> +#include <linux/hashtable.h>
>  #include <asm/tlb.h>
>  #include <asm/pgalloc.h>
>  #include "internal.h"

Why group this with the asm includes?

> @@ -61,12 +62,12 @@ static DECLARE_WAIT_QUEUE_HEAD(khugepaged_wait);
>  static unsigned int khugepaged_max_ptes_none __read_mostly = HPAGE_PMD_NR-1;
>  
>  static int khugepaged(void *none);
> -static int mm_slots_hash_init(void);
>  static int khugepaged_slab_init(void);
>  static void khugepaged_slab_free(void);
>  

You're removing the only caller of khugepaged_slab_free() too, so its
forward declaration should go along with it.

> -#define MM_SLOTS_HASH_HEADS 1024
> -static struct hlist_head *mm_slots_hash __read_mostly;
> +#define MM_SLOTS_HASH_BITS 10
> +static DEFINE_HASHTABLE(mm_slots_hash, MM_SLOTS_HASH_BITS);
> +

What happened to the __read_mostly?

This used to be dynamically allocated and would save the 8KB that you 
statically allocate if transparent hugepages cannot be used.  The generic 
hashtable implementation does not support dynamic allocation?

>  static struct kmem_cache *mm_slot_cache __read_mostly;
>  
>  /**
> @@ -633,12 +634,6 @@ static int __init hugepage_init(void)
>  	if (err)
>  		goto out;
>  
> -	err = mm_slots_hash_init();
> -	if (err) {
> -		khugepaged_slab_free();

This is the only use of khugepaged_slab_free(), so the function should be 
removed as well.

> -		goto out;
> -	}
> -
>  	register_shrinker(&huge_zero_page_shrinker);
>  
>  	/*
> @@ -1821,47 +1816,23 @@ static inline void free_mm_slot(struct mm_slot *mm_slot)
>  	kmem_cache_free(mm_slot_cache, mm_slot);
>  }
>  
> -static int __init mm_slots_hash_init(void)
> -{
> -	mm_slots_hash = kzalloc(MM_SLOTS_HASH_HEADS * sizeof(struct hlist_head),
> -				GFP_KERNEL);
> -	if (!mm_slots_hash)
> -		return -ENOMEM;
> -	return 0;
> -}
> -
> -#if 0
> -static void __init mm_slots_hash_free(void)
> -{
> -	kfree(mm_slots_hash);
> -	mm_slots_hash = NULL;
> -}
> -#endif
> -
>  static struct mm_slot *get_mm_slot(struct mm_struct *mm)
>  {
> -	struct mm_slot *mm_slot;
> -	struct hlist_head *bucket;
> +	struct mm_slot *slot;
>  	struct hlist_node *node;
>  
> -	bucket = &mm_slots_hash[((unsigned long)mm / sizeof(struct mm_struct))
> -				% MM_SLOTS_HASH_HEADS];
> -	hlist_for_each_entry(mm_slot, node, bucket, hash) {
> -		if (mm == mm_slot->mm)
> -			return mm_slot;
> -	}
> +	hash_for_each_possible(mm_slots_hash, slot, node, hash, (unsigned long) mm)
> +		if (slot->mm == mm)
> +			return slot;

Why these other changes (the naming of the variable, the ordering of the 
conditional)?

> +
>  	return NULL;
>  }
>  
>  static void insert_to_mm_slots_hash(struct mm_struct *mm,
>  				    struct mm_slot *mm_slot)
>  {
> -	struct hlist_head *bucket;
> -
> -	bucket = &mm_slots_hash[((unsigned long)mm / sizeof(struct mm_struct))
> -				% MM_SLOTS_HASH_HEADS];
>  	mm_slot->mm = mm;
> -	hlist_add_head(&mm_slot->hash, bucket);
> +	hash_add(mm_slots_hash, &mm_slot->hash, (long)mm);
>  }
>  
>  static inline int khugepaged_test_exit(struct mm_struct *mm)
> @@ -1930,7 +1901,7 @@ void __khugepaged_exit(struct mm_struct *mm)
>  	spin_lock(&khugepaged_mm_lock);
>  	mm_slot = get_mm_slot(mm);
>  	if (mm_slot && khugepaged_scan.mm_slot != mm_slot) {
> -		hlist_del(&mm_slot->hash);
> +		hash_del(&mm_slot->hash);
>  		list_del(&mm_slot->mm_node);
>  		free = 1;
>  	}
> @@ -2379,7 +2350,7 @@ static void collect_mm_slot(struct mm_slot *mm_slot)
>  
>  	if (khugepaged_test_exit(mm)) {
>  		/* free mm_slot */
> -		hlist_del(&mm_slot->hash);
> +		hash_del(&mm_slot->hash);
>  		list_del(&mm_slot->mm_node);
>  
>  		/*


* Re: [PATCH 04/15] mm/huge_memory: use new hashtable implementation
  2012-12-19 22:26   ` David Rientjes
@ 2012-12-20  2:25     ` Sasha Levin
  2012-12-20 20:28       ` David Rientjes
  0 siblings, 1 reply; 7+ messages in thread
From: Sasha Levin @ 2012-12-20  2:25 UTC (permalink / raw)
  To: David Rientjes
  Cc: Andrew Morton, Kirill A. Shutemov, Xiao Guangrong,
	Andrea Arcangeli, linux-mm, linux-kernel

On 12/19/2012 05:26 PM, David Rientjes wrote:
> This used to be dynamically allocated and would save the 8KB that you 
> statically allocate if transparent hugepages cannot be used.  The generic 
> hashtable implementation does not support dynamic allocation?

No, the hashtable implementation currently only supports statically
allocated tables.

In this case, the downside is that you'll waste 8KB if hugepages aren't available,
but the upside is that you'll have one less dereference when accessing the
hashtable.
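
For the record, the 8KB figure is just the size of the static table on a
64-bit kernel (a sketch; MM_SLOTS_HASH_BITS is the value from the patch):

        #define MM_SLOTS_HASH_BITS 10
        /* 1 << 10 = 1024 buckets, each a struct hlist_head holding a single
         * pointer (8 bytes on 64-bit): 1024 * 8 = 8192 bytes, reserved
         * whether or not hugepages can ever be used */
        size_t table_size = (1UL << MM_SLOTS_HASH_BITS) * sizeof(struct hlist_head);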

If the 8KB saving is preferable here I'll drop the patch and come back when
dynamic hashtable is supported.


Thanks,
Sasha


* Re: [PATCH 04/15] mm/huge_memory: use new hashtable implementation
  2012-12-20  2:25     ` Sasha Levin
@ 2012-12-20 20:28       ` David Rientjes
  2012-12-20 20:30         ` Sasha Levin
  0 siblings, 1 reply; 7+ messages in thread
From: David Rientjes @ 2012-12-20 20:28 UTC (permalink / raw)
  To: Sasha Levin
  Cc: Andrew Morton, Kirill A. Shutemov, Xiao Guangrong,
	Andrea Arcangeli, linux-mm, linux-kernel

On Wed, 19 Dec 2012, Sasha Levin wrote:

> In this case, the downside is that you'll waste 8KB if hugepages aren't available,
> but the upside is that you'll have one less dereference when accessing the
> hashtable.
> 
> If the 8KB saving is preferable here I'll drop the patch and come back when
> dynamic hashtable is supported.
> 

If a distro releases with CONFIG_TRANSPARENT_HUGEPAGE=y and a user is
running on a processor that does not support pse, then this just costs
them 8KB for no reason.  The overhead of simply enabling
CONFIG_TRANSPARENT_HUGEPAGE is worse in this scenario, but that is the
whole reason for having the dynamic allocation in the original code.  If
there's a compelling reason why we want this change, that fact should at
least be documented.

Could you propose a v2 that includes fixes for the other problems that 
were mentioned?


* Re: [PATCH 04/15] mm/huge_memory: use new hashtable implementation
  2012-12-20 20:28       ` David Rientjes
@ 2012-12-20 20:30         ` Sasha Levin
  0 siblings, 0 replies; 7+ messages in thread
From: Sasha Levin @ 2012-12-20 20:30 UTC (permalink / raw)
  To: David Rientjes
  Cc: Andrew Morton, Kirill A. Shutemov, Xiao Guangrong,
	Andrea Arcangeli, linux-mm, linux-kernel

On 12/20/2012 03:28 PM, David Rientjes wrote:
> On Wed, 19 Dec 2012, Sasha Levin wrote:
> 
>> In this case, the downside is that you'll waste 8KB if hugepages aren't available,
>> but the upside is that you'll have one less dereference when accessing the
>> hashtable.
>>
>> If the 8KB saving is preferable here I'll drop the patch and come back when
>> dynamic hashtable is supported.
>>
> 
> If a distro releases with CONFIG_TRANSPARNET_HUGEPAGE=y and a user is 
> running on a processor that does not support pse then this just cost them 
> 8KB for no reason.  The overhead by simply enabling 
> CONFIG_TRANSPARENT_HUGEPAGE is worse in this scenario, but this is whole 
> reason for having the dynamic allocation in the original code.  If there's 
> a compelling reason for why we want this change, then that fact should at 
> least be documented.
> 
> Could you propose a v2 that includes fixes for the other problems that 
> were mentioned?
> 

Sure, will do.


Thanks,
Sasha

