linux-mm.kvack.org archive mirror
* [RFC PATCH] mm/thp: Always allocate transparent hugepages on local node
@ 2014-11-24 14:19 Aneesh Kumar K.V
  2014-11-24 15:03 ` Kirill A. Shutemov
  0 siblings, 1 reply; 5+ messages in thread
From: Aneesh Kumar K.V @ 2014-11-24 14:19 UTC (permalink / raw)
  To: akpm, Kirill A. Shutemov; +Cc: linux-mm, linux-kernel, Aneesh Kumar K.V

This makes sure that we try to allocate hugepages from the local node. If
we can't, we fall back to small page allocation based on
mempolicy. This is based on the observation that allocating pages
on the local node is more beneficial than allocating hugepages on a remote node.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
NOTE:
I am not sure whether we want this to be per-system configurable. If not,
we could possibly remove alloc_hugepage_vma.

 mm/huge_memory.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index de984159cf0b..b309705e7e96 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -775,6 +775,12 @@ static inline struct page *alloc_hugepage_vma(int defrag,
 			       HPAGE_PMD_ORDER, vma, haddr, nd);
 }
 
+static inline struct page *alloc_hugepage_exact_node(int node, int defrag)
+{
+	return alloc_pages_exact_node(node, alloc_hugepage_gfpmask(defrag, 0),
+				      HPAGE_PMD_ORDER);
+}
+
 /* Caller must hold page table lock. */
 static bool set_huge_zero_page(pgtable_t pgtable, struct mm_struct *mm,
 		struct vm_area_struct *vma, unsigned long haddr, pmd_t *pmd,
@@ -830,8 +836,8 @@ int do_huge_pmd_anonymous_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		}
 		return 0;
 	}
-	page = alloc_hugepage_vma(transparent_hugepage_defrag(vma),
-			vma, haddr, numa_node_id(), 0);
+	page = alloc_hugepage_exact_node(numa_node_id(),
+					 transparent_hugepage_defrag(vma));
 	if (unlikely(!page)) {
 		count_vm_event(THP_FAULT_FALLBACK);
 		return VM_FAULT_FALLBACK;
@@ -1120,8 +1126,8 @@ int do_huge_pmd_wp_page(struct mm_struct *mm, struct vm_area_struct *vma,
 alloc:
 	if (transparent_hugepage_enabled(vma) &&
 	    !transparent_hugepage_debug_cow())
-		new_page = alloc_hugepage_vma(transparent_hugepage_defrag(vma),
-					      vma, haddr, numa_node_id(), 0);
+		new_page = alloc_hugepage_exact_node(numa_node_id(),
+					     transparent_hugepage_defrag(vma));
 	else
 		new_page = NULL;
 
-- 
2.1.0

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org


* Re: [RFC PATCH] mm/thp: Always allocate transparent hugepages on local node
  2014-11-24 14:19 [RFC PATCH] mm/thp: Always allocate transparent hugepages on local node Aneesh Kumar K.V
@ 2014-11-24 15:03 ` Kirill A. Shutemov
  2014-11-24 21:33   ` David Rientjes
  0 siblings, 1 reply; 5+ messages in thread
From: Kirill A. Shutemov @ 2014-11-24 15:03 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: akpm, linux-mm, linux-kernel

On Mon, Nov 24, 2014 at 07:49:51PM +0530, Aneesh Kumar K.V wrote:
> This makes sure that we try to allocate hugepages from the local node. If
> we can't, we fall back to small page allocation based on
> mempolicy. This is based on the observation that allocating pages
> on the local node is more beneficial than allocating hugepages on a remote node.

Local node on allocation is not necessarily the local node for use.
If the policy says to use specific node[s], we should follow it.

I think it makes sense to force local allocation if policy is interleave
or if current node is in preferred or bind set.
 
-- 
 Kirill A. Shutemov



* Re: [RFC PATCH] mm/thp: Always allocate transparent hugepages on local node
  2014-11-24 15:03 ` Kirill A. Shutemov
@ 2014-11-24 21:33   ` David Rientjes
  2014-11-25 14:17     ` Kirill A. Shutemov
  2014-11-27  6:32     ` Aneesh Kumar K.V
  0 siblings, 2 replies; 5+ messages in thread
From: David Rientjes @ 2014-11-24 21:33 UTC (permalink / raw)
  To: Kirill A. Shutemov; +Cc: Aneesh Kumar K.V, akpm, linux-mm, linux-kernel

On Mon, 24 Nov 2014, Kirill A. Shutemov wrote:

> > This makes sure that we try to allocate hugepages from the local node. If
> > we can't, we fall back to small page allocation based on
> > mempolicy. This is based on the observation that allocating pages
> > on the local node is more beneficial than allocating hugepages on a remote node.
> 
> Local node on allocation is not necessarily the local node for use.
> If the policy says to use specific node[s], we should follow it.
> 

True, and the interaction between thp and mempolicies is fragile: if a 
process has a MPOL_BIND mempolicy over a set of nodes, that does not 
necessarily mean that we want to allocate thp remotely if it will always 
be accessed remotely.  It's simple to benchmark and show that remote 
access latency of a hugepage can exceed that of local pages.  MPOL_BIND 
itself is a policy of exclusion, not inclusion, and it's difficult to 
define when local pages and their cost of allocation are better than remote 
thp.

For MPOL_BIND, if the local node is allowed then thp should be forced from 
that node; if the local node is disallowed, then allocate from any node in 
the nodemask.  For MPOL_INTERLEAVE, I think we should only allocate thp 
from the next node in order; otherwise, fail the allocation and fall back to 
small pages.  Is this what you meant as well?
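
The decision above can be sketched as a small userspace model. The names
here (mpol_mode, pick_thp_node, NO_NODE) are illustrative stand-ins, not
kernel APIs, and the policy nodemask is modeled as a plain boolean array:

```c
#include <stdbool.h>

/* Userspace sketch of the node-selection rule described above; not
 * kernel code. allowed[] models the policy's nodemask. */
enum mpol_mode { MODE_DEFAULT, MODE_BIND, MODE_INTERLEAVE };

#define NO_NODE (-1)

/* Return the node to attempt the THP allocation from, or NO_NODE to
 * skip the THP attempt and fall back to small pages. */
static int pick_thp_node(enum mpol_mode mode, int local_node,
			 const bool *allowed, int nr_nodes,
			 int next_interleave_node)
{
	switch (mode) {
	case MODE_BIND:
		/* Local node allowed: force the THP from it. */
		if (allowed[local_node])
			return local_node;
		/* Local node disallowed: any node in the nodemask. */
		for (int n = 0; n < nr_nodes; n++)
			if (allowed[n])
				return n;
		return NO_NODE;
	case MODE_INTERLEAVE:
		/* Only the next node in interleave order; if that
		 * allocation fails, fall back to small pages. */
		return next_interleave_node;
	default:
		return local_node;
	}
}
```

Returning NO_NODE means the THP attempt is skipped entirely and the fault
falls back to small pages under the normal mempolicy path.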

> I think it makes sense to force local allocation if policy is interleave
> or if current node is in preferred or bind set.
>  

If local allocation were forced for MPOL_INTERLEAVE and all memory is 
initially faulted by cpus on a single node, then the policy has 
effectively become MPOL_DEFAULT, there's no interleave.
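
A toy simulation makes the degradation concrete; all names and helpers
below are illustrative, not kernel code. If every fault is issued from
CPUs on one node, true interleave still spreads pages across nodes, while
forced-local allocation concentrates them all on that node:

```c
#include <stdbool.h>

/* Toy model of page placement under MPOL_INTERLEAVE; not kernel code. */
#define NR_NODES 4

/* True interleave: rotate through nodes on each fault. */
static int interleave_next(int *counter)
{
	return (*counter)++ % NR_NODES;
}

/* Place nfaults faults, all issued from CPUs on faulting_node, and
 * count how many pages land on each node. */
static void run_faults(int nfaults, int faulting_node,
		       bool force_local_alloc, int counts[NR_NODES])
{
	int counter = 0;

	for (int n = 0; n < NR_NODES; n++)
		counts[n] = 0;
	for (int i = 0; i < nfaults; i++) {
		/* Forced-local ignores the rotation entirely. */
		int node = force_local_alloc ? faulting_node
					     : interleave_next(&counter);
		counts[node]++;
	}
}
```

With 100 faults from node 0, interleave places 25 pages on each of the
four nodes, while forced-local puts all 100 on node 0, i.e. effectively
MPOL_DEFAULT behavior.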

Aside: the patch is also buggy since it passes numa_node_id() and thp is 
supported on platforms that allow memoryless nodes.



* Re: [RFC PATCH] mm/thp: Always allocate transparent hugepages on local node
  2014-11-24 21:33   ` David Rientjes
@ 2014-11-25 14:17     ` Kirill A. Shutemov
  2014-11-27  6:32     ` Aneesh Kumar K.V
  1 sibling, 0 replies; 5+ messages in thread
From: Kirill A. Shutemov @ 2014-11-25 14:17 UTC (permalink / raw)
  To: David Rientjes; +Cc: Aneesh Kumar K.V, akpm, linux-mm, linux-kernel

On Mon, Nov 24, 2014 at 01:33:42PM -0800, David Rientjes wrote:
> On Mon, 24 Nov 2014, Kirill A. Shutemov wrote:
> 
> > > This makes sure that we try to allocate hugepages from the local node. If
> > > we can't, we fall back to small page allocation based on
> > > mempolicy. This is based on the observation that allocating pages
> > > on the local node is more beneficial than allocating hugepages on a remote node.
> > 
> > Local node on allocation is not necessarily the local node for use.
> > If the policy says to use specific node[s], we should follow it.
> > 
> 
> True, and the interaction between thp and mempolicies is fragile: if a 
> process has a MPOL_BIND mempolicy over a set of nodes, that does not 
> necessarily mean that we want to allocate thp remotely if it will always 
> be accessed remotely.  It's simple to benchmark and show that remote 
> access latency of a hugepage can exceed that of local pages.  MPOL_BIND 
> itself is a policy of exclusion, not inclusion, and it's difficult to 
> define when local pages and their cost of allocation are better than remote 
> thp.
> 
> For MPOL_BIND, if the local node is allowed then thp should be forced from 
> that node, if the local node is disallowed then allocate from any node in 
> the nodemask.  For MPOL_INTERLEAVE, I think we should only allocate thp 
> from the next node in order, otherwise fail the allocation and fallback to 
> small pages.  Is this what you meant as well?

Correct.

> > I think it makes sense to force local allocation if policy is interleave
> > or if current node is in preferred or bind set.
> >  
> 
> If local allocation were forced for MPOL_INTERLEAVE and all memory is 
> initially faulted by cpus on a single node, then the policy has 
> effectively become MPOL_DEFAULT, there's no interleave.

You're right. I don't have much experience with mempolicy code.

-- 
 Kirill A. Shutemov



* Re: [RFC PATCH] mm/thp: Always allocate transparent hugepages on local node
  2014-11-24 21:33   ` David Rientjes
  2014-11-25 14:17     ` Kirill A. Shutemov
@ 2014-11-27  6:32     ` Aneesh Kumar K.V
  1 sibling, 0 replies; 5+ messages in thread
From: Aneesh Kumar K.V @ 2014-11-27  6:32 UTC (permalink / raw)
  To: David Rientjes, Kirill A. Shutemov; +Cc: akpm, linux-mm, linux-kernel

David Rientjes <rientjes@google.com> writes:

> On Mon, 24 Nov 2014, Kirill A. Shutemov wrote:
>
>> > This makes sure that we try to allocate hugepages from the local node. If
>> > we can't, we fall back to small page allocation based on
>> > mempolicy. This is based on the observation that allocating pages
>> > on the local node is more beneficial than allocating hugepages on a remote node.
>> 
>> Local node on allocation is not necessarily the local node for use.
>> If the policy says to use specific node[s], we should follow it.
>> 
>
> True, and the interaction between thp and mempolicies is fragile: if a 
> process has a MPOL_BIND mempolicy over a set of nodes, that does not 
> necessarily mean that we want to allocate thp remotely if it will always 
> be accessed remotely.  It's simple to benchmark and show that remote 
> access latency of a hugepage can exceed that of local pages.  MPOL_BIND 
> itself is a policy of exclusion, not inclusion, and it's difficult to 
> define when local pages and their cost of allocation are better than remote 
> thp.
>
> For MPOL_BIND, if the local node is allowed then thp should be forced from 
> that node, if the local node is disallowed then allocate from any node in 
> the nodemask.  For MPOL_INTERLEAVE, I think we should only allocate thp 
> from the next node in order, otherwise fail the allocation and fallback to 
> small pages.  Is this what you meant as well?
>

Something like the following:

struct page *alloc_hugepage_vma(gfp_t gfp, struct vm_area_struct *vma,
				unsigned long addr, int order)
{
	struct page *page;
	nodemask_t *nmask;
	struct mempolicy *pol;
	int node = numa_node_id();
	unsigned int cpuset_mems_cookie;

retry_cpuset:
	pol = get_vma_policy(vma, addr);
	cpuset_mems_cookie = read_mems_allowed_begin();

	if (unlikely(pol->mode == MPOL_INTERLEAVE)) {
		unsigned nid;
		nid = interleave_nid(pol, vma, addr, PAGE_SHIFT + order);
		mpol_cond_put(pol);
		page = alloc_page_interleave(gfp, order, nid);
		if (unlikely(!page &&
			     read_mems_allowed_retry(cpuset_mems_cookie)))
			goto retry_cpuset;
		return page;
	}
	nmask = policy_nodemask(gfp, pol);
	if (!nmask || node_isset(node, *nmask)) {
		mpol_cond_put(pol);
		page = alloc_hugepage_exact_node(node, gfp, order);
		if (unlikely(!page &&
			     read_mems_allowed_retry(cpuset_mems_cookie)))
			goto retry_cpuset;
		return page;
	}
	/*
	 * If the current node is not part of the node mask, try
	 * the allocation from any node, and we can retry it in
	 * that case.
	 */
	page = __alloc_pages_nodemask(gfp, order,
				      policy_zonelist(gfp, pol, node),
				      nmask);
	mpol_cond_put(pol);
	if (unlikely(!page && read_mems_allowed_retry(cpuset_mems_cookie)))
		goto retry_cpuset;

	return page;
}

-aneesh


