* [LSF/MM TOPIC] cpuset vs mempolicy related issues
@ 2017-02-03 9:17 Vlastimil Babka
2017-02-10 11:52 ` Vlastimil Babka
2017-02-28 20:34 ` Balbir Singh
0 siblings, 2 replies; 4+ messages in thread
From: Vlastimil Babka @ 2017-02-03 9:17 UTC (permalink / raw)
To: lsf-pc, linux-mm@kvack.org
Cc: Li Zefan, cgroups, LKML, Michal Hocko, David Rientjes,
Kirill A. Shutemov, Anshuman Khandual, Aneesh Kumar K.V,
Mel Gorman
Hi,
this mail tries to summarize the problems with current cpusets implementation
wrt memory restrictions, especially when used together with mempolicies.
The issues were initially discovered when working on the series fixing recent
premature OOM regressions [1] and then there was more staring at the code and
git archeology.
Possible spurious OOMs
Spurious OOM kills can happen due to updating cpuset's mems (or reattaching
task to different cpuset) racing with page allocation for a vma with mempolicy.
This probably originates with commit 19770b32609b ("mm: filter based on a
nodemask as well as a gfp_mask") or 58568d2a8215 ("cpuset,mm: update tasks'
mems_allowed in time"). Before the former commit, mempolicy node restrictions
were reflected with a custom zonelist, which was replaced by a simple pointer
swap when updated due to cpuset changes. After the commit, the nodemask used
by allocation is concurrently updated. Before the latter commit,
task->mems_allowed was protected by a generation counter and updated
synchronously. After the commit, it's updated concurrently.
These concurrent updates may cause page allocation to see all nodes as not
available due to mempolicy and cpusets, and lead to OOM.
This has already happened in the past and commit 708c1bbc9d0c ("mempolicy:
restructure rebinding-mempolicy functions") fixed this with a more complex
update protocol, which was then adjusted to use a seqlock with cc9a6c877661
("cpuset: mm: reduce large amounts of memory barrier related damage v3").
However, this only protects the task->mems_allowed updates and per-task
mempolicy updates. Per-vma mempolicy updates happen outside these protections
and the possibility of OOM was verified by testing [2].
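For reference, the protection introduced by cc9a6c877661 works roughly like the
sketch below (simplified, not the exact code of any particular kernel version).
The important detail is that ac->nodemask, when it points into a mempolicy, is
read outside of this protection:

/*
 * Simplified sketch of the mems_allowed seqcount protection in the page
 * allocator; function body condensed, error handling omitted.
 */
struct page *alloc_pages_sketch(gfp_t gfp_mask, unsigned int order,
				struct alloc_context *ac)
{
	struct page *page;
	unsigned int cpuset_mems_cookie;

retry_cpuset:
	cpuset_mems_cookie = read_mems_allowed_begin();

	/*
	 * ac->nodemask may point at a mempolicy's nodemask, which a
	 * concurrent cpuset rebind can rewrite in place - that part is
	 * not covered by the cookie.
	 */
	page = get_page_from_freelist(gfp_mask, order, ALLOC_WMARK_LOW, ac);

	/* If mems_allowed changed underneath us, retry instead of OOM. */
	if (!page && read_mems_allowed_retry(cpuset_mems_cookie))
		goto retry_cpuset;

	return page;
}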
Code complexity
As mentioned above, concurrent updates to task->mems_allowed and mempolicy are
rather complex, see e.g. mpol_rebind_policy() and
cpuset_change_task_nodemask(). Fixing the spurious OOM problem with the current
approach [3] will introduce even more subtlety. This all comes from the
parallel updates. Originally, task->mems_allowed was a snapshot updated
synchronously. For mempolicy nodemask updates, we probably should not take an
on-stack snapshot of the nodemask in the allocation path. But we can look at how
mbind() itself updates existing mempolicies, which is done by swapping in an
updated copy. One obvious idea is also to not touch mempolicy nodemask from
cpuset, because we check __cpuset_node_allowed() explicitly anyway (see next
point). That however doesn't seem feasible for all mempolicies because of the
relative nodes semantics (see also the semantics discussion).
Code efficiency
The get_page_from_freelist() function gets the nodemask parameter, which is
typically obtained from a task or vma mempolicy. Additionally, it will check
__cpuset_node_allowed when cpusets are enabled. This additional check is
wasteful when the cpuset restrictions were already applied to the mempolicy's
nodemask.
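The loop in question looks roughly like this (paraphrased from
get_page_from_freelist() in mm/page_alloc.c; details vary between versions):

/*
 * Paraphrased sketch of the zonelist scan: ac->nodemask (often derived
 * from a mempolicy) already restricts the candidate nodes, and
 * __cpuset_zone_allowed() then re-checks cpuset restrictions that may
 * already be folded into that nodemask.
 */
for_next_zone_zonelist_nodemask(zone, z, ac->zonelist,
				ac->high_zoneidx, ac->nodemask) {
	if (cpusets_enabled() &&
	    (alloc_flags & ALLOC_CPUSET) &&
	    !__cpuset_zone_allowed(zone, gfp_mask))
		continue;	/* second, possibly redundant filter */

	/* ... watermark checks and the actual allocation attempt ... */
}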
Issues with semantics
The __cpuset_node_allowed() function is not a simple check for
current->mems_allowed. It allows any node when in interrupt, for tasks with
TIF_MEMDIE or PF_EXITING, or from cpuset ancestors for allocations without
__GFP_HARDWALL. This works as intended if there
is no mempolicy, but once there is a nodemask from a mempolicy, the cpuset
restrictions are already applied to it, so the for_next_zone_zonelist_nodemask()
loop already filters the nodes and the __cpuset_node_allowed() decisions cannot
apply. It's true that allocations with mempolicies are typically user-space
pagefaults, thus __GFP_HARDWALL, not in interrupt, etc, but it's subtle. And
there are a number of driver allocations using alloc_pages() that implicitly
use the task's mempolicy and thus can be affected by this discrepancy.
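For context, the decisions made by __cpuset_node_allowed() are roughly the
following (condensed from kernel/cpuset.c, locking omitted):

/* Condensed sketch of __cpuset_node_allowed(); callback_lock/RCU omitted. */
bool __cpuset_node_allowed(int node, gfp_t gfp_mask)
{
	if (in_interrupt())
		return true;
	if (node_isset(node, current->mems_allowed))
		return true;
	/* OOM-killed tasks may allocate anywhere. */
	if (unlikely(test_thread_flag(TIF_MEMDIE)))
		return true;
	if (gfp_mask & __GFP_HARDWALL)		/* hardwall request: stop here */
		return false;
	if (current->flags & PF_EXITING)	/* let dying tasks have memory */
		return true;
	/* Softwall: accept nodes of the nearest hardwalled ancestor cpuset. */
	return node_isset(node,
			  nearest_hardwall_ancestor(task_cs(current))->mems_allowed);
}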
A related question is why we allow an allocation to escape cpuset restrictions,
but not also mempolicy restrictions. This includes allocations where we allow
dipping into memory reserves via ALLOC_NO_WATERMARKS because
gfp_pfmemalloc_allowed() allows it. It seems wrong to keep restricting such critical
allocations due to current task's mempolicy. We could set ac->nodemask to NULL
in such situations, but can we distinguish nodemasks coming from mempolicies
from nodemasks coming from e.g. HW restrictions on the memory placement?
Possible fix approach
Cpuset updates will rebind nodemasks only of those mempolicies that need it wrt
their relative nodes semantics (those are either created with the flag
MPOL_F_RELATIVE_NODES, or with neither RELATIVE nor STATIC flag). The others
(created with the STATIC flag) we can leave untouched. For mempolicies that we
keep rebinding, adopt the approach of mbind() that swaps an updated copy
instead of in-place changes. We can leave get_page_from_freelist() as it is and
nodes will be filtered orthogonally with mempolicy nodemask and cpuset check.
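A rough sketch of the copy-and-swap rebinding (helper names here are
hypothetical, refcounting of readers and the shared/vma policy cases are
ignored; mpol_dup()/mpol_put() are the existing helpers):

/*
 * Hypothetical sketch: rebind a task mempolicy by swapping in an updated
 * copy, mbind()-style, instead of rewriting the nodemask in place.
 * mpol_needs_rebind() and mpol_rebind_copy() do not exist; they stand for
 * "policy has relative-nodes semantics" and "recompute nodes in the copy".
 */
static void cpuset_rebind_task_policy(struct task_struct *tsk,
				      const nodemask_t *new_mems)
{
	struct mempolicy *old = tsk->mempolicy;
	struct mempolicy *new;

	if (!old || !mpol_needs_rebind(old))	/* e.g. MPOL_F_STATIC_NODES */
		return;

	new = mpol_dup(old);
	if (IS_ERR(new))
		return;

	mpol_rebind_copy(new, new_mems);

	task_lock(tsk);
	tsk->mempolicy = new;		/* readers see either old or new, */
	task_unlock(tsk);		/* never a half-updated nodemask   */

	mpol_put(old);			/* freed once no longer referenced */
}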
This will give us stable nodemask throughout the whole allocation without a
need for an on-stack copy. The next question is what to do with
current->mems_allowed. Do we keep the parallel modifications with seqlock
protection or e.g. try to go back to the synchronous copy approach?
Related to that is a remaining corner case with alloc_pages_vma() which has its
own seqlock-protected scope. There it calls policy_nodemask() which might
detect that there's no intersection between the mempolicy and cpuset and return
NULL nodemask. However, __alloc_pages_slowpath() has its own seqlock scope, so
if a modification to mems_allowed (resulting in no intersection with mempolicy)
happens between the check in policy_nodemask() and reaching
__alloc_pages_slowpath(), the latter won't detect the modification and will
invoke OOM before it can return with a failed allocation to alloc_pages_vma()
and let it detect the seqlock update and retry. One solution, as shown in the RFC patch [3],
is to add another check for the cpuset/nodemask intersection before OOM. That
works, but it's a bit hacky and still produces an allocation failure warning.
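For illustration, such a check could take a form like the following (this is
my reading of the idea, not the actual code from [3]):

/*
 * Hypothetical pre-OOM check: if a concurrent cpuset update left no
 * intersection between the mempolicy nodemask and mems_allowed, fail the
 * allocation instead of invoking OOM and let the outer seqlock scope in
 * alloc_pages_vma() retry.
 */
static bool ac_nodemask_intersects_cpuset(const struct alloc_context *ac)
{
	if (!cpusets_enabled() || !ac->nodemask)
		return true;
	return nodes_intersects(cpuset_current_mems_allowed, *ac->nodemask);
}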
On the other hand, we might also want to make things more robust in general and
prevent spurious OOMs due to no nodes being eligible for also any other reason,
such as buggy driver passing a wrong nodemask (which doesn't necessarily come
from a mempolicy).
[1] https://lkml.kernel.org/r/20170120103843.24587-1-vbabka@suse.cz
[2] https://lkml.kernel.org/r/a3bc44cd-3c81-c20e-aecb-525eb73b9bfe@suse.cz
[3] https://lkml.kernel.org/r/7c459f26-13a6-a817-e508-b65b903a8378@suse.cz
=====
Git archeology notes:
initial git commit 1da177e4c3f
- cpuset_zone_allowed checks current->mems_allowed
- there's no __GFP_HARDWALL and associated games, everything is thus effectively
HARDWALL
- mempolicies implemented by zonelist only, no nodemasks
- mempolicies not touched by cpuset updates
- updates of current->mems_allowed done synchronously using generation counter
2.6.14 (2005)
f90b1d2f1aaa ("[PATCH] cpusets: new __GFP_HARDWALL flag")
- still checking current->mems_allowed
- also checking ancestors for non-hardwall allocations
2.6.15
68860ec10bcc ("[PATCH] cpusets: automatic numa mempolicy rebinding")
- started rebinding task's mempolicies
- until then they behaved as static?
- mempolicies still zonelist based, so updates are simple zonelist pointer swap,
should be safe?
2.6.16 (2006)
4225399a66b3 ("[PATCH] cpuset: rebind vma mempolicies fix")
- introduced per-vma rebinding
- still ok, because simple zonelist pointer swap?
2.6.26 (2008)
19770b32609b ("mm: filter based on a nodemask as well as a gfp_mask")
- filtering by nodemask
- rebinds no longer a simple zonelist swap
f5b087b52f17 ("mempolicy: add MPOL_F_STATIC_NODES flag")
4c50bc0116cf ("mempolicy: add MPOL_F_RELATIVE_NODES flag")
- Documentation/vm/numa_memory_policy.txt has most details
2.6.30 (2009)
3b6766fe668b ("cpuset: rewrite update_tasks_nodemask()")
- just cleanup?
2.6.31 (2009)
58568d2a8215 ("cpuset,mm: update tasks' mems_allowed in time")
- remove cpuset_mems_generation, update concurrently
- apparently a bug fix for allocation on a not-allowed node due to the rotor
2010:
708c1bbc9d0c ("mempolicy: restructure rebinding-mempolicy functions")
- the elaborate 2-step update protocol
2012:
cc9a6c877661 ("cpuset: mm: reduce large amounts of memory barrier related damage
v3")
- seqlock protection
* Re: [LSF/MM TOPIC] cpuset vs mempolicy related issues
2017-02-03 9:17 [LSF/MM TOPIC] cpuset vs mempolicy related issues Vlastimil Babka
@ 2017-02-10 11:52 ` Vlastimil Babka
2017-02-10 12:26 ` Michal Hocko
2017-02-28 20:34 ` Balbir Singh
1 sibling, 1 reply; 4+ messages in thread
From: Vlastimil Babka @ 2017-02-10 11:52 UTC (permalink / raw)
To: lsf-pc, linux-mm@kvack.org
Cc: Li Zefan, cgroups, LKML, Michal Hocko, David Rientjes,
Kirill A. Shutemov, Anshuman Khandual, Aneesh Kumar K.V,
Mel Gorman
On 02/03/2017 10:17 AM, Vlastimil Babka wrote:
> Possible fix approach
>
> Cpuset updates will rebind nodemasks only of those mempolicies that need it wrt
> their relative nodes semantics (those are either created with the flag
> MPOL_F_RELATIVE_NODES, or with neither RELATIVE nor STATIC flag). The others
> (created with the STATIC flag) we can leave untouched. For mempolicies that we
> keep rebinding, adopt the approach of mbind() that swaps an updated copy
> instead of in-place changes. We can leave get_page_from_freelist() as it is and
> nodes will be filtered orthogonally with mempolicy nodemask and cpuset check.
>
> This will give us stable nodemask throughout the whole allocation without a
> need for an on-stack copy. The next question is what to do with
> current->mems_allowed. Do we keep the parallel modifications with seqlock
> protection or e.g. try to go back to the synchronous copy approach?
>
> Related to that is a remaining corner case with alloc_pages_vma() which has its
> own seqlock-protected scope. There it calls policy_nodemask() which might
> detect that there's no intersection between the mempolicy and cpuset and return
> NULL nodemask. However, __alloc_pages_slowpath() has its own seqlock scope, so if a
> modification to mems_allowed (resulting in no intersection with mempolicy)
> happens between the check in policy_nodemask() and reaching
> __alloc_pages_slowpath(), the latter won't detect the modification and invoke
> OOM before it can return with a failed allocation to alloc_pages_vma() and let
> it detect a seqlock update and retry. One solution as shown in the RFC patch [3]
> is to add another check for the cpuset/nodemask intersection before OOM. That
> works, but it's a bit hacky and still produces an allocation failure warning.
>
> On the other hand, we might also want to make things more robust in general and
> prevent spurious OOMs due to no nodes being eligible for also any other reason,
> such as buggy driver passing a wrong nodemask (which doesn't necessarily come
> from a mempolicy).
It occurred to me that it could be possible to convert cpuset handling from
nodemask based to zonelist based, which means each cpuset would have its own set
of zonelists where only the allowed nodes (for hardwall) would be present. For
softwall we could have another set, where allowed nodes are prioritised, but all
would be present... or we would just use the system zonelists.
This means some extra memory overhead for each cpuset, but I'd expect the number
of cpusets in the system to be relatively limited anyway. (Mempolicies used
to be based on zonelists in the past, but there the overhead might have been
more significant.)
We could then get rid of the task->mems_allowed and the related seqlock. Cpuset
updates would allocate a new set of zonelists and then swap it in. This would need
either refcounting or some rwsem to free the old version safely.
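A very rough sketch of one such variant (RCU plus a refcount; all structure
fields and helpers here are hypothetical, nothing like this exists today):

/*
 * Hypothetical per-cpuset zonelists, replaced by copy-and-swap on cpuset
 * updates.  Allocation paths would take a reference under rcu_read_lock()
 * instead of consulting task->mems_allowed.
 */
struct cpuset_zonelists {
	struct kref		ref;
	struct zonelist		hardwall;	/* only allowed nodes present */
	struct zonelist		softwall;	/* allowed nodes first, or just
						 * the system zonelists */
};

static void cpuset_swap_zonelists(struct cpuset *cs,
				  struct cpuset_zonelists *new)
{
	struct cpuset_zonelists *old;

	old = rcu_dereference_protected(cs->zonelists,	/* hypothetical field */
					lockdep_is_held(&cpuset_mutex));
	rcu_assign_pointer(cs->zonelists, new);
	synchronize_rcu();
	kref_put(&old->ref, cpuset_zonelists_free);	/* hypothetical destructor */
}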
This together with reworked updating of mempolicies would provide the guarantee
that once we obtain the cpuset's zonelist and mempolicy's nodemask, we can check
it once for intersection, and then that result remains valid during the whole
allocation.
Another advantage is that for_next_zone_zonelist_nodemask() then provides the
complete filtering and we don't have to call __cpuset_zone_allowed().
Thoughts?
* Re: [LSF/MM TOPIC] cpuset vs mempolicy related issues
2017-02-10 11:52 ` Vlastimil Babka
@ 2017-02-10 12:26 ` Michal Hocko
0 siblings, 0 replies; 4+ messages in thread
From: Michal Hocko @ 2017-02-10 12:26 UTC (permalink / raw)
To: Vlastimil Babka
Cc: lsf-pc, linux-mm@kvack.org, Li Zefan, cgroups, LKML,
David Rientjes, Kirill A. Shutemov, Anshuman Khandual,
Aneesh Kumar K.V, Mel Gorman
On Fri 10-02-17 12:52:25, Vlastimil Babka wrote:
> On 02/03/2017 10:17 AM, Vlastimil Babka wrote:
> > Possible fix approach
> >
> > Cpuset updates will rebind nodemasks only of those mempolicies that need it wrt
> > their relative nodes semantics (those are either created with the flag
> > MPOL_F_RELATIVE_NODES, or with neither RELATIVE nor STATIC flag). The others
> > (created with the STATIC flag) we can leave untouched. For mempolicies that we
> > keep rebinding, adopt the approach of mbind() that swaps an updated copy
> > instead of in-place changes. We can leave get_page_from_freelist() as it is and
> > nodes will be filtered orthogonally with mempolicy nodemask and cpuset check.
> >
> > This will give us stable nodemask throughout the whole allocation without a
> > need for an on-stack copy. The next question is what to do with
> > current->mems_allowed. Do we keep the parallel modifications with seqlock
> > protection or e.g. try to go back to the synchronous copy approach?
> >
> > Related to that is a remaining corner case with alloc_pages_vma() which has its
> > own seqlock-protected scope. There it calls policy_nodemask() which might
> > detect that there's no intersection between the mempolicy and cpuset and return
> > NULL nodemask. However, __alloc_pages_slowpath() has its own seqlock scope, so if a
> > modification to mems_allowed (resulting in no intersection with mempolicy)
> > happens between the check in policy_nodemask() and reaching
> > __alloc_pages_slowpath(), the latter won't detect the modification and invoke
> > OOM before it can return with a failed allocation to alloc_pages_vma() and let
> > it detect a seqlock update and retry. One solution as shown in the RFC patch [3]
> > is to add another check for the cpuset/nodemask intersection before OOM. That
> > works, but it's a bit hacky and still produces an allocation failure warning.
> >
> > On the other hand, we might also want to make things more robust in general and
> > prevent spurious OOMs due to no nodes being eligible for also any other reason,
> > such as buggy driver passing a wrong nodemask (which doesn't necessarily come
> > from a mempolicy).
>
> It occurred to me that it could be possible to convert cpuset handling from
> nodemask based to zonelist based, which means each cpuset would have its own
> set of zonelists where only the allowed nodes (for hardwall) would be
> present. For softwall we could have another set, where allowed nodes are
> prioritised, but all would be present... or we would just use the system
> zonelists.
sounds like a good idea to me!
> This means some extra memory overhead for each cpuset, but I'd expect the
> amount of cpusets in the system should be relatively limited anyway.
> (Mempolicies used to be based on zonelists in the past, but there the
> overhead might have been more significant.)
I do not think this would ever be a problem.
> We could then get rid of the task->mems_allowed and the related seqlock.
> Cpuset updates would allocate new set of zonelists and then swap it. This
> would need either refcounting or some rwsem to free the old version safely.
yes, refcounting sounds reasonable.
> This together with reworked updating of mempolicies would provide the
> guarantee that once we obtain the cpuset's zonelist and mempolicy's
> nodemask, we can check it once for intersection, and then that result
> remains valid during the whole allocation.
I would really like to drop all/most of the mempolicy rebinding code
which is called when the cpuset is changed. I guess we cannot avoid that
for MPOL_F_RELATIVE_NODES but other than that we should rely on
policy_nodemask I believe. If the intersection between nodemask and
cpuset is empty we can return NULL nodemask (we are doing that already
for MPOL_BIND) and then rely on zonelists to do the right thing.
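For reference, policy_nodemask() currently does roughly this (paraphrased from
mm/mempolicy.c around v4.10):

/*
 * The MPOL_BIND nodemask is only handed to the allocator while it still
 * intersects with the cpuset's mems_allowed; otherwise NULL is returned
 * and node filtering falls back to the cpuset alone.
 */
static nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy)
{
	if (unlikely(policy->mode == MPOL_BIND) &&
	    apply_policy_zone(policy, gfp_zone(gfp)) &&
	    cpuset_nodemask_valid_mems_allowed(&policy->v.nodes))
		return &policy->v.nodes;

	return NULL;
}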
> Another advantage is that for_next_zone_zonelist_nodemask() then provides
> the complete filtering and we don't have to call __cpuset_zone_allowed().
yes, it would also be more natural and easier to understand. All the
subtle details are really hidden now. I wasn't really aware of
policy_nodemask returning a NULL nodemask on empty intersection until recently,
and then you still have to keep in mind that there is __cpuset_zone_allowed
at another place which does the cpuset part... This is really non-obvious.
--
Michal Hocko
SUSE Labs
* Re: [LSF/MM TOPIC] cpuset vs mempolicy related issues
2017-02-03 9:17 [LSF/MM TOPIC] cpuset vs mempolicy related issues Vlastimil Babka
2017-02-10 11:52 ` Vlastimil Babka
@ 2017-02-28 20:34 ` Balbir Singh
1 sibling, 0 replies; 4+ messages in thread
From: Balbir Singh @ 2017-02-28 20:34 UTC (permalink / raw)
To: Vlastimil Babka, lsf-pc, linux-mm@kvack.org
Cc: Li Zefan, cgroups, LKML, Michal Hocko, David Rientjes,
Kirill A. Shutemov, Anshuman Khandual, Aneesh Kumar K.V,
Mel Gorman
On 03/02/17 20:17, Vlastimil Babka wrote:
> Hi,
>
> this mail tries to summarize the problems with current cpusets implementation
> wrt memory restrictions, especially when used together with mempolicies.
> The issues were initially discovered when working on the series fixing recent
> premature OOM regressions [1] and then there was more staring at the code and
> git archeology.
>
> Possible spurious OOMs
>
> Spurious OOM kills can happen due to updating cpuset's mems (or reattaching
> task to different cpuset) racing with page allocation for a vma with mempolicy.
> This probably originates with commit 19770b32609b ("mm: filter based on a
> nodemask as well as a gfp_mask") or 58568d2a8215 ("cpuset,mm: update tasks'
> mems_allowed in time"). Before the former commit, mempolicy node restrictions
> were reflected with a custom zonelist, which was replaced by a simple pointer
> swap when updated due to cpuset changes. After the commit, the nodemask used
> by allocation is concurrently updated. Before the latter commit,
> task->mems_allowed was protected by a generation counter and updated
> synchronously. After the commit, it's updated concurrently.
> These concurrent updates may cause page allocation to see all nodes as not
> available due to mempolicy and cpusets, and lead to OOM.
>
> This has already happened in the past and commit 708c1bbc9d0c ("mempolicy:
> restructure rebinding-mempolicy functions") was fixing this by more complex
> update protocol, which was then adjusted to use a seq-lock with cc9a6c877661
> ("cpuset: mm: reduce large amounts of memory barrier related damage v3").
> However, this only protects the task->mems_allowed updates and per-task
> mempolicy updates. Per-vma mempolicy updates happen outside these protections
> and the possibility of OOM was verified by testing [2].
>
> Code complexity
>
> As mentioned above, concurrent updates to task->mems_allowed and mempolicy are
> rather complex, see e.g. mpol_rebind_policy() and
> cpuset_change_task_nodemask(). Fixing the spurious OOM problem with the current
> approach [3] will introduce even more subtlety. This all comes from the
> parallel updates. Originally, task->mems_allowed was a snapshot updated
> synchronously. For mempolicy nodemask updates, we probably should not take an
> on-stack snapshot of the nodemask in allocation path. But we can look at how
> mbind() itself updates existing mempolicies, which is done by swapping in an
> updated copy. One obvious idea is also to not touch mempolicy nodemask from
> cpuset, because we check __cpuset_node_allowed() explicitly anyway (see next
> point). That however doesn't seem feasible for all mempolicies because of the
> relative nodes semantics (see also the semantics discussion).
>
> Code efficiency
>
> The get_page_from_freelist() function gets the nodemask parameter, which is
> typically obtained from a task or vma mempolicy. Additionally, it will check
> __cpuset_node_allowed when cpusets are enabled. This additional check is
> wasteful when the cpuset restrictions were already applied to the mempolicy's
> nodemask.
>
I suspect the allocator has those checks for kernel allocations, while policies
control user-mode pages faulted in. I've been playing around with coherent memory
and find that cpusets/mempolicies/zonelists are so tightly bound that they are
difficult to extend for other purposes.
> Issues with semantics
>
> The __cpuset_node_allowed() function is not a simple check for
> current->mems_allowed. It allows any node in interrupt, TIF_MEMDIE, PF_EXITING
> or cpuset ancestors (without __GFP_HARDWALL).
I think this is how hardwalled cpusets control allocations and allow
them to come from an ancestor in the hierarchy.
> This works as intended if there
> is no mempolicy, but once there is a nodemask from a mempolicy, the cpuset
> restrictions are already applied to it, so the for_next_zone_zonelist_nodemask()
> loop already filters the nodes and the __cpuset_node_allowed() decisions cannot
> apply.
Again, I think the difference is that the mempolicy is for user-space fault
handling and the allocator is for common allocations.
> It's true that allocations with mempolicies are typically user-space
> pagefaults, thus __GFP_HARDWALL, not in interrupt, etc, but it's subtle. And
> there are a number of driver allocations using alloc_pages() that implicitly use
> the task's mempolicy and thus can be affected by this discrepancy.
>
> A related question is why we allow an allocation to escape cpuset restrictions,
> but not also mempolicy restrictions. This includes allocations where we allow
> dipping into memory reserves by ALLOC_NO_WATERMARKS, because
> gfp_pfmemalloc_allowed(). It seems wrong to keep restricting such critical
> allocations due to current task's mempolicy. We could set ac->nodemask to NULL
> in such situations, but can we distinguish nodemasks coming from mempolicies
> from nodemasks coming from e.g. HW restrictions on the memory placement?
>
I think that is a really good question. The only time I've seen a nodemask come
from a mempolicy is when the policy is MPOL_BIND, IIRC. That, like you said,
is already filtered with cpusets.
> Possible fix approach
>
> Cpuset updates will rebind nodemasks only of those mempolicies that need it wrt
> their relative nodes semantics (those are either created with the flag
> MPOL_F_RELATIVE_NODES, or with neither RELATIVE nor STATIC flag). The others
> (created with the STATIC flag) we can leave untouched. For mempolicies that we
> keep rebinding, adopt the approach of mbind() that swaps an updated copy
> instead of in-place changes. We can leave get_page_from_freelist() as it is and
> nodes will be filtered orthogonally with mempolicy nodemask and cpuset check.
>
> This will give us stable nodemask throughout the whole allocation without a
> need for an on-stack copy. The next question is what to do with
> current->mems_allowed. Do we keep the parallel modifications with seqlock
> protection or e.g. try to go back to the synchronous copy approach?
>
> Related to that is a remaining corner case with alloc_pages_vma() which has its
> own seqlock-protected scope. There it calls policy_nodemask() which might
> detect that there's no intersection between the mempolicy and cpuset and return
> NULL nodemask. However, __alloc_pages_slowpath() has its own seqlock scope, so if a
> modification to mems_allowed (resulting in no intersection with mempolicy)
> happens between the check in policy_nodemask() and reaching
> __alloc_pages_slowpath(), the latter won't detect the modification and invoke
> OOM before it can return with a failed allocation to alloc_pages_vma() and let
> it detect a seqlock update and retry. One solution as shown in the RFC patch [3]
> is to add another check for the cpuset/nodemask intersection before OOM. That
> works, but it's a bit hacky and still produces an allocation failure warning.
>
> On the other hand, we might also want to make things more robust in general and
> prevent spurious OOMs due to no nodes being eligible for also any other reason,
> such as buggy driver passing a wrong nodemask (which doesn't necessarily come
> from a mempolicy).
>
> [1] https://lkml.kernel.org/r/20170120103843.24587-1-vbabka@suse.cz
> [2] https://lkml.kernel.org/r/a3bc44cd-3c81-c20e-aecb-525eb73b9bfe@suse.cz
> [3] https://lkml.kernel.org/r/7c459f26-13a6-a817-e508-b65b903a8378@suse.cz
>
Thanks for looking into this. I suspect cpuset usage is quite static in nature
today (set up once and then used) and most applications are happy to live with
it as is. But you are right that this needs fixing.
Balbir