* [LSF/MM ATTEND] Requests to attend MM Summit 2018
@ 2018-01-28 12:52 Anshuman Khandual
From: Anshuman Khandual @ 2018-01-28 12:52 UTC (permalink / raw)
To: lsf-pc@lists.linux-foundation.org, linux-mm
Cc: Mike Kravetz, Laura Abbott, Joonsoo Kim, John Hubbard,
Jerome Glisse, Michal Hocko
Hello,
Apart from the "Rethinking NUMA" topic which I have proposed, I would
like to attend LSF/MM 2018 to discuss the following topics.
A. HMM: (Jerome Glisse, John Hubbard, Michal Hocko)
I am interested in discussing future plans for HMM (including HMM CDM),
as well as improvements to the mmu_notifier framework, such as carrying
more context into its callbacks.
B. HugeTLB: (Mike Kravetz, Michal Hocko)
I am interested in discussing anything related to HugeTLB page
migration and SW/HW poisoning of HugeTLB pages, including how to handle
memory failures in a smaller section of a HugeTLB page. I am also
interested in anything related to runtime gigantic HugeTLB page
allocation and its migration/poisoning.
C. CMA (Mike Kravetz, Laura Abbott, Joonsoo Kim)
1. Supporting hotplug memory as a CMA region
There are situations where a platform-identified specific PFN range
can only be used for some low-level debug/tracing purpose. The same
PFN range must be shared between multiple guests on a need basis,
hence it is logical to expect the range to be hot add/removable in
each guest. But once available and online in the guest, it would
require a sort of guarantee that a large-order allocation (almost the
entire range) can be satisfied from that memory for the aforesaid
purpose. Plugging the memory in as ZONE_MOVABLE with MIGRATE_CMA makes
sense in this scenario, but it is not supported at the moment.
This basically extends the idea of relaxing CMA reservation and
declaration restrictions, as pointed out by Mike Kravetz.
2. Adding NUMA
Adding NUMA tracking information to individual CMA areas and using it
in the cma_alloc() interface. In the POWER8 KVM implementation, the
guest HPT (Hash Page Table) is allocated from a predefined CMA region.
NUMA-aligned allocation of the HPT for any given guest VM can help
improve performance.
3. Reducing CMA allocation failures
CMA allocation failures are primarily caused by not being able to
isolate or migrate the given PFN range (inside alloc_contig_range()).
Is there a way to reduce the failure chances?
D. MAP_CONTIG (Mike Kravetz, Laura Abbott, Michal Hocko)
I understand that a recent RFC from Mike Kravetz was debated without
any conclusion about the viability of adding a MAP_CONTIG option for
user space to request large contiguous physical memory. I would be
really interested to discuss any future plans on how the kernel can
help user space with large physically contiguous memory if the need
arises.
(MAP_CONTIG RFC: https://lkml.org/lkml/2017/10/3/992)
- Anshuman
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org
* Re: [LSF/MM ATTEND] Requests to attend MM Summit 2018
From: Michal Hocko @ 2018-01-29 13:14 UTC (permalink / raw)
To: Anshuman Khandual
Cc: lsf-pc@lists.linux-foundation.org, linux-mm, Mike Kravetz,
Laura Abbott, Joonsoo Kim, John Hubbard, Jerome Glisse
On Sun 28-01-18 18:22:01, Anshuman Khandual wrote:
[...]
> 1. Supporting hotplug memory as a CMA region
>
> There are situations where a platform-identified specific PFN range
> can only be used for some low-level debug/tracing purpose. The same
> PFN range must be shared between multiple guests on a need basis,
> hence it is logical to expect the range to be hot add/removable in
> each guest. But once available and online in the guest, it would
> require a sort of guarantee that a large-order allocation (almost the
> entire range) can be satisfied from that memory for the aforesaid
> purpose. Plugging the memory in as ZONE_MOVABLE with MIGRATE_CMA makes
> sense in this scenario, but it is not supported at the moment.
Isn't Joonsoo's[1] work doing exactly this?
[1] http://lkml.kernel.org/r/1512114786-5085-1-git-send-email-iamjoonsoo.kim@lge.com
Anyway, declaring CMA regions in hotpluggable memory sounds like a
misconfiguration. Unless I have missed something, CMA memory is not
migratable, and it is far from trivial to change that.
> This basically extends the idea of relaxing CMA reservation and
> declaration restrictions, as pointed out by Mike Kravetz.
>
> 2. Adding NUMA
>
> Adding NUMA tracking information to individual CMA areas and using it
> in the cma_alloc() interface. In the POWER8 KVM implementation, the
> guest HPT (Hash Page Table) is allocated from a predefined CMA region.
> NUMA-aligned allocation of the HPT for any given guest VM can help
> improve performance.
With CMA using ZONE_MOVABLE this should be rather straightforward. We
just need a way to distribute CMA regions over nodes and make the core
CMA allocator fall back between nodes in the nodelist order.
> 3. Reducing CMA allocation failures
>
> CMA allocation failures are primarily caused by not being able to
> isolate or migrate the given PFN range (inside alloc_contig_range()).
> Is there a way to reduce the failure chances?
>
> D. MAP_CONTIG (Mike Kravetz, Laura Abbott, Michal Hocko)
>
> I understand that a recent RFC from Mike Kravetz was debated without
> any conclusion about the viability of adding a MAP_CONTIG option for
> user space to request large contiguous physical memory.
The conclusion was pretty clear AFAIR. Our allocator simply cannot
handle arbitrarily sized large allocations, so MAP_CONTIG is really
hard to provide to userspace. If there are drivers (RDMA, I suspect)
which would benefit from large allocations, then they should use a
custom mmap implementation which preallocates the memory.
> I would be really interested to discuss any future plans on how the
> kernel can help user space with large physically contiguous memory if
> the need arises.
>
> (MAP_CONTIG RFC: https://lkml.org/lkml/2017/10/3/992)
--
Michal Hocko
SUSE Labs
* Re: [LSF/MM ATTEND] Requests to attend MM Summit 2018
From: Anshuman Khandual @ 2018-02-02 10:47 UTC (permalink / raw)
To: Michal Hocko, Anshuman Khandual
Cc: lsf-pc@lists.linux-foundation.org, linux-mm, Mike Kravetz,
Laura Abbott, Joonsoo Kim, John Hubbard, Jerome Glisse
On 01/29/2018 06:44 PM, Michal Hocko wrote:
> On Sun 28-01-18 18:22:01, Anshuman Khandual wrote:
> [...]
>> 1. Supporting hotplug memory as a CMA region
>>
>> There are situations where a platform-identified specific PFN range
>> can only be used for some low-level debug/tracing purpose. The same
>> PFN range must be shared between multiple guests on a need basis,
>> hence it is logical to expect the range to be hot add/removable in
>> each guest. But once available and online in the guest, it would
>> require a sort of guarantee that a large-order allocation (almost the
>> entire range) can be satisfied from that memory for the aforesaid
>> purpose. Plugging the memory in as ZONE_MOVABLE with MIGRATE_CMA makes
>> sense in this scenario, but it is not supported at the moment.
>
> Isn't Joonsoo's[1] work doing exactly this?
>
> [1] http://lkml.kernel.org/r/1512114786-5085-1-git-send-email-iamjoonsoo.kim@lge.com
>
> Anyway, declaring CMA regions in hotpluggable memory sounds like a
> misconfiguration. Unless I have missed something, CMA memory is not
> migratable, and it is far from trivial to change that.
Right, it's far from trivial, but I think it is worth considering given
the benefit of being able to allocate a large contiguous range from it.
>
>> This basically extends the idea of relaxing CMA reservation and
>> declaration restrictions, as pointed out by Mike Kravetz.
>>
>> 2. Adding NUMA
>>
>> Adding NUMA tracking information to individual CMA areas and using it
>> in the cma_alloc() interface. In the POWER8 KVM implementation, the
>> guest HPT (Hash Page Table) is allocated from a predefined CMA region.
>> NUMA-aligned allocation of the HPT for any given guest VM can help
>> improve performance.
>
> With CMA using ZONE_MOVABLE this should be rather straightforward. We
> just need a way to distribute CMA regions over nodes and make the core
> CMA allocator fall back between nodes in the nodelist order.
Right, something like that.
>
>> 3. Reducing CMA allocation failures
>>
>> CMA allocation failures are primarily caused by not being able to
>> isolate or migrate the given PFN range (inside alloc_contig_range()).
>> Is there a way to reduce the failure chances?
>>
>> D. MAP_CONTIG (Mike Kravetz, Laura Abbott, Michal Hocko)
>>
>> I understand that a recent RFC from Mike Kravetz was debated without
>> any conclusion about the viability of adding a MAP_CONTIG option for
>> user space to request large contiguous physical memory.
>
> The conclusion was pretty clear AFAIR. Our allocator simply cannot
> handle arbitrarily sized large allocations, so MAP_CONTIG is really
> hard to provide to userspace. If there are drivers (RDMA, I suspect)
> which would benefit from large allocations, then they should use a
> custom mmap implementation which preallocates the memory.
Looking at the previous discussion (https://lkml.org/lkml/2017/10/3/992),
there are indeed concerns that this kind of feature would make future
compaction, and hence the kernel's ability to allocate higher-order
pages, more difficult, as pointed out by other folks. I still believe
this is something worth considering in the long term, obviously after
the concerns raised have been addressed.