From: Jerome Glisse <jglisse@redhat.com>
To: John Hubbard <jhubbard@nvidia.com>
Cc: Balbir Singh <bsingharora@gmail.com>,
Andrew Morton <akpm@linux-foundation.org>,
linux-kernel@vger.kernel.org, linux-mm <linux-mm@kvack.org>,
Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>,
David Nellans <dnellans@nvidia.com>,
Evgeny Baskakov <ebaskakov@nvidia.com>,
Mark Hairgrove <mhairgrove@nvidia.com>,
Sherry Cheung <SCheung@nvidia.com>,
Subhash Gutti <sgutti@nvidia.com>
Subject: Re: [HMM 07/16] mm/migrate: new memory migration helper for use with device memory v4
Date: Thu, 16 Mar 2017 21:52:23 -0400 (EDT)
Message-ID: <2057035918.7910419.1489715543920.JavaMail.zimbra@redhat.com>
In-Reply-To: <94e0d115-7deb-c748-3dc2-60d6289e6551@nvidia.com>
> On 03/16/2017 05:45 PM, Balbir Singh wrote:
> > On Fri, Mar 17, 2017 at 11:22 AM, John Hubbard <jhubbard@nvidia.com> wrote:
> >> On 03/16/2017 04:05 PM, Andrew Morton wrote:
> >>>
> >>> On Thu, 16 Mar 2017 12:05:26 -0400 Jérôme Glisse <jglisse@redhat.com>
> >>> wrote:
> >>>
> >>>> +static inline struct page *migrate_pfn_to_page(unsigned long mpfn)
> >>>> +{
> >>>> +	if (!(mpfn & MIGRATE_PFN_VALID))
> >>>> +		return NULL;
> >>>> +	return pfn_to_page(mpfn & MIGRATE_PFN_MASK);
> >>>> +}
> >>>
> >>>
> >>> i386 allnoconfig:
> >>>
> >>> In file included from mm/page_alloc.c:61:
> >>> ./include/linux/migrate.h: In function 'migrate_pfn_to_page':
> >>> ./include/linux/migrate.h:139: warning: left shift count >= width of type
> >>> ./include/linux/migrate.h:141: warning: left shift count >= width of type
> >>> ./include/linux/migrate.h: In function 'migrate_pfn_size':
> >>> ./include/linux/migrate.h:146: warning: left shift count >= width of type
> >>>
> >>
> >> It seems clear that this was never meant to work with < 64-bit pfns:
> >>
> >> // migrate.h excerpt:
> >> #define MIGRATE_PFN_VALID (1UL << (BITS_PER_LONG_LONG - 1))
> >> #define MIGRATE_PFN_MIGRATE (1UL << (BITS_PER_LONG_LONG - 2))
> >> #define MIGRATE_PFN_HUGE (1UL << (BITS_PER_LONG_LONG - 3))
> >> #define MIGRATE_PFN_LOCKED (1UL << (BITS_PER_LONG_LONG - 4))
> >> #define MIGRATE_PFN_WRITE (1UL << (BITS_PER_LONG_LONG - 5))
> >> #define MIGRATE_PFN_DEVICE (1UL << (BITS_PER_LONG_LONG - 6))
> >> #define MIGRATE_PFN_ERROR (1UL << (BITS_PER_LONG_LONG - 7))
> >> #define MIGRATE_PFN_MASK ((1UL << (BITS_PER_LONG_LONG - PAGE_SHIFT)) - 1)
> >>
> >> ...obviously, there is not enough room for these flags in a 32-bit pfn.
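> >>
> >> To spell out why those warnings fire (illustration only, not part of the
> >> patch): BITS_PER_LONG_LONG is 64 even on i386, where unsigned long is only
> >> 32 bits wide, so the flag definitions expand to things like
> >>
> >> 	(1UL << 63)	/* MIGRATE_PFN_VALID on i386: shift count 63 >= 32 */
> >>
> >> hence gcc's "left shift count >= width of type".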
> >>
> >> So, given the current HMM design, I think we are going to have to provide
> >> a 32-bit version of these routines (migrate_pfn_to_page, and related) that
> >> is a no-op, right?
> >
> > Or make the HMM Kconfig feature 64BIT only by making it depend on 64BIT?
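> > i.e. just a dependency in the HMM Kconfig entry, something along these
> > lines (a sketch only; the exact entry in the series may differ):
> >
> > 	config HMM
> > 		bool "Heterogeneous Memory Management"
> > 		depends on 64BIT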
> >
>
> Yes, that was my first reaction too, but these particular routines are
> aspiring to be generic routines--in fact, you have had an influence there,
> because these might possibly help with NUMA migrations. :)
>
> So it would look odd to see this:
>
> #ifdef CONFIG_HMM
> int migrate_vma(const struct migrate_vma_ops *ops,
> 		struct vm_area_struct *vma,
> 		unsigned long mentries,
> 		unsigned long start,
> 		unsigned long end,
> 		unsigned long *src,
> 		unsigned long *dst,
> 		void *private)
> {
> 	/* ...implementation */
> }
> #endif
>
> ...because migrate_vma() does not sound HMM-specific, and it is, after all,
> in migrate.h and migrate.c. We probably want a more generic approach here
> (not sure if I've picked exactly the right token to #ifdef on, but it's
> close):
>
> #ifdef CONFIG_64BIT
> int migrate_vma(const struct migrate_vma_ops *ops,
> 		struct vm_area_struct *vma,
> 		unsigned long mentries,
> 		unsigned long start,
> 		unsigned long end,
> 		unsigned long *src,
> 		unsigned long *dst,
> 		void *private)
> {
> 	/* ... full implementation */
> }
>
> #else
> int migrate_vma(const struct migrate_vma_ops *ops,
> 		struct vm_area_struct *vma,
> 		unsigned long mentries,
> 		unsigned long start,
> 		unsigned long end,
> 		unsigned long *src,
> 		unsigned long *dst,
> 		void *private)
> {
> 	return -EINVAL; /* or something more appropriate */
> }
> #endif
>
> thanks
> John Hubbard
> NVIDIA
The original intention was for this to be 64-bit only; 32-bit is a dying
species. Before the hmm_ prefix was split out of this code and the helpers
were made generic, all of it was behind a 64-bit config flag.
If later on someone really cares about 32-bit, the only real option is to
move these to u64.
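Roughly, moving to u64 would look something like this (a sketch only; the
bit positions and the helper signature are illustrative, not an actual
patch):

	/* Fixed-width encoding so the flag shifts stay well defined even
	 * where unsigned long is 32 bits.
	 */
	#define MIGRATE_PFN_VALID	(1ULL << 63)
	#define MIGRATE_PFN_MIGRATE	(1ULL << 62)
	#define MIGRATE_PFN_MASK	((1ULL << (64 - PAGE_SHIFT)) - 1)

	static inline struct page *migrate_pfn_to_page(u64 mpfn)
	{
		if (!(mpfn & MIGRATE_PFN_VALID))
			return NULL;
		return pfn_to_page(mpfn & MIGRATE_PFN_MASK);
	}

and the src/dst arrays passed to migrate_vma() would become u64 * as well.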
Cheers,
Jérôme
Thread overview: 45+ messages
2017-03-16 16:05 [HMM 00/16] HMM (Heterogeneous Memory Management) v18 Jérôme Glisse
2017-03-16 16:05 ` [HMM 01/16] mm/memory/hotplug: convert device bool to int to allow for more flags v3 Jérôme Glisse
2017-03-19 20:08 ` Mel Gorman
2017-03-16 16:05 ` [HMM 02/16] mm/put_page: move ref decrement to put_zone_device_page() Jérôme Glisse
2017-03-19 20:08 ` Mel Gorman
2017-03-16 16:05 ` [HMM 03/16] mm/ZONE_DEVICE/free-page: callback when page is freed v3 Jérôme Glisse
2017-03-19 20:08 ` Mel Gorman
2017-03-16 16:05 ` [HMM 04/16] mm/ZONE_DEVICE/unaddressable: add support for un-addressable device memory v3 Jérôme Glisse
2017-03-19 20:09 ` Mel Gorman
2017-03-16 16:05 ` [HMM 05/16] mm/ZONE_DEVICE/x86: add support for un-addressable device memory Jérôme Glisse
2017-03-16 16:05 ` [HMM 06/16] mm/migrate: add new boolean copy flag to migratepage() callback Jérôme Glisse
2017-03-19 20:09 ` Mel Gorman
2017-03-16 16:05 ` [HMM 07/16] mm/migrate: new memory migration helper for use with device memory v4 Jérôme Glisse
2017-03-16 16:24 ` Reza Arbab
2017-03-16 20:58 ` Balbir Singh
2017-03-16 23:05 ` Andrew Morton
2017-03-17 0:22 ` John Hubbard
2017-03-17 0:45 ` Balbir Singh
2017-03-17 0:57 ` John Hubbard
2017-03-17 1:52 ` Jerome Glisse [this message]
2017-03-17 3:32 ` Andrew Morton
2017-03-17 3:42 ` Balbir Singh
2017-03-17 4:51 ` Balbir Singh
2017-03-17 7:17 ` John Hubbard
2017-03-16 16:05 ` [HMM 08/16] mm/migrate: migrate_vma() unmap page from vma while collecting pages Jérôme Glisse
2017-03-16 16:05 ` [HMM 09/16] mm/hmm: heterogeneous memory management (HMM for short) Jérôme Glisse
2017-03-19 20:09 ` Mel Gorman
2017-03-16 16:05 ` [HMM 10/16] mm/hmm/mirror: mirror process address space on device with HMM helpers Jérôme Glisse
2017-03-19 20:09 ` Mel Gorman
2017-03-16 16:05 ` [HMM 11/16] mm/hmm/mirror: helper to snapshot CPU page table v2 Jérôme Glisse
2017-03-19 20:09 ` Mel Gorman
2017-03-16 16:05 ` [HMM 12/16] mm/hmm/mirror: device page fault handler Jérôme Glisse
2017-03-16 16:05 ` [HMM 13/16] mm/hmm/migrate: support un-addressable ZONE_DEVICE page in migration Jérôme Glisse
2017-03-16 16:05 ` [HMM 14/16] mm/migrate: allow migrate_vma() to alloc new page on empty entry Jérôme Glisse
2017-03-16 16:05 ` [HMM 15/16] mm/hmm/devmem: device memory hotplug using ZONE_DEVICE Jérôme Glisse
2017-03-16 16:05 ` [HMM 16/16] mm/hmm/devmem: dummy HMM device for ZONE_DEVICE memory v2 Jérôme Glisse
2017-03-17 6:55 ` Bob Liu
2017-03-17 16:53 ` Jerome Glisse
2017-03-16 20:43 ` [HMM 00/16] HMM (Heterogeneous Memory Management) v18 Andrew Morton
2017-03-16 23:49 ` Jerome Glisse
2017-03-17 8:29 ` Bob Liu
2017-03-17 15:57 ` Jerome Glisse
2017-03-17 8:39 ` Bob Liu
2017-03-17 15:52 ` Jerome Glisse
2017-03-19 20:09 ` Mel Gorman