From: Anshuman Khandual <khandual@linux.vnet.ibm.com>
To: Balbir Singh <bsingharora@gmail.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
linuxppc-dev@lists.ozlabs.org
Cc: hughd@google.com, dave.hansen@intel.com,
aneesh.kumar@linux.vnet.ibm.com, kirill@shutemov.name,
n-horiguchi@ah.jp.nec.com, mgorman@techsingularity.net,
akpm@linux-foundation.org
Subject: Re: [PATCH 02/10] mm/hugetlb: Add PGD based implementation awareness
Date: Mon, 11 Apr 2016 10:55:05 +0530
Message-ID: <570B3531.2000808@linux.vnet.ibm.com>
In-Reply-To: <570622B4.5020407@gmail.com>

On 04/07/2016 02:34 PM, Balbir Singh wrote:
>
>
> On 07/04/16 15:37, Anshuman Khandual wrote:
>> Currently, functions enabled by the config ARCH_WANT_GENERAL_HUGETLB like
>> 'huge_pte_alloc' and 'huge_pte_offset' don't take into account HugeTLB
>> page implementation at the PGD level. This is also true for functions
>> like 'follow_page_mask' which is called from the move_pages() system call.
>> This lack of PGD level huge page support prevents some architectures
>> from using these generic HugeTLB functions.
>>
>
> From what I know of move_pages(), it will always call follow_page_mask()
> with FOLL_GET (I could be wrong here) and the implementation below
> returns NULL for follow_huge_pgd().
You are right. This patch makes the ARCH_WANT_GENERAL_HUGETLB functions aware
of the PGD based implementation so that we can do all transactions on 16GB
pages through these functions instead of the present arch overrides. That also
requires the follow_page_mask() change, because every other access to the
page, not just the migrate_pages() path, goes through it.

But yes, we don't support migrate_pages() on PGD based pages yet, hence
follow_huge_pgd() just returns NULL in that case. Maybe the commit message
needs to reflect this.
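
Just to make that NULL return concrete, a follow_huge_pgd() modelled on the
existing follow_huge_pud() would look roughly like this (a sketch of the idea
only; the actual mm/hugetlb.c hunk adding it is not quoted here):

/*
 * Sketch only, modelled on the existing follow_huge_pud(); not the
 * exact hunk from this patch. FOLL_GET is refused, so migrate_pages(),
 * which always passes FOLL_GET, ends up in no_page_table().
 */
struct page * __weak
follow_huge_pgd(struct mm_struct *mm, unsigned long address,
		pgd_t *pgd, int flags)
{
	if (flags & FOLL_GET)
		return NULL;

	return pte_page(*(pte_t *)pgd) +
		((address & ~PGDIR_MASK) >> PAGE_SHIFT);
}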
>
>> This change adds the required PGD based implementation awareness and,
>> with that, more architectures like POWER, which implements 16GB pages
>> at the PGD level along with 16MB pages at the PMD level, can now
>> use the ARCH_WANT_GENERAL_HUGETLB config option.
>>
>> Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
>> ---
>>  include/linux/hugetlb.h |  3 +++
>>  mm/gup.c                |  6 ++++++
>>  mm/hugetlb.c            | 20 ++++++++++++++++++++
>>  3 files changed, 29 insertions(+)
>>
>> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
>> index 7d953c2..71832e1 100644
>> --- a/include/linux/hugetlb.h
>> +++ b/include/linux/hugetlb.h
>> @@ -115,6 +115,8 @@ struct page *follow_huge_pmd(struct mm_struct *mm, unsigned long address,
>>  				pmd_t *pmd, int flags);
>>  struct page *follow_huge_pud(struct mm_struct *mm, unsigned long address,
>>  				pud_t *pud, int flags);
>> +struct page *follow_huge_pgd(struct mm_struct *mm, unsigned long address,
>> +				pgd_t *pgd, int flags);
>>  int pmd_huge(pmd_t pmd);
>>  int pud_huge(pud_t pmd);
>>  unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
>> @@ -143,6 +145,7 @@ static inline void hugetlb_show_meminfo(void)
>>  }
>>  #define follow_huge_pmd(mm, addr, pmd, flags)	NULL
>>  #define follow_huge_pud(mm, addr, pud, flags)	NULL
>> +#define follow_huge_pgd(mm, addr, pgd, flags)	NULL
>>  #define prepare_hugepage_range(file, addr, len)	(-EINVAL)
>>  #define pmd_huge(x)	0
>>  #define pud_huge(x)	0
>> diff --git a/mm/gup.c b/mm/gup.c
>> index fb87aea..9bac78c 100644
>> --- a/mm/gup.c
>> +++ b/mm/gup.c
>> @@ -234,6 +234,12 @@ struct page *follow_page_mask(struct vm_area_struct *vma,
>>  	pgd = pgd_offset(mm, address);
>>  	if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
>>  		return no_page_table(vma, flags);
>> +	if (pgd_huge(*pgd) && vma->vm_flags & VM_HUGETLB) {
>> +		page = follow_huge_pgd(mm, address, pgd, flags);
>> +		if (page)
>> +			return page;
>> +		return no_page_table(vma, flags);
> This will return NULL as well?
That's right. no_page_table() returns NULL for FOLL_GET when we fall through
after follow_huge_pgd() fails.
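
For reference, the generic no_page_table() in mm/gup.c is roughly the
following (quoting the shape from memory, check the tree for the exact
version):

static struct page *no_page_table(struct vm_area_struct *vma,
		unsigned int flags)
{
	/*
	 * Only a core dump (FOLL_DUMP) of a hole gets ERR_PTR(-EFAULT);
	 * every other caller, including the FOLL_GET path used by
	 * migrate_pages(), just gets NULL back.
	 */
	if ((flags & FOLL_DUMP) && (!vma->vm_ops || !vma->vm_ops->fault))
		return ERR_PTR(-EFAULT);
	return NULL;
}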
>> +	}
>>
>>  	pud = pud_offset(pgd, address);
>>  	if (pud_none(*pud))
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index 19d0d08..5ea3158 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -4250,6 +4250,11 @@ pte_t *huge_pte_alloc(struct mm_struct *mm,
>>  	pte_t *pte = NULL;
>>
>>  	pgd = pgd_offset(mm, addr);
>> +	if (sz == PGDIR_SIZE) {
>> +		pte = (pte_t *)pgd;
>> +		goto huge_pgd;
>> +	}
>> +
>
> No allocation for a pgd slot - right?
No, it's already allocated for the mm during creation.
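
To spell that out: the generic pgd_offset() only indexes into the pgd page
that was set up by pgd_alloc() when the mm was created, something like the
following (generic form from memory; architectures may define their own):

/* Generic form, from memory; architectures may override these. */
#define pgd_index(address)	(((address) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))
#define pgd_offset(mm, address)	((mm)->pgd + pgd_index(address))

So for a 16GB mapping, huge_pte_alloc() can simply reuse that already-present
pgd slot as the huge PTE, which is what the (pte_t *)pgd cast above does.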