From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
To: christophe leroy <christophe.leroy@c-s.fr>,
Benjamin Herrenschmidt <benh@kernel.crashing.org>,
Paul Mackerras <paulus@samba.org>,
Michael Ellerman <mpe@ellerman.id.au>,
Scott Wood <oss@buserror.net>
Cc: linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/6] powerpc: port 64 bits pgtable_cache to 32 bits
Date: Mon, 15 Aug 2016 15:53:03 +0530
Message-ID: <87fuq649h4.fsf@linux.vnet.ibm.com>
In-Reply-To: <df8b5a70-e705-e282-c33c-0c0b93d6b72e@c-s.fr>
christophe leroy <christophe.leroy@c-s.fr> writes:
> On 14/08/2016 at 16:17, Aneesh Kumar K.V wrote:
>> Christophe Leroy <christophe.leroy@c-s.fr> writes:
>>
>>> Today powerpc64 uses a set of pgtable_caches while powerpc32 uses
>>> standard pages when using 4k pages and a single pgtable_cache
>>> if using other size pages. In addition powerpc32 uses another cache
>>> when handling huge pages.
>>>
>>> In preparation for implementing huge pages on the 8xx, this patch
>>> replaces the specific powerpc32 handling with the 64-bit approach.
>>
>> Why is this needed? Can you also summarize the page size used and the
>> hugepage format you are planning to use? What are the page sizes
>> supported by the 8xx? Also, is the new code a copy of the existing
>> powerpc64 4k page size code?
>
> 8xx supports two huge page sizes: 8M and 512k.
> As PGD entries point to 4M page tables, it means we are in a
> heterogeneous situation:
> 1/ when using 8M huge pages, we are in the same situation as what is
> done for the BOOK3S (which supports 16M, 256M and 1G), that is several
> PGD entries pointing to a single PTE entry.
That is what is done for FSL BOOK3E?
> 2/ when using 512k huge pages, we are in the same situation as what is
> done for the BOOK3E: a PGD entry points to the hugepage table that
> handles several huge pages (in our case 8 huge pages)
>
That is what is done for Book3s with 4K Linux page size?
So the idea here is to allocate a different hugepte table based on the
hugepage size requested, and hence the need to switch from hugepte_cache
to a more generic PGT_CACHE?
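
To make the question concrete, here is a minimal userspace sketch of the
shift-indexed cache scheme being ported (the names, the direct indexing and
the pointer-size << shift sizing are assumptions modelled on the 64-bit
code being moved, not the exact kernel API):

#include <stdio.h>
#include <stdlib.h>

#define MAX_PGTABLE_INDEX_SIZE	0xf
#define PPC32_PTR_SIZE		4	/* assumed pointer size on ppc32 */

struct table_cache {
	size_t size;			/* bytes per table handed out */
};

static struct table_cache *pgtable_cache[MAX_PGTABLE_INDEX_SIZE + 1];

/* Lazily create the cache serving tables of (PPC32_PTR_SIZE << shift) bytes. */
static void pgtable_cache_add(unsigned int shift)
{
	if (shift > MAX_PGTABLE_INDEX_SIZE || pgtable_cache[shift])
		return;
	pgtable_cache[shift] = malloc(sizeof(*pgtable_cache[shift]));
	if (pgtable_cache[shift])
		pgtable_cache[shift]->size = (size_t)PPC32_PTR_SIZE << shift;
}

#define PGT_CACHE(shift) (pgtable_cache[shift])

int main(void)
{
	pgtable_cache_add(1);		/* e.g. a hugepd holding one 64-bit PTE */
	if (PGT_CACHE(1))
		printf("PGT_CACHE(1) serves %zu-byte tables\n", PGT_CACHE(1)->size);
	return 0;
}

The point of the single array is that every table size gets one cache,
created on demand, instead of a dedicated hugepte_cache on the side.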
> The code from init_64 has been moved to a new file named init-common in
> order to be used by init_32 too.
> The code from the 64 bits .h has been copied into the 32 bits .h (indeed
> it's been copied twice as the .h are now duplicated into nohash and
> book3s versions)
That explanation made it a lot easier to follow the patch. Can we capture
that in the commit message too? Also, do we support hugepages with both 4K
and 16K Linux page sizes? I guess we do, because the 8xx only does a
two-level Linux page table?
>
> [...]
>
>>> diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
>>> index 7372ee1..9164a77 100644
>>> --- a/arch/powerpc/mm/hugetlbpage.c
>>> +++ b/arch/powerpc/mm/hugetlbpage.c
>>> @@ -68,7 +68,7 @@ static int __hugepte_alloc(struct mm_struct *mm, hugepd_t *hpdp,
>>>  #ifdef CONFIG_PPC_FSL_BOOK3E
>>>  	int i;
>>>  	int num_hugepd = 1 << (pshift - pdshift);
>>> -	cachep = hugepte_cache;
>>> +	cachep = PGT_CACHE(1);
>>>  #else
>>>  	cachep = PGT_CACHE(pdshift - pshift);
>>>  #endif
>>
>> Can you explain the usage of PGT_CACHE(1)?
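
For context on that question: in the FSL_BOOK3E-style layout quoted above, a
hugepte table can cover more address space than one PGD entry, so the same
table is installed into several consecutive PGD slots. A quick calculation
with the 8xx numbers from earlier in the thread (the pshift/pdshift values
here are assumptions for illustration):

#include <stdio.h>

int main(void)
{
	/* Assumed 8xx geometry from the discussion above:
	 * 8M huge page      -> pshift  = 23
	 * 4M per PGD entry  -> pdshift = 22
	 */
	int pshift = 23, pdshift = 22;
	int num_hugepd = 1 << (pshift - pdshift);

	/* Two consecutive PGD entries point at the same hugepte table. */
	printf("num_hugepd = %d\n", num_hugepd);
	return 0;
}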
>
> [...]
>
>>
>> I still didn't quite follow why we are replacing
>>
>> - hugepte_cache = kmem_cache_create("hugepte-cache", sizeof(pte_t),
>> -                                    HUGEPD_SHIFT_MASK + 1, 0, NULL);
>> + pgtable_cache_add(1, NULL);
>>
>
> Uh ... Indeed I wanted something to replace hugepte_cache. But it looks
> like it should be something like PGT_CACHE(0) for 32-bit targets having
> 32-bit PTEs and PGT_CACHE(1) for 32-bit targets having 64-bit PTEs.
> But PGT_CACHE(0) doesn't exist (yet).
>
> Looking once more, that might not really be needed, I think. I'll rework
> it and see what I can achieve.
>
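
A quick sanity check of that arithmetic, assuming the same sizing convention
as the sketch earlier in this mail (a cache at index shift serves tables of
pointer-size << shift bytes, which is an assumption rather than something
the patch states):

#include <stdio.h>

/* Smallest shift such that (ptr_size << shift) bytes can hold the table. */
static int cache_shift(unsigned int bytes, unsigned int ptr_size)
{
	int shift = 0;

	while ((ptr_size << shift) < bytes)
		shift++;
	return shift;
}

int main(void)
{
	/* ppc32 pointers are 4 bytes */
	printf("32-bit pte_t (4 bytes) -> PGT_CACHE(%d)\n", cache_shift(4, 4));
	printf("64-bit pte_t (8 bytes) -> PGT_CACHE(%d)\n", cache_shift(8, 4));
	return 0;
}

which lines up with the observation above: a 32-bit PTE would need the
non-existent PGT_CACHE(0), while a 64-bit PTE fits PGT_CACHE(1).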
Thanks
-aneesh