* [PATCH 6/8] mm/highmem: make kmap cache coloring aware
2014-07-22 19:01 [PATCH 0/8] xtensa: highmem support on cores with aliasing cache Max Filippov
2014-07-22 19:01 ` [PATCH 6/8] mm/highmem: make kmap cache coloring aware Max Filippov
@ 2014-07-22 19:01 ` Max Filippov
2014-07-22 19:35 ` Leonid Yegoshin
2014-07-22 19:01 ` Max Filippov
` (3 subsequent siblings)
5 siblings, 1 reply; 11+ messages in thread
From: Max Filippov @ 2014-07-22 19:01 UTC (permalink / raw)
To: linux-xtensa
Cc: Chris Zankel, Marc Gauthier, linux-kernel, Leonid Yegoshin,
linux-mm, linux-arch, linux-mips, David Rientjes, Max Filippov
From: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Provide hooks that allow architectures with aliasing cache to align
mapping address of high pages according to their color. Such architectures
may enforce similar coloring of low- and high-memory page mappings and
reuse existing cache management functions to support highmem.
Cc: linux-mm@kvack.org
Cc: linux-arch@vger.kernel.org
Cc: linux-mips@linux-mips.org
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
[ Max: extract architecture-independent part of the original patch, clean
up checkpatch and build warnings. ]
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
---
Changes since the initial version:
- define set_pkmap_color(pg, cl) as do { } while (0) instead of /* */;
- rename is_no_more_pkmaps to no_more_pkmaps;
- change 'if (count > 0)' to 'if (count)' to better match the original
code behavior;
mm/highmem.c | 19 ++++++++++++++++---
1 file changed, 16 insertions(+), 3 deletions(-)
diff --git a/mm/highmem.c b/mm/highmem.c
index b32b70c..88fb62e 100644
--- a/mm/highmem.c
+++ b/mm/highmem.c
@@ -44,6 +44,14 @@ DEFINE_PER_CPU(int, __kmap_atomic_idx);
*/
#ifdef CONFIG_HIGHMEM
+#ifndef ARCH_PKMAP_COLORING
+#define set_pkmap_color(pg, cl) do { } while (0)
+#define get_last_pkmap_nr(p, cl) (p)
+#define get_next_pkmap_nr(p, cl) (((p) + 1) & LAST_PKMAP_MASK)
+#define no_more_pkmaps(p, cl) (!(p))
+#define get_next_pkmap_counter(c, cl) ((c) - 1)
+#endif
+
unsigned long totalhigh_pages __read_mostly;
EXPORT_SYMBOL(totalhigh_pages);
@@ -161,19 +169,24 @@ static inline unsigned long map_new_virtual(struct page *page)
{
unsigned long vaddr;
int count;
+ int color __maybe_unused;
+
+ set_pkmap_color(page, color);
+ last_pkmap_nr = get_last_pkmap_nr(last_pkmap_nr, color);
start:
count = LAST_PKMAP;
/* Find an empty entry */
for (;;) {
- last_pkmap_nr = (last_pkmap_nr + 1) & LAST_PKMAP_MASK;
- if (!last_pkmap_nr) {
+ last_pkmap_nr = get_next_pkmap_nr(last_pkmap_nr, color);
+ if (no_more_pkmaps(last_pkmap_nr, color)) {
flush_all_zero_pkmaps();
count = LAST_PKMAP;
}
if (!pkmap_count[last_pkmap_nr])
break; /* Found a usable entry */
- if (--count)
+ count = get_next_pkmap_counter(count, color);
+ if (count)
continue;
/*
--
1.8.1.4
^ permalink raw reply related [flat|nested] 11+ messages in thread
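For readers checking the refactoring: below is a minimal standalone sketch (plain userspace C, with an assumed LAST_PKMAP of 64 chosen purely for the demonstration, not the kernel's value) showing that the default, non-coloring hook definitions reproduce the original scan exactly: the same slot sequence and the same wrap-around point.

#include <assert.h>
#include <stdio.h>

#define LAST_PKMAP			64	/* assumed for the demo */
#define LAST_PKMAP_MASK			(LAST_PKMAP - 1)

/* default (non-coloring) hooks, as introduced by the patch */
#define get_next_pkmap_nr(p, cl)	(((p) + 1) & LAST_PKMAP_MASK)
#define no_more_pkmaps(p, cl)		(!(p))
#define get_next_pkmap_counter(c, cl)	((c) - 1)

int main(void)
{
	unsigned long old_nr = 0, new_nr = 0;
	int count = LAST_PKMAP;

	do {
		old_nr = (old_nr + 1) & LAST_PKMAP_MASK;	/* original code */
		new_nr = get_next_pkmap_nr(new_nr, 0);		/* hook-based code */
		assert(old_nr == new_nr);
		assert(no_more_pkmaps(new_nr, 0) == !old_nr);
		count = get_next_pkmap_counter(count, 0);
	} while (count);
	printf("default hooks reproduce the original scan over %d slots\n",
	       LAST_PKMAP);
	return 0;
}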
* Re: [PATCH 6/8] mm/highmem: make kmap cache coloring aware
2014-07-22 19:01 ` Max Filippov
@ 2014-07-22 19:35 ` Leonid Yegoshin
2014-07-22 19:35 ` Leonid Yegoshin
2014-07-22 19:46 ` Max Filippov
0 siblings, 2 replies; 11+ messages in thread
From: Leonid Yegoshin @ 2014-07-22 19:35 UTC (permalink / raw)
To: Max Filippov
Cc: linux-xtensa, Chris Zankel, Marc Gauthier, linux-kernel, linux-mm,
linux-arch, linux-mips, David Rientjes
On 07/22/2014 12:01 PM, Max Filippov wrote:
> From: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
>
> Provide hooks that allow architectures with aliasing cache to align
> mapping address of high pages according to their color. Such architectures
> may enforce similar coloring of low- and high-memory page mappings and
> reuse existing cache management functions to support highmem.
>
> Cc: linux-mm@kvack.org
> Cc: linux-arch@vger.kernel.org
> Cc: linux-mips@linux-mips.org
> Cc: David Rientjes <rientjes@google.com>
> Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
> [ Max: extract architecture-independent part of the original patch, clean
> up checkpatch and build warnings. ]
> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
> ---
> Changes since the initial version:
> - define set_pkmap_color(pg, cl) as do { } while (0) instead of /* */;
> - rename is_no_more_pkmaps to no_more_pkmaps;
> - change 'if (count > 0)' to 'if (count)' to better match the original
> code behavior;
>
> mm/highmem.c | 19 ++++++++++++++++---
> 1 file changed, 16 insertions(+), 3 deletions(-)
>
> diff --git a/mm/highmem.c b/mm/highmem.c
> index b32b70c..88fb62e 100644
> --- a/mm/highmem.c
> +++ b/mm/highmem.c
> @@ -44,6 +44,14 @@ DEFINE_PER_CPU(int, __kmap_atomic_idx);
> */
> #ifdef CONFIG_HIGHMEM
>
> +#ifndef ARCH_PKMAP_COLORING
> +#define set_pkmap_color(pg, cl) do { } while (0)
> +#define get_last_pkmap_nr(p, cl) (p)
> +#define get_next_pkmap_nr(p, cl) (((p) + 1) & LAST_PKMAP_MASK)
> +#define no_more_pkmaps(p, cl) (!(p))
> +#define get_next_pkmap_counter(c, cl) ((c) - 1)
> +#endif
> +
> unsigned long totalhigh_pages __read_mostly;
> EXPORT_SYMBOL(totalhigh_pages);
>
> @@ -161,19 +169,24 @@ static inline unsigned long map_new_virtual(struct page *page)
> {
> unsigned long vaddr;
> int count;
> + int color __maybe_unused;
> +
> + set_pkmap_color(page, color);
> + last_pkmap_nr = get_last_pkmap_nr(last_pkmap_nr, color);
>
> start:
> count = LAST_PKMAP;
> /* Find an empty entry */
> for (;;) {
> - last_pkmap_nr = (last_pkmap_nr + 1) & LAST_PKMAP_MASK;
> - if (!last_pkmap_nr) {
> + last_pkmap_nr = get_next_pkmap_nr(last_pkmap_nr, color);
> + if (no_more_pkmaps(last_pkmap_nr, color)) {
> flush_all_zero_pkmaps();
> count = LAST_PKMAP;
> }
> if (!pkmap_count[last_pkmap_nr])
> break; /* Found a usable entry */
> - if (--count)
> + count = get_next_pkmap_counter(count, color);
> + if (count)
> continue;
>
> /*
I would like to go back to "if (count > 0)".
The reason is the easy way to step through pages of the same colour: the next
element is calculated by decrementing the counter by the number of colours
rather than by 1, and on the last available page it can easily become negative:
#define get_next_pkmap_counter(c, cl) ((c) - FIX_N_COLOURS)
where FIX_N_COLOURS is the maximum number of page colours.
Besides that, it is good practice for stopping the cycle.
- Leonid.
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 11+ messages in thread
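To make the concern concrete, here is a tiny standalone sketch (assumed values: FIX_N_COLOURS = 4 and a deliberately chosen count of 10 that is not a multiple of it) showing the case described above, where a bare 'if (count)' test would miss zero and the counter would go negative, while 'if (count > 0)' still stops the scan.

#include <stdio.h>

int main(void)
{
	const int FIX_N_COLOURS = 4;	/* assumed number of colours */
	int count = 10;			/* hypothetical count, not a multiple */
	int steps = 0;

	while (count > 0) {		/* the "count > 0" form suggested above */
		count -= FIX_N_COLOURS;	/* get_next_pkmap_counter() step */
		steps++;
	}
	/* count is now -2; a bare "if (count)" would not have stopped here */
	printf("stopped after %d steps with count = %d\n", steps, count);
	return 0;
}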
* Re: [PATCH 6/8] mm/highmem: make kmap cache coloring aware
2014-07-22 19:35 ` Leonid Yegoshin
2014-07-22 19:35 ` Leonid Yegoshin
@ 2014-07-22 19:46 ` Max Filippov
2014-07-22 19:46 ` Max Filippov
1 sibling, 1 reply; 11+ messages in thread
From: Max Filippov @ 2014-07-22 19:46 UTC (permalink / raw)
To: Leonid Yegoshin
Cc: linux-xtensa@linux-xtensa.org, Chris Zankel, Marc Gauthier, LKML,
linux-mm@kvack.org, Linux-Arch, Linux/MIPS Mailing List,
David Rientjes
On Tue, Jul 22, 2014 at 11:35 PM, Leonid Yegoshin
<Leonid.Yegoshin@imgtec.com> wrote:
> On 07/22/2014 12:01 PM, Max Filippov wrote:
>>
>> From: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
>>
>> Provide hooks that allow architectures with aliasing cache to align
>> mapping address of high pages according to their color. Such architectures
>> may enforce similar coloring of low- and high-memory page mappings and
>> reuse existing cache management functions to support highmem.
>>
>> Cc: linux-mm@kvack.org
>> Cc: linux-arch@vger.kernel.org
>> Cc: linux-mips@linux-mips.org
>> Cc: David Rientjes <rientjes@google.com>
>> Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
>> [ Max: extract architecture-independent part of the original patch, clean
>> up checkpatch and build warnings. ]
>> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
>> ---
>> Changes since the initial version:
>> - define set_pkmap_color(pg, cl) as do { } while (0) instead of /* */;
>> - rename is_no_more_pkmaps to no_more_pkmaps;
>> - change 'if (count > 0)' to 'if (count)' to better match the original
>> code behavior;
>>
>> mm/highmem.c | 19 ++++++++++++++++---
>> 1 file changed, 16 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/highmem.c b/mm/highmem.c
>> index b32b70c..88fb62e 100644
>> --- a/mm/highmem.c
>> +++ b/mm/highmem.c
>> @@ -44,6 +44,14 @@ DEFINE_PER_CPU(int, __kmap_atomic_idx);
>> */
>> #ifdef CONFIG_HIGHMEM
>> +#ifndef ARCH_PKMAP_COLORING
>> +#define set_pkmap_color(pg, cl) do { } while (0)
>> +#define get_last_pkmap_nr(p, cl) (p)
>> +#define get_next_pkmap_nr(p, cl) (((p) + 1) & LAST_PKMAP_MASK)
>> +#define no_more_pkmaps(p, cl) (!(p))
>> +#define get_next_pkmap_counter(c, cl) ((c) - 1)
>> +#endif
>> +
>> unsigned long totalhigh_pages __read_mostly;
>> EXPORT_SYMBOL(totalhigh_pages);
>> @@ -161,19 +169,24 @@ static inline unsigned long map_new_virtual(struct
>> page *page)
>> {
>> unsigned long vaddr;
>> int count;
>> + int color __maybe_unused;
>> +
>> + set_pkmap_color(page, color);
>> + last_pkmap_nr = get_last_pkmap_nr(last_pkmap_nr, color);
>> start:
>> count = LAST_PKMAP;
>> /* Find an empty entry */
>> for (;;) {
>> - last_pkmap_nr = (last_pkmap_nr + 1) & LAST_PKMAP_MASK;
>> - if (!last_pkmap_nr) {
>> + last_pkmap_nr = get_next_pkmap_nr(last_pkmap_nr, color);
>> + if (no_more_pkmaps(last_pkmap_nr, color)) {
>> flush_all_zero_pkmaps();
>> count = LAST_PKMAP;
>> }
>> if (!pkmap_count[last_pkmap_nr])
>> break; /* Found a usable entry */
>> - if (--count)
>> + count = get_next_pkmap_counter(count, color);
>> + if (count)
>> continue;
>> /*
>
> I would like to go back to "if (count > 0)".
>
> The reason is the easy way to step through pages of the same colour: the
> next element is calculated by decrementing the counter by the number of
> colours rather than by 1, and on the last available page it can easily
> become negative:
>
> #define get_next_pkmap_counter(c, cl) ((c) - FIX_N_COLOURS)
>
> where FIX_N_COLOURS is the maximum number of page colours.
Initial value of c (i.e. LAST_PKMAP) should be a multiple of FIX_N_COLOURS,
so that should not be a problem.
> Besides that, it is good practice for stopping the cycle.
But I agree with that.
--
Thanks.
-- Max
^ permalink raw reply [flat|nested] 11+ messages in thread
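And the counter-case from the reply above: a minimal sketch (assumed values: LAST_PKMAP = 4096, FIX_N_COLOURS = 4) showing that when the initial counter is a multiple of the number of colours, the per-colour decrement lands exactly on zero, so the bare 'if (count)' test is sufficient and never sees a negative value.

#include <assert.h>

int main(void)
{
	const int FIX_N_COLOURS = 4;	/* assumed number of colours */
	int count = 4096;		/* assumed LAST_PKMAP, a multiple of 4 */
	int steps = 0;

	while (count) {			/* the "if (count)" form from the patch */
		count -= FIX_N_COLOURS;	/* get_next_pkmap_counter() step */
		steps++;
	}
	assert(count == 0);		/* exactly zero, never negative */
	assert(steps == 4096 / FIX_N_COLOURS);
	return 0;
}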
* [PATCH 7/8] xtensa: support aliasing cache in kmap
2014-07-22 19:01 [PATCH 0/8] xtensa: highmem support on cores with aliasing cache Max Filippov
` (2 preceding siblings ...)
2014-07-22 19:01 ` Max Filippov
@ 2014-07-22 19:01 ` Max Filippov
2014-07-22 19:01 ` Max Filippov
2014-07-22 19:01 ` Max Filippov
5 siblings, 0 replies; 11+ messages in thread
From: Max Filippov @ 2014-07-22 19:01 UTC (permalink / raw)
To: linux-xtensa
Cc: Chris Zankel, Marc Gauthier, linux-kernel, Max Filippov, linux-mm,
linux-arch, linux-mips, David Rientjes
Define ARCH_PKMAP_COLORING and provide corresponding macro definitions
on cores with aliasing data cache.
Instead of single last_pkmap_nr maintain an array last_pkmap_nr_arr of
pkmap counters for each page color. Make sure that kmap maps physical
page at virtual address with color matching its physical address.
Cc: linux-mm@kvack.org
Cc: linux-arch@vger.kernel.org
Cc: linux-mips@linux-mips.org
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
---
arch/xtensa/include/asm/highmem.h | 18 ++++++++++++++++--
arch/xtensa/mm/highmem.c | 1 +
2 files changed, 17 insertions(+), 2 deletions(-)
diff --git a/arch/xtensa/include/asm/highmem.h b/arch/xtensa/include/asm/highmem.h
index 2653ef5..a5c3380 100644
--- a/arch/xtensa/include/asm/highmem.h
+++ b/arch/xtensa/include/asm/highmem.h
@@ -17,14 +17,28 @@
#include <asm/kmap_types.h>
#include <asm/pgtable.h>
-#define PKMAP_BASE (FIXADDR_START - PMD_SIZE)
-#define LAST_PKMAP PTRS_PER_PTE
+#define PKMAP_BASE ((FIXADDR_START - \
+ (LAST_PKMAP + 1) * PAGE_SIZE) & PMD_MASK)
+#define LAST_PKMAP (PTRS_PER_PTE * DCACHE_N_COLORS)
#define LAST_PKMAP_MASK (LAST_PKMAP - 1)
#define PKMAP_NR(virt) (((virt) - PKMAP_BASE) >> PAGE_SHIFT)
#define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT))
#define kmap_prot PAGE_KERNEL
+#if DCACHE_WAY_SIZE > PAGE_SIZE
+#define ARCH_PKMAP_COLORING
+#define set_pkmap_color(pg, cl) ((cl) = DCACHE_ALIAS(page_to_phys(pg)))
+#define get_last_pkmap_nr(p, cl) (last_pkmap_nr_arr[cl] + (cl))
+#define get_next_pkmap_nr(p, cl) \
+ ((last_pkmap_nr_arr[cl] = ((last_pkmap_nr_arr[cl] + DCACHE_N_COLORS) & \
+ LAST_PKMAP_MASK)) + (cl))
+#define no_more_pkmaps(p, cl) ((p) < DCACHE_N_COLORS)
+#define get_next_pkmap_counter(c, cl) ((c) - DCACHE_N_COLORS)
+
+extern unsigned int last_pkmap_nr_arr[];
+#endif
+
extern pte_t *pkmap_page_table;
void *kmap_high(struct page *page);
diff --git a/arch/xtensa/mm/highmem.c b/arch/xtensa/mm/highmem.c
index 466abae..3742a37 100644
--- a/arch/xtensa/mm/highmem.c
+++ b/arch/xtensa/mm/highmem.c
@@ -12,6 +12,7 @@
#include <linux/highmem.h>
#include <asm/tlbflush.h>
+unsigned int last_pkmap_nr_arr[DCACHE_N_COLORS];
static pte_t *kmap_pte;
static inline enum fixed_addresses kmap_idx(int type, unsigned long color)
--
1.8.1.4
^ permalink raw reply related [flat|nested] 11+ messages in thread
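As a sanity check of the coloring scheme, here is a standalone sketch with assumed geometry (4 KiB pages, a 16 KiB data-cache way giving DCACHE_N_COLORS = 4, PTRS_PER_PTE = 1024, and an arbitrary way-aligned PKMAP_BASE; none of these are the values of any particular core). It verifies that stepping the pkmap index by DCACHE_N_COLORS, as get_next_pkmap_nr() does in the patch, keeps the kmap virtual address in the colour class the scan started from; set_pkmap_color() picks that class from the physical page, so virtual and physical colours match.

#include <assert.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define DCACHE_WAY_SIZE	(4 * PAGE_SIZE)			/* assumed: 16 KiB */
#define DCACHE_N_COLORS	(DCACHE_WAY_SIZE / PAGE_SIZE)	/* = 4 */
#define PTRS_PER_PTE	1024				/* assumed */
#define LAST_PKMAP	(PTRS_PER_PTE * DCACHE_N_COLORS)
#define PKMAP_BASE	0xc8000000UL			/* assumed, way-aligned */
#define PKMAP_ADDR(nr)	(PKMAP_BASE + ((unsigned long)(nr) << PAGE_SHIFT))

/* colour of a virtual address: which page-sized slot of the way it hits */
static unsigned long vaddr_color(unsigned long addr)
{
	return (addr >> PAGE_SHIFT) & (DCACHE_N_COLORS - 1);
}

int main(void)
{
	unsigned long cl, nr;

	for (cl = 0; cl < DCACHE_N_COLORS; cl++)
		for (nr = cl; nr < LAST_PKMAP; nr += DCACHE_N_COLORS)
			assert(vaddr_color(PKMAP_ADDR(nr)) == cl);
	printf("every slot visited for a colour maps to a vaddr of that colour\n");
	return 0;
}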