linux-arm-kernel.lists.infradead.org archive mirror
* [RFC PATCH v2] prevent top pte being overwritten before flushing
@ 2012-10-14 17:42 Andrew Yan-Pai Chen
  2012-10-17  8:42 ` Andrew Yan-Pai Chen
  0 siblings, 1 reply; 6+ messages in thread
From: Andrew Yan-Pai Chen @ 2012-10-14 17:42 UTC (permalink / raw)
  To: linux-arm-kernel

From: Yan-Pai Chen <ypchen@faraday-tech.com>

Since flush_pfn_alias() runs with preemption enabled, a task can be
preempted just after set_top_pte() completes. If the preempting task
happens to call flush_pfn_alias() with a vaddr of the same cache
colour, it overwrites the same top pte. When the original task
resumes, it flushes cache lines through a now-stale mapping, so no
lines (or the wrong lines) are flushed, given the nature of VIPT
caches.

flush_icache_alias() has the same problem. However, since it can also
be called on SMP setups, we prevent concurrent overwrites of the top
pte by taking a lock around it.

Signed-off-by: Jason Lin <wwlin@faraday-tech.com>
Signed-off-by: Yan-Pai Chen <ypchen@faraday-tech.com>
---
 arch/arm/mm/flush.c |   11 +++++++++++
 1 files changed, 11 insertions(+), 0 deletions(-)

diff --git a/arch/arm/mm/flush.c b/arch/arm/mm/flush.c
index 40ca11e..b6510f4 100644
--- a/arch/arm/mm/flush.c
+++ b/arch/arm/mm/flush.c
@@ -11,6 +11,7 @@
 #include <linux/mm.h>
 #include <linux/pagemap.h>
 #include <linux/highmem.h>
+#include <linux/spinlock.h>
 
 #include <asm/cacheflush.h>
 #include <asm/cachetype.h>
@@ -22,11 +23,15 @@
 
 #ifdef CONFIG_CPU_CACHE_VIPT
 
+static DEFINE_RAW_SPINLOCK(flush_lock);
+
+/* Beware that this function is not to be called for SMP setups. */
 static void flush_pfn_alias(unsigned long pfn, unsigned long vaddr)
 {
 	unsigned long to = FLUSH_ALIAS_START + (CACHE_COLOUR(vaddr) << PAGE_SHIFT);
 	const int zero = 0;
 
+	preempt_disable();
 	set_top_pte(to, pfn_pte(pfn, PAGE_KERNEL));
 
 	asm(	"mcrr	p15, 0, %1, %0, c14\n"
@@ -34,6 +39,8 @@ static void flush_pfn_alias(unsigned long pfn, unsigned long vaddr)
 	    :
 	    : "r" (to), "r" (to + PAGE_SIZE - L1_CACHE_BYTES), "r" (zero)
 	    : "cc");
+
+	preempt_enable();
 }
 
 static void flush_icache_alias(unsigned long pfn, unsigned long vaddr, unsigned long len)
@@ -42,9 +49,13 @@ static void flush_icache_alias(unsigned long pfn, unsigned long vaddr, unsigned
 	unsigned long offset = vaddr & (PAGE_SIZE - 1);
 	unsigned long to;
 
+	raw_spin_lock(&flush_lock);
+
 	set_top_pte(va, pfn_pte(pfn, PAGE_KERNEL));
 	to = va + offset;
 	flush_icache_range(to, to + len);
+
+	raw_spin_unlock(&flush_lock);
 }
 
 void flush_cache_mm(struct mm_struct *mm)
-- 
1.7.4.1


* [RFC PATCH v2] prevent top pte being overwritten before flushing
  2012-10-14 17:42 [RFC PATCH v2] prevent top pte being overwritten before flushing Andrew Yan-Pai Chen
@ 2012-10-17  8:42 ` Andrew Yan-Pai Chen
  2012-10-17  9:39   ` Will Deacon
  0 siblings, 1 reply; 6+ messages in thread
From: Andrew Yan-Pai Chen @ 2012-10-17  8:42 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Oct 15, 2012 at 1:42 AM, Andrew Yan-Pai Chen
<yanpai.chen@gmail.com> wrote:
> From: Yan-Pai Chen <ypchen@faraday-tech.com>
>
> Since flush_pfn_alias() is preemptible, it is possible to be
> preempted just after set_top_pte() is done. If the process
> which preempts the previous happened to invoke flush_pfn_alias()
> with the same colour vaddr as that of the previous, the same
> top pte will be overwritten. When switching back to the previous,
> it attempts to flush cache lines with incorrect mapping. Then
> no lines (or wrong lines) will be flushed because of the nature
> of vipt caches.
>
> flush_icache_alias() has the same problem as well. However, as it
> could be called in SMP setups, we prevent concurrent overwrites of
> top pte by having a lock on it.
>
> Signed-off-by: JasonLin <wwlin@faraday-tech.com>
> Signed-off-by: Yan-Pai Chen <ypchen@faraday-tech.com>
> ---
>  arch/arm/mm/flush.c |   11 +++++++++++
>  1 files changed, 11 insertions(+), 0 deletions(-)
>
> diff --git a/arch/arm/mm/flush.c b/arch/arm/mm/flush.c
> index 40ca11e..b6510f4 100644
> --- a/arch/arm/mm/flush.c
> +++ b/arch/arm/mm/flush.c
> @@ -11,6 +11,7 @@
>  #include <linux/mm.h>
>  #include <linux/pagemap.h>
>  #include <linux/highmem.h>
> +#include <linux/spinlock.h>
>
>  #include <asm/cacheflush.h>
>  #include <asm/cachetype.h>
> @@ -22,11 +23,15 @@
>
>  #ifdef CONFIG_CPU_CACHE_VIPT
>
> +static DEFINE_RAW_SPINLOCK(flush_lock);
> +
> +/* Beware that this function is not to be called for SMP setups. */
>  static void flush_pfn_alias(unsigned long pfn, unsigned long vaddr)
>  {
>         unsigned long to = FLUSH_ALIAS_START + (CACHE_COLOUR(vaddr) <<
> PAGE_SHIFT);
>         const int zero = 0;
>
> +       preempt_disable();
>         set_top_pte(to, pfn_pte(pfn, PAGE_KERNEL));
>
>         asm(    "mcrr   p15, 0, %1, %0, c14\n"
> @@ -34,6 +39,8 @@ static void flush_pfn_alias(unsigned long pfn, unsigned
> long vaddr)
>             :
>             : "r" (to), "r" (to + PAGE_SIZE - L1_CACHE_BYTES), "r" (zero)
>             : "cc");
> +
> +       preempt_enable();
>  }
>
>  static void flush_icache_alias(unsigned long pfn, unsigned long vaddr,
> unsigned long len)
> @@ -42,9 +49,13 @@ static void flush_icache_alias(unsigned long pfn,
> unsigned long vaddr, unsigned
>         unsigned long offset = vaddr & (PAGE_SIZE - 1);
>         unsigned long to;
>
> +       raw_spin_lock(&flush_lock);
> +
>         set_top_pte(va, pfn_pte(pfn, PAGE_KERNEL));
>         to = va + offset;
>         flush_icache_range(to, to + len);
> +
> +       raw_spin_unlock(&flush_lock);
>  }
>
>  void flush_cache_mm(struct mm_struct *mm)
> --
> 1.7.4.1
>

Hi all,

Any ideas?


* [RFC PATCH v2] prevent top pte being overwritten before flushing
  2012-10-17  8:42 ` Andrew Yan-Pai Chen
@ 2012-10-17  9:39   ` Will Deacon
  2012-10-17  9:54     ` Jason Lin
  0 siblings, 1 reply; 6+ messages in thread
From: Will Deacon @ 2012-10-17  9:39 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Oct 17, 2012 at 09:42:19AM +0100, Andrew Yan-Pai Chen wrote:
> On Mon, Oct 15, 2012 at 1:42 AM, Andrew Yan-Pai Chen
> <yanpai.chen@gmail.com> wrote:
> > +static DEFINE_RAW_SPINLOCK(flush_lock);
> > +
> > +/* Beware that this function is not to be called for SMP setups. */
> >  static void flush_pfn_alias(unsigned long pfn, unsigned long vaddr)
> >  {
> >         unsigned long to = FLUSH_ALIAS_START + (CACHE_COLOUR(vaddr) <<
> > PAGE_SHIFT);
> >         const int zero = 0;
> >
> > +       preempt_disable();
> >         set_top_pte(to, pfn_pte(pfn, PAGE_KERNEL));
> >
> >         asm(    "mcrr   p15, 0, %1, %0, c14\n"
> > @@ -34,6 +39,8 @@ static void flush_pfn_alias(unsigned long pfn, unsigned
> > long vaddr)
> >             :
> >             : "r" (to), "r" (to + PAGE_SIZE - L1_CACHE_BYTES), "r" (zero)
> >             : "cc");
> > +
> > +       preempt_enable();
> >  }
> >
> >  static void flush_icache_alias(unsigned long pfn, unsigned long vaddr,
> > unsigned long len)
> > @@ -42,9 +49,13 @@ static void flush_icache_alias(unsigned long pfn,
> > unsigned long vaddr, unsigned
> >         unsigned long offset = vaddr & (PAGE_SIZE - 1);
> >         unsigned long to;
> >
> > +       raw_spin_lock(&flush_lock);
> > +
> >         set_top_pte(va, pfn_pte(pfn, PAGE_KERNEL));
> >         to = va + offset;
> >         flush_icache_range(to, to + len);
> > +
> > +       raw_spin_unlock(&flush_lock);
> >  }
> >
> >  void flush_cache_mm(struct mm_struct *mm)
> > --
> > 1.7.4.1
> >
> 
> Hi all,
> 
> Any ideas?

I wonder if we could use different ptes for each CPU (by hacking
pte_offset_kernel) and then get away with just disabling preemption in both
cases?

Will


* [RFC PATCH v2] prevent top pte being overwritten before flushing
  2012-10-17  9:39   ` Will Deacon
@ 2012-10-17  9:54     ` Jason Lin
  2012-10-17  9:57       ` Will Deacon
  0 siblings, 1 reply; 6+ messages in thread
From: Jason Lin @ 2012-10-17  9:54 UTC (permalink / raw)
  To: linux-arm-kernel

2012/10/17 Will Deacon <will.deacon@arm.com>:
> On Wed, Oct 17, 2012 at 09:42:19AM +0100, Andrew Yan-Pai Chen wrote:
>> On Mon, Oct 15, 2012 at 1:42 AM, Andrew Yan-Pai Chen
>> <yanpai.chen@gmail.com> wrote:
>> > +static DEFINE_RAW_SPINLOCK(flush_lock);
>> > +
>> > +/* Beware that this function is not to be called for SMP setups. */
>> >  static void flush_pfn_alias(unsigned long pfn, unsigned long vaddr)
>> >  {
>> >         unsigned long to = FLUSH_ALIAS_START + (CACHE_COLOUR(vaddr) <<
>> > PAGE_SHIFT);
>> >         const int zero = 0;
>> >
>> > +       preempt_disable();
>> >         set_top_pte(to, pfn_pte(pfn, PAGE_KERNEL));
>> >
>> >         asm(    "mcrr   p15, 0, %1, %0, c14\n"
>> > @@ -34,6 +39,8 @@ static void flush_pfn_alias(unsigned long pfn, unsigned
>> > long vaddr)
>> >             :
>> >             : "r" (to), "r" (to + PAGE_SIZE - L1_CACHE_BYTES), "r" (zero)
>> >             : "cc");
>> > +
>> > +       preempt_enable();
>> >  }
>> >
>> >  static void flush_icache_alias(unsigned long pfn, unsigned long vaddr,
>> > unsigned long len)
>> > @@ -42,9 +49,13 @@ static void flush_icache_alias(unsigned long pfn,
>> > unsigned long vaddr, unsigned
>> >         unsigned long offset = vaddr & (PAGE_SIZE - 1);
>> >         unsigned long to;
>> >
>> > +       raw_spin_lock(&flush_lock);
>> > +
>> >         set_top_pte(va, pfn_pte(pfn, PAGE_KERNEL));
>> >         to = va + offset;
>> >         flush_icache_range(to, to + len);
>> > +
>> > +       raw_spin_unlock(&flush_lock);
>> >  }
>> >
>> >  void flush_cache_mm(struct mm_struct *mm)
>> > --
>> > 1.7.4.1
>> >
>>
>> Hi all,
>>
>> Any ideas?
>
> I wonder if we could use different ptes for each CPU (by hacking
> pte_offset_kernel) and then get away with just disabling preemption in both
> cases?
>
> Will

A single-core processor can also hit this issue in flush_pfn_alias(),
so it would need different ptes for each task, not just per CPU.
Wouldn't that be too complicated?

Jason


* [RFC PATCH v2] prevent top pte being overwritten before flushing
  2012-10-17  9:54     ` Jason Lin
@ 2012-10-17  9:57       ` Will Deacon
  2012-10-22  8:35         ` Andrew Yan-Pai Chen
  0 siblings, 1 reply; 6+ messages in thread
From: Will Deacon @ 2012-10-17  9:57 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Oct 17, 2012 at 10:54:53AM +0100, Jason Lin wrote:
> 2012/10/17 Will Deacon <will.deacon@arm.com>:
> > On Wed, Oct 17, 2012 at 09:42:19AM +0100, Andrew Yan-Pai Chen wrote:
> >> On Mon, Oct 15, 2012 at 1:42 AM, Andrew Yan-Pai Chen
> >>
> >> Any ideas?
> >
> > I wonder if we could use different ptes for each CPU (by hacking
> > pte_offset_kernel) and then get away with just disabling preemption in both
> > cases?
> >
> > Will
> 
> Single core processor will also cause this issue in flush_pfn_alias().
> So it should use different ptes for each task.
> Will it be so complicated?

You can just disable preemption in that case -- it's the spin_lock that I'd
like to avoid.

Will


* [RFC PATCH v2] prevent top pte being overwritten before flushing
  2012-10-17  9:57       ` Will Deacon
@ 2012-10-22  8:35         ` Andrew Yan-Pai Chen
  0 siblings, 0 replies; 6+ messages in thread
From: Andrew Yan-Pai Chen @ 2012-10-22  8:35 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Oct 17, 2012 at 5:57 PM, Will Deacon <will.deacon@arm.com> wrote:
> On Wed, Oct 17, 2012 at 10:54:53AM +0100, Jason Lin wrote:
>> 2012/10/17 Will Deacon <will.deacon@arm.com>:
>> > On Wed, Oct 17, 2012 at 09:42:19AM +0100, Andrew Yan-Pai Chen wrote:
>> >> On Mon, Oct 15, 2012 at 1:42 AM, Andrew Yan-Pai Chen
>> >>
>> >> Any ideas?
>> >
>> > I wonder if we could use different ptes for each CPU (by hacking
>> > pte_offset_kernel) and then get away with just disabling preemption in both
>> > cases?
>> >
>> > Will
>>
>> Single core processor will also cause this issue in flush_pfn_alias().
>> So it should use different ptes for each task.
>> Will it be so complicated?
>
> You can just disable preemption in that case -- it's the spin_lock that I'd
> like to avoid.
>
> Will

If we had a larger address space for VIPT aliasing flushes, 4 pages
per CPU (one per colour), then we could avoid the spinlock by using a
per-cpu pte:

#define percpu_colour_offset    (0x4000 * smp_processor_id())

static inline void set_top_pte(unsigned long va, pte_t pte)
{
        pte_t *ptep = pte_offset_kernel(top_pmd, va + percpu_colour_offset);
        set_pte_ext(ptep, pte, 0);
        local_flush_tlb_kernel_page(va + percpu_colour_offset);
}

Is this what you mean, Will?

But considering that set_top_pte() is also called from other paths
(the functions in highmem.c), maybe we should keep set_top_pte()
untouched and change the caller instead:

static void flush_icache_alias(unsigned long pfn, unsigned long vaddr, unsigned long len)
{
        unsigned long va = FLUSH_ALIAS_START + (CACHE_COLOUR(vaddr) << PAGE_SHIFT);
        unsigned long offset = vaddr & (PAGE_SIZE - 1);
        unsigned long to;

        preempt_disable();

        va += percpu_colour_offset;
        set_top_pte(va, pfn_pte(pfn, PAGE_KERNEL));
        to = va + offset;
        flush_icache_range(to, to + len);

        preempt_enable();
}

BTW, shouldn't the spinlock in v6_*_user_highpage_aliasing be removed
as well (disabling preemption instead)? There seem to be no multi-core
processors using a VIPT aliasing D-cache.


Any comment would be appreciated.

--
Regards,
Andrew

