* [PATCH] arm64: enable EDAC on arm64
From: Rob Herring @ 2013-11-06 13:02 UTC
To: linux-arm-kernel
From: Rob Herring <rob.herring@calxeda.com>
Implement atomic_scrub and enable EDAC for arm64.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
arch/arm64/Kconfig | 1 +
arch/arm64/include/asm/edac.h | 44 +++++++++++++++++++++++++++++++++++++++++++
2 files changed, 45 insertions(+)
create mode 100644 arch/arm64/include/asm/edac.h
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 7d70404..611f5f6 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -11,6 +11,7 @@ config ARM64
 	select BUILDTIME_EXTABLE_SORT
 	select CLONE_BACKWARDS
 	select COMMON_CLK
+	select EDAC_SUPPORT
 	select GENERIC_CLOCKEVENTS
 	select GENERIC_IOMAP
 	select GENERIC_IRQ_PROBE
diff --git a/arch/arm64/include/asm/edac.h b/arch/arm64/include/asm/edac.h
new file mode 100644
index 0000000..ad81a7a
--- /dev/null
+++ b/arch/arm64/include/asm/edac.h
@@ -0,0 +1,44 @@
+/*
+ * Copyright 2013 Calxeda, Inc.
+ * Based on PPC version Copyright 2007 MontaVista Software, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef ASM_EDAC_H
+#define ASM_EDAC_H
+/*
+ * ECC atomic, DMA, SMP and interrupt safe scrub function.
+ * Implements the per arch atomic_scrub() that EDAC uses for software
+ * ECC scrubbing. It reads memory and then writes back the original
+ * value, allowing the hardware to detect and correct memory errors.
+ */
+static inline void atomic_scrub(void *va, u32 size)
+{
+	unsigned int *virt_addr = va;
+	unsigned int temp, temp2;
+	unsigned int i;
+
+	for (i = 0; i < size / sizeof(*virt_addr); i++, virt_addr++) {
+		/*
+		 * No need to check for store failure, another write means
+		 * the scrubbing has effectively already been done for us.
+		 */
+		asm volatile("\n"
+		"	ldxr	%0, %2\n"
+		"	stxr	%w1, %0, %2\n"
+		: "=&r" (temp), "=&r" (temp2), "+Q" (*virt_addr)
+		: : "cc");
+	}
+}
+
+#endif
--
1.8.1.2
* [PATCH] arm64: enable EDAC on arm64
From: Catalin Marinas @ 2013-11-06 15:26 UTC
To: linux-arm-kernel
On Wed, Nov 06, 2013 at 01:02:24PM +0000, Rob Herring wrote:
> +static inline void atomic_scrub(void *va, u32 size)
> +{
> +	unsigned int *virt_addr = va;
> +	unsigned int temp, temp2;
> +	unsigned int i;
> +
> +	for (i = 0; i < size / sizeof(*virt_addr); i++, virt_addr++) {
> +		/*
> +		 * No need to check for store failure, another write means
> +		 * the scrubbing has effectively already been done for us.
> +		 */
> +		asm volatile("\n"
> +		"	ldxr	%0, %2\n"
> +		"	stxr	%w1, %0, %2\n"
> +		: "=&r" (temp), "=&r" (temp2), "+Q" (*virt_addr)
> +		: : "cc");
But failure of stxr does not necessarily mean another write. It can be
an interrupt, cache line migration etc. The exclusive monitor can be
emulated in many ways.
BTW, can you not use 64-bit loads/stores?
--
Catalin
* [PATCH] arm64: enable EDAC on arm64
From: Rob Herring @ 2013-11-06 18:39 UTC
To: linux-arm-kernel
On Wed, Nov 6, 2013 at 9:26 AM, Catalin Marinas <catalin.marinas@arm.com> wrote:
> On Wed, Nov 06, 2013 at 01:02:24PM +0000, Rob Herring wrote:
>> +static inline void atomic_scrub(void *va, u32 size)
>> +{
>> +	unsigned int *virt_addr = va;
>> +	unsigned int temp, temp2;
>> +	unsigned int i;
>> +
>> +	for (i = 0; i < size / sizeof(*virt_addr); i++, virt_addr++) {
>> +		/*
>> +		 * No need to check for store failure, another write means
>> +		 * the scrubbing has effectively already been done for us.
>> +		 */
>> +		asm volatile("\n"
>> +		"	ldxr	%0, %2\n"
>> +		"	stxr	%w1, %0, %2\n"
>> +		: "=&r" (temp), "=&r" (temp2), "+Q" (*virt_addr)
>> +		: : "cc");
>
> But failure of stxr does not necessarily mean another write. It can be
> an interrupt, cache line migration etc. The exclusive monitor can be
> emulated in many ways.
Right, I was thinking I could simplify things.
In that case, I could implement this with just "atomic64_add(0,
virt_addr)", but is there any guarantee that atomic64_t has a size of
8 bytes and that I can simply increment an atomic64_t ptr?
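Something like this (untested sketch; it assumes atomic64_t is just a
wrapped long, so stepping an atomic64_t pointer walks memory 8 bytes
at a time):

static inline void atomic_scrub(void *va, u32 size)
{
	atomic64_t *virt_addr = va;
	unsigned int i;

	/* atomically read and write back each 64-bit word unchanged */
	for (i = 0; i < size / sizeof(*virt_addr); i++, virt_addr++)
		atomic64_add(0, virt_addr);
}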
> BTW, can you not use 64-bit loads/stores?
Correct, that should be a long instead of int.
Rob
* [PATCH] arm64: enable EDAC on arm64
From: Catalin Marinas @ 2013-11-07 9:47 UTC
To: linux-arm-kernel
On Wed, Nov 06, 2013 at 06:39:18PM +0000, Rob Herring wrote:
> On Wed, Nov 6, 2013 at 9:26 AM, Catalin Marinas <catalin.marinas@arm.com> wrote:
> > On Wed, Nov 06, 2013 at 01:02:24PM +0000, Rob Herring wrote:
> >> +static inline void atomic_scrub(void *va, u32 size)
> >> +{
> >> +	unsigned int *virt_addr = va;
> >> +	unsigned int temp, temp2;
> >> +	unsigned int i;
> >> +
> >> +	for (i = 0; i < size / sizeof(*virt_addr); i++, virt_addr++) {
> >> +		/*
> >> +		 * No need to check for store failure, another write means
> >> +		 * the scrubbing has effectively already been done for us.
> >> +		 */
> >> +		asm volatile("\n"
> >> +		"	ldxr	%0, %2\n"
> >> +		"	stxr	%w1, %0, %2\n"
> >> +		: "=&r" (temp), "=&r" (temp2), "+Q" (*virt_addr)
> >> +		: : "cc");
> >
> > But failure of stxr does not necessarily mean another write. It can be
> > an interrupt, cache line migration etc. The exclusive monitor can be
> > emulated in many ways.
>
> Right, I was thinking I could simplify things.
>
> In that case, I could implement this with just "atomic64_add(0,
> virt_addr)", but is there any guarantee that atomic64_t has a size of
> 8 bytes and that I can simply increment an atomic64_t ptr?
I would rather keep the asm volatile (as you've done) to avoid both the
'add' inside the loop and the forced cast to atomic64_t.
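Roughly (untested):

	unsigned long *virt_addr = va;
	unsigned long temp;
	unsigned int temp2, i;

	for (i = 0; i < size / sizeof(*virt_addr); i++, virt_addr++)
		/* 64-bit exclusive load/store, writing the data back unchanged */
		asm volatile("\n"
		"	ldxr	%0, %2\n"
		"	stxr	%w1, %0, %2\n"
		: "=&r" (temp), "=&r" (temp2), "+Q" (*virt_addr));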
--
Catalin
* [PATCH] arm64: enable EDAC on arm64
From: Will Deacon @ 2013-11-07 9:51 UTC
To: linux-arm-kernel
On Wed, Nov 06, 2013 at 06:39:18PM +0000, Rob Herring wrote:
> On Wed, Nov 6, 2013 at 9:26 AM, Catalin Marinas <catalin.marinas@arm.com> wrote:
> > On Wed, Nov 06, 2013 at 01:02:24PM +0000, Rob Herring wrote:
> >> +static inline void atomic_scrub(void *va, u32 size)
> >> +{
> >> +	unsigned int *virt_addr = va;
> >> +	unsigned int temp, temp2;
> >> +	unsigned int i;
> >> +
> >> +	for (i = 0; i < size / sizeof(*virt_addr); i++, virt_addr++) {
> >> +		/*
> >> +		 * No need to check for store failure, another write means
> >> +		 * the scrubbing has effectively already been done for us.
> >> +		 */
> >> +		asm volatile("\n"
> >> +		"	ldxr	%0, %2\n"
> >> +		"	stxr	%w1, %0, %2\n"
> >> +		: "=&r" (temp), "=&r" (temp2), "+Q" (*virt_addr)
> >> +		: : "cc");
> >
> > But failure of stxr does not necessarily mean another write. It can be
> > an interrupt, cache line migration etc. The exclusive monitor can be
> > emulated in many ways.
>
> Right, I was thinking I could simplify things.
>
> In that case, I could implement this with just "atomic64_add(0,
> virt_addr)", but is there any guarantee that atomic64_t has a size of
> 8 bytes and that I can simply increment an atomic64_t ptr?
>
> > BTW, can you not use 64-bit loads/stores?
>
> Correct, that should be a long instead of int.
Are we guaranteed that va is a 64-bit aligned pointer? Also, usual comment
about the "cc" clobber.
Will
* [PATCH] arm64: enable EDAC on arm64
From: Rob Herring @ 2013-11-07 14:22 UTC
To: linux-arm-kernel
On Thu, Nov 7, 2013 at 3:51 AM, Will Deacon <will.deacon@arm.com> wrote:
> On Wed, Nov 06, 2013 at 06:39:18PM +0000, Rob Herring wrote:
>> On Wed, Nov 6, 2013 at 9:26 AM, Catalin Marinas <catalin.marinas@arm.com> wrote:
>> > On Wed, Nov 06, 2013 at 01:02:24PM +0000, Rob Herring wrote:
>> >> +static inline void atomic_scrub(void *va, u32 size)
>> >> +{
>> >> +	unsigned int *virt_addr = va;
>> >> +	unsigned int temp, temp2;
>> >> +	unsigned int i;
>> >> +
>> >> +	for (i = 0; i < size / sizeof(*virt_addr); i++, virt_addr++) {
>> >> +		/*
>> >> +		 * No need to check for store failure, another write means
>> >> +		 * the scrubbing has effectively already been done for us.
>> >> +		 */
>> >> +		asm volatile("\n"
>> >> +		"	ldxr	%0, %2\n"
>> >> +		"	stxr	%w1, %0, %2\n"
>> >> +		: "=&r" (temp), "=&r" (temp2), "+Q" (*virt_addr)
>> >> +		: : "cc");
>> >
>> > But failure of stxr does not necessarily mean another write. It can be
>> > an interrupt, cache line migration etc. The exclusive monitor can be
>> > emulated in many ways.
>>
>> Right, I was thinking I could simplify things.
>>
>> In that case, I could implement this with just "atomic64_add(0,
>> virt_addr)", but is there any guarantee that atomic64_t has a size of
>> 8 bytes and that I can simply increment an atomic64_t ptr?
>>
>> > BTW, can you not use 64-bit loads/stores?
>>
>> Correct, that should be a long instead of int.
>
> Are we guaranteed that va is a 64-bit aligned pointer? Also, usual comment
> about the "cc" clobber.
I'll have to check whether the EDAC framework provides any guarantee, but
we can force the alignment by widening the scrubbing range if needed.
Normally, DDR ECC would always be done on 64-bit units for 72-bit
DIMMs. I can't imagine you would ever have 32-bit wide DDR with ECC.
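Something along these lines (untested; ALIGN() is the usual helper from
linux/kernel.h) if we do need to widen:

static inline void atomic_scrub(void *va, u32 size)
{
	/* widen the range so we only ever touch whole, aligned 64-bit words */
	unsigned long offset = (unsigned long)va & 0x7;
	unsigned long *virt_addr = va - offset;
	unsigned long temp;
	unsigned int temp2, i;
	u32 len = ALIGN(size + offset, 8);

	for (i = 0; i < len / sizeof(*virt_addr); i++, virt_addr++)
		asm volatile("\n"
		"	ldxr	%0, %2\n"
		"	stxr	%w1, %0, %2\n"
		: "=&r" (temp), "=&r" (temp2), "+Q" (*virt_addr));
}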
Rob