* [PATCH v12 1/6] uaccess: add generic fallback version of copy_mc_to_user()
2024-05-28 8:59 [PATCH v12 0/6]arm64: add ARCH_HAS_COPY_MC support Tong Tiangen
@ 2024-05-28 8:59 ` Tong Tiangen
2024-07-11 13:53 ` Mauro Carvalho Chehab
2024-08-19 9:57 ` Jonathan Cameron
2024-05-28 8:59 ` [PATCH v12 2/6] arm64: add support for ARCH_HAS_COPY_MC Tong Tiangen
` (4 subsequent siblings)
5 siblings, 2 replies; 24+ messages in thread
From: Tong Tiangen @ 2024-05-28 8:59 UTC (permalink / raw)
To: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton,
James Morse, Robin Murphy, Andrey Konovalov, Dmitry Vyukov,
Vincenzo Frascino, Michael Ellerman, Nicholas Piggin,
Andrey Ryabinin, Alexander Potapenko, Christophe Leroy,
Aneesh Kumar K.V, Naveen N. Rao, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin
Cc: linux-arm-kernel, linux-mm, linuxppc-dev, linux-kernel,
Tong Tiangen, wangkefeng.wang, Guohanjun
x86/powerpc already have their own implementations of copy_mc_to_user(). Add a
generic fallback in include/linux/uaccess.h in preparation for other
architectures to enable CONFIG_ARCH_HAS_COPY_MC.
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
---
arch/powerpc/include/asm/uaccess.h | 1 +
arch/x86/include/asm/uaccess.h | 1 +
include/linux/uaccess.h | 8 ++++++++
3 files changed, 10 insertions(+)
diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index de10437fd206..df42e6ad647f 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -381,6 +381,7 @@ copy_mc_to_user(void __user *to, const void *from, unsigned long n)
return n;
}
+#define copy_mc_to_user copy_mc_to_user
#endif
extern long __copy_from_user_flushcache(void *dst, const void __user *src,
diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index 0f9bab92a43d..309f2439327e 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -497,6 +497,7 @@ copy_mc_to_kernel(void *to, const void *from, unsigned len);
unsigned long __must_check
copy_mc_to_user(void __user *to, const void *from, unsigned len);
+#define copy_mc_to_user copy_mc_to_user
#endif
/*
diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index 3064314f4832..0dfa9241b6ee 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -205,6 +205,14 @@ copy_mc_to_kernel(void *dst, const void *src, size_t cnt)
}
#endif
+#ifndef copy_mc_to_user
+static inline unsigned long __must_check
+copy_mc_to_user(void *dst, const void *src, size_t cnt)
+{
+ return copy_to_user(dst, src, cnt);
+}
+#endif
+
static __always_inline void pagefault_disabled_inc(void)
{
current->pagefault_disabled++;
--
2.25.1
^ permalink raw reply related [flat|nested] 24+ messages in thread
* Re: [PATCH v12 1/6] uaccess: add generic fallback version of copy_mc_to_user()
2024-05-28 8:59 ` [PATCH v12 1/6] uaccess: add generic fallback version of copy_mc_to_user() Tong Tiangen
@ 2024-07-11 13:53 ` Mauro Carvalho Chehab
2024-07-12 5:52 ` Mauro Carvalho Chehab
2024-08-19 9:57 ` Jonathan Cameron
1 sibling, 1 reply; 24+ messages in thread
From: Mauro Carvalho Chehab @ 2024-07-11 13:53 UTC (permalink / raw)
To: Tong Tiangen
Cc: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton,
James Morse, Robin Murphy, Andrey Konovalov, Dmitry Vyukov,
Vincenzo Frascino, Michael Ellerman, Nicholas Piggin,
Andrey Ryabinin, Alexander Potapenko, Christophe Leroy,
Aneesh Kumar K.V, Naveen N. Rao, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
linux-arm-kernel, linux-mm, linuxppc-dev, linux-kernel,
wangkefeng.wang, Guohanjun
On Tue, 28 May 2024 16:59:10 +0800
Tong Tiangen <tongtiangen@huawei.com> wrote:
> x86/powerpc has it's implementation of copy_mc_to_user(), we add generic
> fallback in include/linux/uaccess.h prepare for other architechures to
> enable CONFIG_ARCH_HAS_COPY_MC.
>
> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
> Acked-by: Michael Ellerman <mpe@ellerman.id.au>
> ---
> arch/powerpc/include/asm/uaccess.h | 1 +
> arch/x86/include/asm/uaccess.h | 1 +
> include/linux/uaccess.h | 8 ++++++++
> 3 files changed, 10 insertions(+)
>
> diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
> index de10437fd206..df42e6ad647f 100644
> --- a/arch/powerpc/include/asm/uaccess.h
> +++ b/arch/powerpc/include/asm/uaccess.h
> @@ -381,6 +381,7 @@ copy_mc_to_user(void __user *to, const void *from, unsigned long n)
>
> return n;
> }
> +#define copy_mc_to_user copy_mc_to_user
Such a define looks weird to my eyes. What do you want to do here?
> #endif
>
> extern long __copy_from_user_flushcache(void *dst, const void __user *src,
> diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
> index 0f9bab92a43d..309f2439327e 100644
> --- a/arch/x86/include/asm/uaccess.h
> +++ b/arch/x86/include/asm/uaccess.h
> @@ -497,6 +497,7 @@ copy_mc_to_kernel(void *to, const void *from, unsigned len);
>
> unsigned long __must_check
> copy_mc_to_user(void __user *to, const void *from, unsigned len);
> +#define copy_mc_to_user copy_mc_to_user
Same here.
> #endif
>
> /*
> diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
> index 3064314f4832..0dfa9241b6ee 100644
> --- a/include/linux/uaccess.h
> +++ b/include/linux/uaccess.h
> @@ -205,6 +205,14 @@ copy_mc_to_kernel(void *dst, const void *src, size_t cnt)
> }
> #endif
>
> +#ifndef copy_mc_to_user
> +static inline unsigned long __must_check
> +copy_mc_to_user(void *dst, const void *src, size_t cnt)
> +{
> + return copy_to_user(dst, src, cnt);
> +}
> +#endif
> +
> static __always_inline void pagefault_disabled_inc(void)
> {
> current->pagefault_disabled++;
Thanks,
Mauro
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v12 1/6] uaccess: add generic fallback version of copy_mc_to_user()
2024-07-11 13:53 ` Mauro Carvalho Chehab
@ 2024-07-12 5:52 ` Mauro Carvalho Chehab
0 siblings, 0 replies; 24+ messages in thread
From: Mauro Carvalho Chehab @ 2024-07-12 5:52 UTC (permalink / raw)
To: Tong Tiangen
Cc: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton,
James Morse, Robin Murphy, Andrey Konovalov, Dmitry Vyukov,
Vincenzo Frascino, Michael Ellerman, Nicholas Piggin,
Andrey Ryabinin, Alexander Potapenko, Christophe Leroy,
Aneesh Kumar K.V, Naveen N. Rao, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
linux-arm-kernel, linux-mm, linuxppc-dev, linux-kernel,
wangkefeng.wang, Guohanjun
On Thu, 11 Jul 2024 15:53:43 +0200
Mauro Carvalho Chehab <mchehab+huawei@kernel.org> wrote:
> On Tue, 28 May 2024 16:59:10 +0800
> Tong Tiangen <tongtiangen@huawei.com> wrote:
>
> > x86/powerpc has it's implementation of copy_mc_to_user(), we add generic
> > fallback in include/linux/uaccess.h prepare for other architechures to
> > enable CONFIG_ARCH_HAS_COPY_MC.
> >
> > Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
> > Acked-by: Michael Ellerman <mpe@ellerman.id.au>
> > ---
> > arch/powerpc/include/asm/uaccess.h | 1 +
> > arch/x86/include/asm/uaccess.h | 1 +
> > include/linux/uaccess.h | 8 ++++++++
> > 3 files changed, 10 insertions(+)
> >
> > diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
> > index de10437fd206..df42e6ad647f 100644
> > --- a/arch/powerpc/include/asm/uaccess.h
> > +++ b/arch/powerpc/include/asm/uaccess.h
> > @@ -381,6 +381,7 @@ copy_mc_to_user(void __user *to, const void *from, unsigned long n)
> >
> > return n;
> > }
> > +#define copy_mc_to_user copy_mc_to_user
>
> Such define looks weird on my eyes. What to do you want to do here?
Ok, other places in uaccess.h have the same pattern. After
sleeping on it, the rationale is now clear to me:
you're using an inline function to enforce type checking, as a plain
macro wouldn't do any cast checks.
The define then lets the gcc preprocessor #if/#ifdef logic check
whether the symbol was defined or not, which makes sense as not all
architectures have it.
Clever.
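For illustration, here is a minimal sketch of how the two pieces fit together
(not the actual kernel code; "foo" is a made-up architecture):

/* arch/foo/include/asm/uaccess.h: real implementation plus marker define */
static inline unsigned long __must_check
copy_mc_to_user(void __user *to, const void *from, unsigned long n)
{
        return n;       /* placeholder for the arch-specific, memory-error-aware copy */
}
#define copy_mc_to_user copy_mc_to_user

/* include/linux/uaccess.h: generic fallback, used only if no arch version was seen */
#ifndef copy_mc_to_user
static inline unsigned long __must_check
copy_mc_to_user(void __user *dst, const void *src, size_t cnt)
{
        return copy_to_user(dst, src, cnt);
}
#endif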
Patch LGTM.
Reviewed-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
>
> > #endif
> >
> > extern long __copy_from_user_flushcache(void *dst, const void __user *src,
> > diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
> > index 0f9bab92a43d..309f2439327e 100644
> > --- a/arch/x86/include/asm/uaccess.h
> > +++ b/arch/x86/include/asm/uaccess.h
> > @@ -497,6 +497,7 @@ copy_mc_to_kernel(void *to, const void *from, unsigned len);
> >
> > unsigned long __must_check
> > copy_mc_to_user(void __user *to, const void *from, unsigned len);
> > +#define copy_mc_to_user copy_mc_to_user
>
> Same here.
>
> > #endif
> >
> > /*
> > diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
> > index 3064314f4832..0dfa9241b6ee 100644
> > --- a/include/linux/uaccess.h
> > +++ b/include/linux/uaccess.h
> > @@ -205,6 +205,14 @@ copy_mc_to_kernel(void *dst, const void *src, size_t cnt)
> > }
> > #endif
> >
> > +#ifndef copy_mc_to_user
> > +static inline unsigned long __must_check
> > +copy_mc_to_user(void *dst, const void *src, size_t cnt)
> > +{
> > + return copy_to_user(dst, src, cnt);
> > +}
> > +#endif
> > +
> > static __always_inline void pagefault_disabled_inc(void)
> > {
> > current->pagefault_disabled++;
>
>
>
> Thanks,
> Mauro
Thanks,
Mauro
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v12 1/6] uaccess: add generic fallback version of copy_mc_to_user()
2024-05-28 8:59 ` [PATCH v12 1/6] uaccess: add generic fallback version of copy_mc_to_user() Tong Tiangen
2024-07-11 13:53 ` Mauro Carvalho Chehab
@ 2024-08-19 9:57 ` Jonathan Cameron
2024-08-19 13:11 ` Tong Tiangen
1 sibling, 1 reply; 24+ messages in thread
From: Jonathan Cameron @ 2024-08-19 9:57 UTC (permalink / raw)
To: Tong Tiangen
Cc: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton,
James Morse, Robin Murphy, Andrey Konovalov, Dmitry Vyukov,
Vincenzo Frascino, Michael Ellerman, Nicholas Piggin,
Andrey Ryabinin, Alexander Potapenko, Christophe Leroy,
Aneesh Kumar K.V, Naveen N. Rao, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
linux-arm-kernel, linux-mm, linuxppc-dev, linux-kernel,
wangkefeng.wang, Guohanjun
On Tue, 28 May 2024 16:59:10 +0800
Tong Tiangen <tongtiangen@huawei.com> wrote:
> x86/powerpc has it's implementation of copy_mc_to_user(), we add generic
> fallback in include/linux/uaccess.h prepare for other architechures to
> enable CONFIG_ARCH_HAS_COPY_MC.
>
> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
> Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Seems like a sensible approach to me given existing fallbacks in x86
if the relevant features are disabled.
It may be worth exploring at some point whether some of the special casing
in the callers of this function can also be removed now that there
is a default version. There are some small differences, but I've
not analyzed whether they matter or not.
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> ---
> arch/powerpc/include/asm/uaccess.h | 1 +
> arch/x86/include/asm/uaccess.h | 1 +
> include/linux/uaccess.h | 8 ++++++++
> 3 files changed, 10 insertions(+)
>
> diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
> index de10437fd206..df42e6ad647f 100644
> --- a/arch/powerpc/include/asm/uaccess.h
> +++ b/arch/powerpc/include/asm/uaccess.h
> @@ -381,6 +381,7 @@ copy_mc_to_user(void __user *to, const void *from, unsigned long n)
>
> return n;
> }
> +#define copy_mc_to_user copy_mc_to_user
> #endif
>
> extern long __copy_from_user_flushcache(void *dst, const void __user *src,
> diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
> index 0f9bab92a43d..309f2439327e 100644
> --- a/arch/x86/include/asm/uaccess.h
> +++ b/arch/x86/include/asm/uaccess.h
> @@ -497,6 +497,7 @@ copy_mc_to_kernel(void *to, const void *from, unsigned len);
>
> unsigned long __must_check
> copy_mc_to_user(void __user *to, const void *from, unsigned len);
> +#define copy_mc_to_user copy_mc_to_user
> #endif
>
> /*
> diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
> index 3064314f4832..0dfa9241b6ee 100644
> --- a/include/linux/uaccess.h
> +++ b/include/linux/uaccess.h
> @@ -205,6 +205,14 @@ copy_mc_to_kernel(void *dst, const void *src, size_t cnt)
> }
> #endif
>
> +#ifndef copy_mc_to_user
> +static inline unsigned long __must_check
> +copy_mc_to_user(void *dst, const void *src, size_t cnt)
> +{
> + return copy_to_user(dst, src, cnt);
> +}
> +#endif
> +
> static __always_inline void pagefault_disabled_inc(void)
> {
> current->pagefault_disabled++;
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v12 1/6] uaccess: add generic fallback version of copy_mc_to_user()
2024-08-19 9:57 ` Jonathan Cameron
@ 2024-08-19 13:11 ` Tong Tiangen
0 siblings, 0 replies; 24+ messages in thread
From: Tong Tiangen @ 2024-08-19 13:11 UTC (permalink / raw)
To: Jonathan Cameron
Cc: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton,
James Morse, Robin Murphy, Andrey Konovalov, Dmitry Vyukov,
Vincenzo Frascino, Michael Ellerman, Nicholas Piggin,
Andrey Ryabinin, Alexander Potapenko, Christophe Leroy,
Aneesh Kumar K.V, Naveen N. Rao, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
linux-arm-kernel, linux-mm, linuxppc-dev, linux-kernel,
wangkefeng.wang, Guohanjun
On 2024/8/19 17:57, Jonathan Cameron wrote:
> On Tue, 28 May 2024 16:59:10 +0800
> Tong Tiangen <tongtiangen@huawei.com> wrote:
>
>> x86/powerpc has it's implementation of copy_mc_to_user(), we add generic
>> fallback in include/linux/uaccess.h prepare for other architechures to
>> enable CONFIG_ARCH_HAS_COPY_MC.
>>
>> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
>> Acked-by: Michael Ellerman <mpe@ellerman.id.au>
> Seems like a sensible approach to me given existing fallbacks in x86
> if the relevant features are disabled.
>
> It may be worth exploring at some point if some of the special casing
> in the callers of this function can also be remove now there
> is a default version. There are some small differences but I've
> not analyzed if they matter or not.
>
> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
copy_mc_to_user() and copy_to_user() use the same copy logic. The main
difference is that the MC version can handle hardware memory errors.
The MC implementation is architecture specific, so when an architecture
does not implement it, falling back to the non-MC version is logically
correct.
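As an illustration only (a hypothetical caller, not part of this series), a
user of the interface does not need to care which version it ends up with:

static int copy_region_to_user(void __user *ubuf, const void *kbuf, size_t len)
{
        unsigned long left = copy_mc_to_user(ubuf, kbuf, len);

        /* On x86/powerpc a consumed memory error also shows up as a short copy. */
        return left ? -EFAULT : 0;
}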
Thanks :)
>
>> ---
>> arch/powerpc/include/asm/uaccess.h | 1 +
>> arch/x86/include/asm/uaccess.h | 1 +
>> include/linux/uaccess.h | 8 ++++++++
>> 3 files changed, 10 insertions(+)
>>
>> diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
>> index de10437fd206..df42e6ad647f 100644
>> --- a/arch/powerpc/include/asm/uaccess.h
>> +++ b/arch/powerpc/include/asm/uaccess.h
>> @@ -381,6 +381,7 @@ copy_mc_to_user(void __user *to, const void *from, unsigned long n)
>>
>> return n;
>> }
>> +#define copy_mc_to_user copy_mc_to_user
>> #endif
>>
>> extern long __copy_from_user_flushcache(void *dst, const void __user *src,
>> diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
>> index 0f9bab92a43d..309f2439327e 100644
>> --- a/arch/x86/include/asm/uaccess.h
>> +++ b/arch/x86/include/asm/uaccess.h
>> @@ -497,6 +497,7 @@ copy_mc_to_kernel(void *to, const void *from, unsigned len);
>>
>> unsigned long __must_check
>> copy_mc_to_user(void __user *to, const void *from, unsigned len);
>> +#define copy_mc_to_user copy_mc_to_user
>> #endif
>>
>> /*
>> diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
>> index 3064314f4832..0dfa9241b6ee 100644
>> --- a/include/linux/uaccess.h
>> +++ b/include/linux/uaccess.h
>> @@ -205,6 +205,14 @@ copy_mc_to_kernel(void *dst, const void *src, size_t cnt)
>> }
>> #endif
>>
>> +#ifndef copy_mc_to_user
>> +static inline unsigned long __must_check
>> +copy_mc_to_user(void *dst, const void *src, size_t cnt)
>> +{
>> + return copy_to_user(dst, src, cnt);
>> +}
>> +#endif
>> +
>> static __always_inline void pagefault_disabled_inc(void)
>> {
>> current->pagefault_disabled++;
>
> .
^ permalink raw reply [flat|nested] 24+ messages in thread
* [PATCH v12 2/6] arm64: add support for ARCH_HAS_COPY_MC
2024-05-28 8:59 [PATCH v12 0/6]arm64: add ARCH_HAS_COPY_MC support Tong Tiangen
2024-05-28 8:59 ` [PATCH v12 1/6] uaccess: add generic fallback version of copy_mc_to_user() Tong Tiangen
@ 2024-05-28 8:59 ` Tong Tiangen
2024-08-19 10:30 ` Jonathan Cameron
2024-08-19 17:29 ` Mark Rutland
2024-05-28 8:59 ` [PATCH v12 3/6] mm/hwpoison: return -EFAULT when copy fail in copy_mc_[user]_highpage() Tong Tiangen
` (3 subsequent siblings)
5 siblings, 2 replies; 24+ messages in thread
From: Tong Tiangen @ 2024-05-28 8:59 UTC (permalink / raw)
To: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton,
James Morse, Robin Murphy, Andrey Konovalov, Dmitry Vyukov,
Vincenzo Frascino, Michael Ellerman, Nicholas Piggin,
Andrey Ryabinin, Alexander Potapenko, Christophe Leroy,
Aneesh Kumar K.V, Naveen N. Rao, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin
Cc: linux-arm-kernel, linux-mm, linuxppc-dev, linux-kernel,
Tong Tiangen, wangkefeng.wang, Guohanjun
For the arm64 kernel, when it handles hardware memory errors delivered as
synchronous notifications (do_sea()), the current behaviour is to panic if
the error is consumed within the kernel. However, this is not optimal.
Take copy_from/to_user for example: if an ld* instruction triggers a memory
error, even in kernel mode, only the associated process is affected. Killing
the user process and isolating the corrupt page is a better choice.
A new fixup type, EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE, is added to identify
instructions that can recover from memory errors triggered by access to
kernel memory.
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
---
arch/arm64/Kconfig | 1 +
arch/arm64/include/asm/asm-extable.h | 31 +++++++++++++++++++++++-----
arch/arm64/include/asm/asm-uaccess.h | 4 ++++
arch/arm64/include/asm/extable.h | 1 +
arch/arm64/lib/copy_to_user.S | 10 ++++-----
arch/arm64/mm/extable.c | 19 +++++++++++++++++
arch/arm64/mm/fault.c | 27 +++++++++++++++++-------
7 files changed, 75 insertions(+), 18 deletions(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 5d91259ee7b5..13ca06ddf3dd 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -20,6 +20,7 @@ config ARM64
select ARCH_ENABLE_SPLIT_PMD_PTLOCK if PGTABLE_LEVELS > 2
select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE
select ARCH_HAS_CACHE_LINE_SIZE
+ select ARCH_HAS_COPY_MC if ACPI_APEI_GHES
select ARCH_HAS_CURRENT_STACK_POINTER
select ARCH_HAS_DEBUG_VIRTUAL
select ARCH_HAS_DEBUG_VM_PGTABLE
diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
index 980d1dd8e1a3..9c0664fe1eb1 100644
--- a/arch/arm64/include/asm/asm-extable.h
+++ b/arch/arm64/include/asm/asm-extable.h
@@ -5,11 +5,13 @@
#include <linux/bits.h>
#include <asm/gpr-num.h>
-#define EX_TYPE_NONE 0
-#define EX_TYPE_BPF 1
-#define EX_TYPE_UACCESS_ERR_ZERO 2
-#define EX_TYPE_KACCESS_ERR_ZERO 3
-#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD 4
+#define EX_TYPE_NONE 0
+#define EX_TYPE_BPF 1
+#define EX_TYPE_UACCESS_ERR_ZERO 2
+#define EX_TYPE_KACCESS_ERR_ZERO 3
+#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD 4
+/* kernel access memory error safe */
+#define EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE 5
/* Data fields for EX_TYPE_UACCESS_ERR_ZERO */
#define EX_DATA_REG_ERR_SHIFT 0
@@ -51,6 +53,17 @@
#define _ASM_EXTABLE_UACCESS(insn, fixup) \
_ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, wzr, wzr)
+#define _ASM_EXTABLE_KACCESS_ERR_ZERO_ME_SAFE(insn, fixup, err, zero) \
+ __ASM_EXTABLE_RAW(insn, fixup, \
+ EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE, \
+ ( \
+ EX_DATA_REG(ERR, err) | \
+ EX_DATA_REG(ZERO, zero) \
+ ))
+
+#define _ASM_EXTABLE_KACCESS_ME_SAFE(insn, fixup) \
+ _ASM_EXTABLE_KACCESS_ERR_ZERO_ME_SAFE(insn, fixup, wzr, wzr)
+
/*
* Create an exception table entry for uaccess `insn`, which will branch to `fixup`
* when an unhandled fault is taken.
@@ -69,6 +82,14 @@
.endif
.endm
+/*
+ * Create an exception table entry for kaccess me(memory error) safe `insn`, which
+ * will branch to `fixup` when an unhandled fault is taken.
+ */
+ .macro _asm_extable_kaccess_me_safe, insn, fixup
+ _ASM_EXTABLE_KACCESS_ME_SAFE(\insn, \fixup)
+ .endm
+
#else /* __ASSEMBLY__ */
#include <linux/stringify.h>
diff --git a/arch/arm64/include/asm/asm-uaccess.h b/arch/arm64/include/asm/asm-uaccess.h
index 5b6efe8abeeb..7bbebfa5b710 100644
--- a/arch/arm64/include/asm/asm-uaccess.h
+++ b/arch/arm64/include/asm/asm-uaccess.h
@@ -57,6 +57,10 @@ alternative_else_nop_endif
.endm
#endif
+#define KERNEL_ME_SAFE(l, x...) \
+9999: x; \
+ _asm_extable_kaccess_me_safe 9999b, l
+
#define USER(l, x...) \
9999: x; \
_asm_extable_uaccess 9999b, l
diff --git a/arch/arm64/include/asm/extable.h b/arch/arm64/include/asm/extable.h
index 72b0e71cc3de..bc49443bc502 100644
--- a/arch/arm64/include/asm/extable.h
+++ b/arch/arm64/include/asm/extable.h
@@ -46,4 +46,5 @@ bool ex_handler_bpf(const struct exception_table_entry *ex,
#endif /* !CONFIG_BPF_JIT */
bool fixup_exception(struct pt_regs *regs);
+bool fixup_exception_me(struct pt_regs *regs);
#endif
diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
index 802231772608..2ac716c0d6d8 100644
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -20,7 +20,7 @@
* x0 - bytes not copied
*/
.macro ldrb1 reg, ptr, val
- ldrb \reg, [\ptr], \val
+ KERNEL_ME_SAFE(9998f, ldrb \reg, [\ptr], \val)
.endm
.macro strb1 reg, ptr, val
@@ -28,7 +28,7 @@
.endm
.macro ldrh1 reg, ptr, val
- ldrh \reg, [\ptr], \val
+ KERNEL_ME_SAFE(9998f, ldrh \reg, [\ptr], \val)
.endm
.macro strh1 reg, ptr, val
@@ -36,7 +36,7 @@
.endm
.macro ldr1 reg, ptr, val
- ldr \reg, [\ptr], \val
+ KERNEL_ME_SAFE(9998f, ldr \reg, [\ptr], \val)
.endm
.macro str1 reg, ptr, val
@@ -44,7 +44,7 @@
.endm
.macro ldp1 reg1, reg2, ptr, val
- ldp \reg1, \reg2, [\ptr], \val
+ KERNEL_ME_SAFE(9998f, ldp \reg1, \reg2, [\ptr], \val)
.endm
.macro stp1 reg1, reg2, ptr, val
@@ -64,7 +64,7 @@ SYM_FUNC_START(__arch_copy_to_user)
9997: cmp dst, dstin
b.ne 9998f
// Before being absolutely sure we couldn't copy anything, try harder
- ldrb tmp1w, [srcin]
+KERNEL_ME_SAFE(9998f, ldrb tmp1w, [srcin])
USER(9998f, sttrb tmp1w, [dst])
add dst, dst, #1
9998: sub x0, end, dst // bytes not copied
diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
index 228d681a8715..8c690ae61944 100644
--- a/arch/arm64/mm/extable.c
+++ b/arch/arm64/mm/extable.c
@@ -72,7 +72,26 @@ bool fixup_exception(struct pt_regs *regs)
return ex_handler_uaccess_err_zero(ex, regs);
case EX_TYPE_LOAD_UNALIGNED_ZEROPAD:
return ex_handler_load_unaligned_zeropad(ex, regs);
+ case EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE:
+ return false;
}
BUG();
}
+
+bool fixup_exception_me(struct pt_regs *regs)
+{
+ const struct exception_table_entry *ex;
+
+ ex = search_exception_tables(instruction_pointer(regs));
+ if (!ex)
+ return false;
+
+ switch (ex->type) {
+ case EX_TYPE_UACCESS_ERR_ZERO:
+ case EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE:
+ return ex_handler_uaccess_err_zero(ex, regs);
+ }
+
+ return false;
+}
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 451ba7cbd5ad..2dc65f99d389 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -708,21 +708,32 @@ static int do_bad(unsigned long far, unsigned long esr, struct pt_regs *regs)
return 1; /* "fault" */
}
+/*
+ * APEI claimed this as a firmware-first notification.
+ * Some processing deferred to task_work before ret_to_user().
+ */
+static bool do_apei_claim_sea(struct pt_regs *regs)
+{
+ if (user_mode(regs)) {
+ if (!apei_claim_sea(regs))
+ return true;
+ } else if (IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC)) {
+ if (fixup_exception_me(regs) && !apei_claim_sea(regs))
+ return true;
+ }
+
+ return false;
+}
+
static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
{
const struct fault_info *inf;
unsigned long siaddr;
- inf = esr_to_fault_info(esr);
-
- if (user_mode(regs) && apei_claim_sea(regs) == 0) {
- /*
- * APEI claimed this as a firmware-first notification.
- * Some processing deferred to task_work before ret_to_user().
- */
+ if (do_apei_claim_sea(regs))
return 0;
- }
+ inf = esr_to_fault_info(esr);
if (esr & ESR_ELx_FnV) {
siaddr = 0;
} else {
--
2.25.1
^ permalink raw reply related [flat|nested] 24+ messages in thread
* Re: [PATCH v12 2/6] arm64: add support for ARCH_HAS_COPY_MC
2024-05-28 8:59 ` [PATCH v12 2/6] arm64: add support for ARCH_HAS_COPY_MC Tong Tiangen
@ 2024-08-19 10:30 ` Jonathan Cameron
2024-08-20 2:43 ` Tong Tiangen
2024-08-19 17:29 ` Mark Rutland
1 sibling, 1 reply; 24+ messages in thread
From: Jonathan Cameron @ 2024-08-19 10:30 UTC (permalink / raw)
To: Tong Tiangen
Cc: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton,
James Morse, Robin Murphy, Andrey Konovalov, Dmitry Vyukov,
Vincenzo Frascino, Michael Ellerman, Nicholas Piggin,
Andrey Ryabinin, Alexander Potapenko, Christophe Leroy,
Aneesh Kumar K.V, Naveen N. Rao, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
linux-arm-kernel, linux-mm, linuxppc-dev, linux-kernel,
wangkefeng.wang, Guohanjun
On Tue, 28 May 2024 16:59:11 +0800
Tong Tiangen <tongtiangen@huawei.com> wrote:
> For the arm64 kernel, when it processes hardware memory errors for
> synchronize notifications(do_sea()), if the errors is consumed within the
> kernel, the current processing is panic. However, it is not optimal.
>
> Take copy_from/to_user for example, If ld* triggers a memory error, even in
> kernel mode, only the associated process is affected. Killing the user
> process and isolating the corrupt page is a better choice.
>
> New fixup type EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE is added to identify insn
> that can recover from memory errors triggered by access to kernel memory.
>
> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Hi - this is going slow :(
A few comments inline in the meantime but this really needs ARM maintainers
to take a (hopefully final) look.
Jonathan
> diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
> index 980d1dd8e1a3..9c0664fe1eb1 100644
> --- a/arch/arm64/include/asm/asm-extable.h
> +++ b/arch/arm64/include/asm/asm-extable.h
> @@ -5,11 +5,13 @@
> #include <linux/bits.h>
> #include <asm/gpr-num.h>
>
> -#define EX_TYPE_NONE 0
> -#define EX_TYPE_BPF 1
> -#define EX_TYPE_UACCESS_ERR_ZERO 2
> -#define EX_TYPE_KACCESS_ERR_ZERO 3
> -#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD 4
> +#define EX_TYPE_NONE 0
> +#define EX_TYPE_BPF 1
> +#define EX_TYPE_UACCESS_ERR_ZERO 2
> +#define EX_TYPE_KACCESS_ERR_ZERO 3
> +#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD 4
> +/* kernel access memory error safe */
> +#define EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE 5
Does anyone care enough about the alignment to bother realigning for one
long line? I'd be tempted not to bother, but up to maintainers.
> diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
> index 802231772608..2ac716c0d6d8 100644
> --- a/arch/arm64/lib/copy_to_user.S
> +++ b/arch/arm64/lib/copy_to_user.S
> @@ -20,7 +20,7 @@
> * x0 - bytes not copied
> */
> .macro ldrb1 reg, ptr, val
> - ldrb \reg, [\ptr], \val
> + KERNEL_ME_SAFE(9998f, ldrb \reg, [\ptr], \val)
> .endm
>
> .macro strb1 reg, ptr, val
> @@ -28,7 +28,7 @@
> .endm
>
> .macro ldrh1 reg, ptr, val
> - ldrh \reg, [\ptr], \val
> + KERNEL_ME_SAFE(9998f, ldrh \reg, [\ptr], \val)
> .endm
>
> .macro strh1 reg, ptr, val
> @@ -36,7 +36,7 @@
> .endm
>
> .macro ldr1 reg, ptr, val
> - ldr \reg, [\ptr], \val
> + KERNEL_ME_SAFE(9998f, ldr \reg, [\ptr], \val)
> .endm
>
> .macro str1 reg, ptr, val
> @@ -44,7 +44,7 @@
> .endm
>
> .macro ldp1 reg1, reg2, ptr, val
> - ldp \reg1, \reg2, [\ptr], \val
> + KERNEL_ME_SAFE(9998f, ldp \reg1, \reg2, [\ptr], \val)
> .endm
>
> .macro stp1 reg1, reg2, ptr, val
> @@ -64,7 +64,7 @@ SYM_FUNC_START(__arch_copy_to_user)
> 9997: cmp dst, dstin
> b.ne 9998f
> // Before being absolutely sure we couldn't copy anything, try harder
> - ldrb tmp1w, [srcin]
> +KERNEL_ME_SAFE(9998f, ldrb tmp1w, [srcin])
Alignment looks off?
> USER(9998f, sttrb tmp1w, [dst])
> add dst, dst, #1
> 9998: sub x0, end, dst // bytes not copied
> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index 451ba7cbd5ad..2dc65f99d389 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -708,21 +708,32 @@ static int do_bad(unsigned long far, unsigned long esr, struct pt_regs *regs)
> return 1; /* "fault" */
> }
>
> +/*
> + * APEI claimed this as a firmware-first notification.
> + * Some processing deferred to task_work before ret_to_user().
> + */
> +static bool do_apei_claim_sea(struct pt_regs *regs)
> +{
> + if (user_mode(regs)) {
> + if (!apei_claim_sea(regs))
I'd keep to the (apei_claim_sea(regs) == 0)
used in the original code. That hints to the reader that we are
interested here in an 'error' code rather than apei_claim_sea() returning
a bool. I initially wondered why we return true when the code
fails to claim it.
Also, perhaps if you return 0 for success and an error code if not
you could just make this
        if (user_mode(regs))
                return apei_claim_sea(regs);
        if (IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC)) {
                if (fixup_exception_me(regs)) {
                        return apei_claim_sea(regs);
                }
        }
        return false;
or maybe even (I may have messed this up, but I think this logic
works).
        if (!user_mode(regs) && IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC)) {
                if (!fixup_exception_me(regs))
                        return false;
        }
        return apei_claim_sea(regs);
> + return true;
> + } else if (IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC)) {
> + if (fixup_exception_me(regs) && !apei_claim_sea(regs))
Same here with using apei_claim_sea(regs) == 0 so it's obvious we
are checking for an error, not a boolean.
> + return true;
> + }
> +
> + return false;
> +}
> +
> static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
> {
> const struct fault_info *inf;
> unsigned long siaddr;
>
> - inf = esr_to_fault_info(esr);
> -
> - if (user_mode(regs) && apei_claim_sea(regs) == 0) {
> - /*
> - * APEI claimed this as a firmware-first notification.
> - * Some processing deferred to task_work before ret_to_user().
> - */
> + if (do_apei_claim_sea(regs))
It might have made sense to factor this out first, so it could be reviewed
as a no-op before the new stuff is added. Still, it's not much code, so it
doesn't really matter.
Might be worth keeping to returning 0 for success and an error code
otherwise, as per apei_claim_sea(regs).
The bool-returning functions in the nearby code tend to be is_xxxx(),
not things that succeed or fail.
If you change it to return int, make this
        if (do_apei_claim_sea(regs) == 0)
so it's obvious this is the no-error case.
> return 0;
> - }
>
> + inf = esr_to_fault_info(esr);
> if (esr & ESR_ELx_FnV) {
> siaddr = 0;
> } else {
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v12 2/6] arm64: add support for ARCH_HAS_COPY_MC
2024-08-19 10:30 ` Jonathan Cameron
@ 2024-08-20 2:43 ` Tong Tiangen
0 siblings, 0 replies; 24+ messages in thread
From: Tong Tiangen @ 2024-08-20 2:43 UTC (permalink / raw)
To: Jonathan Cameron
Cc: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton,
James Morse, Robin Murphy, Andrey Konovalov, Dmitry Vyukov,
Vincenzo Frascino, Michael Ellerman, Nicholas Piggin,
Andrey Ryabinin, Alexander Potapenko, Christophe Leroy,
Aneesh Kumar K.V, Naveen N. Rao, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
linux-arm-kernel, linux-mm, linuxppc-dev, linux-kernel,
wangkefeng.wang, Guohanjun
On 2024/8/19 18:30, Jonathan Cameron wrote:
> On Tue, 28 May 2024 16:59:11 +0800
> Tong Tiangen <tongtiangen@huawei.com> wrote:
>
>> For the arm64 kernel, when it processes hardware memory errors for
>> synchronize notifications(do_sea()), if the errors is consumed within the
>> kernel, the current processing is panic. However, it is not optimal.
>>
>> Take copy_from/to_user for example, If ld* triggers a memory error, even in
>> kernel mode, only the associated process is affected. Killing the user
>> process and isolating the corrupt page is a better choice.
>>
>> New fixup type EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE is added to identify insn
>> that can recover from memory errors triggered by access to kernel memory.
>>
>> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
>
> Hi - this is going slow :(
>
> A few comments inline in the meantime but this really needs ARM maintainers
> to take a (hopefully final) look.
>
> Jonathan
>
>
>> diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
>> index 980d1dd8e1a3..9c0664fe1eb1 100644
>> --- a/arch/arm64/include/asm/asm-extable.h
>> +++ b/arch/arm64/include/asm/asm-extable.h
>> @@ -5,11 +5,13 @@
>> #include <linux/bits.h>
>> #include <asm/gpr-num.h>
>>
>> -#define EX_TYPE_NONE 0
>> -#define EX_TYPE_BPF 1
>> -#define EX_TYPE_UACCESS_ERR_ZERO 2
>> -#define EX_TYPE_KACCESS_ERR_ZERO 3
>> -#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD 4
>> +#define EX_TYPE_NONE 0
>> +#define EX_TYPE_BPF 1
>> +#define EX_TYPE_UACCESS_ERR_ZERO 2
>> +#define EX_TYPE_KACCESS_ERR_ZERO 3
>> +#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD 4
>> +/* kernel access memory error safe */
>> +#define EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE 5
>
> Does anyone care enough about the alignment to bother realigning for one
> long line? I'd be tempted not to bother, but up to maintainers.
>
>
>> diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
>> index 802231772608..2ac716c0d6d8 100644
>> --- a/arch/arm64/lib/copy_to_user.S
>> +++ b/arch/arm64/lib/copy_to_user.S
>> @@ -20,7 +20,7 @@
>> * x0 - bytes not copied
>> */
>> .macro ldrb1 reg, ptr, val
>> - ldrb \reg, [\ptr], \val
>> + KERNEL_ME_SAFE(9998f, ldrb \reg, [\ptr], \val)
>> .endm
>>
>> .macro strb1 reg, ptr, val
>> @@ -28,7 +28,7 @@
>> .endm
>>
>> .macro ldrh1 reg, ptr, val
>> - ldrh \reg, [\ptr], \val
>> + KERNEL_ME_SAFE(9998f, ldrh \reg, [\ptr], \val)
>> .endm
>>
>> .macro strh1 reg, ptr, val
>> @@ -36,7 +36,7 @@
>> .endm
>>
>> .macro ldr1 reg, ptr, val
>> - ldr \reg, [\ptr], \val
>> + KERNEL_ME_SAFE(9998f, ldr \reg, [\ptr], \val)
>> .endm
>>
>> .macro str1 reg, ptr, val
>> @@ -44,7 +44,7 @@
>> .endm
>>
>> .macro ldp1 reg1, reg2, ptr, val
>> - ldp \reg1, \reg2, [\ptr], \val
>> + KERNEL_ME_SAFE(9998f, ldp \reg1, \reg2, [\ptr], \val)
>> .endm
>>
>> .macro stp1 reg1, reg2, ptr, val
>> @@ -64,7 +64,7 @@ SYM_FUNC_START(__arch_copy_to_user)
>> 9997: cmp dst, dstin
>> b.ne 9998f
>> // Before being absolutely sure we couldn't copy anything, try harder
>> - ldrb tmp1w, [srcin]
>> +KERNEL_ME_SAFE(9998f, ldrb tmp1w, [srcin])
>
> Alignment looks off?
Hi, Jonathan:
How about we change this in conjunction with Mark's suggestion? :)
>
>> USER(9998f, sttrb tmp1w, [dst])
>> add dst, dst, #1
>> 9998: sub x0, end, dst // bytes not copied
>
>
>
>> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
>> index 451ba7cbd5ad..2dc65f99d389 100644
>> --- a/arch/arm64/mm/fault.c
>> +++ b/arch/arm64/mm/fault.c
>> @@ -708,21 +708,32 @@ static int do_bad(unsigned long far, unsigned long esr, struct pt_regs *regs)
>> return 1; /* "fault" */
>> }
>>
>> +/*
>> + * APEI claimed this as a firmware-first notification.
>> + * Some processing deferred to task_work before ret_to_user().
>> + */
>> +static bool do_apei_claim_sea(struct pt_regs *regs)
>> +{
>> + if (user_mode(regs)) {
>> + if (!apei_claim_sea(regs))
>
> I'd keep to the the (apei_claim_sea(regs) == 0)
> used in the original code. That hints to the reader that we are
> interested here in an 'error' code rather than apei_claim_sea() returning
> a bool. I initially wondered why we return true when the code
> fails to claim it.
>
> Also, perhaps if you return 0 for success and an error code if not
> you could just make this
>
> if (user_mode(regs))
> return apei_claim_sea(regs);
>
> if (IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC)) {
> if (fixup_exception_me(regs)) {
> return apei_claim_sea(regs);
> }
> }
>
> return false;
>
> or maybe even (I may have messed this up, but I think this logic
> works).
>
> if (!user_mode(regs) && IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC)) {
> if (!fixup_exception_me(regs))
> return false;
> }
> return apei_claim_sea(regs);
>
>
>> + return true;
>> + } else if (IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC)) {
>> + if (fixup_exception_me(regs) && !apei_claim_sea(regs))
>
> Same here with using apei_claim_sea(regs) == 0 so it's obvious we
> are checking for an error, not a boolean.
>
>> + return true;
>> + }
>> +
>> + return false;
>> +}
>> +
>> static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
>> {
>> const struct fault_info *inf;
>> unsigned long siaddr;
>>
>> - inf = esr_to_fault_info(esr);
>> -
>> - if (user_mode(regs) && apei_claim_sea(regs) == 0) {
>> - /*
>> - * APEI claimed this as a firmware-first notification.
>> - * Some processing deferred to task_work before ret_to_user().
>> - */
>> + if (do_apei_claim_sea(regs))
>
> It might be made sense to factor this out first, then could be reviewed
> as a noop before the new stuff is added. Still it's not much code, so doesn't
> really matter.
> Might be worth keeping to returning 0 for success, error code
> otherwise as per apei_claim_sea(regs)
>
> The bool returning functions in the nearby code tend to be is_xxxx
> not things that succeed or not.
>
> If you change it to return int make this
> if (do_apei_claim_sea(regs) == 0)
> so it's obvious this is the no error case.
>
My fault, treating the return value of apei_claim_sea() as bool has
caused some confusion. Perhaps using "== 0" can reduce that confusion.
Here's the change:
static int do_apei_claim_sea(struct pt_regs *regs)
{
        if (!user_mode(regs) && IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC)) {
                if (!fixup_exception_me(regs))
                        return -ENOENT;
        }
        return apei_claim_sea(regs);
}
static int do_sea(...)
{
        [...]
        if (do_apei_claim_sea(regs) == 0)
                return 0;
        [...]
}
I'll modify it later with the comments of mark.
Thanks,
Tong.
>> return 0;
>> - }
>>
>> + inf = esr_to_fault_info(esr);
>> if (esr & ESR_ELx_FnV) {
>> siaddr = 0;
>> } else {
>
> .
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v12 2/6] arm64: add support for ARCH_HAS_COPY_MC
2024-05-28 8:59 ` [PATCH v12 2/6] arm64: add support for ARCH_HAS_COPY_MC Tong Tiangen
2024-08-19 10:30 ` Jonathan Cameron
@ 2024-08-19 17:29 ` Mark Rutland
2024-08-20 2:11 ` Tong Tiangen
1 sibling, 1 reply; 24+ messages in thread
From: Mark Rutland @ 2024-08-19 17:29 UTC (permalink / raw)
To: Tong Tiangen
Cc: Catalin Marinas, Will Deacon, Andrew Morton, James Morse,
Robin Murphy, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino,
Michael Ellerman, Nicholas Piggin, Andrey Ryabinin,
Alexander Potapenko, Christophe Leroy, Aneesh Kumar K.V,
Naveen N. Rao, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, x86, H. Peter Anvin, linux-arm-kernel, linux-mm,
linuxppc-dev, linux-kernel, wangkefeng.wang, Guohanjun
Hi Tong,
On Tue, May 28, 2024 at 04:59:11PM +0800, Tong Tiangen wrote:
> For the arm64 kernel, when it processes hardware memory errors for
> synchronize notifications(do_sea()), if the errors is consumed within the
> kernel, the current processing is panic. However, it is not optimal.
>
> Take copy_from/to_user for example, If ld* triggers a memory error, even in
> kernel mode, only the associated process is affected. Killing the user
> process and isolating the corrupt page is a better choice.
>
> New fixup type EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE is added to identify insn
> that can recover from memory errors triggered by access to kernel memory.
>
> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Generally this looks ok, but I have a couple of comments below.
> ---
> arch/arm64/Kconfig | 1 +
> arch/arm64/include/asm/asm-extable.h | 31 +++++++++++++++++++++++-----
> arch/arm64/include/asm/asm-uaccess.h | 4 ++++
> arch/arm64/include/asm/extable.h | 1 +
> arch/arm64/lib/copy_to_user.S | 10 ++++-----
> arch/arm64/mm/extable.c | 19 +++++++++++++++++
> arch/arm64/mm/fault.c | 27 +++++++++++++++++-------
> 7 files changed, 75 insertions(+), 18 deletions(-)
>
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 5d91259ee7b5..13ca06ddf3dd 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -20,6 +20,7 @@ config ARM64
> select ARCH_ENABLE_SPLIT_PMD_PTLOCK if PGTABLE_LEVELS > 2
> select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE
> select ARCH_HAS_CACHE_LINE_SIZE
> + select ARCH_HAS_COPY_MC if ACPI_APEI_GHES
> select ARCH_HAS_CURRENT_STACK_POINTER
> select ARCH_HAS_DEBUG_VIRTUAL
> select ARCH_HAS_DEBUG_VM_PGTABLE
> diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
> index 980d1dd8e1a3..9c0664fe1eb1 100644
> --- a/arch/arm64/include/asm/asm-extable.h
> +++ b/arch/arm64/include/asm/asm-extable.h
> @@ -5,11 +5,13 @@
> #include <linux/bits.h>
> #include <asm/gpr-num.h>
>
> -#define EX_TYPE_NONE 0
> -#define EX_TYPE_BPF 1
> -#define EX_TYPE_UACCESS_ERR_ZERO 2
> -#define EX_TYPE_KACCESS_ERR_ZERO 3
> -#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD 4
> +#define EX_TYPE_NONE 0
> +#define EX_TYPE_BPF 1
> +#define EX_TYPE_UACCESS_ERR_ZERO 2
> +#define EX_TYPE_KACCESS_ERR_ZERO 3
> +#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD 4
> +/* kernel access memory error safe */
> +#define EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE 5
Could we please use 'MEM_ERR', and likewise for the macros below? That's
more obvious than 'ME_SAFE', and we wouldn't need the comment here.
Likewise elsewhere in this patch and the series.
To Jonathan's comment, I do prefer that these numbers stay aligned, so aside
from the naming, the diff above looks good.
>
> /* Data fields for EX_TYPE_UACCESS_ERR_ZERO */
> #define EX_DATA_REG_ERR_SHIFT 0
> @@ -51,6 +53,17 @@
> #define _ASM_EXTABLE_UACCESS(insn, fixup) \
> _ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, wzr, wzr)
>
> +#define _ASM_EXTABLE_KACCESS_ERR_ZERO_ME_SAFE(insn, fixup, err, zero) \
> + __ASM_EXTABLE_RAW(insn, fixup, \
> + EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE, \
> + ( \
> + EX_DATA_REG(ERR, err) | \
> + EX_DATA_REG(ZERO, zero) \
> + ))
> +
> +#define _ASM_EXTABLE_KACCESS_ME_SAFE(insn, fixup) \
> + _ASM_EXTABLE_KACCESS_ERR_ZERO_ME_SAFE(insn, fixup, wzr, wzr)
> +
> /*
> * Create an exception table entry for uaccess `insn`, which will branch to `fixup`
> * when an unhandled fault is taken.
> @@ -69,6 +82,14 @@
> .endif
> .endm
>
> +/*
> + * Create an exception table entry for kaccess me(memory error) safe `insn`, which
> + * will branch to `fixup` when an unhandled fault is taken.
> + */
> + .macro _asm_extable_kaccess_me_safe, insn, fixup
> + _ASM_EXTABLE_KACCESS_ME_SAFE(\insn, \fixup)
> + .endm
> +
With the naming above, I think this can be:
| /*
| * Create an exception table entry for kaccess `insn`, which will branch to
| * `fixup` when a memory error is taken
| */
| .macro _asm_extable_kaccess_mem_err, insn, fixup
| _ASM_EXTABLE_KACCESS_MEM_ERR(\insn, \fixup)
| .endm
> #else /* __ASSEMBLY__ */
>
> #include <linux/stringify.h>
> diff --git a/arch/arm64/include/asm/asm-uaccess.h b/arch/arm64/include/asm/asm-uaccess.h
> index 5b6efe8abeeb..7bbebfa5b710 100644
> --- a/arch/arm64/include/asm/asm-uaccess.h
> +++ b/arch/arm64/include/asm/asm-uaccess.h
> @@ -57,6 +57,10 @@ alternative_else_nop_endif
> .endm
> #endif
>
> +#define KERNEL_ME_SAFE(l, x...) \
> +9999: x; \
> + _asm_extable_kaccess_me_safe 9999b, l
> +
> #define USER(l, x...) \
> 9999: x; \
> _asm_extable_uaccess 9999b, l
> diff --git a/arch/arm64/include/asm/extable.h b/arch/arm64/include/asm/extable.h
> index 72b0e71cc3de..bc49443bc502 100644
> --- a/arch/arm64/include/asm/extable.h
> +++ b/arch/arm64/include/asm/extable.h
> @@ -46,4 +46,5 @@ bool ex_handler_bpf(const struct exception_table_entry *ex,
> #endif /* !CONFIG_BPF_JIT */
>
> bool fixup_exception(struct pt_regs *regs);
> +bool fixup_exception_me(struct pt_regs *regs);
> #endif
> diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
> index 802231772608..2ac716c0d6d8 100644
> --- a/arch/arm64/lib/copy_to_user.S
> +++ b/arch/arm64/lib/copy_to_user.S
> @@ -20,7 +20,7 @@
> * x0 - bytes not copied
> */
> .macro ldrb1 reg, ptr, val
> - ldrb \reg, [\ptr], \val
> + KERNEL_ME_SAFE(9998f, ldrb \reg, [\ptr], \val)
> .endm
>
> .macro strb1 reg, ptr, val
> @@ -28,7 +28,7 @@
> .endm
>
> .macro ldrh1 reg, ptr, val
> - ldrh \reg, [\ptr], \val
> + KERNEL_ME_SAFE(9998f, ldrh \reg, [\ptr], \val)
> .endm
>
> .macro strh1 reg, ptr, val
> @@ -36,7 +36,7 @@
> .endm
>
> .macro ldr1 reg, ptr, val
> - ldr \reg, [\ptr], \val
> + KERNEL_ME_SAFE(9998f, ldr \reg, [\ptr], \val)
> .endm
>
> .macro str1 reg, ptr, val
> @@ -44,7 +44,7 @@
> .endm
>
> .macro ldp1 reg1, reg2, ptr, val
> - ldp \reg1, \reg2, [\ptr], \val
> + KERNEL_ME_SAFE(9998f, ldp \reg1, \reg2, [\ptr], \val)
> .endm
>
> .macro stp1 reg1, reg2, ptr, val
These changes mean that regular copy_to_user() will handle kernel memory
errors, rather than only doing that in copy_mc_to_user(). If that's
intentional, please call that out explicitly in the commit message.
> @@ -64,7 +64,7 @@ SYM_FUNC_START(__arch_copy_to_user)
> 9997: cmp dst, dstin
> b.ne 9998f
> // Before being absolutely sure we couldn't copy anything, try harder
> - ldrb tmp1w, [srcin]
> +KERNEL_ME_SAFE(9998f, ldrb tmp1w, [srcin])
> USER(9998f, sttrb tmp1w, [dst])
> add dst, dst, #1
> 9998: sub x0, end, dst // bytes not copied
Same comment as above.
> diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
> index 228d681a8715..8c690ae61944 100644
> --- a/arch/arm64/mm/extable.c
> +++ b/arch/arm64/mm/extable.c
> @@ -72,7 +72,26 @@ bool fixup_exception(struct pt_regs *regs)
> return ex_handler_uaccess_err_zero(ex, regs);
> case EX_TYPE_LOAD_UNALIGNED_ZEROPAD:
> return ex_handler_load_unaligned_zeropad(ex, regs);
> + case EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE:
> + return false;
> }
>
> BUG();
> }
> +
> +bool fixup_exception_me(struct pt_regs *regs)
> +{
> + const struct exception_table_entry *ex;
> +
> + ex = search_exception_tables(instruction_pointer(regs));
> + if (!ex)
> + return false;
> +
> + switch (ex->type) {
> + case EX_TYPE_UACCESS_ERR_ZERO:
> + case EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE:
> + return ex_handler_uaccess_err_zero(ex, regs);
> + }
> +
> + return false;
> +}
> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index 451ba7cbd5ad..2dc65f99d389 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -708,21 +708,32 @@ static int do_bad(unsigned long far, unsigned long esr, struct pt_regs *regs)
> return 1; /* "fault" */
> }
>
> +/*
> + * APEI claimed this as a firmware-first notification.
> + * Some processing deferred to task_work before ret_to_user().
> + */
> +static bool do_apei_claim_sea(struct pt_regs *regs)
> +{
> + if (user_mode(regs)) {
> + if (!apei_claim_sea(regs))
> + return true;
> + } else if (IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC)) {
> + if (fixup_exception_me(regs) && !apei_claim_sea(regs))
> + return true;
> + }
> +
> + return false;
> +}
Hmm... that'll fix up the exception even if we don't manage to claim
the SEA. I suspect this should probably be:
static bool do_apei_claim_sea(struct pt_regs *regs)
{
        if (apei_claim_sea(regs))
                return false;
        if (user_mode(regs))
                return true;
        if (IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC))
                return !fixup_exception_mem_err(regs);

        return false;
}
... unless we *don't* want to claim the SEA in the case we don't have a
fixup?
Mark.
> +
> static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
> {
> const struct fault_info *inf;
> unsigned long siaddr;
>
> - inf = esr_to_fault_info(esr);
> -
> - if (user_mode(regs) && apei_claim_sea(regs) == 0) {
> - /*
> - * APEI claimed this as a firmware-first notification.
> - * Some processing deferred to task_work before ret_to_user().
> - */
> + if (do_apei_claim_sea(regs))
> return 0;
> - }
>
> + inf = esr_to_fault_info(esr);
> if (esr & ESR_ELx_FnV) {
> siaddr = 0;
> } else {
> --
> 2.25.1
>
>
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v12 2/6] arm64: add support for ARCH_HAS_COPY_MC
2024-08-19 17:29 ` Mark Rutland
@ 2024-08-20 2:11 ` Tong Tiangen
2024-08-20 9:12 ` Mark Rutland
0 siblings, 1 reply; 24+ messages in thread
From: Tong Tiangen @ 2024-08-20 2:11 UTC (permalink / raw)
To: Mark Rutland
Cc: Catalin Marinas, Will Deacon, Andrew Morton, James Morse,
Robin Murphy, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino,
Michael Ellerman, Nicholas Piggin, Andrey Ryabinin,
Alexander Potapenko, Christophe Leroy, Aneesh Kumar K.V,
Naveen N. Rao, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, x86, H. Peter Anvin, linux-arm-kernel, linux-mm,
linuxppc-dev, linux-kernel, wangkefeng.wang, Guohanjun
On 2024/8/20 1:29, Mark Rutland wrote:
> Hi Tong,
>
> On Tue, May 28, 2024 at 04:59:11PM +0800, Tong Tiangen wrote:
>> For the arm64 kernel, when it processes hardware memory errors for
>> synchronize notifications(do_sea()), if the errors is consumed within the
>> kernel, the current processing is panic. However, it is not optimal.
>>
>> Take copy_from/to_user for example, If ld* triggers a memory error, even in
>> kernel mode, only the associated process is affected. Killing the user
>> process and isolating the corrupt page is a better choice.
>>
>> New fixup type EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE is added to identify insn
>> that can recover from memory errors triggered by access to kernel memory.
>>
>> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
>
> Generally this looks ok, but I have a couple of comments below.
>
>> ---
>> arch/arm64/Kconfig | 1 +
>> arch/arm64/include/asm/asm-extable.h | 31 +++++++++++++++++++++++-----
>> arch/arm64/include/asm/asm-uaccess.h | 4 ++++
>> arch/arm64/include/asm/extable.h | 1 +
>> arch/arm64/lib/copy_to_user.S | 10 ++++-----
>> arch/arm64/mm/extable.c | 19 +++++++++++++++++
>> arch/arm64/mm/fault.c | 27 +++++++++++++++++-------
>> 7 files changed, 75 insertions(+), 18 deletions(-)
>>
>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>> index 5d91259ee7b5..13ca06ddf3dd 100644
>> --- a/arch/arm64/Kconfig
>> +++ b/arch/arm64/Kconfig
>> @@ -20,6 +20,7 @@ config ARM64
>> select ARCH_ENABLE_SPLIT_PMD_PTLOCK if PGTABLE_LEVELS > 2
>> select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE
>> select ARCH_HAS_CACHE_LINE_SIZE
>> + select ARCH_HAS_COPY_MC if ACPI_APEI_GHES
>> select ARCH_HAS_CURRENT_STACK_POINTER
>> select ARCH_HAS_DEBUG_VIRTUAL
>> select ARCH_HAS_DEBUG_VM_PGTABLE
>> diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
>> index 980d1dd8e1a3..9c0664fe1eb1 100644
>> --- a/arch/arm64/include/asm/asm-extable.h
>> +++ b/arch/arm64/include/asm/asm-extable.h
>> @@ -5,11 +5,13 @@
>> #include <linux/bits.h>
>> #include <asm/gpr-num.h>
>>
>> -#define EX_TYPE_NONE 0
>> -#define EX_TYPE_BPF 1
>> -#define EX_TYPE_UACCESS_ERR_ZERO 2
>> -#define EX_TYPE_KACCESS_ERR_ZERO 3
>> -#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD 4
>> +#define EX_TYPE_NONE 0
>> +#define EX_TYPE_BPF 1
>> +#define EX_TYPE_UACCESS_ERR_ZERO 2
>> +#define EX_TYPE_KACCESS_ERR_ZERO 3
>> +#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD 4
>> +/* kernel access memory error safe */
>> +#define EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE 5
>
> Could we please use 'MEM_ERR', and likewise for the macros below? That's
> more obvious than 'ME_SAFE', and we wouldn't need the comment here.
> Likewise elsewhere in this patch and the series.
>
> To Jonathan's comment, I do prefer these numbers are aligned, so aside
> from the naming, the diff above looks good.
OK, I also modified other locations to use 'MEM_ERR'.
>
>>
>> /* Data fields for EX_TYPE_UACCESS_ERR_ZERO */
>> #define EX_DATA_REG_ERR_SHIFT 0
>> @@ -51,6 +53,17 @@
>> #define _ASM_EXTABLE_UACCESS(insn, fixup) \
>> _ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, wzr, wzr)
>>
>> +#define _ASM_EXTABLE_KACCESS_ERR_ZERO_ME_SAFE(insn, fixup, err, zero) \
>> + __ASM_EXTABLE_RAW(insn, fixup, \
>> + EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE, \
>> + ( \
>> + EX_DATA_REG(ERR, err) | \
>> + EX_DATA_REG(ZERO, zero) \
>> + ))
>> +
>> +#define _ASM_EXTABLE_KACCESS_ME_SAFE(insn, fixup) \
>> + _ASM_EXTABLE_KACCESS_ERR_ZERO_ME_SAFE(insn, fixup, wzr, wzr)
>> +
>> /*
>> * Create an exception table entry for uaccess `insn`, which will branch to `fixup`
>> * when an unhandled fault is taken.
>> @@ -69,6 +82,14 @@
>> .endif
>> .endm
>>
>> +/*
>> + * Create an exception table entry for kaccess me(memory error) safe `insn`, which
>> + * will branch to `fixup` when an unhandled fault is taken.
>> + */
>> + .macro _asm_extable_kaccess_me_safe, insn, fixup
>> + _ASM_EXTABLE_KACCESS_ME_SAFE(\insn, \fixup)
>> + .endm
>> +
>
> With the naming above, I think this can be:
>
> | /*
> | * Create an exception table entry for kaccess `insn`, which will branch to
> | * `fixup` when a memory error is taken
> | */
> | .macro _asm_extable_kaccess_mem_err, insn, fixup
> | _ASM_EXTABLE_KACCESS_MEM_ERR(\insn, \fixup)
> | .endm
>
OK, will be fixed next version.
>> #else /* __ASSEMBLY__ */
>>
>> #include <linux/stringify.h>
>> diff --git a/arch/arm64/include/asm/asm-uaccess.h b/arch/arm64/include/asm/asm-uaccess.h
>> index 5b6efe8abeeb..7bbebfa5b710 100644
>> --- a/arch/arm64/include/asm/asm-uaccess.h
>> +++ b/arch/arm64/include/asm/asm-uaccess.h
>> @@ -57,6 +57,10 @@ alternative_else_nop_endif
>> .endm
>> #endif
>>
>> +#define KERNEL_ME_SAFE(l, x...) \
>> +9999: x; \
>> + _asm_extable_kaccess_me_safe 9999b, l
>> +
>> #define USER(l, x...) \
>> 9999: x; \
>> _asm_extable_uaccess 9999b, l
>> diff --git a/arch/arm64/include/asm/extable.h b/arch/arm64/include/asm/extable.h
>> index 72b0e71cc3de..bc49443bc502 100644
>> --- a/arch/arm64/include/asm/extable.h
>> +++ b/arch/arm64/include/asm/extable.h
>> @@ -46,4 +46,5 @@ bool ex_handler_bpf(const struct exception_table_entry *ex,
>> #endif /* !CONFIG_BPF_JIT */
>>
>> bool fixup_exception(struct pt_regs *regs);
>> +bool fixup_exception_me(struct pt_regs *regs);
>> #endif
>> diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
>> index 802231772608..2ac716c0d6d8 100644
>> --- a/arch/arm64/lib/copy_to_user.S
>> +++ b/arch/arm64/lib/copy_to_user.S
>> @@ -20,7 +20,7 @@
>> * x0 - bytes not copied
>> */
>> .macro ldrb1 reg, ptr, val
>> - ldrb \reg, [\ptr], \val
>> + KERNEL_ME_SAFE(9998f, ldrb \reg, [\ptr], \val)
>> .endm
>>
>> .macro strb1 reg, ptr, val
>> @@ -28,7 +28,7 @@
>> .endm
>>
>> .macro ldrh1 reg, ptr, val
>> - ldrh \reg, [\ptr], \val
>> + KERNEL_ME_SAFE(9998f, ldrh \reg, [\ptr], \val)
>> .endm
>>
>> .macro strh1 reg, ptr, val
>> @@ -36,7 +36,7 @@
>> .endm
>>
>> .macro ldr1 reg, ptr, val
>> - ldr \reg, [\ptr], \val
>> + KERNEL_ME_SAFE(9998f, ldr \reg, [\ptr], \val)
>> .endm
>>
>> .macro str1 reg, ptr, val
>> @@ -44,7 +44,7 @@
>> .endm
>>
>> .macro ldp1 reg1, reg2, ptr, val
>> - ldp \reg1, \reg2, [\ptr], \val
>> + KERNEL_ME_SAFE(9998f, ldp \reg1, \reg2, [\ptr], \val)
>> .endm
>>
>> .macro stp1 reg1, reg2, ptr, val
>
> These changes mean that regular copy_to_user() will handle kernel memory
> errors, rather than only doing that in copy_mc_to_user(). If that's
> intentional, please call that out explicitly in the commit message.
Yes, this is the purpose of the modification. If copy_to_user()
encounters a memory error, the uaccess affects only the current
process, so we only need to kill the current process instead of
panicking the whole kernel. Rather than adding an arm64-specific
copy_mc_to_user(), copy_to_user() itself is made able to handle
memory errors.
I'll add a description in the commit msg next version.
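To illustrate the visible effect (hypothetical snippet, not from this series):

        /*
         * Before: a memory error consumed by the kernel-side load in
         * __arch_copy_to_user() ended in a panic.  After: the load is fixed
         * up, the caller only sees a short copy, and the APEI handling in
         * do_sea() later kills the affected task and isolates the page.
         */
        if (copy_to_user(ubuf, kbuf, len))
                return -EFAULT; /* short copy, e.g. poisoned kernel source page */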
>
>> @@ -64,7 +64,7 @@ SYM_FUNC_START(__arch_copy_to_user)
>> 9997: cmp dst, dstin
>> b.ne 9998f
>> // Before being absolutely sure we couldn't copy anything, try harder
>> - ldrb tmp1w, [srcin]
>> +KERNEL_ME_SAFE(9998f, ldrb tmp1w, [srcin])
>> USER(9998f, sttrb tmp1w, [dst])
>> add dst, dst, #1
>> 9998: sub x0, end, dst // bytes not copied
>
> Same comment as above.
>
>> diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
>> index 228d681a8715..8c690ae61944 100644
>> --- a/arch/arm64/mm/extable.c
>> +++ b/arch/arm64/mm/extable.c
>> @@ -72,7 +72,26 @@ bool fixup_exception(struct pt_regs *regs)
>> return ex_handler_uaccess_err_zero(ex, regs);
>> case EX_TYPE_LOAD_UNALIGNED_ZEROPAD:
>> return ex_handler_load_unaligned_zeropad(ex, regs);
>> + case EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE:
>> + return false;
>> }
>>
>> BUG();
>> }
>> +
>> +bool fixup_exception_me(struct pt_regs *regs)
>> +{
>> + const struct exception_table_entry *ex;
>> +
>> + ex = search_exception_tables(instruction_pointer(regs));
>> + if (!ex)
>> + return false;
>> +
>> + switch (ex->type) {
>> + case EX_TYPE_UACCESS_ERR_ZERO:
>> + case EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE:
>> + return ex_handler_uaccess_err_zero(ex, regs);
>> + }
>> +
>> + return false;
>> +}
>> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
>> index 451ba7cbd5ad..2dc65f99d389 100644
>> --- a/arch/arm64/mm/fault.c
>> +++ b/arch/arm64/mm/fault.c
>> @@ -708,21 +708,32 @@ static int do_bad(unsigned long far, unsigned long esr, struct pt_regs *regs)
>> return 1; /* "fault" */
>> }
>>
>> +/*
>> + * APEI claimed this as a firmware-first notification.
>> + * Some processing deferred to task_work before ret_to_user().
>> + */
>> +static bool do_apei_claim_sea(struct pt_regs *regs)
>> +{
>> + if (user_mode(regs)) {
>> + if (!apei_claim_sea(regs))
>> + return true;
>> + } else if (IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC)) {
>> + if (fixup_exception_me(regs) && !apei_claim_sea(regs))
>> + return true;
>> + }
>> +
>> + return false;
>> +}
>
> Hmm... that'll fix up the exception even if we don't manage to claim
> the SEA. I suspect this should probably be:
>
> static bool do_apei_claim_sea(struct pt_regs *regs)
> {
> if (apei_claim_sea(regs))
> return false;
> if (user_mode(regs))
> return true;
> if (IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC))
> return !fixup_exception_mem_err(regs);
>
> return false;
> }
>
> ... unless we *don't* want to claim the SEA in the case we don't have a
> fixup?
>
> Mark.
>
Yes. My original intent here is that if there is no fixup, do_sea()
panics according to the original logic, and claiming the SEA is not
required.
Thanks,
Tong.
>> +
>> static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
>> {
>> const struct fault_info *inf;
>> unsigned long siaddr;
>>
>> - inf = esr_to_fault_info(esr);
>> -
>> - if (user_mode(regs) && apei_claim_sea(regs) == 0) {
>> - /*
>> - * APEI claimed this as a firmware-first notification.
>> - * Some processing deferred to task_work before ret_to_user().
>> - */
>> + if (do_apei_claim_sea(regs))
>> return 0;
>> - }
>>
>> + inf = esr_to_fault_info(esr);
>> if (esr & ESR_ELx_FnV) {
>> siaddr = 0;
>> } else {
>> --
>> 2.25.1
>>
>>
>
> .
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v12 2/6] arm64: add support for ARCH_HAS_COPY_MC
2024-08-20 2:11 ` Tong Tiangen
@ 2024-08-20 9:12 ` Mark Rutland
2024-08-20 13:26 ` Tong Tiangen
0 siblings, 1 reply; 24+ messages in thread
From: Mark Rutland @ 2024-08-20 9:12 UTC (permalink / raw)
To: Tong Tiangen
Cc: Catalin Marinas, Will Deacon, Andrew Morton, James Morse,
Robin Murphy, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino,
Michael Ellerman, Nicholas Piggin, Andrey Ryabinin,
Alexander Potapenko, Christophe Leroy, Aneesh Kumar K.V,
Naveen N. Rao, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, x86, H. Peter Anvin, linux-arm-kernel, linux-mm,
linuxppc-dev, linux-kernel, wangkefeng.wang, Guohanjun
On Tue, Aug 20, 2024 at 10:11:45AM +0800, Tong Tiangen wrote:
> On 2024/8/20 1:29, Mark Rutland wrote:
> > Hi Tong,
> >
> > On Tue, May 28, 2024 at 04:59:11PM +0800, Tong Tiangen wrote:
> > > For the arm64 kernel, when it processes hardware memory errors for
> > > synchronous notifications (do_sea()), if the error is consumed within the
> > > kernel, the current handling is to panic. However, it is not optimal.
> > >
> > > Take copy_from/to_user for example: if ld* triggers a memory error, even in
> > > kernel mode, only the associated process is affected. Killing the user
> > > process and isolating the corrupt page is a better choice.
> > >
> > > New fixup type EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE is added to identify insn
> > > that can recover from memory errors triggered by access to kernel memory.
> > >
> > > Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
[...]
> > > diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
> > > index 980d1dd8e1a3..9c0664fe1eb1 100644
> > > --- a/arch/arm64/include/asm/asm-extable.h
> > > +++ b/arch/arm64/include/asm/asm-extable.h
> > > @@ -5,11 +5,13 @@
> > > #include <linux/bits.h>
> > > #include <asm/gpr-num.h>
> > > -#define EX_TYPE_NONE 0
> > > -#define EX_TYPE_BPF 1
> > > -#define EX_TYPE_UACCESS_ERR_ZERO 2
> > > -#define EX_TYPE_KACCESS_ERR_ZERO 3
> > > -#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD 4
> > > +#define EX_TYPE_NONE 0
> > > +#define EX_TYPE_BPF 1
> > > +#define EX_TYPE_UACCESS_ERR_ZERO 2
> > > +#define EX_TYPE_KACCESS_ERR_ZERO 3
> > > +#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD 4
> > > +/* kernel access memory error safe */
> > > +#define EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE 5
> >
> > Could we please use 'MEM_ERR', and likewise for the macros below? That's
> > more obvious than 'ME_SAFE', and we wouldn't need the comment here.
> > Likewise elsewhere in this patch and the series.
> >
> > To Jonathan's comment, I do prefer these numbers are aligned, so aside
> > from the naming, the diff above looks good.
>
> OK, I also modified other locations to use 'MEM_ERR'.
Thanks!
[...]
> > > diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
> > > index 802231772608..2ac716c0d6d8 100644
> > > --- a/arch/arm64/lib/copy_to_user.S
> > > +++ b/arch/arm64/lib/copy_to_user.S
> > > @@ -20,7 +20,7 @@
> > > * x0 - bytes not copied
> > > */
> > > .macro ldrb1 reg, ptr, val
> > > - ldrb \reg, [\ptr], \val
> > > + KERNEL_ME_SAFE(9998f, ldrb \reg, [\ptr], \val)
> > > .endm
> > > .macro strb1 reg, ptr, val
> > > @@ -28,7 +28,7 @@
> > > .endm
> > > .macro ldrh1 reg, ptr, val
> > > - ldrh \reg, [\ptr], \val
> > > + KERNEL_ME_SAFE(9998f, ldrh \reg, [\ptr], \val)
> > > .endm
> > > .macro strh1 reg, ptr, val
> > > @@ -36,7 +36,7 @@
> > > .endm
> > > .macro ldr1 reg, ptr, val
> > > - ldr \reg, [\ptr], \val
> > > + KERNEL_ME_SAFE(9998f, ldr \reg, [\ptr], \val)
> > > .endm
> > > .macro str1 reg, ptr, val
> > > @@ -44,7 +44,7 @@
> > > .endm
> > > .macro ldp1 reg1, reg2, ptr, val
> > > - ldp \reg1, \reg2, [\ptr], \val
> > > + KERNEL_ME_SAFE(9998f, ldp \reg1, \reg2, [\ptr], \val)
> > > .endm
> > > .macro stp1 reg1, reg2, ptr, val
> >
> > These changes mean that regular copy_to_user() will handle kernel memory
> > errors, rather than only doing that in copy_mc_to_user(). If that's
> > intentional, please call that out explicitly in the commit message.
>
> Yes, this is the purpose of the modification. If copy_to_user()
> encounters a memory error, the uaccess affects only the current
> process, so we only need to kill the current process instead of
> panicking the entire kernel. No separate copy_mc_to_user() is added;
> copy_to_user() itself handles memory errors.
>
> I'll add a description to the commit message in the next version.
Ok; why do powerpc and x86 have separate copy_mc_to_user()
implementations, then?
[...]
> > > +/*
> > > + * APEI claimed this as a firmware-first notification.
> > > + * Some processing deferred to task_work before ret_to_user().
> > > + */
> > > +static bool do_apei_claim_sea(struct pt_regs *regs)
> > > +{
> > > + if (user_mode(regs)) {
> > > + if (!apei_claim_sea(regs))
> > > + return true;
> > > + } else if (IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC)) {
> > > + if (fixup_exception_me(regs) && !apei_claim_sea(regs))
> > > + return true;
> > > + }
> > > +
> > > + return false;
> > > +}
> >
> > Hmm... that'll fix up the exception even if we don't manage to claim
> > the SEA. I suspect this should probably be:
> >
> > static bool do_apei_claim_sea(struct pt_regs *regs)
> > {
> > if (apei_claim_sea(regs))
> > return false;
> > if (user_mode(regs))
> > return true;
> > if (IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC))
> > return !fixup_exception_mem_err(regs);
> >
> > return false;
> > }
> >
> > ... unless we *don't* want to claim the SEA in the case we don't have a
> > fixup?
> >
> > Mark.
> >
>
> Yes. My original intent here is that if there is no fixup, do_sea()
> panics according to the original logic, and claiming the SEA is not
> required.
AFAICT my suggestion doesn't change that; if we don't have a fixup the
proposed do_apei_claim_sea() would return false, and so do_sea() would
carry on to arm64_notify_die(...).
I'm specifically asking if we need to avoid calling apei_claim_sea()
when we don't have a fixup handler, or if calling that would be fine.
One important thing is that if apei_claim_sea() fails to claim the SEA,
we'd like to panic(), and in that case it'd be good to have not applied
the fixup handler, so that the pt_regs::pc shows where the fault was
taken from.
Mark.
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v12 2/6] arm64: add support for ARCH_HAS_COPY_MC
2024-08-20 9:12 ` Mark Rutland
@ 2024-08-20 13:26 ` Tong Tiangen
0 siblings, 0 replies; 24+ messages in thread
From: Tong Tiangen @ 2024-08-20 13:26 UTC (permalink / raw)
To: Mark Rutland, dan.j.williams
Cc: Catalin Marinas, Will Deacon, Andrew Morton, James Morse,
Robin Murphy, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino,
Michael Ellerman, Nicholas Piggin, Andrey Ryabinin,
Alexander Potapenko, Christophe Leroy, Aneesh Kumar K.V,
Naveen N. Rao, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, x86, H. Peter Anvin, linux-arm-kernel, linux-mm,
linuxppc-dev, linux-kernel, wangkefeng.wang, Guohanjun
On 2024/8/20 17:12, Mark Rutland wrote:
> On Tue, Aug 20, 2024 at 10:11:45AM +0800, Tong Tiangen wrote:
>> On 2024/8/20 1:29, Mark Rutland wrote:
>>> Hi Tong,
>>>
>>> On Tue, May 28, 2024 at 04:59:11PM +0800, Tong Tiangen wrote:
>>>> For the arm64 kernel, when it processes hardware memory errors for
>>>> synchronous notifications (do_sea()), if the error is consumed within the
>>>> kernel, the current handling is to panic. However, it is not optimal.
>>>>
>>>> Take copy_from/to_user for example: if ld* triggers a memory error, even in
>>>> kernel mode, only the associated process is affected. Killing the user
>>>> process and isolating the corrupt page is a better choice.
>>>>
>>>> New fixup type EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE is added to identify insn
>>>> that can recover from memory errors triggered by access to kernel memory.
>>>>
>>>> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
>
> [...]
>
>>>> diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
>>>> index 980d1dd8e1a3..9c0664fe1eb1 100644
>>>> --- a/arch/arm64/include/asm/asm-extable.h
>>>> +++ b/arch/arm64/include/asm/asm-extable.h
>>>> @@ -5,11 +5,13 @@
>>>> #include <linux/bits.h>
>>>> #include <asm/gpr-num.h>
>>>> -#define EX_TYPE_NONE 0
>>>> -#define EX_TYPE_BPF 1
>>>> -#define EX_TYPE_UACCESS_ERR_ZERO 2
>>>> -#define EX_TYPE_KACCESS_ERR_ZERO 3
>>>> -#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD 4
>>>> +#define EX_TYPE_NONE 0
>>>> +#define EX_TYPE_BPF 1
>>>> +#define EX_TYPE_UACCESS_ERR_ZERO 2
>>>> +#define EX_TYPE_KACCESS_ERR_ZERO 3
>>>> +#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD 4
>>>> +/* kernel access memory error safe */
>>>> +#define EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE 5
>>>
>>> Could we please use 'MEM_ERR', and likewise for the macros below? That's
>>> more obvious than 'ME_SAFE', and we wouldn't need the comment here.
>>> Likewise elsewhere in this patch and the series.
>>>
>>> To Jonathan's comment, I do prefer these numbers are aligned, so aside
>>> from the naming, the diff above looks good.
>>
>> OK, I also modified other locations to use 'MEM_ERR'.
>
> Thanks!
>
> [...]
>
>>>> diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
>>>> index 802231772608..2ac716c0d6d8 100644
>>>> --- a/arch/arm64/lib/copy_to_user.S
>>>> +++ b/arch/arm64/lib/copy_to_user.S
>>>> @@ -20,7 +20,7 @@
>>>> * x0 - bytes not copied
>>>> */
>>>> .macro ldrb1 reg, ptr, val
>>>> - ldrb \reg, [\ptr], \val
>>>> + KERNEL_ME_SAFE(9998f, ldrb \reg, [\ptr], \val)
>>>> .endm
>>>> .macro strb1 reg, ptr, val
>>>> @@ -28,7 +28,7 @@
>>>> .endm
>>>> .macro ldrh1 reg, ptr, val
>>>> - ldrh \reg, [\ptr], \val
>>>> + KERNEL_ME_SAFE(9998f, ldrh \reg, [\ptr], \val)
>>>> .endm
>>>> .macro strh1 reg, ptr, val
>>>> @@ -36,7 +36,7 @@
>>>> .endm
>>>> .macro ldr1 reg, ptr, val
>>>> - ldr \reg, [\ptr], \val
>>>> + KERNEL_ME_SAFE(9998f, ldr \reg, [\ptr], \val)
>>>> .endm
>>>> .macro str1 reg, ptr, val
>>>> @@ -44,7 +44,7 @@
>>>> .endm
>>>> .macro ldp1 reg1, reg2, ptr, val
>>>> - ldp \reg1, \reg2, [\ptr], \val
>>>> + KERNEL_ME_SAFE(9998f, ldp \reg1, \reg2, [\ptr], \val)
>>>> .endm
>>>> .macro stp1 reg1, reg2, ptr, val
>>>
>>> These changes mean that regular copy_to_user() will handle kernel memory
>>> errors, rather than only doing that in copy_mc_to_user(). If that's
>>> intentional, please call that out explicitly in the commit message.
>>
>> Yes, this is the purpose of the modification. If copy_to_user()
>> encounters a memory error, the uaccess affects only the current
>> process, so we only need to kill the current process instead of
>> panicking the entire kernel. No separate copy_mc_to_user() is added;
>> copy_to_user() itself handles memory errors.
>>
>> I'll add a description to the commit message in the next version.
>
> Ok; why do powerpc and x86 have separate copy_mc_to_user()
> implementations, then?
Taking x86 as an example:
unsigned long __must_check copy_mc_to_user(...)
{
unsigned long ret;
if (copy_mc_fragile_enabled) {
instrument_copy_to_user(dst, src, len);
__uaccess_begin();
ret = copy_mc_fragile((__force void *)dst, src, len);
__uaccess_end();
return ret;
}
if (static_cpu_has(X86_FEATURE_ERMS)) {
instrument_copy_to_user(dst, src, len);
__uaccess_begin();
ret = copy_mc_enhanced_fast_string((__force void *)dst, src, len);
__uaccess_end();
return ret;
}
return copy_user_generic((__force void *)dst, src, len);
}
Checking the source code, I found that "copy_mc_fragile_enabled" and
"X86_FEATURE_ERMS" both depend on x86 hardware features. I cannot
explain the details, but these appear to be tied to the hardware
implementation.
Dan Williams should be able to explain the reason.
Hi Dan:
We need your help :)
Compared to copy_to_user(), copy_mc_to_user() adds memory error
handling. My question is: why is the error handling not implemented
directly in copy_to_user(), and a separate copy_mc_to_user() function
added instead? Is this related to hardware features or performance
considerations?
Thanks,
Tong.
>
> [...]
>
>>>> +/*
>>>> + * APEI claimed this as a firmware-first notification.
>>>> + * Some processing deferred to task_work before ret_to_user().
>>>> + */
>>>> +static bool do_apei_claim_sea(struct pt_regs *regs)
>>>> +{
>>>> + if (user_mode(regs)) {
>>>> + if (!apei_claim_sea(regs))
>>>> + return true;
>>>> + } else if (IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC)) {
>>>> + if (fixup_exception_me(regs) && !apei_claim_sea(regs))
>>>> + return true;
>>>> + }
>>>> +
>>>> + return false;
>>>> +}
>>>
>>> Hmm... that'll fix up the exception even if we don't manage to claim
>>> the SEA. I suspect this should probably be:
>>>
>>> static bool do_apei_claim_sea(struct pt_regs *regs)
>>> {
>>> if (apei_claim_sea(regs))
>>> return false;
>>> if (user_mode(regs))
>>> return true;
>>> if (IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC))
>>> return !fixup_exception_mem_err(regs);
>>>
>>> return false;
>>> }
>>>
>>> ... unless we *don't* want to claim the SEA in the case we don't have a
>>> fixup?
>>>
>>> Mark.
>>>
>>
>> Yes. My original intent here is that if there is no fixup, do_sea()
>> panics according to the original logic, and claiming the SEA is not
>> required.
>
> AFAICT my suggestion doesn't change that; if we don't have a fixup the
> proposed do_apei_claim_sea() would return false, and so do_sea() would
> carry on to arm64_notify_die(...).
>
> I'm specifically asking if we need to avoid calling apei_claim_sea()
> when we don't have a fixup handler, or if calling that would be fine.
>
> One important thing is that if apei_claim_sea() fails to claim the SEA,
> we'd like to panic(), and in that case it'd be good to have not applied
> the fixup handler, so that the pt_regs::pc shows where the fault was
> taken from.
>
> Mark.
I roughly understand what you mean: the prerequisite for the fixup is
that the SEA is claimed successfully. But the fixup here only sets
regs->pc; execution has not branched to the fixup code at this point.
If claiming the SEA fails, we panic() directly without applying the
fixup handler.
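For reference, "applying the fixup" amounts to roughly the following (a
sketch paraphrasing ex_handler_uaccess_err_zero() in
arch/arm64/mm/extable.c from memory; the function name below is invented
for illustration and the helpers may differ slightly from the real code):

	static bool ex_handler_mem_err_sketch(const struct exception_table_entry *ex,
					      struct pt_regs *regs)
	{
		/*
		 * Write -EFAULT/0 into the err/zero registers recorded in
		 * the extable entry, then point pt_regs::pc at the fixup
		 * label. The branch to the fixup code only happens when the
		 * exception returns.
		 */
		pt_regs_write_reg(regs, FIELD_GET(EX_DATA_REG_ERR, ex->data), -EFAULT);
		pt_regs_write_reg(regs, FIELD_GET(EX_DATA_REG_ZERO, ex->data), 0);
		regs->pc = get_ex_fixup(ex);
		return true;
	}

So at the point apei_claim_sea() is called, only the saved registers have
been updated; nothing has jumped to the fixup code yet.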
Thanks,
Tong.
>
> .
^ permalink raw reply [flat|nested] 24+ messages in thread
* [PATCH v12 3/6] mm/hwpoison: return -EFAULT when copy fail in copy_mc_[user]_highpage()
2024-05-28 8:59 [PATCH v12 0/6]arm64: add ARCH_HAS_COPY_MC support Tong Tiangen
2024-05-28 8:59 ` [PATCH v12 1/6] uaccess: add generic fallback version of copy_mc_to_user() Tong Tiangen
2024-05-28 8:59 ` [PATCH v12 2/6] arm64: add support for ARCH_HAS_COPY_MC Tong Tiangen
@ 2024-05-28 8:59 ` Tong Tiangen
2024-08-19 11:43 ` Jonathan Cameron
2024-05-28 8:59 ` [PATCH v12 4/6] arm64: support copy_mc_[user]_highpage() Tong Tiangen
` (2 subsequent siblings)
5 siblings, 1 reply; 24+ messages in thread
From: Tong Tiangen @ 2024-05-28 8:59 UTC (permalink / raw)
To: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton,
James Morse, Robin Murphy, Andrey Konovalov, Dmitry Vyukov,
Vincenzo Frascino, Michael Ellerman, Nicholas Piggin,
Andrey Ryabinin, Alexander Potapenko, Christophe Leroy,
Aneesh Kumar K.V, Naveen N. Rao, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin
Cc: linux-arm-kernel, linux-mm, linuxppc-dev, linux-kernel,
Tong Tiangen, wangkefeng.wang, Guohanjun
If hardware errors are encountered during page copying, returning the bytes
not copied is not meaningful, and the caller cannot do any processing on
the remaining data. Returning -EFAULT is more reasonable, which represents
a hardware error encountered during the copying.
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
---
include/linux/highmem.h | 8 ++++----
mm/khugepaged.c | 4 ++--
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 00341b56d291..64a567d5ad6f 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -335,8 +335,8 @@ static inline void copy_highpage(struct page *to, struct page *from)
/*
* If architecture supports machine check exception handling, define the
* #MC versions of copy_user_highpage and copy_highpage. They copy a memory
- * page with #MC in source page (@from) handled, and return the number
- * of bytes not copied if there was a #MC, otherwise 0 for success.
+ * page with #MC in source page (@from) handled, and return -EFAULT if there
+ * was a #MC, otherwise 0 for success.
*/
static inline int copy_mc_user_highpage(struct page *to, struct page *from,
unsigned long vaddr, struct vm_area_struct *vma)
@@ -352,7 +352,7 @@ static inline int copy_mc_user_highpage(struct page *to, struct page *from,
kunmap_local(vto);
kunmap_local(vfrom);
- return ret;
+ return ret ? -EFAULT : 0;
}
static inline int copy_mc_highpage(struct page *to, struct page *from)
@@ -368,7 +368,7 @@ static inline int copy_mc_highpage(struct page *to, struct page *from)
kunmap_local(vto);
kunmap_local(vfrom);
- return ret;
+ return ret ? -EFAULT : 0;
}
#else
static inline int copy_mc_user_highpage(struct page *to, struct page *from,
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 774a97e6e2da..cce838e85967 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -798,7 +798,7 @@ static int __collapse_huge_page_copy(pte_t *pte, struct folio *folio,
continue;
}
src_page = pte_page(pteval);
- if (copy_mc_user_highpage(page, src_page, src_addr, vma) > 0) {
+ if (copy_mc_user_highpage(page, src_page, src_addr, vma)) {
result = SCAN_COPY_MC;
break;
}
@@ -2042,7 +2042,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
index++;
dst++;
}
- if (copy_mc_highpage(dst, folio_page(folio, 0)) > 0) {
+ if (copy_mc_highpage(dst, folio_page(folio, 0))) {
result = SCAN_COPY_MC;
goto rollback;
}
--
2.25.1
^ permalink raw reply related [flat|nested] 24+ messages in thread
* Re: [PATCH v12 3/6] mm/hwpoison: return -EFAULT when copy fail in copy_mc_[user]_highpage()
2024-05-28 8:59 ` [PATCH v12 3/6] mm/hwpoison: return -EFAULT when copy fail in copy_mc_[user]_highpage() Tong Tiangen
@ 2024-08-19 11:43 ` Jonathan Cameron
0 siblings, 0 replies; 24+ messages in thread
From: Jonathan Cameron @ 2024-08-19 11:43 UTC (permalink / raw)
To: Tong Tiangen
Cc: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton,
James Morse, Robin Murphy, Andrey Konovalov, Dmitry Vyukov,
Vincenzo Frascino, Michael Ellerman, Nicholas Piggin,
Andrey Ryabinin, Alexander Potapenko, Christophe Leroy,
Aneesh Kumar K.V, Naveen N. Rao, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
linux-arm-kernel, linux-mm, linuxppc-dev, linux-kernel,
wangkefeng.wang, Guohanjun
On Tue, 28 May 2024 16:59:12 +0800
Tong Tiangen <tongtiangen@huawei.com> wrote:
> If hardware errors are encountered during page copying, returning the bytes
> not copied is not meaningful, and the caller cannot do any processing on
> the remaining data. Returning -EFAULT is more reasonable, which represents
> a hardware error encountered during the copying.
>
> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
LGTM
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> ---
> include/linux/highmem.h | 8 ++++----
> mm/khugepaged.c | 4 ++--
> 2 files changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/include/linux/highmem.h b/include/linux/highmem.h
> index 00341b56d291..64a567d5ad6f 100644
> --- a/include/linux/highmem.h
> +++ b/include/linux/highmem.h
> @@ -335,8 +335,8 @@ static inline void copy_highpage(struct page *to, struct page *from)
> /*
> * If architecture supports machine check exception handling, define the
> * #MC versions of copy_user_highpage and copy_highpage. They copy a memory
> - * page with #MC in source page (@from) handled, and return the number
> - * of bytes not copied if there was a #MC, otherwise 0 for success.
> + * page with #MC in source page (@from) handled, and return -EFAULT if there
> + * was a #MC, otherwise 0 for success.
> */
> static inline int copy_mc_user_highpage(struct page *to, struct page *from,
> unsigned long vaddr, struct vm_area_struct *vma)
> @@ -352,7 +352,7 @@ static inline int copy_mc_user_highpage(struct page *to, struct page *from,
> kunmap_local(vto);
> kunmap_local(vfrom);
>
> - return ret;
> + return ret ? -EFAULT : 0;
> }
>
> static inline int copy_mc_highpage(struct page *to, struct page *from)
> @@ -368,7 +368,7 @@ static inline int copy_mc_highpage(struct page *to, struct page *from)
> kunmap_local(vto);
> kunmap_local(vfrom);
>
> - return ret;
> + return ret ? -EFAULT : 0;
> }
> #else
> static inline int copy_mc_user_highpage(struct page *to, struct page *from,
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 774a97e6e2da..cce838e85967 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -798,7 +798,7 @@ static int __collapse_huge_page_copy(pte_t *pte, struct folio *folio,
> continue;
> }
> src_page = pte_page(pteval);
> - if (copy_mc_user_highpage(page, src_page, src_addr, vma) > 0) {
> + if (copy_mc_user_highpage(page, src_page, src_addr, vma)) {
> result = SCAN_COPY_MC;
> break;
> }
> @@ -2042,7 +2042,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
> index++;
> dst++;
> }
> - if (copy_mc_highpage(dst, folio_page(folio, 0)) > 0) {
> + if (copy_mc_highpage(dst, folio_page(folio, 0))) {
> result = SCAN_COPY_MC;
> goto rollback;
> }
^ permalink raw reply [flat|nested] 24+ messages in thread
* [PATCH v12 4/6] arm64: support copy_mc_[user]_highpage()
2024-05-28 8:59 [PATCH v12 0/6]arm64: add ARCH_HAS_COPY_MC support Tong Tiangen
` (2 preceding siblings ...)
2024-05-28 8:59 ` [PATCH v12 3/6] mm/hwpoison: return -EFAULT when copy fail in copy_mc_[user]_highpage() Tong Tiangen
@ 2024-05-28 8:59 ` Tong Tiangen
2024-08-19 11:56 ` Jonathan Cameron
2024-05-28 8:59 ` [PATCH v12 5/6] arm64: introduce copy_mc_to_kernel() implementation Tong Tiangen
2024-05-28 8:59 ` [PATCH v12 6/6] arm64: send SIGBUS to user process for SEA exception Tong Tiangen
5 siblings, 1 reply; 24+ messages in thread
From: Tong Tiangen @ 2024-05-28 8:59 UTC (permalink / raw)
To: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton,
James Morse, Robin Murphy, Andrey Konovalov, Dmitry Vyukov,
Vincenzo Frascino, Michael Ellerman, Nicholas Piggin,
Andrey Ryabinin, Alexander Potapenko, Christophe Leroy,
Aneesh Kumar K.V, Naveen N. Rao, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin
Cc: linux-arm-kernel, linux-mm, linuxppc-dev, linux-kernel,
Tong Tiangen, wangkefeng.wang, Guohanjun
Currently, many scenarios that can tolerate memory errors when copying page
have been supported in the kernel[1~5], all of which are implemented by
copy_mc_[user]_highpage(). arm64 should also support this mechanism.
Due to mte, arm64 needs to have its own copy_mc_[user]_highpage()
architecture implementation, macros __HAVE_ARCH_COPY_MC_HIGHPAGE and
__HAVE_ARCH_COPY_MC_USER_HIGHPAGE have been added to control it.
Add a new helper copy_mc_page() which provides a page copy implementation
that is hardware memory error safe. The code logic of copy_mc_page() is the same as
copy_page(), the main difference is that the ldp insn of copy_mc_page()
contains the fixup type EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE, therefore, the
main logic is extracted to copy_page_template.S.
[1] commit d302c2398ba2 ("mm, hwpoison: when copy-on-write hits poison, take page offline")
[2] commit 1cb9dc4b475c ("mm: hwpoison: support recovery from HugePage copy-on-write faults")
[3] commit 6b970599e807 ("mm: hwpoison: support recovery from ksm_might_need_to_copy()")
[4] commit 98c76c9f1ef7 ("mm/khugepaged: recover from poisoned anonymous memory")
[5] commit 12904d953364 ("mm/khugepaged: recover from poisoned file-backed memory")
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
---
arch/arm64/include/asm/mte.h | 9 +++++
arch/arm64/include/asm/page.h | 10 ++++++
arch/arm64/lib/Makefile | 2 ++
arch/arm64/lib/copy_mc_page.S | 35 ++++++++++++++++++
arch/arm64/lib/copy_page.S | 50 +++-----------------------
arch/arm64/lib/copy_page_template.S | 56 +++++++++++++++++++++++++++++
arch/arm64/lib/mte.S | 29 +++++++++++++++
arch/arm64/mm/copypage.c | 45 +++++++++++++++++++++++
include/linux/highmem.h | 8 +++++
9 files changed, 199 insertions(+), 45 deletions(-)
create mode 100644 arch/arm64/lib/copy_mc_page.S
create mode 100644 arch/arm64/lib/copy_page_template.S
diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index 91fbd5c8a391..dc68337c2623 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -92,6 +92,11 @@ static inline bool try_page_mte_tagging(struct page *page)
void mte_zero_clear_page_tags(void *addr);
void mte_sync_tags(pte_t pte, unsigned int nr_pages);
void mte_copy_page_tags(void *kto, const void *kfrom);
+
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+int mte_copy_mc_page_tags(void *kto, const void *kfrom);
+#endif
+
void mte_thread_init_user(void);
void mte_thread_switch(struct task_struct *next);
void mte_cpu_setup(void);
@@ -128,6 +133,10 @@ static inline void mte_sync_tags(pte_t pte, unsigned int nr_pages)
static inline void mte_copy_page_tags(void *kto, const void *kfrom)
{
}
+static inline int mte_copy_mc_page_tags(void *kto, const void *kfrom)
+{
+ return 0;
+}
static inline void mte_thread_init_user(void)
{
}
diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index 2312e6ee595f..304cc86b8a10 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -29,6 +29,16 @@ void copy_user_highpage(struct page *to, struct page *from,
void copy_highpage(struct page *to, struct page *from);
#define __HAVE_ARCH_COPY_HIGHPAGE
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+int copy_mc_page(void *to, const void *from);
+int copy_mc_highpage(struct page *to, struct page *from);
+#define __HAVE_ARCH_COPY_MC_HIGHPAGE
+
+int copy_mc_user_highpage(struct page *to, struct page *from,
+ unsigned long vaddr, struct vm_area_struct *vma);
+#define __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
+#endif
+
struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
unsigned long vaddr);
#define vma_alloc_zeroed_movable_folio vma_alloc_zeroed_movable_folio
diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
index 13e6a2829116..65ec3d24d32d 100644
--- a/arch/arm64/lib/Makefile
+++ b/arch/arm64/lib/Makefile
@@ -13,6 +13,8 @@ endif
lib-$(CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE) += uaccess_flushcache.o
+lib-$(CONFIG_ARCH_HAS_COPY_MC) += copy_mc_page.o
+
obj-$(CONFIG_CRC32) += crc32.o
obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
diff --git a/arch/arm64/lib/copy_mc_page.S b/arch/arm64/lib/copy_mc_page.S
new file mode 100644
index 000000000000..7e633a6aa89f
--- /dev/null
+++ b/arch/arm64/lib/copy_mc_page.S
@@ -0,0 +1,35 @@
+#include <linux/linkage.h>
+#include <linux/const.h>
+#include <asm/assembler.h>
+#include <asm/page.h>
+#include <asm/cpufeature.h>
+#include <asm/alternative.h>
+#include <asm/asm-extable.h>
+#include <asm/asm-uaccess.h>
+
+/*
+ * Copy a page from src to dest (both are page aligned) with memory error safe
+ *
+ * Parameters:
+ * x0 - dest
+ * x1 - src
+ * Returns:
+ * x0 - Return 0 if copy success, or -EFAULT if anything goes wrong
+ * while copying.
+ */
+ .macro ldp1 reg1, reg2, ptr, val
+ KERNEL_ME_SAFE(9998f, ldp \reg1, \reg2, [\ptr, \val])
+ .endm
+
+SYM_FUNC_START(__pi_copy_mc_page)
+#include "copy_page_template.S"
+
+ mov x0, #0
+ ret
+
+9998: mov x0, #-EFAULT
+ ret
+
+SYM_FUNC_END(__pi_copy_mc_page)
+SYM_FUNC_ALIAS(copy_mc_page, __pi_copy_mc_page)
+EXPORT_SYMBOL(copy_mc_page)
diff --git a/arch/arm64/lib/copy_page.S b/arch/arm64/lib/copy_page.S
index 6a56d7cf309d..5499f507bb75 100644
--- a/arch/arm64/lib/copy_page.S
+++ b/arch/arm64/lib/copy_page.S
@@ -17,52 +17,12 @@
* x0 - dest
* x1 - src
*/
-SYM_FUNC_START(__pi_copy_page)
- ldp x2, x3, [x1]
- ldp x4, x5, [x1, #16]
- ldp x6, x7, [x1, #32]
- ldp x8, x9, [x1, #48]
- ldp x10, x11, [x1, #64]
- ldp x12, x13, [x1, #80]
- ldp x14, x15, [x1, #96]
- ldp x16, x17, [x1, #112]
-
- add x0, x0, #256
- add x1, x1, #128
-1:
- tst x0, #(PAGE_SIZE - 1)
-
- stnp x2, x3, [x0, #-256]
- ldp x2, x3, [x1]
- stnp x4, x5, [x0, #16 - 256]
- ldp x4, x5, [x1, #16]
- stnp x6, x7, [x0, #32 - 256]
- ldp x6, x7, [x1, #32]
- stnp x8, x9, [x0, #48 - 256]
- ldp x8, x9, [x1, #48]
- stnp x10, x11, [x0, #64 - 256]
- ldp x10, x11, [x1, #64]
- stnp x12, x13, [x0, #80 - 256]
- ldp x12, x13, [x1, #80]
- stnp x14, x15, [x0, #96 - 256]
- ldp x14, x15, [x1, #96]
- stnp x16, x17, [x0, #112 - 256]
- ldp x16, x17, [x1, #112]
-
- add x0, x0, #128
- add x1, x1, #128
-
- b.ne 1b
-
- stnp x2, x3, [x0, #-256]
- stnp x4, x5, [x0, #16 - 256]
- stnp x6, x7, [x0, #32 - 256]
- stnp x8, x9, [x0, #48 - 256]
- stnp x10, x11, [x0, #64 - 256]
- stnp x12, x13, [x0, #80 - 256]
- stnp x14, x15, [x0, #96 - 256]
- stnp x16, x17, [x0, #112 - 256]
+ .macro ldp1 reg1, reg2, ptr, val
+ ldp \reg1, \reg2, [\ptr, \val]
+ .endm
+SYM_FUNC_START(__pi_copy_page)
+#include "copy_page_template.S"
ret
SYM_FUNC_END(__pi_copy_page)
SYM_FUNC_ALIAS(copy_page, __pi_copy_page)
diff --git a/arch/arm64/lib/copy_page_template.S b/arch/arm64/lib/copy_page_template.S
new file mode 100644
index 000000000000..b3ddec2c7a27
--- /dev/null
+++ b/arch/arm64/lib/copy_page_template.S
@@ -0,0 +1,56 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ */
+
+/*
+ * Copy a page from src to dest (both are page aligned)
+ *
+ * Parameters:
+ * x0 - dest
+ * x1 - src
+ */
+ ldp1 x2, x3, x1, #0
+ ldp1 x4, x5, x1, #16
+ ldp1 x6, x7, x1, #32
+ ldp1 x8, x9, x1, #48
+ ldp1 x10, x11, x1, #64
+ ldp1 x12, x13, x1, #80
+ ldp1 x14, x15, x1, #96
+ ldp1 x16, x17, x1, #112
+
+ add x0, x0, #256
+ add x1, x1, #128
+1:
+ tst x0, #(PAGE_SIZE - 1)
+
+ stnp x2, x3, [x0, #-256]
+ ldp1 x2, x3, x1, #0
+ stnp x4, x5, [x0, #16 - 256]
+ ldp1 x4, x5, x1, #16
+ stnp x6, x7, [x0, #32 - 256]
+ ldp1 x6, x7, x1, #32
+ stnp x8, x9, [x0, #48 - 256]
+ ldp1 x8, x9, x1, #48
+ stnp x10, x11, [x0, #64 - 256]
+ ldp1 x10, x11, x1, #64
+ stnp x12, x13, [x0, #80 - 256]
+ ldp1 x12, x13, x1, #80
+ stnp x14, x15, [x0, #96 - 256]
+ ldp1 x14, x15, x1, #96
+ stnp x16, x17, [x0, #112 - 256]
+ ldp1 x16, x17, x1, #112
+
+ add x0, x0, #128
+ add x1, x1, #128
+
+ b.ne 1b
+
+ stnp x2, x3, [x0, #-256]
+ stnp x4, x5, [x0, #16 - 256]
+ stnp x6, x7, [x0, #32 - 256]
+ stnp x8, x9, [x0, #48 - 256]
+ stnp x10, x11, [x0, #64 - 256]
+ stnp x12, x13, [x0, #80 - 256]
+ stnp x14, x15, [x0, #96 - 256]
+ stnp x16, x17, [x0, #112 - 256]
diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
index 5018ac03b6bf..50ef24318281 100644
--- a/arch/arm64/lib/mte.S
+++ b/arch/arm64/lib/mte.S
@@ -80,6 +80,35 @@ SYM_FUNC_START(mte_copy_page_tags)
ret
SYM_FUNC_END(mte_copy_page_tags)
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+/*
+ * Copy the tags from the source page to the destination one wiht machine check safe
+ * x0 - address of the destination page
+ * x1 - address of the source page
+ * Returns:
+ * x0 - Return 0 if copy success, or
+ * -EFAULT if anything goes wrong while copying.
+ */
+SYM_FUNC_START(mte_copy_mc_page_tags)
+ mov x2, x0
+ mov x3, x1
+ multitag_transfer_size x5, x6
+1:
+KERNEL_ME_SAFE(2f, ldgm x4, [x3])
+ stgm x4, [x2]
+ add x2, x2, x5
+ add x3, x3, x5
+ tst x2, #(PAGE_SIZE - 1)
+ b.ne 1b
+
+ mov x0, #0
+ ret
+
+2: mov x0, #-EFAULT
+ ret
+SYM_FUNC_END(mte_copy_mc_page_tags)
+#endif
+
/*
* Read tags from a user buffer (one tag per byte) and set the corresponding
* tags at the given kernel address. Used by PTRACE_POKEMTETAGS.
diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index a7bb20055ce0..ff0d9ceea2a4 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -40,3 +40,48 @@ void copy_user_highpage(struct page *to, struct page *from,
flush_dcache_page(to);
}
EXPORT_SYMBOL_GPL(copy_user_highpage);
+
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+/*
+ * Return -EFAULT if anything goes wrong while copying page or mte.
+ */
+int copy_mc_highpage(struct page *to, struct page *from)
+{
+ void *kto = page_address(to);
+ void *kfrom = page_address(from);
+ int ret;
+
+ ret = copy_mc_page(kto, kfrom);
+ if (ret)
+ return -EFAULT;
+
+ if (kasan_hw_tags_enabled())
+ page_kasan_tag_reset(to);
+
+ if (system_supports_mte() && page_mte_tagged(from)) {
+ /* It's a new page, shouldn't have been tagged yet */
+ WARN_ON_ONCE(!try_page_mte_tagging(to));
+ ret = mte_copy_mc_page_tags(kto, kfrom);
+ if (ret)
+ return -EFAULT;
+
+ set_page_mte_tagged(to);
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(copy_mc_highpage);
+
+int copy_mc_user_highpage(struct page *to, struct page *from,
+ unsigned long vaddr, struct vm_area_struct *vma)
+{
+ int ret;
+
+ ret = copy_mc_highpage(to, from);
+ if (!ret)
+ flush_dcache_page(to);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(copy_mc_user_highpage);
+#endif
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 64a567d5ad6f..7a8ddb2b67e6 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -332,6 +332,7 @@ static inline void copy_highpage(struct page *to, struct page *from)
#endif
#ifdef copy_mc_to_kernel
+#ifndef __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
/*
* If architecture supports machine check exception handling, define the
* #MC versions of copy_user_highpage and copy_highpage. They copy a memory
@@ -354,7 +355,9 @@ static inline int copy_mc_user_highpage(struct page *to, struct page *from,
return ret ? -EFAULT : 0;
}
+#endif
+#ifndef __HAVE_ARCH_COPY_MC_HIGHPAGE
static inline int copy_mc_highpage(struct page *to, struct page *from)
{
unsigned long ret;
@@ -370,20 +373,25 @@ static inline int copy_mc_highpage(struct page *to, struct page *from)
return ret ? -EFAULT : 0;
}
+#endif
#else
+#ifndef __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
static inline int copy_mc_user_highpage(struct page *to, struct page *from,
unsigned long vaddr, struct vm_area_struct *vma)
{
copy_user_highpage(to, from, vaddr, vma);
return 0;
}
+#endif
+#ifndef __HAVE_ARCH_COPY_MC_HIGHPAGE
static inline int copy_mc_highpage(struct page *to, struct page *from)
{
copy_highpage(to, from);
return 0;
}
#endif
+#endif
static inline void memcpy_page(struct page *dst_page, size_t dst_off,
struct page *src_page, size_t src_off,
--
2.25.1
^ permalink raw reply related [flat|nested] 24+ messages in thread
* Re: [PATCH v12 4/6] arm64: support copy_mc_[user]_highpage()
2024-05-28 8:59 ` [PATCH v12 4/6] arm64: support copy_mc_[user]_highpage() Tong Tiangen
@ 2024-08-19 11:56 ` Jonathan Cameron
2024-08-20 3:02 ` Tong Tiangen
0 siblings, 1 reply; 24+ messages in thread
From: Jonathan Cameron @ 2024-08-19 11:56 UTC (permalink / raw)
To: Tong Tiangen
Cc: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton,
James Morse, Robin Murphy, Andrey Konovalov, Dmitry Vyukov,
Vincenzo Frascino, Michael Ellerman, Nicholas Piggin,
Andrey Ryabinin, Alexander Potapenko, Christophe Leroy,
Aneesh Kumar K.V, Naveen N. Rao, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
linux-arm-kernel, linux-mm, linuxppc-dev, linux-kernel,
wangkefeng.wang, Guohanjun
On Tue, 28 May 2024 16:59:13 +0800
Tong Tiangen <tongtiangen@huawei.com> wrote:
> Currently, many scenarios that can tolerate memory errors when copying page
> have been supported in the kernel[1~5], all of which are implemented by
> copy_mc_[user]_highpage(). arm64 should also support this mechanism.
>
> Due to mte, arm64 needs to have its own copy_mc_[user]_highpage()
> architecture implementation, macros __HAVE_ARCH_COPY_MC_HIGHPAGE and
> __HAVE_ARCH_COPY_MC_USER_HIGHPAGE have been added to control it.
>
> Add a new helper copy_mc_page() which provides a page copy implementation
> that is hardware memory error safe. The code logic of copy_mc_page() is the same as
> copy_page(), the main difference is that the ldp insn of copy_mc_page()
> contains the fixup type EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE, therefore, the
> main logic is extracted to copy_page_template.S.
>
> [1] commit d302c2398ba2 ("mm, hwpoison: when copy-on-write hits poison, take page offline")
> [2] commit 1cb9dc4b475c ("mm: hwpoison: support recovery from HugePage copy-on-write faults")
> [3] commit 6b970599e807 ("mm: hwpoison: support recovery from ksm_might_need_to_copy()")
> [4] commit 98c76c9f1ef7 ("mm/khugepaged: recover from poisoned anonymous memory")
> [5] commit 12904d953364 ("mm/khugepaged: recover from poisoned file-backed memory")
>
> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Trivial stuff inline.
Jonathan
> diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
> index 5018ac03b6bf..50ef24318281 100644
> --- a/arch/arm64/lib/mte.S
> +++ b/arch/arm64/lib/mte.S
> @@ -80,6 +80,35 @@ SYM_FUNC_START(mte_copy_page_tags)
> ret
> SYM_FUNC_END(mte_copy_page_tags)
>
> +#ifdef CONFIG_ARCH_HAS_COPY_MC
> +/*
> + * Copy the tags from the source page to the destination one wiht machine check safe
Spell check.
with
Also, maybe reword given machine check doesn't make sense on arm64.
> + * x0 - address of the destination page
> + * x1 - address of the source page
> + * Returns:
> + * x0 - Return 0 if copy success, or
> + * -EFAULT if anything goes wrong while copying.
> + */
> +SYM_FUNC_START(mte_copy_mc_page_tags)
> + mov x2, x0
> + mov x3, x1
> + multitag_transfer_size x5, x6
> +1:
> +KERNEL_ME_SAFE(2f, ldgm x4, [x3])
> + stgm x4, [x2]
> + add x2, x2, x5
> + add x3, x3, x5
> + tst x2, #(PAGE_SIZE - 1)
> + b.ne 1b
> +
> + mov x0, #0
> + ret
> +
> +2: mov x0, #-EFAULT
> + ret
> +SYM_FUNC_END(mte_copy_mc_page_tags)
> +#endif
> +
> /*
> * Read tags from a user buffer (one tag per byte) and set the corresponding
> * tags at the given kernel address. Used by PTRACE_POKEMTETAGS.
> diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
> index a7bb20055ce0..ff0d9ceea2a4 100644
> --- a/arch/arm64/mm/copypage.c
> +++ b/arch/arm64/mm/copypage.c
> @@ -40,3 +40,48 @@ void copy_user_highpage(struct page *to, struct page *from,
> +
> +int copy_mc_user_highpage(struct page *to, struct page *from,
> + unsigned long vaddr, struct vm_area_struct *vma)
> +{
> + int ret;
> +
> + ret = copy_mc_highpage(to, from);
> + if (!ret)
> + flush_dcache_page(to);
Personally I'd always keep the error out of line as it tends to be
more readable when reviewing a lot of code.
if (ret)
return ret;
flush_dcache_page(to);
return 0;
> +
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(copy_mc_user_highpage);
> +#endif
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v12 4/6] arm64: support copy_mc_[user]_highpage()
2024-08-19 11:56 ` Jonathan Cameron
@ 2024-08-20 3:02 ` Tong Tiangen
2024-08-21 11:28 ` Jonathan Cameron
0 siblings, 1 reply; 24+ messages in thread
From: Tong Tiangen @ 2024-08-20 3:02 UTC (permalink / raw)
To: Jonathan Cameron
Cc: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton,
James Morse, Robin Murphy, Andrey Konovalov, Dmitry Vyukov,
Vincenzo Frascino, Michael Ellerman, Nicholas Piggin,
Andrey Ryabinin, Alexander Potapenko, Christophe Leroy,
Aneesh Kumar K.V, Naveen N. Rao, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
linux-arm-kernel, linux-mm, linuxppc-dev, linux-kernel,
wangkefeng.wang, Guohanjun
On 2024/8/19 19:56, Jonathan Cameron wrote:
> On Tue, 28 May 2024 16:59:13 +0800
> Tong Tiangen <tongtiangen@huawei.com> wrote:
>
>> Currently, many scenarios that can tolerate memory errors when copying page
>> have been supported in the kernel[1~5], all of which are implemented by
>> copy_mc_[user]_highpage(). arm64 should also support this mechanism.
>>
>> Due to mte, arm64 needs to have its own copy_mc_[user]_highpage()
>> architecture implementation, macros __HAVE_ARCH_COPY_MC_HIGHPAGE and
>> __HAVE_ARCH_COPY_MC_USER_HIGHPAGE have been added to control it.
>>
>> Add a new helper copy_mc_page() which provides a page copy implementation
>> that is hardware memory error safe. The code logic of copy_mc_page() is the same as
>> copy_page(), the main difference is that the ldp insn of copy_mc_page()
>> contains the fixup type EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE, therefore, the
>> main logic is extracted to copy_page_template.S.
>>
>> [1] commit d302c2398ba2 ("mm, hwpoison: when copy-on-write hits poison, take page offline")
>> [2] commit 1cb9dc4b475c ("mm: hwpoison: support recovery from HugePage copy-on-write faults")
>> [3] commit 6b970599e807 ("mm: hwpoison: support recovery from ksm_might_need_to_copy()")
>> [4] commit 98c76c9f1ef7 ("mm/khugepaged: recover from poisoned anonymous memory")
>> [5] commit 12904d953364 ("mm/khugepaged: recover from poisoned file-backed memory")
>>
>> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
> Trivial stuff inline.
>
> Jonathan
I'm sorry, I may not have understood what you meant. Where is the better
place to do inline? :)
Thanks,
Tong.
>
>
>> diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
>> index 5018ac03b6bf..50ef24318281 100644
>> --- a/arch/arm64/lib/mte.S
>> +++ b/arch/arm64/lib/mte.S
>> @@ -80,6 +80,35 @@ SYM_FUNC_START(mte_copy_page_tags)
>> ret
>> SYM_FUNC_END(mte_copy_page_tags)
>>
>> +#ifdef CONFIG_ARCH_HAS_COPY_MC
>> +/*
>> + * Copy the tags from the source page to the destination one wiht machine check safe
> Spell check.
> with
>
> Also, maybe reword given machine check doesn't make sense on arm64.
OK.
>
>
>> + * x0 - address of the destination page
>> + * x1 - address of the source page
>> + * Returns:
>> + * x0 - Return 0 if copy success, or
>> + * -EFAULT if anything goes wrong while copying.
>> + */
>> +SYM_FUNC_START(mte_copy_mc_page_tags)
>> + mov x2, x0
>> + mov x3, x1
>> + multitag_transfer_size x5, x6
>> +1:
>> +KERNEL_ME_SAFE(2f, ldgm x4, [x3])
>> + stgm x4, [x2]
>> + add x2, x2, x5
>> + add x3, x3, x5
>> + tst x2, #(PAGE_SIZE - 1)
>> + b.ne 1b
>> +
>> + mov x0, #0
>> + ret
>> +
>> +2: mov x0, #-EFAULT
>> + ret
>> +SYM_FUNC_END(mte_copy_mc_page_tags)
>> +#endif
>> +
>> /*
>> * Read tags from a user buffer (one tag per byte) and set the corresponding
>> * tags at the given kernel address. Used by PTRACE_POKEMTETAGS.
>> diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
>> index a7bb20055ce0..ff0d9ceea2a4 100644
>> --- a/arch/arm64/mm/copypage.c
>> +++ b/arch/arm64/mm/copypage.c
>> @@ -40,3 +40,48 @@ void copy_user_highpage(struct page *to, struct page *from,
>
>> +
>> +int copy_mc_user_highpage(struct page *to, struct page *from,
>> + unsigned long vaddr, struct vm_area_struct *vma)
>> +{
>> + int ret;
>> +
>> + ret = copy_mc_highpage(to, from);
>> + if (!ret)
>> + flush_dcache_page(to);
> Personally I'd always keep the error out of line as it tends to be
> more readable when reviewing a lot of code.
> if (ret)
> return ret;
>
> flush_dcache_page(to);
>
> return 0;
This is more reasonable, and returning early on errors is more
readable.
Thanks,
Tong.
>> +
>> + return ret;
>> +}
>> +EXPORT_SYMBOL_GPL(copy_mc_user_highpage);
>> +#endif
>
> .
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v12 4/6] arm64: support copy_mc_[user]_highpage()
2024-08-20 3:02 ` Tong Tiangen
@ 2024-08-21 11:28 ` Jonathan Cameron
2024-08-21 14:20 ` Tong Tiangen
0 siblings, 1 reply; 24+ messages in thread
From: Jonathan Cameron @ 2024-08-21 11:28 UTC (permalink / raw)
To: Tong Tiangen
Cc: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton,
James Morse, Robin Murphy, Andrey Konovalov, Dmitry Vyukov,
Vincenzo Frascino, Michael Ellerman, Nicholas Piggin,
Andrey Ryabinin, Alexander Potapenko, Christophe Leroy,
Aneesh Kumar K.V, Naveen N. Rao, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
linux-arm-kernel, linux-mm, linuxppc-dev, linux-kernel,
wangkefeng.wang, Guohanjun
On Tue, 20 Aug 2024 11:02:05 +0800
Tong Tiangen <tongtiangen@huawei.com> wrote:
> On 2024/8/19 19:56, Jonathan Cameron wrote:
> > On Tue, 28 May 2024 16:59:13 +0800
> > Tong Tiangen <tongtiangen@huawei.com> wrote:
> >
> >> Currently, many scenarios that can tolerate memory errors when copying page
> >> have been supported in the kernel[1~5], all of which are implemented by
> >> copy_mc_[user]_highpage(). arm64 should also support this mechanism.
> >>
> >> Due to mte, arm64 needs to have its own copy_mc_[user]_highpage()
> >> architecture implementation, macros __HAVE_ARCH_COPY_MC_HIGHPAGE and
> >> __HAVE_ARCH_COPY_MC_USER_HIGHPAGE have been added to control it.
> >>
> >> Add a new helper copy_mc_page() which provides a page copy implementation
> >> that is hardware memory error safe. The code logic of copy_mc_page() is the same as
> >> copy_page(), the main difference is that the ldp insn of copy_mc_page()
> >> contains the fixup type EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE, therefore, the
> >> main logic is extracted to copy_page_template.S.
> >>
> >> [1] commit d302c2398ba2 ("mm, hwpoison: when copy-on-write hits poison, take page offline")
> >> [2] commit 1cb9dc4b475c ("mm: hwpoison: support recovery from HugePage copy-on-write faults")
> >> [3] commit 6b970599e807 ("mm: hwpoison: support recovery from ksm_might_need_to_copy()")
> >> [4] commit 98c76c9f1ef7 ("mm/khugepaged: recover from poisoned anonymous memory")
> >> [5] commit 12904d953364 ("mm/khugepaged: recover from poisoned file-backed memory")
> >>
> >> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
> > Trivial stuff inline.
> >
> > Jonathan
>
> I'm sorry, I may not have understood what you meant. Where is the better
> place to do inline? :)
Oops. All I meant was:
My comments are inline - as in within the patch later in the email.
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v12 4/6] arm64: support copy_mc_[user]_highpage()
2024-08-21 11:28 ` Jonathan Cameron
@ 2024-08-21 14:20 ` Tong Tiangen
0 siblings, 0 replies; 24+ messages in thread
From: Tong Tiangen @ 2024-08-21 14:20 UTC (permalink / raw)
To: Jonathan Cameron
Cc: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton,
James Morse, Robin Murphy, Andrey Konovalov, Dmitry Vyukov,
Vincenzo Frascino, Michael Ellerman, Nicholas Piggin,
Andrey Ryabinin, Alexander Potapenko, Christophe Leroy,
Aneesh Kumar K.V, Naveen N. Rao, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
linux-arm-kernel, linux-mm, linuxppc-dev, linux-kernel,
wangkefeng.wang, Guohanjun
On 2024/8/21 19:28, Jonathan Cameron wrote:
> On Tue, 20 Aug 2024 11:02:05 +0800
> Tong Tiangen <tongtiangen@huawei.com> wrote:
>
>> On 2024/8/19 19:56, Jonathan Cameron wrote:
>>> On Tue, 28 May 2024 16:59:13 +0800
>>> Tong Tiangen <tongtiangen@huawei.com> wrote:
>>>
>>>> Currently, many scenarios that can tolerate memory errors when copying page
>>>> have been supported in the kernel[1~5], all of which are implemented by
>>>> copy_mc_[user]_highpage(). arm64 should also support this mechanism.
>>>>
>>>> Due to mte, arm64 needs to have its own copy_mc_[user]_highpage()
>>>> architecture implementation, macros __HAVE_ARCH_COPY_MC_HIGHPAGE and
>>>> __HAVE_ARCH_COPY_MC_USER_HIGHPAGE have been added to control it.
>>>>
>>>> Add a new helper copy_mc_page() which provides a page copy implementation
>>>> that is hardware memory error safe. The code logic of copy_mc_page() is the same as
>>>> copy_page(), the main difference is that the ldp insn of copy_mc_page()
>>>> contains the fixup type EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE, therefore, the
>>>> main logic is extracted to copy_page_template.S.
>>>>
>>>> [1] commit d302c2398ba2 ("mm, hwpoison: when copy-on-write hits poison, take page offline")
>>>> [2] commit 1cb9dc4b475c ("mm: hwpoison: support recovery from HugePage copy-on-write faults")
>>>> [3] commit 6b970599e807 ("mm: hwpoison: support recovery from ksm_might_need_to_copy()")
>>>> [4] commit 98c76c9f1ef7 ("mm/khugepaged: recover from poisoned anonymous memory")
>>>> [5] commit 12904d953364 ("mm/khugepaged: recover from poisoned file-backed memory")
>>>>
>>>> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
>>> Trivial stuff inline.
>>>
>>> Jonathan
>>
>> I'm sorry, I may not have understood what you meant. Where is the better
>> place to do inline? :)
> Oops. All I meant was:
>
> My comments are inline - as in within the patch later in the email.
>
OK, It's my fault for not getting your point :)
Thanks Jonathan.
>
>
> .
^ permalink raw reply [flat|nested] 24+ messages in thread
* [PATCH v12 5/6] arm64: introduce copy_mc_to_kernel() implementation
2024-05-28 8:59 [PATCH v12 0/6]arm64: add ARCH_HAS_COPY_MC support Tong Tiangen
` (3 preceding siblings ...)
2024-05-28 8:59 ` [PATCH v12 4/6] arm64: support copy_mc_[user]_highpage() Tong Tiangen
@ 2024-05-28 8:59 ` Tong Tiangen
2024-05-28 8:59 ` [PATCH v12 6/6] arm64: send SIGBUS to user process for SEA exception Tong Tiangen
5 siblings, 0 replies; 24+ messages in thread
From: Tong Tiangen @ 2024-05-28 8:59 UTC (permalink / raw)
To: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton,
James Morse, Robin Murphy, Andrey Konovalov, Dmitry Vyukov,
Vincenzo Frascino, Michael Ellerman, Nicholas Piggin,
Andrey Ryabinin, Alexander Potapenko, Christophe Leroy,
Aneesh Kumar K.V, Naveen N. Rao, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin
Cc: linux-arm-kernel, linux-mm, linuxppc-dev, linux-kernel,
Tong Tiangen, wangkefeng.wang, Guohanjun
The copy_mc_to_kernel() helper is a memory copy implementation that handles
source exceptions. It can be used in memory copy scenarios that tolerate
hardware memory errors (e.g. pmem_read/dax_copy_to_iter).
Currently, only x86 and ppc support this helper; now that arm64 supports
ARCH_HAS_COPY_MC, we introduce a copy_mc_to_kernel() implementation.
Also add memcpy_mc() for memory copies that handle source exceptions.
Because no GPR is available for saving "bytes not copied" in
memcpy(), memcpy_mc() is modeled on the implementation of
copy_from_user().
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
---
arch/arm64/include/asm/string.h | 5 +++
arch/arm64/include/asm/uaccess.h | 18 ++++++++
arch/arm64/lib/Makefile | 2 +-
arch/arm64/lib/memcpy_mc.S | 73 ++++++++++++++++++++++++++++++++
mm/kasan/shadow.c | 12 ++++++
5 files changed, 109 insertions(+), 1 deletion(-)
create mode 100644 arch/arm64/lib/memcpy_mc.S
diff --git a/arch/arm64/include/asm/string.h b/arch/arm64/include/asm/string.h
index 3a3264ff47b9..23eca4fb24fa 100644
--- a/arch/arm64/include/asm/string.h
+++ b/arch/arm64/include/asm/string.h
@@ -35,6 +35,10 @@ extern void *memchr(const void *, int, __kernel_size_t);
extern void *memcpy(void *, const void *, __kernel_size_t);
extern void *__memcpy(void *, const void *, __kernel_size_t);
+#define __HAVE_ARCH_MEMCPY_MC
+extern int memcpy_mc(void *, const void *, __kernel_size_t);
+extern int __memcpy_mc(void *, const void *, __kernel_size_t);
+
#define __HAVE_ARCH_MEMMOVE
extern void *memmove(void *, const void *, __kernel_size_t);
extern void *__memmove(void *, const void *, __kernel_size_t);
@@ -57,6 +61,7 @@ void memcpy_flushcache(void *dst, const void *src, size_t cnt);
*/
#define memcpy(dst, src, len) __memcpy(dst, src, len)
+#define memcpy_mc(dst, src, len) __memcpy_mc(dst, src, len)
#define memmove(dst, src, len) __memmove(dst, src, len)
#define memset(s, c, n) __memset(s, c, n)
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 14be5000c5a0..07c1aeaeb094 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -425,4 +425,22 @@ static inline size_t probe_subpage_writeable(const char __user *uaddr,
#endif /* CONFIG_ARCH_HAS_SUBPAGE_FAULTS */
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+/**
+ * copy_mc_to_kernel - memory copy that handles source exceptions
+ *
+ * @to: destination address
+ * @from: source address
+ * @size: number of bytes to copy
+ *
+ * Return 0 for success, or bytes not copied.
+ */
+static inline unsigned long __must_check
+copy_mc_to_kernel(void *to, const void *from, unsigned long size)
+{
+ return memcpy_mc(to, from, size);
+}
+#define copy_mc_to_kernel copy_mc_to_kernel
+#endif
+
#endif /* __ASM_UACCESS_H */
diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
index 65ec3d24d32d..0d2fc251fbae 100644
--- a/arch/arm64/lib/Makefile
+++ b/arch/arm64/lib/Makefile
@@ -13,7 +13,7 @@ endif
lib-$(CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE) += uaccess_flushcache.o
-lib-$(CONFIG_ARCH_HAS_COPY_MC) += copy_mc_page.o
+lib-$(CONFIG_ARCH_HAS_COPY_MC) += copy_mc_page.o memcpy_mc.o
obj-$(CONFIG_CRC32) += crc32.o
diff --git a/arch/arm64/lib/memcpy_mc.S b/arch/arm64/lib/memcpy_mc.S
new file mode 100644
index 000000000000..1798090eba06
--- /dev/null
+++ b/arch/arm64/lib/memcpy_mc.S
@@ -0,0 +1,73 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2013 ARM Ltd.
+ * Copyright (C) 2013 Linaro.
+ *
+ * This code is based on glibc cortex strings work originally authored by Linaro
+ * be found @
+ *
+ * http://bazaar.launchpad.net/~linaro-toolchain-dev/cortex-strings/trunk/
+ * files/head:/src/aarch64/
+ */
+
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+#include <asm/cache.h>
+#include <asm/asm-uaccess.h>
+
+/*
+ * Copy a buffer from src to dest (alignment handled by the hardware)
+ *
+ * Parameters:
+ * x0 - dest
+ * x1 - src
+ * x2 - n
+ * Returns:
+ * x0 - dest
+ */
+ .macro ldrb1 reg, ptr, val
+ KERNEL_ME_SAFE(9997f, ldrb \reg, [\ptr], \val)
+ .endm
+
+ .macro strb1 reg, ptr, val
+ strb \reg, [\ptr], \val
+ .endm
+
+ .macro ldrh1 reg, ptr, val
+ KERNEL_ME_SAFE(9997f, ldrh \reg, [\ptr], \val)
+ .endm
+
+ .macro strh1 reg, ptr, val
+ strh \reg, [\ptr], \val
+ .endm
+
+ .macro ldr1 reg, ptr, val
+ KERNEL_ME_SAFE(9997f, ldr \reg, [\ptr], \val)
+ .endm
+
+ .macro str1 reg, ptr, val
+ str \reg, [\ptr], \val
+ .endm
+
+ .macro ldp1 reg1, reg2, ptr, val
+ KERNEL_ME_SAFE(9997f, ldp \reg1, \reg2, [\ptr], \val)
+ .endm
+
+ .macro stp1 reg1, reg2, ptr, val
+ stp \reg1, \reg2, [\ptr], \val
+ .endm
+
+end .req x5
+SYM_FUNC_START(__memcpy_mc)
+ add end, x0, x2
+#include "copy_template.S"
+ mov x0, #0 // Nothing to copy
+ ret
+
+ // Exception fixups
+9997: sub x0, end, dst // bytes not copied
+ ret
+SYM_FUNC_END(__memcpy_mc)
+EXPORT_SYMBOL(__memcpy_mc)
+SYM_FUNC_ALIAS_WEAK(memcpy_mc, __memcpy_mc)
+EXPORT_SYMBOL(memcpy_mc)
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index d6210ca48dda..e23632391eac 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -79,6 +79,18 @@ void *memcpy(void *dest, const void *src, size_t len)
}
#endif
+#ifdef __HAVE_ARCH_MEMCPY_MC
+#undef memcpy_mc
+int memcpy_mc(void *dest, const void *src, size_t len)
+{
+ if (!kasan_check_range(src, len, false, _RET_IP_) ||
+ !kasan_check_range(dest, len, true, _RET_IP_))
+ return (int)len;
+
+ return __memcpy_mc(dest, src, len);
+}
+#endif
+
void *__asan_memset(void *addr, int c, ssize_t len)
{
if (!kasan_check_range(addr, len, true, _RET_IP_))
--
2.25.1
^ permalink raw reply related [flat|nested] 24+ messages in thread* [PATCH v12 6/6] arm64: send SIGBUS to user process for SEA exception
2024-05-28 8:59 [PATCH v12 0/6]arm64: add ARCH_HAS_COPY_MC support Tong Tiangen
` (4 preceding siblings ...)
2024-05-28 8:59 ` [PATCH v12 5/6] arm64: introduce copy_mc_to_kernel() implementation Tong Tiangen
@ 2024-05-28 8:59 ` Tong Tiangen
2024-08-19 12:08 ` Jonathan Cameron
5 siblings, 1 reply; 24+ messages in thread
From: Tong Tiangen @ 2024-05-28 8:59 UTC (permalink / raw)
To: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton,
James Morse, Robin Murphy, Andrey Konovalov, Dmitry Vyukov,
Vincenzo Frascino, Michael Ellerman, Nicholas Piggin,
Andrey Ryabinin, Alexander Potapenko, Christophe Leroy,
Aneesh Kumar K.V, Naveen N. Rao, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin
Cc: linux-arm-kernel, linux-mm, linuxppc-dev, linux-kernel,
Tong Tiangen, wangkefeng.wang, Guohanjun
For an SEA exception, the kernel needs to take some action to recover from
the memory error, such as isolating the poisoned page and killing the
failing thread, which is done in memory_failure().
During our testing, the failing thread could not be killed due to issue [1].
Here, I temporarily work around the issue by sending the signal to the user
process directly in do_sea(). After [1] is merged, this patch can be rolled
back; otherwise the SIGBUS would be sent repeatedly.
[1]https://lore.kernel.org/lkml/20240204080144.7977-1-xueshuai@linux.alibaba.com/
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
---
arch/arm64/mm/fault.c | 14 +++++++++++---
1 file changed, 11 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 2dc65f99d389..37d7e74d9aee 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -730,9 +730,6 @@ static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
const struct fault_info *inf;
unsigned long siaddr;
- if (do_apei_claim_sea(regs))
- return 0;
-
inf = esr_to_fault_info(esr);
if (esr & ESR_ELx_FnV) {
siaddr = 0;
@@ -744,6 +741,17 @@ static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
*/
siaddr = untagged_addr(far);
}
+
+ if (do_apei_claim_sea(regs)) {
+ if (current->mm) {
+ set_thread_esr(0, esr);
+ arm64_force_sig_fault(inf->sig, inf->code, siaddr,
+ "Uncorrected memory error on access \
+ to poison memory\n");
+ }
+ return 0;
+ }
+
arm64_notify_die(inf->name, regs, inf->sig, inf->code, siaddr, esr);
return 0;
--
2.25.1
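
A rough, hedged illustration (not part of the patch) of what this means for
user space: the affected process receives a SIGBUS whose siginfo describes
the bad access, so a process that wants to log or contain such errors can
install an SA_SIGINFO handler. The exact si_code depends on the delivery
path (memory_failure() uses BUS_MCEERR_AR/AO, while the workaround above
passes the code from the arm64 fault_info table), so the sketch below only
prints it; the program structure is an illustrative assumption.

#define _POSIX_C_SOURCE 200809L
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

/* Demo handler: fprintf() is not async-signal-safe, acceptable only here. */
static void sigbus_handler(int sig, siginfo_t *info, void *ucontext)
{
	(void)sig;
	(void)ucontext;
	fprintf(stderr, "SIGBUS: si_code=%d si_addr=%p\n",
		info->si_code, info->si_addr);
	_exit(1);
}

int main(void)
{
	struct sigaction sa = {
		.sa_sigaction = sigbus_handler,
		.sa_flags = SA_SIGINFO,
	};

	sigemptyset(&sa.sa_mask);
	if (sigaction(SIGBUS, &sa, NULL) != 0) {
		perror("sigaction");
		return 1;
	}

	/* ... touch memory that may be poisoned; on error the handler runs ... */
	return 0;
}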
^ permalink raw reply related [flat|nested] 24+ messages in thread
* Re: [PATCH v12 6/6] arm64: send SIGBUS to user process for SEA exception
2024-05-28 8:59 ` [PATCH v12 6/6] arm64: send SIGBUS to user process for SEA exception Tong Tiangen
@ 2024-08-19 12:08 ` Jonathan Cameron
2024-08-20 3:45 ` Tong Tiangen
0 siblings, 1 reply; 24+ messages in thread
From: Jonathan Cameron @ 2024-08-19 12:08 UTC (permalink / raw)
To: Tong Tiangen
Cc: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton,
James Morse, Robin Murphy, Andrey Konovalov, Dmitry Vyukov,
Vincenzo Frascino, Michael Ellerman, Nicholas Piggin,
Andrey Ryabinin, Alexander Potapenko, Christophe Leroy,
Aneesh Kumar K.V, Naveen N. Rao, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
linux-arm-kernel, linux-mm, linuxppc-dev, linux-kernel,
wangkefeng.wang, Guohanjun
On Tue, 28 May 2024 16:59:15 +0800
Tong Tiangen <tongtiangen@huawei.com> wrote:
> For an SEA exception, the kernel needs to take some action to recover from
> the memory error, such as isolating the poisoned page and killing the
> failing thread, which is done in memory_failure().
>
> During our testing, the failing thread could not be killed due to issue [1].
> Here, I temporarily work around the issue by sending the signal to the user
> process directly in do_sea(). After [1] is merged, this patch can be rolled
> back; otherwise the SIGBUS would be sent repeatedly.
>
> [1]https://lore.kernel.org/lkml/20240204080144.7977-1-xueshuai@linux.alibaba.com/
What's the status of that one? Seems better to help get that in than
carry this.
Jonathan
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v12 6/6] arm64: send SIGBUS to user process for SEA exception
2024-08-19 12:08 ` Jonathan Cameron
@ 2024-08-20 3:45 ` Tong Tiangen
0 siblings, 0 replies; 24+ messages in thread
From: Tong Tiangen @ 2024-08-20 3:45 UTC (permalink / raw)
To: Jonathan Cameron
Cc: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton,
James Morse, Robin Murphy, Andrey Konovalov, Dmitry Vyukov,
Vincenzo Frascino, Michael Ellerman, Nicholas Piggin,
Andrey Ryabinin, Alexander Potapenko, Christophe Leroy,
Aneesh Kumar K.V, Naveen N. Rao, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
linux-arm-kernel, linux-mm, linuxppc-dev, linux-kernel,
wangkefeng.wang, Guohanjun
On 2024/8/19 20:08, Jonathan Cameron wrote:
> On Tue, 28 May 2024 16:59:15 +0800
> Tong Tiangen <tongtiangen@huawei.com> wrote:
>
>> For an SEA exception, the kernel needs to take some action to recover from
>> the memory error, such as isolating the poisoned page and killing the
>> failing thread, which is done in memory_failure().
>>
>> During our testing, the failing thread could not be killed due to issue [1].
>> Here, I temporarily work around the issue by sending the signal to the user
>> process directly in do_sea(). After [1] is merged, this patch can be rolled
>> back; otherwise the SIGBUS would be sent repeatedly.
>>
>> [1]https://lore.kernel.org/lkml/20240204080144.7977-1-xueshuai@linux.alibaba.com/
> What's the status of that one? Seems better to help get that in than
> carry this.
>
> Jonathan
> .
That patch set has not been merged yet; the latest version is still v11.
The intent here was to keep this feature functionally complete. However,
since this temporary workaround may cause confusion, it is not appropriate
to make the change here, so this patch will not be included. The related
impact is described in patch 0.
Thanks,
Tong.
^ permalink raw reply [flat|nested] 24+ messages in thread