* [v3 PATCH] arm64: mm: Fix kexec failure after pte_mkwrite_novma() change
@ 2025-12-04 6:27 Jianpeng Chang
2025-12-04 8:07 ` Anshuman Khandual
` (2 more replies)
0 siblings, 3 replies; 9+ messages in thread
From: Jianpeng Chang @ 2025-12-04 6:27 UTC (permalink / raw)
To: catalin.marinas, will, ying.huang, ardb, anshuman.khandual
Cc: linux-arm-kernel, linux-kernel, Jianpeng Chang
Commit 143937ca51cc ("arm64, mm: avoid always making PTE dirty in
pte_mkwrite()") modified pte_mkwrite_novma() to only clear PTE_RDONLY
when the page is already dirty (PTE_DIRTY is set). While this optimization
prevents unnecessary dirty page marking in normal memory management paths,
it breaks kexec on some platforms like NXP LS1043.
The issue occurs in the kexec code path:
1. machine_kexec_post_load() calls trans_pgd_create_copy() to create a
writable copy of the linear mapping
2. _copy_pte() calls pte_mkwrite_novma() to ensure all pages in the copy
   are writable, so the new kernel image can be copied through them
3. With the new logic, clean pages (without PTE_DIRTY) remain read-only
4. When kexec tries to copy the new kernel image through the linear
mapping, it fails on read-only pages, causing the system to hang
after "Bye!"
The same issue affects hibernation which uses the same trans_pgd code path.
Fix this by marking pages dirty with pte_mkdirty() in _copy_pte(), which
ensures pte_mkwrite_novma() clears PTE_RDONLY for both kexec and
hibernation, making all pages in the temporary mapping writable regardless
of their dirty state. This preserves the original commit's optimization
for normal memory management while fixing the kexec/hibernation regression.
Using pte_mkdirty() adds a redundant PTE_RDONLY clear when the page is
already writable, but this is acceptable since this is not a hot path and
only affects kexec/hibernation scenarios.
Fixes: 143937ca51cc ("arm64, mm: avoid always making PTE dirty in pte_mkwrite()")
Signed-off-by: Jianpeng Chang <jianpeng.chang.cn@windriver.com>
Reviewed-by: Huang Ying <ying.huang@linux.alibaba.com>
---
v3:
- Add a description of pte_mkdirty() to the commit message
- Note the redundant bit operations in the commit message
- Fix the comments following review suggestions
v2: https://lore.kernel.org/all/20251202022707.2720933-1-jianpeng.chang.cn@windriver.com/
- Use pte_mkwrite_novma(pte_mkdirty(pte)) instead of manual bit manipulation
- Update comments to clarify that pte_mkwrite_novma() alone cannot be used
v1: https://lore.kernel.org/all/20251127034350.3600454-1-jianpeng.chang.cn@windriver.com/
arch/arm64/mm/trans_pgd.c | 17 +++++++++++++++--
1 file changed, 15 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c
index 18543b603c77..766883780d2a 100644
--- a/arch/arm64/mm/trans_pgd.c
+++ b/arch/arm64/mm/trans_pgd.c
@@ -40,8 +40,14 @@ static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr)
* Resume will overwrite areas that may be marked
* read only (code, rodata). Clear the RDONLY bit from
* the temporary mappings we use during restore.
+ *
+ * For both kexec and hibernation, writable accesses are required
+ * for all pages in the linear map to copy over new kernel image.
+ * Hence mark these pages dirty first via pte_mkdirty() to ensure
+ * pte_mkwrite_novma() subsequently clears PTE_RDONLY - providing
+ * required write access for the pages.
*/
- __set_pte(dst_ptep, pte_mkwrite_novma(pte));
+ __set_pte(dst_ptep, pte_mkwrite_novma(pte_mkdirty(pte)));
} else if (!pte_none(pte)) {
/*
* debug_pagealloc will removed the PTE_VALID bit if
@@ -57,7 +63,14 @@ static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr)
*/
BUG_ON(!pfn_valid(pte_pfn(pte)));
- __set_pte(dst_ptep, pte_mkvalid(pte_mkwrite_novma(pte)));
+ /*
+ * For both kexec and hibernation, writable accesses are required
+ * for all pages in the linear map to copy over new kernel image.
+ * Hence mark these pages dirty first via pte_mkdirty() to ensure
+ * pte_mkwrite_novma() subsequently clears PTE_RDONLY - providing
+ * required write access for the pages.
+ */
+ __set_pte(dst_ptep, pte_mkvalid(pte_mkwrite_novma(pte_mkdirty(pte))));
}
}
--
2.52.0
^ permalink raw reply related [flat|nested] 9+ messages in thread* Re: [v3 PATCH] arm64: mm: Fix kexec failure after pte_mkwrite_novma() change 2025-12-04 6:27 [v3 PATCH] arm64: mm: Fix kexec failure after pte_mkwrite_novma() change Jianpeng Chang @ 2025-12-04 8:07 ` Anshuman Khandual 2025-12-04 8:16 ` Chang, Jianpeng (CN) 2026-01-02 18:53 ` Catalin Marinas 2026-02-12 18:51 ` Guenter Roeck 2 siblings, 1 reply; 9+ messages in thread From: Anshuman Khandual @ 2025-12-04 8:07 UTC (permalink / raw) To: Jianpeng Chang, catalin.marinas, will, ying.huang, ardb Cc: linux-arm-kernel, linux-kernel On 04/12/25 11:57 AM, Jianpeng Chang wrote: > Commit 143937ca51cc ("arm64, mm: avoid always making PTE dirty in > pte_mkwrite()") modified pte_mkwrite_novma() to only clear PTE_RDONLY > when the page is already dirty (PTE_DIRTY is set). While this optimization > prevents unnecessary dirty page marking in normal memory management paths, > it breaks kexec on some platforms like NXP LS1043. Why is this problem only applicable for NXP LS1043 ? OR is that the only platform you have observed the issue ? although that is problematic else where as well. > > The issue occurs in the kexec code path: > 1. machine_kexec_post_load() calls trans_pgd_create_copy() to create a > writable copy of the linear mapping > 2. _copy_pte() calls pte_mkwrite_novma() to ensure all pages in the copy > are writable for the new kernel image copying > 3. With the new logic, clean pages (without PTE_DIRTY) remain read-only > 4. When kexec tries to copy the new kernel image through the linear > mapping, it fails on read-only pages, causing the system to hang > after "Bye!" > > The same issue affects hibernation which uses the same trans_pgd code path. > > Fix this by marking pages dirty with pte_mkdirty() in _copy_pte(), which > ensures pte_mkwrite_novma() clears PTE_RDONLY for both kexec and > hibernation, making all pages in the temporary mapping writable regardless > of their dirty state. 
This preserves the original commit's optimization > for normal memory management while fixing the kexec/hibernation regression. > > Using pte_mkdirty() causes redundant bit operations when the page is > already writable (redundant PTE_RDONLY clearing), but this is acceptable > since it's not a hot path and only affects kexec/hibernation scenarios. > > Fixes: 143937ca51cc ("arm64, mm: avoid always making PTE dirty in pte_mkwrite()") > Signed-off-by: Jianpeng Chang <jianpeng.chang.cn@windriver.com> > Reviewed-by: Huang Ying <ying.huang@linux.alibaba.com> > --- > v3: > - Add the description about pte_mkdirty in commit message > - Note that the redundant bit operations in commit message > - Fix the comments following the suggestions > v2: https://lore.kernel.org/all/20251202022707.2720933-1-jianpeng.chang.cn@windriver.com/ > - Use pte_mkwrite_novma(pte_mkdirty(pte)) instead of manual bit manipulation > - Updated comments to clarify pte_mkwrite_novma() alone cannot be used > v1: https://lore.kernel.org/all/20251127034350.3600454-1-jianpeng.chang.cn@windriver.com/ > > arch/arm64/mm/trans_pgd.c | 17 +++++++++++++++-- > 1 file changed, 15 insertions(+), 2 deletions(-) > > diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c > index 18543b603c77..766883780d2a 100644 > --- a/arch/arm64/mm/trans_pgd.c > +++ b/arch/arm64/mm/trans_pgd.c > @@ -40,8 +40,14 @@ static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr) > * Resume will overwrite areas that may be marked > * read only (code, rodata). Clear the RDONLY bit from > * the temporary mappings we use during restore. > + * > + * For both kexec and hibernation, writable accesses are required > + * for all pages in the linear map to copy over new kernel image. > + * Hence mark these pages dirty first via pte_mkdirty() to ensure > + * pte_mkwrite_novma() subsequently clears PTE_RDONLY - providing > + * required write access for the pages. 
> */ > - __set_pte(dst_ptep, pte_mkwrite_novma(pte)); > + __set_pte(dst_ptep, pte_mkwrite_novma(pte_mkdirty(pte))); > } else if (!pte_none(pte)) { > /* > * debug_pagealloc will removed the PTE_VALID bit if > @@ -57,7 +63,14 @@ static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr) > */ > BUG_ON(!pfn_valid(pte_pfn(pte))); > > - __set_pte(dst_ptep, pte_mkvalid(pte_mkwrite_novma(pte))); > + /* > + * For both kexec and hibernation, writable accesses are required > + * for all pages in the linear map to copy over new kernel image. > + * Hence mark these pages dirty first via pte_mkdirty() to ensure > + * pte_mkwrite_novma() subsequently clears PTE_RDONLY - providing > + * required write access for the pages. > + */ > + __set_pte(dst_ptep, pte_mkvalid(pte_mkwrite_novma(pte_mkdirty(pte)))); > } > } > ^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [v3 PATCH] arm64: mm: Fix kexec failure after pte_mkwrite_novma() change 2025-12-04 8:07 ` Anshuman Khandual @ 2025-12-04 8:16 ` Chang, Jianpeng (CN) 2025-12-10 7:31 ` Jianpeng Chang 0 siblings, 1 reply; 9+ messages in thread From: Chang, Jianpeng (CN) @ 2025-12-04 8:16 UTC (permalink / raw) To: Anshuman Khandual, catalin.marinas, will, ying.huang, ardb Cc: linux-arm-kernel, linux-kernel On 12/4/2025 4:07 PM, Anshuman Khandual wrote: > CAUTION: This email comes from a non Wind River email account! > Do not click links or open attachments unless you recognize the sender and know the content is safe. > > On 04/12/25 11:57 AM, Jianpeng Chang wrote: >> Commit 143937ca51cc ("arm64, mm: avoid always making PTE dirty in >> pte_mkwrite()") modified pte_mkwrite_novma() to only clear PTE_RDONLY >> when the page is already dirty (PTE_DIRTY is set). While this optimization >> prevents unnecessary dirty page marking in normal memory management paths, >> it breaks kexec on some platforms like NXP LS1043. > > Why is this problem only applicable for NXP LS1043 ? OR is that the only > platform you have observed the issue ? although that is problematic else > where as well. Not only 1043. I found it on the NXP LS1043, and I have both NXP LS1043 and LS1046 boards available. They both have this issue. > >> >> The issue occurs in the kexec code path: >> 1. machine_kexec_post_load() calls trans_pgd_create_copy() to create a >> writable copy of the linear mapping >> 2. _copy_pte() calls pte_mkwrite_novma() to ensure all pages in the copy >> are writable for the new kernel image copying >> 3. With the new logic, clean pages (without PTE_DIRTY) remain read-only >> 4. When kexec tries to copy the new kernel image through the linear >> mapping, it fails on read-only pages, causing the system to hang >> after "Bye!" >> >> The same issue affects hibernation which uses the same trans_pgd code path. 
>> >> Fix this by marking pages dirty with pte_mkdirty() in _copy_pte(), which >> ensures pte_mkwrite_novma() clears PTE_RDONLY for both kexec and >> hibernation, making all pages in the temporary mapping writable regardless >> of their dirty state. This preserves the original commit's optimization >> for normal memory management while fixing the kexec/hibernation regression. >> >> Using pte_mkdirty() causes redundant bit operations when the page is >> already writable (redundant PTE_RDONLY clearing), but this is acceptable >> since it's not a hot path and only affects kexec/hibernation scenarios. >> >> Fixes: 143937ca51cc ("arm64, mm: avoid always making PTE dirty in pte_mkwrite()") >> Signed-off-by: Jianpeng Chang <jianpeng.chang.cn@windriver.com> >> Reviewed-by: Huang Ying <ying.huang@linux.alibaba.com> >> --- >> v3: >> - Add the description about pte_mkdirty in commit message >> - Note that the redundant bit operations in commit message >> - Fix the comments following the suggestions >> v2: https://lore.kernel.org/all/20251202022707.2720933-1-jianpeng.chang.cn@windriver.com/ >> - Use pte_mkwrite_novma(pte_mkdirty(pte)) instead of manual bit manipulation >> - Updated comments to clarify pte_mkwrite_novma() alone cannot be used >> v1: https://lore.kernel.org/all/20251127034350.3600454-1-jianpeng.chang.cn@windriver.com/ >> >> arch/arm64/mm/trans_pgd.c | 17 +++++++++++++++-- >> 1 file changed, 15 insertions(+), 2 deletions(-) >> >> diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c >> index 18543b603c77..766883780d2a 100644 >> --- a/arch/arm64/mm/trans_pgd.c >> +++ b/arch/arm64/mm/trans_pgd.c >> @@ -40,8 +40,14 @@ static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr) >> * Resume will overwrite areas that may be marked >> * read only (code, rodata). Clear the RDONLY bit from >> * the temporary mappings we use during restore. 
>> + * >> + * For both kexec and hibernation, writable accesses are required >> + * for all pages in the linear map to copy over new kernel image. >> + * Hence mark these pages dirty first via pte_mkdirty() to ensure >> + * pte_mkwrite_novma() subsequently clears PTE_RDONLY - providing >> + * required write access for the pages. >> */ >> - __set_pte(dst_ptep, pte_mkwrite_novma(pte)); >> + __set_pte(dst_ptep, pte_mkwrite_novma(pte_mkdirty(pte))); >> } else if (!pte_none(pte)) { >> /* >> * debug_pagealloc will removed the PTE_VALID bit if >> @@ -57,7 +63,14 @@ static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr) >> */ >> BUG_ON(!pfn_valid(pte_pfn(pte))); >> >> - __set_pte(dst_ptep, pte_mkvalid(pte_mkwrite_novma(pte))); >> + /* >> + * For both kexec and hibernation, writable accesses are required >> + * for all pages in the linear map to copy over new kernel image. >> + * Hence mark these pages dirty first via pte_mkdirty() to ensure >> + * pte_mkwrite_novma() subsequently clears PTE_RDONLY - providing >> + * required write access for the pages. >> + */ >> + __set_pte(dst_ptep, pte_mkvalid(pte_mkwrite_novma(pte_mkdirty(pte)))); >> } >> } >> > ^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [v3 PATCH] arm64: mm: Fix kexec failure after pte_mkwrite_novma() change 2025-12-04 8:16 ` Chang, Jianpeng (CN) @ 2025-12-10 7:31 ` Jianpeng Chang 0 siblings, 0 replies; 9+ messages in thread From: Jianpeng Chang @ 2025-12-10 7:31 UTC (permalink / raw) To: Anshuman Khandual, catalin.marinas, will, ying.huang, ardb Cc: linux-arm-kernel, linux-kernel On 12/4/25 4:16 PM, Chang, Jianpeng (CN) wrote: > > > On 12/4/2025 4:07 PM, Anshuman Khandual wrote: >> CAUTION: This email comes from a non Wind River email account! >> Do not click links or open attachments unless you recognize the sender >> and know the content is safe. >> >> On 04/12/25 11:57 AM, Jianpeng Chang wrote: >>> Commit 143937ca51cc ("arm64, mm: avoid always making PTE dirty in >>> pte_mkwrite()") modified pte_mkwrite_novma() to only clear PTE_RDONLY >>> when the page is already dirty (PTE_DIRTY is set). While this >>> optimization >>> prevents unnecessary dirty page marking in normal memory management >>> paths, >>> it breaks kexec on some platforms like NXP LS1043. >> >> Why is this problem only applicable for NXP LS1043 ? OR is that the only >> platform you have observed the issue ? although that is problematic else >> where as well. > > Not only 1043. I found it on the NXP LS1043, and I have both NXP LS1043 > and LS1046 boards available. They both have this issue. Hi Anshuman, Just following up on my previous response from a week ago, any updates? I borrowed an IMX8 board, which differs from the LS1043 as it's based on Cortex-A72 + Cortex-A53, and conducted the same test. When reproducing the issue on LS1043, I used the same Image as the first kernel with: kexec -l /boot/Image --reuse-cmdline. However, I couldn't reproduce it on IMX8 initially - the second kernel booted normally. 
Here are the differences between IMX8 and LS1043: root@nxp-ls1043:~# cat /proc/iomem | grep Kernel 81000000-824effff : Kernel code 82700000-82a8ffff : Kernel data root@nxp-ls1043:~# kexec -l /boot/Image --reuse-cmdline -d 2>&1 | grep segment image_arm64_load: kernel_segment: 0000000080000000 arm64_load_other_segments:730: purgatory sink: 0x0 nr_segments = 3 segment[0].buf = 0xffff9c5a2010 segment[0].bufsz = 0x194e200 segment[0].mem = 0x80000000 segment[0].memsz = 0x1a90000 segment[1].buf = 0xaaaad60cc180 segment[1].bufsz = 0xfa81 segment[1].mem = 0x81a90000 segment[1].memsz = 0x10000 segment[2].buf = 0xaaaad60dc1c0 segment[2].bufsz = 0x3660 segment[2].mem = 0x81aa0000 segment[2].memsz = 0x4000 root@nxp-imx8:~# cat /proc/iomem | grep Kernel 8a0000000-8a19bffff : Kernel code 8a1c20000-8a1f7ffff : Kernel data root@nxp-imx8:~# kexec -l /boot/Image --reuse-cmdline -d 2>&1 | grep segment image_arm64_load: kernel_segment: 0000000080200000 arm64_load_other_segments:730: purgatory sink: 0x0 nr_segments = 3 segment[0].buf = 0xffff990da010 segment[0].bufsz = 0x1ea2200 segment[0].mem = 0x80200000 segment[0].memsz = 0x1f80000 segment[1].buf = 0xffff99084010 segment[1].bufsz = 0x29839 segment[1].mem = 0x82180000 segment[1].memsz = 0x2a000 segment[2].buf = 0xaaaab0cdbc10 segment[2].bufsz = 0x3680 segment[2].mem = 0x821aa000 segment[2].memsz = 0x4000 From the logs, on LS1043, the second kernel segments happen to overlap with the kernel code pages, which are read-only. 
I was able to reproduce the same issue on IMX8 by forcing the overlap: kexec -l /boot/Image --reuse-cmdline --mem-min=0x898000000 --mem-max=0x8a1000000 root@nxp-imx8:~# kexec -l /boot/Image --reuse-cmdline --mem-min=0x898000000 --mem-max=0x8a1000000 -d 2>&1 | grep segment image_arm64_load: kernel_segment: 0000000898000000 arm64_load_other_segments:730: purgatory sink: 0x0 nr_segments = 3 segment[0].buf = 0xffff95e0a010 segment[0].bufsz = 0x1ea2200 segment[0].mem = 0x898000000 overlap segment[0].memsz = 0x1f80000 segment[1].buf = 0xffff95db4010 segment[1].bufsz = 0x29839 segment[1].mem = 0x899f80000 segment[1].memsz = 0x2a000 segment[2].buf = 0xaaaac05fbc10 segment[2].bufsz = 0x3680 segment[2].mem = 0x899faa000 segment[2].memsz = 0x4000 This explains why we haven't seen similar reports - the issue is memory layout dependent. However, I still prefer this fix because it's universal and works regardless of memory layout or kexec-tools address selection. We cannot expect kexec-tools to always find the "right" memory location, and fundamentally, we expect this temporary page table to be writable. I'm happy to know if you need any additional information or clarification. Thanks, Jianpeng > >> >>> >>> The issue occurs in the kexec code path: >>> 1. machine_kexec_post_load() calls trans_pgd_create_copy() to create a >>> writable copy of the linear mapping >>> 2. _copy_pte() calls pte_mkwrite_novma() to ensure all pages in the copy >>> are writable for the new kernel image copying >>> 3. With the new logic, clean pages (without PTE_DIRTY) remain read-only >>> 4. When kexec tries to copy the new kernel image through the linear >>> mapping, it fails on read-only pages, causing the system to hang >>> after "Bye!" >>> >>> The same issue affects hibernation which uses the same trans_pgd code >>> path. 
>>> >>> Fix this by marking pages dirty with pte_mkdirty() in _copy_pte(), which >>> ensures pte_mkwrite_novma() clears PTE_RDONLY for both kexec and >>> hibernation, making all pages in the temporary mapping writable >>> regardless >>> of their dirty state. This preserves the original commit's optimization >>> for normal memory management while fixing the kexec/hibernation >>> regression. >>> >>> Using pte_mkdirty() causes redundant bit operations when the page is >>> already writable (redundant PTE_RDONLY clearing), but this is acceptable >>> since it's not a hot path and only affects kexec/hibernation scenarios. >>> >>> Fixes: 143937ca51cc ("arm64, mm: avoid always making PTE dirty in >>> pte_mkwrite()") >>> Signed-off-by: Jianpeng Chang <jianpeng.chang.cn@windriver.com> >>> Reviewed-by: Huang Ying <ying.huang@linux.alibaba.com> >>> --- >>> v3: >>> - Add the description about pte_mkdirty in commit message >>> - Note that the redundant bit operations in commit message >>> - Fix the comments following the suggestions >>> v2: https://lore.kernel.org/all/20251202022707.2720933-1- >>> jianpeng.chang.cn@windriver.com/ >>> - Use pte_mkwrite_novma(pte_mkdirty(pte)) instead of manual bit >>> manipulation >>> - Updated comments to clarify pte_mkwrite_novma() alone cannot be >>> used >>> v1: https://lore.kernel.org/all/20251127034350.3600454-1- >>> jianpeng.chang.cn@windriver.com/ >>> >>> arch/arm64/mm/trans_pgd.c | 17 +++++++++++++++-- >>> 1 file changed, 15 insertions(+), 2 deletions(-) >>> >>> diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c >>> index 18543b603c77..766883780d2a 100644 >>> --- a/arch/arm64/mm/trans_pgd.c >>> +++ b/arch/arm64/mm/trans_pgd.c >>> @@ -40,8 +40,14 @@ static void _copy_pte(pte_t *dst_ptep, pte_t >>> *src_ptep, unsigned long addr) >>> * Resume will overwrite areas that may be marked >>> * read only (code, rodata). Clear the RDONLY bit from >>> * the temporary mappings we use during restore. 
>>> + * >>> + * For both kexec and hibernation, writable accesses >>> are required >>> + * for all pages in the linear map to copy over new >>> kernel image. >>> + * Hence mark these pages dirty first via pte_mkdirty() >>> to ensure >>> + * pte_mkwrite_novma() subsequently clears PTE_RDONLY - >>> providing >>> + * required write access for the pages. >>> */ >>> - __set_pte(dst_ptep, pte_mkwrite_novma(pte)); >>> + __set_pte(dst_ptep, pte_mkwrite_novma(pte_mkdirty(pte))); >>> } else if (!pte_none(pte)) { >>> /* >>> * debug_pagealloc will removed the PTE_VALID bit if >>> @@ -57,7 +63,14 @@ static void _copy_pte(pte_t *dst_ptep, pte_t >>> *src_ptep, unsigned long addr) >>> */ >>> BUG_ON(!pfn_valid(pte_pfn(pte))); >>> >>> - __set_pte(dst_ptep, pte_mkvalid(pte_mkwrite_novma(pte))); >>> + /* >>> + * For both kexec and hibernation, writable accesses >>> are required >>> + * for all pages in the linear map to copy over new >>> kernel image. >>> + * Hence mark these pages dirty first via pte_mkdirty() >>> to ensure >>> + * pte_mkwrite_novma() subsequently clears PTE_RDONLY - >>> providing >>> + * required write access for the pages. >>> + */ >>> + __set_pte(dst_ptep, >>> pte_mkvalid(pte_mkwrite_novma(pte_mkdirty(pte)))); >>> } >>> } >>> >> > ^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [v3 PATCH] arm64: mm: Fix kexec failure after pte_mkwrite_novma() change 2025-12-04 6:27 [v3 PATCH] arm64: mm: Fix kexec failure after pte_mkwrite_novma() change Jianpeng Chang 2025-12-04 8:07 ` Anshuman Khandual @ 2026-01-02 18:53 ` Catalin Marinas 2026-01-06 13:30 ` Huang, Ying 2026-02-12 18:51 ` Guenter Roeck 2 siblings, 1 reply; 9+ messages in thread From: Catalin Marinas @ 2026-01-02 18:53 UTC (permalink / raw) To: Jianpeng Chang Cc: will, ying.huang, ardb, anshuman.khandual, linux-arm-kernel, linux-kernel On Thu, Dec 04, 2025 at 02:27:22PM +0800, Jianpeng Chang wrote: > Commit 143937ca51cc ("arm64, mm: avoid always making PTE dirty in > pte_mkwrite()") modified pte_mkwrite_novma() to only clear PTE_RDONLY > when the page is already dirty (PTE_DIRTY is set). While this optimization > prevents unnecessary dirty page marking in normal memory management paths, > it breaks kexec on some platforms like NXP LS1043. > > The issue occurs in the kexec code path: > 1. machine_kexec_post_load() calls trans_pgd_create_copy() to create a > writable copy of the linear mapping > 2. _copy_pte() calls pte_mkwrite_novma() to ensure all pages in the copy > are writable for the new kernel image copying > 3. With the new logic, clean pages (without PTE_DIRTY) remain read-only > 4. When kexec tries to copy the new kernel image through the linear > mapping, it fails on read-only pages, causing the system to hang > after "Bye!" > > The same issue affects hibernation which uses the same trans_pgd code path. > > Fix this by marking pages dirty with pte_mkdirty() in _copy_pte(), which > ensures pte_mkwrite_novma() clears PTE_RDONLY for both kexec and > hibernation, making all pages in the temporary mapping writable regardless > of their dirty state. This preserves the original commit's optimization > for normal memory management while fixing the kexec/hibernation regression. 
> > Using pte_mkdirty() causes redundant bit operations when the page is > already writable (redundant PTE_RDONLY clearing), but this is acceptable > since it's not a hot path and only affects kexec/hibernation scenarios. > > Fixes: 143937ca51cc ("arm64, mm: avoid always making PTE dirty in pte_mkwrite()") > Signed-off-by: Jianpeng Chang <jianpeng.chang.cn@windriver.com> > Reviewed-by: Huang Ying <ying.huang@linux.alibaba.com> > --- > v3: > - Add the description about pte_mkdirty in commit message > - Note that the redundant bit operations in commit message > - Fix the comments following the suggestions > v2: https://lore.kernel.org/all/20251202022707.2720933-1-jianpeng.chang.cn@windriver.com/ > - Use pte_mkwrite_novma(pte_mkdirty(pte)) instead of manual bit manipulation > - Updated comments to clarify pte_mkwrite_novma() alone cannot be used > v1: https://lore.kernel.org/all/20251127034350.3600454-1-jianpeng.chang.cn@windriver.com/ > > arch/arm64/mm/trans_pgd.c | 17 +++++++++++++++-- > 1 file changed, 15 insertions(+), 2 deletions(-) > > diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c > index 18543b603c77..766883780d2a 100644 > --- a/arch/arm64/mm/trans_pgd.c > +++ b/arch/arm64/mm/trans_pgd.c > @@ -40,8 +40,14 @@ static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr) > * Resume will overwrite areas that may be marked > * read only (code, rodata). Clear the RDONLY bit from > * the temporary mappings we use during restore. > + * > + * For both kexec and hibernation, writable accesses are required > + * for all pages in the linear map to copy over new kernel image. > + * Hence mark these pages dirty first via pte_mkdirty() to ensure > + * pte_mkwrite_novma() subsequently clears PTE_RDONLY - providing > + * required write access for the pages. 
> */ > - __set_pte(dst_ptep, pte_mkwrite_novma(pte)); > + __set_pte(dst_ptep, pte_mkwrite_novma(pte_mkdirty(pte))); > } else if (!pte_none(pte)) { > /* > * debug_pagealloc will removed the PTE_VALID bit if > @@ -57,7 +63,14 @@ static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr) > */ > BUG_ON(!pfn_valid(pte_pfn(pte))); > > - __set_pte(dst_ptep, pte_mkvalid(pte_mkwrite_novma(pte))); > + /* > + * For both kexec and hibernation, writable accesses are required > + * for all pages in the linear map to copy over new kernel image. > + * Hence mark these pages dirty first via pte_mkdirty() to ensure > + * pte_mkwrite_novma() subsequently clears PTE_RDONLY - providing > + * required write access for the pages. > + */ > + __set_pte(dst_ptep, pte_mkvalid(pte_mkwrite_novma(pte_mkdirty(pte)))); > } > } Looking through the history, in 4.16 commit 41acec624087 ("arm64: kpti: Make use of nG dependent on arm64_kernel_unmapped_at_el0()") simplified PAGE_KERNEL to only depend on PROT_NORMAL. All correct so far with PAGE_KERNEL still having PTE_DIRTY. Later on in 5.4, commit aa57157be69f ("arm64: Ensure VM_WRITE|VM_SHARED ptes are clean by default") dropped PTE_DIRTY from PROT_NORMAL. This wasn't an issue even with DBM disabled as we don't set PTE_RDONLY, so it's considered pte_hw_dirty() anyway. Huang's commit you mentioned changed the assumptions above, so pte_mkwrite() no longer makes a read-only (kernel) pte fully writeable. This is fine for user mappings (either trap or DBM will make it fully writeable) but not for kernel mappings. 
Your commit above should work but I wonder whether it's better to go back to having the kernel mappings marked dirty irrespective of their permission: --------------8<--------------------------- diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h index 161e8660eddd..113c257d19c4 100644 --- a/arch/arm64/include/asm/pgtable-prot.h +++ b/arch/arm64/include/asm/pgtable-prot.h @@ -50,11 +50,11 @@ #define _PAGE_DEFAULT (_PROT_DEFAULT | PTE_ATTRINDX(MT_NORMAL)) -#define _PAGE_KERNEL (PROT_NORMAL) -#define _PAGE_KERNEL_RO ((PROT_NORMAL & ~PTE_WRITE) | PTE_RDONLY) -#define _PAGE_KERNEL_ROX ((PROT_NORMAL & ~(PTE_WRITE | PTE_PXN)) | PTE_RDONLY) -#define _PAGE_KERNEL_EXEC (PROT_NORMAL & ~PTE_PXN) -#define _PAGE_KERNEL_EXEC_CONT ((PROT_NORMAL & ~PTE_PXN) | PTE_CONT) +#define _PAGE_KERNEL (PROT_NORMAL | PTE_DIRTY) +#define _PAGE_KERNEL_RO ((PROT_NORMAL & ~PTE_WRITE) | PTE_RDONLY | PTE_DIRTY) +#define _PAGE_KERNEL_ROX ((PROT_NORMAL & ~(PTE_WRITE | PTE_PXN)) | PTE_RDONLY | PTE_DIRTY) +#define _PAGE_KERNEL_EXEC ((PROT_NORMAL & ~PTE_PXN) | PTE_DIRTY) +#define _PAGE_KERNEL_EXEC_CONT ((PROT_NORMAL & ~PTE_PXN) | PTE_CONT | PTE_DIRTY) #define _PAGE_SHARED (_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN | PTE_WRITE) #define _PAGE_SHARED_EXEC (_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_WRITE) --------------8<--------------------------- -- Catalin ^ permalink raw reply related [flat|nested] 9+ messages in thread
* Re: [v3 PATCH] arm64: mm: Fix kexec failure after pte_mkwrite_novma() change
  2026-01-02 18:53 ` Catalin Marinas
@ 2026-01-06 13:30   ` Huang, Ying
  2026-01-09 11:51     ` Will Deacon
  0 siblings, 1 reply; 9+ messages in thread

From: Huang, Ying @ 2026-01-06 13:30 UTC (permalink / raw)
To: Catalin Marinas
Cc: Jianpeng Chang, will, ardb, anshuman.khandual, linux-arm-kernel, linux-kernel

Hi, Catalin,

Sorry for the late reply.

Catalin Marinas <catalin.marinas@arm.com> writes:

> On Thu, Dec 04, 2025 at 02:27:22PM +0800, Jianpeng Chang wrote:
>> Commit 143937ca51cc ("arm64, mm: avoid always making PTE dirty in
>> pte_mkwrite()") modified pte_mkwrite_novma() to only clear PTE_RDONLY
>> when the page is already dirty (PTE_DIRTY is set). While this optimization
>> prevents unnecessary dirty page marking in normal memory management paths,
>> it breaks kexec on some platforms like NXP LS1043.
>>
>> The issue occurs in the kexec code path:
>> 1. machine_kexec_post_load() calls trans_pgd_create_copy() to create a
>>    writable copy of the linear mapping
>> 2. _copy_pte() calls pte_mkwrite_novma() to ensure all pages in the copy
>>    are writable for the new kernel image copying
>> 3. With the new logic, clean pages (without PTE_DIRTY) remain read-only
>> 4. When kexec tries to copy the new kernel image through the linear
>>    mapping, it fails on read-only pages, causing the system to hang
>>    after "Bye!"
>>
>> The same issue affects hibernation which uses the same trans_pgd code path.
>>
>> Fix this by marking pages dirty with pte_mkdirty() in _copy_pte(), which
>> ensures pte_mkwrite_novma() clears PTE_RDONLY for both kexec and
>> hibernation, making all pages in the temporary mapping writable regardless
>> of their dirty state. This preserves the original commit's optimization
>> for normal memory management while fixing the kexec/hibernation regression.
>>
>> Using pte_mkdirty() causes redundant bit operations when the page is
>> already writable (redundant PTE_RDONLY clearing), but this is acceptable
>> since it's not a hot path and only affects kexec/hibernation scenarios.
>>
>> Fixes: 143937ca51cc ("arm64, mm: avoid always making PTE dirty in pte_mkwrite()")
>> Signed-off-by: Jianpeng Chang <jianpeng.chang.cn@windriver.com>
>> Reviewed-by: Huang Ying <ying.huang@linux.alibaba.com>
>> ---
>> v3:
>> - Add the description about pte_mkdirty in the commit message
>> - Note the redundant bit operations in the commit message
>> - Fix the comments following the suggestions
>> v2: https://lore.kernel.org/all/20251202022707.2720933-1-jianpeng.chang.cn@windriver.com/
>> - Use pte_mkwrite_novma(pte_mkdirty(pte)) instead of manual bit manipulation
>> - Updated comments to clarify pte_mkwrite_novma() alone cannot be used
>> v1: https://lore.kernel.org/all/20251127034350.3600454-1-jianpeng.chang.cn@windriver.com/
>>
>>  arch/arm64/mm/trans_pgd.c | 17 +++++++++++++++--
>>  1 file changed, 15 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c
>> index 18543b603c77..766883780d2a 100644
>> --- a/arch/arm64/mm/trans_pgd.c
>> +++ b/arch/arm64/mm/trans_pgd.c
>> @@ -40,8 +40,14 @@ static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr)
>>  		 * Resume will overwrite areas that may be marked
>>  		 * read only (code, rodata). Clear the RDONLY bit from
>>  		 * the temporary mappings we use during restore.
>> +		 *
>> +		 * For both kexec and hibernation, writable accesses are required
>> +		 * for all pages in the linear map to copy over new kernel image.
>> +		 * Hence mark these pages dirty first via pte_mkdirty() to ensure
>> +		 * pte_mkwrite_novma() subsequently clears PTE_RDONLY - providing
>> +		 * required write access for the pages.
>>  		 */
>> -		__set_pte(dst_ptep, pte_mkwrite_novma(pte));
>> +		__set_pte(dst_ptep, pte_mkwrite_novma(pte_mkdirty(pte)));
>>  	} else if (!pte_none(pte)) {
>>  		/*
>>  		 * debug_pagealloc will removed the PTE_VALID bit if
>> @@ -57,7 +63,14 @@ static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr)
>>  		 */
>>  		BUG_ON(!pfn_valid(pte_pfn(pte)));
>>
>> -		__set_pte(dst_ptep, pte_mkvalid(pte_mkwrite_novma(pte)));
>> +		/*
>> +		 * For both kexec and hibernation, writable accesses are required
>> +		 * for all pages in the linear map to copy over new kernel image.
>> +		 * Hence mark these pages dirty first via pte_mkdirty() to ensure
>> +		 * pte_mkwrite_novma() subsequently clears PTE_RDONLY - providing
>> +		 * required write access for the pages.
>> +		 */
>> +		__set_pte(dst_ptep, pte_mkvalid(pte_mkwrite_novma(pte_mkdirty(pte))));
>>  	}
>>  }
>
> Looking through the history, in 4.16 commit 41acec624087 ("arm64: kpti:
> Make use of nG dependent on arm64_kernel_unmapped_at_el0()") simplified
> PAGE_KERNEL to only depend on PROT_NORMAL. All correct so far, with
> PAGE_KERNEL still having PTE_DIRTY.
>
> Later on in 5.4, commit aa57157be69f ("arm64: Ensure VM_WRITE|VM_SHARED
> ptes are clean by default") dropped PTE_DIRTY from PROT_NORMAL. This
> wasn't an issue even with DBM disabled as we don't set PTE_RDONLY, so
> it's considered pte_hw_dirty() anyway.

Regardless of the kexec issue, I think it's reasonable to set PTE_DIRTY
when PTE_WRITE is set and PTE_RDONLY is clear. It's more consistent.

> Huang's commit you mentioned changed the assumptions above, so
> pte_mkwrite() no longer makes a read-only (kernel) pte fully writeable.
> This is fine for user mappings (either trap or DBM will make it fully
> writeable) but not for kernel mappings.
>
> Your commit above should work, but I wonder whether it's better to go
> back to having the kernel mappings marked dirty irrespective of their
> permission:
>
> --------------8<---------------------------
>
> diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
> index 161e8660eddd..113c257d19c4 100644
> --- a/arch/arm64/include/asm/pgtable-prot.h
> +++ b/arch/arm64/include/asm/pgtable-prot.h
> @@ -50,11 +50,11 @@
>
>  #define _PAGE_DEFAULT		(_PROT_DEFAULT | PTE_ATTRINDX(MT_NORMAL))
>
> -#define _PAGE_KERNEL		(PROT_NORMAL)
> -#define _PAGE_KERNEL_RO		((PROT_NORMAL & ~PTE_WRITE) | PTE_RDONLY)
> -#define _PAGE_KERNEL_ROX	((PROT_NORMAL & ~(PTE_WRITE | PTE_PXN)) | PTE_RDONLY)
> -#define _PAGE_KERNEL_EXEC	(PROT_NORMAL & ~PTE_PXN)
> -#define _PAGE_KERNEL_EXEC_CONT	((PROT_NORMAL & ~PTE_PXN) | PTE_CONT)
> +#define _PAGE_KERNEL		(PROT_NORMAL | PTE_DIRTY)
> +#define _PAGE_KERNEL_RO		((PROT_NORMAL & ~PTE_WRITE) | PTE_RDONLY | PTE_DIRTY)
> +#define _PAGE_KERNEL_ROX	((PROT_NORMAL & ~(PTE_WRITE | PTE_PXN)) | PTE_RDONLY | PTE_DIRTY)

IMHO, it does not seem entirely natural to make read-only kernel
mappings dirty unconditionally. However, it should work. I have no
strong opinion here either.

> +#define _PAGE_KERNEL_EXEC	((PROT_NORMAL & ~PTE_PXN) | PTE_DIRTY)
> +#define _PAGE_KERNEL_EXEC_CONT	((PROT_NORMAL & ~PTE_PXN) | PTE_CONT | PTE_DIRTY)
>
>  #define _PAGE_SHARED		(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN | PTE_WRITE)
>  #define _PAGE_SHARED_EXEC	(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_WRITE)
>
> --------------8<---------------------------

---
Best Regards,
Huang, Ying
* Re: [v3 PATCH] arm64: mm: Fix kexec failure after pte_mkwrite_novma() change
  2026-01-06 13:30 ` Huang, Ying
@ 2026-01-09 11:51   ` Will Deacon
  0 siblings, 0 replies; 9+ messages in thread

From: Will Deacon @ 2026-01-09 11:51 UTC (permalink / raw)
To: Huang, Ying
Cc: Catalin Marinas, Jianpeng Chang, ardb, anshuman.khandual, linux-arm-kernel, linux-kernel

On Tue, Jan 06, 2026 at 09:30:23PM +0800, Huang, Ying wrote:
> Catalin Marinas <catalin.marinas@arm.com> writes:
> > diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
> > index 161e8660eddd..113c257d19c4 100644
> > --- a/arch/arm64/include/asm/pgtable-prot.h
> > +++ b/arch/arm64/include/asm/pgtable-prot.h
> > @@ -50,11 +50,11 @@
> >
> >  #define _PAGE_DEFAULT		(_PROT_DEFAULT | PTE_ATTRINDX(MT_NORMAL))
> >
> > -#define _PAGE_KERNEL		(PROT_NORMAL)
> > -#define _PAGE_KERNEL_RO		((PROT_NORMAL & ~PTE_WRITE) | PTE_RDONLY)
> > -#define _PAGE_KERNEL_ROX	((PROT_NORMAL & ~(PTE_WRITE | PTE_PXN)) | PTE_RDONLY)
> > -#define _PAGE_KERNEL_EXEC	(PROT_NORMAL & ~PTE_PXN)
> > -#define _PAGE_KERNEL_EXEC_CONT	((PROT_NORMAL & ~PTE_PXN) | PTE_CONT)
> > +#define _PAGE_KERNEL		(PROT_NORMAL | PTE_DIRTY)
> > +#define _PAGE_KERNEL_RO		((PROT_NORMAL & ~PTE_WRITE) | PTE_RDONLY | PTE_DIRTY)
> > +#define _PAGE_KERNEL_ROX	((PROT_NORMAL & ~(PTE_WRITE | PTE_PXN)) | PTE_RDONLY | PTE_DIRTY)
>
> IMHO, it appears not absolutely natural to make read-only kernel mapping
> dirty unconditionally. However it should work. I have no strong
> opinions here too.

So I think that's what we *used* to do, and it's also what some other
architectures (notably, 32-bit ARM) continue to do. In which case, I'd
prefer to follow the tried-and-tested approach of marking all kernel
mappings as dirty unless there's a technical downside to doing that.

Will
* Re: [v3 PATCH] arm64: mm: Fix kexec failure after pte_mkwrite_novma() change
  2025-12-04  6:27 [v3 PATCH] arm64: mm: Fix kexec failure after pte_mkwrite_novma() change Jianpeng Chang
  2025-12-04  8:07 ` Anshuman Khandual
  2026-01-02 18:53 ` Catalin Marinas
@ 2026-02-12 18:51 ` Guenter Roeck
  2026-02-13 11:56   ` Will Deacon
  2 siblings, 1 reply; 9+ messages in thread

From: Guenter Roeck @ 2026-02-12 18:51 UTC (permalink / raw)
To: Jianpeng Chang
Cc: catalin.marinas, will, ying.huang, ardb, anshuman.khandual, linux-arm-kernel, linux-kernel

Hi,

On Thu, Dec 04, 2025 at 02:27:22PM +0800, Jianpeng Chang wrote:
> Commit 143937ca51cc ("arm64, mm: avoid always making PTE dirty in
> pte_mkwrite()") modified pte_mkwrite_novma() to only clear PTE_RDONLY
> when the page is already dirty (PTE_DIRTY is set). While this optimization
> prevents unnecessary dirty page marking in normal memory management paths,
> it breaks kexec on some platforms like NXP LS1043.
>
> The issue occurs in the kexec code path:
> 1. machine_kexec_post_load() calls trans_pgd_create_copy() to create a
>    writable copy of the linear mapping
> 2. _copy_pte() calls pte_mkwrite_novma() to ensure all pages in the copy
>    are writable for the new kernel image copying
> 3. With the new logic, clean pages (without PTE_DIRTY) remain read-only
> 4. When kexec tries to copy the new kernel image through the linear
>    mapping, it fails on read-only pages, causing the system to hang
>    after "Bye!"
>
> The same issue affects hibernation which uses the same trans_pgd code path.
>
> Fix this by marking pages dirty with pte_mkdirty() in _copy_pte(), which
> ensures pte_mkwrite_novma() clears PTE_RDONLY for both kexec and
> hibernation, making all pages in the temporary mapping writable regardless
> of their dirty state. This preserves the original commit's optimization
> for normal memory management while fixing the kexec/hibernation regression.
>
> Using pte_mkdirty() causes redundant bit operations when the page is
> already writable (redundant PTE_RDONLY clearing), but this is acceptable
> since it's not a hot path and only affects kexec/hibernation scenarios.
>
> Fixes: 143937ca51cc ("arm64, mm: avoid always making PTE dirty in pte_mkwrite()")
> Signed-off-by: Jianpeng Chang <jianpeng.chang.cn@windriver.com>
> Reviewed-by: Huang Ying <ying.huang@linux.alibaba.com>

We (Google) are experiencing this problem on servers using the Ampere
Siryn CPU. It has now bubbled down all the way to v6.6.y (and maybe
further), essentially making kexec unusable on affected systems unless
the backport of commit 143937ca51cc is dropped.

What is the status of this patch?

Thanks,
Guenter
* Re: [v3 PATCH] arm64: mm: Fix kexec failure after pte_mkwrite_novma() change
  2026-02-12 18:51 ` Guenter Roeck
@ 2026-02-13 11:56   ` Will Deacon
  0 siblings, 0 replies; 9+ messages in thread

From: Will Deacon @ 2026-02-13 11:56 UTC (permalink / raw)
To: Guenter Roeck
Cc: Jianpeng Chang, catalin.marinas, ying.huang, ardb, anshuman.khandual, linux-arm-kernel, linux-kernel

On Thu, Feb 12, 2026 at 10:51:45AM -0800, Guenter Roeck wrote:
> On Thu, Dec 04, 2025 at 02:27:22PM +0800, Jianpeng Chang wrote:
> > Commit 143937ca51cc ("arm64, mm: avoid always making PTE dirty in
> > pte_mkwrite()") modified pte_mkwrite_novma() to only clear PTE_RDONLY
> > when the page is already dirty (PTE_DIRTY is set). While this optimization
> > prevents unnecessary dirty page marking in normal memory management paths,
> > it breaks kexec on some platforms like NXP LS1043.
> >
> > The issue occurs in the kexec code path:
> > 1. machine_kexec_post_load() calls trans_pgd_create_copy() to create a
> >    writable copy of the linear mapping
> > 2. _copy_pte() calls pte_mkwrite_novma() to ensure all pages in the copy
> >    are writable for the new kernel image copying
> > 3. With the new logic, clean pages (without PTE_DIRTY) remain read-only
> > 4. When kexec tries to copy the new kernel image through the linear
> >    mapping, it fails on read-only pages, causing the system to hang
> >    after "Bye!"
> >
> > The same issue affects hibernation which uses the same trans_pgd code path.
> >
> > Fix this by marking pages dirty with pte_mkdirty() in _copy_pte(), which
> > ensures pte_mkwrite_novma() clears PTE_RDONLY for both kexec and
> > hibernation, making all pages in the temporary mapping writable regardless
> > of their dirty state. This preserves the original commit's optimization
> > for normal memory management while fixing the kexec/hibernation regression.
> >
> > Using pte_mkdirty() causes redundant bit operations when the page is
> > already writable (redundant PTE_RDONLY clearing), but this is acceptable
> > since it's not a hot path and only affects kexec/hibernation scenarios.
> >
> > Fixes: 143937ca51cc ("arm64, mm: avoid always making PTE dirty in pte_mkwrite()")
> > Signed-off-by: Jianpeng Chang <jianpeng.chang.cn@windriver.com>
> > Reviewed-by: Huang Ying <ying.huang@linux.alibaba.com>
>
> We (Google) experience this problem with servers utilizing the Ampere Siryn
> CPU. It now bubbled down all the way to v6.6.y (and maybe further),
> essentially making kexec unusable on affected systems unless the backport
> of commit 143937ca51cc is dropped.
>
> What is the status of this patch?

Catalin and I would prefer to treat kernel mappings as dirty, as
suggested in:

https://lore.kernel.org/r/aVgUPNzXHHIBhh5A@arm.com

If somebody sends a (tested) patch, we'll take it.

Will
end of thread, other threads: [~2026-02-13 11:56 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-12-04  6:27 [v3 PATCH] arm64: mm: Fix kexec failure after pte_mkwrite_novma() change Jianpeng Chang
2025-12-04  8:07 ` Anshuman Khandual
2025-12-04  8:16   ` Chang, Jianpeng (CN)
2025-12-10  7:31     ` Jianpeng Chang
2026-01-02 18:53 ` Catalin Marinas
2026-01-06 13:30   ` Huang, Ying
2026-01-09 11:51     ` Will Deacon
2026-02-12 18:51 ` Guenter Roeck
2026-02-13 11:56   ` Will Deacon