* [PATCH v3 1/5] mm/cma: use nth_page() in place of direct struct page manipulation.
From: Zi Yan @ 2023-09-13 20:12 UTC
To: linux-mm, linux-kernel, linux-mips
Cc: Zi Yan, Andrew Morton, Thomas Bogendoerfer,
Matthew Wilcox (Oracle), David Hildenbrand, Mike Kravetz,
Muchun Song, Mike Rapoport (IBM), stable, Muchun Song
From: Zi Yan <ziy@nvidia.com>
When dealing with hugetlb pages, manipulating struct page pointers
directly can yield the wrong struct page, since struct page is not
guaranteed to be contiguous on SPARSEMEM without VMEMMAP. Use nth_page()
to handle it properly.
Fixes: 2813b9c02962 ("kasan, mm, arm64: tag non slab memory allocated via pagealloc")
Cc: <stable@vger.kernel.org>
Signed-off-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
---
mm/cma.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/cma.c b/mm/cma.c
index da2967c6a223..2b2494fd6b59 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -505,7 +505,7 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
*/
if (page) {
for (i = 0; i < count; i++)
- page_kasan_tag_reset(page + i);
+ page_kasan_tag_reset(nth_page(page, i));
}
if (ret && !no_warn) {
--
2.40.1
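
For reference, nth_page() — the helper every patch in this series leans
on — is roughly the following in this era's include/linux/mm.h (quoted
from memory, so treat the exact spelling as approximate). On SPARSEMEM
without VMEMMAP it translates through the pfn, because each memory
section's memmap may be a separate allocation; on every other
configuration it degrades to the plain pointer arithmetic it replaces,
so the fix costs nothing there:

#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
/* memmap chunks are per-section allocations: go via the pfn */
#define nth_page(page, n)	pfn_to_page(page_to_pfn((page)) + (n))
#else
/* memmap is virtually contiguous: plain pointer arithmetic is safe */
#define nth_page(page, n)	((page) + (n))
#endif
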
* Re: [PATCH v3 1/5] mm/cma: use nth_page() in place of direct struct page manipulation.
From: Zi Yan @ 2023-09-14 2:16 UTC
To: linux-mm, linux-kernel, linux-mips
Cc: Zi Yan, Andrew Morton, Thomas Bogendoerfer,
"Matthew Wilcox (Oracle)", David Hildenbrand,
Mike Kravetz, Muchun Song, "Mike Rapoport (IBM)", stable,
Muchun Song
On 13 Sep 2023, at 16:12, Zi Yan wrote:
> From: Zi Yan <ziy@nvidia.com>
>
> When dealing with hugetlb pages, manipulating struct page pointers
> directly can yield the wrong struct page, since struct page is not
> guaranteed to be contiguous on SPARSEMEM without VMEMMAP. Use
> nth_page() to handle it properly.
>
> Fixes: 2813b9c02962 ("kasan, mm, arm64: tag non slab memory allocated via pagealloc")
> Cc: <stable@vger.kernel.org>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> Reviewed-by: Muchun Song <songmuchun@bytedance.com>
> [...]

Without the fix, page_kasan_tag_reset() could reset the tag of the wrong
page, leading to incorrect KASAN results. No related bug has been
reported; the fix comes from code inspection.
--
Best Regards,
Yan, Zi
* Re: [PATCH v3 1/5] mm/cma: use nth_page() in place of direct struct page manipulation.
From: Andrew Morton @ 2023-09-14 19:05 UTC
To: Zi Yan
Cc: linux-mm, linux-kernel, linux-mips, Thomas Bogendoerfer,
"Matthew Wilcox (Oracle)", David Hildenbrand,
Mike Kravetz, Muchun Song, "Mike Rapoport (IBM)", stable,
Muchun Song
On Wed, 13 Sep 2023 22:16:45 -0400 Zi Yan <ziy@nvidia.com> wrote:
> No related bug has been reported; the fix comes from code
> inspection.
OK, thanks. Given that none of these appear urgent, I'll move the
series from the 6.6-rcX queue (mm-hotfixes-unstable) into the 6.7-rc1
queue (mm-unstable), while retaining the cc:stable, to give the patches
more testing time before they land in mainline and -stable kernels.
* [PATCH v3 2/5] mm/hugetlb: use nth_page() in place of direct struct page manipulation.
From: Zi Yan @ 2023-09-13 20:12 UTC
To: linux-mm, linux-kernel, linux-mips
Cc: Zi Yan, Andrew Morton, Thomas Bogendoerfer,
Matthew Wilcox (Oracle), David Hildenbrand, Mike Kravetz,
Muchun Song, Mike Rapoport (IBM), stable, Muchun Song
From: Zi Yan <ziy@nvidia.com>
When dealing with hugetlb pages, manipulating struct page pointers
directly can yield the wrong struct page, since struct page is not
guaranteed to be contiguous on SPARSEMEM without VMEMMAP. Use nth_page()
to handle it properly.
Fixes: 57a196a58421 ("hugetlb: simplify hugetlb handling in follow_page_mask")
Cc: <stable@vger.kernel.org>
Signed-off-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
---
mm/hugetlb.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index af74e83d92aa..8e68e6c53e66 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6469,7 +6469,7 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
}
}
- page += ((address & ~huge_page_mask(h)) >> PAGE_SHIFT);
+ page = nth_page(page, ((address & ~huge_page_mask(h)) >> PAGE_SHIFT));
/*
* Note that page may be a sub-page, and with vmemmap
--
2.40.1
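
To make the failure mode concrete, assume x86-64 defaults: 4 KB pages
and 128 MB memory sections (SECTION_SIZE_BITS = 27), so one section's
memmap covers 32768 struct pages, while a 1 GB gigantic hugetlb page has
262144 subpages spanning 8 sections. The snippet below is illustrative
only, not kernel code:

/* Illustrative; assumes 4 KB pages and 128 MB memory sections. */
struct page *head = page;		/* first subpage, in section A's memmap */
struct page *bad  = head + 40000;	/* 40000 > 32768: may point into whatever
					 * happens to follow section A's memmap */
struct page *good = nth_page(head, 40000); /* pfn_to_page() finds section B's memmap */
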
* Re: [PATCH v3 2/5] mm/hugetlb: use nth_page() in place of direct struct page manipulation.
From: Zi Yan @ 2023-09-14 2:20 UTC
To: linux-mm, linux-kernel, linux-mips
Cc: Zi Yan, Andrew Morton, Thomas Bogendoerfer,
"Matthew Wilcox (Oracle)", David Hildenbrand,
Mike Kravetz, Muchun Song, "Mike Rapoport (IBM)", stable,
Muchun Song
On 13 Sep 2023, at 16:12, Zi Yan wrote:
> From: Zi Yan <ziy@nvidia.com>
>
> When dealing with hugetlb pages, manipulating struct page pointers
> directly can yield the wrong struct page, since struct page is not
> guaranteed to be contiguous on SPARSEMEM without VMEMMAP. Use
> nth_page() to handle it properly.
>
> Fixes: 57a196a58421 ("hugetlb: simplify hugetlb handling in follow_page_mask")
> Cc: <stable@vger.kernel.org>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> Reviewed-by: Muchun Song <songmuchun@bytedance.com>
> [...]

Without the fix, the code might try to grab a wrong or non-existent
page, leading either to a page that can never be freed or to kernel
memory access errors. No bug has been reported; the fix comes from code
inspection.
--
Best Regards,
Yan, Zi
* [PATCH v3 3/5] mm/memory_hotplug: use pfn math in place of direct struct page manipulation.
From: Zi Yan @ 2023-09-13 20:12 UTC
To: linux-mm, linux-kernel, linux-mips
Cc: Zi Yan, Andrew Morton, Thomas Bogendoerfer,
Matthew Wilcox (Oracle), David Hildenbrand, Mike Kravetz,
Muchun Song, Mike Rapoport (IBM), stable, Muchun Song
From: Zi Yan <ziy@nvidia.com>
When dealing with hugetlb pages, manipulating struct page pointers
directly can yield the wrong struct page, since struct page is not
guaranteed to be contiguous on SPARSEMEM without VMEMMAP. Use pfn
calculation to handle it properly.
Fixes: eeb0efd071d8 ("mm,memory_hotplug: fix scan_movable_pages() for gigantic hugepages")
Cc: <stable@vger.kernel.org>
Signed-off-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: David Hildenbrand <david@redhat.com>
---
mm/memory_hotplug.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 1b03f4ec6fd2..3b301c4023ff 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1689,7 +1689,7 @@ static int scan_movable_pages(unsigned long start, unsigned long end,
*/
if (HPageMigratable(head))
goto found;
- skip = compound_nr(head) - (page - head);
+ skip = compound_nr(head) - (pfn - page_to_pfn(head));
pfn += skip - 1;
}
return -ENOENT;
--
2.40.1
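
This patch uses raw pfn math rather than nth_page() because the loop
already holds the pfn. The helper below is hypothetical — it exists only
to spell out the invariant: pfns are linear across all of physical
memory, whereas (page - head) is pointer subtraction, which is only
meaningful when both pointers lie inside one contiguous memmap
allocation:

/* Hypothetical helper, for exposition only. */
static unsigned long subpage_index(struct page *head, unsigned long pfn)
{
	/* Equal to (page - head) when the memmap is contiguous, but
	 * valid across section boundaries in every configuration. */
	return pfn - page_to_pfn(head);
}

With it, the fixed line reads
skip = compound_nr(head) - subpage_index(head, pfn);.
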
* Re: [PATCH v3 3/5] mm/memory_hotplug: use pfn math in place of direct struct page manipulation.
From: Zi Yan @ 2023-09-14 2:35 UTC
To: linux-mm, linux-kernel, linux-mips
Cc: Zi Yan, Andrew Morton, Thomas Bogendoerfer,
"Matthew Wilcox (Oracle)", David Hildenbrand,
Mike Kravetz, Muchun Song, "Mike Rapoport (IBM)", stable,
Muchun Song
On 13 Sep 2023, at 16:12, Zi Yan wrote:
> From: Zi Yan <ziy@nvidia.com>
>
> When dealing with hugetlb pages, manipulating struct page pointers
> directly can yield the wrong struct page, since struct page is not
> guaranteed to be contiguous on SPARSEMEM without VMEMMAP. Use pfn
> calculation to handle it properly.
>
> Fixes: eeb0efd071d8 ("mm,memory_hotplug: fix scan_movable_pages() for gigantic hugepages")
> Cc: <stable@vger.kernel.org>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> Reviewed-by: Muchun Song <songmuchun@bytedance.com>
> Acked-by: David Hildenbrand <david@redhat.com>
> [...]

Without the fix, the wrong number of pages might be skipped. Since skip
cannot be negative, scan_movable_pages() can end early, miss a movable
page, and return -ENOENT; that in turn can make offline_pages() fail. No
bug has been reported; the fix comes from code inspection.
--
Best Regards,
Yan, Zi
* [PATCH v3 4/5] fs: use nth_page() in place of direct struct page manipulation.
From: Zi Yan @ 2023-09-13 20:12 UTC
To: linux-mm, linux-kernel, linux-mips
Cc: Zi Yan, Andrew Morton, Thomas Bogendoerfer,
Matthew Wilcox (Oracle), David Hildenbrand, Mike Kravetz,
Muchun Song, Mike Rapoport (IBM), stable, Muchun Song
From: Zi Yan <ziy@nvidia.com>
When dealing with hugetlb pages, struct page is not guaranteed to be
contiguous on SPARSEMEM without VMEMMAP. Use nth_page() to handle it
properly.
Fixes: 38c1ddbde6c6 ("hugetlbfs: improve read HWPOISON hugepage")
Cc: <stable@vger.kernel.org>
Signed-off-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
---
fs/hugetlbfs/inode.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 7083fa0caaab..14d3d28e41b0 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -295,7 +295,7 @@ static size_t adjust_range_hwpoison(struct page *page, size_t offset, size_t byt
size_t res = 0;
/* First subpage to start the loop. */
- page += offset / PAGE_SIZE;
+ page = nth_page(page, offset / PAGE_SIZE);
offset %= PAGE_SIZE;
while (1) {
if (is_raw_hwpoison_page_in_hugepage(page))
@@ -309,7 +309,7 @@ static size_t adjust_range_hwpoison(struct page *page, size_t offset, size_t byt
break;
offset += n;
if (offset == PAGE_SIZE) {
- page++;
+ page = nth_page(page, 1);
offset = 0;
}
}
--
2.40.1
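
For orientation, here is a simplified sketch of the shape of the fixed
walk. The loop bookkeeping is paraphrased from the visible hunks rather
than copied from the kernel, and the function name is invented; only the
nth_page() stepping is the point. It counts how many bytes can be read
safely before the first hwpoisoned subpage:

/* Sketch only — not the kernel's adjust_range_hwpoison(). */
static size_t safe_bytes_before_hwpoison(struct page *page, size_t offset,
					 size_t bytes)
{
	size_t n, res = 0;

	page = nth_page(page, offset / PAGE_SIZE);	/* first subpage of the range */
	offset %= PAGE_SIZE;
	while (bytes) {
		if (is_raw_hwpoison_page_in_hugepage(page))
			break;			/* stop before the poisoned subpage */
		n = min(bytes, (size_t)PAGE_SIZE - offset);
		res += n;
		bytes -= n;
		offset += n;
		if (offset == PAGE_SIZE) {
			page = nth_page(page, 1);	/* may cross into another
							 * section's memmap */
			offset = 0;
		}
	}
	return res;
}
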
* Re: [PATCH v3 4/5] fs: use nth_page() in place of direct struct page manipulation.
From: Zi Yan @ 2023-09-14 2:45 UTC
To: linux-mm, linux-kernel, linux-mips
Cc: Zi Yan, Andrew Morton, Thomas Bogendoerfer,
"Matthew Wilcox (Oracle)", David Hildenbrand,
Mike Kravetz, Muchun Song, "Mike Rapoport (IBM)", stable,
Muchun Song
On 13 Sep 2023, at 16:12, Zi Yan wrote:
> From: Zi Yan <ziy@nvidia.com>
>
> When dealing with hugetlb pages, struct page is not guaranteed to be
> contiguous on SPARSEMEM without VMEMMAP. Use nth_page() to handle it
> properly.
>
> Fixes: 38c1ddbde6c6 ("hugetlbfs: improve read HWPOISON hugepage")
> Cc: <stable@vger.kernel.org>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> Reviewed-by: Muchun Song <songmuchun@bytedance.com>
> [...]

Without the fix, the wrong subpage might be checked for HWPoison,
causing the wrong number of bytes of a page to be copied to user space.
No bug has been reported; the fix comes from code inspection.
--
Best Regards,
Yan, Zi
* [PATCH v3 5/5] mips: use nth_page() in place of direct struct page manipulation.
From: Zi Yan @ 2023-09-13 20:12 UTC
To: linux-mm, linux-kernel, linux-mips
Cc: Zi Yan, Andrew Morton, Thomas Bogendoerfer,
Matthew Wilcox (Oracle), David Hildenbrand, Mike Kravetz,
Muchun Song, Mike Rapoport (IBM), stable
From: Zi Yan <ziy@nvidia.com>
__flush_dcache_pages() is called during hugetlb migration via
migrate_pages() -> migrate_hugetlbs() -> unmap_and_move_huge_page()
-> move_to_new_folio() -> flush_dcache_folio(). With hugetlb pages and
without sparsemem vmemmap, struct page is not guaranteed to be
contiguous beyond a section boundary. Use nth_page() instead.
Fixes: 15fa3e8e3269 ("mips: implement the new page table range API")
Cc: <stable@vger.kernel.org>
Signed-off-by: Zi Yan <ziy@nvidia.com>
---
arch/mips/mm/cache.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/mips/mm/cache.c b/arch/mips/mm/cache.c
index 02042100e267..7f830634dbe7 100644
--- a/arch/mips/mm/cache.c
+++ b/arch/mips/mm/cache.c
@@ -117,7 +117,7 @@ void __flush_dcache_pages(struct page *page, unsigned int nr)
* get faulted into the tlb (and thus flushed) anyways.
*/
for (i = 0; i < nr; i++) {
- addr = (unsigned long)kmap_local_page(page + i);
+ addr = (unsigned long)kmap_local_page(nth_page(page, i));
flush_data_cache_page(addr);
kunmap_local((void *)addr);
}
--
2.40.1
* Re: [PATCH v3 5/5] mips: use nth_page() in place of direct struct page manipulation.
From: Zi Yan @ 2023-09-14 2:46 UTC
To: linux-mm, linux-kernel, linux-mips
Cc: Zi Yan, Andrew Morton, Thomas Bogendoerfer,
"Matthew Wilcox (Oracle)", David Hildenbrand,
Mike Kravetz, Muchun Song, "Mike Rapoport (IBM)", stable
On 13 Sep 2023, at 16:12, Zi Yan wrote:
> From: Zi Yan <ziy@nvidia.com>
>
> __flush_dcache_pages() is called during hugetlb migration via
> migrate_pages() -> migrate_hugetlbs() -> unmap_and_move_huge_page()
> -> move_to_new_folio() -> flush_dcache_folio(). With hugetlb pages and
> without sparsemem vmemmap, struct page is not guaranteed to be
> contiguous beyond a section boundary. Use nth_page() instead.
>
> Fixes: 15fa3e8e3269 ("mips: implement the new page table range API")
> Cc: <stable@vger.kernel.org>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> [...]

Without the fix, the wrong address might be used for the data cache page
flush. No bug has been reported; the fix comes from code inspection.
--
Best Regards,
Yan, Zi
* Re: [PATCH v3 0/5] Use nth_page() in place of direct struct page manipulation
From: Andrew Morton @ 2023-09-13 21:12 UTC
To: Zi Yan
Cc: Zi Yan, linux-mm, linux-kernel, linux-mips, Thomas Bogendoerfer,
Matthew Wilcox (Oracle), David Hildenbrand, Mike Kravetz,
Muchun Song, Mike Rapoport (IBM)
On Wed, 13 Sep 2023 16:12:43 -0400 Zi Yan <zi.yan@sent.com> wrote:
> On SPARSEMEM without VMEMMAP, struct page is not guaranteed to be
> contiguous, since each memory section's memmap might be allocated
> independently. hugetlb pages can go beyond a memory section size, so
> direct struct page manipulation on hugetlb pages/subpages might give
> the wrong struct page. The kernel provides nth_page() to do the
> manipulation properly; use it whenever code can see hugetlb pages.
for (each patch) {
Can we please explain why -stable backporting is recommended?
Such an explanation will, as always, include a description of the
user-visible effects of the bug.
Some of the Fixes: targets are very old: 5 years. Has something
changed to bring these flaws to light? Or is it from code inspection?
}
Thanks.