public inbox for stable@vger.kernel.org
* [PATCH v2] mm/memblock: fix off-by-one page leak in reserve_mem_release_by_name()
       [not found] <20260414094439.982853-1-charsyam@gmail.com>
@ 2026-04-14 10:43 ` DaeMyung Kang
  2026-04-14 11:13   ` Donet Tom
  0 siblings, 1 reply; 3+ messages in thread
From: DaeMyung Kang @ 2026-04-14 10:43 UTC (permalink / raw)
  To: Mike Rapoport, Andrew Morton
  Cc: Masami Hiramatsu, Steven Rostedt, Donet Tom, stable, linux-mm,
	linux-kernel, DaeMyung Kang

free_reserved_area() treats its 'end' argument as exclusive: it aligns
end down via 'end & PAGE_MASK' and iterates with 'pos < end'.

reserve_mem_release_by_name() instead passes 'start + map->size - 1',
which causes the last page of a page-aligned reservation to never be
freed. For a reservation spanning N pages, only N - 1 pages are
released back to the allocator.

Fix it by passing the exclusive end address, 'start + map->size'.

Fixes: 74e2498ccf7b ("mm/memblock: Add reserved memory release function")
Cc: stable@vger.kernel.org
Signed-off-by: DaeMyung Kang <charsyam@gmail.com>
---
Changes in v2:
 - Add Fixes: tag and Cc: stable (per Donet Tom's review).
 - v1: https://lore.kernel.org/lkml/

 mm/memblock.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index b3ddfdec7a80..d4a02f1750e9 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -2434,7 +2434,7 @@ int reserve_mem_release_by_name(const char *name)
 		return 0;
 
 	start = phys_to_virt(map->start);
-	end = start + map->size - 1;
+	end = start + map->size;
 	snprintf(buf, sizeof(buf), "reserve_mem:%s", name);
 	free_reserved_area(start, end, 0, buf);
 	map->size = 0;
-- 
2.43.0



* Re: [PATCH v2] mm/memblock: fix off-by-one page leak in reserve_mem_release_by_name()
  2026-04-14 10:43 ` DaeMyung Kang
@ 2026-04-14 11:13   ` Donet Tom
  0 siblings, 0 replies; 3+ messages in thread
From: Donet Tom @ 2026-04-14 11:13 UTC (permalink / raw)
  To: DaeMyung Kang, Mike Rapoport, Andrew Morton
  Cc: Masami Hiramatsu, Steven Rostedt, stable, linux-mm, linux-kernel

Hi

On 4/14/26 4:13 PM, DaeMyung Kang wrote:
> free_reserved_area() treats its 'end' argument as exclusive: it aligns
> end down via 'end & PAGE_MASK' and iterates with 'pos < end'.
>
> reserve_mem_release_by_name() instead passes 'start + map->size - 1',
> which causes the last page of a page-aligned reservation to never be
> freed. For a reservation spanning N pages, only N - 1 pages are
> released back to the allocator.
>
> Fix it by passing the exclusive end address, 'start + map->size'.
>
> Fixes: 74e2498ccf7b ("mm/memblock: Add reserved memory release function")
> Cc: stable@vger.kernel.org
> Signed-off-by: DaeMyung Kang <charsyam@gmail.com>


I think it might be better to send v2 as a separate patch rather than
as a reply to the previous version.

This patch looks good to me.

Reviewed-by: Donet Tom <donettom@linux.ibm.com>

-Donet


> ---
> Changes in v2:
>   - Add Fixes: tag and Cc: stable (per Donet Tom's review).
>   - v1: https://lore.kernel.org/lkml/
>
>   mm/memblock.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/memblock.c b/mm/memblock.c
> index b3ddfdec7a80..d4a02f1750e9 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -2434,7 +2434,7 @@ int reserve_mem_release_by_name(const char *name)
>   		return 0;
>   
>   	start = phys_to_virt(map->start);
> -	end = start + map->size - 1;
> +	end = start + map->size;
>   	snprintf(buf, sizeof(buf), "reserve_mem:%s", name);
>   	free_reserved_area(start, end, 0, buf);
>   	map->size = 0;


