From: Mike Rapoport <rppt@kernel.org>
To: Usama Arif <usama.arif@bytedance.com>
Cc: linux-mm@kvack.org, muchun.song@linux.dev,
	mike.kravetz@oracle.com, linux-kernel@vger.kernel.org,
	fam.zheng@bytedance.com, liangma@liangbit.com,
	simon.evans@bytedance.com, punit.agrawal@bytedance.com
Subject: Re: [RFC 2/4] mm/memblock: Add hugepage_size member to struct memblock_region
Date: Wed, 26 Jul 2023 14:01:13 +0300	[thread overview]
Message-ID: <20230726110113.GT1901145@kernel.org> (raw)
In-Reply-To: <20230724134644.1299963-3-usama.arif@bytedance.com>

On Mon, Jul 24, 2023 at 02:46:42PM +0100, Usama Arif wrote:
> This propagates the hugepage size from the memblock APIs
> (memblock_alloc_try_nid_raw and memblock_alloc_range_nid)
> so that it can be stored in struct memblock_region. This does not
> introduce any functional change and hugepage_size is not used in
> this commit. It is just a setup for the next commit, where hugepage_size
> is used to skip initialization of struct pages that will be freed later
> when HVO is enabled.
> 
> Signed-off-by: Usama Arif <usama.arif@bytedance.com>
> ---
>  arch/arm64/mm/kasan_init.c                   |  2 +-
>  arch/powerpc/platforms/pasemi/iommu.c        |  2 +-
>  arch/powerpc/platforms/pseries/setup.c       |  4 +-
>  arch/powerpc/sysdev/dart_iommu.c             |  2 +-
>  include/linux/memblock.h                     |  8 ++-
>  mm/cma.c                                     |  4 +-
>  mm/hugetlb.c                                 |  6 +-
>  mm/memblock.c                                | 60 ++++++++++++--------
>  mm/mm_init.c                                 |  2 +-
>  mm/sparse-vmemmap.c                          |  2 +-
>  tools/testing/memblock/tests/alloc_nid_api.c |  2 +-
>  11 files changed, 56 insertions(+), 38 deletions(-)
> 

[ snip ]

> diff --git a/include/linux/memblock.h b/include/linux/memblock.h
> index f71ff9f0ec81..bb8019540d73 100644
> --- a/include/linux/memblock.h
> +++ b/include/linux/memblock.h
> @@ -63,6 +63,7 @@ struct memblock_region {
>  #ifdef CONFIG_NUMA
>  	int nid;
>  #endif
> +	phys_addr_t hugepage_size;
>  };
>  
>  /**
> @@ -400,7 +401,8 @@ phys_addr_t memblock_phys_alloc_range(phys_addr_t size, phys_addr_t align,
>  				      phys_addr_t start, phys_addr_t end);
>  phys_addr_t memblock_alloc_range_nid(phys_addr_t size,
>  				      phys_addr_t align, phys_addr_t start,
> -				      phys_addr_t end, int nid, bool exact_nid);
> +				      phys_addr_t end, int nid, bool exact_nid,
> +				      phys_addr_t hugepage_size);

Rather than adding yet another parameter to memblock_phys_alloc_range(),
we can have an API that sets a flag on a reserved region.
With this, the hugetlb reservation code can set the flag when HVO is
enabled, and memmap_init_reserved_pages() will skip regions that have the
flag set.
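
Something along these lines; the flag name MEMBLOCK_RSRV_NOINIT and the
helper memblock_reserved_mark_noinit() below are made-up names used only
to illustrate the idea, not an existing memblock API and not code from
this series:

/*
 * Rough sketch only -- MEMBLOCK_RSRV_NOINIT and
 * memblock_reserved_mark_noinit() are hypothetical names.
 */

/* Flag on a reserved region: do not initialize its struct pages. */
#define MEMBLOCK_RSRV_NOINIT	0x10	/* must not clash with other MEMBLOCK_* flags */

/* hugetlb would call this for its reservations when HVO is enabled. */
static void __init memblock_reserved_mark_noinit(phys_addr_t base,
						 phys_addr_t size)
{
	struct memblock_region *r;

	for_each_reserved_mem_region(r) {
		/* mark reserved regions fully contained in [base, base + size) */
		if (r->base >= base && r->base + r->size <= base + size)
			r->flags |= MEMBLOCK_RSRV_NOINIT;
	}
}

static void __init memmap_init_reserved_pages(void)
{
	struct memblock_region *r;

	for_each_reserved_mem_region(r) {
		/* HVO will take care of the struct pages it keeps */
		if (r->flags & MEMBLOCK_RSRV_NOINIT)
			continue;

		/* ... initialize struct pages for this region as today ... */
	}
}

That way nothing needs to be threaded through the allocation paths: the
information lives on the reserved region itself, and only hugetlb (when
HVO is enabled) and memmap_init_reserved_pages() need to know about the
flag.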

>  phys_addr_t memblock_phys_alloc_try_nid(phys_addr_t size, phys_addr_t align, int nid);
>  
>  static __always_inline phys_addr_t memblock_phys_alloc(phys_addr_t size,
> @@ -415,7 +417,7 @@ void *memblock_alloc_exact_nid_raw(phys_addr_t size, phys_addr_t align,
>  				 int nid);
>  void *memblock_alloc_try_nid_raw(phys_addr_t size, phys_addr_t align,
>  				 phys_addr_t min_addr, phys_addr_t max_addr,
> -				 int nid);
> +				 int nid, phys_addr_t hugepage_size);
>  void *memblock_alloc_try_nid(phys_addr_t size, phys_addr_t align,
>  			     phys_addr_t min_addr, phys_addr_t max_addr,
>  			     int nid);
> @@ -431,7 +433,7 @@ static inline void *memblock_alloc_raw(phys_addr_t size,
>  {
>  	return memblock_alloc_try_nid_raw(size, align, MEMBLOCK_LOW_LIMIT,
>  					  MEMBLOCK_ALLOC_ACCESSIBLE,
> -					  NUMA_NO_NODE);
> +					  NUMA_NO_NODE, 0);
>  }
>  
>  static inline void *memblock_alloc_from(phys_addr_t size,

-- 
Sincerely yours,
Mike.


Thread overview: 10+ messages
2023-07-24 13:46 [RFC 0/4] mm/memblock: Skip prep and initialization of struct pages freed later by HVO Usama Arif
2023-07-24 13:46 ` [RFC 1/4] mm/hugetlb: Skip prep of tail pages when HVO is enabled Usama Arif
2023-07-24 13:46 ` [RFC 2/4] mm/memblock: Add hugepage_size member to struct memblock_region Usama Arif
2023-07-26 11:01   ` Mike Rapoport [this message]
2023-07-26 15:02     ` [External] " Usama Arif
2023-07-27  4:30       ` Mike Rapoport
2023-07-27 20:56         ` Usama Arif
2023-07-24 13:46 ` [RFC 3/4] mm/hugetlb_vmemmap: Use nid of the head page to reallocate it Usama Arif
2023-07-24 13:46 ` [RFC 4/4] mm/memblock: Skip initialization of struct pages freed later by HVO Usama Arif
2023-07-26 10:34 ` [RFC 0/4] mm/memblock: Skip prep and " Usama Arif
