From: Nathan Fontenot <nfont@linux.vnet.ibm.com>
To: Li Zhong <zhong@linux.vnet.ibm.com>, linuxppc-dev@lists.ozlabs.org
Cc: paulus@samba.org
Subject: Re: [PATCH 3/4] powerpc: implement vmemmap_free()
Date: Thu, 24 Jul 2014 10:12:48 -0500
Message-ID: <53D12270.6090507@linux.vnet.ibm.com>
In-Reply-To: <1402475019-19699-3-git-send-email-zhong@linux.vnet.ibm.com>
On 06/11/2014 03:23 AM, Li Zhong wrote:
> vmemmap_free() does the opposite of vmemmap_populate().
> This patch also puts vmemmap_free() and vmemmap_list_free() under
> CONFIG_MEMORY_HOTPLUG.
>
> Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com>
> Cc: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Acked-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
> ---
> arch/powerpc/mm/init_64.c | 85 ++++++++++++++++++++++++++++++++++-----------
> 1 file changed, 64 insertions(+), 21 deletions(-)
>
> diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
> index 69203c8..4963790 100644
> --- a/arch/powerpc/mm/init_64.c
> +++ b/arch/powerpc/mm/init_64.c
> @@ -298,6 +298,37 @@ static __meminit void vmemmap_list_populate(unsigned long phys,
>          vmemmap_list = vmem_back;
>  }
>
> +int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node)
> +{
> +        unsigned long page_size = 1 << mmu_psize_defs[mmu_vmemmap_psize].shift;
> +
> +        /* Align to the page size of the linear mapping. */
> +        start = _ALIGN_DOWN(start, page_size);
> +
> +        pr_debug("vmemmap_populate %lx..%lx, node %d\n", start, end, node);
> +
> +        for (; start < end; start += page_size) {
> +                void *p;
> +
> +                if (vmemmap_populated(start, page_size))
> +                        continue;
> +
> +                p = vmemmap_alloc_block(page_size, node);
> +                if (!p)
> +                        return -ENOMEM;
> +
> +                vmemmap_list_populate(__pa(p), start, node);
> +
> +                pr_debug("      * %016lx..%016lx allocated at %p\n",
> +                         start, start + page_size, p);
> +
> +                vmemmap_create_mapping(start, page_size, __pa(p));
> +        }
> +
> +        return 0;
> +}
> +
> +#ifdef CONFIG_MEMORY_HOTPLUG
>  static unsigned long vmemmap_list_free(unsigned long start)
>  {
>          struct vmemmap_backing *vmem_back, *vmem_back_prev;
> @@ -330,40 +361,52 @@ static unsigned long vmemmap_list_free(unsigned long start)
>          return vmem_back->phys;
>  }
>
> -int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node)
> +void __ref vmemmap_free(unsigned long start, unsigned long end)
>  {
>          unsigned long page_size = 1 << mmu_psize_defs[mmu_vmemmap_psize].shift;
>
> -        /* Align to the page size of the linear mapping. */
>          start = _ALIGN_DOWN(start, page_size);
>
> -        pr_debug("vmemmap_populate %lx..%lx, node %d\n", start, end, node);
> +        pr_debug("vmemmap_free %lx...%lx\n", start, end);
>
>          for (; start < end; start += page_size) {
> -                void *p;
> +                unsigned long addr;
>
> +                /*
> +                 * The section has already been marked as invalid, so if
> +                 * vmemmap_populated() returns true, some other section
> +                 * still uses this page; skip it.
> +                 */
>                  if (vmemmap_populated(start, page_size))
>                          continue;
>
> -                p = vmemmap_alloc_block(page_size, node);
> -                if (!p)
> -                        return -ENOMEM;
> -
> -                vmemmap_list_populate(__pa(p), start, node);
> -
> -                pr_debug("      * %016lx..%016lx allocated at %p\n",
> -                         start, start + page_size, p);
> -
> -                vmemmap_create_mapping(start, page_size, __pa(p));
> +                addr = vmemmap_list_free(start);
> +                if (addr) {
> +                        struct page *page = pfn_to_page(addr >> PAGE_SHIFT);
> +
> +                        if (PageReserved(page)) {
> +                                /* allocated from bootmem */
> +                                if (page_size < PAGE_SIZE) {
> +                                        /*
> +                                         * this shouldn't happen, but if it is
> +                                         * the case, leave the memory there
> +                                         */
> +                                        WARN_ON_ONCE(1);
> +                                } else {
> +                                        unsigned int nr_pages =
> +                                                1 << get_order(page_size);
> +                                        while (nr_pages--)
> +                                                free_reserved_page(page++);
> +                                }
> +                        } else
> +                                free_pages((unsigned long)(__va(addr)),
> +                                           get_order(page_size));
> +
> +                        vmemmap_remove_mapping(start, page_size);
> +                }
>          }
> -
> -        return 0;
> -}
> -
> -void vmemmap_free(unsigned long start, unsigned long end)
> -{
>  }
> -
> +#endif
>  void register_page_bootmem_memmap(unsigned long section_nr,
>                                    struct page *start_page, unsigned long size)
>  {
>
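
For anyone reading the new free path without patch 1/4 in front of them:
vmemmap_list_free() walks the backing list that vmemmap_list_populate()
builds, unlinks the entry covering the vmemmap page being torn down, and
returns its physical address so vmemmap_free() can choose between
free_reserved_page() for bootmem-allocated blocks and free_pages() for
buddy-allocated ones. The snippet below is only a simplified, standalone
sketch of that lookup-and-unlink idea, not the patch's actual code; the
struct layout and names mirror my reading of the powerpc code of this
vintage, and the real kernel version also recycles the freed list node,
which this sketch skips.

#include <stddef.h>

/* Illustrative stand-in for the powerpc vmemmap backing list. */
struct vmemmap_backing {
        struct vmemmap_backing *list;   /* next entry in the global list */
        unsigned long phys;             /* physical address of the backing block */
        unsigned long virt_addr;        /* vmemmap virtual address it covers */
};

static struct vmemmap_backing *vmemmap_list;

/* Find the entry covering 'start', unlink it, and return its phys (0 if none). */
static unsigned long vmemmap_list_free_sketch(unsigned long start)
{
        struct vmemmap_backing *vmem_back, *prev = NULL;

        for (vmem_back = vmemmap_list; vmem_back; vmem_back = vmem_back->list) {
                if (vmem_back->virt_addr == start)
                        break;
                prev = vmem_back;
        }

        if (!vmem_back)
                return 0;

        if (prev)
                prev->list = vmem_back->list;   /* unlink from the middle */
        else
                vmemmap_list = vmem_back->list; /* unlink the head */

        return vmem_back->phys;
}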