From: Rashmica Gupta <rashmica.g@gmail.com>
To: Nicholas Piggin <npiggin@gmail.com>, bsingharora@gmail.com
Cc: linuxppc-dev@lists.ozlabs.org, bsingharora@gmail.com,
dllehr@us.ibm.com, mpe@ellerman.id.au
Subject: Re: [RFC] Remove memory from nodes for memtrace.
Date: Thu, 23 Feb 2017 17:19:20 +1100
Message-ID: <086a2e02-5f8b-5752-7f4d-34e9f3d12feb@gmail.com>
In-Reply-To: <20170223135635.5321dcc5@roar.ozlabs.ibm.com>

On 23/02/17 14:56, Nicholas Piggin wrote:
>
>> +
>> +static bool memtrace_offline_pages(u32 nid, u64 start_pfn, u64 nr_pages)
>> +{
>> +	u64 end_pfn = start_pfn + nr_pages - 1;
>> +
>> +	if (walk_memory_range(start_pfn, end_pfn, NULL,
>> +			check_memblock_online))
>> +		return false;
>> +
>> +	walk_memory_range(start_pfn, end_pfn, (void *)MEM_GOING_OFFLINE,
>> +			change_memblock_state);
>> +
>> +	if (offline_pages(start_pfn, nr_pages)) {
>> +		walk_memory_range(start_pfn, end_pfn, (void *)MEM_ONLINE,
>> +				change_memblock_state);
>> +		return false;
>> +	}
>> +
>> +	walk_memory_range(start_pfn, end_pfn, (void *)MEM_OFFLINE,
>> +			change_memblock_state);
>> +
>> +	/* RCU grace period? */
>> +	flush_memory_region((u64)__va(start_pfn << PAGE_SHIFT), nr_pages << PAGE_SHIFT);
>> +
>> +	remove_memory(nid, start_pfn << PAGE_SHIFT, nr_pages << PAGE_SHIFT);
>> +
>> +	return true;
>> +}
> This is the tricky part. Memory hotplug APIs don't seem well suited for
> what we're trying to do... Anyway, do a bit of grepping around for
> definitions of some of these calls, and how other code uses them. For
> example, the comment on remove_memory() says the caller must hold
> lock_device_hotplug() first, so we're missing that at least. I think
> that's also needed over the memblock state changes.
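
Ah, right - I had missed that comment. So something like the below, then?
(Just an untested sketch: I'm assuming lock_device_hotplug()/
unlock_device_hotplug() can simply be taken around the whole sequence,
memblock state changes included, rather than only around remove_memory().)

static bool memtrace_offline_pages(u32 nid, u64 start_pfn, u64 nr_pages)
{
	u64 end_pfn = start_pfn + nr_pages - 1;
	bool ret = false;

	/*
	 * Assumption: hold the device hotplug lock across the memblock
	 * state changes as well as remove_memory(), since the comment on
	 * remove_memory() says the caller must take it.
	 */
	lock_device_hotplug();

	if (walk_memory_range(start_pfn, end_pfn, NULL,
			check_memblock_online))
		goto out_unlock;

	walk_memory_range(start_pfn, end_pfn, (void *)MEM_GOING_OFFLINE,
			change_memblock_state);

	if (offline_pages(start_pfn, nr_pages)) {
		/* Offlining failed, roll the memblocks back to online. */
		walk_memory_range(start_pfn, end_pfn, (void *)MEM_ONLINE,
				change_memblock_state);
		goto out_unlock;
	}

	walk_memory_range(start_pfn, end_pfn, (void *)MEM_OFFLINE,
			change_memblock_state);

	flush_memory_region((u64)__va(start_pfn << PAGE_SHIFT),
			nr_pages << PAGE_SHIFT);

	remove_memory(nid, start_pfn << PAGE_SHIFT, nr_pages << PAGE_SHIFT);
	ret = true;

out_unlock:
	unlock_device_hotplug();
	return ret;
}
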
>
> We don't need an RCU grace period there AFAICT, because offline_pages
> should have us covered.
>
>
> I haven't looked at memory hotplug enough to know why we're open-coding
> the memblock stuff there. It would be nice to just be able to call
> memblock_remove() like the pseries hotplug code does.
remove_memory() calls memblock_remove() only after confirming that
the memory has been offlined. That seems sensible to me.
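
(From memory, so the details may be slightly off, but remove_memory() in
mm/memory_hotplug.c goes roughly:

	remove_memory(nid, start, size)
	    -> walk_memory_range(..., check_memblock_offlined_cb)
	         /* check every memory block in the range is offline */
	    -> memblock_free(start, size)
	    -> memblock_remove(start, size)
	    -> arch_remove_memory(start, size)
	    -> try_offline_node(nid)

so by the time memblock_remove() runs, everything in the range has
already been confirmed offline.)
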
>
> I *think* it is because hot remove is mostly used when we already know
> about an online region of memory and want to take it down. In this case
> we are also trying to discover whether those addresses are covered by
> online memory. Still, I wonder if there are better memblock APIs to do
> this with? Balbir may have a better idea of that?
>
>
Thread overview: 10+ messages
2017-02-22 21:39 [RFC] Remove memory from nodes for memtrace Rashmica Gupta
2017-02-23 3:38 ` Stewart Smith
2017-02-23 3:56 ` Nicholas Piggin
2017-02-23 4:01 ` Andrew Donnellan
2017-02-23 4:23 ` Rashmica Gupta
2017-02-23 5:01 ` Andrew Donnellan
2017-02-23 4:29 ` Rashmica Gupta
2017-02-23 6:19 ` Rashmica Gupta [this message]
2017-02-24 23:47 ` Balbir Singh
2017-02-26 23:54 ` Oliver O'Halloran