From: Vlastimil Babka <vbabka@suse.cz>
To: Pavel Tatashin <pasha.tatashin@soleen.com>,
David Hildenbrand <david@redhat.com>
Cc: Michal Hocko <mhocko@suse.com>,
LKML <linux-kernel@vger.kernel.org>,
Andrew Morton <akpm@linux-foundation.org>,
linux-mm <linux-mm@kvack.org>
Subject: Re: [PATCH] mm/memory_hotplug: drain per-cpu pages again during memory offline
Date: Fri, 4 Sep 2020 08:32:54 +0200
Message-ID: <adb59e82-d760-5ed1-bf20-817cc200aff7@suse.cz>
In-Reply-To: <CA+CK2bBTfmhTWNRrxnVKi=iknqq-iZxNZSnwNA9C9tWAJzRxmw@mail.gmail.com>
On 9/3/20 8:23 PM, Pavel Tatashin wrote:
>>
>> As expressed in my reply to v2, I dislike this hack. There is strong
>> synchronization; it's just that the PCP is special. Allocating from
>> MIGRATE_ISOLATE is just plain ugly.
>>
>> Can't we temporarily disable PCP (while some pageblock in the zone is
>> isolated, which we know, e.g., via the counter), so no new pages get
>> put into PCP lists after draining, and re-enable it after no pageblocks
>> are isolated anymore? We keep draining the PCP, so it doesn't seem to
>> be of much use during that period, no? It's a performance hit already.
>>
>> Then, we would only need exactly one drain. And we would only have to
>> check on the free path whether PCP is temporarily disabled.
>
> Hm, we could use a static branch to disable it; that would keep the
> release code just as fast, but I am worried it will make the code even
> uglier. Let's see what others in this thread think about this idea.
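
For illustration, a static branch version of David's idea might look
roughly like this (an untested sketch; the names pcplists_disabled,
pcplists_disable() etc. are made up here, not from any actual patch):

#include <linux/jump_label.h>

static DEFINE_STATIC_KEY_FALSE(pcplists_disabled);

/* called when the first pageblock in the zone becomes MIGRATE_ISOLATE */
static void pcplists_disable(struct zone *zone)
{
	static_branch_enable(&pcplists_disabled);
	/* flush whatever was put on pcplists before the switch */
	drain_all_pages(zone);
}

/* called when the last isolated pageblock is un-isolated again */
static void pcplists_enable(void)
{
	static_branch_disable(&pcplists_disabled);
}

/* free fast path: bypass the pcplist while offlining is in progress */
static bool pcp_allowed(void)
{
	return !static_branch_unlikely(&pcplists_disabled);
}

Freeing would then go straight to the buddy lists when pcp_allowed()
returns false, so the only cost on the fast path in the common case is
the patched-out jump.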
Maybe we could just set pcp->high = 0 or something (rough sketch
below), and make sure the pcplist user only reads this value while irqs
are disabled. Then the IPI in drain_all_pages() should guarantee
there's nobody racing to free to a pcplist. But we'd have to be careful
not to introduce bugs like the one fixed with [1]. And I'm not sure
this guarantee survives once RT comes and replaces the disabled irqs
with local_lock or something.
[1]
https://lore.kernel.org/linux-mm/1597150703-19003-1-git-send-email-charante@codeaurora.org/
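
Roughly something like this (again just an untested sketch with a
made-up helper name, based on the zone->pageset layout at the time of
this thread):

/* force the pcplists of all CPUs in this zone to hold zero pages */
static void zone_disable_pcplists(struct zone *zone)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		struct per_cpu_pages *pcp =
			&per_cpu_ptr(zone->pageset, cpu)->pcp;

		WRITE_ONCE(pcp->high, 0);
		WRITE_ONCE(pcp->batch, 1);
	}
	/*
	 * After drain_all_pages() has run the drain on every CPU, any
	 * free path that reads pcp->high with irqs disabled sees 0 and
	 * stops caching pages on its pcplist.
	 */
	drain_all_pages(zone);
}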
> Thank you,
> Pasha
>