From: Tejun Heo <tj@kernel.org>
To: Mel Gorman <mgorman@techsingularity.net>
Cc: Vlastimil Babka <vbabka@suse.cz>,
Andrew Morton <akpm@linux-foundation.org>,
Linux Kernel <linux-kernel@vger.kernel.org>,
Linux-MM <linux-mm@kvack.org>,
Hillf Danton <hillf.zj@alibaba-inc.com>,
Jesper Dangaard Brouer <brouer@redhat.com>,
Petr Mladek <pmladek@suse.cz>
Subject: Re: [PATCH 3/4] mm, page_alloc: Drain per-cpu pages from workqueue context
Date: Tue, 24 Jan 2017 21:02:20 -0500
Message-ID: <20170125020220.GA2727@mtj.duckdns.org>
In-Reply-To: <20170124235457.x7ssjun5ht2ycyac@techsingularity.net>
Hello,
On Tue, Jan 24, 2017 at 11:54:57PM +0000, Mel Gorman wrote:
> @@ -2402,24 +2415,16 @@ void drain_all_pages(struct zone *zone)
>  			cpumask_clear_cpu(cpu, &cpus_with_pcps);
>  	}
>  
> +	for_each_cpu(cpu, &cpus_with_pcps) {
> +		struct work_struct *work = per_cpu_ptr(&pcpu_drain, cpu);
> +		INIT_WORK(work, drain_local_pages_wq);
> +		schedule_work_on(cpu, work);
> +	}
> +	for_each_cpu(cpu, &cpus_with_pcps)
> +		flush_work(per_cpu_ptr(&pcpu_drain, cpu));
> +
>  	put_online_cpus();
> +	mutex_unlock(&pcpu_drain_mutex);
Looks good to me.
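
For the record, the scheme acked above is a two-pass per-cpu drain:
queue a work item on every target CPU first, then flush them all in a
second pass, so the drains run in parallel instead of serializing on
each flush_work(). A minimal stand-alone sketch of the pattern (the
demo_* names are illustrative, not from Mel's patch; API as of this
2017 time frame, where get_online_cpus() is still the hotplug guard):

	#include <linux/cpu.h>
	#include <linux/percpu.h>
	#include <linux/printk.h>
	#include <linux/smp.h>
	#include <linux/workqueue.h>

	static DEFINE_PER_CPU(struct work_struct, demo_drain_work);

	static void demo_drain_fn(struct work_struct *work)
	{
		/*
		 * Runs in process context on the CPU it was queued on,
		 * so it may sleep -- the point of moving the drain off
		 * the IPI path and into the workqueue.
		 */
		pr_info("demo drain on cpu %d\n", raw_smp_processor_id());
	}

	static void demo_drain_all(void)
	{
		int cpu;

		get_online_cpus();	/* keep the online mask stable */

		/* Pass 1: queue work on every online CPU. */
		for_each_online_cpu(cpu) {
			struct work_struct *work =
				per_cpu_ptr(&demo_drain_work, cpu);

			INIT_WORK(work, demo_drain_fn);
			schedule_work_on(cpu, work);
		}

		/* Pass 2: only now wait, so all CPUs drain in parallel. */
		for_each_online_cpu(cpu)
			flush_work(per_cpu_ptr(&demo_drain_work, cpu));

		put_online_cpus();
	}
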
Thanks.
--
tejun