From: Petr Mladek <pmladek@suse.com>
To: Mel Gorman <mgorman@techsingularity.net>
Cc: Vlastimil Babka <vbabka@suse.cz>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linux Kernel <linux-kernel@vger.kernel.org>,
	Linux-MM <linux-mm@kvack.org>,
	Hillf Danton <hillf.zj@alibaba-inc.com>,
	Jesper Dangaard Brouer <brouer@redhat.com>,
	Tejun Heo <tj@kernel.org>
Subject: Re: [PATCH 3/4] mm, page_alloc: Drain per-cpu pages from workqueue context
Date: Mon, 23 Jan 2017 17:29:20 +0100
Message-ID: <20170123162841.GA6620@pathway.suse.cz>
In-Reply-To: <20170120152606.w3hb53m2w6thzsqq@techsingularity.net>

On Fri 2017-01-20 15:26:06, Mel Gorman wrote:
> On Fri, Jan 20, 2017 at 03:26:05PM +0100, Vlastimil Babka wrote:
> > > @@ -2392,8 +2404,24 @@ void drain_all_pages(struct zone *zone)
> > >  		else
> > >  			cpumask_clear_cpu(cpu, &cpus_with_pcps);
> > >  	}
> > > -	on_each_cpu_mask(&cpus_with_pcps, (smp_call_func_t) drain_local_pages,
> > > -								zone, 1);
> > > +
> > > +	if (works) {
> > > +		for_each_cpu(cpu, &cpus_with_pcps) {
> > > +			struct work_struct *work = per_cpu_ptr(works, cpu);
> > > +			INIT_WORK(work, drain_local_pages_wq);
> > > +			schedule_work_on(cpu, work);
> > 
> > This translates to queue_work_on(), which has the comment of "We queue
> > the work to a specific CPU, the caller must ensure it can't go away.",
> > so is this safe? lru_add_drain_all() uses get_online_cpus() around this.
> > 
> 
> get_online_cpus() would be required.
> 
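
For reference, the protection being discussed would look roughly like
this. This is only a minimal sketch mirroring what lru_add_drain_all()
does, reusing the variables from the hunk above with the cpumask
computation elided:

	get_online_cpus();
	for_each_cpu(cpu, &cpus_with_pcps) {
		struct work_struct *work = per_cpu_ptr(works, cpu);

		/* Queue the drain on every CPU that has pcp pages. */
		INIT_WORK(work, drain_local_pages_wq);
		schedule_work_on(cpu, work);
	}
	for_each_cpu(cpu, &cpus_with_pcps)
		flush_work(per_cpu_ptr(works, cpu));
	put_online_cpus();
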
> > schedule_work_on() also uses the generic system_wq, while lru drain has
> > its own workqueue with WQ_MEM_RECLAIM so it seems that would be useful
> > here as well?
> > 
> 
> I would be reluctant to introduce a dedicated queue unless there was a
> definite case where an OOM occurred because pages were pinned on per-cpu
> lists and couldn't be drained because the buddy allocator was depleted.
> As it was, I thought the fallback case was excessively paranoid.

I guess that you already know this, but it is not clear from the
paragraph above.

WQ_MEM_RECLAIM makes sure that there is a rescue worker available.
It is used when all workers are busy (blocked by an allocation
request) and a new worker (kthread) cannot be forked because
the fork itself would need an allocation as well.
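
As an illustration only (the queue name and placement are
hypothetical), a dedicated reclaim-safe queue would be created along
these lines, and the drain would then use queue_work_on() on it
instead of schedule_work_on():

	/* WQ_MEM_RECLAIM guarantees a rescuer thread for this queue. */
	static struct workqueue_struct *drain_wq;

	static int __init drain_wq_init(void)
	{
		drain_wq = alloc_workqueue("drain_pages", WQ_MEM_RECLAIM, 0);
		BUG_ON(!drain_wq);
		return 0;
	}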

The fallback below solves the situation where struct work cannot
be allocated. But it does not solve the situation where there is
no worker available to actually process the work. I am not sure
whether this is relevant for drain_all_pages().

Best Regards,
Petr

> > > +		}
> > > +		for_each_cpu(cpu, &cpus_with_pcps)
> > > +			flush_work(per_cpu_ptr(works, cpu));
> > > +	} else {
> > > +		for_each_cpu(cpu, &cpus_with_pcps) {
> > > +			struct work_struct work;
> > > +
> > > +			INIT_WORK(&work, drain_local_pages_wq);
> > > +			schedule_work_on(cpu, &work);
> > > +			flush_work(&work);
> > 
> > Totally out of scope, but I wonder if schedule_on_each_cpu() could use
> > the same fallback that's here?
> > 
> 
> I'm not aware of a case where it really has been a problem. I only considered
> it here as the likely caller is in a context that is failing allocations.
> 
> -- 
> Mel Gorman
> SUSE Labs
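
For the record, applying the same on-stack fallback inside
schedule_on_each_cpu() might look like the sketch below. This is
untested and purely illustrative; the current implementation simply
returns -ENOMEM when the per-cpu allocation fails:

	int schedule_on_each_cpu(work_func_t func)
	{
		int cpu;
		struct work_struct __percpu *works;

		works = alloc_percpu(struct work_struct);

		get_online_cpus();
		if (works) {
			for_each_online_cpu(cpu) {
				struct work_struct *work = per_cpu_ptr(works, cpu);

				INIT_WORK(work, func);
				schedule_work_on(cpu, work);
			}
			for_each_online_cpu(cpu)
				flush_work(per_cpu_ptr(works, cpu));
		} else {
			/* Fall back to one on-stack work item at a time. */
			for_each_online_cpu(cpu) {
				struct work_struct work;

				INIT_WORK(&work, func);
				schedule_work_on(cpu, &work);
				flush_work(&work);
			}
		}
		put_online_cpus();
		free_percpu(works);
		return 0;
	}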


Thread overview: 26+ messages
2017-01-17  9:29 [PATCH 0/4] Use per-cpu allocator for !irq requests and prepare for a bulk allocator v4 Mel Gorman
2017-01-17  9:29 ` [PATCH 1/4] mm, page_alloc: Split buffered_rmqueue Mel Gorman
2017-01-17 18:07   ` Jesper Dangaard Brouer
2017-01-17 18:17     ` Vlastimil Babka
2017-01-17 20:20       ` Mel Gorman
2017-01-17 21:07         ` Mel Gorman
2017-01-17 21:24           ` Vlastimil Babka
2017-01-17  9:29 ` [PATCH 2/4] mm, page_alloc: Split alloc_pages_nodemask Mel Gorman
2017-01-17  9:29 ` [PATCH 3/4] mm, page_alloc: Drain per-cpu pages from workqueue context Mel Gorman
2017-01-20 14:26   ` Vlastimil Babka
2017-01-20 15:26     ` Mel Gorman
2017-01-23 16:29       ` Petr Mladek [this message]
2017-01-23 16:50         ` Mel Gorman
2017-01-23 17:03       ` Tejun Heo
2017-01-23 20:04         ` Mel Gorman
2017-01-23 20:55           ` Tejun Heo
2017-01-23 23:04             ` Mel Gorman
2017-01-24 16:07               ` Tejun Heo
2017-01-24 23:54                 ` Mel Gorman
2017-01-25  2:02                   ` Tejun Heo
2017-01-25  8:30                     ` Mel Gorman
2017-01-24 11:08   ` Vlastimil Babka
2017-01-17  9:29 ` [PATCH 4/4] mm, page_alloc: Only use per-cpu allocator for irq-safe requests Mel Gorman
2017-01-20 15:02   ` Vlastimil Babka
2017-01-23 11:17     ` Mel Gorman
  -- strict thread matches above, loose matches on Subject: below --
2017-01-23 15:39 [PATCH 0/4] Use per-cpu allocator for !irq requests and prepare for a bulk allocator v5 Mel Gorman
2017-01-23 15:39 ` [PATCH 3/4] mm, page_alloc: Drain per-cpu pages from workqueue context Mel Gorman
