From: Frederic Weisbecker <frederic@kernel.org>
To: Mel Gorman <mgorman@techsingularity.net>
Cc: Vlastimil Babka <vbabka@suse.cz>,
	Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
	linux-mm@kvack.org, tglx@linutronix.de
Subject: Re: [PATCH 0/2] mm/swap: Add locking for pagevec
Date: Tue, 16 Oct 2018 18:26:23 +0200
Message-ID: <20181016162622.GA12144@lerouge>
In-Reply-To: <20181015095048.GG5819@techsingularity.net>

On Mon, Oct 15, 2018 at 10:50:48AM +0100, Mel Gorman wrote:
> On Fri, Oct 12, 2018 at 09:21:41AM +0200, Vlastimil Babka wrote:
> > On 9/14/18 4:59 PM, Sebastian Andrzej Siewior wrote:
> > I think this evaluation is missing the other side of the story, and
> > that's the cost of using a spinlock (even uncontended) instead of
> > disabling preemption. The expectation for the LRU pagevec is that
> > local operations will be much more common than draining other CPUs'
> > pagevecs, so it's optimized for the former.
> > 
> 
> Agreed, the drain operation should be extremely rare except under heavy
> memory pressure, particularly if mixed with THP allocations. The overall
> intent seems to be improving lockdep coverage, but I don't think we
> should take a hit in the common case just to get that coverage. Bear in
> mind that the main point of the pagevec (whether or not that still
> holds) is to avoid the much heavier LRU lock.
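
For reference, the common case being discussed is the per-CPU fast path.
A rough sketch of it, based on mm/swap.c around that time (details vary
by kernel version):

	static void __lru_cache_add(struct page *page)
	{
		/*
		 * get_cpu_var() disables preemption, which is all the
		 * protection the pagevec needs while it is only ever
		 * touched locally.
		 */
		struct pagevec *pvec = &get_cpu_var(lru_add_pvec);

		get_page(page);
		if (!pagevec_add(pvec, page) || PageCompound(page))
			__pagevec_lru_add(pvec); /* batch full: take the LRU lock once */
		put_cpu_var(lru_add_pvec);       /* re-enables preemption */
	}

Replacing that preempt_disable()/preempt_enable() pair with a per-CPU
spinlock adds an atomic operation to every add, which is the cost
Vlastimil is pointing at.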

So indeed, if the only purpose of this patch were to make lockdep wiser,
a pair of spin_acquire() / spin_release() annotations would be enough to
teach it, and would avoid the runtime overhead.
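
A minimal sketch of such an annotation-only variant, with made-up
pagevec_lock()/pagevec_unlock() helper names (lock_map_acquire() /
lock_map_release() are the existing lockdep annotation primitives):

	static struct lock_class_key pagevec_lock_key;
	static struct lockdep_map pagevec_lock_map =
		STATIC_LOCKDEP_MAP_INIT("pagevec", &pagevec_lock_key);

	static inline void pagevec_lock(void)
	{
		preempt_disable();                   /* the real protection */
		lock_map_acquire(&pagevec_lock_map); /* lockdep bookkeeping only */
	}

	static inline void pagevec_unlock(void)
	{
		lock_map_release(&pagevec_lock_map);
		preempt_enable();
	}

That keeps the fast path at a preempt count flip while still letting
lockdep track the implicit "lock".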

Now another significant incentive behind this change is to improve CPU
isolation. Workloads that rely on owning the entire CPU undisturbed care
about this, because remote draining allows some of that noise to be
offloaded to housekeeping CPUs. That is no big deal for workloads that
can tolerate rare events, but CPU isolation is often combined with
deterministic latency requirements.
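
To make that concrete, the kind of thing a per-CPU lock enables is
draining an isolated CPU's pagevec on its behalf from a housekeeping
CPU, instead of queueing work on the isolated CPU and disturbing it.
A sketch with hypothetical names (not the actual patch):

	/* Drain @cpu's pagevec remotely from a housekeeping CPU. */
	static void lru_drain_remote(int cpu)
	{
		struct pagevec *pvec = &per_cpu(lru_add_pvec, cpu);
		spinlock_t *lock = &per_cpu(lru_pvec_lock, cpu);

		spin_lock(lock);
		if (pagevec_count(pvec))
			__pagevec_lru_add(pvec);
		spin_unlock(lock);
	}

Today lru_add_drain_all() instead queues lru_add_drain_per_cpu() on
every CPU's workqueue, which is exactly the disturbance isolated
workloads want to avoid.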

So, I'm not saying this per-CPU spinlock is necessarily the right
answer; I don't know that code well enough to have an opinion. But I
still hope we can find a solution.

Thanks.

Thread overview: 12+ messages
2018-09-14 14:59 [PATCH 0/2] mm/swap: Add locking for pagevec Sebastian Andrzej Siewior
2018-09-14 14:59 ` [PATCH 1/2] mm/swap: Add pagevec locking Sebastian Andrzej Siewior
2018-09-30  3:16   ` [LKP] [mm/swap] d884021f52: will-it-scale.per_process_ops -2.4% regression kernel test robot
2018-09-30  8:17     ` Sebastian Andrzej Siewior
2018-09-14 14:59 ` [PATCH 2/2] mm/swap: Access struct pagevec remotely Sebastian Andrzej Siewior
2018-11-09 23:06   ` Andrew Morton
2018-10-12  7:21 ` [PATCH 0/2] mm/swap: Add locking for pagevec Vlastimil Babka
2018-10-15  9:50   ` Mel Gorman
2018-10-16 16:26     ` Frederic Weisbecker [this message]
2018-10-16 17:13       ` Thomas Gleixner
2018-10-16 19:54         ` Frederic Weisbecker
2018-10-16 20:44           ` Thomas Gleixner
