From: Manfred Spraul <manfred@colorfullife.com>
To: LKML <linux-kernel@vger.kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Davidlohr Bueso <dave@stgolabs.net>
Cc: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@elte.hu>,
	"H. Peter Anvin" <hpa@zytor.com>,
	1vier1@web.de, Andrew Morton <akpm@linux-foundation.org>,
	torvalds@linux-foundation.org, xiaolong.ye@intel.com,
	felixh@informatik.uni-bremen.de,
	Manfred Spraul <manfred@colorfullife.com>
Subject: Re: [lkp] [ipc/sem.c]  5864a2fd30:  aim9.shared_memory.ops_per_sec -13.0%
Date: Wed, 19 Oct 2016 06:38:14 +0200	[thread overview]
Message-ID: <1476851896-3590-1-git-send-email-manfred@colorfullife.com> (raw)
In-Reply-To: <20161017022504.GG22605@yexl-desktop>

Hi,

As discussed before, the root cause of the performance regression is the
smp_mb() that was added to the fast path.
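To illustrate (a rough sketch only -- the structure and field names are
simplified and do not match ipc/sem.c exactly): simple, single-semaphore
operations take only the per-semaphore lock, while complex operations
take a global lock and push concurrent simple operations onto a slow
path. The smp_mb() sits in the simple-op fast path:

#include <linux/compiler.h>
#include <linux/spinlock.h>

struct example_sem {
	spinlock_t lock;		/* per-semaphore lock (fast path) */
};

struct example_sem_array {
	spinlock_t global_lock;		/* global lock (complex operations) */
	bool complex_mode;		/* set while complex ops are in flight */
	struct example_sem sems[64];	/* illustrative fixed size */
};

static void example_lock_simple(struct example_sem_array *sma, int semnum)
{
	struct example_sem *sem = &sma->sems[semnum];

	spin_lock(&sem->lock);
	/*
	 * The barrier that regressed aim9: spin_lock() is only an ACQUIRE,
	 * so a full barrier is needed to order the store that takes
	 * sem->lock before the complex_mode load below, pairing with the
	 * spin_unlock_wait() handshake on the complex-op side.
	 */
	smp_mb();
	if (likely(!READ_ONCE(sma->complex_mode)))
		return;			/* fast path: keep only sem->lock */

	/* Rare path: a complex op is active, fall back to the global lock. */
	spin_unlock(&sem->lock);
	spin_lock(&sma->global_lock);
}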

I see two options:
1) Switch to a full spin_lock()/spin_unlock() pair for the rare codepath;
  the fast path then no longer needs the smp_mb() (see the sketch after
  this list).

2) confirm that no arch needs the smp_mb(), then remove it.
  - powerpc is ok after commit
     6262db7c088b ("powerpc/spinlock: Fix spin_unlock_wait()")
  - arm64 is ok after commit
     d86b8da04dfa ("arm64: spinlock: serialise spin_unlock_wait against concurrent lockers")
  - x86 is ok after commit
     2c6100227116 ("locking/qspinlock: Fix spin_unlock_wait() some more")
  - for the remaining SMP architectures, I don't have a status.
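A sketch of option 1, reusing the simplified structures from the sketch
above (again, the names only illustrate the idea and are not the actual
ipc/sem.c code): on the rare, complex-op side, instead of waiting for
each per-semaphore lock with spin_unlock_wait(), briefly acquire and
release it. The ACQUIRE/RELEASE semantics of spin_lock()/spin_unlock()
then provide the ordering, and the smp_mb() in the fast path can be
dropped:

static void example_enter_complex_mode(struct example_sem_array *sma, int nsems)
{
	int i;

	spin_lock(&sma->global_lock);
	WRITE_ONCE(sma->complex_mode, true);

	for (i = 0; i < nsems; i++) {
		/*
		 * Previously: spin_unlock_wait(&sma->sems[i].lock), which
		 * requires extra barriers to give useful guarantees.
		 * Now: a full lock/unlock pair with clear ordering.
		 */
		spin_lock(&sma->sems[i].lock);
		spin_unlock(&sma->sems[i].lock);
	}
}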

I would prefer approach 1:
The memory ordering provided by spin_lock()/spin_unlock() is clear.

Thus:
Attached are patches for approach 1:

- Patch 1 replaces spin_unlock_wait() with spin_lock()/spin_unlock() and
  removes all memory barriers that are then unnecessary.

- Patch 2 adds the hysteresis code, which makes the rare codepath
  extremely rare.
  It also corrects some wrong comments, e.g. regarding switching
  from the global lock to the per-semaphore lock (we "must" switch,
  not "can" switch, as currently written).
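A sketch of the hysteresis idea, building on the earlier sketch (the
counter name and threshold are purely illustrative, not the values used
in the patch): instead of switching back to per-semaphore locking as
soon as the last complex operation finishes, stay in global-lock mode
for a number of subsequent simple operations, so a burst of complex
operations pays the switching cost only once:

#define EXAMPLE_HYSTERESIS	10	/* illustrative threshold only */

struct example_sem_array_v2 {
	spinlock_t global_lock;
	int use_global_lock;	/* > 0: simple ops must use global_lock */
};

/* Called by a complex op (or a simple op that found complex mode active). */
static void example_arm_hysteresis(struct example_sem_array_v2 *sma)
{
	sma->use_global_lock = EXAMPLE_HYSTERESIS;
}

/* Called under global_lock by each simple op while in global mode. */
static void example_simple_op_done(struct example_sem_array_v2 *sma)
{
	if (sma->use_global_lock > 0)
		sma->use_global_lock--;
	/* When the counter reaches zero, simple ops return to the fast path. */
}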

The patches passed stress-testing.

What do you think?
My initial idea was to aim for 4.10, so that we have more time to decide.

--
        Manfred


Thread overview: 6+ messages
2016-10-17  2:25 [lkp] [ipc/sem.c] 5864a2fd30: aim9.shared_memory.ops_per_sec -13.0% regression kernel test robot
2016-10-19  4:38 ` Manfred Spraul [this message]
2016-10-19  4:38   ` [PATCH 1/2] ipc/sem.c: Avoid using spin_unlock_wait() Manfred Spraul
2016-10-19  4:38   ` [PATCH 2/2] ipc/sem: Add hysteresis Manfred Spraul
2016-10-20  0:21   ` [lkp] [ipc/sem.c] 5864a2fd30: aim9.shared_memory.ops_per_sec -13.0% Andrew Morton
2016-10-20  4:46     ` Manfred Spraul
