[PATCH] signals: Avoid unnecessary taking of sighand->siglock
From: Waiman Long @ 2016-09-22 18:25 UTC
  To: Andrew Morton, Ingo Molnar, Oleg Nesterov, Thomas Gleixner,
	Stas Sergeev
  Cc: linux-kernel, Scott J Norton, Douglas Hatch, Waiman Long

When running a certain database workload on a high-end system with many
CPUs, it was found that spinlock contention in the sigprocmask syscall
accounted for a significant portion of the overall CPU cycles, as shown
below.

  9.30%  9.30%  905387  dataserver  /proc/kcore 0x7fff8163f4d2
  [k] _raw_spin_lock_irq
            |
            ---_raw_spin_lock_irq
               |
               |--99.34%-- __set_current_blocked
               |          sigprocmask
               |          sys_rt_sigprocmask
               |          system_call_fastpath
               |          |
               |          |--50.63%-- __swapcontext
               |          |          |
               |          |          |--99.91%-- upsleepgeneric
               |          |
               |          |--49.36%-- __setcontext
               |          |          ktskRun

Looking further into the swapcontext function in glibc, it was found
that the function always calls sigprocmask() without checking whether
the signal mask has actually changed.
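
The redundant call pattern looks roughly like the sketch below. This is
a simplified illustration only, not glibc's actual implementation; the
helper name and the use of the libc sigprocmask() wrapper rather than a
direct syscall are assumptions:

  #include <signal.h>
  #include <ucontext.h>

  /*
   * Sketch of the signal-mask handling in a swapcontext()-style switch:
   * the target context's mask is installed unconditionally, even when
   * it is identical to the current one.  Every such call reaches
   * __set_current_blocked() in the kernel.
   */
  static void switch_signal_mask(ucontext_t *oucp, const ucontext_t *ucp)
  {
  	/* Save the current mask into oucp and install ucp's mask. */
  	sigprocmask(SIG_SETMASK, &ucp->uc_sigmask, &oucp->uc_sigmask);
  }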

This patch adds a check to __set_current_blocked() that avoids taking
the sighand->siglock spinlock when the signal mask does not change. This
prevents unneeded spinlock contention when many threads call
sigprocmask() concurrently.

With this patch applied, the spinlock contention in sigprocmask() was
gone.

The check is currently only active on 64-bit architectures, where the
whole signal mask fits into a single word and can be read atomically
without taking the lock.
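
For reference, this kind of contention can be reproduced with a simple
multi-threaded loop that keeps re-installing an unchanged signal mask.
The program below is a hypothetical test sketch, not part of the patch
or of the original database workload; the thread and loop counts are
arbitrary:

  #include <pthread.h>
  #include <signal.h>
  #include <stdlib.h>

  #define NTHREADS 64
  #define NLOOPS   1000000

  /*
   * Each thread repeatedly re-installs its current signal mask.  Every
   * pthread_sigmask() call ends up in __set_current_blocked(); without
   * the lockless check, each iteration takes the shared sighand->siglock
   * even though the mask never changes.
   */
  static void *hammer(void *arg)
  {
  	sigset_t set;
  	int i;

  	pthread_sigmask(SIG_SETMASK, NULL, &set);	  /* read current mask */
  	for (i = 0; i < NLOOPS; i++)
  		pthread_sigmask(SIG_SETMASK, &set, NULL); /* no-op update */
  	return NULL;
  }

  int main(void)
  {
  	pthread_t tid[NTHREADS];
  	int i;

  	for (i = 0; i < NTHREADS; i++)
  		if (pthread_create(&tid[i], NULL, hammer, NULL))
  			exit(1);
  	for (i = 0; i < NTHREADS; i++)
  		pthread_join(tid[i], NULL);
  	return 0;
  }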

Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
---
 kernel/signal.c |   10 ++++++++++
 1 files changed, 10 insertions(+), 0 deletions(-)

diff --git a/kernel/signal.c b/kernel/signal.c
index af21afc..5850b11 100644
--- a/kernel/signal.c
+++ b/kernel/signal.c
@@ -2485,6 +2485,16 @@ void __set_current_blocked(const sigset_t *newset)
 {
 	struct task_struct *tsk = current;
 
+	/*
+	 * In case the signal mask hasn't changed, we don't need to take
+	 * the lock. As the blocked mask can be modified by other CPUs, it
+	 * has to be read atomically without the lock, which is only
+	 * possible when the whole mask fits into a single word (64-bit).
+	 */
+#if _NSIG_WORDS == 1
+	if (READ_ONCE(tsk->blocked.sig[0]) == newset->sig[0])
+		return;
+#endif
 	spin_lock_irq(&tsk->sighand->siglock);
 	__set_task_blocked(tsk, newset);
 	spin_unlock_irq(&tsk->sighand->siglock);
-- 
1.7.1
