From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753234AbbJOTB1 (ORCPT );
	Thu, 15 Oct 2015 15:01:27 -0400
Received: from g1t6225.austin.hp.com ([15.73.96.126]:41401 "EHLO
	g1t6225.austin.hp.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1753024AbbJOTBU (ORCPT );
	Thu, 15 Oct 2015 15:01:20 -0400
Message-ID: <1444935676.29506.15.camel@j-VirtualBox>
Subject: Re: [PATCH v2 0/4] timer: Improve itimers scalability
From: Jason Low
To: Ingo Molnar
Cc: Jason Low, Peter Zijlstra, Thomas Gleixner,
	linux-kernel@vger.kernel.org, Oleg Nesterov, "Paul E. McKenney",
	Frederic Weisbecker, Davidlohr Bueso, Steven Rostedt,
	Andrew Morton, George Spelvin, hideaki.kimura@hpe.com,
	terry.rudd@hpe.com, scott.norton@hpe.com
Date: Thu, 15 Oct 2015 12:01:16 -0700
In-Reply-To: <20151015084702.GA16953@gmail.com>
References: <1444849677-29330-1-git-send-email-jason.low2@hp.com>
	<20151015084702.GA16953@gmail.com>
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.2.3-0ubuntu6
Content-Transfer-Encoding: 7bit
Mime-Version: 1.0
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, 2015-10-15 at 10:47 +0200, Ingo Molnar wrote:
> * Jason Low wrote:
> 
> > While running a database workload on a 16 socket machine, there were
> > scalability issues related to itimers. The following link contains a
> > more detailed summary of the issues at the application level.
> > 
> > https://lkml.org/lkml/2015/8/26/737
> > 
> > Commit 1018016c706f addressed the issue with the thread_group_cputimer
> > spinlock taking up a significant portion of total run time.
> > This patch series addresses the secondary issue where a lot of time is
> > spent trying to acquire the sighand lock. It was found in some cases
> > that 200+ threads were simultaneously contending for the same sighand
> > lock, reducing throughput by more than 30%.
> > 
> > With this patch set (along with commit 1018016c706f mentioned above),
> > the performance hit of itimers almost completely goes away on the
> > 16 socket system.
> > 
> > Jason Low (4):
> >   timer: Optimize fastpath_timer_check()
> >   timer: Check thread timers only when there are active thread timers
> >   timer: Convert cputimer->running to bool
> >   timer: Reduce unnecessary sighand lock contention
> > 
> >  include/linux/init_task.h      |    3 +-
> >  include/linux/sched.h          |    9 ++++--
> >  kernel/fork.c                  |    2 +-
> >  kernel/time/posix-cpu-timers.c |   63 ++++++++++++++++++++++++++++-------
> >  4 files changed, 54 insertions(+), 23 deletions(-)
> 
> Is there some itimers benchmark that can be used to measure the effects
> of these changes?

Yes, we also wrote a micro benchmark which generates cache misses and
measures the average cost of each cache miss (with itimers enabled). We
used this while writing and testing patches, since it takes a bit longer
to set up and run the database.

Jason