From: Rik van Riel <riel@redhat.com>
To: Davidlohr Bueso <davidlohr.bueso@hp.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>,
Emmanuel Benisty <benisty.e@gmail.com>,
"Vinod, Chegu" <chegu_vinod@hp.com>,
"Low, Jason" <jason.low2@hp.com>,
Peter Zijlstra <a.p.zijlstra@chello.nl>,
"H. Peter Anvin" <hpa@zytor.com>,
Andrew Morton <akpm@linux-foundation.org>,
aquini@redhat.com, Michel Lespinasse <walken@google.com>,
Ingo Molnar <mingo@kernel.org>,
Larry Woodman <lwoodman@redhat.com>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
Steven Rostedt <rostedt@goodmis.org>,
Thomas Gleixner <tglx@linutronix.de>
Subject: Re: [PATCH v2 0/4] ipc: reduce ipc lock contention
Date: Tue, 05 Mar 2013 12:10:01 -0500 [thread overview]
Message-ID: <513626E9.2040509@redhat.com> (raw)
In-Reply-To: <1362476149.2225.50.camel@buesod1.americas.hpqcorp.net>

On 03/05/2013 04:35 AM, Davidlohr Bueso wrote:
> 2) While on an Oracle swingbench DSS (data mining) workload the
> improvements are not as exciting as with Rik's benchmark, we can see
> some positive numbers. For an 8 socket machine the following are the
> percentages of %sys time incurred in the ipc lock:
>
> Baseline (3.9-rc1):
> 100 swingbench users: 8.74%
> 400 swingbench users: 21.86%
> 800 swingbench users: 84.35%
>
> With this patchset:
> 100 swingbench users: 8.11%
> 400 swingbench users: 19.93%
> 800 swingbench users: 77.69%
Does the swingbench DSS workload use multiple semaphores, or
just one?
Your patches look like a great start to make the semaphores
more scalable. If the swingbench DSS workload uses multiple
semaphores, I have ideas for follow-up patches to make things
scale better.
What does ipcs output look like while running swingbench DSS?
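For reference, this is roughly what I mean: the nsems column of ipcs -s shows how many semaphores each array holds, and the per-id view shows the individual values (the SEMID below is a placeholder for a real id taken from the first command):

```shell
# List all semaphore arrays; nsems is the array size.
ipcs -s

# Per-array detail for one id, including each semaphore's value.
# ipcs -s -i SEMID
```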
--
All rights reversed