From: Waiman Long <waiman.long@hp.com>
To: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
Ingo Molnar <mingo@redhat.com>, Arnd Bergmann <arnd@arndb.de>,
linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org,
Scott J Norton <scott.norton@hp.com>,
Douglas Hatch <doug.hatch@hp.com>
Subject: Re: [PATCH v2 2/2] locking/qrwlock: Don't contend with readers when setting _QW_WAITING
Date: Fri, 12 Jun 2015 18:58:28 -0400 [thread overview]
Message-ID: <557B6414.4080800@hp.com> (raw)
In-Reply-To: <20150612084543.GA24472@gmail.com>
On 06/12/2015 04:45 AM, Ingo Molnar wrote:
> * Waiman Long <waiman.long@hp.com> wrote:
>
>>> Mind posting the microbenchmark?
>> I have attached the tool that I used for testing.
> Thanks, that's interesting!
>
> Btw., we could also do something like this in user-space, in tools/perf/bench/, we
> have no 'perf bench locking' subcommand yet.
>
> We already build and measure simple x86 kernel methods there such as memset() and
> memcpy():
>
> triton:~/tip> perf bench mem memcpy -r all
> # Running 'mem/memcpy' benchmark:
>
> Routine default (Default memcpy() provided by glibc)
> # Copying 1MB Bytes ...
>
> 1.385195 GB/Sec
> 4.982462 GB/Sec (with prefault)
>
> Routine x86-64-unrolled (unrolled memcpy() in arch/x86/lib/memcpy_64.S)
> # Copying 1MB Bytes ...
>
> 1.627604 GB/Sec
> 5.336407 GB/Sec (with prefault)
>
> Routine x86-64-movsq (movsq-based memcpy() in arch/x86/lib/memcpy_64.S)
> # Copying 1MB Bytes ...
>
> 2.132233 GB/Sec
> 4.264465 GB/Sec (with prefault)
>
> Routine x86-64-movsb (movsb-based memcpy() in arch/x86/lib/memcpy_64.S)
> # Copying 1MB Bytes ...
>
> 1.490935 GB/Sec
> 7.128193 GB/Sec (with prefault)
>
> Locking primitives would certainly be more complex to build in user-space - but we
> could shuffle things around in kernel headers as well to make it easier to test in
> user-space.
>
> That's how we can build lockdep in user-space for example, see tools/lib/lockdep.
>
> Just a thought.
>
> Thanks,
>
> Ingo
I guess we can build user-space versions of spinlocks and rwlocks, but we
can't do that for sleeping locks like mutexes and rwsems. Preemption in
user space will also affect how those locking tests behave. Anyway, I
will give some thought to how to do that in perf bench when I have time.
Cheers,
Longman
2015-06-09 15:19 [PATCH 0/2 v2] locking/qrwlock: Fix interrupt handling problem Waiman Long
2015-06-09 15:19 ` [PATCH v2 1/2] locking/qrwlock: Fix bug in interrupt handling code Waiman Long
2015-06-11 14:21 ` Will Deacon
2015-06-13 3:16 ` Waiman Long
2015-06-09 15:19 ` [PATCH v2 2/2] locking/qrwlock: Don't contend with readers when setting _QW_WAITING Waiman Long
2015-06-10 7:35 ` Ingo Molnar
2015-06-10 16:28 ` Waiman Long
2015-06-12 8:45 ` Ingo Molnar
2015-06-12 22:58 ` Waiman Long [this message]
2015-06-19 17:59 ` [tip:locking/core] locking/qrwlock: Don' t " tip-bot for Waiman Long