From: Vikram Mulukutla <markivx@codeaurora.org>
To: Will Deacon <will.deacon@arm.com>
Cc: qiaozhou <qiaozhou@asrmicro.com>,
Thomas Gleixner <tglx@linutronix.de>,
John Stultz <john.stultz@linaro.org>,
sboyd@codeaurora.org, LKML <linux-kernel@vger.kernel.org>,
Wang Wilbur <wilburwang@asrmicro.com>,
Marc Zyngier <marc.zyngier@arm.com>,
Peter Zijlstra <peterz@infradead.org>,
linux-kernel-owner@vger.kernel.org, sudeep.holla@arm.com
Subject: Re: [Question]: try to fix contention between expire_timers and try_to_del_timer_sync
Date: Fri, 25 Aug 2017 13:25:42 -0700 [thread overview]
Message-ID: <fd24e530d09f31656e6df4c6ecbbb6e0@codeaurora.org> (raw)
In-Reply-To: <9f86bd426bbaede9de6d38cb047bd6fa@codeaurora.org>
On 2017-08-25 12:48, Vikram Mulukutla wrote:
>
> If I understand the code correctly, the upper 32 bits of an ARM64
> virtual address will overflow when 1 is added to it, and so we'll keep
> WFE'ing on every subsequent cpu_relax invoked from the same PC, until
> we cross the hard-coded threshold, right?
>
Oops, I misread that. The second time we enter cpu_relax() from the same
PC, we do a WFE. After that we stop doing the WFE until the per-cpu
counter hits the threshold. So with a higher threshold, we wait for more
cpu_relax() calls before issuing the WFE again.
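For my own sanity, here's roughly how I'm reading the proposed logic.
This is just a sketch of my understanding; relax_last_pc, relax_count
and RELAX_WFE_THRESHOLD are placeholder names, not the identifiers from
your patch:

/* Sketch only -- placeholder names, not the actual patch. */
#include <linux/percpu.h>
#include <asm/barrier.h>

#define RELAX_WFE_THRESHOLD	10000	/* made-up value */

static DEFINE_PER_CPU(unsigned long, relax_last_pc);
static DEFINE_PER_CPU(unsigned int, relax_count);

static inline void my_cpu_relax(void)
{
	unsigned long pc = (unsigned long)__builtin_return_address(0);

	if (pc != __this_cpu_read(relax_last_pc)) {
		/* First call from this PC: just remember where we are. */
		__this_cpu_write(relax_last_pc, pc);
		__this_cpu_write(relax_count, 0);
	} else {
		unsigned int count = __this_cpu_inc_return(relax_count);

		/*
		 * Second call from the same PC, or the counter has
		 * climbed back up to the threshold: wait for an event
		 * (the event stream bounds the sleep), then restart
		 * the count.
		 */
		if (count == 1 || count >= RELAX_WFE_THRESHOLD) {
			wfe();
			__this_cpu_write(relax_count, 1);
		}
	}
}

i.e. one "free" spin from a new PC, a WFE on the second call, and then
one WFE every RELAX_WFE_THRESHOLD calls after that.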
Conversely, a lower threshold means we hit the WFE branch sooner. Since
my test keeps the while loop going for a full 5 seconds, a lower
threshold will obviously result in more WFEs and therefore a lower
lock-acquired count.
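In case it helps, the test loop is roughly the shape below (paraphrased
from memory; hammer_lock() and the other names are placeholders). One
copy runs pinned to the big CPU and one to the little CPU, and the
lock-acquired count is just how many times each CPU gets through the
critical section in those 5 seconds:

/* Paraphrased test loop; names are placeholders. */
#include <linux/spinlock.h>
#include <linux/jiffies.h>

static void hammer_lock(spinlock_t *lock, unsigned long *acquired)
{
	unsigned long end = jiffies + 5 * HZ;

	while (time_before(jiffies, end)) {
		spin_lock(lock);
		(*acquired)++;		/* tiny critical section */
		spin_unlock(lock);
		cpu_relax();		/* where the WFE behaviour bites */
	}
}

With a lower threshold, more of those cpu_relax() calls turn into WFEs,
so each iteration takes longer and the acquired count drops.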
I guess we want a threshold that's high, but not so high that the little
CPU ends up waiting a long time for the big CPU to count up to it. Is
that correct?
Thanks,
Vikram
--
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project
Thread overview: 23+ messages
[not found] <3d2459c7-defd-a47e-6cea-007c10cecaac@asrmicro.com>
2017-07-26 14:16 ` [Question]: try to fix contention between expire_timers and try_to_del_timer_sync Thomas Gleixner
2017-07-27 1:29 ` qiaozhou
2017-07-27 15:14 ` Will Deacon
2017-07-27 15:19 ` Thomas Gleixner
2017-07-28 1:10 ` Vikram Mulukutla
2017-07-28 9:28 ` Peter Zijlstra
2017-07-28 19:11 ` Vikram Mulukutla
2017-07-28 9:28 ` Will Deacon
2017-07-28 19:09 ` Vikram Mulukutla
2017-07-31 11:20 ` qiaozhou
2017-08-01 7:37 ` qiaozhou
2017-08-03 23:32 ` Vikram Mulukutla
2017-08-04 3:15 ` qiaozhou
2017-07-31 13:13 ` Will Deacon
2017-08-03 23:25 ` Vikram Mulukutla
2017-08-15 18:40 ` Will Deacon
2017-08-25 19:48 ` Vikram Mulukutla
2017-08-25 20:25 ` Vikram Mulukutla [this message]
2017-08-28 23:12 ` Vikram Mulukutla
2017-09-06 11:19 ` qiaozhou
2017-09-25 11:02 ` qiaozhou
2017-10-02 14:14 ` Will Deacon
2017-10-11 8:33 ` qiaozhou