public inbox for linux-kernel@vger.kernel.org
From: Daniel Thompson <daniel.thompson@linaro.org>
To: Stephen Boyd <sboyd@codeaurora.org>
Cc: Thomas Gleixner <tglx@linutronix.de>,
	John Stultz <john.stultz@linaro.org>,
	linux-kernel@vger.kernel.org, patches@linaro.org,
	linaro-kernel@lists.linaro.org,
	Sumit Semwal <sumit.semwal@linaro.org>,
	Steven Rostedt <rostedt@goodmis.org>
Subject: Re: [PATCH v3 0/4] sched_clock: Optimize and avoid deadlock during read from NMI
Date: Sun, 08 Feb 2015 20:09:29 +0800	[thread overview]
Message-ID: <54D751F9.10002@linaro.org> (raw)
In-Reply-To: <54D33263.4060707@linaro.org>

On 05/02/15 17:05, Daniel Thompson wrote:
> On 05/02/15 00:50, Stephen Boyd wrote:
>> On 01/30, Daniel Thompson wrote:
>>> This patchset optimizes the generic sched_clock implementation to
>>> significantly reduce the data cache profile. It also makes it safe to call
>>> sched_clock() from NMI (or FIQ on ARM).
>>>
>>> The data cache profile of sched_clock() in both the original code and
>>> my previous patch was somewhere between 2 and 3 (64-byte) cache lines,
>>> depending on alignment of struct clock_data. After patching, the cache
>>> profile for the normal case should be a single cacheline.
>>>
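[A sketch of the cache-line point above: pack every field the sched_clock() fast path touches into one 64-byte line. Field names here are illustrative of the approach, not necessarily the exact layout in the patch.]

```c
#include <stdint.h>

/* Hypothetical read-side state, gathered so the common case touches a
 * single 64-byte cache line. */
struct clock_read_data {
	uint64_t epoch_ns;                  /* ns value at the last update */
	uint64_t epoch_cyc;                 /* counter value at the last update */
	uint64_t sched_clock_mask;          /* mask for counter wrap-around */
	uint64_t (*read_sched_clock)(void); /* raw counter accessor */
	uint32_t mult;                      /* cyc -> ns multiplier */
	uint32_t shift;                     /* cyc -> ns shift */
};

/* 40 bytes: fits one 64-byte line, provided the struct is line-aligned. */
_Static_assert(sizeof(struct clock_read_data) <= 64,
	       "read-side data must fit one cache line");

unsigned long cd_size(void) { return sizeof(struct clock_read_data); }
```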
>>> NMI safety was tested on i.MX6 with perf drowning the system in FIQs and
>>> using the perf handler to check that sched_clock() returned monotonic
>>> values. At the same time I forcefully reduced kt_wrap so that
>>> update_sched_clock() is being called at >1000Hz.
>>>
>>> Without the patches the above system is grossly unstable, surviving
>>> [9K,115K,25K] perf event cycles during three separate runs. With the
>>> patch I ran for over 9M perf event cycles before getting bored.
>>
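[For readers following along, the NMI-safe scheme described above can be sketched roughly as follows: keep two banks of clock data and bump a sequence counter when switching between them, so a reader that interrupts the updater (e.g. from NMI/FIQ) retries against a bank that is not being written. Names and layout here are illustrative only, not copied from the patch.]

```c
#include <stdint.h>

struct clk_data { uint64_t epoch_ns; uint64_t epoch_cyc; };

static volatile unsigned seq;   /* bank[seq & 1] is the live bank */
static struct clk_data bank[2];

void update(uint64_t ns, uint64_t cyc)
{
	unsigned s = seq;
	struct clk_data *w = &bank[(s + 1) & 1]; /* fill the idle bank */

	w->epoch_ns = ns;
	w->epoch_cyc = cyc;
	seq = s + 1;                             /* publish the new bank */
}

uint64_t read_epoch_ns(void)
{
	unsigned s;
	uint64_t ns;

	do {
		s = seq;
		ns = bank[s & 1].epoch_ns; /* live bank: not being written */
	} while (s != seq);                /* raced with update(): retry */
	return ns;
}
```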
>> I wanted to see if there was any speedup from these changes so I
>> made a tight loop around sched_clock() that ran for 10 seconds
>> and I ran it 10 times before and after this patch series:
>>
>>         unsigned long long clock, start_clock;
>>         int count = 0; 
>>
>>         clock = start_clock = sched_clock();
>>         while ((clock - start_clock) < 10ULL * NSEC_PER_SEC) {
>>                 clock = sched_clock();
>>                 count++;
>>         }
>>
>>         pr_info("Made %d calls in %llu ns\n", count, clock - start_clock);
>>
>> Before
>> ------
>>  Made 19218953 calls in 10000000439 ns
>>  Made 19212790 calls in 10000000438 ns
>>  Made 19217121 calls in 10000000142 ns
>>  Made 19227304 calls in 10000000142 ns
>>  Made 19217559 calls in 10000000142 ns
>>  Made 19230193 calls in 10000000290 ns
>>  Made 19212715 calls in 10000000290 ns
>>  Made 19234446 calls in 10000000438 ns
>>  Made 19226274 calls in 10000000439 ns
>>  Made 19236118 calls in 10000000143 ns
>>  
>> After
>> -----
>>  Made 19434797 calls in 10000000438 ns
>>  Made 19435733 calls in 10000000439 ns
>>  Made 19434499 calls in 10000000438 ns
>>  Made 19438482 calls in 10000000438 ns
>>  Made 19435604 calls in 10000000142 ns
>>  Made 19438551 calls in 10000000438 ns
>>  Made 19444550 calls in 10000000290 ns
>>  Made 19437580 calls in 10000000290 ns
>>  Made 19439429 calls in 10000048142 ns
>>  Made 19439493 calls in 10000000438 ns
>>
>> So it seems to be a small improvement.
>>
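[For anyone wanting to repeat this measurement outside the kernel, a minimal userspace analog of the loop above: clock_gettime(CLOCK_MONOTONIC) stands in for sched_clock(), and the 10 s window is shortened to 100 ms. Helper names are made up for illustration.]

```c
#define _POSIX_C_SOURCE 199309L
#include <stdint.h>
#include <time.h>

#define WINDOW_NS 100000000ULL /* 100 ms instead of the 10 s used above */

static uint64_t now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
}

/* Count how many back-to-back clock reads fit in the window. */
uint64_t bench_calls(void)
{
	uint64_t start = now_ns(), clock = start, count = 0;

	while (clock - start < WINDOW_NS) {
		clock = now_ns();
		count++;
	}
	return count;
}
```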
> 
> Awesome!
> 
> I guess this is mostly the effect of simplifying the suspend logic since
> the changes to the cache profile probably wouldn't reveal much in such a
> tight loop.
> 
> I will re-run this after acting on your other review comments. BTW what
> device did you run on?

I ran the same test on my Snapdragon 600 board. The results are a little
odd: there is a quantization effect that I cannot easily explain, and the
results with the v4 patch seem almost too good to be true.

My results are below but I'd be very interested to see what results you
get with the v4 patch!

Latest (branchless approach):

Made 18736519 calls in 10000000439 ns
Made 19958774 calls in 10000000439 ns
Made 18736500 calls in 10000000587 ns
Made 21703993 calls in 10000000439 ns
Made 18734458 calls in 10000000142 ns
Made 18736175 calls in 10000000439 ns
Made 19961406 calls in 10000000291 ns
Made 19953920 calls in 10000000143 ns
Made 21709619 calls in 10000000290 ns
Made 18734077 calls in 10000000142 ns

v3:

Made 15971788 calls in 10000000438 ns
Made 14594357 calls in 10000000734 ns
Made 14590951 calls in 10000000735 ns
Made 14595048 calls in 10000000290 ns
Made 14595157 calls in 10000000143 ns
Made 14594117 calls in 10000000142 ns
Made 14597277 calls in 10000000142 ns
Made 14594472 calls in 10000000586 ns
Made 14601292 calls in 10000000587 ns
Made 15968630 calls in 10000000587 ns

Current:

Made 14274871 calls in 10000000587 ns
Made 15634136 calls in 10000000587 ns
Made 16453528 calls in 10000000142 ns
Made 14275854 calls in 10000000586 ns
Made 15634128 calls in 10000000438 ns
Made 14277672 calls in 10000000143 ns
Made 14282904 calls in 10000000290 ns
Made 14278673 calls in 10000000142 ns
Made 14276096 calls in 10000000290 ns
Made 14275336 calls in 10000000143 ns


Thread overview: 35+ messages
2015-01-21 16:53 [RFC PATCH] sched_clock: Avoid tearing during read from NMI Daniel Thompson
2015-01-21 17:29 ` John Stultz
2015-01-21 20:20   ` Daniel Thompson
2015-01-21 20:58   ` Stephen Boyd
2015-01-22 13:06 ` [PATCH v2] sched_clock: Avoid deadlock " Daniel Thompson
2015-01-30 19:03 ` [PATCH v3 0/4] sched_clock: Optimize and avoid " Daniel Thompson
2015-01-30 19:03   ` [PATCH v3 1/4] sched_clock: Match scope of read and write seqcounts Daniel Thompson
2015-01-30 19:03   ` [PATCH v3 2/4] sched_clock: Optimize cache line usage Daniel Thompson
2015-02-05  1:14     ` Stephen Boyd
2015-02-05 10:21       ` Daniel Thompson
2015-01-30 19:03   ` [PATCH v3 3/4] sched_clock: Remove suspend from clock_read_data Daniel Thompson
2015-01-30 19:03   ` [PATCH v3 4/4] sched_clock: Avoid deadlock during read from NMI Daniel Thompson
2015-02-05  1:23     ` Stephen Boyd
2015-02-05  1:48       ` Steven Rostedt
2015-02-05  6:23         ` Stephen Boyd
2015-02-05  0:50   ` [PATCH v3 0/4] sched_clock: Optimize and avoid " Stephen Boyd
2015-02-05  9:05     ` Daniel Thompson
2015-02-08 12:09       ` Daniel Thompson [this message]
2015-02-09 22:08         ` Stephen Boyd
2015-02-08 12:02 ` [PATCH v4 0/5] " Daniel Thompson
2015-02-08 12:02   ` [PATCH v4 1/5] sched_clock: Match scope of read and write seqcounts Daniel Thompson
2015-02-08 12:02   ` [PATCH v4 2/5] sched_clock: Optimize cache line usage Daniel Thompson
2015-02-09  1:28     ` Will Deacon
2015-02-09  9:47       ` Daniel Thompson
2015-02-10  2:37         ` Stephen Boyd
2015-02-08 12:02   ` [PATCH v4 3/5] sched_clock: Remove suspend from clock_read_data Daniel Thompson
2015-02-08 12:02   ` [PATCH v4 4/5] sched_clock: Remove redundant notrace from update function Daniel Thompson
2015-02-08 12:02   ` [PATCH v4 5/5] sched_clock: Avoid deadlock during read from NMI Daniel Thompson
2015-02-13  3:49   ` [PATCH v4 0/5] sched_clock: Optimize and avoid " Stephen Boyd
2015-03-02 15:56 ` [PATCH v5 " Daniel Thompson
2015-03-02 15:56   ` [PATCH v5 1/5] sched_clock: Match scope of read and write seqcounts Daniel Thompson
2015-03-02 15:56   ` [PATCH v5 2/5] sched_clock: Optimize cache line usage Daniel Thompson
2015-03-02 15:56   ` [PATCH v5 3/5] sched_clock: Remove suspend from clock_read_data Daniel Thompson
2015-03-02 15:56   ` [PATCH v5 4/5] sched_clock: Remove redundant notrace from update function Daniel Thompson
2015-03-02 15:56   ` [PATCH v5 5/5] sched_clock: Avoid deadlock during read from NMI Daniel Thompson
