From mboxrd@z Thu Jan 1 00:00:00 1970
From: Grant Grundler
Date: Thu, 20 Nov 2003 17:25:45 +0000
Subject: Re: [PATCH] - sched_clock() broken for ia64 SN platform
Message-Id:
List-Id:
References:
In-Reply-To:
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
To: linux-ia64@vger.kernel.org

On Wed, Nov 19, 2003 at 08:09:15PM -0800, John Hawkes wrote:
> I'd like to hear an argument about why sched_clock() needs sub-microsecond
> accuracy, instead of just using jiffies, when one use of sched_clock() is to
> compare a delta time against cache_decay_ticks, which is a
> "jiffies"-granularity value, and the other use is to determine the relative
> computebound-vs-interactive characteristics of the process.

In general, it seems like bouncing the jiffies cacheline around is more
of a problem than the need for accuracy.

This sounds similar to a problem Jack Steiner wrote about before
(updating interrupt counts):

| Updating the counter causes a cache line to be bounced between
| cpus at a rate of at least HZ*active_cpus. (The number of bus transactions
| is at least 2X higher because the line is first obtained for
| shared usage, then upgraded to modified. In addition, multiple references
| are made to the line for each interrupt. On a big system, it is unlikely that
| a cpu can hold the line for entire time that the interrupt is being
| serviced).

hth,
grant