public inbox for linux-kernel@vger.kernel.org
From: Nick Piggin <nickpiggin@yahoo.com.au>
To: Roland McGrath <roland@redhat.com>
Cc: Linus Torvalds <torvalds@osdl.org>, Andrew Morton <akpm@osdl.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Ulrich Drepper <drepper@redhat.com>,
	Christoph Lameter <clameter@sgi.com>
Subject: Re: [PATCH] CPU time clock support in clock_* syscalls
Date: Tue, 05 Oct 2004 16:43:45 +1000	[thread overview]
Message-ID: <416242A1.3060203@yahoo.com.au> (raw)
In-Reply-To: <200410050515.i955Fa15004063@magilla.sf.frob.com>

Roland McGrath wrote:

>  /*
> + * This is called on clock ticks and on context switches.
> + * Bank in p->sched_time the ns elapsed since the last tick or switch.
> + */
> +static void update_cpu_clock(task_t *p, runqueue_t *rq,
> +			     unsigned long long now)
> +{
> +	unsigned long long last = max(p->timestamp, rq->timestamp_last_tick);
> +	p->sched_time += now - last;
> +}

This looks wrong. But update_cpu_clock is never called from another
CPU, in which case you don't need to worry about timestamp_last_tick
at all.
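To illustrate the point, here is a user-space sketch of the two variants (not kernel code; task_t and runqueue_t are reduced to just the timestamps involved, and the model folds the timestamp update into the accounting function for brevity). When the function only ever runs on the task's own CPU at tick or switch time, p->timestamp is at least as recent as anything the max() could clamp against, so the simpler form banks the same amount:

```c
struct rq {
	unsigned long long timestamp_last_tick;
};

struct task {
	unsigned long long timestamp;	/* last tick/switch seen by this task */
	unsigned long long sched_time;	/* banked ns */
};

static unsigned long long maxull(unsigned long long a, unsigned long long b)
{
	return a > b ? a : b;
}

/* The patch's version: clamp to the runqueue's last tick. */
static void update_cpu_clock(struct task *p, struct rq *rq,
			     unsigned long long now)
{
	unsigned long long last = maxull(p->timestamp, rq->timestamp_last_tick);
	p->sched_time += now - last;
	p->timestamp = now;
}

/* Local-CPU-only version: p->timestamp alone is enough. */
static void update_cpu_clock_local(struct task *p, unsigned long long now)
{
	p->sched_time += now - p->timestamp;
	p->timestamp = now;
}
```

In the local case p->timestamp >= timestamp_last_tick always holds, so the max() is a no-op and both variants agree.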

> +
> +/*
> + * Return current->sched_time plus any more ns on the sched_clock
> + * that have not yet been banked.
> + */
> +unsigned long long current_sched_time(const task_t *tsk)
> +{
> +	unsigned long long ns;
> +	local_irq_disable();
> +	ns = max(tsk->timestamp, task_rq(tsk)->timestamp_last_tick);
> +	ns = tsk->sched_time + (sched_clock() - ns);
> +	local_irq_enable();
> +	return ns;
> +}
> +

This doesn't perform the timestamp_last_tick "normalisation" properly
either.

It also seems to conveniently ignore locking when reading those values
off another CPU. That is not a big deal for dynamic load calculations,
but I'm not so sure it is acceptable for your usage.
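The concern can be sketched in user space (illustrative only; the names here do not correspond to real kernel helpers, and in the kernel the lock in question would be the remote runqueue's spinlock). The three values read (timestamp, timestamp_last_tick, sched_time) form one snapshot and want to be read under the same lock the writer holds when updating them:

```c
#include <pthread.h>

struct rq {
	pthread_mutex_t lock;	/* stands in for the runqueue lock */
	unsigned long long timestamp_last_tick;
};

struct task {
	unsigned long long timestamp;
	unsigned long long sched_time;
};

/* Read a consistent snapshot of another task's accumulated time. */
static unsigned long long read_sched_time(struct task *t, struct rq *rq,
					  unsigned long long now)
{
	unsigned long long last, ns;

	pthread_mutex_lock(&rq->lock);
	last = t->timestamp > rq->timestamp_last_tick
		? t->timestamp : rq->timestamp_last_tick;
	ns = t->sched_time + (now - last);
	pthread_mutex_unlock(&rq->lock);
	return ns;
}
```

Without the lock, a concurrent update on the other CPU can bank the delta into sched_time after the reader has already added the same delta itself, double-counting it (and on 32-bit, the 64-bit loads can tear).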

Lastly, even when timestamp_last_tick is used correctly, I think
sched_clock will still drift around slightly, especially for a task
that switches CPUs a lot (though the drift is not restricted to tasks
that move between CPUs). Again, this is something I haven't thought
about much, because it is not a big deal for the current usage.
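To make the migration case concrete, here is a toy model with illustrative numbers (plain C, nothing kernel-specific): suppose CPU1's sched_clock runs 50ns ahead of CPU0's, and a task last stamped on CPU0 is next accounted on CPU1. The naive delta mixes the two clocks and over-banks by the full offset; clamping to the destination runqueue's last tick bounds the error by one tick period:

```c
/* Naive accounting: trust the task's old timestamp blindly. */
static unsigned long long bank_naive(unsigned long long timestamp,
				     unsigned long long now)
{
	return now - timestamp;
}

/* Clamped accounting: never reach back before the destination CPU's
 * last tick, so a stale cross-CPU timestamp cannot inflate the delta
 * by more than one tick's worth. */
static unsigned long long bank_clamped(unsigned long long timestamp,
				       unsigned long long last_tick,
				       unsigned long long now)
{
	unsigned long long last = timestamp > last_tick ? timestamp : last_tick;
	return now - last;
}
```

With timestamp = 100 (CPU0 clock), 20ns of real time elapsed, and CPU1 reading now = 170 (120 plus its 50ns offset) with its last tick at 160: the naive form banks 70ns, the clamped form banks 10ns. Neither is exact, which is the residual drift being pointed out above.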



Thread overview: 18+ messages
2004-10-05  5:15 [PATCH] CPU time clock support in clock_* syscalls Roland McGrath
2004-10-05  5:20 ` Andrew Morton
2004-10-05  5:27   ` Roland McGrath
2004-10-05  6:43 ` Nick Piggin [this message]
2004-10-05 18:28   ` Roland McGrath
2004-10-05 23:18     ` Nick Piggin
2004-10-06  0:33       ` Roland McGrath
2004-10-06  0:51         ` Nick Piggin
2004-10-05 15:39 ` Christoph Lameter
2004-10-05 18:38   ` Roland McGrath
2004-10-05 20:53     ` Christoph Lameter
2004-10-05 21:22       ` Roland McGrath
2004-10-05 21:28         ` Christoph Lameter
2004-10-06  0:35           ` Roland McGrath
2004-10-06  1:21             ` Christoph Lameter
2004-10-07 19:45               ` Roland McGrath
2004-10-07 20:24                 ` Christoph Lameter
2004-10-05 17:29 ` Christoph Lameter
