public inbox for linux-ia64@vger.kernel.org
From: "Chen, Kenneth W" <kenneth.w.chen@intel.com>
To: linux-ia64@vger.kernel.org
Subject: RE: Next Revision of timer patches with split into nanosecond, time_interpolator and debug patch
Date: Wed, 14 Jul 2004 00:46:57 +0000	[thread overview]
Message-ID: <200407140044.i6E0i8Y04656@unix-os.sc.intel.com> (raw)
In-Reply-To: <Pine.LNX.4.58.0407131137160.3189@schroedinger.engr.sgi.com>

Christoph Lameter wrote on Tuesday, July 13, 2004 11:47 AM
>
> This version has
>  1. Patch split into three pieces:
>     A) the nanosecond patch. Probably controversial. Provides gettimeofday
>        using nsec resolution and patches posix-timers so that they
>        actually return nanosecond resolution.
>     B) The time interpolator patches to implement generic routines to
>        use any available counter for a time interpolator and provide IA64
>        fastcall support.
>     C) Patch to add debugging features. This now includes counters for
>        the various behaviors of the asm routines. Fallback, retries
>        and error conditions.
>
>  2. Style changes and conformance to calling conventions
>  3. Further minor fixes.
>
...

> +	add r2 = TI_FLAGS+IA64_TASK_SIZE,r16
> +	tnat.nz p6,p0 = r32               // guard against NaT args
> +(p6)    br.cond.spnt.few .fail_einval

This adds stall cycles because of the RAW dependency on p6.  A NaT arg
isn't the common case and shouldn't stall the fast path.  This branch
can be collapsed with the one that checks the NaT bit of r33.


> +	ld4 r2 = [r2]
> +	movl r31 = xtime_lock
> +	tnat.nz p6,p0 = r33
> +	movl r30 = time_interpolator

Too many movl instructions; consider using @gprel addressing instead?


> +	ld4 r20 = [r31]		//  xtime_lock.sequence
> +	mf

Convert to ld4.acq?  The acquire semantics would make the separate mf
unnecessary.
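In C11 terms the suggestion is roughly the following (an illustrative
analogue only, not the kernel code; function names are made up):

```c
#include <stdatomic.h>

/* As written: a plain load of the sequence word followed by a
 * separate full fence (the mf instruction). */
unsigned read_seq_fenced(_Atomic unsigned *seq)
{
	unsigned s = atomic_load_explicit(seq, memory_order_relaxed); /* ld4 */
	atomic_thread_fence(memory_order_acquire);                    /* mf  */
	return s;
}

/* As suggested: fold the ordering into the load itself, the way
 * ld4.acq does on IA64, so no separate fence is needed. */
unsigned read_seq_acq(_Atomic unsigned *seq)
{
	return atomic_load_explicit(seq, memory_order_acquire);
}
```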


> +	ld8 r24 = [r24]		// time_interpolator_last_counter
> +(p6)	mov r2 = ar.itc		// CPU_TIMER
> +(p9)	br.spnt.many fsys_fallback_syscall
> +(p7)	ld8 r2 = [r29]		// readq
> +(p8)	ld4 r2 = [r29]		// readw
> +	and r20 = ~1,r20		// Make sequence even to force retry if odd

Need .pred.rel.mutex p6,p7,p8 to keep assembler quiet.


> + 	ld8 r21 = [r21]		// xtime.tv_sec
> +	ld8 r22 = [r22]		// xtime.tv_nsec

Probably not going to help much, but these should be scheduled much
earlier, right after the sequence read, to hide part of the memory
latency.
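For reference, the retry logic the assembly implements (make the
sequence even, read xtime, retry if the sequence changed) is the
standard seqlock reader pattern.  Roughly, in C11 terms, with
illustrative names that are not the kernel's:

```c
#include <stdatomic.h>

struct timespec64 { long long tv_sec, tv_nsec; };

static _Atomic unsigned xtime_seq;     /* stand-in for xtime_lock.sequence */
static struct timespec64 xtime_shadow; /* stand-in for xtime */

struct timespec64 read_xtime(void)
{
	struct timespec64 t;
	unsigned seq;

	do {
		seq = atomic_load_explicit(&xtime_seq, memory_order_acquire);
		seq &= ~1u;      /* odd means a writer is mid-update: force a retry */
		t = xtime_shadow;
		/* order the data reads before the sequence re-read */
		atomic_thread_fence(memory_order_acquire);
	} while (atomic_load_explicit(&xtime_seq, memory_order_relaxed) != seq);

	return t;
}
```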


> +ENTRY(fsys_clock_gettime)

The implementation looks the same as fsys_gettimeofday except for the
final divide-by-1000 step.  Can anything be done to merge the two
functions?
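In C terms, the obvious factoring would be a single core routine
returning nanoseconds, with gettimeofday adding only the final
division (a sketch with made-up names and a fixed dummy timestamp,
not a patch):

```c
struct timespec64 { long long tv_sec, tv_nsec; };
struct timeval64  { long long tv_sec, tv_usec; };

/* Common core: both entry points would share everything up to here
 * (seqlock read, interpolator offset, etc.). */
static struct timespec64 get_time_ns(void)
{
	struct timespec64 t = { 1089766017, 123456789 }; /* dummy value */
	return t;
}

struct timespec64 my_clock_gettime(void)
{
	return get_time_ns();               /* nanoseconds as-is */
}

struct timeval64 my_gettimeofday(void)
{
	struct timespec64 t = get_time_ns();
	/* the only difference: convert nanoseconds to microseconds */
	struct timeval64 tv = { t.tv_sec, t.tv_nsec / 1000 };
	return tv;
}
```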




Thread overview: 6+ messages
2004-07-13 18:47 Next Revison of timer patches with split into nanosecond, Christoph Lameter
2004-07-14  0:46 ` Chen, Kenneth W [this message]
2004-07-14  1:57 ` David Mosberger
2004-07-14  2:01 ` Next Revison of timer patches with split into nanosecond, time_interpolator and debug patch Matthew Wilcox
2004-07-14  2:06 ` Next Revison of timer patches with split into nanosecond, Christoph Lameter
2004-07-14  4:59 ` Next Revison of timer patches with split into nanosecond, time_interpolator and debug patch Chen, Kenneth W
