public inbox for linux-kernel@vger.kernel.org
From: Con Kolivas <kernel@kolivas.org>
To: ck@vds.kolivas.org
Cc: David Lang <david.lang@digitalinsight.com>,
	Bill Davidsen <davidsen@tmr.com>,
	linux kernel mailing list <linux-kernel@vger.kernel.org>
Subject: Re: [ANNOUNCE] Interbench v0.20 - Interactivity benchmark
Date: Thu, 14 Jul 2005 11:00:37 +1000	[thread overview]
Message-ID: <200507141100.41159.kernel@kolivas.org> (raw)
In-Reply-To: <200507141046.27788.kernel@kolivas.org>


On Thu, 14 Jul 2005 10:46, Con Kolivas wrote:
> On Thu, 14 Jul 2005 10:31, David Lang wrote:
> > On Thu, 14 Jul 2005, Con Kolivas wrote:
> > > On Thu, 14 Jul 2005 03:54, Bill Davidsen wrote:
> > >> Con Kolivas wrote:
> > >>> On Tue, 12 Jul 2005 21:57, David Lang wrote:
> > >>>> for audio and video this would seem to be a fairly simple scaling
> > >>>> factor (or just doing a fixed amount of work rather than a fixed
> > >>>> percentage of the CPU's worth of work), however for X it is probably
> > >>>> much more complicated (is the X load really linearly random in how
> > >>>> much work it does, or is it weighted towards small amounts with
> > >>>> occasional large amounts hitting? I would guess that at least beyond
> > >>>> a certain point the likelihood of that much work being needed would
> > >>>> be lower)
> > >>>
> > >>> Actually I don't disagree. What I mean by hardware changes is more
> > >>> along the lines of changing the hard disk type in the same setup.
> > >>> That's what I mean by careful with the benchmarking. Taking the
> > >>> results from an athlon XP and comparing it to an altix is silly for
> > >>> example.
> > >>
> > >> I'm going to cautiously disagree. If the CPU needed were scaled so it
> > >> represented a fixed number of cycles (operations, work units), then
> > >> the effect of a faster CPU would be shown. And the total power of all
> > >> attached CPUs should be taken into account; using HT or SMP does have
> > >> an effect on feel.
> > >
> > > That is rather hard to do because each architecture's interpretation
> > > of a fixed number of cycles is different, and it doesn't represent
> > > their speed in the real world. The calculation when interbench is
> > > first run to see how many "loops per ms" took quite a bit of effort:
> > > finding just how many loops each different cpu would do per ms, and
> > > then finding a way to make that not change through compiler-optimised
> > > code. The "loops per ms" parameter did not end up being proportional
> > > to cpu MHz except on the same cpu type.
> >
> > Right, but the amount of cpu required to do a specific task will also
> > vary significantly between CPU families. As long as the loops don't get
> > optimized away by the compiler, I think you can set up some loops to do
> > the same work on each CPU, even if they take significantly different
> > amounts of time. (As an off-the-wall, obviously untested example, you
> > could make your 'loop' be a calculation of Pi: for the 'audio' test you
> > compute the first 100 digits, for the video test you compute the first
> > 1000 digits, and for the X test you compute a random number of digits
> > between 10 and 10000.)
>
> Once again I don't disagree, and the current system of loops_per_ms does
> exactly that and can simply be used as a fixed number of loops already. My
> point was there'd be argument about what sort of "loop" or load should be
> used, as each cpu type would do different "loops" faster and they won't
> necessarily represent video, audio or X in the real world. Currently the
> loop in interbench is simply:
> 	for (i = 0 ; i < loops ; i++)
> 	     asm volatile("" : : : "memory");
>
> and if no one argues I can use that for fixed workload.

What I mean is that if you take the loops_per_ms from one machine and plug 
it in with the -l option on another, you can do that now without any 
modification to the interbench code.

Cheers,
Con



Thread overview: 22+ messages
2005-07-12 11:10 [ANNOUNCE] Interbench v0.20 - Interactivity benchmark Con Kolivas
2005-07-12 11:57 ` David Lang
2005-07-12 12:02   ` Con Kolivas
2005-07-12 12:17     ` David Lang
2005-07-12 12:23       ` Con Kolivas
2005-07-12 15:18       ` Lee Revell
2005-07-12 17:55         ` David Lang
2005-07-12 18:33           ` Lee Revell
2005-07-12 20:55       ` Al Boldi
2005-07-12 21:32         ` Con Kolivas
2005-07-13 17:54     ` Bill Davidsen
2005-07-14  0:21       ` Con Kolivas
2005-07-14  0:31         ` David Lang
2005-07-14  0:46           ` Con Kolivas
2005-07-14  1:00             ` Con Kolivas [this message]
2005-07-15 12:50         ` Bill Davidsen
2005-07-15 10:41           ` kernel
2005-07-18 14:51             ` Bill Davidsen
2005-07-13 11:27 ` szonyi calin
2005-07-13 17:34   ` Lee Revell
2005-07-13 23:57     ` Con Kolivas
2005-07-16 20:28       ` Lee Revell
