public inbox for linux-kernel@vger.kernel.org
From: Davide Libenzi <davidel@xmail.virusscreen.com>
To: "Bill Hartner" <bhartner@us.ibm.com>
Cc: Andrea Arcangeli <andrea@suse.de>,
	Mike Kravetz <mkravetz@sequent.com>,
	lse-tech@lists.sourceforge.net, linux-kernel@vger.kernel.org
Subject: Re: sched_test_yield benchmark
Date: Fri, 19 Jan 2001 08:35:55 -0800	[thread overview]
Message-ID: <01011908355500.01005@ewok.dev.mycio.com> (raw)
In-Reply-To: <OF6A979B75.1B1BFB9F-ON852569D9.004851F0@raleigh.ibm.com>

On Friday 19 January 2001 07:59, Bill Hartner wrote:
> Just a couple of notes on the sched_test_yield benchmark.
> I posted it to the mailing list in Dec.  I have a todo to get
> a home for it.  There are some issues though.  See below.
>
> (1) Beware of the changes in sys_sched_yield() for 2.4.0.  Depending
>     on how many processors on the test system and how many threads
>     created, schedule() may or may not be called when calling
>     sched_yield().

In your test you're using at least 16 tasks on an 8-way SMP, so schedule()
should always be called (if you're using my test suite, tasks are always
running).
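
For reference, here is roughly what the 2.4 optimization looks like (a
simplified, from-memory sketch of the SMP path, not the exact 2.4.0 source;
names like nr_running, cpu_curr() and SCHED_YIELD are as I remember them).
The point is that when no other task is actually waiting, a yield can return
without ever reaching schedule():

asmlinkage long sys_sched_yield(void)
{
    /* Count runnable tasks that are not currently running on a CPU. */
    int nr_pending = nr_running;
    int i;

    for (i = 0; i < smp_num_cpus; i++)
        if (cpu_curr(i) != idle_task(i))
            nr_pending--;

    /* Only if someone else is waiting do we mark ourselves as yielding
     * and ask for a reschedule; otherwise the call is a no-op.
     */
    if (nr_pending) {
        if (current->policy == SCHED_OTHER)
            current->policy |= SCHED_YIELD;
        current->need_resched = 1;
    }
    return 0;
}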



>
> (3) For the i386 arch :
>
>     My observations were made on an 8-way 550 MHz PIII Xeon 2MB L2 cache.

Hey, this should be the machine I lost two days ago :^)


>
>     The task structures are page aligned.  So when running the benchmark
>     you may see what I *suspect* are L1/L2 cache effects.  The set of
>     yielding threads will read the same page offsets in the task struct
>     and will dirty the same page offsets on its kernel stack.  So
>     depending on the number of threads and the locations of their task
>     structs in physical memory and the associativity of the caches, you
>     may see (for example) results like :
>
>                 **       **             **
>     50 50 50 50 75 50 50 35 50 50 50 50 75
>
>     Also, the number of threads, the order of the task structs on the
>     run_queue, thread migration from cpu to cpu, and how many times
>     recalculate is done may vary the results from run to run.

Yep, this is the issue.
Why not move the scheduling fields into a separate structure with a different
alignment:

/* Sketch: keep the scheduler-hot fields in a separately allocated,
 * differently aligned array instead of inside the page-aligned task_struct.
 */
struct s_sched_fields {
    ...
} sched_fields[];

static inline struct s_sched_fields *get_sched_fields_ptr(struct task_struct *tsk)
{
    /* map the task to its entry in sched_fields[] */
    ...
}

This would reduce the probability that the scheduling fields of different
tasks map onto the same cache lines.
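
As a concrete illustration of the idea, here is one way the array could be
laid out; the fields, the CACHE_LINE and MAX_SLOTS constants and the
pid-based indexing are only assumptions for the example, not actual kernel
code:

#define CACHE_LINE  32      /* assumed L1 line size, illustration only */
#define MAX_SLOTS   512     /* assumed upper bound on tasks, illustration only */

struct s_sched_fields {
    long counter;                   /* scheduler-hot fields would live here */
    long nice;
    unsigned long policy;
    char pad[CACHE_LINE - 3 * sizeof(long)];    /* one cache line per entry */
};

static struct s_sched_fields sched_fields[MAX_SLOTS];

static inline struct s_sched_fields *get_sched_fields_ptr(struct task_struct *tsk)
{
    /* hash the task into the array; the pid is just an easy illustrative key */
    return &sched_fields[tsk->pid % MAX_SLOTS];
}

The important property is that the entry stride is no longer a multiple of
the page size, so the same field of consecutive tasks no longer sits at the
same page offset and stops competing for the same cache sets; the exact
fields and indexing are secondary.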



- Davide
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
Please read the FAQ at http://www.tux.org/lkml/


Thread overview: 3+ messages
2001-01-19 14:30 sched_test_yield benchmark Bill Hartner
2001-01-19 14:52 ` [Lse-tech] " Andi Kleen
2001-01-19 16:35 ` Davide Libenzi [this message]
