From: Jie Chen <chen@jlab.org>
To: Ingo Molnar <mingo@elte.hu>
Cc: Simon Holm Thøgersen <odie@cs.aau.dk>,
Eric Dumazet <dada1@cosmosbay.com>,
linux-kernel@vger.kernel.org,
Peter Zijlstra <a.p.zijlstra@chello.nl>
Subject: Re: Possible bug from kernel 2.6.22 and above, 2.6.24-rc4
Date: Wed, 05 Dec 2007 12:47:58 -0500 [thread overview]
Message-ID: <4756E44E.8080607@jlab.org> (raw)
In-Reply-To: <20071205164723.GA25641@elte.hu>
Ingo Molnar wrote:
> * Jie Chen <chen@jlab.org> wrote:
>
>>> the moment you saturate the system a bit more, the numbers should
>>> improve even with such a ping-pong test.
>> You are right. If I manually do load balancing (bind unrelated
>> processes on the other cores), my test code performs as well as it did
>> in kernel 2.6.21.
>
> so right now the results dont seem to be too bad to me - the higher
> overhead comes from two threads running on two different cores and
> incurring the overhead of cross-core communications. In a true
> spread-out workload that synchronizes occasionally you'd get the same
> kind of overhead, so in fact this behavior is more informative of the
> real overhead i guess. In 2.6.21 the two threads would stick on the same
> core and produce artificially low latency - which would only be true in
> a real spread-out workload if all tasks ran on the same core. (which is
> hardly the thing you want on openmp)
>
I use the pthread_setaffinity_np call to bind each thread to one core.
Unless kernel 2.6.21 does not honor the affinity mask, I see no
difference between the new kernel and the old kernel when running two
threads on two cores. My test code does not do any numerical
calculation, but it does spin-wait on shared/non-shared flags. The
reason I am using affinity is to measure the synchronization overhead
among different cores. In both the new and the old kernel, I see 200%
CPU usage when I run my test code with two threads. Does this mean the
two threads are running on two cores? I also verify that each thread is
indeed bound to its core by using pthread_getaffinity_np.
> In any case, if i misinterpreted your numbers or if you just disagree,
> or if you have a workload/test that shows worse performance than it
> could/should, let me know.
>
> Ingo
Hi, Ingo:
Since I am using the affinity flag to bind each thread to a different
core, the synchronization overhead should increase as the number of
cores/threads increases. But what we observed in the new kernel is the
opposite: the barrier overhead for two threads is 8.93 microseconds vs.
1.86 microseconds for 8 threads (in the old kernel it is 0.49 vs. 1.86).
This will confuse most people who study synchronization/communication
scalability. I know my test code is not a real-world computation, which
would usually use up all cores. I hope I have explained myself clearly.
Thank you very much.
--
###############################################
Jie Chen
Scientific Computing Group
Thomas Jefferson National Accelerator Facility
12000, Jefferson Ave.
Newport News, VA 23606
(757)269-5046 (office) (757)269-6248 (fax)
chen@jlab.org
###############################################
Thread overview: 35+ messages
2007-11-21 20:34 Possible bug from kernel 2.6.22 and above Jie Chen
2007-11-21 22:14 ` Eric Dumazet
2007-11-22 1:52 ` Jie Chen
2007-11-22 2:32 ` Simon Holm Thøgersen
2007-11-22 2:58 ` Jie Chen
2007-11-22 20:19 ` Matt Mackall
2007-12-04 13:17 ` Possible bug from kernel 2.6.22 and above, 2.6.24-rc4 Ingo Molnar
2007-12-04 15:41 ` Jie Chen
2007-12-05 15:29 ` Jie Chen
2007-12-05 15:40 ` Ingo Molnar
2007-12-05 16:16 ` Eric Dumazet
2007-12-05 16:25 ` Ingo Molnar
2007-12-05 16:29 ` Eric Dumazet
2007-12-05 16:22 ` Jie Chen
2007-12-05 16:47 ` Ingo Molnar
2007-12-05 17:47 ` Jie Chen [this message]
2007-12-05 20:03 ` Ingo Molnar
2007-12-05 20:23 ` Jie Chen
2007-12-05 20:46 ` Ingo Molnar
2007-12-05 20:52 ` Jie Chen
2007-12-05 21:02 ` Ingo Molnar
2007-12-05 22:16 ` Jie Chen
2007-12-06 10:43 ` Ingo Molnar
2007-12-06 16:29 ` Jie Chen
2007-12-10 10:59 ` Ingo Molnar
2007-12-10 20:04 ` Jie Chen
2007-12-11 10:51 ` Ingo Molnar
2007-12-11 15:28 ` Jie Chen
2007-12-11 15:52 ` Ingo Molnar
2007-12-11 16:39 ` Jie Chen
2007-12-11 21:23 ` Ingo Molnar
2007-12-11 22:11 ` Jie Chen
2007-12-12 12:49 ` Peter Zijlstra
2007-12-05 20:36 ` Possible bug from kernel 2.6.22 and above Peter Zijlstra
2007-12-05 20:53 ` Jie Chen