From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <475708A7.4030708@jlab.org>
Date: Wed, 05 Dec 2007 15:23:03 -0500
From: Jie Chen
Organization: Jefferson Lab
User-Agent: Thunderbird 2.0.0.9 (X11/20071031)
MIME-Version: 1.0
To: Ingo Molnar
CC: Simon Holm Thøgersen, Eric Dumazet, linux-kernel@vger.kernel.org, Peter Zijlstra
Subject: Re: Possible bug from kernel 2.6.22 and above, 2.6.24-rc4
References: <4744ADA9.7040905@cosmosbay.com> <4744E0DC.7050808@jlab.org> <1195698770.11808.4.camel@odie.local> <4744F042.4070002@jlab.org> <20071204131707.GA4232@elte.hu> <4756C3D9.9030107@jlab.org> <20071205154014.GA6491@elte.hu> <4756D058.1070500@jlab.org> <20071205164723.GA25641@elte.hu> <4756E44E.8080607@jlab.org> <20071205200343.GA14570@elte.hu>
In-Reply-To: <20071205200343.GA14570@elte.hu>
X-Mailing-List: linux-kernel@vger.kernel.org

Ingo Molnar wrote:
> * Jie Chen wrote:
>
>> Since I am using the affinity flag to bind each thread to a different
>> core, the synchronization overhead should increase as the number of
>> cores/threads increases. But what we observed in the new kernel is the
>> opposite: the barrier overhead for two threads is 8.93 microseconds vs
>> 1.86 microseconds for 8 threads (on the old kernel it is 0.49 vs 1.86).
>> This will confuse most people who study synchronization/communication
>> scalability. I know my test code is not a real-world computation, which
>> would usually use up all cores. I hope I have explained myself clearly.
>> Thank you very much.
>
> btw., could you try to not use the affinity mask and let the scheduler
> manage the spreading of tasks? It generally has a better knowledge about
> how tasks interrelate.
>
> 	Ingo

Hi, Ingo:

I just disabled the affinity mask and reran the test. There were no
significant changes for two threads (the barrier overhead is still around
9 microseconds). As for 8 threads, the barrier overhead actually drops a
little, which is good. Let me know whether I can be of any help. Thank
you very much.

-- 
###############################################
Jie Chen
Scientific Computing Group
Thomas Jefferson National Accelerator Facility
12000 Jefferson Ave.
Newport News, VA 23606
(757)269-5046 (office) (757)269-6248 (fax)
chen@jlab.org
###############################################