Message-ID: <4D2434F6.4020904@redhat.com>
Date: Wed, 05 Jan 2011 11:08:06 +0200
From: Avi Kivity
To: KOSAKI Motohiro
CC: Rik van Riel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Srivatsa Vaddagiri, Peter Zijlstra, Mike Galbraith, Chris Wright
Subject: Re: [RFC -v3 PATCH 2/3] sched: add yield_to function
References: <20110105110837.B62A.A69D9226@jp.fujitsu.com> <4D242D60.9060301@redhat.com> <20110105173823.B658.A69D9226@jp.fujitsu.com>
In-Reply-To: <20110105173823.B658.A69D9226@jp.fujitsu.com>

On 01/05/2011 10:40 AM, KOSAKI Motohiro wrote:
> > On 01/05/2011 04:39 AM, KOSAKI Motohiro wrote:
> > > > On 01/04/2011 08:14 AM, KOSAKI Motohiro wrote:
> > > > > Also, if pthread_cond_signal() called sys_yield_to implicitly, we could
> > > > > avoid most of the Nehalem (and other P2P cache arch) lock unfairness
> > > > > problems. (Probably adding pthread_condattr_setautoyield_np or a similar
> > > > > knob would be a good approach.)
> > > >
> > > > Often, the thread calling pthread_cond_signal() wants to continue
> > > > executing, not yield.
> > >
> > > Then it doesn't work.
> > >
> > > After calling pthread_cond_signal(), T1 (the cond_signal caller) and T2
> > > (the thread that was woken) start racing to grab the GIL.
> > > But T1 almost always wins, because the
> > > lock variable is in T1's CPU cache. Why do the kernel and userland behave so
> > > differently? One reason is that glibc doesn't have any ticket lock scheme.
> > >
> > > If you are interested in the GIL mess and its issues, please feel free to ask more.
> >
> > I suggest looking into an explicit round-robin scheme, where each thread
> > adds itself to a queue and an unlock wakes up the first waiter.
>
> I'm sure you haven't tried your scheme, but I did. It's slow.

Won't anything with a heavily contended global/giant lock be slow?

What's the average lock hold time per thread?  10%?  50%?  90%?

-- 
error compiling committee.c: too many arguments to function