From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Fri, 20 Mar 2009 21:06:17 -0300
From: Marcelo Tosatti
Message-ID: <20090321000617.GA26654@amt.cnet>
References: <20090319145705.988576615@localhost.localdomain> <20090319150537.910101569@localhost.localdomain> <49C3D5D5.4020006@codemonkey.ws>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <49C3D5D5.4020006@codemonkey.ws>
Subject: [Qemu-devel] Re: [patch 2/7] qemu: separate thread for io
Reply-To: qemu-devel@nongnu.org
List-Id: qemu-devel.nongnu.org
To: Anthony Liguori
Cc: qemu-devel@nongnu.org

On Fri, Mar 20, 2009 at 12:43:49PM -0500, Anthony Liguori wrote:
>> +    qemu_mutex_lock(&qemu_fair_mutex);
>> +    qemu_mutex_unlock(&qemu_fair_mutex);
>>
>
> This is extremely subtle.
> I think it can be made a bit more explicit via something like:
>
>     int generation = qemu_io_generation() + 1;
>
>     qemu_mutex_unlock(&qemu_global_mutex);
>
>     if (timeout && !has_work(env))
>         wait_signal(timeout);
>
>     qemu_mutex_lock(&qemu_global_mutex);
>
>     while (qemu_io_generation() < generation)
>         qemu_cond_wait(&qemu_io_generation_cond, &qemu_global_mutex);
>
> Then, in main_loop_wait():
>
>> void main_loop_wait(int timeout)
>> {
>>     IOHandlerRecord *ioh;
>> @@ -3660,7 +3806,7 @@ void main_loop_wait(int timeout)
>>      */
>>     qemu_mutex_unlock(&qemu_global_mutex);
>>     ret = select(nfds + 1, &rfds, &wfds, &xfds, &tv);
>> -   qemu_mutex_lock(&qemu_global_mutex);
>>
>
>     qemu_mutex_lock(&qemu_global_mutex);
>     qemu_io_generation_add();
>     qemu_cond_signal(&qemu_io_generation_cond);
>
> This will ensure that the TCG loop backs off of qemu_global_mutex for
> at least one main_loop_wait() iteration.

I don't see much benefit. It has the disadvantage of introducing more
state (the generation counter and the condition variable), and the
potential problems associated with that state, such as:

- If there is no event for the iothread to process, TCG will throttle
  unnecessarily (I can't see how that could happen, but it is a
  possibility) until some event breaks the iothread out of select() and
  increases the generation counter.

- It is possible to miss signals (again, I can't see it happening in
  the scheme you suggest, but...).

Also note that there is no need to be completely fair; it is fine to be
only eventually fair.

Do you worry about the interaction between the two locks? We can
document the lock ordering.

I will spend some time thinking about this; multiple "starved" threads
need to be taken into account.

Thanks.