From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <4B86B044.6020501@codemonkey.ws>
Date: Thu, 25 Feb 2010 11:15:48 -0600
From: Anthony Liguori
MIME-Version: 1.0
Subject: Re: [Qemu-devel] Re: [RFC][PATCH] performance improvement for windows guests, running on top of virtio block device
References: <1263195647.2005.44.camel@localhost> <201002240258.19045.paul@codesourcery.com> <4B853ED3.3060707@codemonkey.ws> <201002251506.05318.paul@codesourcery.com> <4B86AF40.5090401@redhat.com>
In-Reply-To: <4B86AF40.5090401@redhat.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
List-Id: qemu-devel.nongnu.org
To: Avi Kivity
Cc: Vadim Rozenfeld, Dor Laor, Christoph Hellwig, Paul Brook, qemu-devel@nongnu.org

On 02/25/2010 11:11 AM, Avi Kivity wrote:
> On 02/25/2010 05:06 PM, Paul Brook wrote:
>>>> Idle bottom halves (i.e. qemu_bh_schedule_idle) are just bugs
>>>> waiting to happen, and should never be used for anything.
>>> Idle bottom halves make considerably more sense than the normal
>>> bottom halves.
>>>
>>> The fact that rescheduling a bottom half within a bottom half
>>> results in an infinite loop is absurd. It is equally absurd that
>>> bottom halves alter the select timeout. The result is that if a
>>> bottom half schedules another bottom half, and that bottom half
>>> schedules the previous one, you get a tight infinite loop. Since
>>> bottom halves are often used deep within functions, the result is
>>> very subtle infinite loops (which we've absolutely encountered in
>>> the past).
>> I disagree. The "select timeout" is a completely irrelevant
>> implementation detail. Anything that relies on it is just plain
>> wrong. If you require a delay then you should be using a timer. If
>> you schedule a BH directly then you should expect it to be processed
>> without delay.
>
> I agree. Further, once we fine-grain device threading, the iothread
> essentially disappears and is replaced by device-specific threads.
> There's no "idle" anymore.

That's a nice idea, but how is I/O dispatch handled? Is everything
synchronous or do we continue to program asynchronously? It's very
difficult to mix the two concepts.

I personally don't anticipate per-device threading, but rather
re-entrant device models. I would expect all I/O to be dispatched
within the I/O thread, and the VCPU threads to be able to execute
device models simultaneously with the I/O thread.

Regards,

Anthony Liguori