From: Avi Kivity
Date: Fri, 26 Feb 2010 17:39:30 +0200
Subject: Re: [Qemu-devel] Re: [RFC][PATCH] performance improvement for windows guests, running on top of virtio block device
To: Anthony Liguori
Cc: Vadim Rozenfeld, Dor Laor, Christoph Hellwig, Paul Brook, qemu-devel@nongnu.org
Message-ID: <4B87EB32.2000707@redhat.com>
In-Reply-To: <4B87DC67.8030603@codemonkey.ws>

On 02/26/2010 04:36 PM, Anthony Liguori wrote:
> On 02/26/2010 02:47 AM, Avi Kivity wrote:
>> qcow2 is still not fully asynchronous.  All the other format drivers
>> (except raw) are fully synchronous.  If we had a threaded
>> infrastructure, we could convert them all in a day.  As it is, you
>> can only use the other block format drivers in 'qemu-img convert'.
>
> I've got a healthy amount of scepticism that it's that easy.  But I'm
> happy to consider patches :-)

I'd be happy to have time to write them.
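To make the "threaded infrastructure" point concrete, here is roughly the
shape I have in mind - a sketch only, using plain pthreads and a pipe, with
made-up names (BlockWork, sync_driver_read(), submit_block_work()); none of
this is existing qemu code:

#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

typedef struct BlockWork {
    int64_t sector;
    int nb_sectors;
    uint8_t *buf;
    int ret;
    void (*complete)(struct BlockWork *work);   /* runs in the iothread */
} BlockWork;

static int completion_fd[2];                    /* worker -> iothread pipe */

/* Stand-in for an unmodified synchronous format driver (think qcow2). */
static int sync_driver_read(int64_t sector, int nb_sectors, uint8_t *buf)
{
    memset(buf, 0, nb_sectors * 512);
    return 0;
}

static void *worker(void *opaque)
{
    BlockWork *work = opaque;

    /* The synchronous driver runs here, off the iothread, so vcpus and
     * the main loop are never blocked behind it. */
    work->ret = sync_driver_read(work->sector, work->nb_sectors, work->buf);

    /* Hand the finished request back to the iothread. */
    if (write(completion_fd[1], &work, sizeof(work)) != sizeof(work)) {
        abort();
    }
    return NULL;
}

static void init_block_workers(void)
{
    /* The iothread adds completion_fd[0] to its poll set and calls
     * handle_completion() when it becomes readable. */
    if (pipe(completion_fd) < 0) {
        abort();
    }
}

static void submit_block_work(BlockWork *work)
{
    pthread_t tid;

    /* A real version would use a bounded thread pool rather than one
     * thread per request. */
    pthread_create(&tid, NULL, worker, work);
    pthread_detach(tid);
}

/* Runs in the iothread, under the global mutex, whenever the pipe polls
 * readable - so device callbacks look exactly as they would if the
 * driver had been asynchronous all along. */
static void handle_completion(void)
{
    BlockWork *work;

    if (read(completion_fd[0], &work, sizeof(work)) == sizeof(work)) {
        work->complete(work);   /* signal the guest, free the request, ... */
    }
}

The format driver itself stays synchronous and unmodified; only submission
and completion move, which is why I think the conversion is mostly
mechanical.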
>>> If the device models are re-entrant, that reduces a ton of the
>>> demand on the qemu_mutex which means that IO thread can run
>>> uncontended.  While we have evidence that the VCPU threads and IO
>>> threads are competing with each other today, I don't think we have
>>> any evidence to suggest that the IO thread is self-starving itself
>>> with long running events.
>>
>> I agree we have no evidence and that this is all speculation.  But
>> consider a 64-vcpu guest: it has a 64:1 ratio of vcpu time
>> (initiations) to iothread time (completions).  If each vcpu generates
>> 5000 initiations per second, the iothread needs to handle 320,000
>> completions per second.  At that rate you will see some internal
>> competition.  That thread will also have a hard time shuffling data,
>> since every completion's data will reside in the wrong cpu cache.
>
> Ultimately, it depends on what you're optimizing for.  If you've got a
> 64-vcpu guest on a 128-way box, then sure, we want to have 64 IO
> threads because that will absolutely increase throughput.
>
> But realistically, it's more likely that if you've got a 64-vcpu
> guest, you're on a 1024-way box and you've got 64 guests running at
> once.  Having 64 IO threads per VM means you've got 4k threads
> floating around.  It's still just as likely that one completion will
> get delayed by something less important.  Now with all of these
> threads on a box like this, you get nasty NUMA interactions too.

I'm not suggesting that we scale out - the number of vcpus (across all
guests) will usually be higher than the number of cpus.  But if you
have multiple device threads, the scheduler has the flexibility to
place them around and fill bubbles.  A single heavily loaded iothread
is more difficult to schedule well.

> The difference between the two models is that with threads, we rely
> on pre-emption to enforce fairness and the Linux scheduler to perform
> scheduling.  With a single IO thread, we're determining execution
> order and priority.

We could define priorities with multiple threads as well (using thread
priorities), and we'd never have a short task delayed behind a long
task unless the host is out of resources.

> A lot of main loops have a notion of priority for timer and idle
> callbacks.  For something that is latency sensitive, you absolutely
> could introduce the concept of priority for bottom halves.  It would
> ensure that a +1 priority bottom half would get scheduled before
> handling any lower priority I/O/BHs.

What if it becomes available only after the low priority task has
already started to run?

>> Note, an alternative to multiple iothreads is to move completion
>> handling back to vcpus, provided we can steer the handler close to
>> the guest completion handler.
>
> Looking at something like linux-aio, I think we might actually want
> to do that.  We can submit the request from the VCPU thread and we
> can certainly program the signal to get delivered to that VCPU
> thread.  Maintaining affinity for the request is likely a benefit.

Likely to be a benefit once we have multiqueue virtio.

>>> For host services though, it's much more difficult to isolate them
>>> like this.
>>
>> What do you mean by host services?
>
> Things like VNC and live migration.  Things that aren't directly
> related to a guest's activity.  One model I can imagine is to
> continue to relegate these things to a single IO thread, but then
> move device driven callbacks either back to the originating thread or
> to a dedicated device callback thread.  Host services generally have
> a much lower priority.

Or just 'a thread'.  Nothing prevents vnc or live migration from
running in a thread, using the current code.

>>> I'm not necessarily claiming that this will never be the right
>>> thing to do, but I don't think we really have the evidence today to
>>> suggest that we should focus on this in the short term.
>>
>> Agreed.  We will start to see evidence (one way or the other) as
>> fully loaded 64-vcpu guests are benchmarked.  Another driver may be
>> real-time guests; if a timer can be deferred by some block device
>> initiation or completion, then we can say goodbye to any realtime
>> guarantees we want to make.
>
> I'm wary of making decisions based on the performance of a 64-vcpu
> guest.  It's an important workload to characterize because it's an
> extreme case, but I think 64 1-vcpu guests will continue to be
> significantly more important than 1 64-vcpu guest.

Agreed.  64-vcpu guests will make the headlines and marketing
checklists, though.

-- 
Do not meddle in the internals of kernels, for they are subtle and
quick to panic.