From: Sasha Levin
Subject: Re: [PATCH 5/5] ioeventfd: Introduce KVM_IOEVENTFD_FLAG_SOCKET
Date: Thu, 14 Jul 2011 16:00:09 +0300
Message-ID: <1310648409.21171.34.camel@lappy>
In-Reply-To: <4E1EE519.1020608@redhat.com>
To: Avi Kivity
Cc: Pekka Enberg, "Michael S. Tsirkin", kvm@vger.kernel.org, Ingo Molnar, Marcelo Tosatti

On Thu, 2011-07-14 at 15:46 +0300, Avi Kivity wrote:
> On 07/14/2011 03:32 PM, Sasha Levin wrote:
> > On Thu, 2011-07-14 at 14:54 +0300, Avi Kivity wrote:
> > > On 07/14/2011 01:30 PM, Pekka Enberg wrote:
> > > > We want to use 8250 emulation instead of virtio-serial because it's
> > > > more compatible with kernel debugging mechanisms. Also, it makes
> > > > debugging virtio code much easier when we don't need to use virtio to
> > > > deliver console output while debugging it. We want to make it fast so
> > > > that we don't need to switch over to another console type after early
> > > > boot.
> > > >
> > > > What's unreasonable about that?
> > >
> > > Does virtio debugging really need super-fast serial? Does it need
> > > serial at all?
> >
> > Does it need super-fast serial? No, although it's nice. Does it need
> > serial at all? Definitely.
>
> Why? virtio is mature. It's not some early boot thing which fails and
> kills the guest. Even if you get an oops, usually the guest is still
> alive.

virtio is mature; /tools/kvm isn't :)

> > It's not just virtio which can fail running on virtio-console, it's
> > also the threadpool, the eventfd mechanism and even the PCI management
> > module. You can't really debug it if you can't depend on your
> > debugging mechanism working properly.
>
> Wait, those are guest things, not host things.

Yes, as you said in the previous mail, both KVM and virtio are very
stable. /tools/kvm was the one being debugged most of the time.

> > So far, serial is the simplest, most effective, and never-failing
> > method we had for working on guests. I don't see how we can work
> > without it at the moment.
>
> I really can't remember the last time I used the serial console for the
> guest. In the early early days, sure, but now?

I don't know; if it works fine, why not use it when you need a simple
serial connection? It's also useful for kernel hackers who break early
boot things :)

> > I agree here that the performance even with 256 vcpus would be
> > terrible and no 'real' users would be doing that until the
> > infrastructure could provide reasonable performance.
> >
> > The two uses I see for it are:
> >
> > 1. Stressing out the usermode code. One of the reasons qemu can't
> > properly do 64 vcpus now is not just the KVM kernel code, it's also
> > qemu itself. We're trying to avoid doing the same with /tools/kvm.
>
> It won't help without a 1024 cpu host. As soon as you put a real
> workload on the guest, it will thrash, and any scaling issue in qemu or
> tools/kvm will be drowned in the noise.
>
> > 2. Preventing future scalability problems. Currently we can't do 1024
> > vcpus because it breaks coalesced MMIO - which is IMO not a valid
> > reason for not scaling up to 1024 vcpus (and by scaling I mean running
> > without errors, without regard to performance).
>
> That's not what scaling means (not to say that it wouldn't be nice to
> fix coalesced mmio).
>
> btw, why are you so eager to run 1024 vcpu guests? usually, if you have
> a need for such large systems, you're really performance sensitive.
> It's not a good case for virtualization.

I may have gone too far with 1024; I have only tested it with 254 vcpus
so far - I'll change that in my patch.

It's also not just a KVM issue. Take, for example, the RCU issue we were
able to detect with /tools/kvm just by trying more than 30 vcpus and
noticing that RCU was broken with a recent kernel. Testing the kernel on
guests with a large number of vcpus or a large amount of virtual memory
might prove beneficial not only for KVM itself.

-- 
Sasha.