From mboxrd@z Thu Jan 1 00:00:00 1970
From: Andrea Arcangeli
Subject: Re: [kvm-devel] performance with guests running 2.4 kernels (specifically RHEL3)
Date: Wed, 28 May 2008 16:48:50 +0200
Message-ID: <20080528144850.GX27375@duo.random>
References: <482C1633.5070302@qumranet.com> <482E5F9C.6000207@cisco.com> <482FCEE1.5040306@qumranet.com> <4830F90A.1020809@cisco.com> <4830FE8D.6010006@cisco.com> <48318E64.8090706@qumranet.com> <4832DDEB.4000100@qumranet.com> <4835EEF5.9010600@cisco.com> <483D391F.7050007@qumranet.com> <483D6898.2050605@cisco.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: Avi Kivity , kvm@vger.kernel.org
To: "David S. Ahern"
Return-path:
Received: from host36-195-149-62.serverdedicati.aruba.it ([62.149.195.36]:49488 "EHLO mx.cpushare.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751171AbYE1O4r (ORCPT ); Wed, 28 May 2008 10:56:47 -0400
Content-Disposition: inline
In-Reply-To: <483D6898.2050605@cisco.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

On Wed, May 28, 2008 at 08:13:44AM -0600, David S. Ahern wrote:
> Weird. Could it be something about the hosts?

Note that the VM itself will never make use of kmap. The VM is "data"
agnostic: it never has any idea of the data contained in the pages.
kmap/kmap_atomic/kunmap_atomic are only needed to access _data_. Only
I/O (if not using DMA, or because of bounce buffers), page faults
triggered in user process context, and other operations done from user
process context will call into kmap or kmap_atomic. If KVM is
inefficient in handling kmap/kmap_atomic, that will make the user
process run slower, and in turn generate less pressure on the guest
and host VM, if anything. The guest will run slower than it should if
KVM isn't optimized for the workload, but that shouldn't alter the CPU
usage of any VM kernel thread; only the CPU usage of the guest process
context and the host system time in the qemu task should go up,
nothing else.
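For reference, the "access the _data_" case looks roughly like this
(a hand-written sketch against the 2.6-era highmem API, not code taken
from KVM; zero_page_data is a made-up helper name):

```c
#include <linux/mm.h>
#include <linux/highmem.h>
#include <linux/string.h>

/*
 * Sketch: touching the *contents* of a page that may live in highmem.
 * Only paths like this need kmap_atomic; the VM itself manages pages
 * without ever mapping their data.
 */
static void zero_page_data(struct page *page)
{
	/* map the page into a per-CPU fixmap slot (disables preemption) */
	void *vaddr = kmap_atomic(page, KM_USER0);

	memset(vaddr, 0, PAGE_SIZE);	/* the only step that sees the data */

	/* tear down the temporary mapping (re-enables preemption) */
	kunmap_atomic(vaddr, KM_USER0);
}
```

Each kmap_atomic here installs a transient pte for the highmem page,
which is exactly the kind of pte update KVM's shadow paging has to
intercept, hence the interest in how efficiently it handles them.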
This is again because the VM will never care about the data contents,
and it will never invoke kmap/kmap_atomic. So I never found a relation
between the reported symptom of VM kernel threads going weird and how
optimally KVM handles the kmap ptes.