From mboxrd@z Thu Jan 1 00:00:00 1970
From: Avi Kivity
Subject: Re: [kvm-devel] performance with guests running 2.4 kernels (specifically RHEL3)
Date: Wed, 28 May 2008 13:51:11 +0300
Message-ID: <483D391F.7050007@qumranet.com>
References: <48054518.3000104@cisco.com> <4805BCF1.6040605@qumranet.com> <4807BD53.6020304@cisco.com> <48085485.3090205@qumranet.com> <480C188F.3020101@cisco.com> <480C5C39.4040300@qumranet.com> <480E492B.3060500@cisco.com> <480EEDA0.3080209@qumranet.com> <480F546C.2030608@cisco.com> <481215DE.3000302@cisco.com> <20080428181550.GA3965@dmt> <4816617F.3080403@cisco.com> <4817F30C.6050308@cisco.com> <48184228.2020701@qumranet.com> <481876A9.1010806@cisco.com> <48187903.2070409@qumranet.com> <4826E744.1080107@qumranet.com> <4826F668.6030305@qumranet.com> <48290FC2.4070505@cisco.com> <48294272.5020801@qumranet.com> <482B4D29.7010202@cisco.com> <482C1633.5070302@qumranet.com> <482E5F9C.6000207@cisco.com> <482FCEE1.5040306@qumranet.com> <4830F90A.1020809@cisco.com> <4830FE8D.6010006@cisco.com> <48318E64.8090706@qumranet.com> <4832DDEB.4000100@qumranet.com> <4835EEF5.9010600@cisco.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: kvm@vger.kernel.org
To: "David S. Ahern"
Return-path:
Received: from bzq-179-150-194.static.bezeqint.net ([212.179.150.194]:14284 "EHLO il.qumranet.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752222AbYE1KvS (ORCPT ); Wed, 28 May 2008 06:51:18 -0400
In-Reply-To: <4835EEF5.9010600@cisco.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

David S. Ahern wrote:
> The short answer is that I am still seeing large system time hiccups in
> the guests due to kscand in the guest scanning its active lists. I do
> see better response for a KVM_MAX_PTE_HISTORY of 3 than with 4. (For
> completeness I also tried a history of 2, but it performed worse than 3,
> which is no surprise given the meaning of it.)
>
> I have been able to scratch out a simplistic program that stimulates
> kscand activity similar to what is going on in my real guest (see
> attached). The program requests a memory allocation, initializes it (to
> get it backed) and then in a loop sweeps through the memory in chunks,
> similar to a program using parts of its memory here and there but
> eventually accessing all of it.
>
> Start the RHEL3/CentOS 3 guest with *2GB* of RAM (or more). The key is
> using a fair amount of highmem. Start a couple of instances of the
> attached. For example, I've been using these 2:
>
> memuser 768M 120 5 300
> memuser 384M 300 10 600
>
> Together these instances take up 1GB of RAM and once initialized
> consume very little CPU. On kvm they make kscand and kswapd go nuts
> every 5-15 minutes. For comparison, I do not see the same behavior for
> an identical setup running on esx 3.5.
>

I haven't been able to reproduce this:

[root@localhost root]# ps -elf | grep -E 'memuser|kscand'
1 S root         7     1  1  75   0 -      0 schedu 10:07 ?        00:00:26 [kscand]
0 S root      1464     1  1  75   0 - 196986 schedu 10:20 pts/0    00:00:21 ./memuser 768M 120 5 300
0 S root      1465     1  0  75   0 -  98683 schedu 10:20 pts/0    00:00:10 ./memuser 384M 300 10 600
0 S root      2148  1293  0  75   0 -    922 pipe_w 10:48 pts/0    00:00:00 grep -E memuser|kscand

The workload has been running for about half an hour, and kswapd cpu usage
doesn't seem significant. This is a 2GB guest running with my patch ported
to kvm.git HEAD.

-- 
Do not meddle in the internals of kernels, for they are subtle and quick to panic.