From: Andi Kleen
Subject: Re: [RFC PATCH 3/3] KVM: MMU: Optimize guest page table walk
Date: Fri, 29 Apr 2011 08:59:56 +0200
Message-ID: <20110429065956.GA3985@one.firstfloor.org>
In-Reply-To: <20110429143808.29c51c6a.takuya.yoshikawa@gmail.com>
References: <20110419033220.e527bcae.takuya.yoshikawa@gmail.com> <20110419033814.3cc7ab5e.takuya.yoshikawa@gmail.com> <4DAEA123.3020403@redhat.com> <20110429143808.29c51c6a.takuya.yoshikawa@gmail.com>
To: Takuya Yoshikawa
Cc: Andi Kleen, Avi Kivity, mtosatti@redhat.com, kvm@vger.kernel.org, yoshikawa.takuya@oss.ntt.co.jp

> The only reason I can guess is the reduction of some function calls
> by inlining some functions.

Yes, at one time cfu (copy_from_user) was inline too and simply
checked for the right sizes and then used g*u (get_user), but that
got lost in the "icache over everything else" mania which has
unfortunately been en vogue in kernel land for quite some time
(i.e. judging patches only by their impact on .text size, not on
actual performance).

But you might get better gains by fixing this general c*u()
regression.

-Andi
-- 
ak@linux.intel.com -- Speaking for myself only.
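
(Illustrative sketch, not the historical code being referred to:
roughly what an inline cfu that special-cases small constant sizes
and falls back to the out-of-line copy looks like. The cfu_sketch
name is made up; the _copy_from_user() out-of-line fallback and the
usual kernel conventions, get_user() returning 0 on success and
copy_from_user() returning the number of uncopied bytes, are assumed.)

static __always_inline unsigned long
cfu_sketch(void *to, const void __user *from, unsigned long n)
{
	/*
	 * For small constant sizes the whole copy collapses into a
	 * single get_user() plus a store, avoiding the call into the
	 * generic copy loop.
	 */
	if (__builtin_constant_p(n)) {
		switch (n) {
		case 1: {
			u8 v;
			if (get_user(v, (const u8 __user *)from))
				return n;	/* fault: n bytes uncopied */
			*(u8 *)to = v;
			return 0;
		}
		case 2: {
			u16 v;
			if (get_user(v, (const u16 __user *)from))
				return n;
			*(u16 *)to = v;
			return 0;
		}
		case 4: {
			u32 v;
			if (get_user(v, (const u32 __user *)from))
				return n;
			*(u32 *)to = v;
			return 0;
		}
		case 8: {
			u64 v;
			if (get_user(v, (const u64 __user *)from))
				return n;
			*(u64 *)to = v;
			return 0;
		}
		}
	}
	return _copy_from_user(to, from, n);	/* generic out-of-line copy */
}

With a constant n the switch folds at compile time, so a caller like
the guest page table walker pays no function call at all for the
common word-sized fetch; only variable or odd sizes take the
out-of-line path.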