From mboxrd@z Thu Jan  1 00:00:00 1970
From: Avi Kivity
Subject: Re: [PATCH 1/1 v2] KVM: MMU: Optimize guest page table walk
Date: Sun, 24 Apr 2011 10:27:06 +0300
Message-ID: <4DB3D0CA.6050007@redhat.com>
References: <20110422003222.9d08aee3.takuya.yoshikawa@gmail.com> <20110422003444.5b3a876a.takuya.yoshikawa@gmail.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: mtosatti@redhat.com, kvm@vger.kernel.org, yoshikawa.takuya@oss.ntt.co.jp, xiaoguangrong@cn.fujitsu.com, Joerg.Roedel@amd.com
To: Takuya Yoshikawa
Return-path: Received: from mx1.redhat.com ([209.132.183.28]:3447 "EHLO mx1.redhat.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1750807Ab1DXH1V (ORCPT ); Sun, 24 Apr 2011 03:27:21 -0400
In-Reply-To: <20110422003444.5b3a876a.takuya.yoshikawa@gmail.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

On 04/21/2011 06:34 PM, Takuya Yoshikawa wrote:
> From: Takuya Yoshikawa
>
> This patch optimizes the guest page table walk by using get_user()
> instead of copy_from_user().
>
> With this patch applied, paging64_walk_addr_generic() has become
> about 0.5us to 1.0us faster on my Phenom II machine with NPT on.

Applied, thanks.  Care to send a follow-on patch that makes
cmpxchg_gpte() use ptep_user instead of calculating it by itself?

--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.