From: Avi Kivity
Subject: Re: [PATCH] kvm: rework remove-write-access for a slot
Date: Sun, 06 Jun 2010 19:04:25 +0300
Message-ID: <4C0BC709.5040307@redhat.com>
References: <4C061C16.9040208@cn.fujitsu.com> <4C063E01.7040206@redhat.com> <4C08B5D0.6090104@cn.fujitsu.com>
In-Reply-To: <4C08B5D0.6090104@cn.fujitsu.com>
To: Lai Jiangshan
Cc: Marcelo Tosatti, LKML, kvm@vger.kernel.org

On 06/04/2010 11:14 AM, Lai Jiangshan wrote:
>
>> - I thought of a different approach to write protection: write protect
>> the L4 sptes; on a write fault, add write permission to the L4 spte and
>> write protect the L3 sptes that it points to, etc. This method can use
>> the slot bitmap to reduce the number of write faults. However, we can
>> reintroduce the slot bitmap if/when we use the method, so this shouldn't
>> block the patch.
>>
> It is a very good approach and it is blazing fast.
>
> I have no time to implement it currently;
> could you add it to the TODO list?

Done.

>>> +static void rmapp_remove_write_access(struct kvm *kvm, unsigned long *rmapp)
>>> +{
>>> +	u64 *spte = rmap_next(kvm, rmapp, NULL);
>>> +
>>> +	while (spte) {
>>> +		/* avoid RMW */
>>> +		if (is_writable_pte(*spte))
>>> +			*spte &= ~PT_WRITABLE_MASK;
>>>
>> Must use an atomic operation here to avoid losing the dirty or
>> accessed bit.
>>
> An atomic operation is too expensive, so I kept the "/* avoid RMW */"
> comment and am waiting for someone to find a good approach for it.

You are right, it is an existing problem. I just posted a patchset which
fixes it; when that is merged, please rebase on top.

-- 
error compiling committee.c: too many arguments to function
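
A minimal sketch of the atomic variant discussed above, in the style of
arch/x86/kvm/mmu.c: it retries with cmpxchg64() so an accessed or dirty
bit set by hardware between the load and the store is never overwritten.
The helper name is illustrative, not actual kvm code:

	/*
	 * Clear the writable bit without losing accessed/dirty bits
	 * that hardware may set concurrently.  A plain
	 * "*sptep &= ~PT_WRITABLE_MASK" is a read-modify-write and
	 * can silently drop such a bit.
	 */
	static void spte_clear_writable_atomic(u64 *sptep)
	{
		u64 old, new;

		do {
			old = ACCESS_ONCE(*sptep);
			new = old & ~PT_WRITABLE_MASK;
		} while (cmpxchg64(sptep, old, new) != old);
	}

The lazy write-protection idea at the top of the mail could look roughly
like the following on a write fault. The masks and helpers
(page_header(), is_shadow_present_pte(), PT64_ENT_PER_PAGE) are existing
mmu.c names, but the function itself is a hypothetical sketch of pushing
protection one level down, not Avi's actual design:

	/*
	 * On a write fault against a write-protected upper-level spte:
	 * write protect every present entry in the child page table,
	 * then make this level writable again so it no longer faults.
	 * The same RMW/atomicity caveat as above applies to each store.
	 */
	static void push_write_protect_down(u64 *sptep)
	{
		struct kvm_mmu_page *child;
		int i;

		child = page_header(*sptep & PT64_BASE_ADDR_MASK);
		for (i = 0; i < PT64_ENT_PER_PAGE; ++i)
			if (is_shadow_present_pte(child->spt[i]))
				child->spt[i] &= ~PT_WRITABLE_MASK;

		*sptep |= PT_WRITABLE_MASK;
	}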