From mboxrd@z Thu Jan 1 00:00:00 1970
From: Takuya Yoshikawa
Subject: Re: [PATCH RFC 5/5] KVM: This is the main part of the "moving dirty bitmaps to user space"
Date: Mon, 12 Apr 2010 11:29:12 +0900
Message-ID: <4BC28578.4050102@oss.ntt.co.jp>
References: <20100409182732.857de4db.yoshikawa.takuya@oss.ntt.co.jp> <20100409183808.b72fc9a3.yoshikawa.takuya@oss.ntt.co.jp> <4BC2051B.9090101@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: mtosatti@redhat.com, kvm@vger.kernel.org, fernando@oss.ntt.co.jp
To: Avi Kivity
Return-path:
Received: from serv2.oss.ntt.co.jp ([222.151.198.100]:45299 "EHLO serv2.oss.ntt.co.jp" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751935Ab0DLC0H (ORCPT ); Sun, 11 Apr 2010 22:26:07 -0400
In-Reply-To: <4BC2051B.9090101@redhat.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

(2010/04/12 2:21), Avi Kivity wrote:
> On 04/09/2010 12:38 PM, Takuya Yoshikawa wrote:
>> By this patch, bitmap allocation is replaced with do_mmap() and
>> bitmap manipulation is replaced with *_user() functions.
>>
>> Note that this does not change the APIs between kernel and user space.
>> To get more advantage from this hack, we need to add a new interface
>> for triggering the bitmap switch and getting the bitmap addresses: the
>> addresses are in user space and we can export them to qemu.
>>
>
> I would like to merge the new API together with the patchset, since
> without it, the improvement is insufficient.

OK, it would only be a few simple API functions.

>
>> TODO:
>>   1. We want to use copy_in_user() for the 32-bit case too.
>
> Definitely.  Why doesn't it work now?

Sadly we don't have copy_in_user() for 32-bit; we have to implement it
ourselves.

I tested two temporary implementations for 32-bit:

  1. This version, using copy_from_user() and copy_to_user() with a
     not-so-nice vmalloc().
  2. A loop with __get_user() and __put_user().

The result was that 1 is much faster than 2.
>
>> Note that this is only for the compatibility issue: in the future,
>> we hope, qemu will not need to use this ioctl.
>>
>>   2. We have to implement test_bit_user() to avoid an extra set_bit.
>
> This was important in the days of shadow paging.  I'm not so sure about
> it with nested paging, since we'll typically only fault a page once per
> iteration.  Since we're very likely to actually write these days, the
> extra access is wasteful.

Good news for me!

So all we need to ask the x86 (asm-generic) people to merge are
set_bit_user() and a 32-bit version of copy_in_user().

>
>>
>> +int kvm_arch_create_dirty_bitmap(struct kvm_memory_slot *memslot)
>> +{
>> +	unsigned long user_addr1;
>> +	unsigned long user_addr2;
>> +	int dirty_bytes = kvm_dirty_bitmap_bytes(memslot);
>> +
>> +	down_write(&current->mm->mmap_sem);
>> +	user_addr1 = do_mmap(NULL, 0, dirty_bytes,
>> +			     PROT_READ | PROT_WRITE,
>> +			     MAP_PRIVATE | MAP_ANONYMOUS, 0);
>> +	if (IS_ERR((void *)user_addr1)) {
>> +		up_write(&current->mm->mmap_sem);
>> +		return PTR_ERR((void *)user_addr1);
>> +	}
>> +	user_addr2 = do_mmap(NULL, 0, dirty_bytes,
>> +			     PROT_READ | PROT_WRITE,
>> +			     MAP_PRIVATE | MAP_ANONYMOUS, 0);
>> +	if (IS_ERR((void *)user_addr2)) {
>> +		do_munmap(current->mm, user_addr1, dirty_bytes);
>> +		up_write(&current->mm->mmap_sem);
>> +		return PTR_ERR((void *)user_addr2);
>> +	}
>> +	up_write(&current->mm->mmap_sem);
>
> I think a single do_mmap() call for both bitmaps is simpler in terms of
> error handling.

OK, I'll fix it in the next version.

>