From: Paolo Bonzini <pbonzini@redhat.com>
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v14 3/7] KVM: x86: switch to kvm_get_dirty_log_protect
Date: Fri, 14 Nov 2014 11:03:55 +0100
Message-ID: <5465D38B.1090605@redhat.com>
In-Reply-To: <546563BF.9060407@samsung.com>



On 14/11/2014 03:06, Mario Smarduch wrote:
> Hi Paolo,
> 
>   I changed your patch a little to use a Kconfig symbol,
> hope that's fine with you.

Of course, thanks.

Paolo

> - Mario
> 
> On 11/13/2014 05:57 PM, Mario Smarduch wrote:
>> From: Paolo Bonzini <pbonzini@redhat.com>
>>
>> We now have a generic function, kvm_get_dirty_log_protect, that does most of
>> the work of kvm_vm_ioctl_get_dirty_log; switch x86 over to it.
>>
>> Signed-off-by: Mario Smarduch <m.smarduch@samsung.com>
>> ---
>>  arch/x86/include/asm/kvm_host.h |    3 --
>>  arch/x86/kvm/Kconfig            |    1 +
>>  arch/x86/kvm/mmu.c              |    4 +--
>>  arch/x86/kvm/x86.c              |   64 ++++++---------------------------------
>>  4 files changed, 12 insertions(+), 60 deletions(-)
>>
>> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
>> index 7c492ed..934dc24 100644
>> --- a/arch/x86/include/asm/kvm_host.h
>> +++ b/arch/x86/include/asm/kvm_host.h
>> @@ -805,9 +805,6 @@ void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask,
>>  
>>  void kvm_mmu_reset_context(struct kvm_vcpu *vcpu);
>>  void kvm_mmu_slot_remove_write_access(struct kvm *kvm, int slot);
>> -void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
>> -				     struct kvm_memory_slot *slot,
>> -				     gfn_t gfn_offset, unsigned long mask);
>>  void kvm_mmu_zap_all(struct kvm *kvm);
>>  void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm);
>>  unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm);
>> diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
>> index f9d16ff..d073594 100644
>> --- a/arch/x86/kvm/Kconfig
>> +++ b/arch/x86/kvm/Kconfig
>> @@ -39,6 +39,7 @@ config KVM
>>  	select PERF_EVENTS
>>  	select HAVE_KVM_MSI
>>  	select HAVE_KVM_CPU_RELAX_INTERCEPT
>> +	select KVM_GENERIC_DIRTYLOG_READ_PROTECT
>>  	select KVM_VFIO
>>  	---help---
>>  	  Support hosting fully virtualized guest machines using hardware
>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>> index 9314678..bf6b82c 100644
>> --- a/arch/x86/kvm/mmu.c
>> +++ b/arch/x86/kvm/mmu.c
>> @@ -1224,7 +1224,7 @@ static bool __rmap_write_protect(struct kvm *kvm, unsigned long *rmapp,
>>  }
>>  
>>  /**
>> - * kvm_mmu_write_protect_pt_masked - write protect selected PT level pages
>> + * kvm_arch_mmu_write_protect_pt_masked - write protect selected PT level pages
>>   * @kvm: kvm instance
>>   * @slot: slot to protect
>>   * @gfn_offset: start of the BITS_PER_LONG pages we care about
>> @@ -1233,7 +1233,7 @@ static bool __rmap_write_protect(struct kvm *kvm, unsigned long *rmapp,
>>   * Used when we do not need to care about huge page mappings: e.g. during dirty
>>   * logging we do not have any such mappings.
>>   */
>> -void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
>> +void kvm_arch_mmu_write_protect_pt_masked(struct kvm *kvm,
>>  				     struct kvm_memory_slot *slot,
>>  				     gfn_t gfn_offset, unsigned long mask)
>>  {
>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>> index 8f1e22d..9f8ae9a 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -3606,77 +3606,31 @@ static int kvm_vm_ioctl_reinject(struct kvm *kvm,
>>   *
>>   *   1. Take a snapshot of the bit and clear it if needed.
>>   *   2. Write protect the corresponding page.
>> - *   3. Flush TLB's if needed.
>> - *   4. Copy the snapshot to the userspace.
>> + *   3. Copy the snapshot to the userspace.
>> + *   4. Flush TLB's if needed.
>>   *
>> - * Between 2 and 3, the guest may write to the page using the remaining TLB
>> - * entry.  This is not a problem because the page will be reported dirty at
>> - * step 4 using the snapshot taken before and step 3 ensures that successive
>> - * writes will be logged for the next call.
>> + * Between 2 and 4, the guest may write to the page using the remaining TLB
>> + * entry.  This is not a problem because the page is reported dirty using
>> + * the snapshot taken before and step 4 ensures that writes done after
>> + * exiting to userspace will be logged for the next call.
>>   */
>>  int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
>>  {
>> -	int r;
>> -	struct kvm_memory_slot *memslot;
>> -	unsigned long n, i;
>> -	unsigned long *dirty_bitmap;
>> -	unsigned long *dirty_bitmap_buffer;
>>  	bool is_dirty = false;
>> +	int r;
>>  
>>  	mutex_lock(&kvm->slots_lock);
>>  
>> -	r = -EINVAL;
>> -	if (log->slot >= KVM_USER_MEM_SLOTS)
>> -		goto out;
>> -
>> -	memslot = id_to_memslot(kvm->memslots, log->slot);
>> -
>> -	dirty_bitmap = memslot->dirty_bitmap;
>> -	r = -ENOENT;
>> -	if (!dirty_bitmap)
>> -		goto out;
>> -
>> -	n = kvm_dirty_bitmap_bytes(memslot);
>> -
>> -	dirty_bitmap_buffer = dirty_bitmap + n / sizeof(long);
>> -	memset(dirty_bitmap_buffer, 0, n);
>> -
>> -	spin_lock(&kvm->mmu_lock);
>> -
>> -	for (i = 0; i < n / sizeof(long); i++) {
>> -		unsigned long mask;
>> -		gfn_t offset;
>> -
>> -		if (!dirty_bitmap[i])
>> -			continue;
>> -
>> -		is_dirty = true;
>> -
>> -		mask = xchg(&dirty_bitmap[i], 0);
>> -		dirty_bitmap_buffer[i] = mask;
>> -
>> -		offset = i * BITS_PER_LONG;
>> -		kvm_mmu_write_protect_pt_masked(kvm, memslot, offset, mask);
>> -	}
>> -
>> -	spin_unlock(&kvm->mmu_lock);
>> -
>> -	/* See the comments in kvm_mmu_slot_remove_write_access(). */
>> -	lockdep_assert_held(&kvm->slots_lock);
>> +	r = kvm_get_dirty_log_protect(kvm, log, &is_dirty);
>>  
>>  	/*
>>  	 * All the TLBs can be flushed out of mmu lock, see the comments in
>>  	 * kvm_mmu_slot_remove_write_access().
>>  	 */
>> +	lockdep_assert_held(&kvm->slots_lock);
>>  	if (is_dirty)
>>  		kvm_flush_remote_tlbs(kvm);
>>  
>> -	r = -EFAULT;
>> -	if (copy_to_user(log->dirty_bitmap, dirty_bitmap_buffer, n))
>> -		goto out;
>> -
>> -	r = 0;
>> -out:
>>  	mutex_unlock(&kvm->slots_lock);
>>  	return r;
>>  }
>>
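
For reference, after this hunk kvm_vm_ioctl_get_dirty_log reduces to roughly the
following (reassembled from the context and '+' lines above): the x86 code keeps
only the locking, the call into the generic helper, and the TLB flush.

int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
{
        bool is_dirty = false;
        int r;

        mutex_lock(&kvm->slots_lock);

        r = kvm_get_dirty_log_protect(kvm, log, &is_dirty);

        /*
         * All the TLBs can be flushed out of mmu lock, see the comments in
         * kvm_mmu_slot_remove_write_access().
         */
        lockdep_assert_held(&kvm->slots_lock);
        if (is_dirty)
                kvm_flush_remote_tlbs(kvm);

        mutex_unlock(&kvm->slots_lock);
        return r;
}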
> 
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 
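For anyone reading this without patch 2/7 handy: the generic helper's job is
essentially the bitmap walk that the hunk above deletes from x86.c. Below is a
sketch, modeled on that removed code and on the call site's signature; the real
kvm_get_dirty_log_protect() is added by patch 2/7 in virt/kvm/kvm_main.c and may
differ in detail.

/*
 * Sketch only, modeled on the removed x86 loop; not the upstream code.
 * Must be called with kvm->slots_lock held.
 */
int kvm_get_dirty_log_protect(struct kvm *kvm, struct kvm_dirty_log *log,
                              bool *is_dirty)
{
        struct kvm_memory_slot *memslot;
        unsigned long *dirty_bitmap, *dirty_bitmap_buffer;
        unsigned long n, i;

        *is_dirty = false;

        if (log->slot >= KVM_USER_MEM_SLOTS)
                return -EINVAL;

        memslot = id_to_memslot(kvm->memslots, log->slot);
        dirty_bitmap = memslot->dirty_bitmap;
        if (!dirty_bitmap)
                return -ENOENT;

        n = kvm_dirty_bitmap_bytes(memslot);
        dirty_bitmap_buffer = dirty_bitmap + n / sizeof(long);
        memset(dirty_bitmap_buffer, 0, n);

        spin_lock(&kvm->mmu_lock);
        for (i = 0; i < n / sizeof(long); i++) {
                unsigned long mask;
                gfn_t offset;

                if (!dirty_bitmap[i])
                        continue;

                *is_dirty = true;

                /* Step 1: snapshot and clear the dirty bits. */
                mask = xchg(&dirty_bitmap[i], 0);
                dirty_bitmap_buffer[i] = mask;

                /* Step 2: write protect the pages that were dirty. */
                offset = i * BITS_PER_LONG;
                kvm_arch_mmu_write_protect_pt_masked(kvm, memslot, offset, mask);
        }
        spin_unlock(&kvm->mmu_lock);

        /* Step 3: copy the snapshot out; the caller does step 4, the TLB flush. */
        if (copy_to_user(log->dirty_bitmap, dirty_bitmap_buffer, n))
                return -EFAULT;

        return 0;
}

Leaving step 4 to the caller is what lets each architecture flush TLBs its own
way (see patch 1/7 of this series); writes that slip through a stale TLB entry
between steps 2 and 4 are still caught by the snapshot or logged for the next
call, exactly as the updated comment in the hunk describes.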


Thread overview: 26+ messages
2014-11-14  1:57 [PATCH v14 0/7] KVM/arm/x86: dirty page logging for ARMv7 (3.17.0-rc1) Mario Smarduch
2014-11-14  1:57 ` [PATCH v14 1/7] KVM: Add architecture-defined TLB flush support Mario Smarduch
2014-11-22 17:08   ` Christoffer Dall
2014-11-14  1:57 ` [PATCH v14 2/7] KVM: Add generic support for dirty page logging Mario Smarduch
2014-11-22 19:13   ` Christoffer Dall
2014-11-14  1:57 ` [PATCH v14 3/7] KVM: x86: switch to kvm_get_dirty_log_protect Mario Smarduch
2014-11-14  2:06   ` Mario Smarduch
2014-11-14 10:03     ` Paolo Bonzini [this message]
2014-11-22 19:19   ` Christoffer Dall
2014-11-24 18:35     ` Mario Smarduch
2014-12-08 23:12     ` Mario Smarduch
2014-12-09 19:42       ` Paolo Bonzini
2014-11-14  1:57 ` [PATCH v14 4/7] KVM: arm: Add ARMv7 API to flush TLBs Mario Smarduch
2014-11-14  1:57 ` [PATCH v14 5/7] KVM: arm: Add initial dirty page locking support Mario Smarduch
2014-11-22 19:33   ` Christoffer Dall
2014-11-24 18:44     ` Mario Smarduch
2014-11-14  1:57 ` [PATCH v14 6/7] KVM: arm: dirty logging write protect support Mario Smarduch
2014-11-22 19:40   ` Christoffer Dall
2014-11-24 18:47     ` Mario Smarduch
2014-11-14  1:57 ` [PATCH v14 7/7] KVM: arm: page logging 2nd stage fault handling Mario Smarduch
2014-11-14 16:45   ` Marc Zyngier
2014-11-14 18:53     ` Mario Smarduch
2014-11-14  8:06 ` [PATCH v14 0/7] KVM/arm/x86: dirty page logging for ARMv7 (3.17.0-rc1) Cornelia Huck
2014-11-14 18:57   ` Mario Smarduch
2014-11-25 10:22 ` Christoffer Dall
2014-11-25 21:57   ` Mario Smarduch
