From: Mario Smarduch <m.smarduch@samsung.com>
To: Marc Zyngier <marc.zyngier@arm.com>
Cc: "kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	"christoffer.dall@linaro.org" <christoffer.dall@linaro.org>,
	이정석 <jays.lee@samsung.com>, 정성진 <sungjinn.chung@samsung.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>
Subject: Re: [PATCH 2/3] migration dirtybitmap support ARMv7
Date: Tue, 15 Apr 2014 18:18:14 -0700
Message-ID: <534DDA56.1020803@samsung.com>
In-Reply-To: <534CF4CE.2060600@arm.com>

On 04/15/2014 01:58 AM, Marc Zyngier wrote:
> 
> Why do you nuke the whole TLBs for this VM? I assume you're going to
> repeatedly call this for all the huge pages, aren't you? Can you delay
> this flush to do it only once?
> 
>> +	get_page(virt_to_page(pte));
>> +	return true;
>> +}
>> +
>> +/*
>> + * Called from QEMU when migration dirty logging is started. Write-protect
>> + * the memslot's current mappings; future write faults are tracked through
>> + * write protection while the dirty log is active.
> 
> Same as above.
> 
>> + */
>> +
>> +void kvm_mmu_slot_remove_write_access(struct kvm *kvm, int slot)
>> +{
>> +	pgd_t *pgd;
>> +	pud_t *pud;
>> +	pmd_t *pmd;
>> +	pte_t *pte, new_pte;
>> +	pgd_t *pgdp = kvm->arch.pgd;
>> +	struct kvm_memory_slot *memslot = id_to_memslot(kvm->memslots, slot);
>> +	u64 start = memslot->base_gfn << PAGE_SHIFT;
>> +	u64 end = (memslot->base_gfn + memslot->npages) << PAGE_SHIFT;
>> +	u64 addr = start, addr1;
>> +
>> +	spin_lock(&kvm->mmu_lock);
>> +	kvm->arch.migration_in_progress = true;
>> +	while (addr < end) {
>> +		if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
>> +			kvm_tlb_flush_vmid(kvm);
> 
> Looks like you're extremely flush happy. If you're holding the lock, why
> do you need all the extra flushes in the previous function?

Reduced it to one flush, upon termination of the write protect loop.
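
In outline the function then reduces to something like the sketch below,
with the single TLB flush hoisted out of the walk. This is illustrative
only: stage2_wp_range() is a hypothetical stand-in for the pgd/pud/pmd/pte
loop in the patch, the other names are taken from the quoted code.

	/*
	 * Sketch: write-protect every stage-2 mapping in the memslot under
	 * mmu_lock, then flush this VMID's TLB entries once after the walk
	 * instead of once per entry.
	 */
	void kvm_mmu_slot_remove_write_access(struct kvm *kvm, int slot)
	{
		struct kvm_memory_slot *memslot =
				id_to_memslot(kvm->memslots, slot);
		u64 start = memslot->base_gfn << PAGE_SHIFT;
		u64 end = (memslot->base_gfn + memslot->npages) << PAGE_SHIFT;

		spin_lock(&kvm->mmu_lock);
		kvm->arch.migration_in_progress = true;
		stage2_wp_range(kvm, start, end);  /* clear S2 write permission */
		kvm_tlb_flush_vmid(kvm);           /* single flush, after the loop */
		spin_unlock(&kvm->mmu_lock);
	}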

>> +
>> +		if (kvm_pmd_huge(*pmd)) {
>> +			if (!split_pmd(kvm, pmd, addr)) {
>> +				kvm->arch.migration_in_progress = false;
>> +				return;
> 
> Bang, you're dead.
Yes, added the unlock; also added a return code in the get-dirty-log
function to abort migration.
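
For reference, the fixed error path looks roughly like the sketch below;
how the failure actually propagates back to the dirty-log ioctl is an
assumption on my part and not shown here.

	if (kvm_pmd_huge(*pmd)) {
		if (!split_pmd(kvm, pmd, addr)) {
			/*
			 * Sketch: drop mmu_lock before bailing out so we
			 * don't return with the lock held; clearing
			 * migration_in_progress lets the dirty-log path
			 * notice the failure and abort migration.
			 */
			kvm->arch.migration_in_progress = false;
			spin_unlock(&kvm->mmu_lock);
			return;
		}
	}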

>>  		pte_t new_pte = pfn_pte(pfn, PAGE_S2);
>>  		if (writable) {
>> +			if (migration_active && hugetlb) {
>> +				/* get back pfn from fault_ipa */
>> +				pfn += (fault_ipa >> PAGE_SHIFT) &
>> +					((1 << (PMD_SHIFT - PAGE_SHIFT))-1);
>> +				new_pte = pfn_pte(pfn, PAGE_S2);
> 
> Please explain this.
The next patch series will update this; there was another problem with
handling PMD huge pages and directing them to PTE handling.
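
To spell out the intent of the hunk as posted: on a write fault during
migration we want to map only the 4K page that faulted, not re-install
the whole 2MB block, so the low bits of fault_ipa are added back to the
PMD-aligned pfn. A worked sketch of the index math, assuming 4K pages
and 2MB sections (PAGE_SHIFT = 12, PMD_SHIFT = 21):

	/*
	 * PMD_SHIFT - PAGE_SHIFT = 21 - 12 = 9, so the mask is 0x1ff
	 * (512 pages per 2MB section).
	 *
	 * If fault_ipa = 0x40123000 and pfn is the frame of the 2MB block,
	 * then page_index = (fault_ipa >> PAGE_SHIFT) & 0x1ff = 0x123 and
	 * pfn += page_index, so the fault installs a single 4K PTE for the
	 * exact page that was written.
	 */
	pfn += (fault_ipa >> PAGE_SHIFT) & ((1 << (PMD_SHIFT - PAGE_SHIFT)) - 1);
	new_pte = pfn_pte(pfn, PAGE_S2);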

Thread overview: 3+ messages
2014-04-15  1:24 [PATCH 2/3] migration dirtybitmap support ARMv7 Mario Smarduch
2014-04-15  8:58 ` Marc Zyngier
2014-04-16  1:18   ` Mario Smarduch [this message]
