public inbox for kvm@vger.kernel.org
From: Joerg Roedel <joro@8bytes.org>
To: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Joerg Roedel <joerg.roedel@amd.com>, Avi Kivity <avi@redhat.com>,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] kvm mmu: add support for 1GB pages in shadow paging code
Date: Sat, 28 Mar 2009 23:04:29 +0100	[thread overview]
Message-ID: <20090328220429.GF31080@8bytes.org> (raw)
In-Reply-To: <20090328212834.GA4694@amt.cnet>

On Sat, Mar 28, 2009 at 06:28:35PM -0300, Marcelo Tosatti wrote:
> > I have searched this bug for quite some time with no real luck. Maybe
> > some other reviewers have more luck than I had by now.
> 
> Sorry, I can't spot what is wrong here. Avi?
> 
> Perhaps it helps if you provide some info on the hang when the guest
> allocates hugepages on boot (it's probably an endless fault that can't
> be corrected?).

I will try to find out why the guest gets stuck. I also created a full
mmu trace of a boot crash case, but its size was around 170MB and I
found no real problem in it.

> Also another point is that the large huge page at 0-1GB will never
> be created, because it crosses slot boundary.

The instabilities only occur if the guest has enough memory to use a
gbpage in its own direct mapping. They also go away when I boot it with
the nogbpages command line option. So it's likely that it has something
to do with the processing of guest gbpages in the softmmu code. But I
have looked over this code again and again and cannot find a bug there.

> 
> > Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
> > ---
> >  arch/x86/kvm/mmu.c         |   56 +++++++++++++++++++++++++++++++------------
> >  arch/x86/kvm/paging_tmpl.h |   35 +++++++++++++++++++++------
> >  arch/x86/kvm/svm.c         |    2 +-
> >  3 files changed, 68 insertions(+), 25 deletions(-)
> > 
> > +	psize = backing_size(vcpu, vcpu->arch.update_pte.gfn);
> 
> This can block, and this path holds mmu_lock. That's why it needs to
> be done in guess_page_from_pte_write.

Ah true. Thanks for pointing this out. The previous code in the
guess_page function makes sense now.

> > +	if ((sp->role.level == PT_DIRECTORY_LEVEL) &&
> > +	    (psize >= KVM_PAGE_SIZE_2M)) {
> > +		psize = KVM_PAGE_SIZE_2M;
> > +		vcpu->arch.update_pte.gfn &= ~(KVM_PAGES_PER_2M_PAGE-1);
> > +		vcpu->arch.update_pte.pfn &= ~(KVM_PAGES_PER_2M_PAGE-1);
> > +	} else if ((sp->role.level == PT_MIDDLE_LEVEL) &&
> > +		   (psize == KVM_PAGE_SIZE_1G)) {
> > +		vcpu->arch.update_pte.gfn &= ~(KVM_PAGES_PER_1G_PAGE-1);
> > +		vcpu->arch.update_pte.pfn &= ~(KVM_PAGES_PER_1G_PAGE-1);
> > +	} else
> > +		goto out_pde;
> 
> Better to just zap the entry in case it's a 1GB one and let the
> fault path handle it.

Yes, that's probably better.

	Joerg


Thread overview: 5+ messages
2009-03-27 14:35 [PATCH] kvm mmu: add support for 1GB pages in shadow paging code Joerg Roedel
2009-03-28 21:28 ` Marcelo Tosatti
2009-03-28 22:04   ` Joerg Roedel [this message]
2009-03-29 11:59 ` Avi Kivity
2009-03-29 12:50   ` Joerg Roedel
