public inbox for kvm@vger.kernel.org
From: Will Deacon <will.deacon@arm.com>
To: Joel Schopp <joel.schopp@amd.com>
Cc: "pbonzini@redhat.com" <pbonzini@redhat.com>,
	"gleb@kernel.org" <gleb@kernel.org>,
	"peter.maydell@linaro.org" <peter.maydell@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	Christoffer Dall <christoffer.dall@linaro.org>,
	Marc Zyngier <Marc.Zyngier@arm.com>,
	Don Dutile <ddutile@redhat.com>
Subject: Re: [PATCH v2] kvm: arm64: vgic: fix hyp panic with 64k pages on juno platform
Date: Fri, 25 Jul 2014 17:09:24 +0100	[thread overview]
Message-ID: <20140725160924.GM5269@arm.com> (raw)
In-Reply-To: <53D27E22.2050602@amd.com>

On Fri, Jul 25, 2014 at 04:56:18PM +0100, Joel Schopp wrote:
> 
> On 07/25/2014 10:29 AM, Will Deacon wrote:
> > If the physical address of GICV isn't page-aligned, then we end up
> > creating a stage-2 mapping of the page containing it, which causes us to
> > map neighbouring memory locations directly into the guest.
> >
> > As an example, consider a platform with GICV at physical 0x2c02f000
> > running a 64k-page host kernel. If qemu maps this into the guest at
> > 0x80010000, then guest physical addresses 0x80010000 - 0x8001efff will
> > map host physical region 0x2c020000 - 0x2c02efff. Accesses to these
> > physical regions may cause UNPREDICTABLE behaviour, for example, on the
> > Juno platform this will cause an SError exception to EL3, which brings
> > down the entire physical CPU resulting in RCU stalls / HYP panics / host
> > crashing / wasted weeks of debugging.
> No denying this is a problem.
> > SBSA recommends that systems alias the 4k GICV across the bounding 64k
> > region, in which case GICV physical could be described as 0x2c020000 in
> > the above scenario.
> The problem with this patch is that the GICV is really 8K.  The reason
> you would map it at a 60K offset (0xf000), and why we do on our SoC, is
> so that the 8K GICV picks up the last 4K from the first page and the
> first 4K from the next page.  With your patch it is impossible to map
> all 8K of the GICV with 64K pages.

Please help me find an alternative. If we drop the size-alignment check,
then we can miss some dangerous cases, such as the one Peter highlighted
previously.

> My SoC, which works fine with KVM now, will stop working with KVM
> after this patch.

Right, but my only alternative is to have CONFIG_KVM depend on !64K_PAGES,
which sucks for everybody. Your device-tree entry has to change *anyway*,
because as it stands we're mapping 60k of unknown stuff into the guest,
which the kernel needs to know is safe.

Will


Thread overview: 8+ messages
2014-07-25 15:29 [PATCH v2] kvm: arm64: vgic: fix hyp panic with 64k pages on juno platform Will Deacon
2014-07-25 15:56 ` Joel Schopp
2014-07-25 16:02   ` Peter Maydell
2014-07-25 16:24     ` Joel Schopp
2014-07-25 16:38       ` Will Deacon
2014-07-25 16:09   ` Will Deacon [this message]
2014-07-30 10:47 ` Marc Zyngier
2014-07-30 12:55   ` Christoffer Dall
