From: Richard Henderson <richard.henderson@linaro.org>
To: Alistair Francis <alistair23@gmail.com>
Cc: "open list:RISC-V" <qemu-riscv@nongnu.org>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	"qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>
Subject: Re: [PATCH for 5.0 v1 1/2] riscv: Don't use stage-2 PTE lookup protection flags
Date: Sat, 27 Jun 2020 15:48:36 -0700
Message-ID: <cb8ccef8-4a5d-9689-5027-a697fa859746@linaro.org>
In-Reply-To: <CAKmqyKPdGp+5n_fRuzi74JK8z8rcXMU+KiJw5v2nTMApHqXauA@mail.gmail.com>

On 6/25/20 12:02 PM, Alistair Francis wrote:
>> (3) Do we need to validate vbase_prot for write before updating the PTE for
>> Access or Dirty?  That seems like a loophole to allow silent modification of
>> hypervisor read-only memory.
> 
> That's a good point.
> 
> Updating the accessed bit seems correct to me as we did access it and
> that doesn't then provide write permissions.

I guess my first question is: does the stage-2 hypervisor PTE provide read-only
memory?

If not, all of this is moot.

However, if it does, consider:

  (1) The guest OS creates a stage-1 page table with a leaf table
      within the read-only memory.  This is obviously hokey.

  (2) The guest OS accesses a virtual address that uses the
      aforementioned PTE, and the hardware (QEMU) updates the
      accessed bit.

  (3) The read-only page has now been modified.  Oops.
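
To make that concrete, here is a rough, self-contained sketch of the kind of
check I have in mind (the helpers fake_pte_mem, stage2_prot and update_ad_bits
are made up for illustration, not the actual qemu code).  The point is that the
A/D write-back is itself a store to guest-physical memory, so it has to pass
the stage-2 write check before the PTE is modified:

  /* Sketch only: check the stage-2 (hypervisor) permissions of the page
   * holding a stage-1 PTE before setting its A/D bits.  If the hypervisor
   * mapped that page read-only, raise a guest-page fault instead of
   * silently modifying it. */

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define PTE_A (1u << 6)               /* accessed bit */
  #define PTE_D (1u << 7)               /* dirty bit    */

  #define S2_PROT_READ  1
  #define S2_PROT_WRITE 2

  /* Toy "guest-physical memory" holding one stage-1 leaf PTE (V|R|X,
   * neither A nor D set yet). */
  static uint64_t fake_pte_mem = 0x2000000b;

  /* Assumed stage-2 lookup: here the hypervisor mapped the page read-only. */
  static int stage2_prot(uint64_t gpa) { (void)gpa; return S2_PROT_READ; }

  /* Returns true if the A/D update succeeded, false if it must instead be
   * reported to the hypervisor as a guest-page fault. */
  static bool update_ad_bits(uint64_t *pte_slot, uint64_t pte_gpa, bool is_store)
  {
      uint64_t pte = *pte_slot;
      uint64_t updated = pte | PTE_A | (is_store ? PTE_D : 0);

      if (updated == pte) {
          return true;                  /* nothing to write back */
      }

      /* The write-back is itself a store to guest-physical memory, so it
       * must be permitted by the stage-2 mapping, not merely readable. */
      if (!(stage2_prot(pte_gpa) & S2_PROT_WRITE)) {
          return false;                 /* caller raises the fault */
      }

      *pte_slot = updated;
      return true;
  }

  int main(void)
  {
      if (!update_ad_bits(&fake_pte_mem, 0x80001000, false)) {
          puts("guest-page fault: stage-1 PTE is in read-only stage-2 memory");
      }
      return 0;
  }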

>> I do wonder if it might be easier to manage all of this by using additional
>> TLBs to handle the stage2 and physical address spaces.  That's probably too
>> invasive for this stage of development though.
> 
> Do you mean change riscv_cpu_mmu_index() to take into account
> virtualisation and have more than the current 3 (M, S and U) MMU
> indexes?

I had been thinking that you might be able to use some form of mmu-indexed
load/lookup instead of address_space_ldq, which would require one mmuidx that
is physically mapped (same as M?) and another that uses only the hypervisor's
second-stage lookup.
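
Purely as an illustration of the index layout I have in mind (the names below
are made up, not the current definitions in target/riscv):

  /* Hypothetical mmuidx layout: keep the existing per-privilege indexes and
   * add one index that bypasses translation entirely (for walking the
   * hypervisor's stage-2 tables) and one that applies only the stage-2
   * lookup (for walking the guest's stage-1 tables). */
  enum {
      MMU_IDX_U      = 0,   /* user mode                            */
      MMU_IDX_S      = 1,   /* supervisor mode                      */
      MMU_IDX_M      = 2,   /* machine mode: no translation         */
      MMU_IDX_PHYS   = 3,   /* physical addresses only (same as M?) */
      MMU_IDX_STAGE2 = 4,   /* hypervisor second-stage lookup only  */
  };

The page-table walker would then load PTEs through the appropriate mmuidx and
let the softmmu TLB cache the stage-2 and physical translations, rather than
resolving them by hand with address_space_ldq on every walk.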


r~


Thread overview: 11+ messages
2020-03-26 22:44 [PATCH for 5.0 v1 0/2] RISC-V: Fix Hypervisor guest user space Alistair Francis
2020-03-26 22:44 ` [PATCH for 5.0 v1 1/2] riscv: Don't use stage-2 PTE lookup protection flags Alistair Francis
2020-03-26 23:50   ` Richard Henderson
2020-06-25 19:02     ` Alistair Francis
2020-06-27 22:48       ` Richard Henderson [this message]
2020-03-26 22:44 ` [PATCH for 5.0 v1 2/2] riscv: AND stage-1 and stage-2 " Alistair Francis
2020-03-26 23:32   ` Richard Henderson
2020-03-26 23:45     ` Alistair Francis
2020-03-27  0:00 ` [PATCH for 5.0 v1 0/2] RISC-V: Fix Hypervisor guest user space Palmer Dabbelt
2020-03-30  4:23   ` Anup Patel
2020-04-20 19:16 ` Alistair Francis
