* [Qemu-devel] [PATCH 0/3] Support M-profile MPU regions smaller than 1K
@ 2018-06-20 13:06 Peter Maydell
2018-06-20 13:06 ` [Qemu-devel] [PATCH 1/3] tcg: Support MMU protection regions smaller than TARGET_PAGE_SIZE Peter Maydell
` (3 more replies)
0 siblings, 4 replies; 13+ messages in thread
From: Peter Maydell @ 2018-06-20 13:06 UTC (permalink / raw)
To: qemu-arm, qemu-devel; +Cc: patches, Paolo Bonzini, Richard Henderson
The Arm M-profile MPU allows the guest to specify access
permissions at a very fine granularity (down to a 32-byte
alignment for region start and end addresses). Currently
we insist that regions are page-aligned because the core
TCG code can't handle anything else.
This patchset relaxes that restriction, so that we can handle
small MPU regions for reading and writing (but not yet for
execution). It does that by marking the TLB entry for any
page which includes a small region with a flag TLB_RECHECK.
This flag causes us to always take the slow-path for accesses.
In the slow path we can then special case them to always call
tlb_fill() again, so we have the correct information for the
exact address being accessed.
Patch 1 adds support to the accel/tcg code. Patch 2 then
enables using it for the PMSAv7 (v7M) MPU, and patch 3
does the same for the PMSAv8 (v8M) MPU and SAU.
Because we don't yet support execution from small regions,
the PMSA code has some corner cases where it retains the
previous behaviour so we don't break previously-working
guests:
* if the MPU region is small, we don't mark it PROT_EXEC
even if the guest asked for it (so execution will cause
an MPU exception)
* we ignore the fact that the SAU region might be smaller
than a page
(Unfortunately the old code *intended* to make small-region
accesses non-executable but due to bugs didn't actually
succeed in doing that, so this might possibly cause some
previously working-by-accident code to break.)
I would ideally in future like to add execution support, but
this is somewhat tricky. My rough sketch for it looks like:
* get_page_addr_code() should return -1 for "not actually
a full page of RAM" (deleting all the current tricky code
for trying to handle it being a memory region or unmapped)
* its callsites in the TB hashtable lookup code should handle
-1 by returning "no TB found"
* the weird call to get_page_addr_code() in the xtensa itlb_hit_test
helper should be replaced by a call to tlb_fill() or some
xtensa-internal function (I think all it is trying to do is
cause the exceptions for MMU faults)
* tb_gen_code() should in this case generate a one-instruction
TB and tell its caller not to cache it
* the translate.c code for targets that need this probably needs
fixing up to make sure it can handle the case of "the load
code byte/word/etc functions might return failure or cause
an exception"
This would also have the advantage that it naturally allows
you to execute (slowly) from any MMIO region, and we could drop
the broken request-mmio-pointer machinery.
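As a very rough illustration of the first two bullet points in that
sketch (purely hypothetical, not part of this series; the surrounding
structures are elided and names are only indicative),
get_page_addr_code() would do something like:

  tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
  {
      /* ... TLB lookup / tlb_fill() as today ... */
      if (tlb_addr & TLB_RECHECK) {
          /* Not a full page of directly-addressable RAM (small
           * region, MMIO, unmapped): let the caller deal with it. */
          return -1;
      }
      /* ... otherwise return the RAM page address as it does now ... */
  }

and its callsites in the TB hashtable lookup would treat -1 as a miss:

  phys_pc = get_page_addr_code(env, pc);
  if (phys_pc == -1) {
      return NULL; /* "no TB found": fall back to tb_gen_code() */
  }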
In any case that is too much for 3.0, I think; I'd like to get this
R/W small-region code into 3.0, though.
thanks
-- PMM
Peter Maydell (3):
tcg: Support MMU protection regions smaller than TARGET_PAGE_SIZE
target/arm: Set page (region) size in get_phys_addr_pmsav7()
target/arm: Handle small regions in get_phys_addr_pmsav8()
accel/tcg/softmmu_template.h | 24 ++++---
include/exec/cpu-all.h | 5 +-
accel/tcg/cputlb.c | 131 +++++++++++++++++++++++++++++------
target/arm/helper.c | 115 +++++++++++++++++++++---------
4 files changed, 211 insertions(+), 64 deletions(-)
--
2.17.1
* [Qemu-devel] [PATCH 1/3] tcg: Support MMU protection regions smaller than TARGET_PAGE_SIZE
2018-06-20 13:06 [Qemu-devel] [PATCH 0/3] Support M-profile MPU regions smaller than 1K Peter Maydell
@ 2018-06-20 13:06 ` Peter Maydell
2018-06-20 15:46 ` Mark Cave-Ayland
2018-06-30 19:20 ` Max Filippov
2018-06-20 13:06 ` [Qemu-devel] [PATCH 2/3] target/arm: Set page (region) size in get_phys_addr_pmsav7() Peter Maydell
` (2 subsequent siblings)
3 siblings, 2 replies; 13+ messages in thread
From: Peter Maydell @ 2018-06-20 13:06 UTC (permalink / raw)
To: qemu-arm, qemu-devel; +Cc: patches, Paolo Bonzini, Richard Henderson
Add support for MMU protection regions that are smaller than
TARGET_PAGE_SIZE. We do this by marking the TLB entry for those
pages with a flag TLB_RECHECK. This flag causes us to always
take the slow-path for accesses. In the slow path we can then
special case them to always call tlb_fill() again, so we have
the correct information for the exact address being accessed.
This change allows us to handle reading and writing from small
regions; we cannot deal with execution from the small region.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
accel/tcg/softmmu_template.h | 24 ++++---
include/exec/cpu-all.h | 5 +-
accel/tcg/cputlb.c | 131 +++++++++++++++++++++++++++++------
3 files changed, 130 insertions(+), 30 deletions(-)
diff --git a/accel/tcg/softmmu_template.h b/accel/tcg/softmmu_template.h
index 239ea6692b4..c47591c9709 100644
--- a/accel/tcg/softmmu_template.h
+++ b/accel/tcg/softmmu_template.h
@@ -98,10 +98,12 @@
static inline DATA_TYPE glue(io_read, SUFFIX)(CPUArchState *env,
size_t mmu_idx, size_t index,
target_ulong addr,
- uintptr_t retaddr)
+ uintptr_t retaddr,
+ bool recheck)
{
CPUIOTLBEntry *iotlbentry = &env->iotlb[mmu_idx][index];
- return io_readx(env, iotlbentry, mmu_idx, addr, retaddr, DATA_SIZE);
+ return io_readx(env, iotlbentry, mmu_idx, addr, retaddr, recheck,
+ DATA_SIZE);
}
#endif
@@ -138,7 +140,8 @@ WORD_TYPE helper_le_ld_name(CPUArchState *env, target_ulong addr,
/* ??? Note that the io helpers always read data in the target
byte ordering. We should push the LE/BE request down into io. */
- res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr);
+ res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr,
+ tlb_addr & TLB_RECHECK);
res = TGT_LE(res);
return res;
}
@@ -205,7 +208,8 @@ WORD_TYPE helper_be_ld_name(CPUArchState *env, target_ulong addr,
/* ??? Note that the io helpers always read data in the target
byte ordering. We should push the LE/BE request down into io. */
- res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr);
+ res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr,
+ tlb_addr & TLB_RECHECK);
res = TGT_BE(res);
return res;
}
@@ -259,10 +263,12 @@ static inline void glue(io_write, SUFFIX)(CPUArchState *env,
size_t mmu_idx, size_t index,
DATA_TYPE val,
target_ulong addr,
- uintptr_t retaddr)
+ uintptr_t retaddr,
+ bool recheck)
{
CPUIOTLBEntry *iotlbentry = &env->iotlb[mmu_idx][index];
- return io_writex(env, iotlbentry, mmu_idx, val, addr, retaddr, DATA_SIZE);
+ return io_writex(env, iotlbentry, mmu_idx, val, addr, retaddr,
+ recheck, DATA_SIZE);
}
void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
@@ -298,7 +304,8 @@ void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
/* ??? Note that the io helpers always read data in the target
byte ordering. We should push the LE/BE request down into io. */
val = TGT_LE(val);
- glue(io_write, SUFFIX)(env, mmu_idx, index, val, addr, retaddr);
+ glue(io_write, SUFFIX)(env, mmu_idx, index, val, addr,
+ retaddr, tlb_addr & TLB_RECHECK);
return;
}
@@ -375,7 +382,8 @@ void helper_be_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
/* ??? Note that the io helpers always read data in the target
byte ordering. We should push the LE/BE request down into io. */
val = TGT_BE(val);
- glue(io_write, SUFFIX)(env, mmu_idx, index, val, addr, retaddr);
+ glue(io_write, SUFFIX)(env, mmu_idx, index, val, addr, retaddr,
+ tlb_addr & TLB_RECHECK);
return;
}
diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
index 7fa726b8e36..7338f57062f 100644
--- a/include/exec/cpu-all.h
+++ b/include/exec/cpu-all.h
@@ -330,11 +330,14 @@ CPUArchState *cpu_copy(CPUArchState *env);
#define TLB_NOTDIRTY (1 << (TARGET_PAGE_BITS - 2))
/* Set if TLB entry is an IO callback. */
#define TLB_MMIO (1 << (TARGET_PAGE_BITS - 3))
+/* Set if TLB entry must have MMU lookup repeated for every access */
+#define TLB_RECHECK (1 << (TARGET_PAGE_BITS - 4))
/* Use this mask to check interception with an alignment mask
* in a TCG backend.
*/
-#define TLB_FLAGS_MASK (TLB_INVALID_MASK | TLB_NOTDIRTY | TLB_MMIO)
+#define TLB_FLAGS_MASK (TLB_INVALID_MASK | TLB_NOTDIRTY | TLB_MMIO \
+ | TLB_RECHECK)
void dump_exec_info(FILE *f, fprintf_function cpu_fprintf);
void dump_opcount_info(FILE *f, fprintf_function cpu_fprintf);
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 0a721bb9c40..d893452295f 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -621,27 +621,42 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
target_ulong code_address;
uintptr_t addend;
CPUTLBEntry *te, *tv, tn;
- hwaddr iotlb, xlat, sz;
+ hwaddr iotlb, xlat, sz, paddr_page;
+ target_ulong vaddr_page;
unsigned vidx = env->vtlb_index++ % CPU_VTLB_SIZE;
int asidx = cpu_asidx_from_attrs(cpu, attrs);
assert_cpu_is_self(cpu);
- assert(size >= TARGET_PAGE_SIZE);
- if (size != TARGET_PAGE_SIZE) {
- tlb_add_large_page(env, vaddr, size);
- }
- sz = size;
- section = address_space_translate_for_iotlb(cpu, asidx, paddr, &xlat, &sz,
- attrs, &prot);
+ if (size < TARGET_PAGE_SIZE) {
+ sz = TARGET_PAGE_SIZE;
+ } else {
+ if (size > TARGET_PAGE_SIZE) {
+ tlb_add_large_page(env, vaddr, size);
+ }
+ sz = size;
+ }
+ vaddr_page = vaddr & TARGET_PAGE_MASK;
+ paddr_page = paddr & TARGET_PAGE_MASK;
+
+ section = address_space_translate_for_iotlb(cpu, asidx, paddr_page,
+ &xlat, &sz, attrs, &prot);
assert(sz >= TARGET_PAGE_SIZE);
tlb_debug("vaddr=" TARGET_FMT_lx " paddr=0x" TARGET_FMT_plx
" prot=%x idx=%d\n",
vaddr, paddr, prot, mmu_idx);
- address = vaddr;
- if (!memory_region_is_ram(section->mr) && !memory_region_is_romd(section->mr)) {
+ address = vaddr_page;
+ if (size < TARGET_PAGE_SIZE) {
+ /*
+ * Slow-path the TLB entries; we will repeat the MMU check and TLB
+ * fill on every access.
+ */
+ address |= TLB_RECHECK;
+ }
+ if (!memory_region_is_ram(section->mr) &&
+ !memory_region_is_romd(section->mr)) {
/* IO memory case */
address |= TLB_MMIO;
addend = 0;
@@ -651,10 +666,10 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
}
code_address = address;
- iotlb = memory_region_section_get_iotlb(cpu, section, vaddr, paddr, xlat,
- prot, &address);
+ iotlb = memory_region_section_get_iotlb(cpu, section, vaddr_page,
+ paddr_page, xlat, prot, &address);
- index = (vaddr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
+ index = (vaddr_page >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
te = &env->tlb_table[mmu_idx][index];
/* do not discard the translation in te, evict it into a victim tlb */
tv = &env->tlb_v_table[mmu_idx][vidx];
@@ -670,18 +685,18 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
* TARGET_PAGE_BITS, and either
* + the ram_addr_t of the page base of the target RAM (if NOTDIRTY or ROM)
* + the offset within section->mr of the page base (otherwise)
- * We subtract the vaddr (which is page aligned and thus won't
+ * We subtract the vaddr_page (which is page aligned and thus won't
* disturb the low bits) to give an offset which can be added to the
* (non-page-aligned) vaddr of the eventual memory access to get
* the MemoryRegion offset for the access. Note that the vaddr we
* subtract here is that of the page base, and not the same as the
* vaddr we add back in io_readx()/io_writex()/get_page_addr_code().
*/
- env->iotlb[mmu_idx][index].addr = iotlb - vaddr;
+ env->iotlb[mmu_idx][index].addr = iotlb - vaddr_page;
env->iotlb[mmu_idx][index].attrs = attrs;
/* Now calculate the new entry */
- tn.addend = addend - vaddr;
+ tn.addend = addend - vaddr_page;
if (prot & PAGE_READ) {
tn.addr_read = address;
} else {
@@ -702,7 +717,7 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
tn.addr_write = address | TLB_MMIO;
} else if (memory_region_is_ram(section->mr)
&& cpu_physical_memory_is_clean(
- memory_region_get_ram_addr(section->mr) + xlat)) {
+ memory_region_get_ram_addr(section->mr) + xlat)) {
tn.addr_write = address | TLB_NOTDIRTY;
} else {
tn.addr_write = address;
@@ -775,7 +790,8 @@ static inline ram_addr_t qemu_ram_addr_from_host_nofail(void *ptr)
static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
int mmu_idx,
- target_ulong addr, uintptr_t retaddr, int size)
+ target_ulong addr, uintptr_t retaddr,
+ bool recheck, int size)
{
CPUState *cpu = ENV_GET_CPU(env);
hwaddr mr_offset;
@@ -785,6 +801,29 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
bool locked = false;
MemTxResult r;
+ if (recheck) {
+ /*
+ * This is a TLB_RECHECK access, where the MMU protection
+ * covers a smaller range than a target page, and we must
+ * repeat the MMU check here. This tlb_fill() call might
+ * longjump out if this access should cause a guest exception.
+ */
+ int index;
+ target_ulong tlb_addr;
+
+ tlb_fill(cpu, addr, size, MMU_DATA_LOAD, mmu_idx, retaddr);
+
+ index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
+ tlb_addr = env->tlb_table[mmu_idx][index].addr_read;
+ if (!(tlb_addr & ~(TARGET_PAGE_MASK | TLB_RECHECK))) {
+ /* RAM access */
+ uintptr_t haddr = addr + env->tlb_table[mmu_idx][index].addend;
+
+ return ldn_p((void *)haddr, size);
+ }
+ /* Fall through for handling IO accesses */
+ }
+
section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
mr = section->mr;
mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
@@ -819,7 +858,7 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
int mmu_idx,
uint64_t val, target_ulong addr,
- uintptr_t retaddr, int size)
+ uintptr_t retaddr, bool recheck, int size)
{
CPUState *cpu = ENV_GET_CPU(env);
hwaddr mr_offset;
@@ -828,6 +867,30 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
bool locked = false;
MemTxResult r;
+ if (recheck) {
+ /*
+ * This is a TLB_RECHECK access, where the MMU protection
+ * covers a smaller range than a target page, and we must
+ * repeat the MMU check here. This tlb_fill() call might
+ * longjump out if this access should cause a guest exception.
+ */
+ int index;
+ target_ulong tlb_addr;
+
+ tlb_fill(cpu, addr, size, MMU_DATA_STORE, mmu_idx, retaddr);
+
+ index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
+ tlb_addr = env->tlb_table[mmu_idx][index].addr_write;
+ if (!(tlb_addr & ~(TARGET_PAGE_MASK | TLB_RECHECK))) {
+ /* RAM access */
+ uintptr_t haddr = addr + env->tlb_table[mmu_idx][index].addend;
+
+ stn_p((void *)haddr, size, val);
+ return;
+ }
+ /* Fall through for handling IO accesses */
+ }
+
section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
mr = section->mr;
mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
@@ -911,6 +974,32 @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
tlb_fill(ENV_GET_CPU(env), addr, 0, MMU_INST_FETCH, mmu_idx, 0);
}
}
+
+ if (unlikely(env->tlb_table[mmu_idx][index].addr_code & TLB_RECHECK)) {
+ /*
+ * This is a TLB_RECHECK access, where the MMU protection
+ * covers a smaller range than a target page, and we must
+ * repeat the MMU check here. This tlb_fill() call might
+ * longjump out if this access should cause a guest exception.
+ */
+ int index;
+ target_ulong tlb_addr;
+
+ tlb_fill(cpu, addr, 0, MMU_INST_FETCH, mmu_idx, 0);
+
+ index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
+ tlb_addr = env->tlb_table[mmu_idx][index].addr_code;
+ if (!(tlb_addr & ~(TARGET_PAGE_MASK | TLB_RECHECK))) {
+ /* RAM access. We can't handle this, so for now just stop */
+ cpu_abort(cpu, "Unable to handle guest executing from RAM within "
+ "a small MPU region at 0x" TARGET_FMT_lx, addr);
+ }
+ /*
+ * Fall through to handle IO accesses (which will almost certainly
+ * also result in failure)
+ */
+ }
+
iotlbentry = &env->iotlb[mmu_idx][index];
section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
mr = section->mr;
@@ -1019,8 +1108,8 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
tlb_addr = tlbe->addr_write & ~TLB_INVALID_MASK;
}
- /* Notice an IO access */
- if (unlikely(tlb_addr & TLB_MMIO)) {
+ /* Notice an IO access or a needs-MMU-lookup access */
+ if (unlikely(tlb_addr & (TLB_MMIO | TLB_RECHECK))) {
/* There's really nothing that can be done to
support this apart from stop-the-world. */
goto stop_the_world;
--
2.17.1
* [Qemu-devel] [PATCH 2/3] target/arm: Set page (region) size in get_phys_addr_pmsav7()
2018-06-20 13:06 [Qemu-devel] [PATCH 0/3] Support M-profile MPU regions smaller than 1K Peter Maydell
2018-06-20 13:06 ` [Qemu-devel] [PATCH 1/3] tcg: Support MMU protection regions smaller than TARGET_PAGE_SIZE Peter Maydell
@ 2018-06-20 13:06 ` Peter Maydell
2018-06-20 13:06 ` [Qemu-devel] [PATCH 3/3] target/arm: Handle small regions in get_phys_addr_pmsav8() Peter Maydell
2018-06-20 19:22 ` [Qemu-devel] [PATCH 0/3] Support M-profile MPU regions smaller than 1K Richard Henderson
3 siblings, 0 replies; 13+ messages in thread
From: Peter Maydell @ 2018-06-20 13:06 UTC (permalink / raw)
To: qemu-arm, qemu-devel; +Cc: patches, Paolo Bonzini, Richard Henderson
We want to handle small MPU region sizes for ARMv7M. To do this,
make get_phys_addr_pmsav7() set the page size to the region
size if it is less than TARGET_PAGE_SIZE, rather than working
only in TARGET_PAGE_SIZE chunks.
Since the core TCG code can't handle execution from small
MPU regions, we strip the exec permission from them so that
any execution attempts will cause an MPU exception, rather
than allowing it to end up with a cpu_abort() in
get_page_addr_code().
(The previous code's intention was to make any small page be
treated as having no permissions, but unfortunately errors
in the implementation meant that it didn't behave that way.
It's possible that some binaries using small regions were
accidentally working with our old behaviour and won't now.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.c | 37 ++++++++++++++++++++++++++-----------
1 file changed, 26 insertions(+), 11 deletions(-)
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 1248d84e6fa..a7edeb66633 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -9596,6 +9596,7 @@ static inline bool m_is_system_region(CPUARMState *env, uint32_t address)
static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
MMUAccessType access_type, ARMMMUIdx mmu_idx,
hwaddr *phys_ptr, int *prot,
+ target_ulong *page_size,
ARMMMUFaultInfo *fi)
{
ARMCPU *cpu = arm_env_get_cpu(env);
@@ -9603,6 +9604,7 @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
bool is_user = regime_is_user(env, mmu_idx);
*phys_ptr = address;
+ *page_size = TARGET_PAGE_SIZE;
*prot = 0;
if (regime_translation_disabled(env, mmu_idx) ||
@@ -9675,16 +9677,12 @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
rsize++;
}
}
- if (rsize < TARGET_PAGE_BITS) {
- qemu_log_mask(LOG_UNIMP,
- "DRSR[%d]: No support for MPU (sub)region size of"
- " %" PRIu32 " bytes. Minimum is %d.\n",
- n, (1 << rsize), TARGET_PAGE_SIZE);
- continue;
- }
if (srdis) {
continue;
}
+ if (rsize < TARGET_PAGE_BITS) {
+ *page_size = 1 << rsize;
+ }
break;
}
@@ -9765,6 +9763,17 @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
fi->type = ARMFault_Permission;
fi->level = 1;
+ /*
+ * Core QEMU code can't handle execution from small pages yet, so
+ * don't try it. This way we'll get an MPU exception, rather than
+ * eventually causing QEMU to exit in get_page_addr_code().
+ */
+ if (*page_size < TARGET_PAGE_SIZE && (*prot & PAGE_EXEC)) {
+ qemu_log_mask(LOG_UNIMP,
+ "MPU: No support for execution from regions "
+ "smaller than 1K\n");
+ *prot &= ~PAGE_EXEC;
+ }
return !(*prot & (1 << access_type));
}
@@ -10334,7 +10343,7 @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
} else if (arm_feature(env, ARM_FEATURE_V7)) {
/* PMSAv7 */
ret = get_phys_addr_pmsav7(env, address, access_type, mmu_idx,
- phys_ptr, prot, fi);
+ phys_ptr, prot, page_size, fi);
} else {
/* Pre-v7 MPU */
ret = get_phys_addr_pmsav5(env, address, access_type, mmu_idx,
@@ -10396,9 +10405,15 @@ bool arm_tlb_fill(CPUState *cs, vaddr address,
core_to_arm_mmu_idx(env, mmu_idx), &phys_addr,
&attrs, &prot, &page_size, fi, NULL);
if (!ret) {
- /* Map a single [sub]page. */
- phys_addr &= TARGET_PAGE_MASK;
- address &= TARGET_PAGE_MASK;
+ /*
+ * Map a single [sub]page. Regions smaller than our declared
+ * target page size are handled specially, so for those we
+ * pass in the exact addresses.
+ */
+ if (page_size >= TARGET_PAGE_SIZE) {
+ phys_addr &= TARGET_PAGE_MASK;
+ address &= TARGET_PAGE_MASK;
+ }
tlb_set_page_with_attrs(cs, address, phys_addr, attrs,
prot, mmu_idx, page_size);
return 0;
--
2.17.1
* [Qemu-devel] [PATCH 3/3] target/arm: Handle small regions in get_phys_addr_pmsav8()
2018-06-20 13:06 [Qemu-devel] [PATCH 0/3] Support M-profile MPU regions smaller than 1K Peter Maydell
2018-06-20 13:06 ` [Qemu-devel] [PATCH 1/3] tcg: Support MMU protection regions smaller than TARGET_PAGE_SIZE Peter Maydell
2018-06-20 13:06 ` [Qemu-devel] [PATCH 2/3] target/arm: Set page (region) size in get_phys_addr_pmsav7() Peter Maydell
@ 2018-06-20 13:06 ` Peter Maydell
2018-06-20 19:22 ` [Qemu-devel] [PATCH 0/3] Support M-profile MPU regions smaller than 1K Richard Henderson
3 siblings, 0 replies; 13+ messages in thread
From: Peter Maydell @ 2018-06-20 13:06 UTC (permalink / raw)
To: qemu-arm, qemu-devel; +Cc: patches, Paolo Bonzini, Richard Henderson
Allow ARMv8M to handle small MPU and SAU region sizes, by making
get_phys_addr_pmsav8() set the page size to 1 if the MPU or
SAU region covers less than TARGET_PAGE_SIZE.
We choose to use a size of 1 because it makes no difference to
the core code, and avoids having to track both the base and
limit for SAU and MPU and then convert into an artificially
restricted "page size" that the core code will then ignore.
Since the core TCG code can't handle execution from small
MPU regions, we strip the exec permission from them so that
any execution attempts will cause an MPU exception, rather
than allowing it to end up with a cpu_abort() in
get_page_addr_code().
(The previous code's intention was to make any small page be
treated as having no permissions, but unfortunately errors
in the implementation meant that it didn't behave that way.
It's possible that some binaries using small regions were
accidentally working with our old behaviour and won't now.)
We also retain an existing bug, where we ignored the possibility
that the SAU region might not cover the entire page, in the
case of executable regions. This is necessary because some
currently-working guest code images rely on being able to
execute from addresses which are covered by a page-sized
MPU region but a smaller SAU region. We can remove this
workaround if we ever support execution from small regions.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.c | 78 ++++++++++++++++++++++++++++++++-------------
1 file changed, 55 insertions(+), 23 deletions(-)
diff --git a/target/arm/helper.c b/target/arm/helper.c
index a7edeb66633..3c6a4c565b1 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -41,6 +41,7 @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
/* Security attributes for an address, as returned by v8m_security_lookup. */
typedef struct V8M_SAttributes {
+ bool subpage; /* true if these attrs don't cover the whole TARGET_PAGE */
bool ns;
bool nsc;
uint8_t sregion;
@@ -9804,6 +9805,8 @@ static void v8m_security_lookup(CPUARMState *env, uint32_t address,
int r;
bool idau_exempt = false, idau_ns = true, idau_nsc = true;
int idau_region = IREGION_NOTVALID;
+ uint32_t addr_page_base = address & TARGET_PAGE_MASK;
+ uint32_t addr_page_limit = addr_page_base + (TARGET_PAGE_SIZE - 1);
if (cpu->idau) {
IDAUInterfaceClass *iic = IDAU_INTERFACE_GET_CLASS(cpu->idau);
@@ -9841,6 +9844,9 @@ static void v8m_security_lookup(CPUARMState *env, uint32_t address,
uint32_t limit = env->sau.rlar[r] | 0x1f;
if (base <= address && limit >= address) {
+ if (base > addr_page_base || limit < addr_page_limit) {
+ sattrs->subpage = true;
+ }
if (sattrs->srvalid) {
/* If we hit in more than one region then we must report
* as Secure, not NS-Callable, with no valid region
@@ -9880,13 +9886,16 @@ static void v8m_security_lookup(CPUARMState *env, uint32_t address,
static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
MMUAccessType access_type, ARMMMUIdx mmu_idx,
hwaddr *phys_ptr, MemTxAttrs *txattrs,
- int *prot, ARMMMUFaultInfo *fi, uint32_t *mregion)
+ int *prot, bool *is_subpage,
+ ARMMMUFaultInfo *fi, uint32_t *mregion)
{
/* Perform a PMSAv8 MPU lookup (without also doing the SAU check
* that a full phys-to-virt translation does).
* mregion is (if not NULL) set to the region number which matched,
* or -1 if no region number is returned (MPU off, address did not
* hit a region, address hit in multiple regions).
+ * We set is_subpage to true if the region hit doesn't cover the
+ * entire TARGET_PAGE the address is within.
*/
ARMCPU *cpu = arm_env_get_cpu(env);
bool is_user = regime_is_user(env, mmu_idx);
@@ -9894,7 +9903,10 @@ static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
int n;
int matchregion = -1;
bool hit = false;
+ uint32_t addr_page_base = address & TARGET_PAGE_MASK;
+ uint32_t addr_page_limit = addr_page_base + (TARGET_PAGE_SIZE - 1);
+ *is_subpage = false;
*phys_ptr = address;
*prot = 0;
if (mregion) {
@@ -9932,6 +9944,10 @@ static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
continue;
}
+ if (base > addr_page_base || limit < addr_page_limit) {
+ *is_subpage = true;
+ }
+
if (hit) {
/* Multiple regions match -- always a failure (unlike
* PMSAv7 where highest-numbered-region wins)
@@ -9943,23 +9959,6 @@ static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
matchregion = n;
hit = true;
-
- if (base & ~TARGET_PAGE_MASK) {
- qemu_log_mask(LOG_UNIMP,
- "MPU_RBAR[%d]: No support for MPU region base"
- "address of 0x%" PRIx32 ". Minimum alignment is "
- "%d\n",
- n, base, TARGET_PAGE_BITS);
- continue;
- }
- if ((limit + 1) & ~TARGET_PAGE_MASK) {
- qemu_log_mask(LOG_UNIMP,
- "MPU_RBAR[%d]: No support for MPU region limit"
- "address of 0x%" PRIx32 ". Minimum alignment is "
- "%d\n",
- n, limit, TARGET_PAGE_BITS);
- continue;
- }
}
}
@@ -9995,6 +9994,18 @@ static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
fi->type = ARMFault_Permission;
fi->level = 1;
+ /*
+ * Core QEMU code can't handle execution from small pages yet, so
+ * don't try it. This means any attempted execution will generate
+ * an MPU exception, rather than eventually causing QEMU to exit in
+ * get_page_addr_code().
+ */
+ if (*is_subpage && (*prot & PAGE_EXEC)) {
+ qemu_log_mask(LOG_UNIMP,
+ "MPU: No support for execution from regions "
+ "smaller than 1K\n");
+ *prot &= ~PAGE_EXEC;
+ }
return !(*prot & (1 << access_type));
}
@@ -10002,10 +10013,13 @@ static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
MMUAccessType access_type, ARMMMUIdx mmu_idx,
hwaddr *phys_ptr, MemTxAttrs *txattrs,
- int *prot, ARMMMUFaultInfo *fi)
+ int *prot, target_ulong *page_size,
+ ARMMMUFaultInfo *fi)
{
uint32_t secure = regime_is_secure(env, mmu_idx);
V8M_SAttributes sattrs = {};
+ bool ret;
+ bool mpu_is_subpage;
if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
v8m_security_lookup(env, address, access_type, mmu_idx, &sattrs);
@@ -10033,6 +10047,7 @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
} else {
fi->type = ARMFault_QEMU_SFault;
}
+ *page_size = sattrs.subpage ? 1 : TARGET_PAGE_SIZE;
*phys_ptr = address;
*prot = 0;
return true;
@@ -10055,6 +10070,7 @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
* for M_FAKE_FSR_SFAULT in arm_v7m_cpu_do_interrupt().
*/
fi->type = ARMFault_QEMU_SFault;
+ *page_size = sattrs.subpage ? 1 : TARGET_PAGE_SIZE;
*phys_ptr = address;
*prot = 0;
return true;
@@ -10062,8 +10078,22 @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
}
}
- return pmsav8_mpu_lookup(env, address, access_type, mmu_idx, phys_ptr,
- txattrs, prot, fi, NULL);
+ ret = pmsav8_mpu_lookup(env, address, access_type, mmu_idx, phys_ptr,
+ txattrs, prot, &mpu_is_subpage, fi, NULL);
+ /*
+ * TODO: this is a temporary hack to ignore the fact that the SAU region
+ * is smaller than a page if this is an executable region. We never
+ * supported small MPU regions, but we did (accidentally) allow small
+ * SAU regions, and if we now made small SAU regions not be executable
+ * then this would break previously working guest code. We can't
+ * remove this until/unless we implement support for execution from
+ * small regions.
+ */
+ if (*prot & PAGE_EXEC) {
+ sattrs.subpage = false;
+ }
+ *page_size = sattrs.subpage || mpu_is_subpage ? 1 : TARGET_PAGE_SIZE;
+ return ret;
}
static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
@@ -10339,7 +10369,7 @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
if (arm_feature(env, ARM_FEATURE_V8)) {
/* PMSAv8 */
ret = get_phys_addr_pmsav8(env, address, access_type, mmu_idx,
- phys_ptr, attrs, prot, fi);
+ phys_ptr, attrs, prot, page_size, fi);
} else if (arm_feature(env, ARM_FEATURE_V7)) {
/* PMSAv7 */
ret = get_phys_addr_pmsav7(env, address, access_type, mmu_idx,
@@ -10757,6 +10787,7 @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
uint32_t mregion;
bool targetpriv;
bool targetsec = env->v7m.secure;
+ bool is_subpage;
/* Work out what the security state and privilege level we're
* interested in is...
@@ -10786,7 +10817,8 @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
if (arm_current_el(env) != 0 || alt) {
/* We can ignore the return value as prot is always set */
pmsav8_mpu_lookup(env, addr, MMU_DATA_LOAD, mmu_idx,
- &phys_addr, &attrs, &prot, &fi, &mregion);
+ &phys_addr, &attrs, &prot, &is_subpage,
+ &fi, &mregion);
if (mregion == -1) {
mrvalid = false;
mregion = 0;
--
2.17.1
* Re: [Qemu-devel] [PATCH 1/3] tcg: Support MMU protection regions smaller than TARGET_PAGE_SIZE
2018-06-20 13:06 ` [Qemu-devel] [PATCH 1/3] tcg: Support MMU protection regions smaller than TARGET_PAGE_SIZE Peter Maydell
@ 2018-06-20 15:46 ` Mark Cave-Ayland
2018-06-20 16:21 ` Peter Maydell
2018-06-30 19:20 ` Max Filippov
1 sibling, 1 reply; 13+ messages in thread
From: Mark Cave-Ayland @ 2018-06-20 15:46 UTC (permalink / raw)
To: Peter Maydell, qemu-arm, qemu-devel
Cc: Paolo Bonzini, Richard Henderson, patches
On 20/06/18 14:06, Peter Maydell wrote:
> Add support for MMU protection regions that are smaller than
> TARGET_PAGE_SIZE. We do this by marking the TLB entry for those
> pages with a flag TLB_RECHECK. This flag causes us to always
> take the slow-path for accesses. In the slow path we can then
> special case them to always call tlb_fill() again, so we have
> the correct information for the exact address being accessed.
>
> This change allows us to handle reading and writing from small
> regions; we cannot deal with execution from the small region.
>
> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
> ---
> accel/tcg/softmmu_template.h | 24 ++++---
> include/exec/cpu-all.h | 5 +-
> accel/tcg/cputlb.c | 131 +++++++++++++++++++++++++++++------
> 3 files changed, 130 insertions(+), 30 deletions(-)
>
> diff --git a/accel/tcg/softmmu_template.h b/accel/tcg/softmmu_template.h
> index 239ea6692b4..c47591c9709 100644
> --- a/accel/tcg/softmmu_template.h
> +++ b/accel/tcg/softmmu_template.h
> @@ -98,10 +98,12 @@
> static inline DATA_TYPE glue(io_read, SUFFIX)(CPUArchState *env,
> size_t mmu_idx, size_t index,
> target_ulong addr,
> - uintptr_t retaddr)
> + uintptr_t retaddr,
> + bool recheck)
> {
> CPUIOTLBEntry *iotlbentry = &env->iotlb[mmu_idx][index];
> - return io_readx(env, iotlbentry, mmu_idx, addr, retaddr, DATA_SIZE);
> + return io_readx(env, iotlbentry, mmu_idx, addr, retaddr, recheck,
> + DATA_SIZE);
> }
> #endif
>
> @@ -138,7 +140,8 @@ WORD_TYPE helper_le_ld_name(CPUArchState *env, target_ulong addr,
>
> /* ??? Note that the io helpers always read data in the target
> byte ordering. We should push the LE/BE request down into io. */
> - res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr);
> + res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr,
> + tlb_addr & TLB_RECHECK);
> res = TGT_LE(res);
> return res;
> }
> @@ -205,7 +208,8 @@ WORD_TYPE helper_be_ld_name(CPUArchState *env, target_ulong addr,
>
> /* ??? Note that the io helpers always read data in the target
> byte ordering. We should push the LE/BE request down into io. */
> - res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr);
> + res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr,
> + tlb_addr & TLB_RECHECK);
> res = TGT_BE(res);
> return res;
> }
> @@ -259,10 +263,12 @@ static inline void glue(io_write, SUFFIX)(CPUArchState *env,
> size_t mmu_idx, size_t index,
> DATA_TYPE val,
> target_ulong addr,
> - uintptr_t retaddr)
> + uintptr_t retaddr,
> + bool recheck)
> {
> CPUIOTLBEntry *iotlbentry = &env->iotlb[mmu_idx][index];
> - return io_writex(env, iotlbentry, mmu_idx, val, addr, retaddr, DATA_SIZE);
> + return io_writex(env, iotlbentry, mmu_idx, val, addr, retaddr,
> + recheck, DATA_SIZE);
> }
>
> void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
> @@ -298,7 +304,8 @@ void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
> /* ??? Note that the io helpers always read data in the target
> byte ordering. We should push the LE/BE request down into io. */
> val = TGT_LE(val);
> - glue(io_write, SUFFIX)(env, mmu_idx, index, val, addr, retaddr);
> + glue(io_write, SUFFIX)(env, mmu_idx, index, val, addr,
> + retaddr, tlb_addr & TLB_RECHECK);
> return;
> }
>
> @@ -375,7 +382,8 @@ void helper_be_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
> /* ??? Note that the io helpers always read data in the target
> byte ordering. We should push the LE/BE request down into io. */
> val = TGT_BE(val);
> - glue(io_write, SUFFIX)(env, mmu_idx, index, val, addr, retaddr);
> + glue(io_write, SUFFIX)(env, mmu_idx, index, val, addr, retaddr,
> + tlb_addr & TLB_RECHECK);
> return;
> }
>
> diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
> index 7fa726b8e36..7338f57062f 100644
> --- a/include/exec/cpu-all.h
> +++ b/include/exec/cpu-all.h
> @@ -330,11 +330,14 @@ CPUArchState *cpu_copy(CPUArchState *env);
> #define TLB_NOTDIRTY (1 << (TARGET_PAGE_BITS - 2))
> /* Set if TLB entry is an IO callback. */
> #define TLB_MMIO (1 << (TARGET_PAGE_BITS - 3))
> +/* Set if TLB entry must have MMU lookup repeated for every access */
> +#define TLB_RECHECK (1 << (TARGET_PAGE_BITS - 4))
>
> /* Use this mask to check interception with an alignment mask
> * in a TCG backend.
> */
> -#define TLB_FLAGS_MASK (TLB_INVALID_MASK | TLB_NOTDIRTY | TLB_MMIO)
> +#define TLB_FLAGS_MASK (TLB_INVALID_MASK | TLB_NOTDIRTY | TLB_MMIO \
> + | TLB_RECHECK)
>
> void dump_exec_info(FILE *f, fprintf_function cpu_fprintf);
> void dump_opcount_info(FILE *f, fprintf_function cpu_fprintf);
> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
> index 0a721bb9c40..d893452295f 100644
> --- a/accel/tcg/cputlb.c
> +++ b/accel/tcg/cputlb.c
> @@ -621,27 +621,42 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
> target_ulong code_address;
> uintptr_t addend;
> CPUTLBEntry *te, *tv, tn;
> - hwaddr iotlb, xlat, sz;
> + hwaddr iotlb, xlat, sz, paddr_page;
> + target_ulong vaddr_page;
> unsigned vidx = env->vtlb_index++ % CPU_VTLB_SIZE;
> int asidx = cpu_asidx_from_attrs(cpu, attrs);
>
> assert_cpu_is_self(cpu);
> - assert(size >= TARGET_PAGE_SIZE);
> - if (size != TARGET_PAGE_SIZE) {
> - tlb_add_large_page(env, vaddr, size);
> - }
>
> - sz = size;
> - section = address_space_translate_for_iotlb(cpu, asidx, paddr, &xlat, &sz,
> - attrs, &prot);
> + if (size < TARGET_PAGE_SIZE) {
> + sz = TARGET_PAGE_SIZE;
> + } else {
> + if (size > TARGET_PAGE_SIZE) {
> + tlb_add_large_page(env, vaddr, size);
> + }
> + sz = size;
> + }
> + vaddr_page = vaddr & TARGET_PAGE_MASK;
> + paddr_page = paddr & TARGET_PAGE_MASK;
> +
> + section = address_space_translate_for_iotlb(cpu, asidx, paddr_page,
> + &xlat, &sz, attrs, &prot);
> assert(sz >= TARGET_PAGE_SIZE);
>
> tlb_debug("vaddr=" TARGET_FMT_lx " paddr=0x" TARGET_FMT_plx
> " prot=%x idx=%d\n",
> vaddr, paddr, prot, mmu_idx);
>
> - address = vaddr;
> - if (!memory_region_is_ram(section->mr) && !memory_region_is_romd(section->mr)) {
> + address = vaddr_page;
> + if (size < TARGET_PAGE_SIZE) {
> + /*
> + * Slow-path the TLB entries; we will repeat the MMU check and TLB
> + * fill on every access.
> + */
> + address |= TLB_RECHECK;
> + }
> + if (!memory_region_is_ram(section->mr) &&
> + !memory_region_is_romd(section->mr)) {
> /* IO memory case */
> address |= TLB_MMIO;
> addend = 0;
> @@ -651,10 +666,10 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
> }
>
> code_address = address;
> - iotlb = memory_region_section_get_iotlb(cpu, section, vaddr, paddr, xlat,
> - prot, &address);
> + iotlb = memory_region_section_get_iotlb(cpu, section, vaddr_page,
> + paddr_page, xlat, prot, &address);
>
> - index = (vaddr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
> + index = (vaddr_page >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
> te = &env->tlb_table[mmu_idx][index];
> /* do not discard the translation in te, evict it into a victim tlb */
> tv = &env->tlb_v_table[mmu_idx][vidx];
> @@ -670,18 +685,18 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
> * TARGET_PAGE_BITS, and either
> * + the ram_addr_t of the page base of the target RAM (if NOTDIRTY or ROM)
> * + the offset within section->mr of the page base (otherwise)
> - * We subtract the vaddr (which is page aligned and thus won't
> + * We subtract the vaddr_page (which is page aligned and thus won't
> * disturb the low bits) to give an offset which can be added to the
> * (non-page-aligned) vaddr of the eventual memory access to get
> * the MemoryRegion offset for the access. Note that the vaddr we
> * subtract here is that of the page base, and not the same as the
> * vaddr we add back in io_readx()/io_writex()/get_page_addr_code().
> */
> - env->iotlb[mmu_idx][index].addr = iotlb - vaddr;
> + env->iotlb[mmu_idx][index].addr = iotlb - vaddr_page;
> env->iotlb[mmu_idx][index].attrs = attrs;
>
> /* Now calculate the new entry */
> - tn.addend = addend - vaddr;
> + tn.addend = addend - vaddr_page;
> if (prot & PAGE_READ) {
> tn.addr_read = address;
> } else {
> @@ -702,7 +717,7 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
> tn.addr_write = address | TLB_MMIO;
> } else if (memory_region_is_ram(section->mr)
> && cpu_physical_memory_is_clean(
> - memory_region_get_ram_addr(section->mr) + xlat)) {
> + memory_region_get_ram_addr(section->mr) + xlat)) {
> tn.addr_write = address | TLB_NOTDIRTY;
> } else {
> tn.addr_write = address;
> @@ -775,7 +790,8 @@ static inline ram_addr_t qemu_ram_addr_from_host_nofail(void *ptr)
>
> static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
> int mmu_idx,
> - target_ulong addr, uintptr_t retaddr, int size)
> + target_ulong addr, uintptr_t retaddr,
> + bool recheck, int size)
> {
> CPUState *cpu = ENV_GET_CPU(env);
> hwaddr mr_offset;
> @@ -785,6 +801,29 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
> bool locked = false;
> MemTxResult r;
>
> + if (recheck) {
> + /*
> + * This is a TLB_RECHECK access, where the MMU protection
> + * covers a smaller range than a target page, and we must
> + * repeat the MMU check here. This tlb_fill() call might
> + * longjump out if this access should cause a guest exception.
> + */
> + int index;
> + target_ulong tlb_addr;
> +
> + tlb_fill(cpu, addr, size, MMU_DATA_LOAD, mmu_idx, retaddr);
> +
> + index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
> + tlb_addr = env->tlb_table[mmu_idx][index].addr_read;
> + if (!(tlb_addr & ~(TARGET_PAGE_MASK | TLB_RECHECK))) {
> + /* RAM access */
> + uintptr_t haddr = addr + env->tlb_table[mmu_idx][index].addend;
> +
> + return ldn_p((void *)haddr, size);
> + }
> + /* Fall through for handling IO accesses */
> + }
> +
> section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
> mr = section->mr;
> mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
> @@ -819,7 +858,7 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
> static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
> int mmu_idx,
> uint64_t val, target_ulong addr,
> - uintptr_t retaddr, int size)
> + uintptr_t retaddr, bool recheck, int size)
> {
> CPUState *cpu = ENV_GET_CPU(env);
> hwaddr mr_offset;
> @@ -828,6 +867,30 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
> bool locked = false;
> MemTxResult r;
>
> + if (recheck) {
> + /*
> + * This is a TLB_RECHECK access, where the MMU protection
> + * covers a smaller range than a target page, and we must
> + * repeat the MMU check here. This tlb_fill() call might
> + * longjump out if this access should cause a guest exception.
> + */
> + int index;
> + target_ulong tlb_addr;
> +
> + tlb_fill(cpu, addr, size, MMU_DATA_STORE, mmu_idx, retaddr);
> +
> + index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
> + tlb_addr = env->tlb_table[mmu_idx][index].addr_write;
> + if (!(tlb_addr & ~(TARGET_PAGE_MASK | TLB_RECHECK))) {
> + /* RAM access */
> + uintptr_t haddr = addr + env->tlb_table[mmu_idx][index].addend;
> +
> + stn_p((void *)haddr, size, val);
> + return;
> + }
> + /* Fall through for handling IO accesses */
> + }
> +
> section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
> mr = section->mr;
> mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
> @@ -911,6 +974,32 @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
> tlb_fill(ENV_GET_CPU(env), addr, 0, MMU_INST_FETCH, mmu_idx, 0);
> }
> }
> +
> + if (unlikely(env->tlb_table[mmu_idx][index].addr_code & TLB_RECHECK)) {
> + /*
> + * This is a TLB_RECHECK access, where the MMU protection
> + * covers a smaller range than a target page, and we must
> + * repeat the MMU check here. This tlb_fill() call might
> + * longjump out if this access should cause a guest exception.
> + */
> + int index;
> + target_ulong tlb_addr;
> +
> + tlb_fill(cpu, addr, 0, MMU_INST_FETCH, mmu_idx, 0);
> +
> + index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
> + tlb_addr = env->tlb_table[mmu_idx][index].addr_code;
> + if (!(tlb_addr & ~(TARGET_PAGE_MASK | TLB_RECHECK))) {
> + /* RAM access. We can't handle this, so for now just stop */
> + cpu_abort(cpu, "Unable to handle guest executing from RAM within "
> + "a small MPU region at 0x" TARGET_FMT_lx, addr);
> + }
> + /*
> + * Fall through to handle IO accesses (which will almost certainly
> + * also result in failure)
> + */
> + }
> +
> iotlbentry = &env->iotlb[mmu_idx][index];
> section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
> mr = section->mr;
> @@ -1019,8 +1108,8 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
> tlb_addr = tlbe->addr_write & ~TLB_INVALID_MASK;
> }
>
> - /* Notice an IO access */
> - if (unlikely(tlb_addr & TLB_MMIO)) {
> + /* Notice an IO access or a needs-MMU-lookup access */
> + if (unlikely(tlb_addr & (TLB_MMIO | TLB_RECHECK))) {
> /* There's really nothing that can be done to
> support this apart from stop-the-world. */
> goto stop_the_world;
This patch is very interesting as forcing the slow path is something
required to implement the sun4u MMU IE (invert endian) bit - for some
background see Richard's email at
https://lists.gnu.org/archive/html/qemu-devel/2017-08/msg02835.html.
Presumably there is nothing here that would prevent the slow path being
used outside of TLB_RECHECK?
ATB,
Mark.
* Re: [Qemu-devel] [PATCH 1/3] tcg: Support MMU protection regions smaller than TARGET_PAGE_SIZE
2018-06-20 15:46 ` Mark Cave-Ayland
@ 2018-06-20 16:21 ` Peter Maydell
0 siblings, 0 replies; 13+ messages in thread
From: Peter Maydell @ 2018-06-20 16:21 UTC (permalink / raw)
To: Mark Cave-Ayland
Cc: qemu-arm, QEMU Developers, Paolo Bonzini, Richard Henderson,
patches@linaro.org
On 20 June 2018 at 16:46, Mark Cave-Ayland
<mark.cave-ayland@ilande.co.uk> wrote:
> On 20/06/18 14:06, Peter Maydell wrote:
>
>> Add support for MMU protection regions that are smaller than
>> TARGET_PAGE_SIZE. We do this by marking the TLB entry for those
>> pages with a flag TLB_RECHECK. This flag causes us to always
>> take the slow-path for accesses. In the slow path we can then
>> special case them to always call tlb_fill() again, so we have
>> the correct information for the exact address being accessed.
>>
>> This change allows us to handle reading and writing from small
>> regions; we cannot deal with execution from the small region.
>>
>> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
> This patch is very interesting as forcing the slow path is something
> required to implement the sun4u MMU IE (invert endian) bit - for some
> background see Richard's email at
> https://lists.gnu.org/archive/html/qemu-devel/2017-08/msg02835.html.
>
> Presumably there is nothing here that would prevent the slow path being used
> outside of TLB_RECHECK?
Nope; we already have various things that force a slowpath.
Essentially all you need to do is ensure that some lowbit in
the tlb addr fields is set, and we're only moderately
constrained in how many of those we have. You might or might
not be able to use TLB_RECHECK, I don't know.
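For example (a minimal hypothetical sketch only -- assuming the
TARGET_PAGE_BITS - 5 bit is still free on all targets, and
TLB_FORCE_SLOW is just a made-up name), cpu-all.h could grow another
flag in the same way this series adds TLB_RECHECK:

  /* Set if TLB entry must always take the slow path (hypothetical) */
  #define TLB_FORCE_SLOW (1 << (TARGET_PAGE_BITS - 5))

  #define TLB_FLAGS_MASK (TLB_INVALID_MASK | TLB_NOTDIRTY | TLB_MMIO \
                          | TLB_RECHECK | TLB_FORCE_SLOW)

Any access whose TLB entry has a flag bit like that set fails the
fast-path comparison and falls through to the load/store helpers,
which is where the invert-endian handling could then live.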
thanks
-- PMM
* Re: [Qemu-devel] [PATCH 0/3] Support M-profile MPU regions smaller than 1K
2018-06-20 13:06 [Qemu-devel] [PATCH 0/3] Support M-profile MPU regions smaller than 1K Peter Maydell
` (2 preceding siblings ...)
2018-06-20 13:06 ` [Qemu-devel] [PATCH 3/3] target/arm: Handle small regions in get_phys_addr_pmsav8() Peter Maydell
@ 2018-06-20 19:22 ` Richard Henderson
3 siblings, 0 replies; 13+ messages in thread
From: Richard Henderson @ 2018-06-20 19:22 UTC (permalink / raw)
To: Peter Maydell, qemu-arm, qemu-devel; +Cc: patches, Paolo Bonzini
On 06/20/2018 03:06 AM, Peter Maydell wrote:
> The Arm M-profile MPU allows the guest to specify access
> permissions at a very fine granularity (down to a 32-byte
> alignment for region start and end addresses). Currently
> we insist that regions are page-aligned because the core
> TCG code can't handle anything else.
>
> This patchset relaxes that restriction, so that we can handle
> small MPU regions for reading and writing (but not yet for
> execution). It does that by marking the TLB entry for any
> page which includes a small region with a flag TLB_RECHECK.
> This flag causes us to always take the slow-path for accesses.
> In the slow path we can then special case them to always call
> tlb_fill() again, so we have the correct information for the
> exact address being accessed.
>
> Patch 1 adds support to the accel/tcg code. Patch 2 then
> enables using it for the PMSAv7 (v7M) MPU, and patch 3
> does the same for the PMSAv8 (v8M) MPU and SAU.
> Because we don't yet support execution from small regions,
> the PMSA code has some corner cases where it retains the
> previous behaviour so we don't break previously-working
> guests:
> * if the MPU region is small, we don't mark it PROT_EXEC
> even if the guest asked for it (so execution will cause
> an MPU exception)
> * we ignore the fact that the SAU region might be smaller
> than a page
>
> (Unfortunately the old code *intended* to make small-region
> accesses non-executable but due to bugs didn't actually
> succeed in doing that, so this might possibly cause some
> previously working-by-accident code to break.)
>
> I would ideally in future like to add execution support, but
> this is somewhat tricky. My rough sketch for it looks like:
> * get_page_addr_code() should return -1 for "not actually
> a full page of RAM" (deleting all the current tricky code
> for trying to handle it being a memory region or unmapped)
> * its callsites in the TB hashtable lookup code should handle
> -1 by returning "no TB found"
> * the weird call to get_page_addr_code() in the xtensa itlb_hit_test
> helper should be replaced by a call to tlb_fill() or some
> xtensa-internal function (I think all it is trying to do is
> cause the exceptions for MMU faults)
> * tb_gen_code() should in this case generate a one-instruction
> TB and tell its caller not to cache it
> * the translate.c code for targets that need this probably needs
> fixing up to make sure it can handle the case of "the load
> code byte/word/etc functions might return failure or cause
> an exception"
>
> This would also have the advantage that it naturally allows
> you to execute (slowly) from any MMIO region, and we could drop
> the broken request-mmio-pointer machinery.
>
> In any case that is too much for 3.0, I think; I'd like to get this
> R/W small-region code into 3.0, though.
>
> thanks
> -- PMM
>
> Peter Maydell (3):
> tcg: Support MMU protection regions smaller than TARGET_PAGE_SIZE
> target/arm: Set page (region) size in get_phys_addr_pmsav7()
> target/arm: Handle small regions in get_phys_addr_pmsav8()
Whole series:
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
I assume you'll take this all via target-arm.next?
r~
* Re: [Qemu-devel] [PATCH 1/3] tcg: Support MMU protection regions smaller than TARGET_PAGE_SIZE
2018-06-20 13:06 ` [Qemu-devel] [PATCH 1/3] tcg: Support MMU protection regions smaller than TARGET_PAGE_SIZE Peter Maydell
2018-06-20 15:46 ` Mark Cave-Ayland
@ 2018-06-30 19:20 ` Max Filippov
2018-06-30 19:42 ` Max Filippov
1 sibling, 1 reply; 13+ messages in thread
From: Max Filippov @ 2018-06-30 19:20 UTC (permalink / raw)
To: Peter Maydell
Cc: qemu-arm, qemu-devel, Paolo Bonzini, Richard Henderson,
Patch Tracking
Hi Peter,
On Wed, Jun 20, 2018 at 6:06 AM, Peter Maydell <peter.maydell@linaro.org> wrote:
> Add support for MMU protection regions that are smaller than
> TARGET_PAGE_SIZE. We do this by marking the TLB entry for those
> pages with a flag TLB_RECHECK. This flag causes us to always
> take the slow-path for accesses. In the slow path we can then
> special case them to always call tlb_fill() again, so we have
> the correct information for the exact address being accessed.
>
> This change allows us to handle reading and writing from small
> regions; we cannot deal with execution from the small region.
>
> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
> ---
> accel/tcg/softmmu_template.h | 24 ++++---
> include/exec/cpu-all.h | 5 +-
> accel/tcg/cputlb.c | 131 +++++++++++++++++++++++++++++------
> 3 files changed, 130 insertions(+), 30 deletions(-)
I'm observing the following failure with xtensa tests:
(qemu) qemu: fatal: Unable to handle guest executing from RAM within a
small MPU region at 0xd0000804
Bisection points to this patch. Any idea what happened?
--
Thanks.
-- Max
* Re: [Qemu-devel] [PATCH 1/3] tcg: Support MMU protection regions smaller than TARGET_PAGE_SIZE
2018-06-30 19:20 ` Max Filippov
@ 2018-06-30 19:42 ` Max Filippov
2018-06-30 19:50 ` Max Filippov
2018-06-30 20:08 ` Peter Maydell
0 siblings, 2 replies; 13+ messages in thread
From: Max Filippov @ 2018-06-30 19:42 UTC (permalink / raw)
To: Peter Maydell
Cc: qemu-arm, qemu-devel, Paolo Bonzini, Richard Henderson,
Patch Tracking, Laurent Vivier
On Sat, Jun 30, 2018 at 12:20 PM, Max Filippov <jcmvbkbc@gmail.com> wrote:
> Hi Peter,
>
> On Wed, Jun 20, 2018 at 6:06 AM, Peter Maydell <peter.maydell@linaro.org> wrote:
>> Add support for MMU protection regions that are smaller than
>> TARGET_PAGE_SIZE. We do this by marking the TLB entry for those
>> pages with a flag TLB_RECHECK. This flag causes us to always
>> take the slow-path for accesses. In the slow path we can then
>> special case them to always call tlb_fill() again, so we have
>> the correct information for the exact address being accessed.
>>
>> This change allows us to handle reading and writing from small
>> regions; we cannot deal with execution from the small region.
>>
>> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
>> ---
>> accel/tcg/softmmu_template.h | 24 ++++---
>> include/exec/cpu-all.h | 5 +-
>> accel/tcg/cputlb.c | 131 +++++++++++++++++++++++++++++------
>> 3 files changed, 130 insertions(+), 30 deletions(-)
>
> I'm observing the following failure with xtensa tests:
>
> (qemu) qemu: fatal: Unable to handle guest executing from RAM within a
> small MPU region at 0xd0000804
>
> Bisection points to this patch. Any idea what happened?
Ok, I think I've found the issue: the following check in
get_page_addr_code() does not work correctly when -1 is in the
addr_code in the QEMU TLB:
if (unlikely(env->tlb_table[mmu_idx][index].addr_code & TLB_RECHECK))
tlb_set_page_with_attrs() sets addr_code to -1 in the TLB entry
when the translation is not executable, and since -1 has every bit
set, this test matches even though the entry is simply not valid
for execution.
--
Thanks.
-- Max
* Re: [Qemu-devel] [PATCH 1/3] tcg: Support MMU protection regions smaller than TARGET_PAGE_SIZE
2018-06-30 19:42 ` Max Filippov
@ 2018-06-30 19:50 ` Max Filippov
2018-06-30 20:08 ` Peter Maydell
1 sibling, 0 replies; 13+ messages in thread
From: Max Filippov @ 2018-06-30 19:50 UTC (permalink / raw)
To: Peter Maydell
Cc: qemu-arm, qemu-devel, Paolo Bonzini, Richard Henderson,
Patch Tracking, Laurent Vivier
On Sat, Jun 30, 2018 at 12:42 PM, Max Filippov <jcmvbkbc@gmail.com> wrote:
> On Sat, Jun 30, 2018 at 12:20 PM, Max Filippov <jcmvbkbc@gmail.com> wrote:
>> Hi Peter,
>>
>> On Wed, Jun 20, 2018 at 6:06 AM, Peter Maydell <peter.maydell@linaro.org> wrote:
>>> Add support for MMU protection regions that are smaller than
>>> TARGET_PAGE_SIZE. We do this by marking the TLB entry for those
>>> pages with a flag TLB_RECHECK. This flag causes us to always
>>> take the slow-path for accesses. In the slow path we can then
>>> special case them to always call tlb_fill() again, so we have
>>> the correct information for the exact address being accessed.
>>>
>>> This change allows us to handle reading and writing from small
>>> regions; we cannot deal with execution from the small region.
>>>
>>> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
>>> ---
>>> accel/tcg/softmmu_template.h | 24 ++++---
>>> include/exec/cpu-all.h | 5 +-
>>> accel/tcg/cputlb.c | 131 +++++++++++++++++++++++++++++------
>>> 3 files changed, 130 insertions(+), 30 deletions(-)
>>
>> I'm observing the following failure with xtensa tests:
>>
>> (qemu) qemu: fatal: Unable to handle guest executing from RAM within a
>> small MPU region at 0xd0000804
>>
>> Bisection points to this patch. Any idea what happened?
>
> Ok, I think I've found the issue: the following check in the
> get_page_addr_code does not work correctly when -1 is in the
> addr_code in the QEMU TLB:
>
> if (unlikely(env->tlb_table[mmu_idx][index].addr_code & TLB_RECHECK))
>
> tlb_set_page_with_attrs sets addr_code to -1 in the TLB entry
> when the translation is not executable.
Looks like it can be fixed with the following:
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index eebe97dabb75..633cffe9ed74 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -692,16 +692,16 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
if (prot & PAGE_READ) {
tn.addr_read = address;
} else {
- tn.addr_read = -1;
+ tn.addr_read = TLB_INVALID_MASK;
}
if (prot & PAGE_EXEC) {
tn.addr_code = code_address;
} else {
- tn.addr_code = -1;
+ tn.addr_code = TLB_INVALID_MASK;
}
- tn.addr_write = -1;
+ tn.addr_write = TLB_INVALID_MASK;
if (prot & PAGE_WRITE) {
if ((memory_region_is_ram(section->mr) && section->readonly)
|| memory_region_is_romd(section->mr)) {
--
Thanks.
-- Max
* Re: [Qemu-devel] [PATCH 1/3] tcg: Support MMU protection regions smaller than TARGET_PAGE_SIZE
2018-06-30 19:42 ` Max Filippov
2018-06-30 19:50 ` Max Filippov
@ 2018-06-30 20:08 ` Peter Maydell
2018-06-30 20:10 ` Peter Maydell
1 sibling, 1 reply; 13+ messages in thread
From: Peter Maydell @ 2018-06-30 20:08 UTC (permalink / raw)
To: Max Filippov
Cc: qemu-arm, qemu-devel, Paolo Bonzini, Richard Henderson,
Patch Tracking, Laurent Vivier
On 30 June 2018 at 20:42, Max Filippov <jcmvbkbc@gmail.com> wrote:
> On Sat, Jun 30, 2018 at 12:20 PM, Max Filippov <jcmvbkbc@gmail.com> wrote:
>> I'm observing the following failure with xtensa tests:
>>
>> (qemu) qemu: fatal: Unable to handle guest executing from RAM within a
>> small MPU region at 0xd0000804
>>
>> Bisection points to this patch. Any idea what happened?
>
> Ok, I think I've found the issue: the following check in the
> get_page_addr_code does not work correctly when -1 is in the
> addr_code in the QEMU TLB:
>
> if (unlikely(env->tlb_table[mmu_idx][index].addr_code & TLB_RECHECK))
Yes, Laurent ran into that and I sent a fix out on Friday:
http://patchwork.ozlabs.org/project/qemu-devel/list/?series=52914
-- could you give that patchset a try?
thanks
-- PMM
* Re: [Qemu-devel] [PATCH 1/3] tcg: Support MMU protection regions smaller than TARGET_PAGE_SIZE
2018-06-30 20:08 ` Peter Maydell
@ 2018-06-30 20:10 ` Peter Maydell
2018-06-30 20:26 ` Max Filippov
0 siblings, 1 reply; 13+ messages in thread
From: Peter Maydell @ 2018-06-30 20:10 UTC (permalink / raw)
To: Max Filippov
Cc: qemu-arm, qemu-devel, Paolo Bonzini, Richard Henderson,
Patch Tracking, Laurent Vivier
On 30 June 2018 at 21:08, Peter Maydell <peter.maydell@linaro.org> wrote:
> On 30 June 2018 at 20:42, Max Filippov <jcmvbkbc@gmail.com> wrote:
>> On Sat, Jun 30, 2018 at 12:20 PM, Max Filippov <jcmvbkbc@gmail.com> wrote:
>>> I'm observing the following failure with xtensa tests:
>>>
>>> (qemu) qemu: fatal: Unable to handle guest executing from RAM within a
>>> small MPU region at 0xd0000804
>>>
>>> Bisection points to this patch. Any idea what happened?
>>
>> Ok, I think I've found the issue: the following check in the
>> get_page_addr_code does not work correctly when -1 is in the
>> addr_code in the QEMU TLB:
>>
>> if (unlikely(env->tlb_table[mmu_idx][index].addr_code & TLB_RECHECK))
>
> Yes, Laurent ran into that and I sent a fix out on Friday:
> http://patchwork.ozlabs.org/project/qemu-devel/list/?series=52914
...oh, no, wait, you've hit the other bug I sent a fix for:
http://patchwork.ozlabs.org/patch/937029/
thanks
-- PMM
* Re: [Qemu-devel] [PATCH 1/3] tcg: Support MMU protection regions smaller than TARGET_PAGE_SIZE
2018-06-30 20:10 ` Peter Maydell
@ 2018-06-30 20:26 ` Max Filippov
0 siblings, 0 replies; 13+ messages in thread
From: Max Filippov @ 2018-06-30 20:26 UTC (permalink / raw)
To: Peter Maydell
Cc: qemu-arm, qemu-devel, Paolo Bonzini, Richard Henderson,
Patch Tracking, Laurent Vivier
On Sat, Jun 30, 2018 at 1:10 PM, Peter Maydell <peter.maydell@linaro.org> wrote:
> On 30 June 2018 at 21:08, Peter Maydell <peter.maydell@linaro.org> wrote:
>> On 30 June 2018 at 20:42, Max Filippov <jcmvbkbc@gmail.com> wrote:
>>> On Sat, Jun 30, 2018 at 12:20 PM, Max Filippov <jcmvbkbc@gmail.com> wrote:
>>>> I'm observing the following failure with xtensa tests:
>>>>
>>>> (qemu) qemu: fatal: Unable to handle guest executing from RAM within a
>>>> small MPU region at 0xd0000804
>>>>
>>>> Bisection points to this patch. Any idea what happened?
>>>
>>> Ok, I think I've found the issue: the following check in the
>>> get_page_addr_code does not work correctly when -1 is in the
>>> addr_code in the QEMU TLB:
>>>
>>> if (unlikely(env->tlb_table[mmu_idx][index].addr_code & TLB_RECHECK))
>>
>> Yes, Laurent ran into that and I sent a fix out on Friday:
>> http://patchwork.ozlabs.org/project/qemu-devel/list/?series=52914
>
> ...oh, no, wait, you've hit the other bug I sent a fix for:
> http://patchwork.ozlabs.org/patch/937029/
Thanks, that last one fixes it for me.
--
Thanks.
-- Max