* [Qemu-devel] [PATCH v1 0/2] s390x/tcg: LAP support using immediate TLB invalidation
@ 2017-10-16 20:23 David Hildenbrand
2017-10-16 20:23 ` [Qemu-devel] [PATCH v1 1/2] accel/tcg: allow to invalidate a write TLB entry immediately David Hildenbrand
` (3 more replies)
0 siblings, 4 replies; 12+ messages in thread
From: David Hildenbrand @ 2017-10-16 20:23 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-s390x, Thomas Huth, Richard Henderson, Alexander Graf,
cohuck, Peter Maydell
Details about Low-Address Protection can be found in the descriptions of
patches 1 and 2. It is basically subpage protection of the first two
pages of every address space (for which it is enabled).
We can achieve this by directly invalidating the TLB entry and
therefore forcing every write access to these two pages into the slow
path.
With this patch, I can boot Linux just fine (which uses LAP). This also
makes all related kvm-unit-tests that we have pass.
RFC -> v1:
- fix LAP range check (Thomas)
- SIGP fix got picked up
Based on: https://github.com/cohuck/qemu.git s390-next
Available on: https://github.com/dhildenb/qemu.git s390x_lap
David Hildenbrand (2):
accel/tcg: allow to invalidate a write TLB entry immediately
s390x/tcg: low-address protection support
accel/tcg/cputlb.c | 5 ++-
accel/tcg/softmmu_template.h | 4 +-
include/exec/cpu-all.h | 3 ++
target/s390x/excp_helper.c | 3 +-
target/s390x/mem_helper.c | 8 ----
target/s390x/mmu_helper.c | 94 +++++++++++++++++++++++++++-----------------
6 files changed, 69 insertions(+), 48 deletions(-)
--
2.13.5
^ permalink raw reply [flat|nested] 12+ messages in thread
* [Qemu-devel] [PATCH v1 1/2] accel/tcg: allow to invalidate a write TLB entry immediately
2017-10-16 20:23 [Qemu-devel] [PATCH v1 0/2] s390x/tcg: LAP support using immediate TLB invalidation David Hildenbrand
@ 2017-10-16 20:23 ` David Hildenbrand
2017-10-16 20:23 ` [Qemu-devel] [PATCH v1 2/2] s390x/tcg: low-address protection support David Hildenbrand
` (2 subsequent siblings)
3 siblings, 0 replies; 12+ messages in thread
From: David Hildenbrand @ 2017-10-16 20:23 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-s390x, Thomas Huth, Richard Henderson, Alexander Graf,
cohuck, Peter Maydell, David Hildenbrand
Background: s390x implements Low-Address Protection (LAP). If LAP is
enabled, writing to effective addresses (before any translation)
0-511 and 4096-4607 triggers a protection exception.
So we have subpage protection on the first two pages of every address
space (where the lowcore - the CPU-private data - resides).
By immediately invalidating the write entry but allowing the caller to
continue, we force every write access to these first two pages into
the slow path. We will then get a TLB fault with the specific accessed
address and can evaluate whether protection applies or not.
We have to make sure to ignore the invalid bit if tlb_fill() succeeds.
Signed-off-by: David Hildenbrand <david@redhat.com>
---
accel/tcg/cputlb.c | 5 ++++-
accel/tcg/softmmu_template.h | 4 ++--
include/exec/cpu-all.h | 3 +++
3 files changed, 9 insertions(+), 3 deletions(-)
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 5b1ef1442c..a23919c3a8 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -694,6 +694,9 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
} else {
tn.addr_write = address;
}
+ if (prot & PAGE_WRITE_INV) {
+ tn.addr_write |= TLB_INVALID_MASK;
+ }
}
/* Pairs with flag setting in tlb_reset_dirty_range */
@@ -978,7 +981,7 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
if (!VICTIM_TLB_HIT(addr_write, addr)) {
tlb_fill(ENV_GET_CPU(env), addr, MMU_DATA_STORE, mmu_idx, retaddr);
}
- tlb_addr = tlbe->addr_write;
+ tlb_addr = tlbe->addr_write & ~TLB_INVALID_MASK;
}
/* Check notdirty */
diff --git a/accel/tcg/softmmu_template.h b/accel/tcg/softmmu_template.h
index d7563292a5..3fc5144316 100644
--- a/accel/tcg/softmmu_template.h
+++ b/accel/tcg/softmmu_template.h
@@ -285,7 +285,7 @@ void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
if (!VICTIM_TLB_HIT(addr_write, addr)) {
tlb_fill(ENV_GET_CPU(env), addr, MMU_DATA_STORE, mmu_idx, retaddr);
}
- tlb_addr = env->tlb_table[mmu_idx][index].addr_write;
+ tlb_addr = env->tlb_table[mmu_idx][index].addr_write & ~TLB_INVALID_MASK;
}
/* Handle an IO access. */
@@ -361,7 +361,7 @@ void helper_be_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
if (!VICTIM_TLB_HIT(addr_write, addr)) {
tlb_fill(ENV_GET_CPU(env), addr, MMU_DATA_STORE, mmu_idx, retaddr);
}
- tlb_addr = env->tlb_table[mmu_idx][index].addr_write;
+ tlb_addr = env->tlb_table[mmu_idx][index].addr_write & ~TLB_INVALID_MASK;
}
/* Handle an IO access. */
diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
index 778031c3d7..0b141683f0 100644
--- a/include/exec/cpu-all.h
+++ b/include/exec/cpu-all.h
@@ -245,6 +245,9 @@ extern intptr_t qemu_host_page_mask;
/* original state of the write flag (used when tracking self-modifying
code */
#define PAGE_WRITE_ORG 0x0010
+/* Invalidate the TLB entry immediately, helpful for s390x
+ * Low-Address-Protection. Used with PAGE_WRITE in tlb_set_page_with_attrs() */
+#define PAGE_WRITE_INV 0x0040
#if defined(CONFIG_BSD) && defined(CONFIG_USER_ONLY)
/* FIXME: Code that sets/uses this is broken and needs to go away. */
#define PAGE_RESERVED 0x0020
--
2.13.5
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [Qemu-devel] [PATCH v1 2/2] s390x/tcg: low-address protection support
2017-10-16 20:23 [Qemu-devel] [PATCH v1 0/2] s390x/tcg: LAP support using immediate TLB invalidation David Hildenbrand
2017-10-16 20:23 ` [Qemu-devel] [PATCH v1 1/2] accel/tcg: allow to invalidate a write TLB entry immediately David Hildenbrand
@ 2017-10-16 20:23 ` David Hildenbrand
2017-10-18 18:21 ` Thomas Huth
2017-10-17 8:47 ` [Qemu-devel] [PATCH v1 0/2] s390x/tcg: LAP support using immediate TLB invalidation Cornelia Huck
2017-10-17 12:17 ` Cornelia Huck
3 siblings, 1 reply; 12+ messages in thread
From: David Hildenbrand @ 2017-10-16 20:23 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-s390x, Thomas Huth, Richard Henderson, Alexander Graf,
cohuck, Peter Maydell, David Hildenbrand
This is a neat way to implement low-address protection, whereby
only the first 512 bytes of the first two pages (each 4096 bytes) of
every address space are protected.
Store a TEC of 0 for the access exception; this is what is defined by
Enhanced Suppression on Protection in case of a low-address protection
event (bit 61 set to 0, rest undefined).
We have to make sure to pass the access address, not the masked page
address, into mmu_translate*().
Drop the check from testblock so we can properly test this via
kvm-unit-tests.
This will check every access going through one of the MMUs.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
target/s390x/excp_helper.c | 3 +-
target/s390x/mem_helper.c | 8 ----
target/s390x/mmu_helper.c | 94 +++++++++++++++++++++++++++++-----------------
3 files changed, 60 insertions(+), 45 deletions(-)
diff --git a/target/s390x/excp_helper.c b/target/s390x/excp_helper.c
index cff308a18d..e04b670663 100644
--- a/target/s390x/excp_helper.c
+++ b/target/s390x/excp_helper.c
@@ -95,7 +95,6 @@ int s390_cpu_handle_mmu_fault(CPUState *cs, vaddr orig_vaddr,
DPRINTF("%s: address 0x%" VADDR_PRIx " rw %d mmu_idx %d\n",
__func__, orig_vaddr, rw, mmu_idx);
- orig_vaddr &= TARGET_PAGE_MASK;
vaddr = orig_vaddr;
if (mmu_idx < MMU_REAL_IDX) {
@@ -127,7 +126,7 @@ int s390_cpu_handle_mmu_fault(CPUState *cs, vaddr orig_vaddr,
qemu_log_mask(CPU_LOG_MMU, "%s: set tlb %" PRIx64 " -> %" PRIx64 " (%x)\n",
__func__, (uint64_t)vaddr, (uint64_t)raddr, prot);
- tlb_set_page(cs, orig_vaddr, raddr, prot,
+ tlb_set_page(cs, orig_vaddr & TARGET_PAGE_MASK, raddr, prot,
mmu_idx, TARGET_PAGE_SIZE);
return 0;
diff --git a/target/s390x/mem_helper.c b/target/s390x/mem_helper.c
index bbbe1c62b3..69a16867d4 100644
--- a/target/s390x/mem_helper.c
+++ b/target/s390x/mem_helper.c
@@ -1687,18 +1687,10 @@ void HELPER(stctl)(CPUS390XState *env, uint32_t r1, uint64_t a2, uint32_t r3)
uint32_t HELPER(testblock)(CPUS390XState *env, uint64_t real_addr)
{
uintptr_t ra = GETPC();
- CPUState *cs = CPU(s390_env_get_cpu(env));
int i;
real_addr = wrap_address(env, real_addr) & TARGET_PAGE_MASK;
- /* Check low-address protection */
- if ((env->cregs[0] & CR0_LOWPROT) && real_addr < 0x2000) {
- cpu_restore_state(cs, ra);
- program_interrupt(env, PGM_PROTECTION, 4);
- return 1;
- }
-
for (i = 0; i < TARGET_PAGE_SIZE; i += 8) {
cpu_stq_real_ra(env, real_addr + i, 0, ra);
}
diff --git a/target/s390x/mmu_helper.c b/target/s390x/mmu_helper.c
index 9daa0fd8e2..9806685bee 100644
--- a/target/s390x/mmu_helper.c
+++ b/target/s390x/mmu_helper.c
@@ -106,6 +106,35 @@ static void trigger_page_fault(CPUS390XState *env, target_ulong vaddr,
trigger_access_exception(env, type, ilen, tec);
}
+/* check whether the address would be protected by Low-Address Protection */
+static bool is_low_address(uint64_t addr)
+{
+ return addr < 512 || (addr >= 4096 && addr <= 4607);
+}
+
+/* check whether Low-Address Protection is enabled for mmu_translate() */
+static bool lowprot_enabled(const CPUS390XState *env, uint64_t asc)
+{
+ if (!(env->cregs[0] & CR0_LOWPROT)) {
+ return false;
+ }
+ if (!(env->psw.mask & PSW_MASK_DAT)) {
+ return true;
+ }
+
+ /* Check the private-space control bit */
+ switch (asc) {
+ case PSW_ASC_PRIMARY:
+ return !(env->cregs[1] & _ASCE_PRIVATE_SPACE);
+ case PSW_ASC_SECONDARY:
+ return !(env->cregs[7] & _ASCE_PRIVATE_SPACE);
+ case PSW_ASC_HOME:
+ return !(env->cregs[13] & _ASCE_PRIVATE_SPACE);
+ default:
+ g_assert_not_reached();
+ }
+}
+
/**
* Translate real address to absolute (= physical)
* address by taking care of the prefix mapping.
@@ -323,6 +352,24 @@ int mmu_translate(CPUS390XState *env, target_ulong vaddr, int rw, uint64_t asc,
}
*flags = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
+ if (is_low_address(vaddr & TARGET_PAGE_MASK) && lowprot_enabled(env, asc)) {
+ /*
+ * If any part of this page is currently protected, make sure the
+ * TLB entry will not be reused.
+ *
+ * As the protected range is always the first 512 bytes of the
+ * two first pages, we are able to catch all writes to these areas
+ * just by looking at the start address (triggering the tlb miss).
+ */
+ *flags |= PAGE_WRITE_INV;
+ if (is_low_address(vaddr) && rw == MMU_DATA_STORE) {
+ if (exc) {
+ trigger_access_exception(env, PGM_PROTECTION, ILEN_AUTO, 0);
+ }
+ return -EACCES;
+ }
+ }
+
vaddr &= TARGET_PAGE_MASK;
if (!(env->psw.mask & PSW_MASK_DAT)) {
@@ -392,50 +439,17 @@ int mmu_translate(CPUS390XState *env, target_ulong vaddr, int rw, uint64_t asc,
}
/**
- * lowprot_enabled: Check whether low-address protection is enabled
- */
-static bool lowprot_enabled(const CPUS390XState *env)
-{
- if (!(env->cregs[0] & CR0_LOWPROT)) {
- return false;
- }
- if (!(env->psw.mask & PSW_MASK_DAT)) {
- return true;
- }
-
- /* Check the private-space control bit */
- switch (env->psw.mask & PSW_MASK_ASC) {
- case PSW_ASC_PRIMARY:
- return !(env->cregs[1] & _ASCE_PRIVATE_SPACE);
- case PSW_ASC_SECONDARY:
- return !(env->cregs[7] & _ASCE_PRIVATE_SPACE);
- case PSW_ASC_HOME:
- return !(env->cregs[13] & _ASCE_PRIVATE_SPACE);
- default:
- /* We don't support access register mode */
- error_report("unsupported addressing mode");
- exit(1);
- }
-}
-
-/**
* translate_pages: Translate a set of consecutive logical page addresses
* to absolute addresses
*/
static int translate_pages(S390CPU *cpu, vaddr addr, int nr_pages,
target_ulong *pages, bool is_write)
{
- bool lowprot = is_write && lowprot_enabled(&cpu->env);
uint64_t asc = cpu->env.psw.mask & PSW_MASK_ASC;
CPUS390XState *env = &cpu->env;
int ret, i, pflags;
for (i = 0; i < nr_pages; i++) {
- /* Low-address protection? */
- if (lowprot && (addr < 512 || (addr >= 4096 && addr < 4096 + 512))) {
- trigger_access_exception(env, PGM_PROTECTION, ILEN_AUTO, 0);
- return -EACCES;
- }
ret = mmu_translate(env, addr, is_write, asc, &pages[i], &pflags, true);
if (ret) {
return ret;
@@ -509,9 +523,19 @@ int s390_cpu_virt_mem_rw(S390CPU *cpu, vaddr laddr, uint8_t ar, void *hostbuf,
int mmu_translate_real(CPUS390XState *env, target_ulong raddr, int rw,
target_ulong *addr, int *flags)
{
- /* TODO: low address protection once we flush the tlb on cr changes */
+ const bool lowprot_enabled = env->cregs[0] & CR0_LOWPROT;
+
*flags = PAGE_READ | PAGE_WRITE;
- *addr = mmu_real2abs(env, raddr);
+ if (is_low_address(raddr & TARGET_PAGE_MASK) && lowprot_enabled) {
+ /* see comment in mmu_translate() how this works */
+ *flags |= PAGE_WRITE_INV;
+ if (is_low_address(raddr) && rw == MMU_DATA_STORE) {
+ trigger_access_exception(env, PGM_PROTECTION, ILEN_AUTO, 0);
+ return -EACCES;
+ }
+ }
+
+ *addr = mmu_real2abs(env, raddr & TARGET_PAGE_MASK);
/* TODO: storage key handling */
return 0;
--
2.13.5
^ permalink raw reply related [flat|nested] 12+ messages in thread
* Re: [Qemu-devel] [PATCH v1 0/2] s390x/tcg: LAP support using immediate TLB invalidation
2017-10-16 20:23 [Qemu-devel] [PATCH v1 0/2] s390x/tcg: LAP support using immediate TLB invalidation David Hildenbrand
2017-10-16 20:23 ` [Qemu-devel] [PATCH v1 1/2] accel/tcg: allow to invalidate a write TLB entry immediately David Hildenbrand
2017-10-16 20:23 ` [Qemu-devel] [PATCH v1 2/2] s390x/tcg: low-address protection support David Hildenbrand
@ 2017-10-17 8:47 ` Cornelia Huck
2017-10-17 9:22 ` David Hildenbrand
2017-10-17 12:17 ` Cornelia Huck
3 siblings, 1 reply; 12+ messages in thread
From: Cornelia Huck @ 2017-10-17 8:47 UTC (permalink / raw)
To: David Hildenbrand
Cc: qemu-devel, qemu-s390x, Thomas Huth, Richard Henderson,
Alexander Graf, Peter Maydell
On Mon, 16 Oct 2017 22:23:56 +0200
David Hildenbrand <david@redhat.com> wrote:
> Details about Low-Address Protection can be found in description of
> patch 1 and 2. It is basically a subpage protection of the first two
> pages of every address space (for which it is enabled).
>
> We can achieve this by simply directly invalidating the TLB entry and
> therefore forcing every write access to these two pages into the slow
> path.
>
> With this patch, I can boot Linux just fine (which uses LAP). This also
> makes all related kvm-unit-tests that we have pass.
Tested with a kernel based on the s390/features branch (4.14-rc2 + s390
patches) and the initrd from the Debian installer; udevd got shot down
by the OOM killer. That happened only once, so it was probably an
unrelated fluke, but that combination worked well before.
[More config data + logs available on request.]
>
>
> RFC -> v1:
> - fix LAP range check (Thomas)
> - SIGP fix got picked up
>
> Based on: https://github.com/cohuck/qemu.git s390-next
> Available on: https://github.com/dhildenb/qemu.git s390x_lap
>
>
> David Hildenbrand (2):
> accel/tcg: allow to invalidate a write TLB entry immediately
> s390x/tcg: low-address protection support
>
> accel/tcg/cputlb.c | 5 ++-
> accel/tcg/softmmu_template.h | 4 +-
> include/exec/cpu-all.h | 3 ++
> target/s390x/excp_helper.c | 3 +-
> target/s390x/mem_helper.c | 8 ----
> target/s390x/mmu_helper.c | 94 +++++++++++++++++++++++++++-----------------
> 6 files changed, 69 insertions(+), 48 deletions(-)
>
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Qemu-devel] [PATCH v1 0/2] s390x/tcg: LAP support using immediate TLB invalidation
2017-10-17 8:47 ` [Qemu-devel] [PATCH v1 0/2] s390x/tcg: LAP support using immediate TLB invalidation Cornelia Huck
@ 2017-10-17 9:22 ` David Hildenbrand
2017-10-17 9:35 ` Cornelia Huck
0 siblings, 1 reply; 12+ messages in thread
From: David Hildenbrand @ 2017-10-17 9:22 UTC (permalink / raw)
To: Cornelia Huck
Cc: qemu-devel, qemu-s390x, Thomas Huth, Richard Henderson,
Alexander Graf, Peter Maydell
On 17.10.2017 10:47, Cornelia Huck wrote:
> On Mon, 16 Oct 2017 22:23:56 +0200
> David Hildenbrand <david@redhat.com> wrote:
>
>> Details about Low-Address Protection can be found in description of
>> patch 1 and 2. It is basically a subpage protection of the first two
>> pages of every address space (for which it is enabled).
>>
>> We can achieve this by simply directly invalidating the TLB entry and
>> therefore forcing every write access to these two pages into the slow
>> path.
>>
>> With this patch, I can boot Linux just fine (which uses LAP). This also
>> makes all related kvm-unit-tests that we have pass.
>
> Tested with a kernel based on the s390/features branch (4.14-rc2 + s390
> patches) and the initrd from the debian installer, had udevd shot down
> by the oomkiller. That happened only once, so it was probably an
> unrelated fluke, but that combination worked well before.
>
Very unlikely: on invalid program exceptions you would get a kernel
panic, not an OOM kill (not saying it isn't possible, rather that it is
very unlikely).
Can you reproduce it with more memory? Have you enabled SMP? (slightly
higher memory consumption)
I am running (almost) the same setup with 500M and haven't observed any
such thing.
> [More config data + logs available on request.]
Can you send via (private) mail? I can have a look.
Thanks for testing!
--
Thanks,
David
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Qemu-devel] [PATCH v1 0/2] s390x/tcg: LAP support using immediate TLB invalidation
2017-10-17 9:22 ` David Hildenbrand
@ 2017-10-17 9:35 ` Cornelia Huck
0 siblings, 0 replies; 12+ messages in thread
From: Cornelia Huck @ 2017-10-17 9:35 UTC (permalink / raw)
To: David Hildenbrand
Cc: qemu-devel, qemu-s390x, Thomas Huth, Richard Henderson,
Alexander Graf, Peter Maydell
On Tue, 17 Oct 2017 11:22:19 +0200
David Hildenbrand <david@redhat.com> wrote:
> On 17.10.2017 10:47, Cornelia Huck wrote:
> > On Mon, 16 Oct 2017 22:23:56 +0200
> > David Hildenbrand <david@redhat.com> wrote:
> >
> >> Details about Low-Address Protection can be found in description of
> >> patch 1 and 2. It is basically a subpage protection of the first two
> >> pages of every address space (for which it is enabled).
> >>
> >> We can achieve this by simply directly invalidating the TLB entry and
> >> therefore forcing every write access to these two pages into the slow
> >> path.
> >>
> >> With this patch, I can boot Linux just fine (which uses LAP). This also
> >> makes all related kvm-unit-tests that we have pass.
> >
> > Tested with a kernel based on the s390/features branch (4.14-rc2 + s390
> > patches) and the initrd from the debian installer, had udevd shot down
> > by the oomkiller. That happened only once, so it was probably an
> > unrelated fluke, but that combination worked well before.
> >
>
> Very unlikely, on invalid programming exceptions you would get a kernel
> panic, not run oom. (not saying it isn't possible, rather that it is
> very unlikely).
That's what I thought as well.
>
> Can you reproduce with more memory? Have you enabled SMP? (little higher
> memory consumption)
SMP is on. I could not reproduce it again...
>
> I am running (almost) the same setup with 500M and haven't observed any
> such thing.
...so I think it really is an unrelated fluke (and I'll simply make the
machine a bit larger).
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Qemu-devel] [PATCH v1 0/2] s390x/tcg: LAP support using immediate TLB invalidation
2017-10-16 20:23 [Qemu-devel] [PATCH v1 0/2] s390x/tcg: LAP support using immediate TLB invalidation David Hildenbrand
` (2 preceding siblings ...)
2017-10-17 8:47 ` [Qemu-devel] [PATCH v1 0/2] s390x/tcg: LAP support using immediate TLB invalidation Cornelia Huck
@ 2017-10-17 12:17 ` Cornelia Huck
2017-10-20 8:25 ` Cornelia Huck
3 siblings, 1 reply; 12+ messages in thread
From: Cornelia Huck @ 2017-10-17 12:17 UTC (permalink / raw)
To: David Hildenbrand
Cc: qemu-devel, qemu-s390x, Thomas Huth, Richard Henderson,
Alexander Graf, Peter Maydell
On Mon, 16 Oct 2017 22:23:56 +0200
David Hildenbrand <david@redhat.com> wrote:
> Details about Low-Address Protection can be found in description of
> patch 1 and 2. It is basically a subpage protection of the first two
> pages of every address space (for which it is enabled).
>
> We can achieve this by simply directly invalidating the TLB entry and
> therefore forcing every write access to these two pages into the slow
> path.
>
> With this patch, I can boot Linux just fine (which uses LAP). This also
> makes all related kvm-unit-tests that we have pass.
>
>
> RFC -> v1:
> - fix LAP range check (Thomas)
> - SIGP fix got picked up
>
> Based on: https://github.com/cohuck/qemu.git s390-next
> Available on: https://github.com/dhildenb/qemu.git s390x_lap
>
>
> David Hildenbrand (2):
> accel/tcg: allow to invalidate a write TLB entry immediately
> s390x/tcg: low-address protection support
>
> accel/tcg/cputlb.c | 5 ++-
> accel/tcg/softmmu_template.h | 4 +-
> include/exec/cpu-all.h | 3 ++
> target/s390x/excp_helper.c | 3 +-
> target/s390x/mem_helper.c | 8 ----
> target/s390x/mmu_helper.c | 94 +++++++++++++++++++++++++++-----------------
> 6 files changed, 69 insertions(+), 48 deletions(-)
>
As the oom was not reproducible (and seems to be only a problem of the
machine being run with barely enough memory), I'm inclined to queue
this to s390-next after I got some acks/r-bs for patch 1.
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Qemu-devel] [PATCH v1 2/2] s390x/tcg: low-address protection support
2017-10-16 20:23 ` [Qemu-devel] [PATCH v1 2/2] s390x/tcg: low-address protection support David Hildenbrand
@ 2017-10-18 18:21 ` Thomas Huth
2017-10-18 19:34 ` David Hildenbrand
0 siblings, 1 reply; 12+ messages in thread
From: Thomas Huth @ 2017-10-18 18:21 UTC (permalink / raw)
To: David Hildenbrand, qemu-devel
Cc: qemu-s390x, Richard Henderson, Alexander Graf, cohuck,
Peter Maydell
On 16.10.2017 22:23, David Hildenbrand wrote:
> This is a neat way to implement low address protection, whereby
> only the first 512 bytes of the first two pages (each 4096 bytes) of
> every address space are protected.
>
> Store a tec of 0 for the access exception, this is what is defined by
> Enhanced Suppression on Protection in case of a low address protection
> (Bit 61 set to 0, rest undefined).
>
> We have to make sure to pass the access address, not the masked page
> address into mmu_translate*().
>
> Drop the check from testblock. So we can properly test this via
> kvm-unit-tests.
>
> This will check every access going through one of the MMUs.
>
> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
> target/s390x/excp_helper.c | 3 +-
> target/s390x/mem_helper.c | 8 ----
> target/s390x/mmu_helper.c | 94 +++++++++++++++++++++++++++++-----------------
> 3 files changed, 60 insertions(+), 45 deletions(-)
[...]
> diff --git a/target/s390x/mmu_helper.c b/target/s390x/mmu_helper.c
> index 9daa0fd8e2..9806685bee 100644
> --- a/target/s390x/mmu_helper.c
> +++ b/target/s390x/mmu_helper.c
> @@ -106,6 +106,35 @@ static void trigger_page_fault(CPUS390XState *env, target_ulong vaddr,
> trigger_access_exception(env, type, ilen, tec);
> }
>
> +/* check whether the address would be protected by Low-Address Protection */
> +static bool is_low_address(uint64_t addr)
> +{
> + return addr < 512 || (addr >= 4096 && addr <= 4607);
> +}
Just cosmetic, but I'd rather either use "<=" or "<" both times, so:
return addr <= 511 || (addr >= 4096 && addr <= 4607);
or:
return addr < 512 || (addr >= 4096 && addr < 4608);
> +/* check whether Low-Address Protection is enabled for mmu_translate() */
> +static bool lowprot_enabled(const CPUS390XState *env, uint64_t asc)
> +{
> + if (!(env->cregs[0] & CR0_LOWPROT)) {
> + return false;
> + }
> + if (!(env->psw.mask & PSW_MASK_DAT)) {
> + return true;
> + }
> +
> + /* Check the private-space control bit */
> + switch (asc) {
> + case PSW_ASC_PRIMARY:
> + return !(env->cregs[1] & _ASCE_PRIVATE_SPACE);
> + case PSW_ASC_SECONDARY:
> + return !(env->cregs[7] & _ASCE_PRIVATE_SPACE);
> + case PSW_ASC_HOME:
> + return !(env->cregs[13] & _ASCE_PRIVATE_SPACE);
> + default:
> + g_assert_not_reached();
Well, this is certainly reachable - if the guest was running in access
register mode. So it might be nicer to the user if you keep the
error_report() here?
> + }
> +}
> +
> /**
> * Translate real address to absolute (= physical)
> * address by taking care of the prefix mapping.
> @@ -323,6 +352,24 @@ int mmu_translate(CPUS390XState *env, target_ulong vaddr, int rw, uint64_t asc,
> }
>
> *flags = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
> + if (is_low_address(vaddr & TARGET_PAGE_MASK) && lowprot_enabled(env, asc)) {
> + /*
> + * If any part of this page is currently protected, make sure the
> + * TLB entry will not be reused.
> + *
> + * As the protected range is always the first 512 bytes of the
> + * two first pages, we are able to catch all writes to these areas
> + * just by looking at the start address (triggering the tlb miss).
> + */
> + *flags |= PAGE_WRITE_INV;
> + if (is_low_address(vaddr) && rw == MMU_DATA_STORE) {
> + if (exc) {
> + trigger_access_exception(env, PGM_PROTECTION, ILEN_AUTO, 0);
> + }
> + return -EACCES;
> + }
> + }
> +
> vaddr &= TARGET_PAGE_MASK;
>
> if (!(env->psw.mask & PSW_MASK_DAT)) {
> @@ -392,50 +439,17 @@ int mmu_translate(CPUS390XState *env, target_ulong vaddr, int rw, uint64_t asc,
> }
>
> /**
> - * lowprot_enabled: Check whether low-address protection is enabled
> - */
> -static bool lowprot_enabled(const CPUS390XState *env)
> -{
> - if (!(env->cregs[0] & CR0_LOWPROT)) {
> - return false;
> - }
> - if (!(env->psw.mask & PSW_MASK_DAT)) {
> - return true;
> - }
> -
> - /* Check the private-space control bit */
> - switch (env->psw.mask & PSW_MASK_ASC) {
> - case PSW_ASC_PRIMARY:
> - return !(env->cregs[1] & _ASCE_PRIVATE_SPACE);
> - case PSW_ASC_SECONDARY:
> - return !(env->cregs[7] & _ASCE_PRIVATE_SPACE);
> - case PSW_ASC_HOME:
> - return !(env->cregs[13] & _ASCE_PRIVATE_SPACE);
> - default:
> - /* We don't support access register mode */
> - error_report("unsupported addressing mode");
> - exit(1);
> - }
> -}
Apart from the nits, the patch looks fine to me.
Thomas
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Qemu-devel] [PATCH v1 2/2] s390x/tcg: low-address protection support
2017-10-18 18:21 ` Thomas Huth
@ 2017-10-18 19:34 ` David Hildenbrand
2017-10-19 8:56 ` Cornelia Huck
0 siblings, 1 reply; 12+ messages in thread
From: David Hildenbrand @ 2017-10-18 19:34 UTC (permalink / raw)
To: Thomas Huth, qemu-devel
Cc: qemu-s390x, Richard Henderson, Alexander Graf, cohuck,
Peter Maydell
On 18.10.2017 20:21, Thomas Huth wrote:
> On 16.10.2017 22:23, David Hildenbrand wrote:
>> This is a neat way to implement low address protection, whereby
>> only the first 512 bytes of the first two pages (each 4096 bytes) of
>> every address space are protected.
>>
>> Store a tec of 0 for the access exception, this is what is defined by
>> Enhanced Suppression on Protection in case of a low address protection
>> (Bit 61 set to 0, rest undefined).
>>
>> We have to make sure to pass the access address, not the masked page
>> address into mmu_translate*().
>>
>> Drop the check from testblock. So we can properly test this via
>> kvm-unit-tests.
>>
>> This will check every access going through one of the MMUs.
>>
>> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>> ---
>> target/s390x/excp_helper.c | 3 +-
>> target/s390x/mem_helper.c | 8 ----
>> target/s390x/mmu_helper.c | 94 +++++++++++++++++++++++++++++-----------------
>> 3 files changed, 60 insertions(+), 45 deletions(-)
> [...]
>> diff --git a/target/s390x/mmu_helper.c b/target/s390x/mmu_helper.c
>> index 9daa0fd8e2..9806685bee 100644
>> --- a/target/s390x/mmu_helper.c
>> +++ b/target/s390x/mmu_helper.c
>> @@ -106,6 +106,35 @@ static void trigger_page_fault(CPUS390XState *env, target_ulong vaddr,
>> trigger_access_exception(env, type, ilen, tec);
>> }
>>
>> +/* check whether the address would be protected by Low-Address Protection */
>> +static bool is_low_address(uint64_t addr)
>> +{
>> + return addr < 512 || (addr >= 4096 && addr <= 4607);
>> +}
>
> Just cosmetic, but I'd rather either use "<=" or "<" both times, so:
>
> return addr <= 511 || (addr >= 4096 && addr <= 4607);
>
That one then, as it matches the wording in the PoP.
>> + /* Check the private-space control bit */
>> + switch (asc) {
>> + case PSW_ASC_PRIMARY:
>> + return !(env->cregs[1] & _ASCE_PRIVATE_SPACE);
>> + case PSW_ASC_SECONDARY:
>> + return !(env->cregs[7] & _ASCE_PRIVATE_SPACE);
>> + case PSW_ASC_HOME:
>> + return !(env->cregs[13] & _ASCE_PRIVATE_SPACE);
>> + default:
>> + g_assert_not_reached();
>
> Well, this is certainly reachable - if the guest was running in access
> register mode. So it might be nicer to the user if you keep the
> error_report() here?
Right, this would be reachable via translate_pages(), but not via the
tlb. Although unlikely to hit it at that point, we can keep the error.
Conny, can you fix these two up or do you want me to resend?
--
Thanks,
David
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Qemu-devel] [PATCH v1 2/2] s390x/tcg: low-address protection support
2017-10-18 19:34 ` David Hildenbrand
@ 2017-10-19 8:56 ` Cornelia Huck
2017-10-19 15:54 ` David Hildenbrand
0 siblings, 1 reply; 12+ messages in thread
From: Cornelia Huck @ 2017-10-19 8:56 UTC (permalink / raw)
To: David Hildenbrand
Cc: Thomas Huth, qemu-devel, qemu-s390x, Richard Henderson,
Alexander Graf, Peter Maydell
On Wed, 18 Oct 2017 21:34:07 +0200
David Hildenbrand <david@redhat.com> wrote:
> On 18.10.2017 20:21, Thomas Huth wrote:
> > On 16.10.2017 22:23, David Hildenbrand wrote:
> >> This is a neat way to implement low address protection, whereby
> >> only the first 512 bytes of the first two pages (each 4096 bytes) of
> >> every address space are protected.
> >>
> >> Store a tec of 0 for the access exception, this is what is defined by
> >> Enhanced Suppression on Protection in case of a low address protection
> >> (Bit 61 set to 0, rest undefined).
> >>
> >> We have to make sure to pass the access address, not the masked page
> >> address into mmu_translate*().
> >>
> >> Drop the check from testblock. So we can properly test this via
> >> kvm-unit-tests.
> >>
> >> This will check every access going through one of the MMUs.
> >>
> >> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> >> Signed-off-by: David Hildenbrand <david@redhat.com>
> >> ---
> >> target/s390x/excp_helper.c | 3 +-
> >> target/s390x/mem_helper.c | 8 ----
> >> target/s390x/mmu_helper.c | 94 +++++++++++++++++++++++++++++-----------------
> >> 3 files changed, 60 insertions(+), 45 deletions(-)
> > [...]
> >> diff --git a/target/s390x/mmu_helper.c b/target/s390x/mmu_helper.c
> >> index 9daa0fd8e2..9806685bee 100644
> >> --- a/target/s390x/mmu_helper.c
> >> +++ b/target/s390x/mmu_helper.c
> >> @@ -106,6 +106,35 @@ static void trigger_page_fault(CPUS390XState *env, target_ulong vaddr,
> >> trigger_access_exception(env, type, ilen, tec);
> >> }
> >>
> >> +/* check whether the address would be protected by Low-Address Protection */
> >> +static bool is_low_address(uint64_t addr)
> >> +{
> >> + return addr < 512 || (addr >= 4096 && addr <= 4607);
> >> +}
> >
> > Just cosmetic, but I'd rather either use "<=" or "<" both times, so:
> >
> > return addr <= 511 || (addr >= 4096 && addr <= 4607);
> >
>
> That one then, as it matches the wording in the PoP.
>
> >> + /* Check the private-space control bit */
> >> + switch (asc) {
> >> + case PSW_ASC_PRIMARY:
> >> + return !(env->cregs[1] & _ASCE_PRIVATE_SPACE);
> >> + case PSW_ASC_SECONDARY:
> >> + return !(env->cregs[7] & _ASCE_PRIVATE_SPACE);
> >> + case PSW_ASC_HOME:
> >> + return !(env->cregs[13] & _ASCE_PRIVATE_SPACE);
> >> + default:
> >> + g_assert_not_reached();
> >
> > Well, this is certainly reachable - if the guest was running in access
> > register mode. So it might be nicer to the user if you keep the
> > error_report() here?
>
> Right, this would be reachable via translate_pages(), but not via the
> tlb. Although unlikely to hit it at that point, we can keep the error.
>
> Conny, can you fix these two up or do you want me to resend?
Fixed it up. Can you please verify that
git://github.com/cohuck/qemu lap
looks sane?
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Qemu-devel] [PATCH v1 2/2] s390x/tcg: low-address protection support
2017-10-19 8:56 ` Cornelia Huck
@ 2017-10-19 15:54 ` David Hildenbrand
0 siblings, 0 replies; 12+ messages in thread
From: David Hildenbrand @ 2017-10-19 15:54 UTC (permalink / raw)
To: Cornelia Huck
Cc: Thomas Huth, qemu-devel, qemu-s390x, Richard Henderson,
Alexander Graf, Peter Maydell
On 19.10.2017 10:56, Cornelia Huck wrote:
> git://github.com/cohuck/qemu lap
Did a diff to my branch, looks very good!
Thanks!
--
Thanks,
David
* Re: [Qemu-devel] [PATCH v1 0/2] s390x/tcg: LAP support using immediate TLB invalidation
2017-10-17 12:17 ` Cornelia Huck
@ 2017-10-20 8:25 ` Cornelia Huck
0 siblings, 0 replies; 12+ messages in thread
From: Cornelia Huck @ 2017-10-20 8:25 UTC (permalink / raw)
To: David Hildenbrand
Cc: qemu-devel, qemu-s390x, Thomas Huth, Richard Henderson,
Alexander Graf, Peter Maydell
On Tue, 17 Oct 2017 14:17:06 +0200
Cornelia Huck <cohuck@redhat.com> wrote:
> On Mon, 16 Oct 2017 22:23:56 +0200
> David Hildenbrand <david@redhat.com> wrote:
>
> > Details about Low-Address Protection can be found in the descriptions
> > of patches 1 and 2. It is basically subpage protection of the first
> > two pages of every address space (for which it is enabled).
> >
> > We achieve this by directly invalidating the TLB entry, thereby
> > forcing every write access to these two pages into the slow path.
> >
> > With these patches, I can boot Linux (which uses LAP) just fine. They
> > also make all related kvm-unit-tests we have pass.
> >
> >
> > RFC -> v1:
> > - fix LAP range check (Thomas)
> > - SIGP fix got picked up
> >
> > Based on: https://github.com/cohuck/qemu.git s390-next
> > Available on: https://github.com/dhildenb/qemu.git s390x_lap
> >
> >
> > David Hildenbrand (2):
> > accel/tcg: allow to invalidate a write TLB entry immediately
> > s390x/tcg: low-address protection support
> >
> > accel/tcg/cputlb.c | 5 ++-
> > accel/tcg/softmmu_template.h | 4 +-
> > include/exec/cpu-all.h | 3 ++
> > target/s390x/excp_helper.c | 3 +-
> > target/s390x/mem_helper.c | 8 ----
> > target/s390x/mmu_helper.c | 94 +++++++++++++++++++++++++++-----------------
> > 6 files changed, 69 insertions(+), 48 deletions(-)
> >
>
> As the oom was not reproducible (and seems to be a problem only when
> the machine is run with barely enough memory), I'm inclined to queue
> this to s390-next once I've got some acks/r-bs for patch 1.
And as I want to send this in my last pull request before softfreeze, I
included it anyway :)
Thanks, applied.
end of thread
Thread overview: 12+ messages
2017-10-16 20:23 [Qemu-devel] [PATCH v1 0/2] s390x/tcg: LAP support using immediate TLB invalidation David Hildenbrand
2017-10-16 20:23 ` [Qemu-devel] [PATCH v1 1/2] accel/tcg: allow to invalidate a write TLB entry immediately David Hildenbrand
2017-10-16 20:23 ` [Qemu-devel] [PATCH v1 2/2] s390x/tcg: low-address protection support David Hildenbrand
2017-10-18 18:21 ` Thomas Huth
2017-10-18 19:34 ` David Hildenbrand
2017-10-19 8:56 ` Cornelia Huck
2017-10-19 15:54 ` David Hildenbrand
2017-10-17 8:47 ` [Qemu-devel] [PATCH v1 0/2] s390x/tcg: LAP support using immediate TLB invalidation Cornelia Huck
2017-10-17 9:22 ` David Hildenbrand
2017-10-17 9:35 ` Cornelia Huck
2017-10-17 12:17 ` Cornelia Huck
2017-10-20 8:25 ` Cornelia Huck