From: Richard Henderson <rth@twiddle.net>
To: Alvise Rigo <a.rigo@virtualopensystems.com>,
qemu-devel@nongnu.org, mttcg@greensocs.com
Cc: claudio.fontana@huawei.com, jani.kokkonen@huawei.com,
tech@virtualopensystems.com, alex.bennee@linaro.org,
pbonzini@redhat.com
Subject: Re: [Qemu-devel] [RFC v5 2/6] softmmu: Add new TLB_EXCL flag
Date: Wed, 30 Sep 2015 13:34:53 +1000
Message-ID: <560B585D.6010103@twiddle.net>
In-Reply-To: <1443083566-10994-3-git-send-email-a.rigo@virtualopensystems.com>
On 09/24/2015 06:32 PM, Alvise Rigo wrote:
> + if (unlikely(!(te->addr_write & TLB_MMIO) && (te->addr_write & TLB_EXCL))) {
> + /* We are removing an exclusive entry, set the page to dirty. This
> + * is not necessary if the vCPU has performed both LL and SC. */
> + hwaddr hw_addr = (env->iotlb[mmu_idx][index].addr & TARGET_PAGE_MASK) +
> + (te->addr_write & TARGET_PAGE_MASK);
> + cpu_physical_memory_set_excl_dirty(hw_addr, cpu->cpu_index);
> + }
Um... this seems dangerous.
(1) I don't see why EXCL support should differ whether MMIO is set or not.
Either we support exclusive accesses on memory-mapped io like we do on ram (in
which case this is wrong) or we don't (in which case this is unnecessary).
(2) Doesn't this prevent a target from accessing a page during a ll/sc sequence
that aliases within our trivial hash? Such a case on arm might be
mov r3, #0x100000
ldrex r0, [r2]
ldr r1, [r2, r3]
add r0, r0, r1
strex r12, r0, [r2]
AFAIK, Alpha is the only target we have which specifies that any normal memory
access during a ll+sc sequence may fail the sc.
(3) I'm finding the "clean/dirty" words less helpful than they could be,
especially since "clean" implies "some cpu has an excl lock on the page",
which is the reverse of what seems natural, though understandable given the
implementation. Perhaps we could rename these helpers?
> @@ -376,6 +392,28 @@ tb_page_addr_t get_page_addr_code(CPUArchState *env1, target_ulong addr)
> return qemu_ram_addr_from_host_nofail(p);
> }
>
> +/* Atomic insn translation TLB support. */
> +#define EXCLUSIVE_RESET_ADDR ULLONG_MAX
> +/* For every vCPU compare the exclusive address and reset it in case of a
> + * match. Since only one vCPU is running at once, no lock has to be held to
> + * guard this operation. */
> +static inline void lookup_and_reset_cpus_ll_addr(hwaddr addr, hwaddr size)
> +{
> + CPUState *cpu;
> + CPUArchState *acpu;
> +
> + CPU_FOREACH(cpu) {
> + acpu = (CPUArchState *)cpu->env_ptr;
> +
> + if (acpu->excl_protected_range.begin != EXCLUSIVE_RESET_ADDR &&
> + ranges_overlap(acpu->excl_protected_range.begin,
> + acpu->excl_protected_range.end - acpu->excl_protected_range.begin,
> + addr, size)) {
Watch the indentation here... it ought to line up with the previous argument on
the line above, just after the (. This may require you to split the subtraction
across the line too, but that's ok.
> void dump_exec_info(FILE *f, fprintf_function cpu_fprintf);
> void dump_opcount_info(FILE *f, fprintf_function cpu_fprintf);
> diff --git a/include/exec/cpu-defs.h b/include/exec/cpu-defs.h
> index 98b9cff..a67f295 100644
> --- a/include/exec/cpu-defs.h
> +++ b/include/exec/cpu-defs.h
> @@ -27,6 +27,7 @@
> #include <inttypes.h>
> #include "qemu/osdep.h"
> #include "qemu/queue.h"
> +#include "qemu/range.h"
> #include "tcg-target.h"
> #ifndef CONFIG_USER_ONLY
> #include "exec/hwaddr.h"
> @@ -150,5 +151,16 @@ typedef struct CPUIOTLBEntry {
> #define CPU_COMMON \
> /* soft mmu support */ \
> CPU_COMMON_TLB \
> + \
> + /* Used by the atomic insn translation backend. */ \
> + int ll_sc_context; \
> + /* vCPU current exclusive addresses range. \
> + * The address is set to EXCLUSIVE_RESET_ADDR if the vCPU is not \
> + * in the middle of a LL/SC. */ \
> + struct Range excl_protected_range; \
> + /* Used to carry the SC result but also to flag a normal (legacy) \
> + * store access made by a stcond (see softmmu_template.h). */ \
> + int excl_succeeded; \
This seems to be required by softmmu_template.h? In which case this must be in
CPU_COMMON_TLB.
Please expand on this "legacy store access" comment here. I don't understand.
> +
>
> #endif
> diff --git a/softmmu_template.h b/softmmu_template.h
> index d42d89d..e4431e8 100644
> --- a/softmmu_template.h
> +++ b/softmmu_template.h
> @@ -409,19 +409,53 @@ void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
> tlb_addr = env->tlb_table[mmu_idx][index].addr_write;
> }
>
> - /* Handle an IO access. */
> + /* Handle an IO access or exclusive access. */
> if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
> - CPUIOTLBEntry *iotlbentry;
> - if ((addr & (DATA_SIZE - 1)) != 0) {
> - goto do_unaligned_access;
> + CPUIOTLBEntry *iotlbentry = &env->iotlb[mmu_idx][index];
> +
> + if ((tlb_addr & ~TARGET_PAGE_MASK) == TLB_EXCL) {
> + /* The slow-path has been forced since we are writing to
> + * exclusive-protected memory. */
> + hwaddr hw_addr = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
> +
> + /* The function lookup_and_reset_cpus_ll_addr could have reset the
> + * exclusive address. Fail the SC in this case.
> + * N.B.: Here excl_succeeded == 0 means that helper_le_st_name has
> + * not been called by a softmmu_llsc_template.h. */
> + if(env->excl_succeeded) {
> + if (env->excl_protected_range.begin != hw_addr) {
> + /* The vCPU is SC-ing to an unprotected address. */
> + env->excl_protected_range.begin = EXCLUSIVE_RESET_ADDR;
> + env->excl_succeeded = 0;
> +
> + return;
Huh? How does a normal store ever fail? Surely you aren't using the normal
store path to support a true store_conditional?
> + }
> +
> + cpu_physical_memory_set_excl_dirty(hw_addr, ENV_GET_CPU(env)->cpu_index);
> + }
> +
> + haddr = addr + env->tlb_table[mmu_idx][index].addend;
> + #if DATA_SIZE == 1
> + glue(glue(st, SUFFIX), _p)((uint8_t *)haddr, val);
> + #else
> + glue(glue(st, SUFFIX), _le_p)((uint8_t *)haddr, val);
> + #endif
You're assuming that TLB_EXCL can never have any other TLB_ bits set. We
definitely have to support TLB_EXCL+TLB_NOTDIRTY, and probably +TLB_MMIO too
(iirc that's how watchpoints are implemented).
r~