* [PATCH v3 00/12] Start replacing target_ulong with vaddr
@ 2023-06-21 13:56 Anton Johansson via
2023-06-21 13:56 ` [PATCH v3 01/12] accel: Replace target_ulong in tlb_*() Anton Johansson via
` (12 more replies)
0 siblings, 13 replies; 16+ messages in thread
From: Anton Johansson via @ 2023-06-21 13:56 UTC (permalink / raw)
To: qemu-devel
Cc: ale, richard.henderson, pbonzini, eduardo, philmd,
marcel.apfelbaum, wangyanan55
This is the first patchset in an effort to remove target_ulong from
non-target/ directories. As use of target_ulong is spread across the
codebase, we are attempting to target as few maintainers as possible
with each patchset in order to ease reviewing.
The following instances of target_ulong remain in accel/ and tcg/:
- atomic helpers (atomic_common.c.inc), cpu_atomic_*()
(atomic_template.h) and cpu_[st|ld]*()
(cputlb.c/ldst_common.c.inc) are only used in target/ and can
be pulled out into a separate target-specific file;
- walk_memory_regions() is used in user-exec.c and
linux-user/elfload.c;
- kvm_find_sw_breakpoint() in kvm-all.c is used in target/.
Changes in v2:
- addr argument in tb_invalidate_phys_addr() changed from vaddr
to hwaddr;
- Removed previous patch:
"[PATCH 4/8] accel/tcg: Replace target_ulong with vaddr in helper_unaligned_*()"
as these functions are removed by Richard's patches;
- First patch:
"[PATCH 1/8] accel: Replace `target_ulong` with `vaddr` in TB/TLB"
has been split into patches 1-7 to ease reviewing;
- Pulled in target/ changes to cpu_get_tb_cpu_state() into this
patchset. This was done to avoid pointer casts to target_ulong *
which would break for 32-bit targets on a 64-bit BE host;
Note the small target/ changes are collected in a single
patch to not break bisection. If it's still desirable to split
based on maintainer, let me know;
- `last` argument of pageflags_[find|next] changed from target_long
to vaddr. This change was left out of the last patchset due to
triggering a "Bad ram pointer" error (softmmu/physmem.c:2273)
when running make check for an i386-softmmu target.
I was not able to recreate this on master or post rebase on
Richard's tcg-once branch.
Changes in v3:
- Rebased on master
Finally, the grand goal is to allow for heterogeneous QEMU binaries
consisting of multiple frontends.
RFC: https://lists.nongnu.org/archive/html/qemu-devel/2022-12/msg04518.html
Anton Johansson (12):
accel: Replace target_ulong in tlb_*()
accel/tcg/translate-all.c: Widen pc and cs_base
target: Widen pc/cs_base in cpu_get_tb_cpu_state
accel/tcg/cputlb.c: Widen CPUTLBEntry access functions
accel/tcg/cputlb.c: Widen addr in MMULookupPageData
accel/tcg/cpu-exec.c: Widen pc to vaddr
accel/tcg: Widen pc to vaddr in CPUJumpCache
accel: Replace target_ulong with vaddr in probe_*()
accel/tcg: Replace target_ulong with vaddr in *_mmu_lookup()
accel/tcg: Replace target_ulong with vaddr in translator_*()
accel/tcg: Replace target_ulong with vaddr in page_*()
cpu: Replace target_ulong with hwaddr in tb_invalidate_phys_addr()
accel/stubs/tcg-stub.c | 6 +-
accel/tcg/cpu-exec.c | 43 ++++---
accel/tcg/cputlb.c | 233 +++++++++++++++++------------------
accel/tcg/internal.h | 6 +-
accel/tcg/tb-hash.h | 12 +-
accel/tcg/tb-jmp-cache.h | 2 +-
accel/tcg/tb-maint.c | 2 +-
accel/tcg/translate-all.c | 13 +-
accel/tcg/translator.c | 10 +-
accel/tcg/user-exec.c | 58 +++++----
cpu.c | 2 +-
include/exec/cpu-all.h | 10 +-
include/exec/cpu-defs.h | 4 +-
include/exec/cpu_ldst.h | 10 +-
include/exec/exec-all.h | 95 +++++++-------
include/exec/translate-all.h | 2 +-
include/exec/translator.h | 6 +-
include/qemu/plugin-memory.h | 2 +-
target/alpha/cpu.h | 4 +-
target/arm/cpu.h | 4 +-
target/arm/helper.c | 4 +-
target/avr/cpu.h | 4 +-
target/cris/cpu.h | 4 +-
target/hexagon/cpu.h | 4 +-
target/hppa/cpu.h | 5 +-
target/i386/cpu.h | 4 +-
target/loongarch/cpu.h | 6 +-
target/m68k/cpu.h | 4 +-
target/microblaze/cpu.h | 4 +-
target/mips/cpu.h | 4 +-
target/nios2/cpu.h | 4 +-
target/openrisc/cpu.h | 5 +-
target/ppc/cpu.h | 8 +-
target/ppc/helper_regs.c | 4 +-
target/riscv/cpu.h | 4 +-
target/riscv/cpu_helper.c | 4 +-
target/rx/cpu.h | 4 +-
target/s390x/cpu.h | 4 +-
target/sh4/cpu.h | 4 +-
target/sparc/cpu.h | 4 +-
target/tricore/cpu.h | 4 +-
target/xtensa/cpu.h | 4 +-
42 files changed, 307 insertions(+), 313 deletions(-)
--
2.41.0
* [PATCH v3 01/12] accel: Replace target_ulong in tlb_*()
2023-06-21 13:56 [PATCH v3 00/12] Start replacing target_ulong with vaddr Anton Johansson via
@ 2023-06-21 13:56 ` Anton Johansson via
2023-06-21 13:56 ` [PATCH v3 02/12] accel/tcg/translate-all.c: Widen pc and cs_base Anton Johansson via
` (11 subsequent siblings)
12 siblings, 0 replies; 16+ messages in thread
From: Anton Johansson via @ 2023-06-21 13:56 UTC (permalink / raw)
To: qemu-devel
Cc: ale, richard.henderson, pbonzini, eduardo, philmd,
marcel.apfelbaum, wangyanan55
Replaces target_ulong with vaddr for guest virtual addresses in tlb_*()
functions and auxiliary structs.
Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/stubs/tcg-stub.c | 2 +-
accel/tcg/cputlb.c | 177 +++++++++++++++++------------------
accel/tcg/tb-maint.c | 2 +-
include/exec/cpu-defs.h | 4 +-
include/exec/exec-all.h | 79 ++++++++--------
include/qemu/plugin-memory.h | 2 +-
6 files changed, 131 insertions(+), 135 deletions(-)
diff --git a/accel/stubs/tcg-stub.c b/accel/stubs/tcg-stub.c
index 813695b402..0998e601ad 100644
--- a/accel/stubs/tcg-stub.c
+++ b/accel/stubs/tcg-stub.c
@@ -18,7 +18,7 @@ void tb_flush(CPUState *cpu)
{
}
-void tlb_set_dirty(CPUState *cpu, target_ulong vaddr)
+void tlb_set_dirty(CPUState *cpu, vaddr vaddr)
{
}
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 14ce97c33b..5caeccb52d 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -427,7 +427,7 @@ void tlb_flush_all_cpus_synced(CPUState *src_cpu)
}
static bool tlb_hit_page_mask_anyprot(CPUTLBEntry *tlb_entry,
- target_ulong page, target_ulong mask)
+ vaddr page, vaddr mask)
{
page &= mask;
mask &= TARGET_PAGE_MASK | TLB_INVALID_MASK;
@@ -437,8 +437,7 @@ static bool tlb_hit_page_mask_anyprot(CPUTLBEntry *tlb_entry,
page == (tlb_entry->addr_code & mask));
}
-static inline bool tlb_hit_page_anyprot(CPUTLBEntry *tlb_entry,
- target_ulong page)
+static inline bool tlb_hit_page_anyprot(CPUTLBEntry *tlb_entry, vaddr page)
{
return tlb_hit_page_mask_anyprot(tlb_entry, page, -1);
}
@@ -454,8 +453,8 @@ static inline bool tlb_entry_is_empty(const CPUTLBEntry *te)
/* Called with tlb_c.lock held */
static bool tlb_flush_entry_mask_locked(CPUTLBEntry *tlb_entry,
- target_ulong page,
- target_ulong mask)
+ vaddr page,
+ vaddr mask)
{
if (tlb_hit_page_mask_anyprot(tlb_entry, page, mask)) {
memset(tlb_entry, -1, sizeof(*tlb_entry));
@@ -464,16 +463,15 @@ static bool tlb_flush_entry_mask_locked(CPUTLBEntry *tlb_entry,
return false;
}
-static inline bool tlb_flush_entry_locked(CPUTLBEntry *tlb_entry,
- target_ulong page)
+static inline bool tlb_flush_entry_locked(CPUTLBEntry *tlb_entry, vaddr page)
{
return tlb_flush_entry_mask_locked(tlb_entry, page, -1);
}
/* Called with tlb_c.lock held */
static void tlb_flush_vtlb_page_mask_locked(CPUArchState *env, int mmu_idx,
- target_ulong page,
- target_ulong mask)
+ vaddr page,
+ vaddr mask)
{
CPUTLBDesc *d = &env_tlb(env)->d[mmu_idx];
int k;
@@ -487,21 +485,20 @@ static void tlb_flush_vtlb_page_mask_locked(CPUArchState *env, int mmu_idx,
}
static inline void tlb_flush_vtlb_page_locked(CPUArchState *env, int mmu_idx,
- target_ulong page)
+ vaddr page)
{
tlb_flush_vtlb_page_mask_locked(env, mmu_idx, page, -1);
}
-static void tlb_flush_page_locked(CPUArchState *env, int midx,
- target_ulong page)
+static void tlb_flush_page_locked(CPUArchState *env, int midx, vaddr page)
{
- target_ulong lp_addr = env_tlb(env)->d[midx].large_page_addr;
- target_ulong lp_mask = env_tlb(env)->d[midx].large_page_mask;
+ vaddr lp_addr = env_tlb(env)->d[midx].large_page_addr;
+ vaddr lp_mask = env_tlb(env)->d[midx].large_page_mask;
/* Check if we need to flush due to large pages. */
if ((page & lp_mask) == lp_addr) {
- tlb_debug("forcing full flush midx %d ("
- TARGET_FMT_lx "/" TARGET_FMT_lx ")\n",
+ tlb_debug("forcing full flush midx %d (%"
+ VADDR_PRIx "/%" VADDR_PRIx ")\n",
midx, lp_addr, lp_mask);
tlb_flush_one_mmuidx_locked(env, midx, get_clock_realtime());
} else {
@@ -522,7 +519,7 @@ static void tlb_flush_page_locked(CPUArchState *env, int midx,
* at @addr from the tlbs indicated by @idxmap from @cpu.
*/
static void tlb_flush_page_by_mmuidx_async_0(CPUState *cpu,
- target_ulong addr,
+ vaddr addr,
uint16_t idxmap)
{
CPUArchState *env = cpu->env_ptr;
@@ -530,7 +527,7 @@ static void tlb_flush_page_by_mmuidx_async_0(CPUState *cpu,
assert_cpu_is_self(cpu);
- tlb_debug("page addr:" TARGET_FMT_lx " mmu_map:0x%x\n", addr, idxmap);
+ tlb_debug("page addr: %" VADDR_PRIx " mmu_map:0x%x\n", addr, idxmap);
qemu_spin_lock(&env_tlb(env)->c.lock);
for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
@@ -561,15 +558,15 @@ static void tlb_flush_page_by_mmuidx_async_0(CPUState *cpu,
static void tlb_flush_page_by_mmuidx_async_1(CPUState *cpu,
run_on_cpu_data data)
{
- target_ulong addr_and_idxmap = (target_ulong) data.target_ptr;
- target_ulong addr = addr_and_idxmap & TARGET_PAGE_MASK;
+ vaddr addr_and_idxmap = data.target_ptr;
+ vaddr addr = addr_and_idxmap & TARGET_PAGE_MASK;
uint16_t idxmap = addr_and_idxmap & ~TARGET_PAGE_MASK;
tlb_flush_page_by_mmuidx_async_0(cpu, addr, idxmap);
}
typedef struct {
- target_ulong addr;
+ vaddr addr;
uint16_t idxmap;
} TLBFlushPageByMMUIdxData;
@@ -592,9 +589,9 @@ static void tlb_flush_page_by_mmuidx_async_2(CPUState *cpu,
g_free(d);
}
-void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr, uint16_t idxmap)
+void tlb_flush_page_by_mmuidx(CPUState *cpu, vaddr addr, uint16_t idxmap)
{
- tlb_debug("addr: "TARGET_FMT_lx" mmu_idx:%" PRIx16 "\n", addr, idxmap);
+ tlb_debug("addr: %" VADDR_PRIx " mmu_idx:%" PRIx16 "\n", addr, idxmap);
/* This should already be page aligned */
addr &= TARGET_PAGE_MASK;
@@ -620,15 +617,15 @@ void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr, uint16_t idxmap)
}
}
-void tlb_flush_page(CPUState *cpu, target_ulong addr)
+void tlb_flush_page(CPUState *cpu, vaddr addr)
{
tlb_flush_page_by_mmuidx(cpu, addr, ALL_MMUIDX_BITS);
}
-void tlb_flush_page_by_mmuidx_all_cpus(CPUState *src_cpu, target_ulong addr,
+void tlb_flush_page_by_mmuidx_all_cpus(CPUState *src_cpu, vaddr addr,
uint16_t idxmap)
{
- tlb_debug("addr: "TARGET_FMT_lx" mmu_idx:%"PRIx16"\n", addr, idxmap);
+ tlb_debug("addr: %" VADDR_PRIx " mmu_idx:%"PRIx16"\n", addr, idxmap);
/* This should already be page aligned */
addr &= TARGET_PAGE_MASK;
@@ -660,16 +657,16 @@ void tlb_flush_page_by_mmuidx_all_cpus(CPUState *src_cpu, target_ulong addr,
tlb_flush_page_by_mmuidx_async_0(src_cpu, addr, idxmap);
}
-void tlb_flush_page_all_cpus(CPUState *src, target_ulong addr)
+void tlb_flush_page_all_cpus(CPUState *src, vaddr addr)
{
tlb_flush_page_by_mmuidx_all_cpus(src, addr, ALL_MMUIDX_BITS);
}
void tlb_flush_page_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
- target_ulong addr,
+ vaddr addr,
uint16_t idxmap)
{
- tlb_debug("addr: "TARGET_FMT_lx" mmu_idx:%"PRIx16"\n", addr, idxmap);
+ tlb_debug("addr: %" VADDR_PRIx " mmu_idx:%"PRIx16"\n", addr, idxmap);
/* This should already be page aligned */
addr &= TARGET_PAGE_MASK;
@@ -706,18 +703,18 @@ void tlb_flush_page_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
}
}
-void tlb_flush_page_all_cpus_synced(CPUState *src, target_ulong addr)
+void tlb_flush_page_all_cpus_synced(CPUState *src, vaddr addr)
{
tlb_flush_page_by_mmuidx_all_cpus_synced(src, addr, ALL_MMUIDX_BITS);
}
static void tlb_flush_range_locked(CPUArchState *env, int midx,
- target_ulong addr, target_ulong len,
+ vaddr addr, vaddr len,
unsigned bits)
{
CPUTLBDesc *d = &env_tlb(env)->d[midx];
CPUTLBDescFast *f = &env_tlb(env)->f[midx];
- target_ulong mask = MAKE_64BIT_MASK(0, bits);
+ vaddr mask = MAKE_64BIT_MASK(0, bits);
/*
* If @bits is smaller than the tlb size, there may be multiple entries
@@ -731,7 +728,7 @@ static void tlb_flush_range_locked(CPUArchState *env, int midx,
*/
if (mask < f->mask || len > f->mask) {
tlb_debug("forcing full flush midx %d ("
- TARGET_FMT_lx "/" TARGET_FMT_lx "+" TARGET_FMT_lx ")\n",
+ "%" VADDR_PRIx "/%" VADDR_PRIx "+%" VADDR_PRIx ")\n",
midx, addr, mask, len);
tlb_flush_one_mmuidx_locked(env, midx, get_clock_realtime());
return;
@@ -744,14 +741,14 @@ static void tlb_flush_range_locked(CPUArchState *env, int midx,
*/
if (((addr + len - 1) & d->large_page_mask) == d->large_page_addr) {
tlb_debug("forcing full flush midx %d ("
- TARGET_FMT_lx "/" TARGET_FMT_lx ")\n",
+ "%" VADDR_PRIx "/%" VADDR_PRIx ")\n",
midx, d->large_page_addr, d->large_page_mask);
tlb_flush_one_mmuidx_locked(env, midx, get_clock_realtime());
return;
}
- for (target_ulong i = 0; i < len; i += TARGET_PAGE_SIZE) {
- target_ulong page = addr + i;
+ for (vaddr i = 0; i < len; i += TARGET_PAGE_SIZE) {
+ vaddr page = addr + i;
CPUTLBEntry *entry = tlb_entry(env, midx, page);
if (tlb_flush_entry_mask_locked(entry, page, mask)) {
@@ -762,8 +759,8 @@ static void tlb_flush_range_locked(CPUArchState *env, int midx,
}
typedef struct {
- target_ulong addr;
- target_ulong len;
+ vaddr addr;
+ vaddr len;
uint16_t idxmap;
uint16_t bits;
} TLBFlushRangeData;
@@ -776,7 +773,7 @@ static void tlb_flush_range_by_mmuidx_async_0(CPUState *cpu,
assert_cpu_is_self(cpu);
- tlb_debug("range:" TARGET_FMT_lx "/%u+" TARGET_FMT_lx " mmu_map:0x%x\n",
+ tlb_debug("range: %" VADDR_PRIx "/%u+%" VADDR_PRIx " mmu_map:0x%x\n",
d.addr, d.bits, d.len, d.idxmap);
qemu_spin_lock(&env_tlb(env)->c.lock);
@@ -801,7 +798,7 @@ static void tlb_flush_range_by_mmuidx_async_0(CPUState *cpu,
* overlap the flushed pages, which includes the previous.
*/
d.addr -= TARGET_PAGE_SIZE;
- for (target_ulong i = 0, n = d.len / TARGET_PAGE_SIZE + 1; i < n; i++) {
+ for (vaddr i = 0, n = d.len / TARGET_PAGE_SIZE + 1; i < n; i++) {
tb_jmp_cache_clear_page(cpu, d.addr);
d.addr += TARGET_PAGE_SIZE;
}
@@ -815,8 +812,8 @@ static void tlb_flush_range_by_mmuidx_async_1(CPUState *cpu,
g_free(d);
}
-void tlb_flush_range_by_mmuidx(CPUState *cpu, target_ulong addr,
- target_ulong len, uint16_t idxmap,
+void tlb_flush_range_by_mmuidx(CPUState *cpu, vaddr addr,
+ vaddr len, uint16_t idxmap,
unsigned bits)
{
TLBFlushRangeData d;
@@ -851,14 +848,14 @@ void tlb_flush_range_by_mmuidx(CPUState *cpu, target_ulong addr,
}
}
-void tlb_flush_page_bits_by_mmuidx(CPUState *cpu, target_ulong addr,
+void tlb_flush_page_bits_by_mmuidx(CPUState *cpu, vaddr addr,
uint16_t idxmap, unsigned bits)
{
tlb_flush_range_by_mmuidx(cpu, addr, TARGET_PAGE_SIZE, idxmap, bits);
}
void tlb_flush_range_by_mmuidx_all_cpus(CPUState *src_cpu,
- target_ulong addr, target_ulong len,
+ vaddr addr, vaddr len,
uint16_t idxmap, unsigned bits)
{
TLBFlushRangeData d;
@@ -898,16 +895,16 @@ void tlb_flush_range_by_mmuidx_all_cpus(CPUState *src_cpu,
}
void tlb_flush_page_bits_by_mmuidx_all_cpus(CPUState *src_cpu,
- target_ulong addr,
- uint16_t idxmap, unsigned bits)
+ vaddr addr, uint16_t idxmap,
+ unsigned bits)
{
tlb_flush_range_by_mmuidx_all_cpus(src_cpu, addr, TARGET_PAGE_SIZE,
idxmap, bits);
}
void tlb_flush_range_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
- target_ulong addr,
- target_ulong len,
+ vaddr addr,
+ vaddr len,
uint16_t idxmap,
unsigned bits)
{
@@ -949,7 +946,7 @@ void tlb_flush_range_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
}
void tlb_flush_page_bits_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
- target_ulong addr,
+ vaddr addr,
uint16_t idxmap,
unsigned bits)
{
@@ -1055,32 +1052,32 @@ void tlb_reset_dirty(CPUState *cpu, ram_addr_t start1, ram_addr_t length)
/* Called with tlb_c.lock held */
static inline void tlb_set_dirty1_locked(CPUTLBEntry *tlb_entry,
- target_ulong vaddr)
+ vaddr addr)
{
- if (tlb_entry->addr_write == (vaddr | TLB_NOTDIRTY)) {
- tlb_entry->addr_write = vaddr;
+ if (tlb_entry->addr_write == (addr | TLB_NOTDIRTY)) {
+ tlb_entry->addr_write = addr;
}
}
/* update the TLB corresponding to virtual page vaddr
so that it is no longer dirty */
-void tlb_set_dirty(CPUState *cpu, target_ulong vaddr)
+void tlb_set_dirty(CPUState *cpu, vaddr addr)
{
CPUArchState *env = cpu->env_ptr;
int mmu_idx;
assert_cpu_is_self(cpu);
- vaddr &= TARGET_PAGE_MASK;
+ addr &= TARGET_PAGE_MASK;
qemu_spin_lock(&env_tlb(env)->c.lock);
for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
- tlb_set_dirty1_locked(tlb_entry(env, mmu_idx, vaddr), vaddr);
+ tlb_set_dirty1_locked(tlb_entry(env, mmu_idx, addr), addr);
}
for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
int k;
for (k = 0; k < CPU_VTLB_SIZE; k++) {
- tlb_set_dirty1_locked(&env_tlb(env)->d[mmu_idx].vtable[k], vaddr);
+ tlb_set_dirty1_locked(&env_tlb(env)->d[mmu_idx].vtable[k], addr);
}
}
qemu_spin_unlock(&env_tlb(env)->c.lock);
@@ -1089,20 +1086,20 @@ void tlb_set_dirty(CPUState *cpu, target_ulong vaddr)
/* Our TLB does not support large pages, so remember the area covered by
large pages and trigger a full TLB flush if these are invalidated. */
static void tlb_add_large_page(CPUArchState *env, int mmu_idx,
- target_ulong vaddr, target_ulong size)
+ vaddr addr, uint64_t size)
{
- target_ulong lp_addr = env_tlb(env)->d[mmu_idx].large_page_addr;
- target_ulong lp_mask = ~(size - 1);
+ vaddr lp_addr = env_tlb(env)->d[mmu_idx].large_page_addr;
+ vaddr lp_mask = ~(size - 1);
- if (lp_addr == (target_ulong)-1) {
+ if (lp_addr == (vaddr)-1) {
/* No previous large page. */
- lp_addr = vaddr;
+ lp_addr = addr;
} else {
/* Extend the existing region to include the new page.
This is a compromise between unnecessary flushes and
the cost of maintaining a full variable size TLB. */
lp_mask &= env_tlb(env)->d[mmu_idx].large_page_mask;
- while (((lp_addr ^ vaddr) & lp_mask) != 0) {
+ while (((lp_addr ^ addr) & lp_mask) != 0) {
lp_mask <<= 1;
}
}
@@ -1119,19 +1116,19 @@ static void tlb_add_large_page(CPUArchState *env, int mmu_idx,
* critical section.
*/
void tlb_set_page_full(CPUState *cpu, int mmu_idx,
- target_ulong vaddr, CPUTLBEntryFull *full)
+ vaddr addr, CPUTLBEntryFull *full)
{
CPUArchState *env = cpu->env_ptr;
CPUTLB *tlb = env_tlb(env);
CPUTLBDesc *desc = &tlb->d[mmu_idx];
MemoryRegionSection *section;
unsigned int index;
- target_ulong address;
- target_ulong write_address;
+ vaddr address;
+ vaddr write_address;
uintptr_t addend;
CPUTLBEntry *te, tn;
hwaddr iotlb, xlat, sz, paddr_page;
- target_ulong vaddr_page;
+ vaddr addr_page;
int asidx, wp_flags, prot;
bool is_ram, is_romd;
@@ -1141,9 +1138,9 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
sz = TARGET_PAGE_SIZE;
} else {
sz = (hwaddr)1 << full->lg_page_size;
- tlb_add_large_page(env, mmu_idx, vaddr, sz);
+ tlb_add_large_page(env, mmu_idx, addr, sz);
}
- vaddr_page = vaddr & TARGET_PAGE_MASK;
+ addr_page = addr & TARGET_PAGE_MASK;
paddr_page = full->phys_addr & TARGET_PAGE_MASK;
prot = full->prot;
@@ -1152,11 +1149,11 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
&xlat, &sz, full->attrs, &prot);
assert(sz >= TARGET_PAGE_SIZE);
- tlb_debug("vaddr=" TARGET_FMT_lx " paddr=0x" HWADDR_FMT_plx
+ tlb_debug("vaddr=%" VADDR_PRIx " paddr=0x" HWADDR_FMT_plx
" prot=%x idx=%d\n",
- vaddr, full->phys_addr, prot, mmu_idx);
+ addr, full->phys_addr, prot, mmu_idx);
- address = vaddr_page;
+ address = addr_page;
if (full->lg_page_size < TARGET_PAGE_BITS) {
/* Repeat the MMU check and TLB fill on every access. */
address |= TLB_INVALID_MASK;
@@ -1204,11 +1201,11 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
}
}
- wp_flags = cpu_watchpoint_address_matches(cpu, vaddr_page,
+ wp_flags = cpu_watchpoint_address_matches(cpu, addr_page,
TARGET_PAGE_SIZE);
- index = tlb_index(env, mmu_idx, vaddr_page);
- te = tlb_entry(env, mmu_idx, vaddr_page);
+ index = tlb_index(env, mmu_idx, addr_page);
+ te = tlb_entry(env, mmu_idx, addr_page);
/*
* Hold the TLB lock for the rest of the function. We could acquire/release
@@ -1223,13 +1220,13 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
tlb->c.dirty |= 1 << mmu_idx;
/* Make sure there's no cached translation for the new page. */
- tlb_flush_vtlb_page_locked(env, mmu_idx, vaddr_page);
+ tlb_flush_vtlb_page_locked(env, mmu_idx, addr_page);
/*
* Only evict the old entry to the victim tlb if it's for a
* different page; otherwise just overwrite the stale data.
*/
- if (!tlb_hit_page_anyprot(te, vaddr_page) && !tlb_entry_is_empty(te)) {
+ if (!tlb_hit_page_anyprot(te, addr_page) && !tlb_entry_is_empty(te)) {
unsigned vidx = desc->vindex++ % CPU_VTLB_SIZE;
CPUTLBEntry *tv = &desc->vtable[vidx];
@@ -1253,11 +1250,11 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
* vaddr we add back in io_readx()/io_writex()/get_page_addr_code().
*/
desc->fulltlb[index] = *full;
- desc->fulltlb[index].xlat_section = iotlb - vaddr_page;
+ desc->fulltlb[index].xlat_section = iotlb - addr_page;
desc->fulltlb[index].phys_addr = paddr_page;
/* Now calculate the new entry */
- tn.addend = addend - vaddr_page;
+ tn.addend = addend - addr_page;
if (prot & PAGE_READ) {
tn.addr_read = address;
if (wp_flags & BP_MEM_READ) {
@@ -1289,9 +1286,9 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
qemu_spin_unlock(&tlb->c.lock);
}
-void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
+void tlb_set_page_with_attrs(CPUState *cpu, vaddr addr,
hwaddr paddr, MemTxAttrs attrs, int prot,
- int mmu_idx, target_ulong size)
+ int mmu_idx, uint64_t size)
{
CPUTLBEntryFull full = {
.phys_addr = paddr,
@@ -1301,14 +1298,14 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
};
assert(is_power_of_2(size));
- tlb_set_page_full(cpu, mmu_idx, vaddr, &full);
+ tlb_set_page_full(cpu, mmu_idx, addr, &full);
}
-void tlb_set_page(CPUState *cpu, target_ulong vaddr,
+void tlb_set_page(CPUState *cpu, vaddr addr,
hwaddr paddr, int prot,
- int mmu_idx, target_ulong size)
+ int mmu_idx, uint64_t size)
{
- tlb_set_page_with_attrs(cpu, vaddr, paddr, MEMTXATTRS_UNSPECIFIED,
+ tlb_set_page_with_attrs(cpu, addr, paddr, MEMTXATTRS_UNSPECIFIED,
prot, mmu_idx, size);
}
@@ -1317,7 +1314,7 @@ void tlb_set_page(CPUState *cpu, target_ulong vaddr,
* caller's prior references to the TLB table (e.g. CPUTLBEntry pointers) must
* be discarded and looked up again (e.g. via tlb_entry()).
*/
-static void tlb_fill(CPUState *cpu, target_ulong addr, int size,
+static void tlb_fill(CPUState *cpu, vaddr addr, int size,
MMUAccessType access_type, int mmu_idx, uintptr_t retaddr)
{
bool ok;
@@ -1357,7 +1354,7 @@ static inline void cpu_transaction_failed(CPUState *cpu, hwaddr physaddr,
}
static uint64_t io_readx(CPUArchState *env, CPUTLBEntryFull *full,
- int mmu_idx, target_ulong addr, uintptr_t retaddr,
+ int mmu_idx, vaddr addr, uintptr_t retaddr,
MMUAccessType access_type, MemOp op)
{
CPUState *cpu = env_cpu(env);
@@ -1407,7 +1404,7 @@ static void save_iotlb_data(CPUState *cs, MemoryRegionSection *section,
}
static void io_writex(CPUArchState *env, CPUTLBEntryFull *full,
- int mmu_idx, uint64_t val, target_ulong addr,
+ int mmu_idx, uint64_t val, vaddr addr,
uintptr_t retaddr, MemOp op)
{
CPUState *cpu = env_cpu(env);
@@ -1449,7 +1446,7 @@ static void io_writex(CPUArchState *env, CPUTLBEntryFull *full,
/* Return true if ADDR is present in the victim tlb, and has been copied
back to the main tlb. */
static bool victim_tlb_hit(CPUArchState *env, size_t mmu_idx, size_t index,
- MMUAccessType access_type, target_ulong page)
+ MMUAccessType access_type, vaddr page)
{
size_t vidx;
@@ -1691,13 +1688,13 @@ tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, target_ulong addr,
* from the same thread (which a mem callback will be) this is safe.
*/
-bool tlb_plugin_lookup(CPUState *cpu, target_ulong addr, int mmu_idx,
+bool tlb_plugin_lookup(CPUState *cpu, vaddr addr, int mmu_idx,
bool is_store, struct qemu_plugin_hwaddr *data)
{
CPUArchState *env = cpu->env_ptr;
CPUTLBEntry *tlbe = tlb_entry(env, mmu_idx, addr);
uintptr_t index = tlb_index(env, mmu_idx, addr);
- target_ulong tlb_addr = is_store ? tlb_addr_write(tlbe) : tlbe->addr_read;
+ vaddr tlb_addr = is_store ? tlb_addr_write(tlbe) : tlbe->addr_read;
if (likely(tlb_hit(tlb_addr, addr))) {
/* We must have an iotlb entry for MMIO */
diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c
index 892eecda2d..3541419845 100644
--- a/accel/tcg/tb-maint.c
+++ b/accel/tcg/tb-maint.c
@@ -98,7 +98,7 @@ static void tb_remove_all(void)
/* Call with mmap_lock held. */
static void tb_record(TranslationBlock *tb, PageDesc *p1, PageDesc *p2)
{
- target_ulong addr;
+ vaddr addr;
int flags;
assert_memory_lock();
diff --git a/include/exec/cpu-defs.h b/include/exec/cpu-defs.h
index 4cb77c8dec..e6a079402e 100644
--- a/include/exec/cpu-defs.h
+++ b/include/exec/cpu-defs.h
@@ -147,8 +147,8 @@ typedef struct CPUTLBDesc {
* we must flush the entire tlb. The region is matched if
* (addr & large_page_mask) == large_page_addr.
*/
- target_ulong large_page_addr;
- target_ulong large_page_mask;
+ vaddr large_page_addr;
+ vaddr large_page_mask;
/* host time (in ns) at the beginning of the time window */
int64_t window_begin_ns;
/* maximum number of entries observed in the window */
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index 698943d58f..f5508e242b 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -94,7 +94,7 @@ void tlb_destroy(CPUState *cpu);
* Flush one page from the TLB of the specified CPU, for all
* MMU indexes.
*/
-void tlb_flush_page(CPUState *cpu, target_ulong addr);
+void tlb_flush_page(CPUState *cpu, vaddr addr);
/**
* tlb_flush_page_all_cpus:
* @cpu: src CPU of the flush
@@ -103,7 +103,7 @@ void tlb_flush_page(CPUState *cpu, target_ulong addr);
* Flush one page from the TLB of the specified CPU, for all
* MMU indexes.
*/
-void tlb_flush_page_all_cpus(CPUState *src, target_ulong addr);
+void tlb_flush_page_all_cpus(CPUState *src, vaddr addr);
/**
* tlb_flush_page_all_cpus_synced:
* @cpu: src CPU of the flush
@@ -115,7 +115,7 @@ void tlb_flush_page_all_cpus(CPUState *src, target_ulong addr);
* the source vCPUs safe work is complete. This will depend on when
* the guests translation ends the TB.
*/
-void tlb_flush_page_all_cpus_synced(CPUState *src, target_ulong addr);
+void tlb_flush_page_all_cpus_synced(CPUState *src, vaddr addr);
/**
* tlb_flush:
* @cpu: CPU whose TLB should be flushed
@@ -150,7 +150,7 @@ void tlb_flush_all_cpus_synced(CPUState *src_cpu);
* Flush one page from the TLB of the specified CPU, for the specified
* MMU indexes.
*/
-void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr,
+void tlb_flush_page_by_mmuidx(CPUState *cpu, vaddr addr,
uint16_t idxmap);
/**
* tlb_flush_page_by_mmuidx_all_cpus:
@@ -161,7 +161,7 @@ void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr,
* Flush one page from the TLB of all CPUs, for the specified
* MMU indexes.
*/
-void tlb_flush_page_by_mmuidx_all_cpus(CPUState *cpu, target_ulong addr,
+void tlb_flush_page_by_mmuidx_all_cpus(CPUState *cpu, vaddr addr,
uint16_t idxmap);
/**
* tlb_flush_page_by_mmuidx_all_cpus_synced:
@@ -175,7 +175,7 @@ void tlb_flush_page_by_mmuidx_all_cpus(CPUState *cpu, target_ulong addr,
* complete once the source vCPUs safe work is complete. This will
* depend on when the guests translation ends the TB.
*/
-void tlb_flush_page_by_mmuidx_all_cpus_synced(CPUState *cpu, target_ulong addr,
+void tlb_flush_page_by_mmuidx_all_cpus_synced(CPUState *cpu, vaddr addr,
uint16_t idxmap);
/**
* tlb_flush_by_mmuidx:
@@ -218,14 +218,14 @@ void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *cpu, uint16_t idxmap);
*
* Similar to tlb_flush_page_mask, but with a bitmap of indexes.
*/
-void tlb_flush_page_bits_by_mmuidx(CPUState *cpu, target_ulong addr,
+void tlb_flush_page_bits_by_mmuidx(CPUState *cpu, vaddr addr,
uint16_t idxmap, unsigned bits);
/* Similarly, with broadcast and syncing. */
-void tlb_flush_page_bits_by_mmuidx_all_cpus(CPUState *cpu, target_ulong addr,
+void tlb_flush_page_bits_by_mmuidx_all_cpus(CPUState *cpu, vaddr addr,
uint16_t idxmap, unsigned bits);
void tlb_flush_page_bits_by_mmuidx_all_cpus_synced
- (CPUState *cpu, target_ulong addr, uint16_t idxmap, unsigned bits);
+ (CPUState *cpu, vaddr addr, uint16_t idxmap, unsigned bits);
/**
* tlb_flush_range_by_mmuidx
@@ -238,17 +238,17 @@ void tlb_flush_page_bits_by_mmuidx_all_cpus_synced
* For each mmuidx in @idxmap, flush all pages within [@addr,@addr+@len),
* comparing only the low @bits worth of each virtual page.
*/
-void tlb_flush_range_by_mmuidx(CPUState *cpu, target_ulong addr,
- target_ulong len, uint16_t idxmap,
+void tlb_flush_range_by_mmuidx(CPUState *cpu, vaddr addr,
+ vaddr len, uint16_t idxmap,
unsigned bits);
/* Similarly, with broadcast and syncing. */
-void tlb_flush_range_by_mmuidx_all_cpus(CPUState *cpu, target_ulong addr,
- target_ulong len, uint16_t idxmap,
+void tlb_flush_range_by_mmuidx_all_cpus(CPUState *cpu, vaddr addr,
+ vaddr len, uint16_t idxmap,
unsigned bits);
void tlb_flush_range_by_mmuidx_all_cpus_synced(CPUState *cpu,
- target_ulong addr,
- target_ulong len,
+ vaddr addr,
+ vaddr len,
uint16_t idxmap,
unsigned bits);
@@ -256,7 +256,7 @@ void tlb_flush_range_by_mmuidx_all_cpus_synced(CPUState *cpu,
* tlb_set_page_full:
* @cpu: CPU context
* @mmu_idx: mmu index of the tlb to modify
- * @vaddr: virtual address of the entry to add
+ * @addr: virtual address of the entry to add
* @full: the details of the tlb entry
*
* Add an entry to @cpu tlb index @mmu_idx. All of the fields of
@@ -271,13 +271,13 @@ void tlb_flush_range_by_mmuidx_all_cpus_synced(CPUState *cpu,
* single TARGET_PAGE_SIZE region is mapped; @full->lg_page_size is only
* used by tlb_flush_page.
*/
-void tlb_set_page_full(CPUState *cpu, int mmu_idx, target_ulong vaddr,
+void tlb_set_page_full(CPUState *cpu, int mmu_idx, vaddr addr,
CPUTLBEntryFull *full);
/**
* tlb_set_page_with_attrs:
* @cpu: CPU to add this TLB entry for
- * @vaddr: virtual address of page to add entry for
+ * @addr: virtual address of page to add entry for
* @paddr: physical address of the page
* @attrs: memory transaction attributes
* @prot: access permissions (PAGE_READ/PAGE_WRITE/PAGE_EXEC bits)
@@ -285,7 +285,7 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx, target_ulong vaddr,
* @size: size of the page in bytes
*
* Add an entry to this CPU's TLB (a mapping from virtual address
- * @vaddr to physical address @paddr) with the specified memory
+ * @addr to physical address @paddr) with the specified memory
* transaction attributes. This is generally called by the target CPU
* specific code after it has been called through the tlb_fill()
* entry point and performed a successful page table walk to find
@@ -296,18 +296,18 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx, target_ulong vaddr,
* single TARGET_PAGE_SIZE region is mapped; the supplied @size is only
* used by tlb_flush_page.
*/
-void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
+void tlb_set_page_with_attrs(CPUState *cpu, vaddr addr,
hwaddr paddr, MemTxAttrs attrs,
- int prot, int mmu_idx, target_ulong size);
+ int prot, int mmu_idx, vaddr size);
/* tlb_set_page:
*
* This function is equivalent to calling tlb_set_page_with_attrs()
* with an @attrs argument of MEMTXATTRS_UNSPECIFIED. It's provided
* as a convenience for CPUs which don't use memory transaction attributes.
*/
-void tlb_set_page(CPUState *cpu, target_ulong vaddr,
+void tlb_set_page(CPUState *cpu, vaddr addr,
hwaddr paddr, int prot,
- int mmu_idx, target_ulong size);
+ int mmu_idx, vaddr size);
#else
static inline void tlb_init(CPUState *cpu)
{
@@ -315,14 +315,13 @@ static inline void tlb_init(CPUState *cpu)
static inline void tlb_destroy(CPUState *cpu)
{
}
-static inline void tlb_flush_page(CPUState *cpu, target_ulong addr)
+static inline void tlb_flush_page(CPUState *cpu, vaddr addr)
{
}
-static inline void tlb_flush_page_all_cpus(CPUState *src, target_ulong addr)
+static inline void tlb_flush_page_all_cpus(CPUState *src, vaddr addr)
{
}
-static inline void tlb_flush_page_all_cpus_synced(CPUState *src,
- target_ulong addr)
+static inline void tlb_flush_page_all_cpus_synced(CPUState *src, vaddr addr)
{
}
static inline void tlb_flush(CPUState *cpu)
@@ -335,7 +334,7 @@ static inline void tlb_flush_all_cpus_synced(CPUState *src_cpu)
{
}
static inline void tlb_flush_page_by_mmuidx(CPUState *cpu,
- target_ulong addr, uint16_t idxmap)
+ vaddr addr, uint16_t idxmap)
{
}
@@ -343,12 +342,12 @@ static inline void tlb_flush_by_mmuidx(CPUState *cpu, uint16_t idxmap)
{
}
static inline void tlb_flush_page_by_mmuidx_all_cpus(CPUState *cpu,
- target_ulong addr,
+ vaddr addr,
uint16_t idxmap)
{
}
static inline void tlb_flush_page_by_mmuidx_all_cpus_synced(CPUState *cpu,
- target_ulong addr,
+ vaddr addr,
uint16_t idxmap)
{
}
@@ -361,37 +360,37 @@ static inline void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *cpu,
{
}
static inline void tlb_flush_page_bits_by_mmuidx(CPUState *cpu,
- target_ulong addr,
+ vaddr addr,
uint16_t idxmap,
unsigned bits)
{
}
static inline void tlb_flush_page_bits_by_mmuidx_all_cpus(CPUState *cpu,
- target_ulong addr,
+ vaddr addr,
uint16_t idxmap,
unsigned bits)
{
}
static inline void
-tlb_flush_page_bits_by_mmuidx_all_cpus_synced(CPUState *cpu, target_ulong addr,
+tlb_flush_page_bits_by_mmuidx_all_cpus_synced(CPUState *cpu, vaddr addr,
uint16_t idxmap, unsigned bits)
{
}
-static inline void tlb_flush_range_by_mmuidx(CPUState *cpu, target_ulong addr,
- target_ulong len, uint16_t idxmap,
+static inline void tlb_flush_range_by_mmuidx(CPUState *cpu, vaddr addr,
+ vaddr len, uint16_t idxmap,
unsigned bits)
{
}
static inline void tlb_flush_range_by_mmuidx_all_cpus(CPUState *cpu,
- target_ulong addr,
- target_ulong len,
+ vaddr addr,
+ vaddr len,
uint16_t idxmap,
unsigned bits)
{
}
static inline void tlb_flush_range_by_mmuidx_all_cpus_synced(CPUState *cpu,
- target_ulong addr,
- target_long len,
+ vaddr addr,
+ vaddr len,
uint16_t idxmap,
unsigned bits)
{
@@ -663,7 +662,7 @@ static inline void mmap_lock(void) {}
static inline void mmap_unlock(void) {}
void tlb_reset_dirty(CPUState *cpu, ram_addr_t start1, ram_addr_t length);
-void tlb_set_dirty(CPUState *cpu, target_ulong vaddr);
+void tlb_set_dirty(CPUState *cpu, vaddr addr);
MemoryRegionSection *
address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr addr,
diff --git a/include/qemu/plugin-memory.h b/include/qemu/plugin-memory.h
index 6fd539022a..43165f2452 100644
--- a/include/qemu/plugin-memory.h
+++ b/include/qemu/plugin-memory.h
@@ -37,7 +37,7 @@ struct qemu_plugin_hwaddr {
* It would only fail if not called from an instrumented memory access
* which would be an abuse of the API.
*/
-bool tlb_plugin_lookup(CPUState *cpu, target_ulong addr, int mmu_idx,
+bool tlb_plugin_lookup(CPUState *cpu, vaddr addr, int mmu_idx,
bool is_store, struct qemu_plugin_hwaddr *data);
#endif /* PLUGIN_MEMORY_H */
--
2.41.0
* [PATCH v3 02/12] accel/tcg/translate-all.c: Widen pc and cs_base
From: Anton Johansson via @ 2023-06-21 13:56 UTC (permalink / raw)
To: qemu-devel
Cc: ale, richard.henderson, pbonzini, eduardo, philmd,
marcel.apfelbaum, wangyanan55
Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/internal.h | 6 +++---
accel/tcg/translate-all.c | 10 +++++-----
2 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h
index 65380ccb42..91f308bdfa 100644
--- a/accel/tcg/internal.h
+++ b/accel/tcg/internal.h
@@ -42,8 +42,8 @@ void tb_invalidate_phys_range_fast(ram_addr_t ram_addr,
G_NORETURN void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr);
#endif /* CONFIG_SOFTMMU */
-TranslationBlock *tb_gen_code(CPUState *cpu, target_ulong pc,
- target_ulong cs_base, uint32_t flags,
+TranslationBlock *tb_gen_code(CPUState *cpu, vaddr pc,
+ uint64_t cs_base, uint32_t flags,
int cflags);
void page_init(void);
void tb_htable_init(void);
@@ -55,7 +55,7 @@ void cpu_restore_state_from_tb(CPUState *cpu, TranslationBlock *tb,
uintptr_t host_pc);
/* Return the current PC from CPU, which may be cached in TB. */
-static inline target_ulong log_pc(CPUState *cpu, const TranslationBlock *tb)
+static inline vaddr log_pc(CPUState *cpu, const TranslationBlock *tb)
{
if (tb_cflags(tb) & CF_PCREL) {
return cpu->cc->get_pc(cpu);
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index c4d081f5ad..03caf62459 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -274,7 +274,7 @@ void page_init(void)
* Return the size of the generated code, or negative on error.
*/
static int setjmp_gen_code(CPUArchState *env, TranslationBlock *tb,
- target_ulong pc, void *host_pc,
+ vaddr pc, void *host_pc,
int *max_insns, int64_t *ti)
{
int ret = sigsetjmp(tcg_ctx->jmp_trans, 0);
@@ -302,7 +302,7 @@ static int setjmp_gen_code(CPUArchState *env, TranslationBlock *tb,
/* Called with mmap_lock held for user mode emulation. */
TranslationBlock *tb_gen_code(CPUState *cpu,
- target_ulong pc, target_ulong cs_base,
+ vaddr pc, uint64_t cs_base,
uint32_t flags, int cflags)
{
CPUArchState *env = cpu->env_ptr;
@@ -634,10 +634,10 @@ void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr)
cpu->cflags_next_tb = curr_cflags(cpu) | CF_MEMI_ONLY | CF_LAST_IO | n;
if (qemu_loglevel_mask(CPU_LOG_EXEC)) {
- target_ulong pc = log_pc(cpu, tb);
+ vaddr pc = log_pc(cpu, tb);
if (qemu_log_in_addr_range(pc)) {
- qemu_log("cpu_io_recompile: rewound execution of TB to "
- TARGET_FMT_lx "\n", pc);
+ qemu_log("cpu_io_recompile: rewound execution of TB to %"
+ VADDR_PRIx "\n", pc);
}
}
--
2.41.0
* [PATCH v3 03/12] target: Widen pc/cs_base in cpu_get_tb_cpu_state
From: Anton Johansson via @ 2023-06-21 13:56 UTC (permalink / raw)
To: qemu-devel
Cc: ale, richard.henderson, pbonzini, eduardo, philmd,
marcel.apfelbaum, wangyanan55
Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/cpu-exec.c | 9 ++++++---
accel/tcg/translate-all.c | 3 ++-
target/alpha/cpu.h | 4 ++--
target/arm/cpu.h | 4 ++--
target/arm/helper.c | 4 ++--
target/avr/cpu.h | 4 ++--
target/cris/cpu.h | 4 ++--
target/hexagon/cpu.h | 4 ++--
target/hppa/cpu.h | 5 ++---
target/i386/cpu.h | 4 ++--
target/loongarch/cpu.h | 6 ++----
target/m68k/cpu.h | 4 ++--
target/microblaze/cpu.h | 4 ++--
target/mips/cpu.h | 4 ++--
target/nios2/cpu.h | 4 ++--
target/openrisc/cpu.h | 5 ++---
target/ppc/cpu.h | 8 ++++----
target/ppc/helper_regs.c | 4 ++--
target/riscv/cpu.h | 4 ++--
target/riscv/cpu_helper.c | 4 ++--
target/rx/cpu.h | 4 ++--
target/s390x/cpu.h | 4 ++--
target/sh4/cpu.h | 4 ++--
target/sparc/cpu.h | 4 ++--
target/tricore/cpu.h | 4 ++--
target/xtensa/cpu.h | 4 ++--
26 files changed, 58 insertions(+), 58 deletions(-)
diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index 179847b294..4d952a6cc2 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -408,7 +408,8 @@ const void *HELPER(lookup_tb_ptr)(CPUArchState *env)
{
CPUState *cpu = env_cpu(env);
TranslationBlock *tb;
- target_ulong cs_base, pc;
+ vaddr pc;
+ uint64_t cs_base;
uint32_t flags, cflags;
cpu_get_tb_cpu_state(env, &pc, &cs_base, &flags);
@@ -529,7 +530,8 @@ void cpu_exec_step_atomic(CPUState *cpu)
{
CPUArchState *env = cpu->env_ptr;
TranslationBlock *tb;
- target_ulong cs_base, pc;
+ vaddr pc;
+ uint64_t cs_base;
uint32_t flags, cflags;
int tb_exit;
@@ -942,7 +944,8 @@ cpu_exec_loop(CPUState *cpu, SyncClocks *sc)
while (!cpu_handle_interrupt(cpu, &last_tb)) {
TranslationBlock *tb;
- target_ulong cs_base, pc;
+ vaddr pc;
+ uint64_t cs_base;
uint32_t flags, cflags;
cpu_get_tb_cpu_state(cpu->env_ptr, &pc, &cs_base, &flags);
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index 03caf62459..03c49baf1c 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -580,7 +580,8 @@ void tb_check_watchpoint(CPUState *cpu, uintptr_t retaddr)
/* The exception probably happened in a helper. The CPU state should
have been saved before calling it. Fetch the PC from there. */
CPUArchState *env = cpu->env_ptr;
- target_ulong pc, cs_base;
+ vaddr pc;
+ uint64_t cs_base;
tb_page_addr_t addr;
uint32_t flags;
diff --git a/target/alpha/cpu.h b/target/alpha/cpu.h
index 5e67304d81..fcd20bfd3a 100644
--- a/target/alpha/cpu.h
+++ b/target/alpha/cpu.h
@@ -462,8 +462,8 @@ void alpha_cpu_do_transaction_failed(CPUState *cs, hwaddr physaddr,
MemTxResult response, uintptr_t retaddr);
#endif
-static inline void cpu_get_tb_cpu_state(CPUAlphaState *env, target_ulong *pc,
- target_ulong *cs_base, uint32_t *pflags)
+static inline void cpu_get_tb_cpu_state(CPUAlphaState *env, vaddr *pc,
+ uint64_t *cs_base, uint32_t *pflags)
{
*pc = env->pc;
*cs_base = 0;
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index af0119addf..eb2b570b3f 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -3134,8 +3134,8 @@ static inline bool arm_cpu_bswap_data(CPUARMState *env)
}
#endif
-void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
- target_ulong *cs_base, uint32_t *flags);
+void cpu_get_tb_cpu_state(CPUARMState *env, vaddr *pc,
+ uint64_t *cs_base, uint32_t *flags);
enum {
QEMU_PSCI_CONDUIT_DISABLED = 0,
diff --git a/target/arm/helper.c b/target/arm/helper.c
index d4bee43bd0..4360044cbf 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -11847,8 +11847,8 @@ static bool mve_no_pred(CPUARMState *env)
return true;
}
-void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
- target_ulong *cs_base, uint32_t *pflags)
+void cpu_get_tb_cpu_state(CPUARMState *env, vaddr *pc,
+ uint64_t *cs_base, uint32_t *pflags)
{
CPUARMTBFlags flags;
diff --git a/target/avr/cpu.h b/target/avr/cpu.h
index f19dd72926..7225174668 100644
--- a/target/avr/cpu.h
+++ b/target/avr/cpu.h
@@ -190,8 +190,8 @@ enum {
TB_FLAGS_SKIP = 2,
};
-static inline void cpu_get_tb_cpu_state(CPUAVRState *env, target_ulong *pc,
- target_ulong *cs_base, uint32_t *pflags)
+static inline void cpu_get_tb_cpu_state(CPUAVRState *env, vaddr *pc,
+ uint64_t *cs_base, uint32_t *pflags)
{
uint32_t flags = 0;
diff --git a/target/cris/cpu.h b/target/cris/cpu.h
index 71fa1f96e0..8e37c6e50d 100644
--- a/target/cris/cpu.h
+++ b/target/cris/cpu.h
@@ -266,8 +266,8 @@ static inline int cpu_mmu_index (CPUCRISState *env, bool ifetch)
#include "exec/cpu-all.h"
-static inline void cpu_get_tb_cpu_state(CPUCRISState *env, target_ulong *pc,
- target_ulong *cs_base, uint32_t *flags)
+static inline void cpu_get_tb_cpu_state(CPUCRISState *env, vaddr *pc,
+ uint64_t *cs_base, uint32_t *flags)
{
*pc = env->pc;
*cs_base = 0;
diff --git a/target/hexagon/cpu.h b/target/hexagon/cpu.h
index bfcb1057dd..daef5c3f00 100644
--- a/target/hexagon/cpu.h
+++ b/target/hexagon/cpu.h
@@ -153,8 +153,8 @@ struct ArchCPU {
FIELD(TB_FLAGS, IS_TIGHT_LOOP, 0, 1)
-static inline void cpu_get_tb_cpu_state(CPUHexagonState *env, target_ulong *pc,
- target_ulong *cs_base, uint32_t *flags)
+static inline void cpu_get_tb_cpu_state(CPUHexagonState *env, vaddr *pc,
+ uint64_t *cs_base, uint32_t *flags)
{
uint32_t hex_flags = 0;
*pc = env->gpr[HEX_REG_PC];
diff --git a/target/hppa/cpu.h b/target/hppa/cpu.h
index b595ef25a9..7373177b55 100644
--- a/target/hppa/cpu.h
+++ b/target/hppa/cpu.h
@@ -268,9 +268,8 @@ static inline target_ulong hppa_form_gva(CPUHPPAState *env, uint64_t spc,
#define TB_FLAG_PRIV_SHIFT 8
#define TB_FLAG_UNALIGN 0x400
-static inline void cpu_get_tb_cpu_state(CPUHPPAState *env, target_ulong *pc,
- target_ulong *cs_base,
- uint32_t *pflags)
+static inline void cpu_get_tb_cpu_state(CPUHPPAState *env, vaddr *pc,
+ uint64_t *cs_base, uint32_t *pflags)
{
uint32_t flags = env->psw_n * PSW_N;
diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index cd047e0410..2c9b0d2ebc 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -2275,8 +2275,8 @@ static inline int cpu_mmu_index_kernel(CPUX86State *env)
#include "hw/i386/apic.h"
#endif
-static inline void cpu_get_tb_cpu_state(CPUX86State *env, target_ulong *pc,
- target_ulong *cs_base, uint32_t *flags)
+static inline void cpu_get_tb_cpu_state(CPUX86State *env, vaddr *pc,
+ uint64_t *cs_base, uint32_t *flags)
{
*cs_base = env->segs[R_CS].base;
*pc = *cs_base + env->eip;
diff --git a/target/loongarch/cpu.h b/target/loongarch/cpu.h
index b23f38c3d5..ed04027af1 100644
--- a/target/loongarch/cpu.h
+++ b/target/loongarch/cpu.h
@@ -427,10 +427,8 @@ static inline int cpu_mmu_index(CPULoongArchState *env, bool ifetch)
#define HW_FLAGS_EUEN_FPE 0x04
#define HW_FLAGS_EUEN_SXE 0x08
-static inline void cpu_get_tb_cpu_state(CPULoongArchState *env,
- target_ulong *pc,
- target_ulong *cs_base,
- uint32_t *flags)
+static inline void cpu_get_tb_cpu_state(CPULoongArchState *env, vaddr *pc,
+ uint64_t *cs_base, uint32_t *flags)
{
*pc = env->pc;
*cs_base = 0;
diff --git a/target/m68k/cpu.h b/target/m68k/cpu.h
index 048d5aae2b..cf70282717 100644
--- a/target/m68k/cpu.h
+++ b/target/m68k/cpu.h
@@ -601,8 +601,8 @@ void m68k_cpu_transaction_failed(CPUState *cs, hwaddr physaddr, vaddr addr,
#define TB_FLAGS_TRACE 16
#define TB_FLAGS_TRACE_BIT (1 << TB_FLAGS_TRACE)
-static inline void cpu_get_tb_cpu_state(CPUM68KState *env, target_ulong *pc,
- target_ulong *cs_base, uint32_t *flags)
+static inline void cpu_get_tb_cpu_state(CPUM68KState *env, vaddr *pc,
+ uint64_t *cs_base, uint32_t *flags)
{
*pc = env->pc;
*cs_base = 0;
diff --git a/target/microblaze/cpu.h b/target/microblaze/cpu.h
index 88324d0bc1..3525de144c 100644
--- a/target/microblaze/cpu.h
+++ b/target/microblaze/cpu.h
@@ -401,8 +401,8 @@ void mb_tcg_init(void);
/* Ensure there is no overlap between the two masks. */
QEMU_BUILD_BUG_ON(MSR_TB_MASK & IFLAGS_TB_MASK);
-static inline void cpu_get_tb_cpu_state(CPUMBState *env, target_ulong *pc,
- target_ulong *cs_base, uint32_t *flags)
+static inline void cpu_get_tb_cpu_state(CPUMBState *env, vaddr *pc,
+ uint64_t *cs_base, uint32_t *flags)
{
*pc = env->pc;
*flags = (env->iflags & IFLAGS_TB_MASK) | (env->msr & MSR_TB_MASK);
diff --git a/target/mips/cpu.h b/target/mips/cpu.h
index 142c55af47..a3bc646976 100644
--- a/target/mips/cpu.h
+++ b/target/mips/cpu.h
@@ -1313,8 +1313,8 @@ void itc_reconfigure(struct MIPSITUState *tag);
/* helper.c */
target_ulong exception_resume_pc(CPUMIPSState *env);
-static inline void cpu_get_tb_cpu_state(CPUMIPSState *env, target_ulong *pc,
- target_ulong *cs_base, uint32_t *flags)
+static inline void cpu_get_tb_cpu_state(CPUMIPSState *env, vaddr *pc,
+ uint64_t *cs_base, uint32_t *flags)
{
*pc = env->active_tc.PC;
*cs_base = 0;
diff --git a/target/nios2/cpu.h b/target/nios2/cpu.h
index 20042c4332..477a3161fd 100644
--- a/target/nios2/cpu.h
+++ b/target/nios2/cpu.h
@@ -302,8 +302,8 @@ FIELD(TBFLAGS, CRS0, 0, 1) /* Set if CRS == 0. */
FIELD(TBFLAGS, U, 1, 1) /* Overlaps CR_STATUS_U */
FIELD(TBFLAGS, R0_0, 2, 1) /* Set if R0 == 0. */
-static inline void cpu_get_tb_cpu_state(CPUNios2State *env, target_ulong *pc,
- target_ulong *cs_base, uint32_t *flags)
+static inline void cpu_get_tb_cpu_state(CPUNios2State *env, vaddr *pc,
+ uint64_t *cs_base, uint32_t *flags)
{
unsigned crs = FIELD_EX32(env->ctrl[CR_STATUS], CR_STATUS, CRS);
diff --git a/target/openrisc/cpu.h b/target/openrisc/cpu.h
index f16e8b3274..92c38f54c2 100644
--- a/target/openrisc/cpu.h
+++ b/target/openrisc/cpu.h
@@ -367,9 +367,8 @@ static inline void cpu_set_gpr(CPUOpenRISCState *env, int i, uint32_t val)
env->shadow_gpr[0][i] = val;
}
-static inline void cpu_get_tb_cpu_state(CPUOpenRISCState *env,
- target_ulong *pc,
- target_ulong *cs_base, uint32_t *flags)
+static inline void cpu_get_tb_cpu_state(CPUOpenRISCState *env, vaddr *pc,
+ uint64_t *cs_base, uint32_t *flags)
{
*pc = env->pc;
*cs_base = 0;
diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
index 0ee2adc105..fe63257485 100644
--- a/target/ppc/cpu.h
+++ b/target/ppc/cpu.h
@@ -2498,11 +2498,11 @@ void cpu_write_xer(CPUPPCState *env, target_ulong xer);
#define is_book3s_arch2x(ctx) (!!((ctx)->insns_flags & PPC_SEGMENT_64B))
#ifdef CONFIG_DEBUG_TCG
-void cpu_get_tb_cpu_state(CPUPPCState *env, target_ulong *pc,
- target_ulong *cs_base, uint32_t *flags);
+void cpu_get_tb_cpu_state(CPUPPCState *env, vaddr *pc,
+ uint64_t *cs_base, uint32_t *flags);
#else
-static inline void cpu_get_tb_cpu_state(CPUPPCState *env, target_ulong *pc,
- target_ulong *cs_base, uint32_t *flags)
+static inline void cpu_get_tb_cpu_state(CPUPPCState *env, vaddr *pc,
+ uint64_t *cs_base, uint32_t *flags)
{
*pc = env->nip;
*cs_base = 0;
diff --git a/target/ppc/helper_regs.c b/target/ppc/helper_regs.c
index e27f4a75a4..f380342d4d 100644
--- a/target/ppc/helper_regs.c
+++ b/target/ppc/helper_regs.c
@@ -218,8 +218,8 @@ void hreg_update_pmu_hflags(CPUPPCState *env)
}
#ifdef CONFIG_DEBUG_TCG
-void cpu_get_tb_cpu_state(CPUPPCState *env, target_ulong *pc,
- target_ulong *cs_base, uint32_t *flags)
+void cpu_get_tb_cpu_state(CPUPPCState *env, vaddr *pc,
+ uint64_t *cs_base, uint32_t *flags)
{
uint32_t hflags_current = env->hflags;
uint32_t hflags_rebuilt;
diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index e3e08d315f..7bff1d47f6 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -587,8 +587,8 @@ static inline uint32_t vext_get_vlmax(RISCVCPU *cpu, target_ulong vtype)
return cpu->cfg.vlen >> (sew + 3 - lmul);
}
-void cpu_get_tb_cpu_state(CPURISCVState *env, target_ulong *pc,
- target_ulong *cs_base, uint32_t *pflags);
+void cpu_get_tb_cpu_state(CPURISCVState *env, vaddr *pc,
+ uint64_t *cs_base, uint32_t *pflags);
void riscv_cpu_update_mask(CPURISCVState *env);
diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
index 90cef9856d..a944f25694 100644
--- a/target/riscv/cpu_helper.c
+++ b/target/riscv/cpu_helper.c
@@ -61,8 +61,8 @@ int riscv_cpu_mmu_index(CPURISCVState *env, bool ifetch)
#endif
}
-void cpu_get_tb_cpu_state(CPURISCVState *env, target_ulong *pc,
- target_ulong *cs_base, uint32_t *pflags)
+void cpu_get_tb_cpu_state(CPURISCVState *env, vaddr *pc,
+ uint64_t *cs_base, uint32_t *pflags)
{
CPUState *cs = env_cpu(env);
RISCVCPU *cpu = RISCV_CPU(cs);
diff --git a/target/rx/cpu.h b/target/rx/cpu.h
index 555d230f24..7f03ffcfed 100644
--- a/target/rx/cpu.h
+++ b/target/rx/cpu.h
@@ -143,8 +143,8 @@ void rx_cpu_unpack_psw(CPURXState *env, uint32_t psw, int rte);
#define RX_CPU_IRQ 0
#define RX_CPU_FIR 1
-static inline void cpu_get_tb_cpu_state(CPURXState *env, target_ulong *pc,
- target_ulong *cs_base, uint32_t *flags)
+static inline void cpu_get_tb_cpu_state(CPURXState *env, vaddr *pc,
+ uint64_t *cs_base, uint32_t *flags)
{
*pc = env->pc;
*cs_base = 0;
diff --git a/target/s390x/cpu.h b/target/s390x/cpu.h
index f130c29f83..eb5b65b7d3 100644
--- a/target/s390x/cpu.h
+++ b/target/s390x/cpu.h
@@ -378,8 +378,8 @@ static inline int cpu_mmu_index(CPUS390XState *env, bool ifetch)
#endif
}
-static inline void cpu_get_tb_cpu_state(CPUS390XState* env, target_ulong *pc,
- target_ulong *cs_base, uint32_t *flags)
+static inline void cpu_get_tb_cpu_state(CPUS390XState *env, vaddr *pc,
+ uint64_t *cs_base, uint32_t *flags)
{
if (env->psw.addr & 1) {
/*
diff --git a/target/sh4/cpu.h b/target/sh4/cpu.h
index 02bfd612ea..1399d3840f 100644
--- a/target/sh4/cpu.h
+++ b/target/sh4/cpu.h
@@ -368,8 +368,8 @@ static inline void cpu_write_sr(CPUSH4State *env, target_ulong sr)
env->sr = sr & ~((1u << SR_M) | (1u << SR_Q) | (1u << SR_T));
}
-static inline void cpu_get_tb_cpu_state(CPUSH4State *env, target_ulong *pc,
- target_ulong *cs_base, uint32_t *flags)
+static inline void cpu_get_tb_cpu_state(CPUSH4State *env, vaddr *pc,
+ uint64_t *cs_base, uint32_t *flags)
{
*pc = env->pc;
/* For a gUSA region, notice the end of the region. */
diff --git a/target/sparc/cpu.h b/target/sparc/cpu.h
index 3d090e8278..95d2d0da71 100644
--- a/target/sparc/cpu.h
+++ b/target/sparc/cpu.h
@@ -762,8 +762,8 @@ trap_state* cpu_tsptr(CPUSPARCState* env);
#define TB_FLAG_HYPER (1 << 7)
#define TB_FLAG_ASI_SHIFT 24
-static inline void cpu_get_tb_cpu_state(CPUSPARCState *env, target_ulong *pc,
- target_ulong *cs_base, uint32_t *pflags)
+static inline void cpu_get_tb_cpu_state(CPUSPARCState *env, vaddr *pc,
+ uint64_t *cs_base, uint32_t *pflags)
{
uint32_t flags;
*pc = env->pc;
diff --git a/target/tricore/cpu.h b/target/tricore/cpu.h
index d98a3fb671..59ad14fbe6 100644
--- a/target/tricore/cpu.h
+++ b/target/tricore/cpu.h
@@ -380,8 +380,8 @@ static inline int cpu_mmu_index(CPUTriCoreState *env, bool ifetch)
void cpu_state_reset(CPUTriCoreState *s);
void tricore_tcg_init(void);
-static inline void cpu_get_tb_cpu_state(CPUTriCoreState *env, target_ulong *pc,
- target_ulong *cs_base, uint32_t *flags)
+static inline void cpu_get_tb_cpu_state(CPUTriCoreState *env, vaddr *pc,
+ uint64_t *cs_base, uint32_t *flags)
{
*pc = env->PC;
*cs_base = 0;
diff --git a/target/xtensa/cpu.h b/target/xtensa/cpu.h
index b7a54711a6..87fe992ba6 100644
--- a/target/xtensa/cpu.h
+++ b/target/xtensa/cpu.h
@@ -727,8 +727,8 @@ static inline int cpu_mmu_index(CPUXtensaState *env, bool ifetch)
#include "exec/cpu-all.h"
-static inline void cpu_get_tb_cpu_state(CPUXtensaState *env, target_ulong *pc,
- target_ulong *cs_base, uint32_t *flags)
+static inline void cpu_get_tb_cpu_state(CPUXtensaState *env, vaddr *pc,
+ uint64_t *cs_base, uint32_t *flags)
{
*pc = env->pc;
*cs_base = 0;
--
2.41.0
* [PATCH v3 04/12] accel/tcg/cputlb.c: Widen CPUTLBEntry access functions
From: Anton Johansson via @ 2023-06-21 13:56 UTC (permalink / raw)
To: qemu-devel
Cc: ale, richard.henderson, pbonzini, eduardo, philmd,
marcel.apfelbaum, wangyanan55
Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/cputlb.c | 8 ++++----
include/exec/cpu_ldst.h | 10 +++++-----
2 files changed, 9 insertions(+), 9 deletions(-)
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 5caeccb52d..ac990a1526 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1453,7 +1453,7 @@ static bool victim_tlb_hit(CPUArchState *env, size_t mmu_idx, size_t index,
assert_cpu_is_self(env_cpu(env));
for (vidx = 0; vidx < CPU_VTLB_SIZE; ++vidx) {
CPUTLBEntry *vtlb = &env_tlb(env)->d[mmu_idx].vtable[vidx];
- target_ulong cmp = tlb_read_idx(vtlb, access_type);
+ uint64_t cmp = tlb_read_idx(vtlb, access_type);
if (cmp == page) {
/* Found entry in victim tlb, swap tlb and iotlb. */
@@ -1507,7 +1507,7 @@ static int probe_access_internal(CPUArchState *env, target_ulong addr,
{
uintptr_t index = tlb_index(env, mmu_idx, addr);
CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
- target_ulong tlb_addr = tlb_read_idx(entry, access_type);
+ uint64_t tlb_addr = tlb_read_idx(entry, access_type);
target_ulong page_addr = addr & TARGET_PAGE_MASK;
int flags = TLB_FLAGS_MASK;
@@ -1694,7 +1694,7 @@ bool tlb_plugin_lookup(CPUState *cpu, vaddr addr, int mmu_idx,
CPUArchState *env = cpu->env_ptr;
CPUTLBEntry *tlbe = tlb_entry(env, mmu_idx, addr);
uintptr_t index = tlb_index(env, mmu_idx, addr);
- vaddr tlb_addr = is_store ? tlb_addr_write(tlbe) : tlbe->addr_read;
+ uint64_t tlb_addr = is_store ? tlb_addr_write(tlbe) : tlbe->addr_read;
if (likely(tlb_hit(tlb_addr, addr))) {
/* We must have an iotlb entry for MMIO */
@@ -1759,7 +1759,7 @@ static bool mmu_lookup1(CPUArchState *env, MMULookupPageData *data,
target_ulong addr = data->addr;
uintptr_t index = tlb_index(env, mmu_idx, addr);
CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
- target_ulong tlb_addr = tlb_read_idx(entry, access_type);
+ uint64_t tlb_addr = tlb_read_idx(entry, access_type);
bool maybe_resized = false;
/* If the TLB entry is for a different page, reload and try again. */
diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
index 896f305ff3..645476f0e5 100644
--- a/include/exec/cpu_ldst.h
+++ b/include/exec/cpu_ldst.h
@@ -328,8 +328,8 @@ static inline void clear_helper_retaddr(void)
#include "tcg/oversized-guest.h"
-static inline target_ulong tlb_read_idx(const CPUTLBEntry *entry,
- MMUAccessType access_type)
+static inline uint64_t tlb_read_idx(const CPUTLBEntry *entry,
+ MMUAccessType access_type)
{
/* Do not rearrange the CPUTLBEntry structure members. */
QEMU_BUILD_BUG_ON(offsetof(CPUTLBEntry, addr_read) !=
@@ -355,14 +355,14 @@ static inline target_ulong tlb_read_idx(const CPUTLBEntry *entry,
#endif
}
-static inline target_ulong tlb_addr_write(const CPUTLBEntry *entry)
+static inline uint64_t tlb_addr_write(const CPUTLBEntry *entry)
{
return tlb_read_idx(entry, MMU_DATA_STORE);
}
/* Find the TLB index corresponding to the mmu_idx + address pair. */
static inline uintptr_t tlb_index(CPUArchState *env, uintptr_t mmu_idx,
- target_ulong addr)
+ vaddr addr)
{
uintptr_t size_mask = env_tlb(env)->f[mmu_idx].mask >> CPU_TLB_ENTRY_BITS;
@@ -371,7 +371,7 @@ static inline uintptr_t tlb_index(CPUArchState *env, uintptr_t mmu_idx,
/* Find the TLB entry corresponding to the mmu_idx + address pair. */
static inline CPUTLBEntry *tlb_entry(CPUArchState *env, uintptr_t mmu_idx,
- target_ulong addr)
+ vaddr addr)
{
return &env_tlb(env)->f[mmu_idx].table[tlb_index(env, mmu_idx, addr)];
}
--
2.41.0
* [PATCH v3 05/12] accel/tcg/cputlb.c: Widen addr in MMULookupPageData
From: Anton Johansson via @ 2023-06-21 13:56 UTC (permalink / raw)
To: qemu-devel
Cc: ale, richard.henderson, pbonzini, eduardo, philmd,
marcel.apfelbaum, wangyanan55
Functions accessing MMULookupPageData are also updated.
Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/cputlb.c | 30 +++++++++++++++---------------
1 file changed, 15 insertions(+), 15 deletions(-)
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index ac990a1526..cc53d0fb64 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1729,7 +1729,7 @@ bool tlb_plugin_lookup(CPUState *cpu, vaddr addr, int mmu_idx,
typedef struct MMULookupPageData {
CPUTLBEntryFull *full;
void *haddr;
- target_ulong addr;
+ vaddr addr;
int flags;
int size;
} MMULookupPageData;
@@ -1756,7 +1756,7 @@ typedef struct MMULookupLocals {
static bool mmu_lookup1(CPUArchState *env, MMULookupPageData *data,
int mmu_idx, MMUAccessType access_type, uintptr_t ra)
{
- target_ulong addr = data->addr;
+ vaddr addr = data->addr;
uintptr_t index = tlb_index(env, mmu_idx, addr);
CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
uint64_t tlb_addr = tlb_read_idx(entry, access_type);
@@ -1796,7 +1796,7 @@ static void mmu_watch_or_dirty(CPUArchState *env, MMULookupPageData *data,
MMUAccessType access_type, uintptr_t ra)
{
CPUTLBEntryFull *full = data->full;
- target_ulong addr = data->addr;
+ vaddr addr = data->addr;
int flags = data->flags;
int size = data->size;
@@ -1827,7 +1827,7 @@ static void mmu_watch_or_dirty(CPUArchState *env, MMULookupPageData *data,
* Resolve the translation for the page(s) beginning at @addr, for MemOp.size
* bytes. Return true if the lookup crosses a page boundary.
*/
-static bool mmu_lookup(CPUArchState *env, target_ulong addr, MemOpIdx oi,
+static bool mmu_lookup(CPUArchState *env, vaddr addr, MemOpIdx oi,
uintptr_t ra, MMUAccessType type, MMULookupLocals *l)
{
unsigned a_bits;
@@ -2024,7 +2024,7 @@ static uint64_t do_ld_mmio_beN(CPUArchState *env, MMULookupPageData *p,
MMUAccessType type, uintptr_t ra)
{
CPUTLBEntryFull *full = p->full;
- target_ulong addr = p->addr;
+ vaddr addr = p->addr;
int i, size = p->size;
QEMU_IOTHREAD_LOCK_GUARD();
@@ -2333,7 +2333,7 @@ static uint64_t do_ld_8(CPUArchState *env, MMULookupPageData *p, int mmu_idx,
return ret;
}
-static uint8_t do_ld1_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi,
+static uint8_t do_ld1_mmu(CPUArchState *env, vaddr addr, MemOpIdx oi,
uintptr_t ra, MMUAccessType access_type)
{
MMULookupLocals l;
@@ -2352,7 +2352,7 @@ tcg_target_ulong helper_ldub_mmu(CPUArchState *env, uint64_t addr,
return do_ld1_mmu(env, addr, oi, retaddr, MMU_DATA_LOAD);
}
-static uint16_t do_ld2_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi,
+static uint16_t do_ld2_mmu(CPUArchState *env, vaddr addr, MemOpIdx oi,
uintptr_t ra, MMUAccessType access_type)
{
MMULookupLocals l;
@@ -2383,7 +2383,7 @@ tcg_target_ulong helper_lduw_mmu(CPUArchState *env, uint64_t addr,
return do_ld2_mmu(env, addr, oi, retaddr, MMU_DATA_LOAD);
}
-static uint32_t do_ld4_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi,
+static uint32_t do_ld4_mmu(CPUArchState *env, vaddr addr, MemOpIdx oi,
uintptr_t ra, MMUAccessType access_type)
{
MMULookupLocals l;
@@ -2410,7 +2410,7 @@ tcg_target_ulong helper_ldul_mmu(CPUArchState *env, uint64_t addr,
return do_ld4_mmu(env, addr, oi, retaddr, MMU_DATA_LOAD);
}
-static uint64_t do_ld8_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi,
+static uint64_t do_ld8_mmu(CPUArchState *env, vaddr addr, MemOpIdx oi,
uintptr_t ra, MMUAccessType access_type)
{
MMULookupLocals l;
@@ -2460,7 +2460,7 @@ tcg_target_ulong helper_ldsl_mmu(CPUArchState *env, uint64_t addr,
return (int32_t)helper_ldul_mmu(env, addr, oi, retaddr);
}
-static Int128 do_ld16_mmu(CPUArchState *env, target_ulong addr,
+static Int128 do_ld16_mmu(CPUArchState *env, vaddr addr,
MemOpIdx oi, uintptr_t ra)
{
MMULookupLocals l;
@@ -2617,7 +2617,7 @@ static uint64_t do_st_mmio_leN(CPUArchState *env, MMULookupPageData *p,
uint64_t val_le, int mmu_idx, uintptr_t ra)
{
CPUTLBEntryFull *full = p->full;
- target_ulong addr = p->addr;
+ vaddr addr = p->addr;
int i, size = p->size;
QEMU_IOTHREAD_LOCK_GUARD();
@@ -2808,7 +2808,7 @@ void helper_stb_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
do_st_1(env, &l.page[0], val, l.mmu_idx, ra);
}
-static void do_st2_mmu(CPUArchState *env, target_ulong addr, uint16_t val,
+static void do_st2_mmu(CPUArchState *env, vaddr addr, uint16_t val,
MemOpIdx oi, uintptr_t ra)
{
MMULookupLocals l;
@@ -2837,7 +2837,7 @@ void helper_stw_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
do_st2_mmu(env, addr, val, oi, retaddr);
}
-static void do_st4_mmu(CPUArchState *env, target_ulong addr, uint32_t val,
+static void do_st4_mmu(CPUArchState *env, vaddr addr, uint32_t val,
MemOpIdx oi, uintptr_t ra)
{
MMULookupLocals l;
@@ -2864,7 +2864,7 @@ void helper_stl_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
do_st4_mmu(env, addr, val, oi, retaddr);
}
-static void do_st8_mmu(CPUArchState *env, target_ulong addr, uint64_t val,
+static void do_st8_mmu(CPUArchState *env, vaddr addr, uint64_t val,
MemOpIdx oi, uintptr_t ra)
{
MMULookupLocals l;
@@ -2891,7 +2891,7 @@ void helper_stq_mmu(CPUArchState *env, uint64_t addr, uint64_t val,
do_st8_mmu(env, addr, val, oi, retaddr);
}
-static void do_st16_mmu(CPUArchState *env, target_ulong addr, Int128 val,
+static void do_st16_mmu(CPUArchState *env, vaddr addr, Int128 val,
MemOpIdx oi, uintptr_t ra)
{
MMULookupLocals l;
--
2.41.0
^ permalink raw reply related [flat|nested] 16+ messages in thread
* [PATCH v3 06/12] accel/tcg/cpu-exec.c: Widen pc to vaddr
2023-06-21 13:56 [PATCH v3 00/12] Start replacing target_ulong with vaddr Anton Johansson via
` (4 preceding siblings ...)
2023-06-21 13:56 ` [PATCH v3 05/12] accel/tcg/cputlb.c: Widen addr in MMULookupPageData Anton Johansson via
@ 2023-06-21 13:56 ` Anton Johansson via
2023-06-21 13:56 ` [PATCH v3 07/12] accel/tcg: Widen pc to vaddr in CPUJumpCache Anton Johansson via
` (6 subsequent siblings)
12 siblings, 0 replies; 16+ messages in thread
From: Anton Johansson via @ 2023-06-21 13:56 UTC (permalink / raw)
To: qemu-devel
Cc: ale, richard.henderson, pbonzini, eduardo, philmd,
marcel.apfelbaum, wangyanan55
Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/cpu-exec.c | 34 +++++++++++++++++-----------------
1 file changed, 17 insertions(+), 17 deletions(-)
diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index 4d952a6cc2..ba1890a373 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -169,8 +169,8 @@ uint32_t curr_cflags(CPUState *cpu)
}
struct tb_desc {
- target_ulong pc;
- target_ulong cs_base;
+ vaddr pc;
+ uint64_t cs_base;
CPUArchState *env;
tb_page_addr_t page_addr0;
uint32_t flags;
@@ -193,7 +193,7 @@ static bool tb_lookup_cmp(const void *p, const void *d)
return true;
} else {
tb_page_addr_t phys_page1;
- target_ulong virt_page1;
+ vaddr virt_page1;
/*
* We know that the first page matched, and an otherwise valid TB
@@ -214,8 +214,8 @@ static bool tb_lookup_cmp(const void *p, const void *d)
return false;
}
-static TranslationBlock *tb_htable_lookup(CPUState *cpu, target_ulong pc,
- target_ulong cs_base, uint32_t flags,
+static TranslationBlock *tb_htable_lookup(CPUState *cpu, vaddr pc,
+ uint64_t cs_base, uint32_t flags,
uint32_t cflags)
{
tb_page_addr_t phys_pc;
@@ -238,9 +238,9 @@ static TranslationBlock *tb_htable_lookup(CPUState *cpu, target_ulong pc,
}
/* Might cause an exception, so have a longjmp destination ready */
-static inline TranslationBlock *tb_lookup(CPUState *cpu, target_ulong pc,
- target_ulong cs_base,
- uint32_t flags, uint32_t cflags)
+static inline TranslationBlock *tb_lookup(CPUState *cpu, vaddr pc,
+ uint64_t cs_base, uint32_t flags,
+ uint32_t cflags)
{
TranslationBlock *tb;
CPUJumpCache *jc;
@@ -292,13 +292,13 @@ static inline TranslationBlock *tb_lookup(CPUState *cpu, target_ulong pc,
return tb;
}
-static void log_cpu_exec(target_ulong pc, CPUState *cpu,
+static void log_cpu_exec(vaddr pc, CPUState *cpu,
const TranslationBlock *tb)
{
if (qemu_log_in_addr_range(pc)) {
qemu_log_mask(CPU_LOG_EXEC,
"Trace %d: %p [%08" PRIx64
- "/" TARGET_FMT_lx "/%08x/%08x] %s\n",
+ "/%" VADDR_PRIx "/%08x/%08x] %s\n",
cpu->cpu_index, tb->tc.ptr, tb->cs_base, pc,
tb->flags, tb->cflags, lookup_symbol(pc));
@@ -323,7 +323,7 @@ static void log_cpu_exec(target_ulong pc, CPUState *cpu,
}
}
-static bool check_for_breakpoints_slow(CPUState *cpu, target_ulong pc,
+static bool check_for_breakpoints_slow(CPUState *cpu, vaddr pc,
uint32_t *cflags)
{
CPUBreakpoint *bp;
@@ -389,7 +389,7 @@ static bool check_for_breakpoints_slow(CPUState *cpu, target_ulong pc,
return false;
}
-static inline bool check_for_breakpoints(CPUState *cpu, target_ulong pc,
+static inline bool check_for_breakpoints(CPUState *cpu, vaddr pc,
uint32_t *cflags)
{
return unlikely(!QTAILQ_EMPTY(&cpu->breakpoints)) &&
@@ -485,10 +485,10 @@ cpu_tb_exec(CPUState *cpu, TranslationBlock *itb, int *tb_exit)
cc->set_pc(cpu, last_tb->pc);
}
if (qemu_loglevel_mask(CPU_LOG_EXEC)) {
- target_ulong pc = log_pc(cpu, last_tb);
+ vaddr pc = log_pc(cpu, last_tb);
if (qemu_log_in_addr_range(pc)) {
- qemu_log("Stopped execution of TB chain before %p ["
- TARGET_FMT_lx "] %s\n",
+ qemu_log("Stopped execution of TB chain before %p [%"
+ VADDR_PRIx "] %s\n",
last_tb->tc.ptr, pc, lookup_symbol(pc));
}
}
@@ -882,8 +882,8 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
}
static inline void cpu_loop_exec_tb(CPUState *cpu, TranslationBlock *tb,
- target_ulong pc,
- TranslationBlock **last_tb, int *tb_exit)
+ vaddr pc, TranslationBlock **last_tb,
+ int *tb_exit)
{
int32_t insns_left;
--
2.41.0
* [PATCH v3 07/12] accel/tcg: Widen pc to vaddr in CPUJumpCache
2023-06-21 13:56 [PATCH v3 00/12] Start replacing target_ulong with vaddr Anton Johansson via
` (5 preceding siblings ...)
2023-06-21 13:56 ` [PATCH v3 06/12] accel/tcg/cpu-exec.c: Widen pc to vaddr Anton Johansson via
@ 2023-06-21 13:56 ` Anton Johansson via
2023-06-21 13:56 ` [PATCH v3 08/12] accel: Replace target_ulong with vaddr in probe_*() Anton Johansson via
` (5 subsequent siblings)
12 siblings, 0 replies; 16+ messages in thread
From: Anton Johansson via @ 2023-06-21 13:56 UTC (permalink / raw)
To: qemu-devel
Cc: ale, richard.henderson, pbonzini, eduardo, philmd,
marcel.apfelbaum, wangyanan55
Related functions dealing with the jump cache are also updated.
Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/cputlb.c | 2 +-
accel/tcg/tb-hash.h | 12 ++++++------
accel/tcg/tb-jmp-cache.h | 2 +-
3 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index cc53d0fb64..bdf400f6e6 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -99,7 +99,7 @@ static void tlb_window_reset(CPUTLBDesc *desc, int64_t ns,
desc->window_max_entries = max_entries;
}
-static void tb_jmp_cache_clear_page(CPUState *cpu, target_ulong page_addr)
+static void tb_jmp_cache_clear_page(CPUState *cpu, vaddr page_addr)
{
CPUJumpCache *jc = cpu->tb_jmp_cache;
int i, i0;
diff --git a/accel/tcg/tb-hash.h b/accel/tcg/tb-hash.h
index 2ba2193731..a0c61f25cd 100644
--- a/accel/tcg/tb-hash.h
+++ b/accel/tcg/tb-hash.h
@@ -35,16 +35,16 @@
#define TB_JMP_ADDR_MASK (TB_JMP_PAGE_SIZE - 1)
#define TB_JMP_PAGE_MASK (TB_JMP_CACHE_SIZE - TB_JMP_PAGE_SIZE)
-static inline unsigned int tb_jmp_cache_hash_page(target_ulong pc)
+static inline unsigned int tb_jmp_cache_hash_page(vaddr pc)
{
- target_ulong tmp;
+ vaddr tmp;
tmp = pc ^ (pc >> (TARGET_PAGE_BITS - TB_JMP_PAGE_BITS));
return (tmp >> (TARGET_PAGE_BITS - TB_JMP_PAGE_BITS)) & TB_JMP_PAGE_MASK;
}
-static inline unsigned int tb_jmp_cache_hash_func(target_ulong pc)
+static inline unsigned int tb_jmp_cache_hash_func(vaddr pc)
{
- target_ulong tmp;
+ vaddr tmp;
tmp = pc ^ (pc >> (TARGET_PAGE_BITS - TB_JMP_PAGE_BITS));
return (((tmp >> (TARGET_PAGE_BITS - TB_JMP_PAGE_BITS)) & TB_JMP_PAGE_MASK)
| (tmp & TB_JMP_ADDR_MASK));
@@ -53,7 +53,7 @@ static inline unsigned int tb_jmp_cache_hash_func(target_ulong pc)
#else
/* In user-mode we can get better hashing because we do not have a TLB */
-static inline unsigned int tb_jmp_cache_hash_func(target_ulong pc)
+static inline unsigned int tb_jmp_cache_hash_func(vaddr pc)
{
return (pc ^ (pc >> TB_JMP_CACHE_BITS)) & (TB_JMP_CACHE_SIZE - 1);
}
@@ -61,7 +61,7 @@ static inline unsigned int tb_jmp_cache_hash_func(target_ulong pc)
#endif /* CONFIG_SOFTMMU */
static inline
-uint32_t tb_hash_func(tb_page_addr_t phys_pc, target_ulong pc,
+uint32_t tb_hash_func(tb_page_addr_t phys_pc, vaddr pc,
uint32_t flags, uint64_t flags2, uint32_t cf_mask)
{
return qemu_xxhash8(phys_pc, pc, flags2, flags, cf_mask);
diff --git a/accel/tcg/tb-jmp-cache.h b/accel/tcg/tb-jmp-cache.h
index bee87eb840..bb424c8a05 100644
--- a/accel/tcg/tb-jmp-cache.h
+++ b/accel/tcg/tb-jmp-cache.h
@@ -21,7 +21,7 @@ struct CPUJumpCache {
struct rcu_head rcu;
struct {
TranslationBlock *tb;
- target_ulong pc;
+ vaddr pc;
} array[TB_JMP_CACHE_SIZE];
};
--
2.41.0
* [PATCH v3 08/12] accel: Replace target_ulong with vaddr in probe_*()
2023-06-21 13:56 [PATCH v3 00/12] Start replacing target_ulong with vaddr Anton Johansson via
` (6 preceding siblings ...)
2023-06-21 13:56 ` [PATCH v3 07/12] accel/tcg: Widen pc to vaddr in CPUJumpCache Anton Johansson via
@ 2023-06-21 13:56 ` Anton Johansson via
2023-06-21 13:56 ` [PATCH v3 09/12] accel/tcg: Replace target_ulong with vaddr in *_mmu_lookup() Anton Johansson via
` (4 subsequent siblings)
12 siblings, 0 replies; 16+ messages in thread
From: Anton Johansson via @ 2023-06-21 13:56 UTC (permalink / raw)
To: qemu-devel
Cc: ale, richard.henderson, pbonzini, eduardo, philmd,
marcel.apfelbaum, wangyanan55
Functions for probing memory accesses (and their callers) are updated to
take guest virtual addresses as vaddr instead of target_ulong.
Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/stubs/tcg-stub.c | 4 ++--
accel/tcg/cputlb.c | 12 ++++++------
accel/tcg/user-exec.c | 8 ++++----
include/exec/exec-all.h | 14 +++++++-------
4 files changed, 19 insertions(+), 19 deletions(-)
diff --git a/accel/stubs/tcg-stub.c b/accel/stubs/tcg-stub.c
index 0998e601ad..a9e7a2d5b4 100644
--- a/accel/stubs/tcg-stub.c
+++ b/accel/stubs/tcg-stub.c
@@ -26,14 +26,14 @@ void tcg_flush_jmp_cache(CPUState *cpu)
{
}
-int probe_access_flags(CPUArchState *env, target_ulong addr, int size,
+int probe_access_flags(CPUArchState *env, vaddr addr, int size,
MMUAccessType access_type, int mmu_idx,
bool nonfault, void **phost, uintptr_t retaddr)
{
g_assert_not_reached();
}
-void *probe_access(CPUArchState *env, target_ulong addr, int size,
+void *probe_access(CPUArchState *env, vaddr addr, int size,
MMUAccessType access_type, int mmu_idx, uintptr_t retaddr)
{
/* Handled by hardware accelerator. */
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index bdf400f6e6..d873e58a5d 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1499,7 +1499,7 @@ static void notdirty_write(CPUState *cpu, vaddr mem_vaddr, unsigned size,
}
}
-static int probe_access_internal(CPUArchState *env, target_ulong addr,
+static int probe_access_internal(CPUArchState *env, vaddr addr,
int fault_size, MMUAccessType access_type,
int mmu_idx, bool nonfault,
void **phost, CPUTLBEntryFull **pfull,
@@ -1508,7 +1508,7 @@ static int probe_access_internal(CPUArchState *env, target_ulong addr,
uintptr_t index = tlb_index(env, mmu_idx, addr);
CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
uint64_t tlb_addr = tlb_read_idx(entry, access_type);
- target_ulong page_addr = addr & TARGET_PAGE_MASK;
+ vaddr page_addr = addr & TARGET_PAGE_MASK;
int flags = TLB_FLAGS_MASK;
if (!tlb_hit_page(tlb_addr, page_addr)) {
@@ -1551,7 +1551,7 @@ static int probe_access_internal(CPUArchState *env, target_ulong addr,
return flags;
}
-int probe_access_full(CPUArchState *env, target_ulong addr, int size,
+int probe_access_full(CPUArchState *env, vaddr addr, int size,
MMUAccessType access_type, int mmu_idx,
bool nonfault, void **phost, CPUTLBEntryFull **pfull,
uintptr_t retaddr)
@@ -1568,7 +1568,7 @@ int probe_access_full(CPUArchState *env, target_ulong addr, int size,
return flags;
}
-int probe_access_flags(CPUArchState *env, target_ulong addr, int size,
+int probe_access_flags(CPUArchState *env, vaddr addr, int size,
MMUAccessType access_type, int mmu_idx,
bool nonfault, void **phost, uintptr_t retaddr)
{
@@ -1589,7 +1589,7 @@ int probe_access_flags(CPUArchState *env, target_ulong addr, int size,
return flags;
}
-void *probe_access(CPUArchState *env, target_ulong addr, int size,
+void *probe_access(CPUArchState *env, vaddr addr, int size,
MMUAccessType access_type, int mmu_idx, uintptr_t retaddr)
{
CPUTLBEntryFull *full;
@@ -1648,7 +1648,7 @@ void *tlb_vaddr_to_host(CPUArchState *env, abi_ptr addr,
* NOTE: This function will trigger an exception if the page is
* not executable.
*/
-tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, target_ulong addr,
+tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, vaddr addr,
void **hostp)
{
CPUTLBEntryFull *full;
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index dc8d6b5d40..d71e26a7b5 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -721,7 +721,7 @@ int page_unprotect(target_ulong address, uintptr_t pc)
return current_tb_invalidated ? 2 : 1;
}
-static int probe_access_internal(CPUArchState *env, target_ulong addr,
+static int probe_access_internal(CPUArchState *env, vaddr addr,
int fault_size, MMUAccessType access_type,
bool nonfault, uintptr_t ra)
{
@@ -759,7 +759,7 @@ static int probe_access_internal(CPUArchState *env, target_ulong addr,
cpu_loop_exit_sigsegv(env_cpu(env), addr, access_type, maperr, ra);
}
-int probe_access_flags(CPUArchState *env, target_ulong addr, int size,
+int probe_access_flags(CPUArchState *env, vaddr addr, int size,
MMUAccessType access_type, int mmu_idx,
bool nonfault, void **phost, uintptr_t ra)
{
@@ -771,7 +771,7 @@ int probe_access_flags(CPUArchState *env, target_ulong addr, int size,
return flags;
}
-void *probe_access(CPUArchState *env, target_ulong addr, int size,
+void *probe_access(CPUArchState *env, vaddr addr, int size,
MMUAccessType access_type, int mmu_idx, uintptr_t ra)
{
int flags;
@@ -783,7 +783,7 @@ void *probe_access(CPUArchState *env, target_ulong addr, int size,
return size ? g2h(env_cpu(env), addr) : NULL;
}
-tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, target_ulong addr,
+tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, vaddr addr,
void **hostp)
{
int flags;
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index f5508e242b..cc1c3556f6 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -413,16 +413,16 @@ static inline void tlb_flush_range_by_mmuidx_all_cpus_synced(CPUState *cpu,
* Finally, return the host address for a page that is backed by RAM,
* or NULL if the page requires I/O.
*/
-void *probe_access(CPUArchState *env, target_ulong addr, int size,
+void *probe_access(CPUArchState *env, vaddr addr, int size,
MMUAccessType access_type, int mmu_idx, uintptr_t retaddr);
-static inline void *probe_write(CPUArchState *env, target_ulong addr, int size,
+static inline void *probe_write(CPUArchState *env, vaddr addr, int size,
int mmu_idx, uintptr_t retaddr)
{
return probe_access(env, addr, size, MMU_DATA_STORE, mmu_idx, retaddr);
}
-static inline void *probe_read(CPUArchState *env, target_ulong addr, int size,
+static inline void *probe_read(CPUArchState *env, vaddr addr, int size,
int mmu_idx, uintptr_t retaddr)
{
return probe_access(env, addr, size, MMU_DATA_LOAD, mmu_idx, retaddr);
@@ -447,7 +447,7 @@ static inline void *probe_read(CPUArchState *env, target_ulong addr, int size,
* Do handle clean pages, so exclude TLB_NOTDIRY from the returned flags.
* For simplicity, all "mmio-like" flags are folded to TLB_MMIO.
*/
-int probe_access_flags(CPUArchState *env, target_ulong addr, int size,
+int probe_access_flags(CPUArchState *env, vaddr addr, int size,
MMUAccessType access_type, int mmu_idx,
bool nonfault, void **phost, uintptr_t retaddr);
@@ -460,7 +460,7 @@ int probe_access_flags(CPUArchState *env, target_ulong addr, int size,
* and must be consumed or copied immediately, before any further
* access or changes to TLB @mmu_idx.
*/
-int probe_access_full(CPUArchState *env, target_ulong addr, int size,
+int probe_access_full(CPUArchState *env, vaddr addr, int size,
MMUAccessType access_type, int mmu_idx,
bool nonfault, void **phost,
CPUTLBEntryFull **pfull, uintptr_t retaddr);
@@ -581,7 +581,7 @@ struct MemoryRegionSection *iotlb_to_section(CPUState *cpu,
*
* Note: this function can trigger an exception.
*/
-tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, target_ulong addr,
+tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, vaddr addr,
void **hostp);
/**
@@ -596,7 +596,7 @@ tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, target_ulong addr,
* Note: this function can trigger an exception.
*/
static inline tb_page_addr_t get_page_addr_code(CPUArchState *env,
- target_ulong addr)
+ vaddr addr)
{
return get_page_addr_code_hostp(env, addr, NULL);
}
--
2.41.0
* [PATCH v3 09/12] accel/tcg: Replace target_ulong with vaddr in *_mmu_lookup()
2023-06-21 13:56 [PATCH v3 00/12] Start replacing target_ulong with vaddr Anton Johansson via
` (7 preceding siblings ...)
2023-06-21 13:56 ` [PATCH v3 08/12] accel: Replace target_ulong with vaddr in probe_*() Anton Johansson via
@ 2023-06-21 13:56 ` Anton Johansson via
2023-06-21 13:56 ` [PATCH v3 10/12] accel/tcg: Replace target_ulong with vaddr in translator_*() Anton Johansson via
` (3 subsequent siblings)
12 siblings, 0 replies; 16+ messages in thread
From: Anton Johansson via @ 2023-06-21 13:56 UTC (permalink / raw)
To: qemu-devel
Cc: ale, richard.henderson, pbonzini, eduardo, philmd,
marcel.apfelbaum, wangyanan55
Update atomic_mmu_lookup() and cpu_mmu_lookup() to take the guest
virtual address as a vaddr instead of a target_ulong.
Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/cputlb.c | 6 +++---
accel/tcg/user-exec.c | 6 +++---
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index d873e58a5d..e02cfc550e 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1898,15 +1898,15 @@ static bool mmu_lookup(CPUArchState *env, vaddr addr, MemOpIdx oi,
* Probe for an atomic operation. Do not allow unaligned operations,
* or io operations to proceed. Return the host address.
*/
-static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
- MemOpIdx oi, int size, uintptr_t retaddr)
+static void *atomic_mmu_lookup(CPUArchState *env, vaddr addr, MemOpIdx oi,
+ int size, uintptr_t retaddr)
{
uintptr_t mmu_idx = get_mmuidx(oi);
MemOp mop = get_memop(oi);
int a_bits = get_alignment_bits(mop);
uintptr_t index;
CPUTLBEntry *tlbe;
- target_ulong tlb_addr;
+ vaddr tlb_addr;
void *hostaddr;
CPUTLBEntryFull *full;
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index d71e26a7b5..f8b16d6ab8 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -889,7 +889,7 @@ void page_reset_target_data(target_ulong start, target_ulong last) { }
/* The softmmu versions of these helpers are in cputlb.c. */
-static void *cpu_mmu_lookup(CPUArchState *env, abi_ptr addr,
+static void *cpu_mmu_lookup(CPUArchState *env, vaddr addr,
MemOp mop, uintptr_t ra, MMUAccessType type)
{
int a_bits = get_alignment_bits(mop);
@@ -1324,8 +1324,8 @@ uint64_t cpu_ldq_code_mmu(CPUArchState *env, abi_ptr addr,
/*
* Do not allow unaligned operations to proceed. Return the host address.
*/
-static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
- MemOpIdx oi, int size, uintptr_t retaddr)
+static void *atomic_mmu_lookup(CPUArchState *env, vaddr addr, MemOpIdx oi,
+ int size, uintptr_t retaddr)
{
MemOp mop = get_memop(oi);
int a_bits = get_alignment_bits(mop);
--
2.41.0
* [PATCH v3 10/12] accel/tcg: Replace target_ulong with vaddr in translator_*()
2023-06-21 13:56 [PATCH v3 00/12] Start replacing target_ulong with vaddr Anton Johansson via
` (8 preceding siblings ...)
2023-06-21 13:56 ` [PATCH v3 09/12] accel/tcg: Replace target_ulong with vaddr in *_mmu_lookup() Anton Johansson via
@ 2023-06-21 13:56 ` Anton Johansson via
2023-06-21 13:56 ` [PATCH v3 11/12] accel/tcg: Replace target_ulong with vaddr in page_*() Anton Johansson via
` (2 subsequent siblings)
12 siblings, 0 replies; 16+ messages in thread
From: Anton Johansson via @ 2023-06-21 13:56 UTC (permalink / raw)
To: qemu-devel
Cc: ale, richard.henderson, pbonzini, eduardo, philmd,
marcel.apfelbaum, wangyanan55
Use vaddr for guest virtual addresses in translator_use_goto_tb() and
translator_loop().
Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/translator.c | 10 +++++-----
include/exec/translator.h | 6 +++---
2 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/accel/tcg/translator.c b/accel/tcg/translator.c
index 918a455e73..0fd9efceba 100644
--- a/accel/tcg/translator.c
+++ b/accel/tcg/translator.c
@@ -117,7 +117,7 @@ static void gen_tb_end(const TranslationBlock *tb, uint32_t cflags,
}
}
-bool translator_use_goto_tb(DisasContextBase *db, target_ulong dest)
+bool translator_use_goto_tb(DisasContextBase *db, vaddr dest)
{
/* Suppress goto_tb if requested. */
if (tb_cflags(db->tb) & CF_NO_GOTO_TB) {
@@ -129,8 +129,8 @@ bool translator_use_goto_tb(DisasContextBase *db, target_ulong dest)
}
void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
- target_ulong pc, void *host_pc,
- const TranslatorOps *ops, DisasContextBase *db)
+ vaddr pc, void *host_pc, const TranslatorOps *ops,
+ DisasContextBase *db)
{
uint32_t cflags = tb_cflags(tb);
TCGOp *icount_start_insn;
@@ -235,10 +235,10 @@ void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
}
static void *translator_access(CPUArchState *env, DisasContextBase *db,
- target_ulong pc, size_t len)
+ vaddr pc, size_t len)
{
void *host;
- target_ulong base, end;
+ vaddr base, end;
TranslationBlock *tb;
tb = db->tb;
diff --git a/include/exec/translator.h b/include/exec/translator.h
index 224ae14aa7..a53d3243d4 100644
--- a/include/exec/translator.h
+++ b/include/exec/translator.h
@@ -142,8 +142,8 @@ typedef struct TranslatorOps {
* - When too many instructions have been translated.
*/
void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
- target_ulong pc, void *host_pc,
- const TranslatorOps *ops, DisasContextBase *db);
+ vaddr pc, void *host_pc, const TranslatorOps *ops,
+ DisasContextBase *db);
/**
* translator_use_goto_tb
@@ -153,7 +153,7 @@ void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
* Return true if goto_tb is allowed between the current TB
* and the destination PC.
*/
-bool translator_use_goto_tb(DisasContextBase *db, target_ulong dest);
+bool translator_use_goto_tb(DisasContextBase *db, vaddr dest);
/**
* translator_io_start
--
2.41.0
* [PATCH v3 11/12] accel/tcg: Replace target_ulong with vaddr in page_*()
2023-06-21 13:56 [PATCH v3 00/12] Start replacing target_ulong with vaddr Anton Johansson via
` (9 preceding siblings ...)
2023-06-21 13:56 ` [PATCH v3 10/12] accel/tcg: Replace target_ulong with vaddr in translator_*() Anton Johansson via
@ 2023-06-21 13:56 ` Anton Johansson via
2023-06-26 13:59 ` Richard Henderson
2023-06-21 13:56 ` [PATCH v3 12/12] cpu: Replace target_ulong with hwaddr in tb_invalidate_phys_addr() Anton Johansson via
2023-06-23 5:51 ` [PATCH v3 00/12] Start replacing target_ulong with vaddr Richard Henderson
12 siblings, 1 reply; 16+ messages in thread
From: Anton Johansson via @ 2023-06-21 13:56 UTC (permalink / raw)
To: qemu-devel
Cc: ale, richard.henderson, pbonzini, eduardo, philmd,
marcel.apfelbaum, wangyanan55
Use vaddr for guest virtual addresses in functions dealing with page
flags.
Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/user-exec.c | 44 +++++++++++++++++-------------------
include/exec/cpu-all.h | 10 ++++----
include/exec/translate-all.h | 2 +-
3 files changed, 27 insertions(+), 29 deletions(-)
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index f8b16d6ab8..6b0cbcf2e5 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -144,7 +144,7 @@ typedef struct PageFlagsNode {
static IntervalTreeRoot pageflags_root;
-static PageFlagsNode *pageflags_find(target_ulong start, target_long last)
+static PageFlagsNode *pageflags_find(vaddr start, vaddr last)
{
IntervalTreeNode *n;
@@ -152,8 +152,7 @@ static PageFlagsNode *pageflags_find(target_ulong start, target_long last)
return n ? container_of(n, PageFlagsNode, itree) : NULL;
}
-static PageFlagsNode *pageflags_next(PageFlagsNode *p, target_ulong start,
- target_long last)
+static PageFlagsNode *pageflags_next(PageFlagsNode *p, vaddr start, vaddr last)
{
IntervalTreeNode *n;
@@ -205,7 +204,7 @@ void page_dump(FILE *f)
walk_memory_regions(f, dump_region);
}
-int page_get_flags(target_ulong address)
+int page_get_flags(vaddr address)
{
PageFlagsNode *p = pageflags_find(address, address);
@@ -228,7 +227,7 @@ int page_get_flags(target_ulong address)
}
/* A subroutine of page_set_flags: insert a new node for [start,last]. */
-static void pageflags_create(target_ulong start, target_ulong last, int flags)
+static void pageflags_create(vaddr start, vaddr last, int flags)
{
PageFlagsNode *p = g_new(PageFlagsNode, 1);
@@ -239,13 +238,13 @@ static void pageflags_create(target_ulong start, target_ulong last, int flags)
}
/* A subroutine of page_set_flags: remove everything in [start,last]. */
-static bool pageflags_unset(target_ulong start, target_ulong last)
+static bool pageflags_unset(vaddr start, vaddr last)
{
bool inval_tb = false;
while (true) {
PageFlagsNode *p = pageflags_find(start, last);
- target_ulong p_last;
+ vaddr p_last;
if (!p) {
break;
@@ -284,8 +283,7 @@ static bool pageflags_unset(target_ulong start, target_ulong last)
* A subroutine of page_set_flags: nothing overlaps [start,last],
* but check adjacent mappings and maybe merge into a single range.
*/
-static void pageflags_create_merge(target_ulong start, target_ulong last,
- int flags)
+static void pageflags_create_merge(vaddr start, vaddr last, int flags)
{
PageFlagsNode *next = NULL, *prev = NULL;
@@ -336,11 +334,11 @@ static void pageflags_create_merge(target_ulong start, target_ulong last,
#define PAGE_STICKY (PAGE_ANON | PAGE_PASSTHROUGH | PAGE_TARGET_STICKY)
/* A subroutine of page_set_flags: add flags to [start,last]. */
-static bool pageflags_set_clear(target_ulong start, target_ulong last,
+static bool pageflags_set_clear(vaddr start, vaddr last,
int set_flags, int clear_flags)
{
PageFlagsNode *p;
- target_ulong p_start, p_last;
+ vaddr p_start, p_last;
int p_flags, merge_flags;
bool inval_tb = false;
@@ -480,7 +478,7 @@ static bool pageflags_set_clear(target_ulong start, target_ulong last,
* The flag PAGE_WRITE_ORG is positioned automatically depending
* on PAGE_WRITE. The mmap_lock should already be held.
*/
-void page_set_flags(target_ulong start, target_ulong last, int flags)
+void page_set_flags(vaddr start, vaddr last, int flags)
{
bool reset = false;
bool inval_tb = false;
@@ -520,9 +518,9 @@ void page_set_flags(target_ulong start, target_ulong last, int flags)
}
}
-int page_check_range(target_ulong start, target_ulong len, int flags)
+int page_check_range(vaddr start, vaddr len, int flags)
{
- target_ulong last;
+ vaddr last;
int locked; /* tri-state: =0: unlocked, +1: global, -1: local */
int ret;
@@ -601,7 +599,7 @@ int page_check_range(target_ulong start, target_ulong len, int flags)
void page_protect(tb_page_addr_t address)
{
PageFlagsNode *p;
- target_ulong start, last;
+ vaddr start, last;
int prot;
assert_memory_lock();
@@ -642,7 +640,7 @@ void page_protect(tb_page_addr_t address)
* immediately exited. (We can only return 2 if the 'pc' argument is
* non-zero.)
*/
-int page_unprotect(target_ulong address, uintptr_t pc)
+int page_unprotect(vaddr address, uintptr_t pc)
{
PageFlagsNode *p;
bool current_tb_invalidated;
@@ -676,7 +674,7 @@ int page_unprotect(target_ulong address, uintptr_t pc)
}
#endif
} else {
- target_ulong start, len, i;
+ vaddr start, len, i;
int prot;
if (qemu_host_page_size <= TARGET_PAGE_SIZE) {
@@ -691,7 +689,7 @@ int page_unprotect(target_ulong address, uintptr_t pc)
prot = 0;
for (i = 0; i < len; i += TARGET_PAGE_SIZE) {
- target_ulong addr = start + i;
+ vaddr addr = start + i;
p = pageflags_find(addr, addr);
if (p) {
@@ -814,7 +812,7 @@ typedef struct TargetPageDataNode {
static IntervalTreeRoot targetdata_root;
-void page_reset_target_data(target_ulong start, target_ulong last)
+void page_reset_target_data(vaddr start, vaddr last)
{
IntervalTreeNode *n, *next;
@@ -828,7 +826,7 @@ void page_reset_target_data(target_ulong start, target_ulong last)
n != NULL;
n = next,
next = next ? interval_tree_iter_next(n, start, last) : NULL) {
- target_ulong n_start, n_last, p_ofs, p_len;
+ vaddr n_start, n_last, p_ofs, p_len;
TargetPageDataNode *t = container_of(n, TargetPageDataNode, itree);
if (n->start >= start && n->last <= last) {
@@ -851,11 +849,11 @@ void page_reset_target_data(target_ulong start, target_ulong last)
}
}
-void *page_get_target_data(target_ulong address)
+void *page_get_target_data(vaddr address)
{
IntervalTreeNode *n;
TargetPageDataNode *t;
- target_ulong page, region;
+ vaddr page, region;
page = address & TARGET_PAGE_MASK;
region = address & TBD_MASK;
@@ -884,7 +882,7 @@ void *page_get_target_data(target_ulong address)
return t->data[(page - region) >> TARGET_PAGE_BITS];
}
#else
-void page_reset_target_data(target_ulong start, target_ulong last) { }
+void page_reset_target_data(vaddr start, vaddr last) { }
#endif /* TARGET_PAGE_DATA_SIZE */
/* The softmmu versions of these helpers are in cputlb.c. */
diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
index 09bf4c0cc6..97be7c2c3c 100644
--- a/include/exec/cpu-all.h
+++ b/include/exec/cpu-all.h
@@ -219,10 +219,10 @@ typedef int (*walk_memory_regions_fn)(void *, target_ulong,
target_ulong, unsigned long);
int walk_memory_regions(void *, walk_memory_regions_fn);
-int page_get_flags(target_ulong address);
-void page_set_flags(target_ulong start, target_ulong last, int flags);
-void page_reset_target_data(target_ulong start, target_ulong last);
-int page_check_range(target_ulong start, target_ulong len, int flags);
+int page_get_flags(vaddr address);
+void page_set_flags(vaddr start, vaddr last, int flags);
+void page_reset_target_data(vaddr start, vaddr last);
+int page_check_range(vaddr start, vaddr len, int flags);
/**
* page_get_target_data(address)
@@ -235,7 +235,7 @@ int page_check_range(target_ulong start, target_ulong len, int flags);
* The memory will be freed when the guest page is deallocated,
* e.g. with the munmap system call.
*/
-void *page_get_target_data(target_ulong address)
+void *page_get_target_data(vaddr address)
__attribute__((returns_nonnull));
#endif
diff --git a/include/exec/translate-all.h b/include/exec/translate-all.h
index 88602ae8d8..0b9d2e3b32 100644
--- a/include/exec/translate-all.h
+++ b/include/exec/translate-all.h
@@ -28,7 +28,7 @@ void tb_check_watchpoint(CPUState *cpu, uintptr_t retaddr);
#ifdef CONFIG_USER_ONLY
void page_protect(tb_page_addr_t page_addr);
-int page_unprotect(target_ulong address, uintptr_t pc);
+int page_unprotect(vaddr address, uintptr_t pc);
#endif
#endif /* TRANSLATE_ALL_H */
--
2.41.0
* [PATCH v3 12/12] cpu: Replace target_ulong with hwaddr in tb_invalidate_phys_addr()
2023-06-21 13:56 [PATCH v3 00/12] Start replacing target_ulong with vaddr Anton Johansson via
` (10 preceding siblings ...)
2023-06-21 13:56 ` [PATCH v3 11/12] accel/tcg: Replace target_ulong with vaddr in page_*() Anton Johansson via
@ 2023-06-21 13:56 ` Anton Johansson via
2023-06-23 5:51 ` [PATCH v3 00/12] Start replacing target_ulong with vaddr Richard Henderson
12 siblings, 0 replies; 16+ messages in thread
From: Anton Johansson via @ 2023-06-21 13:56 UTC (permalink / raw)
To: qemu-devel
Cc: ale, richard.henderson, pbonzini, eduardo, philmd,
marcel.apfelbaum, wangyanan55
Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
cpu.c | 2 +-
include/exec/exec-all.h | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/cpu.c b/cpu.c
index 65ebaf8159..1c948d1161 100644
--- a/cpu.c
+++ b/cpu.c
@@ -293,7 +293,7 @@ void list_cpus(void)
}
#if defined(CONFIG_USER_ONLY)
-void tb_invalidate_phys_addr(target_ulong addr)
+void tb_invalidate_phys_addr(hwaddr addr)
{
mmap_lock();
tb_invalidate_phys_page(addr);
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index cc1c3556f6..200c27eadf 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -526,7 +526,7 @@ uint32_t curr_cflags(CPUState *cpu);
/* TranslationBlock invalidate API */
#if defined(CONFIG_USER_ONLY)
-void tb_invalidate_phys_addr(target_ulong addr);
+void tb_invalidate_phys_addr(hwaddr addr);
#else
void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr, MemTxAttrs attrs);
#endif
--
2.41.0
* Re: [PATCH v3 00/12] Start replacing target_ulong with vaddr
2023-06-21 13:56 [PATCH v3 00/12] Start replacing target_ulong with vaddr Anton Johansson via
` (11 preceding siblings ...)
2023-06-21 13:56 ` [PATCH v3 12/12] cpu: Replace target_ulong with hwaddr in tb_invalidate_phys_addr() Anton Johansson via
@ 2023-06-23 5:51 ` Richard Henderson
12 siblings, 0 replies; 16+ messages in thread
From: Richard Henderson @ 2023-06-23 5:51 UTC (permalink / raw)
To: Anton Johansson, qemu-devel
Cc: ale, pbonzini, eduardo, philmd, marcel.apfelbaum, wangyanan55
On 6/21/23 15:56, Anton Johansson wrote:
> This is a first patchset in removing target_ulong from non-target/
> directories. As use of target_ulong is spread across the codebase, we
> are attempting to target as few maintainers as possible with each
> patchset in order to ease reviewing.
>
> The following instances of target_ulong remain in accel/ and tcg/
> - atomic helpers (atomic_common.c.inc), cpu_atomic_*()
> (atomic_template.h,) and cpu_[st|ld]*()
> (cputlb.c/ldst_common.c.inc) are only used in target/ and can
> be pulled out into a separate target-specific file;
>
> - walk_memory_regions() is used in user-exec.c and
> linux-user/elfload.c;
>
> - kvm_find_sw_breakpoint() in kvm-all.c is used in target/;
>
> Changes in v2:
> - addr argument in tb_invalidate_phys_addr() changed from vaddr
> to hwaddr;
>
> - Removed previous patch:
>
> "[PATCH 4/8] accel/tcg: Replace target_ulong with vaddr in helper_unaligned_*()"
>
> as these functions are removed by Richard's patches;
>
> - First patch:
>
> "[PATCH 1/8] accel: Replace `target_ulong` with `vaddr` in TB/TLB"
>
> has been split into patches 1-7 to ease reviewing;
>
> - Pulled in target/ changes to cpu_get_tb_cpu_state() into this
> patchset. This was done to avoid pointer casts to target_ulong *
> which would break for 32-bit targets on a 64-bit BE host;
>
> Note the small target/ changes are collected in a single
> patch to not break bisection. If it's still desirable to split
> based on maintainer, let me know;
>
> - `last` argument of pageflags_[find|next] changed from target_long
> to vaddr. This change was left out of the last patchset due to
> triggering a "Bad ram pointer" error (softmmu/physmem.c:2273)
> when running make check for an i386-softmmu target.
>
> I was not able to recreate this on master or post rebase on
> Richard's tcg-once branch.
>
> Changes in v3:
> - Rebased on master
>
> Finally, the grand goal is to allow for heterogeneous QEMU binaries
> consisting of multiple frontends.
>
> RFC: https://lists.nongnu.org/archive/html/qemu-devel/2022-12/msg04518.html
>
> Anton Johansson (12):
> accel: Replace target_ulong in tlb_*()
> accel/tcg/translate-all.c: Widen pc and cs_base
> target: Widen pc/cs_base in cpu_get_tb_cpu_state
> accel/tcg/cputlb.c: Widen CPUTLBEntry access functions
> accel/tcg/cputlb.c: Widen addr in MMULookupPageData
> accel/tcg/cpu-exec.c: Widen pc to vaddr
> accel/tcg: Widen pc to vaddr in CPUJumpCache
> accel: Replace target_ulong with vaddr in probe_*()
> accel/tcg: Replace target_ulong with vaddr in *_mmu_lookup()
> accel/tcg: Replace target_ulong with vaddr in translator_*()
> accel/tcg: Replace target_ulong with vaddr in page_*()
> cpu: Replace target_ulong with hwaddr in tb_invalidate_phys_addr()
>
> accel/stubs/tcg-stub.c | 6 +-
> accel/tcg/cpu-exec.c | 43 ++++---
> accel/tcg/cputlb.c | 233 +++++++++++++++++------------------
> accel/tcg/internal.h | 6 +-
> accel/tcg/tb-hash.h | 12 +-
> accel/tcg/tb-jmp-cache.h | 2 +-
> accel/tcg/tb-maint.c | 2 +-
> accel/tcg/translate-all.c | 13 +-
> accel/tcg/translator.c | 10 +-
> accel/tcg/user-exec.c | 58 +++++----
> cpu.c | 2 +-
> include/exec/cpu-all.h | 10 +-
> include/exec/cpu-defs.h | 4 +-
> include/exec/cpu_ldst.h | 10 +-
> include/exec/exec-all.h | 95 +++++++-------
> include/exec/translate-all.h | 2 +-
> include/exec/translator.h | 6 +-
> include/qemu/plugin-memory.h | 2 +-
> target/alpha/cpu.h | 4 +-
> target/arm/cpu.h | 4 +-
> target/arm/helper.c | 4 +-
> target/avr/cpu.h | 4 +-
> target/cris/cpu.h | 4 +-
> target/hexagon/cpu.h | 4 +-
> target/hppa/cpu.h | 5 +-
> target/i386/cpu.h | 4 +-
> target/loongarch/cpu.h | 6 +-
> target/m68k/cpu.h | 4 +-
> target/microblaze/cpu.h | 4 +-
> target/mips/cpu.h | 4 +-
> target/nios2/cpu.h | 4 +-
> target/openrisc/cpu.h | 5 +-
> target/ppc/cpu.h | 8 +-
> target/ppc/helper_regs.c | 4 +-
> target/riscv/cpu.h | 4 +-
> target/riscv/cpu_helper.c | 4 +-
> target/rx/cpu.h | 4 +-
> target/s390x/cpu.h | 4 +-
> target/sh4/cpu.h | 4 +-
> target/sparc/cpu.h | 4 +-
> target/tricore/cpu.h | 4 +-
> target/xtensa/cpu.h | 4 +-
> 42 files changed, 307 insertions(+), 313 deletions(-)
>
> --
> 2.41.0
Queued to tcg-next, thanks.
r~
* Re: [PATCH v3 11/12] accel/tcg: Replace target_ulong with vaddr in page_*()
2023-06-21 13:56 ` [PATCH v3 11/12] accel/tcg: Replace target_ulong with vaddr in page_*() Anton Johansson via
@ 2023-06-26 13:59 ` Richard Henderson
2023-06-26 15:04 ` Anton Johansson via
0 siblings, 1 reply; 16+ messages in thread
From: Richard Henderson @ 2023-06-26 13:59 UTC (permalink / raw)
To: Anton Johansson, qemu-devel
Cc: ale, pbonzini, eduardo, philmd, marcel.apfelbaum, wangyanan55
On 6/21/23 15:56, Anton Johansson via wrote:
> Use vaddr for guest virtual addresses for functions dealing with page
> flags.
>
> Signed-off-by: Anton Johansson <anjo@rev.ng>
> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> accel/tcg/user-exec.c | 44 +++++++++++++++++-------------------
> include/exec/cpu-all.h | 10 ++++----
> include/exec/translate-all.h | 2 +-
> 3 files changed, 27 insertions(+), 29 deletions(-)
This causes other failures, such as
https://gitlab.com/rth7680/qemu/-/jobs/4540151776#L4468
qemu-hppa: ../accel/tcg/user-exec.c:490: page_set_flags: Assertion `last <= GUEST_ADDR_MAX' failed.
which is caused by
#8 0x00005555556e5b77 in do_shmat (cpu_env=cpu_env@entry=0x555556274378,
shmid=54, shmaddr=<optimized out>, shmflg=0)
at ../src/linux-user/syscall.c:4598
4598 page_set_flags(raddr, raddr + shm_info.shm_segsz - 1,
4599 PAGE_VALID | PAGE_RESET | PAGE_READ |
4600 (shmflg & SHM_RDONLY ? 0 : PAGE_WRITE));
The host shm_info.shm_segsz is uint64_t, which means that the whole expression gets
converted to uint64_t. With this patch, it is not properly truncated to a guest address.
In this particular case, raddr is signed (abi_long), which is a mistake. Fixing that
avoids this particular error.
But since user-only is outside of the scope of this work, I'm going to drop this patch for
now.
r~
* Re: [PATCH v3 11/12] accel/tcg: Replace target_ulong with vaddr in page_*()
2023-06-26 13:59 ` Richard Henderson
@ 2023-06-26 15:04 ` Anton Johansson via
0 siblings, 0 replies; 16+ messages in thread
From: Anton Johansson via @ 2023-06-26 15:04 UTC (permalink / raw)
To: Richard Henderson, qemu-devel
Cc: ale, pbonzini, eduardo, philmd, marcel.apfelbaum, wangyanan55
On 6/26/23 15:59, Richard Henderson wrote:
> On 6/21/23 15:56, Anton Johansson via wrote:
>> Use vaddr for guest virtual addresses for functions dealing with page
>> flags.
>>
>> Signed-off-by: Anton Johansson <anjo@rev.ng>
>> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
>> ---
>> accel/tcg/user-exec.c | 44 +++++++++++++++++-------------------
>> include/exec/cpu-all.h | 10 ++++----
>> include/exec/translate-all.h | 2 +-
>> 3 files changed, 27 insertions(+), 29 deletions(-)
>
> This causes other failures, such as
>
> https://gitlab.com/rth7680/qemu/-/jobs/4540151776#L4468
>
> qemu-hppa: ../accel/tcg/user-exec.c:490: page_set_flags: Assertion
> `last <= GUEST_ADDR_MAX' failed.
>
> which is caused by
>
> #8 0x00005555556e5b77 in do_shmat (cpu_env=cpu_env@entry=0x555556274378,
> shmid=54, shmaddr=<optimized out>, shmflg=0)
> at ../src/linux-user/syscall.c:4598
>
> 4598 page_set_flags(raddr, raddr + shm_info.shm_segsz - 1,
> 4599 PAGE_VALID | PAGE_RESET | PAGE_READ |
> 4600 (shmflg & SHM_RDONLY ? 0 : PAGE_WRITE));
>
> The host shm_info.shm_segsz is uint64_t, which means that the whole
> expression gets converted to uint64_t. With this patch, it is not
> properly truncated to a guest address.
>
> In this particular case, raddr is signed (abi_long), which is a
> mistake. Fixing that avoids this particular error.
>
> But since user-only is outside of the scope of this work, I'm going to
> drop this patch for now.
>
>
> r~
My bad! I saw the CI failure post-rebase but didn't look closely enough:
I had seen a build-user failure in the branch I was based on
and assumed they were related.
Thanks for the explanation, makes sense! :)
--
Anton Johansson,
rev.ng Labs Srl.