* [PATCH 0/3] softmmu: Use async_run_on_cpu in tcg_commit
@ 2023-08-26 23:24 Richard Henderson
  2023-08-26 23:24 ` [PATCH 1/3] softmmu: Assert data in bounds in iotlb_to_section Richard Henderson
  ` (2 more replies)

From: Richard Henderson @ 2023-08-26 23:24 UTC
To: qemu-devel; +Cc: peter.maydell, pbonzini, alex.bennee

This seems like something that simply hasn't been updated for MTTCG,
since the last functional change appears to be:

    commit 79e2b9aeccedbfde762b05da662132c7fda292be
    Author: Paolo Bonzini <pbonzini@redhat.com>
    Date:   Wed Jan 21 12:09:14 2015 +0100

        exec: RCUify AddressSpaceDispatch

and the MTTCG work starts to appear in September 2015.

r~

Richard Henderson (3):
  softmmu: Assert data in bounds in iotlb_to_section
  softmmu: Use async_run_on_cpu in tcg_commit
  softmmu: Remove cpu_reloading_memory_map as unused

 include/exec/cpu-common.h   |  1 -
 accel/tcg/cpu-exec-common.c | 30 ----------------------
 softmmu/physmem.c           | 50 +++++++++++++++++++++++++++----------
 3 files changed, 37 insertions(+), 44 deletions(-)

-- 
2.34.1
* [PATCH 1/3] softmmu: Assert data in bounds in iotlb_to_section
  2023-08-26 23:24 [PATCH 0/3] softmmu: Use async_run_on_cpu in tcg_commit Richard Henderson
@ 2023-08-26 23:24 ` Richard Henderson
  2023-08-27  9:39   ` Alex Bennée

From: Richard Henderson @ 2023-08-26 23:24 UTC
To: qemu-devel; +Cc: peter.maydell, pbonzini, alex.bennee

Suggested-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 softmmu/physmem.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/softmmu/physmem.c b/softmmu/physmem.c
index 3df73542e1..7597dc1c39 100644
--- a/softmmu/physmem.c
+++ b/softmmu/physmem.c
@@ -2413,9 +2413,15 @@ MemoryRegionSection *iotlb_to_section(CPUState *cpu,
     int asidx = cpu_asidx_from_attrs(cpu, attrs);
     CPUAddressSpace *cpuas = &cpu->cpu_ases[asidx];
     AddressSpaceDispatch *d = qatomic_rcu_read(&cpuas->memory_dispatch);
-    MemoryRegionSection *sections = d->map.sections;
+    int section_index = index & ~TARGET_PAGE_MASK;
+    MemoryRegionSection *ret;
 
-    return &sections[index & ~TARGET_PAGE_MASK];
+    assert(section_index < d->map.sections_nb);
+    ret = d->map.sections + section_index;
+    assert(ret->mr);
+    assert(ret->mr->ops);
+
+    return ret;
 }
 
 static void io_mem_init(void)
-- 
2.34.1
* Re: [PATCH 1/3] softmmu: Assert data in bounds in iotlb_to_section
  2023-08-26 23:24 ` [PATCH 1/3] softmmu: Assert data in bounds in iotlb_to_section Richard Henderson
@ 2023-08-27  9:39   ` Alex Bennée

From: Alex Bennée @ 2023-08-27 9:39 UTC
To: Richard Henderson; +Cc: qemu-devel, peter.maydell, pbonzini

Richard Henderson <richard.henderson@linaro.org> writes:

> Suggested-by: Alex Bennée <alex.bennee@linaro.org>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>  softmmu/physmem.c | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/softmmu/physmem.c b/softmmu/physmem.c
> index 3df73542e1..7597dc1c39 100644
> --- a/softmmu/physmem.c
> +++ b/softmmu/physmem.c
> @@ -2413,9 +2413,15 @@ MemoryRegionSection *iotlb_to_section(CPUState *cpu,
>      int asidx = cpu_asidx_from_attrs(cpu, attrs);
>      CPUAddressSpace *cpuas = &cpu->cpu_ases[asidx];
>      AddressSpaceDispatch *d = qatomic_rcu_read(&cpuas->memory_dispatch);
> -    MemoryRegionSection *sections = d->map.sections;
> +    int section_index = index & ~TARGET_PAGE_MASK;
> +    MemoryRegionSection *ret;

It might be worth a quick comment noting where section_index comes
from, because I was initially confused, expecting it to be
address-like.

>
> -    return &sections[index & ~TARGET_PAGE_MASK];
> +    assert(section_index < d->map.sections_nb);
> +    ret = d->map.sections + section_index;
> +    assert(ret->mr);
> +    assert(ret->mr->ops);
> +
> +    return ret;
>  }
>
>  static void io_mem_init(void)

Otherwise:

Acked-by: Alex Bennée <alex.bennee@linaro.org>

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro
* [PATCH 2/3] softmmu: Use async_run_on_cpu in tcg_commit
  2023-08-26 23:24 [PATCH 0/3] softmmu: Use async_run_on_cpu in tcg_commit Richard Henderson
  2023-08-26 23:24 ` [PATCH 1/3] softmmu: Assert data in bounds in iotlb_to_section Richard Henderson
@ 2023-08-26 23:24 ` Richard Henderson
  2023-08-27  9:58   ` Alex Bennée
  ` (2 more replies)

From: Richard Henderson @ 2023-08-26 23:24 UTC
To: qemu-devel; +Cc: peter.maydell, pbonzini, alex.bennee

After system startup, run the update to memory_dispatch
and the tlb_flush on the cpu.  This eliminates a race,
wherein a running cpu sees the memory_dispatch change
but has not yet seen the tlb_flush.

Since the update now happens on the cpu, we need not use
qatomic_rcu_read to protect the read of memory_dispatch.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1826
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1834
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1846
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 softmmu/physmem.c | 40 +++++++++++++++++++++++++++++-----------
 1 file changed, 29 insertions(+), 11 deletions(-)

diff --git a/softmmu/physmem.c b/softmmu/physmem.c
index 7597dc1c39..18277ddd67 100644
--- a/softmmu/physmem.c
+++ b/softmmu/physmem.c
@@ -680,8 +680,7 @@ address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr orig_addr,
     IOMMUTLBEntry iotlb;
     int iommu_idx;
     hwaddr addr = orig_addr;
-    AddressSpaceDispatch *d =
-        qatomic_rcu_read(&cpu->cpu_ases[asidx].memory_dispatch);
+    AddressSpaceDispatch *d = cpu->cpu_ases[asidx].memory_dispatch;
 
     for (;;) {
         section = address_space_translate_internal(d, addr, &addr, plen, false);
@@ -2412,7 +2411,7 @@ MemoryRegionSection *iotlb_to_section(CPUState *cpu,
 {
     int asidx = cpu_asidx_from_attrs(cpu, attrs);
     CPUAddressSpace *cpuas = &cpu->cpu_ases[asidx];
-    AddressSpaceDispatch *d = qatomic_rcu_read(&cpuas->memory_dispatch);
+    AddressSpaceDispatch *d = cpuas->memory_dispatch;
     int section_index = index & ~TARGET_PAGE_MASK;
     MemoryRegionSection *ret;
 
@@ -2487,23 +2486,42 @@ static void tcg_log_global_after_sync(MemoryListener *listener)
     }
 }
 
+static void tcg_commit_cpu(CPUState *cpu, run_on_cpu_data data)
+{
+    CPUAddressSpace *cpuas = data.host_ptr;
+
+    cpuas->memory_dispatch = address_space_to_dispatch(cpuas->as);
+    tlb_flush(cpu);
+}
+
 static void tcg_commit(MemoryListener *listener)
 {
     CPUAddressSpace *cpuas;
-    AddressSpaceDispatch *d;
+    CPUState *cpu;
 
     assert(tcg_enabled());
     /* since each CPU stores ram addresses in its TLB cache, we must
        reset the modified entries */
     cpuas = container_of(listener, CPUAddressSpace, tcg_as_listener);
-    cpu_reloading_memory_map();
-    /* The CPU and TLB are protected by the iothread lock.
-     * We reload the dispatch pointer now because cpu_reloading_memory_map()
-     * may have split the RCU critical section.
+    cpu = cpuas->cpu;
+
+    /*
+     * Defer changes to as->memory_dispatch until the cpu is quiescent.
+     * Otherwise we race between (1) other cpu threads and (2) ongoing
+     * i/o for the current cpu thread, with data cached by mmu_lookup().
+     *
+     * In addition, queueing the work function will kick the cpu back to
+     * the main loop, which will end the RCU critical section and reclaim
+     * the memory data structures.
+     *
+     * That said, the listener is also called during realize, before
+     * all of the tcg machinery for run-on is initialized: thus halt_cond.
      */
-    d = address_space_to_dispatch(cpuas->as);
-    qatomic_rcu_set(&cpuas->memory_dispatch, d);
-    tlb_flush(cpuas->cpu);
+    if (cpu->halt_cond) {
+        async_run_on_cpu(cpu, tcg_commit_cpu, RUN_ON_CPU_HOST_PTR(cpuas));
+    } else {
+        tcg_commit_cpu(cpu, RUN_ON_CPU_HOST_PTR(cpuas));
+    }
 }
 
 static void memory_map_init(void)
-- 
2.34.1
* Re: [PATCH 2/3] softmmu: Use async_run_on_cpu in tcg_commit
  2023-08-26 23:24 ` [PATCH 2/3] softmmu: Use async_run_on_cpu in tcg_commit Richard Henderson
@ 2023-08-27  9:58   ` Alex Bennée

From: Alex Bennée @ 2023-08-27 9:58 UTC
To: Richard Henderson; +Cc: qemu-devel, peter.maydell, pbonzini

Richard Henderson <richard.henderson@linaro.org> writes:

> After system startup, run the update to memory_dispatch
> and the tlb_flush on the cpu.  This eliminates a race,
> wherein a running cpu sees the memory_dispatch change
> but has not yet seen the tlb_flush.
>
> Since the update now happens on the cpu, we need not use
> qatomic_rcu_read to protect the read of memory_dispatch.
>
> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1826
> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1834
> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1846
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

> ---
>  softmmu/physmem.c | 40 +++++++++++++++++++++++++++++-----------
>  1 file changed, 29 insertions(+), 11 deletions(-)
>
> diff --git a/softmmu/physmem.c b/softmmu/physmem.c
> index 7597dc1c39..18277ddd67 100644
> --- a/softmmu/physmem.c
> +++ b/softmmu/physmem.c
> @@ -680,8 +680,7 @@ address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr orig_addr,
>      IOMMUTLBEntry iotlb;
>      int iommu_idx;
>      hwaddr addr = orig_addr;
> -    AddressSpaceDispatch *d =
> -        qatomic_rcu_read(&cpu->cpu_ases[asidx].memory_dispatch);
> +    AddressSpaceDispatch *d = cpu->cpu_ases[asidx].memory_dispatch;
>
>      for (;;) {
>          section = address_space_translate_internal(d, addr, &addr, plen, false);
> @@ -2412,7 +2411,7 @@ MemoryRegionSection *iotlb_to_section(CPUState *cpu,
>  {
>      int asidx = cpu_asidx_from_attrs(cpu, attrs);
>      CPUAddressSpace *cpuas = &cpu->cpu_ases[asidx];
> -    AddressSpaceDispatch *d = qatomic_rcu_read(&cpuas->memory_dispatch);
> +    AddressSpaceDispatch *d = cpuas->memory_dispatch;
>      int section_index = index & ~TARGET_PAGE_MASK;
>      MemoryRegionSection *ret;
>
> @@ -2487,23 +2486,42 @@ static void tcg_log_global_after_sync(MemoryListener *listener)
>      }
>  }
>
> +static void tcg_commit_cpu(CPUState *cpu, run_on_cpu_data data)
> +{
> +    CPUAddressSpace *cpuas = data.host_ptr;
> +
> +    cpuas->memory_dispatch = address_space_to_dispatch(cpuas->as);
> +    tlb_flush(cpu);
> +}
> +
>  static void tcg_commit(MemoryListener *listener)
>  {
>      CPUAddressSpace *cpuas;
> -    AddressSpaceDispatch *d;
> +    CPUState *cpu;
>
>      assert(tcg_enabled());
>      /* since each CPU stores ram addresses in its TLB cache, we must
>         reset the modified entries */
>      cpuas = container_of(listener, CPUAddressSpace, tcg_as_listener);
> -    cpu_reloading_memory_map();
> -    /* The CPU and TLB are protected by the iothread lock.
> -     * We reload the dispatch pointer now because cpu_reloading_memory_map()
> -     * may have split the RCU critical section.
> +    cpu = cpuas->cpu;
> +
> +    /*
> +     * Defer changes to as->memory_dispatch until the cpu is quiescent.
> +     * Otherwise we race between (1) other cpu threads and (2) ongoing
> +     * i/o for the current cpu thread, with data cached by mmu_lookup().
> +     *
> +     * In addition, queueing the work function will kick the cpu back to
> +     * the main loop, which will end the RCU critical section and reclaim
> +     * the memory data structures.
> +     *
> +     * That said, the listener is also called during realize, before
> +     * all of the tcg machinery for run-on is initialized: thus halt_cond.
>       */
> -    d = address_space_to_dispatch(cpuas->as);
> -    qatomic_rcu_set(&cpuas->memory_dispatch, d);
> -    tlb_flush(cpuas->cpu);
> +    if (cpu->halt_cond) {
> +        async_run_on_cpu(cpu, tcg_commit_cpu, RUN_ON_CPU_HOST_PTR(cpuas));
> +    } else {
> +        tcg_commit_cpu(cpu, RUN_ON_CPU_HOST_PTR(cpuas));
> +    }
>  }
>
>  static void memory_map_init(void)

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro
* Re: [PATCH 2/3] softmmu: Use async_run_on_cpu in tcg_commit
  2023-08-26 23:24 ` [PATCH 2/3] softmmu: Use async_run_on_cpu in tcg_commit Richard Henderson
@ 2023-08-27 14:54   ` Richard Henderson
  2023-08-27 20:17     ` Alex Bennée

From: Richard Henderson @ 2023-08-27 14:54 UTC
To: qemu-devel; +Cc: peter.maydell, pbonzini, alex.bennee

On 8/26/23 16:24, Richard Henderson wrote:
> +static void tcg_commit_cpu(CPUState *cpu, run_on_cpu_data data)
> +{
> +    CPUAddressSpace *cpuas = data.host_ptr;
> +
> +    cpuas->memory_dispatch = address_space_to_dispatch(cpuas->as);
> +    tlb_flush(cpu);
> +}

Question: do I need to take the iothread lock here, while
re-generating the address space dispatch?

r~
* Re: [PATCH 2/3] softmmu: Use async_run_on_cpu in tcg_commit
  2023-08-27 14:54   ` Richard Henderson
@ 2023-08-27 20:17     ` Alex Bennée
  2023-08-27 21:16       ` Richard Henderson

From: Alex Bennée @ 2023-08-27 20:17 UTC
To: Richard Henderson; +Cc: qemu-devel, peter.maydell, pbonzini

Richard Henderson <richard.henderson@linaro.org> writes:

> On 8/26/23 16:24, Richard Henderson wrote:
>> +static void tcg_commit_cpu(CPUState *cpu, run_on_cpu_data data)
>> +{
>> +    CPUAddressSpace *cpuas = data.host_ptr;
>> +
>> +    cpuas->memory_dispatch = address_space_to_dispatch(cpuas->as);
>> +    tlb_flush(cpu);
>> +}
>
> Question: do I need to take the iothread lock here, while
> re-generating the address space dispatch?

Does it regenerate, or just collect a current live version under RCU?

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro
* Re: [PATCH 2/3] softmmu: Use async_run_on_cpu in tcg_commit
  2023-08-27 20:17     ` Alex Bennée
@ 2023-08-27 21:16       ` Richard Henderson

From: Richard Henderson @ 2023-08-27 21:16 UTC
To: Alex Bennée; +Cc: qemu-devel, peter.maydell, pbonzini

On 8/27/23 13:17, Alex Bennée wrote:
> Richard Henderson <richard.henderson@linaro.org> writes:
>
>> On 8/26/23 16:24, Richard Henderson wrote:
>>> +static void tcg_commit_cpu(CPUState *cpu, run_on_cpu_data data)
>>> +{
>>> +    CPUAddressSpace *cpuas = data.host_ptr;
>>> +
>>> +    cpuas->memory_dispatch = address_space_to_dispatch(cpuas->as);
>>> +    tlb_flush(cpu);
>>> +}
>>
>> Question: do I need to take the iothread lock here, while
>> re-generating the address space dispatch?
>
> Does it regenerate, or just collect a current live version under RCU?

Quite right, just reads.

r~
* Re: [PATCH 2/3] softmmu: Use async_run_on_cpu in tcg_commit
  2023-08-26 23:24 ` [PATCH 2/3] softmmu: Use async_run_on_cpu in tcg_commit Richard Henderson
@ 2023-08-29 10:50   ` Jonathan Cameron via

From: Jonathan Cameron via @ 2023-08-29 10:50 UTC
To: Richard Henderson; +Cc: qemu-devel, peter.maydell, pbonzini, alex.bennee

On Sat, 26 Aug 2023 16:24:14 -0700
Richard Henderson <richard.henderson@linaro.org> wrote:

> After system startup, run the update to memory_dispatch
> and the tlb_flush on the cpu.  This eliminates a race,
> wherein a running cpu sees the memory_dispatch change
> but has not yet seen the tlb_flush.
>
> Since the update now happens on the cpu, we need not use
> qatomic_rcu_read to protect the read of memory_dispatch.
>
> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1826
> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1834
> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1846
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

Tested-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>

I'm not pretending I've understood the change, just that it makes
the crashes I saw go away.

Jonathan

> ---
>  softmmu/physmem.c | 40 +++++++++++++++++++++++++++++-----------
>  1 file changed, 29 insertions(+), 11 deletions(-)
>
> diff --git a/softmmu/physmem.c b/softmmu/physmem.c
> index 7597dc1c39..18277ddd67 100644
> --- a/softmmu/physmem.c
> +++ b/softmmu/physmem.c
> @@ -680,8 +680,7 @@ address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr orig_addr,
>      IOMMUTLBEntry iotlb;
>      int iommu_idx;
>      hwaddr addr = orig_addr;
> -    AddressSpaceDispatch *d =
> -        qatomic_rcu_read(&cpu->cpu_ases[asidx].memory_dispatch);
> +    AddressSpaceDispatch *d = cpu->cpu_ases[asidx].memory_dispatch;
>
>      for (;;) {
>          section = address_space_translate_internal(d, addr, &addr, plen, false);
> @@ -2412,7 +2411,7 @@ MemoryRegionSection *iotlb_to_section(CPUState *cpu,
>  {
>      int asidx = cpu_asidx_from_attrs(cpu, attrs);
>      CPUAddressSpace *cpuas = &cpu->cpu_ases[asidx];
> -    AddressSpaceDispatch *d = qatomic_rcu_read(&cpuas->memory_dispatch);
> +    AddressSpaceDispatch *d = cpuas->memory_dispatch;
>      int section_index = index & ~TARGET_PAGE_MASK;
>      MemoryRegionSection *ret;
>
> @@ -2487,23 +2486,42 @@ static void tcg_log_global_after_sync(MemoryListener *listener)
>      }
>  }
>
> +static void tcg_commit_cpu(CPUState *cpu, run_on_cpu_data data)
> +{
> +    CPUAddressSpace *cpuas = data.host_ptr;
> +
> +    cpuas->memory_dispatch = address_space_to_dispatch(cpuas->as);
> +    tlb_flush(cpu);
> +}
> +
>  static void tcg_commit(MemoryListener *listener)
>  {
>      CPUAddressSpace *cpuas;
> -    AddressSpaceDispatch *d;
> +    CPUState *cpu;
>
>      assert(tcg_enabled());
>      /* since each CPU stores ram addresses in its TLB cache, we must
>         reset the modified entries */
>      cpuas = container_of(listener, CPUAddressSpace, tcg_as_listener);
> -    cpu_reloading_memory_map();
> -    /* The CPU and TLB are protected by the iothread lock.
> -     * We reload the dispatch pointer now because cpu_reloading_memory_map()
> -     * may have split the RCU critical section.
> +    cpu = cpuas->cpu;
> +
> +    /*
> +     * Defer changes to as->memory_dispatch until the cpu is quiescent.
> +     * Otherwise we race between (1) other cpu threads and (2) ongoing
> +     * i/o for the current cpu thread, with data cached by mmu_lookup().
> +     *
> +     * In addition, queueing the work function will kick the cpu back to
> +     * the main loop, which will end the RCU critical section and reclaim
> +     * the memory data structures.
> +     *
> +     * That said, the listener is also called during realize, before
> +     * all of the tcg machinery for run-on is initialized: thus halt_cond.
>       */
> -    d = address_space_to_dispatch(cpuas->as);
> -    qatomic_rcu_set(&cpuas->memory_dispatch, d);
> -    tlb_flush(cpuas->cpu);
> +    if (cpu->halt_cond) {
> +        async_run_on_cpu(cpu, tcg_commit_cpu, RUN_ON_CPU_HOST_PTR(cpuas));
> +    } else {
> +        tcg_commit_cpu(cpu, RUN_ON_CPU_HOST_PTR(cpuas));
> +    }
>  }
>
>  static void memory_map_init(void)
* [PATCH 3/3] softmmu: Remove cpu_reloading_memory_map as unused
  2023-08-26 23:24 [PATCH 0/3] softmmu: Use async_run_on_cpu in tcg_commit Richard Henderson
  2023-08-26 23:24 ` [PATCH 1/3] softmmu: Assert data in bounds in iotlb_to_section Richard Henderson
  2023-08-26 23:24 ` [PATCH 2/3] softmmu: Use async_run_on_cpu in tcg_commit Richard Henderson
@ 2023-08-26 23:24 ` Richard Henderson
  2023-08-27  9:59   ` Alex Bennée

From: Richard Henderson @ 2023-08-26 23:24 UTC
To: qemu-devel; +Cc: peter.maydell, pbonzini, alex.bennee

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/exec/cpu-common.h   |  1 -
 accel/tcg/cpu-exec-common.c | 30 ------------------------------
 2 files changed, 31 deletions(-)

diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
index 87dc9a752c..41788c0bdd 100644
--- a/include/exec/cpu-common.h
+++ b/include/exec/cpu-common.h
@@ -133,7 +133,6 @@ static inline void cpu_physical_memory_write(hwaddr addr,
 {
     cpu_physical_memory_rw(addr, (void *)buf, len, true);
 }
-void cpu_reloading_memory_map(void);
 void *cpu_physical_memory_map(hwaddr addr,
                               hwaddr *plen,
                               bool is_write);
diff --git a/accel/tcg/cpu-exec-common.c b/accel/tcg/cpu-exec-common.c
index 9a5fabf625..7e35d7f4b5 100644
--- a/accel/tcg/cpu-exec-common.c
+++ b/accel/tcg/cpu-exec-common.c
@@ -33,36 +33,6 @@ void cpu_loop_exit_noexc(CPUState *cpu)
     cpu_loop_exit(cpu);
 }
 
-#if defined(CONFIG_SOFTMMU)
-void cpu_reloading_memory_map(void)
-{
-    if (qemu_in_vcpu_thread() && current_cpu->running) {
-        /* The guest can in theory prolong the RCU critical section as long
-         * as it feels like. The major problem with this is that because it
-         * can do multiple reconfigurations of the memory map within the
-         * critical section, we could potentially accumulate an unbounded
-         * collection of memory data structures awaiting reclamation.
-         *
-         * Because the only thing we're currently protecting with RCU is the
-         * memory data structures, it's sufficient to break the critical section
-         * in this callback, which we know will get called every time the
-         * memory map is rearranged.
-         *
-         * (If we add anything else in the system that uses RCU to protect
-         * its data structures, we will need to implement some other mechanism
-         * to force TCG CPUs to exit the critical section, at which point this
-         * part of this callback might become unnecessary.)
-         *
-         * This pair matches cpu_exec's rcu_read_lock()/rcu_read_unlock(), which
-         * only protects cpu->as->dispatch. Since we know our caller is about
-         * to reload it, it's safe to split the critical section.
-         */
-        rcu_read_unlock();
-        rcu_read_lock();
-    }
-}
-#endif
-
 void cpu_loop_exit(CPUState *cpu)
 {
     /* Undo the setting in cpu_tb_exec. */
-- 
2.34.1
* Re: [PATCH 3/3] softmmu: Remove cpu_reloading_memory_map as unused
  2023-08-26 23:24 ` [PATCH 3/3] softmmu: Remove cpu_reloading_memory_map as unused Richard Henderson
@ 2023-08-27  9:59   ` Alex Bennée

From: Alex Bennée @ 2023-08-27 9:59 UTC
To: Richard Henderson; +Cc: qemu-devel, peter.maydell, pbonzini

Richard Henderson <richard.henderson@linaro.org> writes:

> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

This is just cleanup; I think it should be merged with 2/3.

> ---
>  include/exec/cpu-common.h   |  1 -
>  accel/tcg/cpu-exec-common.c | 30 ------------------------------
>  2 files changed, 31 deletions(-)
>
> diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
> index 87dc9a752c..41788c0bdd 100644
> --- a/include/exec/cpu-common.h
> +++ b/include/exec/cpu-common.h
> @@ -133,7 +133,6 @@ static inline void cpu_physical_memory_write(hwaddr addr,
>  {
>      cpu_physical_memory_rw(addr, (void *)buf, len, true);
>  }
> -void cpu_reloading_memory_map(void);
>  void *cpu_physical_memory_map(hwaddr addr,
>                                hwaddr *plen,
>                                bool is_write);
> diff --git a/accel/tcg/cpu-exec-common.c b/accel/tcg/cpu-exec-common.c
> index 9a5fabf625..7e35d7f4b5 100644
> --- a/accel/tcg/cpu-exec-common.c
> +++ b/accel/tcg/cpu-exec-common.c
> @@ -33,36 +33,6 @@ void cpu_loop_exit_noexc(CPUState *cpu)
>      cpu_loop_exit(cpu);
>  }
>
> -#if defined(CONFIG_SOFTMMU)
> -void cpu_reloading_memory_map(void)
> -{
> -    if (qemu_in_vcpu_thread() && current_cpu->running) {
> -        /* The guest can in theory prolong the RCU critical section as long
> -         * as it feels like. The major problem with this is that because it
> -         * can do multiple reconfigurations of the memory map within the
> -         * critical section, we could potentially accumulate an unbounded
> -         * collection of memory data structures awaiting reclamation.
> -         *
> -         * Because the only thing we're currently protecting with RCU is the
> -         * memory data structures, it's sufficient to break the critical section
> -         * in this callback, which we know will get called every time the
> -         * memory map is rearranged.
> -         *
> -         * (If we add anything else in the system that uses RCU to protect
> -         * its data structures, we will need to implement some other mechanism
> -         * to force TCG CPUs to exit the critical section, at which point this
> -         * part of this callback might become unnecessary.)
> -         *
> -         * This pair matches cpu_exec's rcu_read_lock()/rcu_read_unlock(), which
> -         * only protects cpu->as->dispatch. Since we know our caller is about
> -         * to reload it, it's safe to split the critical section.
> -         */
> -        rcu_read_unlock();
> -        rcu_read_lock();
> -    }
> -}
> -#endif
> -
>  void cpu_loop_exit(CPUState *cpu)
>  {
>      /* Undo the setting in cpu_tb_exec. */

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro