From: "Emilio G. Cota" <cota@braap.org>
Date: Tue, 29 Jan 2019 19:47:30 -0500
Message-Id: <20190130004811.27372-33-cota@braap.org>
In-Reply-To: <20190130004811.27372-1-cota@braap.org>
References: <20190130004811.27372-1-cota@braap.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Subject: [Qemu-devel] [PATCH v6 32/73] cpu: define cpu_interrupt_request helpers
To: qemu-devel@nongnu.org
Cc: Richard Henderson, Paolo Bonzini

Add a comment about how atomic_read works here. The comment refers to
a "BQL-less CPU loop", which will materialize toward the end of this
series.

Note that the modifications to cpu_reset_interrupt are there to avoid
deadlock during the CPU lock transition; once that is complete,
cpu_reset_interrupt will be simple again.

Reviewed-by: Richard Henderson
Reviewed-by: Alex Bennée
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qom/cpu.h | 37 +++++++++++++++++++++++++++++++++++++
 qom/cpu.c         | 27 +++++++++++++++++++++------
 2 files changed, 58 insertions(+), 6 deletions(-)
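
(Not part of the patch: a minimal standalone sketch of the access protocol
the new comment describes, with C11 atomics and a pthread mutex standing in
for QEMU's atomic_read/atomic_set and cpu->lock. All names below are
invented for illustration.)

#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>

/* Stand-ins for CPUState and the per-CPU lock. */
typedef struct {
    pthread_mutex_t lock;
    _Atomic uint32_t interrupt_request;
} SketchCPU;

/* Sender: the lock serializes all updates, so the read-modify-write below
 * cannot race with another setter; a kick (omitted here) then forces the
 * destination vCPU out of its execution loop. */
static void sketch_interrupt_request_or(SketchCPU *cpu, uint32_t mask)
{
    pthread_mutex_lock(&cpu->lock);
    uint32_t cur = atomic_load_explicit(&cpu->interrupt_request,
                                        memory_order_relaxed);
    atomic_store_explicit(&cpu->interrupt_request, cur | mask,
                          memory_order_relaxed);
    pthread_mutex_unlock(&cpu->lock);
    /* kick the destination vCPU here */
}

/* Destination vCPU: a lock-free load is enough, because the kick guarantees
 * that the vCPU re-reads interrupt_request after the bit is set. */
static uint32_t sketch_interrupt_request(SketchCPU *cpu)
{
    return atomic_load_explicit(&cpu->interrupt_request,
                                memory_order_relaxed);
}
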
diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 5047047666..4a87c1fef7 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -513,6 +513,43 @@ static inline void cpu_halted_set(CPUState *cpu, uint32_t val)
     cpu_mutex_unlock(cpu);
 }
 
+/*
+ * When sending an interrupt, setters OR the appropriate bit and kick the
+ * destination vCPU. The latter can then read interrupt_request without
+ * acquiring the CPU lock, because once the kick-induced vCPU exit
+ * completes, they'll read an up-to-date interrupt_request.
+ * Setters always acquire the lock, which guarantees that (1) concurrent
+ * updates from different threads won't result in data races, and (2) the
+ * BQL-less CPU loop will always see an up-to-date interrupt_request, since
+ * the loop holds the CPU lock.
+ */
+static inline uint32_t cpu_interrupt_request(CPUState *cpu)
+{
+    return atomic_read(&cpu->interrupt_request);
+}
+
+static inline void cpu_interrupt_request_or(CPUState *cpu, uint32_t mask)
+{
+    if (cpu_mutex_locked(cpu)) {
+        atomic_set(&cpu->interrupt_request, cpu->interrupt_request | mask);
+        return;
+    }
+    cpu_mutex_lock(cpu);
+    atomic_set(&cpu->interrupt_request, cpu->interrupt_request | mask);
+    cpu_mutex_unlock(cpu);
+}
+
+static inline void cpu_interrupt_request_set(CPUState *cpu, uint32_t val)
+{
+    if (cpu_mutex_locked(cpu)) {
+        atomic_set(&cpu->interrupt_request, val);
+        return;
+    }
+    cpu_mutex_lock(cpu);
+    atomic_set(&cpu->interrupt_request, val);
+    cpu_mutex_unlock(cpu);
+}
+
 static inline void cpu_tb_jmp_cache_clear(CPUState *cpu)
 {
     unsigned int i;
diff --git a/qom/cpu.c b/qom/cpu.c
index c5106d5af8..00add81a7f 100644
--- a/qom/cpu.c
+++ b/qom/cpu.c
@@ -98,14 +98,29 @@ static void cpu_common_get_memory_mapping(CPUState *cpu,
  * BQL here if we need to.  cpu_interrupt assumes it is held.*/
 void cpu_reset_interrupt(CPUState *cpu, int mask)
 {
-    bool need_lock = !qemu_mutex_iothread_locked();
+    bool has_bql = qemu_mutex_iothread_locked();
+    bool has_cpu_lock = cpu_mutex_locked(cpu);
 
-    if (need_lock) {
-        qemu_mutex_lock_iothread();
+    if (has_bql) {
+        if (has_cpu_lock) {
+            atomic_set(&cpu->interrupt_request, cpu->interrupt_request & ~mask);
+        } else {
+            cpu_mutex_lock(cpu);
+            atomic_set(&cpu->interrupt_request, cpu->interrupt_request & ~mask);
+            cpu_mutex_unlock(cpu);
+        }
+        return;
+    }
+
+    if (has_cpu_lock) {
+        cpu_mutex_unlock(cpu);
     }
-    cpu->interrupt_request &= ~mask;
-    if (need_lock) {
-        qemu_mutex_unlock_iothread();
+    qemu_mutex_lock_iothread();
+    cpu_mutex_lock(cpu);
+    atomic_set(&cpu->interrupt_request, cpu->interrupt_request & ~mask);
+    qemu_mutex_unlock_iothread();
+    if (!has_cpu_lock) {
+        cpu_mutex_unlock(cpu);
     }
 }
-- 
2.17.1
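
(Likewise not part of the patch: a hypothetical caller-side sketch of the
OR-then-kick sequence the new comment describes, assuming the existing
qemu_cpu_kick(); sketch_send_interrupt is an invented name.)

/* Set the pending bit under the CPU lock, then kick the destination
 * vCPU so that it notices the new bit promptly. */
static void sketch_send_interrupt(CPUState *cpu, int mask)
{
    cpu_interrupt_request_or(cpu, mask); /* takes and drops the CPU lock */
    qemu_cpu_kick(cpu);
}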