From: David Hildenbrand
Date: Mon, 11 Dec 2017 14:47:27 +0100
Message-Id: <20171211134740.8235-3-david@redhat.com>
In-Reply-To: <20171211134740.8235-1-david@redhat.com>
References: <20171211134740.8235-1-david@redhat.com>
Subject: [Qemu-devel] [PATCH v1 for-2.12 02/15] cpu-exec: fix missed CPU kick during interrupt injection
To: qemu-s390x@nongnu.org, qemu-devel@nongnu.org
Cc: Christian Borntraeger, Cornelia Huck, Richard Henderson, Alexander Graf,
    Paolo Bonzini, Peter Crosthwaite, Thomas Huth, David Hildenbrand

The conditional memory barrier not only looks strange, it is actually
wrong.

On s390x, I can reproduce interrupts delivered via cpu_interrupt()
occasionally failing to kick the CPU out of emulation. cpu_interrupt()
is used in particular for inter-CPU communication via SIGP (especially
external calls and emergency interrupts).

With this patch, I was no longer able to reproduce the problem (in
particular, no stalls or hangs in the guest).

My setup: s390x MTTCG with 16 VCPUs on an 8-CPU host, running make -j16.

Signed-off-by: David Hildenbrand
---
 accel/tcg/cpu-exec.c | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index 9b544d88c8..dfba5ebd29 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -525,19 +525,15 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
                                         TranslationBlock **last_tb)
 {
     CPUClass *cc = CPU_GET_CLASS(cpu);
-    int32_t insns_left;
 
     /* Clear the interrupt flag now since we're processing
      * cpu->interrupt_request and cpu->exit_request.
      */
-    insns_left = atomic_read(&cpu->icount_decr.u32);
     atomic_set(&cpu->icount_decr.u16.high, 0);
-    if (unlikely(insns_left < 0)) {
-        /* Ensure the zeroing of icount_decr comes before the next read
-         * of cpu->exit_request or cpu->interrupt_request.
-         */
-        smp_mb();
-    }
+    /* Ensure zeroing happens before reading cpu->exit_request or
+     * cpu->interrupt_request. (also see cpu_exit())
+     */
+    smp_mb();
 
     if (unlikely(atomic_read(&cpu->interrupt_request))) {
         int interrupt_request;
-- 
2.14.3
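
For reference, the ordering requirement the patch restores can be modeled
with a few lines of C11 atomics. This is only an illustrative sketch, not
QEMU code: the names work_pending and kick_pending are hypothetical
stand-ins for cpu->interrupt_request / cpu->exit_request and
cpu->icount_decr.u16.high, and the seq_cst fences stand in for QEMU's
smp_wmb()/smp_mb().

/* Minimal C11 model of the store/load ordering the patch relies on.
 * Illustrative only; the variables approximate the QEMU fields named
 * in the comments.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int work_pending;   /* cf. cpu->interrupt_request / exit_request */
static atomic_int kick_pending;   /* cf. cpu->icount_decr.u16.high */

/* Kicking side (cf. cpu_interrupt()/cpu_exit()): publish the work,
 * then raise the flag that forces the vCPU out of the TB loop. */
static void kicker(void)
{
    atomic_store_explicit(&work_pending, 1, memory_order_relaxed);
    atomic_thread_fence(memory_order_seq_cst);   /* cf. barrier in cpu_exit() */
    atomic_store_explicit(&kick_pending, 1, memory_order_relaxed);
}

/* vCPU side (cf. cpu_handle_interrupt()): acknowledge the kick first,
 * then look for work.  The fence between the store and the load must be
 * unconditional: without it, the store of 0 to kick_pending may be
 * reordered after the load of work_pending, so a kick that races with
 * the acknowledge is consumed while the pending work goes unnoticed. */
static bool vcpu_check(void)
{
    atomic_store_explicit(&kick_pending, 0, memory_order_relaxed);
    atomic_thread_fence(memory_order_seq_cst);   /* the unconditional smp_mb() */
    return atomic_load_explicit(&work_pending, memory_order_relaxed) != 0;
}

int main(void)
{
    kicker();
    printf("work seen: %d\n", vcpu_check());
    return 0;
}

The point of the patch is the unconditional fence on the consumer side:
making it conditional on insns_left < 0 leaves a window in which the
acknowledge store and the subsequent load can be reordered, so a
concurrent kick is lost and the vCPU re-enters the TB loop without ever
servicing the interrupt.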