From: Paolo Bonzini <pbonzini@redhat.com>
Date: Fri, 19 Jan 2018 13:19:46 +0100
Subject: Re: [Qemu-devel] [RFC PATCH v4 13/23] cpus: only take BQL for sleeping threads
To: Pavel Dovgalyuk, 'Pavel Dovgalyuk', qemu-devel@nongnu.org
Cc: kwolf@redhat.com, peter.maydell@linaro.org, boost.lists@gmail.com,
 quintela@redhat.com, jasowang@redhat.com, mst@redhat.com, zuban32s@gmail.com,
 maria.klimushenkova@ispras.ru, kraxel@redhat.com, alex.bennee@linaro.org
Message-ID: <8aa1900f-2663-43bd-dab5-001be0aede09@redhat.com>
In-Reply-To: <002a01d3911d$dc13ca80$943b5f80$@ru>
References: <20180119084235.7100.98318.stgit@pasha-VirtualBox>
 <20180119084417.7100.69568.stgit@pasha-VirtualBox>
 <002a01d3911d$dc13ca80$943b5f80$@ru>

On 19/01/2018 13:05, Pavel Dovgalyuk wrote:
>> From: Paolo Bonzini [mailto:pbonzini@redhat.com]
>> On 19/01/2018 09:44, Pavel Dovgalyuk wrote:
>>>      while (all_cpu_threads_idle()) {
>>> +        qemu_mutex_lock_iothread();
>>>          stop_tcg_kick_timer();
>>>          qemu_cond_wait(cpu->halt_cond, &qemu_global_mutex);
>>> +        qemu_mutex_unlock_iothread();
>>>      }
>>
>> cpu_has_work cannot be called outside the BQL yet.  You first need to
>> access cpu->interrupt_request with atomics.
>>
>> In general, testing the condition outside the mutex is a very dangerous
>> pattern (and I'm usually the one who enjoys dangerous patterns).
>
> Does that mean I'll have to fix all the has_work functions to avoid races,
> because x86_cpu_has_work may have them?

Why only x86_cpu_has_work?  Even reading cs->interrupt_request outside
the mutex is unsafe.

Paolo

> static bool x86_cpu_has_work(CPUState *cs)
> {
>     X86CPU *cpu = X86_CPU(cs);
>     CPUX86State *env = &cpu->env;
>
>     return ((cs->interrupt_request & (CPU_INTERRUPT_HARD |
>                                       CPU_INTERRUPT_POLL)) &&
>             (env->eflags & IF_MASK)) ||
>            (cs->interrupt_request & (CPU_INTERRUPT_NMI |
>                                      CPU_INTERRUPT_INIT |
>                                      CPU_INTERRUPT_SIPI |
>                                      CPU_INTERRUPT_MCE)) ||
>            ((cs->interrupt_request & CPU_INTERRUPT_SMI) &&
>             !(env->hflags & HF_SMM_MASK));
> }
>
> Pavel Dovgalyuk
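
As an illustration of the atomic-access point above, here is a minimal sketch
(not part of the patch series and not necessarily what was merged) of how
x86_cpu_has_work might read cs->interrupt_request without holding the BQL,
assuming QEMU's atomic_read() from include/qemu/atomic.h; the corresponding
writers of interrupt_request would also need matching atomic updates for this
to be safe.

/* Hypothetical sketch only: snapshot interrupt_request once with
 * atomic_read() so the flag tests below see a single consistent value
 * even when called outside the BQL.  Writers of cs->interrupt_request
 * would need matching atomic stores for this to be race-free. */
static bool x86_cpu_has_work(CPUState *cs)
{
    X86CPU *cpu = X86_CPU(cs);
    CPUX86State *env = &cpu->env;
    uint32_t interrupt_request = atomic_read(&cs->interrupt_request);

    return ((interrupt_request & (CPU_INTERRUPT_HARD |
                                  CPU_INTERRUPT_POLL)) &&
            (env->eflags & IF_MASK)) ||
           (interrupt_request & (CPU_INTERRUPT_NMI |
                                 CPU_INTERRUPT_INIT |
                                 CPU_INTERRUPT_SIPI |
                                 CPU_INTERRUPT_MCE)) ||
           ((interrupt_request & CPU_INTERRUPT_SMI) &&
            !(env->hflags & HF_SMM_MASK));
}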