Date: Sat, 24 Sep 2016 07:52:53 -0400 (EDT)
From: Paolo Bonzini
Message-ID: <1267831407.2636295.1474717973602.JavaMail.zimbra@redhat.com>
In-Reply-To: <20d59fed-4187-28d2-c179-5e66571d5e49@twiddle.net>
References: <1474615909-17069-1-git-send-email-pbonzini@redhat.com>
 <1474615909-17069-17-git-send-email-pbonzini@redhat.com>
 <20d59fed-4187-28d2-c179-5e66571d5e49@twiddle.net>
Subject: Re: [Qemu-devel] [PATCH 16/16] cpus-common: lock-free fast path for cpu_exec_start/end
To: Richard Henderson
Cc: qemu-devel@nongnu.org, serge fdrv, cota@braap.org, alex bennee, sergey fedorov

----- Original Message -----
> From: "Richard Henderson"
> To: "Paolo Bonzini", qemu-devel@nongnu.org
> Cc: "serge fdrv", cota@braap.org, "alex bennee", "sergey fedorov"
> Sent: Friday, September 23, 2016 8:23:46 PM
> Subject: Re: [Qemu-devel] [PATCH 16/16] cpus-common: lock-free fast path for cpu_exec_start/end
>
> On 09/23/2016 12:31 AM, Paolo Bonzini wrote:
> > +    if (atomic_read(&other_cpu->running)) {
> ...
> > +    atomic_set(&cpu->running, true);
> ...
> > +            cpu->running = false;
> ...
> > +            cpu->running = true;
>
> Inconsistent use of atomics.  I don't see that the cpu_list_lock protects
> the last two lines in any way.

It does:

    qemu_mutex_lock(&qemu_cpu_list_lock);
    if (!cpu->has_waiter) {
        /* Not counted in pending_cpus, let the exclusive item
         * run.  Since we have the lock, just set cpu->running to true
         * while holding it; no need to check pending_cpus again.
         */
        cpu->running = false;
        exclusive_idle();
        /* Now pending_cpus is zero.  */
        cpu->running = true;
    } else {
        /* Counted in pending_cpus, go ahead and release the
         * waiter at cpu_exec_end.
         */
    }
    qemu_mutex_unlock(&qemu_cpu_list_lock);

but I can change it anyway to atomic_set.

Paolo
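
[Editor's note: for reference, a minimal sketch of what the atomic_set change
Paolo offers might look like, reusing the slow-path snippet quoted above
(qemu_cpu_list_lock, cpu->has_waiter and exclusive_idle() are taken from that
snippet); this is an illustration, not the committed code.]

    /* Slow path of cpu_exec_start() with the plain stores replaced by
     * atomic_set().  Everything here still runs under qemu_cpu_list_lock;
     * atomic_set() only makes the stores notationally consistent with the
     * atomic_read()/atomic_set() used on the lock-free fast path.
     */
    qemu_mutex_lock(&qemu_cpu_list_lock);
    if (!cpu->has_waiter) {
        /* Not counted in pending_cpus, wait for the exclusive item
         * to finish before running again.
         */
        atomic_set(&cpu->running, false);
        exclusive_idle();
        /* Now pending_cpus is zero.  */
        atomic_set(&cpu->running, true);
    } else {
        /* Counted in pending_cpus, the waiter is released at
         * cpu_exec_end.
         */
    }
    qemu_mutex_unlock(&qemu_cpu_list_lock);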