Date: Thu, 12 Nov 2015 05:50:24 -0800
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Boqun Feng
Cc: Oleg Nesterov, Peter Zijlstra, mingo@kernel.org,
	linux-kernel@vger.kernel.org, corbet@lwn.net, mhocko@kernel.org,
	dhowells@redhat.com, torvalds@linux-foundation.org, will.deacon@arm.com
Subject: Re: [PATCH 4/4] locking: Introduce smp_cond_acquire()
Message-ID: <20151112135024.GQ3972@linux.vnet.ibm.com>
References: <20151102132901.157178466@infradead.org>
	<20151102134941.005198372@infradead.org>
	<20151103175958.GA4800@redhat.com>
	<20151111093939.GA6314@fixme-laptop.cn.ibm.com>
	<20151111103456.GB6314@fixme-laptop.cn.ibm.com>
In-Reply-To: <20151111103456.GB6314@fixme-laptop.cn.ibm.com>

On Wed, Nov 11, 2015 at 06:34:56PM +0800, Boqun Feng wrote:
> On Wed, Nov 11, 2015 at 05:39:40PM +0800, Boqun Feng wrote:
> > Hi Oleg,
> >
> > On Tue, Nov 03, 2015 at 06:59:58PM +0100, Oleg Nesterov wrote:
> > [snip]
> > >
> > > Unfortunately this doesn't look exactly right...
> > >
> > > spin_unlock_wait() is not equal to "while (locked) relax", the latter
> > > is live-lockable or at least sub-optimal: we do not really need to spin
> >
> > Just curious, should spin_unlock_wait() semantically be an ACQUIRE?
>
> Hmm.. I guess I was wrong, it doesn't need to be an ACQUIRE, it needs
> only to use the control dependency to order the load of lock state and
> stores following it.

I must say that spin_unlock_wait() and friends are a bit tricky.  There
are only a few uses in drivers/ata/libata-eh.c, ipc/sem.c,
kernel/exit.c, kernel/sched/completion.c, and kernel/task_work.c.
Almost all of these seem to assume that spin_unlock_wait() provides no
ordering.  We might have just barely enough uses to produce useful
abstractions, but my guess is that it would not hurt to wait.

> > Because spin_unlock_wait() is used for us to wait for a certain lock to
> > RELEASE so that we can do something *after* we observe the RELEASE.
> > Consider the following example:
> >
> > 	CPU 0				CPU 1
> > 	============================	===========================
> > 	{ X = 0 }
> > 	WRITE_ONCE(X, 1);
> > 	spin_unlock(&lock);
> > 					spin_unlock_wait(&lock);
> > 					r1 = READ_ONCE(X);
> >
> > If spin_unlock_wait() is not an ACQUIRE, r1 can be 0 in this case,
> > right? Am I missing something subtle here? Or spin_unlock_wait() itself
> > doesn't have the ACQUIRE semantics, but it should always come with an
> > smp_mb() *following* it to achieve the ACQUIRE semantics? However in
> > do_exit(), an smp_mb() precedes raw_spin_unlock_wait() rather than
> > following it, which makes me confused... could you explain that?
> > Thank you ;-)
> >
>
> But still, there is one suspicious use of smp_mb() in do_exit():
>
> 	/*
> 	 * The setting of TASK_RUNNING by try_to_wake_up() may be delayed
> 	 * when the following two conditions become true.
> 	 *   - There is race condition of mmap_sem (It is acquired by
> 	 *     exit_mm()), and
> 	 *   - SMI occurs before setting TASK_RUNNING.
> 	 *     (or hypervisor of virtual machine switches to other guest)
> 	 *  As a result, we may become TASK_RUNNING after becoming TASK_DEAD
> 	 *
> 	 * To avoid it, we have to wait for releasing tsk->pi_lock which
> 	 * is held by try_to_wake_up()
> 	 */
> 	smp_mb();
> 	raw_spin_unlock_wait(&tsk->pi_lock);
>
> 	/* causes final put_task_struct in finish_task_switch(). */
> 	tsk->state = TASK_DEAD;
> 	tsk->flags |= PF_NOFREEZE;	/* tell freezer to ignore us */
> 	schedule();
>
> It seems that the smp_mb() isn't needed here, because the control
> dependency already orders the load of tsk->pi_lock against the store of
> tsk->state, and this control-dependency ordering pairs with the
> spin_unlock(->pi_lock) in try_to_wake_up() to avoid a data race on
> ->state.

The exit() path is pretty heavyweight, so I suspect that an extra
smp_mb() is down in the noise.  Or are you saying that this is somehow
unsafe?

							Thanx, Paul

> Regards,
> Boqun
>
> > Regards,
> > Boqun
> >
> > > until we observe !spin_is_locked(), we only need to synchronize with the
> > > current owner of this lock. Once it drops the lock we can proceed, we
> > > do not care if another thread takes the same lock right after that.
> > >
> > > Oleg.