From: Trond Myklebust
Subject: Re: [RFC] Add support for semaphore-like structure with support for asynchronous I/O
Date: Mon, 04 Apr 2005 12:39:37 -0400
To: suparna@in.ibm.com
Cc: Andrew Morton, linux-kernel@vger.kernel.org, Linux Filesystem Development, linux-aio@kvack.org
In-Reply-To: <20050404155245.GA4659@in.ibm.com>

On Mon, 04.04.2005 at 21:22 (+0530), Suparna Bhattacharya wrote:
> [cc'ing linux-aio]
>
> The way I did this for semaphores was to modify the down routines to
> accept an explicit wait queue entry parameter, so the caller could set
> it up to be a synchronous waiter or a callback as appropriate. Of
> course, in the AIO case, the callback was set up to restart the iocb
> operation to continue the rest of its processing, which was when it
> acquired the lock. So, the callback in itself didn't need to remember
> the lock. The only grey area that I didn't resolve to satisfaction
>
> And oh yes, even though this was a really small change, having to
> update and verify those umpteen different arch-specific pieces was not
> entirely an exciting prospect - I got as far as x86, x86_64 and ppc64
> ... but stopped after that. It so happened that i_sem contention
> didn't show up as a major latency issue even without async semaphores
> in the kinds of workloads we were working with then.

Right. This is the main problem with using the ordinary semaphores:
changing all the different arch-specific versions, and then (not
least!) testing the resulting mess.

> Your approach looks reasonable and simple enough. It'd be useful if I
> could visualize the caller side of things as in the NFS client stream
> as you mention - do you plan to post that soon?
> I'm tempted to think about the possibility of using your iosems with
> retry-based AIO.

Our immediate concern was with handling the cleanup of NFSv4 state in
the case where a process was signalled. If you mount with "-ointr"
using the code in 2.6.12-rc1, then it can currently deadlock rpciod.
The iosems prevent the deadlock by allowing us to move the code that
tears down state into a workqueue item.

You can find the full series of NFS client patches against
2.6.12-rc1-bk4 and newer on

  http://client.linux-nfs.org/Linux-2.6.x/2.6.12-rc1-bk4/

The three patches that make use of the iosem code in order to do
asynchronous cleanup of state are:

  http://client.linux-nfs.org/Linux-2.6.x/2.6.12-rc1-bk4/linux-2.6.12-40-nfs4_state_locks.dif
  http://client.linux-nfs.org/Linux-2.6.x/2.6.12-rc1-bk4/linux-2.6.12-41-async_close.dif
  http://client.linux-nfs.org/Linux-2.6.x/2.6.12-rc1-bk4/linux-2.6.12-42-async_locku.dif

Cheers,
  Trond

--
Trond Myklebust
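
For illustration, a minimal sketch of the kind of primitive being discussed
above: a semaphore-like lock whose slow path takes an explicit,
caller-supplied waiter, so a synchronous caller sleeps while an asynchronous
caller (an AIO retry or a workqueue item) is notified through a callback when
the lock is handed over. This is an assumption-laden example, not the iosem
code from the patches linked above; all names (iosem, iosem_waiter,
iosem_down_prepare, iosem_grant_sync) are made up for this sketch.

/*
 * Illustrative sketch only -- not the iosem code from the patches above.
 * A semaphore-like lock whose slow path queues an explicit waiter supplied
 * by the caller; the waiter's callback decides what "waking up" means.
 */
#include <linux/list.h>
#include <linux/sched.h>
#include <linux/spinlock.h>

struct iosem_waiter {
        struct list_head list;
        void (*granted)(struct iosem_waiter *);  /* run by iosem_up() */
        struct task_struct *task;  /* used by the synchronous callback */
        int done;                  /* set once the lock has been granted */
};

struct iosem {
        spinlock_t lock;
        int count;                 /* 1 == free, 0 == held */
        struct list_head waiters;
};

static void iosem_init(struct iosem *sem)
{
        spin_lock_init(&sem->lock);
        sem->count = 1;
        INIT_LIST_HEAD(&sem->waiters);
}

/* Take the lock if it is free; otherwise queue the waiter and return
 * nonzero, leaving it to the caller to decide how (or whether) to wait. */
static int iosem_down_prepare(struct iosem *sem, struct iosem_waiter *waiter)
{
        int queued = 0;

        spin_lock(&sem->lock);
        if (sem->count > 0)
                sem->count--;
        else {
                list_add_tail(&waiter->list, &sem->waiters);
                queued = 1;
        }
        spin_unlock(&sem->lock);
        return queued;
}

/* Release the lock. If anyone is queued, hand the lock straight to the
 * first waiter and run its callback: wake a task, queue a work item,
 * restart an iocb -- whatever the waiter was set up to do. */
static void iosem_up(struct iosem *sem)
{
        struct iosem_waiter *waiter = NULL;

        spin_lock(&sem->lock);
        if (list_empty(&sem->waiters))
                sem->count++;
        else {
                waiter = list_entry(sem->waiters.next,
                                    struct iosem_waiter, list);
                list_del(&waiter->list);
        }
        spin_unlock(&sem->lock);

        if (waiter != NULL)
                waiter->granted(waiter);
}

/* Synchronous flavour of the callback: wake the sleeping task. */
static void iosem_grant_sync(struct iosem_waiter *waiter)
{
        struct task_struct *task = waiter->task;

        waiter->done = 1;
        wake_up_process(task);
}

static void iosem_down(struct iosem *sem)
{
        struct iosem_waiter waiter = {
                .granted = iosem_grant_sync,
                .task    = current,
        };

        if (!iosem_down_prepare(sem, &waiter))
                return;            /* uncontended fast path */

        /* iosem_up() sets waiter.done before waking us. */
        while (!waiter.done) {
                set_current_state(TASK_UNINTERRUPTIBLE);
                if (!waiter.done)
                        schedule();
        }
        __set_current_state(TASK_RUNNING);
}

An asynchronous user would supply its own waiter whose granted() callback
queues a work item (or re-kicks an iocb) instead of calling wake_up_process(),
which is what lets the NFSv4 state teardown above move into a workqueue item
rather than blocking rpciod.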