From mboxrd@z Thu Jan 1 00:00:00 1970
From: Christoph Hellwig
Subject: Re: [RFC PATCH 4/6] isci: hardware / topology event handling
Date: Fri, 25 Mar 2011 15:45:30 -0400
Message-ID: <20110325194530.GA593@infradead.org>
References: <20110207003056.27040.89174.stgit@localhost6.localdomain6>
 <20110207003455.27040.94947.stgit@localhost6.localdomain6>
 <20110318161852.GA19008@infradead.org>
 <20110323084054.GA11533@infradead.org>
 <20110323090824.GA14536@infradead.org>
 <20110324062646.GA27051@infradead.org>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Return-path:
Received: from bombadil.infradead.org ([18.85.46.34]:45046 "EHLO
 bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with
 ESMTP id S1751384Ab1CYTpe (ORCPT );
 Fri, 25 Mar 2011 15:45:34 -0400
Content-Disposition: inline
In-Reply-To:
Sender: linux-scsi-owner@vger.kernel.org
List-Id: linux-scsi@vger.kernel.org
To: Dan Williams
Cc: Christoph Hellwig, james.bottomley@suse.de, dave.jiang@intel.com,
 linux-scsi@vger.kernel.org, jacek.danecki@intel.com,
 ed.ciechanowski@intel.com, jeffrey.d.skirvin@intel.com,
 edmund.nadolski@intel.com

On Thu, Mar 24, 2011 at 05:57:17PM -0700, Dan Williams wrote:
> I have been waiting for this issue to be raised.  This was one of the
> first items brought up in our internal reviews of the Linux driver.
> Why does it still exist and what is the rationale for addressing it
> incrementally?:
>
> Starting with simple locking and then scaling it is arguably easier
> than unwinding a more complex locking scheme implemented too early in
> the design phase.

I don't care about the lock scalability.  The problem with a global
spinlock is that many of the primitives a driver needs have to block,
and with a global spinlock that's almost impossible to handle: you'd
have to drop the lock, and then you have very little chance of figuring
out what state it actually protected that now needs to be re-checked.

Things are a little better with a global sleeping lock, as it at least
allows you to block, as long as you don't actually plan to hold it over
I/O.
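
To make the spinlock problem a bit more concrete, this is roughly the
drop-and-recheck dance I mean.  Purely illustrative -- the structure,
field and function names below are made up and are not the actual isci
code:

#include <linux/spinlock.h>
#include <linux/wait.h>
#include <linux/types.h>

/* hypothetical host structure, not anything from the isci driver */
struct example_host {
        spinlock_t              lock;           /* the one big lock */
        wait_queue_head_t       reset_wq;
        bool                    resetting;      /* state the lock guards */
};

static void example_wait_for_reset(struct example_host *ihost)
{
        unsigned long flags;

        spin_lock_irqsave(&ihost->lock, flags);
        while (ihost->resetting) {
                /* anything that sleeps must run with the lock dropped */
                spin_unlock_irqrestore(&ihost->lock, flags);
                wait_event(ihost->reset_wq, !ihost->resetting);
                spin_lock_irqsave(&ihost->lock, flags);
                /*
                 * Everything the lock protected may have changed while
                 * it was dropped, so it all has to be re-checked here --
                 * the part that gets missed when one lock covers lots of
                 * unrelated state.
                 */
        }
        spin_unlock_irqrestore(&ihost->lock, flags);
}

With a sleeping lock you at least get to skip the unlock/relock around
every blocking call, but as said above, holding it over I/O is still not
an option.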