From mboxrd@z Thu Jan  1 00:00:00 1970
From: Mike Snitzer
Subject: Re: RCU-ified dm-mpath for testing/review
Date: Wed, 10 Feb 2016 22:35:41 -0500
Message-ID: <20160211033541.GA4900@redhat.com>
References: <56B7659C.8040601@dev.mellanox.co.il>
 <56B772D6.2090403@sandisk.com>
 <56B77444.3030106@dev.mellanox.co.il>
 <56B776DE.30101@dev.mellanox.co.il>
 <20160207172055.GA6477@redhat.com>
 <56B99A49.5050400@suse.de>
 <20160209145547.GA21623@redhat.com>
 <56BA0689.9030007@suse.de>
 <20160210004518.GA23646@redhat.com>
 <20160211015030.GA4481@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Content-Disposition: inline
In-Reply-To: <20160211015030.GA4481@redhat.com>
Sender: dm-devel-bounces@redhat.com
Errors-To: dm-devel-bounces@redhat.com
To: Hannes Reinecke, Sagi Grimberg
Cc: "axboe@kernel.dk", "linux-block@vger.kernel.org", Christoph Hellwig,
 "linux-nvme@lists.infradead.org", "keith.busch@intel.com",
 device-mapper development, Bart Van Assche
List-Id: dm-devel.ids

On Wed, Feb 10 2016 at 8:50pm -0500,
Mike Snitzer wrote:

> On Tue, Feb 09 2016 at 7:45pm -0500,
> Mike Snitzer wrote:
>
> >
> > OK, I took a crack at embracing RCU.  Only slightly better performance
> > on my single NUMA node testbed.  (But I'll have to track down a system
> > with multiple NUMA nodes to do any justice to the next wave of this
> > optimization effort.)
> >
> > This RCU work is very heavy-handed and way too fiddly (there could
> > easily be bugs).  Anyway, please see:
> > http://git.kernel.org/cgit/linux/kernel/git/snitzer/linux.git/commit/?h=devel2&id=d80a7e4f8b5be9c81e4d452137623b003fa64745
> >
> > But this might give you something to build on to arrive at something
> > more scalable?
>
> I've a bit more polished version of this work (broken up into multiple
> commits, with some fixes, etc) here:
> http://git.kernel.org/cgit/linux/kernel/git/snitzer/linux.git/log/?h=devel3
>
> Hannes and/or Sagi, if you get a chance to try this on your NUMA system
> please let me know how it goes.

FYI, with these changes my single NUMA node testbed's read IOPs went from:

~1310K to ~1410K w/ nr_hw_queues dm-mq=4  and null_blk=4
~1330K to ~1415K w/ nr_hw_queues dm-mq=4  and null_blk=12
~1365K to ~1425K w/ nr_hw_queues dm-mq=12 and null_blk=12
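
In case it helps review, the general pattern these commits lean on is the
usual RCU one: readers in the fast path avoid the multipath spinlock via
rcu_read_lock()/rcu_dereference(), while updaters still serialize among
themselves and defer frees to a grace period.  Below is a minimal, made-up
sketch of that generic pattern only; none of these demo_* names exist in
dm-mpath, so please look at the devel2/devel3 branches above for the actual
changes:

/*
 * Illustrative sketch only -- not the dm-mpath code from devel2/devel3.
 * All demo_* names are invented for this example.
 */
#include <linux/blkdev.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct demo_path {
	struct block_device *bdev;
	struct rcu_head rcu;
};

static struct demo_path __rcu *demo_current_path;
static DEFINE_SPINLOCK(demo_update_lock);	/* updaters still serialize */

/*
 * Fast path: look up the current path without taking the spinlock.
 * The bdev's own lifetime is assumed to be pinned elsewhere (e.g. by
 * the loaded table) for as long as the caller uses it.
 */
static struct block_device *demo_select_path(void)
{
	struct demo_path *path;
	struct block_device *bdev = NULL;

	rcu_read_lock();
	path = rcu_dereference(demo_current_path);
	if (path)
		bdev = path->bdev;
	rcu_read_unlock();

	return bdev;
}

/* Slow path: publish a new path, free the old one after a grace period. */
static void demo_switch_path(struct demo_path *new_path)
{
	struct demo_path *old_path;

	spin_lock(&demo_update_lock);
	old_path = rcu_dereference_protected(demo_current_path,
					lockdep_is_held(&demo_update_lock));
	rcu_assign_pointer(demo_current_path, new_path);
	spin_unlock(&demo_update_lock);

	if (old_path)
		kfree_rcu(old_path, rcu);
}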