From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Benjamin Marzinski"
Subject: Re: dm-multipath low performance with blk-mq
Date: Tue, 26 Jan 2016 09:57:26 -0600
Message-ID: <20160126155726.GS24960@octiron.msp.redhat.com>
In-Reply-To: <56A77C21.90605@suse.de>
References: <569E11EA.8000305@dev.mellanox.co.il> <20160119224512.GA10515@redhat.com> <20160125214016.GA10060@redhat.com> <20160125233717.GQ24960@octiron.msp.redhat.com> <20160126132939.GA23967@redhat.com> <56A77C21.90605@suse.de>
Reply-To: device-mapper development
To: device-mapper development
List-Id: dm-devel.ids

On Tue, Jan 26, 2016 at 03:01:05PM +0100, Hannes Reinecke wrote:
> On 01/26/2016 02:29 PM, Mike Snitzer wrote:
> > On Mon, Jan 25 2016 at 6:37pm -0500,
> > Benjamin Marzinski wrote:
> >
> >> On Mon, Jan 25, 2016 at 04:40:16PM -0500, Mike Snitzer wrote:
> >>> On Tue, Jan 19 2016 at 5:45pm -0500,
> >>> Mike Snitzer wrote:
> >>
> >> I don't think this is going to help __multipath_map() without some
> >> configuration changes. Now that we're running on already-merged
> >> requests instead of bios, m->repeat_count is almost always set to 1,
> >> so we call the path_selector every time, which means that we'll always
> >> need the write lock. Bumping up the number of IOs we send before calling
> >> the path selector again will give this patch a chance to do some good
> >> here.
> >>
> >> To do that you need to set:
> >>
> >> rr_min_io_rq
> >>
> >> in the defaults section of /etc/multipath.conf and then reload the
> >> multipathd service.
> >>
> >> The patch should hopefully help in multipath_busy() regardless of the
> >> rr_min_io_rq setting.
> >
> > This patch, while generic, is meant to help the blk-mq case. A blk-mq
> > request_queue doesn't have an elevator, so the requests will not have
> > seen merging.
> >
> > But yes, implied in the patch is the requirement to increase
> > m->repeat_count via multipathd's rr_min_io_rq (I'll backfill a proper
> > header once it is tested).
> >
> But that would defeat load balancing, would it not?
> I.e. when you want to do load balancing you would constantly change
> paths, thereby always taking the write lock.
> Which would render the patch pointless.

But putting in a large rr_min_io_rq value will allow us to validate that
the patch does help things, and that there's not another bottleneck hidden
right behind the spinlock.

> I was rather wondering if we could expose all active paths as
> hardware contexts and let blk-mq do the I/O mapping.
> That way we would only have to take the write lock if we have to
> choose a new pgpath/priority group, i.e. in the case where the active
> priority group goes down.
>
> Cheers,
>
> Hannes
> --
> Dr. Hannes Reinecke                   Teamlead Storage & Networking
> hare@suse.de                          +49 911 74053 688
> SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
> GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
> HRB 21284 (AG Nürnberg)
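
For testing, a minimal /etc/multipath.conf fragment along these lines
should do it (100 is just an illustrative value for the experiment, not
a tuned recommendation; anything well above the default of 1 lets the
path selector result be reused):

    defaults {
            # issue this many requests down a path before calling the
            # path selector again (100 is an arbitrary test value;
            # the default is 1)
            rr_min_io_rq 100
    }

and then reload the service so multipathd picks up the change, e.g.:

    systemctl reload multipathd

(or "service multipathd reload" on non-systemd setups).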