From mboxrd@z Thu Jan  1 00:00:00 1970
From: Jason Gunthorpe
Subject: Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask
Date: Mon, 23 Jul 2018 10:49:10 -0600
Message-ID: <20180723164910.GS31540@mellanox.com>
References: <40d49fe1-c548-31ec-7daa-b19056215d69@mellanox.com> <243215dc-2b06-9c99-a0cb-8a45e0257077@opengridcomputing.com> <3f827784-3089-2375-9feb-b3c1701d7471@mellanox.com> <01cd01d41dce$992f4f30$cb8ded90$@opengridcomputing.com> <0834cae6-33d6-3526-7d85-f5cae18c5487@grimberg.me> <9a4d8d50-19b0-fcaa-d4a3-6cfa2318a973@mellanox.com> <02dc01d41ecd$9cc8a0b0$d659e210$@opengridcomputing.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Disposition: inline
Sender: netdev-owner@vger.kernel.org
To: Max Gurtovoy
Cc: Steve Wise, 'Sagi Grimberg', 'Leon Romanovsky', 'Doug Ledford', 'RDMA mailing list', 'Saeed Mahameed', 'linux-netdev'
List-Id: linux-rdma@vger.kernel.org

On Fri, Jul 20, 2018 at 04:25:32AM +0300, Max Gurtovoy wrote:
>
> >>> [ 2032.194376] nvme nvme0: failed to connect queue: 9 ret=-18
> >>
> >> queue 9 is not mapped (overlap).
> >> please try the below:
> >>
> >
> > This seems to work.  Here are three mapping cases: each vector on its
> > own cpu, each vector on one cpu within the local numa node, and each
> > vector having all cpus in its numa node.  The 2nd mapping looks kinda
> > funny, but I think it achieved what you wanted?  And all the cases
> > resulted in successful connections.
> >
>
> Thanks for testing this.
> I slightly improved the assignment of the leftover CPUs and actually used
> Sagi's initial proposal.
>
> Sagi,
> please review the attached patch and let me know if I should add your
> signature to it.
> I'll run some perf tests on it early next week (meanwhile I have
> successfully run login/logout with different num_queues and IRQ settings).
>
> Steve,
> It would be great if you could apply the attached patch on your system and
> send your findings.
>
> Regards,
> Max

So the conclusion to this thread is that Leon's mlx5 patch needs to wait
until this blk-mq patch is accepted?

Thanks,
Jason