Date: Sat, 2 May 2015 21:52:30 -0400
From: Tejun Heo <tj@kernel.org>
To: Ming Lei
Cc: Christoph Hellwig, Jens Axboe, Linux Kernel Mailing List,
	"Justin M. Forbes", Jeff Moyer, "v4.0"
Subject: Re: [PATCH v6] block: loop: avoiding too many pending per work I/O
Message-ID: <20150503015230.GG1949@htj.duckdns.org>
References: <1430450881-10881-1-git-send-email-ming.lei@canonical.com>
	<20150501101737.GA18577@infradead.org>
	<20150501142221.GC1949@htj.duckdns.org>
X-Mailing-List: linux-kernel@vger.kernel.org

Hello,

On Sat, May 02, 2015 at 10:56:20PM +0800, Ming Lei wrote:
> > Maybe just cap max_active to NR_OF_LOOP_DEVS * 16 or sth? But idk,
>
> It might not work because there are nested loop devices like fedora live CD, and
> in theory the max_active should have been set as loop's queue depth *
> nr_loop, otherwise there may be possibility of hanging.
>
> So this patch is introduced.

If loop devices can be stacked, then regardless of what you do with
max_active it may deadlock.  There needs to be a rescuer for each
nesting level (or just one per device).  This means that the current
code is broken.

> > how many concurrent workers are we talking about and why are we
> > capping per-queue concurrency from worker pool side instead of command
> > tag side?
>
> I think there should be performance advantage to make queue depth a bit more
> because it can help to make queue pipeline as full.
> Also queue depth often
> means how many requests the hardware can queue, and it is a bit different
> with per-queue concurrency.

I'm not really following.  Can you please elaborate?

Thanks.

-- 
tejun