From: Tejun Heo
Subject: Re: [LSF/MM TOPIC] [ATTEND] Throttling I/O
Date: Fri, 25 Jan 2013 09:57:11 -0800
Message-ID: <20130125175711.GJ3081@htj.dyndns.org>
References: <51028666.1080109@suse.com>
In-Reply-To: <51028666.1080109@suse.com>
To: Suresh Jayaraman
Cc: lsf-pc@lists.linux-foundation.org, linux-fsdevel@vger.kernel.org, Fengguang Wu, Andrea Righi, Vivek Goyal, Jan Kara, Jeff Moyer

Hey, Suresh.

On Fri, Jan 25, 2013 at 06:49:34PM +0530, Suresh Jayaraman wrote:
> - Making cfq schedule the per cgroup sync/async queues according to I/O
>   weights would mean that we'll need to use per cgroup cfqq's instead
>   of per process?  What will the impact on sync latencies be if, for
>   example, we have many sync-only tasks in one cgroup and many async
>   tasks in another?  If BLK_CGROUP is not configured, what would be the
>   fallback behavior?

So, we currently have sync cfqqs in cgroup cfqgs and shared cfqqs in
the root cfqg.  The end result would be splitting the shared cfqqs into
cgroup cfqgs.  We may have to change how cfqgs are chosen depending on
whether a cfqg only has async IOs pending.  Not sure.

> - Suppose we have 100 cgroups and we are to have one cfqq per priority
>   per cgroup; this would mean we'll require 100 x 3 x 8 = 2400 cfqq's
>   (3 classes and 8 priorities) in the worst case (as opposed to the
>   current 24 cfqqs)?  This may not be as drastic as it sounds, as we
>   create cfqq's only on demand and we normally won't have tasks with
>   every priority and every class?

I don't think that's a problem.  We already have a cfqq per active IO
context, which can go way beyond 10k depending on the workload.

Thanks.

--
tejun
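
P.S. For concreteness, a minimal structural sketch of what splitting
the shared async cfqqs into cgroup cfqgs could look like.  The field
names (async_cfqq[], async_idle_cfqq, IOPRIO_BE_NR) are the ones used
in cfq-iosched.c today, where the async queue pointers hang off struct
cfq_data, i.e. effectively the root group; moving them into struct
cfq_group is an assumption about the direction discussed above, not an
actual patch.

	/*
	 * Sketch only: each blkcg group would own its async queues
	 * instead of sharing the single set hanging off cfq_data.
	 * IOPRIO_BE_NR (8) comes from <linux/ioprio.h>.
	 */
	struct cfq_group {
		/* ... existing service trees, weight, stats ... */

		/*
		 * One async cfqq per (class, ioprio), allocated on
		 * demand: [0][] for RT, [1][] for BE, plus a single
		 * IDLE queue.  Today these pointers live in cfq_data.
		 */
		struct cfq_queue *async_cfqq[2][IOPRIO_BE_NR];
		struct cfq_queue *async_idle_cfqq;
	};

With something like that in place, the async lookup in cfq_get_queue()
would presumably go through the task's cfqg rather than through cfqd,
which is what would make async IO schedulable per cgroup weight.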