From mboxrd@z Thu Jan 1 00:00:00 1970
From: Tejun Heo
To: Vivek Goyal
Cc: Suresh Jayaraman, lsf-pc@lists.linux-foundation.org,
	linux-fsdevel@vger.kernel.org, Fengguang Wu, Andrea Righi,
	Jan Kara, Jeff Moyer
Subject: Re: [LSF/MM TOPIC] [ATTEND] Throttling I/O
Date: Fri, 25 Jan 2013 09:52:33 -0800
Message-ID: <20130125175233.GI3081@htj.dyndns.org>
In-Reply-To: <20130125163408.GE6197@redhat.com>
References: <51028666.1080109@suse.com> <20130125163408.GE6197@redhat.com>

Hey, guys.

On Fri, Jan 25, 2013 at 11:34:08AM -0500, Vivek Goyal wrote:
> And I think tejun wanted to implement throttling at the block layer and
> wanted the vm to adjust/respond to per-group IO backlog when it comes
> to writing out dirty data/inodes.
>
> Once we have taken care of the writeback problem, then comes the issue
> of being able to associate a dirty inode/page with a cgroup. Not sure
> if something has happened on that front or not. In the past it was
> thought to be simple: one inode belongs to one IO cgroup.

Yeap, the above two sum it up pretty well.

> Also, seriously, in CFQ the group idling performance penalty is too
> high and might start showing up easily even on a single-spindle SATA
> disk. Especially given that people will come up with hybrid SATA
> drives with some internal caching, so SATA drives will not be as slow.
>
> So proportional group scheduling in CFQ is limited to the specific
> corner case of slow SATA drives. I am not sure how many people really
> use it.

I don't think so. For personal usage, sure, it's not very useful, but
then again proportional IO control itself isn't all that useful for
personal use. If you go to backend infrastructure requiring a lot of
capacity, though, spindled drives still rule the roost, and large-scale
deployment of on-device flash cache is not as imminent, if it ever
happens at all. Spinning drives may not be in your desktops/laptops,
but they continue to be deployed massively in the backend.

For example, Google has been using half-hacky hierarchical writeback
support in cfq for quite some time now, and they'll switch to the
upstream implementation once we get it working, so I don't think it's
a wasted effort.

Thanks.

--
tejun