Date: Fri, 22 Jan 2016 09:48:22 -0500
From: Tejun Heo
To: Shaohua Li
Cc: linux-kernel@vger.kernel.org, axboe@kernel.dk, vgoyal@redhat.com,
	jmoyer@redhat.com, Kernel-team@fb.com
Subject: Re: [RFC 0/3] block: proportional based blk-throttling
Message-ID: <20160122144822.GA32380@htj.duckdns.org>
In-Reply-To: <20160122000015.GA4066045@devbig084.prn1.facebook.com>

Hello, Shaohua.

On Thu, Jan 21, 2016 at 04:00:16PM -0800, Shaohua Li wrote:
> > The thing is that most of the possible contentions can be removed by
> > implementing a per-cpu cache, which shouldn't be too difficult.  10%
> > extra cost on current gen hardware is already pretty high.
>
> I did think about this.  A per-cpu cache does sound straightforward, but
> it could severely impact fairness.  For example, we give each cpu a
> budget, say 1MB.  If a cgroup doesn't use the 1MB budget, we don't hold
> the lock.  But if we have 128 CPUs, the cgroup can use 128 * 1MB more
> budget, which badly breaks fairness.  I have no idea how this can be
> fixed.

Let's say the per-cgroup buffer budget B is calculated as, say, 100ms
worth of IO cost (or bandwidth or iops) available to the cgroup.
In practice, this may have to be adjusted down depending on the number of
cgroups performing active IOs.  For a given cgroup, B can then be
distributed among the CPUs that are actively issuing IOs in that cgroup.
If too many CPUs are active for the budget available, this degenerates to
round-robin over small slices, but in most cases it will eliminate the
bulk of the cross-CPU traffic.

> > They're way more predictable than rotational devices when measured
> > over a period.  I don't think we'll be able to measure anything
> > meaningful at individual command level but aggregate numbers should
> > be fairly stable.  A simple approximation of IO cost such as fixed
> > cost per IO + cost proportional to IO size would do a far better job
> > than just depending on bandwidth or iops and that requires
> > approximating two variables over time.  I'm not sure how easy /
> > feasible that actually would be tho.
>
> It still sounds like IO time, otherwise I can't imagine we can measure
> the cost.  If we use some sort of aggregate number, it looks like a
> variation of bandwidth, e.g. cost = bandwidth/ios.

I think the cost of an IO can be approximated by a fixed per-IO cost plus
a cost proportional to its size, so

  cost = F + R * size

> I understand you probably want something like: get the disk's total
> resource, predict the resource cost of each IO, and then use that info
> to arbitrate cgroups.  I don't know how that's possible.  A disk which
> uses all its resources can still accept new IO queuing.  Maybe someday
> a fancy device can export the info.

I don't know exactly how either; however, I don't want a situation where
we implement something just because it's easy, regardless of whether it's
actually useful.  We've done that multiple times in cgroup and those
features tend to become useless baggage which gets in the way of proper
solutions.  Things don't have to be perfect from the beginning, but at
least the abstractions and interfaces we expose must be relevant to the
capability that userland wants.
It isn't uncommon for devices to show close to, or over, an order of
magnitude difference in bandwidth between 4k random and sequential IO
patterns.  What userland wants is proportional distribution of IO
resources.  I can't see how lumping together numbers that differ by an
order of magnitude could represent that, or anything, really.

I understand that it is a difficult and nasty problem, but we'll just
have to solve it.  I'll think more about it too.

Thanks.

-- 
tejun