From: Dave Chinner
Subject: Re: [Lsf] IO less throttling and cgroup aware writeback
Date: Fri, 8 Apr 2011 11:25:56 +1000
Message-ID: <20110408012556.GU31057@dastard>
References: <20110401214947.GE6957@dastard> <20110405131359.GA14239@redhat.com> <20110405225639.GB31057@dastard> <20110406153954.GB18777@redhat.com> <20110406233602.GK31057@dastard> <20110407192424.GE27778@redhat.com> <20110407234249.GE30279@dastard>
To: Greg Thelen
Cc: Vivek Goyal, Curt Wohlgemuth, James Bottomley, lsf@lists.linux-foundation.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org

On Thu, Apr 07, 2011 at 05:59:35PM -0700, Greg Thelen wrote:
> cc: linux-mm
>
> Dave Chinner writes:
>
> > On Thu, Apr 07, 2011 at 03:24:24PM -0400, Vivek Goyal wrote:
> >> On Thu, Apr 07, 2011 at 09:36:02AM +1000, Dave Chinner wrote:
> > [...]
> >> > > When I_DIRTY is cleared, remove the inode from bdi_memcg->b_dirty.
> >> > > Delete the bdi_memcg if the list is now empty.
> >> > >
> >> > > balance_dirty_pages() calls mem_cgroup_balance_dirty_pages(memcg, bdi).
> >> > > If over the bg limit, then
> >> > >     set bdi_memcg->b_over_limit.
> >> > >     If there is no bdi_memcg (because all inodes of current's
> >> > >     memcg dirty pages were first dirtied by another memcg) then
> >> > >     scan the memcg LRU to find an inode and call
> >> > >     writeback_single_inode(). This is to handle uncommon sharing.
> >> >
> >> > We don't want to introduce any new IO sources into
> >> > balance_dirty_pages().
This needs to trigger memcg-LRU based bdi
> >> > flusher writeback, not try to write back inodes itself.
> >>
> >> Won't we get more sequential IO traffic once we find an inode by
> >> traversing the memcg->lru list? So isn't that better than pure LRU
> >> based flushing?
> >
> > Sorry, I wasn't particularly clear there. What I meant was that we
> > ask the bdi-flusher thread to select the inode to write back from
> > the LRU, not do it directly from balance_dirty_pages(). i.e.
> > bdp stays IO-less.
> >
> >> > Alternatively, this problem won't exist if you transfer page cache
> >> > state from one memcg to another when you move the inode from one
> >> > memcg to another.
> >>
> >> But in the case of a shared inode the problem still remains. The
> >> inode is being written from two cgroups, and it can't be in both
> >> groups under the existing design.
> >
> > But we've already determined that there is no use case for this
> > shared inode behaviour, so we aren't going to explicitly support it,
> > right?
>
> I am thinking that we should avoid ever scanning the memcg lru for dirty
> pages or corresponding dirty inodes previously associated with another
> memcg. I think the only reason we considered scanning the lru was to
> handle the unexpected shared inode case. When such inode sharing occurs
> the sharing memcg will not be confined to the memcg's dirty limit.
> There's always the memcg hard limit to cap memcg usage.

Yup, fair enough.

> I'd like to add a counter (or at least a tracepoint) to record when such
> unsupported usage is detected.

Definitely. Very good idea.

> 1. memcg_1/process_a writes to /var/log/messages and closes the file.
>    This marks the inode in the bdi_memcg for memcg_1.
>
> 2. memcg_2/process_b continually writes to /var/log/messages. This
>    drives up memcg_2 dirty memory usage to the memcg_2 background
>    threshold. mem_cgroup_balance_dirty_pages() would normally mark the
>    corresponding bdi_memcg as over-bg-limit, kick the bdi flusher, and
>    then return to the dirtying process. However, there is no bdi_memcg
>    because there are no dirty inodes for memcg_2. So the bdi flusher
>    sees no bdi_memcg marked over-limit and writes nothing (assuming
>    we're still below the system background threshold).
>
> 3. memcg_2/process_b continues writing to /var/log/messages, hitting the
>    memcg_2 dirty memory foreground threshold. Using IO-less
>    balance_dirty_pages(), mem_cgroup_balance_dirty_pages() would
>    normally block waiting for the previously kicked bdi flusher to clean
>    some memcg_2 pages. In this case mem_cgroup_balance_dirty_pages()
>    sees no bdi_memcg and concludes that the bdi flusher will not be
>    lowering memcg dirty memory usage. This is the unsupported sharing
>    case, so mem_cgroup_balance_dirty_pages() fires a tracepoint and just
>    returns, allowing memcg_2 dirty memory to exceed its foreground limit
>    and grow upwards to the memcg_2 memory limit_in_bytes. Once
>    limit_in_bytes is hit, per-memcg direct reclaim will recycle memcg_2
>    pages, including the previously written memcg_2 /var/log/messages
>    dirty pages.

Thanks for the good, simple example.

> By cutting out lru scanning the code should be simpler and still
> handle the common case well.

Agreed.

> If we later find that this supposedly uncommon shared inode case is
> important then we can either implement the previously described lru
> scanning in mem_cgroup_balance_dirty_pages() or consider extending the
> bdi/memcg/inode data structures (perhaps with a memcg_mapping) to
> describe such sharing.

Hmm, another idea I just had. What we're trying to avoid is the need to
a) track inodes in multiple lists, and b) scan to find something
appropriate to write back. Rather than tracking at page or inode
granularity, how about tracking "associated" memcgs at the memcg level?
i.e.
when we detect an inode is already dirty in another memcg, link the
current memcg to the one that contains the inode. Hence if we get a
situation where a memcg is throttling with no dirty inodes, it can
quickly find and start writeback on an "associated" memcg that it
_knows_ contains shared dirty inodes. Once we've triggered writeback on
an associated memcg, it is removed from the list....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel"
in the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html