linux-mm.kvack.org archive mirror
* Re: [Lsf] IO less throttling and cgroup aware writeback
       [not found]                   ` <20110407234249.GE30279@dastard>
@ 2011-04-08  0:59                     ` Greg Thelen
  2011-04-08  1:25                       ` Dave Chinner
  0 siblings, 1 reply; 3+ messages in thread
From: Greg Thelen @ 2011-04-08  0:59 UTC (permalink / raw)
  To: Dave Chinner
  Cc: Vivek Goyal, Curt Wohlgemuth, James Bottomley, lsf, linux-fsdevel,
	linux-mm

cc: linux-mm

Dave Chinner <david@fromorbit.com> writes:

> On Thu, Apr 07, 2011 at 03:24:24PM -0400, Vivek Goyal wrote:
>> On Thu, Apr 07, 2011 at 09:36:02AM +1000, Dave Chinner wrote:
> [...]
>> > > When I_DIRTY is cleared, remove inode from bdi_memcg->b_dirty.  Delete bdi_memcg
>> > > if the list is now empty.
>> > > 
>> > > balance_dirty_pages() calls mem_cgroup_balance_dirty_pages(memcg, bdi)
>> > >    if over bg limit, then
>> > >        set bdi_memcg->b_over_limit
>> > >            If there is no bdi_memcg (because all inodes of current's
>> > >            memcg dirty pages were first dirtied by another memcg) then
>> > >            scan the memcg lru to find an inode and call writeback_single_inode().
>> > >            This is to handle uncommon sharing.
>> > 
>> > We don't want to introduce any new IO sources into
>> > balance_dirty_pages(). This needs to trigger memcg-LRU based bdi
>> > flusher writeback, not try to write back inodes itself.
>> 
>> Will we not enjoy more sequential IO traffic once we find an inode by
>> traversing the memcg->lru list? So isn't that better than pure LRU-based
>> flushing?
>
> Sorry, I wasn't particularly clear there. What I meant was that we
> ask the bdi-flusher thread to select the inode to write back from
> the LRU, not do it directly from balance_dirty_pages(). i.e.
> bdp stays IO-less.
>
>> > Alternatively, this problem won't exist if you transfer page cache
>> > state from one memcg to another when you move the inode from one
>> > memcg to another.
>> 
>> But in the case of a shared inode the problem still remains: the inode is
>> being written from two cgroups and it can't be in both groups as per the
>> existing design.
>
> But we've already determined that there is no use case for this
> shared inode behaviour, so we aren't going to explicitly support it,
> right?
>
> Cheers,
>
> Dave.

I am thinking that we should avoid ever scanning the memcg lru for dirty
pages or corresponding dirty inodes previously associated with another
memcg.  I think the only reason we considered scanning the lru was to
handle the unexpected shared inode case.  When such inode sharing occurs,
the sharing memcg will not be confined to the memcg's dirty limit.
There's always the memcg hard limit to cap memcg usage.

I'd like to add a counter (or at least tracepoint) to record when such
unsupported usage is detected.

Here's an example time line of such sharing:

1. memcg_1/process_a, writes to /var/log/messages and closes the file.
   This marks the inode in the bdi_memcg for memcg_1.

2. memcg_2/process_b, continually writes to /var/log/messages.  This
   drives up memcg_2 dirty memory usage to the memcg_2 background
   threshold.  mem_cgroup_balance_dirty_pages() would normally mark the
   corresponding bdi_memcg as over-bg-limit and kick the bdi_flusher and
   then return to the dirtying process.  However, there is no bdi_memcg
   because there are no dirty inodes for memcg_2.  So the bdi flusher
   sees no bdi_memcg marked over-limit and writes nothing (assuming
   we're still below the system background threshold).

3. memcg_2/process_b, continues writing to /var/log/messages hitting the
   memcg_2 dirty memory foreground threshold.  Using IO-less
   balance_dirty_pages(), normally mem_cgroup_balance_dirty_pages()
   would block waiting for the previously kicked bdi flusher to clean
   some memcg_2 pages.  In this case mem_cgroup_balance_dirty_pages()
   sees no bdi_memcg and concludes that bdi flusher will not be lowering
   memcg dirty memory usage.  This is the unsupported sharing case, so
   mem_cgroup_balance_dirty_pages() fires a tracepoint and just returns
   allowing memcg_2 dirty memory to exceed its foreground limit, growing
   up to the memcg_2 memory limit_in_bytes.  Once limit_in_bytes is
   hit, it will use per-memcg direct reclaim to recycle memcg_2 pages,
   including the previously written memcg_2 /var/log/messages dirty
   pages.

By cutting out lru scanning, the code should be simpler and still handle
the common case well.

If we later find that this supposed uncommon shared inode case is
important then we can either implement the previously described lru
scanning in mem_cgroup_balance_dirty_pages() or consider extending the
bdi/memcg/inode data structures (perhaps with a memcg_mapping) to
describe such sharing.

> Cheers,
>
> Dave.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .


* Re: [Lsf] IO less throttling and cgroup aware writeback
  2011-04-08  0:59                     ` [Lsf] IO less throttling and cgroup aware writeback Greg Thelen
@ 2011-04-08  1:25                       ` Dave Chinner
  2011-04-12  3:17                         ` KAMEZAWA Hiroyuki
  0 siblings, 1 reply; 3+ messages in thread
From: Dave Chinner @ 2011-04-08  1:25 UTC (permalink / raw)
  To: Greg Thelen
  Cc: Vivek Goyal, Curt Wohlgemuth, James Bottomley, lsf, linux-fsdevel,
	linux-mm

On Thu, Apr 07, 2011 at 05:59:35PM -0700, Greg Thelen wrote:
> cc: linux-mm
> 
> Dave Chinner <david@fromorbit.com> writes:
> 
> > On Thu, Apr 07, 2011 at 03:24:24PM -0400, Vivek Goyal wrote:
> >> On Thu, Apr 07, 2011 at 09:36:02AM +1000, Dave Chinner wrote:
> > [...]
> >> > > When I_DIRTY is cleared, remove inode from bdi_memcg->b_dirty.  Delete bdi_memcg
> >> > > if the list is now empty.
> >> > > 
> >> > > balance_dirty_pages() calls mem_cgroup_balance_dirty_pages(memcg, bdi)
> >> > >    if over bg limit, then
> >> > >        set bdi_memcg->b_over_limit
> >> > >            If there is no bdi_memcg (because all inodes of current's
> >> > >            memcg dirty pages were first dirtied by another memcg) then
> >> > >            scan the memcg lru to find an inode and call writeback_single_inode().
> >> > >            This is to handle uncommon sharing.
> >> > 
> >> > We don't want to introduce any new IO sources into
> >> > balance_dirty_pages(). This needs to trigger memcg-LRU based bdi
> >> > flusher writeback, not try to write back inodes itself.
> >> 
> >> Will we not enjoy more sequential IO traffic once we find an inode by
> >> traversing the memcg->lru list? So isn't that better than pure LRU-based
> >> flushing?
> >
> > Sorry, I wasn't particularly clear there. What I meant was that we
> > ask the bdi-flusher thread to select the inode to write back from
> > the LRU, not do it directly from balance_dirty_pages(). i.e.
> > bdp stays IO-less.
> >
> >> > Alternatively, this problem won't exist if you transfer page cache
> >> > state from one memcg to another when you move the inode from one
> >> > memcg to another.
> >> 
> >> But in the case of a shared inode the problem still remains: the inode is
> >> being written from two cgroups and it can't be in both groups as per the
> >> existing design.
> >
> > But we've already determined that there is no use case for this
> > shared inode behaviour, so we aren't going to explicitly support it,
> > right?
> 
> I am thinking that we should avoid ever scanning the memcg lru for dirty
> pages or corresponding dirty inodes previously associated with another
> memcg.  I think the only reason we considered scanning the lru was to
> handle the unexpected shared inode case.  When such inode sharing occurs,
> the sharing memcg will not be confined to the memcg's dirty limit.
> There's always the memcg hard limit to cap memcg usage.

Yup, fair enough.


> I'd like to add a counter (or at least tracepoint) to record when such
> unsupported usage is detected.

Definitely. Very good idea.

> 1. memcg_1/process_a, writes to /var/log/messages and closes the file.
>    This marks the inode in the bdi_memcg for memcg_1.
> 
> 2. memcg_2/process_b, continually writes to /var/log/messages.  This
>    drives up memcg_2 dirty memory usage to the memcg_2 background
>    threshold.  mem_cgroup_balance_dirty_pages() would normally mark the
>    corresponding bdi_memcg as over-bg-limit and kick the bdi_flusher and
>    then return to the dirtying process.  However, there is no bdi_memcg
>    because there are no dirty inodes for memcg_2.  So the bdi flusher
>    sees no bdi_memcg marked over-limit and writes nothing (assuming
>    we're still below the system background threshold).
> 
> 3. memcg_2/process_b, continues writing to /var/log/messages hitting the
>    memcg_2 dirty memory foreground threshold.  Using IO-less
>    balance_dirty_pages(), normally mem_cgroup_balance_dirty_pages()
>    would block waiting for the previously kicked bdi flusher to clean
>    some memcg_2 pages.  In this case mem_cgroup_balance_dirty_pages()
>    sees no bdi_memcg and concludes that bdi flusher will not be lowering
>    memcg dirty memory usage.  This is the unsupported sharing case, so
>    mem_cgroup_balance_dirty_pages() fires a tracepoint and just returns
>    allowing memcg_2 dirty memory to exceed its foreground limit, growing
>    up to the memcg_2 memory limit_in_bytes.  Once limit_in_bytes is
>    hit, it will use per-memcg direct reclaim to recycle memcg_2 pages,
>    including the previously written memcg_2 /var/log/messages dirty
>    pages.

Thanks for the good, simple example.

> By cutting out lru scanning, the code should be simpler and still
> handle the common case well.

Agreed.

> If we later find that this supposed uncommon shared inode case is
> important then we can either implement the previously described lru
> scanning in mem_cgroup_balance_dirty_pages() or consider extending the
> bdi/memcg/inode data structures (perhaps with a memcg_mapping) to
> describe such sharing.

Hmm, another idea I just had. What we're trying to avoid is needing
to a) track inodes in multiple lists, and b) scanning to find
something appropriate to write back.

Rather than tracking at page or inode granularity, how about
tracking "associated" memcgs at the memcg level? i.e. when we detect
an inode is already dirty in another memcg, link the current memcg
to the one that contains the inode. Hence if we get a situation
where a memcg is throttling with no dirty inodes, it can quickly
find and start writeback in an "associated" memcg that it _knows_
contains shared dirty inodes. Once we've triggered writeback on an
associated memcg, it is removed from the list....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com



* Re: [Lsf] IO less throttling and cgroup aware writeback
  2011-04-08  1:25                       ` Dave Chinner
@ 2011-04-12  3:17                         ` KAMEZAWA Hiroyuki
  0 siblings, 0 replies; 3+ messages in thread
From: KAMEZAWA Hiroyuki @ 2011-04-12  3:17 UTC (permalink / raw)
  To: Dave Chinner
  Cc: Greg Thelen, Vivek Goyal, Curt Wohlgemuth, James Bottomley, lsf,
	linux-fsdevel, linux-mm

On Fri, 8 Apr 2011 11:25:56 +1000
Dave Chinner <david@fromorbit.com> wrote:

> On Thu, Apr 07, 2011 at 05:59:35PM -0700, Greg Thelen wrote:
> > cc: linux-mm
> > 
> > Dave Chinner <david@fromorbit.com> writes:

> > If we later find that this supposed uncommon shared inode case is
> > important then we can either implement the previously described lru
> > scanning in mem_cgroup_balance_dirty_pages() or consider extending the
> > bdi/memcg/inode data structures (perhaps with a memcg_mapping) to
> > describe such sharing.
> 
> Hmm, another idea I just had. What we're trying to avoid is needing
> to a) track inodes in multiple lists, and b) scanning to find
> something appropriate to write back.
> 
> Rather than tracking at page or inode granularity, how about
> tracking "associated" memcgs at the memcg level? i.e. when we detect
> an inode is already dirty in another memcg, link the current memcg
> to the one that contains the inode. Hence if we get a situation
> where a memcg is throttling with no dirty inodes, it can quickly
> find and start writeback in an "associated" memcg that it _knows_
> contains shared dirty inodes. Once we've triggered writeback on an
> associated memcg, it is removed from the list....
> 

Thank you for the idea.  I think we can start with the following.

 0. add some feature to set a 'preferred inode' for a memcg.
    I think
      fadvise(fd, MAKE_THIS_FILE_UNDER_MY_MEMCG)
    or
      echo fd > /memory.move_file_here
    can be added.

 1. account dirty pages for a memcg, as Greg does.
 2. at the same time, account dirty pages made dirty by threads in the memcg.
    (to check whether an internal or external thread made a page dirty.)
 3. calculate the internal/external dirty page gap.
 
 With the gap, we can have several choices.

 4-a. If it exceeds some threshold, send a notification.
      A userland daemon can decide whether to move pages to some memcg or not.
      (Of course, if the _shared_ dirtying can be caught before the page is
       made dirty, the user daemon can move the inode beforehand via inotify().)

      I like help from userland because it can be more flexible than the
      kernel; it can consume config files.

 4-b. set some flag on the memcg as 'this memcg is dirty-busy because of some
      external threads'.  When a page is newly dirtied, check the thread's
      memcg.  If the memcg of the thread and that of the page differ,
      write a memo as 'please check this memcg id, too' in task_struct and
      do a double memcg check in balance_dirty_pages().
      (How to clear the per-task flag is difficult ;)

      I don't want to handle the case where 3-100 threads do shared
      writes.. ;) we'll need 4-a.
 

Thanks,
-Kame



end of thread, other threads:[~2011-04-12  3:24 UTC | newest]

Thread overview: 3+ messages
     [not found] <20110331222756.GC2904@dastard>
     [not found] ` <20110401171838.GD20986@redhat.com>
     [not found]   ` <20110401214947.GE6957@dastard>
     [not found]     ` <20110405131359.GA14239@redhat.com>
     [not found]       ` <20110405225639.GB31057@dastard>
     [not found]         ` <BANLkTikDPHcpjmb-EAiX+MQcu7hfE730DQ@mail.gmail.com>
     [not found]           ` <20110406153954.GB18777@redhat.com>
     [not found]             ` <xr937hb7568t.fsf@gthelen.mtv.corp.google.com>
     [not found]               ` <20110406233602.GK31057@dastard>
     [not found]                 ` <20110407192424.GE27778@redhat.com>
     [not found]                   ` <20110407234249.GE30279@dastard>
2011-04-08  0:59                     ` [Lsf] IO less throttling and cgroup aware writeback Greg Thelen
2011-04-08  1:25                       ` Dave Chinner
2011-04-12  3:17                         ` KAMEZAWA Hiroyuki
