linux-mm.kvack.org archive mirror
From: "Kirill A. Shutemov" <kirill@shutemov.name>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: "linux-mm@kvack.org" <linux-mm@kvack.org>,
	"balbir@linux.vnet.ibm.com" <balbir@linux.vnet.ibm.com>,
	"nishimura@mxp.nes.nec.co.jp" <nishimura@mxp.nes.nec.co.jp>,
	"akpm@linux-foundation.org" <akpm@linux-foundation.org>
Subject: Re: [PATCH 2/2] memcg: share event counter rather than duplicate
Date: Fri, 12 Feb 2010 10:49:45 +0200	[thread overview]
Message-ID: <cc557aab1002120049v28322a29sbe11d7f049806115@mail.gmail.com> (raw)
In-Reply-To: <20100212171948.16346836.kamezawa.hiroyu@jp.fujitsu.com>

On Fri, Feb 12, 2010 at 10:19 AM, KAMEZAWA Hiroyuki
<kamezawa.hiroyu@jp.fujitsu.com> wrote:
> On Fri, 12 Feb 2010 10:07:25 +0200
> "Kirill A. Shutemov" <kirill@shutemov.name> wrote:
>
>> On Fri, Feb 12, 2010 at 8:48 AM, KAMEZAWA Hiroyuki
>> <kamezawa.hiroyu@jp.fujitsu.com> wrote:
>> > Memcg has 2 event counters which count "the same" event; only their
>> > usages differ. This patch reduces them to a single event counter.
>> >
>> > This patch's logic uses an "only increment, no reset" new_counter and a
>> > mask for each check. The softlimit check was done once per 1000 events,
>> > so a similar check can be done by !(new_counter & 0x3ff). The threshold
>> > check was done once per 100 events, so a similar check can be done by
>> > !(new_counter & 0x7f).
>>
>> IIUC, with this change we have to check the counter after each update,
>> since we check for an exact value.
>
> Yes.
>> So we have to move the checks into mem_cgroup_charge_statistics() or
>> call them after each statistics update. I'm not sure how that affects
>> performance.
>>
>
> My patch 1/2 does it.
>
> But hmm, task move does counter updates in an asynchronous manner, so
> there is a bug. I'll add a check in the next version.
>
> Maybe calling update_tree and threshold_check at the end of move_task is
> better. Does the thresholds user take care of the batched manner of
> task_move? Should we check one by one?

No. mem_cgroup_threshold() at mem_cgroup_move_task() is enough.

But... is task moving a critical path? If not, it's probably cleaner to check
everything in mem_cgroup_charge_statistics().
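
For illustration, here is a minimal, self-contained userspace sketch of the
mask-based check the patch introduces (plain C, not kernel code; the two masks
are taken from the patch, while the counter, helpers and loop are hypothetical
stand-ins):

#include <stdio.h>

/* Same masks as in the patch: fire once per 1024 / 128 events. */
#define SOFTLIMIT_EVENTS_THRESH  (0x3ff)
#define THRESHOLDS_EVENTS_THRESH (0x7f)

/* Hypothetical stand-in for the per-cpu MEM_CGROUP_EVENTS counter. */
static long long events;

/* The counter only ever increments; a check "fires" exactly when the
 * low bits are all zero, so a skipped check means a missed trigger. */
static int softlimit_check(void) { return !(events & SOFTLIMIT_EVENTS_THRESH); }
static int threshold_check(void) { return !(events & THRESHOLDS_EVENTS_THRESH); }

int main(void)
{
	int softlimit_hits = 0, threshold_hits = 0;

	for (int i = 0; i < 4096; i++) {
		events++;		/* analogue of one pagein/pageout event */
		if (softlimit_check())
			softlimit_hits++;
		if (threshold_check())
			threshold_hits++;
	}
	/* Expect 4 softlimit hits (every 1024) and 32 threshold hits (every 128). */
	printf("softlimit: %d, threshold: %d\n", softlimit_hits, threshold_hits);
	return 0;
}

Because the check tests an exact bit pattern instead of a "went negative"
condition, every increment path has to run it, which is why batched updates
such as task move (and later hugepages) need their own call sites.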

> (Maybe there will be more trouble when we handle hugepages...)

Yes, hugepage support requires more testing.

> Thanks,
> -Kame
>
>
>> > Cc: Kirill A. Shutemov <kirill@shutemov.name>
>> > Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
>> > Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
>> > Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
>> > ---
>> >  mm/memcontrol.c |   36 ++++++++++++------------------------
>> >  1 file changed, 12 insertions(+), 24 deletions(-)
>> >
>> > Index: mmotm-2.6.33-Feb10/mm/memcontrol.c
>> > ===================================================================
>> > --- mmotm-2.6.33-Feb10.orig/mm/memcontrol.c
>> > +++ mmotm-2.6.33-Feb10/mm/memcontrol.c
>> > @@ -63,8 +63,8 @@ static int really_do_swap_account __init
>> >  #define do_swap_account                (0)
>> >  #endif
>> >
>> > -#define SOFTLIMIT_EVENTS_THRESH (1000)
>> > -#define THRESHOLDS_EVENTS_THRESH (100)
>> > +#define SOFTLIMIT_EVENTS_THRESH (0x3ff) /* once in 1024 */
>> > +#define THRESHOLDS_EVENTS_THRESH (0x7f) /* once in 128 */
>> >
>> >  /*
>> >  * Statistics for memory cgroup.
>> > @@ -79,10 +79,7 @@ enum mem_cgroup_stat_index {
>> >        MEM_CGROUP_STAT_PGPGIN_COUNT,   /* # of pages paged in */
>> >        MEM_CGROUP_STAT_PGPGOUT_COUNT,  /* # of pages paged out */
>> >        MEM_CGROUP_STAT_SWAPOUT, /* # of pages, swapped out */
>> > -       MEM_CGROUP_STAT_SOFTLIMIT, /* decrements on each page in/out.
>> > -                                       used by soft limit implementation */
>> > -       MEM_CGROUP_STAT_THRESHOLDS, /* decrements on each page in/out.
>> > -                                       used by threshold implementation */
>> > +       MEM_CGROUP_EVENTS,      /* incremented by 1 at pagein/pageout */
>> >
>> >        MEM_CGROUP_STAT_NSTATS,
>> >  };
>> > @@ -394,16 +391,12 @@ mem_cgroup_remove_exceeded(struct mem_cg
>> >
>> >  static bool mem_cgroup_soft_limit_check(struct mem_cgroup *mem)
>> >  {
>> > -       bool ret = false;
>> >        s64 val;
>> >
>> > -       val = this_cpu_read(mem->stat->count[MEM_CGROUP_STAT_SOFTLIMIT]);
>> > -       if (unlikely(val < 0)) {
>> > -               this_cpu_write(mem->stat->count[MEM_CGROUP_STAT_SOFTLIMIT],
>> > -                               SOFTLIMIT_EVENTS_THRESH);
>> > -               ret = true;
>> > -       }
>> > -       return ret;
>> > +       val = this_cpu_read(mem->stat->count[MEM_CGROUP_EVENTS]);
>> > +       if (unlikely(!(val & SOFTLIMIT_EVENTS_THRESH)))
>> > +               return true;
>> > +       return false;
>> >  }
>> >
>> >  static void mem_cgroup_update_tree(struct mem_cgroup *mem, struct page *page)
>> > @@ -542,8 +535,7 @@ static void mem_cgroup_charge_statistics
>> >                __this_cpu_inc(mem->stat->count[MEM_CGROUP_STAT_PGPGIN_COUNT]);
>> >        else
>> >                __this_cpu_inc(mem->stat->count[MEM_CGROUP_STAT_PGPGOUT_COUNT]);
>> > -       __this_cpu_dec(mem->stat->count[MEM_CGROUP_STAT_SOFTLIMIT]);
>> > -       __this_cpu_dec(mem->stat->count[MEM_CGROUP_STAT_THRESHOLDS]);
>> > +       __this_cpu_inc(mem->stat->count[MEM_CGROUP_EVENTS]);
>> >
>> >        preempt_enable();
>> >  }
>> > @@ -3211,16 +3203,12 @@ static int mem_cgroup_swappiness_write(s
>> >
>> >  static bool mem_cgroup_threshold_check(struct mem_cgroup *mem)
>> >  {
>> > -       bool ret = false;
>> >        s64 val;
>> >
>> > -       val = this_cpu_read(mem->stat->count[MEM_CGROUP_STAT_THRESHOLDS]);
>> > -       if (unlikely(val < 0)) {
>> > -               this_cpu_write(mem->stat->count[MEM_CGROUP_STAT_THRESHOLDS],
>> > -                               THRESHOLDS_EVENTS_THRESH);
>> > -               ret = true;
>> > -       }
>> > -       return ret;
>> > +       val = this_cpu_read(mem->stat->count[MEM_CGROUP_EVENTS]);
>> > +       if (unlikely(!(val & THRESHOLDS_EVENTS_THRESH)))
>> > +               return true;
>> > +       return false;
>> >  }
>> >
>> >  static void __mem_cgroup_threshold(struct mem_cgroup *memcg, bool swap)
>> >
>> >
>>
>
>

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org


Thread overview: 21+ messages
2010-02-12  6:44 [PATCH 0/2] memcg patches around event counting...softlimit and thresholds KAMEZAWA Hiroyuki
2010-02-12  6:47 ` [PATCH 1/2] memcg : update softlimit and threshold at commit KAMEZAWA Hiroyuki
2010-02-12  7:33   ` Daisuke Nishimura
2010-02-12  7:42     ` KAMEZAWA Hiroyuki
2010-02-12  6:48 ` [PATCH 2/2] memcg: share event counter rather than duplicate KAMEZAWA Hiroyuki
2010-02-12  7:40   ` Daisuke Nishimura
2010-02-12  7:41     ` KAMEZAWA Hiroyuki
2010-02-12  7:46   ` Kirill A. Shutemov
2010-02-12  7:46     ` KAMEZAWA Hiroyuki
2010-02-12  8:07   ` Kirill A. Shutemov
2010-02-12  8:19     ` KAMEZAWA Hiroyuki
2010-02-12  8:49       ` Kirill A. Shutemov [this message]
2010-02-12  8:51         ` KAMEZAWA Hiroyuki
2010-02-12  9:05 ` [PATCH 0/2] memcg patches around event counting...softlimit and thresholds v2 KAMEZAWA Hiroyuki
2010-02-12  9:06   ` [PATCH 1/2] memcg: update threshold and softlimit at commit v2 KAMEZAWA Hiroyuki
2010-02-12  9:09   ` [PATCH 2/2] memcg : share event counter rather than duplicate v2 KAMEZAWA Hiroyuki
2010-02-12 11:48     ` Daisuke Nishimura
2010-02-15  0:19       ` KAMEZAWA Hiroyuki
2010-03-09 23:15         ` Andrew Morton
2010-02-15 10:57     ` Kirill A. Shutemov
2010-02-16  0:16       ` KAMEZAWA Hiroyuki
