From: Ying Han <yinghan@google.com>
To: Minchan Kim <minchan.kim@gmail.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>,
Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>,
Balbir Singh <balbir@linux.vnet.ibm.com>,
Tejun Heo <tj@kernel.org>, Pavel Emelyanov <xemul@openvz.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
Andrew Morton <akpm@linux-foundation.org>,
Li Zefan <lizf@cn.fujitsu.com>, Mel Gorman <mel@csn.ul.ie>,
Christoph Lameter <cl@linux.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Rik van Riel <riel@redhat.com>, Hugh Dickins <hughd@google.com>,
Michal Hocko <mhocko@suse.cz>,
Dave Hansen <dave@linux.vnet.ibm.com>,
Zhu Yanhai <zhu.yanhai@gmail.com>,
linux-mm@kvack.org
Subject: Re: [PATCH V5 01/10] Add kswapd descriptor
Date: Mon, 18 Apr 2011 11:09:40 -0700 [thread overview]
Message-ID: <BANLkTi=h5DUL1k-31WDP3KfjmiNR8FTckQ@mail.gmail.com> (raw)
In-Reply-To: <BANLkTikgoSt4VUY63J+G6mUJJDCL+NWH8Q@mail.gmail.com>
On Sun, Apr 17, 2011 at 5:57 PM, Minchan Kim <minchan.kim@gmail.com> wrote:
> Hi Ying,
>
> I have some comments and nitpick about coding style.
>
Hi Minchan, thank you for your comments and reviews.
>
> On Sat, Apr 16, 2011 at 8:23 AM, Ying Han <yinghan@google.com> wrote:
> > There is a kswapd kernel thread for each numa node. We will add a different
> > kswapd for each memcg. The kswapd is sleeping in the wait queue headed at
>
> Why?
>
> Many kernel developers easily raise an eyebrow at adding kernel threads.
> So you should justify why we need a new kernel thread and why we can't
> handle this with a workqueue.
>
> Maybe you explained this already and I missed it. If so, sorry.
> But at least, a patch description that includes the _why_ is much more
> mergeable for maintainers and more helpful to the people reviewing the code.
>
Here are the replies I posted on an earlier version regarding the workqueue:
"
I did some study of workqueues after posting V2. There was a comment
suggesting a workqueue instead of a per-memcg kswapd thread, since that would
cut the number of kernel threads created on a host with lots of cgroups.
Each kernel thread allocates about 8K of stack, so about 8M in total with a
thousand cgroups.
The current workqueue model, merged in the 2.6.36 kernel, is called
"concurrency managed workqueue" (cmwq) and is intended to provide flexible
concurrency without wasting resources. I studied it a bit and here is what I found:
1. The workqueue is complicated and we need to be very careful with work items
in the workqueue. We have seen cases where one work item gets stuck and the
rest of the work items cannot proceed. For example, in dirty page writeback,
one heavy-writer cgroup could starve the other cgroups flushing dirty
pages to the same disk. In the kswapd case, I can imagine a
similar scenario.
2. How to prioritize the work items is another problem. The order in which
work items are added to the queue dictates the order in which cgroups get
reclaimed. We have no such restriction today; instead we rely on the cpu
scheduler to put each kswapd on the right cpu core to run. We "might"
introduce priorities for reclaim later, and it is unclear how we would deal
with that in a workqueue.
3. Based on what I have observed, not many callers have migrated to cmwq, and
I don't have much data on how well it works.
Back to the current model: on a machine with a thousand cgroups, the kswapd
threads would take about 8M in total (8K of stack per thread). We already run
systems with fake NUMA where each NUMA node has a kswapd, and so far we
haven't noticed problems caused by "lots of" kswapd threads.
Also, a kernel thread that is not running should not add any performance
overhead.
Given the complexity of the workqueue and the benefit it provides, I would
like to stick with the current model first. After we get the basic stuff in
and the other targeted reclaim improvements, we can come back to this. What do
you think?
"
KAMEZAWA's reply:
"
Okay, fair enough. kthread_run() will win.
Then, I have another request. I'd like to kswapd-for-memcg to some cpu
cgroup to limit cpu usage.
- Could you show thread ID somewhere ? and
confirm we can put it to some cpu cgroup ?
(creating a auto cpu cgroup for memcg kswapd is a choice, I think.)
"
> > kswapd_wait field of a kswapd descriptor. The kswapd descriptor stores
> > information of node or memcg and it allows the global and per-memcg background
> > reclaim to share common reclaim algorithms.
> >
> > This patch adds the kswapd descriptor and moves the per-node kswapd to use the
> > new structure.
> >
> > changelog v5..v4:
> > 1. add comment on kswapds_spinlock
> > 2. remove the kswapds_spinlock. we don't need it here since the kswapd and pgdat
> > have 1:1 mapping.
> >
> > changelog v3..v2:
> > 1. move the struct mem_cgroup *kswapd_mem in the kswapd struct to a later patch.
> > 2. rename thr in kswapd_run to something else.
> >
> > changelog v2..v1:
> > 1. dynamically allocate the kswapd descriptor and initialize the wait_queue_head of pgdat
> > at kswapd_run.
> > 2. add helper macro is_node_kswapd to distinguish the per-node/per-cgroup kswapd
> > descriptor.
> >
> > Signed-off-by: Ying Han <yinghan@google.com>
> > ---
> > include/linux/mmzone.h | 3 +-
> > include/linux/swap.h | 7 ++++
> > mm/page_alloc.c | 1 -
> > mm/vmscan.c | 89 +++++++++++++++++++++++++++++++++++------------
> > 4 files changed, 74 insertions(+), 26 deletions(-)
> >
> > diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> > index 628f07b..6cba7d2 100644
> > --- a/include/linux/mmzone.h
> > +++ b/include/linux/mmzone.h
> > @@ -640,8 +640,7 @@ typedef struct pglist_data {
> > unsigned long node_spanned_pages; /* total size of physical page
> > range, including holes */
> > int node_id;
> > - wait_queue_head_t kswapd_wait;
> > - struct task_struct *kswapd;
> > + wait_queue_head_t *kswapd_wait;
>
> Personally, I prefer kswapd not kswapd_wait.
> It's more readable and straightforward.
>
Hmm, I would like to keep it as it is for this version and improve it after
the basic stuff is in. Hope that works for you?
> > int kswapd_max_order;
> > enum zone_type classzone_idx;
> > } pg_data_t;
> > diff --git a/include/linux/swap.h b/include/linux/swap.h
> > index ed6ebe6..f43d406 100644
> > --- a/include/linux/swap.h
> > +++ b/include/linux/swap.h
> > @@ -26,6 +26,13 @@ static inline int current_is_kswapd(void)
> > return current->flags & PF_KSWAPD;
> > }
> >
> > +struct kswapd {
> > + struct task_struct *kswapd_task;
> > + wait_queue_head_t kswapd_wait;
> > + pg_data_t *kswapd_pgdat;
> > +};
> > +
> > +int kswapd(void *p);
> > /*
> > * MAX_SWAPFILES defines the maximum number of swaptypes: things which can
> > * be swapped to. The swap type and the offset into that swap type are
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 6e1b52a..6340865 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -4205,7 +4205,6 @@ static void __paginginit free_area_init_core(struct pglist_data *pgdat,
> >
> > pgdat_resize_init(pgdat);
> > pgdat->nr_zones = 0;
> > - init_waitqueue_head(&pgdat->kswapd_wait);
> > pgdat->kswapd_max_order = 0;
> > pgdat_page_cgroup_init(pgdat);
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 060e4c1..61fb96e 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -2242,12 +2242,13 @@ static bool pgdat_balanced(pg_data_t *pgdat, unsigned long balanced_pages,
> > }
> >
> > /* is kswapd sleeping prematurely? */
> > -static bool sleeping_prematurely(pg_data_t *pgdat, int order, long remaining,
> > - int classzone_idx)
> > +static int sleeping_prematurely(struct kswapd *kswapd, int order,
> > + long remaining, int classzone_idx)
> > {
> > int i;
> > unsigned long balanced = 0;
> > bool all_zones_ok = true;
> > + pg_data_t *pgdat = kswapd->kswapd_pgdat;
> >
> > /* If a direct reclaimer woke kswapd within HZ/10, it's premature */
> > if (remaining)
> > @@ -2570,28 +2571,31 @@ out:
> > return order;
> > }
> >
> > -static void kswapd_try_to_sleep(pg_data_t *pgdat, int order, int classzone_idx)
> > +static void kswapd_try_to_sleep(struct kswapd *kswapd_p, int order,
> > + int classzone_idx)
> > {
> > long remaining = 0;
> > DEFINE_WAIT(wait);
> > + pg_data_t *pgdat = kswapd_p->kswapd_pgdat;
> > + wait_queue_head_t *wait_h = &kswapd_p->kswapd_wait;
>
> kswapd_p? p means pointer?
>
yes,
> wait_h? h means header?
>
yes,
> Hmm.. Of course, it's trivial and we can understand it easily in such a
> context, but we haven't been using such words, so it's rather awkward
> to me.
>
> How about kswapd instead of kswapd_p, kswapd_wait instead of wait_h?
>
That sounds ok to me. However, I would like to make that change as a separate
patch after the basic stuff is in. Is that ok?
>
> >
> > if (freezing(current) || kthread_should_stop())
> > return;
> >
> > - prepare_to_wait(&pgdat->kswapd_wait, &wait, TASK_INTERRUPTIBLE);
> > + prepare_to_wait(wait_h, &wait, TASK_INTERRUPTIBLE);
> >
> > /* Try to sleep for a short interval */
> > - if (!sleeping_prematurely(pgdat, order, remaining, classzone_idx)) {
> > + if (!sleeping_prematurely(kswapd_p, order, remaining, classzone_idx)) {
> > remaining = schedule_timeout(HZ/10);
> > - finish_wait(&pgdat->kswapd_wait, &wait);
> > - prepare_to_wait(&pgdat->kswapd_wait, &wait, TASK_INTERRUPTIBLE);
> > + finish_wait(wait_h, &wait);
> > + prepare_to_wait(wait_h, &wait, TASK_INTERRUPTIBLE);
> > }
> >
> > /*
> > * After a short sleep, check if it was a premature sleep. If not, then
> > * go fully to sleep until explicitly woken up.
> > */
> > - if (!sleeping_prematurely(pgdat, order, remaining, classzone_idx)) {
> > + if (!sleeping_prematurely(kswapd_p, order, remaining, classzone_idx)) {
> > trace_mm_vmscan_kswapd_sleep(pgdat->node_id);
> >
> > /*
> > @@ -2611,7 +2615,7 @@ static void kswapd_try_to_sleep(pg_data_t *pgdat, int order, int classzone_idx)
> > else
> > count_vm_event(KSWAPD_HIGH_WMARK_HIT_QUICKLY);
> > }
> > - finish_wait(&pgdat->kswapd_wait, &wait);
> > + finish_wait(wait_h, &wait);
> > }
> >
> > /*
> > @@ -2627,20 +2631,24 @@ static void kswapd_try_to_sleep(pg_data_t *pgdat, int order, int classzone_idx)
> > * If there are applications that are active memory-allocators
> > * (most normal use), this basically shouldn't matter.
> > */
> > -static int kswapd(void *p)
> > +int kswapd(void *p)
> > {
> > unsigned long order;
> > int classzone_idx;
> > - pg_data_t *pgdat = (pg_data_t*)p;
> > + struct kswapd *kswapd_p = (struct kswapd *)p;
> > + pg_data_t *pgdat = kswapd_p->kswapd_pgdat;
> > + wait_queue_head_t *wait_h = &kswapd_p->kswapd_wait;
> > struct task_struct *tsk = current;
> >
> > struct reclaim_state reclaim_state = {
> > .reclaimed_slab = 0,
> > };
> > - const struct cpumask *cpumask = cpumask_of_node(pgdat->node_id);
> > + const struct cpumask *cpumask;
> >
> > lockdep_set_current_reclaim_state(GFP_KERNEL);
> >
> > + BUG_ON(pgdat->kswapd_wait != wait_h);
>
> If we include kswapd instead of kswapd_wait in pgdat, maybe we could
> remove the check?
>
> > + cpumask = cpumask_of_node(pgdat->node_id);
> > if (!cpumask_empty(cpumask))
> > set_cpus_allowed_ptr(tsk, cpumask);
> > current->reclaim_state = &reclaim_state;
> > @@ -2679,7 +2687,7 @@ static int kswapd(void *p)
> > order = new_order;
> > classzone_idx = new_classzone_idx;
> > } else {
> > - kswapd_try_to_sleep(pgdat, order, classzone_idx);
> > + kswapd_try_to_sleep(kswapd_p, order, classzone_idx);
> > order = pgdat->kswapd_max_order;
> > classzone_idx = pgdat->classzone_idx;
> > pgdat->kswapd_max_order = 0;
> > @@ -2719,13 +2727,13 @@ void wakeup_kswapd(struct zone *zone, int order, enum zone_type classzone_idx)
> > pgdat->kswapd_max_order = order;
> > pgdat->classzone_idx = min(pgdat->classzone_idx, classzone_idx);
> > }
> > - if (!waitqueue_active(&pgdat->kswapd_wait))
> > + if (!waitqueue_active(pgdat->kswapd_wait))
> > return;
> > if (zone_watermark_ok_safe(zone, order, low_wmark_pages(zone), 0, 0))
> > return;
> >
> > trace_mm_vmscan_wakeup_kswapd(pgdat->node_id, zone_idx(zone), order);
> > - wake_up_interruptible(&pgdat->kswapd_wait);
> > + wake_up_interruptible(pgdat->kswapd_wait);
> > }
> >
> > /*
> > @@ -2817,12 +2825,21 @@ static int __devinit cpu_callback(struct notifier_block *nfb,
> > for_each_node_state(nid, N_HIGH_MEMORY) {
> > pg_data_t *pgdat = NODE_DATA(nid);
> > const struct cpumask *mask;
> > + struct kswapd *kswapd_p;
> > + struct task_struct *kswapd_thr;
> > + wait_queue_head_t *wait;
> >
> > mask = cpumask_of_node(pgdat->node_id);
> >
> > + wait = pgdat->kswapd_wait;
> > + kswapd_p = container_of(wait, struct kswapd,
> > + kswapd_wait);
> > + kswapd_thr = kswapd_p->kswapd_task;
>
> kswapd_thr? thr means thread?
> How about tsk?
>
Ok. I made the change and it will be included in the next post.
>
> > +
> If we include kswapd instead of kswapd_wait in pgdat, wouldn't this be
> simpler?
>
> struct kswapd *kswapd = pgdat->kswapd;
> struct task_struct *kswapd_tsk = kswapd->kswapd_task;
>
>
> > if (cpumask_any_and(cpu_online_mask, mask) < nr_cpu_ids)
> > /* One of our CPUs online: restore mask */
> > - set_cpus_allowed_ptr(pgdat->kswapd, mask);
> > + if (kswapd_thr)
> > + set_cpus_allowed_ptr(kswapd_thr, mask);
> > }
> > }
> > return NOTIFY_OK;
> > @@ -2835,18 +2852,31 @@ static int __devinit cpu_callback(struct notifier_block *nfb,
> > int kswapd_run(int nid)
> > {
> > pg_data_t *pgdat = NODE_DATA(nid);
> > + struct task_struct *kswapd_thr;
> > + struct kswapd *kswapd_p;
> > int ret = 0;
> >
> > - if (pgdat->kswapd)
> > + if (pgdat->kswapd_wait)
> > return 0;
> >
> > - pgdat->kswapd = kthread_run(kswapd, pgdat, "kswapd%d", nid);
> > - if (IS_ERR(pgdat->kswapd)) {
> > + kswapd_p = kzalloc(sizeof(struct kswapd), GFP_KERNEL);
> > + if (!kswapd_p)
> > + return -ENOMEM;
> > +
> > + init_waitqueue_head(&kswapd_p->kswapd_wait);
> > + pgdat->kswapd_wait = &kswapd_p->kswapd_wait;
> > + kswapd_p->kswapd_pgdat = pgdat;
> > +
> > + kswapd_thr = kthread_run(kswapd, kswapd_p, "kswapd%d", nid);
> > + if (IS_ERR(kswapd_thr)) {
> > /* failure at boot is fatal */
> > BUG_ON(system_state == SYSTEM_BOOTING);
> > printk("Failed to start kswapd on node %d\n",nid);
> > + pgdat->kswapd_wait = NULL;
> > + kfree(kswapd_p);
> > ret = -1;
> > - }
> > + } else
> > + kswapd_p->kswapd_task = kswapd_thr;
> > return ret;
> > }
> >
> > @@ -2855,10 +2885,23 @@ int kswapd_run(int nid)
> > */
> > void kswapd_stop(int nid)
> > {
> > - struct task_struct *kswapd = NODE_DATA(nid)->kswapd;
> > + struct task_struct *kswapd_thr = NULL;
> > + struct kswapd *kswapd_p = NULL;
> > + wait_queue_head_t *wait;
> > +
> > + pg_data_t *pgdat = NODE_DATA(nid);
> > +
> > + wait = pgdat->kswapd_wait;
> > + if (wait) {
> > kswapd_p = container_of(wait, struct kswapd, kswapd_wait);
> > + kswapd_thr = kswapd_p->kswapd_task;
> > + kswapd_p->kswapd_task = NULL;
> > + }
> > +
> > + if (kswapd_thr)
> > + kthread_stop(kswapd_thr);
> >
> > - if (kswapd)
> > - kthread_stop(kswapd);
> > + kfree(kswapd_p);
> > }
> >
> > static int __init kswapd_init(void)
> > --
> > 1.7.3.1
> >
> >
>
> Hmm, I don't like kswapd_p, kswapd_thr, wait_h and kswapd_wait of pgdat.
> But it's just my personal opinion. :)
>
Thank you for your comments :)
>
>
> --
> Kind regards,
> Minchan Kim
>