* [PATCH] [RT] avoid preemption in memory controller code
@ 2008-12-08 14:34 Tim Blechmann
  2008-12-08 16:42 ` Steven Rostedt
  2008-12-09  1:13 ` [PATCH] [RT] avoid preemption in memory controller code Li Zefan
  0 siblings, 2 replies; 10+ messages in thread
From: Tim Blechmann @ 2008-12-08 14:34 UTC (permalink / raw)
To: rostedt, linux-rt-users; +Cc: linux-kernel, Tim Blechmann

The lru_lock of struct mem_cgroup_per_zone is used to avoid preemption
during the __mem_cgroup_stat_add_safe function; therefore, a
raw_spinlock_t should be used.

Signed-off-by: Tim Blechmann <tim@klingt.org>
---
 mm/memcontrol.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 517f945..8661732 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -95,9 +95,9 @@ enum mem_cgroup_zstat_index {
 
 struct mem_cgroup_per_zone {
 	/*
-	 * spin_lock to protect the per cgroup LRU
+	 * raw_spin_lock to protect the per cgroup LRU
	 */
-	spinlock_t lru_lock;
+	raw_spinlock_t lru_lock;
 	struct list_head active_list;
 	struct list_head inactive_list;
 	unsigned long count[NR_MEM_CGROUP_ZSTAT];
-- 
1.5.6.3
* Re: [PATCH] [RT] avoid preemption in memory controller code
  2008-12-08 14:34 [PATCH] [RT] avoid preemption in memory controller code Tim Blechmann
@ 2008-12-08 16:42 ` Steven Rostedt
  2008-12-08 17:07   ` Tim Blechmann
  2008-12-09  1:13 ` [PATCH] [RT] avoid preemption in memory controller code Li Zefan
  1 sibling, 1 reply; 10+ messages in thread
From: Steven Rostedt @ 2008-12-08 16:42 UTC (permalink / raw)
To: Tim Blechmann
Cc: linux-rt-users, LKML, Peter Zijlstra, Thomas Gleixner, Ingo Molnar

Hi Tim,

On Mon, 8 Dec 2008, Tim Blechmann wrote:

> The lru_lock of struct mem_cgroup_per_zone is used to avoid preemption
> during the __mem_cgroup_stat_add_safe function; therefore, a
> raw_spinlock_t should be used.

What is the reason that this must avoid preemption?  Is there another
way to solve this?  I would rather not add a raw spinlock if we can
help it.

-- Steve
* Re: [PATCH] [RT] avoid preemption in memory controller code
  2008-12-08 16:42 ` Steven Rostedt
@ 2008-12-08 17:07   ` Tim Blechmann
  2008-12-08 17:23     ` Steven Rostedt
  0 siblings, 1 reply; 10+ messages in thread
From: Tim Blechmann @ 2008-12-08 17:07 UTC (permalink / raw)
To: Steven Rostedt
Cc: linux-rt-users, LKML, Peter Zijlstra, Thomas Gleixner, Ingo Molnar

> > The lru_lock of struct mem_cgroup_per_zone is used to avoid preemption
> > during the __mem_cgroup_stat_add_safe function; therefore, a
> > raw_spinlock_t should be used.
>
> What is the reason that this must avoid preemption?

It guards a call to smp_processor_id() in __mem_cgroup_stat_add_safe();
see http://article.gmane.org/gmane.linux.rt.user/3690

> Is there another way to solve this?  I would rather not add a raw
> spinlock if we can help it.

Not sure; maybe one can disable preemption for that specific function?

tim

-- 
tim@klingt.org
http://tim.klingt.org

Art is either a complaint or do something else
      John Cage quoting Jasper Johns
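The hazard Tim is pointing at can be sketched in plain userspace C (this is an illustrative model, not kernel code; `cur_cpu`, `per_cpu_count`, and `migrate_to_cpu1` are made-up stand-ins for the running CPU, the per-CPU statistics array, and an involuntary migration):

```c
/* Model of the race: with preemption enabled, a task can be migrated to
 * another CPU between reading its CPU id and indexing the per-CPU array,
 * so it ends up updating a counter slot that another CPU may be touching
 * concurrently. */

static int cur_cpu;                 /* stands in for smp_processor_id() */
static long per_cpu_count[2];

static void migrate_to_cpu1(void)   /* stands in for a migration */
{
	cur_cpu = 1;
}

/* Returns nonzero if the update landed on a CPU slot we no longer run on. */
static int stat_add_preemptible(void (*preemption_point)(void))
{
	int cpu = cur_cpu;          /* read the CPU id... */

	if (preemption_point)
		preemption_point(); /* ...the task may migrate right here... */
	per_cpu_count[cpu]++;       /* ...and then write a slot it has left */
	return cpu != cur_cpu;
}
```

With no preemption point the update is safe; inject a "migration" between the id read and the increment, and the task writes CPU 0's counter while notionally running on CPU 1. That is why the lru_lock (or explicit preemption disabling) is needed around the smp_processor_id() user.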
* Re: [PATCH] [RT] avoid preemption in memory controller code
  2008-12-08 17:07 ` Tim Blechmann
@ 2008-12-08 17:23   ` Steven Rostedt
  2008-12-11 19:05     ` [PATCH] [RT] preempt_disable_rt for CONFIG_PREEMPT_RT Tim Blechmann
  0 siblings, 1 reply; 10+ messages in thread
From: Steven Rostedt @ 2008-12-08 17:23 UTC (permalink / raw)
To: Tim Blechmann
Cc: linux-rt-users, LKML, Peter Zijlstra, Thomas Gleixner, Ingo Molnar

On Mon, 8 Dec 2008, Tim Blechmann wrote:

> > What is the reason that this must avoid preemption?
>
> It guards a call to smp_processor_id() in __mem_cgroup_stat_add_safe();
> see http://article.gmane.org/gmane.linux.rt.user/3690

We could simply create a preempt_disable_rt() that only disables
preemption when CONFIG_PREEMPT_RT is set, and then add:

	int cpu;

	preempt_disable_rt();
	cpu = smp_processor_id();
	stat->cpustat[cpu].count[idx] += val;
	preempt_enable_rt();

Or something similar.

-- Steve
* [PATCH] [RT] preempt_disable_rt for CONFIG_PREEMPT_RT
  2008-12-08 17:23 ` Steven Rostedt
@ 2008-12-11 19:05   ` Tim Blechmann
  0 siblings, 0 replies; 10+ messages in thread
From: Tim Blechmann @ 2008-12-11 19:05 UTC (permalink / raw)
To: rostedt; +Cc: linux-kernel, linux-rt-users, mingo, tglx, Tim Blechmann

In some cases, spinlocks are used to avoid preemption; this does not
work correctly when CONFIG_PREEMPT_RT is enabled.  Therefore, two new
macros are introduced: preempt_disable_rt/preempt_enable_rt behave like
their equivalents if CONFIG_PREEMPT_RT is enabled, and as no-ops
otherwise.

Signed-off-by: Tim Blechmann <tim@klingt.org>
---
 include/linux/preempt.h |    9 +++++++++
 1 files changed, 9 insertions(+), 0 deletions(-)

diff --git a/include/linux/preempt.h b/include/linux/preempt.h
index 72b1a10..2526170 100644
--- a/include/linux/preempt.h
+++ b/include/linux/preempt.h
@@ -93,6 +93,15 @@ do { \
 
 #endif
 
+#ifdef CONFIG_PREEMPT_RT
+# define preempt_disable_rt() preempt_disable()
+# define preempt_enable_rt()  preempt_enable()
+#else
+# define preempt_disable_rt() do { } while (0)
+# define preempt_enable_rt()  do { } while (0)
+#endif
+
+
 #ifdef CONFIG_PREEMPT_NOTIFIERS
 
 struct preempt_notifier;
-- 
1.5.6.3
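The conditional-macro pattern in this patch can be sanity-checked with a small userspace C model (illustrative only; `model_preempt_count` and `bump_stat` are made-up stand-ins, not the kernel's preempt_count or any memcg function). With `-DCONFIG_PREEMPT_RT` the critical section really nests; without it, the macros compile away entirely:

```c
/* Userspace sketch of the preempt_disable_rt()/preempt_enable_rt() idea:
 * real preemption control only when CONFIG_PREEMPT_RT is defined,
 * no-ops otherwise.  A counter stands in for the kernel's preempt_count
 * so the effect is observable. */

static int model_preempt_count;

#ifdef CONFIG_PREEMPT_RT
# define preempt_disable_rt() do { model_preempt_count++; } while (0)
# define preempt_enable_rt()  do { model_preempt_count--; } while (0)
#else
# define preempt_disable_rt() do { } while (0)
# define preempt_enable_rt()  do { } while (0)
#endif

/* Update a per-CPU style counter inside the (conditionally) protected
 * region; returns the nesting depth observed inside the critical
 * section: 1 when CONFIG_PREEMPT_RT is set, 0 when the macros are
 * no-ops. */
static int bump_stat(long *stat)
{
	int depth;

	preempt_disable_rt();
	depth = model_preempt_count;
	(*stat)++;
	preempt_enable_rt();
	return depth;
}
```

Note the function-like form (`preempt_disable_rt()`), which keeps the call sites identical in both configurations; an object-like no-op macro would leave a stray `()` behind at the call site and fail to compile.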
* Re: [PATCH] [RT] avoid preemption in memory controller code
  2008-12-08 14:34 [PATCH] [RT] avoid preemption in memory controller code Tim Blechmann
  2008-12-08 16:42 ` Steven Rostedt
@ 2008-12-09  1:13 ` Li Zefan
  2008-12-09  1:32   ` KAMEZAWA Hiroyuki
  1 sibling, 1 reply; 10+ messages in thread
From: Li Zefan @ 2008-12-09 1:13 UTC (permalink / raw)
To: Tim Blechmann
Cc: rostedt, linux-rt-users, linux-kernel, Balbir Singh, KAMEZAWA Hiroyuki

CC: Balbir Singh <balbir@linux.vnet.ibm.com>
CC: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>

Tim Blechmann wrote:
> The lru_lock of struct mem_cgroup_per_zone is used to avoid preemption
> during the __mem_cgroup_stat_add_safe function; therefore, a
> raw_spinlock_t should be used.

FYI, this lru_lock no longer exists in the -mm tree.  The following
patch removes that lock:

http://marc.info/?l=linux-mm&m=122665814801979&w=2
* Re: [PATCH] [RT] avoid preemption in memory controller code
  2008-12-09  1:13 ` Li Zefan
@ 2008-12-09  1:32   ` KAMEZAWA Hiroyuki
  0 siblings, 0 replies; 10+ messages in thread
From: KAMEZAWA Hiroyuki @ 2008-12-09 1:32 UTC (permalink / raw)
To: Li Zefan
Cc: Tim Blechmann, rostedt, linux-rt-users, linux-kernel, Balbir Singh

On Tue, 09 Dec 2008 09:13:12 +0800
Li Zefan <lizf@cn.fujitsu.com> wrote:

> FYI, this lru_lock no longer exists in the -mm tree.  The following
> patch removes that lock:
>
> http://marc.info/?l=linux-mm&m=122665814801979&w=2

Yes, I removed that.  Please see mmotm and subscribe to linux-mm if you
have an interest in the memory resource controller.

Anyway, thanks.

-Kame
[parent not found: <preempt_enable_rt();>]
* [PATCH] [RT] avoid preemption in memory controller code
  [not found] <preempt_enable_rt();>
@ 2008-12-11 19:00 ` Tim Blechmann
  2008-12-12  1:17   ` KAMEZAWA Hiroyuki
  0 siblings, 1 reply; 10+ messages in thread
From: Tim Blechmann @ 2008-12-11 19:00 UTC (permalink / raw)
To: rostedt; +Cc: linux-kernel, linux-rt-users, mingo, tglx, Tim Blechmann

The lru_lock of struct mem_cgroup_per_zone is used to avoid preemption
during the mem_cgroup_charge_statistics function.  This does not work
correctly when CONFIG_PREEMPT_RT is enabled; therefore, preemption is
disabled using the preempt_disable_rt macro in these cases.

Signed-off-by: Tim Blechmann <tim@klingt.org>
---
 mm/memcontrol.c |    4 +++-
 1 files changed, 3 insertions(+), 1 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 562b94f..70493c4 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -158,7 +158,7 @@ pcg_default_flags[NR_CHARGE_TYPE] = {
 };
 
 /*
- * Always modified under lru lock. Then, not necessary to preempt_disable()
+ * Always modified under lru lock. Disable preemption with preempt_disable_rt()
 */
 static void mem_cgroup_charge_statistics(struct mem_cgroup *mem,
					 struct page_cgroup *pc,
@@ -170,6 +170,7 @@ static void mem_cgroup_charge_statistics(struct mem_cgroup *mem,
 
 	VM_BUG_ON(!irqs_disabled());
 
+	preempt_disable_rt();
 	cpustat = &stat->cpustat[smp_processor_id()];
 	if (PageCgroupCache(pc))
		__mem_cgroup_stat_add_safe(cpustat, MEM_CGROUP_STAT_CACHE, val);
@@ -182,6 +183,7 @@ static void mem_cgroup_charge_statistics(struct mem_cgroup *mem,
 	else
		__mem_cgroup_stat_add_safe(cpustat,
					MEM_CGROUP_STAT_PGPGOUT_COUNT, 1);
+	preempt_enable_rt();
 }
 
 static struct mem_cgroup_per_zone *
-- 
1.5.6.3
* Re: [PATCH] [RT] avoid preemption in memory controller code
  2008-12-11 19:00 ` Tim Blechmann
@ 2008-12-12  1:17   ` KAMEZAWA Hiroyuki
  2008-12-12  9:08     ` Tim Blechmann
  0 siblings, 1 reply; 10+ messages in thread
From: KAMEZAWA Hiroyuki @ 2008-12-12 1:17 UTC (permalink / raw)
To: Tim Blechmann
Cc: rostedt, linux-kernel, linux-rt-users, mingo, tglx

On Thu, 11 Dec 2008 20:00:45 +0100
Tim Blechmann <tim@klingt.org> wrote:

> The lru_lock of struct mem_cgroup_per_zone is used to avoid preemption
> during the mem_cgroup_charge_statistics function.  This does not work
> correctly when CONFIG_PREEMPT_RT is enabled; therefore, preemption is
> disabled using the preempt_disable_rt macro in these cases.
>
> Signed-off-by: Tim Blechmann <tim@klingt.org>

Sorry, in the mmotm kernel this function now looks like the following.
Please give me advice if some more thinking is necessary for RT.

==
static void mem_cgroup_charge_statistics(struct mem_cgroup *mem,
					 struct page_cgroup *pc,
					 bool charge)
{
	int val = (charge)? 1 : -1;
	struct mem_cgroup_stat *stat = &mem->stat;
	struct mem_cgroup_stat_cpu *cpustat;
	int cpu = get_cpu();

	cpustat = &stat->cpustat[cpu];
	if (PageCgroupCache(pc))
		__mem_cgroup_stat_add_safe(cpustat, MEM_CGROUP_STAT_CACHE, val);
	else
		__mem_cgroup_stat_add_safe(cpustat, MEM_CGROUP_STAT_RSS, val);

	if (charge)
		__mem_cgroup_stat_add_safe(cpustat,
				MEM_CGROUP_STAT_PGPGIN_COUNT, 1);
	else
		__mem_cgroup_stat_add_safe(cpustat,
				MEM_CGROUP_STAT_PGPGOUT_COUNT, 1);
	put_cpu();
}
==

Regards,
-Kame
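The get_cpu()/put_cpu() pairing in the mmotm version can be modeled in userspace C (illustrative stand-ins only: `get_cpu_model`, `put_cpu_model`, `model_cpu`, and `cpustat_model` are not the kernel's implementation). get_cpu() disables preemption and returns the current CPU id, so the per-CPU slot cannot change underneath the update until put_cpu():

```c
/* Model of get_cpu()/put_cpu(): the CPU id returned by get_cpu() stays
 * valid until put_cpu(), because preemption (and thus migration) is
 * disabled in between. */

enum { NR_CPUS_MODEL = 4 };

static int model_preempt_count;
static int model_cpu;                  /* stands in for the running CPU */
static long cpustat_model[NR_CPUS_MODEL];

static int get_cpu_model(void)
{
	model_preempt_count++;         /* preemption off: no migration */
	return model_cpu;
}

static void put_cpu_model(void)
{
	model_preempt_count--;         /* preemption back on */
}

static void charge_statistics_model(long val)
{
	int cpu = get_cpu_model();     /* cpu id is stable from here... */

	cpustat_model[cpu] += val;
	put_cpu_model();               /* ...until here */
}
```

This is why the mmotm code needs no extra locking for the statistics update: the window in which the CPU id is used is exactly the window in which preemption is held off.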
* Re: [PATCH] [RT] avoid preemption in memory controller code
  2008-12-12  1:17 ` KAMEZAWA Hiroyuki
@ 2008-12-12  9:08   ` Tim Blechmann
  0 siblings, 0 replies; 10+ messages in thread
From: Tim Blechmann @ 2008-12-12 9:08 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki
Cc: rostedt, linux-kernel, linux-rt-users, mingo, tglx

> > The lru_lock of struct mem_cgroup_per_zone is used to avoid preemption
> > during the mem_cgroup_charge_statistics function.  This does not work
> > correctly when CONFIG_PREEMPT_RT is enabled; therefore, preemption is
> > disabled using the preempt_disable_rt macro in these cases.
> >
> > Signed-off-by: Tim Blechmann <tim@klingt.org>
>
> Sorry, in the mmotm kernel this function now looks like the following.
> Please give me advice if some more thinking is necessary for RT.

Using get_cpu()/put_cpu(), preemption is already disabled, so mmotm
should not require any changes.  My patch would only be required for an
rt-patch based on a current kernel version; instead of applying my
patch, one could also pick the relevant part of mmotm into the RT tree.

tim

-- 
tim@klingt.org
http://tim.klingt.org

Silence is only frightening to people who are compulsively verbalizing.
      William S. Burroughs
Thread overview: 10+ messages
2008-12-08 14:34 [PATCH] [RT] avoid preemption in memory controller code Tim Blechmann
2008-12-08 16:42 ` Steven Rostedt
2008-12-08 17:07 ` Tim Blechmann
2008-12-08 17:23 ` Steven Rostedt
2008-12-11 19:05 ` [PATCH] [RT] preempt_disable_rt for CONFIG_PREEMPT_RT Tim Blechmann
2008-12-09 1:13 ` [PATCH] [RT] avoid preemption in memory controller code Li Zefan
2008-12-09 1:32 ` KAMEZAWA Hiroyuki
[not found] <preempt_enable_rt();>
2008-12-11 19:00 ` Tim Blechmann
2008-12-12 1:17 ` KAMEZAWA Hiroyuki
2008-12-12 9:08 ` Tim Blechmann