From mboxrd@z Thu Jan  1 00:00:00 1970
From: KOSAKI Motohiro
Subject: Re: [PATCH v5 7/8] mm: Only IPI CPUs to drain local pages if they exist
Date: Tue, 03 Jan 2012 17:02:36 -0500
Message-ID: <4F037AFC.8030604@gmail.com>
References: <1325499859-2262-1-git-send-email-gilad@benyossef.com> <1325499859-2262-8-git-send-email-gilad@benyossef.com> <4F033EC9.4050909@gmail.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: linux-kernel@vger.kernel.org, Chris Metcalf, Peter Zijlstra, Frederic Weisbecker, Russell King, linux-mm@kvack.org, Pekka Enberg, Matt Mackall, Sasha Levin, Rik van Riel, Andi Kleen, Mel Gorman, Andrew Morton, Alexander Viro, linux-fsdevel@vger.kernel.org, Avi Kivity
To: Gilad Ben-Yossef
Return-path:
In-Reply-To:
Sender: owner-linux-mm@kvack.org
List-Id: linux-fsdevel.vger.kernel.org

(1/3/12 1:58 PM), Gilad Ben-Yossef wrote:
> 2012/1/3 KOSAKI Motohiro:
>> (1/2/12 5:24 AM), Gilad Ben-Yossef wrote:
>>> Calculate a cpumask of CPUs with per-cpu pages in any zone
>>> and only send an IPI requesting CPUs to drain these pages
>>> to the buddy allocator if they actually have pages when
>>> asked to flush.
>>>
>>> This patch saves 99% of the IPIs asking to drain per-cpu
>>> pages in case of severe memory pressure that leads to OOM.
>>> In these cases multiple, possibly concurrent, allocation
>>> requests end up in the direct reclaim code path, so the
>>> per-cpu pages end up reclaimed on the first allocation
>>> failure; for most of the subsequent allocation attempts,
>>> until the memory pressure is off (possibly via the OOM
>>> killer), there are no per-cpu pages on most CPUs (and
>>> there can easily be hundreds of them).
>>>
>>> This also has the side effect of shortening the average
>>> latency of direct reclaim by one or more orders of
>>> magnitude, since waiting for all the CPUs to ACK the
>>> IPI takes a long time.
>>>
>>> Tested by running "hackbench 400" on a 4 CPU x86 otherwise
>>> idle VM and observing the difference between the number
>>> of direct reclaim attempts that end up in drain_all_pages()
>>> and those where more than half of the online CPUs had any
>>> per-cpu pages in them, using the vmstat counters introduced
>>> in the next patch in the series and using /proc/interrupts.
>>>
>>> In the test scenario, this saved around 500 global IPIs.
>>> After triggering an OOM:
>>>
>>> $ cat /proc/vmstat
>>> ...
>>> pcp_global_drain 627
>>> pcp_global_ipi_saved 578
>>>
>>> I've also seen the number of drains reach 15k calls
>>> with the saved percentage reaching 99% when there
>>> are more tasks running during an OOM kill.
>>>
>>> Signed-off-by: Gilad Ben-Yossef
>>> Acked-by: Christoph Lameter
>>> CC: Chris Metcalf
>>> CC: Peter Zijlstra
>>> CC: Frederic Weisbecker
>>> CC: Russell King
>>> CC: linux-mm@kvack.org
>>> CC: Pekka Enberg
>>> CC: Matt Mackall
>>> CC: Sasha Levin
>>> CC: Rik van Riel
>>> CC: Andi Kleen
>>> CC: Mel Gorman
>>> CC: Andrew Morton
>>> CC: Alexander Viro
>>> CC: linux-fsdevel@vger.kernel.org
>>> CC: Avi Kivity
>>> ---
>>> Christoph's Ack was for a previous version that allocated
>>> the cpumask in drain_all_pages().
>>
>> When you change a patch's design and implementation, the
>> ACKs should be dropped; otherwise you miss the chance to
>> get a good review.
>>

Got it. Thanks for the review :-)

>>
>>> mm/page_alloc.c | 26 +++++++++++++++++++++++++-
>>> 1 files changed, 25 insertions(+), 1 deletions(-)
>>>
>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>> index 2b8ba3a..092c331 100644
>>> --- a/mm/page_alloc.c
>>> +++ b/mm/page_alloc.c
>>> @@ -67,6 +67,14 @@ DEFINE_PER_CPU(int, numa_node);
>>> EXPORT_PER_CPU_SYMBOL(numa_node);
>>> #endif
>>>
>>> +/*
>>> + * A global cpumask of CPUs with per-cpu pages that gets
>>> + * recomputed on each drain.
We use a global cpumask
>>> + * to avoid allocating one on the direct reclaim code path
>>> + * for CONFIG_CPUMASK_OFFSTACK=y
>>> + */
>>> +static cpumask_var_t cpus_with_pcps;
>>> +
>>> #ifdef CONFIG_HAVE_MEMORYLESS_NODES
>>> /*
>>> * N.B., Do NOT reference the '_numa_mem_' per cpu variable directly.
>>> @@ -1119,7 +1127,19 @@ void drain_local_pages(void *arg)
>>> */
>>> void drain_all_pages(void)
>>> {
>>> - on_each_cpu(drain_local_pages, NULL, 1);
>>> + int cpu;
>>> + struct per_cpu_pageset *pcp;
>>> + struct zone *zone;
>>> +
>>
>> get_online_cpu() ?
>
> I believe this is not needed here as on_each_cpu_mask()
> (smp_call_function_many really) later masks the cpumask with
> the online cpus, so at worst we are turning on or off a
> meaningless bit.

You are right. This function can't call get_online_cpus(), and a
CPU unplug event automatically drops the pcps, so no worry.

>
> Anyway, if I'm wrong someone should fix show_free_areas() as well :-)
>
>>> + for_each_online_cpu(cpu)
>>> + for_each_populated_zone(zone) {
>>> + pcp = per_cpu_ptr(zone->pageset, cpu);
>>> + if (pcp->pcp.count)
>>> + cpumask_set_cpu(cpu, cpus_with_pcps);
>>> + else
>>> + cpumask_clear_cpu(cpu, cpus_with_pcps);
>>
>> cpumask* functions can't be used locklessly?
>
> I'm not sure I understand your question correctly. As far as I
> understand, cpumask_set_cpu() and cpumask_clear_cpu() are atomic
> operations that do not require a lock (they might be implemented
> using one though).

Ahh, yup, right you are.

Reviewed-by: KOSAKI Motohiro