* [RFC PATCH 0/3] Reduce watermark-related problems with the per-cpu allocator
@ 2010-08-16 9:42 Mel Gorman
2010-08-16 9:42 ` [PATCH 1/3] mm: page allocator: Update free page counters after pages are placed on the free list Mel Gorman
` (2 more replies)
0 siblings, 3 replies; 82+ messages in thread
From: Mel Gorman @ 2010-08-16 9:42 UTC (permalink / raw)
To: linux-mm
Cc: Rik van Riel, Nick Piggin, Johannes Weiner, KAMEZAWA Hiroyuki,
KOSAKI Motohiro, Mel Gorman
Internal IBM test teams beta testing distribution kernels have reported
problems on machines with a large number of CPUs whereby page allocator
failure messages show huge differences between the nr_free_pages vmstat
counter and what is available on the buddy lists. In an extreme example,
nr_free_pages was above the min watermark but zero pages were on the buddy
lists, allowing the system to potentially deadlock. There is no reason why
the problems would not affect mainline, so the following series mitigates
the problems in the page allocator related to per-cpu counter drift and
lists.
The first patch ensures that counters are updated after pages are added to
free lists.
The second patch notes that the counter drift between nr_free_pages and what
is on the per-cpu lists can be very high. When memory is low and kswapd
is awake, the per-cpu counters are checked as well as reading the value
of NR_FREE_PAGES. This will slow the page allocator when memory is low and
kswapd is awake but it will be much harder to breach the min watermark and
potentially livelock the system.
The third patch notes that after direct-reclaim an allocation can
fail because the necessary pages are on the per-cpu lists. After a
direct-reclaim-and-allocation-failure, the per-cpu lists are drained and
a second attempt is made.
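To make the idea concrete, the retry would be shaped roughly like the
following sketch (illustrative only, not the patch itself; the
get_page_from_freelist() parameter list is from memory and may not match
exactly):

	/* Direct reclaim made progress but the allocation still failed */
	page = get_page_from_freelist(gfp_mask, nodemask, order, zonelist,
				high_zoneidx, alloc_flags,
				preferred_zone, migratetype);
	if (!page && did_some_progress) {
		/*
		 * The pages freed by reclaim may be sitting on per-cpu
		 * lists where the buddy allocator cannot see them. Drain
		 * them to the buddy lists and try once more.
		 */
		drain_all_pages();
		page = get_page_from_freelist(gfp_mask, nodemask, order,
				zonelist, high_zoneidx, alloc_flags,
				preferred_zone, migratetype);
	}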
Performance tests did not show anything interesting. A version of this
series that continually called vmstat_update() when memory was low was
tested internally and found to help the counter-drift problem. I described
this during the LSF/MM Summit and the potential for IPI storms was frowned
upon. An alternative fix is in patch two which uses for_each_online_cpu()
to read the vmstat deltas while memory is low and kswapd is awake. This
should be functionally similar.
Comments?
include/linux/mmzone.h | 9 +++++++++
mm/mmzone.c | 27 +++++++++++++++++++++++++++
mm/page_alloc.c | 28 ++++++++++++++++++++++------
mm/vmstat.c | 5 ++++-
4 files changed, 62 insertions(+), 7 deletions(-)
* [PATCH 1/3] mm: page allocator: Update free page counters after pages are placed on the free list
2010-08-16 9:42 [RFC PATCH 0/3] Reduce watermark-related problems with the per-cpu allocator Mel Gorman
@ 2010-08-16 9:42 ` Mel Gorman
2010-08-16 14:04 ` Rik van Riel
` (3 more replies)
2010-08-16 9:42 ` [PATCH 2/3] mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake Mel Gorman
2010-08-16 9:42 ` [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails Mel Gorman
2 siblings, 4 replies; 82+ messages in thread
From: Mel Gorman @ 2010-08-16 9:42 UTC (permalink / raw)
To: linux-mm
Cc: Rik van Riel, Nick Piggin, Johannes Weiner, KAMEZAWA Hiroyuki,
KOSAKI Motohiro, Mel Gorman
When allocating a page, the system uses NR_FREE_PAGES counters to determine
if watermarks would remain intact after the allocation was made. This
check is made without interrupts disabled or the zone lock held and so is
race-prone by nature. Unfortunately, when pages are being freed in batch,
the counters are updated before the pages are added on the list. During this
window, the counters are misleading as the pages do not exist yet. When
under significant pressure on systems with large numbers of CPUs, it's
possible for processes to make progress even though they should have been
stalled. This is particularly problematic if a number of the processes are
using GFP_ATOMIC as the min watermark can be accidentally breached and in
extreme cases, the system can livelock.
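The window looks roughly like this (an illustrative interleaving, not code
from the patch):

	CPU A (free_pcppages_bulk)      CPU B (allocating)
	__mod_zone_page_state(zone,
		NR_FREE_PAGES, count)
	                                zone_watermark_ok() sees the
	                                inflated counter and passes,
	                                but the pages are not on any
	                                buddy list yet
	pages placed on the free lists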
This patch updates the counters after the pages have been added to the
list. This makes the allocator more cautious with respect to preserving
the watermarks and mitigates livelock possibilities.
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
---
mm/page_alloc.c | 5 +++--
1 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9bd339e..c2407a4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -588,12 +588,12 @@ static void free_pcppages_bulk(struct zone *zone, int count,
{
int migratetype = 0;
int batch_free = 0;
+ int freed = count;
spin_lock(&zone->lock);
zone->all_unreclaimable = 0;
zone->pages_scanned = 0;
- __mod_zone_page_state(zone, NR_FREE_PAGES, count);
while (count) {
struct page *page;
struct list_head *list;
@@ -621,6 +621,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
trace_mm_page_pcpu_drain(page, 0, page_private(page));
} while (--count && --batch_free && !list_empty(list));
}
+ __mod_zone_page_state(zone, NR_FREE_PAGES, freed);
spin_unlock(&zone->lock);
}
@@ -631,8 +632,8 @@ static void free_one_page(struct zone *zone, struct page *page, int order,
zone->all_unreclaimable = 0;
zone->pages_scanned = 0;
- __mod_zone_page_state(zone, NR_FREE_PAGES, 1 << order);
__free_one_page(page, zone, order, migratetype);
+ __mod_zone_page_state(zone, NR_FREE_PAGES, 1 << order);
spin_unlock(&zone->lock);
}
--
1.7.1
* Re: [PATCH 1/3] mm: page allocator: Update free page counters after pages are placed on the free list
2010-08-16 9:42 ` [PATCH 1/3] mm: page allocator: Update free page counters after pages are placed on the free list Mel Gorman
@ 2010-08-16 14:04 ` Rik van Riel
2010-08-16 15:26 ` Johannes Weiner
` (2 subsequent siblings)
3 siblings, 0 replies; 82+ messages in thread
From: Rik van Riel @ 2010-08-16 14:04 UTC (permalink / raw)
To: Mel Gorman
Cc: linux-mm, Nick Piggin, Johannes Weiner, KAMEZAWA Hiroyuki,
KOSAKI Motohiro
On 08/16/2010 05:42 AM, Mel Gorman wrote:
> When allocating a page, the system uses NR_FREE_PAGES counters to determine
> if watermarks would remain intact after the allocation was made. This
> check is made without interrupts disabled or the zone lock held and so is
> race-prone by nature. Unfortunately, when pages are being freed in batch,
> the counters are updated before the pages are added on the list. During this
> window, the counters are misleading as the pages do not exist yet. When
> under significant pressure on systems with large numbers of CPUs, it's
> possible for processes to make progress even though they should have been
> stalled. This is particularly problematic if a number of the processes are
> using GFP_ATOMIC as the min watermark can be accidentally breached and in
> extreme cases, the system can livelock.
>
> This patch updates the counters after the pages have been added to the
> list. This makes the allocator more cautious with respect to preserving
> the watermarks and mitigates livelock possibilities.
>
> Signed-off-by: Mel Gorman<mel@csn.ul.ie>
Reviewed-by: Rik van Riel <riel@redhat.com>
--
All rights reversed
* Re: [PATCH 1/3] mm: page allocator: Update free page counters after pages are placed on the free list
2010-08-16 9:42 ` [PATCH 1/3] mm: page allocator: Update free page counters after pages are placed on the free list Mel Gorman
2010-08-16 14:04 ` Rik van Riel
@ 2010-08-16 15:26 ` Johannes Weiner
2010-08-17 2:21 ` Minchan Kim
2010-08-18 2:21 ` KAMEZAWA Hiroyuki
3 siblings, 0 replies; 82+ messages in thread
From: Johannes Weiner @ 2010-08-16 15:26 UTC (permalink / raw)
To: Mel Gorman
Cc: linux-mm, Rik van Riel, Nick Piggin, KAMEZAWA Hiroyuki,
KOSAKI Motohiro
On Mon, Aug 16, 2010 at 10:42:11AM +0100, Mel Gorman wrote:
> When allocating a page, the system uses NR_FREE_PAGES counters to determine
> if watermarks would remain intact after the allocation was made. This
> check is made without interrupts disabled or the zone lock held and so is
> race-prone by nature. Unfortunately, when pages are being freed in batch,
> the counters are updated before the pages are added on the list. During this
> window, the counters are misleading as the pages do not exist yet. When
> under significant pressure on systems with large numbers of CPUs, it's
> possible for processes to make progress even though they should have been
> stalled. This is particularly problematic if a number of the processes are
> using GFP_ATOMIC as the min watermark can be accidentally breached and in
> extreme cases, the system can livelock.
>
> This patch updates the counters after the pages have been added to the
> list. This makes the allocator more cautious with respect to preserving
> the watermarks and mitigates livelock possibilities.
>
> Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
* Re: [PATCH 1/3] mm: page allocator: Update free page counters after pages are placed on the free list
2010-08-16 9:42 ` [PATCH 1/3] mm: page allocator: Update free page counters after pages are placed on the free list Mel Gorman
2010-08-16 14:04 ` Rik van Riel
2010-08-16 15:26 ` Johannes Weiner
@ 2010-08-17 2:21 ` Minchan Kim
2010-08-17 9:59 ` Mel Gorman
2010-08-18 2:21 ` KAMEZAWA Hiroyuki
3 siblings, 1 reply; 82+ messages in thread
From: Minchan Kim @ 2010-08-17 2:21 UTC (permalink / raw)
To: Mel Gorman
Cc: linux-mm, Rik van Riel, Nick Piggin, Johannes Weiner,
KAMEZAWA Hiroyuki, KOSAKI Motohiro
Hi, Mel.
On Mon, Aug 16, 2010 at 6:42 PM, Mel Gorman <mel@csn.ul.ie> wrote:
> When allocating a page, the system uses NR_FREE_PAGES counters to determine
> if watermarks would remain intact after the allocation was made. This
> check is made without interrupts disabled or the zone lock held and so is
> race-prone by nature. Unfortunately, when pages are being freed in batch,
> the counters are updated before the pages are added on the list. During this
> window, the counters are misleading as the pages do not exist yet. When
> under significant pressure on systems with large numbers of CPUs, it's
> possible for processes to make progress even though they should have been
> stalled. This is particularly problematic if a number of the processes are
> using GFP_ATOMIC as the min watermark can be accidentally breached and in
> extreme cases, the system can livelock.
>
> This patch updates the counters after the pages have been added to the
> list. This makes the allocator more cautious with respect to preserving
> the watermarks and mitigates livelock possibilities.
>
> Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
The page free path looks good with your patch.
Now the allocation path decreases NR_FREE_PAGES _after_ it removes pages
from the buddy list. That means we may not actually have enough pages in
the buddy lists while pretending that we do. It could create the same
situation as the free path, which was your concern, so I think it can
confuse the watermark check in extreme cases.
So don't we need to make the _allocation_ path conservative as well?
--
Kind regards,
Minchan Kim
* Re: [PATCH 1/3] mm: page allocator: Update free page counters after pages are placed on the free list
2010-08-17 2:21 ` Minchan Kim
@ 2010-08-17 9:59 ` Mel Gorman
2010-08-17 14:25 ` Minchan Kim
0 siblings, 1 reply; 82+ messages in thread
From: Mel Gorman @ 2010-08-17 9:59 UTC (permalink / raw)
To: Minchan Kim
Cc: linux-mm, Rik van Riel, Nick Piggin, Johannes Weiner,
KAMEZAWA Hiroyuki, KOSAKI Motohiro
On Tue, Aug 17, 2010 at 11:21:15AM +0900, Minchan Kim wrote:
> Hi, Mel.
>
> On Mon, Aug 16, 2010 at 6:42 PM, Mel Gorman <mel@csn.ul.ie> wrote:
> > When allocating a page, the system uses NR_FREE_PAGES counters to determine
> > if watermarks would remain intact after the allocation was made. This
> > check is made without interrupts disabled or the zone lock held and so is
> > race-prone by nature. Unfortunately, when pages are being freed in batch,
> > the counters are updated before the pages are added on the list. During this
> > window, the counters are misleading as the pages do not exist yet. When
> > under significant pressure on systems with large numbers of CPUs, it's
> > possible for processes to make progress even though they should have been
> > stalled. This is particularly problematic if a number of the processes are
> > using GFP_ATOMIC as the min watermark can be accidentally breached and in
> > extreme cases, the system can livelock.
> >
> > This patch updates the counters after the pages have been added to the
> > list. This makes the allocator more cautious with respect to preserving
> > the watermarks and mitigates livelock possibilities.
> >
> > Signed-off-by: Mel Gorman <mel@csn.ul.ie>
> Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
>
> The page free path looks good with your patch.
>
Thanks
> Now the allocation path decreases NR_FREE_PAGES _after_ it removes pages
> from the buddy list. That means we may not actually have enough pages in
> the buddy lists while pretending that we do. It could create the same
> situation as the free path, which was your concern, so I think it can
> confuse the watermark check in extreme cases.
>
> So don't we need to make the _allocation_ path conservative as well?
>
I considered it and it would be desirable. The downside was that the
paths became more complicated. Take rmqueue_bulk() for example. It could
start by modifying the counters, but it would then need a recovery path
if not all of the requested pages were allocated.
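To illustrate (a sketch of the complication only, not something I am
proposing; the loop is simplified from the real rmqueue_bulk()):

	/* charge the whole batch before taking pages off the buddy lists */
	__mod_zone_page_state(zone, NR_FREE_PAGES, -(count << order));
	for (i = 0; i < count; ++i) {
		struct page *page = __rmqueue(zone, order, migratetype);
		if (unlikely(page == NULL))
			break;
		list_add(&page->lru, list);
		set_page_private(page, migratetype);
	}
	/* recovery path: refund what was charged but never allocated */
	if (i != count)
		__mod_zone_page_state(zone, NR_FREE_PAGES,
					(count - i) << order);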
It'd be nice to see if these patches on their own were enough to
alleviate the worst of the per-cpu-counter drift before adding new
branches to the allocation path.
Does that make sense?
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
* Re: [PATCH 1/3] mm: page allocator: Update free page counters after pages are placed on the free list
2010-08-17 9:59 ` Mel Gorman
@ 2010-08-17 14:25 ` Minchan Kim
0 siblings, 0 replies; 82+ messages in thread
From: Minchan Kim @ 2010-08-17 14:25 UTC (permalink / raw)
To: Mel Gorman
Cc: linux-mm, Rik van Riel, Nick Piggin, Johannes Weiner,
KAMEZAWA Hiroyuki, KOSAKI Motohiro
On Tue, Aug 17, 2010 at 10:59:18AM +0100, Mel Gorman wrote:
> On Tue, Aug 17, 2010 at 11:21:15AM +0900, Minchan Kim wrote:
> > Now the allocation path decreases NR_FREE_PAGES _after_ it removes pages
> > from the buddy list. That means we may not actually have enough pages in
> > the buddy lists while pretending that we do. It could create the same
> > situation as the free path, which was your concern, so I think it can
> > confuse the watermark check in extreme cases.
> >
> > So don't we need to make the _allocation_ path conservative as well?
> >
>
> I considered it and it would be desirable. The downside was that the
> paths became more complicated. Take rmqueue_bulk() for example. It could
> start by modifying the counters but there then needs to be a recovery
> path if all the requested pages were not allocated.
>
> It'd be nice to see if these patches on their own were enough to
> alleviate the worst of the per-cpu-counter drift before adding new
> branches to the allocation path.
>
> Does that make sense?
No problem. It was a use case on a big machine.
I also hope we don't add unnecessary overhead on normal machines due to an
unlikely problem.
Let's consider it as a further step if this isn't enough.
Thanks, Mel.
>
> --
> Mel Gorman
> Part-time Phd Student Linux Technology Center
> University of Limerick IBM Dublin Software Lab
--
Kind regards,
Minchan Kim
* Re: [PATCH 1/3] mm: page allocator: Update free page counters after pages are placed on the free list
2010-08-16 9:42 ` [PATCH 1/3] mm: page allocator: Update free page counters after pages are placed on the free list Mel Gorman
` (2 preceding siblings ...)
2010-08-17 2:21 ` Minchan Kim
@ 2010-08-18 2:21 ` KAMEZAWA Hiroyuki
3 siblings, 0 replies; 82+ messages in thread
From: KAMEZAWA Hiroyuki @ 2010-08-18 2:21 UTC (permalink / raw)
To: Mel Gorman
Cc: linux-mm, Rik van Riel, Nick Piggin, Johannes Weiner,
KOSAKI Motohiro
On Mon, 16 Aug 2010 10:42:11 +0100
Mel Gorman <mel@csn.ul.ie> wrote:
> When allocating a page, the system uses NR_FREE_PAGES counters to determine
> if watermarks would remain intact after the allocation was made. This
> check is made without interrupts disabled or the zone lock held and so is
> race-prone by nature. Unfortunately, when pages are being freed in batch,
> the counters are updated before the pages are added on the list. During this
> window, the counters are misleading as the pages do not exist yet. When
> under significant pressure on systems with large numbers of CPUs, it's
> possible for processes to make progress even though they should have been
> stalled. This is particularly problematic if a number of the processes are
> using GFP_ATOMIC as the min watermark can be accidentally breached and in
> extreme cases, the system can livelock.
>
> This patch updates the counters after the pages have been added to the
> list. This makes the allocator more cautious with respect to preserving
> the watermarks and mitigates livelock possibilities.
>
> Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
* [PATCH 2/3] mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake
2010-08-16 9:42 [RFC PATCH 0/3] Reduce watermark-related problems with the per-cpu allocator Mel Gorman
2010-08-16 9:42 ` [PATCH 1/3] mm: page allocator: Update free page counters after pages are placed on the free list Mel Gorman
@ 2010-08-16 9:42 ` Mel Gorman
2010-08-16 9:43 ` Mel Gorman
2010-08-18 2:59 ` KAMEZAWA Hiroyuki
2010-08-16 9:42 ` [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails Mel Gorman
2 siblings, 2 replies; 82+ messages in thread
From: Mel Gorman @ 2010-08-16 9:42 UTC (permalink / raw)
To: linux-mm
Cc: Rik van Riel, Nick Piggin, Johannes Weiner, KAMEZAWA Hiroyuki,
KOSAKI Motohiro, Mel Gorman
Ordinarily watermark checks are made based on the vmstat NR_FREE_PAGES as
it is cheaper than scanning a number of lists. To avoid synchronization
overhead, counter deltas are maintained on a per-cpu basis and drained both
periodically and when the delta is above a threshold. On large CPU systems,
the difference between the estimated and real value of NR_FREE_PAGES can be
very high. If the system is under both load and low memory, it's possible
for watermarks to be breached. In extreme cases, the number of free pages
can drop to 0 leading to the possibility of system livelock.
This patch introduces zone_nr_free_pages() to take a slightly more accurate
estimate of NR_FREE_PAGES while kswapd is awake. The estimate is not perfect
and may result in cache line bounces but is expected to be lighter than the
IPI calls necessary to continually drain the per-cpu counters while kswapd
is awake.
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
---
include/linux/mmzone.h | 9 +++++++++
mm/mmzone.c | 27 +++++++++++++++++++++++++++
mm/page_alloc.c | 4 ++--
mm/vmstat.c | 5 ++++-
4 files changed, 42 insertions(+), 3 deletions(-)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index b4d109e..1df3c43 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -284,6 +284,13 @@ struct zone {
unsigned long watermark[NR_WMARK];
/*
+ * When free pages are below this point, additional steps are taken
+ * when reading the number of free pages to avoid per-cpu counter
+ * drift allowing watermarks to be breached
+ */
+ unsigned long percpu_drift_mark;
+
+ /*
* We don't know if the memory that we're going to allocate will be freeable
* or/and it will be released eventually, so to avoid totally wasting several
* GB of ram we must reserve some of the lower zone memory (otherwise we risk
@@ -456,6 +463,8 @@ static inline int zone_is_oom_locked(const struct zone *zone)
return test_bit(ZONE_OOM_LOCKED, &zone->flags);
}
+unsigned long zone_nr_free_pages(struct zone *zone);
+
/*
* The "priority" of VM scanning is how much of the queues we will scan in one
* go. A value of 12 for DEF_PRIORITY implies that we will scan 1/4096th of the
diff --git a/mm/mmzone.c b/mm/mmzone.c
index f5b7d17..89842ec 100644
--- a/mm/mmzone.c
+++ b/mm/mmzone.c
@@ -87,3 +87,30 @@ int memmap_valid_within(unsigned long pfn,
return 1;
}
#endif /* CONFIG_ARCH_HAS_HOLES_MEMORYMODEL */
+
+/* Called when a more accurate view of NR_FREE_PAGES is needed */
+unsigned long zone_nr_free_pages(struct zone *zone)
+{
+ unsigned long nr_free_pages = zone_page_state(zone, NR_FREE_PAGES);
+
+ /*
+ * While kswapd is awake, it is considered the zone is under some
+ * memory pressure. Under pressure, there is a risk that
+ * er-cpu-counter-drift will allow the min watermark to be breached
+ * potentially causing a live-lock. While kswapd is awake and
+ * free pages are low, get a better estimate for free pages
+ */
+ if (free < zone->percpu_drift_mark &&
+ !waitqueue_active(&zone->zone_pgdat->kswapd_wait)) {
+ int cpu;
+
+ for_each_online_cpu(cpu) {
+ struct per_cpu_pageset *pset;
+
+ pset = per_cpu_ptr(zone->pageset, cpu);
+ nr_free_pages += pset->vm_stat_diff[NR_FREE_PAGES];
+ }
+ }
+
+ return nr_free_pages;
+}
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c2407a4..67a2ed0 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1462,7 +1462,7 @@ int zone_watermark_ok(struct zone *z, int order, unsigned long mark,
{
/* free_pages my go negative - that's OK */
long min = mark;
- long free_pages = zone_page_state(z, NR_FREE_PAGES) - (1 << order) + 1;
+ long free_pages = zone_nr_free_pages(z) - (1 << order) + 1;
int o;
if (alloc_flags & ALLOC_HIGH)
@@ -2413,7 +2413,7 @@ void show_free_areas(void)
" all_unreclaimable? %s"
"\n",
zone->name,
- K(zone_page_state(zone, NR_FREE_PAGES)),
+ K(zone_nr_free_pages(zone)),
K(min_wmark_pages(zone)),
K(low_wmark_pages(zone)),
K(high_wmark_pages(zone)),
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 7759941..c95a159 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -143,6 +143,9 @@ static void refresh_zone_stat_thresholds(void)
for_each_online_cpu(cpu)
per_cpu_ptr(zone->pageset, cpu)->stat_threshold
= threshold;
+
+ zone->percpu_drift_mark = high_wmark_pages(zone) +
+ num_online_cpus() * threshold;
}
}
@@ -813,7 +816,7 @@ static void zoneinfo_show_print(struct seq_file *m, pg_data_t *pgdat,
"\n scanned %lu"
"\n spanned %lu"
"\n present %lu",
- zone_page_state(zone, NR_FREE_PAGES),
+ zone_nr_free_pages(zone),
min_wmark_pages(zone),
low_wmark_pages(zone),
high_wmark_pages(zone),
--
1.7.1
* Re: [PATCH 2/3] mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake
2010-08-16 9:42 ` [PATCH 2/3] mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake Mel Gorman
@ 2010-08-16 9:43 ` Mel Gorman
2010-08-16 14:47 ` Rik van Riel
` (2 more replies)
2010-08-18 2:59 ` KAMEZAWA Hiroyuki
1 sibling, 3 replies; 82+ messages in thread
From: Mel Gorman @ 2010-08-16 9:43 UTC (permalink / raw)
To: linux-mm
Cc: Rik van Riel, Nick Piggin, Johannes Weiner, KAMEZAWA Hiroyuki,
KOSAKI Motohiro
On Mon, Aug 16, 2010 at 10:42:12AM +0100, Mel Gorman wrote:
> Ordinarily watermark checks are made based on the vmstat NR_FREE_PAGES as
> it is cheaper than scanning a number of lists. To avoid synchronization
> overhead, counter deltas are maintained on a per-cpu basis and drained both
> periodically and when the delta is above a threshold. On large CPU systems,
> the difference between the estimated and real value of NR_FREE_PAGES can be
> very high. If the system is under both load and low memory, it's possible
> for watermarks to be breached. In extreme cases, the number of free pages
> can drop to 0 leading to the possibility of system livelock.
>
> This patch introduces zone_nr_free_pages() to take a slightly more accurate
> estimate of NR_FREE_PAGES while kswapd is awake. The estimate is not perfect
> and may result in cache line bounces but is expected to be lighter than the
> IPI calls necessary to continually drain the per-cpu counters while kswapd
> is awake.
>
> Signed-off-by: Mel Gorman <mel@csn.ul.ie>
And the second I sent this, I realised I had sent a slightly old version
that missed a compile-fix :(
==== CUT HERE ====
mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake
Ordinarily watermark checks are made based on the vmstat NR_FREE_PAGES as
it is cheaper than scanning a number of lists. To avoid synchronization
overhead, counter deltas are maintained on a per-cpu basis and drained both
periodically and when the delta is above a threshold. On large CPU systems,
the difference between the estimated and real value of NR_FREE_PAGES can be
very high. If the system is under both load and low memory, it's possible
for watermarks to be breached. In extreme cases, the number of free pages
can drop to 0 leading to the possibility of system livelock.
This patch introduces zone_nr_free_pages() to take a slightly more accurate
estimate of NR_FREE_PAGES while kswapd is awake. The estimate is not perfect
and may result in cache line bounces but is expected to be lighter than the
IPI calls necessary to continually drain the per-cpu counters while kswapd
is awake.
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
---
include/linux/mmzone.h | 9 +++++++++
mm/mmzone.c | 27 +++++++++++++++++++++++++++
mm/page_alloc.c | 4 ++--
mm/vmstat.c | 5 ++++-
4 files changed, 42 insertions(+), 3 deletions(-)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index b4d109e..1df3c43 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -284,6 +284,13 @@ struct zone {
unsigned long watermark[NR_WMARK];
/*
+ * When free pages are below this point, additional steps are taken
+ * when reading the number of free pages to avoid per-cpu counter
+ * drift allowing watermarks to be breached
+ */
+ unsigned long percpu_drift_mark;
+
+ /*
* We don't know if the memory that we're going to allocate will be freeable
* or/and it will be released eventually, so to avoid totally wasting several
* GB of ram we must reserve some of the lower zone memory (otherwise we risk
@@ -456,6 +463,8 @@ static inline int zone_is_oom_locked(const struct zone *zone)
return test_bit(ZONE_OOM_LOCKED, &zone->flags);
}
+unsigned long zone_nr_free_pages(struct zone *zone);
+
/*
* The "priority" of VM scanning is how much of the queues we will scan in one
* go. A value of 12 for DEF_PRIORITY implies that we will scan 1/4096th of the
diff --git a/mm/mmzone.c b/mm/mmzone.c
index f5b7d17..056e374 100644
--- a/mm/mmzone.c
+++ b/mm/mmzone.c
@@ -87,3 +87,30 @@ int memmap_valid_within(unsigned long pfn,
return 1;
}
#endif /* CONFIG_ARCH_HAS_HOLES_MEMORYMODEL */
+
+/* Called when a more accurate view of NR_FREE_PAGES is needed */
+unsigned long zone_nr_free_pages(struct zone *zone)
+{
+ unsigned long nr_free_pages = zone_page_state(zone, NR_FREE_PAGES);
+
+ /*
+ * While kswapd is awake, it is considered the zone is under some
+ * memory pressure. Under pressure, there is a risk that
+ * er-cpu-counter-drift will allow the min watermark to be breached
+ * potentially causing a live-lock. While kswapd is awake and
+ * free pages are low, get a better estimate for free pages
+ */
+ if (nr_free_pages < zone->percpu_drift_mark &&
+ !waitqueue_active(&zone->zone_pgdat->kswapd_wait)) {
+ int cpu;
+
+ for_each_online_cpu(cpu) {
+ struct per_cpu_pageset *pset;
+
+ pset = per_cpu_ptr(zone->pageset, cpu);
+ nr_free_pages += pset->vm_stat_diff[NR_FREE_PAGES];
+ }
+ }
+
+ return nr_free_pages;
+}
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c2407a4..67a2ed0 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1462,7 +1462,7 @@ int zone_watermark_ok(struct zone *z, int order, unsigned long mark,
{
/* free_pages my go negative - that's OK */
long min = mark;
- long free_pages = zone_page_state(z, NR_FREE_PAGES) - (1 << order) + 1;
+ long free_pages = zone_nr_free_pages(z) - (1 << order) + 1;
int o;
if (alloc_flags & ALLOC_HIGH)
@@ -2413,7 +2413,7 @@ void show_free_areas(void)
" all_unreclaimable? %s"
"\n",
zone->name,
- K(zone_page_state(zone, NR_FREE_PAGES)),
+ K(zone_nr_free_pages(zone)),
K(min_wmark_pages(zone)),
K(low_wmark_pages(zone)),
K(high_wmark_pages(zone)),
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 7759941..c95a159 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -143,6 +143,9 @@ static void refresh_zone_stat_thresholds(void)
for_each_online_cpu(cpu)
per_cpu_ptr(zone->pageset, cpu)->stat_threshold
= threshold;
+
+ zone->percpu_drift_mark = high_wmark_pages(zone) +
+ num_online_cpus() * threshold;
}
}
@@ -813,7 +816,7 @@ static void zoneinfo_show_print(struct seq_file *m, pg_data_t *pgdat,
"\n scanned %lu"
"\n spanned %lu"
"\n present %lu",
- zone_page_state(zone, NR_FREE_PAGES),
+ zone_nr_free_pages(zone),
min_wmark_pages(zone),
low_wmark_pages(zone),
high_wmark_pages(zone),
--
1.7.1
* Re: [PATCH 2/3] mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake
2010-08-16 9:43 ` Mel Gorman
@ 2010-08-16 14:47 ` Rik van Riel
2010-08-16 16:06 ` Johannes Weiner
2010-08-19 15:46 ` Minchan Kim
2 siblings, 0 replies; 82+ messages in thread
From: Rik van Riel @ 2010-08-16 14:47 UTC (permalink / raw)
To: Mel Gorman
Cc: linux-mm, Nick Piggin, Johannes Weiner, KAMEZAWA Hiroyuki,
KOSAKI Motohiro
On 08/16/2010 05:43 AM, Mel Gorman wrote:
> On Mon, Aug 16, 2010 at 10:42:12AM +0100, Mel Gorman wrote:
>> Ordinarily watermark checks are made based on the vmstat NR_FREE_PAGES as
>> it is cheaper than scanning a number of lists. To avoid synchronization
>> overhead, counter deltas are maintained on a per-cpu basis and drained both
>> periodically and when the delta is above a threshold. On large CPU systems,
>> the difference between the estimated and real value of NR_FREE_PAGES can be
>> very high. If the system is under both load and low memory, it's possible
>> for watermarks to be breached. In extreme cases, the number of free pages
>> can drop to 0 leading to the possibility of system livelock.
>>
>> This patch introduces zone_nr_free_pages() to take a slightly more accurate
>> estimate of NR_FREE_PAGES while kswapd is awake. The estimate is not perfect
>> and may result in cache line bounces but is expected to be lighter than the
>> IPI calls necessary to continually drain the per-cpu counters while kswapd
>> is awake.
>>
>> Signed-off-by: Mel Gorman<mel@csn.ul.ie>
>
> And the second I sent this, I realised I had sent a slightly old version
> that missed a compile-fix :(
Acked-by: Rik van Riel <riel@redhat.com>
--
All rights reversed
* Re: [PATCH 2/3] mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake
2010-08-16 9:43 ` Mel Gorman
2010-08-16 14:47 ` Rik van Riel
@ 2010-08-16 16:06 ` Johannes Weiner
2010-08-17 2:26 ` Minchan Kim
2010-08-17 10:16 ` Mel Gorman
2010-08-19 15:46 ` Minchan Kim
2 siblings, 2 replies; 82+ messages in thread
From: Johannes Weiner @ 2010-08-16 16:06 UTC (permalink / raw)
To: Mel Gorman
Cc: linux-mm, Rik van Riel, Nick Piggin, KAMEZAWA Hiroyuki,
KOSAKI Motohiro
[npiggin@suse.de bounces, switched to yahoo address]
On Mon, Aug 16, 2010 at 10:43:50AM +0100, Mel Gorman wrote:
> On Mon, Aug 16, 2010 at 10:42:12AM +0100, Mel Gorman wrote:
> > Ordinarily watermark checks are made based on the vmstat NR_FREE_PAGES as
> > it is cheaper than scanning a number of lists. To avoid synchronization
> > overhead, counter deltas are maintained on a per-cpu basis and drained both
> > periodically and when the delta is above a threshold. On large CPU systems,
> > the difference between the estimated and real value of NR_FREE_PAGES can be
> > very high. If the system is under both load and low memory, it's possible
> > for watermarks to be breached. In extreme cases, the number of free pages
> > can drop to 0 leading to the possibility of system livelock.
> >
> > This patch introduces zone_nr_free_pages() to take a slightly more accurate
> > estimate of NR_FREE_PAGES while kswapd is awake. The estimate is not perfect
> > and may result in cache line bounces but is expected to be lighter than the
> > IPI calls necessary to continually drain the per-cpu counters while kswapd
> > is awake.
> >
> > Signed-off-by: Mel Gorman <mel@csn.ul.ie>
>
> And the second I sent this, I realised I had sent a slightly old version
> that missed a compile-fix :(
>
> ==== CUT HERE ====
> mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake
>
> Ordinarily watermark checks are made based on the vmstat NR_FREE_PAGES as
> it is cheaper than scanning a number of lists. To avoid synchronization
> overhead, counter deltas are maintained on a per-cpu basis and drained both
> periodically and when the delta is above a threshold. On large CPU systems,
> the difference between the estimated and real value of NR_FREE_PAGES can be
> very high. If the system is under both load and low memory, it's possible
> for watermarks to be breached. In extreme cases, the number of free pages
> can drop to 0 leading to the possibility of system livelock.
>
> This patch introduces zone_nr_free_pages() to take a slightly more accurate
> estimate of NR_FREE_PAGES while kswapd is awake. The estimate is not perfect
> and may result in cache line bounces but is expected to be lighter than the
> IPI calls necessary to continually drain the per-cpu counters while kswapd
> is awake.
>
> Signed-off-by: Mel Gorman <mel@csn.ul.ie>
[...]
> --- a/mm/mmzone.c
> +++ b/mm/mmzone.c
> @@ -87,3 +87,30 @@ int memmap_valid_within(unsigned long pfn,
> return 1;
> }
> #endif /* CONFIG_ARCH_HAS_HOLES_MEMORYMODEL */
> +
> +/* Called when a more accurate view of NR_FREE_PAGES is needed */
> +unsigned long zone_nr_free_pages(struct zone *zone)
> +{
> + unsigned long nr_free_pages = zone_page_state(zone, NR_FREE_PAGES);
> +
> + /*
> + * While kswapd is awake, it is considered the zone is under some
> + * memory pressure. Under pressure, there is a risk that
> + * er-cpu-counter-drift will allow the min watermark to be breached
Missing `p'.
> + * potentially causing a live-lock. While kswapd is awake and
> + * free pages are low, get a better estimate for free pages
> + */
> + if (nr_free_pages < zone->percpu_drift_mark &&
> + !waitqueue_active(&zone->zone_pgdat->kswapd_wait)) {
> + int cpu;
> +
> + for_each_online_cpu(cpu) {
> + struct per_cpu_pageset *pset;
> +
> + pset = per_cpu_ptr(zone->pageset, cpu);
> + nr_free_pages += pset->vm_stat_diff[NR_FREE_PAGES];
> + }
> + }
> +
> + return nr_free_pages;
> +}
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index c2407a4..67a2ed0 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1462,7 +1462,7 @@ int zone_watermark_ok(struct zone *z, int order, unsigned long mark,
> {
> /* free_pages my go negative - that's OK */
> long min = mark;
> - long free_pages = zone_page_state(z, NR_FREE_PAGES) - (1 << order) + 1;
> + long free_pages = zone_nr_free_pages(z) - (1 << order) + 1;
> int o;
>
> if (alloc_flags & ALLOC_HIGH)
> @@ -2413,7 +2413,7 @@ void show_free_areas(void)
> " all_unreclaimable? %s"
> "\n",
> zone->name,
> - K(zone_page_state(zone, NR_FREE_PAGES)),
> + K(zone_nr_free_pages(zone)),
> K(min_wmark_pages(zone)),
> K(low_wmark_pages(zone)),
> K(high_wmark_pages(zone)),
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index 7759941..c95a159 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -143,6 +143,9 @@ static void refresh_zone_stat_thresholds(void)
> for_each_online_cpu(cpu)
> per_cpu_ptr(zone->pageset, cpu)->stat_threshold
> = threshold;
> +
> + zone->percpu_drift_mark = high_wmark_pages(zone) +
> + num_online_cpus() * threshold;
> }
> }
Hm, this one I don't quite get (might be the jetlag, though): we have
_at least_ NR_FREE_PAGES free pages, there may just be more lurking in
the pcp counters.
So shouldn't we only collect the pcp deltas in case the high watermark
is breached? Above this point, we should be fine or better, no?
Hannes
* Re: [PATCH 2/3] mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake
2010-08-16 16:06 ` Johannes Weiner
@ 2010-08-17 2:26 ` Minchan Kim
2010-08-17 10:42 ` Mel Gorman
2010-08-17 10:16 ` Mel Gorman
1 sibling, 1 reply; 82+ messages in thread
From: Minchan Kim @ 2010-08-17 2:26 UTC (permalink / raw)
To: Johannes Weiner
Cc: Mel Gorman, linux-mm, Rik van Riel, Nick Piggin,
KAMEZAWA Hiroyuki, KOSAKI Motohiro
On Tue, Aug 17, 2010 at 1:06 AM, Johannes Weiner <hannes@cmpxchg.org> wrote:
> [npiggin@suse.de bounces, switched to yahoo address]
>
> On Mon, Aug 16, 2010 at 10:43:50AM +0100, Mel Gorman wrote:
<snip>
>> + * potentially causing a live-lock. While kswapd is awake and
>> + * free pages are low, get a better estimate for free pages
>> + */
>> + if (nr_free_pages < zone->percpu_drift_mark &&
>> + !waitqueue_active(&zone->zone_pgdat->kswapd_wait)) {
>> + int cpu;
>> +
>> + for_each_online_cpu(cpu) {
>> + struct per_cpu_pageset *pset;
>> +
>> + pset = per_cpu_ptr(zone->pageset, cpu);
>> + nr_free_pages += pset->vm_stat_diff[NR_FREE_PAGES];
We need to consider CONFIG_SMP.
>> + }
>> + }
>> +
>> + return nr_free_pages;
>> +}
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index c2407a4..67a2ed0 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -1462,7 +1462,7 @@ int zone_watermark_ok(struct zone *z, int order, unsigned long mark,
>> {
>> /* free_pages my go negative - that's OK */
>> long min = mark;
>> - long free_pages = zone_page_state(z, NR_FREE_PAGES) - (1 << order) + 1;
>> + long free_pages = zone_nr_free_pages(z) - (1 << order) + 1;
>> int o;
>>
>> if (alloc_flags & ALLOC_HIGH)
>> @@ -2413,7 +2413,7 @@ void show_free_areas(void)
>> " all_unreclaimable? %s"
>> "\n",
>> zone->name,
>> - K(zone_page_state(zone, NR_FREE_PAGES)),
>> + K(zone_nr_free_pages(zone)),
>> K(min_wmark_pages(zone)),
>> K(low_wmark_pages(zone)),
>> K(high_wmark_pages(zone)),
>> diff --git a/mm/vmstat.c b/mm/vmstat.c
>> index 7759941..c95a159 100644
>> --- a/mm/vmstat.c
>> +++ b/mm/vmstat.c
>> @@ -143,6 +143,9 @@ static void refresh_zone_stat_thresholds(void)
>> for_each_online_cpu(cpu)
>> per_cpu_ptr(zone->pageset, cpu)->stat_threshold
>> = threshold;
>> +
>> + zone->percpu_drift_mark = high_wmark_pages(zone) +
>> + num_online_cpus() * threshold;
>> }
>> }
>
> Hm, this one I don't quite get (might be the jetlag, though): we have
> _at least_ NR_FREE_PAGES free pages, there may just be more lurking in
We can't be sure of that.
As I said in a previous mail, the current allocation path decreases
NR_FREE_PAGES after it removes pages from the buddy list.
> the pcp counters.
>
> So shouldn't we only collect the pcp deltas in case the high watermark
> is breached? Above this point, we should be fine or better, no?
If we don't consider the allocation path, I agree with Hannes's opinion.
At least, we need to hear why Mel determined the threshold. :)
--
Kind regards,
Minchan Kim
* Re: [PATCH 2/3] mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake
2010-08-17 2:26 ` Minchan Kim
@ 2010-08-17 10:42 ` Mel Gorman
2010-08-17 15:01 ` Minchan Kim
0 siblings, 1 reply; 82+ messages in thread
From: Mel Gorman @ 2010-08-17 10:42 UTC (permalink / raw)
To: Minchan Kim
Cc: Johannes Weiner, linux-mm, Rik van Riel, Nick Piggin,
KAMEZAWA Hiroyuki, KOSAKI Motohiro
On Tue, Aug 17, 2010 at 11:26:05AM +0900, Minchan Kim wrote:
> On Tue, Aug 17, 2010 at 1:06 AM, Johannes Weiner <hannes@cmpxchg.org> wrote:
> > [npiggin@suse.de bounces, switched to yahoo address]
> >
> > On Mon, Aug 16, 2010 at 10:43:50AM +0100, Mel Gorman wrote:
>
> <snip>
>
> >> + * potentially causing a live-lock. While kswapd is awake and
> >> + * free pages are low, get a better estimate for free pages
> >> + */
> >> + if (nr_free_pages < zone->percpu_drift_mark &&
> >> + !waitqueue_active(&zone->zone_pgdat->kswapd_wait)) {
> >> + int cpu;
> >> +
> >> + for_each_online_cpu(cpu) {
> >> + struct per_cpu_pageset *pset;
> >> +
> >> + pset = per_cpu_ptr(zone->pageset, cpu);
> >> + nr_free_pages += pset->vm_stat_diff[NR_FREE_PAGES];
>
> We need to consider CONFIG_SMP.
>
We do.
#ifdef CONFIG_SMP
unsigned long zone_nr_free_pages(struct zone *zone);
#else
#define zone_nr_free_pages(zone) zone_page_state(zone, NR_FREE_PAGES)
#endif /* CONFIG_SMP */
and a wrapping of CONFIG_SMP around the function in mmzone.c .
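with mm/mmzone.c gaining roughly (sketching just the wrapping, the body as
posted in this patch):

	#ifdef CONFIG_SMP
	/* Called when a more accurate view of NR_FREE_PAGES is needed */
	unsigned long zone_nr_free_pages(struct zone *zone)
	{
		/* ... body as in this patch ... */
	}
	#endif /* CONFIG_SMP */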
> >> + }
> >> + }
> >> +
> >> + return nr_free_pages;
> >> +}
> >> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> >> index c2407a4..67a2ed0 100644
> >> --- a/mm/page_alloc.c
> >> +++ b/mm/page_alloc.c
> >> @@ -1462,7 +1462,7 @@ int zone_watermark_ok(struct zone *z, int order, unsigned long mark,
> >> {
> >> /* free_pages my go negative - that's OK */
> >> long min = mark;
> >> - long free_pages = zone_page_state(z, NR_FREE_PAGES) - (1 << order) + 1;
> >> + long free_pages = zone_nr_free_pages(z) - (1 << order) + 1;
> >> int o;
> >>
> >> if (alloc_flags & ALLOC_HIGH)
> >> @@ -2413,7 +2413,7 @@ void show_free_areas(void)
> >> " all_unreclaimable? %s"
> >> "\n",
> >> zone->name,
> >> - K(zone_page_state(zone, NR_FREE_PAGES)),
> >> + K(zone_nr_free_pages(zone)),
> >> K(min_wmark_pages(zone)),
> >> K(low_wmark_pages(zone)),
> >> K(high_wmark_pages(zone)),
> >> diff --git a/mm/vmstat.c b/mm/vmstat.c
> >> index 7759941..c95a159 100644
> >> --- a/mm/vmstat.c
> >> +++ b/mm/vmstat.c
> >> @@ -143,6 +143,9 @@ static void refresh_zone_stat_thresholds(void)
> >> for_each_online_cpu(cpu)
> >> per_cpu_ptr(zone->pageset, cpu)->stat_threshold
> >> = threshold;
> >> +
> >> + zone->percpu_drift_mark = high_wmark_pages(zone) +
> >> + num_online_cpus() * threshold;
> >> }
> >> }
> >
> > Hm, this one I don't quite get (might be the jetlag, though): we have
> > _at least_ NR_FREE_PAGES free pages, there may just be more lurking in
>
> We can't be sure of that.
> As I said in a previous mail, the current allocation path decreases
> NR_FREE_PAGES after it removes pages from the buddy list.
>
> > the pcp counters.
> >
> > So shouldn't we only collect the pcp deltas in case the high watermark
> > is breached? Above this point, we should be fine or better, no?
>
> If we don't consider the allocation path, I agree with Hannes's opinion.
> At least, we need to hear why Mel determined the threshold. :)
>
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
* Re: [PATCH 2/3] mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake
2010-08-17 10:42 ` Mel Gorman
@ 2010-08-17 15:01 ` Minchan Kim
2010-08-17 15:05 ` Mel Gorman
0 siblings, 1 reply; 82+ messages in thread
From: Minchan Kim @ 2010-08-17 15:01 UTC (permalink / raw)
To: Mel Gorman
Cc: Johannes Weiner, linux-mm, Rik van Riel, Nick Piggin,
KAMEZAWA Hiroyuki, KOSAKI Motohiro
On Tue, Aug 17, 2010 at 11:42:46AM +0100, Mel Gorman wrote:
> On Tue, Aug 17, 2010 at 11:26:05AM +0900, Minchan Kim wrote:
> > On Tue, Aug 17, 2010 at 1:06 AM, Johannes Weiner <hannes@cmpxchg.org> wrote:
> > > [npiggin@suse.de bounces, switched to yahoo address]
> > >
> > > On Mon, Aug 16, 2010 at 10:43:50AM +0100, Mel Gorman wrote:
> >
> > <snip>
> >
> > >> + * potentially causing a live-lock. While kswapd is awake and
> > >> + * free pages are low, get a better estimate for free pages
> > >> + */
> > >> + if (nr_free_pages < zone->percpu_drift_mark &&
> > >> + !waitqueue_active(&zone->zone_pgdat->kswapd_wait)) {
> > >> + int cpu;
> > >> +
> > >> + for_each_online_cpu(cpu) {
> > >> + struct per_cpu_pageset *pset;
> > >> +
> > >> + pset = per_cpu_ptr(zone->pageset, cpu);
> > >> + nr_free_pages += pset->vm_stat_diff[NR_FREE_PAGES];
> >
> > We need to consider CONFIG_SMP.
> >
>
> We do.
>
> #ifdef CONFIG_SMP
> unsigned long zone_nr_free_pages(struct zone *zone);
> #else
> #define zone_nr_free_pages(zone) zone_page_state(zone, NR_FREE_PAGES)
> #endif /* CONFIG_SMP */
>
> and a wrapping of CONFIG_SMP around the function in mmzone.c .
I can't find it in this patch series.
Hmm.. :(
--
Kind regards,
Minchan Kim
* Re: [PATCH 2/3] mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake
2010-08-17 15:01 ` Minchan Kim
@ 2010-08-17 15:05 ` Mel Gorman
0 siblings, 0 replies; 82+ messages in thread
From: Mel Gorman @ 2010-08-17 15:05 UTC (permalink / raw)
To: Minchan Kim
Cc: Johannes Weiner, linux-mm, Rik van Riel, Nick Piggin,
KAMEZAWA Hiroyuki, KOSAKI Motohiro
On Wed, Aug 18, 2010 at 12:01:44AM +0900, Minchan Kim wrote:
> On Tue, Aug 17, 2010 at 11:42:46AM +0100, Mel Gorman wrote:
> > On Tue, Aug 17, 2010 at 11:26:05AM +0900, Minchan Kim wrote:
> > > On Tue, Aug 17, 2010 at 1:06 AM, Johannes Weiner <hannes@cmpxchg.org> wrote:
> > > > [npiggin@suse.de bounces, switched to yahoo address]
> > > >
> > > > On Mon, Aug 16, 2010 at 10:43:50AM +0100, Mel Gorman wrote:
> > >
> > > <snip>
> > >
> > > >> + * potentially causing a live-lock. While kswapd is awake and
> > > >> + * free pages are low, get a better estimate for free pages
> > > >> + */
> > > >> + if (nr_free_pages < zone->percpu_drift_mark &&
> > > >> + !waitqueue_active(&zone->zone_pgdat->kswapd_wait)) {
> > > >> + int cpu;
> > > >> +
> > > >> + for_each_online_cpu(cpu) {
> > > >> + struct per_cpu_pageset *pset;
> > > >> +
> > > >> + pset = per_cpu_ptr(zone->pageset, cpu);
> > > >> + nr_free_pages += pset->vm_stat_diff[NR_FREE_PAGES];
> > >
> > > We need to consider CONFIG_SMP.
> > >
> >
> > We do.
> >
> > #ifdef CONFIG_SMP
> > unsigned long zone_nr_free_pages(struct zone *zone);
> > #else
> > #define zone_nr_free_pages(zone) zone_page_state(zone, NR_FREE_PAGES)
> > #endif /* CONFIG_SMP */
> >
> > and a wrapping of CONFIG_SMP around the function in mmzone.c .
>
> I can't find it in this patch series.
My bad. What I meant is "You're right, we do need to consider
CONFIG_SMP, how about something like the following";
I've made such a change to my local tree but it was not part of the
released series.
Thanks
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
* Re: [PATCH 2/3] mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake
2010-08-16 16:06 ` Johannes Weiner
2010-08-17 2:26 ` Minchan Kim
@ 2010-08-17 10:16 ` Mel Gorman
2010-08-17 11:05 ` Johannes Weiner
2010-08-17 14:20 ` Minchan Kim
1 sibling, 2 replies; 82+ messages in thread
From: Mel Gorman @ 2010-08-17 10:16 UTC (permalink / raw)
To: Johannes Weiner
Cc: linux-mm, Rik van Riel, Nick Piggin, KAMEZAWA Hiroyuki,
KOSAKI Motohiro
On Mon, Aug 16, 2010 at 06:06:23PM +0200, Johannes Weiner wrote:
> [npiggin@suse.de bounces, switched to yahoo address]
>
> On Mon, Aug 16, 2010 at 10:43:50AM +0100, Mel Gorman wrote:
> > On Mon, Aug 16, 2010 at 10:42:12AM +0100, Mel Gorman wrote:
> > > Ordinarily watermark checks are made based on the vmstat NR_FREE_PAGES as
> > > it is cheaper than scanning a number of lists. To avoid synchronization
> > > overhead, counter deltas are maintained on a per-cpu basis and drained both
> > > periodically and when the delta is above a threshold. On large CPU systems,
> > > the difference between the estimated and real value of NR_FREE_PAGES can be
> > > very high. If the system is under both load and low memory, it's possible
> > > for watermarks to be breached. In extreme cases, the number of free pages
> > > can drop to 0 leading to the possibility of system livelock.
> > >
> > > This patch introduces zone_nr_free_pages() to take a slightly more accurate
> > > estimate of NR_FREE_PAGES while kswapd is awake. The estimate is not perfect
> > > and may result in cache line bounces but is expected to be lighter than the
> > > IPI calls necessary to continually drain the per-cpu counters while kswapd
> > > is awake.
> > >
> > > Signed-off-by: Mel Gorman <mel@csn.ul.ie>
> >
> > And the second I sent this, I realised I had sent a slightly old version
> > that missed a compile-fix :(
> >
> > ==== CUT HERE ====
> > mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake
> >
> > Ordinarily watermark checks are made based on the vmstat NR_FREE_PAGES as
> > it is cheaper than scanning a number of lists. To avoid synchronization
> > overhead, counter deltas are maintained on a per-cpu basis and drained both
> > periodically and when the delta is above a threshold. On large CPU systems,
> > the difference between the estimated and real value of NR_FREE_PAGES can be
> > very high. If the system is under both load and low memory, it's possible
> > for watermarks to be breached. In extreme cases, the number of free pages
> > can drop to 0 leading to the possibility of system livelock.
> >
> > This patch introduces zone_nr_free_pages() to take a slightly more accurate
> > estimate of NR_FREE_PAGES while kswapd is awake. The estimate is not perfect
> > and may result in cache line bounces but is expected to be lighter than the
> > IPI calls necessary to continually drain the per-cpu counters while kswapd
> > is awake.
> >
> > Signed-off-by: Mel Gorman <mel@csn.ul.ie>
>
> [...]
>
> > --- a/mm/mmzone.c
> > +++ b/mm/mmzone.c
> > @@ -87,3 +87,30 @@ int memmap_valid_within(unsigned long pfn,
> > return 1;
> > }
> > #endif /* CONFIG_ARCH_HAS_HOLES_MEMORYMODEL */
> > +
> > +/* Called when a more accurate view of NR_FREE_PAGES is needed */
> > +unsigned long zone_nr_free_pages(struct zone *zone)
> > +{
> > + unsigned long nr_free_pages = zone_page_state(zone, NR_FREE_PAGES);
> > +
> > + /*
> > + * While kswapd is awake, it is considered the zone is under some
> > + * memory pressure. Under pressure, there is a risk that
> > + * er-cpu-counter-drift will allow the min watermark to be breached
>
> Missing `p'.
>
D'oh. Fixed
> > + * potentially causing a live-lock. While kswapd is awake and
> > + * free pages are low, get a better estimate for free pages
> > + */
> > + if (nr_free_pages < zone->percpu_drift_mark &&
> > + !waitqueue_active(&zone->zone_pgdat->kswapd_wait)) {
> > + int cpu;
> > +
> > + for_each_online_cpu(cpu) {
> > + struct per_cpu_pageset *pset;
> > +
> > + pset = per_cpu_ptr(zone->pageset, cpu);
> > + nr_free_pages += pset->vm_stat_diff[NR_FREE_PAGES];
> > + }
> > + }
> > +
> > + return nr_free_pages;
> > +}
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index c2407a4..67a2ed0 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -1462,7 +1462,7 @@ int zone_watermark_ok(struct zone *z, int order, unsigned long mark,
> > {
> > /* free_pages my go negative - that's OK */
> > long min = mark;
> > - long free_pages = zone_page_state(z, NR_FREE_PAGES) - (1 << order) + 1;
> > + long free_pages = zone_nr_free_pages(z) - (1 << order) + 1;
> > int o;
> >
> > if (alloc_flags & ALLOC_HIGH)
> > @@ -2413,7 +2413,7 @@ void show_free_areas(void)
> > " all_unreclaimable? %s"
> > "\n",
> > zone->name,
> > - K(zone_page_state(zone, NR_FREE_PAGES)),
> > + K(zone_nr_free_pages(zone)),
> > K(min_wmark_pages(zone)),
> > K(low_wmark_pages(zone)),
> > K(high_wmark_pages(zone)),
> > diff --git a/mm/vmstat.c b/mm/vmstat.c
> > index 7759941..c95a159 100644
> > --- a/mm/vmstat.c
> > +++ b/mm/vmstat.c
> > @@ -143,6 +143,9 @@ static void refresh_zone_stat_thresholds(void)
> > for_each_online_cpu(cpu)
> > per_cpu_ptr(zone->pageset, cpu)->stat_threshold
> > = threshold;
> > +
> > + zone->percpu_drift_mark = high_wmark_pages(zone) +
> > + num_online_cpus() * threshold;
> > }
> > }
>
> Hm, this one I don't quite get (might be the jetlag, though): we have
> _at least_ NR_FREE_PAGES free pages, there may just be more lurking in
> the pcp counters.
>
Well, the drift can be either direction because drift can be due to pages
being either freed or allocated. e.g. it could be something like
	NR_FREE_PAGES	CPU 0	CPU 1	Actual Free
	128		-32	+64	160
Because CPU 0 was allocating pages while CPU 1 was freeing them but that
is not what is important here. At any given time, the NR_FREE_PAGES can be
wrong by as much as
num_online_cpus * (threshold - 1)
As kswapd goes back to sleep when the high watermark is reached, it's important
that it has actually reached the watermark before sleeping. Similarly,
if an allocator is checking the low watermark, it needs an accurate count.
Hence a more careful accounting for NR_FREE_PAGES should happen when the
number of free pages is within
high_watermark + (num_online_cpus * (threshold - 1))
Only checking when kswapd is awake still leaves a window between the low
and min watermark when we could breach the watermark but I'm expecting it
can only happen for at worst one allocation. After that, kswapd wakes
and the count becomes accurate again.
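To make the bound concrete, here is a rough sketch (illustrative only, not
code from the patch) of the numbers involved:

	/*
	 * Illustrative: each online CPU can hold a signed delta of up to
	 * (threshold - 1) pages in either direction before it must be
	 * folded into the global counter.
	 */
	unsigned long max_drift = num_online_cpus() * (threshold - 1);

	/*
	 * Taking the accurate (but costly) count is only worthwhile while
	 * the estimate could plausibly be hiding a watermark breach.
	 */
	unsigned long drift_mark = high_wmark_pages(zone) + max_drift;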
> So shouldn't we only collect the pcp deltas in case the high watermark
> is breached? Above this point, we should be fine or better, no?
>
Is that not what is happening in zone_nr_free_pages with this check?
	/*
	 * While kswapd is awake, it is considered the zone is under some
	 * memory pressure. Under pressure, there is a risk that
	 * per-cpu-counter-drift will allow the min watermark to be breached
	 * potentially causing a live-lock. While kswapd is awake and
	 * free pages are low, get a better estimate for free pages
	 */
	if (nr_free_pages < zone->percpu_drift_mark &&
			!waitqueue_active(&zone->zone_pgdat->kswapd_wait)) {
Maybe I'm misunderstanding your question.
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
* Re: [PATCH 2/3] mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake
2010-08-17 10:16 ` Mel Gorman
@ 2010-08-17 11:05 ` Johannes Weiner
2010-08-17 14:20 ` Minchan Kim
1 sibling, 0 replies; 82+ messages in thread
From: Johannes Weiner @ 2010-08-17 11:05 UTC (permalink / raw)
To: Mel Gorman; +Cc: linux-mm, Rik van Riel, KAMEZAWA Hiroyuki, KOSAKI Motohiro
On Tue, Aug 17, 2010 at 11:16:55AM +0100, Mel Gorman wrote:
> On Mon, Aug 16, 2010 at 06:06:23PM +0200, Johannes Weiner wrote:
> > Hm, this one I don't quite get (might be the jetlag, though): we have
> > _at least_ NR_FREE_PAGES free pages, there may just be more lurking in
> > the pcp counters.
> >
>
> Well, the drift can be in either direction because drift can be due to pages
> being either freed or allocated, e.g. it could be something like
>
> NR_FREE_PAGES    CPU 0    CPU 1    Actual Free
>           128      -32      +64            160
>
> Because CPU 0 was allocating pages while CPU 1 was freeing them but that
> is not what is important here. At any given time, the NR_FREE_PAGES can be
> wrong by as much as
>
> num_online_cpus * (threshold - 1)
I somehow assumed the pcp cache could only be positive, but the
vm_stat_diff can indeed hold negative values.
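For anyone following along, the fold logic is roughly this (a simplified
sketch from memory, not the exact mm/vmstat.c code):

	void __mod_zone_page_state(struct zone *zone, enum zone_stat_item item,
					int delta)
	{
		struct per_cpu_pageset *pcp = this_cpu_ptr(zone->pageset);
		s8 *p = pcp->vm_stat_diff + item;
		long x = delta + *p;

		/* deltas are signed: allocations push negative, frees positive */
		if (unlikely(x > pcp->stat_threshold || x < -pcp->stat_threshold)) {
			zone_page_state_add(x, zone, item);	/* fold into zone counter */
			x = 0;
		}
		*p = x;
	}

So an allocation-heavy CPU can sit on a negative delta of almost a full
threshold before anything is folded back into NR_FREE_PAGES.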
> > So shouldn't we only collect the pcp deltas in case the high watermark
> > is breached? Above this point, we should be fine or better, no?
> >
>
> Is that not what is happening in zone_nr_free_pages with this check?
>
> 	/*
> 	 * While kswapd is awake, it is considered the zone is under some
> 	 * memory pressure. Under pressure, there is a risk that
> 	 * per-cpu-counter-drift will allow the min watermark to be breached
> 	 * potentially causing a live-lock. While kswapd is awake and
> 	 * free pages are low, get a better estimate for free pages
> 	 */
> 	if (nr_free_pages < zone->percpu_drift_mark &&
> 			!waitqueue_active(&zone->zone_pgdat->kswapd_wait)) {
>
> Maybe I'm misunderstanding your question.
This was just a conclusion based on my wrong assumption: if the pcp
diff could only be positive, it would be enough to go for accurate
counts at the point NR_FREE_PAGES breaches the watermark.
As it is, however, the error margin needs to be taken into account in
both directions, as you said, so your patch makes perfect sense.
Sorry for the noise! And
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
* Re: [PATCH 2/3] mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake
2010-08-17 10:16 ` Mel Gorman
2010-08-17 11:05 ` Johannes Weiner
@ 2010-08-17 14:20 ` Minchan Kim
2010-08-18 8:51 ` Mel Gorman
1 sibling, 1 reply; 82+ messages in thread
From: Minchan Kim @ 2010-08-17 14:20 UTC (permalink / raw)
To: Mel Gorman
Cc: Johannes Weiner, linux-mm, Rik van Riel, Nick Piggin,
KAMEZAWA Hiroyuki, KOSAKI Motohiro
On Tue, Aug 17, 2010 at 11:16:55AM +0100, Mel Gorman wrote:
> Well, the drift can be in either direction because drift can be due to pages
> being either freed or allocated, e.g. it could be something like
>
> NR_FREE_PAGES    CPU 0    CPU 1    Actual Free
>           128      -32      +64            160
>
> Because CPU 0 was allocating pages while CPU 1 was freeing them but that
> is not what is important here. At any given time, the NR_FREE_PAGES can be
> wrong by as much as
>
> num_online_cpus * (threshold - 1)
That's the answer I expected.
As I mentioned in a previous mail, we need to consider the allocation path.
But you have already partially considered it here.
Yes. It looks good to me. :)
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
>
> As kswapd goes back to sleep when the high watermark is reached, it's important
> that it has actually reached the watermark before sleeping. Similarly,
> if an allocator is checking the low watermark, it needs an accurate count.
> Hence a more careful accounting for NR_FREE_PAGES should happen when the
> number of free pages is within
>
> high_watermark + (num_online_cpus * (threshold - 1))
>
> Only checking when kswapd is awake still leaves a window between the low
> and min watermark when we could breach the watermark but I'm expecting it
> can only happen for at worst one allocation. After that, kswapd wakes
> and the count becomes accurate again.
I can't understand the point.
Now, kswapd starts when free pages fall below the low wmark and stops at the high wmark.
So if the VM is below the low wmark, it can always check via zone_nr_free_pages,
regardless of the min wmark.
What is this window between the low and min wmark? Maybe I missed your point.
--
Kind regards,
Minchan Kim
* Re: [PATCH 2/3] mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake
2010-08-17 14:20 ` Minchan Kim
@ 2010-08-18 8:51 ` Mel Gorman
2010-08-18 14:57 ` Minchan Kim
0 siblings, 1 reply; 82+ messages in thread
From: Mel Gorman @ 2010-08-18 8:51 UTC (permalink / raw)
To: Minchan Kim
Cc: Johannes Weiner, linux-mm, Rik van Riel, Nick Piggin,
KAMEZAWA Hiroyuki, KOSAKI Motohiro
On Tue, Aug 17, 2010 at 11:20:40PM +0900, Minchan Kim wrote:
> That's the answer I expected.
> As I mentioned in a previous mail, we need to consider the allocation path.
> But you have already partially considered it here.
> Yes. It looks good to me. :)
>
> Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
>
Thanks.
> I can't understand the point.
> Now, kswapd starts when free pages fall below the low wmark and stops at the high wmark.
Correct.
> So if the VM is below the low wmark, it can always check via zone_nr_free_pages,
> regardless of the min wmark.
>
The difficulty is that NR_FREE_PAGES is an estimate so for a time the VM may
not know it is below the low watermark. We can get a more accurate view but
it's costly so we want to avoid that cost whenever we can.
> What is this window between the low and min wmark? Maybe I missed your point.
>
The window exists because kswapd is not awake yet: kswapd might not have been
woken as NR_FREE_PAGES is higher than it should be. The system is really
somewhere between the low and min watermark but we are not taking the
accurate measure until kswapd gets woken up. The first allocation to notice
we are below the low watermark (be it due to vmstat refreshing or because
NR_FREE_PAGES happens to report we are below the watermark regardless of any
drift) wakes kswapd, and other callers then take an accurate count, hence
"we could breach the watermark but I'm expecting it can only happen for at
worst one allocation".
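As a condensed sketch of that first-allocation path (illustrative, not the
exact slow-path code):

	/*
	 * The first caller to see the zone below the low watermark wakes
	 * kswapd. Once kswapd is awake (kswapd_wait no longer active),
	 * zone_nr_free_pages() folds in the per-cpu deltas so later
	 * watermark checks use an accurate count.
	 */
	if (!zone_watermark_ok(zone, order, low_wmark_pages(zone),
				classzone_idx, alloc_flags))
		wakeup_kswapd(zone, order);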
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
* Re: [PATCH 2/3] mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake
2010-08-18 8:51 ` Mel Gorman
@ 2010-08-18 14:57 ` Minchan Kim
2010-08-19 8:06 ` Mel Gorman
0 siblings, 1 reply; 82+ messages in thread
From: Minchan Kim @ 2010-08-18 14:57 UTC (permalink / raw)
To: Mel Gorman
Cc: Johannes Weiner, linux-mm, Rik van Riel, Nick Piggin,
KAMEZAWA Hiroyuki, KOSAKI Motohiro
On Wed, Aug 18, 2010 at 09:51:23AM +0100, Mel Gorman wrote:
> > What is this window between the low and min wmark? Maybe I missed your point.
> >
>
> The window exists because kswapd is not awake yet: kswapd might not have been
> woken as NR_FREE_PAGES is higher than it should be. The system is really
> somewhere between the low and min watermark but we are not taking the
> accurate measure until kswapd gets woken up. The first allocation to notice
> we are below the low watermark (be it due to vmstat refreshing or because
> NR_FREE_PAGES happens to report we are below the watermark regardless of any
> drift) wakes kswapd, and other callers then take an accurate count, hence
> "we could breach the watermark but I'm expecting it can only happen for at
> worst one allocation".
Right. I misunderstood your words.
One more question.
Could you explain the livelock scenario?
I looked over the code. Although the VM may pass zone_watermark_ok by luck,
it can't allocate the page from the buddy lists and then might go OOM.
When do we meet the livelock case?
I think a description in the changelog would make this patch easier to
understand in the future.
Thanks.
--
Kind regards,
Minchan Kim
* Re: [PATCH 2/3] mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake
2010-08-18 14:57 ` Minchan Kim
@ 2010-08-19 8:06 ` Mel Gorman
2010-08-19 10:33 ` Minchan Kim
0 siblings, 1 reply; 82+ messages in thread
From: Mel Gorman @ 2010-08-19 8:06 UTC (permalink / raw)
To: Minchan Kim
Cc: Johannes Weiner, linux-mm, Rik van Riel, Nick Piggin,
KAMEZAWA Hiroyuki, KOSAKI Motohiro
On Wed, Aug 18, 2010 at 11:57:26PM +0900, Minchan Kim wrote:
> Right. I misunderstood your words.
> One more question.
>
> Could you explain the livelock scenario?
>
Let's say
NR_FREE_PAGES = 256
Actual free pages = 8
The PCP lists get refilled in batch, taking all 8 pages. Now there are
zero free pages. Reclaim kicks in, but to reclaim any pages it needs to
clean something, and all the pages are on a network-backed filesystem. To
clean them, it must transmit on the network, so it tries to allocate some
buffers.
The livelock is that to free some memory, an allocation must succeed, but
for an allocation to succeed, some memory must be freed. The system
might still remain alive if a process exits and does not need to
allocate memory while exiting, but by and large the system is in a
dangerous state.
> I looked over the code. Although the VM may pass zone_watermark_ok by luck,
> it can't allocate the page from the buddy lists and then might go OOM.
> When do we meet the livelock case?
>
> I think a description in the changelog would make this patch easier to
> understand in the future.
>
Is the above description useful? If so, I can put it in the leader.
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
* Re: [PATCH 2/3] mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake
2010-08-19 8:06 ` Mel Gorman
@ 2010-08-19 10:33 ` Minchan Kim
2010-08-19 10:38 ` Mel Gorman
0 siblings, 1 reply; 82+ messages in thread
From: Minchan Kim @ 2010-08-19 10:33 UTC (permalink / raw)
To: Mel Gorman
Cc: Johannes Weiner, linux-mm, Rik van Riel, Nick Piggin,
KAMEZAWA Hiroyuki, KOSAKI Motohiro
On Thu, Aug 19, 2010 at 5:06 PM, Mel Gorman <mel@csn.ul.ie> wrote:
> Let's say
>
> NR_FREE_PAGES = 256
> Actual free pages = 8
>
> The PCP lists get refilled in batch, taking all 8 pages. Now there are
> zero free pages. Reclaim kicks in, but to reclaim any pages it needs to
> clean something, and all the pages are on a network-backed filesystem. To
> clean them, it must transmit on the network, so it tries to allocate some
> buffers.
>
> The livelock is that to free some memory, an allocation must succeed, but
> for an allocation to succeed, some memory must be freed. The system
Yes. I understood this as a livelock, but in the end the VM will kill a victim
process and then it can allocate free pages.
So I think it's not a livelock.
> might still remain alive if a process exits and does not need to
> allocate memory while exiting, but by and large the system is in a
> dangerous state.
Do you mean the dangerous state of the system is a livelock?
Maybe not.
I can't understand the livelock in this context.
Anyway, I am okay with this patch except for the livelock phrase. :)
Thanks, Mel.
--
Kind regards,
Minchan Kim
* Re: [PATCH 2/3] mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake
2010-08-19 10:33 ` Minchan Kim
@ 2010-08-19 10:38 ` Mel Gorman
2010-08-19 14:01 ` Minchan Kim
0 siblings, 1 reply; 82+ messages in thread
From: Mel Gorman @ 2010-08-19 10:38 UTC (permalink / raw)
To: Minchan Kim
Cc: Johannes Weiner, linux-mm, Rik van Riel, Nick Piggin,
KAMEZAWA Hiroyuki, KOSAKI Motohiro
On Thu, Aug 19, 2010 at 07:33:57PM +0900, Minchan Kim wrote:
> On Thu, Aug 19, 2010 at 5:06 PM, Mel Gorman <mel@csn.ul.ie> wrote:
> > The livelock is that to free some memory, an allocation must succeed, but
> > for an allocation to succeed, some memory must be freed. The system
>
> Yes. I understood this as a livelock, but in the end the VM will kill a victim
> process and then it can allocate free pages.
And if the exit path for the OOM kill needs to allocate a page, what
should it do?
> So I think it's not a livelock.
>
> > might still remain alive if a process exits and does not need to
> > allocate memory while exiting, but by and large the system is in a
> > dangerous state.
>
> Do you mean the dangerous state of the system is a livelock?
> Maybe not.
> I can't understand the livelock in this context.
> Anyway, I am okay with this patch except for the livelock phrase. :)
>
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
* Re: [PATCH 2/3] mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake
2010-08-19 10:38 ` Mel Gorman
@ 2010-08-19 14:01 ` Minchan Kim
2010-08-19 14:09 ` Mel Gorman
0 siblings, 1 reply; 82+ messages in thread
From: Minchan Kim @ 2010-08-19 14:01 UTC (permalink / raw)
To: Mel Gorman
Cc: Johannes Weiner, linux-mm, Rik van Riel, Nick Piggin,
KAMEZAWA Hiroyuki, KOSAKI Motohiro
On Thu, Aug 19, 2010 at 11:38:39AM +0100, Mel Gorman wrote:
> On Thu, Aug 19, 2010 at 07:33:57PM +0900, Minchan Kim wrote:
> > Yes. I understood this as a livelock, but in the end the VM will kill a victim
> > process and then it can allocate free pages.
>
> And if the exit path for the OOM kill needs to allocate a page, what
> should it do?
Yeah. It might be a livelock.
Then, let's rethink the problem.
The problem is as follows.
1. Process A tries to allocate a page
2. The VM tries to reclaim pages for process A
3. The VM reclaims some pages but they remain on the PCP lists so pages can't be allocated for A
4. The VM tries to kill process B
5. The exit path needs new pages to exit process B
6. Livelock happens (I am not sure, but at least we need a warning if it really happens)
If the OOM killer kills process B successfully, there isn't a livelock problem.
So then how about this?
We need to retry the allocation of a new page, draining the free pages, just before going OOM.
It doesn't add any overhead before going OOM and it's not frequent.
Can't this patch handle your problem?
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1bb327a..113bea9 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2045,6 +2045,15 @@ rebalance:
 	 * running out of options and have to consider going OOM
 	 */
 	if (!did_some_progress) {
+
+		/* There are some free pages on the PCP lists */
+		drain_all_pages();
+		page = get_page_from_freelist(gfp_mask, nodemask, order, zonelist,
+				high_zoneidx, alloc_flags & ~ALLOC_NO_WATERMARKS,
+				preferred_zone, migratetype);
+		if (page)
+			goto got_pg;
+
 	if ((gfp_mask & __GFP_FS) && !(gfp_mask & __GFP_NORETRY)) {
 		if (oom_killer_disabled)
 			goto nopage;
--
Kind regards,
Minchan Kim
* Re: [PATCH 2/3] mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake
2010-08-19 14:01 ` Minchan Kim
@ 2010-08-19 14:09 ` Mel Gorman
2010-08-19 14:34 ` Minchan Kim
0 siblings, 1 reply; 82+ messages in thread
From: Mel Gorman @ 2010-08-19 14:09 UTC (permalink / raw)
To: Minchan Kim
Cc: Johannes Weiner, linux-mm, Rik van Riel, Nick Piggin,
KAMEZAWA Hiroyuki, KOSAKI Motohiro
On Thu, Aug 19, 2010 at 11:01:50PM +0900, Minchan Kim wrote:
> On Thu, Aug 19, 2010 at 11:38:39AM +0100, Mel Gorman wrote:
> > And if the exit path for the OOM kill needs to allocate a page, what
> > should it do?
>
> Yeah. It might be a livelock.
> Then, let's rethink the problem.
>
> The problem is as follows.
>
> 1. Process A tries to allocate a page
> 2. The VM tries to reclaim pages for process A
> 3. The VM reclaims some pages but they remain on the PCP lists so pages can't be allocated for A
> 4. The VM tries to kill process B
> 5. The exit path needs new pages to exit process B
> 6. Livelock happens (I am not sure, but at least we need a warning if it really happens)
>
The problem this patch is concerned with is the vmstat counters, not
the pages on the per-cpu lists. The issue being dealt with is that the page
allocator grants a page, going below the min watermark, because NR_FREE_PAGES
can be inaccurate. The patch aims to fix that by taking greater care
with NR_FREE_PAGES when memory is low.
> If the OOM killer kills process B successfully, there isn't a livelock problem.
> So then how about this?
>
> We need to retry the allocation of a new page, draining the free pages, just before going OOM.
> It doesn't add any overhead before going OOM and it's not frequent.
>
It's a different problem and it's what patch 3/3 of this series aims to
address.
> Can't this patch handle your problem?
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 1bb327a..113bea9 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2045,6 +2045,15 @@ rebalance:
>  	 * running out of options and have to consider going OOM
>  	 */
>  	if (!did_some_progress) {
> +
> +		/* There are some free pages on the PCP lists */
> +		drain_all_pages();
> +		page = get_page_from_freelist(gfp_mask, nodemask, order, zonelist,
> +				high_zoneidx, alloc_flags & ~ALLOC_NO_WATERMARKS,
> +				preferred_zone, migratetype);
> +		if (page)
> +			goto got_pg;
> +
>  	if ((gfp_mask & __GFP_FS) && !(gfp_mask & __GFP_NORETRY)) {
>  		if (oom_killer_disabled)
>  			goto nopage;
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
* Re: [PATCH 2/3] mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake
2010-08-19 14:09 ` Mel Gorman
@ 2010-08-19 14:34 ` Minchan Kim
2010-08-19 15:07 ` Mel Gorman
0 siblings, 1 reply; 82+ messages in thread
From: Minchan Kim @ 2010-08-19 14:34 UTC (permalink / raw)
To: Mel Gorman
Cc: Johannes Weiner, linux-mm, Rik van Riel, Nick Piggin,
KAMEZAWA Hiroyuki, KOSAKI Motohiro
On Thu, Aug 19, 2010 at 03:09:46PM +0100, Mel Gorman wrote:
> The problem this patch is concerned with is the vmstat counters, not
> the pages on the per-cpu lists. The issue being dealt with is that the page
> allocator grants a page, going below the min watermark, because NR_FREE_PAGES
> can be inaccurate. The patch aims to fix that by taking greater care
> with NR_FREE_PAGES when memory is low.
Your goal is to protect the reserved _min_ pages. Right?
I thought your final goal was to prevent the livelock problem.
Hmm.. Sorry for the noise. :(
--
Kind regards,
Minchan Kim
* Re: [PATCH 2/3] mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake
2010-08-19 14:34 ` Minchan Kim
@ 2010-08-19 15:07 ` Mel Gorman
2010-08-19 15:22 ` Minchan Kim
0 siblings, 1 reply; 82+ messages in thread
From: Mel Gorman @ 2010-08-19 15:07 UTC (permalink / raw)
To: Minchan Kim
Cc: Johannes Weiner, linux-mm, Rik van Riel, Nick Piggin,
KAMEZAWA Hiroyuki, KOSAKI Motohiro
On Thu, Aug 19, 2010 at 11:34:39PM +0900, Minchan Kim wrote:
> Your goal is to protect the reserved _min_ pages. Right?
> I thought your final goal was to prevent the livelock problem.
> Hmm.. Sorry for the noise. :(
>
Emm, it's the same thing. If the min watermark is not properly
preserved, the system is in danger of being live-locked.
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
* Re: [PATCH 2/3] mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake
2010-08-19 15:07 ` Mel Gorman
@ 2010-08-19 15:22 ` Minchan Kim
2010-08-19 15:40 ` Mel Gorman
0 siblings, 1 reply; 82+ messages in thread
From: Minchan Kim @ 2010-08-19 15:22 UTC (permalink / raw)
To: Mel Gorman
Cc: Johannes Weiner, linux-mm, Rik van Riel, Nick Piggin,
KAMEZAWA Hiroyuki, KOSAKI Motohiro
On Thu, Aug 19, 2010 at 04:07:39PM +0100, Mel Gorman wrote:
> On Thu, Aug 19, 2010 at 11:34:39PM +0900, Minchan Kim wrote:
> > Your goal is to protect the reserved _min_ pages. Right?
> > I thought your final goal was to prevent the livelock problem.
> > Hmm.. Sorry for the noise. :(
> >
>
> Emm, it's the same thing. If the min watermark is not properly
> preserved, the system is in danger of being live-locked.
Totally right.
Maybe I am sleeping.
Let's add the following as a comment about the livelock.
"If NR_FREE_PAGES is much higher than the number of real free pages in buddy,
the VM can allocate pages below the min watermark (at worst, buddy is empty).
Although the VM kills some victim to free memory, it can't do so if the
exit path requires a new page since buddy has zero pages. It can result in
livelock."
At least, it will keep a fool like me from hurting you again in the future.
Thanks, Mel.
--
Kind regards,
Minchan Kim
* Re: [PATCH 2/3] mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake
2010-08-19 15:22 ` Minchan Kim
@ 2010-08-19 15:40 ` Mel Gorman
2010-08-19 15:44 ` Minchan Kim
0 siblings, 1 reply; 82+ messages in thread
From: Mel Gorman @ 2010-08-19 15:40 UTC (permalink / raw)
To: Minchan Kim
Cc: Johannes Weiner, linux-mm, Rik van Riel, Nick Piggin,
KAMEZAWA Hiroyuki, KOSAKI Motohiro
On Fri, Aug 20, 2010 at 12:22:33AM +0900, Minchan Kim wrote:
> On Thu, Aug 19, 2010 at 04:07:39PM +0100, Mel Gorman wrote:
> > Emm, it's the same thing. If the min watermark is not properly
> > preserved, the system is in danger of being live-locked.
>
> Totally right.
> Maybe I am sleeping.
>
> Let's add the following as a comment about the livelock.
>
Sure!
> "If NR_FREE_PAGES is much higher than number of real free page in buddy,
> the VM can allocate pages below min watermark(At worst, buddy is zero).
> Although VM kills some victim for freeing memory, it can't do it if the
> exit path requires new page since buddy have zero page. It can result in
> livelock."
>
Thanks
> At least, it will keep a fool like me from hurting you again in the future.
>
The patch leader now reads as
Ordinarily watermark checks are based on the vmstat NR_FREE_PAGES as it is
cheaper than scanning a number of lists. To avoid synchronization overhead,
counter deltas are maintained on a per-cpu basis and drained both periodically
and when the delta is above a threshold. On large CPU systems, the difference
between the estimated and real value of NR_FREE_PAGES can be very high.
If NR_FREE_PAGES is much higher than the number of real free pages in buddy,
the VM can allocate pages below the min watermark, at worst reducing the real
number of free pages to zero. Even if the OOM killer kills some victim to free
memory, it may not free memory if the exit path requires a new page, resulting
in livelock.
This patch introduces zone_nr_free_pages() to take a slightly more accurate
estimate of NR_FREE_PAGES while kswapd is awake. The estimate is not perfect
and may result in cache line bounces but is expected to be lighter than the
IPI calls necessary to continually drain the per-cpu counters while kswapd
is awake.
Is that better?
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
* Re: [PATCH 2/3] mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake
2010-08-19 15:40 ` Mel Gorman
@ 2010-08-19 15:44 ` Minchan Kim
0 siblings, 0 replies; 82+ messages in thread
From: Minchan Kim @ 2010-08-19 15:44 UTC (permalink / raw)
To: Mel Gorman
Cc: Johannes Weiner, linux-mm, Rik van Riel, Nick Piggin,
KAMEZAWA Hiroyuki, KOSAKI Motohiro
On Thu, Aug 19, 2010 at 04:40:33PM +0100, Mel Gorman wrote:
> The patch leader now reads as
>
> Ordinarily watermark checks are based on the vmstat NR_FREE_PAGES as it is
> cheaper than scanning a number of lists. To avoid synchronization overhead,
> counter deltas are maintained on a per-cpu basis and drained both periodically
> and when the delta is above a threshold. On large CPU systems, the difference
> between the estimated and real value of NR_FREE_PAGES can be very high.
> If NR_FREE_PAGES is much higher than the number of real free pages in buddy,
> the VM can allocate pages below the min watermark, at worst reducing the real
> number of free pages to zero. Even if the OOM killer kills some victim to free
> memory, it may not free memory if the exit path requires a new page, resulting
> in livelock.
>
> This patch introduces zone_nr_free_pages() to take a slightly more accurate
> estimate of NR_FREE_PAGES while kswapd is awake. The estimate is not perfect
> and may result in cache line bounces but is expected to be lighter than the
> IPI calls necessary to continually drain the per-cpu counters while kswapd
> is awake.
>
> Is that better?
Good!
--
Kind regards,
Minchan Kim
* Re: [PATCH 2/3] mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake
2010-08-16 9:43 ` Mel Gorman
2010-08-16 14:47 ` Rik van Riel
2010-08-16 16:06 ` Johannes Weiner
@ 2010-08-19 15:46 ` Minchan Kim
2010-08-19 16:06 ` Mel Gorman
2 siblings, 1 reply; 82+ messages in thread
From: Minchan Kim @ 2010-08-19 15:46 UTC (permalink / raw)
To: Mel Gorman
Cc: linux-mm, Rik van Riel, Nick Piggin, Johannes Weiner,
KAMEZAWA Hiroyuki, KOSAKI Motohiro
On Mon, Aug 16, 2010 at 10:43:50AM +0100, Mel Gorman wrote:
> And the second I sent this, I realised I had sent a slightly old version
> that missed a compile-fix :(
>
> ==== CUT HERE ====
> mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake
>
> Ordinarily watermark checks are made based on the vmstat NR_FREE_PAGES as
> it is cheaper than scanning a number of lists. To avoid synchronization
> overhead, counter deltas are maintained on a per-cpu basis and drained both
> periodically and when the delta is above a threshold. On large CPU systems,
> the difference between the estimated and real value of NR_FREE_PAGES can be
> very high. If the system is under both load and low memory, it's possible
> for watermarks to be breached. In extreme cases, the number of free pages
> can drop to 0 leading to the possibility of system livelock.
Mel, could you consider normal (or small) systems that have at least two cores?
I mean, we apply your rule according to the number of CPUs and the RAM size
(ie, the threshold value).
Mobile systems are beginning to have two cores and above 1G of RAM.
In such a case, the threshold is 8.
A livelock is unlikely to happen there.
Is it worth having such overhead on such a system?
What do you think?
--
Kind regards,
Minchan Kim
* Re: [PATCH 2/3] mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake
2010-08-19 15:46 ` Minchan Kim
@ 2010-08-19 16:06 ` Mel Gorman
2010-08-19 16:45 ` Minchan Kim
0 siblings, 1 reply; 82+ messages in thread
From: Mel Gorman @ 2010-08-19 16:06 UTC (permalink / raw)
To: Minchan Kim
Cc: linux-mm, Rik van Riel, Nick Piggin, Johannes Weiner,
KAMEZAWA Hiroyuki, KOSAKI Motohiro
On Fri, Aug 20, 2010 at 12:46:38AM +0900, Minchan Kim wrote:
> Mel, could you consider a normal (or small) system that has at least two cores?
I did consider it, but I was not keen on the idea of small systems behaving
very differently to large systems in this regard. I thought there was a
danger that a real problem would be hidden by such a move.
> I mean, we could apply your rule according to the number of CPUs and the
> RAM size (i.e. the threshold value).
> Mobile systems are now beginning to ship with two cores and more than 1G
> of RAM. In such a case, the threshold is 8, and a livelock is unlikely
> to happen.
> Is it worth having this overhead on such a system?
> What do you think?
>
Such overhead could be avoided if we made a check like the following in
refresh_zone_stat_thresholds()
	/*
	 * Only set percpu_drift_mark if there is a danger that
	 * NR_FREE_PAGES reports the low watermark is ok when in fact
	 * the min watermark could be breached by an allocation
	 */
	tolerate_drift = low_wmark_pages(zone) - min_wmark_pages(zone);
	max_drift = num_online_cpus() * threshold;
	if (max_drift > tolerate_drift)
		zone->percpu_drift_mark = high_wmark_pages(zone)
						+ max_drift;
Would this be preferable?
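(As a worked example with the numbers quoted above: on a two-core system
with a per-cpu threshold of 8, max_drift would be only 16 pages, while the
gap between the low and min watermarks on a 1G zone is typically on the
order of a few hundred pages with default tunings, so percpu_drift_mark
would remain unset and the extra work would be skipped entirely.)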
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
* Re: [PATCH 2/3] mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake
2010-08-19 16:06 ` Mel Gorman
@ 2010-08-19 16:45 ` Minchan Kim
0 siblings, 0 replies; 82+ messages in thread
From: Minchan Kim @ 2010-08-19 16:45 UTC (permalink / raw)
To: Mel Gorman
Cc: linux-mm, Rik van Riel, Nick Piggin, Johannes Weiner,
KAMEZAWA Hiroyuki, KOSAKI Motohiro
On Thu, Aug 19, 2010 at 05:06:12PM +0100, Mel Gorman wrote:
> On Fri, Aug 20, 2010 at 12:46:38AM +0900, Minchan Kim wrote:
> Mel, could you consider a normal (or small) system that has at least two cores?
>
> I did consider it, but I was not keen on the idea of small systems behaving
> very differently to large systems in this regard. I thought there was a
> danger that a real problem would be hidden by such a move.
>
> > I mean, we could apply your rule according to the number of CPUs and the
> > RAM size (i.e. the threshold value).
> > Mobile systems are now beginning to ship with two cores and more than 1G
> > of RAM. In such a case, the threshold is 8, and a livelock is unlikely
> > to happen.
> > Is it worth having this overhead on such a system?
> > What do you think?
> >
>
> Such overhead could be avoided if we made a check like the following in
> refresh_zone_stat_thresholds()
>
> 	/*
> 	 * Only set percpu_drift_mark if there is a danger that
> 	 * NR_FREE_PAGES reports the low watermark is ok when in fact
> 	 * the min watermark could be breached by an allocation
> 	 */
> 	tolerate_drift = low_wmark_pages(zone) - min_wmark_pages(zone);
> 	max_drift = num_online_cpus() * threshold;
> 	if (max_drift > tolerate_drift)
> 		zone->percpu_drift_mark = high_wmark_pages(zone)
> 						+ max_drift;
>
> Would this be preferable?
Yes. It looks good to me.
>
> --
> Mel Gorman
> Part-time Phd Student Linux Technology Center
> University of Limerick IBM Dublin Software Lab
--
Kind regards,
Minchan Kim
* Re: [PATCH 2/3] mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake
2010-08-16 9:42 ` [PATCH 2/3] mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake Mel Gorman
2010-08-16 9:43 ` Mel Gorman
@ 2010-08-18 2:59 ` KAMEZAWA Hiroyuki
2010-08-18 15:55 ` Christoph Lameter
1 sibling, 1 reply; 82+ messages in thread
From: KAMEZAWA Hiroyuki @ 2010-08-18 2:59 UTC (permalink / raw)
To: Mel Gorman
Cc: linux-mm, Rik van Riel, Nick Piggin, Johannes Weiner,
KOSAKI Motohiro, cl@linux-foundation.org
On Mon, 16 Aug 2010 10:42:12 +0100
Mel Gorman <mel@csn.ul.ie> wrote:
> Ordinarily watermark checks are made based on the vmstat NR_FREE_PAGES as
> it is cheaper than scanning a number of lists. To avoid synchronization
> overhead, counter deltas are maintained on a per-cpu basis and drained both
> periodically and when the delta is above a threshold. On large CPU systems,
> the difference between the estimated and real value of NR_FREE_PAGES can be
> very high. If the system is under both load and low memory, it's possible
> for watermarks to be breached. In extreme cases, the number of free pages
> can drop to 0 leading to the possibility of system livelock.
>
> This patch introduces zone_nr_free_pages() to take a slightly more accurate
> estimate of NR_FREE_PAGES while kswapd is awake. The estimate is not perfect
> and may result in cache line bounces but is expected to be lighter than the
> IPI calls necessary to continually drain the per-cpu counters while kswapd
> is awake.
>
> Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
BTW, a nitpick.
> @@ -143,6 +143,9 @@ static void refresh_zone_stat_thresholds(void)
> for_each_online_cpu(cpu)
> per_cpu_ptr(zone->pageset, cpu)->stat_threshold
> = threshold;
> +
> + zone->percpu_drift_mark = high_wmark_pages(zone) +
> + num_online_cpus() * threshold;
> }
> }
This function is now called only at CPU_DEAD. IOW, not called at CPU_UP_PREPARE
That change was made by this patch... but the reason is unclear to me.
==
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=d1187ed21026fd512b87851d0ca26d9ae16f9059
==
Christoph ?
Thanks,
-Kame
* Re: [PATCH 2/3] mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake
2010-08-18 2:59 ` KAMEZAWA Hiroyuki
@ 2010-08-18 15:55 ` Christoph Lameter
2010-08-19 0:07 ` KAMEZAWA Hiroyuki
0 siblings, 1 reply; 82+ messages in thread
From: Christoph Lameter @ 2010-08-18 15:55 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki
Cc: Mel Gorman, linux-mm, Rik van Riel, Nick Piggin, Johannes Weiner,
KOSAKI Motohiro
On Wed, 18 Aug 2010, KAMEZAWA Hiroyuki wrote:
> BTW, a nitpick.
>
> > @@ -143,6 +143,9 @@ static void refresh_zone_stat_thresholds(void)
> > for_each_online_cpu(cpu)
> > per_cpu_ptr(zone->pageset, cpu)->stat_threshold
> > = threshold;
> > +
> > + zone->percpu_drift_mark = high_wmark_pages(zone) +
> > + num_online_cpus() * threshold;
> > }
> > }
>
> This function is now called only at CPU_DEAD. IOW, not called at CPU_UP_PREPARE
calculate_threshold() does its calculation based on the number of online
cpus. Therefore the threshold may change if a cpu is brought down.
* Re: [PATCH 2/3] mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake
2010-08-18 15:55 ` Christoph Lameter
@ 2010-08-19 0:07 ` KAMEZAWA Hiroyuki
2010-08-19 19:00 ` Christoph Lameter
0 siblings, 1 reply; 82+ messages in thread
From: KAMEZAWA Hiroyuki @ 2010-08-19 0:07 UTC (permalink / raw)
To: Christoph Lameter
Cc: Mel Gorman, linux-mm, Rik van Riel, Nick Piggin, Johannes Weiner,
KOSAKI Motohiro
On Wed, 18 Aug 2010 10:55:53 -0500 (CDT)
Christoph Lameter <cl@linux-foundation.org> wrote:
> On Wed, 18 Aug 2010, KAMEZAWA Hiroyuki wrote:
>
> > BTW, a nitpick.
> >
> > > @@ -143,6 +143,9 @@ static void refresh_zone_stat_thresholds(void)
> > > for_each_online_cpu(cpu)
> > > per_cpu_ptr(zone->pageset, cpu)->stat_threshold
> > > = threshold;
> > > +
> > > + zone->percpu_drift_mark = high_wmark_pages(zone) +
> > > + num_online_cpus() * threshold;
> > > }
> > > }
> >
> > This function is now called only at CPU_DEAD. IOW, not called at CPU_UP_PREPARE
>
> calculate_threshold() does its calculation based on the number of online
> cpus. Therefore the threshold may change if a cpu is brought down.
>
Yes, but why not calculate it when bringing a cpu up?
Thanks,
-Kame
* Re: [PATCH 2/3] mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake
2010-08-19 0:07 ` KAMEZAWA Hiroyuki
@ 2010-08-19 19:00 ` Christoph Lameter
2010-08-19 23:49 ` KAMEZAWA Hiroyuki
0 siblings, 1 reply; 82+ messages in thread
From: Christoph Lameter @ 2010-08-19 19:00 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki
Cc: Mel Gorman, linux-mm, Rik van Riel, Nick Piggin, Johannes Weiner,
KOSAKI Motohiro
On Thu, 19 Aug 2010, KAMEZAWA Hiroyuki wrote:
> > > This function is now called only at CPU_DEAD. IOW, not called at CPU_UP_PREPARE
> >
> > calculate_threshold() does its calculation based on the number of online
> > cpus. Therefore the threshold may change if a cpu is brought down.
> >
> Yes, but why not calculate it when bringing a cpu up?
True. Seems to have gone missing somehow.
* Re: [PATCH 2/3] mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake
2010-08-19 19:00 ` Christoph Lameter
@ 2010-08-19 23:49 ` KAMEZAWA Hiroyuki
2010-08-20 0:22 ` [PATCH] vmstat : update zone stat threshold at onlining a cpu KAMEZAWA Hiroyuki
0 siblings, 1 reply; 82+ messages in thread
From: KAMEZAWA Hiroyuki @ 2010-08-19 23:49 UTC (permalink / raw)
To: Christoph Lameter
Cc: Mel Gorman, linux-mm, Rik van Riel, Nick Piggin, Johannes Weiner,
KOSAKI Motohiro
On Thu, 19 Aug 2010 14:00:44 -0500 (CDT)
Christoph Lameter <cl@linux-foundation.org> wrote:
> On Thu, 19 Aug 2010, KAMEZAWA Hiroyuki wrote:
>
> > > > This function is now called only at CPU_DEAD. IOW, not called at CPU_UP_PREPARE
> > >
> > > calculate_threshold() does its calculation based on the number of online
> > > cpus. Therefore the threshold may change if a cpu is brought down.
> > >
> > Yes, but why not calculate it when bringing a cpu up?
>
> True. Seems to have gone missing somehow.
>
ok, thank you for checking. I'll prepare a patch.
-Kame
* [PATCH] vmstat : update zone stat threshold at onlining a cpu
2010-08-19 23:49 ` KAMEZAWA Hiroyuki
@ 2010-08-20 0:22 ` KAMEZAWA Hiroyuki
2010-08-20 14:54 ` Christoph Lameter
2010-08-23 7:18 ` Mel Gorman
0 siblings, 2 replies; 82+ messages in thread
From: KAMEZAWA Hiroyuki @ 2010-08-20 0:22 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki
Cc: Christoph Lameter, Mel Gorman, linux-mm,
akpm@linux-foundation.org
refresh_zone_stat_thresholds() calculates its parameters based on
the number of online cpus. It's called at cpu offlining but
needs to be called at onlining, too.
Cc: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
---
mm/vmstat.c | 1 +
1 file changed, 1 insertion(+)
Index: mmotm-0811/mm/vmstat.c
===================================================================
--- mmotm-0811.orig/mm/vmstat.c
+++ mmotm-0811/mm/vmstat.c
@@ -998,6 +998,7 @@ static int __cpuinit vmstat_cpuup_callba
switch (action) {
case CPU_ONLINE:
case CPU_ONLINE_FROZEN:
+ refresh_zone_stat_thresholds();
start_cpu_timer(cpu);
node_set_state(cpu_to_node(cpu), N_CPU);
break;
* Re: [PATCH] vmstat : update zone stat threshold at onlining a cpu
2010-08-20 0:22 ` [PATCH] vmstat : update zone stat threshold at onlining a cpu KAMEZAWA Hiroyuki
@ 2010-08-20 14:54 ` Christoph Lameter
2010-08-20 17:29 ` Andrew Morton
2010-08-23 7:18 ` Mel Gorman
1 sibling, 1 reply; 82+ messages in thread
From: Christoph Lameter @ 2010-08-20 14:54 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki; +Cc: Mel Gorman, linux-mm, akpm@linux-foundation.org
On Fri, 20 Aug 2010, KAMEZAWA Hiroyuki wrote:
> 1 file changed, 1 insertion(+)
>
> Index: mmotm-0811/mm/vmstat.c
> ===================================================================
> --- mmotm-0811.orig/mm/vmstat.c
> +++ mmotm-0811/mm/vmstat.c
> @@ -998,6 +998,7 @@ static int __cpuinit vmstat_cpuup_callba
> switch (action) {
> case CPU_ONLINE:
> case CPU_ONLINE_FROZEN:
> + refresh_zone_stat_thresholds();
> start_cpu_timer(cpu);
> node_set_state(cpu_to_node(cpu), N_CPU);
> break;
refresh_zone_stat_thresholds() must be run *after* the number of online cpus
has been incremented. Does that occur before the callback?
* Re: [PATCH] vmstat : update zone stat threshold at onlining a cpu
2010-08-20 14:54 ` Christoph Lameter
@ 2010-08-20 17:29 ` Andrew Morton
0 siblings, 0 replies; 82+ messages in thread
From: Andrew Morton @ 2010-08-20 17:29 UTC (permalink / raw)
To: Christoph Lameter; +Cc: KAMEZAWA Hiroyuki, Mel Gorman, linux-mm
On Fri, 20 Aug 2010 09:54:56 -0500 (CDT) Christoph Lameter <cl@linux-foundation.org> wrote:
> On Fri, 20 Aug 2010, KAMEZAWA Hiroyuki wrote:
>
> > 1 file changed, 1 insertion(+)
> >
> > Index: mmotm-0811/mm/vmstat.c
> > ===================================================================
> > --- mmotm-0811.orig/mm/vmstat.c
> > +++ mmotm-0811/mm/vmstat.c
> > @@ -998,6 +998,7 @@ static int __cpuinit vmstat_cpuup_callba
> > switch (action) {
> > case CPU_ONLINE:
> > case CPU_ONLINE_FROZEN:
> > + refresh_zone_stat_thresholds();
> > start_cpu_timer(cpu);
> > node_set_state(cpu_to_node(cpu), N_CPU);
> > break;
>
> refresh_zone_stat_thresholds() must be run *after* the number of online cpus
> has been incremented. Does that occur before the callback?
It does. _cpu_up() calls __cpu_up() before calling cpu_notify().
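For reference, the relevant ordering in the 2.6.36-era kernel/cpu.c looks
roughly like the following; this is an abridged sketch, so treat the exact
shape as an assumption:

	/* _cpu_up(), abridged: the cpu goes online before notifiers run */
	ret = __cpu_up(cpu);		/* sets the cpu in cpu_online_mask */
	if (ret != 0)
		goto out_notify;

	/* so CPU_ONLINE callbacks see the incremented online cpu count */
	cpu_notify(CPU_ONLINE | mod, hcpu);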
* Re: [PATCH] vmstat : update zone stat threshold at onlining a cpu
2010-08-20 0:22 ` [PATCH] vmstat : update zone stat threshold at onlining a cpu KAMEZAWA Hiroyuki
2010-08-20 14:54 ` Christoph Lameter
@ 2010-08-23 7:18 ` Mel Gorman
1 sibling, 0 replies; 82+ messages in thread
From: Mel Gorman @ 2010-08-23 7:18 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki; +Cc: Christoph Lameter, linux-mm, akpm@linux-foundation.org
On Fri, Aug 20, 2010 at 09:22:51AM +0900, KAMEZAWA Hiroyuki wrote:
>
> refresh_zone_stat_thresholds() calculates its parameters based on
> the number of online cpus. It's called at cpu offlining but
> needs to be called at onlining, too.
>
> Cc: Christoph Lameter <cl@linux-foundation.org>
> Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Thanks
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
* [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails
2010-08-16 9:42 [RFC PATCH 0/3] Reduce watermark-related problems with the per-cpu allocator Mel Gorman
2010-08-16 9:42 ` [PATCH 1/3] mm: page allocator: Update free page counters after pages are placed on the free list Mel Gorman
2010-08-16 9:42 ` [PATCH 2/3] mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake Mel Gorman
@ 2010-08-16 9:42 ` Mel Gorman
2010-08-16 14:50 ` Rik van Riel
` (3 more replies)
2 siblings, 4 replies; 82+ messages in thread
From: Mel Gorman @ 2010-08-16 9:42 UTC (permalink / raw)
To: linux-mm
Cc: Rik van Riel, Nick Piggin, Johannes Weiner, KAMEZAWA Hiroyuki,
KOSAKI Motohiro, Mel Gorman
When under significant memory pressure, a process enters direct reclaim
and immediately afterwards tries to allocate a page. If it fails and no
further progress is made, it's possible the system will go OOM. However,
on systems with large amounts of memory, it's possible that a significant
number of pages are on per-cpu lists and inaccessible to the calling
process. This leads to a process entering direct reclaim more often than
it should increasing the pressure on the system and compounding the problem.
This patch notes that if direct reclaim is making progress but
allocations are still failing that the system is already under heavy
pressure. In this case, it drains the per-cpu lists and tries the
allocation a second time before continuing.
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
---
mm/page_alloc.c | 19 +++++++++++++++++--
1 files changed, 17 insertions(+), 2 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 67a2ed0..a8651a4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1844,6 +1844,7 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
struct page *page = NULL;
struct reclaim_state reclaim_state;
struct task_struct *p = current;
+ bool drained = false;
cond_resched();
@@ -1865,11 +1866,25 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
if (order != 0)
drain_all_pages();
- if (likely(*did_some_progress))
- page = get_page_from_freelist(gfp_mask, nodemask, order,
+ if (unlikely(!(*did_some_progress)))
+ return NULL;
+
+retry:
+ page = get_page_from_freelist(gfp_mask, nodemask, order,
zonelist, high_zoneidx,
alloc_flags, preferred_zone,
migratetype);
+
+ /*
+ * If an allocation failed after direct reclaim, it could be because
+ * pages are pinned on the per-cpu lists. Drain them and try again
+ */
+ if (!page && !drained) {
+ drain_all_pages();
+ drained = true;
+ goto retry;
+ }
+
return page;
}
--
1.7.1
* Re: [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails
2010-08-16 9:42 ` [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails Mel Gorman
@ 2010-08-16 14:50 ` Rik van Riel
2010-08-17 2:57 ` Minchan Kim
` (2 subsequent siblings)
3 siblings, 0 replies; 82+ messages in thread
From: Rik van Riel @ 2010-08-16 14:50 UTC (permalink / raw)
To: Mel Gorman
Cc: linux-mm, Nick Piggin, Johannes Weiner, KAMEZAWA Hiroyuki,
KOSAKI Motohiro
On 08/16/2010 05:42 AM, Mel Gorman wrote:
> When under significant memory pressure, a process enters direct reclaim
> and immediately afterwards tries to allocate a page. If it fails and no
> further progress is made, it's possible the system will go OOM. However,
> on systems with large amounts of memory, it's possible that a significant
> number of pages are on per-cpu lists and inaccessible to the calling
> process. This leads to a process entering direct reclaim more often than
> it should increasing the pressure on the system and compounding the problem.
>
> This patch notes that if direct reclaim is making progress but
> allocations are still failing that the system is already under heavy
> pressure. In this case, it drains the per-cpu lists and tries the
> allocation a second time before continuing.
>
> Signed-off-by: Mel Gorman<mel@csn.ul.ie>
Reviewed-by: Rik van Riel <riel@redhat.com>
--
All rights reversed
* Re: [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails
2010-08-16 9:42 ` [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails Mel Gorman
2010-08-16 14:50 ` Rik van Riel
@ 2010-08-17 2:57 ` Minchan Kim
2010-08-18 3:02 ` KAMEZAWA Hiroyuki
2010-08-19 14:47 ` Minchan Kim
3 siblings, 0 replies; 82+ messages in thread
From: Minchan Kim @ 2010-08-17 2:57 UTC (permalink / raw)
To: Mel Gorman
Cc: linux-mm, Rik van Riel, Nick Piggin, Johannes Weiner,
KAMEZAWA Hiroyuki, KOSAKI Motohiro
On Mon, Aug 16, 2010 at 6:42 PM, Mel Gorman <mel@csn.ul.ie> wrote:
> When under significant memory pressure, a process enters direct reclaim
> and immediately afterwards tries to allocate a page. If it fails and no
> further progress is made, it's possible the system will go OOM. However,
> on systems with large amounts of memory, it's possible that a significant
> number of pages are on per-cpu lists and inaccessible to the calling
> process. This leads to a process entering direct reclaim more often than
> it should increasing the pressure on the system and compounding the problem.
>
> This patch notes that if direct reclaim is making progress but
> allocations are still failing that the system is already under heavy
> pressure. In this case, it drains the per-cpu lists and tries the
> allocation a second time before continuing.
>
> Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
The IPI overhead is acceptable compared to going OOM or failing the
allocation. In addition, this isn't a hot path or a frequent case.
--
Kind regards,
Minchan Kim
* Re: [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails
2010-08-16 9:42 ` [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails Mel Gorman
2010-08-16 14:50 ` Rik van Riel
2010-08-17 2:57 ` Minchan Kim
@ 2010-08-18 3:02 ` KAMEZAWA Hiroyuki
2010-08-19 14:47 ` Minchan Kim
3 siblings, 0 replies; 82+ messages in thread
From: KAMEZAWA Hiroyuki @ 2010-08-18 3:02 UTC (permalink / raw)
To: Mel Gorman
Cc: linux-mm, Rik van Riel, Nick Piggin, Johannes Weiner,
KOSAKI Motohiro
On Mon, 16 Aug 2010 10:42:13 +0100
Mel Gorman <mel@csn.ul.ie> wrote:
> When under significant memory pressure, a process enters direct reclaim
> and immediately afterwards tries to allocate a page. If it fails and no
> further progress is made, it's possible the system will go OOM. However,
> on systems with large amounts of memory, it's possible that a significant
> number of pages are on per-cpu lists and inaccessible to the calling
> process. This leads to a process entering direct reclaim more often than
> it should increasing the pressure on the system and compounding the problem.
>
> This patch notes that if direct reclaim is making progress but
> allocations are still failing that the system is already under heavy
> pressure. In this case, it drains the per-cpu lists and tries the
> allocation a second time before continuing.
>
> Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
* Re: [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails
2010-08-16 9:42 ` [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails Mel Gorman
` (2 preceding siblings ...)
2010-08-18 3:02 ` KAMEZAWA Hiroyuki
@ 2010-08-19 14:47 ` Minchan Kim
2010-08-19 15:10 ` Mel Gorman
3 siblings, 1 reply; 82+ messages in thread
From: Minchan Kim @ 2010-08-19 14:47 UTC (permalink / raw)
To: Mel Gorman
Cc: linux-mm, Rik van Riel, Nick Piggin, Johannes Weiner,
KAMEZAWA Hiroyuki, KOSAKI Motohiro
On Mon, Aug 16, 2010 at 10:42:13AM +0100, Mel Gorman wrote:
> When under significant memory pressure, a process enters direct reclaim
> and immediately afterwards tries to allocate a page. If it fails and no
> further progress is made, it's possible the system will go OOM. However,
> on systems with large amounts of memory, it's possible that a significant
> number of pages are on per-cpu lists and inaccessible to the calling
> process. This leads to a process entering direct reclaim more often than
> it should increasing the pressure on the system and compounding the problem.
>
> This patch notes that if direct reclaim is making progress but
> allocations are still failing that the system is already under heavy
> pressure. In this case, it drains the per-cpu lists and tries the
> allocation a second time before continuing.
>
> Signed-off-by: Mel Gorman <mel@csn.ul.ie>
> ---
> mm/page_alloc.c | 19 +++++++++++++++++--
> 1 files changed, 17 insertions(+), 2 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 67a2ed0..a8651a4 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1844,6 +1844,7 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
> struct page *page = NULL;
> struct reclaim_state reclaim_state;
> struct task_struct *p = current;
> + bool drained = false;
>
> cond_resched();
>
> @@ -1865,11 +1866,25 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
> if (order != 0)
> drain_all_pages();
>
Nitpick:
How about removing the condition above and its drain_all_pages() call?
If get_page_from_freelist() fails, we call drain_all_pages() at the end
anyway, which removes the double call of drain_all_pages() in the
order > 0 case. In addition, if the VM can't reclaim anything, we don't
need to drain at all for order > 0.
> - if (likely(*did_some_progress))
> - page = get_page_from_freelist(gfp_mask, nodemask, order,
> + if (unlikely(!(*did_some_progress)))
> + return NULL;
> +
> +retry:
> + page = get_page_from_freelist(gfp_mask, nodemask, order,
> zonelist, high_zoneidx,
> alloc_flags, preferred_zone,
> migratetype);
> +
> + /*
> + * If an allocation failed after direct reclaim, it could be because
> + * pages are pinned on the per-cpu lists. Drain them and try again
> + */
> + if (!page && !drained) {
> + drain_all_pages();
> + drained = true;
> + goto retry;
> + }
> +
> return page;
> }
>
> --
> 1.7.1
>
--
Kind regards,
Minchan Kim
* Re: [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails
2010-08-19 14:47 ` Minchan Kim
@ 2010-08-19 15:10 ` Mel Gorman
0 siblings, 0 replies; 82+ messages in thread
From: Mel Gorman @ 2010-08-19 15:10 UTC (permalink / raw)
To: Minchan Kim
Cc: linux-mm, Rik van Riel, Nick Piggin, Johannes Weiner,
KAMEZAWA Hiroyuki, KOSAKI Motohiro
On Thu, Aug 19, 2010 at 11:47:03PM +0900, Minchan Kim wrote:
> On Mon, Aug 16, 2010 at 10:42:13AM +0100, Mel Gorman wrote:
> > When under significant memory pressure, a process enters direct reclaim
> > and immediately afterwards tries to allocate a page. If it fails and no
> > further progress is made, it's possible the system will go OOM. However,
> > on systems with large amounts of memory, it's possible that a significant
> > number of pages are on per-cpu lists and inaccessible to the calling
> > process. This leads to a process entering direct reclaim more often than
> > it should increasing the pressure on the system and compounding the problem.
> >
> > This patch notes that if direct reclaim is making progress but
> > allocations are still failing that the system is already under heavy
> > pressure. In this case, it drains the per-cpu lists and tries the
> > allocation a second time before continuing.
> >
> > Signed-off-by: Mel Gorman <mel@csn.ul.ie>
> > ---
> > mm/page_alloc.c | 19 +++++++++++++++++--
> > 1 files changed, 17 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 67a2ed0..a8651a4 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -1844,6 +1844,7 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
> > struct page *page = NULL;
> > struct reclaim_state reclaim_state;
> > struct task_struct *p = current;
> > + bool drained = false;
> >
> > cond_resched();
> >
> > @@ -1865,11 +1866,25 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
> > if (order != 0)
> > drain_all_pages();
> >
>
> Nitpick:
>
> How about removing above condition and drain_all_pages?
> If get_page_from_freelist fails, we do drain_all_pages at last.
> It can remove double calling of drain_all_pagse in case of order > 0.
> In addition, if the VM can't reclaim anythings, we don't need to drain
> in case of order > 0.
>
That sounds reasonable. V2 of this series will delete the lines:

	if (order != 0)
		drain_all_pages();
>
> > - if (likely(*did_some_progress))
> > - page = get_page_from_freelist(gfp_mask, nodemask, order,
> > + if (unlikely(!(*did_some_progress)))
> > + return NULL;
> > +
> > +retry:
> > + page = get_page_from_freelist(gfp_mask, nodemask, order,
> > zonelist, high_zoneidx,
> > alloc_flags, preferred_zone,
> > migratetype);
> > +
> > + /*
> > + * If an allocation failed after direct reclaim, it could be because
> > + * pages are pinned on the per-cpu lists. Drain them and try again
> > + */
> > + if (!page && !drained) {
> > + drain_all_pages();
> > + drained = true;
> > + goto retry;
> > + }
> > +
> > return page;
> > }
> >
> > --
> > 1.7.1
> >
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
* [PATCH 0/3] Reduce watermark-related problems with the per-cpu allocator V2
@ 2010-08-23 8:00 Mel Gorman
2010-08-23 8:00 ` [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails Mel Gorman
0 siblings, 1 reply; 82+ messages in thread
From: Mel Gorman @ 2010-08-23 8:00 UTC (permalink / raw)
To: Andrew Morton
Cc: Linux Kernel List, linux-mm, Rik van Riel, Johannes Weiner,
Minchan Kim, Christoph Lameter, KAMEZAWA Hiroyuki,
KOSAKI Motohiro, Mel Gorman
Changelog since V1
o Fix for !CONFIG_SMP
o Correct spelling mistakes
o Clarify a ChangeLog
o Only check for counter drift on machines large enough for the counter
drift to breach the min watermark when NR_FREE_PAGES reports the low
watermark is fine
Internal IBM test teams beta testing distribution kernels have reported
problems on machines with a large number of CPUs whereby page allocator
failure messages show huge differences between the nr_free_pages vmstat
counter and what is available on the buddy lists. In an extreme example,
nr_free_pages was above the min watermark but zero pages were on the buddy
lists, allowing the system to potentially livelock, unable to make forward
progress unless an allocation succeeds. There is no reason why the problems
would not affect mainline, so the following series mitigates the problems
in the page allocator related to per-cpu counter drift and lists.
The first patch ensures that counters are updated after pages are added to
free lists.
The second patch notes that the counter drift between nr_free_pages and what
is on the per-cpu lists can be very high. When memory is low and kswapd
is awake, the per-cpu counters are checked as well as reading the value
of NR_FREE_PAGES. This will slow the page allocator when memory is low and
kswapd is awake but it will be much harder to breach the min watermark and
potentially livelock the system.
The third patch notes that after direct-reclaim an allocation can
fail because the necessary pages are on the per-cpu lists. After a
direct-reclaim-and-allocation-failure, the per-cpu lists are drained and
a second attempt is made.
Performance tests did not show up anything interesting. A version of this
series that continually called vmstat_update() when memory was low was
tested internally and found to help the counter drift problem. I described
this during LSF/MM Summit and the potential for IPI storms was frowned
upon. An alternative fix is in patch two which uses for_each_online_cpu()
to read the vmstat deltas while memory is low and kswapd is awake. This
should be functionally similar.
This patch should be merged after the patch "vmstat : update
zone stat threshold at onlining a cpu" which is in mmotm as
vmstat-update-zone-stat-threshold-when-onlining-a-cpu.patch .
Are there any objections to merging?
include/linux/mmzone.h | 13 +++++++++++++
mm/mmzone.c | 29 +++++++++++++++++++++++++++++
mm/page_alloc.c | 29 +++++++++++++++++++++--------
mm/vmstat.c | 15 ++++++++++++++-
4 files changed, 77 insertions(+), 9 deletions(-)
* [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails
2010-08-23 8:00 [PATCH 0/3] Reduce watermark-related problems with the per-cpu allocator V2 Mel Gorman
@ 2010-08-23 8:00 ` Mel Gorman
2010-08-23 23:17 ` KOSAKI Motohiro
0 siblings, 1 reply; 82+ messages in thread
From: Mel Gorman @ 2010-08-23 8:00 UTC (permalink / raw)
To: Andrew Morton
Cc: Linux Kernel List, linux-mm, Rik van Riel, Johannes Weiner,
Minchan Kim, Christoph Lameter, KAMEZAWA Hiroyuki,
KOSAKI Motohiro, Mel Gorman
When under significant memory pressure, a process enters direct reclaim
and immediately afterwards tries to allocate a page. If it fails and no
further progress is made, it's possible the system will go OOM. However,
on systems with large amounts of memory, it's possible that a significant
number of pages are on per-cpu lists and inaccessible to the calling
process. This leads to a process entering direct reclaim more often than
it should increasing the pressure on the system and compounding the problem.
This patch notes that if direct reclaim is making progress but
allocations are still failing that the system is already under heavy
pressure. In this case, it drains the per-cpu lists and tries the
allocation a second time before continuing.
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
---
mm/page_alloc.c | 20 ++++++++++++++++----
1 files changed, 16 insertions(+), 4 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index bbaa959..750e1dc 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1847,6 +1847,7 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
struct page *page = NULL;
struct reclaim_state reclaim_state;
struct task_struct *p = current;
+ bool drained = false;
cond_resched();
@@ -1865,14 +1866,25 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
cond_resched();
- if (order != 0)
- drain_all_pages();
+ if (unlikely(!(*did_some_progress)))
+ return NULL;
- if (likely(*did_some_progress))
- page = get_page_from_freelist(gfp_mask, nodemask, order,
+retry:
+ page = get_page_from_freelist(gfp_mask, nodemask, order,
zonelist, high_zoneidx,
alloc_flags, preferred_zone,
migratetype);
+
+ /*
+ * If an allocation failed after direct reclaim, it could be because
+ * pages are pinned on the per-cpu lists. Drain them and try again
+ */
+ if (!page && !drained) {
+ drain_all_pages();
+ drained = true;
+ goto retry;
+ }
+
return page;
}
--
1.7.1
* Re: [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails
2010-08-23 8:00 ` [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails Mel Gorman
@ 2010-08-23 23:17 ` KOSAKI Motohiro
0 siblings, 0 replies; 82+ messages in thread
From: KOSAKI Motohiro @ 2010-08-23 23:17 UTC (permalink / raw)
To: Mel Gorman
Cc: kosaki.motohiro, Andrew Morton, Linux Kernel List, linux-mm,
Rik van Riel, Johannes Weiner, Minchan Kim, Christoph Lameter,
KAMEZAWA Hiroyuki
> When under significant memory pressure, a process enters direct reclaim
> and immediately afterwards tries to allocate a page. If it fails and no
> further progress is made, it's possible the system will go OOM. However,
> on systems with large amounts of memory, it's possible that a significant
> number of pages are on per-cpu lists and inaccessible to the calling
> process. This leads to a process entering direct reclaim more often than
> it should increasing the pressure on the system and compounding the problem.
>
> This patch notes that if direct reclaim is making progress but
> allocations are still failing that the system is already under heavy
> pressure. In this case, it drains the per-cpu lists and tries the
> allocation a second time before continuing.
>
> Signed-off-by: Mel Gorman <mel@csn.ul.ie>
> Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
> Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> ---
> mm/page_alloc.c | 20 ++++++++++++++++----
> 1 files changed, 16 insertions(+), 4 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index bbaa959..750e1dc 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1847,6 +1847,7 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
> struct page *page = NULL;
> struct reclaim_state reclaim_state;
> struct task_struct *p = current;
> + bool drained = false;
>
> cond_resched();
>
> @@ -1865,14 +1866,25 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
>
> cond_resched();
>
> - if (order != 0)
> - drain_all_pages();
> + if (unlikely(!(*did_some_progress)))
> + return NULL;
>
> - if (likely(*did_some_progress))
> - page = get_page_from_freelist(gfp_mask, nodemask, order,
> +retry:
> + page = get_page_from_freelist(gfp_mask, nodemask, order,
> zonelist, high_zoneidx,
> alloc_flags, preferred_zone,
> migratetype);
> +
> + /*
> + * If an allocation failed after direct reclaim, it could be because
> + * pages are pinned on the per-cpu lists. Drain them and try again
> + */
> + if (!page && !drained) {
> + drain_all_pages();
> + drained = true;
> + goto retry;
> + }
> +
> return page;
I haven't read all of this patch series (iow, this mail is luckily on top
of my mail box now), but at least I think this one is correct and good.
Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
* [PATCH 0/3] Reduce watermark-related problems with the per-cpu allocator V3
@ 2010-08-31 17:37 Mel Gorman
2010-08-31 17:37 ` [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails Mel Gorman
0 siblings, 1 reply; 82+ messages in thread
From: Mel Gorman @ 2010-08-31 17:37 UTC (permalink / raw)
To: Andrew Morton
Cc: Linux Kernel List, linux-mm, Rik van Riel, Johannes Weiner,
Minchan Kim, Christoph Lameter, KAMEZAWA Hiroyuki,
KOSAKI Motohiro, Mel Gorman
Changelog since V2
o Minor clarifications
o Rebase to 2.6.36-rc3
Changelog since V1
o Fix for !CONFIG_SMP
o Correct spelling mistakes
o Clarify a ChangeLog
o Only check for counter drift on machines large enough for the counter
drift to breach the min watermark when NR_FREE_PAGES reports the low
watermark is fine
Internal IBM test teams beta testing distribution kernels have reported
problems on machines with a large number of CPUs whereby page allocator
failure messages show huge differences between the nr_free_pages vmstat
counter and what is available on the buddy lists. In an extreme example,
nr_free_pages was above the min watermark but zero pages were on the buddy
lists, allowing the system to potentially livelock, unable to make forward
progress unless an allocation succeeds. There is no reason why the problems
would not affect mainline, so the following series mitigates the problems
in the page allocator related to per-cpu counter drift and lists.
The first patch ensures that counters are updated after pages are added to
free lists.
The second patch notes that the counter drift between nr_free_pages and what
is on the per-cpu lists can be very high. When memory is low and kswapd
is awake, the per-cpu counters are checked as well as reading the value
of NR_FREE_PAGES. This will slow the page allocator when memory is low and
kswapd is awake but it will be much harder to breach the min watermark and
potentially livelock the system.
The third patch notes that after direct-reclaim an allocation can
fail because the necessary pages are on the per-cpu lists. After a
direct-reclaim-and-allocation-failure, the per-cpu lists are drained and
a second attempt is made.
Performance tests against 2.6.36-rc1 did not show up anything interesting. A
version of this series that continually called vmstat_update() when
memory was low was tested internally and found to help the counter drift
problem. I described this during LSF/MM Summit and the potential for IPI
storms was frowned upon. An alternative fix is in patch two which uses
for_each_online_cpu() to read the vmstat deltas while memory is low and
kswapd is awake. This should be functionally similar.
Christoph Lameter made two suggestions that I did not take action on. The
first was to make a generic helper that could be used to get a semi-accurate
reading of any vmstat counter. However, there is no evidence this is
necessary and it would be better to get a clear understanding of what counter
other than NR_FREE_PAGES would need special treatment by making it obvious
when such a helper is introduced. The second suggestion was to shrink the
threshold at which vmstat counters get updated, which would affect all
counters. It was also unclear if this was sufficient or necessary; only
NR_FREE_PAGES is the problem counter, so why affect every other counter?
Also, shrinking the threshold just shrinks the window in which the race
can occur. Hence, I'm
reposting the series as-is to see if there are any current objections to
deal with or if we can close up this problem now.
This patch should be merged after the patch "vmstat : update
zone stat threshold at onlining a cpu" which is in mmotm as
vmstat-update-zone-stat-threshold-when-onlining-a-cpu.patch . If we can
agree on it, it's a stable candidate.
include/linux/mmzone.h | 13 +++++++++++++
mm/mmzone.c | 29 +++++++++++++++++++++++++++++
mm/page_alloc.c | 29 +++++++++++++++++++++--------
mm/vmstat.c | 15 ++++++++++++++-
4 files changed, 77 insertions(+), 9 deletions(-)
* [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails
2010-08-31 17:37 [PATCH 0/3] Reduce watermark-related problems with the per-cpu allocator V3 Mel Gorman
@ 2010-08-31 17:37 ` Mel Gorman
2010-08-31 18:26 ` Christoph Lameter
0 siblings, 1 reply; 82+ messages in thread
From: Mel Gorman @ 2010-08-31 17:37 UTC (permalink / raw)
To: Andrew Morton
Cc: Linux Kernel List, linux-mm, Rik van Riel, Johannes Weiner,
Minchan Kim, Christoph Lameter, KAMEZAWA Hiroyuki,
KOSAKI Motohiro, Mel Gorman
When under significant memory pressure, a process enters direct reclaim
and immediately afterwards tries to allocate a page. If it fails and no
further progress is made, it's possible the system will go OOM. However,
on systems with large amounts of memory, it's possible that a significant
number of pages are on per-cpu lists and inaccessible to the calling
process. This leads to a process entering direct reclaim more often than
it should increasing the pressure on the system and compounding the problem.
This patch notes that if direct reclaim is making progress but
allocations are still failing that the system is already under heavy
pressure. In this case, it drains the per-cpu lists and tries the
allocation a second time before continuing.
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
---
mm/page_alloc.c | 20 ++++++++++++++++----
1 files changed, 16 insertions(+), 4 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index bbaa959..750e1dc 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1847,6 +1847,7 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
struct page *page = NULL;
struct reclaim_state reclaim_state;
struct task_struct *p = current;
+ bool drained = false;
cond_resched();
@@ -1865,14 +1866,25 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
cond_resched();
- if (order != 0)
- drain_all_pages();
+ if (unlikely(!(*did_some_progress)))
+ return NULL;
- if (likely(*did_some_progress))
- page = get_page_from_freelist(gfp_mask, nodemask, order,
+retry:
+ page = get_page_from_freelist(gfp_mask, nodemask, order,
zonelist, high_zoneidx,
alloc_flags, preferred_zone,
migratetype);
+
+ /*
+ * If an allocation failed after direct reclaim, it could be because
+ * pages are pinned on the per-cpu lists. Drain them and try again
+ */
+ if (!page && !drained) {
+ drain_all_pages();
+ drained = true;
+ goto retry;
+ }
+
return page;
}
--
1.7.1
* Re: [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails
2010-08-31 17:37 ` [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails Mel Gorman
@ 2010-08-31 18:26 ` Christoph Lameter
0 siblings, 0 replies; 82+ messages in thread
From: Christoph Lameter @ 2010-08-31 18:26 UTC (permalink / raw)
To: Mel Gorman
Cc: Andrew Morton, Linux Kernel List, linux-mm, Rik van Riel,
Johannes Weiner, Minchan Kim, KAMEZAWA Hiroyuki, KOSAKI Motohiro
Reviewed-by: Christoph Lameter <cl@linux.com>
* [PATCH 0/3] Reduce watermark-related problems with the per-cpu allocator V4
@ 2010-09-03 9:08 Mel Gorman
2010-09-03 9:08 ` [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails Mel Gorman
0 siblings, 1 reply; 82+ messages in thread
From: Mel Gorman @ 2010-09-03 9:08 UTC (permalink / raw)
To: Andrew Morton
Cc: Linux Kernel List, linux-mm, Rik van Riel, Johannes Weiner,
Minchan Kim, Christoph Lameter, KAMEZAWA Hiroyuki,
KOSAKI Motohiro, Mel Gorman
The noteworthy change is to patch 2, which now uses the generic
zone_page_state_snapshot() in zone_nr_free_pages(). Similar logic still
applies for *when* zone_page_state_snapshot() is called, to avoid overhead.
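The helper body is not quoted in this thread, so the following is only a
sketch of what zone_page_state_snapshot() does, inferred from the
description above:

	/*
	 * A more accurate but more expensive reading of a vmstat counter:
	 * fold in the per-cpu deltas that have not yet been drained back
	 * to the zone counter.
	 */
	static inline unsigned long zone_page_state_snapshot(struct zone *zone,
					enum zone_stat_item item)
	{
		long x = atomic_long_read(&zone->vm_stat[item]);
		int cpu;

		for_each_online_cpu(cpu)
			x += per_cpu_ptr(zone->pageset, cpu)->vm_stat_diff[item];

		if (x < 0)
			x = 0;
		return x;
	}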
Changelog since V3
o Use generic helper for NR_FREE_PAGES estimate when necessary
Changelog since V2
o Minor clarifications
o Rebase to 2.6.36-rc3
Changelog since V1
o Fix for !CONFIG_SMP
o Correct spelling mistakes
o Clarify a ChangeLog
o Only check for counter drift on machines large enough for the counter
drift to breach the min watermark when NR_FREE_PAGES reports the low
watermark is fine
Internal IBM test teams beta testing distribution kernels have reported
problems on machines with a large number of CPUs whereby page allocator
failure messages show huge differences between the nr_free_pages vmstat
counter and what is available on the buddy lists. In an extreme example,
nr_free_pages was above the min watermark but zero pages were on the buddy
lists, allowing the system to potentially livelock, unable to make forward
progress unless an allocation succeeds. There is no reason why the problems
would not affect mainline, so the following series mitigates the problems
in the page allocator related to per-cpu counter drift and lists.
The first patch ensures that counters are updated after pages are added to
free lists.
The second patch notes that the counter drift between nr_free_pages and what
is on the per-cpu lists can be very high. When memory is low and kswapd
is awake, the per-cpu counters are checked as well as reading the value
of NR_FREE_PAGES. This will slow the page allocator when memory is low and
kswapd is awake but it will be much harder to breach the min watermark and
potentially livelock the system.
The third patch notes that after direct-reclaim an allocation can
fail because the necessary pages are on the per-cpu lists. After a
direct-reclaim-and-allocation-failure, the per-cpu lists are drained and
a second attempt is made.
Performance tests against 2.6.36-rc3 did not show up anything interesting. A
version of this series that continually called vmstat_update() when
memory was low was tested internally and found to help the counter drift
problem. I described this during LSF/MM Summit and the potential for IPI
storms was frowned upon. An alternative fix is in patch two which uses
for_each_online_cpu() to read the vmstat deltas while memory is low and
kswapd is awake. This should be functionally similar.
This patch should be merged after the patch "vmstat : update
zone stat threshold at onlining a cpu" which is in mmotm as
vmstat-update-zone-stat-threshold-when-onlining-a-cpu.patch .
If we can agree on it, this series is a stable candidate.
include/linux/mmzone.h | 13 +++++++++++++
include/linux/vmstat.h | 22 ++++++++++++++++++++++
mm/mmzone.c | 21 +++++++++++++++++++++
mm/page_alloc.c | 29 +++++++++++++++++++++--------
mm/vmstat.c | 15 ++++++++++++++-
5 files changed, 91 insertions(+), 9 deletions(-)
* [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails
2010-09-03 9:08 [PATCH 0/3] Reduce watermark-related problems with the per-cpu allocator V4 Mel Gorman
@ 2010-09-03 9:08 ` Mel Gorman
2010-09-03 23:00 ` Andrew Morton
2010-09-08 7:43 ` KOSAKI Motohiro
0 siblings, 2 replies; 82+ messages in thread
From: Mel Gorman @ 2010-09-03 9:08 UTC (permalink / raw)
To: Andrew Morton
Cc: Linux Kernel List, linux-mm, Rik van Riel, Johannes Weiner,
Minchan Kim, Christoph Lameter, KAMEZAWA Hiroyuki,
KOSAKI Motohiro, Mel Gorman
When under significant memory pressure, a process enters direct reclaim
and immediately afterwards tries to allocate a page. If it fails and no
further progress is made, it's possible the system will go OOM. However,
on systems with large amounts of memory, it's possible that a significant
number of pages are on per-cpu lists and inaccessible to the calling
process. This leads to a process entering direct reclaim more often than
it should increasing the pressure on the system and compounding the problem.
This patch notes that if direct reclaim is making progress but
allocations are still failing that the system is already under heavy
pressure. In this case, it drains the per-cpu lists and tries the
allocation a second time before continuing.
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Reviewed-by: Christoph Lameter <cl@linux.com>
---
mm/page_alloc.c | 20 ++++++++++++++++----
1 files changed, 16 insertions(+), 4 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index bbaa959..750e1dc 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1847,6 +1847,7 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
struct page *page = NULL;
struct reclaim_state reclaim_state;
struct task_struct *p = current;
+ bool drained = false;
cond_resched();
@@ -1865,14 +1866,25 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
cond_resched();
- if (order != 0)
- drain_all_pages();
+ if (unlikely(!(*did_some_progress)))
+ return NULL;
- if (likely(*did_some_progress))
- page = get_page_from_freelist(gfp_mask, nodemask, order,
+retry:
+ page = get_page_from_freelist(gfp_mask, nodemask, order,
zonelist, high_zoneidx,
alloc_flags, preferred_zone,
migratetype);
+
+ /*
+ * If an allocation failed after direct reclaim, it could be because
+ * pages are pinned on the per-cpu lists. Drain them and try again
+ */
+ if (!page && !drained) {
+ drain_all_pages();
+ drained = true;
+ goto retry;
+ }
+
return page;
}
--
1.7.1
* Re: [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails
2010-09-03 9:08 ` [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails Mel Gorman
@ 2010-09-03 23:00 ` Andrew Morton
2010-09-04 2:25 ` Dave Chinner
2010-09-05 18:14 ` Mel Gorman
2010-09-08 7:43 ` KOSAKI Motohiro
1 sibling, 2 replies; 82+ messages in thread
From: Andrew Morton @ 2010-09-03 23:00 UTC (permalink / raw)
To: Mel Gorman
Cc: Linux Kernel List, linux-mm, Rik van Riel, Johannes Weiner,
Minchan Kim, Christoph Lameter, KAMEZAWA Hiroyuki,
KOSAKI Motohiro, Dave Chinner, Wu Fengguang, David Rientjes
On Fri, 3 Sep 2010 10:08:46 +0100
Mel Gorman <mel@csn.ul.ie> wrote:
> When under significant memory pressure, a process enters direct reclaim
> and immediately afterwards tries to allocate a page. If it fails and no
> further progress is made, it's possible the system will go OOM. However,
> on systems with large amounts of memory, it's possible that a significant
> number of pages are on per-cpu lists and inaccessible to the calling
> process. This leads to a process entering direct reclaim more often than
> it should, increasing the pressure on the system and compounding the problem.
>
> This patch notes that if direct reclaim is making progress but allocations
> are still failing, the system is already under heavy pressure. In this case,
> it drains the per-cpu lists and tries the allocation a second time before
> continuing.
>
> Signed-off-by: Mel Gorman <mel@csn.ul.ie>
> Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
> Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
> Reviewed-by: Christoph Lameter <cl@linux.com>
> ---
> mm/page_alloc.c | 20 ++++++++++++++++----
> 1 files changed, 16 insertions(+), 4 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index bbaa959..750e1dc 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1847,6 +1847,7 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
> struct page *page = NULL;
> struct reclaim_state reclaim_state;
> struct task_struct *p = current;
> + bool drained = false;
>
> cond_resched();
>
> @@ -1865,14 +1866,25 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
>
> cond_resched();
>
> - if (order != 0)
> - drain_all_pages();
> + if (unlikely(!(*did_some_progress)))
> + return NULL;
>
> - if (likely(*did_some_progress))
> - page = get_page_from_freelist(gfp_mask, nodemask, order,
> +retry:
> + page = get_page_from_freelist(gfp_mask, nodemask, order,
> zonelist, high_zoneidx,
> alloc_flags, preferred_zone,
> migratetype);
> +
> + /*
> + * If an allocation failed after direct reclaim, it could be because
> + * pages are pinned on the per-cpu lists. Drain them and try again
> + */
> + if (!page && !drained) {
> + drain_all_pages();
> + drained = true;
> + goto retry;
> + }
> +
> return page;
> }
The patch looks reasonable.
But please take a look at the recent thread "mm: minute-long livelocks
in memory reclaim". There, people are pointing fingers at that
drain_all_pages() call, suspecting that it's causing huge IPI storms.
Dave was going to test this theory but afaik hasn't yet done so. It
would be nice to tie these threads together if poss?
* Re: [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails
2010-09-03 23:00 ` Andrew Morton
@ 2010-09-04 2:25 ` Dave Chinner
2010-09-04 3:21 ` Andrew Morton
` (2 more replies)
2010-09-05 18:14 ` Mel Gorman
1 sibling, 3 replies; 82+ messages in thread
From: Dave Chinner @ 2010-09-04 2:25 UTC (permalink / raw)
To: Andrew Morton
Cc: Mel Gorman, Linux Kernel List, linux-mm, Rik van Riel,
Johannes Weiner, Minchan Kim, Christoph Lameter,
KAMEZAWA Hiroyuki, KOSAKI Motohiro, Wu Fengguang, David Rientjes
On Fri, Sep 03, 2010 at 04:00:26PM -0700, Andrew Morton wrote:
> On Fri, 3 Sep 2010 10:08:46 +0100
> Mel Gorman <mel@csn.ul.ie> wrote:
>
> > When under significant memory pressure, a process enters direct reclaim
> > and immediately afterwards tries to allocate a page. If it fails and no
> > further progress is made, it's possible the system will go OOM. However,
> > on systems with large amounts of memory, it's possible that a significant
> > number of pages are on per-cpu lists and inaccessible to the calling
> > process. This leads to a process entering direct reclaim more often than
> > it should increasing the pressure on the system and compounding the problem.
> >
> > This patch notes that if direct reclaim is making progress but
> > allocations are still failing that the system is already under heavy
> > pressure. In this case, it drains the per-cpu lists and tries the
> > allocation a second time before continuing.
....
> The patch looks reasonable.
>
> But please take a look at the recent thread "mm: minute-long livelocks
> in memory reclaim". There, people are pointing fingers at that
> drain_all_pages() call, suspecting that it's causing huge IPI storms.
>
> Dave was going to test this theory but afaik hasn't yet done so. It
> would be nice to tie these threads together if poss?
It's been my "next-thing-to-do" since David suggested I try it -
tracking down other problems has got in the way, though. I
just ran my test a couple of times through:
$ ./fs_mark -D 10000 -L 63 -S0 -n 100000 -s 0 \
-d /mnt/scratch/0 -d /mnt/scratch/1 \
-d /mnt/scratch/3 -d /mnt/scratch/2 \
-d /mnt/scratch/4 -d /mnt/scratch/5 \
-d /mnt/scratch/6 -d /mnt/scratch/7
To create millions of inodes in parallel on an 8p/4G RAM VM.
The filesystem is ~1.1TB XFS:
# mkfs.xfs -f -d agcount=16 /dev/vdb
meta-data=/dev/vdb isize=256 agcount=16, agsize=16777216 blks
= sectsz=512 attr=2
data = bsize=4096 blocks=268435456, imaxpct=5
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0
log =internal log bsize=4096 blocks=131072, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
# mount -o inode64,delaylog,logbsize=262144,nobarrier /dev/vdb /mnt/scratch
Performance prior to this patch was that each iteration resulted in
~65k files/s, with occasional peaks to 90k files/s, but frequent drops
to 45k files/s when reclaim ran to reclaim the inode caches. This load
ran permanently at 800% CPU usage.
Every so often (maybe once or twice per 50M inode create run) all 8 CPUs
would remain pegged but the create rate would drop to zero for a few
seconds to a couple of minutes. That was the livelock issue I reported.
With this patchset, I'm seeing a per-iteration average of ~77k
files/s, with only a couple of iterations dropping down to ~55k
files/s and a significant number above 90k/s. The runtime to 50M
inodes is down by ~30% and the average CPU usage across the run is
around 700%. IOWs, there is a significant gain in performance and
a significant drop in CPU usage. I've done two runs to 50M inodes,
and not seen any sign of a livelock, even for short periods of time.
Ah, spoke too soon - I let the second run keep going, and at ~68M
inodes it's just pegged all the CPUs and is pretty much completely
wedged. Serial console is not responding, I can't get a new login,
and the only thing responding that tells me the machine is alive is
the remote PCP monitoring. It's been stuck for 5 minutes .... and
now it is back. Here's what I saw:
http://userweb.kernel.org/~dgc/shrinker-2.6.36/fs_mark-wedge-1.png
The livelock is at the right of the charts, where the top chart is
all red (system CPU time), and the other charts flat line to zero.
And according to fsmark:
1 66400000 0 64554.2 7705926
1 67200000 0 64836.1 7573013
<hang happened here>
2 68000000 0 69472.8 7941399
2 68800000 0 85017.5 7585203
it didn't record any change in performance, which means the livelock
probably occurred between iterations. I couldn't get any info on
what caused the livelock this time so I can only assume it has the
same cause....
Still, given the improvements in performance from this patchset,
I'd say inclusion is a no-brainer....
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails
2010-09-04 2:25 ` Dave Chinner
@ 2010-09-04 3:21 ` Andrew Morton
2010-09-04 7:58 ` Dave Chinner
2010-09-04 3:23 ` Wu Fengguang
2010-09-05 18:22 ` Mel Gorman
2 siblings, 1 reply; 82+ messages in thread
From: Andrew Morton @ 2010-09-04 3:21 UTC (permalink / raw)
To: Dave Chinner
Cc: Mel Gorman, Linux Kernel List, linux-mm, Rik van Riel,
Johannes Weiner, Minchan Kim, Christoph Lameter,
KAMEZAWA Hiroyuki, KOSAKI Motohiro, Wu Fengguang, David Rientjes
On Sat, 4 Sep 2010 12:25:45 +1000 Dave Chinner <david@fromorbit.com> wrote:
> Still, given the improvements in performance from this patchset,
> I'd say inclusion is a no-brainer....
OK, thanks.
It'd be interesting to check the IPI frequency with and without -
/proc/interrupts "CAL" field. Presumably it went down a lot.
I wouldn't bust a gut over it though :)
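For anyone wanting to collect those numbers, a minimal user-space sketch
(illustrative only, not from any posted patch) that pulls the CAL line out
of /proc/interrupts:
#include <stdio.h>
#include <string.h>
int main(void)
{
	char line[1024];
	FILE *f = fopen("/proc/interrupts", "r");
	if (!f)
		return 1;
	/* the CAL row holds the function-call IPI counts, one column per CPU */
	while (fgets(line, sizeof(line), f))
		if (strstr(line, "CAL:"))
			fputs(line, stdout);
	fclose(f);
	return 0;
}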
* Re: [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails
2010-09-04 3:21 ` Andrew Morton
@ 2010-09-04 7:58 ` Dave Chinner
2010-09-04 8:14 ` Dave Chinner
0 siblings, 1 reply; 82+ messages in thread
From: Dave Chinner @ 2010-09-04 7:58 UTC (permalink / raw)
To: Andrew Morton
Cc: Mel Gorman, Linux Kernel List, linux-mm, Rik van Riel,
Johannes Weiner, Minchan Kim, Christoph Lameter,
KAMEZAWA Hiroyuki, KOSAKI Motohiro, Wu Fengguang, David Rientjes
On Fri, Sep 03, 2010 at 08:21:01PM -0700, Andrew Morton wrote:
> On Sat, 4 Sep 2010 12:25:45 +1000 Dave Chinner <david@fromorbit.com> wrote:
>
> > Still, given the improvements in performance from this patchset,
> > I'd say inclusion is a no-brainer....
>
> OK, thanks.
>
> It'd be interesting to check the IPI frequency with and without -
> /proc/interrupts "CAL" field. Presumably it went down a lot.
Maybe I suspected you would ask for this. I happened to dump
/proc/interrupts after the livelock run finished, so you're in
luck :)
The lines below are:
before: before running the single 50M inode create workload
after: the numbers after the run completes
livelock: the numbers after two runs with a livelock in the second
Vanilla 2.6.36-rc3:
before: 561 350 614 282 559 335 365 363
after: 10472 10473 10544 10681 9818 10837 10187 9923
.36-rc3 With patchset:
before: 452 426 441 337 748 321 498 357
after: 9463 9112 8671 8830 9391 8684 9768 8971
The numbers aren't that different - roughly 10% lower on average
with the patchset. I will state that the vanilla kernel runs I just did
had noticeably more consistent performance than the previous results I
had achieved, so perhaps it wasn't triggering the livelock conditions as
effectively this time through.
And finally:
livelock: 59458 58367 58559 59493 59614 57970 59060 58207
So the livelock case tends to indicate roughly 40,000 more IPI
interrupts per CPU occurred. The livelock occurred for close to 5
minutes, so that's roughly 130 IPIs per second per CPU....
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails
2010-09-04 7:58 ` Dave Chinner
@ 2010-09-04 8:14 ` Dave Chinner
[not found] ` <20100905015400.GA10714@localhost>
0 siblings, 1 reply; 82+ messages in thread
From: Dave Chinner @ 2010-09-04 8:14 UTC (permalink / raw)
To: Andrew Morton
Cc: Mel Gorman, Linux Kernel List, linux-mm, Rik van Riel,
Johannes Weiner, Minchan Kim, Christoph Lameter,
KAMEZAWA Hiroyuki, KOSAKI Motohiro, Wu Fengguang, David Rientjes
On Sat, Sep 04, 2010 at 05:58:40PM +1000, Dave Chinner wrote:
> On Fri, Sep 03, 2010 at 08:21:01PM -0700, Andrew Morton wrote:
> > On Sat, 4 Sep 2010 12:25:45 +1000 Dave Chinner <david@fromorbit.com> wrote:
> >
> > > Still, given the improvements in performance from this patchset,
> > > I'd say inclusion is a no-brainer....
> >
> > OK, thanks.
> >
> > It'd be interesting to check the IPI frequency with and without -
> > /proc/interrupts "CAL" field. Presumably it went down a lot.
>
> Maybe I suspected you would ask for this. I happened to dump
> /proc/interrupts after the livelock run finished, so you're in
> luck :)
....
>
> livelock: 59458 58367 58559 59493 59614 57970 59060 58207
>
> So the livelock case tends to indicate roughly 40,000 more IPI
> interrupts per CPU occurred. The livelock occurred for close to 5
> minutes, so that's roughly 130 IPIs per second per CPU....
And just to confuse the issue further, I just had a livelock on a
vanilla kernel that did *not* cause the CAL counts to increase.
Hence it appears that the IPI storms are not the cause of the
livelocks I'm triggering....
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails
2010-09-04 2:25 ` Dave Chinner
2010-09-04 3:21 ` Andrew Morton
@ 2010-09-04 3:23 ` Wu Fengguang
2010-09-04 3:59 ` Andrew Morton
2010-09-05 18:22 ` Mel Gorman
2 siblings, 1 reply; 82+ messages in thread
From: Wu Fengguang @ 2010-09-04 3:23 UTC (permalink / raw)
To: Dave Chinner
Cc: Andrew Morton, Mel Gorman, Linux Kernel List, linux-mm@kvack.org,
Rik van Riel, Johannes Weiner, Minchan Kim, Christoph Lameter,
KAMEZAWA Hiroyuki, KOSAKI Motohiro, David Rientjes
On Sat, Sep 04, 2010 at 10:25:45AM +0800, Dave Chinner wrote:
> On Fri, Sep 03, 2010 at 04:00:26PM -0700, Andrew Morton wrote:
> > On Fri, 3 Sep 2010 10:08:46 +0100
> > Mel Gorman <mel@csn.ul.ie> wrote:
> >
> > > When under significant memory pressure, a process enters direct reclaim
> > > and immediately afterwards tries to allocate a page. If it fails and no
> > > further progress is made, it's possible the system will go OOM. However,
> > > on systems with large amounts of memory, it's possible that a significant
> > > number of pages are on per-cpu lists and inaccessible to the calling
> > > process. This leads to a process entering direct reclaim more often than
> > > it should, increasing the pressure on the system and compounding the problem.
> > >
> > > This patch notes that if direct reclaim is making progress but allocations
> > > are still failing, the system is already under heavy pressure. In this case,
> > > it drains the per-cpu lists and tries the allocation a second time before
> > > continuing.
> ....
> > The patch looks reasonable.
> >
> > But please take a look at the recent thread "mm: minute-long livelocks
> > in memory reclaim". There, people are pointing fingers at that
> > drain_all_pages() call, suspecting that it's causing huge IPI storms.
> >
> > Dave was going to test this theory but afaik hasn't yet done so. It
> > would be nice to tie these threads together if poss?
>
> It's been my "next-thing-to-do" since David suggested I try it -
> tracking down other problems has got in the way, though. I
> just ran my test a couple of times through:
>
> $ ./fs_mark -D 10000 -L 63 -S0 -n 100000 -s 0 \
> -d /mnt/scratch/0 -d /mnt/scratch/1 \
> -d /mnt/scratch/3 -d /mnt/scratch/2 \
> -d /mnt/scratch/4 -d /mnt/scratch/5 \
> -d /mnt/scratch/6 -d /mnt/scratch/7
>
> To create millions of inodes in parallel on an 8p/4G RAM VM.
> The filesystem is ~1.1TB XFS:
>
> # mkfs.xfs -f -d agcount=16 /dev/vdb
> meta-data=/dev/vdb isize=256 agcount=16, agsize=16777216 blks
> = sectsz=512 attr=2
> data = bsize=4096 blocks=268435456, imaxpct=5
> = sunit=0 swidth=0 blks
> naming =version 2 bsize=4096 ascii-ci=0
> log =internal log bsize=4096 blocks=131072, version=2
> = sectsz=512 sunit=0 blks, lazy-count=1
> realtime =none extsz=4096 blocks=0, rtextents=0
> # mount -o inode64,delaylog,logbsize=262144,nobarrier /dev/vdb /mnt/scratch
>
> Performance prior to this patch was that each iteration resulted in
> ~65k files/s, with occasional peaks to 90k files/s, but frequent drops
> to 45k files/s when reclaim ran to reclaim the inode caches. This load
> ran permanently at 800% CPU usage.
>
> Every so often (maybe once or twice per 50M inode create run) all 8 CPUs
> would remain pegged but the create rate would drop to zero for a few
> seconds to a couple of minutes. That was the livelock issue I reported.
>
> With this patchset, I'm seeing a per-iteration average of ~77k
> files/s, with only a couple of iterations dropping down to ~55k
> files/s and a significant number above 90k/s. The runtime to 50M
> inodes is down by ~30% and the average CPU usage across the run is
> around 700%. IOWs, there is a significant gain in performance and
> a significant drop in CPU usage. I've done two runs to 50M inodes,
> and not seen any sign of a livelock, even for short periods of time.
>
> Ah, spoke too soon - I let the second run keep going, and at ~68M
> inodes it's just pegged all the CPUs and is pretty much completely
> wedged. Serial console is not responding, I can't get a new login,
> and the only thing responding that tells me the machine is alive is
> the remote PCP monitoring. It's been stuck for 5 minutes .... and
> now it is back. Here's what I saw:
>
> http://userweb.kernel.org/~dgc/shrinker-2.6.36/fs_mark-wedge-1.png
>
> The livelock is at the right of the charts, where the top chart is
> all red (system CPU time), and the other charts flat line to zero.
>
> And according to fsmark:
>
> 1 66400000 0 64554.2 7705926
> 1 67200000 0 64836.1 7573013
> <hang happened here>
> 2 68000000 0 69472.8 7941399
> 2 68800000 0 85017.5 7585203
>
> it didn't record any change in performance, which means the livelock
> probably occurred between iterations. I couldn't get any info on
> what caused the livelock this time so I can only assume it has the
> same cause....
>
> Still, given the improvements in performance from this patchset,
> I'd say inclusion is a no-brainer....
In your case it's not really high memory pressure, but maybe too many
concurrent direct reclaimers, so that when one reclaims some free
pages, others kick in and "steal" them. So we need to kill the second
cond_resched() call (which effectively gives other tasks a good chance
to steal this task's vmscan fruits), and only do drain_all_pages() when
nothing was reclaimed (rather than when nothing was allocated).
Dave, will you give this patch a try? It's based on Mel's.
Thanks,
Fengguang
---
--- linux-next.orig/mm/page_alloc.c 2010-09-04 11:08:03.000000000 +0800
+++ linux-next/mm/page_alloc.c 2010-09-04 11:16:33.000000000 +0800
@@ -1850,6 +1850,7 @@ __alloc_pages_direct_reclaim(gfp_t gfp_m
cond_resched();
+retry:
/* We now go into synchronous reclaim */
cpuset_memory_pressure_bump();
p->flags |= PF_MEMALLOC;
@@ -1863,26 +1864,23 @@ __alloc_pages_direct_reclaim(gfp_t gfp_m
lockdep_clear_current_reclaim_state();
p->flags &= ~PF_MEMALLOC;
- cond_resched();
-
- if (unlikely(!(*did_some_progress)))
+ if (unlikely(!(*did_some_progress))) {
+ if (!drained) {
+ drain_all_pages();
+ drained = true;
+ goto retry;
+ }
return NULL;
+ }
-retry:
page = get_page_from_freelist(gfp_mask, nodemask, order,
zonelist, high_zoneidx,
alloc_flags, preferred_zone,
migratetype);
- /*
- * If an allocation failed after direct reclaim, it could be because
- * pages are pinned on the per-cpu lists. Drain them and try again
- */
- if (!page && !drained) {
- drain_all_pages();
- drained = true;
+ /* someone steal our vmscan fruits? */
+ if (!page && *did_some_progress)
goto retry;
- }
return page;
}
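Read as a whole, the control flow Fengguang's diff produces looks roughly
like this (a sketch of the function body with his change applied on top of
Mel's patch; the unchanged reclaim details are elided):
retry:
	/* ... synchronous reclaim runs here and sets *did_some_progress ... */
	if (unlikely(!(*did_some_progress))) {
		if (!drained) {
			/* reclaim freed nothing: drain once, then reclaim again */
			drain_all_pages();
			drained = true;
			goto retry;
		}
		return NULL;
	}
	page = get_page_from_freelist(gfp_mask, nodemask, order,
					zonelist, high_zoneidx,
					alloc_flags, preferred_zone,
					migratetype);
	/* reclaim made progress but the pages were taken: reclaim again */
	if (!page && *did_some_progress)
		goto retry;
	return page;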
* Re: [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails
2010-09-04 3:23 ` Wu Fengguang
@ 2010-09-04 3:59 ` Andrew Morton
2010-09-04 4:37 ` Wu Fengguang
0 siblings, 1 reply; 82+ messages in thread
From: Andrew Morton @ 2010-09-04 3:59 UTC (permalink / raw)
To: Wu Fengguang
Cc: Dave Chinner, Mel Gorman, Linux Kernel List, linux-mm@kvack.org,
Rik van Riel, Johannes Weiner, Minchan Kim, Christoph Lameter,
KAMEZAWA Hiroyuki, KOSAKI Motohiro, David Rientjes
On Sat, 4 Sep 2010 11:23:11 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > Still, given the improvements in performance from this patchset,
> > I'd say inclusion is a no-brainer....
>
> In your case it's not really high memory pressure, but maybe too many
> concurrent direct reclaimers, so that when one reclaims some free
> pages, others kick in and "steal" them. So we need to kill the second
> cond_resched() call (which effectively gives other tasks a good chance
> to steal this task's vmscan fruits), and only do drain_all_pages() when
> nothing was reclaimed (rather than when nothing was allocated).
Well... cond_resched() will only resched when this task has been
marked for preemption. If that's happening at such a high frequency
then Something Is Up with the scheduler, and the reported context
switch rate will be high.
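A simplified illustration of the semantics Andrew is describing (not the
exact scheduler source; the helper name is made up):
/*
 * cond_resched() only yields if the scheduler has already marked the
 * current task for preemption; otherwise it is close to a no-op.
 */
static inline int cond_resched_sketch(void)
{
	if (need_resched()) {		/* TIF_NEED_RESCHED set? */
		schedule();		/* voluntary context switch */
		return 1;
	}
	return 0;			/* common case: keep running */
}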
> Dave, will you give this patch a try? It's based on Mel's.
>
>
> --- linux-next.orig/mm/page_alloc.c 2010-09-04 11:08:03.000000000 +0800
> +++ linux-next/mm/page_alloc.c 2010-09-04 11:16:33.000000000 +0800
> @@ -1850,6 +1850,7 @@ __alloc_pages_direct_reclaim(gfp_t gfp_m
>
> cond_resched();
>
> +retry:
> /* We now go into synchronous reclaim */
> cpuset_memory_pressure_bump();
> p->flags |= PF_MEMALLOC;
> @@ -1863,26 +1864,23 @@ __alloc_pages_direct_reclaim(gfp_t gfp_m
> lockdep_clear_current_reclaim_state();
> p->flags &= ~PF_MEMALLOC;
>
> - cond_resched();
> -
> - if (unlikely(!(*did_some_progress)))
> + if (unlikely(!(*did_some_progress))) {
> + if (!drained) {
> + drain_all_pages();
> + drained = true;
> + goto retry;
> + }
> return NULL;
> + }
>
> -retry:
> page = get_page_from_freelist(gfp_mask, nodemask, order,
> zonelist, high_zoneidx,
> alloc_flags, preferred_zone,
> migratetype);
>
> - /*
> - * If an allocation failed after direct reclaim, it could be because
> - * pages are pinned on the per-cpu lists. Drain them and try again
> - */
> - if (!page && !drained) {
> - drain_all_pages();
> - drained = true;
> + /* someone steal our vmscan fruits? */
> + if (!page && *did_some_progress)
> goto retry;
> - }
Perhaps the fruit-stealing event is worth adding to the
userspace-exposed vm stats somewhere. But not in /proc - somewhere
more temporary, in debugfs.
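A minimal sketch of what such a temporary debugfs counter could look like
(the file name and variable are hypothetical, not part of any posted patch):
#include <linux/debugfs.h>
#include <linux/module.h>
static u32 reclaim_steal_count;	/* bumped wherever the retry is taken */
static int __init reclaim_steal_debugfs_init(void)
{
	/* exposes /sys/kernel/debug/reclaim_steal as a read-only u32 */
	debugfs_create_u32("reclaim_steal", 0444, NULL, &reclaim_steal_count);
	return 0;
}
late_initcall(reclaim_steal_debugfs_init);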
* Re: [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails
2010-09-04 3:59 ` Andrew Morton
@ 2010-09-04 4:37 ` Wu Fengguang
0 siblings, 0 replies; 82+ messages in thread
From: Wu Fengguang @ 2010-09-04 4:37 UTC (permalink / raw)
To: Andrew Morton
Cc: Dave Chinner, Mel Gorman, Linux Kernel List, linux-mm@kvack.org,
Rik van Riel, Johannes Weiner, Minchan Kim, Christoph Lameter,
KAMEZAWA Hiroyuki, KOSAKI Motohiro, David Rientjes
On Sat, Sep 04, 2010 at 11:59:45AM +0800, Andrew Morton wrote:
> On Sat, 4 Sep 2010 11:23:11 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
>
> > > Still, given the improvements in performance from this patchset,
> > > I'd say inclusion is a no-brainer....
> >
> > In your case it's not really high memory pressure, but maybe too many
> > concurrent direct reclaimers, so that when one reclaims some free
> > pages, others kick in and "steal" them. So we need to kill the second
> > cond_resched() call (which effectively gives other tasks a good chance
> > to steal this task's vmscan fruits), and only do drain_all_pages() when
> > nothing was reclaimed (rather than when nothing was allocated).
>
> Well... cond_resched() will only resched when this task has been
> marked for preemption. If that's happening at such a high frequency
> then Something Is Up with the scheduler, and the reported context
> switch rate will be high.
Yes, it may not necessarily schedule away. But whenever this happens,
the task will likely run into drain_all_pages() when it regains the CPU.
Because the drain_all_pages() cost is very high, it doesn't take many
reschedules to create an IPI storm.
> > Dave, will you give this patch a try? It's based on Mel's.
> >
> >
> > --- linux-next.orig/mm/page_alloc.c 2010-09-04 11:08:03.000000000 +0800
> > +++ linux-next/mm/page_alloc.c 2010-09-04 11:16:33.000000000 +0800
> > @@ -1850,6 +1850,7 @@ __alloc_pages_direct_reclaim(gfp_t gfp_m
> >
> > cond_resched();
> >
> > +retry:
> > /* We now go into synchronous reclaim */
> > cpuset_memory_pressure_bump();
> > p->flags |= PF_MEMALLOC;
> > @@ -1863,26 +1864,23 @@ __alloc_pages_direct_reclaim(gfp_t gfp_m
> > lockdep_clear_current_reclaim_state();
> > p->flags &= ~PF_MEMALLOC;
> >
> > - cond_resched();
> > -
> > - if (unlikely(!(*did_some_progress)))
> > + if (unlikely(!(*did_some_progress))) {
> > + if (!drained) {
> > + drain_all_pages();
> > + drained = true;
> > + goto retry;
> > + }
> > return NULL;
> > + }
> >
> > -retry:
> > page = get_page_from_freelist(gfp_mask, nodemask, order,
> > zonelist, high_zoneidx,
> > alloc_flags, preferred_zone,
> > migratetype);
> >
> > - /*
> > - * If an allocation failed after direct reclaim, it could be because
> > - * pages are pinned on the per-cpu lists. Drain them and try again
> > - */
> > - if (!page && !drained) {
> > - drain_all_pages();
> > - drained = true;
> > + /* someone steal our vmscan fruits? */
> > + if (!page && *did_some_progress)
> > goto retry;
> > - }
>
> Perhaps the fruit-stealing event is worth adding to the
> userspace-exposed vm stats somewhere. But not in /proc - somewhere
> more temporary, in debugfs.
There are no existing debugfs interfaces for vm stats, and I need to
go out right now, so I did the following quick (and temporary) hack
to allow Dave to collect the information. I will revisit the proper
interface to use later :)
Thanks,
Fengguang
---
include/linux/mmzone.h | 1 +
mm/page_alloc.c | 4 +++-
mm/vmstat.c | 1 +
3 files changed, 5 insertions(+), 1 deletion(-)
--- linux-next.orig/include/linux/mmzone.h 2010-09-04 12:30:26.000000000 +0800
+++ linux-next/include/linux/mmzone.h 2010-09-04 12:30:36.000000000 +0800
@@ -104,6 +104,7 @@ enum zone_stat_item {
NR_ISOLATED_ANON, /* Temporary isolated pages from anon lru */
NR_ISOLATED_FILE, /* Temporary isolated pages from file lru */
NR_SHMEM, /* shmem pages (included tmpfs/GEM pages) */
+ NR_RECLAIM_STEAL,
#ifdef CONFIG_NUMA
NUMA_HIT, /* allocated in intended node */
NUMA_MISS, /* allocated in non intended node */
--- linux-next.orig/mm/page_alloc.c 2010-09-04 12:28:09.000000000 +0800
+++ linux-next/mm/page_alloc.c 2010-09-04 12:33:39.000000000 +0800
@@ -1879,8 +1879,10 @@ retry:
migratetype);
/* someone steal our vmscan fruits? */
- if (!page && *did_some_progress)
+ if (!page && *did_some_progress) {
+ inc_zone_state(preferred_zone, NR_RECLAIM_STEAL);
goto retry;
+ }
return page;
}
--- linux-next.orig/mm/vmstat.c 2010-09-04 12:31:30.000000000 +0800
+++ linux-next/mm/vmstat.c 2010-09-04 12:31:42.000000000 +0800
@@ -732,6 +732,7 @@ static const char * const vmstat_text[]
"nr_isolated_anon",
"nr_isolated_file",
"nr_shmem",
+ "nr_reclaim_steal",
#ifdef CONFIG_NUMA
"numa_hit",
"numa_miss",
* Re: [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails
2010-09-04 2:25 ` Dave Chinner
2010-09-04 3:21 ` Andrew Morton
2010-09-04 3:23 ` Wu Fengguang
@ 2010-09-05 18:22 ` Mel Gorman
2 siblings, 0 replies; 82+ messages in thread
From: Mel Gorman @ 2010-09-05 18:22 UTC (permalink / raw)
To: Dave Chinner
Cc: Andrew Morton, Linux Kernel List, linux-mm, Rik van Riel,
Johannes Weiner, Minchan Kim, Christoph Lameter,
KAMEZAWA Hiroyuki, KOSAKI Motohiro, Wu Fengguang, David Rientjes
On Sat, Sep 04, 2010 at 12:25:45PM +1000, Dave Chinner wrote:
> On Fri, Sep 03, 2010 at 04:00:26PM -0700, Andrew Morton wrote:
> > On Fri, 3 Sep 2010 10:08:46 +0100
> > Mel Gorman <mel@csn.ul.ie> wrote:
> >
> > > When under significant memory pressure, a process enters direct reclaim
> > > and immediately afterwards tries to allocate a page. If it fails and no
> > > further progress is made, it's possible the system will go OOM. However,
> > > on systems with large amounts of memory, it's possible that a significant
> > > number of pages are on per-cpu lists and inaccessible to the calling
> > > process. This leads to a process entering direct reclaim more often than
> > > it should, increasing the pressure on the system and compounding the problem.
> > >
> > > This patch notes that if direct reclaim is making progress but allocations
> > > are still failing, the system is already under heavy pressure. In this case,
> > > it drains the per-cpu lists and tries the allocation a second time before
> > > continuing.
> ....
> > The patch looks reasonable.
> >
> > But please take a look at the recent thread "mm: minute-long livelocks
> > in memory reclaim". There, people are pointing fingers at that
> > drain_all_pages() call, suspecting that it's causing huge IPI storms.
> >
> > Dave was going to test this theory but afaik hasn't yet done so. It
> > would be nice to tie these threads together if poss?
>
> It's been my "next-thing-to-do" since David suggested I try it -
> tracking down other problems has got in the way, though. I
> just ran my test a couple of times through:
>
> $ ./fs_mark -D 10000 -L 63 -S0 -n 100000 -s 0 \
> -d /mnt/scratch/0 -d /mnt/scratch/1 \
> -d /mnt/scratch/3 -d /mnt/scratch/2 \
> -d /mnt/scratch/4 -d /mnt/scratch/5 \
> -d /mnt/scratch/6 -d /mnt/scratch/7
>
> To create millions of inodes in parallel on an 8p/4G RAM VM.
> The filesystem is ~1.1TB XFS:
>
> # mkfs.xfs -f -d agcount=16 /dev/vdb
> meta-data=/dev/vdb isize=256 agcount=16, agsize=16777216 blks
> = sectsz=512 attr=2
> data = bsize=4096 blocks=268435456, imaxpct=5
> = sunit=0 swidth=0 blks
> naming =version 2 bsize=4096 ascii-ci=0
> log =internal log bsize=4096 blocks=131072, version=2
> = sectsz=512 sunit=0 blks, lazy-count=1
> realtime =none extsz=4096 blocks=0, rtextents=0
> # mount -o inode64,delaylog,logbsize=262144,nobarrier /dev/vdb /mnt/scratch
>
Unfortunately, I doubt I'll be able to reproduce this test. I don't have
access to a machine with enough processors or disk. I will try on 4p/4G
and 500M and see how that pans out.
> Performance prior to this patch was that each iteration resulted in
> ~65k files/s, with occasional peaks to 90k files/s, but frequent drops
> to 45k files/s when reclaim ran to reclaim the inode caches. This load
> ran permanently at 800% CPU usage.
>
> Every so often (maybe once or twice per 50M inode create run) all 8 CPUs
> would remain pegged but the create rate would drop to zero for a few
> seconds to a couple of minutes. That was the livelock issue I reported.
>
Should be easy to spot at least.
> With this patchset, I'm seeing a per-iteration average of ~77k
> files/s, with only a couple of iterations dropping down to ~55k
> files/s and a significant number above 90k/s. The runtime to 50M
> inodes is down by ~30% and the average CPU usage across the run is
> around 700%. IOWs, there is a significant gain in performance and
> a significant drop in CPU usage. I've done two runs to 50M inodes,
> and not seen any sign of a livelock, even for short periods of time.
>
Very cool.
> Ah, spoke too soon - I let the second run keep going, and at ~68M
> inodes it's just pegged all the CPUs and is pretty much completely
> wedged. Serial console is not responding, I can't get a new login,
> and the only thing responding that tells me the machine is alive is
> the remote PCP monitoring. It's been stuck for 5 minutes .... and
> now it is back. Here's what I saw:
>
> http://userweb.kernel.org/~dgc/shrinker-2.6.36/fs_mark-wedge-1.png
>
> The livelock is at the right of the charts, where the top chart is
> all red (system CPU time), and the other charts flat line to zero.
>
> And according to fsmark:
>
> 1 66400000 0 64554.2 7705926
> 1 67200000 0 64836.1 7573013
> <hang happened here>
> 2 68000000 0 69472.8 7941399
> 2 68800000 0 85017.5 7585203
>
> it didn't record any change in performance, which means the livelock
> probably occurred between iterations. I couldn't get any info on
> what caused the livelock this time so I can only assume it has the
> same cause....
>
Not sure where you could have gotten stuck. I thought it might have
locked up in congestion_wait() but it wouldn't have locked up this badly
if that was the case. Sluggish, sure, but not that dead.
I'll see about reproducing with your test tomorrow and see what I find.
Thanks.
> Still, given the improvements in performance from this patchset,
> I'd say inclusion is a no-brainer....
>
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
* Re: [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails
2010-09-03 23:00 ` Andrew Morton
2010-09-04 2:25 ` Dave Chinner
@ 2010-09-05 18:14 ` Mel Gorman
1 sibling, 0 replies; 82+ messages in thread
From: Mel Gorman @ 2010-09-05 18:14 UTC (permalink / raw)
To: Andrew Morton
Cc: Linux Kernel List, linux-mm, Rik van Riel, Johannes Weiner,
Minchan Kim, Christoph Lameter, KAMEZAWA Hiroyuki,
KOSAKI Motohiro, Dave Chinner, Wu Fengguang, David Rientjes
On Fri, Sep 03, 2010 at 04:00:26PM -0700, Andrew Morton wrote:
> On Fri, 3 Sep 2010 10:08:46 +0100
> Mel Gorman <mel@csn.ul.ie> wrote:
>
> > When under significant memory pressure, a process enters direct reclaim
> > and immediately afterwards tries to allocate a page. If it fails and no
> > further progress is made, it's possible the system will go OOM. However,
> > on systems with large amounts of memory, it's possible that a significant
> > number of pages are on per-cpu lists and inaccessible to the calling
> > process. This leads to a process entering direct reclaim more often than
> > it should, increasing the pressure on the system and compounding the problem.
> >
> > This patch notes that if direct reclaim is making progress but allocations
> > are still failing, the system is already under heavy pressure. In this case,
> > it drains the per-cpu lists and tries the allocation a second time before
> > continuing.
> >
> > Signed-off-by: Mel Gorman <mel@csn.ul.ie>
> > Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
> > Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> > Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
> > Reviewed-by: Christoph Lameter <cl@linux.com>
> > ---
> > mm/page_alloc.c | 20 ++++++++++++++++----
> > 1 files changed, 16 insertions(+), 4 deletions(-)
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index bbaa959..750e1dc 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -1847,6 +1847,7 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
> > struct page *page = NULL;
> > struct reclaim_state reclaim_state;
> > struct task_struct *p = current;
> > + bool drained = false;
> >
> > cond_resched();
> >
> > @@ -1865,14 +1866,25 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
> >
> > cond_resched();
> >
> > - if (order != 0)
> > - drain_all_pages();
> > + if (unlikely(!(*did_some_progress)))
> > + return NULL;
> >
> > - if (likely(*did_some_progress))
> > - page = get_page_from_freelist(gfp_mask, nodemask, order,
> > +retry:
> > + page = get_page_from_freelist(gfp_mask, nodemask, order,
> > zonelist, high_zoneidx,
> > alloc_flags, preferred_zone,
> > migratetype);
> > +
> > + /*
> > + * If an allocation failed after direct reclaim, it could be because
> > + * pages are pinned on the per-cpu lists. Drain them and try again
> > + */
> > + if (!page && !drained) {
> > + drain_all_pages();
> > + drained = true;
> > + goto retry;
> > + }
> > +
> > return page;
> > }
>
> The patch looks reasonable.
>
> But please take a look at the recent thread "mm: minute-long livelocks
> in memory reclaim". There, people are pointing fingers at that
> drain_all_pages() call, suspecting that it's causing huge IPI storms.
>
I'm aware of it.
> Dave was going to test this theory but afaik hasn't yet done so. It
> would be nice to tie these threads together if poss?
>
I was waiting to hear the results of the test. Certainly it seemed very
plausible that this patch would help it. I also have a hunch that the
congestion_wait() problems are cropping up. I have a revised patch
series that might close the rest of the problem.
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
* Re: [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails
2010-09-03 9:08 ` [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails Mel Gorman
2010-09-03 23:00 ` Andrew Morton
@ 2010-09-08 7:43 ` KOSAKI Motohiro
2010-09-08 20:05 ` Christoph Lameter
2010-09-09 12:41 ` Mel Gorman
1 sibling, 2 replies; 82+ messages in thread
From: KOSAKI Motohiro @ 2010-09-08 7:43 UTC (permalink / raw)
To: Mel Gorman
Cc: kosaki.motohiro, Andrew Morton, Linux Kernel List, linux-mm,
Rik van Riel, Johannes Weiner, Minchan Kim, Christoph Lameter,
KAMEZAWA Hiroyuki
> + /*
> + * If an allocation failed after direct reclaim, it could be because
> + * pages are pinned on the per-cpu lists. Drain them and try again
> + */
> + if (!page && !drained) {
> + drain_all_pages();
> + drained = true;
> + goto retry;
> + }
nit: with slub, get_page_from_freelist() failures happen more frequently
than with slab because slub tries to allocate high-order pages first.
So I guess we have to avoid drain_all_pages() if __GFP_NORETRY is passed.
From 9209ceb1d48446b031576ba9360036ddabc1a0e5 Mon Sep 17 00:00:00 2001
From: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Date: Fri, 10 Sep 2010 03:29:05 +0900
Subject: [PATCH] mm: don't call drain_all_pages() when __GFP_NORETRY
SLUB tries to allocate high-order pages first, so the page allocator
eventually calls drain_all_pages() frequently. We don't want IPI storms.
Thus, don't call drain_all_pages() when the caller passed __GFP_NORETRY.
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
---
mm/page_alloc.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8587c10..b9eafb1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1878,7 +1878,7 @@ retry:
* If an allocation failed after direct reclaim, it could be because
* pages are pinned on the per-cpu lists. Drain them and try again
*/
- if (!page && !drained) {
+ if (!page && !drained && !(gfp_mask & __GFP_NORETRY)) {
drain_all_pages();
drained = true;
goto retry;
--
1.6.5.2
* Re: [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails
2010-09-08 7:43 ` KOSAKI Motohiro
@ 2010-09-08 20:05 ` Christoph Lameter
2010-09-09 12:41 ` Mel Gorman
1 sibling, 0 replies; 82+ messages in thread
From: Christoph Lameter @ 2010-09-08 20:05 UTC (permalink / raw)
To: KOSAKI Motohiro
Cc: Mel Gorman, Andrew Morton, Linux Kernel List, linux-mm,
Rik van Riel, Johannes Weiner, Minchan Kim, KAMEZAWA Hiroyuki
On Wed, 8 Sep 2010, KOSAKI Motohiro wrote:
> nit: with slub, get_page_from_freelist() failures happen more frequently
> than with slab because slub tries to allocate high-order pages first.
> So I guess we have to avoid drain_all_pages() if __GFP_NORETRY is passed.
SLAB also tries to allocate higher order pages for many slabs but not as
high as SLUB (SLAB does not support fallback to order 0). SLAB also always
uses GFP_THISNODE (which includes __GFP_NORETRY).
Your patch will make SLAB's initial call to the page allocator fail more
frequently and therefore will increase the use of fallback_alloc().
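For reference, in kernels of this vintage GFP_THISNODE is, as far as I
recall, defined in include/linux/gfp.h as:
#ifdef CONFIG_NUMA
#define GFP_THISNODE	(__GFP_THISNODE | __GFP_NOWARN | __GFP_NORETRY)
#else
#define GFP_THISNODE	((__force gfp_t)0)
#endif
which is why a __GFP_NORETRY check would catch SLAB's initial allocations.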
* Re: [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails
2010-09-08 7:43 ` KOSAKI Motohiro
2010-09-08 20:05 ` Christoph Lameter
@ 2010-09-09 12:41 ` Mel Gorman
2010-09-09 13:45 ` Christoph Lameter
1 sibling, 1 reply; 82+ messages in thread
From: Mel Gorman @ 2010-09-09 12:41 UTC (permalink / raw)
To: KOSAKI Motohiro
Cc: Andrew Morton, Linux Kernel List, linux-mm, Rik van Riel,
Johannes Weiner, Minchan Kim, Christoph Lameter,
KAMEZAWA Hiroyuki
On Wed, Sep 08, 2010 at 04:43:03PM +0900, KOSAKI Motohiro wrote:
> > + /*
> > + * If an allocation failed after direct reclaim, it could be because
> > + * pages are pinned on the per-cpu lists. Drain them and try again
> > + */
> > + if (!page && !drained) {
> > + drain_all_pages();
> > + drained = true;
> > + goto retry;
> > + }
>
> nit: with slub, get_page_from_freelist() failures happen more frequently
> than with slab because slub tries to allocate high-order pages first.
> So I guess we have to avoid drain_all_pages() if __GFP_NORETRY is passed.
>
The old behaviour drained only for high-order allocations, which one would
assume did not have __GFP_NORETRY specified except in very rare cases. Still,
calling drain_all_pages() raises interrupt counts and I worried that large
machines might exhibit some livelock-like problem. I'm considering the
following patch; what do you think?
==== CUT HERE ====
mm: page allocator: Reduce the instances where drain_all_pages() is called
When a page allocation fails after direct reclaim, the per-cpu lists are
drained and another attempt is made to allocate. On larger systems,
this can cause IPI storms in low-memory situations, with latencies
increasing the more CPUs there are on the system. In extreme situations,
it is suspected it could cause livelock-like situations.
This patch restores the older behaviour of calling drain_all_pages() after
direct reclaim fails only for high-order allocations. As there is an
expectation that lower orders will free naturally, the drain only occurs
for order > PAGE_ALLOC_COSTLY_ORDER. The reasoning is that the allocation
is already expected to be very expensive and rare, so there will not be a
resulting IPI storm. Calls to drain_all_pages() are not eliminated entirely,
as it is still the case that an allocation can fail because the necessary
pages are pinned in the per-cpu lists. After this patch, the lists are only
drained as a last resort before calling the OOM killer.
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
---
mm/page_alloc.c | 23 ++++++++++++++++++++---
1 files changed, 20 insertions(+), 3 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 750e1dc..16f516c 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1737,6 +1737,7 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
int migratetype)
{
struct page *page;
+ bool drained = false;
/* Acquire the OOM killer lock for the zones in zonelist */
if (!try_set_zonelist_oom(zonelist, gfp_mask)) {
@@ -1744,6 +1745,7 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
return NULL;
}
+retry:
/*
* Go through the zonelist yet one more time, keep very high watermark
* here, this is only to catch a parallel oom killing, we must fail if
@@ -1773,6 +1775,18 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
if (gfp_mask & __GFP_THISNODE)
goto out;
}
+
+ /*
+ * If an allocation failed, it could be because pages are pinned on
+ * the per-cpu lists. Before resorting to the OOM killer, try
+ * draining
+ */
+ if (!drained) {
+ drain_all_pages();
+ drained = true;
+ goto retry;
+ }
+
/* Exhausted what can be done so it's blamo time */
out_of_memory(zonelist, gfp_mask, order, nodemask);
@@ -1876,10 +1890,13 @@ retry:
migratetype);
/*
- * If an allocation failed after direct reclaim, it could be because
- * pages are pinned on the per-cpu lists. Drain them and try again
+ * If a high-order allocation failed after direct reclaim, it could
+ * be because pages are pinned on the per-cpu lists. However, only
+ * do it for PAGE_ALLOC_COSTLY_ORDER as the cost of the IPI needed
+ * to drain the pages is itself high. Assume that lower orders
+ * will naturally free without draining.
*/
- if (!page && !drained) {
+ if (!page && !drained && order > PAGE_ALLOC_COSTLY_ORDER) {
drain_all_pages();
drained = true;
goto retry;
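(For reference, PAGE_ALLOC_COSTLY_ORDER is defined in include/linux/mmzone.h
as the order above which allocations are deemed costly to service:
#define PAGE_ALLOC_COSTLY_ORDER 3
so with this patch the drain-and-retry only fires for order 4 and larger.)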
* Re: [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails
2010-09-09 12:41 ` Mel Gorman
@ 2010-09-09 13:45 ` Christoph Lameter
2010-09-09 13:55 ` Mel Gorman
0 siblings, 1 reply; 82+ messages in thread
From: Christoph Lameter @ 2010-09-09 13:45 UTC (permalink / raw)
To: Mel Gorman
Cc: KOSAKI Motohiro, Andrew Morton, Linux Kernel List, linux-mm,
Rik van Riel, Johannes Weiner, Minchan Kim, KAMEZAWA Hiroyuki
On Thu, 9 Sep 2010, Mel Gorman wrote:
> @@ -1876,10 +1890,13 @@ retry:
> migratetype);
>
> /*
> - * If an allocation failed after direct reclaim, it could be because
> - * pages are pinned on the per-cpu lists. Drain them and try again
> + * If a high-order allocation failed after direct reclaim, it could
> + * be because pages are pinned on the per-cpu lists. However, only
> + * do it for PAGE_ALLOC_COSTLY_ORDER as the cost of the IPI needed
> + * to drain the pages is itself high. Assume that lower orders
> + * will naturally free without draining.
> */
> - if (!page && !drained) {
> + if (!page && !drained && order > PAGE_ALLOC_COSTLY_ORDER) {
> drain_all_pages();
> drained = true;
> goto retry;
>
This will have the effect of never sending IPIs for slab allocations since
they do not do allocations for orders > PAGE_ALLOC_COSTLY_ORDER.
* Re: [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails
2010-09-09 13:45 ` Christoph Lameter
@ 2010-09-09 13:55 ` Mel Gorman
2010-09-09 14:32 ` Christoph Lameter
0 siblings, 1 reply; 82+ messages in thread
From: Mel Gorman @ 2010-09-09 13:55 UTC (permalink / raw)
To: Christoph Lameter
Cc: KOSAKI Motohiro, Andrew Morton, Linux Kernel List, linux-mm,
Rik van Riel, Johannes Weiner, Minchan Kim, KAMEZAWA Hiroyuki
On Thu, Sep 09, 2010 at 08:45:16AM -0500, Christoph Lameter wrote:
> On Thu, 9 Sep 2010, Mel Gorman wrote:
>
> > @@ -1876,10 +1890,13 @@ retry:
> > migratetype);
> >
> > /*
> > - * If an allocation failed after direct reclaim, it could be because
> > - * pages are pinned on the per-cpu lists. Drain them and try again
> > + * If a high-order allocation failed after direct reclaim, it could
> > + * be because pages are pinned on the per-cpu lists. However, only
> > + * do it for PAGE_ALLOC_COSTLY_ORDER as the cost of the IPI needed
> > + * to drain the pages is itself high. Assume that lower orders
> > + * will naturally free without draining.
> > */
> > - if (!page && !drained) {
> > + if (!page && !drained && order > PAGE_ALLOC_COSTLY_ORDER) {
> > drain_all_pages();
> > drained = true;
> > goto retry;
> >
>
> This will have the effect of never sending IPIs for slab allocations since
> they do not do allocations for orders > PAGE_ALLOC_COSTLY_ORDER.
>
The question is how severe is that? There is somewhat of an expectation
that the lower orders free naturally, so is the IPI justified? That said,
our historical behaviour would have looked like
if (!page && !drained && order) {
drain_all_pages();
drained = true;
goto retry;
}
Play it safe for now and go with that?
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
* Re: [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails
2010-09-09 13:55 ` Mel Gorman
@ 2010-09-09 14:32 ` Christoph Lameter
2010-09-09 15:05 ` Mel Gorman
0 siblings, 1 reply; 82+ messages in thread
From: Christoph Lameter @ 2010-09-09 14:32 UTC (permalink / raw)
To: Mel Gorman
Cc: KOSAKI Motohiro, Andrew Morton, Linux Kernel List, linux-mm,
Rik van Riel, Johannes Weiner, Minchan Kim, KAMEZAWA Hiroyuki
On Thu, 9 Sep 2010, Mel Gorman wrote:
> > This will have the effect of never sending IPIs for slab allocations since
> > they do not do allocations for orders > PAGE_ALLOC_COSTLY_ORDER.
> >
>
> The question is how severe is that? There is somewhat of an expectation
> that the lower orders free naturally, so is the IPI justified? That said,
> our historical behaviour would have looked like
>
> if (!page && !drained && order) {
> drain_all_pages();
> drained = true;
> goto retry;
> }
>
> Play it safe for now and go with that?
I am fine with no IPIs for order <= COSTLY. Just be aware that this is
a change that may have some side effects. Let's run some tests and see
how it affects the issues that we are seeing.
* Re: [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails
2010-09-09 14:32 ` Christoph Lameter
@ 2010-09-09 15:05 ` Mel Gorman
2010-09-10 2:56 ` KOSAKI Motohiro
0 siblings, 1 reply; 82+ messages in thread
From: Mel Gorman @ 2010-09-09 15:05 UTC (permalink / raw)
To: Christoph Lameter
Cc: KOSAKI Motohiro, Andrew Morton, Linux Kernel List, linux-mm,
Rik van Riel, Johannes Weiner, Minchan Kim, KAMEZAWA Hiroyuki
On Thu, Sep 09, 2010 at 09:32:52AM -0500, Christoph Lameter wrote:
> On Thu, 9 Sep 2010, Mel Gorman wrote:
>
> > > This will have the effect of never sending IPIs for slab allocations since
> > > they do not do allocations for orders > PAGE_ALLOC_COSTLY_ORDER.
> > >
> >
> > The question is how severe is that? There is somewhat of an expectation
> > that the lower orders free naturally so it the IPI justified? That said,
> > our historical behaviour would have looked like
> >
> > if (!page && !drained && order) {
> > drain_all_pages();
> > draiained = true;
> > goto retry;
> > }
> >
> > Play it safe for now and go with that?
>
> I am fine with no IPIs for order <= COSTLY. Just be aware that this is
> a change that may have some side effects.
I made the choice consciously. I felt that if slab or slub were depending on
IPIs to make successful allocations in low-memory conditions, it would
experience varying stalls on bigger machines due to increased interrupts that
might be difficult to diagnose while not necessarily improving allocation
success rates. I also considered that if the machine is under pressure then
slab and slub may also be releasing pages of the same order and effectively
recycling their pages without depending on IPIs.
> Let's run some tests and see
> how it affects the issues that we are seeing.
>
Perfect, thanks.
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
* Re: [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails
2010-09-09 15:05 ` Mel Gorman
@ 2010-09-10 2:56 ` KOSAKI Motohiro
0 siblings, 0 replies; 82+ messages in thread
From: KOSAKI Motohiro @ 2010-09-10 2:56 UTC (permalink / raw)
To: Mel Gorman
Cc: kosaki.motohiro, Christoph Lameter, Andrew Morton,
Linux Kernel List, linux-mm, Rik van Riel, Johannes Weiner,
Minchan Kim, KAMEZAWA Hiroyuki
> On Thu, Sep 09, 2010 at 09:32:52AM -0500, Christoph Lameter wrote:
> > On Thu, 9 Sep 2010, Mel Gorman wrote:
> >
> > > > This will have the effect of never sending IPIs for slab allocations since
> > > > they do not do allocations for orders > PAGE_ALLOC_COSTLY_ORDER.
> > > >
> > >
> > > The question is how severe is that? There is somewhat of an expectation
> > > that the lower orders free naturally, so is the IPI justified? That said,
> > > our historical behaviour would have looked like
> > >
> > > if (!page && !drained && order) {
> > > drain_all_pages();
> > > drained = true;
> > > goto retry;
> > > }
> > >
> > > Play it safe for now and go with that?
> >
> > I am fine with no IPIs for order <= COSTLY. Just be aware that this is
> > a change that may have some side effects.
>
> I made the choice consciously. I felt that if slab or slub were depending on
> IPIs to make successful allocations in low-memory conditions, it would
> experience varying stalls on bigger machines due to increased interrupts that
> might be difficult to diagnose while not necessarily improving allocation
> success rates. I also considered that if the machine is under pressure then
> slab and slub may also be releasing pages of the same order and effectively
> recycling their pages without depending on IPIs.
+1.
These days, the average number of CPUs is increasing, so we need to be
more wary of IPI storms than in the past.
end of thread
Thread overview: 82+ messages
2010-08-16 9:42 [RFC PATCH 0/3] Reduce watermark-related problems with the per-cpu allocator Mel Gorman
2010-08-16 9:42 ` [PATCH 1/3] mm: page allocator: Update free page counters after pages are placed on the free list Mel Gorman
2010-08-16 14:04 ` Rik van Riel
2010-08-16 15:26 ` Johannes Weiner
2010-08-17 2:21 ` Minchan Kim
2010-08-17 9:59 ` Mel Gorman
2010-08-17 14:25 ` Minchan Kim
2010-08-18 2:21 ` KAMEZAWA Hiroyuki
2010-08-16 9:42 ` [PATCH 2/3] mm: page allocator: Calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake Mel Gorman
2010-08-16 9:43 ` Mel Gorman
2010-08-16 14:47 ` Rik van Riel
2010-08-16 16:06 ` Johannes Weiner
2010-08-17 2:26 ` Minchan Kim
2010-08-17 10:42 ` Mel Gorman
2010-08-17 15:01 ` Minchan Kim
2010-08-17 15:05 ` Mel Gorman
2010-08-17 10:16 ` Mel Gorman
2010-08-17 11:05 ` Johannes Weiner
2010-08-17 14:20 ` Minchan Kim
2010-08-18 8:51 ` Mel Gorman
2010-08-18 14:57 ` Minchan Kim
2010-08-19 8:06 ` Mel Gorman
2010-08-19 10:33 ` Minchan Kim
2010-08-19 10:38 ` Mel Gorman
2010-08-19 14:01 ` Minchan Kim
2010-08-19 14:09 ` Mel Gorman
2010-08-19 14:34 ` Minchan Kim
2010-08-19 15:07 ` Mel Gorman
2010-08-19 15:22 ` Minchan Kim
2010-08-19 15:40 ` Mel Gorman
2010-08-19 15:44 ` Minchan Kim
2010-08-19 15:46 ` Minchan Kim
2010-08-19 16:06 ` Mel Gorman
2010-08-19 16:45 ` Minchan Kim
2010-08-18 2:59 ` KAMEZAWA Hiroyuki
2010-08-18 15:55 ` Christoph Lameter
2010-08-19 0:07 ` KAMEZAWA Hiroyuki
2010-08-19 19:00 ` Christoph Lameter
2010-08-19 23:49 ` KAMEZAWA Hiroyuki
2010-08-20 0:22 ` [PATCH] vmstat : update zone stat threshold at onlining a cpu KAMEZAWA Hiroyuki
2010-08-20 14:54 ` Christoph Lameter
2010-08-20 17:29 ` Andrew Morton
2010-08-23 7:18 ` Mel Gorman
2010-08-16 9:42 ` [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails Mel Gorman
2010-08-16 14:50 ` Rik van Riel
2010-08-17 2:57 ` Minchan Kim
2010-08-18 3:02 ` KAMEZAWA Hiroyuki
2010-08-19 14:47 ` Minchan Kim
2010-08-19 15:10 ` Mel Gorman
-- strict thread matches above, loose matches on Subject: below --
2010-08-23 8:00 [PATCH 0/3] Reduce watermark-related problems with the per-cpu allocator V2 Mel Gorman
2010-08-23 8:00 ` [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails Mel Gorman
2010-08-23 23:17 ` KOSAKI Motohiro
2010-08-31 17:37 [PATCH 0/3] Reduce watermark-related problems with the per-cpu allocator V3 Mel Gorman
2010-08-31 17:37 ` [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails Mel Gorman
2010-08-31 18:26 ` Christoph Lameter
2010-09-03 9:08 [PATCH 0/3] Reduce watermark-related problems with the per-cpu allocator V4 Mel Gorman
2010-09-03 9:08 ` [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails Mel Gorman
2010-09-03 23:00 ` Andrew Morton
2010-09-04 2:25 ` Dave Chinner
2010-09-04 3:21 ` Andrew Morton
2010-09-04 7:58 ` Dave Chinner
2010-09-04 8:14 ` Dave Chinner
[not found] ` <20100905015400.GA10714@localhost>
[not found] ` <20100905021555.GG705@dastard>
[not found] ` <20100905060539.GA17450@localhost>
[not found] ` <20100905131447.GJ705@dastard>
2010-09-05 13:45 ` Wu Fengguang
2010-09-05 23:33 ` Dave Chinner
2010-09-06 4:02 ` Dave Chinner
2010-09-06 8:40 ` Mel Gorman
2010-09-06 21:50 ` Dave Chinner
2010-09-08 8:49 ` Dave Chinner
2010-09-09 12:39 ` Mel Gorman
2010-09-10 6:17 ` Dave Chinner
2010-09-07 14:23 ` Christoph Lameter
2010-09-08 2:13 ` Wu Fengguang
2010-09-04 3:23 ` Wu Fengguang
2010-09-04 3:59 ` Andrew Morton
2010-09-04 4:37 ` Wu Fengguang
2010-09-05 18:22 ` Mel Gorman
2010-09-05 18:14 ` Mel Gorman
2010-09-08 7:43 ` KOSAKI Motohiro
2010-09-08 20:05 ` Christoph Lameter
2010-09-09 12:41 ` Mel Gorman
2010-09-09 13:45 ` Christoph Lameter
2010-09-09 13:55 ` Mel Gorman
2010-09-09 14:32 ` Christoph Lameter
2010-09-09 15:05 ` Mel Gorman
2010-09-10 2:56 ` KOSAKI Motohiro