From: Nikola Ciprich <nikola.ciprich@linuxbox.cz>
To: gregkh@linuxfoundation.org
Cc: mel@csn.ul.ie, stable@vger.kernel.org,
linux-kernel mlist <linux-kernel@vger.kernel.org>
Subject: Re: Patch "mm: vmscan: forcibly scan highmem if there are too many buffer_heads pinning highmem" has been added to the 3.3-stable tree
Date: Thu, 29 Mar 2012 07:36:39 +0200
Message-ID: <20120329053639.GA7603@pcnci2.linuxbox.cz>
In-Reply-To: <13325234973644@kroah.org>
Hi,
I'm not 100% sure, but I think this one could go to 3.0.x as well, am I right?
If so, could I try to provide a backport? (It doesn't apply cleanly.)
Mel, would you care to review it then? Or do you plan to send your own backport?
cheers!
nik
On Fri, Mar 23, 2012 at 10:24:57AM -0700, gregkh@linuxfoundation.org wrote:
>
> This is a note to let you know that I've just added the patch titled
>
> mm: vmscan: forcibly scan highmem if there are too many buffer_heads pinning highmem
>
> to the 3.3-stable tree which can be found at:
> http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
>
> The filename of the patch is:
> mm-vmscan-forcibly-scan-highmem-if-there-are-too-many-buffer_heads-pinning-highmem.patch
> and it can be found in the queue-3.3 subdirectory.
>
> If you, or anyone else, feels it should not be added to the stable tree,
> please let <stable@vger.kernel.org> know about it.
>
>
> From cc715d99e529d470dde2f33a6614f255adea71f3 Mon Sep 17 00:00:00 2001
> From: Mel Gorman <mel@csn.ul.ie>
> Date: Wed, 21 Mar 2012 16:34:00 -0700
> Subject: mm: vmscan: forcibly scan highmem if there are too many buffer_heads pinning highmem
>
> From: Mel Gorman <mel@csn.ul.ie>
>
> commit cc715d99e529d470dde2f33a6614f255adea71f3 upstream.
>
> Stuart Foster reported on bugzilla that copying large amounts of data
> from NTFS caused an OOM kill on 32-bit x86 with 16G of memory. Andrew
> Morton correctly identified that the problem was that NTFS was using
> 512-byte blocks, meaning each page had 8 buffer_heads in low memory
> pinning it.
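
A rough back-of-the-envelope sketch in C of the lowmem pressure described
above. The numbers are assumptions, not taken from the report: a 4 KiB page
size, ~15 GiB of the 16 GiB sitting in highmem, roughly 56 bytes per
struct buffer_head, and ~896 MiB of i386 lowmem (the real figures depend on
kernel version and config):

#include <stdio.h>

int main(void)
{
	/* Assumed values; see the note above. */
	const unsigned long page_size   = 4096;  /* i386 PAGE_SIZE */
	const unsigned long block_size  = 512;   /* NTFS block size from the report */
	const unsigned long bh_per_page = page_size / block_size;  /* = 8 */

	const unsigned long long highmem_bytes = 15ULL << 30;  /* ~15 GiB of 16 GiB */
	const unsigned long long highmem_pages = highmem_bytes / page_size;

	const unsigned long bh_size = 56;  /* assumed sizeof(struct buffer_head) */
	const unsigned long long pinned = highmem_pages * bh_per_page * bh_size;

	printf("buffer_heads per page: %lu\n", bh_per_page);
	printf("lowmem pinned if all highmem pagecache has buffers: ~%llu MiB\n",
	       pinned >> 20);
	return 0;
}

With those assumptions the buffer_heads alone pin about 1.6 GiB, well past
the ~896 MiB of directly mapped lowmem, which matches the OOM behaviour
reported.
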
>
> In the past, direct reclaim used to scan highmem even if the allocating
> process did not specify __GFP_HIGHMEM, but it no longer does. Similarly,
> kswapd will no longer reclaim from zones that are above the high
> watermark. The intention in both cases was to minimise unnecessary
> reclaim. The downside is that, on machines with large amounts of highmem,
> lowmem can be fully consumed by buffer_heads with nothing trying to free
> them.
>
> The following patch is based on a suggestion by Andrew Morton to extend
> the buffer_heads_over_limit case to force kswapd and direct reclaim to
> scan the highmem zone regardless of the allocation request or watermarks.
>
> Addresses https://bugzilla.kernel.org/show_bug.cgi?id=42578
>
> [hughd@google.com: move buffer_heads_over_limit check up]
> [akpm@linux-foundation.org: buffer_heads_over_limit is unlikely]
> Reported-by: Stuart Foster <smf.linux@ntlworld.com>
> Tested-by: Stuart Foster <smf.linux@ntlworld.com>
> Signed-off-by: Mel Gorman <mgorman@suse.de>
> Signed-off-by: Hugh Dickins <hughd@google.com>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Rik van Riel <riel@redhat.com>
> Cc: Christoph Lameter <cl@linux.com>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
>
>
> ---
> mm/vmscan.c | 42 +++++++++++++++++++++++++++++-------------
> 1 file changed, 29 insertions(+), 13 deletions(-)
>
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1643,18 +1643,6 @@ static void move_active_pages_to_lru(str
> unsigned long pgmoved = 0;
> struct page *page;
>
> - if (buffer_heads_over_limit) {
> - spin_unlock_irq(&zone->lru_lock);
> - list_for_each_entry(page, list, lru) {
> - if (page_has_private(page) && trylock_page(page)) {
> - if (page_has_private(page))
> - try_to_release_page(page, 0);
> - unlock_page(page);
> - }
> - }
> - spin_lock_irq(&zone->lru_lock);
> - }
> -
> while (!list_empty(list)) {
> struct lruvec *lruvec;
>
> @@ -1737,6 +1725,14 @@ static void shrink_active_list(unsigned
> continue;
> }
>
> + if (unlikely(buffer_heads_over_limit)) {
> + if (page_has_private(page) && trylock_page(page)) {
> + if (page_has_private(page))
> + try_to_release_page(page, 0);
> + unlock_page(page);
> + }
> + }
> +
> if (page_referenced(page, 0, mz->mem_cgroup, &vm_flags)) {
> nr_rotated += hpage_nr_pages(page);
> /*
> @@ -2235,6 +2231,14 @@ static bool shrink_zones(int priority, s
> unsigned long nr_soft_scanned;
> bool aborted_reclaim = false;
>
> + /*
> + * If the number of buffer_heads in the machine exceeds the maximum
> + * allowed level, force direct reclaim to scan the highmem zone as
> + * highmem pages could be pinning lowmem pages storing buffer_heads
> + */
> + if (buffer_heads_over_limit)
> + sc->gfp_mask |= __GFP_HIGHMEM;
> +
> for_each_zone_zonelist_nodemask(zone, z, zonelist,
> gfp_zone(sc->gfp_mask), sc->nodemask) {
> if (!populated_zone(zone))
> @@ -2724,6 +2728,17 @@ loop_again:
> */
> age_active_anon(zone, &sc, priority);
>
> + /*
> + * If the number of buffer_heads in the machine
> + * exceeds the maximum allowed level and this node
> + * has a highmem zone, force kswapd to reclaim from
> + * it to relieve lowmem pressure.
> + */
> + if (buffer_heads_over_limit && is_highmem_idx(i)) {
> + end_zone = i;
> + break;
> + }
> +
> if (!zone_watermark_ok_safe(zone, order,
> high_wmark_pages(zone), 0, 0)) {
> end_zone = i;
> @@ -2786,7 +2801,8 @@ loop_again:
> (zone->present_pages +
> KSWAPD_ZONE_BALANCE_GAP_RATIO-1) /
> KSWAPD_ZONE_BALANCE_GAP_RATIO);
> - if (!zone_watermark_ok_safe(zone, order,
> + if ((buffer_heads_over_limit && is_highmem_idx(i)) ||
> + !zone_watermark_ok_safe(zone, order,
> high_wmark_pages(zone) + balance_gap,
> end_zone, 0)) {
> shrink_zone(priority, zone, &sc);
>
>
> Patches currently in stable-queue which might be from mel@csn.ul.ie are
>
> queue-3.3/mm-vmscan-forcibly-scan-highmem-if-there-are-too-many-buffer_heads-pinning-highmem.patch
--
-------------------------------------
Ing. Nikola CIPRICH
LinuxBox.cz, s.r.o.
28. rijna 168, 709 01 Ostrava
tel.: +420 596 603 142
fax: +420 596 621 273
mobil: +420 777 093 799
www.linuxbox.cz
mobil servis: +420 737 238 656
email servis: servis@linuxbox.cz
-------------------------------------