linux-mm.kvack.org archive mirror
From: Vlastimil Babka <vbabka@suse.cz>
To: baotiao <baotiao@gmail.com>
Cc: linux-mm@kvack.org, Dave Chinner <david@fromorbit.com>
Subject: Re: Why does kmalloc() fail when there is free physical memory, but succeed after dropping page caches?
Date: Wed, 18 May 2016 10:45:42 +0200	[thread overview]
Message-ID: <573C2BB6.6070801@suse.cz> (raw)
In-Reply-To: <D64A3952-53D8-4B9D-98A1-C99D7E231D42@gmail.com>

[+CC Dave]

On 05/18/2016 04:38 AM, baotiao wrote:
> Hello everyone, I have run into an interesting kernel memory problem. Can
> anyone help me explain what is happening in the kernel?

Which kernel version is that?

> The machine's status is described below:
>
> The machine has 96G of physical memory. About 64G of it is in real use and
> the page cache takes about 32G. We also use swap; at that time about 10G of
> swap was in use (we set the swap size to 32G). At that moment, XFS reported:
>
>   Apr 29 21:54:31 w-openstack86 kernel: XFS: possible memory allocation
>   deadlock in kmem_alloc (mode:0x250)

Just once, or many times?
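
As an aside, assuming the gfp flag values of kernels from that era,
mode:0x250 decodes to GFP_NOFS | __GFP_NOWARN: the allocation may sleep and
start I/O, but reclaim from this context is not allowed to recurse into
filesystem code. A tiny standalone sketch of that arithmetic (the numeric
values are assumptions copied from pre-4.4 include/linux/gfp.h, so please
check them against your exact kernel):

  #include <stdio.h>

  /* Assumed flag values, taken from pre-4.4 include/linux/gfp.h: */
  #define ___GFP_WAIT    0x10u   /* may sleep / enter direct reclaim    */
  #define ___GFP_IO      0x40u   /* may start physical I/O              */
  #define ___GFP_NOWARN  0x200u  /* suppress allocation-failure warning */

  /* GFP_NOFS is __GFP_WAIT | __GFP_IO: no __GFP_FS, so reclaim must not
   * re-enter filesystem code (which could deadlock against XFS itself). */
  #define GFP_NOFS      (___GFP_WAIT | ___GFP_IO)

  int main(void)
  {
          printf("mode: 0x%x\n", GFP_NOFS | ___GFP_NOWARN); /* 0x250 */
          return 0;
  }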

> After reading the source code, I found that this message is printed by
> these lines (in kmem_alloc(), fs/xfs/kmem.c):
>
>   ptr = kmalloc(size, lflags);
>   if (ptr || (flags & (KM_MAYFAIL|KM_NOSLEEP)))
>           return ptr;
>   if (!(++retries % 100))
>           xfs_err(NULL,
>                   "possible memory allocation deadlock in %s (mode:0x%x)",
>                   __func__, lflags);
>   congestion_wait(BLK_RW_ASYNC, HZ/50);

Any indication of what allocation size is used here?
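
For context, those lines sit inside an open-ended retry loop: unless the
caller passed KM_MAYFAIL or KM_NOSLEEP, kmem_alloc() never gives up, and it
prints the warning once every 100 attempts. Roughly, from the XFS sources of
that era (a reconstruction from memory, so minor details may differ in your
exact kernel version):

  void *
  kmem_alloc(size_t size, xfs_km_flags_t flags)
  {
          int     retries = 0;
          gfp_t   lflags = kmem_flags_convert(flags);
          void    *ptr;

          do {
                  ptr = kmalloc(size, lflags);
                  if (ptr || (flags & (KM_MAYFAIL|KM_NOSLEEP)))
                          return ptr;
                  if (!(++retries % 100))
                          xfs_err(NULL,
                                  "possible memory allocation deadlock in %s (mode:0x%x)",
                                  __func__, lflags);
                  /* back off briefly and try again, forever */
                  congestion_wait(BLK_RW_ASYNC, HZ/50);
          } while (1);
  }

So the message repeating is expected for as long as the allocation keeps
failing; the interesting question is why it kept failing.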

> The error is caused by the kmalloc() call failing because there is not
> enough memory in the system. But there is still 32G of page cache.
>
> So I ran
>
>   echo 3 > /proc/sys/vm/drop_caches
>
> to drop the page cache.
>
> Then the system was fine.

Are you saying that the error message was repeated infinitely until you 
did the drop_caches?

> But I really don't understand the reason. Why does kmalloc() succeed after
> I run the drop_caches operation? Even if the whole of physical memory
> appears to be in use, only 64G is real memory usage and the other 32G is
> page cache, and we also have enough swap space. So why doesn't the kernel
> reclaim the page cache, or swap something out, to satisfy the kmalloc()
> allocation?
>
>
> ----------------------------------------
> Github: https://github.com/baotiao
> Blog: http://baotiao.github.io/
> Stackoverflow: http://stackoverflow.com/users/634415/baotiao
> Linkedin: http://www.linkedin.com/profile/view?id=145231990
>

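If this happens again, it would also help to capture /proc/buddyinfo while
the messages are firing: for a kmalloc() larger than one page, what matters
is whether free blocks of the required order are still available, not the
total amount of free memory, and dropping the page cache can free up
higher-order blocks as a side effect. Reading the file is enough; a minimal
userspace sketch (nothing XFS-specific, just dumping the per-order free
counts):

  #include <stdio.h>

  /*
   * /proc/buddyinfo lists, per memory zone, how many free blocks of each
   * order (order 0 = one 4K page, order 1 = 8K, ...) the page allocator
   * has.  If the higher-order columns are all zero, a physically
   * contiguous allocation larger than a page can fail even though
   * MemFree in /proc/meminfo looks healthy.
   */
  int main(void)
  {
          FILE *f = fopen("/proc/buddyinfo", "r");
          char line[512];

          if (!f) {
                  perror("/proc/buddyinfo");
                  return 1;
          }
          while (fgets(line, sizeof(line), f))
                  fputs(line, stdout);
          fclose(f);
          return 0;
  }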


Thread overview: 5+ messages
2016-05-18  2:38 Why does kmalloc() fail when there is free physical memory, but succeed after dropping page caches? baotiao
2016-05-18  8:45 ` Vlastimil Babka [this message]
2016-05-18  8:58   ` baotiao
2016-05-18 14:41     ` Dave Chinner
2016-05-25  9:25       ` 陈宗志
