From: Daniel Walker <danielwa@cisco.com>
To: Alexander Viro <viro@zeniv.linux.org.uk>,
Johannes Weiner <hannes@cmpxchg.org>,
Michal Hocko <mhocko@suse.com>,
Andrew Morton <akpm@linux-foundation.org>,
linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-mm@kvack.org, "Khalid Mughal (khalidm)" <khalidm@cisco.com>,
"xe-kernel@external.cisco.com" <xe-kernel@external.cisco.com>
Subject: computing drop-able caches
Date: Thu, 28 Jan 2016 15:42:53 -0800
Message-ID: <56AAA77D.7090000@cisco.com>
Hi,
My colleague Khalid and I are working on a patch that will provide a
/proc file to report the size of the droppable page cache.
One way to implement this is to reuse the current drop_caches /proc
routine, but instead of actually dropping the caches, just add up the
amount that could be dropped.
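Roughly, a minimal (untested) sketch of that idea -- the function names
are invented here just for illustration, nrpages over-counts since
dirty, locked, and mapped pages are not really droppable, and the
dentry/inode slab side is left out entirely:

#include <linux/fs.h>
#include <linux/spinlock.h>

/*
 * Untested sketch: the same superblock/inode walk that
 * drop_pagecache_sb() does in fs/drop_caches.c, but summing page
 * counts instead of calling invalidate_mapping_pages().
 */
static void count_pagecache_sb(struct super_block *sb, void *arg)
{
	unsigned long *total = arg;
	struct inode *inode;

	spin_lock(&sb->s_inode_list_lock);
	list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
		spin_lock(&inode->i_lock);
		if (!(inode->i_state & (I_FREEING | I_WILL_FREE | I_NEW)))
			*total += inode->i_mapping->nrpages;
		spin_unlock(&inode->i_lock);
	}
	spin_unlock(&sb->s_inode_list_lock);
}

static unsigned long count_droppable_pagecache(void)
{
	unsigned long total = 0;

	iterate_supers(count_pagecache_sb, &total);
	return total;	/* in pages */
}

A read-only /proc or sysctl handler would then simply report that total
instead of invalidating anything.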
Here's a quote from Khalid:
"Currently there is no way to figure out the droppable pagecache size
from the meminfo output. The MemFree size can shrink during normal
system operation, when some of the memory pages get cached and is
reflected in "Cached" field. Similarly for file operations some of
the buffer memory gets cached and it is reflected in "Buffers" field.
The kernel automatically reclaims all this cached & buffered memory,
when it is needed elsewhere on the system. The only way to manually
reclaim this memory is by writing 1 to /proc/sys/vm/drop_caches. "
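For reference, that manual reclaim is just a sync followed by writing
"1" to /proc/sys/vm/drop_caches; a trivial C equivalent (needs root):

/* Illustration only: drop clean pagecache by hand. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	FILE *f;

	sync();	/* write back dirty data first so more pagecache is clean */
	f = fopen("/proc/sys/vm/drop_caches", "w");
	if (!f) {
		perror("/proc/sys/vm/drop_caches");
		return 1;
	}
	fputs("1\n", f);	/* 1 = pagecache, 2 = slab, 3 = both */
	return fclose(f) ? 1 : 0;
}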
So my impression is that the droppable cache is spread across two
fields in meminfo, "Buffers" and "Cached".
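To make that concrete, here is the kind of rough userspace estimate
that meminfo alone allows today; it is only an upper bound, since
"Cached" also includes shmem/tmpfs pages and mapped file pages that
cannot simply be dropped:

/* Illustration only: sum Buffers + Cached from /proc/meminfo. */
#include <stdio.h>

int main(void)
{
	char line[256];
	unsigned long kb, buffers = 0, cached = 0;
	FILE *f = fopen("/proc/meminfo", "r");

	if (!f) {
		perror("/proc/meminfo");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		if (sscanf(line, "Buffers: %lu kB", &kb) == 1)
			buffers = kb;
		else if (sscanf(line, "Cached: %lu kB", &kb) == 1)
			cached = kb;
	}
	fclose(f);
	printf("rough droppable estimate: %lu kB\n", buffers + cached);
	return 0;
}

A sanity check would be to compare that number against how much MemFree
actually grows after writing 1 to drop_caches.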
Alright, the question is: does this info already live someplace else
that we don't know about? Or is there someplace in the kernel where it
could be added to meminfo trivially?
The point of the whole exercise is to get a better idea of free memory
for our employer. Does this approach make sense for computing free
memory?
Any comments welcome.
Daniel
Thread overview: 21+ messages
2016-01-28 23:42 Daniel Walker [this message]
2016-01-28 23:58 ` computing drop-able caches Johannes Weiner
2016-01-29 1:03 ` Daniel Walker
2016-01-29 1:29 ` Daniel Walker
2016-01-29 1:55 ` Johannes Weiner
2016-01-29 21:21 ` Daniel Walker
2016-01-29 22:33 ` Johannes Weiner
2016-01-29 22:41 ` Rik van Riel
2016-02-08 20:57 ` Khalid Mughal (khalidm)
2016-02-10 18:04 ` Daniel Walker
2016-02-10 18:13 ` Dave Hansen
2016-02-10 19:11 ` Daniel Walker
2016-02-11 22:11 ` Rik van Riel
2016-02-12 18:01 ` Khalid Mughal (khalidm)
2016-02-12 21:46 ` Dave Hansen
2016-02-12 22:15 ` Johannes Weiner
2016-02-12 18:06 ` Dave Hansen
2016-02-12 18:15 ` Daniel Walker
2016-02-12 18:18 ` Dave Hansen
2016-02-12 18:25 ` Daniel Walker
2016-02-12 20:15 ` Daniel Walker