From: Rik van Riel <riel@redhat.com>
To: Daniel Walker <danielwa@cisco.com>,
Dave Hansen <dave.hansen@intel.com>,
"Khalid Mughal (khalidm)" <khalidm@cisco.com>,
Johannes Weiner <hannes@cmpxchg.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>,
Michal Hocko <mhocko@suse.com>,
Andrew Morton <akpm@linux-foundation.org>,
"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
"xe-kernel@external.cisco.com" <xe-kernel@external.cisco.com>
Subject: Re: computing drop-able caches
Date: Thu, 11 Feb 2016 17:11:59 -0500
Message-ID: <1455228719.15821.18.camel@redhat.com>
In-Reply-To: <56BB8B5E.0@cisco.com>
On Wed, 2016-02-10 at 11:11 -0800, Daniel Walker wrote:
> On 02/10/2016 10:13 AM, Dave Hansen wrote:
> > On 02/10/2016 10:04 AM, Daniel Walker wrote:
> > > > [Linux_0:/]$ echo 3 > /proc/sys/vm/drop_caches
> > > > [Linux_0:/]$ cat /proc/meminfo
> > > > MemTotal: 3977836 kB
> > > > MemFree: 1095012 kB
> > > > MemAvailable: 1434148 kB
> > > I suspect MemAvailable takes into account more than just the
> > > droppable caches. For instance, reclaimable slab is included,
> > > but I don't think drop_caches drops that part.
> > There's a bit for page cache and a bit for slab, see:
> >
> > https://kernel.org/doc/Documentation/sysctl/vm.txt
>
> Ok, then this looks like a defect. I would think MemAvailable would
> always be smaller than MemFree (after echo 3 >
> /proc/sys/vm/drop_caches), unless there is something else being
> accounted for that we aren't aware of.
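As background: MemAvailable is an estimate, not a count of pages that
drop_caches will free. Roughly (per Documentation/filesystems/proc.txt,
simplifying the actual calculation):

  MemAvailable ~= MemFree - low watermarks
                + file page cache - min(pagecache/2, low watermarks)
                + SReclaimable    - min(SReclaimable/2, low watermarks)

so it counts reclaimable memory that drop_caches never touches.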
echo 3 > /proc/sys/vm/drop_caches will only drop unmapped page cache,
IIRC.
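For reference, the values vm.txt documents (a quick sketch, assuming a
root shell; dirty pages are not freeable, hence the sync first):

[Linux_0:/]$ sync                                # write back dirty pages
[Linux_0:/]$ echo 1 > /proc/sys/vm/drop_caches   # free page cache
[Linux_0:/]$ echo 2 > /proc/sys/vm/drop_caches   # free reclaimable slab (dentries, inodes)
[Linux_0:/]$ echo 3 > /proc/sys/vm/drop_caches   # free both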
The system may still have a number of page cache pages left that are
mapped into processes, but which will be reclaimable if the VM needs
the memory for something else.
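A rough way to gauge from userspace how much cache is actually
droppable is to subtract the mapped page cache and add back the
reclaimable slab. A sketch against /proc/meminfo (only an
approximation: for instance, tmpfs/Shmem pages are counted in Cached
but cannot be dropped, so this overestimates on systems using a lot
of tmpfs):

[Linux_0:/]$ awk '/^Cached:/       { cached = $2 }   # file page cache, kB
                  /^Mapped:/       { mapped = $2 }   # page cache mapped into processes
                  /^SReclaimable:/ { slab   = $2 }   # reclaimable slab (dentries, inodes)
                  END { print "droppable estimate:", cached - mapped + slab, "kB" }' /proc/meminfo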
--
All rights reversed