Date: Thu, 4 Aug 2016 13:24:09 +0100
From: Mel Gorman
To: Dave Chinner
Cc: linux-kernel@vger.kernel.org
Subject: Re: [bug, 4.8] /proc/meminfo: counter values are very wrong
Message-ID: <20160804122409.GK2799@techsingularity.net>
References: <20160804051051.GS12670@dastard>
In-Reply-To: <20160804051051.GS12670@dastard>

On Thu, Aug 04, 2016 at 03:10:51PM +1000, Dave Chinner wrote:
> Hi folks,
>
> I just noticed a whacky memory usage profile when running some basic
> IO tests on a current 4.8 tree. It looked like there was a massive
> memory leak from my monitoring graphs - doing buffered IO was
> causing huge amounts of memory to be considered used, but the cache
> size was not increasing.
>
> Looking at /proc/meminfo:
>
> $ cat /proc/meminfo
> MemTotal:       16395408 kB
> MemFree:           79424 kB
> MemAvailable:    2497240 kB
> Buffers:            4372 kB
> Cached:           558744 kB
> SwapCached:           48 kB
> Active:          2127212 kB
> Inactive:         100400 kB
> Active(anon):      25348 kB
> Inactive(anon):    79424 kB
> Active(file):    2101864 kB
> Inactive(file):    20976 kB
> Unevictable:    13612980 kB     <<<<<<<<< This?
Very quickly done, no boot testing

diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
index 09e18fdf61e5..b9a8c813e5e6 100644
--- a/fs/proc/meminfo.c
+++ b/fs/proc/meminfo.c
@@ -46,7 +46,7 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
 		cached = 0;

 	for (lru = LRU_BASE; lru < NR_LRU_LISTS; lru++)
-		pages[lru] = global_page_state(NR_LRU_BASE + lru);
+		pages[lru] = global_node_page_state(NR_LRU_BASE + lru);

 	available = si_mem_available();