From mboxrd@z Thu Jan  1 00:00:00 1970
Subject: [RFC][PATCH 0/6] more detailed per-process transparent hugepage statistics
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Michael J Wolf, Andrea Arcangeli, Dave Hansen
From: Dave Hansen
Date: Mon, 31 Jan 2011 16:33:57 -0800
Message-Id: <20110201003357.D6F0BE0D@kernel>

I'm working on some more reports that transparent huge pages and KSM do
not play nicely together.  Basically, whenever THPs are present along
with KSM, there is a lot of attrition over time, and we do not see much
overall progress keeping THPs around:

	http://sr71.net/~dave/ibm/038_System_Anonymous_Pages.png

(That's Karl Rister's graph; thanks, Karl!)

However, I realized that we do not currently have a nice way to find out
where individual THPs might be on the system.  We have an overall count,
but no way of telling which processes or VMAs they might be in.

I started to implement this in the /proc/$pid/smaps code, but quickly
realized that the lib/pagewalk.c code unconditionally splits THPs up.
This set reworks that code a bit and, in the end, gives you a per-map
count of the number of huge pages.  It also makes it possible for page
walks to _not_ split THPs.