From: Michel Lespinasse
Subject: [PATCH 04/16] mm: rearrange vm_area_struct for fewer cache misses
Date: Mon, 5 Nov 2012 14:47:01 -0800
Message-Id: <1352155633-8648-5-git-send-email-walken@google.com>
In-Reply-To: <1352155633-8648-1-git-send-email-walken@google.com>
References: <1352155633-8648-1-git-send-email-walken@google.com>
To: Andrew Morton, Rik van Riel, Hugh Dickins, linux-kernel@vger.kernel.org,
	Russell King, Ralf Baechle, Paul Mundt, "David S. Miller",
	Chris Metcalf, x86@kernel.org, William Irwin
Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
	linux-mips@linux-mips.org, linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org, Rik van Riel

From: Rik van Riel

The kernel walks the VMA rbtree in various places, including the page
fault path. However, the vm_rb node spans two cache lines on 64-bit
systems with 64-byte cache lines (most x86 systems).

Rearrange vm_area_struct a little, so that all the information we need
to do a VMA tree walk is in the first cache line.

Signed-off-by: Michel Lespinasse
Signed-off-by: Rik van Riel
---
 include/linux/mm_types.h | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 94fa52b28ee8..528da4abf8ee 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -224,7 +224,8 @@ struct vm_region {
  * library, the executable area etc).
  */
 struct vm_area_struct {
-	struct mm_struct * vm_mm;	/* The address space we belong to. */
+	/* The first cache line has the info for VMA tree walking. */
+
 	unsigned long vm_start;		/* Our start address within vm_mm. */
 	unsigned long vm_end;		/* The first byte after our end address
 					   within vm_mm. */
@@ -232,9 +233,6 @@ struct vm_area_struct {
 	/* linked list of VM areas per task, sorted by address */
 	struct vm_area_struct *vm_next, *vm_prev;
 
-	pgprot_t vm_page_prot;		/* Access permissions of this VMA. */
-	unsigned long vm_flags;		/* Flags, see mm.h. */
-
 	struct rb_node vm_rb;
 
 	/*
@@ -245,6 +243,12 @@ struct vm_area_struct {
 	 */
 	unsigned long rb_subtree_gap;
 
+	/* Second cache line starts here. */
+
+	struct mm_struct * vm_mm;	/* The address space we belong to. */
+	pgprot_t vm_page_prot;		/* Access permissions of this VMA. */
+	unsigned long vm_flags;		/* Flags, see mm.h. */
+
 	/*
 	 * For areas with an address space and backing store,
 	 * linkage into the address_space->i_mmap interval tree, or
-- 
1.7.7.3
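
As a sanity check on the layout arithmetic, here is a minimal standalone
sketch. It is not part of the patch: the simplified rb_node, the rb_entry()
macro, and find_vma_sketch() are stand-ins modeled on the kernel's versions,
and the struct is trimmed to the fields that matter here. It asserts at
compile time that the tree-walk fields fit in the first 64 bytes on an LP64
build, and shows a find_vma()-style descent touching only those fields:

#include <assert.h>
#include <stddef.h>
#include <stdio.h>

/* Simplified stand-in for the kernel's struct rb_node (24 bytes on LP64). */
struct rb_node {
	unsigned long __rb_parent_color;
	struct rb_node *rb_right;
	struct rb_node *rb_left;
};

/* container_of()-style accessor, as the kernel's rb_entry() behaves. */
#define rb_entry(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Model of the rearranged layout; the remaining fields are elided. */
struct vm_area_struct {
	/* The first cache line has the info for VMA tree walking. */
	unsigned long vm_start;
	unsigned long vm_end;
	struct vm_area_struct *vm_next, *vm_prev;
	struct rb_node vm_rb;
	unsigned long rb_subtree_gap;

	/* Second cache line starts here. */
	struct mm_struct *vm_mm;
	/* ... */
};

/*
 * On LP64 the tree-walk fields add up to exactly 64 bytes:
 * 4 * 8 (vm_start, vm_end, vm_next, vm_prev) + 24 (vm_rb) + 8
 * (rb_subtree_gap). Fail the build if they ever spill over.
 */
static_assert(offsetof(struct vm_area_struct, rb_subtree_gap) +
	      sizeof(unsigned long) <= 64,
	      "VMA tree-walk fields spill past the first cache line");

/*
 * find_vma()-style descent: every field dereferenced here (vm_rb,
 * vm_start, vm_end) now lives in each VMA's first cache line, so a
 * visited node costs at most one miss instead of two.
 */
static struct vm_area_struct *find_vma_sketch(struct rb_node *node,
					      unsigned long addr)
{
	struct vm_area_struct *vma = NULL;

	while (node) {
		struct vm_area_struct *tmp =
			rb_entry(node, struct vm_area_struct, vm_rb);

		if (tmp->vm_end > addr) {
			vma = tmp;
			if (tmp->vm_start <= addr)
				break;
			node = node->rb_left;
		} else {
			node = node->rb_right;
		}
	}
	return vma;
}

int main(void)
{
	/* A one-node "tree" covering [0x1000, 0x2000). */
	struct vm_area_struct vma = { .vm_start = 0x1000, .vm_end = 0x2000 };
	struct vm_area_struct *found = find_vma_sketch(&vma.vm_rb, 0x1800);

	printf("found [%lx, %lx)\n", found->vm_start, found->vm_end);
	return 0;
}

Building with gcc -std=c11 -Wall succeeds only while the assertion holds,
which is the point of the rearrangement: keeping vm_start/vm_end next to
vm_rb means each node visited during the descent can be served from a
single cache line.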