From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Liuweni"
Subject: [PATCH 2/3] fs/inode: iunique() Optimize Performance
Date: Wed, 25 Nov 2009 22:12:19 +0800
Message-ID: <200911252212166092236@gmail.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Cc: "viro", "akpm", "jack", "npiggin", "linux-fsdevel", "linux-kernel", "strongzgy", "xgr178", "Liu Hui"
To: "linux-kernel"
Return-path:
Sender: linux-kernel-owner@vger.kernel.org
List-Id: linux-fsdevel.vger.kernel.org

---
Change log:
Change the hash operation from a division to a shift, which costs less time.
Accordingly, change the divisor from L1_CACHE_BYTES to the shift count
L1_CACHE_SHIFT: in cache.h, L1_CACHE_BYTES is in most cases defined as
"(1 << L1_CACHE_SHIFT)".
---

Signed-off-by: Liuwenyi
Cc: Alexander Viro
Cc: Andrew Morton
Cc: Jan Kara
Cc: Nick Piggin
Cc: linux-fsdevel@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
---
diff --git a/fs/inode.c b/fs/inode.c
index 4d8e3be..397d65f 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -605,8 +605,8 @@ static unsigned long hash(struct super_block *sb, unsigned long hashval)
 {
 	unsigned long tmp;
 
-	tmp = (hashval * (unsigned long)sb) ^ (GOLDEN_RATIO_PRIME + hashval) /
-			L1_CACHE_BYTES;
+	tmp = (hashval * (unsigned long)sb) ^ (GOLDEN_RATIO_PRIME + hashval) >>
+			L1_CACHE_SHIFT;
 	tmp = tmp ^ ((tmp ^ GOLDEN_RATIO_PRIME) >> I_HASHBITS);
 	return tmp & I_HASHMASK;
 }

--------------
Best Regards,
Liuweni
2009-11-25
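
For readers who want to check the claimed equivalence, below is a minimal
user-space sketch (not part of the patch) showing that, for an unsigned
operand, dividing by L1_CACHE_BYTES == (1 << L1_CACHE_SHIFT) gives the same
result as shifting right by L1_CACHE_SHIFT. DEMO_L1_CACHE_SHIFT and the test
value are assumptions for the demo only; the real constants come from
<asm/cache.h>.

	/*
	 * Sketch: division by a power of two vs. right shift.
	 * DEMO_L1_CACHE_SHIFT is an assumed placeholder (often 5..7 on
	 * real architectures).
	 */
	#include <assert.h>
	#include <stdio.h>

	#define DEMO_L1_CACHE_SHIFT	6
	#define DEMO_L1_CACHE_BYTES	(1 << DEMO_L1_CACHE_SHIFT)

	int main(void)
	{
		unsigned long hashval = 0x9e370001UL;	/* arbitrary test value */

		unsigned long by_div   = hashval / DEMO_L1_CACHE_BYTES;
		unsigned long by_shift = hashval >> DEMO_L1_CACHE_SHIFT;

		/* identical for unsigned operands */
		assert(by_div == by_shift);
		printf("div=%lu shift=%lu\n", by_div, by_shift);
		return 0;
	}

Note also that the grouping of the hash expression is unchanged by the patch:
both "/" and ">>" bind more tightly than "^" in C, so the divided (or shifted)
term is still XORed into the hash as before.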