From mboxrd@z Thu Jan  1 00:00:00 1970
From: Eric Dumazet
Subject: [PATCH] fs: inode per-cpu last_ino allocator
Date: Thu, 30 Sep 2010 12:22:16 +0200
Message-ID: <1285842136.2615.251.camel@edumazet-laptop>
References: <1285762729-17928-1-git-send-email-david@fromorbit.com>
	<1285762729-17928-16-git-send-email-david@fromorbit.com>
	<20100929215312.5fcb6976.akpm@linux-foundation.org>
	<1285824982.5211.675.camel@edumazet-laptop>
	<1285833189.2615.31.camel@edumazet-laptop>
	<20100930011417.6ca16ed7.akpm@linux-foundation.org>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: QUOTED-PRINTABLE
Cc: Dave Chinner, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
To: Andrew Morton
Return-path:
In-Reply-To: <20100930011417.6ca16ed7.akpm@linux-foundation.org>
Sender: linux-kernel-owner@vger.kernel.org
List-Id: linux-fsdevel.vger.kernel.org

On Thursday, 30 September 2010 at 01:14 -0700, Andrew Morton wrote:
> Perhaps
>
>	WARN_ON_ONCE(preemptible());
>
> if we had a developer-only version of WARN_ON_ONCE, which we don't.

Or just use a regular per-cpu variable, even on !SMP, and get a
preempt-safe implementation.

What do you think of the following patch, on top of the current
linux-2.6 tree?

Thanks

[PATCH] fs: inode per-cpu last_ino allocator

new_inode() dirties a contended cache line to get increasing inode
numbers.

Solve this problem by providing each cpu with a per_cpu variable,
fed by the shared last_ino, but only once every 1024 allocations.

This reduces contention on the shared last_ino and gives the same
spread of inode numbers as before (i.e. the same wraparound after
2^32 allocations).

Signed-off-by: Eric Dumazet
Signed-off-by: Nick Piggin
Signed-off-by: Dave Chinner
---
 fs/inode.c |   45 ++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 38 insertions(+), 7 deletions(-)

diff --git a/fs/inode.c b/fs/inode.c
index 8646433..122914e 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -624,6 +624,43 @@ void inode_add_to_lists(struct super_block *sb, struct inode *inode)
 }
 EXPORT_SYMBOL_GPL(inode_add_to_lists);
 
+#define LAST_INO_BATCH 1024
+
+/*
+ * Each cpu owns a range of LAST_INO_BATCH numbers.
+ * 'shared_last_ino' is dirtied only once out of LAST_INO_BATCH allocations,
+ * to renew the exhausted range.
+ *
+ * This does not significantly increase overflow rate because every CPU can
+ * consume at most LAST_INO_BATCH-1 unused inode numbers. So there is
+ * NR_CPUS*(LAST_INO_BATCH-1) wastage. At 4096 and 1024, this is ~0.1% of the
+ * 2^32 range, and is a worst-case. Even a 50% wastage would only increase
+ * overflow rate by 2x, which does not seem too significant.
+ *
+ * On a 32bit, non LFS stat() call, glibc will generate an EOVERFLOW
+ * error if st_ino won't fit in target struct field. Use 32bit counter
+ * here to attempt to avoid that.
+ */
+static DEFINE_PER_CPU(unsigned int, last_ino);
+
+static noinline unsigned int last_ino_get(void)
+{
+	unsigned int *p = &get_cpu_var(last_ino);
+	unsigned int res = *p;
+
+#ifdef CONFIG_SMP
+	if (unlikely((res & (LAST_INO_BATCH - 1)) == 0)) {
+		static atomic_t shared_last_ino;
+		int next = atomic_add_return(LAST_INO_BATCH, &shared_last_ino);
+
+		res = next - LAST_INO_BATCH;
+	}
+#endif
+	*p = ++res;
+	put_cpu_var(last_ino);
+	return res;
+}
+
 /**
  * new_inode - obtain an inode
  * @sb: superblock
@@ -638,12 +675,6 @@ EXPORT_SYMBOL_GPL(inode_add_to_lists);
  */
 struct inode *new_inode(struct super_block *sb)
 {
-	/*
-	 * On a 32bit, non LFS stat() call, glibc will generate an EOVERFLOW
-	 * error if st_ino won't fit in target struct field. Use 32bit counter
-	 * here to attempt to avoid that.
-	 */
-	static unsigned int last_ino;
 	struct inode *inode;
 
 	spin_lock_prefetch(&inode_lock);
@@ -652,7 +683,7 @@ struct inode *new_inode(struct super_block *sb)
 	if (inode) {
 		spin_lock(&inode_lock);
 		__inode_add_to_lists(sb, NULL, inode);
-		inode->i_ino = ++last_ino;
+		inode->i_ino = last_ino_get();
 		inode->i_state = 0;
 		spin_unlock(&inode_lock);
 	}
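For reference, here is a minimal userspace sketch of the same batching
idea, with C11 atomics and a thread-local counter standing in for the
kernel's atomic_t and per-cpu variables (the names BATCH and id_get()
are hypothetical, for illustration only):

	#include <stdatomic.h>
	#include <stdio.h>

	#define BATCH 1024	/* plays the role of LAST_INO_BATCH */

	/* Shared counter, dirtied only once per BATCH allocations per thread. */
	static atomic_uint shared_last_id;

	/* Per-thread counter, standing in for the per-cpu last_ino. */
	static _Thread_local unsigned int last_id;

	static unsigned int id_get(void)
	{
		unsigned int res = last_id;

		/* Range exhausted (or first call): reserve a fresh batch. */
		if ((res & (BATCH - 1)) == 0)
			res = atomic_fetch_add(&shared_last_id, BATCH);

		last_id = ++res;
		return res;
	}

	int main(void)
	{
		/* First call grabs a batch; the next BATCH-1 stay local. */
		for (int i = 0; i < 3; i++)
			printf("id = %u\n", id_get());
		return 0;
	}

atomic_fetch_add() returns the old value, so it matches the patch's
atomic_add_return(...) - LAST_INO_BATCH. As a sanity check on the
comment's arithmetic: 4096 * 1023 is about 4.2 million wasted numbers,
roughly 0.1% of the 2^32 range.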