From mboxrd@z Thu Jan 1 00:00:00 1970
From: Dave Chinner
Subject: Re: [patch 1/6] fs: icache RCU free inodes
Date: Tue, 16 Nov 2010 14:02:43 +1100
Message-ID: <20101116030242.GI22876@dastard>
References: <20101109124610.GB11477@amd> <1289319698.2774.16.camel@edumazet-laptop> <20101109220506.GE3246@amd> <20101115010027.GC22876@dastard> <20101115042059.GB3320@amd>
Mime-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8BIT
Cc: Nick Piggin , Linus Torvalds , Eric Dumazet , Al Viro , linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
To: Nick Piggin
Return-path:
Received: from bld-mail15.adl6.internode.on.net ([150.101.137.100]:37672 "EHLO mail.internode.on.net" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1754636Ab0KPDC7 (ORCPT ); Mon, 15 Nov 2010 22:02:59 -0500
Content-Disposition: inline
In-Reply-To: <20101115042059.GB3320@amd>
Sender: linux-fsdevel-owner@vger.kernel.org
List-ID:

On Mon, Nov 15, 2010 at 03:21:00PM +1100, Nick Piggin wrote:
> On Mon, Nov 15, 2010 at 12:00:27PM +1100, Dave Chinner wrote:
> > On Fri, Nov 12, 2010 at 12:24:21PM +1100, Nick Piggin wrote:
> > > On Wed, Nov 10, 2010 at 9:05 AM, Nick Piggin wrote:
> > > > On Tue, Nov 09, 2010 at 09:08:17AM -0800, Linus Torvalds wrote:
> > > >> On Tue, Nov 9, 2010 at 8:21 AM, Eric Dumazet wrote:
> > > >> >
> > > >> > You can see problems using this fancy thing :
> > > >> >
> > > >> > - Need to use slab ctor() to not overwrite some sensitive fields of
> > > >> > reused inodes.
> > > >> >   (spinlock, next pointer)
> > > >>
> > > >> Yes, the downside of using SLAB_DESTROY_BY_RCU is that you really
> > > >> cannot initialize some fields in the allocation path, because they may
> > > >> end up being still used while allocating a new (well, re-used) entry.
> > > >>
> > > >> However, I think that in the long run we pretty much _have_ to do that
> > > >> anyway, because the "free each inode separately with RCU" is a real
> > > >> overhead (Nick reports 10-20% cost). So it just makes my skin crawl to
> > > >> go that way.
> > > >
> > > > This is a creat/unlink loop on a tmpfs filesystem. Any real filesystem
> > > > is going to be *much* heavier in creat/unlink (so that 10-20% cost would
> > > > look more like a few %), and any real workload is going to have a much
> > > > less intensive pattern.
> > >
> > > So to get some more precise numbers, on a new kernel, and on a nehalem
> > > class CPU, a creat/unlink busy loop on ramfs (worst possible case for inode
> > > RCU) shows inode RCU costs 12% more time.
> > >
> > > If we go to ext4 over ramdisk, it's 4.2% slower. Btrfs is 4.3% slower, XFS
> > > is about 4.9% slower.
> >
> > That is actually significant because the current XFS performance
> > using delayed logging for pure metadata operations is not that far
> > off ramdisk results. Indeed, the simple test:
> >
> > 	while (i++ < 1000 * 1000) {
> > 		int fd = open("foo", O_CREAT|O_RDWR, 0777);
> > 		unlink("foo");
> > 		close(fd);
> > 	}
> >
> > Running 8 instances of the above on XFS, each in their own
> > directory, on a single sata drive with delayed logging enabled with
> > my current working XFS tree (includes SLAB_DESTROY_BY_RCU inode
> > cache and XFS inode cache, and numerous other XFS scalability
> > enhancements) currently runs at ~250k files/s. It took ~33s for 8 of
> > those loops above to complete in parallel, and was 100% CPU bound...
>
> David,
>
> This is 30K inodes per second per CPU, versus the nearly 800K per second
> number that I measured the 12% slowdown with. About 25x slower.

Hi Nick, the ramfs (800k/12%) numbers are not the context I was
responding to - you're comparing apples to oranges. I was responding
to the "XFS [on a ramdisk] is about 4.9% slower" result.
> How you
> are trying to FUD this as doing anything but confirming my hypothesis, I
> don't know and honestly I don't want to know so don't try to tell me.

Hardly FUD. I thought it important to point out that your
filesystem-on-ramdisk numbers are not theoretical at all - we can
achieve the same level of performance on a single SATA drive for this
workload on XFS. Therefore, the 5% difference in performance you've
measured on a ramdisk will definitely be visible in the real world,
and we need to consider it in that context, not as a "theoretical
concern".

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel"
in the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
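[For readers following the thread: the slab ctor pattern Eric and Linus refer to near the top looks roughly like the sketch below, written against the 2010-era kmem_cache_create() API. `struct my_inode`, the cache name and the field names are placeholders, not the actual fs/inode.c code.]

	/* Sketch of the SLAB_DESTROY_BY_RCU ctor pattern; illustrative only. */
	struct my_inode {
		spinlock_t		i_lock;
		struct hlist_node	i_hash;
		/* ... */
	};

	static struct kmem_cache *my_inode_cachep;

	/* The ctor runs once when the slab page is populated, NOT on every
	 * kmem_cache_alloc(). With SLAB_DESTROY_BY_RCU an object can be
	 * freed and re-allocated while an RCU lookup still holds a pointer
	 * to it, so the lock and hash linkage must never be re-initialized
	 * on the allocation path - exactly the constraint quoted above. */
	static void my_inode_ctor(void *obj)
	{
		struct my_inode *inode = obj;

		spin_lock_init(&inode->i_lock);
		INIT_HLIST_NODE(&inode->i_hash);
	}

	static int __init my_inode_cache_init(void)
	{
		my_inode_cachep = kmem_cache_create("my_inode_cache",
						    sizeof(struct my_inode), 0,
						    SLAB_DESTROY_BY_RCU,
						    my_inode_ctor);
		return my_inode_cachep ? 0 : -ENOMEM;
	}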