From: Dave Chinner
Subject: Re: [patch 1/6] fs: icache RCU free inodes
Date: Mon, 15 Nov 2010 12:00:27 +1100
Message-ID: <20101115010027.GC22876@dastard>
References: <20101109124610.GB11477@amd> <1289319698.2774.16.camel@edumazet-laptop> <20101109220506.GE3246@amd>
To: Nick Piggin
Cc: Nick Piggin, Linus Torvalds, Eric Dumazet, Al Viro, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org

On Fri, Nov 12, 2010 at 12:24:21PM +1100, Nick Piggin wrote:
> On Wed, Nov 10, 2010 at 9:05 AM, Nick Piggin wrote:
> > On Tue, Nov 09, 2010 at 09:08:17AM -0800, Linus Torvalds wrote:
> >> On Tue, Nov 9, 2010 at 8:21 AM, Eric Dumazet wrote:
> >> >
> >> > You can see problems using this fancy thing :
> >> >
> >> > - Need to use slab ctor() to not overwrite some sensitive fields of
> >> >   reused inodes.
> >> >   (spinlock, next pointer)
> >>
> >> Yes, the downside of using SLAB_DESTROY_BY_RCU is that you really
> >> cannot initialize some fields in the allocation path, because they may
> >> end up being still used while allocating a new (well, re-used) entry.
> >>
> >> However, I think that in the long run we pretty much _have_ to do that
> >> anyway, because the "free each inode separately with RCU" is a real
> >> overhead (Nick reports 10-20% cost). So it just makes my skin crawl to
> >> go that way.
> >
> > This is a creat/unlink loop on a tmpfs filesystem. Any real filesystem
> > is going to be *much* heavier in creat/unlink (so that 10-20% cost would
> > look more like a few %), and any real workload is going to have a much
> > less intensive pattern.
>
> So to get some more precise numbers, on a new kernel, and on a nehalem
> class CPU, a creat/unlink busy loop on ramfs (the worst possible case for
> inode RCU) shows inode RCU costing 12% more time.
>
> If we go to ext4 over ramdisk, it's 4.2% slower. Btrfs is 4.3% slower, XFS
> is about 4.9% slower.

That is actually significant, because current XFS performance using
delayed logging for pure metadata operations is not that far off
ramdisk results. Indeed, consider the simple test:

	while (i++ < 1000 * 1000) {
		int fd = open("foo", O_CREAT|O_RDWR, 0777);
		unlink("foo");
		close(fd);
	}

Running 8 instances of the above on XFS, each in its own directory, on
a single SATA drive with delayed logging enabled, with my current
working XFS tree (which includes SLAB_DESTROY_BY_RCU for the inode
cache and the XFS inode cache, and numerous other XFS scalability
enhancements), currently runs at ~250k files/s. It took ~33s for the 8
loops to complete in parallel, and the test was 100% CPU bound...
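For reference, a minimal compilable sketch of that test program - just
the loop above with the surrounding includes and main() filled in, no
error checking, and the filename, mode and iteration count taken from
the snippet - would look something like this:

	#include <fcntl.h>
	#include <unistd.h>

	int main(void)
	{
		int i = 0;

		/* create, unlink and close the same file a million times */
		while (i++ < 1000 * 1000) {
			int fd = open("foo", O_CREAT|O_RDWR, 0777);

			unlink("foo");
			close(fd);
		}
		return 0;
	}

To reproduce the parallel numbers, 8 copies of this were started
concurrently, each with a different working directory.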
> Remember, this is on a ramdisk that's _hitting the CPU's L3 if not L2_
> cache. A real disk, even a fast SSD, is going to do IO far slower.

The amount of IO done during the above test? A single log write - one
IO. Hence it isn't going to be any faster on a RAM disk, an SSD, a
large RAID array, etc, because it is CPU bound, not IO bound. IOWs,
that 5% difference in CPU usage is significant for XFS regardless of
the storage....

> And also remember that real workloads will not approach creat/unlink busy
> loop behaviour of creating and destroying 800K files/s.

Perhaps not a local workload, but I expect to see things like
fileservers getting hit with these sorts of loads (i.e. hundreds of
thousands of create/unlinks a second). Especially as XFS now has the
journal scalability to make this possible...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com