From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 30 Jul 2014 18:18:58 +1000
From: Dave Chinner
Subject: Re: Delaylog information enquiry
Message-ID: <20140730081858.GN26465@dastard>
References: <20140729123815.GA13120@bfoster.bfoster> <20140729234151.GJ26465@dastard>
List-Id: XFS Filesystem from SGI
To: Grozdan
Cc: Brian Foster, "Frank .", xfs@oss.sgi.com

On Wed, Jul 30, 2014 at 07:42:32AM +0200, Grozdan wrote:
> On Wed, Jul 30, 2014 at 1:41 AM, Dave Chinner wrote:
> > Note that this does not change file data behaviour. In this case you
> > need to add the "sync" mount option, which forces all buffered IO to
> > be synchronous and so will be *very slow*. But if you've already
> > turned off the BBWC on the RAID controller then your storage is
> > already terribly slow, so you probably won't care about making
> > performance even worse...
>
> Dave, excuse my ignorant questions.
>
> I know the Linux kernel keeps data in cache for up to 30 seconds before
> a kernel daemon flushes it to disk, unless the configured dirty ratio
> (which is 40% of RAM, iirc) is reached

10% of RAM, actually.
> before these 30 seconds, so the flush is done before it.
>
> What I did is lower these 30 seconds to 5 seconds, so every 5 seconds
> data is flushed to disk (I've set dirty_expire_centisecs to 500).
> So, are there any drawbacks in doing this?

Depends on your workload. For a desktop, you probably won't notice
anything different. For a machine that creates lots of temporary files
and then removes them (e.g. build machines), it could crater performance
completely, because it causes writeback before the files are removed...

> I mean, I don't care *that* much for performance, but I do want my
> dirty data to be on storage in a reasonable amount of time. I looked
> at the various sync mount options, but they are all synchronous, so my
> impression is they'll be slower than giving the kernel 5 seconds to
> keep data and then flush it.
>
> From an XFS perspective, I'd like to know whether this is recommended
> or not. I know that setting the above to 500 centisecs means there
> will be more writes to disk, which may result in wear and tear, thus
> shortening the lifetime of the storage.
>
> This is a regular desktop system with a single Seagate Constellation
> SATA disk, so no RAID, LVM, thin provisioning or anything else.
>
> What do you think? :)

I don't think it really matters either way. I don't change the
writeback time on my workstations, build machines or test machines,
but I actually *increase* it on my laptops to save power by not
writing to disk as often.

So if you want a little more safety, reducing the writeback timeout
shouldn't have any significant effect on performance or wear unless
you are doing something unusual....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
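
[Editor's note: a minimal sketch of the writeback tunables discussed in
this thread. The sysctl names are the standard Linux vm.dirty_* knobs;
the 500-centisecond value is the one Grozdan mentions, defaults vary by
kernel, and writing them requires root.]

```shell
# Inspect the current writeback settings:
sysctl vm.dirty_background_ratio    # % of RAM of dirty data at which background
                                    # writeback starts (the "10%" Dave refers to)
sysctl vm.dirty_expire_centisecs    # age at which dirty data is written back
                                    # (3000, i.e. 30 seconds, by default)

# Lower the flush interval to 5 seconds, as discussed above:
sysctl -w vm.dirty_expire_centisecs=500

# To persist across reboots, add this line to /etc/sysctl.conf:
#   vm.dirty_expire_centisecs = 500
```

Note this only shortens how long dirty data may sit in cache; it is
independent of the "sync" mount option, which makes each buffered write
synchronous.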