From: Justin Piszcz
Subject: Re: stripe_cache_size and performance [BUG with =64kb]
Date: Mon, 25 Jun 2007 16:42:29 -0400 (EDT)
To: Bill Davidsen
Cc: Raz, jnelson-linux-raid@jamponi.net, linux-raid@vger.kernel.org

On Mon, 25 Jun 2007, Justin Piszcz wrote:

> On Mon, 25 Jun 2007, Justin Piszcz wrote:
>
>> On Mon, 25 Jun 2007, Justin Piszcz wrote:
>>
>>> On Mon, 25 Jun 2007, Justin Piszcz wrote:
>>>
>>>> On Mon, 25 Jun 2007, Bill Davidsen wrote:
>>>>
>>>>> Justin Piszcz wrote:
>>>>>> I have found that a 16MB stripe_cache_size results in optimal
>>>>>> performance, after testing many, many values :)
>>>>>
>>>>> We have discussed this before; my experience has been that beyond
>>>>> 8 x the stripe size the performance gains hit diminishing returns,
>>>>> particularly for typical writes rather than big aligned blocks,
>>>>> possibly with O_DIRECT. I would suggest that as a target even on a
>>>>> low-memory machine.
>>>>>
>>>>> Do your tests show similar results? I was only able to test three-
>>>>> and four-drive setups using dedicated drives.
>>>>>
>>>>> --
>>>>> bill davidsen
>>>>> CTO TMR Associates, Inc
>>>>> Doing interesting things with small computers since 1979
>>>>
>>>> Running the test with 10 RAPTOR 150 hard drives; expect it to take
>>>> a while until I get the results, average them, etc. :)
>>>>
>>>> 128k,256k,512k,1024k,2048k,4096k,8192k,16384k
>>>>
>>>> Justin.
>>>
>>> Definitely a kernel bug: I set it to 64kb and it stayed in D-state
>>> until I ran alt-sysrq-b.
>>>
>>> Pretty nasty!
>>>
>>> Justin.
>>
>> Ack, help??
>>
>> [   64.032895] Starting XFS recovery on filesystem: md3 (logdev: internal)
>> [   66.210602] XFS: xlog_recover_process_data: bad clientid
>> [   66.210656] XFS: log mount/recovery failed: error 5
>> [   66.210709] XFS: log mount failed
>>
>> After I set it to 64kb it killed my RAID!
>
> Yeah, I won't be trying that anymore :P
>
> p34:/r1# find lost+found/ | wc
>     157     157    3369
> p34:/r1# du -sh lost+found/
> 166G    lost+found/
> p34:/r1#

16MB gives the best performance per my previous benchmarks.
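
For reference, a minimal sketch of the knob being tuned in this thread,
assuming a 2.6-era md RAID5/6 array exposed as /dev/md3 with 10 member
disks (device name and drive count taken from the thread; the mapping
between the echoed value and the "MB" figures quoted above is an
assumption). Run as root:

    # stripe_cache_size is counted in 4 KiB pages per member disk, not
    # bytes; the kernel clamps it to a small range (roughly 17..32768).
    # Echoing 16384 is what the thread loosely calls "16MB" (assumption).
    echo 16384 > /sys/block/md3/md/stripe_cache_size

    # Read the current value back:
    cat /sys/block/md3/md/stripe_cache_size

    # Approximate RAM consumed by the stripe cache:
    #   stripe_cache_size * 4096 bytes * number of member disks
    # e.g. 16384 * 4096 * 10 drives = 640 MiB.

Note the memory cost scales with the drive count, which is one reason a
value that benchmarks well on a 10-drive array may be a poor target on a
low-memory machine, as Bill suggests above.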