From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jeff Moyer
Subject: Re: Crash when IO is being submitted and block size is changed
Date: Thu, 19 Jul 2012 09:33:11 -0400
Message-ID:
References: <20120628111541.GB17515@quack.suse.cz>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: Jan Kara , Alexander Viro , Jens Axboe , "Alasdair G. Kergon" , linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, dm-devel@redhat.com, lwoodman@redhat.com, Andrea Arcangeli , kosaki.motohiro@jp.fujitsu.com
To: Mikulas Patocka
Return-path:
In-Reply-To: (Mikulas Patocka's message of "Wed, 18 Jul 2012 22:27:13 -0400 (EDT)")
Sender: linux-kernel-owner@vger.kernel.org
List-Id: linux-fsdevel.vger.kernel.org

Mikulas Patocka writes:

> On Tue, 17 Jul 2012, Jeff Moyer wrote:
>
>> > This is the patch that fixes this crash: it takes a rw-semaphore
>> > around the whole direct-IO path.
>> >
>> > (note that if someone is concerned about performance, the
>> > rw-semaphore could be made per-cpu --- take it for read on the
>> > current CPU and take it for write on all CPUs).
>>
>> Here we go again. :-)  I believe we had at one point tried taking a rw
>> semaphore around GUP inside of the direct I/O code path to fix the
>> fork vs. GUP race (which still exists today).  When testing that, the
>> overhead of the semaphore was *way* too high to be considered an
>> acceptable solution.  I've CC'd Larry Woodman, Andrea, and Kosaki
>> Motohiro, who all worked on that particular bug.  Hopefully they can
>> give a better quantification of the slowdown than my poor memory.
>>
>> Cheers,
>> Jeff
>
> Both down_read and up_read together take 82 ticks on Core2, 69 ticks on
> AMD K10, and 62 ticks on UltraSparc2 if the target is in the L1 cache.
> So, if per-cpu rw_semaphores were used, it would slow down only by this
> amount.

Sorry, I'm not familiar with per-cpu rw semaphores.  Where are they
implemented?
> I hope that Linux developers are not so obsessed with performance that
> they want a fast crashing kernel rather than a slow reliable kernel.
> Note that anything that changes a device's block size (for example,
> mounting a filesystem with a non-default block size) may trigger a
> crash if lvm or udev reads the device simultaneously; the crash really
> happened in a business environment.

I wasn't suggesting that we leave the problem unfixed (though I can see
how you might have gotten that idea; sorry for not being more clear).  I
was merely suggesting that we should try to fix the problem in a way
that does not kill performance.

Cheers,
Jeff