From: Jeff Moyer
Subject: Re: [PATCH 0/4] Fix a crash when block device is read and block size is changed at the same time
Date: Tue, 25 Sep 2012 13:49:51 -0400
To: Mikulas Patocka
Cc: Eric Dumazet, Jens Axboe, Andrea Arcangeli, Jan Kara, dm-devel@redhat.com, linux-kernel@vger.kernel.org, Alexander Viro, kosaki.motohiro@jp.fujitsu.com, linux-fsdevel@vger.kernel.org, lwoodman@redhat.com, "Alasdair G. Kergon"
In-Reply-To: (Jeff Moyer's message of "Tue, 18 Sep 2012 16:11:26 -0400")

Jeff Moyer writes:

> Mikulas Patocka writes:
>
>> Hi Jeff
>>
>> Thanks for testing.
>>
>> It would be interesting ... what happens if you take patch 3, leave
>> "struct percpu_rw_semaphore bd_block_size_semaphore" in "struct
>> block_device", but remove any use of the semaphore from
>> fs/block_dev.c? Will the performance be like the unpatched kernel or
>> like patch 3? It could be that the change in alignment affects
>> performance on your CPU too, just differently than on my CPU.
>
> It turns out to be exactly the same performance as with the 3rd patch
> applied, so I guess it does have something to do with cache alignment.
> Here is the patch (against vanilla) I ended up testing. Let me know if
> I've botched it somehow.
>
> So, next up I'll play similar tricks to what you did (padding struct
> block_device in all kernels) to eliminate the differences due to
> structure alignment and provide a clear picture of what the locking
> effects are.

After trying again with the same padding you used in struct bdev_inode,
I see no performance differences between any of the patches (a sketch
of the padding is appended at the end of this mail). I tried bumping up
the number of threads to saturate the CPUs on a single NUMA node on my
hardware, but that resulted in lower IOPS to the device, and hence less
CPU time consumed. So, I believe my results to be inconclusive.

After talking with Vivek about the problem, he mentioned that it might
be worth investigating whether bd_block_size could be protected using
SRCU. I looked into it, and the one thing I couldn't reconcile is
updating both bd_block_size and inode->i_blkbits at the same time. It
would involve (afaiui) adding fields to both the inode and the
block_device data structures and using rcu_assign_pointer and
rcu_dereference to modify and access the fields, and both fields would
need to be protected by the same srcu_struct. I'm not sure whether
that's a desirable approach. When I started to implement it, it got
ugly pretty quickly; a rough sketch of the shape it takes is also
appended below. What do others think?

For now, my preference is to get the full patch set in. I will
continue to investigate the performance impact of the data structure
size changes that I've been seeing.

So, for the four patches:

Acked-by: Jeff Moyer

Jens, can you have a look at the patch set? We are seeing reports of
this problem in the wild [1][2].

Cheers,
Jeff

[1] https://bugzilla.redhat.com/show_bug.cgi?id=824107
[2] https://bugzilla.redhat.com/show_bug.cgi?id=812129
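
For reference, here is a minimal sketch of the padding experiment
mentioned above. The pad size and placement are assumptions for
illustration, not the exact patch that was tested:

	/*
	 * Hypothetical illustration: pad struct bdev_inode so that the
	 * embedded structures land on the same cache lines in patched
	 * and unpatched kernels, taking alignment out of the picture.
	 * The 64-byte pad is a placeholder, not the measured value.
	 */
	struct bdev_inode {
		struct block_device bdev;
		char pad[64];		/* assumed pad size */
		struct inode vfs_inode;
	};

The idea is simply that if the unpatched kernel plus padding performs
like patch 3, the earlier differences were an artifact of structure
layout rather than of the locking itself.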
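
And to make the SRCU idea concrete, here is a rough, untested sketch of
the shape it would take. Every identifier below (bd_size_info,
blkdev_size_srcu, set_blocksize_srcu, bdev_blkbits) is made up for
illustration, and it simplifies by hanging a single shared object off
the block_device; the inode side would have to reference the same
object, which is where it gets ugly:

	/*
	 * Untested sketch only.  bd_block_size and i_blkbits move
	 * behind one RCU-managed object so readers always see a
	 * consistent pair.  Assumes a hypothetical field in struct
	 * block_device:  struct bd_size_info __rcu *bd_size_info;
	 */
	struct bd_size_info {
		unsigned int bd_block_size;
		unsigned char i_blkbits;
	};

	static struct srcu_struct blkdev_size_srcu;

	/* Writer: publish a new (block size, blkbits) pair.  Assumes
	 * the caller already serializes updates (e.g. under bd_mutex). */
	static int set_blocksize_srcu(struct block_device *bdev, int size)
	{
		struct bd_size_info *new, *old;

		new = kmalloc(sizeof(*new), GFP_KERNEL);
		if (!new)
			return -ENOMEM;
		new->bd_block_size = size;
		new->i_blkbits = blksize_bits(size);

		old = bdev->bd_size_info;
		rcu_assign_pointer(bdev->bd_size_info, new);
		synchronize_srcu(&blkdev_size_srcu);	/* wait out readers */
		kfree(old);
		return 0;
	}

	/* Reader: sample a consistent pair under the SRCU read lock. */
	static unsigned int bdev_blkbits(struct block_device *bdev)
	{
		struct bd_size_info *info;
		unsigned int bits;
		int idx;

		idx = srcu_read_lock(&blkdev_size_srcu);
		info = srcu_dereference(bdev->bd_size_info,
					&blkdev_size_srcu);
		bits = info->i_blkbits;
		srcu_read_unlock(&blkdev_size_srcu, idx);
		return bits;
	}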