linux-fsdevel.vger.kernel.org archive mirror
* [PATCH] block: return EBUSY from drop_partitions on mounted whole disk device
@ 2015-08-05 19:13 Eric Sandeen
  2016-03-01 20:45 ` Eric Sandeen
  0 siblings, 1 reply; 5+ messages in thread
From: Eric Sandeen @ 2015-08-05 19:13 UTC (permalink / raw)
  To: linux-fsdevel@vger.kernel.org; +Cc: Jens Axboe

The BLKRRPART ioctl already fails today if any partition under
the device is mounted.  However, if we mkfs a whole disk and mount
it, BLKRRPART happily proceeds down the invalidation path, which
seems like a bad idea.

Check whether the whole device is mounted by checking bd_super,
and return -EBUSY if so.

Signed-off-by: Eric Sandeen <sandeen@redhat.com>
---

I don't know for sure if this is the right approach, but figure
I'll ask in the form of a patch.  ;)

diff --git a/block/partition-generic.c b/block/partition-generic.c
index 0d9e5f9..04f304c 100644
--- a/block/partition-generic.c
+++ b/block/partition-generic.c
@@ -397,7 +397,7 @@ static int drop_partitions(struct gendisk *disk, struct block_device *bdev)
 	struct hd_struct *part;
 	int res;
 
-	if (bdev->bd_part_count)
+	if (bdev->bd_super || bdev->bd_part_count)
 		return -EBUSY;
 	res = invalidate_partition(disk, 0);
 	if (res)

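For context, the invalidation path in question is reachable from userspace via the BLKRRPART ioctl (this is what `blockdev --rereadpt` issues). Below is a minimal userspace sketch of issuing that ioctl from Python; the ioctl number is BLKRRPART = _IO(0x12, 95) from <linux/fs.h>, and any device path you pass is of course site-specific:

```python
import errno
import fcntl
import os

# BLKRRPART = _IO(0x12, 95) in <linux/fs.h>: ask the kernel to
# re-read the partition table of a whole-disk block device.
BLKRRPART = 0x125F

def reread_partitions(dev_path):
    """Issue BLKRRPART on dev_path; return 0 on success, -errno on failure."""
    fd = os.open(dev_path, os.O_RDONLY)
    try:
        fcntl.ioctl(fd, BLKRRPART)
        return 0
    except OSError as e:
        return -e.errno
    finally:
        os.close(fd)
```

With the patch applied, issuing this against a whole-disk device that carries a mounted filesystem should come back as -EBUSY rather than proceeding into drop_partitions.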


* Re: [PATCH] block: return EBUSY from drop_partitions on mounted whole disk device
  2015-08-05 19:13 [PATCH] block: return EBUSY from drop_partitions on mounted whole disk device Eric Sandeen
@ 2016-03-01 20:45 ` Eric Sandeen
  2016-03-02  3:38   ` Eric Sandeen
  2016-03-02 22:13   ` Eric Sandeen
  0 siblings, 2 replies; 5+ messages in thread
From: Eric Sandeen @ 2016-03-01 20:45 UTC (permalink / raw)
  To: linux-fsdevel@vger.kernel.org; +Cc: Jens Axboe, Jes.Sorensen

On 8/5/15 2:13 PM, Eric Sandeen wrote:
> The BLKRRPART ioctl already fails today if any partition under
> the device is mounted.  However, if we mkfs a whole disk and mount
> it, BLKRRPART happily proceeds down the invalidation path, which
> seems like a bad idea.
> 
> Check whether the whole device is mounted by checking bd_super,
> and return -EBUSY if so.
> 
> Signed-off-by: Eric Sandeen <sandeen@redhat.com>
> ---
> 
> I don't know for sure if this is the right approach, but figure
> I'll ask in the form of a patch.  ;)

I'm now thinking that this is not the right approach.  :(  I got a
bug report stating that during some md raid1 testing with replacing
failed disks, filesystems were losing data.  I haven't reproduced
that part yet, but...

It's hitting the "bd_super" case added in the patch below, and returning
-EBUSY to md when mdadm tries to remove a disk:

# mdadm /dev/md0 -r /dev/loop0
mdadm: hot remove failed for /dev/loop0: Device or resource busy

[ 1309.894718] md: cannot remove active disk loop0 from md0 ...
[ 1309.906270] drop_partitions: bd_part_count 0 bd_super ffff880111364000
[ 1309.919295] drop_partitions: s_id md0 uuid 6bb155fe-3ea1-4a84-b66a-d44d44829c36
[ 1309.933878] CPU: 2 PID: 531 Comm: systemd-udevd Not tainted 3.10.0+ #4

I had not thought about "bd_super" existing in this case; I just had my
filesystem hat on.  I'm still digging through the somewhat messy bug
report; I don't yet know how he's getting to data loss, but that patch
may be half-baked, if nothing else because of this behavior...

Note that there are no partitions on md0...

This patch should probably be reverted for now, unless there is some obvious
better fix.
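As an aside, the rough userspace analog of the kernel's bd_super test is an exclusive open: on Linux, opening a block device with O_EXCL requests an exclusive claim and fails with EBUSY if the device is already held (mounted, or claimed by md, for instance). A hypothetical probe along those lines, not anything the patch itself uses:

```python
import errno
import os

def whole_device_busy(path):
    """Probe whether a block device is claimed (e.g. mounted) by trying an
    exclusive open.  On Linux, O_EXCL on a block device requests an
    exclusive claim and fails with EBUSY while someone else holds the
    device; on a regular file, O_EXCL without O_CREAT has no effect."""
    try:
        fd = os.open(path, os.O_RDONLY | os.O_EXCL)
    except OSError as e:
        if e.errno == errno.EBUSY:
            return True  # device is claimed/mounted
        raise  # some other failure (ENOENT, EACCES, ...)
    os.close(fd)
    return False
```

util-linux tools use this same trick to decide whether a device is safe to touch, which is why a mounted md member shows up as "busy" to them as well.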

-Eric


> diff --git a/block/partition-generic.c b/block/partition-generic.c
> index 0d9e5f9..04f304c 100644
> --- a/block/partition-generic.c
> +++ b/block/partition-generic.c
> @@ -397,7 +397,7 @@ static int drop_partitions(struct gendisk *disk, struct block_device *bdev)
>  	struct hd_struct *part;
>  	int res;
>  
> -	if (bdev->bd_part_count)
> +	if (bdev->bd_super || bdev->bd_part_count)
>  		return -EBUSY;
>  	res = invalidate_partition(disk, 0);
>  	if (res)
> 



* Re: [PATCH] block: return EBUSY from drop_partitions on mounted whole disk device
  2016-03-01 20:45 ` Eric Sandeen
@ 2016-03-02  3:38   ` Eric Sandeen
  2016-03-02 22:13   ` Eric Sandeen
  1 sibling, 0 replies; 5+ messages in thread
From: Eric Sandeen @ 2016-03-02  3:38 UTC (permalink / raw)
  To: Eric Sandeen, linux-fsdevel@vger.kernel.org; +Cc: Jens Axboe, Jes.Sorensen



On 3/1/16 2:45 PM, Eric Sandeen wrote:
> On 8/5/15 2:13 PM, Eric Sandeen wrote:
>> The BLKRRPART ioctl already fails today if any partition under
>> the device is mounted.  However, if we mkfs a whole disk and mount
>> it, BLKRRPART happily proceeds down the invalidation path, which
>> seems like a bad idea.
>>
>> Check whether the whole device is mounted by checking bd_super,
>> and return -EBUSY if so.
>>
>> Signed-off-by: Eric Sandeen <sandeen@redhat.com>
>> ---
>>
>> I don't know for sure if this is the right approach, but figure
>> I'll ask in the form of a patch.  ;)
> 
> I'm now thinking that this is not the right approach.  :(  I got a
> bug report stating that during some md raid1 testing with replacing
> failed disks, filesystems were losing data.  I haven't reproduced
> that part yet, but...
> 
> It's hitting the "bd_super" case added in the patch below, and returning
> -EBUSY to md when mdadm tries to remove a disk:
> 
> # mdadm /dev/md0 -r /dev/loop0
> mdadm: hot remove failed for /dev/loop0: Device or resource busy
> 
> [ 1309.894718] md: cannot remove active disk loop0 from md0 ...
> [ 1309.906270] drop_partitions: bd_part_count 0 bd_super ffff880111364000
> [ 1309.919295] drop_partitions: s_id md0 uuid 6bb155fe-3ea1-4a84-b66a-d44d44829c36
> [ 1309.933878] CPU: 2 PID: 531 Comm: systemd-udevd Not tainted 3.10.0+ #4

Urk, forgot I was testing an old kernel, sorry. I think upstream is ok.
I'll shut up and investigate more.

-Eric


* Re: [PATCH] block: return EBUSY from drop_partitions on mounted whole disk device
  2016-03-01 20:45 ` Eric Sandeen
  2016-03-02  3:38   ` Eric Sandeen
@ 2016-03-02 22:13   ` Eric Sandeen
  2016-03-02 22:15     ` Jens Axboe
  1 sibling, 1 reply; 5+ messages in thread
From: Eric Sandeen @ 2016-03-02 22:13 UTC (permalink / raw)
  To: Eric Sandeen, linux-fsdevel@vger.kernel.org; +Cc: Jens Axboe, Jes.Sorensen



On 3/1/16 2:45 PM, Eric Sandeen wrote:
> On 8/5/15 2:13 PM, Eric Sandeen wrote:
>> The BLKRRPART ioctl already fails today if any partition under
>> the device is mounted.  However, if we mkfs a whole disk and mount
>> it, BLKRRPART happily proceeds down the invalidation path, which
>> seems like a bad idea.
>>
>> Check whether the whole device is mounted by checking bd_super,
>> and return -EBUSY if so.
>>
>> Signed-off-by: Eric Sandeen <sandeen@redhat.com>
>> ---
>>
>> I don't know for sure if this is the right approach, but figure
>> I'll ask in the form of a patch.  ;)
> 
> I'm now thinking that this is not the right approach.  :(  I got a
> bug report stating that during some md raid1 testing with replacing
> failed disks, filesystems were losing data.  I haven't reproduced
> that part yet, but...
> 
> It's hitting the "bd_super" case added in the patch below, and returning
> -EBUSY to md when mdadm tries to remove a disk:
> 
> # mdadm /dev/md0 -r /dev/loop0
> mdadm: hot remove failed for /dev/loop0: Device or resource busy

FWIW, just ignore me, I was being an idiot.  a) The patch *prevents*
the corruption; it does not cause it.  Without the -EBUSY, drop_partitions
will get to invalidate_inodes() etc., and no wonder data is lost.
And b) the above -EBUSY is because I forgot to fail the disk first. :/

Nothing to see here, move along, sorry!

-Eric


* Re: [PATCH] block: return EBUSY from drop_partitions on mounted whole disk device
  2016-03-02 22:13   ` Eric Sandeen
@ 2016-03-02 22:15     ` Jens Axboe
  0 siblings, 0 replies; 5+ messages in thread
From: Jens Axboe @ 2016-03-02 22:15 UTC (permalink / raw)
  To: Eric Sandeen, Eric Sandeen, linux-fsdevel@vger.kernel.org; +Cc: Jes.Sorensen

On 03/02/2016 03:13 PM, Eric Sandeen wrote:
>
>
> On 3/1/16 2:45 PM, Eric Sandeen wrote:
>> On 8/5/15 2:13 PM, Eric Sandeen wrote:
>>> The BLKRRPART ioctl already fails today if any partition under
>>> the device is mounted.  However, if we mkfs a whole disk and mount
>>> it, BLKRRPART happily proceeds down the invalidation path, which
>>> seems like a bad idea.
>>>
>>> Check whether the whole device is mounted by checking bd_super,
>>> and return -EBUSY if so.
>>>
>>> Signed-off-by: Eric Sandeen <sandeen@redhat.com>
>>> ---
>>>
>>> I don't know for sure if this is the right approach, but figure
>>> I'll ask in the form of a patch.  ;)
>>
>> I'm now thinking that this is not the right approach.  :(  I got a
>> bug report stating that during some md raid1 testing with replacing
>> failed disks, filesystems were losing data.  I haven't reproduced
>> that part yet, but...
>>
>> It's hitting the "bd_super" case added in the patch below, and returning
>> -EBUSY to md when mdadm tries to remove a disk:
>>
>> # mdadm /dev/md0 -r /dev/loop0
>> mdadm: hot remove failed for /dev/loop0: Device or resource busy
>
> FWIW, just ignore me, I was being an idiot.  a) The patch *prevents*
> the corruption; it does not cause it.  Without the -EBUSY, drop_partitions
> will get to invalidate_inodes() etc., and no wonder data is lost.
> And b) the above -EBUSY is because I forgot to fail the disk first. :/
>
> Nothing to see here, move along, sorry!

Still beats a regression :-)

-- 
Jens Axboe


