From: Jan Kara <jack@suse.cz>
To: Ming Lei <ming.lei@redhat.com>
Cc: Jan Kara <jack@suse.cz>, Yu Kuai <yukuai1@huaweicloud.com>,
hch@infradead.org, axboe@kernel.dk, yukuai3@huawei.com,
linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
yi.zhang@huawei.com, yangerkun@huawei.com,
Xiao Ni <xni@redhat.com>,
linux-raid@vger.kernel.org
Subject: Re: [PATCH] block: don't set GD_NEED_PART_SCAN if scan partition failed
Date: Thu, 23 Mar 2023 11:51:20 +0100
Message-ID: <20230323105120.jrhgjfbj3jlgw2h6@quack3>
In-Reply-To: <ZBsoE677zEuAm23E@ovpn-8-17.pek2.redhat.com>

On Thu 23-03-23 00:08:51, Ming Lei wrote:
> On Wed, Mar 22, 2023 at 02:07:09PM +0100, Jan Kara wrote:
> > On Wed 22-03-23 19:34:30, Ming Lei wrote:
> > > On Wed, Mar 22, 2023 at 10:47:07AM +0100, Jan Kara wrote:
> > > > On Wed 22-03-23 15:58:35, Ming Lei wrote:
> > > > > On Wed, Mar 22, 2023 at 11:59:26AM +0800, Yu Kuai wrote:
> > > > > > From: Yu Kuai <yukuai3@huawei.com>
> > > > > >
> > > > > > Currently if disk_scan_partitions() fails, GD_NEED_PART_SCAN will still
> > > > > > be set, and the partition scan will proceed again when blkdev_get_by_dev()
> > > > > > is called. However, this causes a problem: re-assembling a partitioned
> > > > > > raid device will create partitions for the underlying disks.
> > > > > >
> > > > > > Test procedure:
> > > > > >
> > > > > > mdadm -CR /dev/md0 -l 1 -n 2 /dev/sda /dev/sdb -e 1.0
> > > > > > sgdisk -n 0:0:+100MiB /dev/md0
> > > > > > blockdev --rereadpt /dev/sda
> > > > > > blockdev --rereadpt /dev/sdb
> > > > > > mdadm -S /dev/md0
> > > > > > mdadm -A /dev/md0 /dev/sda /dev/sdb
> > > > > >
> > > > > > Test result: underlying disk partition and raid partition can be
> > > > > > observed at the same time
> > > > > >
> > > > > > Note that this can still happen in some corner cases where
> > > > > > GD_NEED_PART_SCAN is set for the underlying disk while the raid
> > > > > > device is re-assembled.
> > > > > >
> > > > > > Fixes: e5cfefa97bcc ("block: fix scan partition for exclusively open device again")
> > > > > > Signed-off-by: Yu Kuai <yukuai3@huawei.com>
> > > > >
> > > > > The issue still can't be avoided completely; for example, after rebooting,
> > > > > /dev/sda1 and /dev/md0p1 can be observed at the same time. That case
> > > > > comes from the underlying partitions being scanned before the raid is
> > > > > re-assembled, and I guess it may not be easy to avoid.
> > > >
> > > > So this was always happening (before my patches, after my patches, and
> > > > now after Yu's patches) and the kernel does not have enough information
> > > > to know that sda will become part of the md0 device in the future. But
> > > > mdadm actually deals with this as far as I remember and deletes
> > > > partitions for all devices it is assembling the array from (and a quick
> > > > tracing experiment I did supports this).
> > >
> > > I am testing on Fedora 37 with mdadm v4.2, and it doesn't delete
> > > underlying partitions before re-assembling.
> >
> > Strange, I'm on openSUSE Leap 15.4 and mdadm v4.1 deletes these partitions
> > (at least I can see mdadm do BLKPG_DEL_PARTITION ioctls). And checking
> > mdadm sources I can see calls to remove_partitions() from start_array()
> > function in Assemble.c so I'm not sure why this is not working for you...
>
> I added dump_stack() in delete_partition() for partition 1, and did not
> observe any stack trace during booting.
>
> >
> > > Also, given that mdadm or related userspace has to change to avoid
> > > scanning underlying partitions, I am just wondering why not let
> > > userspace tell the kernel explicitly not to do it?
> >
> > Well, those userspace changes have long been deployed; now you would
> > introduce a new API that needs to proliferate again. Not very nice. Also,
> > how would that exactly work? I mean, once mdadm has the underlying device
> > open, the current logic makes sure we do not create partitions anymore.
> > But there's no way mdadm could possibly prevent creation of partitions
> > for devices it doesn't know about yet, so it still has to delete existing
> > partitions...
>
> I meant: if mdadm has to change to delete existing partitions, why not add
> an ioctl to disable partition scanning for the disk when deleting
> partitions/re-assembling, and re-enable scanning after stopping the array?
>
> But it looks like that isn't the case, since you mentioned that
> remove_partitions() is supposed to be called before starting the array,
> yet I didn't observe this behavior.

Yeah, not sure what's happening on your system.

> I am worried that the current approach may cause a regression; one concern
> is that ioctl(BLKRRPART) needs an exclusive open now, for example:
>
> 1) mount /dev/vdb1 /mnt
>
> 2) ioctl(BLKRRPART) may fail after removing /dev/vdb3

Well, but we always had some variant of:

	if (disk->open_partitions)
		return -EBUSY;

in disk_scan_partitions(). So as long as any partition on the disk is used,
EBUSY is the correct return value from BLKRRPART.

Honza
--
Jan Kara <jack@suse.com>
SUSE Labs, CR