* [PATCH 0/2] md: it panics after reshape from raid1 to raid5
@ 2021-12-10 9:31 Xiao Ni
From: Xiao Ni @ 2021-12-10 9:31 UTC (permalink / raw)
To: song; +Cc: guoqing.jiang, ncroxon, linux-raid
Hi all

After reshape from raid1 to raid5, the array can panic when there are in-flight I/Os.

These steps reproduce the problem:
mdadm -CR /dev/md0 -l1 -n2 /dev/loop0 /dev/loop1
mdadm --wait /dev/md0
mkfs.xfs /dev/md0
mdadm /dev/md0 --grow -l5
mount /dev/md0 /mnt
These two patches fix this problem.
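The failure mode can be sketched in plain userspace C. This is a simplified, hypothetical model, not the kernel code: `struct mddev` and `io_acct_set` only mirror the kernel names, and `malloc` stands in for `bioset_init`.

```c
#include <assert.h>
#include <stdlib.h>

struct mddev {
	int level;
	void *io_acct_set;	/* stand-in for the acct bioset; NULL until set up */
};

/* Old behaviour: md_run() set up the acct bioset only for levels that
 * need it at creation time (anything but raid1/raid10). */
static void old_md_run(struct mddev *m)
{
	if (m->level != 1 && m->level != 10)
		m->io_acct_set = malloc(16);
}

/* A level change (takeover/reshape) does not re-run that setup. */
static void takeover(struct mddev *m, int new_level)
{
	m->level = new_level;
}
```

An array created as raid1 therefore leaves `io_acct_set` NULL; the later takeover to raid5 changes only the level, so the first accounted I/O on the raid5 path dereferences a NULL pointer.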
Xiao Ni (2):
Free r0conf memory when register integrity failed
Move alloc/free acct bioset in to personality
drivers/md/md.c | 27 +++++++++++++++++----------
drivers/md/md.h | 2 ++
drivers/md/raid0.c | 28 ++++++++++++++++++++++++----
drivers/md/raid5.c | 41 ++++++++++++++++++++++++++++++-----------
4 files changed, 73 insertions(+), 25 deletions(-)
--
2.31.1
* [PATCH V2 1/2] md/raid0: Free r0conf memory when register integrity failed
@ 2021-12-10 9:31 Xiao Ni
From: Xiao Ni @ 2021-12-10 9:31 UTC (permalink / raw)
To: song; +Cc: guoqing.jiang, ncroxon, linux-raid

raid0_run() doesn't free the r0conf memory when md_integrity_register()
fails, and a later patch will add acct_bioset_exit() to raid0_free(). So
split the code that frees r0conf into a separate function to make the
error handling clearer.

Signed-off-by: Xiao Ni <xni@redhat.com>
---
V2: set mddev->private to NULL and move free_conf/raid0_free above to
avoid the extra declaration
---
 drivers/md/raid0.c | 28 +++++++++++++++++++---------
 1 file changed, 19 insertions(+), 9 deletions(-)

diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
index 62c8b6adac70..88424d7a6ebd 100644
--- a/drivers/md/raid0.c
+++ b/drivers/md/raid0.c
@@ -356,7 +356,20 @@ static sector_t raid0_size(struct mddev *mddev, sector_t sectors, int raid_disks
 	return array_sectors;
 }

-static void raid0_free(struct mddev *mddev, void *priv);
+static void free_conf(struct mddev *mddev, struct r0conf *conf)
+{
+	kfree(conf->strip_zone);
+	kfree(conf->devlist);
+	kfree(conf);
+	mddev->private = NULL;
+}
+
+static void raid0_free(struct mddev *mddev, void *priv)
+{
+	struct r0conf *conf = priv;
+
+	free_conf(mddev, conf);
+}

 static int raid0_run(struct mddev *mddev)
 {
@@ -413,17 +426,14 @@ static int raid0_run(struct mddev *mddev)
 	dump_zones(mddev);

 	ret = md_integrity_register(mddev);
+	if (ret)
+		goto free;

 	return ret;
-}
-
-static void raid0_free(struct mddev *mddev, void *priv)
-{
-	struct r0conf *conf = priv;
-
-	kfree(conf->strip_zone);
-	kfree(conf->devlist);
-	kfree(conf);
+free:
+	free_conf(mddev, conf);
+	return ret;
 }

 static void raid0_handle_discard(struct mddev *mddev, struct bio *bio)
--
2.31.1
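The shape of this fix can be modeled in a few lines of userspace C. This is a hedged sketch, not the kernel code: the structs are stripped down, `md_integrity_register()` is a stand-in that fails on demand, and `free`/`calloc` replace `kfree`/`kzalloc`.

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified model of the raid0 fix: one free_conf() helper is shared by
 * the error path in raid0_run() and by raid0_free(), and it clears
 * mddev->private so no stale pointer survives teardown. */
struct r0conf { int *strip_zone; int *devlist; };
struct mddev  { void *private; };

static void free_conf(struct mddev *mddev, struct r0conf *conf)
{
	free(conf->strip_zone);
	free(conf->devlist);
	free(conf);
	mddev->private = NULL;
}

static void raid0_free(struct mddev *mddev, void *priv)
{
	free_conf(mddev, priv);
}

/* md_integrity_register() stand-in: fails when told to. */
static int md_integrity_register(int should_fail)
{
	return should_fail ? -1 : 0;
}

static int raid0_run(struct mddev *mddev, int integrity_fails)
{
	struct r0conf *conf = calloc(1, sizeof(*conf));
	int ret;

	if (!conf)
		return -1;
	mddev->private = conf;

	ret = md_integrity_register(integrity_fails);
	if (ret)
		goto free;	/* before the patch, conf leaked here */
	return ret;
free:
	free_conf(mddev, conf);
	return ret;
}
```

Consolidating the three `kfree()` calls into `free_conf()` is what makes the new `goto free` label safe to add: the error path and the normal teardown can no longer drift apart.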
* [PATCH 2/2] md: Move alloc/free acct bioset in to personality
@ 2021-12-10 9:31 Xiao Ni
From: Xiao Ni @ 2021-12-10 9:31 UTC (permalink / raw)
To: song; +Cc: guoqing.jiang, ncroxon, linux-raid

md_run() currently allocates the acct bioset, but only raid0 and raid5
need it. For example, no acct bioset is created for a raid1 array. If
that array is then reshaped to raid0/raid5, the I/O path accesses the
acct bioset after the reshape and can panic on a NULL pointer
dereference.

Move the alloc/free jobs into the personality: pers->run allocates the
acct bioset and pers->free releases it.

Fixes: daee2024715d ("md: check level before create and exit io_acct_set")
Signed-off-by: Xiao Ni <xni@redhat.com>
---
 drivers/md/md.c    | 27 +++++++++++++++++----------
 drivers/md/md.h    |  2 ++
 drivers/md/raid0.c | 10 +++++++++-
 drivers/md/raid5.c | 41 ++++++++++++++++++++++++++++++-----------
 4 files changed, 58 insertions(+), 22 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index e8666bdc0d28..0fc34a05a655 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -5878,13 +5878,6 @@ int md_run(struct mddev *mddev)
 		if (err)
 			goto exit_bio_set;
 	}
-	if (mddev->level != 1 && mddev->level != 10 &&
-	    !bioset_initialized(&mddev->io_acct_set)) {
-		err = bioset_init(&mddev->io_acct_set, BIO_POOL_SIZE,
-				  offsetof(struct md_io_acct, bio_clone), 0);
-		if (err)
-			goto exit_sync_set;
-	}

 	spin_lock(&pers_lock);
 	pers = find_pers(mddev->level, mddev->clevel);
@@ -6061,9 +6054,6 @@ int md_run(struct mddev *mddev)
 	module_put(pers->owner);
 	md_bitmap_destroy(mddev);
 abort:
-	if (mddev->level != 1 && mddev->level != 10)
-		bioset_exit(&mddev->io_acct_set);
-exit_sync_set:
 	bioset_exit(&mddev->sync_set);
 exit_bio_set:
 	bioset_exit(&mddev->bio_set);
@@ -8596,6 +8586,23 @@ void md_submit_discard_bio(struct mddev *mddev, struct md_rdev *rdev,
 }
 EXPORT_SYMBOL_GPL(md_submit_discard_bio);

+int acct_bioset_init(struct mddev *mddev)
+{
+	int err = 0;
+
+	if (!bioset_initialized(&mddev->io_acct_set))
+		err = bioset_init(&mddev->io_acct_set, BIO_POOL_SIZE,
+				  offsetof(struct md_io_acct, bio_clone), 0);
+	return err;
+}
+EXPORT_SYMBOL_GPL(acct_bioset_init);
+
+void acct_bioset_exit(struct mddev *mddev)
+{
+	bioset_exit(&mddev->io_acct_set);
+}
+EXPORT_SYMBOL_GPL(acct_bioset_exit);
+
 static void md_end_io_acct(struct bio *bio)
 {
 	struct md_io_acct *md_io_acct = bio->bi_private;
diff --git a/drivers/md/md.h b/drivers/md/md.h
index 53ea7a6961de..f1bf3625ef4c 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -721,6 +721,8 @@ extern void md_error(struct mddev *mddev, struct md_rdev *rdev);
 extern void md_finish_reshape(struct mddev *mddev);
 void md_submit_discard_bio(struct mddev *mddev, struct md_rdev *rdev,
 			struct bio *bio, sector_t start, sector_t size);
+int acct_bioset_init(struct mddev *mddev);
+void acct_bioset_exit(struct mddev *mddev);
 void md_account_bio(struct mddev *mddev, struct bio **bio);

 extern bool __must_check md_flush_request(struct mddev *mddev, struct bio *bio);
diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
index 88424d7a6ebd..b59a77b31b90 100644
--- a/drivers/md/raid0.c
+++ b/drivers/md/raid0.c
@@ -369,6 +369,7 @@ static void raid0_free(struct mddev *mddev, void *priv)
 	struct r0conf *conf = priv;

 	free_conf(mddev, conf);
+	acct_bioset_exit(mddev);
 }

 static int raid0_run(struct mddev *mddev)
@@ -383,11 +384,16 @@ static int raid0_run(struct mddev *mddev)
 	if (md_check_no_bitmap(mddev))
 		return -EINVAL;

+	if (acct_bioset_init(mddev)) {
+		pr_err("md/raid0:%s: alloc acct bioset failed.\n", mdname(mddev));
+		return -ENOMEM;
+	}
+
 	/* if private is not null, we are here after takeover */
 	if (mddev->private == NULL) {
 		ret = create_strip_zones(mddev, &conf);
 		if (ret < 0)
-			return ret;
+			goto exit_acct_set;
 		mddev->private = conf;
 	}
 	conf = mddev->private;
@@ -433,6 +439,8 @@ static int raid0_run(struct mddev *mddev)

 free:
 	free_conf(mddev, conf);
+exit_acct_set:
+	acct_bioset_exit(mddev);
 	return ret;
 }

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 1240a5c16af8..13afa8c5cc8a 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -7447,12 +7447,19 @@ static int raid5_run(struct mddev *mddev)
 	struct md_rdev *rdev;
 	struct md_rdev *journal_dev = NULL;
 	sector_t reshape_offset = 0;
-	int i;
+	int i, ret = 0;
 	long long min_offset_diff = 0;
 	int first = 1;

-	if (mddev_init_writes_pending(mddev) < 0)
+	if (acct_bioset_init(mddev)) {
+		pr_err("md/raid456:%s: alloc acct bioset failed.\n", mdname(mddev));
 		return -ENOMEM;
+	}
+
+	if (mddev_init_writes_pending(mddev) < 0) {
+		ret = -ENOMEM;
+		goto exit_acct_set;
+	}

 	if (mddev->recovery_cp != MaxSector)
 		pr_notice("md/raid:%s: not clean -- starting background reconstruction\n",
@@ -7483,7 +7490,8 @@ static int raid5_run(struct mddev *mddev)
 	    (mddev->bitmap_info.offset || mddev->bitmap_info.file)) {
 		pr_notice("md/raid:%s: array cannot have both journal and bitmap\n",
 			  mdname(mddev));
-		return -EINVAL;
+		ret = -EINVAL;
+		goto exit_acct_set;
 	}

 	if (mddev->reshape_position != MaxSector) {
@@ -7508,13 +7516,15 @@ static int raid5_run(struct mddev *mddev)
 		if (journal_dev) {
 			pr_warn("md/raid:%s: don't support reshape with journal - aborting.\n",
 				mdname(mddev));
-			return -EINVAL;
+			ret = -EINVAL;
+			goto exit_acct_set;
 		}

 		if (mddev->new_level != mddev->level) {
 			pr_warn("md/raid:%s: unsupported reshape required - aborting.\n",
 				mdname(mddev));
-			return -EINVAL;
+			ret = -EINVAL;
+			goto exit_acct_set;
 		}
 		old_disks = mddev->raid_disks - mddev->delta_disks;
 		/* reshape_position must be on a new-stripe boundary, and one
@@ -7530,7 +7540,8 @@ static int raid5_run(struct mddev *mddev)
 		if (sector_div(here_new, chunk_sectors * new_data_disks)) {
 			pr_warn("md/raid:%s: reshape_position not on a stripe boundary\n",
 				mdname(mddev));
-			return -EINVAL;
+			ret = -EINVAL;
+			goto exit_acct_set;
 		}
 		reshape_offset = here_new * chunk_sectors;
 		/* here_new is the stripe we will write to */
@@ -7552,7 +7563,8 @@ static int raid5_run(struct mddev *mddev)
 		else if (mddev->ro == 0) {
 			pr_warn("md/raid:%s: in-place reshape must be started in read-only mode - aborting\n",
 				mdname(mddev));
-			return -EINVAL;
+			ret = -EINVAL;
+			goto exit_acct_set;
 		}
 	} else if (mddev->reshape_backwards
 	    ? (here_new * chunk_sectors + min_offset_diff <=
@@ -7562,7 +7574,8 @@ static int raid5_run(struct mddev *mddev)
 		/* Reading from the same stripe as writing to - bad */
 		pr_warn("md/raid:%s: reshape_position too early for auto-recovery - aborting.\n",
 			mdname(mddev));
-		return -EINVAL;
+		ret = -EINVAL;
+		goto exit_acct_set;
 	}
 	pr_debug("md/raid:%s: reshape will continue\n", mdname(mddev));
 	/* OK, we should be able to continue; */
@@ -7586,8 +7599,10 @@ static int raid5_run(struct mddev *mddev)
 	else
 		conf = mddev->private;

-	if (IS_ERR(conf))
-		return PTR_ERR(conf);
+	if (IS_ERR(conf)) {
+		ret = PTR_ERR(conf);
+		goto exit_acct_set;
+	}

 	if (test_bit(MD_HAS_JOURNAL, &mddev->flags)) {
 		if (!journal_dev) {
@@ -7784,7 +7799,10 @@ static int raid5_run(struct mddev *mddev)
 	free_conf(conf);
 	mddev->private = NULL;
 	pr_warn("md/raid:%s: failed to run raid set.\n", mdname(mddev));
-	return -EIO;
+	ret = -EIO;
+exit_acct_set:
+	acct_bioset_exit(mddev);
+	return ret;
 }

 static void raid5_free(struct mddev *mddev, void *priv)
@@ -7792,6 +7810,7 @@ static void raid5_free(struct mddev *mddev, void *priv)
 	struct r5conf *conf = priv;

 	free_conf(conf);
+	acct_bioset_exit(mddev);
 	mddev->to_remove = &raid5_attrs_group;
 }
--
2.31.1
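The ownership model this patch introduces can also be sketched in simplified userspace C. Again a hedged model, not the kernel code: `malloc` stands in for `bioset_init`, the NULL check for `bioset_initialized()`, and `later_step_fails` collapses all of raid5_run()'s many error branches into one.

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified model of patch 2: the personality owns the acct bioset.
 * pers->run allocates it (idempotently) and pers->free releases it, so
 * the resource exists for every path that reaches the raid5 I/O code,
 * including a takeover from raid1. */
struct mddev { void *io_acct_set; };

static int acct_bioset_init(struct mddev *mddev)
{
	if (!mddev->io_acct_set) {	/* mirrors the bioset_initialized() guard */
		mddev->io_acct_set = malloc(16);
		if (!mddev->io_acct_set)
			return -1;	/* -ENOMEM in the kernel */
	}
	return 0;
}

static void acct_bioset_exit(struct mddev *mddev)
{
	free(mddev->io_acct_set);
	mddev->io_acct_set = NULL;
}

/* raid5_run() after the patch: allocate first, then funnel every error
 * through a single exit_acct_set label so nothing leaks. */
static int raid5_run(struct mddev *mddev, int later_step_fails)
{
	int ret = 0;

	if (acct_bioset_init(mddev))
		return -1;
	if (later_step_fails) {
		ret = -1;
		goto exit_acct_set;
	}
	return 0;
exit_acct_set:
	acct_bioset_exit(mddev);
	return ret;
}
```

The idempotent init matters because pers->run is also called on takeover (from level_store), when the bioset may already exist from the previous personality; the single exit label is why every `return -EINVAL` in the diff becomes `ret = -EINVAL; goto exit_acct_set;`.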
* Re: [PATCH 0/2] md: it panics after reshape from raid1 to raid5
@ 2022-01-04 23:30 Xiao Ni
From: Xiao Ni @ 2022-01-04 23:30 UTC (permalink / raw)
To: song; +Cc: guoqing.jiang, ncroxon, linux-raid

Hi Song

Ping. Do I still need to change anything else?

Regards
Xiao

On 2021/12/10 17:31, Xiao Ni wrote:
> Hi all
>
> After reshape from raid1 to raid5, the array can panic when there are in-flight I/Os.
>
> These steps reproduce the problem:
> mdadm -CR /dev/md0 -l1 -n2 /dev/loop0 /dev/loop1
> mdadm --wait /dev/md0
> mkfs.xfs /dev/md0
> mdadm /dev/md0 --grow -l5
> mount /dev/md0 /mnt
>
> These two patches fix this problem.
>
> Xiao Ni (2):
>   Free r0conf memory when register integrity failed
>   Move alloc/free acct bioset in to personality
>
>  drivers/md/md.c    | 27 +++++++++++++++++----------
>  drivers/md/md.h    |  2 ++
>  drivers/md/raid0.c | 28 ++++++++++++++++++++++++----
>  drivers/md/raid5.c | 41 ++++++++++++++++++++++++++++++-----------
>  4 files changed, 73 insertions(+), 25 deletions(-)
* Re: [PATCH 0/2] md: it panics after reshape from raid1 to raid5
@ 2022-01-05 18:59 Song Liu
From: Song Liu @ 2022-01-05 18:59 UTC (permalink / raw)
To: Xiao Ni; +Cc: Guoqing Jiang, Nigel Croxon, linux-raid

On Tue, Jan 4, 2022 at 3:30 PM Xiao Ni <xni@redhat.com> wrote:
>
> Hi Song
>
> Ping. Do I still need to change anything else?

I merged the two patches into one, rewrote the commit log, added
Guoqing's Acked-by, and applied it to md-next.

For future patches, please write the commit log according to the
guidance in Documentation/process/submitting-patches.rst.

Thanks,
Song
* Re: [PATCH 0/2] md: it panics after reshape from raid1 to raid5
@ 2022-01-06 1:53 Xiao Ni
From: Xiao Ni @ 2022-01-06 1:53 UTC (permalink / raw)
To: Song Liu; +Cc: Guoqing Jiang, Nigel Croxon, linux-raid

On Thu, Jan 6, 2022 at 2:59 AM Song Liu <song@kernel.org> wrote:
>
> I merged the two patches into one, rewrote the commit log, added
> Guoqing's Acked-by, and applied it to md-next.
>
> For future patches, please write the commit log according to the
> guidance in Documentation/process/submitting-patches.rst.
>
> Thanks,
> Song

Thanks. I'll read this doc and follow the instructions.

Regards
Xiao
* [PATCH 0/2] md: it panics after reshape from raid1 to raid5
@ 2021-12-09 5:55 Xiao Ni
From: Xiao Ni @ 2021-12-09 5:55 UTC (permalink / raw)
To: song; +Cc: guoqing.jiang, ncroxon, linux-raid

Hi all

After reshape from raid1 to raid5, the array can panic when there are in-flight I/Os.

These steps reproduce the problem:
mdadm -CR /dev/md0 -l1 -n2 /dev/loop0 /dev/loop1
mdadm --wait /dev/md0
mkfs.xfs /dev/md0
mdadm /dev/md0 --grow -l5
mount /dev/md0 /mnt

These two patches fix this problem.

Xiao Ni (2):
  Free r0conf memory when register integrity failed
  Move alloc/free acct bioset in to personality

 drivers/md/md.c    | 27 +++++++++++++++++----------
 drivers/md/md.h    |  2 ++
 drivers/md/raid0.c | 28 ++++++++++++++++++++++++----
 drivers/md/raid5.c | 41 ++++++++++++++++++++++++++++++-----------
 4 files changed, 73 insertions(+), 25 deletions(-)

--
2.31.1
* [PATCH 2/2] md: Move alloc/free acct bioset in to personality
@ 2021-12-09 5:55 Xiao Ni
From: Xiao Ni @ 2021-12-09 5:55 UTC (permalink / raw)
To: song; +Cc: guoqing.jiang, ncroxon, linux-raid

md_run() currently allocates the acct bioset, but only raid0 and raid5
need it. For example, no acct bioset is created for a raid1 array. If
that array is then reshaped to raid0/raid5, the I/O path accesses the
acct bioset after the reshape and can panic on a NULL pointer
dereference.

Move the alloc/free jobs into the personality: pers->run allocates the
acct bioset and pers->free releases it.

Fixes: daee2024715d ("md: check level before create and exit io_acct_set")
Signed-off-by: Xiao Ni <xni@redhat.com>
---
 drivers/md/md.c    | 27 +++++++++++++++++----------
 drivers/md/md.h    |  2 ++
 drivers/md/raid0.c | 10 +++++++++-
 drivers/md/raid5.c | 41 ++++++++++++++++++++++++++++++-----------
 4 files changed, 58 insertions(+), 22 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index e8666bdc0d28..0fc34a05a655 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -5878,13 +5878,6 @@ int md_run(struct mddev *mddev)
 		if (err)
 			goto exit_bio_set;
 	}
-	if (mddev->level != 1 && mddev->level != 10 &&
-	    !bioset_initialized(&mddev->io_acct_set)) {
-		err = bioset_init(&mddev->io_acct_set, BIO_POOL_SIZE,
-				  offsetof(struct md_io_acct, bio_clone), 0);
-		if (err)
-			goto exit_sync_set;
-	}

 	spin_lock(&pers_lock);
 	pers = find_pers(mddev->level, mddev->clevel);
@@ -6061,9 +6054,6 @@ int md_run(struct mddev *mddev)
 	module_put(pers->owner);
 	md_bitmap_destroy(mddev);
 abort:
-	if (mddev->level != 1 && mddev->level != 10)
-		bioset_exit(&mddev->io_acct_set);
-exit_sync_set:
 	bioset_exit(&mddev->sync_set);
 exit_bio_set:
 	bioset_exit(&mddev->bio_set);
@@ -8596,6 +8586,23 @@ void md_submit_discard_bio(struct mddev *mddev, struct md_rdev *rdev,
 }
 EXPORT_SYMBOL_GPL(md_submit_discard_bio);

+int acct_bioset_init(struct mddev *mddev)
+{
+	int err = 0;
+
+	if (!bioset_initialized(&mddev->io_acct_set))
+		err = bioset_init(&mddev->io_acct_set, BIO_POOL_SIZE,
+				  offsetof(struct md_io_acct, bio_clone), 0);
+	return err;
+}
+EXPORT_SYMBOL_GPL(acct_bioset_init);
+
+void acct_bioset_exit(struct mddev *mddev)
+{
+	bioset_exit(&mddev->io_acct_set);
+}
+EXPORT_SYMBOL_GPL(acct_bioset_exit);
+
 static void md_end_io_acct(struct bio *bio)
 {
 	struct md_io_acct *md_io_acct = bio->bi_private;
diff --git a/drivers/md/md.h b/drivers/md/md.h
index 53ea7a6961de..f1bf3625ef4c 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -721,6 +721,8 @@ extern void md_error(struct mddev *mddev, struct md_rdev *rdev);
 extern void md_finish_reshape(struct mddev *mddev);
 void md_submit_discard_bio(struct mddev *mddev, struct md_rdev *rdev,
 			struct bio *bio, sector_t start, sector_t size);
+int acct_bioset_init(struct mddev *mddev);
+void acct_bioset_exit(struct mddev *mddev);
 void md_account_bio(struct mddev *mddev, struct bio **bio);

 extern bool __must_check md_flush_request(struct mddev *mddev, struct bio *bio);
diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
index 3fa47df1c60e..2391a4a63b4d 100644
--- a/drivers/md/raid0.c
+++ b/drivers/md/raid0.c
@@ -371,11 +371,16 @@ static int raid0_run(struct mddev *mddev)
 	if (md_check_no_bitmap(mddev))
 		return -EINVAL;

+	if (acct_bioset_init(mddev)) {
+		pr_err("md/raid0:%s: alloc acct bioset failed.\n", mdname(mddev));
+		return -ENOMEM;
+	}
+
 	/* if private is not null, we are here after takeover */
 	if (mddev->private == NULL) {
 		ret = create_strip_zones(mddev, &conf);
 		if (ret < 0)
-			return ret;
+			goto exit_acct_set;
 		mddev->private = conf;
 	}
 	conf = mddev->private;
@@ -421,6 +426,8 @@ static int raid0_run(struct mddev *mddev)

 free:
 	free_conf(conf);
+exit_acct_set:
+	acct_bioset_exit(mddev);
 	return ret;
 }

@@ -436,6 +443,7 @@ static void raid0_free(struct mddev *mddev, void *priv)
 	struct r0conf *conf = priv;

 	free_conf(conf);
+	acct_bioset_exit(mddev);
 }

 static void raid0_handle_discard(struct mddev *mddev, struct bio *bio)
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 1240a5c16af8..13afa8c5cc8a 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -7447,12 +7447,19 @@ static int raid5_run(struct mddev *mddev)
 	struct md_rdev *rdev;
 	struct md_rdev *journal_dev = NULL;
 	sector_t reshape_offset = 0;
-	int i;
+	int i, ret = 0;
 	long long min_offset_diff = 0;
 	int first = 1;

-	if (mddev_init_writes_pending(mddev) < 0)
+	if (acct_bioset_init(mddev)) {
+		pr_err("md/raid456:%s: alloc acct bioset failed.\n", mdname(mddev));
 		return -ENOMEM;
+	}
+
+	if (mddev_init_writes_pending(mddev) < 0) {
+		ret = -ENOMEM;
+		goto exit_acct_set;
+	}

 	if (mddev->recovery_cp != MaxSector)
 		pr_notice("md/raid:%s: not clean -- starting background reconstruction\n",
@@ -7483,7 +7490,8 @@ static int raid5_run(struct mddev *mddev)
 	    (mddev->bitmap_info.offset || mddev->bitmap_info.file)) {
 		pr_notice("md/raid:%s: array cannot have both journal and bitmap\n",
 			  mdname(mddev));
-		return -EINVAL;
+		ret = -EINVAL;
+		goto exit_acct_set;
 	}

 	if (mddev->reshape_position != MaxSector) {
@@ -7508,13 +7516,15 @@ static int raid5_run(struct mddev *mddev)
 		if (journal_dev) {
 			pr_warn("md/raid:%s: don't support reshape with journal - aborting.\n",
 				mdname(mddev));
-			return -EINVAL;
+			ret = -EINVAL;
+			goto exit_acct_set;
 		}

 		if (mddev->new_level != mddev->level) {
 			pr_warn("md/raid:%s: unsupported reshape required - aborting.\n",
 				mdname(mddev));
-			return -EINVAL;
+			ret = -EINVAL;
+			goto exit_acct_set;
 		}
 		old_disks = mddev->raid_disks - mddev->delta_disks;
 		/* reshape_position must be on a new-stripe boundary, and one
@@ -7530,7 +7540,8 @@ static int raid5_run(struct mddev *mddev)
 		if (sector_div(here_new, chunk_sectors * new_data_disks)) {
 			pr_warn("md/raid:%s: reshape_position not on a stripe boundary\n",
 				mdname(mddev));
-			return -EINVAL;
+			ret = -EINVAL;
+			goto exit_acct_set;
 		}
 		reshape_offset = here_new * chunk_sectors;
 		/* here_new is the stripe we will write to */
@@ -7552,7 +7563,8 @@ static int raid5_run(struct mddev *mddev)
 		else if (mddev->ro == 0) {
 			pr_warn("md/raid:%s: in-place reshape must be started in read-only mode - aborting\n",
 				mdname(mddev));
-			return -EINVAL;
+			ret = -EINVAL;
+			goto exit_acct_set;
 		}
 	} else if (mddev->reshape_backwards
 	    ? (here_new * chunk_sectors + min_offset_diff <=
@@ -7562,7 +7574,8 @@ static int raid5_run(struct mddev *mddev)
 		/* Reading from the same stripe as writing to - bad */
 		pr_warn("md/raid:%s: reshape_position too early for auto-recovery - aborting.\n",
 			mdname(mddev));
-		return -EINVAL;
+		ret = -EINVAL;
+		goto exit_acct_set;
 	}
 	pr_debug("md/raid:%s: reshape will continue\n", mdname(mddev));
 	/* OK, we should be able to continue; */
@@ -7586,8 +7599,10 @@ static int raid5_run(struct mddev *mddev)
 	else
 		conf = mddev->private;

-	if (IS_ERR(conf))
-		return PTR_ERR(conf);
+	if (IS_ERR(conf)) {
+		ret = PTR_ERR(conf);
+		goto exit_acct_set;
+	}

 	if (test_bit(MD_HAS_JOURNAL, &mddev->flags)) {
 		if (!journal_dev) {
@@ -7784,7 +7799,10 @@ static int raid5_run(struct mddev *mddev)
 	free_conf(conf);
 	mddev->private = NULL;
 	pr_warn("md/raid:%s: failed to run raid set.\n", mdname(mddev));
-	return -EIO;
+	ret = -EIO;
+exit_acct_set:
+	acct_bioset_exit(mddev);
+	return ret;
 }

 static void raid5_free(struct mddev *mddev, void *priv)
@@ -7792,6 +7810,7 @@ static void raid5_free(struct mddev *mddev, void *priv)
 	struct r5conf *conf = priv;

 	free_conf(conf);
+	acct_bioset_exit(mddev);
 	mddev->to_remove = &raid5_attrs_group;
 }
--
2.31.1
* Re: [PATCH 2/2] md: Move alloc/free acct bioset in to personality
@ 2021-12-10 1:30 Guoqing Jiang
From: Guoqing Jiang @ 2021-12-10 1:30 UTC (permalink / raw)
To: Xiao Ni, song; +Cc: ncroxon, linux-raid

On 12/9/21 1:55 PM, Xiao Ni wrote:
> md_run() currently allocates the acct bioset, but only raid0 and raid5
> need it. For example, no acct bioset is created for a raid1 array. If
> that array is then reshaped to raid0/raid5, the I/O path accesses the
> acct bioset after the reshape and can panic on a NULL pointer
> dereference.

Thanks, I think the previous commit didn't consider the reshape
scenario. Could you paste the relevant info into the commit header?

> Move the alloc/free jobs into the personality: pers->run allocates the
> acct bioset and pers->free releases it.

In the reshape case, the caller of pers->run is level_store.

Acked-by: Guoqing Jiang <guoqing.jiang@linux.dev>

Thanks,
Guoqing
end of thread, other threads: [~2022-01-06 1:53 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --

2021-12-10  9:31 [PATCH 0/2] md: it panics after reshape from raid1 to raid5 Xiao Ni
2021-12-10  9:31 ` [PATCH V2 1/2] md/raid0: Free r0conf memory when register integrity failed Xiao Ni
2021-12-10  9:31 ` [PATCH 2/2] md: Move alloc/free acct bioset in to personality Xiao Ni
2022-01-04 23:30 ` [PATCH 0/2] md: it panics after reshape from raid1 to raid5 Xiao Ni
2022-01-05 18:59   ` Song Liu
2022-01-06  1:53     ` Xiao Ni

-- strict thread matches above, loose matches on Subject: below --

2021-12-09  5:55 [PATCH 0/2] md: it panics after reshape from raid1 to raid5 Xiao Ni
2021-12-09  5:55 ` [PATCH 2/2] md: Move alloc/free acct bioset in to personality Xiao Ni
2021-12-10  1:30   ` Guoqing Jiang