* [PATCH v3 0/2] md: simplify md_seq_ops
@ 2023-09-27 6:12 Yu Kuai
2023-09-27 6:12 ` [PATCH v3 1/2] md: factor out a helper from mddev_put() Yu Kuai
` (2 more replies)
0 siblings, 3 replies; 9+ messages in thread
From: Yu Kuai @ 2023-09-27 6:12 UTC (permalink / raw)
To: mariusz.tkaczyk, xni, song
Cc: linux-raid, linux-kernel, yukuai3, yukuai1, yi.zhang, yangerkun
From: Yu Kuai <yukuai3@huawei.com>
Changes in v3:
- rework patch 1, because the condition is confusing
Changes in v2:
- don't hold the lock while showing mddev in md_seq_show
- add patch 1
- add commit message
Yu Kuai (2):
md: factor out a helper from mddev_put()
md: simplify md_seq_ops
drivers/md/md.c | 129 +++++++++++++++---------------------------------
1 file changed, 39 insertions(+), 90 deletions(-)
--
2.39.2

* [PATCH v3 1/2] md: factor out a helper from mddev_put()
From: Yu Kuai @ 2023-09-27  6:12 UTC (permalink / raw)
To: mariusz.tkaczyk, xni, song
Cc: linux-raid, linux-kernel, yukuai3, yukuai1, yi.zhang, yangerkun

From: Yu Kuai <yukuai3@huawei.com>

There are no functional changes; this prepares for simplifying md_seq_ops
in the next patch.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 drivers/md/md.c | 29 +++++++++++++++++------------
 1 file changed, 17 insertions(+), 12 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 10cb4dfbf4ae..73782cafad4e 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -616,23 +616,28 @@ static inline struct mddev *mddev_get(struct mddev *mddev)
 
 static void mddev_delayed_delete(struct work_struct *ws);
 
+static void __mddev_put(struct mddev *mddev)
+{
+        if (mddev->raid_disks || !list_empty(&mddev->disks) ||
+            mddev->ctime || mddev->hold_active)
+                return;
+
+        /* Array is not configured at all, and not held active, so destroy it */
+        set_bit(MD_DELETED, &mddev->flags);
+
+        /*
+         * Call queue_work inside the spinlock so that flush_workqueue() after
+         * mddev_find will succeed in waiting for the work to be done.
+         */
+        queue_work(md_misc_wq, &mddev->del_work);
+}
+
 void mddev_put(struct mddev *mddev)
 {
         if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
                 return;
-        if (!mddev->raid_disks && list_empty(&mddev->disks) &&
-            mddev->ctime == 0 && !mddev->hold_active) {
-                /* Array is not configured at all, and not held active,
-                 * so destroy it */
-                set_bit(MD_DELETED, &mddev->flags);
-                /*
-                 * Call queue_work inside the spinlock so that
-                 * flush_workqueue() after mddev_find will succeed in waiting
-                 * for the work to be done.
-                 */
-                queue_work(md_misc_wq, &mddev->del_work);
-        }
+        __mddev_put(mddev);
         spin_unlock(&all_mddevs_lock);
 }
-- 
2.39.2

* [PATCH v3 2/2] md: simplify md_seq_ops
From: Yu Kuai @ 2023-09-27  6:12 UTC (permalink / raw)
To: mariusz.tkaczyk, xni, song
Cc: linux-raid, linux-kernel, yukuai3, yukuai1, yi.zhang, yangerkun

From: Yu Kuai <yukuai3@huawei.com>

Before this patch, the implementation is hacky and hard to understand:

1) md_seq_start sets pos to 1;
2) md_seq_show finds pos is 1, then prints Personalities;
3) md_seq_next finds pos is 1, then updates pos to the first mddev;
4) md_seq_show finds pos is not 1 or 2, shows the mddev;
5) md_seq_next finds pos is not 1 or 2, updates pos to the next mddev;
6) loop 4-5 until the last mddev, then md_seq_next updates pos to 2;
7) md_seq_show finds pos is 2, then prints unused devices;
8) md_seq_next finds pos is 2, stops;

This patch removes the magic values and uses seq_list_start/next/stop()
directly, moving the printing of "Personalities" to md_seq_start() and
"unused devices" to md_seq_stop():

1) md_seq_start prints Personalities, and then sets pos to the first mddev;
2) md_seq_show shows the mddev;
3) md_seq_next updates pos to the next mddev;
4) loop 2-3 until the last mddev;
5) md_seq_stop prints unused devices;

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 drivers/md/md.c | 100 +++++++++++-------------------------------------
 1 file changed, 22 insertions(+), 78 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 73782cafad4e..0b83a406e797 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -8213,105 +8213,46 @@ static int status_resync(struct seq_file *seq, struct mddev *mddev)
 }
 
 static void *md_seq_start(struct seq_file *seq, loff_t *pos)
+        __acquires(&all_mddevs_lock)
 {
-        struct list_head *tmp;
-        loff_t l = *pos;
-        struct mddev *mddev;
+        struct md_personality *pers;
 
-        if (l == 0x10000) {
-                ++*pos;
-                return (void *)2;
-        }
-        if (l > 0x10000)
-                return NULL;
-        if (!l--)
-                /* header */
-                return (void*)1;
+        seq_puts(seq, "Personalities : ");
+        spin_lock(&pers_lock);
+        list_for_each_entry(pers, &pers_list, list)
+                seq_printf(seq, "[%s] ", pers->name);
+
+        spin_unlock(&pers_lock);
+        seq_puts(seq, "\n");
+        seq->poll_event = atomic_read(&md_event_count);
 
         spin_lock(&all_mddevs_lock);
-        list_for_each(tmp,&all_mddevs)
-                if (!l--) {
-                        mddev = list_entry(tmp, struct mddev, all_mddevs);
-                        if (!mddev_get(mddev))
-                                continue;
-                        spin_unlock(&all_mddevs_lock);
-                        return mddev;
-                }
-        spin_unlock(&all_mddevs_lock);
-        if (!l--)
-                return (void*)2;/* tail */
-        return NULL;
+
+        return seq_list_start(&all_mddevs, *pos);
 }
 
 static void *md_seq_next(struct seq_file *seq, void *v, loff_t *pos)
 {
-        struct list_head *tmp;
-        struct mddev *next_mddev, *mddev = v;
-        struct mddev *to_put = NULL;
-
-        ++*pos;
-        if (v == (void*)2)
-                return NULL;
-
-        spin_lock(&all_mddevs_lock);
-        if (v == (void*)1) {
-                tmp = all_mddevs.next;
-        } else {
-                to_put = mddev;
-                tmp = mddev->all_mddevs.next;
-        }
-
-        for (;;) {
-                if (tmp == &all_mddevs) {
-                        next_mddev = (void*)2;
-                        *pos = 0x10000;
-                        break;
-                }
-                next_mddev = list_entry(tmp, struct mddev, all_mddevs);
-                if (mddev_get(next_mddev))
-                        break;
-                mddev = next_mddev;
-                tmp = mddev->all_mddevs.next;
-        }
-        spin_unlock(&all_mddevs_lock);
-
-        if (to_put)
-                mddev_put(to_put);
-        return next_mddev;
-
+        return seq_list_next(v, &all_mddevs, pos);
 }
 
 static void md_seq_stop(struct seq_file *seq, void *v)
+        __releases(&all_mddevs_lock)
 {
-        struct mddev *mddev = v;
-
-        if (mddev && v != (void*)1 && v != (void*)2)
-                mddev_put(mddev);
+        status_unused(seq);
+        spin_unlock(&all_mddevs_lock);
 }
 
 static int md_seq_show(struct seq_file *seq, void *v)
 {
-        struct mddev *mddev = v;
+        struct mddev *mddev = list_entry(v, struct mddev, all_mddevs);
         sector_t sectors;
         struct md_rdev *rdev;
 
-        if (v == (void*)1) {
-                struct md_personality *pers;
-                seq_printf(seq, "Personalities : ");
-                spin_lock(&pers_lock);
-                list_for_each_entry(pers, &pers_list, list)
-                        seq_printf(seq, "[%s] ", pers->name);
-
-                spin_unlock(&pers_lock);
-                seq_printf(seq, "\n");
-                seq->poll_event = atomic_read(&md_event_count);
+        if (!mddev_get(mddev))
                 return 0;
-        }
-        if (v == (void*)2) {
-                status_unused(seq);
-                return 0;
-        }
+
+        spin_unlock(&all_mddevs_lock);
         spin_lock(&mddev->lock);
         if (mddev->pers || mddev->raid_disks || !list_empty(&mddev->disks)) {
                 seq_printf(seq, "%s : %sactive", mdname(mddev),
@@ -8382,6 +8323,9 @@ static int md_seq_show(struct seq_file *seq, void *v)
                 seq_printf(seq, "\n");
         }
         spin_unlock(&mddev->lock);
+        spin_lock(&all_mddevs_lock);
+        if (atomic_dec_and_test(&mddev->active))
+                __mddev_put(mddev);
 
         return 0;
 }
-- 
2.39.2

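For orientation, the two state machines described in the commit message above are driven
through a struct seq_operations. In drivers/md/md.c the wiring looks roughly like the
following sketch (shown here only for context; the exact registration code in the tree may
differ slightly):

/*
 * Rough sketch of how the four callbacks plug into the seq_file core.
 * A read of /proc/mdstat drives: start -> show -> next -> show -> ... -> stop.
 */
static const struct seq_operations md_seq_ops = {
        .start  = md_seq_start, /* take all_mddevs_lock, position the iterator at *pos */
        .next   = md_seq_next,  /* advance to the next mddev and bump *pos */
        .stop   = md_seq_stop,  /* drop all_mddevs_lock */
        .show   = md_seq_show,  /* emit one record for the current iterator value */
};
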
* Re: [PATCH v3 2/2] md: simplify md_seq_ops
From: Song Liu @ 2024-01-08 23:38 UTC (permalink / raw)
To: Yu Kuai
Cc: mariusz.tkaczyk, xni, linux-raid, linux-kernel, yukuai3, yi.zhang, yangerkun

On Tue, Sep 26, 2023 at 11:19 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>
> From: Yu Kuai <yukuai3@huawei.com>
>
> Before this patch, the implementation is hacky and hard to understand:
>
> 1) md_seq_start sets pos to 1;
> 2) md_seq_show finds pos is 1, then prints Personalities;
> 3) md_seq_next finds pos is 1, then updates pos to the first mddev;
> 4) md_seq_show finds pos is not 1 or 2, shows the mddev;
> 5) md_seq_next finds pos is not 1 or 2, updates pos to the next mddev;
> 6) loop 4-5 until the last mddev, then md_seq_next updates pos to 2;
> 7) md_seq_show finds pos is 2, then prints unused devices;
> 8) md_seq_next finds pos is 2, stops;
>
> This patch removes the magic values and uses seq_list_start/next/stop()
> directly, moving the printing of "Personalities" to md_seq_start() and
> "unused devices" to md_seq_stop():
>
> 1) md_seq_start prints Personalities, and then sets pos to the first mddev;
> 2) md_seq_show shows the mddev;
> 3) md_seq_next updates pos to the next mddev;
> 4) loop 2-3 until the last mddev;
> 5) md_seq_stop prints unused devices;
>
> Signed-off-by: Yu Kuai <yukuai3@huawei.com>

Just realized this introduced a behavior change:

When there are no md devices, before this patch we have

[root@eth50-1 ~]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
unused devices: <none>

After this patch, "cat /proc/mdstat" returns nothing. This causes
some confusion for users who want to read the "Personalities" line,
for example, the mdadm test suite reads it.

I haven't figured out the best fix yet.

Thanks,
Song

* Re: [PATCH v3 2/2] md: simplify md_seq_ops
From: Yu Kuai @ 2024-01-09  1:21 UTC (permalink / raw)
To: Song Liu, Yu Kuai
Cc: mariusz.tkaczyk, xni, linux-raid, linux-kernel, yi.zhang, yangerkun, yukuai (C)

Hi,

On 2024/01/09 7:38, Song Liu wrote:
> On Tue, Sep 26, 2023 at 11:19 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>
>> From: Yu Kuai <yukuai3@huawei.com>
>>
>> [...]
>>
>> Signed-off-by: Yu Kuai <yukuai3@huawei.com>
>
> Just realized this introduced a behavior change:
>
> When there are no md devices, before this patch we have
>
> [root@eth50-1 ~]# cat /proc/mdstat
> Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
> unused devices: <none>
>
> After this patch, "cat /proc/mdstat" returns nothing. This causes
> some confusion for users who want to read the "Personalities" line,
> for example, the mdadm test suite reads it.
>
> I haven't figured out the best fix yet.

Yes, that's a problem. And after reviewing seq_read_iter() in detail, I
realize that I also can't use seq_printf() in m->op->start() directly,
because if the seq buffer overflows, md_seq_start() can be called more
than once.

I'll fix these problems soon.

Thanks,
Kuai

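The constraint Yu Kuai describes follows from the shape of the read loop in fs/seq_file.c.
Very roughly, with error handling and buffer management omitted (a simplified sketch for
illustration, not the actual kernel source), seq_read_iter() behaves like this:

        p = m->op->start(m, &m->index);          /* runs once per read() call... */
        while (p) {
                err = m->op->show(m, p);
                if (seq_has_overflowed(m)) {     /* record did not fit in m->buf */
                        m->op->stop(m, p);
                        /* the core doubles the buffer, then... */
                        p = m->op->start(m, &m->index);  /* ...calls ->start() again */
                        continue;
                }
                p = m->op->next(m, p, &m->index);
        }
        m->op->stop(m, p);

Because ->start() runs once per read() syscall and again on every overflow retry, output
written from it is not guaranteed to be emitted exactly once per open file; per-record
output is only safe from ->show(), whose overflow is the one case the core knows how to
retry.
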
* Re: [PATCH v3 2/2] md: simplify md_seq_ops
From: Yu Kuai @ 2024-01-09  7:48 UTC (permalink / raw)
To: Yu Kuai, Song Liu
Cc: mariusz.tkaczyk, xni, linux-raid, linux-kernel, yi.zhang, yangerkun, yukuai (C)

Hi,

On 2024/01/09 9:21, Yu Kuai wrote:
> Hi,
>
> On 2024/01/09 7:38, Song Liu wrote:
>> On Tue, Sep 26, 2023 at 11:19 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>> [...]
>>
>> Just realized this introduced a behavior change:
>>
>> When there are no md devices, before this patch we have
>>
>> [root@eth50-1 ~]# cat /proc/mdstat
>> Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
>> unused devices: <none>
>>
>> After this patch, "cat /proc/mdstat" returns nothing. This causes
>> some confusion for users who want to read the "Personalities" line,
>> for example, the mdadm test suite reads it.
>>
>> I haven't figured out the best fix yet.
>
> Yes, that's a problem. And after reviewing seq_read_iter() in detail, I
> realize that I also can't use seq_printf() in m->op->start() directly,
> because if the seq buffer overflows, md_seq_start() can be called more
> than once.
>
> I'll fix these problems soon.

How about the following patch (already tested)?

Thanks,
Kuai

diff --git a/drivers/md/md.c b/drivers/md/md.c
index e351e6c51cc7..289d3d89e73d 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -8135,6 +8135,19 @@ static void status_unused(struct seq_file *seq)
         seq_printf(seq, "\n");
 }
 
+static void status_personalities(struct seq_file *seq)
+{
+        struct md_personality *pers;
+
+        seq_puts(seq, "Personalities : ");
+        spin_lock(&pers_lock);
+        list_for_each_entry(pers, &pers_list, list)
+                seq_printf(seq, "[%s] ", pers->name);
+
+        spin_unlock(&pers_lock);
+        seq_puts(seq, "\n");
+}
+
 static int status_resync(struct seq_file *seq, struct mddev *mddev)
 {
         sector_t max_sectors, resync, res;
@@ -8273,43 +8286,53 @@ static int status_resync(struct seq_file *seq, struct mddev *mddev)
         return 1;
 }
 
+#define MDDEV_NONE (void *)1
+
 static void *md_seq_start(struct seq_file *seq, loff_t *pos)
         __acquires(&all_mddevs_lock)
 {
-        struct md_personality *pers;
-
-        seq_puts(seq, "Personalities : ");
-        spin_lock(&pers_lock);
-        list_for_each_entry(pers, &pers_list, list)
-                seq_printf(seq, "[%s] ", pers->name);
-
-        spin_unlock(&pers_lock);
-        seq_puts(seq, "\n");
         seq->poll_event = atomic_read(&md_event_count);
-
         spin_lock(&all_mddevs_lock);
 
-        return seq_list_start(&all_mddevs, *pos);
+        if (!list_empty(&all_mddevs))
+                return seq_list_start(&all_mddevs, *pos);
+        else if (*pos == 0)
+                return MDDEV_NONE;
+        else
+                return NULL;
 }
 
 static void *md_seq_next(struct seq_file *seq, void *v, loff_t *pos)
 {
+        if (v == MDDEV_NONE) {
+                ++*pos;
+                return NULL;
+        }
+
         return seq_list_next(v, &all_mddevs, pos);
 }
 
 static void md_seq_stop(struct seq_file *seq, void *v)
         __releases(&all_mddevs_lock)
 {
-        status_unused(seq);
         spin_unlock(&all_mddevs_lock);
 }
 
 static int md_seq_show(struct seq_file *seq, void *v)
 {
-        struct mddev *mddev = list_entry(v, struct mddev, all_mddevs);
+        struct mddev *mddev;
         sector_t sectors;
         struct md_rdev *rdev;
 
+        if (v == MDDEV_NONE) {
+                status_personalities(seq);
+                status_unused(seq);
+                return 0;
+        }
+
+        mddev = list_entry(v, struct mddev, all_mddevs);
+        if (mddev == list_first_entry(&all_mddevs, struct mddev, all_mddevs))
+                status_personalities(seq);
         if (!mddev_get(mddev))
                 return 0;
 
@@ -8385,6 +8408,10 @@ static int md_seq_show(struct seq_file *seq, void *v)
         }
         spin_unlock(&mddev->lock);
         spin_lock(&all_mddevs_lock);
+
+        if (mddev == list_last_entry(&all_mddevs, struct mddev, all_mddevs))
+                status_unused(seq);
+
         if (atomic_dec_and_test(&mddev->active))
                 __mddev_put(mddev);

* Re: [PATCH v3 2/2] md: simplify md_seq_ops
From: Song Liu @ 2024-01-09  8:12 UTC (permalink / raw)
To: Yu Kuai
Cc: mariusz.tkaczyk, xni, linux-raid, linux-kernel, yi.zhang, yangerkun, yukuai (C)

On Mon, Jan 8, 2024 at 11:48 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>
> Hi,
>
> On 2024/01/09 9:21, Yu Kuai wrote:
> > Hi,
> >
> > On 2024/01/09 7:38, Song Liu wrote:
> >> On Tue, Sep 26, 2023 at 11:19 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
[...]
> How about the following patch (already tested)?
>
> [...]

I think something like the following is the right way to do this.

Thanks,
Song

diff --git i/drivers/md/md.c w/drivers/md/md.c
index 38a6767c65b1..14044febe009 100644
--- i/drivers/md/md.c
+++ w/drivers/md/md.c
@@ -8215,20 +8215,8 @@ static int status_resync(struct seq_file *seq, struct mddev *mddev)
 static void *md_seq_start(struct seq_file *seq, loff_t *pos)
         __acquires(&all_mddevs_lock)
 {
-        struct md_personality *pers;
-
-        seq_puts(seq, "Personalities : ");
-        spin_lock(&pers_lock);
-        list_for_each_entry(pers, &pers_list, list)
-                seq_printf(seq, "[%s] ", pers->name);
-
-        spin_unlock(&pers_lock);
-        seq_puts(seq, "\n");
-        seq->poll_event = atomic_read(&md_event_count);
-
         spin_lock(&all_mddevs_lock);
-
-        return seq_list_start(&all_mddevs, *pos);
+        return seq_list_start_head(&all_mddevs, *pos);
 }
 
 static void *md_seq_next(struct seq_file *seq, void *v, loff_t *pos)
@@ -8243,12 +8231,31 @@ static void md_seq_stop(struct seq_file *seq, void *v)
         spin_unlock(&all_mddevs_lock);
 }
 
+static void md_seq_print_header(struct seq_file *seq)
+{
+        struct md_personality *pers;
+
+        seq_puts(seq, "Personalities : ");
+        spin_lock(&pers_lock);
+        list_for_each_entry(pers, &pers_list, list)
+                seq_printf(seq, "[%s] ", pers->name);
+
+        spin_unlock(&pers_lock);
+        seq_puts(seq, "\n");
+        seq->poll_event = atomic_read(&md_event_count);
+}
+
 static int md_seq_show(struct seq_file *seq, void *v)
 {
         struct mddev *mddev = list_entry(v, struct mddev, all_mddevs);
         sector_t sectors;
         struct md_rdev *rdev;
 
+        if (v == &all_mddevs) {
+                md_seq_print_header(seq);
+                return 0;
+        }
+
         if (!mddev_get(mddev))
                 return 0;

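For reference, the list helpers Song builds on here are tiny; their behaviour is roughly
the following (paraphrased from fs/seq_file.c -- see the tree for the authoritative
definitions), which is what makes v == &all_mddevs usable as a "header position" token in
->show():

struct list_head *seq_list_start_head(struct list_head *head, loff_t pos)
{
        if (!pos)
                return head;                    /* pos 0: the list head itself */

        return seq_list_start(head, pos - 1);   /* otherwise the (pos-1)-th real entry */
}

struct list_head *seq_list_next(void *v, struct list_head *head, loff_t *ppos)
{
        struct list_head *lh = ((struct list_head *)v)->next;

        ++*ppos;
        return lh == head ? NULL : lh;          /* stop after wrapping back to the head */
}

So ->show() sees the bare list head exactly once, at position 0, before any mddev -- and
it sees it even when all_mddevs is empty, which is exactly the case that lost the
"Personalities" line.
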
* Re: [PATCH v3 2/2] md: simplify md_seq_ops
From: Yu Kuai @ 2024-01-09  8:28 UTC (permalink / raw)
To: Song Liu, Yu Kuai
Cc: mariusz.tkaczyk, xni, linux-raid, linux-kernel, yi.zhang, yangerkun, yukuai (C)

Hi,

On 2024/01/09 16:12, Song Liu wrote:
> On Mon, Jan 8, 2024 at 11:48 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>
>> Hi,
>>
>> On 2024/01/09 9:21, Yu Kuai wrote:
>>> Hi,
>>>
>>> On 2024/01/09 7:38, Song Liu wrote:
>>>> On Tue, Sep 26, 2023 at 11:19 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
> [...]
>> How about the following patch (already tested)?
>>
>> [...]
>
> I think something like the following is the right way to do this.
>
> Thanks,
> Song
>
> diff --git i/drivers/md/md.c w/drivers/md/md.c
> index 38a6767c65b1..14044febe009 100644
> --- i/drivers/md/md.c
> +++ w/drivers/md/md.c
> @@ -8215,20 +8215,8 @@ static int status_resync(struct seq_file *seq, struct mddev *mddev)
>  static void *md_seq_start(struct seq_file *seq, loff_t *pos)
>          __acquires(&all_mddevs_lock)
>  {
> -        struct md_personality *pers;
> -
> -        seq_puts(seq, "Personalities : ");
> -        spin_lock(&pers_lock);
> -        list_for_each_entry(pers, &pers_list, list)
> -                seq_printf(seq, "[%s] ", pers->name);
> -
> -        spin_unlock(&pers_lock);
> -        seq_puts(seq, "\n");
> -        seq->poll_event = atomic_read(&md_event_count);
> -
>          spin_lock(&all_mddevs_lock);
> -
> -        return seq_list_start(&all_mddevs, *pos);
> +        return seq_list_start_head(&all_mddevs, *pos);

Yes, this is good. I didn't notice the seq_list_start_head() API.

>  }
>
>  static void *md_seq_next(struct seq_file *seq, void *v, loff_t *pos)
> @@ -8243,12 +8231,31 @@ static void md_seq_stop(struct seq_file *seq, void *v)
>          spin_unlock(&all_mddevs_lock);
>  }
>
> +static void md_seq_print_header(struct seq_file *seq)
> +{
> +        struct md_personality *pers;
> +
> +        seq_puts(seq, "Personalities : ");
> +        spin_lock(&pers_lock);
> +        list_for_each_entry(pers, &pers_list, list)
> +                seq_printf(seq, "[%s] ", pers->name);
> +
> +        spin_unlock(&pers_lock);
> +        seq_puts(seq, "\n");
> +        seq->poll_event = atomic_read(&md_event_count);
> +}
> +
>  static int md_seq_show(struct seq_file *seq, void *v)
>  {
>          struct mddev *mddev = list_entry(v, struct mddev, all_mddevs);
>          sector_t sectors;
>          struct md_rdev *rdev;
>
> +        if (v == &all_mddevs) {
> +                md_seq_print_header(seq);

And I will still move status_unused() to md_seq_show(), because
seq_read_iter() only handles the case where seq_printf() overflowed from
md_seq_show(), not from md_seq_start()/stop().

Thanks,
Kuai

> +                return 0;
> +        }
> +
>          if (!mddev_get(mddev))
>                  return 0;

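Following that reasoning, the "unused devices" trailer also has to be emitted from
->show(); a minimal sketch of the idea (a hypothetical illustration of the point above,
not the final patch) is to print it when the iterator sits on the last array:

        /* hypothetical sketch: print the trailer from ->show(), the only
         * callback whose overflowed output the seq_file core retries */
        if (list_is_last(&mddev->all_mddevs, &all_mddevs))
                status_unused(seq);
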
* Re: [PATCH v3 0/2] md: simplify md_seq_ops
From: Song Liu @ 2023-09-28  6:37 UTC (permalink / raw)
To: Yu Kuai
Cc: mariusz.tkaczyk, xni, linux-raid, linux-kernel, yukuai3, yi.zhang, yangerkun

On Tue, Sep 26, 2023 at 11:19 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>
> From: Yu Kuai <yukuai3@huawei.com>
>
> Changes in v3:
> - rework patch 1, because the condition is confusing
>
> Changes in v2:
> - don't hold the lock while showing mddev in md_seq_show
> - add patch 1
> - add commit message

Applied to md-next. Thanks!

Song

> Yu Kuai (2):
>   md: factor out a helper from mddev_put()
>   md: simplify md_seq_ops
>
>  drivers/md/md.c | 129 +++++++++++++++---------------------------------
>  1 file changed, 39 insertions(+), 90 deletions(-)
>
> --
> 2.39.2