linux-ide.vger.kernel.org archive mirror
* [RFC DEPT v16] Question for dept.
@ 2025-05-30 11:27 Yeo Reum Yun
  2025-06-02  2:59 ` Byungchul Park
  0 siblings, 1 reply; 2+ messages in thread
From: Yeo Reum Yun @ 2025-05-30 11:27 UTC (permalink / raw)
  To: Byungchul Park
  Cc: kernel_team@skhynix.com, linux-ide@vger.kernel.org,
	kernel-team@lge.com, open list:MEMORY MANAGEMENT,
	harry.yoo@oracle.com, yskelg@gmail.com, her0gyugyu@gmail.com,
	max.byungchul.park@gmail.com, Andrew Morton,
	Linux Kernel Mailing List

Hi Byungchul,

Thanks for your great work on the latest dept patch.

But I have some questions about the dept log below, supplied by
Yunseong Kim <yskelg@gmail.com>:

...
[13304.604203] context A
[13304.604209]    [S] lock(&uprobe->register_rwsem:0)
[13304.604217]    [W] __wait_rcu_gp(<sched>:0)
[13304.604226]    [E] unlock(&uprobe->register_rwsem:0)
[13304.604234]
[13304.604239] context B 
[13304.604244]    [S] lock(event_mutex:0)
[13304.604252]    [W] lock(&uprobe->register_rwsem:0)
[13304.604261]    [E] unlock(event_mutex:0)
[13304.604269]
[13304.604274] context C
[13304.604279]    [S] lock(&ctx->mutex:0)
[13304.604287]    [W] lock(event_mutex:0)
[13304.604295]    [E] unlock(&ctx->mutex:0)
[13304.604303]
[13304.604308] context D
[13304.604313]    [S] lock(&sig->exec_update_lock:0)
[13304.604322]    [W] lock(&ctx->mutex:0)
[13304.604330]    [E] unlock(&sig->exec_update_lock:0)
[13304.604338]
[13304.604343] context E
[13304.604348]    [S] lock(&f->f_pos_lock:0)
[13304.604356]    [W] lock(&sig->exec_update_lock:0)
[13304.604365]    [E] unlock(&f->f_pos_lock:0)
[13304.604373]
[13304.604378] context F
[13304.604383]    [S] (unknown)(<sched>:0)
[13304.604391]    [W] lock(&f->f_pos_lock:0)
[13304.604399]    [E] try_to_wake_up(<sched>:0)
[13304.604408]
[13304.604413] context G
[13304.604418]    [S] lock(btrfs_trans_num_writers:0)
[13304.604427]    [W] btrfs_commit_transaction(<sched>:0)
[13304.604436]    [E] unlock(btrfs_trans_num_writers:0)
[13304.604445]
[13304.604449] context H
[13304.604455]    [S] (unknown)(<sched>:0)
[13304.604463]    [W] lock(btrfs_trans_num_writers:0)
[13304.604471]    [E] try_to_wake_up(<sched>:0)
[13304.604484] context I
[13304.604490]    [S] (unknown)(<sched>:0)
[13304.604498]    [W] synchronize_rcu_expedited_wait_once(<sched>:0)
[13304.604507]    --------------- >8 timeout ---------------
[13304.604527] context J
[13304.604533]    [S] (unknown)(<sched>:0)
[13304.604541]    [W] synchronize_rcu_expedited(<sched>:0)
[13304.604549]    [E] try_to_wake_up(<sched>:0)

[end of circular]
...

1. I wonder how context A could be printed with
    [13304.604217]    [W] __wait_rcu_gp(<sched>:0)
    since the completion's dept map will be initialized with
       sdt_might_sleep_start_timeout((x)->dmap, -1L);

    I think the last dept_task's stage_sched_map affects this wrong print.
    Should this be fixed with:

 @@ -2713,6 +2713,7 @@ void dept_stage_wait(struct dept_map *m, struct dept_key *k,
        if (m) {
                dt->stage_m = *m;
                dt->stage_real_m = m;
+               dt->stage_sched_map = false;

                /*
                 * Ensure dt->stage_m.keys != NULL and it works with the
    
2. Whenever dept prints a dependency initialized with
   sdt_might_sleep_start_timeout(), it currently prints
   (unknown)(<sched>:0) only.
   Would it be better to print task information (pid, comm, etc.)?
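
   For instance, the wait could be rendered along these lines. This is a
   purely illustrative userspace sketch, not actual dept code; struct
   fake_task and print_sched_wait() are hypothetical names:

   ```c
   #include <stdio.h>

   /* Hypothetical stand-in for the waiter's task info (pid + comm). */
   struct fake_task {
   	int pid;
   	char comm[16];
   };

   /*
    * Render a <sched> wait with the waiter's pid/comm when available,
    * falling back to the current "(unknown)" form otherwise.
    */
   static void print_sched_wait(char *buf, size_t len,
   			     const struct fake_task *t)
   {
   	if (t)
   		snprintf(buf, len, "(%d:%s)(<sched>:0)", t->pid, t->comm);
   	else
   		snprintf(buf, len, "(unknown)(<sched>:0)");
   }

   int main(void)
   {
   	struct fake_task t = { .pid = 1234, .comm = "kworker/0:1" };
   	char buf[64];

   	print_sched_wait(buf, sizeof(buf), &t);
   	printf("%s\n", buf);	/* (1234:kworker/0:1)(<sched>:0) */

   	print_sched_wait(buf, sizeof(buf), NULL);
   	printf("%s\n", buf);	/* (unknown)(<sched>:0) */
   	return 0;
   }
   ```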

Thanks.


* Re: [RFC DEPT v16] Question for dept.
  2025-05-30 11:27 [RFC DEPT v16] Question for dept Yeo Reum Yun
@ 2025-06-02  2:59 ` Byungchul Park
  0 siblings, 0 replies; 2+ messages in thread
From: Byungchul Park @ 2025-06-02  2:59 UTC (permalink / raw)
  To: Yeo Reum Yun
  Cc: kernel_team@skhynix.com, linux-ide@vger.kernel.org,
	kernel-team@lge.com, open list:MEMORY MANAGEMENT,
	harry.yoo@oracle.com, yskelg@gmail.com, her0gyugyu@gmail.com,
	max.byungchul.park@gmail.com, Andrew Morton,
	Linux Kernel Mailing List

On Fri, May 30, 2025 at 11:27:48AM +0000, Yeo Reum Yun wrote:
> Hi Byungchul,
> 
> Thanks for your great work on the latest dept patch.
> 
> But I have some questions about the dept log below, supplied by
> Yunseong Kim <yskelg@gmail.com>:
> 
> ...
> [13304.604203] context A
> [13304.604209]    [S] lock(&uprobe->register_rwsem:0)
> [13304.604217]    [W] __wait_rcu_gp(<sched>:0)
> [13304.604226]    [E] unlock(&uprobe->register_rwsem:0)
> [13304.604234]
> [13304.604239] context B 
> [13304.604244]    [S] lock(event_mutex:0)
> [13304.604252]    [W] lock(&uprobe->register_rwsem:0)
> [13304.604261]    [E] unlock(event_mutex:0)
> [13304.604269]
> [13304.604274] context C
> [13304.604279]    [S] lock(&ctx->mutex:0)
> [13304.604287]    [W] lock(event_mutex:0)
> [13304.604295]    [E] unlock(&ctx->mutex:0)
> [13304.604303]
> [13304.604308] context D
> [13304.604313]    [S] lock(&sig->exec_update_lock:0)
> [13304.604322]    [W] lock(&ctx->mutex:0)
> [13304.604330]    [E] unlock(&sig->exec_update_lock:0)
> [13304.604338]
> [13304.604343] context E
> [13304.604348]    [S] lock(&f->f_pos_lock:0)
> [13304.604356]    [W] lock(&sig->exec_update_lock:0)
> [13304.604365]    [E] unlock(&f->f_pos_lock:0)
> [13304.604373]
> [13304.604378] context F
> [13304.604383]    [S] (unknown)(<sched>:0)
> [13304.604391]    [W] lock(&f->f_pos_lock:0)
> [13304.604399]    [E] try_to_wake_up(<sched>:0)
> [13304.604408]
> [13304.604413] context G
> [13304.604418]    [S] lock(btrfs_trans_num_writers:0)
> [13304.604427]    [W] btrfs_commit_transaction(<sched>:0)
> [13304.604436]    [E] unlock(btrfs_trans_num_writers:0)
> [13304.604445]
> [13304.604449] context H
> [13304.604455]    [S] (unknown)(<sched>:0)
> [13304.604463]    [W] lock(btrfs_trans_num_writers:0)
> [13304.604471]    [E] try_to_wake_up(<sched>:0)
> [13304.604484] context I
> [13304.604490]    [S] (unknown)(<sched>:0)
> [13304.604498]    [W] synchronize_rcu_expedited_wait_once(<sched>:0)
> [13304.604507]    --------------- >8 timeout ---------------
> [13304.604527] context J
> [13304.604533]    [S] (unknown)(<sched>:0)
> [13304.604541]    [W] synchronize_rcu_expedited(<sched>:0)
> [13304.604549]    [E] try_to_wake_up(<sched>:0)

What a long circle!  Dept is working great!

However, this is a false positive that comes from rcu waits that haven't
been classified properly yet; a fix is in progress by Yunseong Kim.  We
should wait for him to complete it :(

> [end of circular]
> ...
> 
> 1. I wonder how context A could be printed with
>     [13304.604217]    [W] __wait_rcu_gp(<sched>:0)
>     since the completion's dept map will be initialized with
>        sdt_might_sleep_start_timeout((x)->dmap, -1L);
>
>     I think the last dept_task's stage_sched_map affects this wrong print.

No.  It's working as it should.  Since (x)->dmap is NULL in this case,
it's supposed to print <sched>.

>     Should this be fixed with:
> 
>  @@ -2713,6 +2713,7 @@ void dept_stage_wait(struct dept_map *m, struct dept_key *k,
>         if (m) {
>                 dt->stage_m = *m;
>                 dt->stage_real_m = m;
> +               dt->stage_sched_map = false;

It should already be false, since sdt_might_sleep_end() resets this
value to false.  A DEPT_WARN_ON(dt->stage_sched_map) here might make
more sense.

>                 /*
>                  * Ensure dt->stage_m.keys != NULL and it works with the
>     
> 2. Whenever dept prints a dependency initialized with
>    sdt_might_sleep_start_timeout(), it currently prints
>    (unknown)(<sched>:0) only.
>    Would it be better to print task information (pid, comm, etc.)?

Thanks for such valuable feedback.  I will add it to the to-do list.

	Byungchul
> 
> Thanks.

