linux-block.vger.kernel.org archive mirror
* Potential hang on ublk_ctrl_del_dev()
@ 2023-01-03 21:47 Nadav Amit
  2023-01-03 21:51 ` Jens Axboe
  2023-01-04  5:42 ` Ming Lei
  0 siblings, 2 replies; 8+ messages in thread
From: Nadav Amit @ 2023-01-03 21:47 UTC (permalink / raw)
  To: Ming Lei; +Cc: Jens Axboe, linux-block

Hello Ming,

I have been trying out ublk and it looks very exciting.

However, I encountered an issue when removing a ublk device that is mounted or
otherwise in use.

In ublk_ctrl_del_dev(), shouldn’t we *not* wait if ublk_idr_freed() is false?
It seems to me that it would be saner to return -EBUSY in such a case and let
userspace deal with the result.

For instance, if I run the following (using ubdsrv):

 $ mkfs.ext4 /dev/ram0
 $ ./ublk add -t loop -f /dev/ram0
 $ sudo mount /dev/ublkb0 tmp
 $ sudo ./ublk del -a

ublk_ctrl_del_dev() does not return until the partition is unmounted, and you
can get a splat similar to the one below.

What do you say? Would you agree to change the behavior to return -EBUSY?

Thanks,
Nadav


[  974.149938] INFO: task ublk:2250 blocked for more than 120 seconds.
[  974.157786]       Not tainted 6.1.0 #30
[  974.162369] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  974.171417] task:ublk            state:D stack:0     pid:2250  ppid:2249   flags:0x00004004
[  974.181054] Call Trace:
[  974.184097]  <TASK>
[  974.186726]  __schedule+0x37e/0xe10
[  974.190915]  ? __this_cpu_preempt_check+0x13/0x20
[  974.196463]  ? lock_release+0x133/0x2a0
[  974.201043]  schedule+0x67/0xe0
[  974.204846]  ublk_ctrl_uring_cmd+0xf45/0x1110
[  974.210016]  ? lock_is_held_type+0xdd/0x130
[  974.214990]  ? var_wake_function+0x60/0x60
[  974.219872]  ? rcu_read_lock_sched_held+0x4f/0x80
[  974.225443]  io_uring_cmd+0x9a/0x130
[  974.229743]  ? io_uring_cmd_prep+0xf0/0xf0
[  974.234638]  io_issue_sqe+0xfe/0x340
[  974.238946]  io_submit_sqes+0x231/0x750
[  974.243553]  __x64_sys_io_uring_enter+0x22b/0x640
[  974.249134]  ? trace_hardirqs_on+0x3c/0xe0
[  974.254042]  do_syscall_64+0x35/0x80
[  974.258361]  entry_SYSCALL_64_after_hwframe+0x46/0xb0
[  974.264335] RIP: 0033:0x7f1dc2958efd
[  974.268657] RSP: 002b:00007ffdfd22d638 EFLAGS: 00000246 ORIG_RAX: 00000000000001aa
[  974.277471] RAX: ffffffffffffffda RBX: 00005592eabe7f60 RCX: 00007f1dc2958efd
[  974.285800] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 0000000000000004
[  974.294139] RBP: 00005592eabe7f60 R08: 0000000000000000 R09: 0000000000000008
[  974.302473] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[  974.310811] R13: 0000000000000000 R14: 00000000ffffffff R15: 0000000000000000
[  974.319168]  </TASK>
[  974.321982] 
[  974.321982] Showing all locks held in the system:
[  974.329776] 1 lock held by rcu_tasks_kthre/12:
[  974.335443]  #0: ffffffff82f6e890 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x2d/0x3f0
[  974.346935] 1 lock held by rcu_tasks_rude_/13:
[  974.352573]  #0: ffffffff82f6e610 (rcu_tasks_rude.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x2d/0x3f0
[  974.364522] 1 lock held by rcu_tasks_trace/14:
[  974.370246]  #0: ffffffff82f6e350 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x2d/0x3f0
[  974.382331] 1 lock held by khungtaskd/310:
[  974.387730]  #0: ffffffff82f6f2a0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x23/0x17e
[  974.398598] 5 locks held by kworker/8:1/330:
[  974.404176] 1 lock held by systemd-journal/761:
[  974.410003] 1 lock held by in:imklog/1390:
[  974.415337]  #0: ffff88810ead82e8 (&f->f_pos_lock){+.+.}-{3:3}, at: __fdget_pos+0x45/0x50
[  974.425284] 2 locks held by ublk/2250:
[  974.430167]  #0: ffff8881764e68a8 (&ctx->uring_lock){+.+.}-{3:3}, at: __x64_sys_io_uring_enter+0x21f/0x640
[  974.441708]  #1: ffffffff83106368 (ublk_ctl_mutex){+.+.}-{3:3}, at: ublk_ctrl_uring_cmd+0x6e4/0x1110
[  974.452674] 
[  974.455090] =============================================



Thread overview: 8+ messages
2023-01-03 21:47 Potential hang on ublk_ctrl_del_dev() Nadav Amit
2023-01-03 21:51 ` Jens Axboe
2023-01-04  7:50   ` Ming Lei
2023-01-04  5:42 ` Ming Lei
2023-01-04 18:13   ` Nadav Amit
2023-01-05  3:16     ` Ming Lei
2023-01-05 17:52       ` Nadav Amit
2023-01-06  1:40         ` Ming Lei
